|
I think glVertexAttribPointer converts values to floats. Try glVertexAttribIPointer.
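Roughly the difference, as a sketch (the vertex layout, attribute indices, and GLEW loader here are just placeholders):

```cpp
#include <GL/glew.h>   // or whatever GL loader you use
#include <cstddef>

// Hypothetical vertex with a float position and an integer attribute.
struct Vertex {
    float   position[3];
    GLubyte boneIndices[4];
};

void setupAttributes(GLuint vbo) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Attribute 0: floats, business as usual.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void*)offsetof(Vertex, position));
    glEnableVertexAttribArray(0);

    // Attribute 1: glVertexAttribPointer would convert these bytes to floats
    // (optionally normalized); glVertexAttribIPointer keeps them as integers,
    // matching an `in uvec4` declaration in the vertex shader.
    glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, sizeof(Vertex),
                           (const void*)offsetof(Vertex, boneIndices));
    glEnableVertexAttribArray(1);
}
```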
|
# ¿ Nov 4, 2015 07:54 |
|
Handedness is simple. Pick a hand, stick your thumb out to the side, index finger parallel with your palm, and middle finger in the direction your palm is facing. If you can orient your hand to match the positive axes, then that's the handedness of the coordinate system.
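If you'd rather have code do the finger-pointing, the same test is just a scalar triple product (sketch below; glm and the example axes are purely illustrative):

```cpp
#include <glm/glm.hpp>

// +1 for a right-handed basis, -1 for a left-handed one: in a right-handed
// system, X cross Y points along +Z (the thumb/index/middle-finger rule above).
int handedness(const glm::vec3& x, const glm::vec3& y, const glm::vec3& z) {
    return glm::dot(glm::cross(x, y), z) > 0.0f ? 1 : -1;
}

// handedness({1,0,0}, {0,1,0}, {0,0, 1}) == +1  (right-handed, +Z toward viewer)
// handedness({1,0,0}, {0,1,0}, {0,0,-1}) == -1  (left-handed, +Z into the screen)
```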
|
# ¿ Nov 23, 2015 08:28 |
|
Doc Block posted:You can do that with either hand, though... Doc Block posted:But yeah, the general consensus seems to be that the coordinate system in the image I posted is left handed, which leaves me wondering why I'm seeing people say OpenGL's coordinate system is right handed, since IIRC in OpenGL +Z is going into the screen.
|
# ¿ Nov 23, 2015 08:43 |
|
To be clear, if you're using modern OpenGL and constructing matrices yourself, you're working directly in device space. The GLU functions (and many libraries that imitate them) constructed their matrices such that vertex data specified in the other, right-handed space got reflected into device space.
|
# ¿ Nov 23, 2015 08:47 |
|
Suspicious Dish posted:By default, GL defines clip space as being right-handed, but, again, this is just a function of the near and far planes in clip space. You can change it with glDepthRange to flip the near and far planes around, which has existed since day 1, no extension required. OpenGL documentation posted:After clipping and division by w, depth coordinates range from -1 to 1, corresponding to the near and far clipping planes. Suspicious Dish posted:I've never actually heard of or considered backface culling to be about handed-ness, but I can see your point. You can change it with glFrontFace, as usual.
|
# ¿ Nov 23, 2015 20:05 |
|
Xerophyte posted:Well, it kinda is. The direction of the geometry normal is defined by the triangle's vertex order, and the normal can be either pointing out from a counter-clockwise rotation (a right-handed system) or pointing out from a clockwise rotation (a left-handed system). Facing is a type of handedness in that sense. Mirror transforms reverse the direction of the winding and therefore also the direction of the geometry normal. This is mostly academic and you're not going to find an API asking you to select your primitive handedness or anything.
|
# ¿ Nov 24, 2015 10:03 |
|
I think the trick is that the dev-to-efficient-game-engine time ends up being much shorter in turn.
|
# ¿ Dec 10, 2015 20:38 |
|
If anything, Vulkan represents a drastic increase in accessibility--no longer will we be so dependent upon proprietary "this is how to make the driver happy" incantations.
|
# ¿ Dec 19, 2015 02:04 |
|
mobby_6kl posted:Ok stupid question time because I'm obviously not able to figure this out on my own: How do I convert between the coordinates on the screen/viewport and the X/Y at a particular z depth in the scene? I found some information that mostly focuses on identifying objects in the scene, so the solution involves intersecting rays and crap that I obviously don't need - my goal is to place an object at, say, z=-5 (away from the camera) so that it appears at a particular x,y in the viewport.
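One ray-free way to do it, as a sketch: for a standard symmetric perspective matrix (e.g. glm::perspective) with the camera looking down -Z in view space, the projection satisfies ndc.x = x * P[0][0] / (-z), so you can invert that at your chosen depth. Names and the y-down pixel convention below are assumptions:

```cpp
#include <glm/glm.hpp>

// Given a pixel and a desired view-space depth (e.g. zView = -5), return the
// view-space point that projects to that pixel. Multiply by the inverse view
// matrix afterwards if you need the result in world space.
glm::vec3 viewportToViewSpace(const glm::mat4& proj, glm::vec2 pixel,
                              glm::vec2 viewportSize, float zView) {
    // Pixel -> normalized device coordinates in [-1, 1], with +Y up.
    glm::vec2 ndc(2.0f * pixel.x / viewportSize.x - 1.0f,
                  1.0f - 2.0f * pixel.y / viewportSize.y);
    // ndc.x = x * proj[0][0] / -z and ndc.y = y * proj[1][1] / -z, so:
    float x = ndc.x * -zView / proj[0][0];
    float y = ndc.y * -zView / proj[1][1];
    return glm::vec3(x, y, zView);
}
```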
|
# ¿ Mar 26, 2016 19:07 |
|
If you really want to learn a low level graphics API these days you should probably just go straight to Vulkan. It's big and complicated and the drivers aren't super stable yet, but at least it makes sense.
|
# ¿ Dec 14, 2016 00:25 |
|
Uploading texture data to the GPU in parallel to rendering is a good idea (if you map the memory you can even avoid going through userspace system memory first) and, executed correctly, will provide a much better user experience. Precisely controlling parallelism is much easier with Vulkan than GL, though; in GL you may find yourself relying on the driver correctly guessing what your intention is.
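On the GL side, the "map the memory" part usually means a persistently mapped pixel-unpack buffer. A rough sketch, assuming GL 4.4 (ARB_buffer_storage), an RGBA8 texture, and ignoring the ring-buffering and fencing a real streaming uploader needs so it doesn't stomp on data the GPU is still reading:

```cpp
#include <GL/glew.h>
#include <cstring>

GLuint uploadPbo = 0;
void*  uploadPtr = nullptr;

void createUploadBuffer(GLsizeiptr size) {
    glGenBuffers(1, &uploadPbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, uploadPbo);
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    // Immutable storage that stays mapped for the lifetime of the buffer.
    glBufferStorage(GL_PIXEL_UNPACK_BUFFER, size, nullptr, flags);
    uploadPtr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size, flags);
}

void uploadTexture(GLuint tex, const void* pixels, GLsizei w, GLsizei h) {
    // Write straight into driver-visible memory; no extra user-space staging copy.
    std::memcpy(uploadPtr, pixels, size_t(w) * h * 4);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, uploadPbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    // With an unpack buffer bound, the final pointer argument is an offset into it.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE,
                    (const void*)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```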
|
# ¿ Feb 12, 2017 01:21 |
|
Colonel J posted:I'm trying to finally wrap up my Master's degree, and I'm looking for a scene to try out the technique I'm developing; surprisingly finding a good free scene on Google is a pretty awful process, and I'm not having much luck finding good stuff. I'm looking mainly for an interior type scene, such as an apartment, ideally with a couple rooms and textures / good color contrast between the different surfaces (I'm working on indirect illumination). Can you share anything about your work? I've been reading about realtime GI lately and am quite interested in new developments.
|
# ¿ Mar 13, 2017 07:09 |
|
Static lighting and environments then? I've been particularly interested in totally dynamic solutions; I've played with a very compelling demo based on Radiance Hints (Papaioannou, 2011), which uses a regular grid of "probes" computed live by sampling reflective shadow maps. It uses screen-space techniques to reduce leakage, which isn't perfect but reduces the incidence of visually obvious errors. Someday I want to try applying it to massive environments by using toroidally-addressed clipmaps for the radiance cache. It surprises me a bit to hear that you can work in terms of individual SH coefficients for gradient-descent and error-bounding. Is that mathematically rigorous? I'm 100% prepared to believe it is, my grasp of SH math is pretty loose. I wonder if you could improve your results by adjusting your error function. It sounds like you're currently minimizing the global error, but are dissatisfied with the results due to the visual impact of local errors. What if you defined your error function to be the maximum local error? I imagine this might require a more stochastic approach to gradient descent than you currently need, and likely quite a lot more CPU time, but it seems more consistent with your desired results. I'm having trouble following why every probe has global influence on shading. That certainly seems like an issue, since then the time complexity of shading a single point is proportional to the size of the entire scene. Ralith fucked around with this message at 22:35 on Mar 13, 2017 |
# ¿ Mar 13, 2017 22:33 |
|
Xerophyte posted:You could try a non-bitmap approach. The maximum quality solution is to render the TrueType splines directly but it's quite slow, challenging to handle filtering and aliasing, plus you have annoying corner cases for overlapping patches. I think at least some of our applications at Autodesk at least used to do fonts by switching between mipmapped bitmaps for small glyphs and tesselation for large glyphs; not really sure if that's still the case though. I think for 2D CAD purposes where aesthetics aren't super important people often just approximate text outlines with a series of lines that you can handle the same as any other lines. Definitely don't try to store data for every possible zoom level; that will force you to use discrete zoom levels and require more space than just storing the high-resolution version and using hardware downsampling as necessary.
|
# ¿ Apr 12, 2017 22:12 |
|
Hyvok posted:Just rendering the text as a mesh might be an option as well, I just saw some site recommend against that due to the amount of vertices you will need to handle. Which did sound a bit odd since you can have A LOT of vertices nowadays... Zerf posted:Please elaborate on this. If we ignore preprocessing, rendering distance field fonts is just a plain texture lookup and some simple maths(which is essentially free because the texture lookup). haveblue posted:How much unicode do you want to support? The occasional accented character is probably easy enough but if you want to get really into the weeds with composed characters and bidirectional writing and such you might want to considering letting the OS text service handle the whole thing and make you one finished texture per label. Rolling your own unicode-aware layout is not something to be entered into lightly. Ralith fucked around with this message at 03:58 on Apr 14, 2017 |
# ¿ Apr 14, 2017 03:54 |
|
Zerf posted:Oh, I see, that's why. Thanks. On the other hand, here's an excerpt from the GLyphy Github repo:
|
# ¿ Apr 14, 2017 18:38 |
|
peepsalot posted:Hi this is not specifically for OpenGL/DX but I have a 3D geometry question. I'm looking for some info on how to implement a general algorithm that can take a 3d triangle mesh (closed/solid) and a Z value and return the 2d intersection of the 3d object with the plane at Z. Every triangle in the 3D mesh corresponds to 0, 1, or 2 vertices in the set of 2D polygons that represent the slice you're looking for. If the mesh is watertight, each vertex should lie on an even number of edges if the polygon it's associated with is nonempty. One approach would therefore be to loop over the set of triangles, compute the edge or vertex (if any) contributed by that triangle, and then compute the polygons by deduplicating vertices. To determine which side of any given edge is "inside," just project the normal of the associated triangle onto your plane (or store your edges in a manner that encodes the sidedness in the first place).
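A sketch of the per-triangle step, with the degenerate cases (coplanar triangles, edges or vertices lying exactly in the plane) punted to the caller; the types are illustrative:

```cpp
#include <array>
#include <optional>
#include <utility>

struct Vec3 { float x, y, z; };

// Returns the segment this triangle contributes to the slice at z = zCut,
// or nothing if the triangle doesn't properly straddle the plane.
std::optional<std::pair<Vec3, Vec3>>
sliceTriangle(const std::array<Vec3, 3>& tri, float zCut) {
    Vec3 hits[2];
    int  n = 0;
    for (int i = 0; i < 3 && n < 2; ++i) {
        const Vec3& a = tri[i];
        const Vec3& b = tri[(i + 1) % 3];
        if ((a.z - zCut) * (b.z - zCut) < 0.0f) {   // edge straddles the plane
            float t = (zCut - a.z) / (b.z - a.z);   // interpolation parameter
            hits[n++] = {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), zCut};
        }
    }
    if (n < 2) return std::nullopt;
    return std::make_pair(hits[0], hits[1]);
}
```

Deduplicating the resulting segment endpoints (snap to a tolerance grid, hash, merge) then lets you chain the segments into the closed 2D polygons of the slice.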
|
# ¿ Jul 11, 2017 20:14 |
|
peepsalot posted:How would you handle coplanar tris, or tri having a single edge coincident with the plane. Also, I'm thinking for the tris where only one point intersects the plane that those can be safely ignored? Triangles that only intersect at a single point only need to be handled if you care about them. I don't know what exactly your application is, so I can't answer that for you, but if you're re-generating a mesh of the object and want it to fit pretty well, then you'll probably want to retain them so that pointed shapes with an axis perpendicular to your planes don't get blunted. For sufficiently high plane density/low probability of an exact intersection this of course isn't necessary, but if you ignore the case entirely it'll make things fragile. peepsalot posted:I should probably also mention that this is intended to eventually re-mesh the whole 3d object, so that all the vertices in the new mesh are aligned with layers (similar to how 3d printing "slicer" programs work). Ralith fucked around with this message at 03:00 on Jul 12, 2017 |
# ¿ Jul 12, 2017 02:56 |
|
Speaking of projection matrices, I learned about reversed Z projections not too long ago and got one working in Vulkan recently. Infinite far planes without Z fighting are fun! There's basically no downside AFAICT, if your hardware supports 32-bit float depth buffers (it should) and you aren't almost out of memory.
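For reference, one way to build such a matrix, assuming a [0, 1] clip-space depth range (Vulkan-style; on GL you'd pair it with glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE)), a GREATER depth test, and a depth clear value of 0:

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Reversed-Z perspective with the far plane at infinity: depth is 1 at the near
// plane and tends toward 0 as the distance tends toward infinity.
glm::mat4 reversedZInfinitePerspective(float fovY, float aspect, float zNear) {
    float f = 1.0f / std::tan(fovY * 0.5f);
    glm::mat4 m(0.0f);      // glm is column-major: m[column][row]
    m[0][0] = f / aspect;
    m[1][1] = f;            // negate for Vulkan's Y-down clip space if needed
    m[2][3] = -1.0f;        // w_clip = -z_view (camera looks down -Z)
    m[3][2] = zNear;        // z_clip = zNear, so depth = zNear / -z_view
    return m;
}
```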
|
# ¿ Jul 15, 2017 06:45 |
|
Jewel posted:Same, only yesterday! Porting a AAA title and noticed they have a depth buffer that goes from 0 to 250000. Strangely they still use a normal one too but haven't seen how much use is on each.
|
# ¿ Jul 15, 2017 20:11 |
|
Joda posted:Don't APIs act real strange if you put the near-plane at exactly 0 though? Xerophyte posted:Spontaneously, it seems like if you're using float32 depth storage then it makes sense to use a bigger range than [0,1] to make better use of the full range of the type. I have no idea if the different distribution of quantization points will interact badly with the reciprocal, I don't immediately see why it would though. Floats have logarithmic-ish spacing over the entire range (well, ignoring denormals). 0-1 makes the math simple and there's practically no benefit to going for increased precision, and you might well end up with significant errors for astronomically distant stuff.
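For a concrete sense of that spacing in a 32-bit float:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Spacing between adjacent floats near 1.0: 2^-23, about 1.2e-7.
    std::printf("ulp near 1.0:    %g\n", std::nextafter(1.0f, 2.0f) - 1.0f);
    // Near 250000 (between 2^17 and 2^18) the spacing is 2^-6 = 0.015625,
    // i.e. about 2^17 times coarser.
    std::printf("ulp near 250000: %g\n",
                std::nextafter(250000.0f, 300000.0f) - 250000.0f);
}
```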
|
# ¿ Jul 16, 2017 20:11 |
|
peepsalot posted:I have this idea to define a sort of B-rep solid 3D model in a probabilistic way. Given a set of trivariate normal distributions (gaussian probability blobs in 3D space), I want to take a weighted sum of each probability distribution in the set, and render a contour surface wherever this sum equals some threshold.
|
# ¿ Oct 26, 2017 19:33 |
|
haveblue posted:The mathematical name for that is implicit surface and while that wiki page is way above my pay grade it may give you some ideas to start with.
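Concretely, the implicit function for the blob-sum surface described above might look something like this sketch (names and the unnormalized Gaussian are assumptions); marching cubes, dual contouring, or sphere tracing can then extract or render the f(p) = 0 contour:

```cpp
#include <glm/glm.hpp>
#include <vector>
#include <cmath>

struct GaussianBlob {
    glm::vec3 mean;
    glm::mat3 invCovariance;   // inverse of the 3x3 covariance matrix
    float     weight;
};

// Positive inside the surface, negative outside, zero on the contour itself.
float field(const std::vector<GaussianBlob>& blobs, const glm::vec3& p,
            float threshold) {
    float sum = 0.0f;
    for (const auto& b : blobs) {
        glm::vec3 d = p - b.mean;
        // Unnormalized trivariate Gaussian: exp(-0.5 * d^T * Sigma^-1 * d)
        sum += b.weight * std::exp(-0.5f * glm::dot(d, b.invCovariance * d));
    }
    return sum - threshold;
}
```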
|
# ¿ Oct 26, 2017 19:55 |
|
Xerophyte posted:I'm not really that familiar with meshing algorithms. I don't believe SDFs can be meshed in a better way than any other implicit surface there. Xerophyte posted:Anecdotally, I know the approach Weta took for meshing the various SDF fractals they used for Ego in Guardians of the Galaxy 2 was to do a bunch of simple renders of the SDF, feed those renders into their photogrammetry software, then get a point cloud, then mesh that. I don't think I'd recommend that approach, but apparently it's good enough for production.
|
# ¿ Oct 30, 2017 01:00 |
|
Absurd Alhazred posted:I know you're not suppose to multiply too many primitives, but it is standard to use it for points -> billboards, right?
|
# ¿ Mar 27, 2018 05:16 |
|
Absurd Alhazred posted:So having most of the information in the instance variables and applying it to a single quad is better than having the same number of vertices and using a geometry shader to expand them into quads? The only way to be sure is to find or build some benchmarks that model your use case on your hardware, of course.
|
# ¿ Mar 27, 2018 05:42 |
|
schme posted:I have failed to find what this is about :
|
# ¿ Mar 27, 2018 20:40 |
|
Side-effects are rarer in GLSL, so it's even more esoteric a pattern, but yeah the semantics are the same.
|
# ¿ Mar 27, 2018 22:54 |
|
Absurd Alhazred posted:WMR's native API is a mess. Unless they've changed things significantly, you need to have a managed Windows app just to access it directly. Your best bet is to use SteamVR, instead, which doesn't have the same issues. There's Windows Mixed Reality for Steam VR, they're all free as far as I know (once you have the WMR set). At this point, I've put off future VR development until OpenXR comes out. I have a lot of respect for the Unity/Epic engineers who were faced with the task of wrapping SteamVR up into something approximately reliable.
|
# ¿ Sep 30, 2018 07:56 |
|
Absurd Alhazred posted:I mean, at least SteamVR lets you just use it in C++ with OpenGL, DirectX, or Vulkan. What don't you like about it? I do have more experience with Oculus, I will admit.
|
# ¿ Oct 1, 2018 05:49 |
|
Absurd Alhazred posted:Might they have gotten better since you last used them? I just added a new feature to our code last week using a more recent addition to the API, and it was mostly painless.
|
# ¿ Oct 2, 2018 23:53 |
|
I dunno what graphics API you're using, but have you correctly specified the memory dependency between that shader and whatever previously wrote to Texture?
|
# ¿ Oct 8, 2018 08:07 |
|
Only way to be sure is by testing, especially on diverse hardware. Note that the maximum varies considerably.
|
# ¿ May 28, 2019 22:12 |
|
Rhusitaurion posted:Many months later, I actually ended up doing something like this, using Vulkan, but I'm wondering if there's a better way than what I've done. No comment on the abstract algorithm, but there are a few technical errors here. First, you don't need three command buffers. If you're only using a single queue, which is probably the case, you only need one command buffer for the entire frame. Semaphores are only used for synchronizing presentation and operations that span multiple queues. Note that you don't need to use a dedicated compute queue just because it's there; the graphics queue is guaranteed to support compute operations, and for work that your frame is blocked on it's the right place. Events definitely aren't appropriate. What you need here is a memory barrier between your writes and the following reads, and between your reads and the following writes. Without suitable barriers your code is unsound, even if it appears to work in a particular case. Second, maybe I misunderstood but it sounds like you're zeroing out memory, then immediately overwriting it? That's not necessary. Third, a single global atomic will probably serialize your compute operations, severely compromising performance. Solutions to this can get pretty complex; maybe look into a parallel prefix sum scheme to allocate vertex space. A separate set of buffers per frame is a good idea, because it will allow one frame's vertex state to be pipelined with the next frame's compute stage. Ralith fucked around with this message at 04:10 on May 8, 2020 |
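A minimal sketch of the kind of barrier meant here, assuming a single queue and a compute pass that writes vertex/index/indirect data consumed by a later draw in the same command buffer (the reverse read-to-write dependency for the next frame needs its own barrier, or per-frame buffers as suggested):

```cpp
#include <vulkan/vulkan.h>

void computeToDrawBarrier(VkCommandBuffer cmd) {
    VkMemoryBarrier barrier{};
    barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_INDIRECT_COMMAND_READ_BIT |
                            VK_ACCESS_INDEX_READ_BIT |
                            VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT;
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,        // producer
                         VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT |
                             VK_PIPELINE_STAGE_VERTEX_INPUT_BIT,      // consumers
                         0,            // no dependency flags
                         1, &barrier,  // one global memory barrier
                         0, nullptr,   // no buffer barriers
                         0, nullptr);  // no image barriers
}
```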
# ¿ May 8, 2020 03:57 |
|
Rhusitaurion posted:Yeah I realize now that I didn't explain this well. The compute stage treats an indirect draw struct's indexCount as an atomic, to "allocate" space in a buffer to write index data in. That index data changes per-frame, so I have to re-zero the counter before each compute dispatch. There's also another atomic that works the same way for the vertex data that the indices index. Is there some other way to reset or avoid resetting these? Rhusitaurion posted:Well, it's 2 atomics per object, but yeah, it's probably not great. Thanks for the pointer. I'll look into it, but it sounds complicated so the current solution may remain in place for a while... Yeah, it's a whole big complicated thing, don't blame you for punting it. It'd be nice if there was reusable code for this somewhere, but reusable abstractions in GLSL are hard.
|
# ¿ May 8, 2020 17:03 |
|
Rhusitaurion posted:Dumb question about memory barriers - this page says that no GPU gives a poo poo about VkBufferMemoryBarrier vs. VkMemoryBarrier. This seems to imply that if I use a VkBufferMemoryBarrier per object to synchronize reset->compute->draw, it will be implemented as a global barrier, so I might as well just do all resets, then all computes, then all draws with global barriers in between. Rhusitaurion posted:But as far as I can tell, this is essentially what my semaphore solution is currently accomplishing, since semaphores work like a full memory barrier.
|
# ¿ May 8, 2020 20:26 |
|
Rhusitaurion posted:I'm probably misinterpreting the spec here, but the section on semaphore signaling says that all memory accesses by the device are in the first access scope, and similarly for waiting, all memory accesses by the device are in the second scope. Granted it might not be the best way to do it, but it seems like relying on a semaphore for memory dependencies is allowed.
|
# ¿ May 9, 2020 05:43 |
|
Rhusitaurion posted:Got it. Thanks for the advice - I've switched over to a single command buffer with barriers, and it seems like it works. Not sure if I got the src and dst masks and whatnot correct, but the validation layers are not complaining, at least!
|
# ¿ May 9, 2020 18:08 |
|
Contended global atomics are very slow. I've had good results from using subgroup operations to do one atomic op per subgroup, though.
|
# ¿ Oct 31, 2020 17:13 |
|
If the target locations are effectively random, contention might not be too big an issue, though I suppose that's scene dependent.
|
# ¿ Nov 2, 2020 17:48 |