|
It works in OpenGL ES with GL_EXT_shader_framebuffer_fetch, not sure about desktop OpenGL.
|
# ? Nov 20, 2015 06:59 |
|
|
|
OK, I thought about the issue some more, and it turns out that color idea would be a hack that wouldn't quite do what I want anyway. What I really need is to render the same thing to two different depth buffers. I have a number of meshes to render, and I want one of the depth buffers to be cleared between drawing each mesh, while the other should build up the depth data for the whole scene, only clearing at the beginning of each frame.
|
# ? Nov 20, 2015 22:55 |
|
Can you explain what sort of effect you're trying to achieve?
|
# ? Nov 21, 2015 01:03 |
|
I used to turn off vsync in my apps to get a vague idea of how fast my program was running. In DX12 this seems to not work the same way. Even if I present immediately, I still have a queue of in-flight command lists that I'm waiting on the gpu to finish using before I can reset them. I sort of miss the old way even though it only really served as a cruddy perf metric. Is there a way to keep the gpu busy doing drawing commands? I know it's not really a useful thing to do, but I'm still curious how to properly do it. From what I understand you don't really want to touch a command list after you tell it to execute until it's actually finished executing, right? So how do you pack in more work without creating a jillion command lists? I was thinking of repeatedly rendering to a render target and copying the render target to a swap chain render target whenever one is available (I think this needs a jillion lists). Or alternatively just hosing the current command list if all the command lists that are writing to the swap chain buffers are still queued? Sex Bumbo fucked around with this message at 01:52 on Nov 21, 2015 |
# ? Nov 21, 2015 01:42 |
|
Joda posted:This is kind of a tangential question, but as far as I can tell this is the place I'm most likely to find people who work/have worked with rendering academically. Since no one else answered this one. I've seen 3-ish approaches:

1: Code the basic graphics -- in TikZ or matlab, through debug outputs in your program, using a homebrewed SVG generator in one case -- and use real image editors to annotate. Works if you need to illustrate something primitive or some type of data your program can essentially just output directly; looks like crappy programmer art otherwise.

2: Draw your own 3D illustrations. Typically in Illustrator, but fanatical open source types go for Inkscape and survive. Works for people who can draw, which is definitely not me.

3: Probably the most common, at least for published research in industry, is to hire an illustrator (or at least bribe a designer friend) to do your complex 3D illustration. Who'll probably use Illustrator, but at that point it matters less.

So ... I guess it depends? I'll say that my master's thesis and the couple of internal papers I've written with my current team went for the classic of debug renderings with arrows and outlines drawn on top, but that's mostly because maybe a dozen people will read them and those people can ask me if anything is unclear. If I were to somehow actually publish something at some point I'd get someone else to do my illustrations.
|
# ? Nov 21, 2015 02:21 |
|
This is 2D graphics but I haven't seen anyone link to Benedikt Bitterli's Tantalum here, which is a shame since it's cool as hell. He also includes a full derivation of the 2D light transport formulas. E: It's also really pretty, ok? E2: So pretty... Xerophyte fucked around with this message at 16:54 on Nov 21, 2015 |
# ? Nov 21, 2015 02:31 |
|
Xerophyte posted:This is 2D graphics but I haven't seen anyone link to Benedikt Bitterli's Tantalum here, which is a shame since it's cool as hell. He also includes a full derivation of the 2D light transport formulas. Thank you for that! This is really really cool.
|
# ? Nov 21, 2015 15:23 |
|
Can anyone recommend a cube map / skybox stitcher for Mac? Something that can take iPhone panoramas or a series of photos to create the 6 box-side images for a cube map.
|
# ? Nov 21, 2015 20:05 |
|
So let's say I wanted to make something like this but instead of having particles flow over a grid (which is easy and I've done), I want them to flow along predefined lines. I'm basically looking for a way to visualize flow direction and speed in a DAG in a realtime way that looks cool. I was thinking that this would be a good starting point, but I'm feeling like that would be overkill. Am I barking up the wrong tree here or should I try my hand at the valve solution?
|
# ? Nov 22, 2015 02:24 |
|
It seems like no one agrees on the definition for the "handedness" of a coordinate system. I've heard OpenGL's called left handed, heard it called right handed, same for DirectX and Metal. Some people say that your thumb points towards +X, index towards +Y, and middle finger towards +Z, and whether you use your left or right hand is the "handedness" (i.e. left handed means +Z goes away from the viewer, right handed has +Z coming towards the viewer), but that doesn't account for weirdos who insist X and Y are both horizontal and that Z represents height. What would you call this coordinate system? I'm asking because I seem to have screwed up my matrix math somewhere, using examples from sources using different definitions of handedness. Doc Block fucked around with this message at 06:28 on Nov 23, 2015 |
# ? Nov 23, 2015 06:22 |
|
If you start with standard 2D cartesian coordinates on a piece of paper and want to extend them to 3D then you have two options: positive z axis coming out of paper or going into the paper. The former is right-handed coordinates and the latter is left-handed. There's a lot of different ways to think about why that is, curling your fingers and pointing your thumb, pointing some other set of fingers at random, whatever you find a helpful mnemonic. The other factors you're talking about are all just rotations of those two options, but you can't turn a left-handed coordinate system into a right-handed one, or vice versa, using rotation. Regardless, that makes the coordinate system in your picture left-handed, because if you rotate it back to the standard 2D cartesian case then +Z points into the paper.
jawbroken fucked around with this message at 07:22 on Nov 23, 2015 |
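To make the chirality test concrete, independent of which axis you call "up": a basis is right-handed exactly when the scalar triple product x · (y × z), i.e. det([x y z]), is positive. A small Python sketch (hypothetical helper, plain tuples standing in for vectors):

```python
# Classify a 3D basis by the sign of the scalar triple product
# x . (y cross z), which equals det([x y z]).
def handedness(x, y, z):
    yz = (y[1] * z[2] - y[2] * z[1],
          y[2] * z[0] - y[0] * z[2],
          y[0] * z[1] - y[1] * z[0])  # y cross z
    det = x[0] * yz[0] + x[1] * yz[1] + x[2] * yz[2]
    return "right" if det > 0 else "left"

# Standard basis, +Z out of the paper: right-handed.
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # right
# Same basis with +Z into the paper: left-handed.
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, -1)))  # left
# A rotation (here, a cyclic relabeling of the axes) can't change handedness.
print(handedness((0, 0, 1), (1, 0, 0), (0, 1, 0)))   # right
```

Note the "Z is up" convention (X east, Y north, Z up) comes out right-handed by this test too, which is why it coexists with the math-textbook convention: only a reflection can flip the answer, never a rotation.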
# ? Nov 23, 2015 07:06 |
|
Handedness is simple. Pick a hand, stick your thumb out to the side, index finger parallel with your palm, and middle finger in the direction your palm is facing. If you can orient your hand to match the positive axes, then that's the handedness of the coordinate system.
|
# ? Nov 23, 2015 08:28 |
|
You can do that with either hand, though. The discussion is complicated by the weirdos who insist on having X and Y be the horizontal axes, with Z being up instead of depth. But yeah, the general consensus seems to be that the coordinate system in the image I posted is left handed. But it makes me wonder why I've seen people say OpenGL's coordinate system is right handed, since IIRC in OpenGL +Z is going into the screen. Ugh. I just want my matrices to not be weird. Right now I have to do a special case for rotation, and invert the angle when I rotate something manually but leave it alone when I'm just plugging the physics engine's rotation values into the rotation matrix function. Doc Block fucked around with this message at 08:44 on Nov 23, 2015 |
# ? Nov 23, 2015 08:40 |
|
Doc Block posted:You can do that with either hand, though... Doc Block posted:But yeah, the general consensus seems to be that the coordinate system in the image I posted is left handed, which leaves me wondering why I'm seeing people say OpenGL's coordinate system is right handed, since IIRC in OpenGL +Z is going into the screen.
|
# ? Nov 23, 2015 08:43 |
|
I see... 3D math is my greatest weakness when it comes to game development.
|
# ? Nov 23, 2015 08:45 |
|
To be clear, if you're using modern OpenGL and constructing matrices yourself, you're working directly in device space. The GLU functions (and many libraries that imitate them) constructed their matrices such that vertex data would be reflected out of the other space.
|
# ? Nov 23, 2015 08:47 |
|
I'm using Metal, which is device space as well. For whatever reason, their rotation matrix function seems to be rotating things counter-clockwise (so +45 degrees around the Z axis results in the object being tilted to the left), while the physics engine I'm using is rotating things clockwise, so I have to special-case manual vs physics engine rotation and invert the angle for one of them. I had thought that maybe rotations appearing "wrong" had to do with the Z axis getting flipped somewhere, but that doesn't seem to be the case. And when I started looking online for alternate rotation matrix functions that applied the rotation clockwise, my sleep-deprived brain got confused. Doc Block fucked around with this message at 09:08 on Nov 23, 2015 |
# ? Nov 23, 2015 08:57 |
|
The GPU does not care. Handedness is simply a function of how you set up your perspective matrix.
|
# ? Nov 23, 2015 09:07 |
|
There are a lot of orientations and chiralities that make sense depending on context. If you're a raster hardware engineer you might think of your output as a buffer, with (x = 0, y = 0) at the top left and (x = width, y = height) at the bottom right corner. Then it's natural to use -Y as the up direction in 3D, and +Z or -Z as into the screen depending on whether or not you like left or right handed chirality. If you're an architect then you might think of your drawing of the floor plan as having x and y coordinates. In that case it's natural to use +Z or -Z as the up direction when converting that to 3D. It's not entirely true that the GPU (or rather the hardware APIs) don't care about this stuff. Clip space has an inherent handedness. Window (and possibly texture, if the API has that concept) space has another handedness, which may or may not be consistent with clip space. There's also a handedness for triangle winding, where a winding is right-handed if a front facing triangle has counterclockwise winding. All of these are arbitrary and the API may or may not let you control them (e.g. glClipControl and the like). It would be nice if there was an obvious right convention for all of these, but there isn't and so you end up getting used to working in all of them.
|
# ? Nov 23, 2015 12:23 |
|
Doc Block posted:I'm using Metal, which is device space as well. I think a counter-clockwise rotation function is fairly typical. It's your physics engine that's weird. As mentioned, most things, including clip space coordinates, can be controlled, but it's nice to leave them as-is for anyone new coming into your code so they can just make some assumptions about how things work and be right. Remember you can just make a matrix that converts from one space to another, so that you can do all your world space positioning in one handedness and view space positioning in a different one if you want. The GPU doesn't know which you're using, it only cares about the numbers you output into clip space. You don't even need a projection matrix for 3D transforms, it's just a convenient way to project things (but you should definitely use one anyway). Sex Bumbo fucked around with this message at 18:58 on Nov 23, 2015 |
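To illustrate the convert-between-spaces point: a hedged sketch (not any engine's actual API) showing that a Z-flip matrix changes handedness while a rotation never can. The flip has determinant -1; every rotation has determinant +1, so no amount of rotating composes into a reflection.

```python
import math

# Minimal 3x3 helpers, just enough to show the determinant argument.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Handedness converter: negate Z. Determinant -1 (a reflection).
flip_z = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]

# A 45-degree rotation about Z. Determinant +1 (preserves handedness).
a = math.radians(45)
rot_z = [[math.cos(a), -math.sin(a), 0],
         [math.sin(a),  math.cos(a), 0],
         [0, 0, 1]]

print(det3(rot_z))                   # ~ 1.0: still the same handedness
print(det3(mat_mul(flip_z, rot_z)))  # ~ -1.0: handedness flipped
```

In practice you'd fold a flip like this into your view or projection matrix once, so everything upstream can stay in whichever convention the physics engine or artists prefer.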
# ? Nov 23, 2015 18:56 |
|
Xerophyte posted:There are a lot of orientations and chiralities that make sense depending on context. By default, GL defines clip space as being right-handed, but, again, this is just a function of the near and far planes in clip space. You can change it with glDepthRange to flip the near and far planes around, which has existed since day 1, no extension required. I've never actually heard of or considered backface culling to be about handed-ness, but I can see your point. You can change it with glFrontFace, as usual. The GPU doesn't have any innate concept of any of this, it just does math from the driver. I write GL drivers for modern GPUs, and glDepthRange and such don't usually go to the registers, they're just folded into a higher-level transformation. Culling, however, is a register property, but it's simply about identifying the winding order of assembled primitives.
|
# ? Nov 23, 2015 19:05 |
Doc Block posted:I'm using Metal, which is device space as well. Are you sure your physics engine doesn't have a way to get the model transformation matrix directly? Not sure about other physics engines/libraries, but Bullet's motion states have a getOpenGLMatrix() function that will fill out an OpenGL-formatted 4x4 matrix for you. I imagine something like that would be a fairly standard feature for any physics engine aimed at games. Of course, Metal may use a different major form than OpenGL, but that should be an easy fix.
|
|
# ? Nov 23, 2015 19:38 |
|
Suspicious Dish posted:By default, GL defines clip space as being right-handed, but, again, this is just a function of the near and far planes in clip space. You can change it with glDepthRange to flip the near and far planes around, which has existed since day 1, no extension required.

Huh? This sounds like left-handed coordinates. Or is +y down?

OpenGL documentation posted:After clipping and division by w, depth coordinates range from -1 to 1, corresponding to the near and far clipping planes.

Suspicious Dish posted:I've never actually heard of or considered backface culling to be about handed-ness, but I can see your point. You can change it with glFrontFace, as usual.

It's not that backface culling is about handedness, it's that winding changes when you reflect geometry.
|
# ? Nov 23, 2015 20:05 |
|
Ralith posted:Huh? This sounds like left-handed coordinates. Or is +y down? Clip space, referring to post-projection multiply but pre-perspective divide, is left handed. I.e. you want your projection matrix to put you into a left handed coordinate system where near z/w = -1 and far z/w = 1. It doesn't matter if you're coming from a left handed or right handed system before this, all that matters is that you end up in this left handed system. You can make a matrix that takes you from either and puts you into this. The post-division NDC space is naturally going to be the same space as clip space and easier to understand since it's the same except divided by w. The ndc->window transform is a fixed function transform quite unlike the view->clip transform which is presumably done in a shader. Here's a lovely explanation. The red parts have a bunch of processes but remember they're all optional, all that matters is you provide some clip space coordinate to the gpu, it doesn't know or care how you made it. NDC is -1 <= x, y, z <= 1 and left handed. Sex Bumbo fucked around with this message at 21:33 on Nov 23, 2015 |
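A sketch of just the depth terms of the classic GL-style perspective matrix, showing the flip Sex Bumbo describes: the camera looks down -Z in the right-handed view space, yet after the w-divide the near plane lands at -1 and the far plane at +1 in the left-handed NDC. (Python standing in for the shader math; `project_z` is a made-up helper, not any API.)

```python
# Only the rows of a standard GL perspective matrix that touch z and w:
#   z_clip = a*z_view + b
#   w_clip = -z_view
def project_z(z_view, near, far):
    a = -(far + near) / (far - near)
    b = -2.0 * far * near / (far - near)
    z_clip = a * z_view + b
    w_clip = -z_view
    return z_clip / w_clip  # NDC depth after the perspective divide

near, far = 0.1, 100.0
print(project_z(-near, near, far))  # ~ -1.0: the near plane
print(project_z(-far, near, far))   # ~ +1.0: the far plane
```

Nothing about the GPU forces these particular a and b; any matrix that lands your visible range in [-w, w] works, which is why the handedness you use before this point is entirely up to you.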
# ? Nov 23, 2015 21:25 |
|
Ralith posted:It's not that backface culling is about handedness, it's that winding changes when you reflect geometry. Well, it kinda is. The direction of the geometry normal is defined by the triangle's vertex order, and the normal can be either pointing out from a counter-clockwise rotation (a right-handed system) or pointing out from a clockwise rotation (a left-handed system). Facing is a type of handedness in that sense. Mirror transforms reverse the direction of the winding and therefore also the direction of the geometry normal. This is mostly academic and you're not going to find an API asking you to select your primitive handedness or anything. I work on a component called "Raytracing API Development" on the org chart, so my view of a graphics API is probably less hardware-oriented than most in this thread. We don't have to worry about some of the handedness issues of a raster API -- our projection transform is an arbitrary user-defined function that maps from image raster positions to rays -- and I'll happily admit that my knowledge of exactly how clip space works in GL is fuzzy at best.
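The winding/normal flip under a mirror is easy to demonstrate. A small sketch with a made-up `tri_normal` helper using the usual (p1 - p0) × (p2 - p0) convention:

```python
# The normal implied by vertex order: (p1 - p0) cross (p2 - p0).
def tri_normal(p0, p1, p2):
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]  # counter-clockwise in XY: normal +Z
print(tri_normal(*tri))  # (0, 0, 1)

# Mirror across the YZ plane (negate x): same three vertices in the same
# order, but the winding is now clockwise and the implied normal flips.
mirrored = [(-x, y, z) for (x, y, z) in tri]
print(tri_normal(*mirrored))  # (0, 0, -1)
```

This is why a mirroring model transform (e.g. a negative scale on one axis) silently turns your front faces into back faces unless you also swap the cull/front-face setting.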
|
# ? Nov 23, 2015 22:58 |
|
Joda posted:Are you sure your physics engine doesn't have a way to get the model transformation matrix directly? Not sure about other physics engines/libraries, but Bullet 's motion states have a getOpenGLMatrix() function that will fill out an OpenGL formatted 4x4 matrix for you. I imagine something like that would be a fairly standard feature for any physics engine aimed at games. Of course, Metal may use a different major form than OpenGL, but that should be an easy fix. I should point out that my physics engine is 2D. Since my game is top-down, I'm cheating and using Chipmunk2D for physics since I've used it to make games before, and know it can do exactly what I need (namely planetary/solar orbiting). In the interest of doing what a 2D game developer would expect, Chipmunk2D handles rotation so that positive values result in clockwise rotation, and 0 degrees points North instead of East. I've fixed my game now, so that it knows to invert the rotation value if it isn't coming from the physics engine. I know that counter-clockwise is probably the mathematically correct way, but it's easier on my brain if I keep it so that positive value = clockwise.
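For anyone hitting the same mismatch: the two conventions in that post differ by a reflection plus a quarter turn, so a single formula converts between them. A hypothetical helper, assuming clockwise degrees with 0 at north (the post's 2D-game convention) on one side and counter-clockwise radians with 0 at east (the math convention) on the other:

```python
import math

# Convert "clockwise degrees, 0 = north (+Y)" to
# "counter-clockwise radians, 0 = east (+X)".
def clockwise_north_to_ccw_east(deg_cw):
    return math.radians(90.0 - deg_cw)

print(clockwise_north_to_ccw_east(0))    # ~ pi/2: north
print(clockwise_north_to_ccw_east(90))   # ~ 0: east
print(clockwise_north_to_ccw_east(180))  # ~ -pi/2: south
```

Pushing the conversion into one function like this, instead of special-casing "manual vs physics" at each call site, keeps the sign flip from leaking all over the codebase.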
|
# ? Nov 24, 2015 00:27 |
|
So I've got a weird problem with my flowing lines thing For some reason the border between lines gets really thick, and then resets to "normal" when I set the "time" uniform to 0. Any ideas? The shader code is pretty simple. code:
|
# ? Nov 24, 2015 00:41 |
|
You're drawing a fullscreen quad, right? Are you clearing the framebuffer every frame?
|
# ? Nov 24, 2015 02:41 |
|
Xerophyte posted:Well, it kinda is. The direction of the geometry normal is defined by the triangle's vertex order, and the normal can be either pointing out from a counter-clockwise rotation (a right-handed system) or pointing out from a clockwise rotation (a left-handed system). Facing is a type of handedness in that sense. Mirror transforms reverse the direction of the winding and therefore also the direction of the geometry normal. This is mostly academic and you're not going to find an API asking you to select your primitive handedness or anything.

The normal of a triangle is determined based on winding, as specified by glFrontFace.
|
# ? Nov 24, 2015 10:03 |
|
Ralith posted:The normal of a triangle is determined based on winding, as specified by glFrontFace. Yes, my point is that picking a setting in glFrontFace is a choice between a left-handed or right-handed vertex order relative to the normal. It is a handedness, regardless of how your vector space is oriented.
|
# ? Nov 24, 2015 10:45 |
|
Sex Bumbo posted:You're drawing a fullscreen quad, right? Are you clearing the framebuffer every frame? Yes and yes. It's weird because it loops with my time uniform, but I have no idea why. edit: however, this code works like you'd expect, all the textures scroll in the same direction and there aren't any weird aliasing lines. code:
Changed the texture to a "water" image and it looks fine. Huh. edit3: Hmmm weird lines still exist with new texture, they're just much less noticeable. Tres Burritos fucked around with this message at 16:34 on Nov 24, 2015 |
# ? Nov 24, 2015 13:03 |
|
I have a lot of ideas about this, try turning on point sampling for your flow direction texture first though and see what that does to it.
|
# ? Nov 24, 2015 19:54 |
|
Sex Bumbo posted:I have a lot of ideas about this, try turning on point sampling for your flow direction texture first though and see what that does to it. I'm not sure what that means. I've already got the texture as a sampler2d in my shader, is that it?
|
# ? Nov 25, 2015 01:03 |
|
Something like glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); instead of GL_LINEAR or whatever. Can you upload your canvas texture? Sex Bumbo fucked around with this message at 02:11 on Nov 25, 2015 |
# ? Nov 25, 2015 02:09 |
|
Okay, it seems to look better now, but it still gets "worse" as time goes on. The canvas is generated on the fly by user input (for testing); you can check out the demo (click twice on the canvas, then hit "Not Rendering") and the source.
|
# ? Nov 25, 2015 02:35 |
|
If you output the texcoords you're sampling from you'll see the problem. Canvas is blending the colors of your line edges, and thus blending your vectors. This means you're sampling at random parts of your checkerboard that you probably don't want to be. If you make the checkerboard point sampled too you'll see smaller but somehow more noticeable artifacting. I'm not sure how to stop canvas from blending like that, but if you want to avoid the artifacting, you need to do four evaluations of your checker texture and bilinear filter them in your shader.
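For reference, the manual bilinear filter suggested here amounts to two horizontal lerps and one vertical lerp over four point samples. A Python sketch of the math the shader would do (made-up `bilinear`/`texel` names; real GPU filtering also applies a half-texel center offset, omitted here for simplicity):

```python
# Manually bilinear-filter four point-sampled taps.
# texel(x, y) is a point-sampled lookup; u, v are in texel units.
def bilinear(texel, u, v, width, height):
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0                       # fractional position
    x1 = min(x0 + 1, width - 1)                   # clamp at the edge
    y1 = min(y0 + 1, height - 1)
    top = texel(x0, y0) * (1 - fx) + texel(x1, y0) * fx
    bot = texel(x0, y1) * (1 - fx) + texel(x1, y1) * fx
    return top * (1 - fy) + bot * fy

# 2x2 checker: sampling halfway between the four texels averages them.
data = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 1.0, (1, 1): 0.0}
print(bilinear(lambda x, y: data[(x, y)], 0.5, 0.5, 2, 2))  # 0.5
```

The point of doing this in the shader is that you choose *what* gets blended: you can fetch the four direction vectors point-sampled, evaluate the checker/noise at each, and blend the evaluated colors, instead of letting the hardware blend the vectors themselves.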
|
# ? Nov 25, 2015 03:47 |
|
... poo poo ...
|
# ? Nov 25, 2015 13:56 |
|
It's probably not that bad -- the main thing you need to figure out is what's supposed to happen when two vectors get blended together. I don't know a lot about canvas but if you assume the two lines will always blend together, you need to figure out what to do in this case: E: Using a noise texture would definitely solve the problem. Also if you set your scrolling texture to wrap you don't need those modulos. Also also it's better to just wrap things as vec2s instead of using x and y variables.
|
# ? Nov 25, 2015 18:43 |
|
Yeah, the checkerboard was just for testing, I was planning on *some* kind of noise anyway. Looks much better. Also thanks for the shader tips, I can do neat stuff with them but there are huge gaps in my knowledge.
|
# ? Nov 25, 2015 23:13 |
|
In OpenGL, is it possible to set up two textures to receive the output from the same colour attachment and enable blending on only one of them? I'm doing a gaussian filter whose result I want to output to an empty texture, as well as a texture that sums up several runs of the filter (using blending.) Currently the shader outputs the same result to two different colour attachments with blending enabled on one of them. I'd like to make my gaussian filter shader more generic so that I can use it for something that only needs a single output.
|
|
# ? Nov 27, 2015 13:36 |