zzz
May 10, 2008
It works in OpenGL ES with GL_EXT_shader_framebuffer_fetch, not sure about desktop OpenGL.

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

OK I thought about the issue some more and it turns out that color idea would be a hack that wouldn't quite do what I want anyways.

What I really need is to render the same thing to two different depth buffers. I have a number of meshes to render, and I want one of the depth buffers to be cleared in between drawing each mesh, the other should build up the depth data for the whole scene, only clear at the beginning of each frame.

Sex Bumbo
Aug 14, 2004
Can you explain what sort of effect you're trying to achieve?

Sex Bumbo
Aug 14, 2004
I used to turn off vsync in my apps to get a vague idea of how fast my program was running. In DX12 this seems to not work the same way. Even if I present immediately, I still have a queue of in-flight command lists that I'm waiting on the gpu to finish using before I can reset them. I sort of miss the old way even though it only really served as a cruddy perf metric.

Is there a way to keep the gpu busy doing drawing commands? I know it's not really a useful thing to do, but I'm still curious how to properly do it. From what I understand you don't really want to touch a command list after you tell it to execute until it's actually finished executing, right? So how do you pack in more work without creating a jillion command lists? I was thinking of repeatedly rendering to a render target and copying the render target to a swap chain render target whenever one is available (I think this needs a jillion lists). Or alternatively just hosing the current command list if all the command lists that are writing to the swap chain buffers are still queued?

Sex Bumbo fucked around with this message at 01:52 on Nov 21, 2015

Xerophyte
Mar 17, 2008

This space intentionally left blank

Joda posted:

This is kind of a tangential question, but as far as I can tell this is the place I'm most likely to find people who work/have worked with rendering academically.

What drawing program do/did you use for theses and papers to demonstrate spatial concepts? I'm currently working in Geogebra, and it works great for 2D simplifications of concepts, but there are some things where a 3D drawing is simply needed, and doing those in Geogebra is a pain.

Since no one else answered this one. I've seen 3-ish approaches:
1: Code the basic graphics -- in TikZ or MATLAB, through debug outputs in your program, using a homebrewed SVG generator in one case -- and use a real image editor to annotate. Works if you need to illustrate something primitive or some type of data your program can essentially just output directly; looks like crappy programmer art otherwise.
2: Draw your own 3D illustrations. Typically in Illustrator, but fanatical open source types go for Inkscape and survive. Works for people who can draw, which is definitely not me.
3: Probably the most common at least for published research in industry is to hire an illustrator (or at least bribe a designer friend) to do your complex 3D illustration. Who'll probably use Illustrator, but at that point it matters less.

So ... I guess it depends? I'll say that my master's thesis and the couple of internal papers I've written with my current team went for the classic of debug renderings with arrows and outlines drawn on top, but that's mostly because maybe a dozen people will read them and those people can ask me if anything is unclear. If I were to somehow actually publish something at some point I'd get someone else to do my illustrations.

Xerophyte
Mar 17, 2008

This is 2D graphics but I haven't seen anyone link to Benedikt Bitterli's Tantalum here, which is a shame since it's cool as hell. He also includes a full derivation of the 2D light transport formulas. :swoon:

E: It's also really pretty, ok?


E2: So pretty...

Xerophyte fucked around with this message at 16:54 on Nov 21, 2015

Jewel
May 2, 2009

Xerophyte posted:

This is 2D graphics but I haven't seen anyone link to Benedikt Bitterli's Tantalum here, which is a shame since it's cool as hell. He also includes a full derivation of the 2D light transport formulas. :swoon:

E: It's also really pretty, ok?


Thank you for that! This is really really cool.

lord funk
Feb 16, 2004

Can anyone recommend a cube map / skybox stitcher for Mac? Something that can take iPhone panoramas or a series of photos to create the 6 box-side images for a cube map.

Tres Burritos
Sep 3, 2009

So let's say I wanted to make something like this, but instead of having particles flow over a grid (which is easy, and I've done it), I want them to flow along predefined lines. I'm basically looking for a way to visualize flow direction and speed in a DAG, in real time, in a way that looks cool.

I was thinking that this would be a good starting point, but I'm feeling like that would be overkill. Am I barking up the wrong tree here or should I try my hand at the valve solution?

Doc Block
Apr 15, 2003
Fun Shoe
It seems like no one agrees on the definition for the "handedness" of a coordinate system. I've heard OpenGL's called left handed, heard it called right handed, same for DirectX and Metal.

Some people say that your thumb points towards +X, index towards +Y, and middle finger towards +Z, and whether you use your left or right hand is the "handedness" (i.e. left handed means +Z goes away from the viewer, right handed has +Z coming towards the viewer), but that doesn't account for weirdos who insist X and Y are both horizontal and that Z represents height.

What would you call this coordinate system?


I'm asking because I seem to have screwed up my matrix math somewhere, using examples from sources using different definitions of handedness.

Doc Block fucked around with this message at 06:28 on Nov 23, 2015

jawbroken
Aug 13, 2007

messmate king
If you start with standard 2D cartesian coordinates on a piece of paper and want to extend them to 3D, then you have two options: positive z axis coming out of the paper or going into the paper. The former is right-handed coordinates and the latter is left-handed. There are a lot of different ways to think about why that is: curling your fingers and pointing your thumb, pointing some other set of fingers at random, whatever you find a helpful mnemonic. The other factors you're talking about are all just rotations of those two options, but you can't turn a left-handed coordinate system into a right-handed one, or vice versa, using rotation. Regardless, that makes the coordinate system in your picture left-handed, because if you rotate it back to the standard 2D cartesian case then +Z points into the paper.
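One way to check this rule numerically: take the cross product of the +X and +Y basis vectors; in a right-handed system it equals +Z, in a left-handed one it equals -Z. A minimal Python sketch (function names are mine):

```python
def cross(a, b):
    """Right-hand-rule cross product of two 3D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def handedness(x, y, z):
    """'right' if (x, y, z) form a right-handed basis, else 'left'."""
    return 'right' if cross(x, y) == z else 'left'

# Standard 2D cartesian axes extended with +Z coming out of the paper:
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # right
# Same X and Y, but +Z going into the paper:
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, -1)))  # left
```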

jawbroken fucked around with this message at 07:22 on Nov 23, 2015

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today
Handedness is simple. Pick a hand, stick your thumb out to the side, index finger parallel with your palm, and middle finger in the direction your palm is facing. If you can orient your hand to match the positive axes, then that's the handedness of the coordinate system.

Doc Block
Apr 15, 2003
You can do that with either hand, though. The discussion is complicated by the weirdos who insist on having X and Y be the horizontal axes, with Z being up instead of depth.

But yeah, the general consensus seems to be that the coordinate system in the image I posted is left handed. But it makes me wonder why I've seen people say OpenGL's coordinate system is right handed, since IIRC in OpenGL +Z is going into the screen.

Ugh. I just want my matrixes to not be weird. Right now I have to do a special case for rotation, and invert the angle when I rotate something manually but leave it alone when I'm just plugging the physics engine's rotation values into the rotation matrix function.

Doc Block fucked around with this message at 08:44 on Nov 23, 2015

Ralith
Jan 12, 2011


Doc Block posted:

You can do that with either hand, though...
Er, sorry, I forgot to mention that your thumb is X, index finger Y, middle finger Z.

Doc Block posted:

But yeah, the general consensus seems to be that the coordinate system in the image I posted is left handed, which leaves me wondering why I'm seeing people say OpenGL's coordinate system is right handed, since IIRC in OpenGL +Z is going into the screen.
IIRC the old fixed-function OpenGL/GLU convenience functions took inputs in right-handed coordinates, while the low-level device space has always been left-handed. Or maybe it's the other way around. The confusion arises from them differing, so when you say "OpenGL's coordinate system" nobody really knows what you're talking about.

Doc Block
Apr 15, 2003
I see...

3D math is my greatest weakness when it comes to game development.

Ralith
Jan 12, 2011

To be clear, if you're using modern OpenGL and constructing matrices yourself, you're working directly in device space. The GLU functions (and many libraries that imitate them) constructed their matrices such that vertex data would be reflected out of the other space.

Doc Block
Apr 15, 2003
I'm using Metal, which is device space as well. I copied Apple's simd matrix math functions from their Metal example code, whose projection matrix functions set up a left-handed coordinate system (+Z into the screen).

For whatever reason, their rotation matrix function seems to be rotating things counter-clockwise (so +45 degrees around the Z axis results in the object being tilted to the left), while the physics engine I'm using is rotating things clockwise, so I have to special case out manual vs physics engine rotation and invert the angle for one of them.

I had thought that maybe rotations appearing "wrong" had to do with the Z axis getting flipped somewhere, but that doesn't seem to be the case. And when I started looking online for alternate rotation matrix functions that applied the rotation clockwise, my sleep-deprived brain got confused.

Doc Block fucked around with this message at 09:08 on Nov 23, 2015

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
The GPU does not care. Handedness is simply a function of how you set up your perspective matrix.

Xerophyte
Mar 17, 2008

There are a lot of orientations and chiralities that make sense depending on context.

If you're a raster hardware engineer you might think of your output as a buffer, with (x = 0, y = 0) at the top left and (x = width, y = height) at the bottom right corner. Then it's natural to use -Y as the up direction in 3D, and +Z or -Z as into the screen depending on whether or not you like left or right handed chirality.

If you're an architect then you might think of your drawing of the floor plan as having x and y coordinates. In that case it's natural to use +Z or -Z as the up direction when converting that to 3D.

It's not entirely true that the GPU (or rather the hardware APIs) don't care about this stuff. Clip space has an inherent handedness. Window (and possibly texture, if the API has that concept) space has another handedness, which may or may not be consistent with clip space. There's also a handedness for triangle winding, where a winding is right-handed if a front facing triangle has counterclockwise winding. All of these are arbitrary and the API may or may not let you control them (e.g. glClipControl and the like).

It would be nice if there was an obvious right convention for all of these, but there isn't and so you end up getting used to working in all of them.

Sex Bumbo
Aug 14, 2004

Doc Block posted:

I'm using Metal, which is device space as well. I copied Apple's simd matrix math functions from their Metal example code, whose projection matrix functions set up a left-handed coordinate system (+Z into the screen).

For whatever reason, their rotation matrix function seems to be rotating things counter-clockwise (so +45 degrees around the Z axis results in the object being tilted to the left), while the physics engine I'm using is rotating things clockwise, so I have to special case out manual vs physics engine rotation and invert the angle for one of them.

I had thought that maybe rotations appearing "wrong" had to do with the Z axis getting flipped somewhere, but that doesn't seem to be the case. And when I started looking online for alternate rotation matrix functions that applied the rotation clockwise, my sleep-deprived brain got confused.

I think a counter-clockwise rotation function is fairly typical. It's your physics engine that's weird. As mentioned, most things, including clip space coordinates, can be controlled, but it's nice to leave them as-is for anyone new coming into your code, so they can just make some assumptions about how things work and be right.

Remember you can just make a matrix that converts from one space to another so that you can do all your world space positioning in one handedness and view space positioning in different one if you want. The GPU doesn't know which you're using, it only cares about the numbers you output into clip space. You don't even need a projection matrix for 3D transforms, it's just a convenient way to project things (but you should definitely use one anyway).
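Both points are easy to verify numerically: the textbook 2D rotation matrix rotates counter-clockwise for positive angles in a Y-up, X-right frame, and negating the angle gives the clockwise convention. A minimal Python sketch (names are mine):

```python
import math

def rotate(p, degrees):
    """Apply the standard 2D rotation matrix to point p.
    Positive angles rotate counter-clockwise in a Y-up frame."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    x, y = p
    return (c * x - s * y, s * x + c * y)

# +90 degrees takes +X to +Y: counter-clockwise.
x, y = rotate((1.0, 0.0), 90)
print(round(x, 9), round(y, 9))   # 0.0 1.0

# A clockwise-positive convention (as some physics engines use)
# is just the same matrix with the angle negated:
x, y = rotate((1.0, 0.0), -90)
print(round(x, 9), round(y, 9))   # 0.0 -1.0
```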

Sex Bumbo fucked around with this message at 18:58 on Nov 23, 2015

Suspicious Dish
Sep 24, 2011


Xerophyte posted:

There are a lot of orientations and chiralities that make sense depending on context.

By default, GL defines clip space as being right-handed, but, again, this is just a function of the near and far planes in clip space. You can change it with glDepthRange to flip the near and far planes around, which has existed since day 1, no extension required.

I've never actually heard of or considered backface culling to be about handed-ness, but I can see your point. You can change it with glFrontFace, as usual.

The GPU doesn't have any innate concept of any of this, it just does math from the driver. I write GL drivers for modern GPUs, and glDepthRange and such don't usually go to the registers, they're just folded into a higher-level transformation. Culling, however, is a register property, but it's simply about identifying the winding order of assembled primitives.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Doc Block posted:

I'm using Metal, which is device space as well. I copied Apple's simd matrix math functions from their Metal example code, whose projection matrix functions set up a left-handed coordinate system (+Z into the screen).

For whatever reason, their rotation matrix function seems to be rotating things counter-clockwise (so +45 degrees around the Z axis results in the object being tilted to the left), while the physics engine I'm using is rotating things clockwise, so I have to special case out manual vs physics engine rotation and invert the angle for one of them.

I had thought that maybe rotations appearing "wrong" had to do with the Z axis getting flipped somewhere, but that doesn't seem to be the case. And when I started looking online for alternate rotation matrix functions that applied the rotation clockwise, my sleep-deprived brain got confused.

Are you sure your physics engine doesn't have a way to get the model transformation matrix directly? Not sure about other physics engines/libraries, but Bullet's motion states have a getOpenGLMatrix() function that will fill out an OpenGL-formatted 4x4 matrix for you. I imagine something like that would be a fairly standard feature for any physics engine aimed at games. Of course, Metal may use a different major form than OpenGL, but that should be an easy fix.

Ralith
Jan 12, 2011


Suspicious Dish posted:

By default, GL defines clip space as being right-handed, but, again, this is just a function of the near and far planes in clip space. You can change it with glDepthRange to flip the near and far planes around, which has existed since day 1, no extension required.

OpenGL documentation posted:

After clipping and division by w, depth coordinates range from -1 to 1, corresponding to the near and far clipping planes.
Huh? This sounds like left-handed coordinates. Or is +y down?

Suspicious Dish posted:

I've never actually heard of or considered backface culling to be about handed-ness, but I can see your point. You can change it with glFrontFace, as usual.
It's not that backface culling is about handedness, it's that winding changes when you reflect geometry.

Sex Bumbo
Aug 14, 2004

Ralith posted:

Huh? This sounds like left-handed coordinates. Or is +y down?
You're right, Suspicious Dish is incorrect.

Clip space, referring to post-projection multiply but pre-perspective divide, is left handed. I.e. you want your projection matrix to put you into a left handed coordinate system where near z/w = -1 and far z/w = 1. It doesn't matter if you're coming from a left handed or right handed system before this, all that matters is that you end up in this left handed system. You can make a matrix that takes you from either and puts you into this. The post-division NDC space is naturally going to be the same space as clip space and easier to understand since it's the same except divided by w. The ndc->window transform is a fixed function transform quite unlike the view->clip transform which is presumably done in a shader.

Here's a lovely explanation. The red parts have a bunch of processes but remember they're all optional, all that matters is you provide some clip space coordinate to the gpu, it doesn't know or care how you made it. NDC is -1 <= x, y, z <= 1 and left handed.
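As a sanity check of the "near z/w = -1, far z/w = 1" claim, here's the classic GL-convention perspective matrix (gluPerspective-style: right-handed eye space looking down -Z) applied to points on the near and far planes. A minimal Python sketch, names mine:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """GL-convention perspective matrix: right-handed eye space, -Z forward,
    mapping z = -near to clip z/w = -1 and z = -far to z/w = +1."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def transform(m, v):
    """Row-major 4x4 matrix times a column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

near, far = 0.1, 100.0
proj = perspective(60.0, 16.0 / 9.0, near, far)

for z_eye in (-near, -far):                  # points on the near and far planes
    x, y, z, w = transform(proj, [0.0, 0.0, z_eye, 1.0])
    print(round(z / w, 6))                   # -1.0, then 1.0
```

Note the flipped handedness comes from the -1 in the bottom row: w ends up as -z_eye, so a right-handed view space lands in left-handed NDC.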

Sex Bumbo fucked around with this message at 21:33 on Nov 23, 2015

Xerophyte
Mar 17, 2008


Ralith posted:

It's not that backface culling is about handedness, it's that winding changes when you reflect geometry.

Well, it kinda is. The direction of the geometry normal is defined by the triangle's vertex order, and the normal can be either pointing out from a counter-clockwise rotation (a right-handed system) or pointing out from a clockwise rotation (a left-handed system). Facing is a type of handedness in that sense. Mirror transforms reverse the direction of the winding and therefore also the direction of the geometry normal. This is mostly academic and you're not going to find an API asking you to select your primitive handedness or anything.
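The winding-flip effect of a mirror transform is easy to see with signed areas: a triangle's 2D signed area is positive for counter-clockwise winding, and any reflection (negative-determinant transform) flips its sign. A minimal Python sketch, names mine:

```python
def signed_area(a, b, c):
    """Signed area of triangle abc; positive when a->b->c winds
    counter-clockwise in a Y-up, X-right frame."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

tri = [(0, 0), (1, 0), (0, 1)]            # counter-clockwise winding
print(signed_area(*tri))                  # 0.5

mirrored = [(-x, y) for x, y in tri]      # reflect across the Y axis
print(signed_area(*mirrored))             # -0.5: the winding flipped, so a
                                          # front face becomes a back face
                                          # unless you also flip glFrontFace
```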

I work on a component called "Raytracing API Development" on the org chart, so my view of a graphics API is probably less hardware-oriented than most of this thread's. We don't have to worry about some of the handedness issues of a raster API -- our projection transform is an arbitrary user-defined function that maps from image raster positions to rays -- and I'll happily admit that my knowledge of exactly how clip space works in GL is fuzzy at best.

Doc Block
Apr 15, 2003

Joda posted:

Are you sure your physics engine doesn't have a way to get the model transformation matrix directly? Not sure about other physics engines/libraries, but Bullet 's motion states have a getOpenGLMatrix() function that will fill out an OpenGL formatted 4x4 matrix for you. I imagine something like that would be a fairly standard feature for any physics engine aimed at games. Of course, Metal may use a different major form than OpenGL, but that should be an easy fix.

I should point out that my physics engine is 2D. Since my game is top-down, I'm cheating and using Chipmunk2D for physics since I've used it to make games before, and know it can do exactly what I need (namely planetary/solar orbiting).

In the interest of doing what a 2D game developer would expect, Chipmunk2D handles rotation so that positive values result in clockwise rotation, and 0 degrees points North instead of East.

I've fixed my game now, so that it knows to invert the rotation value if it isn't coming from the physics engine. I know that counter-clockwise is probably the mathematically correct way, but it's easier on my brain if I keep it so that positive value = clockwise.
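For what it's worth, converting between the two conventions can be a one-liner rather than a special case: a clockwise-from-north angle and the math convention (counter-clockwise from east) are related by θ_ccw = 90° - θ_cw. A tiny Python sketch (function name is mine):

```python
def cw_north_to_ccw_east(deg_cw):
    """Map a clockwise-from-north angle (Chipmunk2D-style) to the math
    convention (counter-clockwise from east), wrapped into [0, 360)."""
    return (90.0 - deg_cw) % 360.0

print(cw_north_to_ccw_east(0))     # 90.0  (north)
print(cw_north_to_ccw_east(90))    # 0.0   (east)
print(cw_north_to_ccw_east(180))   # 270.0 (south)
```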

Tres Burritos
Sep 3, 2009

So I've got a weird problem with my flowing lines thing



For some reason the border between lines gets really thick, and then resets to "normal" when I set the "time" uniform to 0.

Any ideas? The shader code is pretty simple.

code:
    uniform vec2 resolution;
    //the canvas that I get the line direction texture from
    uniform sampler2D canvas;

    //just a checkerboard texture that's 256x256
    uniform sampler2D checkerboard;

    uniform vec2 resolution2;
    uniform float time;
    void main(){
        //the position this fragment is on the main canvas texture
        vec2 pos = gl_FragCoord.xy / resolution.xy;

        //the color of the fragment, this is used to get the direction of a line
        vec4 canvasColor = texture2D(canvas,pos);

        //direction in x,y from -1 to 1
        float xDir = (canvasColor.r - 0.5) / 0.5;
        float yDir = (canvasColor.g - 0.5) / -0.5;

        //make it a vector and normalize it
        vec2 flowDir = normalize(vec2(xDir,yDir));

        //checkerboard texture position
        float checkX = mod( (mod(gl_FragCoord.x, resolution2.x) / resolution2.x) + (flowDir.x * time),1.0);
        float checkY = mod( (mod(gl_FragCoord.y, resolution2.y) / resolution2.y) + (flowDir.y * time),1.0);

        //finally, get the color for this fragment from the checkerboard texture
        vec4 checkColor = texture2D(checkerboard,vec2(checkX,checkY));

        //set that to the color
        gl_FragColor = vec4(checkColor.x * canvasColor.a,checkColor.y * canvasColor.a,checkColor.z * canvasColor.a,canvasColor.a);
    }
Maybe it has something to do with alphas?

Sex Bumbo
Aug 14, 2004
You're drawing a fullscreen quad, right? Are you clearing the framebuffer every frame?

Ralith
Jan 12, 2011


Xerophyte posted:

Well, it kinda is. The direction of the geometry normal is defined by the triangle's vertex order, and the normal can be either pointing out from a counter-clockwise rotation (a right-handed system) or pointing out from a clockwise rotation (a left-handed system). Facing is a type of handedness in that sense. Mirror transforms reverse the direction of the winding and therefore also the direction of the geometry normal. This is mostly academic and you're not going to find an API asking you to select your primitive handedness or anything.

I work on a component called "Raytracing API Development" on the org chart, so my view of a graphics API is probably less hardware-oriented than most of this thread's. We don't have to worry about some of the handedness issues of a raster API -- our projection transform is an arbitrary user-defined function that maps from image raster positions to rays -- and I'll happily admit that my knowledge of exactly how clip space works in GL is fuzzy at best.
The normal of a triangle is determined based on winding, as specified by glFrontFace. If the matrices you transform your vertex data by involve reflection, then you'll need to account for this in your winding. Handedness is irrelevant, you just need to be sure your winding is as intended at the final stage.

Xerophyte
Mar 17, 2008


Ralith posted:

The normal of a triangle is determined based on winding, as specified by glFrontFace.

Yes, my point is that picking a setting in glFrontFace is a choice between a left-handed or right-handed vertex order relative to the normal. It is a handedness, regardless of how your vector space is oriented.

Tres Burritos
Sep 3, 2009

Sex Bumbo posted:

You're drawing a fullscreen quad, right? Are you clearing the framebuffer every frame?

Yes and yes. It's weird because it loops with my time uniform, but I have no idea why.

edit: however, this code works like you'd expect, all the textures scroll in the same direction and there aren't any weird aliasing lines.

code:
void main(){
    vec2 pos = gl_FragCoord.xy / resolution.xy;
    vec4 canvasColor = texture2D(canvas,pos);

    float checkX = mod((mod(gl_FragCoord.x, resolution2.x) / resolution2.x) + time,1.0);
    float checkY = mod((mod(gl_FragCoord.y, resolution2.y) / resolution2.y) + time,1.0);
    vec4 checkColor = texture2D(checkerboard,vec2(checkX,checkY));

    gl_FragColor = vec4(checkColor.x,checkColor.y,checkColor.z,canvasColor.a);
}
edit2:

Changed the texture to a "water" image and it looks fine. Huh.

edit3:

Hmmm weird lines still exist with new texture, they're just much less noticeable.

Tres Burritos fucked around with this message at 16:34 on Nov 24, 2015

Sex Bumbo
Aug 14, 2004
I have a lot of ideas about this, try turning on point sampling for your flow direction texture first though and see what that does to it.

Tres Burritos
Sep 3, 2009

Sex Bumbo posted:

I have a lot of ideas about this, try turning on point sampling for your flow direction texture first though and see what that does to it.

I'm not sure what that means. I've already got the texture as a sampler2D in my shader, is that it?

Sex Bumbo
Aug 14, 2004
Something like

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

instead of GL_LINEAR or whatever.


Can you upload your canvas texture?

Sex Bumbo fucked around with this message at 02:11 on Nov 25, 2015

Tres Burritos
Sep 3, 2009

Okay, it seems to look better now, but it still gets "worse" as time goes on.

The canvas is generated on the fly by user input (for testing) you can check out the demo (click twice on the canvas, then hit "Not Rendering") and the source.

Sex Bumbo
Aug 14, 2004
If you output the texcoords you're sampling from you'll see the problem. Canvas is blending the colors of your line edges, and thus blending your vectors. This means you're sampling at random parts of your checkerboard that you probably don't want to be. If you make the checkerboard point sampled too you'll see smaller but somehow more noticeable artifacting. I'm not sure how to stop canvas from blending like that, but if you want to avoid the artifacting, you need to do four evaluations of your checker texture and bilinear filter them in your shader.
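The "four evaluations plus a lerp" step is just bilinear filtering done by hand. Here's a minimal Python sketch of the math (names mine; a GLSL version would use floor/fract and four texture2D taps at texel centers):

```python
import math

def bilinear(tex, u, v):
    """Bilinearly sample a 2D list `tex` (rows of floats) at continuous
    coordinates (u, v) in texel units, wrapping at the edges like GL_REPEAT."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(math.floor(u)) % w, int(math.floor(v)) % h
    x1, y1 = (x0 + 1) % w, (y0 + 1) % h
    fx, fy = u - math.floor(u), v - math.floor(v)
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx   # lerp along x, top row
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx   # lerp along x, bottom row
    return top * (1 - fy) + bot * fy                  # lerp along y

checker = [[0.0, 1.0],
           [1.0, 0.0]]
print(bilinear(checker, 0.0, 0.0))   # 0.0 (exactly on a texel)
print(bilinear(checker, 0.5, 0.0))   # 0.5 (halfway between 0 and 1)
```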

Tres Burritos
Sep 3, 2009

... poo poo ...

Sex Bumbo
Aug 14, 2004
It's probably not that bad -- the main thing you need to figure out is what's supposed to happen when two vectors get blended together. I don't know a lot about canvas but if you assume the two lines will always blend together, you need to figure out what to do in this case:



E: Using a noise texture would definitely solve the problem.

Also if you set your scrolling texture to wrap you don't need those modulos. Also also it's better to just wrap things as vec2s instead of using x and y variables.

Tres Burritos
Sep 3, 2009

Yeah, the checkerboard was just for testing, I was planning on *some* kind of noise anyway. Looks much better.

Also thanks for the shader tips, I can do neat stuff with them but there are huge gaps in my knowledge.

Joda
Apr 24, 2010

In OpenGL, is it possible to set up two textures to receive the output from the same colour attachment and enable blending on only one of them? I'm doing a gaussian filter whose result I want to output to an empty texture, as well as a texture that sums up several runs of the filter (using blending.) Currently the shader outputs the same result to two different colour attachments with blending enabled on one of them. I'd like to make my gaussian filter shader more generic so that I can use it for something that only needs a single output.
