Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Hubis posted:

Sure, though since I'm still frantically editing slides I'll probably save going into great depth until I have some free time :)

Basically, the hard part of direct in-scattering (1 bounce) is the fact that the light source may not be visible from all points along the ray. If it were, you could just evaluate the integral from the eye to the scene depth once and be done with it. This technique takes advantage of the fact that, since the visibility function is binary, you can re-state it as the sum of integrals over the lit portions. Furthermore, you can re-state an integral over an interval as the integral to the end-point minus the integral to the starting point:

code:

L = I(a, b) + I(c, d)
L = [I(0, b) + I(0, d)] - [I(0, a) + I(0, c)]


So what we do is render a mesh that corresponds to the volume that is visible to the light, evaluating the integral in the pixel shader and adding the result if it's a front-face or subtracting the result if it's a back-face. We do this by using a tessellated volume corresponding to the world-space coverage of the light and using the shadow map to offset the depth so it matches the world. That's the wireframe view you see at the end.

There are some tricks in how the integrals are evaluated, how the media is modeled, filtering to provide better results, etc., but the concept itself is pretty straightforward once you wrap your head around the math.
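As a rough sketch (not the actual shader from the talk -- the media model and all the names here are made up), the per-face pass could look something like this, with the results accumulated via additive blending:

code:
#version 450

uniform vec3  u_eyePos;
uniform vec3  u_lightPos;
uniform float u_sigmaS;    // scattering coefficient (toy media model)
uniform float u_sigmaT;    // extinction coefficient

layout(location = 0) in vec3 v_worldPos;   // vertex of the lit-volume mesh
layout(location = 0) out vec4 outScatter;  // blended additively into the frame

// crude stand-in for I(0, x): march from the eye to worldPos and accumulate
// single-scattered light with exponential falloff (no shadowing here -- the
// front/back-face trick is what handles visibility)
float inscatterTo(vec3 worldPos)
{
    const int STEPS = 16;
    vec3  toPos = worldPos - u_eyePos;
    float len   = length(toPos);
    float dt    = len / float(STEPS);
    float sum   = 0.0;
    for (int i = 0; i < STEPS; ++i)
    {
        float t = (float(i) + 0.5) * dt;
        vec3  p = u_eyePos + toPos * (t / len);
        float toLight = length(u_lightPos - p);
        // transmittance eye->p times transmittance light->p
        sum += u_sigmaS * exp(-u_sigmaT * (t + toLight)) * dt;
    }
    return sum;
}

void main()
{
    float I = inscatterTo(v_worldPos);
    // front faces open a lit interval (+), back faces close one (-):
    // summing +/-I over all faces gives I(0,b) - I(0,a) per lit segment
    outScatter = vec4(gl_FrontFacing ? I : -I);
}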


:3::hf::3:


I'll be going into all the details at the talk. :ninja:

How much is related to this cool paper: https://research.nvidia.com/sites/default/files/publications/ftizb_cameraReady.pdf


Sex Bumbo
Aug 14, 2004
Also thanks for replying with deets, I ain't spending a grand and a week of time to go to GDC so it's nice to not get gated out of poo poo.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Malcolm XML posted:

WebRender is cool, pcwalton did a talk on it recently

WebRender is good work, but it's still a relatively easy problem -- boxes with radii (prerender a bunch of circles, use them from atlases), drop shadows (blur in compute shader), etc.

Generic paths and curves, getting miter limits and path winding, implementing all of the esoterica of PostScript, that's the difficult thing.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

It's basically a logical extension of the same general idea, but
a) Generates the volume dynamically to allow for lighting/shadowing
b) Solves the actual lighting integral (rather than just finding the net thickness) so it's physically correct and allows for more flexible media behavior (phase functions, etc)
c) Takes advantage of modern GPU features for some other stuff


Not at all directly, and in fact they are somewhat incompatible (since you care about shadowing for points that won't actually appear in the frame-buffer directly).

Sex Bumbo posted:

Also thanks for replying with deets, I ain't spending a grand and a week of time to go to GDC so it's nice to not get gated out of poo poo.

We should be posting the presentations online a few weeks post-GDC, so you can check it out then. I'll update with a link once it's up.

elite_garbage_man
Apr 3, 2010
I THINK THAT "PRIMA DONNA" IS "PRE-MADONNA". I MAY BE ILLITERATE.
Peter Shirley recently released a couple of ebooks on writing a ray tracer in C++

Ray Tracing in One Weekend
Ray Tracing: the Next Week

Tres Burritos
Sep 3, 2009

elite_garbage_man posted:

Peter Shirley recently released a couple of ebooks on writing a ray tracer in C++

Ray Tracing in One Weekend
Ray Tracing: the Next Week

Neato! Bought em.

Minsky
May 23, 2001

Cool, these should be a nice brush-up until PBRT 3rd Ed comes out, hopefully some time this year.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Suspicious Dish posted:

WebRender is good work, but it's still a relatively easy problem -- boxes with radii (prerender a bunch of circles, use them from atlases), drop shadows (blur in compute shader), etc.

Generic paths and curves, getting miter limits and path winding, implementing all of the esoterica of PostScript, that's the difficult thing.

I'm genuinely curious to know how many programs actually use all of PostScript's weird features. Like, can DPS serve files over a socket using http://www.pugo.org/main/project_pshttpd/ ?

Xerophyte
Mar 17, 2008

This space intentionally left blank

Minsky posted:

Cool, these should be a nice brush-up until PBRT 3rd Ed comes out, hopefully some time this year.

Proofing is apparently done and it's gone to the printers. A month ago or so Matt Pharr said (well, tweeted) that book-in-hand was expected around June. So: outlook positive. Maybe adding Wenzel Jakob to the author list will mean I get to look at the terrible secrets of Mitsuba that are otherwise forbidden to me...

Unrelatedly, the siggraph hotel reservations opened today. I guess one advantage of hosting it in Anaheim is that all the hotels are near the convention center. On the minus side, they're all in Anaheim.

Minsky
May 23, 2001

Xerophyte posted:

Proofing is apparently done and it's gone to the printers. A month ago or so Matt Pharr said (well, tweeted) that book-in-hand was expected around June. So: outlook positive. Maybe adding Wenzel Jakob to the author list will mean I get to look at the terrible secrets of Mitsuba that are otherwise forbidden to me...

Unrelatedly, the siggraph hotel reservations opened today. I guess one advantage of hosting it in Anaheim is that all the hotels are near the convention center. On the minus side, they're all in Anaheim.

I really look forward to it. It's by far my favorite graphics book I've read. It covers such a great breadth of rendering topics and doesn't shy from showing the full implementation for each of them.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Sex Bumbo posted:

Also thanks for replying with deets, I ain't spending a grand and a week of time to go to GDC so it's nice to not get gated out of poo poo.

yeah gently caress gdc for this, as someone who just dabbles here and there it's v frustrating

Minsky
May 23, 2001

Speaking of not gating things at GDC, here is the Khronos Vulkan presentation livestream:

http://livestream.com/khronosgroup/vulkan

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

baldurk posted:

This was already in my schedule planner :3:.

Also if we're plugging GDC talks then I am doing a short demo as part of Practical Development for Vulkan (presented by Valve Software).

Welp, congrats on selling out the whole room - there were probably 20-30 people who couldn't get in! Let me know if you are able to post the slides online.

baldurk
Jun 21, 2005

If you won't try to find coherence in the world, have the courtesy of becoming apathetic.
AFAIK the video & recording will be up gdcvault-style somewhere but for free (not gated).

I liked your talk too :). I didn't think to ask this at the time but do you ever find issues with lots of thin overdrawing polys if the shadow map is very noisy?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

baldurk posted:

AFAIK the video&recording will be up gdcvault-style somewhere but for free (not gated).

I liked your talk too :). I didn't think to ask this at the time but do you ever find issues with lots of thin overdrawing polys if the shadow map is very noisy?

I actually address this at the very end (the last "issues" slide). We had a bad performance case where you look perpendicular to the sun at sunset through a forest of bare trees (so lots of high frequency occlusion).

The key observation is that narrow intervals of lighting contribute very little inscattering further away because of the exponential falloff of the transmission. One thing you can do is have a dynamic tessellation scale -- have it higher when you are looking towards the light source (where you want very tight silhouettes) but have it fall off faster when you're looking perpendicular. In other words, scale it based on "T = N + M*abs(dot(V, L))".
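As a sketch, that scale could be a small helper feeding the tessellation factors -- baseFactor and viewBoost standing in for N and M, which are just made-up tuning constants:

code:
// hypothetical helper for a tessellation control shader; the returned value
// would feed gl_TessLevelOuter/gl_TessLevelInner
float inscatterTessFactor(vec3 viewDir, vec3 lightDir, float baseFactor, float viewBoost)
{
    // tessellate finely when looking along the light direction (tight
    // silhouettes matter there) and more coarsely when looking perpendicular,
    // where thin lit slivers contribute little inscattering anyway
    return baseFactor + viewBoost * abs(dot(normalize(viewDir), normalize(lightDir)));
}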

The other thing you can do, which we're looking into, is pre-processing the shadow map with something like a median filter based on how far the shadow map texel is from the eye. This would reduce the distant complexity, but not in a way that affects the image.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Ok stupid question time because I'm obviously not able to figure this out on my own: How do I convert between the coordinates on the screen/viewport and the X/Y at a particular z depth in the scene? I found some information that mostly focuses on identifying objects in the scene, so the solution involves intersecting rays and crap that I obviously don't need - my goal is to place an object at, say, z=-5 (away from the camera) so that it appears at a particular x,y in the viewport.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
There is an entire plane that is at z=-5, so the smart-rear end answer is "any point will do". Can you give a more concrete example of what you're looking for? Or did I get that backwards?

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Sure, it's kind of a mess really because there are so many coordinate systems for me to keep track of.

The setup is pretty simple: the camera is looking straight down and the view isn't moved, for now. I want to have my background at a known depth (say, -5), and everything else goes above it. There are actually several things for which I would need this:
  • I have some map tiles that I need to place and scale such that they take up a certain portion of the viewport (fill, or with 10% or a few px of margin, or something).
  • Suppose I know Moscow is at (500, 500px) in my map texture. Where is it in the world now so I can drop a nuke there?
  • If a user clicks on the screen, where was it on the map tiles in world coordinates
  • From there, I'd also need to get all the way back to texture or geographical coordinates as well
I'm finding this a bit tricky to explain concisely and unambiguously, but hope this makes a bit more sense.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

mobby_6kl posted:

  • If a user clicks on the screen, where was it on the map tiles in world coordinates

Let's start with this. So, the typical way of doing this is raycasting. For this, you need two things: a ray to throw outwards, and a plane to intersect with.

First, let's create the ray from the camera. Let's say you have coordinates in NDC space -- ranging from (-1,-1) to (1,1), representing where the mouse is on the screen.

Create a ray going outwards. In a right-handed space, for a mouse position of, say, (0.2, -0.4) in NDC, that's (0.2, -0.4, -1, 1). Now, to convert it to eye space, we multiply by the inverse of the projection matrix, since that's what got us from eye space to clip space in the first place. And then do the same thing for the inverse model-view of the camera. Now you have a ray going outwards from the camera.

You can do whatever you want with it, but let's intersect it with your plane at z=-5 to get coordinates on that plane. I'm sure you can Google ray-plane intersection.

My code for this looks like this:

JavaScript code:
        _unprojRay: function(out, x, y) {
            var rayClip = vec4.clone([x, y, -1, 1]);
            var rayEye = vec4.create();
            var projInv = mat4.create();
            mat4.invert(projInv, this._projection);
            vec4.transformMat4(rayEye, rayClip, projInv);
            // keep x/y, point it forward (z = -1) and make it a direction (w = 0)
            rayEye = vec4.clone([rayEye[0], rayEye[1], -1, 0]);

            var rayWorld = vec4.create();
            var mvInv = mat4.create();
            mat4.invert(mvInv, this._modelView);
            vec4.transformMat4(rayWorld, rayEye, mvInv);
            rayWorld = vec3.clone([rayWorld[0], rayWorld[1], rayWorld[2]]);
            vec3.normalize(out, rayWorld);
        },

        _castRay: function(x, y) {
            var model = this._pickModel(x, y);
            if (!model)
                return;

            // XXX: pick surfaces, not models
            var surface = model._surface;

            var direction = vec3.create();
            this._unprojRay(direction, x, y);
            var pos = this._cameraPos;

            var surfacePlaneN = surface.normal;
            var surfacePlaneV = vec3.clone(surface.origin);
            vec3.transformMat4(surfacePlaneV, surfacePlaneV, model.localMatrix);

            // ray-plane intersection: solve dot((pos + t*direction) - planePoint, N) = 0 for t
            var denom = vec3.dot(direction, surfacePlaneN);
            var p = vec3.create();
            vec3.subtract(p, pos, surfacePlaneV);
            var t = -vec3.dot(p, surfacePlaneN) / denom;
            // contact point = camera position + t * ray direction
            var out = vec3.create();
            vec3.scale(out, direction, t);
            vec3.add(out, pos, out);

            model.setContactPoint(out);
            this._contactPoints.push(out);
        },

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

mobby_6kl posted:

Ok stupid question time because I'm obviously not able to figure this out on my own: How do I convert between the coordinates on the screen/viewport and the X/Y at a particular z depth in the scene? I found some information that mostly focuses on identifying objects in the scene, so the solution involves intersecting rays and crap that I obviously don't need - my goal is to place an object at, say, z=-5 (away from the camera) so that it appears at a particular x,y in the viewport.
The graphics card is doing a specific sequence of well-defined linear transformations on the vertex data you upload to decide which pixels to color. You basically just need to run those backwards, which is generally easy because you can just invert the matrices. Depending on how you're doing your rendering, you might either already be intensely familiar with those transformations due to having written them yourself in shaders, or you might need to dig the model/view/projection matrices out of a monolithic engine somewhere.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Ralith posted:

The graphics card is doing a specific sequence of well-defined linear transformations on the vertex data you upload to decide which pixels to color. You basically just need to run those backwards, which is generally easy because you can just invert the matrices. Depending on how you're doing your rendering, you might either already be intensely familiar with those transformations due to having written them yourself in shaders, or you might need to dig the model/view/projection matrices out of a monolithic engine somewhere.
I did write everything from scratch, but don't really remember how half of it is supposed to work; it's an old project that I dug out to finish. This is what I was thinking, but couldn't figure out how to get the final coordinates. Seems like a ray-plane intersection was what I was missing.

Suspicious Dish posted:

Let's start with this. So, the typical way of doing this is raycasting. For this, you need two things: a ray to throw outwards, and a plane to intersect with.

First, let's create the ray from the camera. Let's say you have coordinates in NDC space -- ranging from (-1,-1) to (1,1), representing where the mouse is on the screen.

Thanks, got it working this way pretty much. It took a while to figure out a frustrating issue where my MV matrix was in an incorrect state depending on when this was called, so either my X or Y mapping was randomly off.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Hubis posted:

We should be posting the presentations online a few weeks post-GDC, so you can check it out then. I'll update with a link once it's up.

Talk/Library is up now, in case anyone is interested: https://developer.nvidia.com/VolumetricLighting

I need to get them to add the slides PDF with notes as well.

Sex Bumbo
Aug 14, 2004

Hubis posted:

Talk/Library is up now, in case anyone is interested: https://developer.nvidia.com/VolumetricLighting

I need to get them to add the slides PDF with notes as well.

I think I get this, so you're generating a funny extruded mesh from a shadow map right? Are you just drawing a grid and sampling it like a heightmap, or something more complex? What do you do for a point light mesh?

Can you help explain more of the point light integration? During rendering you'll end up with multiple distance ranges for which inscattering occurs, right? So for a single range, you'd have two points in the lookup texture -- both on the same row of the texture in the slide. Are you summing up the values between?

Sex Bumbo fucked around with this message at 19:47 on Mar 28, 2016

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Sex Bumbo posted:

I think I get this, so you're generating a funny extruded mesh from a shadow map right? Are you just drawing a grid and sampling it like a heightmap, or something more complex? What do you do for a point light mesh?

Right - actually, I render a cube that corresponds to the entire clip space of the shadow map: [(-1,-1,-1), (1,1,1)] in clip coordinates. When you multiply this by the shadow map inverse view-proj, you get a volume corresponding to the shadow map frustum, with Z=1 as the far plane. That far plane is tessellated and offset exactly like you describe.

For a point light, right now I am just rendering a cube around the light source with all six faces tessellated, and then lookup the depth from the dual paraboloid map based on the vector from the light to the vertex. It would probably be more efficient to use something like a geodesic sphere instead, but this let me reuse the existing tessellation code (since it uses square control points).
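For the shadow-map-frustum case in the first paragraph above, the vertex stage of that extruded volume might look roughly like the sketch below. This is a guess at the structure, not the library's code -- the names and depth convention are assumptions, and the real version does the offset in the tessellation stages rather than on pre-tessellated vertices:

code:
#version 450

layout(binding = 0) uniform sampler2D u_shadowMap;  // light-space depth, [0,1]

uniform mat4 u_shadowInvViewProj;   // shadow-map clip space -> world
uniform mat4 u_cameraViewProj;      // world -> camera clip space

// incoming vertices are the cube [(-1,-1,-1), (1,1,1)] in shadow-map clip
// space, with the z = 1 (far) face finely tessellated
layout(location = 0) in vec3 a_clipPos;

void main()
{
    vec3 p = a_clipPos;
    if (p.z > 0.999)
    {
        // far-face vertex: snap it to the occluder depth stored in the
        // shadow map so the back of the volume hugs the scene
        vec2 uv = p.xy * 0.5 + 0.5;
        p.z = textureLod(u_shadowMap, uv, 0.0).r * 2.0 - 1.0;  // [0,1] -> clip z
    }
    vec4 world = u_shadowInvViewProj * vec4(p, 1.0);
    world /= world.w;
    gl_Position = u_cameraViewProj * world;
}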

Sex Bumbo posted:

Can you help explain more of the point light integration? During rendering you'll end up with multiple distance ranges for which inscattering occurs, right? So for a single range, you'd have two points in the lookup texture -- both on the same row of the texture in the slide. Are you summing up the values between?

Right. The lookup texture gets the differential value at each texel, then I run a compute shader that marches across the rows (32 texels at a time), does a parallel prefix sum, and then adds the running row total offset. If you multiply this by the step size, you are just doing numerical integration across the row (so I only need to sample one texel to know the integral at a point, or two texels to know the integral over a specific interval).
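A stripped-down version of that pass might look like the compute shader below -- just to illustrate the idea, not the shipping code; the real one presumably uses warp-level ops, and the r32f LUT format and names here are my assumptions:

code:
#version 450
layout(local_size_x = 32) in;

// LUT holds the per-texel differential value going in; after this pass each
// texel holds the running integral along its row, scaled by the step size.
// Dispatch as (1, lutHeight, 1): one 32-wide workgroup per row.
layout(binding = 0, r32f) uniform image2D u_lut;

uniform float u_stepSize;

shared float s_chunk[32];

void main()
{
    int row   = int(gl_WorkGroupID.y);
    int lane  = int(gl_LocalInvocationID.x);
    int width = imageSize(u_lut).x;
    float rowTotal = 0.0;   // running offset carried across 32-texel chunks

    for (int base = 0; base < width; base += 32)
    {
        int x = base + lane;
        s_chunk[lane] = (x < width) ? imageLoad(u_lut, ivec2(x, row)).r : 0.0;
        barrier();

        // Hillis-Steele inclusive scan within the 32-texel chunk
        for (int stride = 1; stride < 32; stride *= 2)
        {
            float v = (lane >= stride) ? s_chunk[lane - stride] : 0.0;
            barrier();
            s_chunk[lane] += v;
            barrier();
        }

        if (x < width)
            imageStore(u_lut, ivec2(x, row), vec4((s_chunk[lane] + rowTotal) * u_stepSize));

        rowTotal += s_chunk[31];   // this chunk's total feeds the next one
        barrier();
    }
}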

Sex Bumbo
Aug 14, 2004
Why can't the prefix sum be pre-computed? What changes about it frame to frame?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Sex Bumbo posted:

Why can't the prefix sum be pre-computed? What changes about it frame to frame?

The distance between the eye and light can change, which will affect the rate of change as you move along "d". I also tightly map the range of "d" based on the current eye/light positions to maximize useful texels in the LUT. Finally, if the media properties change (shifting weather, etc) you'd need to recompute as well.

All that said, you definitely COULD just compute it once per light if you wanted (also assuming the media properties never changed). You might have to up the resolution of the LUT to account for the wider range, but I don't think that would be too bad. The truth is that I found calculating the LUT was incredibly cheap, so I haven't spent much time optimising it and tended towards simplicity and flexibility of API. There is definitely a lot of low-hanging fruit in the current implementation.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I'm certainly not in the same league in the slightest, but I also recently gave a talk!!! It's toy 3D graphics 101 to a bunch of non-graphics people at work. Live-coded a very basic software 3D engine and then ported it to WebGL.

Sorry about the audio/video quality; I was broadcasting over slow Wi-Fi to another guy recording it, also over Wi-Fi.

https://www.youtube.com/watch?v=v8jUNC5WdYE

It's probably boring af to a bunch of 10-year industry professionals, but I really enjoyed giving it so...

The_Franz
Aug 8, 2003

Am I missing something, or should these shaders be functionally identical?

code:
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable

layout (set = 0, binding = 1) uniform sampler   samp;
layout (set = 0, binding = 2) uniform texture2D tex;

layout(location = 0) in vec2 texcoord;
layout(location = 1) in vec4 color;

layout (location = 0) out vec4 outputColor;

void main(void)
{
	outputColor = color;
	outputColor.a *= texture(sampler2D(tex, samp), texcoord).r;
}
and

code:
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable

layout (set = 0, binding = 1) uniform sampler   samp;
layout (set = 0, binding = 2) uniform texture2D tex[1];

layout(location = 0) in vec2 texcoord;
layout(location = 1) in vec4 color;

layout (location = 0) out vec4 outputColor;

void main(void)
{
	outputColor = color;
	outputColor.a *= texture(sampler2D(tex[0], samp), texcoord).r;
}
glslang compiles both with no warnings or errors, the disassembled SPIR-V for both seems valid, and there are no errors from the validation layers when loading them, creating, writing to, and binding the descriptor sets, or submitting draw calls, yet the first works while the second produces no visible output. Inspecting the bindings during a frame using the second shader in RenderDoc shows that the sampler and texture are bound correctly along with all of the other shader parameters, so it seems like it should work. Am I making a mistake that nothing is catching, or is this potentially a driver bug?

baldurk
Jun 21, 2005

If you won't try to find coherence in the world, have the courtesy of becoming apathetic.
I certainly wouldn't count out the possibility that it's a driver bug. It seems like the shaders should be fine to me, although it's always possible something else in the surrounding code not posted is wrong. The only thing I'd suggest experimenting with is to see if an array of already-combined image samplers works - i.e. sampler2D samptex[1];

Otherwise try the typical thing if possible of trying on another vendor's implementation, or report it as a bug with a simple repro case.
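For reference, the combined-image-sampler variant baldurk is suggesting would look something like this (untested sketch, just adapting the shader above -- the descriptor set layout would also need a single combined image sampler binding instead of the separate sampler/texture pair):

code:
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable

layout (set = 0, binding = 1) uniform sampler2D samptex[1];

layout(location = 0) in vec2 texcoord;
layout(location = 1) in vec4 color;

layout (location = 0) out vec4 outputColor;

void main(void)
{
	outputColor = color;
	outputColor.a *= texture(samptex[0], texcoord).r;
}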

The_Franz
Aug 8, 2003

baldurk posted:

Otherwise try the typical thing if possible of trying on another vendor's implementation, or report it as a bug with a simple repro case.

Yep, doesn't work on my desktop with an Nvidia card but works on my HTPC at home which has AMD. I'll slim down the repro case as much as I can and post it on Nvidia's dev board.

MrPablo
Mar 21, 2003

Suspicious Dish posted:

I'm certainly not in the same league in the slightest, but I also recently gave a talk!!! It's toy 3D graphics 101 to a bunch of non-graphics people at work. Live-coded a very basic software 3D engine and then ported it to WebGL.

Sorry about the audio/video quality; I was broadcasting over slow Wi-Fi to another guy recording it, also over Wi-Fi.

https://www.youtube.com/watch?v=v8jUNC5WdYE

It's probably boring af to a bunch of 10-year industry professionals, but I really enjoyed giving it so...

I can't say I learned anything, but it was fun to watch :).

mobby_6kl
Aug 9, 2009

by Fluffdaddy
So the stuff from earlier works fine now but I have another question.

Right now I'm drawing my background tiles by texturing a quad just like everything else. This works OK, but it's kind of a pain in the rear end, especially with the scaling, as any amount of it makes the images look like poo poo. What would be better is to just draw them flat as a background, 1:1, so there's no scaling going on and one pixel of texture is one pixel on the screen. Since this probably isn't something that frequently needs to be done in OpenGL, I haven't been able to find much so far though.

Doc Block
Apr 15, 2003
Fun Shoe
So... just don't do any scaling? Lots and lots of 2D games use OpenGL so they can have hardware-accelerated scaling, rotation, and alpha blending, plus shaders, etc.

Doc Block
Apr 15, 2003
Fun Shoe
To be helpful, set up your orthographic projection so that coordinates match the viewport pixels 1:1.

What are you scaling, and why? Your 2D artwork still has to match the size you want it to appear on screen, or else you'll have to scale it up/down.

Doc Block fucked around with this message at 06:32 on Apr 5, 2016

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Draw screen-aligned quads -- set up your viewport and projection so that your quads are aligned to fragments.

You likely don't want filtering, so instead of using texture2D in your shader, use texelFetch instead, which also takes non-normalized texture coordinates, making your math easier.

It can be a pain to set up the scene and get it all working right.
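A minimal fragment shader along those lines might look like this -- the names are placeholders, and note that gl_FragCoord counts pixels from the bottom-left, so you may need to flip y depending on how your tiles are stored:

code:
#version 330

uniform sampler2D u_tile;       // the 256x256 tile, no mipmaps, no filtering
uniform ivec2     u_tileOrigin; // where this tile starts on screen, in pixels

out vec4 fragColor;

void main()
{
    // gl_FragCoord is already in window pixels; subtracting the tile origin
    // gives an integer texel coordinate, no normalization needed
    ivec2 texel = ivec2(gl_FragCoord.xy) - u_tileOrigin;
    fragColor = texelFetch(u_tile, texel, 0);
}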

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I haven't worked in 2D at all, but isn't it an option to have a texture at the highest resolution you would need, and then use mipmap linear-linear filtering? Of course, if your camera is fixed in terms of zoom, the best solution would be a single texture, no mipmaps, no filtering, but if you need to, say, zoom out at any time, I think mipmaps might give the most acceptable dynamic solution.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
The background and camera are a fixed distance away at the moment. The textures are literal 256x256 map tiles, so I'll be just drawing as many of them as are necessary to cover the viewport. And so to zoom in/out, I can just load the appropriate set of tiles, like google maps does, basically.

So yeah, I think a single texture with no mipmaps or filtering would be perfectly fine. Is this below how I would get that?

Suspicious Dish posted:

Draw screen-aligned quads -- set up your viewport and projection so that your quads are aligned to fragments.

You likely don't want filtering, so instead of using texture2D in your shader, use texelFetch instead, which also takes non-normalized texture coordinates, making your math easier.

It can be a pain to set up the scene and get it all working right.
I have no idea how to do any of this but it sounds like a plan, I'll give this a try.

Doc Block posted:

To be helpful, set up your orthographic projection so that coordinates match the viewport pixels 1:1.

What are you scaling, and why? Your 2D artwork still has to match the size you want it to appear on screen, or else you'll have to scale it up/down.
Thanks. Would it be enough to set the projection to the viewport width/height or is any other trickery required to get it to 1:1 mapping?

See above for the scaling thing.

Doc Block
Apr 15, 2003
Fun Shoe
You can set it to match the viewport, that's fairly common for 2D games. Then your map quads will have what are effectively screen-space coordinates.

But then you have to take into account different screen resolutions in your map drawing code...

edit: you may want to look at offsetting the ortho projection by a half pixel, so each texel of your map texture gets drawn in the center of each screen pixel. This used to be standard advice for GPU-accelerated 2D games, but IDK if anyone does it anymore.

Doc Block fucked around with this message at 15:56 on Apr 5, 2016

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
OpenGL wants to convert everything to be in the range -1 to 1. The big secret is that you don't even need any projection matrix to do that, as long as the output of the vertex shader is in that space.

Let's pretend you have a fixed viewport of 800x600. You draw quads in that space. Your vertex shader looks like:

code:

void main () {
    vec2 pos = a_position;

    // Remap x from (0, 800) and y from (0, 600) to (-1, 1)
    pos.x = pos.x / 400.0 - 1.0;
    pos.y = pos.y / 300.0 - 1.0;

    gl_Position = vec4(pos, 1.0, 1.0);
}

Now, your quads should line up 100% on the screen. No need for cameras or anything.

The orthographic projection matrix is just a convenient, one-liner way of doing exactly that math, but more confusing for you until you understand it.
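For what it's worth, the equivalent matrix for the 800x600 case above is just this (a sketch in GLSL, column-major, same scale/offset as the shader math, with depth left alone):

code:
// hypothetical helper: orthographic projection mapping (0,0)-(width,height)
// to (-1,-1)-(1,1), i.e. exactly pos / (size/2) - 1 from the shader above
mat4 ortho2D(float width, float height)
{
    return mat4(
        2.0 / width, 0.0,          0.0, 0.0,
        0.0,         2.0 / height, 0.0, 0.0,
        0.0,         0.0,          1.0, 0.0,
       -1.0,        -1.0,          0.0, 1.0);
}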


HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
Yep. It's not the prettiest but these are the two shaders I've been using to draw simple sprites:

code:
attribute vec2 a_position;
attribute vec2 a_texCoord;

uniform vec2 u_resolution;
uniform vec2 u_position, u_size;

varying vec2 v_texCoord;
uniform vec2 u_texResolution;
uniform vec2 u_texPosition, u_texSize;

void main() {
	//convert from 0,0->1,1 to pixel positions:
	vec2 pixel_pos = a_position * u_size + u_position; 

	//convert the rectangle from pixels to 0.0 to 1.0
	vec2 zeroToOne = pixel_pos / u_resolution;
	
	//convert from 0 -> 1 to -1 -> 1
	gl_Position = vec4((zeroToOne * 2.0 - 1.0), 0, 1);
	
	//convert from 0,0->1,1 to pixel positions:
	vec2 texPixelPos = a_texCoord * u_texSize + u_texPosition; 

	//convert the rectangle from pixels to 0.0 to 1.0
	v_texCoord = texPixelPos / u_texResolution;
}
code:
uniform sampler2D u_image;

varying vec2 v_texCoord;

void main() {
	gl_FragColor = texture2D(u_image, v_texCoord);
}
Then you render with a unit sized quad (0,0 to 1,1). u_resolution is the screen resolution, u_size and u_position are the sprite size and position. u_texResolution, u_texPosition, u_texSize is the same info but for the sprite source texture.
