|
Hubis posted:Sure, though since I'm still frantically editing slides I'll probably save going into great depth until I have some free time

How much is related to this cool paper: https://research.nvidia.com/sites/default/files/publications/ftizb_cameraReady.pdf
|
# ? Mar 9, 2016 19:58 |
|
Also thanks for replying with deets I ain't spending a grand and a week of time to go to GDC so its nice to not get gated out of poo poo.
|
# ? Mar 9, 2016 20:00 |
|
Malcolm XML posted:Webrender is cool pcwalton did a talk on it recently

WebRender is good work, but it's still a relatively easy problem -- boxes with radii (prerender a bunch of circles, use them from atlases), drop shadows (blur in compute shader), etc. Generic paths and curves, getting miter limits and path winding, implementing all of the esoterica of PostScript, that's the difficult thing.
|
# ? Mar 9, 2016 20:14 |
|
Sex Bumbo posted:How does this compare to oldschool fog polygon volumes? Like http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/Direct3D9/src/FogPolygonVolumes3/docs/FogPolygonVolumes3.pdf

It's basically a logical extension of the same general idea, but a) Generates the volume dynamically to allow for lighting/shadowing b) Solves the actual lighting integral (rather than just finding the net thickness) so it's physically correct and allows for more flexible media behavior (phase functions, etc) c) Takes advantage of modern GPU features for some other stuff

Malcolm XML posted:How much is related to this cool paper: https://research.nvidia.com/sites/default/files/publications/ftizb_cameraReady.pdf

Not at all directly, and in fact they are somewhat incompatible (since you care about shadowing for points that won't actually appear in the frame-buffer directly).

Sex Bumbo posted:Also thanks for replying with deets I ain't spending a grand and a week of time to go to GDC so its nice to not get gated out of poo poo.

We should be posting the presentations online a few weeks post-GDC, so you can check it out then. I'll update with a link once it's up.
|
# ? Mar 9, 2016 20:14 |
|
Peter Shirley recently released a couple of ebooks on writing a ray tracer in C++: Ray Tracing in One Weekend and Ray Tracing: The Next Week
|
# ? Mar 14, 2016 05:33 |
|
elite_garbage_man posted:Peter Shirley recently released a couple of ebooks on writing a ray tracer in C++

Neato! Bought em.
|
# ? Mar 14, 2016 17:33 |
|
Cool these should be a nice brush-up until PBRT 3rd Ed comes out hopefully some time this year.
|
# ? Mar 14, 2016 22:15 |
|
Suspicious Dish posted:WebRender is good work, but it's still a relatively easy problem -- boxes with radii (prerender a bunch of circles, use them from atlases), drop shadows (blur in compute shader), etc.

I'm genuinely curious to know how many programs actually use all of PostScript's weird features. Like, can DPS serve files over a socket using http://www.pugo.org/main/project_pshttpd/
|
# ? Mar 14, 2016 23:54 |
|
Minsky posted:Cool these should be a nice brush-up until PBRT 3rd Ed comes out hopefully some time this year.

Proofing is apparently done and it's gone to the printers. A month ago or so Matt Pharr said (well, tweeted) that book-in-hand was expected around June. So: outlook positive. Maybe adding Wenzel Jakob to the author list will mean I get to look at the terrible secrets of Mitsuba that are otherwise forbidden to me...

Unrelatedly, the siggraph hotel reservations opened today. I guess one advantage of hosting it in Anaheim is that all the hotels are near the convention center. On the minus side, they're all in Anaheim.
|
# ? Mar 15, 2016 00:32 |
|
Xerophyte posted:Proofing is apparently done and it's gone to the printers. A month ago or so Matt Pharr said (well, tweeted) that book-in-hand was expected around June. So: outlook positive. Maybe adding Wenzel Jakob to the author list will mean I get to look at the terrible secrets of Mitsuba that are otherwise forbidden to me...

I really look forward to it. It's by far my favorite graphics book I've read. It covers such a great breadth of rendering topics and doesn't shy from showing the full implementation for each of them.
|
# ? Mar 16, 2016 20:32 |
|
Sex Bumbo posted:Also thanks for replying with deets I ain't spending a grand and a week of time to go to GDC so its nice to not get gated out of poo poo.

yeah gently caress gdc for this, as someone who just dabbles here and there is v frustrating
|
# ? Mar 16, 2016 20:47 |
|
Speaking of not gating things at GDC, here is the Khronos Vulkan presentation livestream: http://livestream.com/khronosgroup/vulkan
|
# ? Mar 16, 2016 22:47 |
|
baldurk posted:This was already in my schedule planner.

Welp, congrats on selling out the whole room - there were probably 20-30 people who couldn't get in! Let me know if you are able to post the slides online.
|
# ? Mar 17, 2016 21:16 |
|
AFAIK the video&recording will be up gdcvault-style somewhere but for free (not gated). I liked your talk too . I didn't think to ask this at the time but do you ever find issues with lots of thin overdrawing polys if the shadow map is very noisy?
|
# ? Mar 19, 2016 15:15 |
|
baldurk posted:AFAIK the video&recording will be up gdcvault-style somewhere but for free (not gated).

I actually address this at the very end (the last "issues" slide). We had a bad performance case where you look perpendicular to the sun at sunset through a forest of bare trees (so lots of high frequency occlusion). The key observation is that narrow intervals of lighting contribute very little inscattering further away because of the exponential falloff of the transmission.

One thing you can do is have a dynamic tessellation scale -- have it higher when you are looking towards the light source (where you want very tight silhouettes) but have it fall off faster when you're looking perpendicular. In other words, scale it based on "T = N + M*abs(dot(V, L))".

The other thing you can do, which we're looking into, is pre-processing the shadow map with something like a median filter based on how far the shadow map texel is from the eye. This would reduce the distant complexity, but not in a way that affects the image.
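As a quick illustration of that scaling formula (N, M, and the vectors below are made-up example values, not from the talk):

```javascript
// Dynamic tessellation scale as described: T = N + M * abs(dot(V, L)).
// N is a base tessellation level; M is the extra budget spent when the
// view direction V lines up with the light direction L (both unit vectors).
function dot3(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function tessellationScale(N, M, V, L) {
  return N + M * Math.abs(dot3(V, L));
}

// Looking straight towards the light: full tessellation budget.
const towardLight = tessellationScale(8, 24, [0, 0, -1], [0, 0, -1]); // 32
// Looking perpendicular to the light: only the base level.
const perpendicular = tessellationScale(8, 24, [1, 0, 0], [0, 0, -1]); // 8
```

The abs() means looking directly away from the light gets the same tight tessellation as looking towards it, which matches the silhouette argument above.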
|
# ? Mar 20, 2016 00:23 |
|
Ok stupid question time because I'm obviously not able to figure this out on my own: How do I convert between the coordinates on the screen/viewport and the X/Y at a particular z depth in the scene? I found some information that mostly focuses on identifying objects in the scene, so the solution involves intersecting rays and crap that I obviously don't need - my goal is to place an object at, say, z=-5 (away from the camera) so that it appears at a particular x,y in the viewport.
|
# ? Mar 26, 2016 02:57 |
|
There is an entire plane that is at z=-5, so the smart-rear end answer is "any point will do". Can you give a more concrete example of what you're looking for? Or did I get that backwards?
|
# ? Mar 26, 2016 03:32 |
|
Sure, it's kind of a mess really because there are so many coordinate systems for me to keep track of. The setup is pretty simple, the camera is looking straight down and the view isn't moved, for now. I want to have my background at a known depth (say, -5), and everything else goes above it. There are actually several things for which I would need this:
|
# ? Mar 26, 2016 12:32 |
|
mobby_6kl posted:
Let's start with this. So, the typical way of doing this is raycasting. For this, you need two things: a ray to throw outwards, and a plane to intersect with.

First, let's create the ray from the camera. Let's say you have coordinates in NDC space -- ranging from (-1,-1) to (1,1), representing where the mouse is on the screen. Create a ray going outwards: if the mouse is at (0.2, -0.4), then in a right-handed space that's (0.2, -0.4, -1, 1). Now, to convert it to eye space, we multiply by the inverse of the projection matrix, since that's what got us from eye space to clip space in the first place. And then do the same thing for the inverse model-view of the camera.

Now you have a ray going outwards from the camera. You can do whatever you want with it, but let's intersect it with your plane at z=-5 to get coordinates on that plane. I'm sure you can Google ray-plane intersection. My code for this looks like this: JavaScript code:
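A minimal JavaScript sketch of that ray-plane intersection (illustrative names, not the poster's original snippet):

```javascript
// Intersect a ray (origin o, direction d) with the plane z = planeZ.
// Solve o.z + t * d.z = planeZ for t, then evaluate the ray at t.
function intersectZPlane(o, d, planeZ) {
  if (Math.abs(d[2]) < 1e-8) return null; // ray is parallel to the plane
  const t = (planeZ - o[2]) / d[2];
  if (t < 0) return null; // plane is behind the ray origin
  return [o[0] + t * d[0], o[1] + t * d[1], planeZ];
}

// Camera at the origin looking down -Z; a ray through screen position (0.2, -0.4).
const hit = intersectZPlane([0, 0, 0], [0.2, -0.4, -1], -5);
// hit is the point on the z = -5 plane under that screen position.
```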
|
# ? Mar 26, 2016 17:45 |
|
mobby_6kl posted:Ok stupid question time because I'm obviously not able to figure this out on my own: How do I convert between the coordinates on the screen/viewport and the X/Y at a particular z depth in the scene? I found some information that mostly focuses on identifying objects in the scene, so the solution involves intersecting rays and crap that I obviously don't need - my goal is to place an object at, say, z=-5 (away from the camera) so that it appears at a particular x,y in the viewport.

The graphics card is doing a specific sequence of well-defined linear transformations on the vertex data you upload to decide which pixels to color. You basically just need to run those backwards, which is generally easy because you can just invert the matrices. Depending on how you're doing your rendering, you might either already be intensely familiar with those transformations due to having written them yourself in shaders, or you might need to dig the model/view/projection matrices out of a monolithic engine somewhere.
|
# ? Mar 26, 2016 19:07 |
|
Ralith posted:The graphics card is doing a specific sequence of well-defined linear transformations on the vertex data you upload to decide which pixels to color. You basically just need to run those backwards, which is generally easy because you can just invert the matrices. Depending on how you're doing your rendering, you might either already be intensely familiar with those transformations due to having written them yourself in shaders, or you might need to dig the model/view/projection matrices out of a monolithic engine somewhere.

Suspicious Dish posted:Let's start with this. So, the typical way of doing this is raycasting. For this, you need two things: a ray to throw outwards, and a plane to intersect with.

Thanks, got it working this way pretty much. It took a while to figure out a frustrating issue where my MV matrix was in an incorrect state depending on when this was called, so either my X or Y mapping was randomly off.
|
# ? Mar 27, 2016 17:10 |
|
Hubis posted:We should be posting the presentations online a few weeks post-GDC, so you can check it out then. I'll update with a link once it's up.

Talk/Library is up now, in case anyone is interested: https://developer.nvidia.com/VolumetricLighting

I need to get them to add the slides PDF with notes as well.
|
# ? Mar 28, 2016 12:50 |
|
Hubis posted:Talk/Library is up now, in case anyone is interested: https://developer.nvidia.com/VolumetricLighting

I think I get this, so you're generating a funny extruded mesh from a shadow map right? Are you just drawing a grid and sampling it like a heightmap, or something more complex? What do you do for a point light mesh?

Can you help explain more of the point light integration? During rendering you'll end up with multiple distance ranges for which inscattering occurs, right? So for a single range, you'd have two points in the lookup texture -- both on the same row of the texture in the slide. Are you summing up the values between?

Sex Bumbo fucked around with this message at 19:47 on Mar 28, 2016 |
# ? Mar 28, 2016 19:26 |
|
Sex Bumbo posted:I think I get this, so you're generating a funny extruded mesh from a shadow map right? Are you just drawing a grid and sampling it like a heightmap, or something more complex? What do you do for a point light mesh?

Right - actually, I render a cube that corresponds to the entire clip space of the shadow map: [(-1,-1,-1), (1,1,1)] in clip coordinates. When you multiply this by the shadow map inverse view-proj, you get a volume corresponding to the shadow map frustum, with Z=1 as the far plane. That far plane is tessellated and offset exactly like you describe.

For a point light, right now I am just rendering a cube around the light source with all six faces tessellated, and then looking up the depth from the dual paraboloid map based on the vector from the light to the vertex. It would probably be more efficient to use something like a geodesic sphere instead, but this let me reuse the existing tessellation code (since it uses square control points).

Sex Bumbo posted:Can you help explain more of the point light integration? During rendering you'll end up with multiple distance ranges for which inscattering occurs, right? So for a single range, you'd have two points in the lookup texture -- both on the same row of the texture in the slide. Are you summing up the values between?

Right. The lookup texture gets the differential value at each texel, then I run a compute shader that marches across the rows (32 texels at a time), does a parallel prefix sum, and then adds the running row total offset. If you multiply this by the step size, you are just doing numerical integration across the row (so I only need to sample one texel to know the integral at a point, or two texels to know the integral over a specific interval).
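A plain-JavaScript sketch of that prefix-sum integration for a single row (a serial stand-in for the compute shader; the sample data is made up):

```javascript
// Numerical integration via prefix sum, as described: each texel holds a
// differential (per-step) inscattering value. An inclusive prefix sum times
// the step size gives the integral from the row start to any texel, and the
// integral over an interval is just the difference of two lookups.
function prefixSum(row) {
  const out = new Array(row.length);
  let running = 0;
  for (let i = 0; i < row.length; i++) {
    running += row[i];
    out[i] = running;
  }
  return out;
}

const step = 0.5;                  // distance covered by one texel
const differential = [1, 2, 3, 4]; // per-texel differential values
const summed = prefixSum(differential);

// Integral from the start of the row up to texel 3: one texel sample.
const integralTo3 = summed[3] * step; // (1+2+3+4) * 0.5 = 5
// Integral over the interval (texel 1, texel 3]: two samples, one subtraction.
const integralOver = (summed[3] - summed[1]) * step; // (3+4) * 0.5 = 3.5
```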
|
# ? Mar 28, 2016 20:34 |
|
Why can't the prefix sum be pre-computed? What changes about it frame to frame?
|
# ? Mar 28, 2016 21:18 |
|
Sex Bumbo posted:Why can't the prefix sum be pre-computed? What changes about it frame to frame?

The distance between the eye and light can change, which will affect the rate of change as you move along "d". I also tightly map the range of "d" based on the current eye/light positions to maximize useful texels in the LUT. Finally, if the media properties change (shifting weather, etc) you'd need to recompute as well.

All that said, you definitely COULD just compute it once per light if you wanted (also assuming the media properties never changed). You might have to up the resolution of the LUT to account for the wider range, but I don't think that would be too bad. The truth is that I found calculating the LUT was incredibly cheap, so I haven't spent much time optimising it and tended towards simplicity and flexibility of API. There is definitely a lot of low-hanging fruit in the current implementation.
|
# ? Mar 28, 2016 23:28 |
|
I'm certainly not in the same league in the slightest, but I also recently gave a talk!!! It's toy 3D graphics 101 to a bunch of non-graphics people at work. Live-coded a very basic software 3D engine and then ported it to WebGL. Sorry about the audio/video quality; I was broadcasting over slow Wi-Fi to another guy recording it, also over Wi-Fi.

https://www.youtube.com/watch?v=v8jUNC5WdYE

It's probably boring af to a bunch of 10-year industry professionals, but I really enjoyed giving it so...
|
# ? Mar 29, 2016 06:55 |
|
Am I missing something, or should these shaders be functionally identical? code:
code:
|
# ? Mar 31, 2016 16:42 |
|
I certainly wouldn't count out the possibility that it's a driver bug. It seems like the shaders should be fine to me, although it's always possible something else in the surrounding code not posted is wrong. The only thing I'd suggest experimenting with is to see if an array of already-combined image samplers works - i.e. sampler2D samptex[1];

Otherwise try the typical thing if possible of trying on another vendor's implementation, or report it as a bug with a simple repro case.
|
# ? Mar 31, 2016 16:52 |
|
baldurk posted:Otherwise try the typical thing if possible of trying on another vendor's implementation, or report it as a bug with a simple repro case.

Yep, doesn't work on my desktop with an Nvidia card but works on my HTPC at home which has AMD. I'll slim down the repro case as much as I can and post it on Nvidia's dev board.
|
# ? Mar 31, 2016 19:09 |
|
Suspicious Dish posted:I'm certainly not in the same league in the slightest, but I also recently gave a talk!!! It's toy 3D graphics 101 to a bunch of non-graphics people at work. Live-coded a very basic software 3D engine and then ported it to WebGL.

I can't say I learned anything, but it was fun to watch.
|
# ? Mar 31, 2016 23:32 |
|
So the stuff from earlier works fine now but I have another question. Right now I'm drawing my background tiles by texturing a quad just like everything else. This works ok, but is kind of a pain in the rear end especially with the scaling, as any amount of it makes the images look like poo poo. What would be better is to just always draw them as background flat and 1:1 so there's no scaling going on, and one pixel of texture is one pixel on the screen. Since this probably isn't something that frequently needs to be done in OpenGL, I haven't been able to find much so far though.
|
# ? Apr 4, 2016 21:36 |
|
So... just don't do any scaling? Lots and lots of 2D games use OpenGL so they can have hardware accelerated scaling, rotation, and alpha blending, plus shaders etc
|
# ? Apr 5, 2016 06:24 |
|
To be helpful, set up your orthographic projection so that coordinates match the viewport pixels 1:1. What are you scaling, and why? Your 2D artwork still has to match the size you want it to appear on screen, or else you'll have to scale it up/down. Doc Block fucked around with this message at 06:32 on Apr 5, 2016 |
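A sketch of what that 1:1 ortho setup looks like as a matrix (an assumed glOrtho(0, width, height, 0, -1, 1)-style construction, not code from the post):

```javascript
// Orthographic projection matching the viewport 1:1: pixel (0,0) maps to
// NDC (-1, 1) (top-left) and (width, height) maps to (1, -1).
// Column-major 4x4, as OpenGL expects.
function ortho2D(width, height) {
  return [
    2 / width, 0, 0, 0,
    0, -2 / height, 0, 0,
    0, 0, -1, 0,
    -1, 1, 0, 1,
  ];
}

// Apply the matrix to a pixel position (z = 0, w = 1) for x and y only.
function project(m, x, y) {
  return [m[0] * x + m[12], m[5] * y + m[13]];
}

const m = ortho2D(800, 600);
const topLeft = project(m, 0, 0);         // [-1, 1]
const bottomRight = project(m, 800, 600); // approx. [1, -1]
```

With this matrix, quad vertices given in pixel coordinates land exactly on the corresponding screen pixels, which is the "effectively screen-space coordinates" situation described below.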
# ? Apr 5, 2016 06:26 |
|
Draw screen-aligned quads -- set up your viewport and projection so that your quads are aligned to fragments. You likely don't want filtering, so instead of using texture2D in your shader, use texelFetch instead, which also takes non-normalized texture coordinates, making your math easier. It can be a pain to set up the scene and get it all working right.
|
# ? Apr 5, 2016 07:19 |
I haven't worked in 2D at all, but isn't it an option to have a texture at the highest resolution you would need, and then use mipmap linear-linear filtering? Of course, if your camera is fixed in terms of zoom, the best solution would be a single texture, no mipmaps, no filtering, but if you need to, say, zoom out at any time, I think mipmaps might give the most acceptable dynamic solution.
|
|
# ? Apr 5, 2016 10:06 |
|
The background and camera are a fixed distance away at the moment. The textures are literal 256x256 map tiles, so I'll be just drawing as many of them as are necessary to cover the viewport. And so to zoom in/out, I can just load the appropriate set of tiles, like google maps does, basically. So yeah, I think a single texture with no mipmaps or filtering would be perfectly fine. Is this below how I would get that?

Suspicious Dish posted:Draw screen-aligned quads -- set up your viewport and projection so that your quads are aligned to fragments.

Doc Block posted:To be helpful, set up your orthographic projection so that coordinates match the viewport pixels 1:1.

See above for the scaling thing.
|
# ? Apr 5, 2016 15:02 |
|
You can set it to match the viewport, that's fairly common for 2D games. Then your map quads will have what are effectively screen-space coordinates. But then you have to take into account different screen resolutions in your map drawing code...

edit: you may want to look at offsetting the ortho projection by a half pixel, so each texel of your map texture gets drawn in the center of each screen pixel. This used to be standard advice for GPU-accelerated 2D games, but IDK if anyone does it anymore. Doc Block fucked around with this message at 15:56 on Apr 5, 2016 |
# ? Apr 5, 2016 15:50 |
|
OpenGL wants to convert everything to be in the range -1 to 1. The big secret is that you don't even need any projection matrix to do that, as long as the output of the vertex shader is in that space. Let's pretend you have a fixed viewport of 800x600. You draw quads in that space. Your vertex shader looks like: code:
Now, your quads should line up 100% on the screen. No need for cameras or anything. The orthographic projection matrix is just a convenient one-liner way of doing exactly that math, but more confusing for you until you understand it.
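That per-axis math, sketched in JavaScript (the shader would do the same thing with the viewport size as a uniform; names are illustrative):

```javascript
// Convert a position in pixel space (origin top-left, e.g. an 800x600
// viewport) to OpenGL clip space, where both axes run from -1 to 1
// and +y points up.
function pixelToClip(x, y, width, height) {
  return [
    (x / width) * 2 - 1,  // 0..width  -> -1..1
    1 - (y / height) * 2, // 0..height ->  1..-1 (flip so +y is up)
  ];
}

const center = pixelToClip(400, 300, 800, 600); // [0, 0]
const corner = pixelToClip(800, 600, 800, 600); // [1, -1]
```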
|
# ? Apr 5, 2016 17:28 |
|
Yep. It's not the prettiest, but these are the two shaders I've been using to draw simple sprites: code:
code:
|
# ? Apr 5, 2016 17:37 |