|
How can I set a vertex to an arbitrary location in a vertex shader? I want something like: code:
|
# ? Dec 27, 2012 14:31 |
|
unixbeard posted:How can I set a vertex to an arbitrary location in a vertex shader? So you don't see a single point being drawn? What kind of primitives are you trying to render?
|
# ? Dec 27, 2012 19:49 |
|
I see all the points, but they should be random and instead they end up in a circle shape. It should look like this, but ends up looking like this. It's just vertex data drawn as points. The locations are stored in an FBO/texture; the shader gets a bunch of vertices whose values are offsets into the texture, which it uses to look up the x/y location, then sets gl_Position to what it gets. It's rendered using an FBO. It feels like I'm missing something simple but I don't know what. I'm using openFrameworks but the code is up here: http://pastebin.com/wcUr17xZ.
|
# ? Dec 27, 2012 23:26 |
|
How are you generating the points? If your source of randomness or however you convert that source to x, y, and z isn't uniform you'll see patterns in the result even if the shader is correct.
haveblue fucked around with this message at 23:55 on Dec 27, 2012 |
# ? Dec 27, 2012 23:50 |
|
gooby on rails posted:How are you generating the points? If your source of randomness or however you convert that source to x, y, and z isn't uniform you'll see patterns in the result even if the shader is correct.
|
# ? Dec 28, 2012 01:39 |
|
gooby on rails posted:How are you generating the points? If your source of randomness or however you convert that source to x, y, and z isn't uniform you'll see patterns in the result even if the shader is correct. I thought of this but the points are random. I both keep a copy of the points and draw them with this: code:
roomforthetuna posted:If you convert it to x, y and z, and they are reasonably evenly distributed, and the coordinates are being passed through a perspective transform, then you'll see results like those pictured. You only really want to randomize x and y (relative to the camera) if you want something that looks like noise. It seems to me I should be able to emulate any of the fixed-pipeline processing in the shader. I've done other stuff with shaders and a bunch of points stored in a texture and not had this issue. I've tried other transforms but no luck. What exactly I'm missing I really don't know.
|
# ? Dec 28, 2012 07:12 |
|
I tried using a regular grid vs. random locations, and it more or less works, wtf. The only issue is that the first row/col seems a bit squished, but the code seems to be OK. There really is something funky going on here.
|
# ? Dec 28, 2012 10:39 |
|
Could be CLAMP_TO_EDGE on the texture that's causing the squishiness.
|
# ? Dec 28, 2012 11:08 |
|
Sorry, I should have been clearer: it "works" for regularly spaced points, but still turns everything into a circle for the random points. So the vertex translation shader appears to be OK, and something else is causing the random data to end up in a circle.
|
# ? Dec 28, 2012 12:09 |
|
Long shot, but mip-mapping and filtering are disabled, right?
|
# ? Dec 28, 2012 12:23 |
|
ynohtna posted:Long shot, but mip-mapping and filtering are disabled, right? Oop, that's it, it was filtering. I owe you a beer if you ever happen to be in Sydney. I set the mag filter to GL_NEAREST and now it does what I expected; the default was GL_LINEAR. I was also wondering why some of the particles were darker than others when I set them all to 1.0 and there was no alpha data, maybe that should've been a hint? Why did you think it might have been that? I'm still pretty new to OpenGL. Thanks, I've spent a lot of time on this. I knew it was something dumb.
|
# ? Dec 28, 2012 12:57 |
|
Could someone explain why the filtering was making it into a sphere? Seems weird. Also yo fellow Sydney-person.
|
# ? Dec 28, 2012 13:05 |
|
What's with all the Sydney nerds programming at 11PM on a Friday night, during the silly season? Truly a city of lame dudes (put my name down in that list too).
|
# ? Dec 28, 2012 13:10 |
|
Cool - glad to have helped! As to why the filtering pulled the points towards the centre: it's 'cos the filtering averages each lookup over its 4 neighbouring texels, and the average of a group drawn from a uniformly distributed n-dimensional set will bias towards the domain's centre.
|
# ? Dec 28, 2012 13:30 |
|
Am I missing something obvious here with my input layout? Trying to get instancing working... Runtime is complaining that: D3D11: ERROR: ID3D11Device::CreateInputLayout: The provided input signature expects to read an element with SemanticName/Index: 'TEXCOORD'/0, but the declaration doesn't provide a matching name. [ STATE_CREATION ERROR #163: CREATEINPUTLAYOUT_MISSINGELEMENT ] The input layout: code:
code:
edit: Jesus loving christ, I saw it right after I posted it. I'm loving retarded. layout[2].SemanticIndex = 1; should be 0.

edit 2: Another question... I want to use a matrix for my instance position instead of just a vector, but I don't see a semantic for anything more than float4. Does it even matter? I just went with TEXCOORD since that was the first thing on Google. Can I actually have a matrix as the input, or do I really need to use 4 individual vectors? Like what the gently caress is going on here? Looks like I can do it: http://www.gamedev.net/topic/608857-dx11-how-to-transfer-matrix-to-vs/ but looking at this: http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx Why is WORLDVIEW absent from here?

slovach fucked around with this message at 14:03 on Dec 29, 2012 |
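On the matrix question: there's no special matrix semantic because in D3D11 (unlike D3D9) semantic names other than the SV_* system values are arbitrary strings, which is also why WORLDVIEW isn't on that MSDN list. The usual trick is to declare the matrix as four consecutive float4 elements sharing one semantic name with indices 0-3; HLSL packs them back into a float4x4. A sketch (the semantic name and slot numbers here are illustrative, not from the actual code):

```cpp
// One element per matrix row, all named "INSTANCE_TRANSFORM", semantic
// indices 0..3, read from vertex buffer slot 1, stepping once per instance.
D3D11_INPUT_ELEMENT_DESC instanceElems[] = {
    { "INSTANCE_TRANSFORM", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "INSTANCE_TRANSFORM", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "INSTANCE_TRANSFORM", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "INSTANCE_TRANSFORM", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};
// HLSL side: declaring "float4x4 world : INSTANCE_TRANSFORM;" in the
// vertex input struct consumes all four indices automatically.
```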
# ? Dec 29, 2012 04:33 |
|
I'm reading this GPU Gems article on shadow maps, and in the section on Percentage-Closer Filtering they say that shadow maps cannot be prefiltered. Why is this? http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html
|
# ? Jan 3, 2013 12:47 |
|
Boz0r posted:I'm reading this GPU Gems article on shadow maps, and in the section on Percentage-Closer Filtering they say that shadow maps cannot be prefiltered. Why is this? Because shadow maps do not store light intensity, they store depth. Depth values are either in front of the depth distance you're checking for shadowing or they aren't, so averaging a depth value that is shadowed with one that is not shadowed will not give you a depth value that is half shadowed, it just gives you a new distance that is either shadowed or not. (In case you're wondering why shadows aren't completely hard-edged from that, it's because modern GPUs will automatically filter shadow-map lookups at a given depth, but the filtering only goes as far as emulating linear filtering.) Variance shadow maps CAN be filtered and anti-aliased, but they suffer from artifacts in certain scenarios.
|
# ? Jan 3, 2013 17:02 |
|
I'm using a really expensive shader, but I only need to run it sometimes. To take care of the 'sometimes' I'm trying to use the stencil buffer: an earlier pass always writes 1, and then the expensive pass has 'not 1' as its stencil check and 'keep' as all its operations. Unfortunately, the expensive shader needs to output depth. Somehow, this results in it always being run, even when completely masked out by the stencil buffer. Heck, even if I set its stencil buffer check to 'never' it still runs. If I remove the depth output, everything works as expected. Note that the shader also uses 'discard' ops. Anyway, it seems that the fact that I output depth results in the stencil check taking place after the pixel shader runs, instead of before. Is this just some quirk of my specific GPU (GTX 580) or is there a good reason for this? I don't understand why the fact that the fragment's depth value might change is relevant, as that doesn't change the fact that it's masked out even by the basic stencil check, so the zfail op doesn't matter anymore. And anyway, it's set to 'keep'. Maybe the GPU just turns off early z and early stencil rejection together?
|
# ? Jan 4, 2013 13:02 |
|
Contains Acetone fucked around with this message at 17:46 on Jun 24, 2020 |
# ? Jan 4, 2013 17:59 |
|
OneEightHundred posted:Because shadow maps do not store light intensity, they store depth. Depth values are either in front of the depth distance you're checking for shadowing or they aren't, so averaging a depth value that is shadowed with one that is not shadowed for instance will not give you a depth value that is half shadowed, it just gives you a new distance that is either shadowed or not. Thanks, I get it now. Can someone explain what homogeneous division is? I've tried reading up on it, but I'm still not sure what it actually is.
|
# ? Jan 4, 2013 19:11 |
|
Boz0r posted:Can someone explain what homogeneous division is? I've tried reading up on it, but I'm still not sure what it actually is. It's the division of the clip-space x, y, and z coordinates by the w component. The reason this is done is because the division allows the hardware to determine how to handle various aspects that can't be done with just 2D coordinates, like perspective correction, handling screen-edge clipping for coordinates that cross the W=0 plane, etc. The depth is divided because doing that results in a non-linear depth value distribution that concentrates more depth precision close to the camera.
|
# ? Jan 4, 2013 20:23 |
|
So the homogeneous division is the projection from view space to screen space, more or less?
|
# ? Jan 4, 2013 20:55 |
|
I feel like I'm thiiiiiiis close to understanding OpenGL + VBO + VAO. But maybe not: I know you can apply matrix transformations to transform objects, but can you change the actual vertex positions in between frames? code:
EDIT: is it glBufferSubData? and if it is, how come I always figure things out 2 minutes after asking? lord funk fucked around with this message at 23:19 on Jan 12, 2013 |
# ? Jan 12, 2013 22:50 |
|
lord funk posted:I feel like I'm thiiiiiiis close to understanding OpenGL + VBO + VAO. But maybe not: If you want to replace the values entirely, then yes, BufferSubData -- but not with your vertex layout there. To replace the vertex coords you'd have to call BufferSubData 4 times; if you plan on changing them, you should put your vertex coords first and then your colors after (so the vertices are all in one block followed by the color data). The way you have it laid out has some benefits for performance, but they're nullified if you're updating the data. However, it depends on what you mean by "move" or "alter" -- you can also do that programmatically in your shaders, e.g. by passing in a matrix to multiply the vertices by as you mention, or by passing in a uniform to scale or add to the color/vertex values.
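A sketch of the blocked layout being suggested (the buffer name, vertex count, and component counts are illustrative; this assumes 3-float positions, 4-float colors, and an already-created VBO):

```cpp
// Blocked layout: [ all positions ][ all colors ].
// Updating every position each frame is then a single glBufferSubData call.
const int numVerts = 4;
GLsizeiptr posBytes   = numVerts * 3 * sizeof(GLfloat);
GLsizeiptr colorBytes = numVerts * 4 * sizeof(GLfloat);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, posBytes + colorBytes, NULL, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0,        posBytes,   positions); // position block
glBufferSubData(GL_ARRAY_BUFFER, posBytes, colorBytes, colors);    // color block

// Each frame, overwrite just the position block:
glBufferSubData(GL_ARRAY_BUFFER, 0, posBytes, positions);
```

GL_DYNAMIC_DRAW is the usage hint for data you expect to rewrite frequently.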
|
# ? Jan 13, 2013 01:22 |
|
Thanks - that makes sense. I'm thinking of making two vertexArrays, one with static objects that will be moved using matrix transformations, and another with dynamic objects whose vertices will change each frame. Does that sound like a good idea?
|
# ? Jan 13, 2013 18:21 |
|
Seems to be working great. Screenshot: The circles and pac-man wedges are fixed coordinates, but the lines connecting all the touch points change their vertex x/y position each frame.
|
# ? Jan 13, 2013 20:41 |
|
Apologies for the massive code dump here, but I'm trying to make sure I've provided enough information to help you all understand what I'm doing, which is trying to get a grip on this new OpenGL 4.0 thing with the fancy shaders. I'm pretty sure I'm making a rookie mistake of some kind here, but I've spent all day with Google going through tutorials, and most of them are obsolete and the rest seem to just replicate what I've done. Here is how I'm going about it: - init(); - loadTextureLibrary(); - loadShaderLibrary(); - loadBufferLibrary(); Then the engine calls drawFrame() unto infinity. (Problem is solved so I removed the massive code dump that was here; if you'd like to see it for some bizarre reason, PM me.) What it produces is: The tiny square in the top left is the actual texture, and you can see how it's rendering it. If I had to take a wild rear end guess, I'd say it's using the same uv in the fragment shader for every outbound pixel, but I don't know why, so if anyone has a theory about what is or isn't firing (or even if you see something I've done that's just terrible or deprecated) please let me know. Also, I have absolutely no idea what glUniform1i(uniforms[UNIFORM_TEXTURE], 0); does. Doesn't GL_TEXTURE0 activate the first texture register? Why do we need to pass '0' to the uniform as well? glActiveTexture(GL_TEXTURE0); glUniform1i(uniforms[UNIFORM_TEXTURE], 0); glBindSampler(0, samplers[SAMPLER_TILE]) Ed: if you're wondering what the problem was, I was using different names for the texture coord varying in my vertex and fragment shaders. Party Ape fucked around with this message at 09:02 on Feb 19, 2013 |
# ? Feb 17, 2013 07:24 |
|
Heliotic posted:Also, I have absolutely no idea what glUniform1i(uniforms[UNIFORM_TEXTURE], 0); does. Doesn't GL_TEXTURE0 activate the first texture register? Why do we need to pass '0' to the uniform as well?
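To the glUniform1i question quoted above (sketch below, with illustrative uniform names): a sampler uniform doesn't hold a texture, it holds a texture *unit* number. glActiveTexture selects which unit subsequent glBindTexture calls affect, and glUniform1i tells the shader's sampler which unit to sample from; the '0' only looks redundant because unit 0 is the default on both sides.

```cpp
// Bind a texture to unit 0 and point the sampler uniform at unit 0.
glActiveTexture(GL_TEXTURE0);               // select texture unit 0
glBindTexture(GL_TEXTURE_2D, tex);          // bind the texture TO that unit
glUniform1i(uniforms[UNIFORM_TEXTURE], 0);  // sampler reads FROM unit 0
                                            // (plain int 0, not GL_TEXTURE0)

// The distinction matters as soon as a second texture appears:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, otherTex);
glUniform1i(uniforms[UNIFORM_OTHER], 1);
```

A classic bug is passing the enum GL_TEXTURE0 (0x84C0) instead of the integer 0 to glUniform1i.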
|
# ? Feb 17, 2013 09:50 |
|
Looks like you've not got any glTexParameter calls, could be that.
|
# ? Feb 17, 2013 20:26 |
|
Thanks for the suggestion; I was under the impression that you could set the parameters against the sampler rather than the texture (but unfortunately it didn't fix the problem). The new OpenGL SuperBible is still a few weeks off, so I'll keep poking away at it until then.
|
# ? Feb 18, 2013 13:04 |
|
Heliotic posted:
I don't know why texture coordinates are set to be normalized but vertex coordinates are not. They both seem to use the same kind of floats in memory, so shouldn't it be the same for both? A question of my own: what does the context being lost in OpenGL mean, and how should it be handled? How should resources like buffers, shaders, programs and textures be managed in general? Max Facetime fucked around with this message at 17:20 on Feb 18, 2013 |
# ? Feb 18, 2013 17:17 |
|
Win8 Hetro Experie posted:What does context being lost in OpenGL mean and how should it be handled? How should resources like buffers, shaders, programs and textures be managed in general?
|
# ? Feb 18, 2013 18:24 |
|
Heliotic posted:holy poo poo Should texOut in the vertex shader be texCoord? That or rename texCoord in the fragment program to texOut? ShinAli fucked around with this message at 05:06 on Feb 19, 2013 |
# ? Feb 19, 2013 05:04 |
|
ShinAli posted:Should texOut in the vertex shader be texCoord? That or rename texCoord in the fragment program to texOut? ... I gotta admit I'm disappointed at how stupid that was. (Especially since I've worked with shaders before in OpenGLES2), I owe you a beer. Thanks!
|
# ? Feb 19, 2013 09:01 |
|
OneEightHundred posted:Windows will evict your D3D context if the application window is minimized or resized, destroying all of its resources and forcing you to reupload them and reset the device if you want to do anything. OpenGL specifically requires that data is not able to be spontaneously lost, so the Windows drivers will preserve them. It sounds like mobile devices may also be able to lose the context if the device goes to sleep, but I don't know much about that. I guess this means going back to square one and starting again from setting up the display mode and pixel format. Is there any harm or defensive programming advantage in calling delete buffers etc. with the old buffer names before allocating new ones?
|
# ? Feb 19, 2013 10:08 |
|
The Gripper posted:What's with all the Sydney nerds programming at 11PM on a Friday night, during the silly season? Truly a city of lame dudes (put my name down in that list too). Put me down as another ex-Sydneysider (now Canberran) who spent both Friday and Saturday night playing with OpenGL. Buncha weirdos if you ask me.
|
# ? Feb 23, 2013 17:06 |
|
Win8 Hetro Experie posted:I guess this means going back to square one and starting again from setting up the display mode and pixel format. Is there any harm or defensive programming advantage in calling delete buffers etc. with the old buffer names before allocating new ones? If you are using D3D, then a lost context will still return memory regions and whatnot when you attempt to map resources, but they'll be from a dummy allocation system that throws it out when you unmap. Probably the best thing to do is just program as usual, but have a mechanism for doing a drop/reload of all resources.
|
# ? Feb 23, 2013 18:31 |
|
Incidentally, this is why a lot of older PC games are so unstable when alt-tabbing: they weren't written defensively enough to catch the invalidation at literally any point in the program. I've never seen iOS destroy a context without being specifically told to, but I don't know how Android or Windows Phone handle it. I think ES has the same stipulation as full-fat GL.
|
# ? Feb 23, 2013 19:10 |
|
gooby on rails posted:Incidentally, this is why a lot of older PC games are so unstable when alt-tabbing- they weren't written defensively enough to catch the invalidation at literally any point in the program. If your app can support manually dropping all resources and reloading them while you're playing, then you should be able to survive a context loss without problem, since the D3D API should still be giving you responses that will at least avoid a crash until then. The best practice is probably to only reference D3D resources via a resource manager that can do a centralized drop/reload. Android dropping the context in certain situations is documented, intended behavior. OneEightHundred fucked around with this message at 19:47 on Feb 23, 2013 |
# ? Feb 23, 2013 19:26 |
|
drat, the terminology is really confusing, but I think I can make some sense of what happens in OpenGL ES 2.0 and similar. First, OpenGL doesn't lose its context; EGL loses the OpenGL context. The error code is EGL_CONTEXT_LOST and it's mentioned in eglSwapBuffers and a few other functions. Also what OneEightHundred said about Android. Then, in WebGL there is the error code gl.CONTEXT_LOST_WEBGL that gl.getError can return for anything. Khronos has a wiki for WebGL with instructions for handling it properly. Though it's a bit odd that some of the issues mentioned, like something hogging the GPU or the driver being updated, must surely apply in desktop OpenGL as well. In light of those I think I should be fine with pretending to do an orderly cleanup phase, then starting initialization from the beginning. If nothing else, at least this should be testable.
|
# ? Feb 24, 2013 00:11 |