|
That worked great, I can't believe I didn't notice it before. Can't say I know anything about your problem though.
|
# ? May 15, 2009 18:57 |
|
Can't help you with the selection buffer bit, but I caught this while going over your code:
Any particular reason you are doing that?
|
# ? May 15, 2009 18:58 |
|
It was just what a tutorial was doing, I changed it to 1.0f for the near plane but either way it ran fine.
|
# ? May 16, 2009 00:56 |
|
brian posted:It was just what a tutorial was doing, I changed it to 1.0f for the near plane but either way it ran fine. If it's the NeHe tutorials, you should be aware that they contain a lot of out-of-date or just plain bad advice!
|
# ? May 16, 2009 01:06 |
|
With GLSL, is there any way to change JUST the vertex or pixel shader to a different one that doesn't involve linking a new program object?
|
# ? May 18, 2009 01:31 |
|
I've just graduated from university with my CS degree, and I was always disappointed that I could never take my school's graphics class. I'm looking at the courses offered in my master's program this fall and more than a few require a working knowledge of OpenGL. So, I'm wondering what the consensus is for someone wanting to pick up modern (3.x?) OpenGL. I know NeHe is deprecated, and most people I've spoken to have referred me to the SuperBible, but the most recent version I can find still uses the OpenGL state machine model, so that's out. Further, the documentation for 3.0 and 3.1 is really sparse; all the resources on opengl.org seem to be for older versions, so I don't know where I should be looking. Thoughts?
|
# ? Jun 1, 2009 22:05 |
Dijkstracula posted:I've just graduated from university with my CS degree, and I was always disappointed that I could never take my school's graphics class. I'm looking at the courses offered in my master's program this fall and more than a few require a working knowledge of OpenGL. So, I'm wondering what the consensus is for someone wanting to pick up modern (3.x?) OpenGL. I know NeHe is deprecated, and most people I've spoken to have referred me to the SuperBible, but the most recent version I can find still uses the OpenGL state machine model, so that's out. Further, the documentation for 3.0 and 3.1 is really sparse; all the resources on opengl.org seem to be for older versions, so I don't know where I should be looking. I'm in almost exactly the same boat. I've got a deficient background in OpenGL 2.1, but I'm completely lost in 3.1. If anyone could point me (us) in the right direction with some tutorials, it would be very much appreciated.
|
|
# ? Jun 4, 2009 07:06 |
|
Serious question: do people actually use OpenGL 3? Maybe id Software does since they're on the committee or whatever...
Avenging Dentist fucked around with this message at 20:53 on Jun 5, 2009 |
# ? Jun 5, 2009 20:50 |
|
Nobody uses OpenGL 3 because there's no documentation for it, and there's no documentation for it because nobody uses it...
|
# ? Jun 6, 2009 01:26 |
|
Everyone switched to either DX10 or GL ES.
|
# ? Jun 6, 2009 02:19 |
|
Dijkstracula posted:Nobody uses OpenGL 3 because there's no documentation for it, and there's no documentation for it because nobody uses it... You're probably being sarcastic but my detector is broken. The GL3 docs are here: http://www.opengl.org/registry/ http://www.opengl.org/registry/doc/glspec31.20090528.pdf
|
# ? Jun 6, 2009 03:31 |
|
I imagine most of the changes in 3.0 have to do with GLSL rather than the core API, e.g., the implementation of geometry shaders. The only real change left to the core API is to remove the fixed pipeline functionality. That's never going to happen though, since the CAD developers would poo poo bricks.
|
# ? Jun 6, 2009 04:18 |
|
PnP Bios posted:I imagine most of the changes in 3.0 have to do with GLSL rather than the core API, e.g., the implementation of geometry shaders. Geometry shaders were already available as an extension before 3.0 came out. 3.0 was a horrible release for the most part though. Khronos had been promising blow-jobs and a new API all around, and they ended up with an incremental update instead of a major revision...
|
# ? Jun 6, 2009 05:50 |
|
Khronos showed that OpenGL is going to be a dinosaur playing catch-up forever on the consumer market because the CAD industry loves dinosaurs. Only time will tell if the deprecation model has any teeth. OpenGL 3.1 at least caught up with uniform buffers and instancing, but it's still missing features that should have been added ages ago. For example, why can't I get the compiled code for shaders and reload it later to avoid glacial load times and huge hitches during hot-loading? Why are all stages coupled into one program object, requiring ridiculous numbers of permutations, especially considering the previous issue? Where's the atomic object creation promised for 3.0? As for getting tutorials for it: Try to think of OpenGL 3.0 as OpenGL 2.2, except with "GL3/gl3.h" as your include file. What you really want to get into is using GLSL instead of the fixed-function poo poo, since they hardly changed anything else. OneEightHundred fucked around with this message at 17:15 on Jun 6, 2009 |
# ? Jun 6, 2009 16:49 |
|
OneEightHundred posted:As for getting tutorials for it: Try to think of OpenGL 3.0 as OpenGL 2.2, except with "GL3/gl3.h" as your include file. What you really want to get into is using GLSL instead of the fixed-function poo poo, since they hardly changed anything else. I'd actually like to track down a tutorial for the new "direct state access" dealie. I can see that making my OpenGL code a bit more readable. Also, is there a go-to tutorial for using stream-out buffers?
|
# ? Jun 6, 2009 17:22 |
OneEightHundred posted:As for getting tutorials for it: Try to think of OpenGL 3.0 as OpenGL 2.2, except with "GL3/gl3.h" as your include file. What you really want to get into is using GLSL instead of the fixed-function poo poo, since they hardly changed anything else. I'd love to just treat it like OpenGL 2.0++, but they removed some very foundational features, so I'm feeling lost. glLight is gone/deprecated. glMatrixMode, glTranslate, etc. are gone. I'm at a loss as to where to begin. EDIT: GLSL does transforms and rotations? This must be some obscure usage of the word shader that I was not previously aware of.
|
|
# ? Jun 7, 2009 01:53 |
|
Jo posted:EDIT: GLSL does transforms and rotations? This is must be some obscure usage of the word shader that I was not previously aware of. Why do you say that? The fixed function pipeline is gone in DirectX 10 as well.
|
# ? Jun 7, 2009 02:22 |
Null Pointer posted:Why do you say that? The fixed function pipeline is gone in DirectX 10 as well. I'm just surprised. I always shrugged GLSL off as nothing more than a way of applying fancy textures. That's what 'shader' meant to me. To see that it does lighting and geometric transforms is a very strange and eye-opening realization.
|
|
# ? Jun 7, 2009 02:52 |
|
Jo posted:I'm just surprised. I always shrugged GLSL off as nothing more than a way of applying fancy textures. That's what 'shader' meant to me. To see that it does lighting and geometric transforms is a very strange and eye-opening realization. The pipeline with GLSL looks like: Uniforms + samplers + per-vertex data --> vertex shader [--> geometry shader] --> depth test --> fragment shader --> alpha test (ugh) --> blend --> target. The vertex shader takes over all of the per-vertex operations (including transform and lighting), the fragment shader takes over all of the per-pixel operations.
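As a sketch of those two programmable stages (illustrative GLSL only; names like u_modelViewProjection and a_position are mine, not from the thread), the vertex shader below does the job the old glMatrixMode/glTranslate matrix stack used to do, with the application supplying the combined matrix as a uniform:

```glsl
// Vertex shader: per-vertex work, including the transform that the
// fixed-function matrix stack used to apply.
uniform mat4 u_modelViewProjection; // built by the app, replaces the matrix stack
in vec3 a_position;
in vec2 a_texCoord;
out vec2 v_texCoord;

void main()
{
    gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    v_texCoord = a_texCoord;
}
```

```glsl
// Fragment shader: per-pixel work, here just a texture fetch.
uniform sampler2D u_diffuse;
in vec2 v_texCoord;
out vec4 fragColor;

void main()
{
    fragColor = texture(u_diffuse, v_texCoord);
}
```

Lighting would go the same way: compute it per-vertex in the first stage or per-pixel in the second, instead of calling glLight.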
|
# ? Jun 7, 2009 21:40 |
|
Jo posted:I'm just surprised. I always shrugged GLSL off as nothing more than a way of applying fancy textures. That's what 'shader' meant to me. To see that it does lighting and geometric transforms is a very strange and eye-opening realization. 'vertex program' and 'fragment program' are generally more correct than 'shader', especially nowadays. Even in the 'fixed function' pipeline, very little is actually in fixed function hardware.
|
# ? Jun 8, 2009 11:16 |
|
PnP Bios posted:I imagine most of the changes in 3.0 have to do with GLSL rather than the core API, e.g., the implementation of geometry shaders. Well, it's very incremental, but 3.0 _should_ have been like ES2.0 and removed all that crap. Especially since half of OpenGL isn't ever used, should never be used, and your computer's implementation doesn't support it anyway. As for 3.0 usage, there really isn't a reason to use it yet since almost everything interesting can be done with an extension and no one wants to learn a new API until they have to. But it will never happen because, as you said, the legacy developers would go crazy. There are a lot (and I mean a lot) of really baaaaad OpenGL apps out there. Though I have to say: it is a royal pain in the rear end to get something up and running quickly in ES2.0.
|
# ? Jun 9, 2009 07:33 |
|
If you combine vertex array objects from GL 3.0 and uniform buffer objects (constant buffers) from GL 3.1, in my opinion the performance potential of the API is at least on par with DX10 and maybe even DX11, except for things like command lists (display lists in GL are too heavy-weight to be recompiled at runtime). Vertex array objects in my opinion are even better than what DX10 has to offer, since you can set up a draw call in one driver call (BindVertexArray) rather than the three calls it takes in D3D 10 (IASetIndexBuffer, IASetVertexBuffers, IASetInputLayout). Uniform buffer objects (GL constant buffers) let users group constants by frequency of update while still giving the driver the opportunity to optimize the internal contents of the buffer for optimal layout. If there was something I would like to see added to the API, it would be a light-weight object similar to vertex array objects that would let me program the values of all texture samplers in one driver call, rather than calling ActiveTexture and BindTexture per unit. Unfortunately, neither Nvidia's nor ATI's GL driver implementation is as performant as their respective D3D drivers.
|
# ? Jun 13, 2009 22:05 |
|
Vertex arrays are generally a bad idea because they prevent the driver from using the vertex cache at all. Using an index buffer means that it can cache the results of vertex shader runs so it doesn't have to run the vertex shader again for that vertex. Vertex buffer objects are the preferred way of doing things, and the equivalents of SetInputLayout and SetIndexBuffer are not skippable in those cases.
|
# ? Jun 14, 2009 07:02 |
|
Vertex array objects are not the same as client-side vertex arrays or non-indexed draw calls. Vertex array objects allow you to precompute the vertex attribute bindings (VertexPointer, ColorPointer, etc.) into a state object that you can efficiently bind with one call. http://www.opengl.org/registry/specs/ARB/vertex_array_object.txt hth
|
# ? Jun 14, 2009 20:16 |
|
samiamwork posted:You're probably being sarcastic but my detector is broken. The GL3 docs are here: Can I assume that these documents outline the "correct 3.x way" of doing things and don't talk about deprecated stuff? I had it in my head that the GL_{POINTS | LINESTRIP | TRIANGLEFAN | etc} modal stuff had gone away in 3.x (which would require a completely new way of defining primitives, I guess), but they talk about it in the specification documents anyway.
|
# ? Jun 15, 2009 16:42 |
|
Dijkstracula posted:Hm, thanks for these; the specification documents are a lot more readable than I thought they'd be. http://www.devklog.net/2008/08/23/forward-compatible-opengl-3-entry-points/ http://www.cincomsmalltalk.com/userblogs/mls/blogView?showComments=true&printTitle=Forward_compatible_OpenGL_3.0_defines&entry=3398275422 There are no extensions listed, but it's a pretty good start, especially if you're new to OpenGL.
|
# ? Jun 16, 2009 00:18 |
|
I'm rendering a tileset in OpenGL ES (basically, a huge set of adjacent quads, but where the texture coordinates are not adjacent). Since ES does not support GL_QUADS, I'm drawing these as an array of GL_TRIANGLES, i.e., 6*(# of quads) vertices and 4*(# of quads) texture coordinates. Is there a more efficient way? Can I use separate indices for the vertices and the coordinates, or something like that? EDIT: Also curious if anyone knows any fancy tricks to emulate palette tricks short of creating a bunch of separate textures in advance or the use of shaders. Small White Dragon fucked around with this message at 08:38 on Jun 16, 2009 |
# ? Jun 16, 2009 08:35 |
|
Well, using TRIANGLE_STRIP or TRIANGLE_FAN each quad becomes 4 vertices, for a start.
|
# ? Jun 16, 2009 11:35 |
|
HauntedRobot posted:Well, using TRIANGLE_STRIP or TRIANGLE_FAN each quad becomes 4 vertices, for a start. How would this work? I was under the impression that with a TRIANGLE_STRIP, each subsequent quad shared two vertices (and two texture coordinates) with the previous quad, whereas in this case the texture coordinates for a quad might not be adjacent to the previous quad.
|
# ? Jun 16, 2009 19:28 |
|
Small White Dragon posted:How would this work? I was under the impression that with a TRIANGLE_STRIP, each subsequent quad shared two vertices (and two texture coordinates) with the previous quad, whereas in this case the texture coordinates for a quad might not be adjacent to the previous quad. Triangles, not quads. A triangle strip goes in a zigzag, so you place 2 corners of the first triangle, then the third corner, then a fourth corner which forms a second triangle which shares the 2nd and 3rd vertices with the first and has the side between them in common. A triangle fan would have the same layout in memory as a normal quad, it would just be interpreted as, again, two triangles sharing a side and the 2 vertices that define it.
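The zigzag can be written down mechanically. Here's a small sketch (plain C, purely illustrative, the function name is mine) of which vertices triangle i of a strip is built from, including the winding flip on odd-numbered triangles:

```c
#include <assert.h>

/* Triangle i of a triangle strip is built from vertices i, i+1, i+2;
   odd-numbered triangles swap the first two so that every triangle in
   the strip keeps a consistent winding order. */
static void strip_triangle(int i, int out[3])
{
    if (i % 2 == 0) {
        out[0] = i;
        out[1] = i + 1;
    } else {
        out[0] = i + 1;
        out[1] = i;
    }
    out[2] = i + 2;
}
```

So a strip of n vertices yields n-2 triangles, which is why a single quad needs only 4 vertices as a strip or fan.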
|
# ? Jun 16, 2009 19:33 |
|
Is anyone else actually reading what this guy wants? I tend to use DirectX instead of OpenGL, but I'm sure this stuff is pretty similar. The issue is that, depending on what face you're working on, the texture coordinates for a given point are different. I think you'd have to render them as individual quads, but I think there are also some tricks in DirectX to make this easier (I'd post them, but I don't have the relevant code at my fingertips now, and I might be completely wrong anyway.)
|
# ? Jun 16, 2009 19:38 |
|
Here's a visual showing unique vertices and texture coordinates: This scenario involves only two adjacent quads when we might be dealing with several hundred, but while two adjacent quads may share vertices they do not share texture coordinates. Hence, I think a strip or fan wouldn't work -- but I hope I'm wrong. EDIT: Since these are square, I was also thinking about points, but as far as I can tell, the entire point is either rendered with a full texture (0,0 to 1,1) or all composed of the same texture coordinate.
|
# ? Jun 16, 2009 21:54 |
|
Texture coordinates are also shared in triangle strips, as are all other attributes of a vertex (a vertex describes the entire packet of information, not just position; OpenGL named it badly). The problem with tri-strips is that while drawing a single quad with a tri-strip is fine, you can't draw more than one isolated quad in a single draw call because the tri-strip will stitch those two together. You can hack it by inserting some dummy vertex indices in between two quads, but it's not worth it. Or you can use an extension called NV_primitive_restart to split the triangle strip, although that's unlikely to be available on OpenGL ES and probably not worth it from a performance point of view at the rate you would have to use it in the first place. The best you can do for drawing a set of quads is drawing them as triangle lists, 4 vertices per quad (including 4 texture coordinates) and 6 indices per quad. If all your quads form a continuous strip though (such as a terrain), consider triangle strips. Bonus: Even GL_QUADS will only interpolate values from 3 vertices at a time. That's why you can see some shading discontinuity along the diagonal of the quad depending on how the pixel shader uses the interpolated values. For true 4-vertex barycentric interpolation, you have to do it yourself with a geometry shader.
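To make the 4-vertices-plus-6-indices layout concrete, here's a sketch (plain C; the function name and layout are mine, not from the thread) that fills in the index buffer entries for one quad, assuming each quad contributes 4 unique vertices (with their own texture coordinates) packed consecutively:

```c
#include <assert.h>
#include <stddef.h>

/* Write the 6 indices for quad q into out[q*6 .. q*6+5], assuming the
   quad's 4 unique vertices start at index q*4. The two triangles are
   (0,1,2) and (2,1,3), so they share the 1-2 edge and keep the same
   winding order. */
static void quad_indices(unsigned short *out, size_t q)
{
    unsigned short base = (unsigned short)(q * 4);
    unsigned short *p = out + q * 6;
    p[0] = base + 0;
    p[1] = base + 1;
    p[2] = base + 2;
    p[3] = base + 2;
    p[4] = base + 1;
    p[5] = base + 3;
}
```

With this layout the whole tileset goes out in a single indexed triangle-list draw call over a static index buffer, no stitching between quads.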
|
# ? Jun 17, 2009 05:19 |
|
Actually, if all your quads are square, and all you need is texture coordinates, then consider point sprites. I'm not sure if they are available on OpenGL ES though.
|
# ? Jun 17, 2009 05:27 |
|
Minsky posted:Actually, if all your quads are square, and all you need is texture coordinates, then consider point sprites. I'm not sure if they are available on OpenGL ES though.. They are, and you can set the point size on screen, but can you set the size of the texture fragment that is used? As far as I can tell, the entire point always has the same texture coordinates, but maybe there's something I'm missing or a neat trick to get around that.
|
# ? Jun 17, 2009 06:00 |
|
When I mentioned Triangle Strips I was explicitly talking about drawing one rectangle at a time, obviously they won't work over multiple rectangles because of the texturing thing. I was addressing the fact that the lack of quads led him to complain about using 6 vertices per quad, where a tri strip uses 4.
|
# ? Jun 17, 2009 07:36 |
|
Small White Dragon posted:They are, and you can set the point size on screen, but can you set the size of the texture fragment that is used? As far as I can tell, the entire point always has the same texture coordinates, but maybe there's something I'm missing or a neat trick to get around that. I'm not exactly sure what you mean about the size of a texture fragment, but from what I understand the point sprite functionality just expands a point to a quad and automatically generates texture coordinates for each of the four corners from [0,0] in one corner of the quad to [1,1] in the diagonally opposite corner, and stuffs them in the gl_PointCoord fragment shader input (this is ES-specific and for regular OpenGL this coordinate is placed in a user-chosen gl_TexCoord attribute by calling glTexEnv). And then you can do whatever you want with them from then on, including using them for texture lookups. If you also want to have some other information that comes from outside the vertex shader and varies per-vertex across the quad, you're basically screwed and indexed triangle lists are the way to go. If this is for a particle engine or a sprite engine, don't do one draw call per quad. Try to stuff them into a buffer and render them as a batch. Minsky fucked around with this message at 23:57 on Jun 17, 2009 |
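One possible version of that trick (a GLSL sketch; u_tileOrigin and u_tileSize are hypothetical application-supplied uniforms, not GL built-ins): since gl_PointCoord always sweeps from [0,0] to [1,1] across the sprite, the fragment shader can remap it into a sub-rectangle of an atlas texture rather than sampling the whole texture.

```glsl
// Fragment shader: remap gl_PointCoord into one tile of a texture atlas.
uniform sampler2D u_atlas;
uniform vec2 u_tileOrigin; // bottom-left corner of the tile, in texture space
uniform vec2 u_tileSize;   // tile extent, in texture space
out vec4 fragColor;

void main()
{
    vec2 uv = u_tileOrigin + gl_PointCoord * u_tileSize;
    fragColor = texture(u_atlas, uv);
}
```

The limitation Minsky mentions still applies: the tile choice here is per-draw (a uniform), so varying it per sprite within one batch would need some other per-vertex channel or the indexed triangle-list approach.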
# ? Jun 17, 2009 23:53 |
|
Since we are talking about point sprites, is it possible to get to the point-sprite generated geometry inside a geometry shader?
|
# ? Jun 18, 2009 02:19 |
|
shodanjr_gr posted:Since we are talking about point sprites, is it possible to get to the point-sprite generated geometry inside a geometry shader? If you are using geometry shaders (and really, they kind of suck since they aren't very performant), why not extrude a single vertex into a quad yourself?
|
# ? Jun 18, 2009 07:44 |
|
|
Spite posted:If you are using geometry shaders (and really, they kind of suck since they aren't very performant), why not extrude a single vertex into a quad yourself? It could possibly be faster if done natively by the GPU.
|
# ? Jun 18, 2009 09:12 |