|
A tristrip saves just under 2 vertices per triangle over raw triangles (a strip of N triangles takes N + 2 vertices instead of 3N), so one or two degenerates in a row of several thousand visible triangles is practically no overhead at all.
|
# ¿ May 5, 2010 01:43 |
|
The only thing trifans are good for is drawing 2D circles, as far as I know.
|
# ¿ May 5, 2010 15:57 |
|
Bonus posted:I import a 1024 x 1024 heightmap to generate a terrain in OpenGL. Anyway, once I display it, my computer comes to a crawl. Is 1024 x 1024 just too much to display at once or are there some optimizations that I should be doing? For now I just go over the vertices and draw triangles. Just asking generally, but I can provide more info though. What method of submitting geometry are you using? A 1024 x 1024 heightmap is roughly two million triangles, so glBegin/glVertex/glEnd means millions of function calls per frame; it's much too slow for that workload and the geometry should be in a vertex array or VBO. If you're already doing that, then what 3D hardware do you have? That might strain a low-end integrated chip like an Intel GMA, but it shouldn't make any reasonably recent GPU break a sweat.
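If it helps, here's a minimal sketch of the VBO path (C, GL 1.5+ client-state style; `verts` and `vertCount` are placeholder names, not from the original post):

```c
/* One-time setup: upload the terrain once and keep it on the GPU.
   Assumes `verts` holds vertCount interleaved xyz floats. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertCount * 3 * sizeof(GLfloat),
             verts, GL_STATIC_DRAW);

/* Per frame: one draw call for the whole terrain instead of a
   glVertex call per vertex. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertCount);
```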
|
# ¿ May 6, 2010 00:24 |
|
It is necessary; branching and loops are extremely expensive in shaders. The built-in functions map directly to functionality of the silicon, so something like min() or clamp() doesn't generate a branch and is nowhere near as expensive as writing "if (x < 0) x = 0;" in your own code.
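A quick GLSL illustration of the difference (a sketch, not from the original post):

```glsl
// May compile to a conditional / predicated path:
if (x < 0.0) x = 0.0;

// Maps to a single hardware instruction, no branch:
x = max(x, 0.0);
x = clamp(x, 0.0, 1.0); // same idea, clamps both ends
```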
|
# ¿ May 10, 2010 20:53 |
|
Definitely the vertex buffer. It is the fastest for everything that is not frequently updated.
|
# ¿ May 12, 2010 04:15 |
|
UraniumAnchor posted:How do I get the current pixel's depth in a GL pixel shader? I know gl_FragCoord.z is the incoming fragment, and I write to gl_FragDepth if I want to modify the final depth, but how do I read what's already in there? You can't read the depth buffer you're currently rendering to; you have to bind a previously rendered depth buffer as a texture and write the shader output into a different target.
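A rough GLSL 1.x sketch of the two-pass version; `prevDepth` and `screenSize` are made-up names for illustration:

```glsl
// Second pass: the first pass's depth attachment is bound as a texture.
uniform sampler2D prevDepth;
uniform vec2 screenSize;

void main() {
    float stored   = texture2D(prevDepth, gl_FragCoord.xy / screenSize).r;
    float incoming = gl_FragCoord.z;
    // ...compare stored vs. incoming however you need...
    gl_FragColor = vec4(vec3(stored - incoming), 1.0);
}
```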
|
# ¿ May 14, 2010 04:15 |
|
Spite posted:It's much better to get into the habit of rendering to an FBO and then blitting that to the screen. The iPhone, for example, requires you to render into a renderbuffer and then give that to the windowing system to present. To be pedantic, I think that's a property of OpenGL ES, not the iPhone specifically. Also, nobody actually learns to do this, because one of the Xcode project templates already contains all the GL setup and frame submission code.
|
# ¿ May 15, 2010 23:59 |
|
PalmTreeFun posted:Also, as far as image loading/writing goes, do I need to check for system endian-ness to manipulate image data? If I make a game cross-platform, I don't want it to start making GBS threads itself, loading and modifying images in the wrong order because some goofy OS reads and writes data backwards. This isn't an issue; all the major image formats define a byte order, and their loaders/exporters take care of converting to the native order for you.
|
# ¿ Jul 5, 2010 19:06 |
|
OneEightHundred posted:Those have already been purged from the core API as of 3.1 And they never existed at all in OpenGL ES.
|
# ¿ Jul 16, 2010 18:39 |
|
For what it's worth, saving shader executables to disk is a trick that's often done on consoles.
|
# ¿ Jul 19, 2010 20:48 |
|
PDP-1 posted:I'm having a problem with textures scaling shittily. I start with a texture that looks like this Mipmapping, especially combined with trilinear filtering.
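For reference, the setup looks something like this in C (assuming GL 3.0+ or the framebuffer-object extension for glGenerateMipmap):

```c
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);
/* Trilinear: blend between mip levels as well as within them. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```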
|
# ¿ Oct 16, 2010 21:27 |
|
passionate dongs posted:Where should I look to fix this? normals? geometry? That would be the normals. I'm guessing from the picture on the right that each vertex's normal comes from exactly one polygon; you'll have to average the normals of every poly that touches the vertex instead.
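A C sketch of the averaging pass (illustrative names; assumes indexed triangles):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

void computeSmoothNormals(const Vec3 *pos, const unsigned *idx,
                          int triCount, int vertCount, Vec3 *normal)
{
    for (int v = 0; v < vertCount; v++)
        normal[v] = (Vec3){0.0f, 0.0f, 0.0f};

    for (int t = 0; t < triCount; t++) {
        unsigned i0 = idx[3*t], i1 = idx[3*t+1], i2 = idx[3*t+2];
        Vec3 a = pos[i0], b = pos[i1], c = pos[i2];
        /* Face normal = cross(b - a, c - a); its length is proportional
           to the triangle's area, which gives area weighting for free. */
        Vec3 e1 = { b.x-a.x, b.y-a.y, b.z-a.z };
        Vec3 e2 = { c.x-a.x, c.y-a.y, c.z-a.z };
        Vec3 n  = { e1.y*e2.z - e1.z*e2.y,
                    e1.z*e2.x - e1.x*e2.z,
                    e1.x*e2.y - e1.y*e2.x };
        /* Accumulate the face normal onto each corner vertex. */
        normal[i0].x += n.x; normal[i0].y += n.y; normal[i0].z += n.z;
        normal[i1].x += n.x; normal[i1].y += n.y; normal[i1].z += n.z;
        normal[i2].x += n.x; normal[i2].y += n.y; normal[i2].z += n.z;
    }

    /* Normalize the accumulated sums. */
    for (int v = 0; v < vertCount; v++) {
        float len = sqrtf(normal[v].x*normal[v].x + normal[v].y*normal[v].y
                        + normal[v].z*normal[v].z);
        if (len > 0.0f) {
            normal[v].x /= len; normal[v].y /= len; normal[v].z /= len;
        }
    }
}
```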
|
# ¿ Oct 27, 2010 23:44 |
|
Did you leave lighting or texturing turned on?
|
# ¿ Nov 15, 2010 19:56 |
|
OS X can do it because Apple put in a shim layer that uses LLVM to run shaders on the CPU in a pinch. I doubt the Intel integrated chips have true shader processors.
|
# ¿ Dec 22, 2010 06:46 |
|
Optimus Prime Ribs posted:Is there ever a justifiable reason to use display lists in OpenGL? Don't bother; they're a very early mechanism that's obsolete now (deprecated in 3.0 and removed from the core profile in 3.1).
|
# ¿ Mar 2, 2011 04:24 |
|
UraniumAnchor posted:So if you have a moderately complex shader that could do one of two things depending on a boolean flag, would it be better to just have two versions of the shader and switch between the two, rather than having a boolean uniform? I'm still not clear on how much of a stall you might get in the pipeline by switching shader stuff around. I read somewhere that the shader actually gets recompiled on some hardware when you modify a uniform. It's usually cheaper to do both calculations and throw one of them away rather than create genuinely mutually exclusive code paths: result = (formula1)*which + (formula2)*(1.0 - which), where which is set to 0.0 or 1.0.
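In GLSL that multiply-and-add select can also be written with the built-in mix(); a sketch with placeholder values standing in for the two formulas:

```glsl
uniform float which; // 0.0 or 1.0

void main() {
    vec4 a = vec4(1.0, 0.0, 0.0, 1.0); // stands in for formula1
    vec4 b = vec4(0.0, 0.0, 1.0, 1.0); // stands in for formula2
    // mix(b, a, which) == b * (1.0 - which) + a * which
    gl_FragColor = mix(b, a, which);
}
```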
|
# ¿ Apr 7, 2011 18:45 |
|
roomforthetuna posted:Can someone else please confirm or deny this, because to me it seems a ridiculous premise that a debug build will have a performance problem where a release build will have no performance problem (not ridiculous that it will perform slower, that's a given, but that it will specifically perform badly, below a reasonable expectation for a given operation.) If the compiler optimization settings differ between debug and release, then performance can indeed be radically different in certain areas. Running in debug mode may also enable extra logging, add bounds/sanity checks, and skip optimized data paths in the system libraries, especially inside 3D graphics drivers. "Badly" is subjective, but you could gain 20-30% performance just by switching from debug to release, depending on what you're doing.
|
# ¿ May 3, 2011 16:12 |
|
That is "tearing" and it happens because your screen updates are out of synch with the monitor displaying the new image. http://msdn.microsoft.com/en-us/library/bb174576(VS.85).aspx
|
# ¿ Jun 6, 2011 02:42 |
|
ShinAli posted:I'm not sure how well shaders handle branching. Very, very poorly. Avoid if at all possible. Depending on what you are doing it may be faster to evaluate both branches and multiply the one you don't want to use by zero before combining it with the final result.
|
# ¿ Jul 15, 2011 21:34 |
|
Two possibilities:
|
# ¿ Jul 22, 2011 04:59 |
|
That would be the easiest way to fake it, yes. Comparing the normal to the camera vector is just a dot product.
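For instance, a facing-ratio sketch in GLSL (variable names are illustrative):

```glsl
varying vec3 vNormal;   // interpolated surface normal (view space)
varying vec3 vViewDir;  // vector from the surface point toward the camera

void main() {
    // 1.0 when facing the camera head-on, 0.0 edge-on.
    float facing = max(dot(normalize(vNormal), normalize(vViewDir)), 0.0);
    gl_FragColor = vec4(vec3(facing), 1.0);
}
```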
|
# ¿ Aug 15, 2011 19:47 |
|
FlyingDodo posted:I'm not sure if this should be in general programming or here. I am trying to make an opengl renderer in c++ and I want to make as much use of OOP as possible. This runs into some problems. For example if I have any class that stores an opengl object id (any texture, vertex array object, buffer) in it and the destructor unloads it from opengl then any copy of the object will have an invalid id. What would be the best way of dealing with this? All I can think of is have the destructor do its thing, and have the copy constructor actually create another opengl object which I think would be slow and pointless. I can't be the first person to wrap opengl into objects, so there must be a better way. Why do you have multiple copies of your own class referring to the same GL entity in the first place? There's probably a better way to structure this.
|
# ¿ Sep 9, 2011 19:30 |
|
Yes, you're just lucky that it doesn't look as bad in your specific case. T-junctions are always a bad thing.
|
# ¿ Sep 16, 2011 18:15 |
|
OneEightHundred posted:Making card-specific behavior there means you're sinking development resources into a fraction of your audience. Or it means you're targeting a console, where doing this is much more practical and encouraged, and it's one of the big reasons a console stays viable much longer than a PC with equivalent specs on paper.
|
# ¿ Oct 27, 2011 06:10 |
|
It's almost always preferable to run the extra calculations in all cases and multiply the result by 0 when you don't want it to contribute to the fragment, rather than use a true conditional.
|
# ¿ Nov 23, 2011 19:26 |
|
Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers.
|
# ¿ Jan 11, 2012 18:18 |
|
Does GL 3+ still provide matrix stacks? I only know that ES 2.0 does not.
|
# ¿ Jan 22, 2012 17:35 |
|
Yeah, look at how many different approaches are out there just for, say, filtering shadow maps. Real-time graphics and offline graphics are converging and knowledge is moving between them faster than ever before. When I was in college around the turn of the century I was told that the rule of thumb was that real-time is perpetually where offline was ten years ago; that gap has narrowed a lot today.
|
# ¿ Feb 8, 2012 22:44 |
|
On older GL versions and hardware without NPOT support, the GL_TEXTURE_2D target requires dimensions that are a power of 2, so trying to use a 640x480 image won't work there.
|
# ¿ Aug 18, 2012 16:18 |
|
Don't you have to explicitly enable that with GL_TEXTURE_RECTANGLE, though? Also, he didn't turn on anything that would involve the alpha channel.
|
# ¿ Aug 18, 2012 17:17 |
|
How are you generating the points? If your source of randomness, or the way you convert it to x, y, and z, isn't uniform, you'll see patterns in the result even if the shader is correct.
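As an example of that kind of pitfall (assuming, hypothetically, the goal is uniformly distributed directions): normalizing a random point in a cube clusters samples toward the cube's corners, while rejection sampling stays uniform. A C sketch:

```c
#include <stdlib.h>
#include <math.h>

/* Uniform random float in [-1, 1]; rand() is fine for illustration. */
static float frand(void) { return 2.0f * rand() / (float)RAND_MAX - 1.0f; }

/* Pick points in the cube, keep only those inside the unit sphere,
   then normalize. The result is uniform over directions; normalizing
   raw cube points directly would bias toward the corners. */
void randomDirection(float *x, float *y, float *z)
{
    float px, py, pz, len2;
    do {
        px = frand(); py = frand(); pz = frand();
        len2 = px*px + py*py + pz*pz;
    } while (len2 > 1.0f || len2 < 1e-6f);
    float inv = 1.0f / sqrtf(len2);
    *x = px * inv; *y = py * inv; *z = pz * inv;
}
```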
# ¿ Dec 27, 2012 23:50 |
|
Incidentally, this is why a lot of older PC games are so unstable when alt-tabbing: they weren't written defensively enough to catch the invalidation at literally any point in the program. I've never seen iOS destroy a context without being specifically told to, but I don't know how Android or Windows Phone handle it. I think ES has the same stipulation as full-fat GL.
|
# ¿ Feb 23, 2013 19:10 |
|
What graphics API are you using? Either it has a built-in transformation matrix manager or it should be easy to scare one up online. If you find yourself actually composing rotation formulas by hand you are doing something very wrong (or at least inefficient).
|
# ¿ Jun 26, 2013 02:18 |
|
Malcolm XML posted:1) Can I have OpenGL do the heavy lifting to calculate the per vertex normals or do I have to manually do it? I already maintain a vertex -> triangle map when I calculate the indices, so this is tedious but doable That isn't something it can help you with, unfortunately. quote:2) Is there a way of communicating my face normal to the appropriate fragments in the fragment shader? From what I understand, OpenGL interpolates the normals from the vertices of the primitive when it gives that data to the fragment shader Yeah, per-vertex data is the way to go in the vast majority of cases. You'll have to declare matching normal variables in the vertex and fragment shaders, and OpenGL will interpolate the value across the primitive just as you describe.
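A minimal GLSL 1.x sketch of that setup (two separate shaders shown in one block; for true flat face normals you'd duplicate vertices per face instead):

```glsl
// ---- Vertex shader ----
varying vec3 vNormal;

void main() {
    vNormal = gl_NormalMatrix * gl_Normal;   // interpolated per fragment
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// ---- Fragment shader ----
varying vec3 vNormal;

void main() {
    vec3 n = normalize(vNormal);             // re-normalize after interpolation
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // visualize the normal
}
```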
|
# ¿ Aug 29, 2013 00:29 |
|
Grocer Goodwill posted:It's not per draw, it's over the lifetime of your app. The lifetime of the scene, really. It depends a lot on what your app is trying to do (e.g. a game transitioning to a new level can't avoid shuffling a lot of stuff around), but in general don't mix draw calls and changes to the working set.
|
# ¿ Dec 19, 2013 20:34 |
|
No, that's the right way to do it. If you reflect polygons across a plane you effectively reverse their winding, and it's easier to tell OpenGL to match that than to try to undo it by further modifying the geometry.
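Concretely, something like this sketch (assumes the default counter-clockwise front faces; drawReflectedScene() is a hypothetical helper):

```c
/* Mirrored pass: reflection flips the triangle winding, so tell GL
   that clockwise is now front-facing instead of re-ordering indices. */
glFrontFace(GL_CW);
drawReflectedScene();   /* hypothetical draw call for the mirrored pass */
glFrontFace(GL_CCW);    /* restore the default for the normal pass */
```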
|
# ¿ Jan 3, 2014 07:29 |
|
You really don't want to try to roll your own loader for a modern compressed image format.
|
# ¿ Jan 8, 2014 21:35 |
|
Pretty much. The minimum work any shader does is identical to the work the fixed-function pipeline would be doing; the power and flexibility come from adding operations on top of that basic process.
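For instance, the smallest useful vertex shader is just the fixed-function transform written out (GLSL 1.x built-ins):

```glsl
void main() {
    // Exactly what fixed-function T&L would do; everything a shader
    // adds is layered on top of this baseline transform.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```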
|
# ¿ Jan 16, 2014 07:22 |
|
I've never had good luck with polygon offset. Do you see the wireframe if you turn off the depth test entirely?
|
# ¿ Feb 5, 2014 17:02 |
|
The vector from B to A is <Ax - Bx, Ay - By, Az - Bz>. Yes, you'll need 3 angles per bone for 3D bones. You should probably assume that all of the angles are zero in the pose the model is first loaded in.
|
# ¿ Feb 24, 2014 06:55 |