|
Scaevolus posted:In OpenGL, what's the best way to render a million cubes (in a 100x100x100 grid)? They won't be moving. Should I use display lists or vertex buffer objects? Take this with a grain of salt, I haven't worked with GL for a few years. Why don't you just use both? VBOs and display lists aren't mutually exclusive. VBOs make your data reside on the graphics card, which will be much faster than immediate mode (glVertex3f etc.). Display lists just record your GL function calls. If we entertain the thought that you would issue a draw call for each of the million boxes (which you're not, I hope), you could make a VBO for the box and record the million draw calls into a display list, so when you need to draw everything you have an already-compiled command buffer to use (i.e. you traded function call time for memory).
|
# ¿ May 12, 2010 21:12 |
|
Contero posted:Can I get some recommendations on papers or tutorials for rendering fire effects? Any kind of fire. Do you want it to be useful or just to play around with? If I had some more time to just experiment, I'd definitely look into this: http://users.skynet.be/fquake/
|
# ¿ Jun 24, 2010 23:02 |
|
Hammertime posted:what's the best way for me to gain a thorough modern OpenGL education? I find that the easiest way is just to skip the specifics (i.e. which API you're using) and read what the IHVs give presentations about, be it DirectX or OpenGL. Working with graphics, you're going to be bound by hardware anyway, so what works well in DirectX is probably going to work well in OpenGL too. What I would do is just skim through the Nvidia/AMD dev websites and look at performance articles/presentations such as http://developer.download.nvidia.com/presentations/2008/GDC/GDC08-D3DDay-Performance.pdf (this is a bit old, I just took something as an example). You'll find lots of information on http://developer.amd.com/, http://developer.nvidia.com/ and maybe even http://software.intel.com/en-us/visual-computing/. And when in doubt, use a profiler! Zerf fucked around with this message at 20:29 on Jul 19, 2010 |
# ¿ Jul 19, 2010 20:25 |
|
Harokey posted:This has worked fine for normal shapes, but I'm having a bit of trouble doing it for my arbitrary "polygon" shape. Is there an algorithm to do this? Or am I maybe going about this the wrong way? I don't know how far you want to push this, but if you want an algorithm that handles odd cases, you should look up http://en.wikipedia.org/wiki/Straight_skeleton The algorithm I used the last time I implemented straight skeletons was quite difficult to make robust, though, so I'd advise against using them unless it's really necessary.
|
# ¿ May 1, 2011 22:42 |
|
unixbeard posted:Does anyone know a good book on intermediate/advanced opengl programming? I'd like something that covers commonly used techniques that are beyond introductory basics, so stuff like SSAO, volumetric rendering, etc. I know I can find lots of info on the web about this stuff but I'd like it if there was a book I could just work through that has consistent writing and code style. I've been reading through realtime rendering which is great, so something around that level but perhaps less comprehensive and with more code. Try any of the GPU Gems, ShaderX or GPU Pro books. They're not OpenGL-specific, but they have some really neat articles. The GPU Gems books are also available for free from Nvidia (https://developer.nvidia.com/content/gpu-gems-3 for example), so you can see if there's something you're interested in. I'd also rate the GPU Gems series higher than the others in terms of quality, but it's been a while since the latest release. I don't know if the exact book you're asking for exists; more advanced stuff like this you usually learn from the web, from articles, and/or from talks at SIGGRAPH/GDC.
|
# ¿ Aug 17, 2013 08:10 |
|
czg posted:So, I'm trying to write some stuff using SlimDX which is a C# wrapper around DirectX, using DirectX 11. You could give Intel GPA a shot, I suppose - but pretty much every PC graphics debugging tool sucks compared to the console tools. Personally I find that GPA does at least a decent job, but it's far from perfect.
|
# ¿ Feb 5, 2014 19:01 |
|
Raenir Salazar posted:*skinning stuff* It's kind of hard to know exactly what your transformations look like just from this code, but in general this is what you want to do for each joint: code:
Once you have a transform for each joint, you can upload them all to a shader and use the formula you posted, something like this (if you limit influence to four transforms): code:
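In plain terms, four-influence linear blend skinning is just a weighted sum of matrix-transformed positions, v' = Σ wᵢ·(Mᵢ·v). Here's a hedged Python sketch of that blend; the row-major matrix layout, the joint indices/weights and the example values are all made up for illustration, not taken from the original code:

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a vec4 (both as nested lists)."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def skin_vertex(v, joint_matrices, joint_indices, weights):
    """Linear blend skinning with up to four influences:
    v' = sum_i weights[i] * (joint_matrices[joint_indices[i]] * v)."""
    out = [0.0, 0.0, 0.0, 0.0]
    for idx, w in zip(joint_indices, weights):
        transformed = mat_vec(joint_matrices[idx], v)
        out = [o + w * t for o, t in zip(out, transformed)]
    return out

def translation(tx, ty, tz):
    """A pure-translation joint transform, for the example below."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Two joints pulling a vertex by different amounts along x, weighted 50/50.
matrices = [translation(1, 0, 0), translation(3, 0, 0)]
v = [2.0, 0.0, 0.0, 1.0]  # position with w=1
print(skin_vertex(v, matrices, (0, 1, 0, 0), (0.5, 0.5, 0.0, 0.0)))
# [4.0, 0.0, 0.0, 1.0]
```

In a shader you'd do the same thing with a mat4 array uniform and per-vertex index/weight attributes.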
|
# ¿ Mar 11, 2014 19:33 |
|
Raenir Salazar posted:by shaders do you mean modern opengl stuff? We've been using immediate mode/old opengl so far. Then I think what you are missing is the inverse bind pose transform. Is this a school assignment? Is that transform mentioned somewhere? A simple example of why it's needed: imagine that we have two joints, one at position j1abs(5,0,0) and one at position j2abs(5,2,0). j2 has a relative transform to j1 which looks like j2rel(0,2,0). Now we have a vertex which we want to skin. This vertex is placed at v1abs(5,3,0). For simplicity, we want to attach this vertex only to the j2 joint. We cannot apply j2's absolute transform to the vertex position right away (that would give us a new position v1'abs(5+5,2+3,0+0)=(10,5,0), which is not what we want). Therefore, we define the inverse bind pose transform to be the transformation from a joint to the origin of the model. In other words, we want a transform which takes a position in the model into the local space of a joint. With translations this is simple; we can invert the transform by negating it, giving us j2invBindPose(-5,-2,0). Now, let's try to apply these transformations. First we take the vertex v1 and multiply it with the inverse bind pose for j2. This results in the position (0,1,0) (see, we are now in joint-local space). Now we can simply apply j1rel(5,0,0) and j2rel(0,2,0), which gives us v1'abs(0+5+0,1+0+2,0+0+0)=(5,3,0), right where we started. Now imagine we change j1rel's transform to j1rel(6,1,0). We again take v1abs(5,3,0)*j2invBindPose(-5,-2,0)*j1rel(6,1,0)*j2rel(0,2,0) = (5 + -5 + 6 + 0, 3 + -2 + 1 + 2, 0+0+0+0) = v1'abs(6,4,0), which is exactly what we want. So, does this explanation make sense to you, or have I succeeded in making you more confused?
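Since every transform in this example is a pure translation, the whole walkthrough can be checked with a few lines of Python, composing translations by simply summing them, exactly as in the text:

```python
def add(*vecs):
    """Compose pure translations / apply them to a point by summing,
    component-wise. Only valid because everything here is a translation."""
    return tuple(sum(c) for c in zip(*vecs))

v1_abs = (5, 3, 0)
j2_inv_bind_pose = (-5, -2, 0)  # negated absolute bind position of j2

# Bind pose: the joint chain returns the vertex to where it started.
j1_rel, j2_rel = (5, 0, 0), (0, 2, 0)
print(add(v1_abs, j2_inv_bind_pose, j1_rel, j2_rel))  # (5, 3, 0)

# Move j1: the skinned vertex follows the joint.
j1_rel = (6, 1, 0)
print(add(v1_abs, j2_inv_bind_pose, j1_rel, j2_rel))  # (6, 4, 0)
```

With rotations involved you'd need real matrix multiplication instead of addition, but the order of the factors stays the same.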
|
# ¿ Mar 11, 2014 20:42 |
|
Raenir Salazar posted:I think I see what you mean but isn't that handled by assigning weights? There might be other ways to do this, but the most common way is to use weights and apply them to different joint transforms (usually in shaders, but there's no stopping you from doing it on the CPU if you really want to). Following my example, say that you have three joints: j1, j2 and j3. j2 has j1 as parent and j3 has j2 as parent. You then compute all three joint transforms, including the inverse bind pose, like so:
j1compound = j1inverseBindPose * j1rel
j2compound = j2inverseBindPose * j1rel * j2rel
j3compound = j3inverseBindPose * j1rel * j2rel * j3rel
Then, for a vertex that is affected by all three with the weights <0.2,0.3,0.5>, calculate its skinned position with the formula you first posted, i.e.:
v1' = v1 * j1compound * 0.2 + v1 * j2compound * 0.3 + v1 * j3compound * 0.5
That should give you the correct position for a vertex that is skinned to all three joints. As for my Skype, I'm really rusty on OpenGL and I usually don't have much time to answer questions like these, so I'd rather not give it away. You can always PM me questions though, just beware that sometimes I might not find time to answer them for a couple of days.
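Sticking with translation-only transforms (so composing is just adding), the compound/weight formula above can be sketched in Python. Note that j3's relative offset (1,0,0) is a value I made up for the example, and j1rel uses the moved value (6,1,0) from the two-joint walkthrough:

```python
def add(*vecs):
    """Compose pure translations by component-wise summing."""
    return tuple(sum(c) for c in zip(*vecs))

def scale(v, s):
    """Scale a vector by a weight."""
    return tuple(c * s for c in v)

# Relative transforms of the chain j1 -> j2 -> j3 (j3_rel is made up).
j1_rel, j2_rel, j3_rel = (6, 1, 0), (0, 2, 0), (1, 0, 0)
# Inverse bind poses = negated absolute bind positions of each joint.
j1_ibp, j2_ibp, j3_ibp = (-5, 0, 0), (-5, -2, 0), (-6, -2, 0)

j1_compound = add(j1_ibp, j1_rel)
j2_compound = add(j2_ibp, j1_rel, j2_rel)
j3_compound = add(j3_ibp, j1_rel, j2_rel, j3_rel)

v1 = (5, 3, 0)
weights = (0.2, 0.3, 0.5)
compounds = (j1_compound, j2_compound, j3_compound)
v1_skinned = add(*(scale(add(v1, c), w) for c, w in zip(compounds, weights)))
print(v1_skinned)  # ≈ (6.0, 4.0, 0.0)
```

Because the chain here is all translations, all three compounds collapse to the same offset (1,1,0) and the weighted blend lands at the same (6,4,0) as the single-joint example; with rotations the compounds would differ and the weights would actually blend between them.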
|
# ¿ Mar 11, 2014 22:20 |
|
Raenir Salazar posted:Okay so to compute the inverseBindPose, you said: Well, by the origin of the model I actually meant (0,0,0). You don't need any other connection between the joint and the vertices than the inverse bind pose, because it will transform the vertex to joint-local space no matter where the vertex starts out. You don't need to involve the point C in your calculations at all. Also note that the inverse bind pose is constant over time; you only need to calculate it once. The compound transforms you need to compute each frame (obviously, since the relative position between joints might change).
|
# ¿ Mar 12, 2014 19:05 |
|
Raenir Salazar posted:
Did you mean to write this? code:
|
# ¿ Aug 23, 2015 07:57 |
|
Raenir Salazar posted:The error I got was that the sampler2D had to be a "literal expression" which seemed to rule out any sort of variable assignment and thus ruled out much simpler code. It's possible that stuffing them into an array would be more helpful, I'll try it out. If you haven't got support for texture arrays, this should work with your existing code: code:
|
# ¿ Aug 23, 2015 08:23 |
|
Joda posted:Could it be fill related? When you're zoomed in the triangles take up more of the viewport, so if you're committing them all to memory you're gonna be doing way more writes when you're zoomed in. What happens if you just draw lines instead of full triangles? Fill rate would be my guess - check the blending states and especially measure overdraw. If a lot of objects are visible at the same time doing alpha blending, ignoring the z-buffer and filling the entire screen, you could see large performance drops. But instead of speculating about it, there should be some performance analyzer program you could download - Intel GPA maybe? I know very little about the tools available on OSX, so I can't give you any proper recommendations.
|
# ¿ Apr 24, 2016 21:41 |
|
Doc Block posted:On iOS, Apple has performance analyzers for Metal that will detect some things that hurt performance and give you a complete breakdown/trace analysis for a frame, showing you every API call made during that frame and how long each draw call took etc. But according to Lord Funk, the trace analyzer for Metal isn't available on OS X. How did I miss that entire post? My reading comprehension must've been broken yesterday. Sorry about that, Lord Funk. The advice still stands though: try downloading a performance analyzer of some kind to get more information on what takes up the time.
|
# ¿ Apr 25, 2016 11:07 |
|
Distance fields also have some other nice properties when it comes to text effects, like drop shadows, outer/inner glow etc. Doing similar things for meshes/vector fonts is non-trivial and involves computing the straight skeleton or similar. Ralith posted:They're also both slower to render (even ignoring preprocessing!)... Please elaborate on this. If we ignore preprocessing, rendering distance field fonts is just a plain texture lookup and some simple maths (which is essentially free next to the texture lookup).
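To show just how little math that is, here's a hedged Python sketch of the per-fragment work after the distance texture lookup; the 0.125 smoothing width and the sample values are arbitrary examples, and in a real shader this would be two lines of GLSL:

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: clamp, then cubic Hermite interpolation."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def sdf_coverage(distance_sample, width=0.125):
    """Map a sampled distance (0.5 = glyph edge) to an alpha value,
    smoothing over +/- width around the edge for antialiasing."""
    return smoothstep(0.5 - width, 0.5 + width, distance_sample)

print(sdf_coverage(0.3))  # well outside the glyph -> 0.0
print(sdf_coverage(0.5))  # exactly on the edge    -> 0.5
print(sdf_coverage(0.7))  # well inside            -> 1.0
```

Effects like outlines and glows fall out of the same lookup by thresholding or remapping the distance differently before the smoothstep.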
|
# ¿ Apr 13, 2017 20:25 |
|
Ralith posted:It surprised me too. I linked experimental data. There's discussion of implementation details as well. I skimmed through the link, but where do you come to the conclusion that this is faster than distance field rendering? All the comparisons seem to be against CPU-based rasterizers, and the GPU part seems non-trivial to implement. It's probably faster than distance fields if you include the preprocessing they require, but ideally you preprocess each glyph once (or once per desired resolution) and end up with a super-low-res image that can easily be cached and is fully satisfactory for most use cases. Don't get me wrong, the link seems like a good approach for rasterizing fonts, but I still believe distance fields provide much more bang for the buck.
|
# ¿ Apr 14, 2017 10:08 |
|
Xerophyte posted:GLyphy is a GPU implementation that uses signed distance fields: Oh, I see, that's why. Thanks. On the other hand, here's an excerpt from the GLyphy Github repo: quote:The main difference between GLyphy and other SDF-based OpenGL renderers is that most other projects sample the SDF into a texture. This has all the usual problems that sampling has. Ie. it distorts the outline and is low quality. So sure, if you are going to use an SDF without baking it into a texture, it's going to be expensive. I still believe regular, texture-based SDF variants will be both simpler and faster than doing any other font rasterization on the GPU (with the caveat that generating the texture is expensive and sampling artifacts can occur). Zerf fucked around with this message at 12:17 on Apr 14, 2017 |
# ¿ Apr 14, 2017 12:13 |
|
peepsalot posted:Are there any online calculator or utility that help create transformation matrices? I use WolframAlpha quite a lot; it's super handy. For example, it can do symbolic matrix inversion etc. What are you after specifically?
|
# ¿ Nov 29, 2017 20:41 |
|
lord funk posted:Yeah that makes total sense! Thanks for the approach details. Heh, funny you should bring this up - I implemented this just last week. My solution was to handle it in the shader. Since the perspective transform is non-linear, each affected vertex now needs to be multiplied by two matrices instead of one, with some meddling between the multiplications. Quite a simple solution, but it works well.
|
# ¿ Mar 22, 2018 21:05 |
|
Hubis posted:Sorry, was phone-posting! Nice - I'm currently sitting doing some Vulkan stuff, enjoying bindless and doing a lot of things in batches, so this was really informative. But I take it, then, that instancing in itself isn't that bad; it's just the extreme cases, when you get a really low vertex count per instance?
|
# ¿ Mar 28, 2018 13:54 |
|
Can you even model that using blend modes? The equation contains abs(base-blend), and there's no standard way of doing that AFAIK (though I think Nvidia has a ton of blend extensions). Reference formula: https://github.com/jamieowen/glsl-blend/blob/master/difference.glsl Edit: Here's GL's extension: https://www.khronos.org/registry/OpenGL/extensions/KHR/KHR_blend_equation_advanced.txt Maybe something like that exists for Metal? Zerf fucked around with this message at 18:58 on Mar 20, 2019 |
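For reference, the formula itself is trivial when you can evaluate it in a shader instead of in fixed-function blend hardware. A small Python sketch of the per-channel difference blend from the linked repo (colors as RGB tuples in [0,1]):

```python
def blend_difference(base, blend):
    """Photoshop-style 'difference' blend: abs(base - blend), per channel.
    This is exactly what fixed-function blend equations can't express
    without an extension like KHR_blend_equation_advanced."""
    return tuple(abs(b - s) for b, s in zip(base, blend))

print(blend_difference((1.0, 0.25, 0.5), (0.25, 1.0, 0.5)))
# (0.75, 0.75, 0.0)
```

The usual workaround without the extension is to sample the destination yourself (render-to-texture or a framebuffer fetch) and do the abs in the fragment shader.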
# ¿ Mar 20, 2019 18:55 |
|
Doc Block posted:Vulkan question: We use the bindless approach with an array of texture arrays, so texture count is not really an issue. We bind each texture array once upon creation, but the rest of the time a texture manager keeps track of which indices in each array point to which texture, and streams textures in/out when needed. As such, there's no need to compile shaders at runtime, because we're well within the limits. We don't use push constants for the texture indices either; rather, they are placed in a storage buffer. Each model/mesh then gets fed an entity index in some way (via per-instance data/push constant/glBaseInstanceIndex etc.), and this index is used to access the different model/mesh settings.
|
# ¿ May 25, 2019 09:41 |
|
Doc Block posted:Doesn't that still leave you having to create a million descriptors, though? I read somewhere that some implementations have a really low number of max descriptors. And still seems like a hassle if you need to load or unload images on the fly, since wouldn't you have to rebuild the buffer of descriptor sets? We create a total of 2 descriptor sets, since we double buffer most things. Each descriptor set contains approx 100 entries. The only extra work after initial setup is if we run out of space in a texture array and need to create and bind another one. Again, no biggie to update 2 descriptor sets...
|
# ¿ May 26, 2019 07:51 |
|
If I understand your problem correctly, you don't actually want to use lerp at all, because mixing red and blue doesn't make sense for the middle values. You might be looking for the over operator, found here: https://en.wikipedia.org/wiki/Alpha_compositing I.e. "out = outline + shadow * (1-outline.a);"
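A hedged Python sketch of that over operator, assuming premultiplied-alpha RGBA tuples (the outline/shadow colors below are made-up example values):

```python
def over(src, dst):
    """Porter-Duff 'over' for premultiplied-alpha RGBA tuples:
    out = src + dst * (1 - src.a), applied to all four channels."""
    src_alpha = src[3]
    return tuple(s + d * (1.0 - src_alpha) for s, d in zip(src, dst))

outline = (0.5, 0.0, 0.0, 0.5)   # premultiplied red outline, 50% alpha
shadow  = (0.0, 0.0, 0.0, 0.75)  # premultiplied black drop shadow
print(over(outline, shadow))     # (0.5, 0.0, 0.0, 0.875)
```

Unlike lerp, the shadow only shows through where the outline is transparent, which is the behavior you want for compositing the two layers.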
|
# ¿ Aug 22, 2020 18:55 |