|
Thug Bonnet posted:My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?
|
# ¿ Jul 24, 2008 08:22 |
|
MasterSlowPoke posted:That's supposed to be a man-shaped figure, so something's off, and I assume it's my bone transforms. They should be matrices that transform the bone from bone space to world space, correct?

There are two common approaches, and Doom 3's MD5 format uses the first. With pretransformed weight lists, each vertex references only a list of weights, and the weights contain coordinates in bone space premultiplied by the influence. The final result is achieved by transforming those values from bone space into world space and summing them.

Matrix palette skinning instead has a base (a.k.a. reference) model in world space, and a base pose. The transformed position for each weight is calculated by transforming the base-pose vertex into bone space using the inverse of the bone's base pose, then transforming it into world space based on the bone's current pose. The results are then multiplied by the influence and summed to form the final position. What you're looking at is the latter: the second matrix is the inverse of the base pose.

In implementation, you can concatenate the two matrices into one matrix that transforms a base-pose vertex into its new location with a single matrix multiply rather than two. If you're going to do this in software (as opposed to in the vertex shader), you can pre-blend the matrices for each unique weight list, since the number of unique weight combinations/values is generally fairly low, which lets you avoid branch mispredictions in the transform code.

OneEightHundred fucked around with this message at 21:19 on Jul 24, 2008 |
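The concatenation described above (current pose times inverse base pose, weighted per influence) can be sketched in plain C. This is a hypothetical illustration, not code from any engine mentioned in the thread; the `Mat34` layout and all function names are made up for the example, and the inverse assumes rigid (rotation + translation) bone transforms.

```c
#include <assert.h>
#include <math.h>

typedef struct { float m[3][4]; } Mat34;   /* rotation in m[i][0..2], translation in m[i][3] */

/* c = a * b for affine 3x4 matrices (implicit bottom row 0 0 0 1) */
static Mat34 mat34_mul(const Mat34 *a, const Mat34 *b) {
    Mat34 c;
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            c.m[i][j] = a->m[i][0]*b->m[0][j] + a->m[i][1]*b->m[1][j] + a->m[i][2]*b->m[2][j];
        c.m[i][3] = a->m[i][0]*b->m[0][3] + a->m[i][1]*b->m[1][3] + a->m[i][2]*b->m[2][3] + a->m[i][3];
    }
    return c;
}

/* Inverse of a rigid transform: transpose the rotation, rotate the negated translation. */
static Mat34 mat34_rigid_inverse(const Mat34 *a) {
    Mat34 r;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r.m[i][j] = a->m[j][i];
    for (int i = 0; i < 3; i++)
        r.m[i][3] = -(r.m[i][0]*a->m[0][3] + r.m[i][1]*a->m[1][3] + r.m[i][2]*a->m[2][3]);
    return r;
}

static void mat34_apply(const Mat34 *a, const float v[3], float out[3]) {
    for (int i = 0; i < 3; i++)
        out[i] = a->m[i][0]*v[0] + a->m[i][1]*v[1] + a->m[i][2]*v[2] + a->m[i][3];
}

/* Skin one base-pose vertex: sum over weights of
   influence * (currentPose * inverse(basePose)) * v. */
static void skin_vertex(const float v[3],
                        const Mat34 *base, const Mat34 *current,
                        const float *influence, int numWeights,
                        float out[3]) {
    out[0] = out[1] = out[2] = 0.0f;
    for (int w = 0; w < numWeights; w++) {
        Mat34 inv  = mat34_rigid_inverse(&base[w]);
        Mat34 skin = mat34_mul(&current[w], &inv);  /* could be pre-blended per weight list */
        float p[3];
        mat34_apply(&skin, v, p);
        for (int i = 0; i < 3; i++)
            out[i] += influence[w] * p[i];
    }
}
```

In a real renderer the `skin` matrix would be computed once per bone (or once per unique weight list) rather than per vertex; it is inlined here to keep the sketch short.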
# ¿ Jul 24, 2008 20:29 |
|
MasterSlowPoke posted:Actually, that is an MD5 model, but I am computing a base pose and base model at load time as I've never seen any other description on how to do it.

Getting matrix palette skinning to work basically involves two steps: getting it to work right in the base pose, then getting the animation to work right.

The reason for the first is that it's really easy to tell when the base pose is correct: your transform matrices should all be identity matrices (or only slightly off due to floating-point error). You get the transform matrix by computing (current * inverse(base)). Obviously, when current equals base, you should get an identity matrix, meaning none of the vertices will be changed at all.

If you can't get them to be identity matrices, troubleshoot that computation. Make sure the poses are in world space and not bone space. Make sure you inverted the base poses after concatenating them to bring them into world space, not before. If you can get the transform matrices to be identity and the result is still hosed up, then the vertices in your base model were calculated wrong.

If you get to that point and it animates incorrectly, then your transform matrix is probably inverted, or the transform operation in your shader is backwards. The other possibility is that your base AND animated poses were both in bone space when you calculated the transform matrices, so make sure you have a way of debugging what the SKELETON looks like as well.

OneEightHundred fucked around with this message at 02:14 on Jul 25, 2008 |
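The sanity check described above (base pose in, identity matrices out) is easy to automate. A minimal sketch, assuming a 4x4 column-major matrix stored in a flat `float[16]` (OpenGL's convention); the function name is invented for the example.

```c
#include <assert.h>
#include <math.h>

/* Returns 1 if m is the identity to within tol, which is what every
   (current * inverse(base)) transform should be when the skeleton is
   posed in its base pose. */
static int is_near_identity(const float m[16], float tol) {
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float expect = (row == col) ? 1.0f : 0.0f;
            if (fabsf(m[col*4 + row] - expect) > tol)
                return 0;
        }
    return 1;
}
```

Running every palette entry through a check like this once at load time catches the "base poses inverted in the wrong space" class of bug before anything is drawn.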
# ¿ Jul 25, 2008 02:06 |
|
Easy way to tell is to translate the model along each axis and find out if it's going the same direction as it is in the editor. If it is, your projection matrix is flipped. If it isn't, your model was loaded flipped.
|
# ¿ Jul 26, 2008 21:45 |
|
Shazzner posted:When is OpenGL 3.0 going to be released and will it be the next big thing?

Will it be the next big thing? No, because of the previous answer! The problem with OpenGL ever since D3D7 has been that it takes too long to get features through. D3D went through two major versions before OpenGL finally got a good render-to-texture extension. The spec for OpenGL 3 was supposed to be out in September LAST YEAR. Developers cannot adopt a graphics API that does not exist, and will not adopt a graphics API that continues to lag on features without providing a clear benefit. The Xbox 360 has made the portability argument less clear-cut, and the usability arguments have been fairly moot since D3D9 came out with a sane API.

quote:OpenGL 3.0 will just make them standard

One example would be that texture dimensions are currently mutable, so replacing a texture mandates rescaling all of the mipmap levels even though they're likely to be overwritten immediately. In OpenGL 3, texture dimensions are immutable. It also has asynchronous object creation, another performance boost.

As for what game makers will do, it depends heavily on how accepted Vista and D3D10 are. OpenGL 3 is probably going to be XP-compatible, and will offer support for D3D9 and D3D10 hardware with one API. If D3D10 and Vista aren't widespread, that would provide a very compelling reason to use it over D3D.

OneEightHundred fucked around with this message at 00:55 on Jul 28, 2008 |
# ¿ Jul 28, 2008 00:45 |
|
Professor Science posted:Real OGL3 != the OGL3 that was discussed until last year's SIGGRAPH.

I assume they'll actually explain WTF happened at this year's SIGGRAPH (so, a few weeks from now), but from what I've heard it's a far less radical change than what they announced. Supposedly the mobile people were unhappy that it would not look very much like OpenGL ES, and considering that's the only area where they've had a real lock on the market, back to the drawing board they went.

OpenGL ES isn't really just a mobile thing either; most of the current generation of game consoles use graphics libraries derived from it.

quote:How exactly does 3d texturing work.
|
# ¿ Jul 28, 2008 16:16 |
|
You could compensate for fillrate by having the shader sample multiple points within the volume, which would obviously involve some refactoring.
|
# ¿ Jul 30, 2008 15:45 |
|
MasterSlowPoke posted:In my Quake 3 BSP loader I'm trying to get frustum culling to work but the results are far too overzealous. (You are using plane-side culling, right?) OneEightHundred fucked around with this message at 02:47 on Jul 31, 2008 |
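For reference, the plane-side culling hinted at above boils down to testing a bounding box against each frustum plane. This sketch is a generic illustration, not Quake 3's actual code; the plane convention (normal pointing into the frustum, point inside when dot(n, p) + d >= 0) and all names are assumptions for the example.

```c
#include <assert.h>

typedef struct { float n[3], d; } Plane;

/* Returns 1 if the AABB [mins, maxs] lies completely behind the plane.
   Only the corner farthest along the plane normal needs testing: if even
   that corner is behind, the whole box is. */
static int aabb_behind_plane(const Plane *p, const float mins[3], const float maxs[3]) {
    float v[3];
    for (int i = 0; i < 3; i++)
        v[i] = (p->n[i] >= 0.0f) ? maxs[i] : mins[i];
    return p->n[0]*v[0] + p->n[1]*v[1] + p->n[2]*v[2] + p->d < 0.0f;
}

/* A box is culled only when it is fully behind at least one plane.
   An overzealous culler usually has a flipped normal or sign on d. */
static int aabb_visible(const Plane *planes, int numPlanes,
                        const float mins[3], const float maxs[3]) {
    for (int i = 0; i < numPlanes; i++)
        if (aabb_behind_plane(&planes[i], mins, maxs))
            return 0;   /* definitely outside */
    return 1;           /* possibly visible */
}
```

A quick way to debug "far too overzealous" culling is to feed this a single known plane and a box straddling it: a straddling box must never be culled.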
# ¿ Jul 31, 2008 02:43 |
|
MasterSlowPoke posted:I'm trying to figure out the best process to use to render a BSP with DirectX. The traditional way to render a BSP is to find what polygons (leafs) are visible from the camera and render them as you find them. This works fine and dandy with OpenGL, but the relatively high cost of Draw calls with DirectX makes me wary.

Iterate over the surface list to batch them. q3map already sorts geometry by texture and lightmap index, so you can scan right through them and flush batches as soon as a change is detected. If you're going to do everything with hardware shaders in your material system, create a static buffer, upload all of the drawvert data to it, and just copy index ranges into an index buffer, flushing the draw out when it fills up or a material/lightmap change is detected.

For transparent stuff it's a bit more difficult. There are a lot of ways to handle it, and many engines these days are lazy and don't even bother sorting it, because it's not the hot poo poo it was when Quake 3 came out.

quote:Also, does anyone find that GameDev.net's forums are absolutely worthless for getting help?

OneEightHundred fucked around with this message at 21:32 on Aug 3, 2008 |
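The "scan and flush on state change" batching described above is simple to sketch. This is an invented illustration (the `Surface` struct, the draw callback, and the field names are all assumptions, not q3map's real data layout); it assumes the surface list is already sorted by (texture, lightmap), as q3map output is.

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int texture, lightmap, firstIndex, numIndexes; } Surface;

/* Walks the sorted surface list and issues one draw per contiguous run of
   surfaces sharing the same texture/lightmap pair. Returns the number of
   draws issued, which is what the batching exists to minimize. */
static int draw_batched(const Surface *surfs, size_t count,
                        void (*draw)(int texture, int lightmap)) {
    int draws = 0;
    size_t start = 0;
    for (size_t i = 1; i <= count; i++) {
        int stateChanged = (i == count) ||
            surfs[i].texture  != surfs[start].texture ||
            surfs[i].lightmap != surfs[start].lightmap;
        if (stateChanged) {
            if (draw) draw(surfs[start].texture, surfs[start].lightmap);
            draws++;
            start = i;
        }
    }
    return draws;
}
```

In the real thing the "draw" step would copy the run's index ranges into the dynamic index buffer and issue one DrawIndexedPrimitive; the counting shape is the same.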
# ¿ Aug 3, 2008 21:22 |
|
shodanjr_gr posted:So what are some good fora for getting OGL help?

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=cfrm&c=3

#darkplaces and #neither on AnyNet have a bunch of dabblers in this sort of thing, including myself. Being able to bounce ideas off of other people has helped a shitload.

It's getting harder to recommend anything for OpenGL development because it's become so much more of a pain in the rear end to use. There are at least two ways to do everything now, which means it's really easy to get led into doing things that are completely out of date. And GLUT still blows. I found that modding existing stuff was a much better starting point: Quake 3's renderer, or QFusion's (which is roughly the same thing), are easy to get started in because they're not using a lot of the really new stuff, but are still mostly based on design principles that still apply.

quote:What i want to do now is visualize the light's depth buffer (render it to a quad). And i am a bit clueless as to how i can read a GL_DEPTH_COMPONENT texture from inside a shader (using a texture sampler).

GLSL depth texture samplers need to be sampler2DShadow. Sampling them with texture2D does not work on all hardware, so don't do it. You need to set GL_TEXTURE_COMPARE_MODE to GL_COMPARE_R_TO_TEXTURE_ARB (do not leave it as GL_NONE!) and GL_TEXTURE_COMPARE_FUNC to whatever you want in the texture environment for the depth texture sampler. You can compare another depth value against it by using shadow2DProj(shadowSampler, coord): (coord.s, coord.t) is the location on the shadow image to use, and I think it divides those two by the W component as well, to let you use a projection matrix to calculate it. The R coordinate is the depth value you're comparing against. shadow2DProj returns a vec4 full of 0's if the depth test fails, according to the texture compare function you specify, and 1's if it passes.

Most ATI hardware doesn't filter depth textures, so if you have an ATI card and it's giving you jaggy, ugly shadows, it's not your fault.

If you need to visualize the depth, then you'll need to render to a non-depth format. I think the only alternative is to read the depth values using glReadPixels, which is slow as gently caress, but will obviously work for debugging.

OneEightHundred fucked around with this message at 06:07 on Aug 4, 2008 |
# ¿ Aug 4, 2008 05:56 |
|
shodanjr_gr posted:Actually, i set the compare mode to GL_NONE, and then i managed to read it using a sampler2D and accessing the R-channel.

quote:Also, is it possible to do shadow-mapping with point-lights?

It's one of those things you probably want to fake. Point lights that aren't used purely for decoration usually have a limited angle range where they affect surfaces worth shadowing, so it's best to just cap the FOV boundaries to that range. Or, if you REALLY want to cheat, cast the shadow isometrically using a single direction vector.

quote:edit: also, agreed on OpenGL being a pain...ive been getting increasingly tempted to port my research code over to D3D and XNA, due to the vast amounts of documentation/tutorials and IDE integration that help you get the technical stuff out of the way easily...Plus i would get to run my stuff on my 360!!

XNA requires using C#, for better or worse.
|
# ¿ Aug 4, 2008 08:28 |
|
Since delphi3d.net is down, does anyone know what hardware supports the GL_EXT_framebuffer_sRGB extension?
|
# ¿ Aug 24, 2008 08:53 |
|
ColdPie posted:I've got what feels like a noob question. I'm trying to learn about using shaders, and things are going fine, except that I can't get it to run on my laptop! My stuff works fine on my desktop with its real OpenGL 2.x graphics card, but my laptop only supports OpenGL 1.4. However, I read that shaders were originally introduced in 1.4 as an extension. And, as a matter of fact, glxinfo reveals extensions such as GL_ARB_fragment_program and GL_ARB_vertex_program which sound like what I want. If you're going to target Intel hardware, you may want to check out Cg, which will allow you to target ARB programs and GLSL shaders with the same code.
|
# ¿ Aug 24, 2008 19:41 |
|
Well, as I said, the only real alternatives for Intel hardware are ARB fragment/vertex programs, which are written in a sort of pseudo-assembly language. You can write shaders in that language directly (which isn't fun), or you can use Cg (an HLSL-like language) to target them. Cg also has a standalone compiler (cgc) that dumps the high-level metadata in comments, so you don't have to use the Cg runtime libraries to use the language. There really aren't any other alternatives for programmable shaders until Intel decides to release a driver update.
|
# ¿ Aug 24, 2008 20:24 |
|
shodanjr_gr posted:Say i got an opengl FBO with 4 color attachments, and i want to clear only one of those attachments (that is, do the equivalent of glClear(GL_COLOR_BUFFER_BIT). How do i do that?
|
# ¿ Sep 1, 2008 01:28 |
|
Looks like depth buffer precision issues. Consider using D3DRS_SLOPESCALEDEPTHBIAS and D3DRS_DEPTHBIAS with D3D, or glPolygonOffset with OpenGL to nudge it. If you're reading the depth value in the shader, just offset it by a fixed amount.
|
# ¿ Sep 5, 2008 20:21 |
|
sex offendin Link posted:VBOs would be an advantage for anything that's generated exactly once and never modified for the entire session. For any element that might be frequently rebuilt, they're a wash or worse.

Of course. There are two obscure things to keep in mind, though.

First, writing into a mapped VBO can cause severe cache pollution, which is why the GPU manufacturers recommend doing uncached SSE writes to the GPU address space after you map it. glBufferData/glBufferSubData do exactly that, but it also means that SSE-accelerated transforms can push results directly to the GPU with no performance hit.

The other is mirroring D3D's "discard" behavior: if you're not going to use the data in a VBO any more and want to map it again, call glBufferData with a NULL data pointer to throw out the existing contents. Doing that will cause the driver to give you a new region of memory to work with if it's not done with the old one; not doing it will cause a stall until the driver is done using that region.

OneEightHundred fucked around with this message at 05:00 on Sep 25, 2008 |
# ¿ Sep 25, 2008 04:20 |
|
magicalblender posted:What's better, drawing pixels directly, or using textures? Poking around the OpenGL documentation, I see that I can put pixel data into a raster and glDrawPixels() it onto the screen. I can also put the pixel data onto a texture, put the texture on a quad, and put that onto the screen. If I'm just doing two-dimensional imaging, then which is preferable?
|
# ¿ Oct 3, 2008 01:34 |
|
Eponym posted:Ideally, I'd like to do something like D3DVector3 normal = pVert[index].Normal. My issue is that I don't know what to define struct VERTEX with or what order in which to define its elements. How do I find out the vertex format?

http://msdn.microsoft.com/en-us/library/bb172630(VS.85).aspx

Use CreateVertexDeclaration to create an instance of it: http://msdn.microsoft.com/en-us/library/bb174365(VS.85).aspx

Use SetVertexDeclaration to bind it: http://msdn.microsoft.com/en-us/library/bb174464(VS.85).aspx

Use SetStreamSource to assign the vertex buffer to a stream slot: http://msdn.microsoft.com/en-us/library/bb174459(VS.85).aspx
|
# ¿ Oct 4, 2008 21:20 |
|
brian posted:As for the fixed function pipeline I was under the impression by using any vertex or fragment shader it goes around the usual processes that gl performs (like having to transform vertices with ftransform() or modelview * vertex).

Also, keep in mind that blending, stencil, depth, and alpha test are (for now) not handled by shaders.
|
# ¿ Oct 11, 2008 01:38 |
|
Don't most consumer graphics cards do accumulation buffers in software, punching your framerate in the face?
|
# ¿ Oct 11, 2008 02:24 |
|
Luminous posted:So, my question is simply: what am I missing? Please tell me I have just missed something completely minor so that I can get along. Below is a sampling of the errors, if it helps. Right-click the project, select Properties, go to the Linker subsection, select Input, and then check the Additional Dependencies line. You need d3dx10d.lib for the debug build and d3dx10.lib for the release build.
|
# ¿ Oct 14, 2008 06:19 |
|
Mithaldu posted:Got a question about z-fighting issues. I'm rendering layers of roughly 80*80 objects, with roughly 20-40 layers stacked atop each other at any time. There is no computationally non-expensive way to cut out invisible geometry that i am aware of, so i pretty much have to render everything at all times. This leads to the z-buffer getting filled up pretty fast, resulting in rather ugly occasions of polys clipping through other polys.

As for a computationally-inexpensive way to occlude objects, try zoning them using occlusion queries. An occlusion query lets you know if and how much of some geometry is visible. Draw the world (the occluders), then disable color/depth write and "draw" the bounds of a zone that a bunch of objects could be contained in. Repeat for all zones. If a zone isn't visible, don't draw any objects fully contained in it.

For voxel-based stuff there's also octree occlusion, but I'm less familiar with that.
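Once the query results are read back, the zoning idea above reduces to a simple filter over the object list. This sketch assumes an invented convention for the example: each object records the index of the zone that fully contains it, or -1 if no zone does (objects outside every zone must always be drawn).

```c
#include <assert.h>
#include <stddef.h>

/* Collect the indices of objects that still need drawing, given per-zone
   visibility flags read back from the occlusion queries. Returns how many
   survived the filter. */
static int collect_visible(const int *objectZone, size_t numObjects,
                           const int *zoneVisible, int *outObjects) {
    int n = 0;
    for (size_t i = 0; i < numObjects; i++)
        if (objectZone[i] < 0 || zoneVisible[objectZone[i]])
            outObjects[n++] = (int)i;   /* zone visible, or not zoned at all */
    return n;
}
```

The conservative direction matters: an object may only be skipped when its zone is *fully* occluded, so partial containment should map to -1 here.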
|
# ¿ Oct 27, 2008 03:29 |
|
Mithaldu posted:Try scrolling up the page to see screenshots.

If you're drawing a complete 3D voxel grid, then you can draw it in order just by using the view axis: take the sign of each component of the camera's view direction, and use that to determine which order to iterate through those components in the voxel array.

You can also squeeze more precision out by segmenting the world and doing multiple draw calls. This means you'd chop your near/far range up into segments, draw anything visible in the farthest segment, clear your depth buffer, draw anything visible in the next-closest segment, and repeat.

quote:That might actually work ... Do you maybe have some kind of example code for that?

You don't want to check each item, you want to check zones; if the zone hull is culled, then everything in the zone is considered not visible. A zone would be a room, for example, and what you'd "draw" is a box for the bounds of the room.

I don't have example code, but it's asynchronous: it works by creating an occlusion query, drawing some stuff, ending the occlusion query, and then later, once the card has finished the query, checking how much it drew. You don't want to check immediately if you have other queries you can run. By disabling depth/color write, it won't actually draw anything, but it will check the visibility of the polygons you're sending.

There's some trivial sample code here: http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt
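The sign-of-view-axis ordering described above can be sketched per axis. Names are invented for the example; the idea is that looking toward the positive direction of an axis puts the farthest cells at the high indices, so a far-to-near (painter's order) walk steps downward along that axis.

```c
#include <assert.h>

/* For one axis: pick the starting index and step direction for a
   far-to-near walk of a voxel row of the given size, based on the sign of
   the camera view-direction component along that axis. */
static void far_to_near(float viewDirComponent, int size, int *start, int *step) {
    if (viewDirComponent >= 0.0f) { *start = size - 1; *step = -1; }
    else                          { *start = 0;        *step =  1; }
}

/* Expand that into an explicit index order for the axis. The full grid
   iteration just nests one of these per axis (x, y, z). */
static void build_axis_order(float viewDirComponent, int size, int *order) {
    int start, step;
    far_to_near(viewDirComponent, size, &start, &step);
    for (int i = 0; i < size; i++)
        order[i] = start + i * step;
}
```

Recomputing the three orders only when a view-direction sign flips keeps this essentially free.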
|
# ¿ Oct 28, 2008 03:22 |
|
Mithaldu posted:Ah, i see. He's suggesting i do the job of the z-buffer in-code. Sadly that's not an option although i'll need to do it later on for objects with transparency. Reason for that not being an option is that it would massively increase the amount of work that Perl needs to do, which is what i'm trying to avoid.
|
# ¿ Oct 29, 2008 16:17 |
|
Mithaldu posted:I was tempted to post an image of a golf ball bucket here. But to be serious, I do not know what "range buckets" are. I can make vague guesses, which lead me to believe two things: Either you are talking about some technique that is completely unknown to me, or you're ignoring the display list structures i described in earlier posts around which i cannot work around. Color me confused. Although you're right, it won't work with display lists. (Stuff like this is why I don't like display lists!)
|
# ¿ Oct 29, 2008 18:26 |
|
Mithaldu posted:but i don't think it'd give me the granularity i'm aiming for.

quote:I also just tried to get the occlusion stuff implemented, but ran into a few roadblocks, so i have a few questions: Is it possible to do the occlusion query stuff in another way than using glBeginQueryARB, etc? I know it's not very likely, but better to ask than to stay ignorant.

quote:Secondly, there's a similar HP extension for that. Does it only work on special gfx cards or why would glGetBooleanv(GL_OCCLUSION_TEST_RESULT_HP, &result) return the error "Unknown param"?

OneEightHundred fucked around with this message at 02:53 on Nov 1, 2008 |
# ¿ Nov 1, 2008 02:48 |
|
Mithaldu posted:glBeginQueryARB and associated commands are not yet implemented in the Perl OpenGL library.

quote:I know it's pretty horrible in comparison, but if i can get it to work, it just may suffice.

Since it only relies on commands that are actually implemented already, it might work, unless it's hardware-specific.
|
# ¿ Nov 1, 2008 11:08 |
|
Mithaldu posted:As far as i am aware i can only use the functions that are defined in the .xs, compiled into the .dll and exported in the .pm. The only function with the word "query" in the name for which this is true is glpXQueryPointer. quote:Again, i don't know what that means. Sorry. :/
|
# ¿ Nov 2, 2008 06:57 |
|
Mithaldu posted:Can you point out where you get that information from so i at least have a clue where to start looking? I mean, i'm looking at the source code for the module here, straight from CPAN, BEFORE compiling etc., and the only mentions of these functions are in the opengl includes files, but NOT in the perl files themselves.

I'm just going off what I see in the package and I really don't know what the gently caress, so the best I can do is tell you where I see things that look like what you want.

quote:Is there a good way to check whether glBeginQuery was executed correctly or am i gonna have to fly blind?

Although realistically, if your program doesn't instantly crash from calling a bad entry point, it probably executed correctly.
|
# ¿ Nov 2, 2008 07:47 |
|
Mithaldu posted:Edit2: Apparently i'd need to convince it to load nvoglnt.dll somehow. As i'm not too firm in c (beyond basic syntax and compiling stuff in visual studio), is there a simple way to edit the makefile/c file to make that happen? quote:Edit3: Nvm, got it. They apparently do both, link against the windows opengl library AND automatically load from the driver. quote:Edit5: All functions implemented and the occlusion queries are working now in the test implementation. I hope with that the hard part is over ...
|
# ¿ Nov 2, 2008 12:02 |
|
Mithaldu posted:I actually think it's meant to be that way, since, uh, if you do it manually with LoadLibrary, you'd need to try for all possible driver dlls, and keep your code up-to-date as drivers change, right?

I may be wrong on this whole thing too.

quote:Also, while i'm looking at it. What's the glFlush there good for?

It's not a terrible idea to flush if you're going to do other things before you request the query results, though.

OneEightHundred fucked around with this message at 20:12 on Nov 2, 2008 |
# ¿ Nov 2, 2008 19:58 |
|
BernieLomax posted:Sorry if this has been asked before, but I have problems with precision using the depth buffer. First I'm rendering stuff to several depth buffers, and afterwards I want to merge them by drawing them on top of each other with depth testing enabled. And afterwards I am using the depth test GL_EQUAL in order to shade everything. However it does not work very well. I do find that by using buffers with the same precision I get better results, but I have yet to find a combination that works. Any ideas?

If you take a scene like this:

Camera --> N[----A----B----]F

... and chop it into two depth slices ...

Camera --> N[----A--]F
           N[--B----]F

... then you need to clear the depth buffer after you draw the distant slice anyway, otherwise you can get out-of-order objects like this crap:

Camera --> N[--B-A--]F

The drawback of course is that by clearing the depth buffer, you are eliminating B from the depth buffer. What you need to do is make sure that any objects that are in a depth slice (even partially) get drawn in that slice.

Alternatively, make sure you're creating the depth buffer with a reasonable precision and try adjusting your nearplane/farplane values.

OneEightHundred fucked around with this message at 21:36 on Nov 3, 2008 |
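The depth-slice bookkeeping described above is just interval math. A sketch with invented names, assuming equal-size slices between the near and far planes; real implementations often size slices logarithmically instead, since depth precision is not distributed evenly.

```c
#include <assert.h>

typedef struct { float zNear, zFar; } DepthSlice;

/* Slice i of numSlices, with slice 0 closest to the camera. Each slice gets
   its own near/far pair, and the depth buffer is cleared between slices,
   drawing the farthest slice first. */
static DepthSlice slice_range(float zNear, float zFar, int numSlices, int i) {
    float step = (zFar - zNear) / (float)numSlices;
    DepthSlice s = { zNear + step * (float)i, zNear + step * (float)(i + 1) };
    return s;
}

/* An object spanning [zMin, zMax] in view depth must be drawn in EVERY slice
   it touches, even partially, or it vanishes from the cleared depth buffer. */
static int object_in_slice(float zMin, float zMax, DepthSlice s) {
    return zMax >= s.zNear && zMin <= s.zFar;
}
```

Note an object straddling a boundary gets drawn twice, once per slice; the per-slice projection clips it to the slice's own near/far range.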
# ¿ Nov 3, 2008 21:20 |
|
Mithaldu posted:Edit: Also, question: How would i go about making a polygon render visibly from both sides without rendering it twice? If you want the two sides to render differently, you either need to render twice or use two-sided rendering pixel shaders which I haven't dealt with yet.
|
# ¿ Nov 11, 2008 18:40 |
|
Tap posted:I've recently been inspired to start game programming as a hobby, but I know virtually nothing about the subject. Do you guys have any literature you'd recommend for a beginner? I'd like to get started with basic stuff like vertexes and pixel shading, etc.. Mithaldu posted:Thanks. I needed it to make the occlusion check shapes also render when the camera is inside them.
|
# ¿ Nov 11, 2008 21:19 |
|
Mithaldu posted:Is that another awesome OpenGL technique i never heard of or is it just a fancy way to say "check where your camera is"?
|
# ¿ Nov 12, 2008 11:28 |
|
Getting geometry from modeling software is far less about the software than about the FORMAT. The formats used by the modeling packages are horribly unsuitable for being directly loaded by a game engine, so you always want the model exported in a format you can handle, catered to the capabilities of your engine. The project I'm working on uses its own format and imports from other, easier-to-parse formats like MD5, SMD, or PSK/PSA.

COLLADA and FBX are the ultimate intermediate formats right now and are supported by virtually every decent modeling package, but they're still much more complex than game-specialized formats.
|
# ¿ Dec 12, 2008 19:51 |
|
A custom format is the way to go, but how you go about doing it is also kind of important.

The best candidates for import formats if you're doing skeletal animation are MD5 (Doom 3, text), PSK/PSA (Unreal Engine, binary, see this page for specs), SMD (Source, text), and Cal3D (open source, binary). SMD is the only one of the bunch that will export normal data, but it's definitely something you DON'T want to load directly, unlike the other three, because it contains a ton of duplicated data.

Personally, I recommend PSK/PSA because it uses matrix palette blending implicitly, which is much better for doing transforms on the GPU and doesn't require doing stupid poo poo to avoid branch mispredictions on the CPU. TMRF uses a hybrid format which doesn't favor any weight blending method, and the skeletal library uses preblended transform matrices which are very SIMD-friendly.

Supporting COLLADA would be an extremely desirable goal: every major modeling package supports it, it allows normal/tangent/binormal data to be exported so seams are never a problem, and its ubiquity futureproofs it. The only problem is that it's HARD to parse. Getting meshes to import requires digging through several more layers of complexity than dealing with one of the aforementioned formats, and animations... well, let's just say that NONE of the major open source 3D engines will import COLLADA animations yet.

OneEightHundred fucked around with this message at 21:42 on Dec 12, 2008 |
# ¿ Dec 12, 2008 21:35 |
|
ultra-inquisitor posted:How reliable are non-pow2 textures in OpenGL? They've been promoted to the core spec and I've never had any problem with them, but I've only had a very limited range of cards to test on (all nvidia). I've just come back to graphics coding after a pretty lengthy absence - are they still slow, or is the speed difference negligible?
|
# ¿ Dec 15, 2008 11:21 |
|
ultra-inquisitor posted:Ok, that's a bit more drastic than I was expecting, and pretty conclusive. I was actually only considering using one for a full-screen overlay (ie using one 1024x768 texture) because using a 1024x1024 is causing slight but noticeable scaling artifacts. I don't think this is going to impact performance - I'm more worried that drivers (especially ATI's) will be dodgy and give me the old white texture routine.

edit: Yes, you can use a subset of a larger texture, but get a full-screen overlay on a 1680x1050 screen and tell me if you can find a better use for the 7MB of VRAM you're wasting.

OneEightHundred fucked around with this message at 18:20 on Dec 15, 2008 |
# ¿ Dec 15, 2008 18:13 |