OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Thug Bonnet posted:

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?
It's somewhat nice for doing trivial dynamic geometry like particles and beam sprites.


OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

That's supposed to be a man-shaped figure, so something's off, and I assume it's my bone transforms. They should be matrices that transform the bone from bone space to world space, correct?
There are generally two ways to do weighted skeletal animation: Pretransformed weight lists and matrix palette.

Doom 3's MD5 format uses the first. With pretransformed weight lists, each vertex references only a list of weights, and the weights contain coordinates in bone space premultiplied by the influence. The final result is achieved by transforming those values from bone space into world space and summing them.

Matrix palette instead has a base (a.k.a. reference) model in world space, and a base pose. The transformed position for each weight is calculated by transforming the base model vertex into bone space using the inverse of the bone's base pose, then transforming it back into world space using the bone's current pose. The results are then multiplied by the influences and summed to form the final position.


What you're looking at is the latter. The second matrix is the inverse of the base pose.


In implementation, you can concatenate the two matrices into a single matrix that transforms a base pose vertex to its new location with one matrix multiply rather than two.

If you're going to do this in software (as opposed to in the vertex shader), you can pre-blend the matrices for each unique weight list since the number of unique weight combinations/values is generally fairly low, which lets you avoid branch mispredictions in the transform code.
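
To make the pre-blend concrete, here's roughly what it looks like (untested sketch; Mat34 and the helper routines are made up, not from any particular library):

code:
// Untested sketch of software matrix palette skinning with pre-blended
// matrices. Mat34 is a made-up 3x4 affine matrix type (row-major).
struct Mat34 { float m[3][4]; };
struct Weight { int bone; float influence; };

// r = a * b, treating both as 4x4 matrices with an implicit
// [0 0 0 1] bottom row.
static Mat34 Concat(const Mat34 &a, const Mat34 &b)
{
    Mat34 r;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 4; j++)
            r.m[i][j] = a.m[i][0] * b.m[0][j] + a.m[i][1] * b.m[1][j]
                      + a.m[i][2] * b.m[2][j] + (j == 3 ? a.m[i][3] : 0.0f);
    return r;
}

// Per frame: palette[i] = currentPose[i] * inverse(basePose[i]),
// with both poses in world space. In the base pose this should come
// out as (nearly) identity matrices.
static void BuildPalette(Mat34 *palette, const Mat34 *currentPose,
                         const Mat34 *invBasePose, int numBones)
{
    for (int i = 0; i < numBones; i++)
        palette[i] = Concat(currentPose[i], invBasePose[i]);
}

// One blended matrix per unique weight list; every vertex sharing that
// list then needs a single branch-free matrix transform.
static Mat34 BlendWeightList(const Mat34 *palette, const Weight *w, int count)
{
    Mat34 sum = {};
    for (int i = 0; i < count; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 4; k++)
                sum.m[j][k] += palette[w[i].bone].m[j][k] * w[i].influence;
    return sum;
}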

OneEightHundred fucked around with this message at 21:19 on Jul 24, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

Actually, that is an MD5 model, but I am computing a base pose and base model at load time as I've never seen any other description on how to do it.
If you're going to do transformation on the GPU, then it's definitely the better way, since the alternative is using up all your texcoord/attrib streams to save a handful of vector multiplies.

Getting matrix palette skinning to work basically involves two steps: Getting it to work right in the base pose, then getting the animation to work right.

The reason for the first is that it's really easy to tell when the base pose is correct: Your transform matrices should all be identity matrices (or will be only slightly off due to floating point error). You get the transform matrix by computing (current * inverse(base)). Obviously, when current is base, you should be getting an identity matrix, meaning none of the vertices will get changed at all.

If you can't get them to be identity matrices, troubleshoot the computation I just mentioned. Make sure they're in world space and not bone space. Make sure you inverted the base poses after concatenating them to bring them into world space, not before.

If you can get the transform matrices to be identity and the result is still hosed up, then it means the vertices in your base model were calculated wrong.

If you can get it to that point, and it animates incorrectly, then your transform matrix is probably inverted, or the transform operation in your shader is backwards. The other possibility is that your base AND animated poses were both in bone space when you calculated the transform matrices, so make sure you have a way of debugging what the SKELETON looks like as well.

OneEightHundred fucked around with this message at 02:14 on Jul 25, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Easy way to tell is to translate the model along each axis and find out if it's going the same direction as it is in the editor. If it is, your projection matrix is flipped. If it isn't, your model was loaded flipped.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shazzner posted:

When is OpenGL 3.0 going to be released and will it be the next big thing?
Nobody knows; supposedly later this year.

Will it be the next big thing? No, because of the previous answer! The problem with OpenGL ever since D3D7 has been that it takes too long to get features through. D3D went through two major versions before OpenGL finally got a good render-to-texture extension. The spec for OpenGL 3 was supposed to be out in September LAST YEAR. Developers cannot adopt a graphics API that does not exist, and will not adopt a graphics API that continues to lag on features without providing a clear benefit. The Xbox 360 has made the portability argument less clear-cut, and the usability arguments have been fairly moot since D3D9 came out with a sane API.

quote:

OpenGL3.0 will just make them standard
The main difference is that OpenGL 3 eliminates some of the legacy cruft that hurts OpenGL 1-2's performance. Because of legacy support, OpenGL has been stuck with a lot of design decisions that were, in retrospect, not very good.

One example would be that texture dimensions are mutable, and replacing a texture mandates rescaling all of the mipmap levels even though they're likely to be overwritten immediately. In OpenGL 3, texture dimensions are immutable.

It also has asynchronous object creation, another performance boost.

As for what game makers will do, it depends heavily on how accepted Vista and D3D10 are. OpenGL 3 is probably going to be XP compatible, and will offer support for D3D9 and D3D10 hardware with one API. If D3D10 and Vista aren't widespread, it would provide a very compelling reason to use it over D3D.

OneEightHundred fucked around with this message at 00:55 on Jul 28, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Professor Science posted:

Real OGL3 != the OGL3 that was discussed until last year's SIGGRAPH. I assume they'll actually explain WTF happened at this year's SIGGRAPH (so a few weeks), but from what I've heard it's a far less radical change than what they announced (supposedly mobile people were unhappy that it would not look very much like OpenGL ES, and considering that's the only area where they've had a real lock on the market, back to the drawing board they went).
They've got a lock on the professional graphics market too. If the API really was that much different, I could see the concern, but most of it was taking out the trash, which I can't imagine would harm mobile. I thought most of the concerns were over technical issues, like mandating S3TC, and numerous fine points of the spec.

OpenGL ES isn't really just a mobile thing either; most of the current generation of game consoles use graphics libraries derived from it.

quote:

How exactly does 3d texturing work.
The same way 2D texturing works, except instead of a 2D texture coordinate that retrieves a value from a 2D texture map, you use a 3D texture coordinate that retrieves a value from a 3D texture map. Where the polygon is in space has nothing to do with it; you supply texture coordinates for each vertex.
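
In code it's just the 3D variants of the same calls (sketch; assumes a GL 1.2+ context and that volumeTex, the dimensions, and voxelData already exist):

code:
// Upload a 3D texture: same as glTexImage2D, plus a depth dimension.
glBindTexture(GL_TEXTURE_3D, volumeTex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, voxelData);

// GLSL side: sample with a 3D coordinate instead of a 2D one.
//   uniform sampler3D volume;
//   vec4 texel = texture3D(volume, gl_TexCoord[0].stp);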

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
You could compensate for fillrate by having the shader sample multiple points within the volume, which would obviously involve some refactoring.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

In my Quake 3 BSP loader I'm trying to get frustum culling to work but the results are far too overzealous.
As usual, try breaking the problem down. Try culling using just ONE plane perpendicular to the camera; that makes it pretty easy to tell whether your box-plane cull is working properly, since all of the culling will happen on one particular half of your screen. Another very useful tool in any sort of visibility optimization is being able to lock the current visible set, so you can walk around and see exactly what it's doing.

(You are using plane-side culling, right?)
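
The box-plane test itself is tiny, which is why checking one plane in isolation is such an easy sanity test. A sketch (untested; plane normals assumed to point into the frustum):

code:
struct Plane { float nx, ny, nz, d; };  // plane equation: n.p + d = 0

// True if an AABB is entirely on the negative (outside) side of a plane.
// Only tests the box corner most aligned with the plane normal.
static bool BoxOutsidePlane(const Plane &p, const float mins[3], const float maxs[3])
{
    float x = (p.nx > 0.0f) ? maxs[0] : mins[0];
    float y = (p.ny > 0.0f) ? maxs[1] : mins[1];
    float z = (p.nz > 0.0f) ? maxs[2] : mins[2];
    return (p.nx * x + p.ny * y + p.nz * z + p.d) < 0.0f;
}

// A box is outside the frustum if it's outside ANY of the six planes.
static bool BoxOutsideFrustum(const Plane planes[6], const float mins[3], const float maxs[3])
{
    for (int i = 0; i < 6; i++)
        if (BoxOutsidePlane(planes[i], mins, maxs))
            return true;
    return false;
}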

OneEightHundred fucked around with this message at 02:47 on Jul 31, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

I'm trying to figure out the best process to use to render a BSP with DirectX. The traditional way to render a BSP is to find what polygons (leafs) are visible from the camera and render them as you find them. This works fine and dandy with OpenGL, but the relatively high cost of draw calls with DirectX makes me wary.
Create a bitfield of visible surfaces. For every leaf in a visible cluster, mark the surfaces in that leaf as visible (surfaces can appear in multiple clusters!)

Iterate over the surface list to batch them. q3map already sorts geometry by texture and lightmap index so you can scan right through them and flush batches as soon as a change is detected.

If you're going to do everything with hardware shaders in your material system, create a static buffer and upload all of the drawvert data to that and just copy index ranges into an index buffer and flush the draw out when it fills up or a material/lightmap change is detected.
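
The marking pass is about this much code (sketch; the leaf/surface structures and ClusterVisible are made-up stand-ins for whatever your loader produces):

code:
// One bit per surface so shared surfaces only get batched once.
// (Fragment; assumes <stdlib.h> for calloc.)
unsigned char *visSurfs = (unsigned char *)calloc((numSurfaces + 7) / 8, 1);

for (int i = 0; i < numLeafs; i++)
{
    if (!ClusterVisible(pvs, cameraCluster, leafs[i].cluster))
        continue;

    for (int j = 0; j < leafs[i].numLeafSurfaces; j++)
    {
        int s = leafSurfaces[leafs[i].firstLeafSurface + j];
        visSurfs[s >> 3] |= (unsigned char)(1 << (s & 7));
    }
}

// Batching pass: surfaces are pre-sorted by shader/lightmap, so just
// scan the list, append index ranges, and flush when the material changes.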

For transparent stuff it's a bit more difficult, there are a lot of ways to handle it, and many engines these days are lazy and don't even bother sorting it because it's not the hot poo poo it was when Quake 3 came out.

quote:

Also, do anyone find that GameDev.net's forums are absolutely worthless for getting help?
GameDev.net's forums are mostly aspiring programmers with no experience, so yes.

OneEightHundred fucked around with this message at 21:32 on Aug 3, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

So what are some good fora for getting OGL help?
The OpenGL developer boards.
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=cfrm&c=3

#darkplaces and #neither on AnyNet have a bunch of dabblers in this sort of thing, including myself. Being able to bounce ideas off of other people has helped a shitload.

It's getting harder to recommend anything for OpenGL development because it's become so much more of a pain in the rear end to use. There are at least two ways to do everything now, which means it's really easy to get led into doing things that are completely out of date. And GLUT still blows.

I found that modding existing stuff was a much better starting point. Quake 3's renderer, or QFusion's (which is roughly the same thing), is easy to get started in because it doesn't use a lot of the really new stuff, but is still mostly based on design principles that still apply.


quote:

What i want to do now is visualize the light's depth buffer (render it to a quad). And i am a bit clueless as to how i can read a GL_DEPTH_COMPONENT texture from inside a shader (using a texture sampler).
You can't read depth values in GLSL; you can only compare them. There are performance reasons behind this; let's just say the card doesn't have to treat depth as a linear scale.

GLSL depth textures need to be sampler2DShadow. Sampling them with texture2D does not work on all hardware, so don't do it.

You need to set GL_TEXTURE_COMPARE_MODE to GL_COMPARE_R_TO_TEXTURE_ARB (do not leave it as GL_NONE!) and GL_TEXTURE_COMPARE_FUNC to whatever you want in the texture environment for the depth texture sampler. The sampler type needs to be sampler2DShadow. You can compare another depth value against it by using shadow2DProj(shadowSampler, coord).

(coord.s, coord.t) = the location on the shadow image to use. I think it divides those two by the W as well, to let you use a projection matrix to calculate it. The R coordinate is the depth value you're comparing against.

shadow2DProj returns a vec4 full of 0's if the comparison fails (according to the texture compare function you specify), and full of 1's if it passes.
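
Putting the pieces together (sketch; assumes the ARB_shadow tokens are available and shadowTex is a depth texture):

code:
// C side: enable hardware depth compares on the depth texture.
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);

// GLSL side:
//   uniform sampler2DShadow shadowMap;
//   varying vec4 shadowCoord;  // projective coords from your texture matrix
//   float lit = shadow2DProj(shadowMap, shadowCoord).r;  // 0 = fail, 1 = pass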

Most ATI hardware doesn't filter depth textures, so if you have an ATI card and it's giving you jaggy, ugly shadows, it's not your fault.


If you need to visualize the depth, then you'll need to render to a non-depth format. I think the only alternative is to read the depth values using glReadPixels, which is slow as gently caress, but will obviously work for debugging.

OneEightHundred fucked around with this message at 06:07 on Aug 4, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Actually, i set the compare mode to GL_NONE, and then i managed to read it using a sampler_2D and accessing the R-channel.
Okay, apparently you can read them if comparisons are turned OFF, and you can use shadow2DProj if comparisons are turned ON, doing otherwise makes it vomit. My bad.

quote:

Also, is it possible to do shadow-mapping with point-lights?
You'd have to make 6 shadowmaps and use a pseudo-cubemap-style solution to truly cover the full range. I say "pseudo" because GLSL has no support for depth texture cubemaps, so you'd have to fake it.

It's one of those things you probably want to fake. Point lights that aren't used purely for decoration usually have a limited angle range where they're affecting surfaces worth shadowing, so it's best to just cap the FOV boundaries to that range. Or if you REALLY want to cheat, cast the shadow isometrically using a single direction vector.

quote:

edit: also, agreed on OpenGL being a pain...ive been getting increasingly tempted to port my research code over to D3D and XNA, due to the vast amounts of documentation/tutorials and IDE integration that help you get the technical stuff out of the way easily...Plus i would get to run my stuff on my 360!!
Just about everything in D3D9 is doable in OpenGL. D3D's data input handling (that stream source and vertex declaration poo poo) is worse, but otherwise it's generally sane. D3DX is very helpful, and the documentation on MSDN is certainly much easier to browse than the encyclopedia they call the OpenGL spec.

XNA requires using C# for better or worse.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Since delphi3d.net is down, does anyone know what hardware supports the GL_EXT_framebuffer_sRGB extension?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ColdPie posted:

I've got what feels like a noob question. I'm trying to learn about using shaders, and things are going fine, except that I can't get it to run on my laptop! My stuff works fine on my desktop with its real OpenGL 2.x graphics card, but my laptop only supports OpenGL 1.4. However, I read that shaders were originally introduced in 1.4 as an extension. And, as a matter of fact, glxinfo reveals extensions such as GL_ARB_fragment_program and GL_ARB_vertex_program which sound like what I want.

I found calls like glCreateShaderObjectARB() in a sample program on the internet. Trouble is, it's in an if(GLEW_VERSION_1_5) block, while I'm at 1.4. When I bypass this check and force it to use the 1.5 calls, it segfaults on the glCreateShaderObjectARB() call.

How can I use GL shaders in a OpenGL 1.4 implementation?
"program" refer to the early pseudo-assembly shaders, not GLSL. Last I knew, Intel has no support for GLSL on anything lower than the X3100, and I'm not sure if it's on the X3100 either.

If you're going to target Intel hardware, you may want to check out Cg, which will allow you to target ARB programs and GLSL shaders with the same code.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Well, as I said, the only real alternatives for Intel hardware are ARB fragment/vertex programs, which are in a sort of pseudo-assembly language.

You can write shaders in that language directly (which isn't fun), or you can use Cg (which is an HLSL-like language) to target them. Cg also has a separate compiler (cgc) that dumps the high-level metadata in comments, so you don't have to use the Cg libraries to use the language.
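
For flavor, here's about the smallest ARB fragment program there is, along with the calls to load it (sketch; assumes the extension entry points are already resolved, error checking omitted):

code:
#include <string.h>

// Trivial ARB fragment program: output = texture sample.
static const char *fp =
    "!!ARBfp1.0\n"
    "TEX result.color, fragment.texcoord[0], texture[0], 2D;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(fp), fp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);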

There really aren't any other alternatives for programmable shaders until Intel decides to release a driver update.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Say i got an opengl FBO with 4 color attachments, and i want to clear only one of those attachments (that is, do the equivalent of glClear(GL_COLOR_BUFFER_BIT);). How do i do that?
Point glDrawBuffer at just that attachment, then glClear. (glColorMask only masks the R/G/B/A channels, and it applies to every draw buffer at once, so it won't isolate a single attachment.)
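
Something like this (sketch; assumes EXT_framebuffer_object and that the FBO is already bound):

code:
// Clear only color attachment 2 of the bound FBO.
GLint prevBuffer;
glGetIntegerv(GL_DRAW_BUFFER, &prevBuffer);   // remember current routing

glDrawBuffer(GL_COLOR_ATTACHMENT2_EXT);       // route output to one attachment
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

glDrawBuffer((GLenum)prevBuffer);             // restore

If you had multiple draw buffers bound with glDrawBuffersARB, restore with that instead.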

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Looks like depth buffer precision issues. Consider using D3DRS_SLOPESCALEDEPTHBIAS and D3DRS_DEPTHBIAS with D3D, or glPolygonOffset with OpenGL to nudge it. If you're reading the depth value in the shader, just offset it by a fixed amount.
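
The GL version is a two-liner (the values are a starting point, not gospel; tune them for your depth format):

code:
// Push depth-pass geometry slightly away in depth to fight acne.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);   // slope-scaled factor, constant units
// ... render the shadow/depth pass ...
glDisable(GL_POLYGON_OFFSET_FILL);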

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

sex offendin Link posted:

VBOs would be an advantage for anything that's generated exactly once and never modified for the entire session. For any element that might be frequently rebuilt, they're a wash or worse.
Heavily-updated VBOs are useful in cases where you want to dump geometry directly to its destination and not gently caress around with the buffering internally. It's not like you're benefiting from doing an extra copy to the GPU every time you draw. Drawing directly from system memory is deprecated in OpenGL 3 for that matter, and using VBOs exclusively is a good way to make porting to D3D easier if you ever decide to take that route.

Of course, there are two obscure things to keep in mind:

First is that writing into a mapped VBO can cause severe cache pollution, which is why the GPU manufacturers recommend doing uncached SSE writes to the GPU space after you map it. glBufferData/glBufferSubData do exactly that, but it also means that SSE-accelerated transforms can push results directly to the GPU with no performance hit.

Another obscure thing is mirroring D3D's "discard" behavior: If you're not going to use the data in a VBO any more and want to map it again, call glBufferData with a NULL data pointer to throw out the existing contents. Doing that will cause the driver to give you a new region of memory to work with if it's not done with the old one; not doing it will cause a stall until it's done using that region.
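
The discard-style pattern looks like this (fragment, not a full program; uses the ARB-suffixed entry points from ARB_vertex_buffer_object):

code:
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);

// Orphan the buffer: NULL data means "I don't care about the old contents."
// The driver can hand back fresh memory instead of stalling on the GPU.
glBufferDataARB(GL_ARRAY_BUFFER_ARB, bufSize, NULL, GL_STREAM_DRAW_ARB);

void *dst = glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
memcpy(dst, verts, bytesThisFrame);   // ideally streaming/uncached writes
glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);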

OneEightHundred fucked around with this message at 05:00 on Sep 25, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

magicalblender posted:

What's better, drawing pixels directly, or using textures? Poking around the OpenGL documentation, I see that I can put pixel data into a raster and glDrawPixels() it onto the screen. I can also put the pixel data onto a texture, put the texture on a quad, and put that onto the screen. If I'm just doing two-dimensional imaging, then which is preferable?
Using textures is almost always faster. The only exception might be if you're going to draw something for one frame before completely replacing it.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Eponym posted:

Ideally, I'd like to do something like D3DVector3 normal = pVert[index].Normal. My issue is that I don't know what to define struct VERTEX with or what order in which to define its elements. How do I find out the vertex format?
Create a vertex declaration to tell D3D how to read data from the structure (see the sketch after these links)
http://msdn.microsoft.com/en-us/library/bb172630(VS.85).aspx

Use CreateVertexDeclaration to create an instance of it
http://msdn.microsoft.com/en-us/library/bb174365(VS.85).aspx

Use SetVertexDeclaration to bind it
http://msdn.microsoft.com/en-us/library/bb174464(VS.85).aspx

Use SetStreamSource to assign the vertex buffer to a stream slot
http://msdn.microsoft.com/en-us/library/bb174459(VS.85).aspx
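
Put together, it's something like this (sketch for a position/normal/UV vertex; "device" is your IDirect3DDevice9, error checking omitted):

code:
struct VERTEX { float pos[3]; float normal[3]; float uv[2]; };

D3DVERTEXELEMENT9 elements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9 *decl = NULL;
device->CreateVertexDeclaration(elements, &decl);
device->SetVertexDeclaration(decl);
device->SetStreamSource(0, vertexBuffer, 0, sizeof(VERTEX));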

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

brian posted:

As for the fixed function pipeline I was under the impression by using any vertex or fragment shader it goes around the usual processes that gl performs (like having to transform vertices with ftransform() or modelview * vertex).
ftransform does the same calculations that fixed-function would do though, so if your fixed-function settings are hosed up, ftransform will likely reflect it.

Also, keep in mind that blending, stencil, depth, and alpha test are not currently handled by shaders.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Don't most consumer graphics cards do accumulation buffers in software, punching your framerate in the face?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Luminous posted:

So, my question is simply: what am I missing? Please tell me I have just missed something completely minor so that I can get along. Below is a sampling of the errors, if it helps.
Did you include the D3DX libs in your project?

Right-click the project, select Properties, go to the Linker subsection, select Input, and then check the Additional Dependencies line.

You need d3dx10d.lib for the debug build and d3dx10.lib for the release build.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

Got a question about z-fighting issues. I'm rendering layers of roughly 80*80 objects, with roughly 20-40 layers stacked atop each other at any time. There is no computationally non-expensive way to cut out invisible geometry that i am aware of, so i pretty much have to render everything at all times. This leads to the z-buffer getting filled up pretty fast, resulting in rather ugly occasions of polys clipping through other polys.

I've already restricted the near and far clipping planes as much as possible so the geometry has the full z-buffer to play with
What are you using for the z-buffer that you're only getting 40 layers of precision? Make sure your nearplane isn't too close in particular.

As for a computationally-inexpensive way to occlude objects, try zoning them and using occlusion queries. An occlusion query lets you know if (and how much of) some geometry is visible. Draw the world (occluders), then disable color/depth writes and "draw" the bounding volume of a zone that a bunch of objects could be contained in. Repeat for all zones. If the zone isn't visible, don't draw any objects fully contained in it.

For voxel-based stuff there's also octree occlusion, but I'm less familiar with that.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

Try scrolling up the page to see screenshots. :)
Still not sure I understand where it's breaking down. What precision are you creating the depth buffer at?

If you're creating a complete 3D voxel grid, then you can draw it in order by just using the view axis. i.e. take the sign of each component of the view direction of the camera, and use that to determine which order to iterate through those components in the voxel array.

You can also squeeze more precision out by segmenting the world and doing multiple draw calls. This means that you'd chop your near/far range up into segments, draw anything visible in the farthest segment, clear your depth buffer, draw anything visible in the next-closest segment, repeat.

quote:

That might actually work ... Do you maybe have some kind of example code for that?
No.

You don't want to check each item, you want to check zones, and if the zone hull is culled, then everything in the zone is considered not visible. A zone would be a room, for example, and what you'd "draw" is a box for the bounds of the room.

I don't have example code, but it's asynchronous: it works by creating an occlusion query, drawing some stuff, ending the occlusion query, and then later, once the card has finished the query, checking how much it drew. You don't want to check immediately if you have other queries you can run.

By disabling depth/color write, it won't actually draw anything, but it will check the visibility of the polygons you're sending.
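
Roughly (untested sketch; DrawZoneBounds and the zone struct are placeholders):

code:
// Issue the query: "draw" the zone's bounding hull with writes disabled.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, zone->query);
DrawZoneBounds(zone);
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

glDepthMask(GL_TRUE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Later, once the result is ready (do other work in the meantime):
GLuint available = 0, samples = 0;
glGetQueryObjectuivARB(zone->query, GL_QUERY_RESULT_AVAILABLE_ARB, &available);
if (available)
{
    glGetQueryObjectuivARB(zone->query, GL_QUERY_RESULT_ARB, &samples);
    zone->visible = (samples > 0);
}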

There's some trivial sample code here:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

Ah, i see. He's suggesting i do the job of the z-buffer in-code. Sadly that's not an option although i'll need to do it later on for objects with transparency. Reason for that not being an option is that it would massively increase the amount of work that Perl needs to do, which is what i'm trying to avoid. :)
Just use range buckets for the objects.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

I was tempted to post an image of a golf ball bucket here. But to be serious, I do not know what "range buckets" are. I can make vague guesses, which lead me to believe two things: Either you are talking about some technique that is completely unknown to me, or you're ignoring the display list structures i described in earlier posts around which i cannot work around. Color me confused.
It just means having a bunch of lists of objects in various distance ranges (range "buckets") because it's faster and simpler than re-sorting an object list with qsort or something. You distance-sort the scene by just adding objects to the buckets they belong in, then drawing the buckets in order, possibly sorting the bucket contents.
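
A sketch of the idea (Object, Distance, and DrawObject are placeholders):

code:
#include <vector>

const int NUM_BUCKETS = 32;
std::vector<Object *> buckets[NUM_BUCKETS];

// O(n) "sort": drop each object into a coarse distance bin.
for (int i = 0; i < numObjects; i++)
{
    float dist = Distance(cameraOrigin, objects[i]->origin);
    int b = (int)(dist * NUM_BUCKETS / maxViewDistance);
    if (b < 0) b = 0;
    if (b >= NUM_BUCKETS) b = NUM_BUCKETS - 1;
    buckets[b].push_back(objects[i]);
}

// Draw far-to-near (for transparency); reverse the loop for opaque stuff.
for (int b = NUM_BUCKETS - 1; b >= 0; b--)
    for (size_t j = 0; j < buckets[b].size(); j++)
        DrawObject(buckets[b][j]);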

Although you're right, it won't work with display lists. (Stuff like this is why I don't like display lists!)

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

but i don't think it'd give me the granularity i'm aiming for.
The point is that it'll let you get more resolution out of the depth buffer since you can adjust the near/farplane for each bucket.

quote:

I also jsut tried to get the occlusion stuff implemented, but ran into a few roadblocks, so i have a few questions: Is it possible to do the occlusion query stuff in another way than using glBeginQueryARB, etc? I know it's not very likely, but better to ask than to stay ignorant.
No. What roadblocks are you running into?

quote:

Secondly, there's a similar HP extension for that. Does it only work on special gfx cards or why would glGetBooleanv(GL_OCCLUSION_TEST_RESULT_HP, &result) return the error "Unknown param"?
The HP extension stalls if you're running multiple queries, and its occlusion result is all-or-nothing. Beyond that, the only extra work with the ARB extension is creating/destroying query objects once in a while, so I'd recommend using that one instead.

OneEightHundred fucked around with this message at 02:53 on Nov 1, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

glBeginQueryARB and associated commands are not yet implemented in the Perl OpenGL library. :)
I'm pretty sure POGL will let you make core OpenGL 1.5 contexts, in which case everything will work the same except that the occlusion query functions are core, meaning they don't have "ARB" at the end.

quote:

I know it's pretty horrible in comparison, but if i can get it to work, it just may suffice. Since it only relies on commands that are actually implemented already, it might work; unless it's hardware-specific.
I see no reason it shouldn't. Are you sure you're getting a context from the ICD and not the lovely stock Windows OpenGL implementation?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

As far as i am aware i can only use the functions that are defined in the .xs, compiled into the .dll and exported in the .pm. The only function with the word "query" in the name for which this is true is glpXQueryPointer.
POGL definitely has a glBeginQuery binding in everything starting from the OpenGL 1.5 bindings and up. How to get that working, I don't know.

quote:

Again, i don't know what that means. Sorry. :/
If you link against opengl32.lib, then for whatever reason the entry points (read: functions) you get are from the lovely stock Windows OpenGL library. You need to make sure the project is set up to dynamically load the DLL and get the entry points with GetProcAddress, which will use the ICD version (a.k.a. the one that comes with your video drivers, which is fully featured) instead.
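
In C it looks something like this (sketch; assumes <windows.h> and the glext.h typedefs, and that a GL context is already current):

code:
// Pull everything through proc addresses instead of opengl32.lib.
HMODULE glLib = LoadLibraryA("opengl32.dll");

// wgl* and core 1.1 functions are plain DLL exports:
typedef PROC (WINAPI *WGLGETPROCADDRESSPROC)(LPCSTR);
WGLGETPROCADDRESSPROC pWglGetProcAddress =
    (WGLGETPROCADDRESSPROC)GetProcAddress(glLib, "wglGetProcAddress");

// Anything newer than 1.1 goes through wglGetProcAddress, which returns
// the ICD (driver) entry point. A context must be current for this.
PFNGLBEGINQUERYARBPROC qglBeginQueryARB =
    (PFNGLBEGINQUERYARBPROC)pWglGetProcAddress("glBeginQueryARB");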

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'm just going off what I see in the package and I really don't know what the gently caress, so the best I can do is tell you where I see things that look like what you want.

Mithaldu posted:

Can you point out where you get that information from so i at least have a clue where to start looking? I mean, i'm looking at the source code for the module here, straight from CPAN, BEFORE compiling etc., and the only mentions of these functions are in the opengl includes files, but NOT in the perl files themselves.
I can't read Perl anyway, and unfortunately I don't feel like dissecting a makefile written in it. POGL has bindings for OpenGL 1.5 in utils/gl_1_5.txt, and the .xs file LOOKS like it should be using GetProcAddress for everything and just grabbing entrypoints from your own opengl32.dll, but that doesn't explain why it's refusing to compile without opengl32.lib so I have no idea.

quote:

Is there a good way to check whether glBeginQuery was executed correctly or am i gonna have to fly blind?
Easy way to verify it's working at least is to just BeginQuery, draw something, EndQuery, then get the pixel count and see if it's actually what you expect it to be.

Although realistically, if your program doesn't instantly crash from calling a bad entry point, it probably executed correctly.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

Edit2: Apparently i'd need to convince it to load nvoglnt.dll somehow. As i'm not too firm in c (beyond basic syntax and compiling stuff in visual studio), is there a simple way to edit the makefile/c file to make that happen?
Using LoadLibrary on opengl32.dll should do this automatically. I don't know why it works this way; all I know is that static-linking opengl32.lib is bad for your health.

quote:

Edit3: Nvm, got it. They apparently do both, link against the windows opengl library AND automatically load from the driver.
You should try getting rid of whatever dependencies it has on opengl32.lib; the last thing you want to do is mix calls.

quote:

Edit5: All functions implemented and the occlusion queries are working now in the test implementation. I hope with that the hard part is over ...
:hellyeah:

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

I actually think it's meant to be that way, since, uh, if you do it manually with LoadLibrary, you'd need to try for all possible driver dlls, and keep your code up-to-date as drivers change, right?
Loading opengl32.dll will get the appropriate DLLs. Apparently you'd want to use vanilla GetProcAddress for the regular entry points and wglGetProcAddress for everything else, but I really don't know how well that behaves with various OpenGL versions. Best practice is to grab the whole interface through proc addresses rather than static linking.

I may be wrong on this whole thing too.

quote:

Also, while i'm looking at it. What's the glFlush there good for?
It's redundant; checking query availability automatically flushes. NV_occlusion_query didn't do that, so it's probably a leftover from that.

It's not a terrible idea to flush if you're going to do other things before you request the query results though.

OneEightHundred fucked around with this message at 20:12 on Nov 2, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

BernieLomax posted:

Sorry if this has been asked before, but I have problems with precision using the depth buffer. First I'm rendering stuff to several depth buffers, and afterwards I want to merge them by drawing them on top of each other with depth testing enabled. And afterwards I am using the depth test GL_EQUAL in order to shade everything. However it does not work very well. I do find that by using buffers with the same precision I get better results, but I have yet to find a combination that works. Any ideas?
You can't really do this because the depth buffer values range from 0 to 1; the projection matrix remaps distances between the nearplane and farplane to that range. That means if you have, say, one depth range...

Camera --> N[----A----B----]F
... and chop it into two depth slices...
Camera --> N[----A--]F N[--B----]F

... then you need to clear the depth buffer after you draw the distant slice anyway, otherwise you can get out-of-order objects like this crap:
Camera --> N[--B-A--]F

The drawback of course is that by clearing the depth buffer, you are eliminating B from the depth buffer. What you need to do is make sure that any objects that are in a depth slice (even partially) get drawn in that slice.

.... Alternatively, make sure you're creating the depth buffer with a reasonable precision and try adjusting your nearplane/farplane values.
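
If you do go the slicing route, the frame loop is basically this (sketch; SetProjection and the draw calls are placeholders):

code:
// Far slice first: remap [mid, far] onto the full 0-1 depth range.
SetProjection(fov, aspect, midPlane, farPlane);
DrawObjectsIntersectingRange(midPlane, farPlane);

// Throw away the far slice's depth before drawing the near slice.
glClear(GL_DEPTH_BUFFER_BIT);

// Near slice: remap [near, mid] onto the full 0-1 depth range.
SetProjection(fov, aspect, nearPlane, midPlane);
DrawObjectsIntersectingRange(nearPlane, midPlane);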

OneEightHundred fucked around with this message at 21:36 on Nov 3, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

Edit: Also, question: How would i go about making a polygon render visibly from both sides without rendering it twice?
glDisable(GL_CULL_FACE)


If you want the two sides to render differently, you either need to render twice or use two-sided rendering pixel shaders which I haven't dealt with yet.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Tap posted:

I've recently been inspired to start game programming as a hobby, but I know virtually nothing about the subject. Do you guys have any literature you'd recommend for a beginner? I'd like to get started with basic stuff like vertexes and pixel shading, etc..
If you want to dive into the iPhone, please start making 2D stuff so you can get the hang of the API without having to deal with the difficulty of 3D content creation.

Mithaldu posted:

Thanks. I needed it to make the occlusion check shapes also render when the camera is inside them.
Don't do that: not only does it break if something inside the volume is obscuring the outer hull, but it's much cheaper to just use convex visibility volumes and check if the camera point is inside.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mithaldu posted:

Is that another awesome OpenGL technique i never heard of or is it just a fancy way to say "check where your camera is"? :)
Basically, yes. It's really cheap to check if a point is inside a volume, so just do that, and if the camera is inside (or nearly inside, so you don't run into weird nearplane fuckery in some instances), then all objects inside the volume should be considered visible.
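
The whole test is a few compares (sketch):

code:
// "Camera is inside (or nearly inside) this visibility volume."
// 'slack' expands the box a bit to dodge nearplane edge cases.
static bool PointInBox(const float p[3], const float mins[3],
                       const float maxs[3], float slack)
{
    for (int i = 0; i < 3; i++)
        if (p[i] < mins[i] - slack || p[i] > maxs[i] + slack)
            return false;
    return true;
}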

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Getting geometry from modeling software is far less about the software than about the FORMAT. The formats used by modeling packages are horribly unsuitable for being loaded directly by a game engine, so you always want the model exported in a format you can handle, tailored to the capabilities of your engine.

The project I'm working on uses its own format and imports that from other easier-to-parse formats like MD5, SMD, or PSK/PSA. COLLADA and FBX are the dominant intermediate formats right now and are supported by virtually every decent modeling package, but they're still much more complex than game-specialized formats.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
A custom format is the way to go, but the way you go about doing it is also kind of important.

The best candidates for import formats if you're doing skeletal animation are MD5 (Doom 3, text), PSK/PSA (Unreal Engine, binary, see this page for specs), SMD (Source, text), and Cal3D (open source, binary).

SMD is the only one of the bunch that will export normal data, but it's definitely something you DON'T want to load directly, unlike the other three. It contains a ton of duplicated data. Personally, I recommend PSK/PSA because it uses matrix palette blending implicitly, which is much better for doing transforms on the GPU and doesn't require doing stupid poo poo to avoid branch mispredictions on the CPU. TMRF uses a hybrid format which doesn't favor any weight blending method, and the skeletal library uses preblended transform matrices which are very SIMD-friendly.

Supporting COLLADA would be an extremely desirable goal: Every major modeling package supports it, it allows normal/tangent/binormal data to be exported so seams are never a problem, and its ubiquity futureproofs it.

The only problem is that it's HARD to parse. Getting meshes to import requires digging through several more layers of complexity than dealing with one of the aforementioned formats, and animations... well, let's just say that NONE of the major open source 3D engines will import COLLADA animations yet.

OneEightHundred fucked around with this message at 21:42 on Dec 12, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ultra-inquisitor posted:

How reliable are non-pow2 textures in OpenGL? They've been promoted to the core spec and I've never had any problem with them, but I've only had a very limited range of cards to test on (all nvidia). I've just come back to graphics coding after a pretty lengthy absence - are they still slow, or is the speed difference negligible?
On my X1600 Pro, they cause a massive framerate hit.


OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ultra-inquisitor posted:

Ok, that's a bit more drastic than I was expecting, and pretty conclusive. I was actually only considering using one for a full-screen overlay (ie using one 1024x768 texture) because using a 1024x1024 is causing slight but noticeable scaling artifacts. I don't think this is going to impact performance - I'm more worried that drivers (especially ATI's) will be dodgy and give me the old white texture routine.
As far as I know, rectangle textures (see: ARB_texture_rectangle) give good speed on all hardware; you just have to deal with the fact that the coordinates aren't normalized and you can't mipmap them.

edit: Yes, you can use a subset of a larger texture, but get a full-screen overlay on a 1680x1050 screen and tell me if you can find a better use for the 7MB of VRAM you're wasting. :)

OneEightHundred fucked around with this message at 18:20 on Dec 15, 2008
