|
HB posted:glEnable(GL_CLIP_PLANE0); I have it enabled alright, to no effect. If I try 0,0,1,0 and 0,0,-1,0 or 0,1,0,0 and 0,-1,0,0 as parameters at least one of them should clip something, right?
|
# ? May 14, 2008 23:52 |
|
|
|
heeen posted:I have it enabled alright, to no effect. If I try 0,0,1,0 and 0,0,-1,0 or 0,1,0,0 and 0,-1,0,0 as parameters at least one of them should clip something, right? If you set <1,0,0,0> (or <-1,0,0,0>) you should see exactly half of your scene. Are you doing anything weird with your projection matrix? haveblue fucked around with this message at 00:31 on May 15, 2008 |
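For reference, a vertex passes a user clip plane when the plane equation evaluates non-negative at it, so <1,0,0,0> keeps exactly the x >= 0 half of the scene. A quick sketch of that math (plain Python, function name my own):

```python
def clipped(plane, vertex):
    """Return True if the vertex is discarded by the clip plane.

    plane:  (A, B, C, D) as passed to glClipPlane
    vertex: (x, y, z), in the same coordinate space as the plane
    """
    a, b, c, d = plane
    x, y, z = vertex
    # OpenGL keeps a vertex when A*x + B*y + C*z + D*1 >= 0
    return a * x + b * y + c * z + d < 0

# <1,0,0,0> discards the x < 0 half and keeps the rest
assert clipped((1, 0, 0, 0), (-2.0, 0.0, 0.0))
assert not clipped((1, 0, 0, 0), (3.0, 0.0, 0.0))
```

If neither <0,0,1,0> nor <0,0,-1,0> clips anything at all, the plane is probably being transformed by an unexpected matrix on its way in, which is why haveblue asks about the projection setup.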
# ? May 15, 2008 00:26 |
|
tyrelhill posted:I'm starting some research for a project I start in a few months and want to ask a few questions here about some good sources. By now, of course, you have the Design Patterns book by the Gang of Four? I've found the composite pattern to be unbelievably useful, especially combined with the command pattern. Most of my experience is with that book, so if you had some more specific resources, I'd love to see them.
|
# ? May 15, 2008 06:10 |
SDL and OpenGL+Blending Question Here. I have a multi-function (multipass?) rendering setup in my game. Ideally, I'd like to be able to add all the lights into one surface/buffer, then multiply that buffer by a tile layer to create the illusion of lighting. My functions are: MapManager::drawTiles(); LightManager::drawLights(); I can change the blending modes outside/between functions, but I've tried every permutation of blending modes and drawing order to no avail. What glBlendFunc should I be using? In what order should I do it? What I'm getting: What I want: What I want with multiple lights:
|
|
# ? May 15, 2008 07:55 |
|
edit: wrong quote You have to use additive blending (GL_ONE, GL_ONE) if each light pass already contains the diffuse color of the tiles, or you can first render all the lights and then multiply the result with the diffuse color of your scene. heeen fucked around with this message at 10:34 on May 15, 2008 |
# ? May 15, 2008 10:26 |
heeen posted:edit: wrong quote As an extension of my original question: What blending mode performs a multiply?
|
|
# ? May 15, 2008 18:12 |
|
Jo posted:As an extension of my original question: What blending mode performs a multiply? e: I wasn't thinking. Setting the texture mode to modulate will cause the texel color to be multiplied into the fragment color. If you want the destination to be assigned the product of the results of 2 texture units, you need to use the texture combiner extension. haveblue fucked around with this message at 18:27 on May 15, 2008 |
# ? May 15, 2008 18:23 |
HB posted:e: I wasn't thinking. Setting the texture mode to modulate will cause the texel color to be multiplied into the fragment color. Does this mean I have to render to a texture to blend? Also, wasn't multitexturing built into OpenGL since version 1.2.1? EDIT: God loving damnit, I think my overlay rendering function was changing the blending mode. EDIT Again: For posterity, here are the blending modes and what I did: glBlendFunc( GL_ONE, GL_ZERO ); // 'Replace mode' Screen = WhateverYouDraw*1 + Screen*0 OR glBlendFunc( GL_SRC_ALPHA, GL_ZERO ); // If you use the alpha channel for intensity. Good for colored lights, it seems. // Draw ambient light here. glBlendFunc( GL_ONE, GL_ONE ); // 'Add mode' Screen = WhateverYouDraw*1 + Screen*1 OR glBlendFunc( GL_SRC_ALPHA, GL_ONE ); // Draw the dynamic lights glBlendFunc( GL_DST_COLOR, GL_ZERO ); // 'Multiply mode' Screen = NewTile*previousLuminosity/Color + Screen*0 // Draw tiles and sprites Jo fucked around with this message at 20:01 on May 15, 2008 |
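Jo's three modes are just different factor choices in the one blend equation, result = src*src_factor + dst*dst_factor, clamped to [0, 1]. A small simulation of that arithmetic (Python, purely illustrative; GL does this per fragment in hardware):

```python
def blend(src, dst, src_factor, dst_factor):
    """Simulate glBlendFunc per channel: min(1, s*sf + d*df).
    Factors are passed already evaluated, e.g. GL_DST_COLOR means
    the factor for each channel equals the destination color."""
    return [min(1.0, s * sf + d * df)
            for s, d, sf, df in zip(src, dst, src_factor, dst_factor)]

src = [0.5, 0.5, 0.5]   # what you're drawing
dst = [0.2, 0.4, 0.8]   # what's already in the framebuffer
one, zero = [1.0] * 3, [0.0] * 3

replace  = blend(src, dst, one, zero)   # GL_ONE, GL_ZERO
add      = blend(src, dst, one, one)    # GL_ONE, GL_ONE (clamps at 1.0)
multiply = blend(src, dst, dst, zero)   # GL_DST_COLOR, GL_ZERO
```

Replace leaves only the source, add sums (so stacked lights brighten until they clamp), and multiply darkens the framebuffer by the source, which is why it works for applying a light buffer to tiles.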
|
# ? May 15, 2008 18:33 |
|
HB posted:If you set <1,0,0,0> (or <-1,0,0,0>) you should see exactly half of your scene. I found the solution, I have to use gl_ClipVertex in my vertex shader to use clipping planes. I haven't worked out the transformation issues yet, though. Seems like I can just use ftransform when my clipping plane is specified in eye coordinates, but beyond that, I don't know yet. edit: 8) How does user clipping work? DISCUSSION: The OpenGL Shading Language provides a gl_ClipVertex built-in variable in the vertex shading language. This variable provides a place for vertex shaders to specify a coordinate to be used by the user clipping plane stage. The user must ensure that the clip vertex and user clipping planes are defined in the same coordinate space. Note that this is different than ARB_vertex_program, where user clipping is ignored unless the position invariant option is enabled (where all vertex transformation options are performed by the fixed functionality pipeline). Here are some tips on how to use user clipping in a vertex shader: 1) When using a traditional transform in a vertex shader, compute the eye coordinates and store the result in gl_ClipVertex. 2) If clip planes are enabled with a vertex shader, gl_ClipVertex must be written to, otherwise results will be undefined. 3) When doing object-space clipping, keep in mind that the clip planes are automatically transformed to eye coordinates (see section 2.11 of the GL 1.4 spec). Use an identity modelView matrix to avoid this transformation. heeen fucked around with this message at 18:37 on May 16, 2008 |
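Tip (1) from the quoted DISCUSSION, as a minimal GLSL vertex shader (a sketch for the case heeen describes, with the clip planes given in eye coordinates):

```glsl
void main()
{
    // ftransform() matches the fixed-function position transform
    gl_Position = ftransform();
    // Hand the clipping stage an eye-space coordinate so it lives in
    // the same space as the (eye-space) user clip planes
    gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
}
```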
# ? May 16, 2008 18:24 |
|
Does anyone have some good texture resources? Preferably with normal/spec/bump maps.
|
# ? May 19, 2008 21:45 |
|
I'm writing an MD2 (Quake 2 model) importer for XNA right now. If you don't know, the MD2 format stores the texture coordinates with the triangle indices. This allows two triangles that share a vertex to not share a UV coordinate at that position. The problem I'm having is that I see no way to emulate that behavior in my VertexBuffer. What I'm doing now is to split any vertex with multiple sets of UV coordinates into that many separate vertices. The problem with this method is that it makes it impossible to accurately determine the normals as the model animates. I have to position the model, calculate the normals, then resplit the vertices every frame. That seems a tad bulky.
|
# ? May 20, 2008 21:10 |
|
I'm considering tossing my hat into This thread so I'm thinking pygame, since for me, python is more a "fun project" language than C++ in many ways. Can anyone with experience tell me how much I need to worry about performance, if I'm going to use it for a 2d game? Last time I made a game it was SDL in C++ where I could toss out sprites without a care in the world, but I'm guessing that won't be the case here! Is it easy to use SDL in python? How about OpenGL, is it easy compared to C++ and is it worth it for a relatively low res 2d game?
|
# ? May 20, 2008 22:01 |
|
MasterSlowPoke posted:What I'm doing now is to split any vertex with multiple sets of UV coordinates into that many separate vertices. The problem with this method is that it makes it impossible to accurately determine the normals as the model animates. I have to position the model, calculate the normals, then resplit the vertices every frame. That seems a tad bulky.
|
# ? May 20, 2008 22:25 |
|
Regarding the clip plane stuff: A good range of ATI cards have no hardware clip plane support, so using clip planes will kick rendering back to the software rasterizer. gl_ClipVertex support is spotty as well. For near-plane clipping, there's a technique called oblique depth projection which basically lets you set the near plane to an arbitrary plane instead of one perpendicular to the camera, giving you the same effect. http://www.terathon.com/code/oblique.php There are some caveats: it breaks down when the plane gets too close to the camera, and I doubt it works properly if the portion of the plane within the view frustum isn't entirely in front of the near plane. MasterSlowPoke posted:I'm writing an MD2 (Quake 2 model) importer for XNA right now. If you don't know, the MD2 format stores the texture coordinates with the triangle indices. This allows two triangles that share a vertex to not share a UV coordinate at that position. The problem I'm having is that I see no way to emulate that behavior in my VertexBuffer. OneEightHundred fucked around with this message at 03:01 on May 21, 2008 |
# ? May 21, 2008 02:46 |
|
OneEightHundred posted:Build a list of unique point/UV combinations and reindex the triangles to that. The problem is that would cause the normals to be calculated incorrectly, as the normals of all the triangles that share a split vertex wouldn't be averaged. I know it's a small and probably undetectable error, but I might as well do it right the first time. I guess the lookup table is a good solution; I'm thinking I should use a Dictionary to do it?
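The dictionary-based reindexing being discussed might look roughly like this (a Python sketch with made-up names; an XNA version would use a Dictionary<,> the same way):

```python
def split_vertices(triangles):
    """triangles: list of triangles, each a tuple of three
    (pos_index, uv_index) corners.  Returns (vertices, indices) where
    vertices holds each unique (pos_index, uv_index) pair once and
    indices references into it, three entries per triangle."""
    lookup = {}        # (pos_index, uv_index) -> new vertex index
    vertices = []
    indices = []
    for tri in triangles:
        for corner in tri:
            if corner not in lookup:
                lookup[corner] = len(vertices)
                vertices.append(corner)
            indices.append(lookup[corner])
    return vertices, indices

# two triangles sharing positions 1 and 2, but with different UVs at position 1
tris = [((0, 0), (1, 1), (2, 2)),
        ((1, 3), (2, 2), (3, 4))]
verts, idx = split_vertices(tris)
# position 1 gets split into two vertices; position 2 stays shared
```

Normals can still be averaged per original position: accumulate face normals into a per-pos_index array first, then copy each averaged normal to every split vertex that shares that position, so the split doesn't break the smoothing.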
|
# ? May 21, 2008 06:17 |
|
MasterSlowPoke posted:The problem is that would cause the normals calculated incorrectly, as the normals of all the triangles that share that split vertex wouldn't be averaged. I know it's a small and probably undetectable error but might as well do it right the first time. EDIT -- You don't really even need to do that with MD2 because it stores the normals with the point. Points are stored as 3 bytes for the vertex position and a 1-byte lookup into a precomputed normals table. http://tfc.duke.free.fr/coding/src/anorms.h Out of curiosity, why are you using MD2? MD3's a higher-precision format which has the same capabilities and then some. OneEightHundred fucked around with this message at 06:31 on May 21, 2008 |
# ? May 21, 2008 06:20 |
|
I'm calculating the normals because I'm sure they'll be more correct than a static normal from a lookup table. I'm using MD2 to learn how to load models from a file. It's a rather straightforward format and I don't have to worry about skeletal animation or anything complicated. When I'm done with this I'll move on to Half-Life's MDL and MD3.
|
# ? May 21, 2008 06:57 |
|
I'd recommend staying away from Half-Life MDL. It's an awful clusterfuck of a format. If you want a skeletal format that's not too hard to write a parser for, try MD5, Unreal PSK/PSA, or Cal3D. Half-Life and Source SMD files are easy to parse, but they're very far from what you want the internal representation to be so I'd recommend writing something to compile it to your own format if you want to use it.
|
# ? May 21, 2008 08:36 |
|
MasterSlowPoke posted:I'm calculating the normals because I'm sure they'll be more correct than a static normal from a lookup table. It might be decent practice, but I'd recommend focusing on more modern formats; HL's MDL format is ancient and might teach you bad ideas. You should consider adding COLLADA to your list; it's easy to parse (XML) and totally hip.
|
# ? May 21, 2008 12:33 |
|
The problem with HL MDL is that it's a typical Valve format, one that fails to separate the implementation details from the file format itself. It's not just an old format, it's a BAD format. I wouldn't describe COLLADA as easy to parse either. Even with FCollada, which does a very good job of parsing it and getting it into a usable format, practically everything has an additional layer of complexity that you have to work through, and everything's indexed separately which means it takes a good deal of work to get it into a usable format. I'm writing a COLLADA importer for the project I'm working on right now, and it's definitely the most difficult format to work with of the ones I mentioned in my last post. OneEightHundred fucked around with this message at 23:05 on May 21, 2008 |
# ? May 21, 2008 23:03 |
|
OneEightHundred posted:I wouldn't describe COLLADA as easy to parse either. Even with FCollada, which does a very good job of parsing it and getting it into a usable format, practically everything has an additional layer of complexity that you have to work through, and everything's indexed separately which means it takes a good deal of work to get it into a usable format. I'm writing a COLLADA importer for the project I'm working on right now, and it's definitely the most difficult format to work with of the ones I mentioned in my last post. I agree to some extent in that it's a very general-purpose format, with lots of data you might not need (and it shouldn't be used as a final engine model format), but in my experience it's not that hard to get the relevant data out of there, once you get a feel for how it's all connected (this may be the hardest part, as the documentation is quite lacking). Still, I think it's very useful to learn because it's growing as a standard in the industry. A good exercise might be to convert some other model format to COLLADA (using FCollada), or the other way around.
|
# ? May 22, 2008 16:49 |
|
The single biggest selling point for COLLADA is that there are open source exporters for both Maya and Max. Even if you have to extend the exporters, then having a solid(ish) starting point is an absolute godsend for those starting afresh on a toolchain.
|
# ? May 22, 2008 17:24 |
|
Adhemar posted:Still, I think it's very useful to learn because it's growing as a standard in the industry. quote:A good exercise might be to convert some other model format to COLLADA (using FCollada), or the other way around. TSDK posted:The single biggest selling point for COLLADA is that there are open source exporters for both Maya and Max. Authoring plug-ins for multiple versions of multiple modeling packages is a severe pain in the rear end compared to supporting one format. Being able to export one format and have the whole middleware market support it is a great selling point for the modeling software developers.
|
# ? May 22, 2008 20:39 |
|
OneEightHundred posted:Not really, FBX occupies the same niche and is closed-source.
|
# ? May 23, 2008 09:34 |
|
TSDK posted:Being closed source is a major disadvantage when putting toolchains together. I've yet to see one 'standard' file format that didn't need either extending or bug-fixing in one way or another to meet all of the requirements for a project.
|
# ? May 24, 2008 02:57 |
|
OneEightHundred posted:FBX and COLLADA both contain a shitload of information, I'm really not sure what more you'd want out of them. Not to mention that COLLADA allows you to elegantly add your own additional data (using "<extra>" tags and plug-ins), which doesn't confuse existing third party apps that use the format.
|
# ? May 24, 2008 10:01 |
|
From my university, I get all Microsoft development tools in all versions for free. What exactly do I need nowadays to do game programming on this basis (for Windows)? I used to do a little with DirectX 7 and Visual Studio 6, but I can't seem to find the SDK for Visual Studio 05 or whatever is the most recent version right now. How does this work? What the hell is XNA? Edit: Or to be more precise: Since I have the advantage of getting MS software over MSDNAA, is it a good idea to use Visual Studio etc. for game development, or should I use the Express Versions and XNA anyway? I won't be doing top level development, obviously, but I'd like flexibility and I don't know about the Express Versions when I can have the "real" thing? Boner Slam fucked around with this message at 12:45 on May 24, 2008 |
# ? May 24, 2008 12:31 |
|
Boner Slam posted:From my university, I get all Microsoft development tools in all versions for free. Here's the latest DirectX (10) SDK: http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&displaylang=en, which you can use with Visual Studio (2008 is the latest version). XNA is a framework specifically for game development that has a lot of stuff already built in. One nice feature is that you can compile the same code to run on Windows as well as on an Xbox 360. It's all managed code. The drawback is that it's Microsoft-controlled, so (AFAIK) you are restricted to Windows and the Xbox, and there are some licensing restrictions I believe. Also, since you write managed code, it's hard to port. Bottom line: it's nice if you're targeting Windows only and want to get something going fast, but not so nice if you want to code for multiple platforms and/or use your own lower level technology. You can use it with the full versions of Visual Studio though; no reason to use Express if you have access. There is also a non-Express version of XNA with more features, I believe. I haven't played with it much though, and things may have changed, so anyone can feel free to correct me.
|
# ? May 24, 2008 13:25 |
|
I just read that I can use XNA with Visual Studio 2005. Since I have 2005 and 2008 installed, and XNA 3 will run with 2008, I think I might go with the XNA route (I don't think I will be needing cutting edge technologies anytime soon). I will go ahead and try to figure out how to get this working now, thanks!
|
# ? May 24, 2008 13:56 |
|
I'm having trouble with my game loop. I want it to use a variable frame rate and move all my objects based on that. Here's some code:code:
My sprites (controlled via mouse) move around really smoothly now and it all seems to be working. I'm pretty sure I'm missing something though. Should I be moving my sprites based on a delta or something? (I'm not even sure if I'm asking the right question.) Also, I'm not sure how to properly calculate FPS. In general, any game loop help or code people could show would be great. Edit: I hope any of this makes sense. If anyone can point me to a good article regarding this stuff, I would appreciate it.
|
# ? May 25, 2008 02:35 |
|
Your delta value should be the time differential between frames. Here's an example of some boilerplate code to handle something like this in a fairly simple way:code:
QueryPerformanceCounter() provides a much higher resolution timer than what you'd get with GetTickCount(). It gives you a value in "counts." The actual length of a single count is static as long as the system is running, but it might vary from system to system. To determine the actual length of a count you use QueryPerformanceFrequency(), which provides you with the number of counts per second. You can use this (1 / frequency) to determine the length of a single count. Multiply this value by the count difference between frames and you've got your frametime in seconds. If your animations are timed in seconds it's trivial to use your frametime to keep them framerate independent. Let's say you wanted a sprite to move ten pixels to the right every second: code:
Edit- Just realized that I completely missed part of your question. Your delta is your frametime, which is the inverse of your framerate. Taking your average framerate is easy and something you can do trivially in your update function: code:
Paradoxish fucked around with this message at 07:16 on May 25, 2008 |
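The same frame-delta pattern, sketched in Python with time.perf_counter() standing in for QueryPerformanceCounter (perf_counter already returns seconds, so there's no frequency divide; class and variable names are my own):

```python
import time

class FrameTimer:
    """Tracks the time elapsed between successive frames."""

    def __init__(self):
        self.last = time.perf_counter()

    def tick(self):
        """Return seconds elapsed since the previous tick."""
        now = time.perf_counter()
        delta = now - self.last
        self.last = now
        return delta

# move a sprite 10 pixels/second to the right, framerate-independent
timer = FrameTimer()
sprite_x = 0.0
for _ in range(3):            # stand-in for the game loop
    delta = timer.tick()
    sprite_x += 10.0 * delta  # pixels/sec * sec = pixels this frame
```

Average FPS then falls out as frame count divided by accumulated delta, the running form of the frametime/framerate inverse relationship.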
# ? May 25, 2008 03:43 |
|
Here's a timer class I wrote once:code:
code:
code:
heeen fucked around with this message at 13:54 on May 25, 2008 |
# ? May 25, 2008 13:51 |
|
On the subject of timers, I've always used ltimer; it's fast, robust and does the job wonderfully.
|
# ? May 25, 2008 15:26 |
|
Paradoxish posted:Your delta value should be the time differential between frames. Here's an example of some boilerplate code to handle something like this in a fairly simple way: I'm getting a bunch of errors regarding LARGE_INTEGER: code:
code:
Edit: This fixed it but I'm not sure if it's correct. (Although Microsoft's documentation seems to think it's correct. Thoughts?) code:
code:
Gary the Llama fucked around with this message at 17:16 on May 25, 2008 |
# ? May 25, 2008 16:54 |
|
Using an __int64 is going to be a lot easier in practice than using a LARGE_INTEGER, unless you have a need to specifically reference the high/low dwords, ie:code:
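For illustration, the union's high/low view of the 64-bit count is just shifting and masking (a Python sketch, treating the value as unsigned; the real LARGE_INTEGER HighPart is a signed LONG):

```python
def split_qword(value):
    """Split an unsigned 64-bit value into (high_dword, low_dword),
    mirroring the HighPart/LowPart members of the union."""
    low = value & 0xFFFFFFFF
    high = (value >> 32) & 0xFFFFFFFF
    return high, low

def join_qword(high, low):
    """Reassemble the 64-bit value from its two 32-bit halves."""
    return (high << 32) | low

h, l = split_qword(0x1234567890ABCDEF)
# h == 0x12345678, l == 0x90ABCDEF
```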
|
# ? May 25, 2008 21:52 |
|
Something to be aware of regarding QueryPerformanceCounter is that it's very expensive, on the order of 3000-4000 cycles per call. You might want to be careful if you're calling it in many places in your code.
|
# ? May 25, 2008 21:58 |
|
Well, no. I wrote that a few years ago and never thought about it. I had a different class for unix timers, so I didn't really need a virtual function, I just wanted a defined interface.
|
# ? May 25, 2008 23:03 |
|
I'm working on a 2D isometric map in Python using Pygame. I'm creating a map at 640x480 with 32x32 tiles, which comes out to 300 map tiles for just the first Z level. However, I need 16 Z levels, so that is 4800 tiles, each containing an image. With such a giant list (array), FPS is sluggish. I noticed that changing the resolution to 352x352 improves FPS to an ideal frame rate, but that isn't the resolution I'm looking for. Does anyone have any recommendations for increasing FPS? Also, I use a formula to convert a list position into an (x, y, z) position; could this have a substantial impact on the frame rate? Here is the source, without data files. (It won't run.) Chris Awful fucked around with this message at 00:22 on May 26, 2008 |
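A list-index-to-coordinates conversion like the one described is cheap div/mod arithmetic if the tiles are stored x-fastest, then y, then z (an illustrative Python sketch with assumed ordering; 640/32 by 480/32 gives a 20x15 grid per layer):

```python
W, H = 20, 15            # tiles per row and per column (640/32, 480/32)

def to_index(x, y, z):
    """Flatten (x, y, z) into a single list index."""
    return x + y * W + z * W * H

def to_coords(index):
    """Invert to_index: recover (x, y, z) from a flat list index."""
    z, rest = divmod(index, W * H)
    y, x = divmod(rest, W)
    return x, y, z

i = to_index(5, 3, 2)    # 5 + 3*20 + 2*300 = 665
# to_coords(665) == (5, 3, 2)
```

The arithmetic itself is unlikely to be the bottleneck; blitting all 4800 tiles every frame is the more probable cost, so culling fully hidden tiles or pre-rendering static layers usually helps more.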
# ? May 26, 2008 00:11 |
|
Gary the Llama posted:Edit: This fixed it but I'm not sure if it's correct. (Although Microsoft's documentation seems to think it's correct. Thoughts?) Yep, that's right. That was my fault there. Like ehnus suggested, you should use __int64s and just do a cast for the counter calls. I was writing that code off the top of my head and I just used LARGE_INTEGERs to reduce the number of casts and make it less confusing. The fact that LARGE_INTEGER is actually a union totally slipped my mind, so I guess I just ended up making it more confusing. edit- For what it's worth, the QuadPart of a LARGE_INTEGER is just a typedef'd __int64 anyway, so it's really just a matter of semantics either way. quote:Edit 2x: Also, in my Update method, do I need to keep doing this? I assume the answer is no and I can just go ahead and move my stuff like you said. Right, as long as you can somehow time your movement in pixels (or units or whatever) per second you can just multiply by the frame delta and there's no problem. ehnus posted:Something to be aware of in regards to QueryPerformanceCounter is that it's very expensive, on the order of 3000-4000 cycles per call. You might want to beware if you're using it in many places in your code For most game applications you shouldn't need more than one call per frame. Not that one call per frame should have any impact at all on performance, but is there a faster alternative I'm not aware of?
|
# ? May 26, 2008 04:09 |
|
|
|
Paradoxish posted:For most game applications you shouldn't need more than one call per frame. Not that one call per frame should have any impact at all on performance, but is there a faster alternative I'm not aware of? You just never know; sometimes it slips people's minds when it comes to calling expensive functions, and sometimes people just don't know the cost of a function. There is an alternative in __rdtsc(), which is really fast, but unfortunately it's not reliable on multi-core machines, so it can be ruled out for most cases.
|
# ? May 26, 2008 04:19 |