SlightlyMadman
Jan 14, 2005

I'm working on a pseudo-roguelike, and I need a map generator. There's lots of them out there, but they all seem to generate dungeons or wilderness. What I actually need is something to generate modern buildings, like a warehouse or shopping mall.

Since buildings are often vaguely symmetrical, I'm thinking of taking a rectangle and splitting it down the middle. Then each "room" would have a chance of being split down the middle, recursively. That would end up with a sort of random perversion of the Fibonacci sequence. Then it would just be a matter of making sure there are enough doors that no room is isolated. It doesn't account for hallways very well, though.
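Roughly what I mean, as a sketch (all names and numbers here are placeholders, and doors/hallways aren't handled yet):

code:
#include <cstdlib>
#include <vector>

struct Room { int x, y, w, h; };

// Recursively split a rectangle; whatever isn't split again becomes a room.
void splitRoom(const Room &r, std::vector<Room> &out, int minSize, int splitChance)
{
    bool canSplitW = r.w >= minSize * 2;
    bool canSplitH = r.h >= minSize * 2;

    // stop if too small to split, or randomly (this is what gives uneven room sizes)
    if ((!canSplitW && !canSplitH) || (std::rand() % 100) >= splitChance) {
        out.push_back(r);
        return;
    }

    if (canSplitW && (!canSplitH || std::rand() % 2)) {
        int split = minSize + std::rand() % (r.w - minSize * 2 + 1); // keep both halves >= minSize
        splitRoom(Room{r.x, r.y, split, r.h}, out, minSize, splitChance);
        splitRoom(Room{r.x + split, r.y, r.w - split, r.h}, out, minSize, splitChance);
    } else {
        int split = minSize + std::rand() % (r.h - minSize * 2 + 1);
        splitRoom(Room{r.x, r.y, r.w, split}, out, minSize, splitChance);
        splitRoom(Room{r.x, r.y + split, r.w, r.h - split}, out, minSize, splitChance);
    }
}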

If anybody has any tips on better ways of going about it, I'd love to hear them.

SheriffHippo
Jul 18, 2002

Suburban Krunk posted:

I want to learn a programming language and learn about game development at the same time.

My personal opinion is that Flash is the best for beginners. One main reason is that the Flash IDE allows you to create named graphical assets instantly.

In Flash, you can literally open the IDE, draw a circle with the pen tool, right-click the circle, name it "circle_lol", then type the code:

code:
circle_lol.x += 3;
Compile, and your circle will move.

Conversely, in OpenGL, Direct3D, SDL, or XNA, you will have to spend hours setting up confusing libraries and write several lines of cryptic code just to get a graphic to appear on the screen.

Also, what you learn about programming and game programming in ActionScript will be transferable to C# and even C++.

However, if you don't want to use Flash, XNA and SDL are the easiest of the C++/C# graphical APIs.

C# is also far easier to learn than C++. (XNA uses C#)

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suburban Krunk posted:

I want to learn a programming language and learn about game development at the same time. I know a small amount of Java, and feel I have a basic understanding about some basic programming fundamentals. On the plus side, I have a strong background in math (Up to Vector Calculus). Can anyone recommend me a book or site that would teach me a programming language and game development at the same time? From reading around a little bit, it seems as if C# is the language to learn, and something about XNA (Not sure what this is), is this correct?
What kind of game?

I'm still going to recommend modding for first-timers, because it means there are design decisions, content, and mistakes you don't have to make in order to hit the ground running. Modding something like UT2k4 or Source comes with the advantage of a massive number of tutorials.

SheriffHippo posted:

Also, what you learn about programming and game programming in ActionScript will be transferable to C# and even C++.
I really can't stress one related point enough: learning programming languages is kind of easy; learning APIs and learning how to program are the hard parts. Most good programming languages are only as difficult as the frameworks you're using them with.

Oae Ui
Oct 7, 2003

Let's be friends.

tripwire posted:

Has anyone had any experience/luck combining Box2d (specifically the python bindings, pyBox2d) with pyglet? The guy who did the python bindings for box2d included a testbed that uses pygame, but since everyone keeps mentioning the various limitations with pygame I'm wondering if I should just use pyglet instead. What would I have to do differently if I wanted to package the binary for windows/osx/linux?

I've not tried getting Box2d to work, but as an alternative I do use pymunk (which is a binding for the Chipmunk physics library) with pyglet. I've been able to build binaries for Windows/OS X/Linux with the pyglet+pymunk combo.

shodanjr_gr
Nov 20, 2007
I am writing a small deferred renderer using GLSL.

I want to be able to use the depth buffer values to extrapolate the eye-space coordinates of each texel of the depth buffer. I intrinsically understand how this is possible, but I can't for the life of me figure out how to do it inside a shader.

My thinking was that:

A) I get a vec3(x, y, z) where x, y are the texture coordinates and z is the depth buffer value.

B) I multiply by 2 and subtract 1 to bring the values into [-1, 1].

C) Then I make a vec4(x, y, z, 1.0) and multiply that by the inverse of the projection matrix to bring the clip-space coordinates into eye space.

And yet this does not work, and I am clueless as to why... Any ideas?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

I want to be able to use the depth buffer values to extrapolate the eye-space coordinates of each texel of the depth buffer.
The perspective correction is determined by the W value, so leaving it as 1.0 is not going to give you the right values.

I'm not good enough with matrix math to give you the full answer, but you can recalculate the HPOS value's W coordinate, and that's easy enough: The HPOS Z value (a.k.a. the depth) is rescaled from nearplane-farplane to 0-1, so just denormalize that for the absolute depth value. The HPOS W is just calculated by negating that value.

OneEightHundred fucked around with this message at 23:23 on Aug 7, 2008

shodanjr_gr
Nov 20, 2007
Thanks for the answer. It turns out that I actually (among many other things) wasn't dividing by w when I was trying to use the coordinates, and this was screwing up my results.

code:
float depthValue = texture2D(viewDepth, gl_TexCoord[0].st).x; // read the depth value from the buffer
vec3 screenSpaceCoords = vec3(gl_TexCoord[0].x * 2.0 - 1.0, gl_TexCoord[0].y * 2.0 - 1.0, depthValue * 2.0 - 1.0);
// unpack to normalized device coordinates in [-1, 1]
vec4 eyeSpaceCoords = projectionMatrixInverse * vec4(screenSpaceCoords, 1.0); // unproject back to eye space
eyeSpaceCoords /= eyeSpaceCoords.w; // the perspective divide that was missing when using the result
vec4 worldSpaceCoords = modelviewMatrixInverse * eyeSpaceCoords; // transform from eye space to world space
This is the code I used; it should make sense to most people. projectionMatrixInverse and modelviewMatrixInverse are calculated offline and sent to the shader as uniforms.

I've made quite a bit of progress today, getting multiple lights to work (with minimal overhead, since the rendering process is deferred :science:).

What I am now looking for is some guide/paper/words of wisdom on how to set up my lights in such a way that my final scene does not end up looking WAY too bright once I have done all my lighting passes. Right now I am just tweaking the numbers by hand and still can't get nice results. Any ideas?

Also, is there a place I can pull some proper gl_material properties from, instead of ballparking them? (Diffuse, ambient, specular, and shininess factors for materials like metal/wood/etc.)

Hanpan
Dec 5, 2004

I was just wondering if anyone could shed some light on Unity for me. Cost aside, I was really blown away by what they were able to create inside a browser at reasonable download speeds. The engine supports multi-platform deployment, physics, and all kinds of amazing stuff. That said, however, I haven't come across a single site which uses the Unity web player or even seen any indie projects which utilize the engine.

I was wondering if someone could help shed some light on why no one has really adopted it yet, or is it literally because it's so darn expensive?

Hypnobeard
Sep 15, 2004

Obey the Beard



Hanpan posted:

I was just wondering if anyone could shed some light on Unity for me. Cost aside, I was really blown away by what they were able to create inside a browser at reasonable download speeds. The engine supports multi-platform deployment, physics, and all kinds of amazing stuff. That said, however, I haven't come across a single site which uses the Unity web player or even seen any indie projects which utilize the engine.

I was wondering if someone could help shed some light on why no one has really adopted it yet, or is it literally because it's so darn expensive?

Can only speak from my own experience, but:

a) it's closed-source
b) it's expensive
c) you can only do Mac and web-based deployments unless you spring for the Pro license (which is even MORE expensive)
d) apparently there are some issues with source control on the objects it generates, so you have to use their source control product or go through some hoops
e) development environment is Mac-only
f) networking support is there but feels incomplete for some reason

That said, it's generally a pretty polished development environment and if it wasn't $1500 I'd certainly consider it for game development.

(Compare with, say, Torque, which while not looking as quite as polished, is only $300 and comes with complete source, or the C4 engine which is $350 and also comes with source.)

This amused me in the Unity marketing page for the new version (2.1): "When Funcom, makers of Anarchy Online and Age of Conan, decide to use Unity for their upcoming browser based MMO project, you know it has to be good."

Hypnobeard fucked around with this message at 00:11 on Aug 9, 2008

Hanpan
Dec 5, 2004

Tolan posted:

Can only speak from my own experience, but:

a) it's closed-source
b) it's expensive
c) you can only do Mac and web-based deployments unless you spring for the Pro (I *think* that's what it's called) license (which is even MORE expensive)
d) apparently there are some issues with source control on the objects it generates, so you have to use their source control product or go through some hoops
e) development environment is Mac-only
f) networking support is there but feels incomplete for some reason

That said, it's generally a pretty polished development environment and if it wasn't $1450 I'd certainly consider it for game development.

(Compare with, say, Torque, which while not looking as quite as polished, is only $300 and comes with complete source, or the C4 engine which is $350 and also comes with source.)

Thanks a lot for this detailed response, Tolan. It does seem that the major issue is the price. (I am Mac-based, so no problems there. Source control doesn't really faze me that much either.)

It's certainly worth downloading the trial version at least, I suppose. Thanks again for your reply!

Hypnobeard
Sep 15, 2004

Obey the Beard



Hanpan posted:

Thanks a lot for this detailed response, Tolan. It does seem that the major issue is the price. (I am Mac-based, so no problems there. Source control doesn't really faze me that much either.)

It's certainly worth downloading the trial version at least, I suppose. Thanks again for your reply!

Yeah, if you're not worried about stand-alone Windows deployment, it's definitely worth a look. The closed-source might be a bit worrying, but the product is pretty polished for all that.

Enjoy. I'm going to be reviewing it again myself, I think.

Just to follow up--there are also some features missing from the indie version; be sure to read the page about the differences between the indie and pro versions carefully. If you can live without those features (realtime shadows, I think, plus a few other fairly common features), then the indie version will work for you.

Hypnobeard fucked around with this message at 00:55 on Aug 9, 2008

schnarf
Jun 1, 2002
I WIN.
I'm working on generating a PVS set for a portal engine. The calculations I'm doing are for load-time, so each sector will have a set of potentially visible sectors that can be used during run-time. My algorithm is pretty standard I think: For the particular sector we're generating a PVS for, clear out the clip plane, then recurse for each portal in that sector. In the next step we form a clipping volume made of a bunch of planes, which is created from the portal we came from and every portal in that new sector. Then once we have a clip volume, we clip that to each successive portal so the clip volume gets smaller and smaller as we recurse more.

My explanation might not be very clear, but the part I'm having trouble with isn't too interconnected with it. I'm having trouble with a good general way to generate and modify that clipping volume. I have a general idea of how to generate the clipping volume, but it's not complete. Say we're generating a clip volume between A and B:
code:
for each Edge curEdge in A:
    for each Vertex curVtx in B:
        Add to the clipping volume Plane(curEdge.vtx1, curEdge.vtx2, curVtx)
    end for
end for
But after that I need to somehow make the clipping volume minimal. It seems like this part will be pretty ugly. Same deal with clipping down the clipping volume to the next portal.

Basically, am I at all in the right direction? There has to be a nicer way to generate and clip a clipping volume. Alternatively, is there a more effective portal PVS generation algorithm that I haven't found? My method is the method I've seen in a lot of places, except all of those tutorials show it in 2D, which is way easier.

schnarf fucked around with this message at 05:21 on Aug 12, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

schnarf posted:

I'm working on generating a PVS set for a portal engine. The calculations I'm doing are for load-time, so each sector will have a set of potentially visible sectors that can be used during run-time. My algorithm is pretty standard I think: For the particular sector we're generating a PVS for, clear out the clip plane, then recurse for each portal in that sector. In the next step we form a clipping volume made of a bunch of planes, which is created from the portal we came from and every portal in that new sector. Then once we have a clip volume, we clip that to each successive portal so the clip volume gets smaller and smaller as we recurse more.
Portal engines normally work by casting the view frustum through portals, clipping them when they go through, and they do this at runtime.

I can tell you how Quake's vis tool solves sector-to-sector, which is first solving portal-to-portal visibility, and then just making a visibility bitfield. A sector is considered visible from another sector if any portal in one can see any portal in another.

Portal-to-portal I'm a bit hazy on, but the gist is to find a plane such that both portals are on the plane, and any occluding portals are either behind one of the portals, or are completely on either side of that plane. The candidate planes come from essentially every edge-point combination in the list of offending portals and the portals themselves. Needless to say, this is SLOOOOW. If you can find such a plane, then the two portals can see each other.

(Note that a "portal" in this sense isn't just from open areas into open areas, but also from open areas into solids, i.e. walls, which are considered occluding portals)

q3map has a few systems: the old portal system, a new "passage" system that I know nothing about, and a third method that uses both. You can check out visflow.c in the Quake 3 source code for details, but it really doesn't explain the implementation at all, so it's extremely hard to read.

OneEightHundred fucked around with this message at 06:57 on Aug 12, 2008

schnarf
Jun 1, 2002
I WIN.

OneEightHundred posted:

Portal engines normally work by casting the view frustum through portals, clipping them when they go through, and they do this at runtime.

I can tell you how Quake's vis tool solves sector-to-sector, which is first solving portal-to-portal visibility, and then just making a visibility bitfield. A sector is considered visible from another sector if any portal in one can see any portal in another.

Portal-to-portal I'm a bit hazy on, but the gist is to find a plane such that both portals are on the plane, and any occluding portals are either behind one of the portals, or are completely on either side of that plane. The candidate planes come from essentially every edge-point combination in the list of offending portals and the portals themselves. Needless to say, this is SLOOOOW. If you can find such a plane, then the two portals can see each other.

(Note that a "portal" in this sense isn't just from open areas into open areas, but also from open areas into solids, i.e. walls, which are considered occluding portals)

q3map has a few systems: the old portal system, a new "passage" system that I know nothing about, and a third method that uses both. You can check out visflow.c in the Quake 3 source code for details, but it really doesn't explain the implementation at all, so it's extremely hard to read.
Thanks. I'm trying to avoid as many runtime calculations as possible, which is why I'm trying out this PVS idea.

My approach is pretty similar to clipping the view frustum, except instead of starting out with a view frustum I start with an empty or infinite one. So your portal-to-portal method should work, correct? When you say a plane such that both portals are on it, do you mean it meets a vertex or two of theirs incidentally?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Okay, the way vis works is a bit different than I thought (and I'm still trying to figure it out...), but here's my current understanding:

Portals build a list of other visible portals by starting in one sector, and trying to see through a chain of other portals to other sectors. The separating plane is used to determine how much of the portal is visible, if any.

Say you had a chain of sectors/portals as follows:

A -> pAB -> B -> pBC -> C -> pCD -> D -> pDE -> E

... and you're trying to determine what portals are visible from pAB.

pAB can always see pBC because they share a sector.

To determine how much of pCD is visible, it tries creating separator planes using edges from pAB, and points from pBC. A separator plane is valid if all points on pAB are on one side, and all points of pBC are on the other. pCD is then clipped using that plane, so that everything remaining is on the same side as pBC. If there is nothing remaining, then that portal can't be seen.

To determine how much of pDE is visible, the same process is done, except the separating planes are calculated using whatever's left of pCD instead of pBC.

Hope that makes sense.
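If it helps to see it as code, here's a rough sketch of that test (every type and helper here is invented for illustration; it's not the actual vis source):

code:
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
typedef std::vector<Vec3> Winding;      // convex portal polygon
struct Plane { Vec3 n; float d; };      // points p with dot(n, p) == d

static Vec3  sub(Vec3 a, Vec3 b)   { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return Vec3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Plane planeFromPoints(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 n = cross(sub(b, a), sub(c, a));
    return Plane{n, dot(n, a)};
}

// true if every vertex of w is on the given side of p (side = +1 front, -1 back)
static bool allOnSide(const Winding &w, const Plane &p, float side)
{
    for (size_t i = 0; i < w.size(); ++i)
        if (side * (dot(p.n, w[i]) - p.d) < -0.001f)
            return false;
    return true;
}

// keep the part of w in front of p (standard polygon-vs-plane clip)
static Winding clipWindingFront(const Winding &w, const Plane &p)
{
    Winding out;
    for (size_t i = 0; i < w.size(); ++i) {
        Vec3 a = w[i], b = w[(i + 1) % w.size()];
        float da = dot(p.n, a) - p.d, db = dot(p.n, b) - p.d;
        if (da >= 0.0f) out.push_back(a);
        if ((da >= 0.0f) != (db >= 0.0f)) {
            float t = da / (da - db);
            out.push_back(Vec3{a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z)});
        }
    }
    return out;
}

// Clip 'target' (e.g. pCD) by every valid separating plane built from an edge of
// 'source' (pAB) and a vertex of 'pass' (pBC, or whatever survived the last step).
Winding clipToSeparators(const Winding &source, const Winding &pass, Winding target)
{
    for (size_t i = 0; i < source.size() && !target.empty(); ++i) {
        Vec3 e0 = source[i], e1 = source[(i + 1) % source.size()];
        for (size_t j = 0; j < pass.size(); ++j) {
            Plane sep = planeFromPoints(e0, e1, pass[j]);
            if (dot(sep.n, sep.n) < 1e-6f)
                continue;   // degenerate: vertex was on the edge
            // valid separator: all of 'source' behind it, all of 'pass' in front of it
            if (!allOnSide(source, sep, -1.0f) || !allOnSide(pass, sep, 1.0f))
                continue;
            target = clipWindingFront(target, sep); // keep only the part on the 'pass' side
            break;
        }
    }
    return target; // empty means the portal can't be seen through this chain
}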

OneEightHundred fucked around with this message at 18:46 on Aug 12, 2008

Murodese
Mar 6, 2007

Think you've got what it takes?
We're looking for fine Men & Women to help Protect the Australian Way of Life.

Become part of the Legend. Defence Jobs.
Trying to make a third-person camera in OpenGL that can be attached to objects and will stay m_zoomLevel units behind them, no matter the orientation.

I have it set up so that the cursor keys turn the player object left/right and pitch it up/down. The camera object is attached to the player object and should just follow it around, but I'm running into problems related to my dumbness and OpenGL's matrices.

Some very rough pseudocode explaining how I have it:

(note that the class variable layout is not actually how I have it, but is indicative of the variables available to the classes)

code:
class Camera inherit BaseObject
{
    float m_zoomLevel;
    Quaternion m_orientation;
    Vector3 m_position;
    BaseObject *m_attached;
}

class Player inherit BaseObject
{
    Quaternion m_orientation;
    Vector3 m_position;
    BaseObject *m_attached_to;
}

// in init function

void init()
{
    camera->attach(player);
}

// in renderscene (again not actually within renderscene but an example without functions so it's centralised)

void renderScene()
{
    //opengl clearcolor etc, already set to modelview
    glLoadIdentity();

    // set the camera's orientation to the same as the attached object
    // i figured this should be ok, as the camera will always be set to be directly
    // behind the attached object
    camera->m_orientation = Quaternion(camera->m_attached->m_orientation); 
    
    // i would think that this should set the camera's position to m_zoomlevel units directly behind
    // the object
    camera->m_position = camera->m_attached->m_position + (camera->m_attached->m_orientation.getFront() * -camera->m_zoomLevel);

    Matrix4 transform(camera->m_orientation.toMatrix());
    
    // add translations to the transformation matrix
    transform[12] = camera->m_position[0];
    transform[13] = camera->m_position[1];
    transform[14] = camera->m_position[2];

    glMultMatrixf(transform.getMatrix());

    glPushMatrix();
        Matrix4 objtransform(player->m_orientation.toMatrix());
        objtransform[12... set positions etc

        glMultMatrixf(objtransform.toMatrix());

        // draw stuff

    glPopMatrix();

// rest of stuff
}
The camera never seems to be able to get itself to the right position, and I am assuming this is because player is still being translated to its position after the mvmatrix is set to the camera's position, ending in horrible results of global rotations and vomit on the window.

Is this the right way to do it and I'm missing something in the player's transformation step, or am I retarded for doing it like this? :(

Murodese fucked around with this message at 19:08 on Aug 12, 2008

schnarf
Jun 1, 2002
I WIN.

OneEightHundred posted:

Okay, the way vis works is a bit different than I thought (and I'm still trying to figure it out...), but here's my current understanding:

Portals build a list of other visible portals by starting in one sector, and trying to see through a chain of other portals to other sectors. The separating plane is used to determine how much of the portal is visible, if any.

Say you had a chain of sectors/portals as follows:

A -> pAB -> B -> pBC -> C -> pCD -> D -> pDE -> E

... and you're trying to determine what portals are visible from pAB.

pAB can always see pBC because they share a sector.

To determine how much of pCD is visible, it tries creating separator planes using edges from pAB, and points from pBC. A separator plane is valid if all points on pAB are on one side, and all points of pBC are on the other. pCD is then clipped using that plane, so that everything remaining is on the same side as pBC. If there is nothing remaining, then that portal can't be seen.

To determine how much of pDE is visible, the same process is done, except the separating planes are calculated using whatever's left of pCD instead of pBC.

Hope that makes sense.
That's really helpful, thanks. I basically was looking for a criterion to find separator planes, and what you gave seems to work well.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Murodese posted:

Is this the right way to do it and I'm missing something in the player's transformation step, or am I retarded for doing it like this? :(
I'd recommend doing the matrix math with your own code and just using glLoadMatrix; it makes things easier to debug (like if one of your matrix multiplies is backwards).

code:
// i would think that this should set the camera's position to m_zoomlevel units directly behind
// the object
camera->m_position = camera->m_attached->m_position + (camera->m_attached->m_orientation.getFront()
   * -camera->m_zoomLevel); 
Generally you want to do zoom in the projection matrix by changing the FOV.
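Also, going back to the first point (doing the matrix math yourself and using glLoadMatrix), here's a minimal sketch built on your pseudocode classes. conjugate() and getMatrix() are assumed methods, and I'm assuming the same column-major layout your code implies by putting translation in [12]-[14]; the key point is that the view matrix is the inverse of the camera's world transform:

code:
void loadViewMatrix(const Camera *camera)
{
    // inverse of a unit quaternion is its conjugate
    Matrix4 view(camera->m_orientation.conjugate().toMatrix());

    // the translation column is -(Rinv * position), not just -position
    const Vector3 &p = camera->m_position;
    view[12] = -(view[0] * p[0] + view[4] * p[1] + view[8]  * p[2]);
    view[13] = -(view[1] * p[0] + view[5] * p[1] + view[9]  * p[2]);
    view[14] = -(view[2] * p[0] + view[6] * p[1] + view[10] * p[2]);

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(view.getMatrix());   // object transforms then get multiplied on top of this
}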

OneEightHundred fucked around with this message at 20:11 on Aug 12, 2008

shodanjr_gr
Nov 20, 2007
Is there a place where I can download ALL the official tutorial material for XNA game development (D3D stuff, HLSL stuff, etc.)? I am going on vacation to a barbaric place with no high-speed internet, and I'd like to have all the necessary material with me...


In other news, the OpenGL 3.0 spec is out, and it looks crappy...

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

In other news, the OpenGL 3.0 spec is out, and it looks crappy...
Went from "major refactoring of the API to get rid of legacy cruft" to "yeah, we marked those features deprecated, we promise we'll remove them next time, by the way, here are some more extensions made core."

Might as well just call it OpenGL 2.2.

shodanjr_gr
Nov 20, 2007
Weren't they planning at some point to switch to a more object-oriented approach, instead of this whole freaking "bind identifiers to stuff then switch states using those identifiers" mentality?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Weren't they planning at some point to switch to a more object-oriented approach, instead of this whole freaking "bind identifiers to stuff then switch states using those identifiers" mentality?
They added that as an extension:
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt
... which is core in OpenGL 3.

As far as the evolutionary fixes go, what they did was introduce a deprecation model. A context can be created as "full" or "forward compatible", with deprecated features not being available in "forward compatible" contexts. A lot of the legacy cruft is gone in "forward compatible", but the immutability and asynchronous object stuff isn't there yet.
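For what it's worth, asking for one of those "forward compatible" contexts on Windows goes through the WGL_ARB_create_context extension and looks roughly like this (sketch only; wglext.h supplies the typedef and enums, error handling is omitted, and it assumes a pixel format is already set and a legacy context is current so wglGetProcAddress works):

code:
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

HGLRC createForwardCompatibleContext(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,   // deprecated features unavailable
        0
    };
    return wglCreateContextAttribsARB(hdc, 0, attribs);
}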

OneEightHundred fucked around with this message at 23:50 on Aug 12, 2008

POKEMAN SAM
Jul 8, 2004

SheriffHippo posted:

(I used a C# class I found on CodeProject.com which uses a system hook to trap all mouse input)
http://www.codeproject.com/KB/cs/globalhook.aspx

Just a warning, but I've noticed some [over-]zealous anti-virus applications will freak out at system-wide mouse/keyboard hooking because they think it might be a keylogger (even if only the mouse is being caught).

shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

They added that as an extension:
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt
... which is core in OpenGL 3.

I don't see how Direct State Access makes OpenGL more object-oriented.

Sure, it saves you switching active texture units/matrices etc., but you still have to screw around with uint identifiers if you can't be bothered to write wrapper classes/methods for everything...

I was hoping for something like:

Texture my_texture;
my_texture.setParameter(GL_TEXTURE.GL_MIN_FILTER, GL_TEXTURE.GL_LINEAR);

I don't know... maybe Java has spoiled me...

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

I was hoping for something like:

Texture my_texture;
my_texture.setParameter(GL_TEXTURE.GL_MIN_FILTER, GL_TEXTURE.GL_LINEAR);

I don't know... maybe Java has spoiled me...
OpenGL's designed to be compatible with non-object-oriented languages including vanilla C. What is an improvement is that you can actually write object-oriented wrappers without having to gently caress with the state machine constantly so that you don't clobber the selectors, and execute most operations without side-effects. It's not like it's a huge advantage when you're not calling everything through a device object and the syntax for everything else is really light.

It's not like there are really that many object types in OpenGL either, and you'd probably want to write a wrapper anyway if you ever wanted to port to D3D. Aside from that, it's not really much more work to do this:

code:
VertexBuffer *vb = glDevice->NewVertexBuffer();
void *mem = vb->Map(GL_WRITE_ONLY);
<stuff>
vb->Unmap();
Than it is to do this:
code:
GLuint vb;
glGenBuffers(1, &vb);
void *mem = glMapNamedBufferEXT(vb, GL_WRITE_ONLY);
<stuff>
glUnmapNamedBufferEXT(vb);
But it is more work to do this poo poo:

code:
GLint oldBuffer;
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &oldBuffer);
GLuint vb;
glGenBuffers(1, &vb);
glBindBuffer(GL_ARRAY_BUFFER, vb);
void *mem = glMapBufferARB(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
glBindBuffer(GL_ARRAY_BUFFER, oldBuffer);
<stuff>
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &oldBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vb);
glUnmapBufferARB(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, oldBuffer);
... which makes wrappers a necessity just to keep your sanity.

What's ironic is that they STILL don't let you update textures directly, but they did find a way to work in stuff like deprecating passing anything other than zero to the border parameter of glTexImage2D. Yeah, good work Khronos, way to design that technology.

OneEightHundred fucked around with this message at 02:25 on Aug 13, 2008

Shazzner
Feb 9, 2004

HAPPY GAMES ONLY

welp I guess long live direct3d

Murodese
Mar 6, 2007

Think you've got what it takes?
We're looking for fine Men & Women to help Protect the Australian Way of Life.

Become part of the Legend. Defence Jobs.

OneEightHundred posted:

Generally you want to do zoom in the projection matrix by changing the FOV.

This isn't actually zoom; it's just the number of units the camera sits behind the attached object in third-person mode.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shazzner posted:

welp I guess long live direct3d
Well, it's still a bit of a hard call. They did add instancing support, and geometry shaders are still available on hardware that supports them, so it is up to Direct3D 10 in terms of features (and with WinXP support, so developers don't have to write two renderers).

Direct3D 11 looks like it's set to break new ground in being underwhelming, so there's room to maneuver. Regardless, D3D is going to stay on top in Windows development just because of inertia. OpenGL had time to capitalize on D3D's overpriced draw calls long before D3D10 fixed them; they didn't do it then, and they're not going to beat D3D to the punch tomorrow, so it's pretty much doomed to second place on Windows forever at this point.

OneEightHundred fucked around with this message at 05:16 on Aug 13, 2008

captain_g
Aug 24, 2007
Do DirectX 10 and 11 still continue the COM style?

If so then :barf:

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

captain_g posted:

Do DirectX 10 and 11 still continue the COM style?
Of course. Microsoft needs to keep pushing their technology.

11's features are basically:
- Multithreaded rendering. Who gives a poo poo, sending things to D3D is not expensive any more, you don't need to parallelize your D3D calls.
- Tessellation. Because developers were gladly willing to surrender control for the sake of renderer speed in 2001, I'm sure they'll do it this time around, it's not like relief mapping actually works or anything.
- Compute shaders, which OpenGL has mechanisms for already.
- Order-independent transparency. Yay, state-trashing. :barf:

OneEightHundred fucked around with this message at 07:34 on Aug 13, 2008

TSDK
Nov 24, 2003

I got a wooden uploading this one

OneEightHundred posted:

11's features are basically:
- Multithreaded rendering. Who gives a poo poo, sending things to D3D is not expensive any more, you don't need to parallelize your D3D calls.
Submitting batches is actually pretty drat expensive CPU wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems.

Null Pointer
May 20, 2004

Oh no!
D3D11's multithreading features have a lot more to do with thread safety than they do with performance. Everything I've heard says that render state manipulation still involves a critical section, but device access and resource creation do not. What this really means is that your content management thread won't need to be so chatty with your rendering thread, which is a pretty huge thing unless you have an inexplicable hard-on for loading screens and stuttering.

Also, the last I heard the order-independent transparency was being billed as a hardware feature, which makes me quite skeptical that they're using an algorithm that will trample state or murder performance like depth peeling. It's a lot more likely that they're using an accumulation buffer with a modest number of samples - something in the area of 8-16 fragments per pixel. Order-independent transparency is interesting for having fluid and aerosol simulations on the GPU, not because of a problem a BSP already solves.

TSDK posted:

Submitting batches is actually pretty drat expensive CPU wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems.
It's worth mentioning that Direct3D 10 was designed with the Xbox 360 best-practices in mind: using a separate (and typically dedicated) thread for rendering. This is in spite of the fact that the Xbox 360 has no real operating system overhead. What you do in Direct3D 10 is really just as expensive as it was under Direct3D 9, except some costs are deferred and some inefficiencies are optimized away. The performance gains from a multithreaded renderer can't be discounted until the day we all migrate to hardware scenegraph cards.

StickGuy
Dec 9, 2000

We are on an expedicion. Find the moon is our mission.

Null Pointer posted:

Order-independent transparency is interesting for having fluid and aerosol simulations on the GPU, not because of a problem a BSP already solves.
It will also be helpful for rendering intersecting transparent geometry without having to resort to tessellation to get the right result.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

TSDK posted:

Submitting batches is actually pretty drat expensive CPU wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems.
D3D10's draw calls are a good deal cheaper (or at least, that was the goal). Even without that, I don't really see a huge advantage over just having a primary rendering thread that exists solely to throw commands at D3D. It's not an operation that takes up enough time to really see much of a gain from being parallelized, as opposed to physics and animation which are currently the big CPU drainers.

quote:

It's a lot more likely that they're using an accumulation buffer with a modest number of samples - something in the area of 8-16 fragments per pixel.
OpenGL's had accumulation buffers in the spec for ages, so this isn't really amazing. Just means that suddenly Microsoft has decided that it would be nice to have on consumer cards.

quote:

What this really means is that your content management thread won't need to be so chatty with your rendering thread
I haven't really had a problem with this; it's just more commands to throw on the render queue. If preloading is a problem, map everything, thread the loading, then unmap everything when it's done.
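Something like this, very roughly (a sketch only: loadFileIntoMemory is made up, the buffer-object entry points are assumed to be loaded via GLEW or similar, and in a real engine you'd poll for completion instead of blocking):

code:
#include <GL/glew.h>
#include <thread>

// made-up helper that reads 'path' into the already-mapped destination
void loadFileIntoMemory(const char *path, void *dst, size_t size);

void preloadVertexData(GLuint vbo, size_t size, const char *path)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STATIC_DRAW);   // allocate storage
    void *dst = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);     // map on the GL thread

    std::thread loader(loadFileIntoMemory, path, dst, size);     // fill the mapped memory off-thread
    loader.join();   // in practice, kick this off and poll instead of blocking here

    glUnmapBuffer(GL_ARRAY_BUFFER);                              // unmap back on the GL thread
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}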

Edit:

StickGuy posted:

It will also be helpful for rendering intersecting transparent geometry without having to resort to tessellation to get the right result.
Well, at the same time, developers don't seem to have the hard-on for transparency that they did in the Quake 2/3 days. The Unreal engine doesn't really even bother sorting transparent objects, because "transparent objects" tends to mean "particles" and little more.

OneEightHundred fucked around with this message at 16:53 on Aug 13, 2008

PnP Bios
Oct 24, 2005
optional; no images are allowed, only text
http://www.devmaster.net/news/index.php?storyid=2062

Truespace 7.6 is now free. This is good news for those of us who can't wrap our heads around Blender.

Tragic Fat Detective
Jul 12, 2002

           soulja boy tell em
Can anyone recommend a good book or online resource for game engine design and best practices? Thanks goons.

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
I'm trying to implement Quake 3's .shader scripts in an HLSL shader and I'm not quite sure how I should do it.

Here's an example of one of the scripts.
http://pasteall.org/2104

The first line is the name of the shader, and each block within the shader is a "stage" that describes how and in what order a particular texture is used in the shader. What I want to know is how I should make a generic shader that can have data filled in by these scripts. This particular shader has 5 stages, and I'm not sure what the limit is. I'd rather not even use a limit, but don't I have to make a different shader for every possible number of stages? Or is there some way I can make a shader that uses an arbitrary number of passes where I just set the texture and the other stage variables every pass?


Big Woman posted:

Can anyone recommend a good book or online resource for game engine design and best practices? Thanks goons.

Not sure what exactly you're looking for, but if you have no experience with 3D engines, Frank Luna's Introduction to 3D Game Programming with DirectX 9.0c: A Shader Approach is the best I've ever read.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

I'm trying to implement Quake 3's .shader scripts in an HLSL shader and I'm not quite sure how I should do it.
Your best bet is to create a "signature" to avoid duplicating shaders. The stage limit will actually vary:

- If a stage is using rgbgen or alphagen with a non-static value, then you'll need to allocate a texcoord to the color. If it's static, you can bake it into the pixel shader.
- If a stage is using a tcgen, then you need to allocate a texcoord to passing the generated texcoords.
- Doing tcmod transforms in the pixel shader may allow you to save texcoords, but is slower and should be a last resort.

All tcmods except turb can be combined as a 2x3 matrix, so it's best to just do that in the front-end and pass it as a uniform.

tcmod turb fucks everything up and you should probably just ignore it.
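As a sketch, the front-end composition might look something like this (field names are made up, turb is excluded, and the exact Q3 ordering/semantics are only approximated):

code:
#include <cmath>

struct TexMatrix2x3 { float m[2][3]; };   // [ a b tx ; c d ty ]

TexMatrix2x3 composeTcmods(float time, float scrollS, float scrollT,
                           float scaleS, float scaleT, float rotateDegPerSec)
{
    float ang = rotateDegPerSec * time * 3.14159265f / 180.0f;
    float cs = std::cos(ang), sn = std::sin(ang);

    TexMatrix2x3 out;
    // rotate about the texture center (0.5, 0.5), with the scale folded in
    out.m[0][0] = cs * scaleS;   out.m[0][1] = -sn * scaleS;
    out.m[1][0] = sn * scaleT;   out.m[1][1] =  cs * scaleT;
    out.m[0][2] = scrollS * time + 0.5f - (out.m[0][0] * 0.5f + out.m[0][1] * 0.5f);
    out.m[1][2] = scrollT * time + 0.5f - (out.m[1][0] * 0.5f + out.m[1][1] * 0.5f);
    return out;
}
// In the shader, each row is a uniform and the stage's texcoord is just:
//   uv = vec2(dot(row0, vec3(tc, 1.0)), dot(row1, vec3(tc, 1.0)));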



Realistically, the way Quake 3 does shaders is a bad template for a material system. You shouldn't have to tell the engine exactly how to render a lightmapped wall, for example; it ruins your scalability. The best way I've heard it described is that ideally, your material system should not define how to render a specific material; it should define what it looks like. Or at least, you shouldn't be defining both in the same spot. Scalable engines have multiple ways to render a simple wall, so all you should really need to do is give it a list of assets and parameters to render a "wall" (i.e. the albedo, bump, reflectivity, and glow textures), tell it that it's a "wall," and have it figure out the rest. Build complex materials by compositing those simple types together.

The system I'm using is a data-driven version of this and has three parts for a simple diffuse surface:

The default material template, which tries loading assets by name and defines how to import them, and references "Diffuse.mrp" as the rendering profile:
http://svn.icculus.org/teu/trunk/tertius/release/base/default.mpt?revision=177&view=markup

Diffuse.mrp, which is a rendering profile that defines how to render a simple diffuse-lit surface. This is the branch-off for world surfaces:
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/profiles/DiffuseWorldSurfaceARBFP.mrp?revision=152&view=markup

The shader and permutation projects:
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/cg/world_base_lightmap.cg?revision=185&view=markup
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/fp/p_world_base_lightmap.svl?revision=183&view=markup
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/vp/v_world_base_lightmap.svl?revision=177&view=markup


There are simpler ways to do this, of course. Half-Life 2 just defines the surface type for every material (which does wonders for its load times!) and those surface types have various ways of rendering defined for different hardware levels and configurations.

OneEightHundred fucked around with this message at 09:04 on Aug 19, 2008

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through

OneEightHundred posted:



I'm not quite sure I asked my question right. Best-case scenario, I'd like to just have one HLSL .fx for all possible Quake 3 .shaders. Having to hand-make a different .fx for every .shader I want to use is way too annoying.

Also, I could be understanding your answer wrong.

edit: I'm writing an XNA library to load and use Quake 3 assets, so I'm kind of stuck with having to use .shaders, as I don't really feel like making my own fork of gtkradiant. Really, even if .shaders describe how a material looks and how to render it, can't I just pick out the look part and toss the "how to render" part?

MasterSlowPoke fucked around with this message at 09:19 on Aug 19, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
You can't even guarantee that you'd be able to single-pass it because of specific combinations. For example, suppose you had an additive layer followed by an alpha blend layer (these exist in the game!!): you CAN'T single-pass that, because whatever your result is can still only be drawn to the framebuffer with one blend function, and there's no such thing as a "blend shader" yet.


Even with just one, you'd have to permute the poo poo out of it so you get results that use the proper tcgen (on the vertex shader) and blendfunc (on the pixel shader).


My suggestion isn't to hand-code them, but rather, to generate them at runtime and compile them then. The alternative is to do all possible permutations in advance, which takes a LONG time, but will speed up load times a lot.
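The signature/caching part might look roughly like this (all of these names are invented; the real work is obviously in the code generation itself):

code:
#include <map>
#include <string>
#include <vector>

// ParsedStage, generateHlslFromStages, and compileEffect stand in for whatever
// your .shader parser and effect-compile path actually look like.
struct ParsedStage { std::string blendFunc, tcGen, rgbGen; };
struct CompiledShader;

std::string generateHlslFromStages(const std::vector<ParsedStage> &stages);
CompiledShader *compileEffect(const std::string &hlslSource);

std::map<std::string, CompiledShader*> g_shaderCache;

// Build a "signature" from everything that affects codegen, and only generate
// and compile a shader the first time that signature shows up.
CompiledShader *getShaderForStages(const std::vector<ParsedStage> &stages)
{
    std::string key;
    for (size_t i = 0; i < stages.size(); ++i)
        key += stages[i].blendFunc + "|" + stages[i].tcGen + "|" + stages[i].rgbGen + ";";

    std::map<std::string, CompiledShader*>::iterator it = g_shaderCache.find(key);
    if (it != g_shaderCache.end())
        return it->second;

    CompiledShader *compiled = compileEffect(generateHlslFromStages(stages));
    g_shaderCache[key] = compiled;
    return compiled;
}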

quote:

Really, even if .shaders describe how a material looks and how to render it, can't I just pick out the look part and toss the "how to render" part?
You can, sort of, but what you're doing is effectively analyzing the shader, seeing how much work you can combine into a single pass or shader, and then doing it. Even Quake 3 itself does this; the difference is that it does it using the fixed-function pipeline, which is designed to be instantly reprogrammable, whereas you're using pixel shaders, which need to be compiled in advance. In order for it to really be optimal, you're going to want to generate shaders at runtime or during load.

OneEightHundred fucked around with this message at 09:27 on Aug 19, 2008
