stramit
Dec 9, 2004
Ask me about making games instead of gains.

Thug Bonnet posted:

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?

Real answer: No, there is no reason to. You can just wrap any immediate-mode call into a VBO / dynamic VBO and do it that way.

Practical answer: I'm lazy and draw full screen quads in immediate mode all the time :S. It's only 4 verts, it's not that slow, right...
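
If you do want to be good and ditch immediate mode even for that, wrapping the quad in a VBO is only a handful of lines. A rough sketch of what I mean (LWJGL 2 style calls; the buffer / variable names are just made up for the example), drawing a full screen quad in clip space with the old fixed-function vertex array path:
code:
//Build the quad once: 4 clip-space positions as a triangle strip
FloatBuffer quadVerts = BufferUtils.createFloatBuffer( 8 );
quadVerts.put( new float[]{ -1f, -1f,   1f, -1f,   -1f, 1f,   1f, 1f } );
quadVerts.flip();

int quadVbo = GL15.glGenBuffers();
GL15.glBindBuffer( GL15.GL_ARRAY_BUFFER, quadVbo );
GL15.glBufferData( GL15.GL_ARRAY_BUFFER, quadVerts, GL15.GL_STATIC_DRAW );
GL15.glBindBuffer( GL15.GL_ARRAY_BUFFER, 0 );

//Draw it (instead of the glBegin/glEnd block)
GL15.glBindBuffer( GL15.GL_ARRAY_BUFFER, quadVbo );
GL11.glEnableClientState( GL11.GL_VERTEX_ARRAY );
GL11.glVertexPointer( 2, GL11.GL_FLOAT, 0, 0L ); //2 floats per vert, tightly packed
GL11.glDrawArrays( GL11.GL_TRIANGLE_STRIP, 0, 4 );
GL11.glDisableClientState( GL11.GL_VERTEX_ARRAY );
GL15.glBindBuffer( GL15.GL_ARRAY_BUFFER, 0 );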


stramit
Dec 9, 2004
Ask me about making games instead of gains.
My awesome shadow mapping tutorial (in OpenGL)

Disclaimer: I whipped this up pretty quickly from some old source code. There are many optimisations I can see just from looking through the source, etc., but it's a pretty good starting point for getting shadow mapping up and running.

Okay, to start with there are three main steps to getting shadow maps working. Each is important, and it's best to tackle the problems incrementally. Before you begin you really need to get yourself acquainted with the coordinate spaces you will be using for texture projection and the like. Ensure you understand the notions of Model Space, World Space, Eye Space, and Texture Space, and how all the matrix transformations relate, before trying to delve into texture projection.

The three main steps to getting shadow mapping working are:

1) Texture projection - Projection is what aligns the shadow with the target surface. The best way to start is just by projecting a generic texture onto a surface.
2) Depth render - Rendering a depth buffer from the light's perspective for use in the depth comparison.
3) Depth comparison - Performing the comparison so that surfaces which are in shadow are not coloured.

The example I am going to run through here is quite simple: it uses a perspective projection, there are no real lights in the scene, and no filtering is applied to the shadow, so it has rough edges.

Texture projection
Things you will need to get this working:

1) A working scene
2) A nice camera / transform system which gives access to the transform and projection matrices for the scene.

Texture projection is a relatively simple concept. Much like a camera, a projector can be considered to have a frustum; that is, it has a volume which it projects into. Because of this a projector has a projection matrix, just like a camera. This matrix determines the volume and type of the projection. A projector can project in either orthographic or perspective mode, just like a camera. For information on how the projection matrices work, check the OpenGL specification if you don't already know.

In the following example, two things happen:

1) The scene is rendered using standard textures.
2) The scene is rendered again using the texture projection with an additive blend. Where the texture is black no colour is applied; where the texture is coloured, colour is additively blended in.

code:
  public void render() {
    //Clear FBO's
    glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );

    //Pass 1, render scene
    textureShader.bind();
    GL20.glUniformMatrix4( basicProjectionMatrixUniform, false, Matrix4fToFloatBuffer( context.getProjectionMatrix() ) );
    GL20.glUniformMatrix4( basicCameraMatrixUniform, false, Matrix4fToFloatBuffer( testCamera.getCameraMatrix() ) );
    GL20.glUniform1i( basicDiffuseTextureUniform, 0 );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( basicModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, actor.getTexture().textureId );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
    }
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind();

    //Pass 2, render projector
    glEnable( GL_BLEND );
    glBlendFunc( GL_ONE, GL_ONE ); // additive blending
    textureProjectionShader.bind();
    GL20.glUniformMatrix4( projProjectionMatrixUniform, false, Matrix4fToFloatBuffer( context.getProjectionMatrix() ) );
    GL20.glUniformMatrix4( projCameraMatrixUniform, false, Matrix4fToFloatBuffer( testCamera.getCameraMatrix() ) );
    GL20.glUniformMatrix4( projTexutureMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getTextureMatrix() ) );
    GL20.glUniform1i( projTextureUniform, 0 );
    glBindTexture( GL_TEXTURE_2D, projectionTexture.textureId );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( projModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
    }
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind();
    glDisable( GL_BLEND );
  }
Pass one is very simple: all it does is render the scene using each object's texture. Pass two is a bit more complex, but the non-shader part is very simple. All that happens is that an extra matrix is passed through to the shader. This matrix encodes the position of the projector, its rotation in the world, and the projection (i.e. it is pre-multiplied).

Before looking into the shader itself it's probably best to quickly cover the texture projection class. The code is documented and should provide the needed information. The only really interesting thing to note is that an offset matrix is required to convert points from perspective space (-1, 1) into the (0, 1) range for the texture lookup.

code:
/* Used for texture projection */
public class TextureProjector {
  //Convert from -1 -> 1 into 0 -> 1
  private static final Matrix4f OFFSET_MATRIX;
  static {
    float[] offset = {0.5f, 0, 0, 0.0f,
                      0, 0.5f, 0, 0.0f,
                      0, 0, 0.5f, 0.0f,
                      0.5f, 0.5f, 0.5f, 1.0f};
    OFFSET_MATRIX = new Matrix4f( offset );
  }

  private Vector3f worldPosition;
  private Vector3f lookAt;
  private Matrix4f viewMatrix;
  private Matrix4f perspectiveMatrix;

  public TextureProjector( @NotNull Vector3f worldPosition,
                           @NotNull Vector3f lookAt,
                           float nearPlane,
                           float farPlane,
                           float fovy,
                           float aspectRatio ) {
    this.worldPosition = worldPosition;
    this.lookAt = lookAt;
    viewMatrix = GLUtilities.getLookAtMatrix( worldPosition, lookAt, new Vector3f( 0.f, 1.f, 0.0f ) );
    perspectiveMatrix = GLUtilities.getPerspectiveMatrix( fovy, aspectRatio, nearPlane, farPlane, false );
  }

  //Accessors used by the depth render pass later on
  public Matrix4f getViewMatrix() {
    return viewMatrix;
  }

  public Matrix4f getPerspectiveMatrix() {
    return perspectiveMatrix;
  }

  //Construct the texture matrix (view, projection and the offset combined)
  public Matrix4f getTextureMatrix() {
    Matrix4f result = new Matrix4f();
    result.mul( viewMatrix, perspectiveMatrix );
    result.mul( result, OFFSET_MATRIX );
    return result;
  }
}
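For reference, constructing and using the projector looks something like this (the position and frustum values here are just example numbers, not the ones from my scene):
code:
//Projector sitting above the scene, looking at the origin
TextureProjector textureProjector =
    new TextureProjector( new Vector3f( 5f, 8f, 5f ),  //world position
                          new Vector3f( 0f, 0f, 0f ),  //look-at point
                          1.0f,                        //near plane
                          50.0f,                       //far plane
                          45.0f,                       //vertical field of view
                          1.0f );                      //aspect ratio

//Each frame, upload the combined matrix to the projection shader (as in pass 2 above)
GL20.glUniformMatrix4( projTexutureMatrixUniform, false,
                       Matrix4fToFloatBuffer( textureProjector.getTextureMatrix() ) );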
The shaders are really simple. The vertex shader transforms the vertex as per usual, but also calculates an additional texture coordinate, which is the projected coordinate for the texture:
code:
uniform mat4 cameraMatrix;
uniform mat4 modelMatrix;
uniform mat4 projectionMatrix;

uniform mat4 projectionTextureMatrix;

void main()
{
  gl_Position = projectionMatrix * cameraMatrix * modelMatrix * gl_Vertex;
  gl_TexCoord[0] = gl_MultiTexCoord0;
  gl_TexCoord[1] = projectionTextureMatrix * modelMatrix * gl_Vertex;
}
The fragment shader simply samples the texture at that projected coordinate:
code:
uniform sampler2D projectionTextureUnit;

void main()
{
	vec4 colour = vec4(0,0,0,1);

	//Only project where the fragment is in front of the projector
	if(gl_TexCoord[1].w > 0.0){
		colour.rgb = texture2DProj(projectionTextureUnit, gl_TexCoord[1]).rgb;
	}

	gl_FragColor = colour;
}

[Screenshot: the projected texture blended over the scene (full 1104x870 image).]


Writing the depth information from the light's perspective
The first thing you need to do is set up a FrameBufferObject with a depth texture. You probably won't need a colour buffer, as you won't be writing colour during the depth write stage.

This is a little code dump on how to do it; if you want an explanation I can provide one, but it's a bit outside the scope of this tutorial. The code simply creates a depth texture which can be written to and read from.
code:
      createdContext.depthTexture = TextureFactory.generateDepthStencilFBOTexture( width, height );
      EXTFramebufferObject.glFramebufferTexture2DEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT,
                                                      EXTFramebufferObject.GL_DEPTH_ATTACHMENT_EXT,
                                                      GL11.GL_TEXTURE_2D,
                                                      createdContext.depthTexture.textureId,
                                                      0 );

 @NotNull
  public static Texture generateDepthStencilFBOTexture( int width, int height ) {
    Texture loadedTexture = new Texture();
    loadedTexture.textureId = generateTextureId();
    loadedTexture.width = width;
    loadedTexture.height = height;

    GL11.glEnable( GL11.GL_TEXTURE_2D );
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, loadedTexture.textureId );

    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST );
    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST );
    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP );
    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP );
    GL11.glTexImage2D( GL11.GL_TEXTURE_2D,
                       0,
                       EXTPackedDepthStencil.GL_DEPTH24_STENCIL8_EXT,
                       width,
                       height,
                       0,
                       EXTPackedDepthStencil.GL_DEPTH_STENCIL_EXT,
                       EXTPackedDepthStencil.GL_UNSIGNED_INT_24_8_EXT,
                       (ByteBuffer) null );
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    registerTextureId( loadedTexture.textureId );
    return loadedTexture;
  }
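One thing the dump above skips: after attaching the depth texture it is worth asking the driver whether the FBO is actually complete, because an unsupported format will otherwise fail silently and you will just get garbage. Something along these lines (again a sketch, slot it in after the attach call):
code:
//Check FBO completeness after attaching the depth texture
int status = EXTFramebufferObject.glCheckFramebufferStatusEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT );
if( status != EXTFramebufferObject.GL_FRAMEBUFFER_COMPLETE_EXT ) {
  throw new RuntimeException( "FBO incomplete, status: 0x" + Integer.toHexString( status ) );
}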
Rendering the depth texture is extremely simple. Just render the scene as you would normally, but instead of using the camera's transform matrices, use the light's! (I have left colour writes on here, but you should not do that; see the sketch after this listing.)
code:
    //Generate depth texture from the lights (projectors) perspective
    shadowBufer.activate();
    glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
    textureShader.bind();
    GL20.glUniformMatrix4( basicProjectionMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getPerspectiveMatrix() ) );
    GL20.glUniformMatrix4( basicCameraMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getViewMatrix() ) );
    GL20.glUniform1i( basicDiffuseTextureUniform, 0 );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( basicModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, actor.getTexture().textureId );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
    }
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind();
    shadowBufer.deactivate();
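Since I mentioned it above: doing the pass properly means masking out colour writes around it, and you can also push the depth values back a touch with polygon offset instead of (or as well as) the bias in the comparison shader later. A sketch of the extra calls, wrapped around the pass above:
code:
//Depth-only rendering for the shadow pass
GL11.glColorMask( false, false, false, false );
GL11.glEnable( GL11.GL_POLYGON_OFFSET_FILL );
GL11.glPolygonOffset( 1.1f, 4.0f ); //tune the offset to your scene

// ...render the scene from the light as above...

GL11.glDisable( GL11.GL_POLYGON_OFFSET_FILL );
GL11.glColorMask( true, true, true, true );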
This will result in a depth texture that looks like this (I just bound it and rendered it as a full-screen aligned quad):


[Screenshot: the depth texture rendered as a full-screen quad (full 1104x870 image).]

stramit
Dec 9, 2004
Ask me about making games instead of gains.
(Was a bit long for one post :()

Doing the depth comparison
Now for the fun part: putting it all together! What we need to do here is render the scene from the camera's point of view, but perform a projection (using the depth texture) from the light's point of view. For each texel on the screen we then compare the value in the light's depth buffer to the distance from that texel to the light. If the value in the depth buffer is closer, that texel is in shadow; otherwise it is not.

I set up my pass as follows. Here we bind the depth texture as well as the colour texture, and the shader is written to only apply the diffuse colour if the texel passes the depth test:
code:
    //Pass 2, render projector
    glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
    textureProjectionShader.bind();
    GL20.glUniformMatrix4( projProjectionMatrixUniform, false, Matrix4fToFloatBuffer( context.getProjectionMatrix() ) );
    GL20.glUniformMatrix4( projCameraMatrixUniform, false, Matrix4fToFloatBuffer( testCamera.getCameraMatrix() ) );
    GL20.glUniformMatrix4( projTexutureMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getTextureMatrix() ) );
    GL20.glUniform1i( projTextureUniform, 0 );
    GL20.glUniform1i( projDiffuseTextureUnit, 1 );
    GL13.glActiveTexture( GL13.GL_TEXTURE0 );
    glBindTexture( GL_TEXTURE_2D, shadowBufer.depthTexture.textureId );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( projModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      GL13.glActiveTexture( GL13.GL_TEXTURE1 );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, actor.getTexture().textureId );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    }
    GL13.glActiveTexture( GL13.GL_TEXTURE0 );
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind(); 
The vertex shader is the same as before. The fragment shader is as follows:

code:
uniform sampler2D ShadowMap;
uniform sampler2D diffuseTextureUnit;

void main (void)
{
  vec3 color = texture2D(diffuseTextureUnit, gl_TexCoord[0].st).rgb;

  //Project the shadow map from the light to this texel (light depth value for this texel)
  float distance1 = texture2DProj( ShadowMap, gl_TexCoord[1] ).r;
  //Do a perspective divide to get the depth at this texel relative to the light
  vec3 distance2 = gl_TexCoord[1].xyz / gl_TexCoord[1].w;

  //Only perform test if inside the projector frustum
  if( distance2.x <= 1.0 && distance2.x >= 0.0
      && distance2.y <= 1.0 && distance2.y >= 0.0
      && gl_TexCoord[1].w >= 0.0){

    //distance2 is the actual depth of this texel relative to the light,
    //distance1 is the depth stored in the shadow map. If the stored depth is
    //closer, something else occludes this texel from the light, so it is in shadow.
    //Also offset by a tad to avoid z-fighting (shadow acne).
    if( distance2.z > distance1 + 0.001 ){
	    color *= 0.0;
    }
  }
	  
  gl_FragColor = vec4(color, 1);
}
This leads to a decent looking scene. All it needs now is some nicer lighting and a better shadow sampling mode and it should be good to go!


[Screenshot: the final shadowed scene (full 1104x870 image).]
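
On the 'better shadow sample mode': one cheap option is to let the hardware do the depth comparison, which on most cards also gives you free 2x2 percentage-closer filtering when the filter mode is LINEAR. This is just the texture setup side, as a sketch; the fragment shader would also need to switch to a sampler2DShadow / shadow2DProj lookup instead of the manual compare above:
code:
//Run once on the depth texture created earlier
GL11.glBindTexture( GL11.GL_TEXTURE_2D, shadowBufer.depthTexture.textureId );
GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL14.GL_TEXTURE_COMPARE_MODE, GL14.GL_COMPARE_R_TO_TEXTURE );
GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL14.GL_TEXTURE_COMPARE_FUNC, GL11.GL_LEQUAL );
GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR );
GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR );
GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );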


I'm sure you have heaps of questions about all this. It's a pretty decent technique, but there are a few little gotchas along the way. Good luck with the implementation! If anyone else has any 'tutorial' requests I can whip them up pretty easily. I have a decent code base with quite a few nifty little demos.

Some I could write up if anyone is interested are:
Screen space ambient occlusion
Deferred Shading
Using (and abusing) the Phong shading model
Stencil shadows
Normal / parallax mapping
A simple HDR pipeline

stramit
Dec 9, 2004
Ask me about making games instead of gains.

shodanjr_gr posted:

I have an issue with some fixed-pipeline code I am writing. I'm running the code on my Acer Aspire One, with a GMA945. The problem is that it looks like it is extremely fill-rate limited, and it feels like it's not being accelerated by the GMA. Now I understand the crappiness, but this thing runs Unreal Tournament fine at high resolutions, so it's not a lack of support on the hardware side.

I think that the header files, or the DLLs I am using, are not talking to the driver properly and thus everything runs in software mode. I can't track down an "Intel-specific" opengl32.dll anywhere...

Does anyone have any suggestions on where I can go from here?

Try DirectX :S It sounds like a DLL-related issue. Are you running UT in GL or DX? If the former, can you track down the DLL that it is using?

stramit
Dec 9, 2004
Ask me about making games instead of gains.

Spite posted:

I think it's absolutely horrible, even if you find a binding that doesn't create some over-engineered OO paradigm.

You are wrong. LWJGL is a great binding. It uses native buffers to imitate float pointers and the like, and the API is for the most part a direct binding. There are some Display classes and such to get things started that are not GL specific, but all the OpenGL code will be.

http://lwjgl.org/
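
To give an idea of what the native-buffer thing looks like in practice: anywhere C would take a float pointer, LWJGL takes a direct FloatBuffer, which is how the Matrix4fToFloatBuffer calls in the shadow mapping code above work. A tiny sketch (myMatrixUniform / matrixValues are placeholder names):
code:
//Where C OpenGL would take a const GLfloat*, LWJGL takes a direct FloatBuffer
FloatBuffer matrixBuffer = BufferUtils.createFloatBuffer( 16 );
matrixBuffer.put( matrixValues ); //matrixValues is a float[16]
matrixBuffer.flip();
GL20.glUniformMatrix4( myMatrixUniform, false, matrixBuffer );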

stramit
Dec 9, 2004
Ask me about making games instead of gains.

RussianManiac posted:

I also had a fun time using JOGL - https://jogl.dev.java.net/.

It is relatively easy to translate a C++ OpenGL program to this; you just have to use a GL object. It doesn't appear to be over-engineered or too heavy into OO, as it is still a state machine: you just make all your gl calls as function calls on an object.

From what I remember JOGL does a bit more than be a straight-up GL wrapper. LWJGL (the GL portion at least) is a straight-up wrapper: functions have the same names and take the same arguments, and it just pipes the commands down to native code via JNI. I think JOGL tries to do more, but it has been a few years (I jumped on the C++ bus).

e: I was wrong about JOGL, it is apparently 'just a binding'.

quote:

JOGL differs from some other Java OpenGL wrapper libraries in that it merely exposes the procedural OpenGL API via methods on a few classes, rather than attempting to map OpenGL functionality onto the object-oriented programming paradigm. Indeed, the majority of the JOGL code is autogenerated from the OpenGL C header files via a conversion tool named Gluegen, which was programmed specifically to facilitate the creation of JOGL.

stramit fucked around with this message at 08:51 on Jul 9, 2009

stramit
Dec 9, 2004
Ask me about making games instead of gains.
And whilst all this is 'good practice', if you are writing a little engine to play around with at home it's mostly overkill, and it's probably better to spend time writing features than optimising the poo poo out of context changes / synchronisation.

Not having a go, this is really important stuff. Just trying to say that if you are just learning to program graphics there are better ways to spend your time.

stramit
Dec 9, 2004
Ask me about making games instead of gains.

PDP-1 posted:

Are there any tips or tricks for debugging shader files?

I'm using DirectX/HLSL and finding writing custom shaders to be very frustrating since it isn't possible to step through the code like you can in a normal IDE and there's no way to print values to a debug file.

Are there any kind of emulators that will take a .fx file as input, let you specify the extern values and vertex declaration, and walk through the code one line at a time?

Use PIX for Windows. It comes with the DirectX SDK and allows you to do a variety of things: you can debug a pixel (step through each render call that wrote to that pixel), check render state, and a whole bunch of other features. If your application is in DX10 then it should be pretty easy to use. DX9 PIX is a bit flaky but will still get the job done.

I prefer these to specific 'shader' debuggers as you can check a variety of things and it is based on what YOUR application is doing.

stramit fucked around with this message at 00:09 on May 12, 2010

stramit
Dec 9, 2004
Ask me about making games instead of gains.

tractor fanatic posted:

I'm writing a little program with a top-down camera (like Diablo) and I want to add some shadows. Is shadow volume or shadow mapping a better option for this? I'm concerned shadow mapping will produce aliasing effects since the light sources will be close to the ground but cast long shadows, but I've read in this thread that shadow volumes are more or less unused these days?

There are some specific situations where shadow volumes are useful but for the most part they are not really used. You can filter your shadow maps to reduce aliasing effects, or increase the size of your shadow map buffer. If they are good enough for AAA titles then they should be good enough for you ;)


stramit
Dec 9, 2004
Ask me about making games instead of gains.
I found some awesome OpenCL tutorials here:
http://macresearch.org/opencl

Subscribe in iTunes and they have video presentations with slides and everything... reminds me of being back at uni. Oh how times change!

If you know anything about the GPU I would skip the first one and start on the second. Things don't really start to get interesting till the third, but the second goes over how OpenCL works. (The first lectures just explain what GPGPU is, etc.)

I'm going to do some particle simulations or a raytracer in GPGPU when my new Mac arrives, I think.

Yay :D
