OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

So what are some good fora for getting OGL help?
The OpenGL developer boards.
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=cfrm&c=3

#darkplaces and #neither on AnyNet have a bunch of dabblers in this sort of thing, including myself. Being able to bounce ideas off of other people has helped a shitload.

It's getting harder to recommend anything for OpenGL development because it's become so much more of a pain in the rear end to use. There are at least two ways to do everything now, which means it's really easy to get led into doing things that are completely out of date. And GLUT still blows.

I found that modding existing stuff was a much better starting point: Quake 3's renderer, or QFusion's (which is roughly the same thing), is easy to get started in because they're not using a lot of the really new stuff, but are still mostly based on design principles that still apply.


quote:

What I want to do now is visualize the light's depth buffer (render it to a quad). And I am a bit clueless as to how I can read a GL_DEPTH_COMPONENT texture from inside a shader (using a texture sampler).
You can't read depth values in GLSL; you can only compare them. There are performance reasons behind this; let's just say depth doesn't have to be treated as a linear scale by the card.
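
For debugging it can help to remember what "not linear" means here: with a standard perspective projection, the stored value is hyperbolic in eye-space distance. A minimal C-style helper for linearizing it (a sketch; n and f are assumed to be your near and far plane distances):
code:
/* Sketch: convert a [0,1] depth-buffer value back to a linear eye-space
   distance, assuming a standard perspective projection with near plane
   n and far plane f. */
float linearize_depth(float d, float n, float f)
{
    float z_ndc = 2.0f * d - 1.0f;  /* [0,1] window depth -> [-1,1] NDC */
    return (2.0f * n * f) / (f + n - z_ndc * (f - n));
}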

GLSL depth textures need to be sampler2DShadow. Sampling them with texture2D does not work on all hardware, so don't do it.

You need to set GL_TEXTURE_COMPARE_MODE to GL_COMPARE_R_TO_TEXTURE_ARB (do not leave it as GL_NONE!) and GL_TEXTURE_COMPARE_FUNC to whatever you want in the texture environment for the depth texture sampler. The sampler type needs to be sampler2DShadow. You can compare another depth value against it by using shadow2DProj(shadowSampler, coord).

(coord.s, coord.t) = the location on the shadow image to use. I think it divides those two by the W as well, to let you use a projection matrix to calculate it. The R coordinate is the depth value you're comparing against.

shadow2DProj returns a vec4 full of 0's if the comparison fails (according to the texture compare function you specify), and full of 1's if it passes.
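
For reference, the C-side setup is just a couple of glTexParameteri calls on the depth texture. A sketch, assuming depthTex is your GL_DEPTH_COMPONENT texture (GL_LEQUAL is just an example compare func):
code:
/* Sketch: enable depth comparison on a depth texture so the GLSL
   sampler2DShadow / shadow2DProj path works.  depthTex is hypothetical. */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
/* GLSL side: uniform sampler2DShadow shadowMap;
   float lit = shadow2DProj(shadowMap, coord).r; */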

Most ATI hardware doesn't filter depth textures, so if you have an ATI card and it's giving you jaggy, ugly shadows, it's not your fault.


If you need to visualize the depth, then you'll need to render to a non-depth format. I think the only alternative is to read the depth values using glReadPixels, which is slow as gently caress, but will obviously work for debugging.
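
A debug-only sketch of the glReadPixels route (width/height assumed to match your viewport):
code:
/* Slow debug path: read the whole depth buffer back to the CPU. */
GLfloat *depth = malloc(width * height * sizeof(GLfloat));
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
/* ...dump to an image, or upload as a luminance texture and draw a quad... */
free(depth);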

OneEightHundred fucked around with this message at 06:07 on Aug 4, 2008


shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

If you need to visualize the depth, then you'll need to render to a non-depth format. I think the only alternative is to read the depth values using glReadPixels, which is slow as gently caress, but will obviously work for debugging.

Actually, I set the compare mode to GL_NONE, and then I managed to read it using a sampler2D and accessing the R channel.


I've sorta completed my implementation (although it was jaggy as hell, despite running it on Nvidia hardware). I am using a 16-sample dithering technique now, and the result is better, but not THAT much better...

Also, is it possible to do shadow-mapping with point lights? I suppose you could approximate it by giving the light a very large FOV, but my tests show that this doesn't give good results at all (it obviously decreases the shadowmap detail).


edit: also, agreed on OpenGL being a pain... I've been getting increasingly tempted to port my research code over to D3D and XNA, due to the vast amounts of documentation/tutorials and the IDE integration that help you get the technical stuff out of the way easily... Plus I would get to run my stuff on my 360!

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Actually, I set the compare mode to GL_NONE, and then I managed to read it using a sampler2D and accessing the R channel.
Okay, apparently you can read them if comparisons are turned OFF, and you can use shadow2DProj if comparisons are turned ON; doing otherwise makes it vomit. My bad.

quote:

Also, is it possible to do shadow-mapping with point-lights?
You'd have to make 6 shadowmaps and use a pseudo cubemap style solution to truly cover the full range. I say "pseudo" because GLSL has no support for depth texture cubemaps, so you'd have to fake it.

It's one of those things you probably want to fake anyway. Point lights that aren't used purely for decoration usually have a limited angle range where they affect surfaces worth shadowing, so it's best to just cap the FOV boundaries to that range. Or, if you REALLY want to cheat, cast the shadow isometrically using a single direction vector.
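
For completeness, the brute-force six-face version looks roughly like this (a sketch; bindShadowFace, lookAt and renderDepthPass are all hypothetical helpers, and each face uses a 90-degree FOV):
code:
/* Sketch: render one depth map per cube face around the light. */
static const GLfloat dirs[6][3] = {
    { 1, 0, 0}, {-1, 0, 0},
    { 0, 1, 0}, { 0,-1, 0},
    { 0, 0, 1}, { 0, 0,-1}
};
for (int face = 0; face < 6; ++face) {
    bindShadowFace(face);            /* one depth FBO/texture per face */
    lookAt(lightPos, dirs[face]);    /* 90-degree frustum down each axis */
    renderDepthPass();
}
/* At shading time, pick the face by the dominant axis of the
   light-to-fragment vector and do the comparison against that map. */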

quote:

edit: also, agreed on OpenGL being a pain... I've been getting increasingly tempted to port my research code over to D3D and XNA, due to the vast amounts of documentation/tutorials and the IDE integration that help you get the technical stuff out of the way easily... Plus I would get to run my stuff on my 360!
Just about everything in D3D9 is doable in OpenGL. D3D's data input handling (that stream source and vertex declaration poo poo) is worse, but otherwise it's generally sane. D3DX is very helpful, and the documentation on MSDN is certainly much easier to browse than the encyclopedia they call the OpenGL spec.

XNA requires using C# for better or worse.

stramit
Dec 9, 2004
Ask me about making games instead of gains.
My awesome shadow mapping tutorial (in OpenGL)

Disclaimer: I whipped this up pretty quickly from some old source code. There are many optimisations I can see from just looking through the source etc., but it's a pretty good starting point for getting shadow mapping up and running.

Okay, to start with, there are three main steps to getting shadow maps working. Each is important, and it's best to tackle the problems in an incremental way. Before you begin you really need to get yourself acquainted with the coordinate spaces which you will be using for texture projection and the like. Ensure you understand the notions of Model Space, World Space, Eye Space, and Texture Space, and how all the matrix transformations relate, before trying to delve into texture projection.

The three main steps to getting texture projection working are:

1) Texture projection - Projection is what is used to get a shadow aligned with the target surface. The best way to start is just by projecting a generic texture onto a surface.
2) Depth render - Rendering a depth buffer from the light's perspective for use as a depth comparison.
3) Depth comparison - Performing the comparison so that surfaces in shadow are not coloured.

The example I am going to run through here is quite simple: it is a perspective projection, there are really no lights in the scene, and no filtering is applied to the shadow, so it has rough edges.

Texture projection
Things you will need to get this working:

1) A scene that is working
2) A nice camera / transform system which allows access to the transform and projection matrices for the scene.

Texture projection is a relatively simple concept. Much like a camera, a projector can be considered to have a frustum; that is, it has a volume into which it projects. Because of this, a projector has a projection matrix like a camera. This matrix determines the volume and type of the projection. A projector can project in either orthographic or perspective mode, just like a camera. For information on how the projection matrices work, you should check the OpenGL specification if you don't already know.

In the following example, two things happen:

1) The scene is rendered using standard textures.
2) The scene is rendered again using the texture projection in an additive blend. Where the texture is black, no colour is applied. Where the texture is coloured, colour is additively blended in.

code:
  public void render() {
    //Clear FBO's
    glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );

    //Pass 1, render scene
    textureShader.bind();
    GL20.glUniformMatrix4( basicProjectionMatrixUniform, false, Matrix4fToFloatBuffer( context.getProjectionMatrix() ) );
    GL20.glUniformMatrix4( basicCameraMatrixUniform, false, Matrix4fToFloatBuffer( testCamera.getCameraMatrix() ) );
    GL20.glUniform1i( basicDiffuseTextureUniform, 0 );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( basicModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, actor.getTexture().textureId );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
    }
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind();

    //Pass 2, render projector
    glEnable( GL_BLEND );
    glBlendFunc( GL_ONE, GL_ONE ); // additive blending
    textureProjectionShader.bind();
    GL20.glUniformMatrix4( projProjectionMatrixUniform, false, Matrix4fToFloatBuffer( context.getProjectionMatrix() ) );
    GL20.glUniformMatrix4( projCameraMatrixUniform, false, Matrix4fToFloatBuffer( testCamera.getCameraMatrix() ) );
    GL20.glUniformMatrix4( projTexutureMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getTextureMatrix() ) );
    GL20.glUniform1i( projTextureUniform, 0 );
    glBindTexture( GL_TEXTURE_2D, projectionTexture.textureId );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( projModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
    }
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind();
    glDisable( GL_BLEND );
  }
Pass one is very simple: all it does is render the scene using the given object's texture. Pass two is a bit more complex, but the non-shader code is very simple. All that happens is that an extra matrix is passed through to the shader. This matrix determines the position of the projector, its rotation in the world, and the projection (i.e. it is pre-multiplied).

Before looking into the shader itself, it's probably best to quickly cover the texture projection class. The code is documented and should provide the needed information. The only really interesting thing to note is that an offset matrix is required to convert points from perspective space (-1, 1) into a range of (0, 1) for the texture lookup.

code:
/* Used for texture projection */
public class TextureProjector {
  //Convert from -1 -> 1 into 0 -> 1
  private static final Matrix4f OFFSET_MATRIX;
  static {
    float[] offset = {0.5f, 0, 0, 0.0f,
                      0, 0.5f, 0, 0.0f,
                      0, 0, 0.5f, 0.0f,
                      0.5f, 0.5f, 0.5f, 1.0f};
    OFFSET_MATRIX = new Matrix4f( offset );
  }

  private Vector3f worldPosition;
  private Vector3f lookAt;
  private Matrix4f viewMatrix;
  private Matrix4f perspectiveMatrix;

  public TextureProjector( @NotNull Vector3f worldPosition,
                           @NotNull Vector3f lookAt,
                           float nearPlane,
                           float farPlane,
                           float fovy,
                           float aspectRatio ) {
    this.worldPosition = worldPosition;
    this.lookAt = lookAt;
    viewMatrix = GLUtilities.getLookAtMatrix( worldPosition, lookAt, new Vector3f( 0.f, 1.f, 0.0f ) );
    perspectiveMatrix = GLUtilities.getPerspectiveMatrix( fovy, aspectRatio, nearPlane, farPlane, false );
  }

  //Construct the texture matrix
  public Matrix4f getTextureMatrix() {
    Matrix4f result = new Matrix4f();
    result.mul( viewMatrix, perspectiveMatrix );
    result.mul( result, OFFSET_MATRIX );
    return result;
  }
}
The shaders are really simple. The vertex shader transforms a vertex as per usual, but also calculates an additional texture coordinate, which is the projected coordinate for the texture:
code:
uniform mat4 cameraMatrix;
uniform mat4 modelMatrix;
uniform mat4 projectionMatrix;

uniform mat4 projectionTextureMatrix;

void main()
{
  gl_Position = projectionMatrix * cameraMatrix * modelMatrix * gl_Vertex;
  gl_TexCoord[0] = gl_MultiTexCoord0;
  gl_TexCoord[1] = projectionTextureMatrix * modelMatrix * gl_Vertex;
}
The fragment shader simply samples the texture at that coordinate:
code:
uniform sampler2D projectionTextureUnit;

void main()
{
	vec4 colour = vec4(0,0,0,1);

	if(gl_TexCoord[0].w > 0.0){
		colour.rgb = texture2DProj(projectionTextureUnit, gl_TexCoord[1]).rgb;
	}

	gl_FragColor = colour;
}

[Screenshot: the projected texture on the scene, 1104x870]


Writing the depth information from the light's perspective
The first thing you need to do is set up a FrameBufferObject with a depth texture. You probably won't need a colour buffer, as you won't be writing colour during the depth write stage.

This is a little code dump on how to do it; if you want an explanation I can provide one, but it's a bit out of the scope of this tutorial. The code simply creates a depth texture which can be written to and read from.
code:
      createdContext.depthTexture = TextureFactory.generateDepthStencilFBOTexture( width, height );
      EXTFramebufferObject.glFramebufferTexture2DEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT,
                                                      EXTFramebufferObject.GL_DEPTH_ATTACHMENT_EXT,
                                                      GL11.GL_TEXTURE_2D,
                                                      createdContext.depthTexture.textureId,
                                                      0 );

 @NotNull
  public static Texture generateDepthStencilFBOTexture( int width, int height ) {
    Texture loadedTexture = new Texture();
    loadedTexture.textureId = generateTextureId();
    loadedTexture.width = width;
    loadedTexture.height = height;

    GL11.glEnable( GL11.GL_TEXTURE_2D );
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, loadedTexture.textureId );

    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST );
    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST );
    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP );
    GL11.glTexParameteri( GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP );
    GL11.glTexImage2D( GL11.GL_TEXTURE_2D,
                       0,
                       EXTPackedDepthStencil.GL_DEPTH24_STENCIL8_EXT,
                       width,
                       height,
                       0,
                       EXTPackedDepthStencil.GL_DEPTH_STENCIL_EXT,
                       EXTPackedDepthStencil.GL_UNSIGNED_INT_24_8_EXT,
                       (ByteBuffer) null );
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    registerTextureId( loadedTexture.textureId );
    return loadedTexture;
  }
Rendering the depth texture is extremely simple. Just render the scene as you would normally, but instead of using the camera transform matrices, use the light's! (I have left colour writes on here, but you should not do that!)
code:
    //Generate depth texture from the lights (projectors) perspective
    shadowBufer.activate();
    glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
    textureShader.bind();
    GL20.glUniformMatrix4( basicProjectionMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getPerspectiveMatrix() ) );
    GL20.glUniformMatrix4( basicCameraMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getViewMatrix() ) );
    GL20.glUniform1i( basicDiffuseTextureUniform, 0 );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( basicModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, actor.getTexture().textureId );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
    }
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind();
    shadowBufer.deactivate();
This will result in a depth texture that looks like this (I just bound it and rendered it as a full-screen aligned quad):


[Screenshot: the depth texture drawn as a fullscreen quad, 1104x870]

stramit
Dec 9, 2004
Ask me about making games instead of gains.
(Was a bit long for one post :()

Doing the depth comparison
Now for the fun part: putting it all together! What we need to do here is render the scene from the camera's point of view, but perform a projection (using the depth texture) from the light's point of view. At each texel on the screen we then compare the value in the light's depth buffer to the distance from that texel to the light. If the value in the depth buffer is closer, then that texel is in shadow; otherwise it is not in shadow.

I set up my pass as follows. Here we bind the depth texture as well as the colour texture; the shader is written to only apply the diffuse colour if the texel passes the depth test:
code:
    //Pass 2, render projector
    glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
    textureProjectionShader.bind();
    GL20.glUniformMatrix4( projProjectionMatrixUniform, false, Matrix4fToFloatBuffer( context.getProjectionMatrix() ) );
    GL20.glUniformMatrix4( projCameraMatrixUniform, false, Matrix4fToFloatBuffer( testCamera.getCameraMatrix() ) );
    GL20.glUniformMatrix4( projTexutureMatrixUniform, false, Matrix4fToFloatBuffer( textureProjector.getTextureMatrix() ) );
    GL20.glUniform1i( projTextureUniform, 0 );
    GL20.glUniform1i( projDiffuseTextureUnit, 1 );
    GL13.glActiveTexture( GL13.GL_TEXTURE0 );
    glBindTexture( GL_TEXTURE_2D, shadowBufer.depthTexture.textureId );
    for( Actor actor : actors ) {
      GL20.glUniformMatrix4( projModelMatrixUniform, false, Matrix4fToFloatBuffer( actor.getCurrentTransformMatrix() ) );
      GL13.glActiveTexture( GL13.GL_TEXTURE1 );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, actor.getTexture().textureId );
      actor.getMesh().draw( GL11.GL_TRIANGLES );
      GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    }
    GL13.glActiveTexture( GL13.GL_TEXTURE0 );
    GL11.glBindTexture( GL11.GL_TEXTURE_2D, 0 );
    Shader.unbind(); 
The vertex shader is the same as before. The fragment shader is as follows:

code:
uniform sampler2D ShadowMap;
uniform sampler2D diffuseTextureUnit;

void main (void)
{
  vec3 color = texture2D(diffuseTextureUnit, gl_TexCoord[0].st).rgb;

  //Project the shadow map from the light to this texel (light depth value for this texel)
  float distance1 = texture2DProj( ShadowMap, gl_TexCoord[1] ).r;
  //Do a perspective divide to get the depth at this texel relative to the light
  vec3 distance2 = gl_TexCoord[1].xyz / gl_TexCoord[1].w;

  //Only perform test if inside the projector frustum
  if( distance2.x <= 1.0 && distance2.x >= 0.0
      && distance2.y <= 1.0 && distance2.y >= 0.0
      && gl_TexCoord[1].w >= 0.0){

    //distance2 is the actual depth at this texel; distance1 is the depth
    //stored in the projected shadow map. If we compare the two we can
    //figure out if the texel is occluded by another surface from the
    //light's point of view.
    //Also offset a tad to avoid shadow acne (no z-fighting)
    if( distance2.z > distance1 + 0.001 ){
	    color *= 0.0;
    }
  }
	  
  gl_FragColor = vec4(color, 1);
}
This leads to a decent-looking scene. All it needs now is some nicer lighting and a better shadow sampling mode, and it should be good to go!


[Screenshot: the final shadowed scene, 1104x870]


I'm sure you have heaps of questions about all this. It's a pretty decent technique, but there are a few little gotchas along the way. Good luck with the implementation! If anyone else has any 'tutorial' requests, I can whip them up pretty easily. I have a decent code base with quite a few nifty little demos.

Some I could write up, if anyone is interested, are:
Screen space ambient occlusion
Deferred Shading
Using (and abusing) the Phong shading model
Stencil shadows
Normal / Parallax mapping
A simple HDR pipeline

shodanjr_gr
Nov 20, 2007

Stramit posted:

Deferred Shading
HDR



I'd actually love a brief how-to on Multiple Render Targets in OGL, since I'd like to build a basic deferred renderer at some point.

Also, what are you guys using as a scenegraph/model loading/camera managing framework? I don't want to do anything too complicated; my aim is basically to load some models, move them/the camera around, and animate them (applying some shader effects on them, of course), and this has been getting kinda clunky using pure OGL.

Entheogen
Aug 30, 2004

by Fragmaster
What is a good OpenGL way to have translucency for multiple objects? Perhaps I can turn off the depth test and set this blend function?

code:
glBlendFunc( GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
I use this and it works somewhat well for many objects, but there are certain artifacts produced. Should I also render from back to front? The problem with that is that I would like to use display lists, and how would I render from back to front while also having a rotating camera?

haveblue
Aug 15, 2005



Toilet Rascal

Entheogen posted:

What is a good OpenGL way to have translucency for multiple objects? Perhaps I can turn off the depth test and set this blend function?

code:
glBlendFunc( GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
I use this and it works somewhat well for many objects, but there are certain artifacts produced. Should I also render from back to front? The problem with that is that I would like to use display lists, and how would I render from back to front while also having a rotating camera?

Yes, you usually need to render from back to front for 100% correct blending. There's no easy way around this; you need to depth-sort your transparent objects every frame. This isn't too expensive, since you only care about depth comparisons, not the actual values, so a distance-squared test works.
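
A minimal comparator sketch (assuming a hypothetical Object struct with a pos field, and camPos as the eye position):
code:
/* Sketch: back-to-front sort on squared distance; no sqrt needed
   because we only compare. */
static float distSq(const float p[3])
{
    float dx = p[0] - camPos[0], dy = p[1] - camPos[1], dz = p[2] - camPos[2];
    return dx*dx + dy*dy + dz*dz;
}
static int backToFront(const void *a, const void *b)
{
    float da = distSq(((const Object *)a)->pos);
    float db = distSq(((const Object *)b)->pos);
    return (da < db) - (da > db);  /* farther objects sort first */
}
/* qsort(objects, numObjects, sizeof(Object), backToFront); */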

If you're worried about polygons within display lists drawing in the wrong order, turning on backface culling will get rid of 99% of that.

vvvvvvvv Yes, but rejecting per polygon is almost certainly faster.

Entheogen
Aug 30, 2004

by Fragmaster

sex offendin Link posted:

Yes, you usually need to render from back to front for 100% correct blending. There's no easy way around this; you need to depth-sort your transparent objects every frame (if you're worried about polygons within display lists drawing in the wrong order, turning on backface culling will get rid of 99% of that).

I am actually rendering this using custom shaders. For that, should I just discard a fragment that is not front-facing?

^^^^^^^^^^^^^^^^^^^^ How do I reject by polygon? Am I just missing some simple OpenGL call?

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through

Entheogen posted:

How do I reject by polygon? Am I just missing some simple OpenGL call?

I think he means not worrying about which way the polygons face and setting a CullMode, as the graphics card is faster at rejecting back faces than whatever you'd use to determine front-facing polygons yourself.

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
Got an OpenGL question I would like to have answered, to see in which direction I need to research.

Here's the basic setup:

I'm, for now, trying to display a grid consisting of cubes. There's not much I can do about reducing the number of elements here, since each cube can have a number of appearances and adjacent cubes rarely share the same appearance.

Here are a few examples.

The maximum number of cubes is 768x768x96 (width x length x height). Right now I have a very simplistic method for doing it in immediate mode. (The images up there are from another dude, though, and right now I'm not entirely sure how he does it.)

My question here is: what would be the method to do this that gives the most performance?

heeen
May 14, 2005

CAT NEVER STOPS

Mithaldu posted:

I'm, for now, trying to display a grid consisting of cubes.

Subdivide the world into chunks of, say, 32x32xdepth cubes.
Collect all cubes of equal appearance in each chunk into a display list/VBO.

Cull chunks against the view frustum, and only render the visible ones:
for all types of cubes:
render all cubes of the current type from all visible chunks sequentially.
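
In C-ish pseudocode the render loop would look something like this (a sketch; Chunk, chunkVisible, bindTextureForType and the per-type lists are all assumed):
code:
/* Sketch: one texture bind per cube type, frustum-culled chunks. */
for (int t = 0; t < numCubeTypes; ++t) {
    bindTextureForType(t);
    for (int c = 0; c < numChunks; ++c) {
        if (chunkVisible(&chunks[c]))              /* frustum cull */
            glCallList(chunks[c].listForType[t]);  /* cubes of type t */
    }
}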

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
It would be 32x32x1 chunks then, as I want to be able to slice layers off the top one by one, so one can easily see what's going on inside the mountain.

In any case, thanks a lot! You really showed me something I missed. :) (I was simply using one big texture previously, with the world being subdivided into 16x16x1 chunks and the texture being applied by moving it around based on offsets.)

Can I get some more info on the advantages/disadvantages of display lists vs. VBOs?

shodanjr_gr
Nov 20, 2007
Out of curiosity, what are you making, exactly?

That looks like a plenty cool engine for an old school Strategy RPG :D

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
I originally started to write a 3D visualizer in Perl for maps from Dwarf Fortress. Some other dude, however, was also working on one, only he did it in C++ and was a bit faster than I was, so I completely forgot about the 3D part and concentrated on writing the part of the program that extracts the map from the memory of the running program.

Now I'm finding that the will to poke at this is flagging: I can get out more info now, but I can't visualize it, since I don't have the source to his engine, nor am I really fluent in C++.

As such, I'm looking to move forward and finish implementing my 3D viewer in Perl, and would simply like to hear what a good way to get this done is, as I have only very BASIC experience with OpenGL.

This isn't really an engine, as it's mostly about viewing static content.

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
Don't give Dwarf Fortress graphics or I might get sucked into playing it forever.

Mithaldu
Sep 25, 2007

Let's cuddle. :3:

MasterSlowPoke posted:

Don't give Dwarf Fortress graphics or I might get sucked into playing it forever.

Oh god, you just gave me an idea. D:

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Since delphi3d.net is down, does anyone know what hardware supports the GL_EXT_framebuffer_sRGB extension?

ColdPie
Jun 9, 2006

I've got what feels like a noob question. I'm trying to learn about using shaders, and things are going fine, except that I can't get it to run on my laptop! My stuff works fine on my desktop with its real OpenGL 2.x graphics card, but my laptop only supports OpenGL 1.4. However, I read that shaders were originally introduced in 1.4 as an extension. And, as a matter of fact, glxinfo reveals extensions such as GL_ARB_fragment_program and GL_ARB_vertex_program which sound like what I want.

I found calls like glCreateShaderObjectARB() in a sample program on the internet. Trouble is, it's in an if(GLEW_VERSION_1_5) block, while I'm at 1.4. When I bypass this check and force it to use the 1.5 calls, it segfaults on the glCreateShaderObjectARB() call.

How can I use GL shaders in a OpenGL 1.4 implementation?

haveblue
Aug 15, 2005



Toilet Rascal
What hardware are you on? There may be a vendor-specific shader extension you can use instead of the ARB feature.

ColdPie
Jun 9, 2006

sex offendin Link posted:

What hardware are you on? There may be a vendor-specific shader extension you can use instead of the ARB feature.

I'm not even entirely sure. Looks like Mesa DRI Intel(R) 965GM 4.1.3002 according to glxinfo. Crummy integrated graphics :) I could dig up the actual card name if that's needed.

Also, a little more digging suggests I might need GL_ARB_vertex_shader etc. The phrase "shader" doesn't show up in glxinfo at all.

Edit:
Yeah, looks like I need:

GL_ARB_shader_objects
GL_ARB_shading_language_100
GL_ARB_vertex_shader
GL_ARB_fragment_shader

None of which are present. Am I SOL on shader support?

ColdPie fucked around with this message at 19:38 on Aug 24, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ColdPie posted:

I've got what feels like a noob question. I'm trying to learn about using shaders, and things are going fine, except that I can't get it to run on my laptop! My stuff works fine on my desktop with its real OpenGL 2.x graphics card, but my laptop only supports OpenGL 1.4. However, I read that shaders were originally introduced in 1.4 as an extension. And, as a matter of fact, glxinfo reveals extensions such as GL_ARB_fragment_program and GL_ARB_vertex_program which sound like what I want.

I found calls like glCreateShaderObjectARB() in a sample program on the internet. Trouble is, it's in an if(GLEW_VERSION_1_5) block, while I'm at 1.4. When I bypass this check and force it to use the 1.5 calls, it segfaults on the glCreateShaderObjectARB() call.

How can I use GL shaders in a OpenGL 1.4 implementation?
"program" refer to the early pseudo-assembly shaders, not GLSL. Last I knew, Intel has no support for GLSL on anything lower than the X3100, and I'm not sure if it's on the X3100 either.

If you're going to target Intel hardware, you may want to check out Cg, which will allow you to target ARB programs and GLSL shaders with the same code.

ColdPie
Jun 9, 2006

OneEightHundred posted:

"program" refer to the early pseudo-assembly shaders, not GLSL. Last I knew, Intel has no support for GLSL on anything lower than the X3100, and I'm not sure if it's on the X3100 either.

If you're going to target Intel hardware, you may want to check out Cg, which will allow you to target ARB programs and GLSL shaders with the same code.

Yeah, it's an Intel X3100 and it doesn't support shaders (unless I'm not understanding something). I guess I'll poke around and see what my alternatives are.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Well, as I said, the only real alternative for Intel hardware is ARB fragment/vertex programs, which are in a sort of pseudo-assembly language.

You can write shaders in that language directly (which isn't fun), or you can use Cg (which is an HLSL-like language) to target them. Cg also has a separate compiler (cgc) that dumps the high-level metadata in comments, so you don't have to use the Cg libraries to use the language.

There really aren't any other alternatives for programmable shaders until Intel decides to release a driver update.

heeen
May 14, 2005

CAT NEVER STOPS
Did anyone ever have to deal with continuing rendering after Windows resumes from standby or even hibernation?
I have the source for a foobar extension that stops working after I resume from standby. How can I query whether standby has occurred, so I can reclaim the device context or render context?

edit: using OpenGL

shodanjr_gr
Nov 20, 2007
Say I've got an OpenGL FBO with 4 color attachments, and I want to clear only one of those attachments (that is, do the equivalent of glClear(GL_COLOR_BUFFER_BIT)). How do I do that?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Say I've got an OpenGL FBO with 4 color attachments, and I want to clear only one of those attachments (that is, do the equivalent of glClear(GL_COLOR_BUFFER_BIT)). How do I do that?
Use glColorMask to disable the channels you don't want to clear, then use glClear.

shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

Use glColorMask to disable the channels you don't want to clear, then use glClear.

I don't want to mute certain channels, I want to mute whole buffers against clearing (for instance, my FBO has 4 color buffers + a depth buffer, and I only want to clear the first color buffer). How does glColorMask help me with that?



Also, I am setting glClearColor(1.0, 0.0, 0.0, 1.0) and then calling glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT), and it doesn't seem to be "going through" (I don't get a red background color in my color attachments). Any idea what's up with that? edit: Nevermind, fixed.

shodanjr_gr fucked around with this message at 01:37 on Sep 1, 2008

zzz
May 10, 2008

shodanjr_gr posted:

I don't want to mute certain channels, I want to mute whole buffers against clearing (for instance, my FBO has 4 color buffers + a depth buffer, and I only want to clear the first color buffer). How does glColorMask help me with that?

You need glDrawBuffer or glDrawBuffers to set up which ones to write to, I guess
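
Something like this, as a sketch (drawbuffers being whatever attachment list you normally pass to glDrawBuffers):
code:
/* Sketch: point output at one attachment, clear it, restore the rest. */
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glDrawBuffers(4, drawbuffers);  /* back to all four color attachments */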

pianoSpleen
Jun 13, 2007
printf("Java is for programmers with no %socks", "c");

shodanjr_gr posted:

I'd actually love a brief how-to on Multiple Render Targets in OGL, since I'd like to build a basic deferred renderer at some point.

Deferred shading in OpenGL is a little annoying, but reasonably easy once you get the idea:

- Disable clamping so we can set colors outside the range of 0..1:
code:
	glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB,GL_FALSE);
	glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB,GL_FALSE);
	glClampColorARB(GL_CLAMP_READ_COLOR_ARB,GL_FALSE);
- Create an FBO and bind it:
code:
glGenFramebuffersEXT(1, &fbo1);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo1);
- Create a depth renderbuffer and bind it:
code:
	glGenRenderbuffersEXT(1, &depthbuffer);
	glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthbuffer);
	glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,	GL_DEPTH_COMPONENT24, screenwidth, screenheight);
- Create your MRTs
code:
// usual stuff to create a rectangle texture here, and something like:
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV, ss.x, ss.y, 0, GL_RGBA, GL_HALF_FLOAT_ARB, NULL);
We probably want at least two MRTs: one for fragment position, one for the normal. If you want to do (diffuse) texture mapping as well, for example, you'd create another MRT for that.
- Bind your MRTs
code:
//Bind the two MRTs
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_RECTANGLE_ARB, frametexture[0], 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_RECTANGLE_ARB, frametexture[1], 0);
//Tell it to use two buffers
GLenum drawbuffers[2] = {GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT};
glDrawBuffers(2, drawbuffers);
- Do the geometry pass:
In your pixel shader, instead of setting a gl_FragColor, you do something like this:
code:
//normal
gl_FragData[0] = vec4(outnormal,0.0);

//Position
gl_FragData[1] = vec4(frageyepos,0.0);
- Select your MRTs to be used as textures
code:
//Nothing fancy here, we just use it as a texture as normal. Remember that
//rectangle textures do NOT use normalised texture coordinates, and that it's
//probably a good idea to unbind this from the FBO before we try to read from it.
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, frametexture[1]);
- Set to render to the screen
This time we read from the two textures, use them to calculate the pixel brightness, and spit that out onto the screen.

Hopefully that made sense :P feel free to ask if you have any questions.

shodanjr_gr
Nov 20, 2007
Thanks for the tut, mate (although it came a bit late; I figured it out myself a month or so ago and wrote my renderer :P)!

I got another question, which is quite interesting.


I am rendering a shadowmap, which is of course a depth buffer from the light's point of view. As far as I know, a GL_DEPTH_COMPONENT texture has 8 bits that are used as a stencil value when needed.

I want to be able to access those 8 bits and store a value of my own.

The thing is, I can't figure out the best and easiest way to do this.

If I assume that I use an FBO for my shadow map rendering, and attach the depth texture to the GL_DEPTH_ATTACHMENT point of that FBO, I can't see a way to access the stencil value and WRITE to it from inside a shader (the only depth-buffer-related output variable that GLSL seems to offer is gl_FragDepth, which is a float).

So my alternative is to use a depth buffer for, well, depth buffering, and then attach a separate GL_DEPTH_COMPONENT texture to one of the FBO's GL_COLOR_ATTACHMENT points. I then assume that I will be able to access the depth value via gl_FragData[i].x and the stencil value via gl_FragData[i].a (correct me if I am wrong). However, I am not sure about the actual depth value that needs to be stored so that the texture can be used properly as a shadowmap (using shadow2DProj, so that it can be filtered etc.). Does it have to be the eye-space Z value, the eye-space Z/W value, or something else?

Any ideas?


edit:

Extra question:

[Screenshot: shadow-mapped teapot with artifacts, 800x800]


Can anyone explain to me why this artifacting happens? I don't mean the small artifacts (which I assume will go away with extra filtering); I mean the handle of the teapot casting a shadow on the FRONT face of the teapot... it doesn't make much sense to me...

shodanjr_gr fucked around with this message at 05:27 on Sep 5, 2008

forelle
Oct 9, 2004

Two fine trout
Nap Ghost
Ignore, I'm a retard.

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
I'm guessing I'm being dumb about something here. These cubes are all flush to each other:


[Screenshot: cube grid with border artifacts, 1271x936]


How can I stop the borders there from loving up?

Source code here if anyone wants to look: http://code.google.com/p/dwarvis/source/browse/trunk/livevis/dwarvis.pl
Relevant functions: cbRenderScene ourDrawCube ourInit

Entheogen
Aug 30, 2004

by Fragmaster

Mithaldu posted:

I'm guessing I'm being dumb about something here. These cubes are all flush to each other:


[Screenshot: cube grid with border artifacts, 1271x936]


How can I stop the borders there from loving up?

Source code here if anyone wants to look: http://code.google.com/p/dwarvis/source/browse/trunk/livevis/dwarvis.pl
Relevant functions: cbRenderScene ourDrawCube ourInit

Turning on GL_BACK culling is one thing that fixed it in my program, but I was also using transparency.

haveblue
Aug 15, 2005



Toilet Rascal
You could also try pulling the rear clip plane in closer to increase the resolution of the depth buffer.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Looks like depth buffer precision issues. Consider using D3DRS_SLOPESCALEDEPTHBIAS and D3DRS_DEPTHBIAS with D3D, or glPolygonOffset with OpenGL, to nudge it. If you're reading the depth value in the shader, just offset it by a fixed amount.
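
A sketch of the OpenGL version (the factor/units values are typical starting points, not gospel):
code:
/* Sketch: bias depth during the shadow-map pass to fight acne. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);  /* slope-scaled bias, constant bias */
/* ...render the depth pass... */
glDisable(GL_POLYGON_OFFSET_FILL);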

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
Thanks guys, problems identified:

I had the front clipping plane FAR too close, because I dislike how things look when the camera clips through stuff and the front plane is too far away from it. That's irrelevant for this project though. :)

Additionally, the cubes are currently still rendered fully, which means that inside faces are still being rendered and confuse the depth buffer. Fixing that is slated for later, to keep the code simple for now.

I played around with glPolygonOffset too, but, well, the results seemed REALLY random. :o

midnite
May 10, 2003

It's all in the wrist

Mithaldu posted:

Thanks guys, problems identified:

I had the front clipping plane FAR too close, because I dislike how things look when the camera clips through stuff and the front plane is too far away from it. That's irrelevant for this project though. :)


Yeah, the near clipping plane really controls the accuracy of your depth buffer. It's best to keep it as far out as you can push it and just make sure your camera can't get that close to the geometry.

It's funny, because in Descent we did the rotation/projection math in such a way (this was before hardware transform and lighting) that the near plane was actually at zero. I loved Descent's transformation and projection system, but it just doesn't work out with hardware stuff.

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
Ok, one more odd question, and I don't even know if you guys can actually help me there. The following happens on a two-core system under WinXP, with Perl running the program.

I have it running fine and it generates 18 display lists at start.

As of now, every display list starts with glBegin(GL_QUADS); and ends with glEnd();. The display lists then get looped through in the render function. At this point the thing simply hums along and uses one core to render, as is normal.
code:
prepare:
    for my $z (0..$zcount){
        glNewList($z, GL_COMPILE);
        glBegin(GL_QUADS);    # OK, let's start drawing our planar quads.
        **stuff**
        glEnd();    # All polygons have been drawn.
        glEndList();
    }

------------
render:
    for my $z (0..$zcount){
    glCallList($z);
    }

However, when I take the begin and end out of the lists proper and put them before and after the loop, it suddenly goes weird. Both CPU cores go to 100%, all used by the Perl process, and the framerate noticeably drops. It doesn't even matter whether the list-calling loop is within begin/end or not. If it isn't, it won't render, but the CPU will still freak out.
code:
prepare:
    for my $z (0..$zcount){
        glNewList($z, GL_COMPILE);
        **stuff**
        glEndList();
    }

------------
render:
    glBegin(GL_QUADS);    # OK, let's start drawing our planar quads.
    for my $z (0..$zcount){
        glCallList($z);
    }
    glEnd();    # All polygons have been drawn.

Any ideas?


Spite
Jul 27, 2001

Small chance of that...
I'm not a display list whiz, but your two versions don't really do the same thing. I'm not sure display list contents really have a meaning outside of glBegin; by which I mean stuff like glVertex doesn't work outside of a begin/end pair. I'd use VBOs though, as they tend to be more efficient on modern hardware (they're the path the driver developers care about).
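
For comparison, a minimal GL 1.5-style VBO sketch (verts assumed to be a flat array of interleaved XYZ floats):
code:
/* Sketch: upload once, draw many times. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_QUADS, 0, numVerts);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);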
