pseudorandom name
May 6, 2007

There's also An intro to modern OpenGL.

Cat Plus Plus
Apr 8, 2011

:frogc00l:
Learning Modern 3D Graphics Programming. It uses newer OpenGL (3.2 Core Profile, if I recall correctly) than Durian's tutorial.

HolaMundo
Apr 22, 2004
uragay

sponge would own me in soccer :(
I recently found this http://open.gl/
It's neat, but the guy is still writing it as he goes, so some chapters aren't there yet.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Anyone know where I could find out what the minimum hardware for 32-bit float textures would be? Or really, any sort of table of which feature levels various cards support?

(R.I.P. Delphi3D)

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
I'm having problems with GLSL in an app I've been writing. I'm trying to implement cel shading, but I have problems with (I think) communicating variables between the vertex and fragment shaders.

Abridged vertex shader:
code:
smooth out vec4 theColour;
out float intensity;

uniform vec4 lightDir;

void main()
{
	gl_Position = blah;

//	vec4 L = normalize(vec4(0, 1, 1, 0) - vec4(0));
	vec4 L = normalize(lightDir);
	intensity = dot(L, normal);

	theColour = colour;
}
Fragment shader:
code:

smooth in vec4 theColour;
in float intensity;
void main()
{
	vec4 colour;

	if (intensity > 0.5)
		colour = vec4(0.7, 0.7, 0.7, 1.0);
	else
		colour = vec4(0.1, 0.1, 0.1, 1.0);

	gl_FragColor = colour;
}
I'm setting the lightDir value using glUniform, and I can verify that it has been set using glGetUniform. As I'm setting it to (0, 1, 1), I'd expect both "L = ..." statements to produce the same result.

What actually happens is that the version not referencing lightDir causes the draw call to fail somehow (none of the polygons are drawn), while the version using lightDir draws the polygons. glGetError returns 0.

I have no idea what's going on here or how to debug it.

Edit: apparently the problem was that I was using a second vertex shader which had different inputs defined (it was an earlier version of the one I was working with).

MarsMattel fucked around with this message at 00:36 on Sep 17, 2012

Max Facetime
Apr 18, 2009

I'm looking for something like a scenegraph API, except lower-level: something where you can set up your rendering passes, what is rendered to which intermediate texture, and how it is all brought together into a final image. Is there any API like this that models the OpenGL ES 2.0/DX9 programmable shader pipeline?

Mata
Dec 23, 2003
Has anyone written a hardware skinning instancing shader?
Everything in my game uses hardware instancing, so it would be quite elegant if I could get animations to work this way...
I'm in over my head a little here, but I'm wondering if it's possible to stream the animation data to the shader the same way I do with the instancing data?
I read up a little on Google about skinning shaders, and a common way to do it seems to be to compile the animation data to a texture. Are there advantages and disadvantages to doing it that way, and would it play nice with hardware instancing?

Edit: I found this which actually answers all of my questions... http://developer.download.nvidia.com/SDK/10.5/direct3d/Source/SkinnedInstancing/doc/SkinnedInstancingWhitePaper.pdf

Mata fucked around with this message at 15:57 on Oct 4, 2012

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mata posted:

Has anyone written a hardware skinning instancing shader?
Everything in my game uses hardware instancing, so it would be quite elegant if I could get animations to work this way...
I'm in over my head a little here, but I'm wondering if it's possible to stream the animation data to the shader the same way I do with the instancing data?
I read up a little on Google about skinning shaders, and a common way to do it seems to be to compile the animation data to a texture. Are there advantages and disadvantages to doing it that way, and would it play nice with hardware instancing?
The only other way to instance it would be to use the vertex streams, which is completely out on DX9 hardware since you'll run out of streams.

Using a texture should work fine.
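
For what it's worth, a minimal GLSL sketch of the texture approach with instancing (all names here are my own assumptions, not from the whitepaper; I'm using a buffer texture for simplicity, while the NVIDIA sample uses a 2D texture if I remember right, which works the same way with 2D texel coordinates). Each pose stores four consecutive texels per bone matrix, and a per-instance attribute carries the offset of that instance's current pose:

code:
#version 330 core

in vec3 position;
in vec4 boneWeights;
in ivec4 boneIndices;
in int poseOffset;              // per-instance: first texel of this instance's pose

uniform samplerBuffer boneTex;  // RGBA32F, 4 texels per bone matrix
uniform mat4 viewProj;

mat4 fetchBone(int bone)
{
    int base = poseOffset + bone * 4;
    return mat4(texelFetch(boneTex, base),
                texelFetch(boneTex, base + 1),
                texelFetch(boneTex, base + 2),
                texelFetch(boneTex, base + 3));
}

void main()
{
    // Blend the four weighted bone matrices, then skin the vertex.
    mat4 skin = boneWeights.x * fetchBone(boneIndices.x)
              + boneWeights.y * fetchBone(boneIndices.y)
              + boneWeights.z * fetchBone(boneIndices.z)
              + boneWeights.w * fetchBone(boneIndices.w);
    gl_Position = viewProj * (skin * vec4(position, 1.0));
}
The per-instance poseOffset is fed the same way as the rest of the instancing data (a divisor-1 attribute), so it plays nice with hardware instancing.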

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm trying to learn shading by implementing Phong shading in the fragment shader. I've tried to do it in a couple of ways, but even when I just copy the code from the book, my compiler still throws all sorts of type mismatch errors at me:

code:
varying vec4 v;
varying vec3 n;

void main()
{
    vec3 l = normalize(gl_LightSource[0].position - v);
    vec3 h = normalize(1 - normalize(v));

    float p = 16;                           /* Phong exponent */
    vec4 cr = gl_FrontMaterial.diffuse;     /* Diffuse */
    vec4 cl = gl_LightSource[0].diffuse;    /* Light */
    vec4 ca = vec4(0.2,0.2,0.2,1.0);        /* Ambient */ 

    vec4 color;
    
    if (dot(h,n) > 0)
        color = cr * (ca + cl * max(0, dot(n,l))) + cl * pow(dot(h,n),p);
    else
        color = cr * (ca + cl * max(0, dot(n,l)));

	gl_FragColor = gl_Color;
}
code:
0:6(7): error: type mismatch
0:7(37): error: Could not implicitly convert operands to arithmetic operator
0:0(0): error: no matching function for call to `normalize()'
0:0(0): error: candidates are: vec4 normalize(vec4)
0:0(0): error:                 float normalize(float)
0:0(0): error:                 vec2 normalize(vec2)
0:0(0): error:                 vec3 normalize(vec3)
0:0(0): error:                 vec4 normalize(vec4)
0:9(8): error: type mismatch
0:16(18): error: Could not implicitly convert operands to relational operator
0:16(18): error: if-statement condition must be scalar boolean
0:0(0): error: no matching function for call to `dot(vec3, vec4)'
0:0(0): error: candidates are: float dot(vec3, vec3)
0:0(0): error:                 float dot(float, float)
0:0(0): error:                 float dot(vec2, vec2)
0:0(0): error:                 float dot(vec3, vec3)
0:0(0): error:                 float dot(vec4, vec4)
0:0(0): error: no matching function for call to `max(int, )'
0:0(0): error: candidates are: float max(float, float)
0:0(0): error:                 vec2 max(vec2, vec2)
0:0(0): error:                 vec3 max(vec3, vec3)
0:0(0): error:                 vec4 max(vec4, vec4)
0:0(0): error:                 vec2 max(vec2, float)
0:0(0): error:                 vec3 max(vec3, float)
0:0(0): error:                 vec4 max(vec4, float)
What am I missing?

Max Facetime
Apr 18, 2009

The error "no matching function for call to `dot(vec3, vec4)'", which looks to be coming from "if (dot(h,n) > 0)", sounds like a mismatch between the varyings in the vertex shader and fragment shader. Post your vertex shader too.

Also try using vec4(1,1,1,1) instead of 1 in normalize(1 - normalize(v));

Boz0r
Sep 7, 2006
The Rocketship in action.
Thanks, I got it compiling now, but all the objects are just a flat color with no shading at all :(

Vertex Shader:
code:
varying vec4 v;
varying vec3 n;

void main()
{
	v = gl_ModelViewMatrix * gl_Vertex;
    n = normalize(gl_NormalMatrix * gl_Normal);

	/* Pass on projected coords */
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
code:
varying vec4 v;
varying vec3 n;

void main()
{
    vec3 l = normalize(gl_LightSource[0].position.xyz - v.xyz);
    vec3 e = normalize(-v.xyz);
    vec3 h = normalize(l - normalize(v.xyz));

    float p = 16.0f;                        /* Phong exponent */
    vec4 cr = gl_FrontMaterial.diffuse;     /* Diffuse */
    vec4 cl = gl_LightSource[0].diffuse;    /* Light */
    vec4 ca = vec4(0.2,0.2,0.2,1.0);        /* Ambient */ 

    vec4 color;
    
    if (dot(h, n) > 0.0f)
        color = cr * (ca + cl * max(0.0, dot(n,l)))
                + cl * pow(dot(h,n),p);
    else
        color = cr * (ca + cl * max(0.0, dot(n,l)));

    gl_FragColor = gl_Color;
}
Anything simple I missed?

EDIT: Never mind, turns out actually assigning the color to the fragment goes a long way.

Max Facetime
Apr 18, 2009

Win8 Hetro Experie posted:

I'm looking for something like a scenegraph API except lower-level, something where you can set up your rendering passes, what is rendered to which intermediate texture and how it is brought together to a final image. Is there any API like this that models the OpenGL ES 2.0/DX9 programmable shader pipeline?

I just read this, which pretty much answers my question about a unifying API. There is no such thing, especially when it comes to features beyond OpenGL ES 2.0/DirectX 9, but there are a few candidates and there's demand for such a thing.

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
Not sure if I'm missing something here, but after every Present(), I'm seeing stuff like this in the D3D debug output. Something about getting this spammed to me hundreds of times per second makes me think something is amiss.

D3D11 is freshly initialized, bare minimum pretty much.

code:
D3D11: INFO: Create RenderTargetView: Name="unnamed", Addr=0x00B8200C, ExtRef=1, IntRef=0 [ STATE_CREATION INFO #2097243: CREATE_RENDERTARGETVIEW ]
D3D11: INFO: Destroy RenderTargetView: Name="unnamed", Addr=0x00B8200C [ STATE_CREATION INFO #2097245: DESTROY_RENDERTARGETVIEW ]
edit: Also I'm finally adding shaders to my little framework.

What's the reasoning behind how the format part of D3D11_INPUT_ELEMENT_DESC is handled? I see stuff like this. I can't seem to find a clear explanation of how that format applies here...

code:
inDesc[0].SemanticName		= "POSITION";
inDesc[0].Format		= DXGI_FORMAT_R32G32B32_FLOAT;

slovach fucked around with this message at 08:19 on Oct 16, 2012

UraniumAnchor
May 21, 2006

Not a walrus.
This isn't strictly 3D related, but it has to do with textures so it might belong here.

How should the alpha channel be weighted when computing the PSNR of a compressed texture? Since the amount of perceptual error depends not only on the original alpha value's absolute intensity but also on what the texture is being placed on top of, it seems like it's not a constant or easy thing, and I can't seem to find anything online about it.

UraniumAnchor fucked around with this message at 19:18 on Oct 19, 2012

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

UraniumAnchor posted:

This isn't strictly 3D related, but it has to do with textures so it might belong here.

How should the alpha channel be weighted when computing the PSNR of a compressed texture? Since the amount of perceptual error depends not only on the original alpha value's absolute intensity but also on what the texture is being placed on top of, it seems like it's not a constant or easy thing, and I can't seem to find anything online about it.
If you're using premultiplied alpha, which you generally should, then the alpha and RGB channels will be compressed independently. In that case, you can just use your normal PSNR calculation (i.e. square error, or whatever), because the alpha channel is all linearly-scaling values that are proportional to whatever's behind it and the RGB channels are linearly-scaling additions.

If you're trying to compute a total PSNR value for some reason, it might be helpful to know what this is for, since that sounds like a bad idea; you'd have to weight the alpha error, and the value you choose for that weight is totally arbitrary.

Other things to keep in mind: perceptual error is greater between differences in dark intensities than bright ones (distortion computations are generally done in gamma space), and high-frequency hue/saturation distortion is significantly less perceptible than high-frequency intensity distortion.
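
To make that concrete, a minimal sketch of the per-channel calculation (my own illustration, assuming 8-bit interleaved RGBA that's already premultiplied):

code:
#include <cmath>
#include <cstddef>
#include <cstdint>

// Scores each channel of premultiplied RGBA independently, so no
// arbitrary alpha weight is needed.
void channelPSNR(const uint8_t *orig, const uint8_t *compressed,
                 size_t pixelCount, double psnrOut[4])
{
    double sqErr[4] = {0.0, 0.0, 0.0, 0.0};
    for (size_t i = 0; i < pixelCount * 4; ++i) {
        double d = double(orig[i]) - double(compressed[i]);
        sqErr[i % 4] += d * d;      // i % 4 selects R, G, B or A
    }
    for (int c = 0; c < 4; ++c) {
        double mse = sqErr[c] / double(pixelCount);
        psnrOut[c] = (mse == 0.0) ? INFINITY
                                  : 10.0 * std::log10(255.0 * 255.0 / mse);
    }
}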

OneEightHundred fucked around with this message at 19:37 on Oct 19, 2012

UraniumAnchor
May 21, 2006

Not a walrus.
I'm trying to generate a summary of the PSNR values of PVR/DXT/ETC textures so I can quickly notice when there's a significant difference between formats (or a value is unusually low) and investigate it without having to visually scan thousands of texture files. Right now the calculation completely ignores the alpha channel unless the alpha is entirely transparent (in which case the MSE is 0 for that pixel, for obvious reasons), so it's probably pretty inaccurate for textures with a significant alpha gradient. None of this is used to make any automated decisions; it's mostly to spot outliers.

UraniumAnchor fucked around with this message at 21:42 on Oct 19, 2012

shodanjr_gr
Nov 20, 2007
e: Nevermind, I'm a moron.

shodanjr_gr fucked around with this message at 06:20 on Oct 23, 2012

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
How exactly do fxc / compiled shaders work in DX11? Is the shader compiled to some device-independent format, which the driver then takes from there when you create / set it? Could compiling at run time potentially yield a more optimized end result, because it could be tailored to that machine's hardware?

Scaevolus
Apr 16, 2007

slovach posted:

How exactly do fxc / compiled shaders work in DX11? Is the shader compiled to some device-independent format, which the driver then takes from there when you create / set it? Could compiling at run time potentially yield a more optimized end result, because it could be tailored to that machine's hardware?

It's a device-independent bytecode. The graphics driver will convert it to native code as appropriate. The point is that your shaders aren't in an easily readable form, and you can skip the compilation phase at run time.
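
To illustrate the two paths, a hedged sketch (D3DCompile lives in d3dcompiler.h; error handling is omitted, and the source name and entry point are made up):

code:
#include <d3d11.h>
#include <d3dcompiler.h>   // link against d3dcompiler.lib

// Runtime path: produces the same device-independent bytecode fxc emits
// offline. Either way the driver JITs that bytecode to the card's native
// ISA when the shader is created, so compiling at run time doesn't buy
// any hardware-specific optimization; only the driver sees the hardware.
ID3D11VertexShader *createVS(ID3D11Device *device,
                             const char *src, size_t srcLen)
{
    ID3DBlob *bytecode = nullptr;
    ID3DBlob *errors = nullptr;
    D3DCompile(src, srcLen, "shader.hlsl", nullptr, nullptr,
               "VSMain", "vs_4_0", 0, 0, &bytecode, &errors);

    ID3D11VertexShader *vs = nullptr;
    device->CreateVertexShader(bytecode->GetBufferPointer(),
                               bytecode->GetBufferSize(), nullptr, &vs);
    bytecode->Release();
    return vs;
}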

Mata
Dec 23, 2003
What's the easiest way to change the way DirectX interpolates normals between my vertices? Here's what the lighting on my terrain looks like with everything but diffuse lighting turned off:
[screenshot: terrain lighting with everything but diffuse turned off]
It's really hard to read what the geometry actually looks like!

Here's what's going on with the normals:
[screenshot: the normals at the terrain vertices]
This kind of smoothing between vertices usually looks good, but I would prefer flat shading so the player can more easily tell what the terrain looks like. Then, in the next steps, I can apply normal maps and stuff to pretty it up a little.

Xerophyte
Mar 17, 2008

This space intentionally left blank
Interpolating things is what the rasterizer does, more or less. As far as I know you can't make it use some sort of per-primitive average instead, since it knows of no such thing. You need to alter the actual vertex data, and I can think of two options for doing that:

1: Give each triangle (where the incline varies, at least) unique vertices when generating the terrain mesh.
2: Have a geometry shader that calculates the face normal of each triangle (the cross product of two suitable triangle edge vectors) and outputs a triangle whose vertices are identical except that the normals are replaced with that face normal (see the sketch below).

I've never done either, so I have little idea what works best in practice. Geometry shaders are definitely neat if targeting hardware that supports them.
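
A minimal sketch of option 2 as a GLSL geometry shader (the question is about DirectX, but the HLSL version is structurally identical; the variable names here are made up):

code:
#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vsPosition[];   // view-space position from the vertex shader
out vec3 gsNormal;      // flat normal, identical for all three vertices

void main()
{
    // Face normal: cross product of two triangle edge vectors.
    vec3 n = normalize(cross(vsPosition[1] - vsPosition[0],
                             vsPosition[2] - vsPosition[0]));
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gsNormal = n;   // every vertex gets the face normal -> flat shading
        EmitVertex();
    }
    EndPrimitive();
}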

peak debt
Mar 11, 2001
b& :(
Nap Ghost

Mata posted:

What's the easiest way to change the way DirectX interpolates normals between my vertices? Here's what the lighting on my terrain looks like with everything but diffuse lighting turned off:
[screenshot: terrain lighting with everything but diffuse turned off]
It's really hard to read what the geometry actually looks like!

Here's what's going on with the normals:
[screenshot: the normals at the terrain vertices]
This kind of smoothing between vertices usually looks good, but I would prefer flat shading so the player can more easily tell what the terrain looks like. Then, in the next steps, I can apply normal maps and stuff to pretty it up a little.

Your triangle strips look really weird; there seem to be edges going right across 6+ other triangles. You must have messed up some indices in the loop that creates those strips. The same bug might be responsible for normals being assigned to different vertices than they're supposed to, which might explain the weird shading.

Mata
Dec 23, 2003

peak debt posted:

Your triangle strips look really weird; there seem to be edges going right across 6+ other triangles. You must have messed up some indices in the loop that creates those strips. The same bug might be responsible for normals being assigned to different vertices than they're supposed to, which might explain the weird shading.

I just assumed those were degenerate triangles? I mean, the normals seem to be at the right vertices.
Here's the code that creates the indices for a w * h geometry grid. I wrote it a long time ago and it's not very pretty, so it could be wrong...

code:
public static IndexBuffer GenerateIB(int w, int h) {

    /* the index size grows at a rate of 2n + 2n^2 (plus degenerate triangles) */
    int extraindices = 2 * ((h) * 4) - 4;
    int indexsize = 2 * (h * (2 + 2 * w)) + extraindices;

    int[] indices = new int[indexsize];

    /* We index the geometry vertices like a triangle strip, so conceptually the vertices are arranged like this:  
     *    tile1  tile2  etc
     *   0      2      4        each row is terminated by degenerate triangles (triangles that have no
     *   |    / |    / |  ...   area so they won't be drawn but let us advance the current position to the start
     *   |  /   |  /   | /      of the next row. This means for the first tile of each row, we take all four
     *   1      3      5        vertices, but for each subsequent tile we only need the rightmost 2 vertices 
     *                          since the other vertices are shared by its left neighbor. 
     * The quad vertex order is lowerleft, upperleft, lowerright, upperright. */

    int stride = 4;
    for (int indexc = 0, row = 0; indexc < indexsize; row++) {
        for (int i = (w + 1) * row; i < (w + 1) + (w + 1) * row; i++, indexc += 2) {
            indices[indexc] = i;
            indices[indexc + 1] = (w+1) + i;
        }

        // Insert degenerate triangles
        if (indexc >= indexsize) break; // don't need degenerates at the last row
        int degenerateposition = (w + 1) * (row + 1);
        indices[indexc] = (w) + degenerateposition;       // last vertex of previous row
        indices[indexc+1] = indices[indexc+2] = degenerateposition; // first vertex of next row repeated twice
        indices[indexc+3] = (w + 1) + degenerateposition; // 2nd vertex of next row
        indexc += stride;   // advance index position
    }
    IndexBuffer IB = new IndexBuffer(Engine.Graphics.Instance.Device, typeof(int), indexsize, BufferUsage.WriteOnly);
    IB.SetData(indices);
    return IB;
}

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
Two questions about OpenGL / GLUT:

I have a glutKeyboardFunc controlling the movement of a 'thing' on screen, but it's really lovely. It registers a held key the same way that a text window does - one initial press, then a delay, then a fast succession of repeats.

It seems there should be an easy way to do better than this, e.g. keyDown toggles a different state where we're now waiting for a keyUp to flip the toggle again, but I can't figure out how to do it.


Second, I'm trying to have two windows open at one time, where one is to be used by the user as a steering wheel and the other is to draw the main scene.

http://www.opengl.org/resources/libraries/glut/spec3/node17.html seems to be what I'm after, but I'm just not having any luck with it... Any pointers?

The Gripper
Sep 14, 2004
i am winner

Newf posted:

I have a glutKeyboardFunc controlling the movement of a 'thing' on screen, but it's really lovely. It registers a held key the same way that a text window does - one initial press, then a delay, then a fast succession of repeats.
glut has glutKeyboardUpFunc as well, so you can set keyup state with that. e: I think if you're using the arrow keys you'll need glutSpecialUpFunc since they don't have an ASCII code and I assume glutKeyboard[Up]Func is based on that.

Spatial
Nov 15, 2007

Newf posted:

I have a glutKeyboardFunc controlling the movement of a 'thing' on screen, but it's really lovely. It registers a held key the same way that a text window does - one initial press, then a delay, then a fast succession of repeats.

It seems there should be an easy way to do better than this, e.g. keyDown toggles a different state where we're now waiting for a keyUp to flip the toggle again, but I can't figure out how to do it.
Calling glutIgnoreKeyRepeat(true) will turn off the repeats. Then you'll get the pure key up/down events like you want.
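
Putting both suggestions together, a minimal sketch (ASCII keys only; arrow keys would go through glutSpecialFunc/glutSpecialUpFunc as noted above, and the key bindings are made up):

code:
#include <GL/glut.h>

static bool keyHeld[256];   // current held state per ASCII key

static void onKeyDown(unsigned char key, int x, int y) { keyHeld[key] = true; }
static void onKeyUp(unsigned char key, int x, int y)   { keyHeld[key] = false; }

static void onIdle(void)
{
    // Poll the held state once per frame instead of reacting to
    // auto-repeat events.
    if (keyHeld['a']) { /* steer left */ }
    if (keyHeld['d']) { /* steer right */ }
    glutPostRedisplay();
}

void initInput(void)
{
    glutIgnoreKeyRepeat(1);        // no repeat events, just down/up
    glutKeyboardFunc(onKeyDown);
    glutKeyboardUpFunc(onKeyUp);
    glutIdleFunc(onIdle);
}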

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm learning OpenGL and I need to know how to push an array of vertex coordinates like so: x,y,z,x,y,z,x,y,z instead of first making vec3's out of them.

Right now my buffer command looks like this:

code:
glBufferData(GL_ARRAY_BUFFER, kTriangleVertices.size()*sizeof(vec3), &(kTriangleVertices[0]), GL_STATIC_DRAW);
EDIT: Now that I think about it, do I even need to change anything other than giving it the other array?

DOUBLE EDIT: What if I have the normals in a separate array, how do I add them?

Boz0r fucked around with this message at 17:51 on Nov 11, 2012

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Boz0r posted:

EDIT: Now that I think about it, do I even need to change anything other than giving it the other array?
BufferData and company take raw memory so they don't give a poo poo if you're giving it floats in a packed structure or in a flat array.

quote:

DOUBLE EDIT: What if I have the normals in a separate array, how do I add them?
See above for adding them interleaved to the same buffer.

The key thing to realize is that VertexAttribPointer, DrawElements, etc. operate on the currently bound buffer. If you have stuff stored in a different buffer, bind that buffer and use VertexAttribPointer to make the attribute use it. If you have it interleaved in the same buffer, pass the appropriate offset to VertexAttribPointer and use the same stride.

i.e.
code:
struct MyPackedVertex
{
    Vec3 position;
    Vec3 normal;
    Vec2 texCoord;
};

#define OFFSET_PTR(t, f) (&reinterpret_cast<t *>(NULL)->f)

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyPackedVertex), OFFSET_PTR(MyPackedVertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, sizeof(MyPackedVertex), OFFSET_PTR(MyPackedVertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(MyPackedVertex), OFFSET_PTR(MyPackedVertex, texCoord));
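
And a sketch of the separate-buffer case described above (the buffer names and attribute locations are assumptions):

code:
#include <GL/glew.h>

// Positions and normals live in two different VBOs; bind each buffer
// before pointing the corresponding attribute at it, since
// glVertexAttribPointer captures the GL_ARRAY_BUFFER binding at call time.
void drawSeparateBuffers(GLuint positionVBO, GLuint normalVBO,
                         GLuint indexBO, GLsizei indexCount)
{
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);

    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, normalVBO);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);

    // The draw call then sources each attribute from its own buffer.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBO);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
}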

OneEightHundred fucked around with this message at 04:36 on Nov 12, 2012

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
Thanks Spatial and Gripper - those did the trick.

I'm stuck again though - I can't get depth testing to work as I understand it should.

From the FAQ:


1. Ask for a depth buffer when you create your window.
2. Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.
3. Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.
4. Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT.


Snips of my code from main():

code:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  // satisfies (1)
glutInitWindowSize(800,800);
glutInitWindowPosition(100,100);
mainWindow = glutCreateWindow("asdfasdfasdfasdfasdfasdfasdf");

glutDisplayFunc(displayMain);
initMain();
initMain():
code:
void initMain(void)
{
	
	glClearColor(1,1,1,0);
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(-100,100,-100,100,-100,100);
	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	glMatrixMode(GL_MODELVIEW);
//	glClearDepth(1);
	glEnable(GL_DEPTH_TEST);    // satisfies (2)
	glDepthFunc(GL_LEQUAL);
	glDepthRange(5, 500);  // satisfies (3) (I think)
	glLoadIdentity();
}
and from displayMain():
code:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // satisfies (4) (I think?)

...drawing things...

glutSwapBuffers();
glutPostRedisplay();
Any ideas? The entire code is a giant Fing mess, but if you'd like to see it, it's here: http://pastebin.com/zsi3yMJa

Boz0r
Sep 7, 2006
The Rocketship in action.
I received some sample code from my lecturer about deferred rendering. It compiles fine, but when I try to run it, it gives me the following errors:

code:
Unable to create OpenGL context
Unable to re-create GLFW window
Press any key to continue . . .
Anybody know what this means? A Google search didn't really help.

EDIT: I suspected it was my GPU's OpenGL support, but if I run glewinfo, it tells me I support OpenGL 4.0.

EDIT: Or maybe it doesn't; I've got a GTX 285, and according to NVIDIA's site it's only OpenGL 2.1.

EDIT: But according to Wikipedia it's 3.3.

Boz0r fucked around with this message at 23:24 on Nov 18, 2012

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
The code that made that error would be appreciated.

The Gripper
Sep 14, 2004
i am winner
Judging by the error, it looks like it creates the window and context properly, but has to re-create the context because of specific window hints (FSAA, OpenGL version, profile, and forward-compatibility requests are the ones I can find in win32_window.c).

It could honestly just be that your card doesn't support something you're requesting, you could try commenting out any glfwOpenWindowHint() calls and figuring out which one is the culprit.
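
For reference, a sketch of the GLFW 2.x hints that can trigger that path (the version numbers here are examples, not what the lecturer's code requests):

code:
#include <GL/glfw.h>

// Any of these can make window creation fail or force a re-create if the
// card/driver can't satisfy them; comment them out one at a time to find
// the culprit.
int openWindow(void)
{
    if (!glfwInit())
        return 0;

    glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);           // multisampling
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 4);   // asking for GL 4.1
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);   //   fails on a GL 3.3 card
    glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

    // width, height, RGBA bits, depth bits, stencil bits, mode
    return glfwOpenWindow(800, 600, 8, 8, 8, 8, 24, 8, GLFW_WINDOW);
}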

Boz0r
Sep 7, 2006
The Rocketship in action.

The Gripper posted:

It could honestly just be that your card doesn't support something you're requesting, you could try commenting out any glfwOpenWindowHint() calls and figuring out which one is the culprit.

Thanks, that did it. Apparently, it wanted to create a 4.1 window.

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm trying to add the color from one texture to the colors of another texture (containing my lights). I have the following code: http://ideone.com/mb9380 and shader: http://ideone.com/AWCisI. However, it only draws stuff from the tex texture. Can anyone spot my error?

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

Boz0r posted:

I'm trying to add the color from one texture to the colors of another texture (containing my lights). I have the following code: http://ideone.com/mb9380 and shader: http://ideone.com/AWCisI. However, it only draws stuff from the tex texture. Can anyone spot my error?

Maybe use glEnable(GL_TEXTURE_2D) for texture unit 1, if it's not done somewhere else? Or maybe call glClientActiveTexture() as well as glActiveTexture()?
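
Without seeing the linked code run, another classic culprit is the sampler uniforms: they default to unit 0, so if the second one is never set, both samplers read the same texture, which looks exactly like only tex being drawn. A sketch of the usual setup ("tex" is from the post; the "lights" name and the texture handles are made up):

code:
#include <GL/glew.h>

void bindBothTextures(GLuint program, GLuint sceneTex, GLuint lightTex)
{
    // One texture per unit...
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, lightTex);

    // ...and each sampler uniform pointed at its unit. An unset sampler
    // stays at the default of 0 and silently samples the first texture.
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "tex"), 0);
    glUniform1i(glGetUniformLocation(program, "lights"), 1);
}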

Floor is lava
May 14, 2007

Fallen Rib
Playing around with pure OpenGL to do some 2D stuff, coming from SDL. I'm using textured quads in an ortho view with the top left being 0,0. I'm having problems with clipping on the x and y axes. If a quad goes to the left of or above the drawing area, the image disappears. How do I deal with sprites on the edges of the drawing area? How do I go about adding a buffer area so that the quads won't be considered invalid?

I'm guessing I'm misunderstanding the different views.

Floor is lava fucked around with this message at 16:49 on Dec 4, 2012

zzz
May 10, 2008

floor is lava posted:

Playing around with pure OpenGL to do some 2D stuff, coming from SDL. I'm using textured quads in an ortho view with the top left being 0,0. I'm having problems with clipping on the x and y axes. If a quad goes to the left of or above the drawing area, the image disappears. How do I deal with sprites on the edges of the drawing area? How do I go about adding a buffer area so that the quads won't be considered invalid?

I'm guessing I'm misunderstanding the different views.

That shouldn't happen unless you're using unsigned integers for the coordinates. Normally, negative coordinates and clipping "just work".

Floor is lava
May 14, 2007

Fallen Rib

zzz posted:

That shouldn't happen unless you're using unsigned integers for the coordinates. Normally, negative coordinates and clipping "just work".

God damnit. Was using unsigned int mouse position. Figures it was something simple.

Floor is lava fucked around with this message at 18:54 on Dec 4, 2012

Van Kraken
Feb 13, 2012

Is there a way to have a GLSL geometry shader operate on two types of primitive at once? I'm writing one that duplicates both points and triangles and I'd rather not have two almost-identical shaders.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Van Kraken posted:

Is there a way to have a GLSL geometry shader operate on two types of primitive at once? I'm writing one that duplicates both points and triangles and I'd rather not have two almost-identical shaders.

You could use a pre-processor macro.
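
A sketch of the macro approach (compile the same source twice, injecting #define POINTS after the #version line for the point variant; the names here are made up):

code:
#version 150

#ifdef POINTS
layout(points) in;
layout(points, max_vertices = 2) out;
#define IN_COUNT 1
#else
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;
#define IN_COUNT 3
#endif

uniform vec4 duplicateOffset;   // where the duplicated copy goes

void main()
{
    // Emit the original primitive, then a displaced duplicate; the body
    // is identical for both input types thanks to IN_COUNT.
    for (int pass = 0; pass < 2; ++pass) {
        for (int i = 0; i < IN_COUNT; ++i) {
            gl_Position = gl_in[i].gl_Position
                        + (pass == 1 ? duplicateOffset : vec4(0.0));
            EmitVertex();
        }
        EndPrimitive();
    }
}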

  • Reply