Hobnob
Feb 23, 2006

Ursa Adorandum
Say the coordinates of your point in world space are given by the vector P, and it has an orientation vector n.

I'm not sure exactly what you mean by "draw a rectangle around it" but I'll assume you want the rectangle in a plane perpendicular to n. You'll also need an "up" direction perpendicular to n to orient the rectangle - we'll say that's u. If you don't have an "up" direction already we could calculate one based on the closest axis, but for now I'll assume you have one.

For ease, we'll assume that both n and u are normalized: |n| = |u| = 1.

You can calculate a "right" direction vector from a cross product:
r = n x u

Now r and u are the basis vectors of a set of local x-y coordinates in the plane of your rectangle.

Then for a rectangle with half-width w and half-height h (i.e. full size 2w x 2h), the positions of the four corners (in world space) are:

C1 = P + hu + wr
C2 = P + hu - wr
C3 = P - hu - wr
C4 = P - hu + wr
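In code, the construction above might look like this (a sketch with a hypothetical Vec3 type; w and h are the half-extents, and n and u are assumed normalized and perpendicular):

```cpp
#include <array>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

// r = n x u
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Corners of a rectangle centered at P, lying in the plane perpendicular
// to n, oriented by the "up" vector u. w and h are the half-extents.
std::array<Vec3, 4> rectangleCorners(const Vec3& P, const Vec3& n,
                                     const Vec3& u, double w, double h) {
    Vec3 r = cross(n, u); // "right" direction in the rectangle's plane
    return { P + u * h + r * w,    // C1
             P + u * h - r * w,    // C2
             P - u * h - r * w,    // C3
             P - u * h + r * w };  // C4
}
```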

I hope that's helpful.

Hobnob fucked around with this message at 17:48 on Sep 11, 2009

Sex Bumbo
Aug 14, 2004

Goreld posted:

It's probably a Gabor filter. There's a recent Siggraph paper that uses Gabor filters to create noise: here.

It's a well-written paper, which even includes some source code and a cool iphone app.

Thanks!

FSMC
Apr 27, 2003
I love to live this lie
I'm using opengles. I have my world (a flat board); that's all good. But the background is black. Is there a way to make the default emptiness a different color?

Hobnob
Feb 23, 2006

Ursa Adorandum

Unparagoned posted:

I'm using opengles. I have my world (a flat board); that's all good. But the background is black. Is there a way to make the default emptiness a different color?

In regular OpenGL this is set by glClearColor(R,G,B,A) - I don't think it will be any different in ES.

FSMC
Apr 27, 2003
I love to live this lie

Hobnob posted:

In regular OpenGL this is set by glClearColor(R,G,B,A) - I don't think it will be any different in ES.

Thanks, I knew there was something like that, but couldn't find it. I was looking under drawView instead of drawSetup...

haveblue
Aug 15, 2005



Toilet Rascal
I'm really having trouble wrapping my head around duplicating the OpenGL lighting model in a shader in ES 2.0.

So far, I think the procedure goes like this (for a directional light):

-Transform the light vector by the modelview matrix, normalize the result, to get the eyespace light vector.
-Transform the vertex normal by the normal matrix (which is the upper left 3x3 submatrix of the modelview matrix, neglecting nonuniform scaling), normalize the result, to get the eyespace normal.
-Take the dot product of the eyespace light vector and the eyespace normal, clamped to a minimum of 0, to get the diffuse contribution.
-Multiply the diffuse contribution by the light color to get the diffuse component of the fragment.
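Those four steps boil down to the math below, sketched CPU-side with hypothetical Vec3 helpers (the eye-space transforms are assumed already done):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Diffuse term for a directional light, after both the light vector and
// the normal have been transformed into eye space and normalized.
Vec3 diffuse(const Vec3& eyeLight, const Vec3& eyeNormal,
             const Vec3& lightColor) {
    double nDotL = std::max(dot(eyeNormal, eyeLight), 0.0); // clamp at 0
    return {lightColor.x * nDotL, lightColor.y * nDotL, lightColor.z * nDotL};
}
```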

I think something is wrong in the light vector transformation- when I display the eyespace normals they seem to be correct, but I can see the diffuse contribution of each vertex changing as I move the camera forward and back which doesn't seem like something that should happen.

I also can't find any good explanations of this online, if anyone has any (all the links I turn up are just OpenGL recipes, and following them doesn't seem to help).

haveblue fucked around with this message at 22:58 on Sep 23, 2009

Commander Keen
Oct 6, 2003

The doomed voyage of the Obsessivo-Compulsivo will haunt me forever

haveblue posted:

I'm really having trouble wrapping my head around duplicating the OpenGL lighting model in a shader in ES

- Eyespace light vector = Inverse camera/eye matrix * light vector
- Normal matrix - http://www.lighthouse3d.com/opengl/glsl/index.php?normalmatrix

Spite
Jul 27, 2001

Small chance of that...

haveblue posted:

-Transform the vertex normal by the normal matrix (which is the upper left 3x3 submatrix of the modelview matrix, neglecting nonuniform scaling), normalize the result, to get the eyespace normal.

It's the inverse transpose of the upper 3x3 of the modelview. Of course, for a rotation plus uniform scaling, the inverse transpose is just a scalar multiple of the original matrix, so after normalization it makes no difference.

Your problem is probably applying the translation to the light vector, instead of just the rotation/scale.
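A quick illustration of the point: treat the light direction as a vector (w = 0) and the translation column of a 4x4 modelview drops out; treat it as a point (w = 1) and the translation gets applied. A sketch with a hypothetical column-major Mat4:

```cpp
struct Vec4 { double x, y, z, w; };

// Column-major 4x4, as OpenGL stores it: m[column][row].
struct Mat4 { double m[4][4]; };

Vec4 mul(const Mat4& M, const Vec4& v) {
    Vec4 r;
    // Column 3 holds the translation; it only contributes when v.w != 0.
    r.x = M.m[0][0]*v.x + M.m[1][0]*v.y + M.m[2][0]*v.z + M.m[3][0]*v.w;
    r.y = M.m[0][1]*v.x + M.m[1][1]*v.y + M.m[2][1]*v.z + M.m[3][1]*v.w;
    r.z = M.m[0][2]*v.x + M.m[1][2]*v.y + M.m[2][2]*v.z + M.m[3][2]*v.w;
    r.w = M.m[0][3]*v.x + M.m[1][3]*v.y + M.m[2][3]*v.z + M.m[3][3]*v.w;
    return r;
}
```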

haveblue
Aug 15, 2005



Toilet Rascal

Spite posted:

Your problem is probably applying the translation to the light vector, instead of just the rotation/scale.

Looks like this was exactly it, thanks!

haveblue
Aug 15, 2005



Toilet Rascal
OK, now that eye space works, let's try tangent space :v:

I've implemented tangent space generation based on the code segment on this page, but a good chunk of the polygons are coming out as if the normal map is upside-down. Does this mean the handedness is not being handled properly, or is something else wrong?

Vertex shader:

code:
attribute vec4 position;
attribute vec4 color;
attribute vec2 texcoord;
attribute vec4 normal;
attribute vec4 tangent;
attribute vec4 binormal;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 normalMatrix;
uniform mat4 textureMatrix;
uniform vec4 lightVector;
uniform vec4 lightColor;

varying vec4 colorVarying;
varying vec2 tcVarying;
varying vec4 tangentLightVector;

void main()
{
 gl_Position = projectionMatrix * modelViewMatrix * position;

 vec4 normalVarying = normalize(normalMatrix * normal);

 vec4 tangentEye = normalize(normalMatrix * tangent);

 vec4 binormalEye = normalize(normalMatrix * binormal);

//transform eyespace light to tangent space
 tangentLightVector.x = dot(lightVector, tangentEye);
 tangentLightVector.y = dot(lightVector, binormalEye);
 tangentLightVector.z = dot(lightVector, normalVarying);

 tangentLightVector.w = 0.0;
 tangentLightVector = normalize(tangentLightVector);

 colorVarying = color;
 vec4 fullTexture = textureMatrix*vec4(texcoord.x, texcoord.y, 0, 1);
 tcVarying = vec2(fullTexture.x, fullTexture.y);
}
e: This is the result of dot(texture2D(normal map, tcVarying), tangentLightVector) in the fragment shader. Does anything obvious leap out at anyone?

haveblue fucked around with this message at 20:54 on Sep 29, 2009

Spite
Jul 27, 2001

Small chance of that...
Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something.

haveblue
Aug 15, 2005



Toilet Rascal

Spite posted:

Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something.

Maybe, but I think I've duplicated the method on the page I linked. The only change I made was generating the bitangent beforehand as a vertex attribute rather than computing it in the shader as they suggest.

This is the tangent/bitangent generator, in case there's something in it I missed:

code:
{
 //calculate the tangent of each vertex with this code I found on the internet
 
 GLfloat *tan1 = (GLfloat *)malloc(Mesh_VertexCount*3*sizeof(GLfloat));
 GLfloat *tan2 = (GLfloat *)malloc(Mesh_VertexCount*3*sizeof(GLfloat));
 int currentTangent = 0;
 GLfloat *vertex;
 GLfloat *texcoord;
 for(int i=0;i<Mesh_VertexCount;i+=3)
 {
  vertex = vertPointer+(i*elementStride);
  texcoord = vertPointer+(i*elementStride)+6;
  
  float x1 = vertex[3] - vertex[0];
  float x2 = vertex[6] - vertex[0];
  float y1 = vertex[4] - vertex[1];
  float y2 = vertex[7] - vertex[1];
  float z1 = vertex[5] - vertex[2];
  float z2 = vertex[8] - vertex[2];
  
  float s1 = texcoord[2] - texcoord[0];
  float s2 = texcoord[4] - texcoord[0];
  float t1 = texcoord[3] - texcoord[1];
  float t2 = texcoord[5] - texcoord[1];
  
  float r = 1.0/((s1*t2) - (s2*t1));
  
  Vector3 sdir = Vector3(((t2*x1) - (t1*x2))*r, ((t2*y1) - (t1*y2))*r, ((t2*z1) - (t1*z2))*r);
  Vector3 tdir = Vector3(((s1*x2) - (s2*x1))*r, ((s1*y2) - (s2*y1))*r, ((s1*z2) - (s2*z1))*r);
  
  memcpy(tan1+(i*3), &(sdir.x), sizeof(GLfloat)*3);
  memcpy(tan1+(i*3)+3, &(sdir.x), sizeof(GLfloat)*3);
  memcpy(tan1+(i*3)+6, &(sdir.x), sizeof(GLfloat)*3);
  memcpy(tan2+(i*3), &(tdir.x), sizeof(GLfloat)*3);
  memcpy(tan2+(i*3)+3, &(tdir.x), sizeof(GLfloat)*3);
  memcpy(tan2+(i*3)+6, &(tdir.x), sizeof(GLfloat)*3);
 }
 
 //now calculate the actual tangent of each vertex
 
 tangentPointer = (GLfloat *)malloc(Mesh_VertexCount*4*sizeof(GLfloat));
 bitangentPointer = (GLfloat *)malloc(Mesh_VertexCount*4*sizeof(GLfloat));
 
 for(int i=0;i<Mesh_VertexCount;i++)
 {
  Vector3 n = Vector3(vertPointer[(i*elementStride)+3], vertPointer[(i*elementStride)+4], vertPointer[(i*elementStride)+5]);
  Vector3 t = Vector3(tan1[i*3], tan1[(i*3)+1], tan1[(i*3)+2]);
  Vector3 t2 = Vector3(tan2[i*3], tan2[(i*3)+1], tan2[(i*3)+2]);
  
  //perform gram-schmidt orthogonalize
  Vector3 tangent = (t - (n * n.Dot(t)));
  tangent.Normalize();  
  memcpy(tangentPointer+(i*4), &(tangent.x), sizeof(GLfloat)*3);

  //calculate handedness
  tangentPointer[(i*4)+3] = ((n.Cross(t)).Dot(t2) < 0.0) ? -1.0 : 1.0;
  //calculate the bitangent by crossing the tangent with the normal and scaling by the handedness
  Vector3 bitangent = n.Cross(tangent)*tangentPointer[(i*4)+3];
/*  bitangent.x *= tangentPointer[(i*4)+3];
  bitangent.y *= tangentPointer[(i*4)+3];
  bitangent.z *= tangentPointer[(i*4)+3];*/
  memcpy(bitangentPointer+(i*4), &(bitangent.x), sizeof(GLfloat)*3);
  bitangentPointer[(i*4)+3] = 1.0;
 }
 
 free(tan1);
 free(tan2);
}

Sex Bumbo
Aug 14, 2004
Try creating test normal maps, like one that does nothing but straight normals, and you should get regular smooth shading. If you don't, something's wrong. Then work through the other directions to verify. I've probably hosed up tangent space normal mapping a bajillion times, feel free to PM me if you're having difficulty.

haveblue
Aug 15, 2005



Toilet Rascal

Stanlo posted:

Try creating test normal maps, like one that does nothing but straight normals, and you should get regular smooth shading. If you don't, something's wrong. Then work through the other directions to verify. I've probably hosed up tangent space normal mapping a bajillion times, feel free to PM me if you're having difficulty.

If I replace the normal map lookup with vec3(0,0,1), I do indeed get smooth shading. So it's got to be something involving texture coordinates.

Sex Bumbo
Aug 14, 2004
You should get normal smooth shading (i.e. smoothed normals shading) if you shove in (.5, .5, 1), not (0, 0, 1). Unless you're using a signed texture format?

Do the other components work?
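For context, the (.5, .5, 1) convention comes from unsigned 0..1 texture channels encoding signed -1..1 normals as c = (n + 1) / 2, so decoding is n = 2c - 1. A tiny sketch:

```cpp
struct Vec3 { double x, y, z; };

// Decode a tangent-space normal stored in an unsigned 0..1 texture:
// each channel c maps to 2*c - 1, so the "flat" normal (0, 0, 1) is
// stored as (0.5, 0.5, 1.0).
Vec3 decodeNormal(double r, double g, double b) {
    return {2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0};
}
```

With a signed texture format the channels already hold -1..1 and no remapping is needed, which is why that caveat matters.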

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
edit: Actually I'm confused.

Is this an indexed array of vertices, or do each consecutive three vertices represent one triangle?

OneEightHundred fucked around with this message at 06:46 on Oct 3, 2009

haveblue
Aug 15, 2005



Toilet Rascal
It's an interleaved array of consecutive triangles. At each stride there is a position followed by a normal followed by a texcoord (hence the elementStride+6 instead of +3).

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
The tangent/binormal generation code looks OK, though there is a possibility it's being mishandled elsewhere.

While the effect of this depends VERY heavily on the scale you're doing things at, I would strongly recommend avoiding oversized vector components, and swizzling away components you don't need so you don't accidentally operate on them.

i.e. something like this...

quote:

tcVarying = vec2(fullTexture.x, fullTexture.y);
... can be done like this:

quote:

tcVarying = fullTexture.xy;


The big problem is when you get stuff like this:

quote:

vec4 tangentEye = normalize(normalMatrix * tangent);
...
tangentLightVector.x = dot(lightVector, tangentEye);

The last row/column (depending on convention) of a transform matrix is generally (0,0,0,1), so the W component would get copied. The side-effect is that the W component of (normalMatrix * tangent) is non-zero, and will consequently affect the result of the normalize. That goes double for this case, where tangent.w may be -1 or 1.

The dot product may result in even more issues if lightVector.w is non-zero.


Generally speaking, you should use swizzling to remove components you're not using to prevent unintended side-effects. i.e. something like:

quote:

vec3 tangentEye = normalize((normalMatrix * tangent).xyz);
...
tangentLightVector.x = dot(lightVector.xyz, tangentEye);


Best thing you can do for now though is make a normal map where the direction things should be facing is unmistakable, i.e. put a white circle on a black background and just run your heightmap-to-normalmap filter of choice on it, and check whether the flips are actually inconsistent (as opposed to consistent, in which case you could just negate the bitangent or something).

haveblue
Aug 15, 2005



Toilet Rascal
I should be calculating the same handedness for each vertex of a polygon and if I don't then something is terribly wrong, correct?

e: Yes, something was terribly wrong.


code:
  vertex = vertPointer+(i*elementStride);
  texcoord = vertPointer+(i*elementStride)+6;
  
  float x1 = vertex[3] - vertex[0];
  float x2 = vertex[6] - vertex[0];
  float y1 = vertex[4] - vertex[1];
  float y2 = vertex[7] - vertex[1];
  float z1 = vertex[5] - vertex[2];
  float z2 = vertex[8] - vertex[2];
This did not match the layout of my vertex array. I correctly advanced by strides to get the base vertex pointer, but the 9 elements following it are not 3 3-component positions, it's a 3-component position, a 3-component normal, a 2-component texcoord, and a bit of the next vertex. After fixing that, all the rest fell into place.
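With that layout, the per-vertex offsets can be written down explicitly; a small sketch (names are illustrative, not from the original code):

```cpp
#include <cstddef>

// Interleaved layout described above: per vertex, 3 floats of position,
// 3 floats of normal, 2 floats of texcoord -> 8 floats per vertex.
const std::size_t kElementStride  = 8;
const std::size_t kPositionOffset = 0;
const std::size_t kNormalOffset   = 3;
const std::size_t kTexcoordOffset = 6;

// Position of vertex v (0..2) of triangle tri in an unindexed interleaved
// array: each vertex is a full stride apart, not 3 floats apart (which was
// the bug).
inline const float* trianglePosition(const float* verts,
                                     std::size_t tri, std::size_t v) {
    return verts + (tri * 3 + v) * kElementStride + kPositionOffset;
}
```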

haveblue fucked around with this message at 21:04 on Oct 7, 2009

MrPeanut
Mar 11, 2005

My word!
So I'm trying to build a ray tracer that applies translation, rotation, and scale to a sphere, then runs a typical tracing algorithm, but I'm running into a weird issue. It seems that the further I transform the sphere away from the center of the field of view, the more stretched it gets, always in the direction towards the center.

Anyone have any idea what could be making this happen? I've tried 2 different intersection solutions (both algebraic and geometric) and my tracing code works when the sphere is at (0,0,0).

Here is the code to my tracing algorithm, as I'm thinking that it's hidden somewhere in here:

code:
double pixelCenter[] = {vf->getTopLeftPixelCenterX(), vf->getTopLeftPixelCenterY(), vf->getTopLeftPixelCenterZ()};
	double scanLineStart[] = {vf->getTopLeftPixelCenterX(), vf->getTopLeftPixelCenterY(), vf->getTopLeftPixelCenterZ()};

	for(int i = 0; i < vf->getYRes(); i++)
	{
		//Start new scanline
		pixelCenter[0] = scanLineStart[0];
		pixelCenter[1] = scanLineStart[1];
		pixelCenter[2] = scanLineStart[2];
		for(int j = 0; j < vf->getXRes(); j++)
		{
			//Form ray from camera to pixel center
			double ray[] = {pixelCenter[0] - cam->getCamPosX(), pixelCenter[1] - cam->getCamPosY(), pixelCenter[2] - cam->getCamPosZ()};
			//Normalize ray
			double raynorm = sqrt(pow(ray[0],2)+pow(ray[1],2)+pow(ray[2],2));
			ray[0] = ray[0]/raynorm;
			ray[1] = ray[1]/raynorm;
			ray[2] = ray[2]/raynorm;

			double minDistance = 0;
			bool firstIntersect = true;
			int closestObject;

			for(int k = 0; k < numOfObj; k++)
			{
				//Note to Self: Add array of object functionality
				//Test for intersection; if distance = -1, then no intersection
				double distance = intersect(ray, sparr[k]);
				if(firstIntersect && distance != -1)
				{
					minDistance = distance;
					closestObject = k;
					firstIntersect = false;
					//cout << distance;
				}
				else if(distance < minDistance && distance != -1)
				{
					minDistance = distance;
					closestObject = k;
				}
			}
			//Test if within frustum 
			//Note: -1 value will not be within frustum; subject to change
			if(vf->withinFrustum(minDistance))
			{
				//Note to Self: Add array of object functionality
				determinePixelColor(minDistance, sparr[closestObject], ray, j, i);
			}
			else
			{
				vf->writePixelColor(j, i, (int)(rbk*255), (int)(gbk*255), (int)(bbk*255));
			}

			pixelCenter[0] += cam->getUX()*vf->getPixelWidth();
			pixelCenter[1] += cam->getUY()*vf->getPixelWidth();
			pixelCenter[2] += cam->getUZ()*vf->getPixelWidth();
		}
		scanLineStart[0] -= cam->getVX()*vf->getPixelHeight();
		scanLineStart[1] -= cam->getVY()*vf->getPixelHeight();
		scanLineStart[2] -= cam->getVZ()*vf->getPixelHeight();
	}
Of course this code is due at 11:59pm tonight so a speedy answer would be appreciated haha

MrPeanut
Mar 11, 2005

My word!
Actually here's my two different intersection algorithms. I tested their distance values and one reports fewer values than the other, yet only the geometric one produces the proper lighting.

code:
/*double intersect(double *ray, Sphere *s)
{
	double t[] = {cam->getCamPosX() - s->getX(), cam->getCamPosY() - s->getY(), cam->getCamPosZ() - s->getZ()};
	
	double a = (ray[0]*ray[0])+ (ray[1]*ray[1])+ (ray[2]*ray[2]);
	double b = 2*((t[0]*ray[0]) + (t[1]*ray[1]) + (t[2]*ray[2]));
	double c = ((t[0]*t[0]) + (t[1]*t[1]) + (t[2]*t[2])) - pow(s->getRadius(),2);

	double disc = b*b - 4*a*c;

	if(disc < 0){return -1;}

	double sqrtDisc = sqrt(disc);
	double quadratic;
	if(b < 0)
	{
		quadratic = (-b - sqrtDisc)/2.0f;
	}
	else
	{
		quadratic = (-b + sqrtDisc)/2.0f;
	}

	double t0 = quadratic / a;
	double t1 = c / quadratic;

	if(t0 > t1)
	{
		double temp = t0;
		t0 = t1;
		t1 = temp;
	}

	if(t1 < 0){return -1;}
	if(t0 < 0){return t1;}
	else{return t0;}
}*/	

double intersect(double *ray, Sphere *s)
{
	double OC[] = {s->getX() -cam->getCamPosX(), s->getY() - cam->getCamPosY(), s->getZ() - cam->getCamPosZ()};
	double c = ((OC[0]*OC[0]) + (OC[1]*OC[1]) + (OC[2]*OC[2]));

	bool inside = false;

	if(c < pow(s->getRadius(), 2))
	{
		inside = true;
	}

	double L = ((OC[0]*ray[0]) + (OC[1]*ray[1]) + (OC[2]*ray[2]));

	if(L < 0)
	{
		return -1;
	}

	double D = c - pow(L, 2);
	double HC = pow(s->getRadius(), 2) - D;

	if(HC < 0)
	{
		return -1;
	}

	double t;
	if(inside)
	{
		t = L - sqrt(HC);
	}
	else
	{
		t = L + sqrt(HC);
	}
	return t;
}
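For reference, the textbook geometric test usually returns the entry point L - sqrt(HC) when the ray origin is outside the sphere, and the exit point L + sqrt(HC) when it's inside, which is the opposite of the branch above. A minimal sketch (hypothetical Vec3 type; dir assumed normalized):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Geometric ray-sphere intersection. Returns the distance to the nearest
// hit in front of the origin, or -1 for a miss.
double intersectSphere(const Vec3& origin, const Vec3& dir,
                       const Vec3& center, double radius) {
    Vec3 OC = {center.x - origin.x, center.y - origin.y, center.z - origin.z};
    double c = dot(OC, OC);
    bool inside = c < radius * radius;

    double L = dot(OC, dir);          // projection of OC onto the ray
    if (!inside && L < 0) return -1;  // sphere is entirely behind us

    double HC = radius * radius - (c - L * L);
    if (HC < 0) return -1;            // ray passes beside the sphere

    // Outside: nearest hit is the entry point L - sqrt(HC);
    // inside: the only hit in front of us is the exit point L + sqrt(HC).
    return inside ? L + std::sqrt(HC) : L - std::sqrt(HC);
}
```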

Mata
Dec 23, 2003
I'm having some problems getting vertex buffer objects to work in windows...

At first I was getting undefined reference errors which I fixed by including glext and setting the GL_GLEXT_PROTOTYPES flag like this in my header file:
code:
#define GL_GLEXT_PROTOTYPES    //for vertex buffer objects in windows
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <GL/glext.h>
Which solved the undefined references, but now I get linker errors like such:

[Linker error] undefined reference to `glGenBuffers@8'
[Linker error] undefined reference to `glBindBuffer@8'
[Linker error] undefined reference to `glBufferData@16'

I use devcpp so I tried downloading the GLEW devpak and changing my headers around a little which gives me this error instead:

[Linker error] undefined reference to `_imp____glewGenBuffersARB'

EDIT: Fixed it....
Turns out opengl32.dll does not support extensions higher than v1.1 (i.e vertex buffer objects) and the workaround is something like this:
code:
#ifdef _WIN32
	PFNGLGENBUFFERSARBPROC pglGenBuffersARB = 0;                     // VBO Name Generation Procedure
	PFNGLBINDBUFFERARBPROC pglBindBufferARB = 0;                     // VBO Bind Procedure
	PFNGLBUFFERDATAARBPROC pglBufferDataARB = 0;                     // VBO Data Loading Procedure
	PFNGLBUFFERSUBDATAARBPROC pglBufferSubDataARB = 0;               // VBO Sub Data Loading Procedure
	PFNGLDELETEBUFFERSARBPROC pglDeleteBuffersARB = 0;               // VBO Deletion Procedure
	PFNGLGETBUFFERPARAMETERIVARBPROC pglGetBufferParameterivARB = 0; // return various parameters of VBO
	PFNGLMAPBUFFERARBPROC pglMapBufferARB = 0;                       // map VBO procedure
	PFNGLUNMAPBUFFERARBPROC pglUnmapBufferARB = 0;                   // unmap VBO procedure
	#define glGenBuffersARB           pglGenBuffersARB
	#define glBindBufferARB           pglBindBufferARB
	#define glBufferDataARB           pglBufferDataARB
	#define glBufferSubDataARB        pglBufferSubDataARB
	#define glDeleteBuffersARB        pglDeleteBuffersARB
	#define glGetBufferParameterivARB pglGetBufferParameterivARB
	#define glMapBufferARB            pglMapBufferARB
	#define glUnmapBufferARB          pglUnmapBufferARB
	

        // get pointers to GL functions
        glGenBuffersARB = (PFNGLGENBUFFERSARBPROC)wglGetProcAddress("glGenBuffersARB");
        glBindBufferARB = (PFNGLBINDBUFFERARBPROC)wglGetProcAddress("glBindBufferARB");
        glBufferDataARB = (PFNGLBUFFERDATAARBPROC)wglGetProcAddress("glBufferDataARB");
        glBufferSubDataARB = (PFNGLBUFFERSUBDATAARBPROC)wglGetProcAddress("glBufferSubDataARB");
        glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC)wglGetProcAddress("glDeleteBuffersARB");
        glGetBufferParameterivARB = (PFNGLGETBUFFERPARAMETERIVARBPROC)wglGetProcAddress("glGetBufferParameterivARB");
        glMapBufferARB = (PFNGLMAPBUFFERARBPROC)wglGetProcAddress("glMapBufferARB");
        glUnmapBufferARB = (PFNGLUNMAPBUFFERARBPROC)wglGetProcAddress("glUnmapBufferARB");
#endif	

Mata fucked around with this message at 07:33 on Nov 10, 2009

Sex Bumbo
Aug 14, 2004
It's been a while since I've used OGL, but isn't it around version 3.1 or something? Are you using the latest dlls?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Statically importing from opengl32.dll is something you should never do anyway, since the program will fail to load if a symbol lookup fails, making it impossible to support optional features.

You can macro it to make things a bit easier, i.e.:

code:
#define GPA(t, n) \
	do \
	{ \
		pfn = i->GetProcAddress("gl" #n);\
		if(!pfn)\
			throw GLProcLookupException("gl" #n);\
		this->n = (t)pfn;\
	} while(0)
...

code:
GPA(PFNGLGETBUFFERSUBDATAARBPROC, GetBufferSubDataARB);
(Obviously use a different macro for lookups that are allowed to fail)

Mata
Dec 23, 2003
Thanks for the help above but my crappy method WorksForMe so fixing it up better is low priority now!
My latest opengl headache in my long line of trials & tribulations is getting textures to work...

I load .obj files, which are basically plain-text lists of vertices - for example, the vertices of a 10x10 cube look like this:
code:
#top: x y z
v  -5.0000 -5.0000 -5.0000
v  5.0000 -5.0000 -5.0000
v  -5.0000 5.0000 -5.0000
v  5.0000 5.0000 -5.0000
#bottom: x y z 
v  -5.0000 -5.0000 5.0000
v  5.0000 -5.0000 5.0000
v  -5.0000 5.0000 5.0000
v  5.0000 5.0000 5.0000
Then there's the texture vertices, which specifies offsets (from 0 to 1) in our texture image, something like this:
code:
vt 0.5000 0.5000 0.0000	
vt 0.7500 0.5000 0.0000		
vt 0.5000 0.2500 0.0000		
vt 0.7500 0.2500 0.0000		
Now in our obj files, to create one textured face we have indices which point to vertices and texture vertices (also normals, but forget that for now)
code:
#face vertex/texture/normal for each xyz
f 1/2/1 3/1/2 4/3/3 
f 4/3/4 2/4/5 1/2/6 
The two lines above say that vertex 1, 3 and 4 specify one triangle, and that these vertices use vertex texture 2, 1 and 3 respectively. Two triangles compose one face of our cube.

I hope you're with me so far... The problem is OpenGL doesn't seem to support separate texture indices (or normal indices, but whatever) and just uses the same index for position, texture, and normal. So, to OpenGL the indices above are essentially 1/1/1 3/3/3 4/4/4 etc.
So I figure, that sucks, but I only have to go through the texture and normal arrays and rearrange them so the vertex index points to the proper texture index, so that 1/1/1 3/3/3 4/4/4 become the correct indices.
Problem is, it's not that simple because .objs reuse the same vertex index for several different texture and normal indices - for example another one of the indices for our cube will be:
code:
f 6/9/16 5/10/17 1/2/18
So no amount of rearranging the textures and normals will give me the correct cube.
I feel like an idiot because texture mapping shouldn't be this complicated... Please tell me I'm just approaching this from the wrong angle, because to fix this I would have to make a new vertex for each unique permutation of coordv/coordt/coordn which is not only a pain in the rear end to program but will also bloat the size of my models to hell.

Mata fucked around with this message at 01:27 on Nov 13, 2009

haveblue
Aug 15, 2005



Toilet Rascal

Mata posted:

to fix this I would have to make a new vertex for each unique permutation of coordv/coordt/coordn which is not only a pain in the rear end to program but will also bloat the size of my models to hell.

Unfortunately this really is what you have to do. Each vertex in OpenGL has exactly one index which is used for all the attribute arrays. You should be able to have your 3D package generate models that already have that property and not have to worry about generating it yourself, although you're right about the increased footprint.

What kind of models are you making that you have significant redundancy of normals?

Mata
Dec 23, 2003

haveblue posted:

Unfortunately this really is what you have to do. Each vertex in OpenGL has exactly one index which is used for all the attribute arrays. You should be able to have your 3D package generate models that already have that property and not have to worry about generating it yourself, although you're right about the increased footprint.

What kind of models are you making that you have significant redundancy of normals?

I couldn't find this option in 3dsmax :( I didn't really check if my models had significant redundancy of indices, it's just the principle of the thing.

Is there any point to using indices at all, then?

Sex Bumbo
Aug 14, 2004

Mata posted:

I couldn't find this option in 3dsmax :( I didn't really check if my models had significant redundancy of indices, it's just the principle of the thing.

Is there any point to using indices at all, then?

Indices reduce the size, both on the file and on the GPU.

Most of the time for personal projects when perf isn't an issue you can just cheese your way around and draw everything as unindexed triangle lists.

If you want the perf and space boost, a simple optimization would be:

code:
create a unique list of vertices

create empty list of indices
for each triangle
    for each triangle vertex
        put index that vertex appears in unique list of vertices into list of indices
Now, ideally, each vertex needs to be processed only once.
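That pseudocode might look like this in C++ (a sketch with a hypothetical Vertex layout; note the exact float comparison, so only bitwise-identical vertices merge):

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vertex {
    float px, py, pz;  // position
    float nx, ny, nz;  // normal
    float s, t;        // texcoord
};

// Lexicographic ordering so std::map can deduplicate vertices.
bool operator<(const Vertex& a, const Vertex& b) {
    return std::tie(a.px, a.py, a.pz, a.nx, a.ny, a.nz, a.s, a.t)
         < std::tie(b.px, b.py, b.pz, b.nx, b.ny, b.nz, b.s, b.t);
}

// Convert an unindexed triangle list into a unique vertex list + indices.
void buildIndexed(const std::vector<Vertex>& triList,
                  std::vector<Vertex>& uniqueVerts,
                  std::vector<std::uint16_t>& indices) {
    std::map<Vertex, std::uint16_t> seen;
    for (const Vertex& v : triList) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            // First time we've seen this exact vertex: append it.
            it = seen.emplace(v, (std::uint16_t)uniqueVerts.size()).first;
            uniqueVerts.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```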

Sex Bumbo fucked around with this message at 01:50 on Nov 13, 2009

Mata
Dec 23, 2003

Stanlo posted:

Indices reduce the size, both on the file and on the GPU.

Most of the time for personal projects when perf isn't an issue you can just cheese your way around and draw everything as unindexed triangle lists.

If you want the perf and space boost, a simple optimization would be:

code:
create a unique list of vertices

create empty list of indices
for each triangle
    for each triangle vertex
        put index that vertex appears in unique list of vertices into list of indices
Now, ideally, each vertex needs to be processed only once.

Yeah I worked out an algorithm in my head but it's still funny how something like "create a list of unique vertices" which would be EZ in say, python, is difficult for me in C++.
But it seems like pretty much EVERY vertex triplet will be unique. I might be wrong, but if there are only like a handful of shared vertices then surely using indexed vertex buffer objects won't be much of a performance boost at all. Oh well, I'm not one to pinch every cycle..

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Stanlo posted:

Indices reduce the size, both on the file and on the GPU.

Most of the time for personal projects when perf isn't an issue you can just cheese your way around and draw everything as unindexed triangle lists.

If you want the perf and space boost, a simple optimization would be:

code:
create a unique list of vertices

create empty list of indices
for each triangle
    for each triangle vertex
        put index that vertex appears in unique list of vertices into list of indices
Now, ideally, each vertex needs to be processed only once.

Space (or more correctly, bandwidth) is a factor, in that indices allow adjacent triangles using the same vertices to load the vertex data only once (via a pre-transform cache); however, a bigger benefit of indices is that they let the GPU cache the results of the transform/vertex shader stage in a post-transform cache as well, saving the cost of the vertex processing.

So yeah, doesn't matter if you're not GPU performance bound, but pretty important if you are.

Fecotourist
Nov 1, 2008

Mata posted:

Yeah I worked out an algorithm in my head but it's still funny how something like "create a list of unique vertices" which would be EZ in say, python, is difficult for me in C++.

If you'd use a dictionary in Python (maybe even if you wouldn't), use a std::map in C++. Something like a map from tuple<vertex_pointer,normal_pointer...> to index in the vertex array. Map is happy to tell you if it's already seen something, with log complexity.
Or I'm completely missing the point.

vvvvvvvvvv When I did this, it was at load time. Given raw vertices, normals, etc. and the code to draw the geometry, I replaced glVertex();glNormal()... with a map lookup. If the combination was novel, I added a new interleaved vertex to the array and did a map insertion; otherwise I just took the index the map gave me. Either way the index array got a new index.

Fecotourist fucked around with this message at 17:54 on Nov 13, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Creating a list of unique vertices is something you do during preprocessing, not runtime.

Sex Bumbo
Aug 14, 2004

Fecotourist posted:

Something like a map from tuple<vertex_pointer,normal_pointer...> to index in the vertex array.

Make sure you supply custom equality and comparison operators so that you catch floating point errors if you do this. But really, I doubt you need the vertex throughput, just throw it all at the gpu however you want and it will probably be dandy.

Fecotourist
Nov 1, 2008
I had the luxury of just relying on pointer comparison. I was really just going from an independent multiple-index format to single-index.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I'm puttering around in XNA as an intro to 3D graphics and I'm getting a really strange error - adding text sprites seems to be messing up my 3D images.

The picture on the left below is what I want - some axes and a red triangle sitting on a ground mesh. When I try to put in some text the letters show up just fine but the red triangle 'moves' below ground. It isn't actually changing location, the ground plane is drawing on top of it.



The code below is the main draw loop, the lines that cause the problem are commented out. The base.Draw call draws the 3D stuff, the spriteBatch calls do the text.

code:
        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
             base.Draw(gameTime);

             //GraphicsDevice.RenderState.FillMode = FillMode.Solid;
             //spriteBatch.Begin();
             //spriteBatch.DrawString(font, "ABC123", new Vector2(10, 10), Color.Yellow, 0,
             //    Vector2.Zero, 1, SpriteEffects.None, 0);
             //spriteBatch.End();
        }
The problem still happens if I strip out everything but spriteBatch.Begin() and spriteBatch.End() and don't draw any text, so something there is changing the state of the pipeline.

Any suggestions?

edit: On further research it looks like going into sprite mode sets a bunch of flags that don't get reset when returning to 3D mode on the next frame update. There is a writeup here with more details. Adding the following lines after spriteBatch.End() fixes the problem.
code:
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.AlphaBlendEnable = false;
GraphicsDevice.RenderState.AlphaTestEnable = false;
vvv: Yeah, it looks like the depth buffer is disabled when drawing sprites, but not re-enabled when going back to 3D.

PDP-1 fucked around with this message at 04:16 on Nov 16, 2009

digibawb
Dec 15, 2004
got moo?
Most likely candidate would be the depth testing mode being changed, or depth testing being disabled.

a slime
Apr 11, 2005

Can I seriously not tile textures from an atlas in OpenGL

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

not a dinosaur posted:

Can I seriously not tile textures from an atlas in OpenGL

not without doing some pixel shader tricks, no.
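The usual trick: wrap the tiling coordinate with fract(), then remap it into the tile's sub-rectangle of the atlas. Sketched on the CPU (GLSL's fract(x) is x - floor(x); names are illustrative):

```cpp
#include <cmath>

// Map a tiling texture coordinate uv into a sub-rectangle of an atlas.
// offset/size describe the tile's placement within the atlas, in 0..1
// units. The equivalent GLSL would be: offset + fract(uv) * size.
double atlasWrap(double uv, double offset, double size) {
    double f = uv - std::floor(uv);  // fract(): wraps into [0, 1)
    return offset + f * size;
}
```

In practice tile edges still need border padding (or clamped mip levels), since filtering and mipmapping will bleed neighboring tiles across the seam.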

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum
I crossposted this with the games thread but this seems like a better place for it.

I was wondering how simple it was to have a Direct2D application be on top of a game. I want to put my window on top of my game so I can get updates from my program while playing. Unfortunately this seems like it would be harder than I had thought and requires hooking Direct3D functions to write to the screen. Is there an easier way to do this?

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
One of the first Google results is for a DirectDraw overlay, which is probably functionally similar to Direct2D: http://www.gamedev.net/community/forums/topic.asp?topic_id=359319
