steckles
Jan 14, 2006

Boz0r posted:

Thanks, that's a lot of good info. I've already implemented a normal path tracer which works fine, but it doesn't convert easily to a bidirectional one. It's just a simple recursive bouncer that shades every surface it hits and returns less and less of each surface.
Ah. Is there any reason you want to implement bidirectional path tracing as well as MLT? Kelemen-style MLT works just fine with recursive path tracers.


Boz0r
Sep 7, 2006
The Rocketship in action.
Fun, practice, and to have something to compare it with :)

Boz0r
Sep 7, 2006
The Rocketship in action.
My (standard) path tracer is doing a thing that's wrong:



Without looking at the code, does anyone have an idea of what could be wrong here? My normal shader renders all the objects smoothly.

steckles
Jan 14, 2006

Boz0r posted:

Fun, practice, and to have something to compare it with :)
You've got a masochistic sense of fun!

Boz0r posted:

My (standard) path tracer is doing a thing that's wrong:


Kinda looks like surface acne. The usual ways of dealing with this are skipping the triangle you've just intersected, skipping ray intersections with back-facing triangles, and adding a small delta to the origin of the reflected ray so that it starts slightly above the surface.

Alternatively, your reflection code could be producing rays facing the wrong direction.
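
A sketch of that last fix, assuming the sort of Ray/Vector3 types a toy tracer usually has (names are illustrative, not from Boz0r's code):

code:
const float EPS = 1e-4f; // scene-scale dependent; tune if acne persists

Ray bounceRay(const Vector3 &hitPoint, const Vector3 &n, const Vector3 &reflectedDir)
{
    // Start the new ray slightly above the surface so it can't
    // immediately re-intersect the triangle it just left.
    return Ray(hitPoint + n * EPS, reflectedDir);
}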

Xerophyte
Mar 17, 2008

This space intentionally left blank

Boz0r posted:

My (standard) path tracer is doing a thing that's wrong:



Without looking at the code, does anyone have an idea of what could be wrong here? My normal shader renders all the objects smoothly.

I'm going to guess that you're offsetting your scattered rays in some up direction and letting up be determined by triangle winding (typically when you're taking the cross of two triangle edges as up). If the triangle winding isn't consistent this leads to the offset putting the scattering ray's origin inside the object for half the triangles or so.
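
If that's the case, one fix is to make the offset direction independent of winding; a sketch, with the same illustrative Vector3 type as the snippet above:

code:
Vector3 orientedNormal(const Vector3 &geomNormal, const Vector3 &incomingDir)
{
    // If the normal points along the incoming ray, the winding is flipped
    // for this triangle; negate so the offset always goes outside the surface.
    return dot(geomNormal, incomingDir) < 0.0f ? geomNormal : -geomNormal;
}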

Raenir Salazar
Nov 5, 2010

College Slice
So, quoting my code for reference: I can't seem to get a cube to appear on my screen using these simple matrix transformations:


Matrix Transformations:

code:
glm::mat4 blInitMatrix2() {
    // translation matrix (all zeroes for now, so no translation)
    mat4 myTranslationMatrix = glm::translate(0.0f, 0.0f, 0.0f);
    // Identity matrix
    mat4 myIdentityMatrix = glm::mat4(1.0f);
    // Scaling matrix, double everything!
    mat4 myScalingMatrix = glm::scale(2.0f, 2.0f, 2.0f);
    // rotation along the ZED-axis (drat yanks.... grumble,grumble,grumble...)
    vec3 myRotationAxis(0, 0, 1);
    // 90 degrees? Just gonna guess that 'rotate' returns me a matrix
    mat4 myRotationMatrix = rotate(90.0f /* Apparently this is a float */, myRotationAxis);
    // Now all of our powers combined!
    mat4 myModelMatrix = myTranslationMatrix * myRotationMatrix * myScalingMatrix * myIdentityMatrix;

    return myModelMatrix;
}


Vertex buffer; I'm not at the point yet where I eliminate redundant vertices:

code:
GLuint blInitVertexBuffer() {

    static const GLfloat g_vertex_buffer_data[] = {
        -1.0f, -1.0f, -1.0f, // triangle 1 : begin
        -1.0f, -1.0f,  1.0f,
        -1.0f,  1.0f,  1.0f, // triangle 1 : end
         1.0f,  1.0f, -1.0f, // triangle 2 : begin
        -1.0f, -1.0f, -1.0f,
        -1.0f,  1.0f, -1.0f, // triangle 2 : end
         1.0f, -1.0f,  1.0f,
        -1.0f, -1.0f, -1.0f,
         1.0f, -1.0f, -1.0f,
         1.0f,  1.0f, -1.0f,
         1.0f, -1.0f, -1.0f,
        -1.0f, -1.0f, -1.0f,
        -1.0f, -1.0f, -1.0f,
        -1.0f,  1.0f,  1.0f,
        -1.0f,  1.0f, -1.0f,
         1.0f, -1.0f,  1.0f,
        -1.0f, -1.0f,  1.0f,
        -1.0f, -1.0f, -1.0f,
        -1.0f,  1.0f,  1.0f,
        -1.0f, -1.0f,  1.0f,
         1.0f, -1.0f,  1.0f,
         1.0f,  1.0f,  1.0f,
         1.0f, -1.0f, -1.0f,
         1.0f,  1.0f, -1.0f,
         1.0f, -1.0f, -1.0f,
         1.0f,  1.0f,  1.0f,
         1.0f, -1.0f,  1.0f,
         1.0f,  1.0f,  1.0f,
         1.0f,  1.0f, -1.0f,
        -1.0f,  1.0f, -1.0f,
         1.0f,  1.0f,  1.0f,
        -1.0f,  1.0f, -1.0f,
        -1.0f,  1.0f,  1.0f,
         1.0f,  1.0f,  1.0f,
        -1.0f,  1.0f,  1.0f,
         1.0f, -1.0f,  1.0f
    };
    GLuint VertexArrayID;
    glGenVertexArrays(1, &VertexArrayID);
    glBindVertexArray(VertexArrayID);

    GLuint vertexbuffer;
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

    return vertexbuffer;
}

Draw triangle function:

code:
void blDrawTriangle( GLuint vertexbuffer ) {
    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );

    // Draw the triangles!
    glDrawArrays(GL_TRIANGLES, 0, 12*3); // 12 triangles

    //glDisableVertexAttribArray(0);
}


Main:

code:
int main( void )
{
    // Initialise GLFW
    if( !glfwInit() )
    {
        fprintf( stderr, "Failed to initialize GLFW\n" );
        return -1;
    }

    glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);
    glfwOpenWindowHint(GLFW_WINDOW_NO_RESIZE, GL_TRUE);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    // Open a window and create its OpenGL context
    if( !glfwOpenWindow( 1024, 768, 0,0,0,0, 32,0, GLFW_WINDOW ) )
    {
        fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
        glfwTerminate();
        return -1;
    }

    // Initialize GLEW
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "Failed to initialize GLEW\n");
        return -1;
    }

    glfwSetWindowTitle( "Playground" );

    // Ensure we can capture the escape key being pressed below
    glfwEnable( GLFW_STICKY_KEYS );

    // Dark blue background
    glClearColor(0.0f, 0.0f, 0.4f, 0.0f);

    // Enable depth test
    glEnable(GL_DEPTH_TEST);
    // Accept fragment if it is closer to the camera than the former one
    glDepthFunc(GL_LESS);

    // Create and compile our GLSL program from the shaders
    //GLuint programID = LoadShaders("SimpleVertexShader.vertexshader", "SimpleFragmentShader.fragmentshader");
    //GLuint programID = LoadShaders("TransformVertexShader.vertexshader", "ColorFragmentShader.fragmentshader");
    GLuint programID = LoadShaders("SimpleVertexShader.vertexshader", "ColorFragmentShader.fragmentshader");
    //GLuint programID = LoadShaders("TransformVertexShader.vertexshader", "SingleColor.fragmentshader");

    // Get a handle for our "MVP" uniform
    GLuint MatrixID = glGetUniformLocation(programID, "MVP");

    //glm::mat4 MVP = blInitMatrix();
    glm::mat4 MVP = blInitMatrix2();
    static const GLushort g_element_buffer_data[] = { 0, 1, 2 };

    GLuint vertexbuffer = blInitVertexBuffer();

    GLuint colorbuffer = blInitColorBuffer();

    do{
        // Clear the screen
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Use our shader
        glUseProgram(programID);

        // Send our transformation to the currently bound shader,
        // in the "MVP" uniform
        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);

        blDrawTriangle(vertexbuffer);
        //glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // shows only wireframe.
        blPaintColor(colorbuffer);

        glDisableVertexAttribArray(0);
        //glDisableVertexAttribArray(1);

        // Swap buffers
        glfwSwapBuffers();

    } // Check if the ESC key was pressed or the window was closed
    while (glfwGetKey(GLFW_KEY_ESC) != GLFW_PRESS &&
           glfwGetWindowParam(GLFW_OPENED));

    // Close OpenGL window and terminate GLFW
    glfwTerminate();

    // Cleanup VBO
    //glDeleteBuffers(1, &vertexbuffer);
    //glDeleteVertexArrays(1, &VertexArrayID);

    return 0;
}

I tried fiddling with the x/y/z transformations, thinking it might have been off-screen, but no luck. I can see the cube if I use the other function I have commented out there:

code:
glm::mat4 blInitMatrix() {
    // Projection matrix : 45° Field of View, 4:3 ratio, display range : 0.1 unit <-> 100 units
    glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
    // Or, for an ortho camera :
    //glm::mat4 Projection = glm::ortho(-10.0f,10.0f,-10.0f,10.0f,0.0f,100.0f); // In world coordinates

    // Camera matrix
    glm::mat4 View = glm::lookAt(
        glm::vec3(4, 3, -3), // Camera is at (4,3,-3), in World Space
        glm::vec3(0, 0, 0),  // and looks at the origin
        glm::vec3(0, 1, 0)   // Head is up (set to 0,-1,0 to look upside-down)
    );
    // Model matrix : an identity matrix (model will be at the origin)
    glm::mat4 Model = glm::mat4(1.0f);
    // Our ModelViewProjection : multiplication of our 3 matrices
    glm::mat4 MVP = Projection * View * Model; // Remember, matrix multiplication is the other way around

    return MVP;
}

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.
Based on your code, you're not multiplying the model matrix you create in blInitMatrix2 by a view and a projection matrix, like blInitMatrix does. The way you have it now, the "eye" at the origin is actually inside the cube, so you're not gonna see it with backface culling. You'll need a view matrix (like with glm::lookAt) to push the cube out a little bit so that you can see it.

Plus you shouldn't have to use the identity matrix in your multiplication.

code:
// EDIT: Remove myIdentityMatrix here because you don't need it.
// mat4 myModelMatrix = myTranslationMatrix * myRotationMatrix * myScalingMatrix * myIdentityMatrix;
mat4 myModelMatrix = myTranslationMatrix * myRotationMatrix * myScalingMatrix;
code:
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f);
// Our ModelViewProjection : multiplication of our 3 matrices
// EDIT: Even though "Model" is the identity, this is fine to leave in since it's easier to understand what's going on.
glm::mat4 MVP = Projection * View * Model; // Remember, matrix multiplication is the other way around
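
Putting those together, a version of blInitMatrix2 that returns a full MVP might look like this (a sketch reusing the same GLM calls as the tutorial code):

code:
glm::mat4 blInitMatrix2()
{
    // Projection and view, as in blInitMatrix
    glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 View = glm::lookAt(
        glm::vec3(4, 3, -3), // pull the eye back so it isn't inside the cube
        glm::vec3(0, 0, 0),
        glm::vec3(0, 1, 0));

    // Model transform, minus the redundant identity matrix
    glm::mat4 myTranslationMatrix = glm::translate(0.0f, 0.0f, 0.0f);
    glm::mat4 myRotationMatrix    = glm::rotate(90.0f, glm::vec3(0, 0, 1));
    glm::mat4 myScalingMatrix     = glm::scale(2.0f, 2.0f, 2.0f);
    glm::mat4 Model = myTranslationMatrix * myRotationMatrix * myScalingMatrix;

    return Projection * View * Model; // full MVP, not just the model matrix
}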

Raenir Salazar
Nov 5, 2010

College Slice
But I mean, with that code I can see a single triangle that I draw with three vertices; shouldn't it still be possible to see a cube without a projection matrix?

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

Raenir Salazar posted:

But I mean, with that code I can see a single triangle that I draw with three vertices; shouldn't it still be possible to see a cube without a projection matrix?

Is the triangle facing towards the screen? At what Z is the triangle? With a cube, you're not going to see its faces while you're inside it (the eye is at 0, 0, 0) because of backface culling. Turn off culling and see if the cube shows up.
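
For a quick test, something like:

code:
glDisable(GL_CULL_FACE); // draw both sides of every triangle
// ... draw the cube ...
glEnable(GL_CULL_FACE);  // restore once you've confirmed it's there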

Raenir Salazar
Nov 5, 2010

College Slice

HiriseSoftware posted:

Is the triangle facing towards the screen? At what Z is the triangle? With a cube, you're not going to see its faces while you're inside it (the eye is at 0, 0, 0) because of backface culling. Turn off culling and see if the cube shows up.

IIRC I didn't have backface culling there until after I confirmed the code worked with the projection matrix; with backface culling off I'm pretty sure there was still no cube. I'll try to confirm, but right now I'm at my laptop and I'm having.... technical difficulties... :shepicide:

(For some reason it won't open my shader files!)

e: Essentially it says "Impossible to open SimpleVertexShader.vertexshader...". I stepped through it in the debugger, and as best as I can tell the string passed to it somehow becomes corrupt and unreadable. I'm stumped, so I'm recompiling the project tutorial files.

e2: Well, that's fixed, but now I have an unhandled exception at code that used to work perfectly fine: glGenVertexArrays(1, &VertexArrayID);

e3: Finally fixed that too.

Edit 4: Okay, did as you said, made sure it was as "clean" as possible, and I can certainly see a side of my cube!

e5: Okay, the next exercise confuses me as well:

quote:

Draw the cube AND the triangle, at different locations. You will need to generate 2 MVP matrices, to make 2 draw calls in the main loop, but only 1 shader is required.

Does he mean render one scene with the cube, clear it, and then draw the triangle, or have both at the same time? If it's the latter, I accomplished that by adding 3 more vertices at points I figured were empty space away from the cube. What was the tutorial expecting me to do?

Here's my hack:

Raenir Salazar fucked around with this message at 01:05 on Jan 31, 2014

Tres Burritos
Sep 3, 2009

Raenir Salazar posted:

e5: Okay, the next exercise confuses me as well:

Does he mean render one scene with the cube, clear it, and then draw the triangle, or have both at the same time? If it's the latter, I accomplished that by adding 3 more vertices at points I figured were empty space away from the cube. What was the tutorial expecting me to do?


It's probably the latter. What he means is that before each draw call you can reset/change your MVP uniforms, and therefore have whatever geometry you like show up in a different place.

So for each piece of geometry you might have something like

code:
//send translation uniform
mat4 translated = translate(Position);
glUniformMatrix4fv(TranslationUniform, 1, GL_FALSE, &translated[0][0]);

//send rotation uniform
mat4 rot = mat4_cast(Rotation);
glUniformMatrix4fv(RotationUniform, 1, GL_FALSE, &rot[0][0]);

//draw
glDrawWhatever();
So, since you've changed the uniforms for position and rotation, the shader will now use those uniforms and move the geometry accordingly.
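
Applied to the cube-plus-triangle exercise, the render loop might look something like this (a sketch: CubeModel, TriangleModel, trianglebuffer, and blDrawSingleTriangle are hypothetical counterparts to the names in the tutorial code above):

code:
glm::mat4 cubeMVP     = Projection * View * CubeModel;
glm::mat4 triangleMVP = Projection * View * TriangleModel;

glUseProgram(programID); // one shader for both draws

// Cube: upload its MVP, then draw its 36 vertices.
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &cubeMVP[0][0]);
blDrawTriangle(vertexbuffer);

// Triangle: re-upload the same uniform with a different matrix, draw again.
// The hypothetical blDrawSingleTriangle would issue glDrawArrays(GL_TRIANGLES, 0, 3).
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &triangleMVP[0][0]);
blDrawSingleTriangle(trianglebuffer);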

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
That reminds me:

How is photon mapping supposed to work with complex geometry? Specifically, how do you handle collection in a way that both avoids the noise of small polygons getting hit by single-digit numbers of photons (or zero) and avoids bleeding through solid surfaces? Or is the shoot phase just supposed to be really inaccurate?

steckles
Jan 14, 2006

OneEightHundred posted:

How is photon mapping supposed to work with complex geometry? Specifically, how do you handle collection in a way that both avoids the noise of small polygons getting hit by single-digit numbers of photons (or zero) and avoids bleeding through solid surfaces? Or is the shoot phase just supposed to be really inaccurate?
For any given point you want to shade, you'd grab the k nearest photons stored in the photon map. From this list, you can filter out photons whose intersection normals are too different from the one you're shading. This prevents light bleeding through thin surfaces. By always using a large number of photons for the estimate, noise is avoided, at the cost of a blurrier estimate of the indirect illumination.

Indirect diffuse illumination is usually very low frequency, so it takes surprisingly few photons to get a decent-looking result. If you wanted better quality, you could do a single-bounce path tracing step at each pixel instead and only sample the photon map at the secondary intersections. This is referred to as final gathering.
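
As a sketch of that estimate (PhotonMap, Photon, Color, and the vector helpers are assumed types here, and 0.7 is an arbitrary cutoff for normal agreement):

code:
Color estimateRadiance(const PhotonMap &map, const Vector3 &p, const Vector3 &n, int k)
{
    std::vector<Photon> nearest = map.kNearest(p, k); // grab the k nearest stored photons
    Color sum(0, 0, 0);
    float maxDist2 = 0.0f;
    for (const Photon &ph : nearest) {
        maxDist2 = std::max(maxDist2, lengthSquared(ph.position - p));
        // Reject photons whose intersection normal disagrees with ours;
        // this is what stops light bleeding through thin surfaces.
        if (dot(ph.normal, n) < 0.7f)
            continue;
        sum += ph.power;
    }
    // Density estimate: total flux over the disc area spanned by the k photons.
    return sum / (PI * maxDist2);
}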

Boz0r
Sep 7, 2006
The Rocketship in action.
As I was starting on bidirectional path tracing, I thought the normal path tracer was acting a little funny, so I started doing some testing. I think the color contribution of the light and surfaces is a little off, as the surfaces end up looking pale and washed out.

Basic shading w/ a red/green light


Path tracing w/ a red/green light


Path tracing w/ a grey light


The left wall is red, the right is green, and the teapot is blue. The floor is cyan, the back wall is yellow, and the ceiling is magenta. I'd figure the colors should be very apparent in the image with the grey light, but they're not really.

Here is some pseudocode of what I think are the most important bits:
code:
Vector3 pathTracing(eyeRay) {
	shadeResult = trace(eyeRay).shade()
	for each pathSample
		traceResult = tracePath(randomDirection from eyeRayHit, 0)		
		shadeResult += traceResult / samples		
	return shadeResult
}

Vector3 tracePath(ray, depth) {
	if (depth > depthLimit) 
		return 0
	shadeResult = trace(ray).shade() / 2
	shadeResult += tracePath(randomDirection from rayHit, depth + 1)
	return shadeResult	
}

Vector3 lambertShading(Ray) {
	viewDir = -ray.direction
	for each light
		if (trace(randomPointOnLight - rayHit) is not blocked) {
			vHalf = normalize(lightDistance + viewDir)			
			I_diffuse = color * light.color * max(dot(normal, lightvector), 0)
			I_specular = light.color * pow(max(vHalf, normal), glossiness)		
			L += I_diffuse + I_specular
		}
}
Can anyone spot any obvious errors?

Xerophyte
Mar 17, 2008

This space intentionally left blank

Boz0r posted:

Can anyone spot any obvious errors?

- Your shading has issues:
-- Only direct "diffuse light" is scaled by dot(N,I). This attenuation is geometrical and holds for every type of incoming light: indirect, direct, glossy and diffuse. That means removing it from the I_diffuse calculation and adding it to shadeResult += dot(N,I) * tracePath(...) and L += dot(N,I) * (I_diffuse + I_specular), typically.
-- The diffuse contribution should be scaled by 1/(2*pi). You want it to integrate to 1 before taking dot(N,I) into account.
-- In general, the shading is non-physical and your materials will typically reflect more light than they receive. Note that a BRDF plus another BRDF is something that isn't a BRDF; you need to weight the two components in some way that sums to 1 to get a plausible result. As a basic hack, and bearing in mind the stuff above, you can do L += dot(N,I) * (luminance(color)*I_diffuse + I_specular)/(1 + luminance(color)) to get something stable (see the sketch after this list).
-- The shading isn't actually Lambertian. :)

- Are you doing gamma correction (or some other tonemapping) of the output?
- I'm not quite following how your samples are weighted but it looks like the direct light at the primary sample isn't weighted the same as the other paths.
- Where does the sampling pdf come in? Sure, if it's uniform you might be having it cancel out the Lambertian weight, but your materials aren't Lambertian. There should be a 2*pi somewhere...
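
Folding the first few points into the pseudocode, a sketch (kd stands in for the luminance(color) weight from the hack above; bounceDir and bounceRay are stand-in names for the random bounce):

code:
float cosNI = max(dot(normal, lightVector), 0.0f);
float kd = luminance(color); // diffuse weight, per the hack above

Vector3 I_diffuse  = color * light.color * (1.0f / (2.0f * PI)); // normalized: integrates to 1 over the hemisphere
Vector3 I_specular = light.color * pow(max(dot(vHalf, normal), 0.0f), glossiness);

// One cosine factor applied to everything, and a blend that can't exceed 1:
L += cosNI * (kd * I_diffuse + I_specular) / (1.0f + kd);

// The indirect bounce gets the same geometric attenuation:
shadeResult += max(dot(normal, bounceDir), 0.0f) * tracePath(bounceRay, depth + 1);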

Xerophyte fucked around with this message at 18:09 on Feb 2, 2014

Boz0r
Sep 7, 2006
The Rocketship in action.
I've done some of the things you've mentioned, but I think I have to flip through PBRT a bit before I go much further. Thanks.
How does this look?
code:
Vector3 pathTracing(eyeRay) {
	shadeResult = trace(eyeRay).shade()
	for each pathSample
		traceResult = tracePath(randomDirection from eyeRayHit, 0)		
		shadeResult += traceResult / samples		
	return shadeResult
}

Vector3 tracePath(ray, depth) {
	if (depth > depthLimit) 
		return 0
	shadeResult = trace(ray).shade() / 2
	shadeResult += tracePath(randomDirection from rayHit, depth + 1)
	return shadeResult	
}

Vector3 lambertShading(Ray) {
	viewDir = -ray.direction
	for each light
		if (trace(randomPointOnLight - rayHit) is not blocked) {
			vHalf = normalize(lightDistance + viewDir)			
			I_diffuse = color * light.color * max(dot(normal, lightvector), 0)
			I_specular = light.color * pow(max(vHalf, normal), glossiness)		
			L += I_diffuse + I_specular
		}
}
EDIT: Looks remarkably like poo poo, though:

Boz0r fucked around with this message at 21:44 on Feb 2, 2014

steckles
Jan 14, 2006

You would usually compute the illumination from just one light at each bounce, and you wouldn't compute more than one reflection type either. It's generally easier to get a handle on things when you're only considering one set of interactions per bounce rather than all of them. I would also stick with diffuse inter-reflection for now; glossy BRDFs are practically a field unto themselves.

Definitely read PBRT before going any further and definitely nail diffuse GI before moving on to other surface types.

Edit: I wrote up a quick path tracing function off the top of my head.
code:
colour trace_ray(ray, depth)
{
    float rr_weight = 1;

    if !intersect(ray) //Missed geometry, terminate the path
    {
        return black
    }
    else if depth > 2 //Don't do rr on the first few bounces to minimize variance
    {
        if random01() < rr_probability //Probabilistically terminate the path
            return black
        else
            //Keep the path alive
            rr_weight /= 1-rr_probability //Account for the energy of all the paths we didn't sample
    }

    point p = ray.intersection
    light l = get_random_point_light() //We only consider the contribution of a single light per bounce
    colour l_colour = l.colour*number_of_point_lights //We need to account for all the lights we didn't sample
    vector n = ray.intersection_normal
    vector lv = normalize(l.location-p) //Direction from the hit point towards the light

    colour local_illumination = ray.intersection_colour * l_colour * max(0,dot(lv,n)) / (l.location-p).magnitude_squared //Inverse-square falloff

    Ray next_ray = uniform_sample_hemisphere(n)

    colour indirect_illumination = ray.intersection_colour * trace_ray(next_ray, depth+1) * dot(n, next_ray.direction) / pi

    return (local_illumination + indirect_illumination)*rr_weight
}
It's been a while since I've done anything with ray tracing, so there might be some mistakes in there, but it should be basically right. Also, try keeping your surface colours at realistic values. White paper only reflects about 60% of the light that hits it, so try to keep all your material colours below that value.

steckles fucked around with this message at 22:21 on Feb 2, 2014

Boz0r
Sep 7, 2006
The Rocketship in action.
I've tried refactoring the path tracer based on your code, and it looks better. Check out the light on the bottom of the refractive ball:


Shadows still look pretty sharp, though.

Updated pseudocode:
code:
Vector3 shade(Ray) {
	viewDir = -ray.direction
	light = randomLight()
	
	float m = maxVectorValue(objectColor);
	if (recDepth > 5) 		
		if (rnd() < m) 
			objectColor = objectColor*(1/m)					
		else 
			return black
	
	// Shadow ray test
	if (trace(ray to light) is not blocked) {
		illumination_direct = (color * color_light * max(dot(lv, n), 0.0f))
		illumination_direct /= (light.pos - p).length()
	}
	
	Ray randomRay = Ray(p, randomDirection(n))
	// Next ray bounce
	if (trace(randomRay) is not blocked) {
		float nDotD = dot(n, randomRay.direction)
		illumination_indirect = m_kd * randomRayColor * nDotD * PI * 10;
	}
	
	return illumination_direct + illumination_indirect
}

steckles
Jan 14, 2006

Boz0r posted:

I've tried refactoring the path tracer based on your code, and it looks better. Check out the light on the bottom of the refractive ball:


Shadows still look pretty sharp, though.
You'll always get sharp shadows with point lights.

A couple of issues I can spot with your code: first, you need to divide the local illumination by the square of the distance to the light. Second, you should divide by pi rather than multiply by it (or was it 2pi?). And what's with that *10 in there?
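
In the pseudocode's terms, a sketch of both fixes (lengthSquared is an assumed helper):

code:
// Inverse-square falloff, divide by pi instead of multiplying,
// and drop the *10 fudge factor entirely.
illumination_direct  = color * color_light * max(dot(lv, n), 0.0f);
illumination_direct /= (light.pos - p).lengthSquared();

illumination_indirect = m_kd * randomRayColor * nDotD / PI;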

Boz0r
Sep 7, 2006
The Rocketship in action.
Thanks, I fixed those issues too. I did actually divide by PI, I just wrote the wrong constant in the pseudocode :) The *10 was just to get a more pronounced effect, but now I'll try extending the light with wattage and range instead.

Raenir Salazar
Nov 5, 2010

College Slice
Okay, so I'm trying to draw two objects, one in wireframe mode and one solid, and have them overlaid.

As far as I understand it, in OpenGL 2.0 this is done something like this:

code:
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
// draw object (filled pass)
glBegin(GL_TRIANGLES);
for (int i = 0; i < mesh->nfaces.size(); i += 1)
    for (int j = 0; j < 3; j += 1) {
        glNormal3f(mesh->normal[mesh->nfaces[i][j]][0],
                   mesh->normal[mesh->nfaces[i][j]][1],
                   mesh->normal[mesh->nfaces[i][j]][2]);

        glVertex3f(mesh->vertex[mesh->faces[i][j]][0],
                   mesh->vertex[mesh->faces[i][j]][1],
                   mesh->vertex[mesh->faces[i][j]][2]);
    }
glEnd();
glDisable(GL_POLYGON_OFFSET_FILL);

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glColor3f(1.0f, 0.0f, 0.0f);
glLineWidth(1.0f);
// draw object (wireframe pass)
glBegin(GL_TRIANGLES);
for (int i = 0; i < mesh->nfaces.size(); i += 1)
    for (int j = 0; j < 3; j += 1) {
        glNormal3f(mesh->normal[mesh->nfaces[i][j]][0],
                   mesh->normal[mesh->nfaces[i][j]][1],
                   mesh->normal[mesh->nfaces[i][j]][2]);

        glVertex3f(mesh->vertex[mesh->faces[i][j]][0],
                   mesh->vertex[mesh->faces[i][j]][1],
                   mesh->vertex[mesh->faces[i][j]][2]);
    }
glEnd();

But it's not working; it seems to just apply the polygon mode to both passes, so no matter what I do I can't seem to get them overlaid.

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.
Did you try drawing each of the triangles individually (i.e. without drawing the other) to see if it's actually doing the wireframe? Also, I saw this StackOverflow post that offsets the wireframe triangles instead of the filled:

http://stackoverflow.com/questions/13438450/glpolygonmodegl-back-gl-line-isnt-working

Raenir Salazar
Nov 5, 2010

College Slice

HiriseSoftware posted:

Did you try drawing each of the triangles individually (i.e. without drawing the other) to see if it's actually doing the wireframe? Also, I saw this StackOverflow post that offsets the wireframe triangles instead of the filled:

http://stackoverflow.com/questions/13438450/glpolygonmodegl-back-gl-line-isnt-working

Yeah, it does the wireframe alone or the fill alone just fine; it just can't seem to do both (or it applies one mode to both? I can't really tell).

I'm looking at the StackOverflow post now.

haveblue
Aug 15, 2005



Toilet Rascal
I've never had good luck with polygon offset. Do you see the wireframe if you turn off the depth test entirely?

Raenir Salazar
Nov 5, 2010

College Slice

haveblue posted:

I've never had good luck with polygon offset. Do you see the wireframe if you turn off the depth test entirely?

Yup! I think I just need to fiddle with it now. Why did the depth test obscure my lines?

czg
Dec 17, 2005
hi
So, I'm trying to write some stuff using SlimDX which is a C# wrapper around DirectX, using DirectX 11.
What's the best way to debug the rendering here?
PIX stopped working a long time ago after some .NET update.
The Visual Studio graphics diagnostics tool works perfectly fine, until I try stepping through a shader and all my locals are either 0 or NaN which makes it pretty much worthless.
Nvidia Nsight is set up right and I think it works, except when it starts the .exe it just immediately throws an exception with "The operation completed successfully" and closes.

Working with SlimDX has been pretty smooth so far, but now I'm trying to debug my shadow mapping and it's a nightmare not being able to tell exactly what is going on in the shaders.

Zerf
Dec 17, 2004

I miss you, sandman

czg posted:

So, I'm trying to write some stuff using SlimDX which is a C# wrapper around DirectX, using DirectX 11.
What's the best way to debug the rendering here?
PIX stopped working a long time ago after some .NET update.
The Visual Studio graphics diagnostics tool works perfectly fine, until I try stepping through a shader and all my locals are either 0 or NaN which makes it pretty much worthless.
Nvidia Nsight is set up right and I think it works, except when it starts the .exe it just immediately throws an exception with "The operation completed successfully" and closes.

Working with SlimDX has been pretty smooth so far, but now I'm trying to debug my shadow mapping and it's a nightmare not being able to tell exactly what is going on in the shaders.

You could give Intel GPA a shot, I suppose - but pretty much every PC app for graphics debugging sucks compared to the console tools. Personally, I find that GPA does at least a decent job, but it's far from perfect.

czg
Dec 17, 2005
hi
Oh hey thanks for that!
GPA won't let me step through shaders, but being able to swap out shader code on the fly is pretty nice, and at least I can see what values are passed in now.

Colonel J
Jan 3, 2008
I am getting into programming with Three.js and I hope some of you can help me here.
http://jsfiddle.net/fL33x/4/

The big yellow cube in the back is a screen and the small yellow sphere is a camera pointing towards the origin. I'm trying to render what this camera sees onto the cube, but it just gives the error glDrawElements: attempt to access out of range vertices in attribute 1.

The goal of this is making my own shadow map shader; I will render what the second camera sees as a depth map and then, in the second pass, shade according to what the camera positioned at the "light source" sees. Any thoughts on this are welcome, but first I need to get the render-to-texture to work. Line 274 is where it breaks.

Thanks so much to anyone who can help me out with this. I could get it working in a previous test case, but now I'm rewriting it cleaner and it just won't work, and it's driving me nuts.
If it helps, I am basing it on this example: http://stemkoski.github.io/Three.js/Camera-Texture.html

Colonel J fucked around with this message at 03:27 on Feb 6, 2014

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

Colonel J posted:

I am getting into programming with Three.js and I hope some of you can help me here.
http://jsfiddle.net/fL33x/4/

The big yellow cube in the back is a screen and the small yellow sphere is a camera pointing towards the origin. I'm trying to render what this camera sees onto the cube, but it just gives the error glDrawElements: attempt to access out of range vertices in attribute 1.

The goal of this is making my own shadow map shader; I will render what the second camera sees as a depth map and then, in the second pass, shade according to what the camera positioned at the "light source" sees. Any thoughts on this are welcome, but first I need to get the render-to-texture to work. Line 274 is where it breaks.

Thanks so much to anyone who can help me out with this. I could get it working in a previous test case, but now I'm rewriting it cleaner and it just won't work, and it's driving me nuts.
If it helps, I am basing it on this example: http://stemkoski.github.io/Three.js/Camera-Texture.html

Does "cubeGeometry" have texture coordinates? When you're giving the mesh a texture material, it must be expecting some texture coordinates as part of the geometry, and it's getting none, which would cause the error. glDrawElements renders vertices by an array of indexes - it probably found the XYZ, but not the UV.

Edit: I'll admit I don't have any experience with Three.js, but I was fiddling around with your, uh, fiddle, based on some info I found online, and I couldn't get anything to work.

I did see this though: http://stackoverflow.com/questions/16531759/three-js-map-material-causes-webgl-warning

HiriseSoftware fucked around with this message at 05:50 on Feb 6, 2014

Raenir Salazar
Nov 5, 2010

College Slice

haveblue posted:

I've never had good luck with polygon offset. Do you see the wireframe if you turn off the depth test entirely?

Okay, so on review this didn't do what I thought it did; I was simply seeing the back of the model and mistaking it for the wireframe. :(

Also, I can't seem to modify the color AT ALL; the gl command for changing color does nothing, and the model is a solid shade of purple no matter what I do.

e: Turns out I needed glEnable(GL_COLOR_MATERIAL); somewhere, for ~reasons~, as my textbook doesn't use it in its examples.

Raenir Salazar fucked around with this message at 20:59 on Feb 6, 2014

Colonel J
Jan 3, 2008

HiriseSoftware posted:

Does "cubeGeometry" have texture coordinates? When you're giving the mesh a texture material, it must be expecting some texture coordinates as part of the geometry, and it's getting none, which would cause the error. glDrawElements renders vertices by an array of indexes - it probably found the XYZ, but not the UV.

Edit: I'll admit I don't have any experience with THREE.js but I was fiddling around with your, uh, fiddle, based on some info I found online, but I couldn't get anything to work.

I did see this though: http://stackoverflow.com/questions/16531759/three-js-map-material-causes-webgl-warning

I'm not sure what the problem was in the end. I just revamped my test case and it works, though I can't find the difference between the two files. Pretty ridiculous, but eh, I've got it sorta working now. Thank you for looking at it, though.

I do have another thing that popped up now. Here's my jsfiddle at http://jsfiddle.net/5br8D/4/ - you can move around with click and drag, and the mouse wheel zooms in and out.

You can see, besides the blue stuff (in the back), there is a screen displaying the shadow map, made using an orthographic view. It's a bit hard to see, but it works; basically the stuff over the cube is darker and thus closer to the camera. The shadow map is contained in a WebGLRenderTarget which I'm trying to pass to my shader as a uniform texture; however, it doesn't seem to work - it looks like by the time you get to the shader it's all white pixels. Does anybody have any idea how to pass a WebGLRenderTarget to a shader so you can read the pixels' colors?

I hope I am being clear enough, and that my code is understandable; I can comment more on it if needed. I looked around on the internet and this guy at http://stackoverflow.com/questions/18167797/three-js-retrieve-data-from-webglrendertarget-water-sim seems to be close to what I'm looking for, but I'm not really sure I understand what's going on in his solution. Cheers to anyone who knows WebGL / Three.js well enough to make this happen.


EDIT:

So I did some work using the example I gave and I sorta made some progress; here's the updated jsfiddle. http://jsfiddle.net/5br8D/6/

I added a buffer buf1 which takes its image data from the renderer context; it's updated starting at line 329 in the javascript. buf1 is then passed to the shaders as a sampler2D. If you uncomment line 48 in the html you will draw the scene using the orthographic camera's matrices; then if you uncomment line 84 the colors will be taken from buf1. However, it's only 0s, so it seems like the readPixels function doesn't really work, OR the xy coordinates I try to grab the pixel data from are wrong, which would surprise me because the projection is correct on screen.

How can readPixels return black pixels? There isn't actually anything black on the shadow map. Gah, I'm so confused.

More editing; this post is turning out to be loving long and rambling, but I guess what I'm after right now is: how do you turn gl_Position into gl_FragCoord for an orthographic camera? I know the shaders do it automatically, but I kinda need to do it myself right now to get the shadow map coordinates. Google isn't really helping on this one. Thanks!

Colonel J fucked around with this message at 02:07 on Feb 8, 2014

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.
I don't have any experience with shadow mapping, but I found this, which has a part about calculating the shadow map coordinates:

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/

It's something about multiplying a "bias matrix" against the MVP matrix built from the viewpoint of the light. Multiply that result by your model coordinates and you have the coordinate you pass to the texture - XY is for the texture lookup, and Z is used to determine whether an object is in shadow or not.
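
For reference, the bias matrix from that tutorial just remaps clip space [-1,1] into texture space [0,1]; in GLM (column-major constructor, so the last four values are the translation column):

code:
glm::mat4 biasMatrix(
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f);

// depthMVP is the projection * view * model built from the light's point
// of view; the biased result can index the shadow map directly.
glm::mat4 depthBiasMVP = biasMatrix * depthMVP;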

Raenir Salazar
Nov 5, 2010

College Slice
I just spent 4 hours trying to figure out why my shape wouldn't rotate right using this online guide.

Now I'm pretty sure it's because the function acos returns a value in radians, while glRotatef takes an angle in degrees.

A little annoying to say the least.

e: yup, I now have something that does what it's supposed to do.

code:
	if (arcball_on) {  // if left button is pressed
		lArcAngle = arcAngle;
		cur_mx = mx;
		cur_my = my;

		glm::vec3 va = get_arcball_vector(last_mx, last_my);
		glm::vec3 vb = get_arcball_vector(cur_mx, cur_my);
		arcAngle = acos(min(1.0f, glm::dot(va, vb)));
		arcAngle = (180 / arcAngle);
		arcAngle = arcAngle * 0.005f;
		arcAngle += lArcAngle;
		axis_in_world_coord = glm::cross(va, vb);

		last_mx = cur_mx;
		last_my = cur_my;
		glutPostRedisplay();

	}
There's still a little bit of weird behavior where it randomly disappears at some arbitrary angle, and some funny business when my mouse gets to the edge of the screen, but it seems to work. Does anyone have any suggestions to improve this?

arcAngle = acos(min(1.0f, glm::dot(va, vb))); gives me some value in radians, I convert it to degrees, I make it smaller (or did I make it bigger?) so I get a smoother rotation, and then I accumulate it because the code 'forgets' the last known position of the shape when it loads the identity matrix, for reasons I don't know.
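
For what it's worth, the direct conversion is a multiply, not a divide (180 / arcAngle inverts the angle instead of scaling it); a sketch:

code:
arcAngle = acos(min(1.0f, glm::dot(va, vb)));
arcAngle = arcAngle * (180.0f / 3.14159265f); // radians -> degrees for glRotatef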

Raenir Salazar fucked around with this message at 19:10 on Feb 9, 2014

Tres Burritos
Sep 3, 2009

Raenir Salazar posted:

I just spent 4 hours trying to figure out why my shape wouldn't rotate right using this online guide.

Now I'm pretty sure it's because the function acos returns a value in radians, while glRotatef takes an angle in degrees.

A little annoying to say the least.

Yeah, that one's a bitch, but eventually you get used to it, and radians/degrees is the first thing you start checking when poo poo goes wrong.

Raenir Salazar
Nov 5, 2010

College Slice


I just.. I just.. I jus.. I just don't know what's going on. It seems to work now. Turns out my previous means of getting degrees was wrong. I also think something is weird with my matrix multiplication, as when I try to 'pan' the image my object just disappears, but aside from some weird jitteriness it mostly works; now I'm just not sure what I need to account for.

e: Specifically, it will sometimes get confused about which axis it's rotating around. If I had to guess, minor variances in my mouse movement seem to confuse it.


e2: Now I think I understand the problem: since I'm accumulating my angle as I rotate, when I change directions it rotates by the same, now larger, angle as before, instead of a smaller angle.

Raenir Salazar fucked around with this message at 20:24 on Feb 9, 2014

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

If you're not aware of gimbal lock, then read up on that and prepare to cry. That could be why your rotations seem to get confused, especially if it happens when you get close to 90°.

Another thing to watch out for is your matrix multiplication order - a rotation followed by a translation isn't the same as a translation followed by a rotation; things can end up in very different places. Same goes for putting scaling in there.
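
For instance (GLM-style, column-vector convention, so the matrix nearest the vertex applies first):

code:
glm::mat4 T = glm::translate(5.0f, 0.0f, 0.0f);
glm::mat4 R = glm::rotate(90.0f, glm::vec3(0, 0, 1));

glm::mat4 a = T * R; // rotate in place, then move: ends up at x = 5, turned 90 degrees
glm::mat4 b = R * T; // move, then rotate: the translation itself gets rotated,
                     // so the object ends up at y = 5 instead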

Raenir Salazar
Nov 5, 2010

College Slice

baka kaba posted:

If you're not aware of gimbal lock, then read up on that and prepare to cry. That could be why your rotations seem to get confused, especially if it happens when you get close to 90°.

Another thing to watch out for is your matrix multiplication order - a rotation followed by a translation isn't the same as a translation followed by a rotation; things can end up in very different places. Same goes for putting scaling in there.

I don't *think* it's gimbal lock, but then again the transformations are so large (almost like teleportation from one angle to another) that it's hard to tell what's happening. The multiplication order is fine; the problem has to do with the fact that every time I do a rotation, my angle accumulates with the previous angle of rotation, eventually reaching an arbitrarily large and meaningless ~6000 degree turn.

This somehow succeeds in letting the object do 360 degree turns that I can visually follow, but smaller changes in yaw mean it goes all funky, because it's doing these massive turns in a direction I'm not intending to go. So I get these brief moments where it occupies a funky position before resuming where it was going.

The alternative seems to be not popping my matrix and "saving" the matrix I'm working on, but this causes... other issues...

e: Edit to add: if the problem IS gimbal lock, do I need to switch to quaternions or is there another solution?

quote:

If we didn’t accumulate the rotations ourselves, the model would appear to snap to origin each time that we clicked. For instance if we rotate around the X-axis 90 degrees, then 45 degrees, we would want to see 135 degrees of rotation, not just the last 45.

Hrrm, I think this might be it.

Raenir Salazar fucked around with this message at 00:07 on Feb 10, 2014

Raenir Salazar
Nov 5, 2010

College Slice
Motherfucking nailed it people at long last!

code:
ThisRot = glm::rotate(arcAngle, axis_in_world_coord);
ThisRot = ThisRot * LastRot;

I completely freaked the gently caress out when this complete hunch ended up working.


Colonel J
Jan 3, 2008

HiriseSoftware posted:

I don't have any experience with shadow mapping, but I found this, which has a part about calculating the shadow map coordinates:

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/

It's something about multiplying a "bias matrix" against the MVP matrix built from the viewpoint of the light. Multiply that result by your model coordinates and you have the coordinate you pass to the texture - XY is for the texture lookup, and Z is used to determine whether an object is in shadow or not.

Thanks for your help, I finally got it working. I couldn't really get the bias matrix to work, as it would distort my geometry in strange ways. I just multiplied the vertex positions by 0.5, translated by 0.5, and they're good now.

As for sending the shadow map to the shaders as a uniform, I'll leave the answer here for posterity: you can send a WebGLRenderTarget to a shader as a regular texture and it'll work just fine.

Here's the updated fiddle: http://jsfiddle.net/7b9G8/1/
The yellow sphere is just there to represent the directional light vector; it's not the actual light source.

Colonel J fucked around with this message at 21:14 on Feb 11, 2014
