Zerf
Dec 17, 2004

I miss you, sandman

Raenir Salazar posted:

I think I see what you mean but isn't that handled by assigning weights?



Example, vertex[0] = <0.0018 0.0003 0.8716 0.0003 0.0004 0 0 0.0006 0.0007 0.0001 0 0.0063 0.0046 0 0.0585 0.0546 0.0002>

We're given a file that has the associated weights for every vertex. Each float is for a particular bone, from 0 to 16; the file has the 17 weights for each of the 6669 vertices.

e: Out of curiosity, Zerf, do you have Skype, and is there any chance I could add you, not just for figuring this out but for OpenGL help and advice in general? :)

There might be other ways to do this, but the most common way is to use weights and apply them to the different joint transforms (usually in shaders, but there's nothing stopping you from doing it on the CPU if you really want to).

Following my example, say that you have three joints: j1, j2 and j3. j2 has j1 as parent and j3 has j2 as parent. You then compute all three joint transforms, including the inverse bind pose, like so:

j1compound = j1inverseBindPose * j1rel
j2compound = j2inverseBindPose * j1rel * j2rel
j3compound = j3inverseBindPose * j1rel * j2rel * j3rel

Then, for a vertex that is affected by all three with the following weights <0.2,0.3,0.5>, calculate its skinned position with the formula you first posted, i.e.:

v1' = v1 * j1compound * 0.2 + v1 * j2compound * 0.3 + v1 * j3compound * 0.5

That should give you the correct position for a vertex that is skinned to all three joints.
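A minimal CPU-side sketch of that blend, assuming glm and made-up names (note the order flip: glm uses column vectors, so it's M * v rather than the v * M notation above):

code:
#include <glm/glm.hpp>
#include <vector>

// compounds[j] = inverse bind pose and relative chain folded together, one matrix per joint
// weights[j]   = this vertex's weight for joint j (zero for most joints)
glm::vec3 skinVertex(const glm::vec3& restPos,
                     const std::vector<glm::mat4>& compounds,
                     const std::vector<float>& weights)
{
    glm::vec4 blended(0.0f);
    for (size_t j = 0; j < compounds.size(); ++j) {
        if (weights[j] == 0.0f) continue;  // joints that don't affect this vertex
        blended += weights[j] * (compounds[j] * glm::vec4(restPos, 1.0f));
    }
    return glm::vec3(blended);  // the weights should sum to roughly 1
}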

As for my Skype, I'm really rusty on OpenGL and I usually don't have much time to answer questions like these, so I'd rather not give it out. You can always PM me questions, though; just be aware that sometimes I might not find time to answer them for a couple of days.


Raenir Salazar
Nov 5, 2010

College Slice

quote:

j1compound = j1inverseBindPose * j1rel
j2compound = j2inverseBindPose * j1rel * j2rel
j3compound = j3inverseBindPose * j1rel * j2rel * j3rel

Okay so to compute the inverseBindPose, you said:


quote:

Therefore, we define the inverse bind pose transform to be the transformation from a joint to the origin of the model. In other words, we want a transform which transforms a position in the model to the local space of a joint. With translations this is simple: we can just invert the transform by negating it, giving us j2invBindPose(-5,-2,0).

The problem I see here is that the skeleton/joints, the animations, and the mesh coordinates were all given separately, meaning they aren't actually aligned. I have no idea which vertices roughly correspond to which joint, so I would need the skeleton aligned first (I've only managed this imperfectly, mostly through trial and error).

I think I do have the 'center' (let's call it C) of the mesh, so when I take the coordinates of a joint, make a vector between it and C, and then transform them, it moves close to but not exactly onto it, and is off by some strange offset which I've determined is roughly <.2,.1>.

So take that combined value, the constant offset, plus the vector <C-rJnt>, and now make a vector between that and C, the center of the mesh? Each animation frame is with respect to the rest pose, not sequential.

I now have the animation vaguely working and rendering (not using the above), but the skeleton desyncs from the mesh and seems to slowly veer away from it as it goes on.

Boz0r
Sep 7, 2006
The Rocketship in action.
I made an implementation of Path Tracing, Bidirectional Path Tracing, and Metropolis Light Transport using both. The normal Path Tracer converges faster, and I don't think this should be so.

Boz0r fucked around with this message at 10:12 on Mar 12, 2014

Zerf
Dec 17, 2004

I miss you, sandman

Raenir Salazar posted:

Okay so to compute the inverseBindPose, you said:


The problem I see here is that the skeleton/joints, the animations, and the mesh coordinates were all given separately, meaning they aren't actually aligned. I have no idea which vertices roughly correspond to which joint, so I would need the skeleton aligned first (I've only managed this imperfectly, mostly through trial and error).

I think I do have the 'center' (let's call it C) of the mesh, so when I take the coordinates of a joint, make a vector between it and C, and then transform them, it moves close to but not exactly onto it, and is off by some strange offset which I've determined is roughly <.2,.1>.

So take that combined value, the constant offset, plus the vector <C-rJnt>, and now make a vector between that and C, the center of the mesh? Each animation frame is with respect to the rest pose, not sequential.

I now have the animation vaguely working and rendering (not using the above), but the skeleton desyncs from the mesh and seems to slowly veer away from it as it goes on.

Well, by the origin of the model I actually meant (0,0,0). You don't need any connection between the joint and the vertices other than the inverse bind pose, because that will transform the vertex into joint-local space no matter where the vertex starts out. You don't need to involve the point C in your calculations at all.

Also note that the inverse bind pose is constant over time, so you only need to calculate it once. The compound transforms you need to compute each frame (obviously, since the relative positions between joints change).
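A rough sketch of how that split usually looks per frame (hypothetical structure, assuming glm and joints stored parent-before-child; invBindPose is filled in once at load time):

code:
#include <glm/glm.hpp>
#include <vector>

struct Joint {
    int parent;             // -1 for the root
    glm::mat4 invBindPose;  // constant, computed once from the bind pose
    glm::mat4 rel;          // this frame's transform relative to the parent
};

// Called every frame; world[] accumulates the j1rel * j2rel * ... chain.
void updateCompounds(const std::vector<Joint>& joints, std::vector<glm::mat4>& compounds)
{
    std::vector<glm::mat4> world(joints.size());
    compounds.resize(joints.size());
    for (size_t i = 0; i < joints.size(); ++i) {
        const Joint& j = joints[i];
        world[i] = (j.parent < 0) ? j.rel : world[j.parent] * j.rel;
        compounds[i] = world[i] * j.invBindPose;  // column-vector order: undo the bind pose, then pose
    }
}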

Raenir Salazar
Nov 5, 2010

College Slice

Zerf posted:

Well, by the origin of the model I actually meant (0,0,0). You don't need any connection between the joint and the vertices other than the inverse bind pose, because that will transform the vertex into joint-local space no matter where the vertex starts out. You don't need to involve the point C in your calculations at all.

Also note that the inverse bind pose is constant over time, so you only need to calculate it once. The compound transforms you need to compute each frame (obviously, since the relative positions between joints change).

My concern is that I'm not sure the model is actually at (0,0,0), so I'm confused about how to compute the inverse bind pose in the first place.

I mention "C" because there's possibly a given function that returns the model's center coordinates, but I'm not 100% sure about that.

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm still doing the Bidirectional Path Tracer but this time I actually have a question.

I've tried running it with no eye path and no light path to compare it to my normal path tracer, and something is probably a little off:

This is my normal path tracer:
103 seconds

This is the BPT without a light path:
510 seconds
I think this should look identical to the previous image, right?

This is the BPT without an eye path:
400 seconds
It looks like it's taking more samples towards the back of the room, but the light is in the exact middle of the room. What could cause this? I'm pretty sure my random bounce is uniformly distributed.

This is the complete BPT:
1921 seconds
I don't think it should be so bright.


I'm not really sure exactly how the samples are supposed to be weighted; right now they just combine into a shitload of rays that get summed up, I think. Any other obvious issues?

Boz0r fucked around with this message at 15:32 on Mar 13, 2014

Colonel J
Jan 3, 2008
Those are pretty pictures, Boz0r; I wish I could help you out, but I'm way too much of a beginner.

I'm trying to make a path tracer using WebGL and I just found out GLSL doesn't allow recursive calls. What's the alternative? Hardcode as many shading functions as I want light bounces and call them successively? Thanks.

haveblue
Aug 15, 2005



Toilet Rascal
That or do CPU-mediated multipass.
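The other common route is to unroll the recursion into a loop that carries the accumulated throughput along. A structural sketch of the idea (CPU-side, with hypothetical Ray/Hit/trace/sampleBounce names); the same loop shape works inside a GLSL fragment shader as long as the bounce count is fixed:

code:
// Structural sketch only: Ray, Hit, trace() and sampleBounce() are placeholders.
glm::vec3 radianceIterative(Ray ray, int maxBounces)
{
    glm::vec3 color(0.0f);
    glm::vec3 throughput(1.0f);                // product of brdf/pdf factors so far
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        Hit hit;
        if (!trace(ray, hit)) break;           // ray escaped the scene
        color += throughput * hit.emitted;     // pick up any light emitted at this hit
        throughput *= sampleBounce(hit, ray);  // choose the next direction and update 'ray'
    }
    return color;
}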

steckles
Jan 14, 2006

Boz0r posted:

I'm still doing the Bidirectional Path Tracer but this time I actually have a question.

...

I think this should look identical to the previous image, right?
They look close. Aside from the noise difference, they appear to be the same brightness.

Boz0r posted:

This is the BPT without an eye path:
400 seconds
It looks like it's taking more samples towards the back of the room, but the light is in the exact middle of the room. What could cause this? I'm pretty sure my random bounce is uniformly distributed.
First, it looks like your hemisphere sampling code is off somehow, as the light particles clearly aren't being distributed evenly.

Second, particle tracing will naturally create brighter regions closer to the centre of the image. The reason is simple: each pixel in the image corresponds to a small patch of geometry in the scene. The surface area of this patch is larger or smaller depending on how far away from the camera it is and its angle relative to the camera's "look" vector. When you're shooting particles into the scene, the number of particles that contribute to each pixel is proportional to the surface area of the geometry that the pixel covers.

You'll need to weight each particle's contribution by the square of the distance between its intersection point and the camera, and take into account that pixels in the image aren't evenly spread over the geometry. If I remember correctly, the weighting for this is:
code:
cosTheta1 = dot(vector_between_intersection_and_camera, camera_look_direction)
cosTheta2 = dot(vector_between_intersection_and_camera, intersection_normal)
camera_weight = 1.f/(((2 * tan(fov))^2 / (w/h)) * cosTheta1^3)
geometric_weight = cosTheta2 / distance_between_camera_and_intersection^2
full_weight = geometric_weight * camera_weight

Boz0r posted:

I'm not really sure exactly how the samples are supposed to be weighted, right now they just combine to a shitload of rays that get summed up, I think? Any other obvious issues?
In BDPT, you need to compute the contribution of every possible combination of eye sub-path and light sub-path. Weighting them is the hard part. The simplest correct weight for each complete path is simply 1.f/(i+j+2), where i is the index on the eye path and j is the index on the light path. This works and will converge to the correct result, but not very quickly.

I'd start there and make sure your BDPT, particle tracer, and path tracer all produce identical results. Then you should move on to Multiple Importance Sampling, which is a much better way of weighting the paths, but much more difficult to implement.
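A hedged sketch of that 'start there' bookkeeping (PathVertex and connectVertices are made up; connectVertices stands for whatever returns the unweighted contribution of joining eye vertex i to light vertex j):

code:
glm::vec3 combinePaths(const std::vector<PathVertex>& eyePath,
                       const std::vector<PathVertex>& lightPath)
{
    glm::vec3 total(0.0f);
    for (size_t i = 0; i < eyePath.size(); ++i) {
        for (size_t j = 0; j < lightPath.size(); ++j) {
            float weight = 1.0f / float(i + j + 2);  // the simple constant weight above
            total += weight * connectVertices(eyePath[i], lightPath[j]);
        }
    }
    return total;
}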

Raenir Salazar
Nov 5, 2010

College Slice
My professor complains, Boz0r, that there's no colour bleeding from one wall onto the other walls in your program. :D

Boz0r
Sep 7, 2006
The Rocketship in action.
Steckles, you're a fountain of knowledge on this thing.

I got the sampling distribution fixed, so the light path looks good now; I'll try fixing those weights next.

I tried fixing the color bleeding, but I'm color-blind so it's hard to be sure it's actually happening; I think I did it correctly code-wise, though. I posted some pseudo-code earlier, and I think that looked fine-ish. I did a render of a more colorful scene:



My current Lambert shader (pseudocode):
code:
illumination_direct = (color_diffuse * (light_color * light_wattage) * max(dot(lightvector, normal), 0)) / (light_distance^2)
illumination_indirect = next_ray_color * dot(normal, direction) / PI
Also, are BPT and MLT supposed to be this slow, or have I hosed something up? Right now I don't see any reason to use them.

EDIT: I double-checked my previous post, and I multiply the indirect illumination by the diffuse color. Should I do this?

EDIT: In any case, it's not surprising to me that the indirect color contributes so little, as it's diminished both by the dot product and by the division by PI, so I fear I may have misunderstood some of this.

Boz0r fucked around with this message at 14:45 on Mar 15, 2014

Raenir Salazar
Nov 5, 2010

College Slice
Disclaimer: I imagine what I'm about to say is going to sound really, really quaint...

Nevertheless,

Oh my god! I got specular highlighting to work in my professor's stupid code where everything is slightly different :black101:

It's a relief; as 'easy' as shaders are apparently supposed to be, it has NOT at all been a fun ride trying to navigate the differences between all the versions of OpenGL that exist vs. what we're using.

We're using OpenGL 2.1, I think, and for most of the course we were using immediate mode for everything, which was kinda annoying since on Stack Overflow everyone keeps asking "Why are you using immediate mode?"/"Tell your teacher that ten years ago called and it wants its immediate mode back."

So when it finally came time to use GLSL and shaders, the code used by the 3.3+ tutorial, what my book uses, and what the teacher uses all differ from each other.

Thankfully the textbook I bought approximately eight years ago is the closest, and I just muddled through it.

It works, aside from the color no longer being brown and the specular reflectance having odd behavior along the edges (can anyone explain if that's right or wrong, and if wrong, why?).




e: In image one, you can see how the lighting is a little weird when there's elevation.

e2: So I'm following along with the tutorial in the book here, and tried to do a fog shader, but nothing happens. Is there something I'm supposed to do in the main program? All I did was modify my lighting shader to also do fog, and nothing happens.

e3: Got fog sorta working; it's kinda weird if I zoom out, as it just turns green without fading away to white. Fake edit: turns out I just needed to adjust the fog color. I have mastered The Fog.

e4: According to one tutorial, they say this:

"For now, lets assume that the light’s direction is defined in world space."

And this segues nicely into something I still don't get with GLSL: do the shaders *know* what's defined in the main program? Do I pass them variables, or do they just know? If I define a light source in my main program, does it automatically know? I don't understand.

Raenir Salazar fucked around with this message at 00:21 on Mar 17, 2014

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
You would pass the light direction into the shader as a uniform, and you can update it as often or as rarely as you need to. There are a number of built-in variables, but typically you would use uniforms. Think of it this way: shaders get two types of input. The first is what you can think of as per-frame (or longer-lived) constants (e.g. the MVP matrix, light direction, etc.), and the second is per-operation data, like the vertex information for a vertex shader.
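On the application side, the two kinds of input look roughly like this (a sketch; `program` and `vertexbuffer` stand in for whatever handles your code already has, and the GLSL side would declare `uniform vec3 lightDir;` plus a per-vertex attribute):

code:
// Per-frame (or rarer) constants go in as uniforms, with the program bound.
GLint lightDirLoc = glGetUniformLocation(program, "lightDir");  // look up once after linking
glUseProgram(program);
glUniform3f(lightDirLoc, 0.0f, 1.0f, 0.0f);                     // update whenever the light moves

// Per-vertex data is fed through an attribute bound to a buffer instead.
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);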

Raenir Salazar
Nov 5, 2010

College Slice

MarsMattel posted:

You would pass the light direction into the shader as a uniform, and you can update it as often or as rarely as you need to. There are a number of built-in variables, but typically you would use uniforms. Think of it this way: shaders get two types of input. The first is what you can think of as per-frame (or longer-lived) constants (e.g. the MVP matrix, light direction, etc.), and the second is per-operation data, like the vertex information for a vertex shader.

By "pass it in", do you mean define it somehow on the main application side of things, or is simply writing "uniform vec3 lightDir" alone sufficient to nab the variable in question?

e: For example, here's the setShaders() code provided for the assignment:

quote:

void setShaders() {

	GLint success;

	char *vs = NULL,*fs = NULL,*fs2 = NULL;

	v = glCreateShader(GL_VERTEX_SHADER);
	f = glCreateShader(GL_FRAGMENT_SHADER);
	f2 = glCreateShader(GL_FRAGMENT_SHADER);
	if (shadingMode == 0){
		// PHONG ours
		vs = textFileRead("phong.vert");
		fs = textFileRead("phong.frag");
	}
	if (shadingMode == 1){
		// toon shader
		vs = textFileRead("toon.vert");
		fs = textFileRead("toon.frag");
	}

	const char * ff = fs;
	const char * ff2 = fs2;
	const char * vv = vs;

	glShaderSource(v, 1, &vv,NULL);
	glShaderSource(f, 1, &ff,NULL);

	free(vs);
	free(fs);

	glCompileShader(v);
	glGetShaderiv(v, GL_COMPILE_STATUS, &success);
	if (!success)
	{
		GLchar infoLog[MAX_INFO_LOG_SIZE];
		glGetShaderInfoLog(v, MAX_INFO_LOG_SIZE, NULL, infoLog);
		fprintf(stderr, "Error in vertex Shader compilation!\n");
		fprintf(stderr, "Info log: %s\n", infoLog);
	}
	glCompileShader(f);
	glGetShaderiv(f, GL_COMPILE_STATUS, &success);
	if (!success)
	{
		GLchar infoLog[MAX_INFO_LOG_SIZE];
		glGetShaderInfoLog(f, MAX_INFO_LOG_SIZE, NULL, infoLog);
		fprintf(stderr, "Error in fragment Shader compilation!\n");
		fprintf(stderr, "Info log: %s\n", infoLog);
	}

	glCompileShader(f2);

	p = glCreateProgram();
	glAttachShader(p,f);
	glAttachShader(p,v);

	glLinkProgram(p);
	glUseProgram(p);
}

I added the error detection/logging code. And I don't really see how, for instance, the normals or any other variable are passed to it.

I'm also not entirely sure why we bother with the second fragment shader (f2); it's not used as far as I can tell.

Raenir Salazar fucked around with this message at 23:30 on Mar 17, 2014

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
You would use glUniform. It sounds like you could do with reading up a bit on shader programs :)

Raenir Salazar
Nov 5, 2010

College Slice

MarsMattel posted:

You would use glUniform. It sounds like you could do with reading up a bit on shader programs :)

Yes, yes I do. Although my professor's code doesn't use glUniform so I still don't know how it gets anything.

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

Raenir Salazar posted:

Yes, yes I do. Although my professor's code doesn't use glUniform so I still don't know how it gets anything.

Just going by my own minimal experience here, but when you compile your shaders you're basically baking in the interface - you define the types of variables that can be passed in, and the shader code to do things with that data. The rest of your code should call that program-making function, and then get references to those shader variables that you need to access, using GetUniformLocation and GetAttribLocation. Then you can use those references to pass in your data.

So say for a 1-dimensional integer uniform (like a texture ID), early on you'd call GetUniformLocation and store that reference in a handy variable (it's just an integer). When you want to pass that data in you call Uniform1i (because you want to set a 1-dimensional integer uniform) with your variable reference and the value you want to set.

Uniforms are for values that don't change for the primitive you're shading, like a reference to a texture you're using. Attributes are for things that can change on a per-vertex basis, like vertex positions, colour values, texture coordinates and so on. It's possible you're not using any uniforms at all (I think?), and you're just passing through references to Attribute locations of one kind or another - makin' buffers, bindin' pointers, that kind of thing.

Take a look at your code; the bulk of the rendering part should be shunting data over to the shader using various GL commands. Also, sorry for any wrongness, I'm just trying to point you in the right general direction :shobon:
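For the 1-dimensional integer case described above, that flow looks something like this (sketch only; `p` is the linked program from setShaders, the other names are made up):

code:
// Once, after glLinkProgram: grab the handle and keep it around (it's just an int).
GLint diffuseSamplerLoc = glGetUniformLocation(p, "diffuseSampler");

// Each frame, with the program bound: put the texture on unit 0 and point the sampler at it.
glUseProgram(p);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, someTextureId);
glUniform1i(diffuseSamplerLoc, 0);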

Raenir Salazar
Nov 5, 2010

College Slice
Would that be from glBegin( gl_primitive ) glEnd()?

haveblue
Aug 15, 2005



Toilet Rascal

Raenir Salazar posted:

Would that be from glBegin( gl_primitive ) glEnd()?

Your code should look, in general, like this:

code:
loadAndSetUpShaders()
while(rendering==true)
{
	assignUniforms();
	for(elements of scene)
	{
		glBegin(gl_primitive);
		setAttributes();
		glEnd();
	}
	finishAndPresentFrame();
}

Raenir Salazar
Nov 5, 2010

College Slice
Yeah it doesn't at all look like that.

code:
void renderScene(void) {
	//Background color
	glClearColor(1, 1, 1, 0);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	

	glLightfv(GL_LIGHT0, GL_POSITION, lpos);

	glEnable(GL_DEPTH_TEST);
	glEnable(GL_BLEND);                                                                                          
	glEnable(GL_LINE_SMOOTH);
	glEnable(GL_POINT_SMOOTH);

	// enable rescaling of the normals
	glEnable(GL_NORMALIZE);
    glEnable(GL_DEPTH_TEST);
	
	matrix_setup(0);

	//draw object - Fill mode
	if (rendering_mode == 0){
		glEnable(GL_LIGHTING);
		glDisable(GL_CULL_FACE);
		glEnable(GL_DEPTH_TEST);
		glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
		glDisable(GL_POLYGON_OFFSET_FILL);
		glBegin(GL_TRIANGLES);
		for (int i = 0; i < mesh->nfaces.size(); i += 1)
		  for (int k = 0; k < 3; k += 1){

		  int j = k;//2 - k;

			glNormal3f(mesh->normal[mesh->nfaces[i][j]][0],
					   mesh->normal[mesh->nfaces[i][j]][1],
					   mesh->normal[mesh->nfaces[i][j]][2]);

			glVertex3f(mesh->vertex[mesh->faces[i][j]][0],
					   mesh->vertex[mesh->faces[i][j]][1],
					   mesh->vertex[mesh->faces[i][j]][2]);
		}
		glEnd();
	} 
Shaders:

code:
void setShaders() {

	GLint success;

	char *vs = NULL,*fs = NULL,*fs2 = NULL;

	v = glCreateShader(GL_VERTEX_SHADER);
	f = glCreateShader(GL_FRAGMENT_SHADER);
	f2 = glCreateShader(GL_FRAGMENT_SHADER);
	if (shadingMode == 0){
		// PHONG ours
	   vs = textFileRead("phong.vert");
	   fs = textFileRead("phong.frag");
	}
	if (shadingMode == 1){
		// toon shader
	   vs = textFileRead("toon.vert");
	   fs = textFileRead("toon.frag");
	}

	const char * ff = fs;
	const char * ff2 = fs2;
	const char * vv = vs;

	glShaderSource(v, 1, &vv,NULL);
	glShaderSource(f, 1, &ff,NULL);

	free(vs);
	free(fs);

	glCompileShader(v);
	glCompileShader(f);

	glCompileShader(f2);

	p = glCreateProgram();
	glAttachShader(p,f);
	glAttachShader(p,v);

	glLinkProgram(p);
	glUseProgram(p);
}
I still don't know what f2 or fs2 are supposed to do; they aren't used.

lord funk
Feb 16, 2004

Raenir Salazar posted:

trying to navigate the differences between all the versions of Opengl that exist vs what we're using.

The entire internet needs a filter based on the version of OpenGL you actually want to learn about. I remember what a nightmare it was to teach myself OpenGL ES 2.0, and all the answers I found were 1.x.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

lord funk posted:

The entire internet needs a filter based on the version of OpenGL you actually want to learn about. I remember what a nightmare it was to teach myself OpenGL ES 2.0, and all the answers I found were 1.x.
Honestly, they should have been purging the API of obsolete features ages ago; instead it's March 2014 and people are still recommending immediate mode.

Raenir Salazar
Nov 5, 2010

College Slice
Alrighty, in today's tutorial I confirmed that yeah, GLSL has variables/functions to access information from the main OpenGL program, so I figured out how to access the location of a light and then fiddled with my program to let me change the light source's location in real time.

This got me bonus marks during the demonstration because apparently I was the only person to actually have the curiosity to play around with shaders and see what I could do, crikey.

The_Franz
Aug 8, 2003

OneEightHundred posted:

Honestly, they should have been purging the API of obsolete features ages ago, instead it's March 2014 and people are still recommending immediate mode.

They did purge it years ago. Immediate mode and other legacy cruft has been gone from the core headers since the early 3.x versions, and any usage of it is prohibited in core/forward-compatible contexts.
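For reference, opting into that stricter world is just a matter of how you create the context. A sketch using GLFW (an assumption, the code in this thread looks GLUT-based, but the idea is the same): in a core-profile context, glBegin/glEnd and friends simply don't exist.

code:
#include <GLFW/glfw3.h>

GLFWwindow* createCoreProfileWindow()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    return glfwCreateWindow(1280, 720, "core profile", NULL, NULL);
}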

lord funk
Feb 16, 2004

Raenir Salazar posted:

This got me bonus marks during the demonstration because apparently I was the only person to actually have the curiosity to play around with shaders and see what I could do, crikey.
I met a bunch of iOS developers who couldn't understand why I would teach an iOS development college course. They were worried about job security and thought I would be flooding the market with new developers. The reality is only one out of twenty students has the drive or curiosity to actually make anything of it.

Boz0r
Sep 7, 2006
The Rocketship in action.
My MLT and path tracer work pretty well right now and produce almost identical results, but the MLT is much slower than the PT. I thought it was supposed to be the other way around, or is that only under specific circumstances?

Xerophyte
Mar 17, 2008

This space intentionally left blank

Boz0r posted:

My MLT and path tracer work pretty well right now and produce almost identical results, but the MLT is much slower than the PT. I thought it was supposed to be the other way around, or is that only under specific circumstances?

MLT can involve a lot of overhead for the ability to sample certain types of paths better. If those paths aren't more important to a degree corresponding to your overhead then the image will be slower to converge. So, it depends on the scene and your implementation. How many paths/second are you sampling with/without Metropolis?

As a rule I'd expect scenes that are mainly directly lit will converge faster with simple path tracing: solid objects lit by an environment, a basic Cornell Box, etc. MLT will help when path tracing has a hard time sampling the important paths: only indirect paths to light in most of the scene, a single point light encased in sharp glass, etc.

Anecdotally: we have an MLT kernel for our tracer. It's, to my knowledge, never been enabled in any production build because the overhead and threading complexities mean that convergence has inevitably been slower overall in real scenes. The one exception was apparently architectural scenes with real, modeled lamps & armatures and even there we get better practical results by simply cheating and turning off refraction for shadow rays.

Boz0r
Sep 7, 2006
The Rocketship in action.

Xerophyte posted:

How many paths/second are you sampling with/without Metropolis?

I haven't measured yet, but that's a good idea.


Xerophyte posted:

Anecdotally: we have an MLT kernel for our tracer. It's, to my knowledge, never been enabled in any production build because the overhead and threading complexities mean that convergence has inevitably been slower overall in real scenes. The one exception was apparently architectural scenes with real, modeled lamps & armatures and even there we get better practical results by simply cheating and turning off refraction for shadow rays.

Good to know that my project is useful in real life :)

Raenir Salazar
Nov 5, 2010

College Slice
Spaceship for my class project coming along nicely.



Current Status: Good enough.

I might for the final presentation add some greebles but this took long enough and we need to begin serious coding for our engine.

Based very loosely, and emphasis on loosely, on the Terran Battlecruiser from StarCraft. :shobon:

Kinda looks like something out of Babylon 5, but I can't figure out how to avoid that look. :(

Raenir Salazar fucked around with this message at 06:47 on Mar 23, 2014

Boz0r
Sep 7, 2006
The Rocketship in action.
If I wanted to do an implementation of https://www.youtube.com/watch?v=eB2iBY-HjYU, are there any frameworks I could use so I don't have to write the whole engine from scratch?

Raenir Salazar
Nov 5, 2010

College Slice
I've been to Stack Overflow, my professor, and the OpenGL forums, and no one at all can figure out what's wrong with my shaders.

Here's the program:

quote:

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec3 vertexNormal_modelspace;


// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform mat3 blNormalMatrix;
uniform vec3 lightPos;
out vec4 forFragColor;
const vec3 diffuseColor = vec3(0.55, 0.09, 0.09);

void main(){

    gl_Position = MVP * vec4(vertexPosition_modelspace,1);
    vec3 vertNormal_cameraspace = vec3(V*M*vec4(vertexNormal_modelspace,0)).xyz;
    vec3 vertPos_cameraspace = vec3(V*M*vec4(vertexPosition_modelspace,1)).xyz;

    vec3 lightDir = normalize(lightPos - vertPos_cameraspace);
    float lambertian = clamp(dot(vertNormal_cameraspace, lightDir), 0.0, 1.0);

    forFragColor = vec4((lambertian*diffuseColor),1);

}

Here is a link to a video of how it behaves

At first it seems like the light is tidally locked with the model for a bit, then suddenly the lighting teleports to the back of the model.

The light should be centered on the screen, in the direction of the user at (0,0,3), but it acts more like it's somewhere to the left.

I have no idea what's going on; here's my MVP/display code:

quote:

glm::mat4 MyModelMatrix = ModelMatrix * thisTran * ThisRot;


MVP = ProjectionMatrix * ViewMatrix * MyModelMatrix;
glm::mat4 ModelView = ViewMatrix * MyModelMatrix;
glm::mat3 MyNormalMatrix = glm::mat3(glm::transpose(glm::inverse(ModelView)));
glm::vec3 newLightPos = lightPos;
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &MyModelMatrix[0][0]);
glUniformMatrix4fv(ViewMatrixID, 1, GL_FALSE, &ViewMatrix[0][0]);
glUniformMatrix4fv(BlNormalMatrix,1,GL_FALSE, &MyNormalMatrix[0][0]);
glUniformMatrix4fv(BlRotations, 1, GL_FALSE, &ThisRot[0][0]);
glUniform3f(BlCamera, cameraLoc.x, cameraLoc.y, cameraLoc.z);
glUniform3f(lPosition, newLightPos.x,newLightPos.y,newLightPos.z);

// VBO buffer: vertices
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
    0,        // attribute
    3,        // size
    GL_FLOAT, // type
    GL_FALSE, // normalized?
    0,        // stride
    (void*)0  // array buffer offset
);

// 2nd attribute buffer : normals
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
glVertexAttribPointer(
    1,        // attribute
    3,        // size
    GL_FLOAT, // type
    GL_FALSE, // normalized?
    0,        // stride
    (void*)0  // array buffer offset
);

// draw object using opengl 3.3 poo poo
glDrawArrays(GL_TRIANGLES, 0, vertices.size() );

I apply my transformations to my model matrix, from which I create the ModelViewProjection matrix. I also tried manually creating a normal matrix, but it refuses to work; if I try to use it in my shader it destroys my normals and thus removes all detail from the mesh.

I've gone through something like six different diffuse lighting tutorials and it still doesn't work, and I have no idea why; can anyone help?

EDIT:

Alright, I've successfully narrowed down the problem; I'm now convinced it's the normals.

I used a model I made in Blender, which I loaded in both my professor's program and my version with the updated code:

Mine

His

The sides being dark when turning in mine but lit up in the other confirms the problem. I think I need a normal matrix to multiply my normals by, but all my attempts at it failed for bizarre reasons I still don't understand. That, or the normals aren't being properly given to the shader?

Raenir Salazar fucked around with this message at 21:15 on Apr 4, 2014

BlockChainNetflix
Sep 2, 2011
I'm probably way off base here, but what if you replace

code:
vec3 vertNormal_cameraspace = vec3(V*M*vec4(vertexNormal_modelspace,0)).xyz;
with

code:
vec3 vertNormal_cameraspace = vec3(blNormalMatrix*vertexNormal_modelspace.xyz);

Raenir Salazar
Nov 5, 2010

College Slice

BLT Clobbers posted:

I'm probably way off bat here, but what if you replace

code:
vec3 vertNormal_cameraspace = vec3(V*M*vec4(vertexNormal_modelspace,0)).xyz;
with

code:
vec3 vertNormal_cameraspace = vec3(blNormalMatrix*vertexNormal_modelspace.xyz);

The problem is every time I try something along those lines there are no more normals and the whole mesh goes dark. :(

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.
Can you change

code:
glUniformMatrix4fv(BlNormalMatrix,1,GL_FALSE, &MyNormalMatrix[0][0]);
to

code:
glUniformMatrix3fv(BlNormalMatrix,1,GL_FALSE, &MyNormalMatrix[0][0]);
?

Raenir Salazar
Nov 5, 2010

College Slice

HiriseSoftware posted:

Can you change

code:
glUniformMatrix4fv(BlNormalMatrix,1,GL_FALSE, &MyNormalMatrix[0][0]);
to

code:
glUniformMatrix3fv(BlNormalMatrix,1,GL_FALSE, &MyNormalMatrix[0][0]);
?

Why yes, indeed! So by trying to use 4fv it wasn't working/was undefined or some such?

e: Right now it's not entirely dark anymore, but the side faces are still dark.

Raenir Salazar fucked around with this message at 21:42 on Apr 4, 2014

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

Raenir Salazar posted:

Why yes indeed! So by trying to use 4fv it wasn't working/undefined or some such?

e: Right now its not entirely dark anymore, but the side faces are still dark.

When using 4fv it thinks that the matrix is arranged 4x4 when you're passing in a 3x3 so you were getting the matrix elements in the wrong places. It could have also caused a strange crash at some point since it was trying to access 16 floats and there were only 9 - it might have been accessing those last 7 floats (28 bytes) from some other variable (or unallocated memory)
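For reference, the 3x3 path end to end, reusing names from the code earlier in the thread (a sketch, not a drop-in fix for the remaining lighting issue):

code:
// Normal matrix: inverse-transpose of the upper-left 3x3 of ModelView,
// uploaded with the 3fv variant so all 9 floats land where the shader expects them.
glm::mat3 MyNormalMatrix = glm::mat3(glm::transpose(glm::inverse(ModelView)));
glUniformMatrix3fv(BlNormalMatrix, 1, GL_FALSE, &MyNormalMatrix[0][0]);

// Vertex shader side (GLSL):
//   vec3 vertNormal_cameraspace = normalize(blNormalMatrix * vertexNormal_modelspace);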

Raenir Salazar
Nov 5, 2010

College Slice

HiriseSoftware posted:

When using 4fv it thinks that the matrix is arranged 4x4 when you're passing in a 3x3 so you were getting the matrix elements in the wrong places. It could have also caused a strange crash at some point since it was trying to access 16 floats and there were only 9 - it might have been accessing those last 7 floats (28 bytes) from some other variable (or unallocated memory)

Then might the problem be here?

quote:

void blInitResources ()
{
	cout << "BlInitResources" << endl;
	//loads object
	mesh = new ObjModel();
	Load(*mesh, "Resources/cheb.obj");
	mesh->CenterObject();

	blSetupMaterial();
	blSetupLights();
	blCreateModelViewProjectionMatrix();

	ship_mesh = new ObjModel();
	Load(*ship_mesh, "Resources/RCSS.obj");
	ship_mesh->CenterObject();

	mesh = ship_mesh;

	/**************** CHEB **********************/
	glGenVertexArrays(1, &VertexArrayID);
	glBindVertexArray(VertexArrayID);

	for (int i = 0; i < mesh->nfaces.size(); i += 1)
		for (int k = 0; k < 3; k += 1){

			int j = k;//2 - k;

			normals.push_back(glm::vec3((mesh->normal[mesh->nfaces[i][j]][0],
				mesh->normal[mesh->nfaces[i][j]][1],
				mesh->normal[mesh->nfaces[i][j]][2])));

			vertices.push_back(glm::vec3(mesh->vertex[mesh->faces[i][j]][0],
				mesh->vertex[mesh->faces[i][j]][1],
				mesh->vertex[mesh->faces[i][j]][2]));

		}

	// Load it into a VBO
	glGenBuffers(1, &vertexbuffer);
	glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
	glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);
	//glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &mesh->vertex[0], GL_STATIC_DRAW);

	// Normal buffer
	glGenBuffers(1, &normalbuffer);
	glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
	glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), &normals[0], GL_STATIC_DRAW);

}

As in the original code, the professor used glVertex(...) and glNormal(...) in that loop whenever he drew the mesh with immediate mode, which set the vertices and normals. Perhaps it doesn't read every normal?

Edit: Additionally, going back to the original mesh, it turns out that multiplying the normals by the normal matrix obliterates them in a different way: it's no longer black, but all surface detail is lost. Fixed by removing that step, but I'm still at step 0 in terms of the behavior of the light, with a strong suspicion it's because of the normals.

e2: I'm going to try to use Assimp to see if that resolves the problem (if it's because of inconsistent normals being loaded), just as soon as it stops screwing with me over unresolved dependency issues.

Raenir Salazar fucked around with this message at 23:19 on Apr 4, 2014

Raenir Salazar
Nov 5, 2010

College Slice
Well, I think I got Assimp to work (except for all the random functions that don't work), and I loaded the mesh and can now see properly coloured normals (I did an experiment of passing my normals to the fragment shader as colors directly); but now the problem is, well, everything.

Using the 'modern' OpenGL way of doing things, my mesh is now off-center, and both rotation and zooming have broken.

Zooming now results in my mesh being culled as it gets too close or too far, with the limits defined by some weird [-1,1] box; it doesn't seem to correspond to or use my ViewMatrix at all.

The main change is using Assimp and the inclusion of an index buffer.

e: Ha, I'm stupid, I forgot to call the method that initializes my MVP matrix.

Weirdly, the mesh provided by my professor still doesn't work (it's off-center and thus its rotation doesn't work), but I think it works with anything else I make in Blender.

e2: So I believe it finally works, thank god. Although now I have to deal with the fact that my professor's mesh may never load properly with Assimp unless I can center it somehow.

Another weird thing: the specular lighting isn't perfect, but at this point it's either because I don't have enough normals, because my math could use refinement, or because the ship is entirely flat faces:





Like in the third one, I think it's more visible that there should be something there, but there isn't.

Raenir Salazar fucked around with this message at 21:09 on Apr 6, 2014

Foiltha
Jun 12, 2008
So I'm not really sure whether to post this in some web development thread or here, because it's seemingly a WebGL problem. I'm doing the age-old ray marching in the fragment shader thing and things are going swimmingly for the most part. However, it seems like when I use small enough ray marching steps or a massive amount of intersection tests, the rendering process just fails. Basically anything that prolongs the computation enough (like upwards of 5 seconds per frame) seems to crash the rendering. Is this a known thing with WebGL, used to prevent the browser from freezing, or am I missing something? I'm using a 2013 MacBook Air so the GPU is not exactly great :v:


Madox
Oct 25, 2004
Recedite, plebes!
Hey guys, I have an odd WebGL question as well. (Sorry, I don't know about problems with such long-running shaders, guy above.)

I have been porting my terrain code from DirectX to WebGL and it's going pretty well. It uses a shadowing shader to do typical shadow mapping (render to a depth map from the light's POV, compare depths, etc.). I added a slider that sets the sun's position to test shadowing, but as soon as I start moving the sun, performance on crappy low-end hardware degrades from 60fps to about 30fps.

Setting the sun simply changes the vec3 that holds the sun's position. This uniform is set every frame, and the shadowing shader runs every frame regardless. Nothing is different except the contents of the uniform vec3, so why does the frame rate take such a giant hit? What is WebGL doing? Am I breaking some kind of optimization or caching that it does?

You can see it happening here if you have a crappy computer: http://madoxlabs.github.io/webGL/Float/ (Chrome only)
On my good computer, it runs at 17ms/frame all the time.
On a crappy computer, it runs at 17ms/frame until I start dragging the sun slider, then it goes to over 30ms/frame.
