HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

Colonel J posted:

Thanks for your help, I finally got it working. I couldn't really get the bias matrix to work as it would distort my geometry in strange ways. I just multiplied the vertex positions by 0.5 and translated by 0.5 and they're good now.

As for sending the shadow map to the shaders as a uniform, I'll leave the answer here for posterity: you can send a WebGLRenderTarget to a shader as a regular texture and it'll work just fine.

Here's the updated fiddle: http://jsfiddle.net/7b9G8/1/
The yellow sphere is just there to represent the directional light vector; it's not the actual light source.

Is the shadowing working correctly as you're intending? At certain points the shadow of the vertical stick should pass over the horizontal ones, but I can see that it's not. Maybe it's an artifact of my older video card, but it doesn't look right to me.
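For posterity, the multiply-by-0.5-and-translate-by-0.5 trick quoted above is just the standard remap from clip space, where coordinates run from -1 to 1, into texture space, where they run from 0 to 1 - which is exactly what the bias matrix would have done. A minimal sketch (hypothetical helper name, not from the fiddle):

```python
def ndc_to_uv(c):
    """Remap a normalized-device coordinate in [-1, 1] to texture space [0, 1].

    This is what the 'bias matrix' does: scale by 0.5, then translate by 0.5.
    """
    return c * 0.5 + 0.5
```

You'd apply this per component to the light-space position before sampling the shadow map.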


Colonel J
Jan 3, 2008

HiriseSoftware posted:

Is the shadowing working correctly as you're intending? At certain points the shadow of the vertical stick should pass over the horizontal ones, but I can see that it's not. Maybe it's an artifact of my older video card, but it doesn't look right to me.

Yeah, that's a weird behavior that seems tied to the renderDepth of the objects in the scene. I played with it a bit but didn't spend too much time; basically some objects render at the wrong time when the shadow map is drawn, so they end up underneath things that are actually in front of them.

I added
yCube.renderDepth = 100;
to try and force it to render last and it's better: http://jsfiddle.net/7b9G8/2/

However I fear it wouldn't hold to every camera orientation and I'm not sure how to fix it for good.

Boz0r
Sep 7, 2006
The Rocketship in action.
Not really a question, but I just wanted to say that this Metropolis Light Transport is busting my balls. Our advisor has given us a compact implementation of it to use as a reference, but it's not exactly easy reading.

steckles
Jan 14, 2006

Boz0r posted:

Not really a question, but I just wanted to say that this Metropolis Light Transport is busting my balls. Our advisor has given us a compact implementation of it to use as a reference, but it's not exactly easy reading.
The thing with Kelemen style MLT is that it has everything to do with the random number generator and nothing at all to do with ray tracing. I had to read through the paper a few times before it really clicked.

At its core, it's really just a random number generator that stores the numbers it generates. To do a small-step mutation, you pick one of the previously generated numbers and change it slightly. You then reset the generator and "re-play" the numbers as you build a new path. A large-step mutation just clears the list of numbers and creates new ones. In either case, before mutating, you save the state of the generator; if the path generated with the mutated numbers doesn't get accepted, the generator gets restored to the saved version.

Because the transition probabilities for mutating random numbers are symmetric, they cancel out in the acceptance computation. To compute the acceptance probability of the newly generated path, you just divide its brightness by your last path's brightness. Aside from substituting calls to the system's RNG with calls to your own RNG, you don't have to touch your ray tracing code.
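A toy version of that replayable generator might look like this in Python (illustrative sketch only; the method names and mutation size are made up, and a real implementation would handle start-up bias and per-coordinate mutation properly):

```python
import copy
import random

class ReplayableRNG:
    """Toy 'primary sample space' generator in the Kelemen MLT style.

    It records every number it hands out so a path can be re-played.
    A small step perturbs one recorded number; a large step throws them
    all away; save/restore backs out a rejected mutation.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.samples = []  # numbers handed out for the current path
        self.index = 0     # replay cursor

    def next(self):
        """Return the next number, replaying stored ones first."""
        if self.index == len(self.samples):
            self.samples.append(self.rng.random())
        value = self.samples[self.index]
        self.index += 1
        return value

    def restart(self):
        """Rewind so the stored numbers get re-played for a new path."""
        self.index = 0

    def small_step(self, scale=0.01):
        """Perturb one previously generated number slightly."""
        i = self.rng.randrange(len(self.samples))
        self.samples[i] = (self.samples[i] + self.rng.uniform(-scale, scale)) % 1.0
        self.restart()

    def large_step(self):
        """Forget everything and start a fresh path."""
        self.samples = []
        self.restart()

    def save(self):
        return copy.deepcopy(self.samples)

    def restore(self, saved):
        """Roll back to a saved state after a rejected mutation."""
        self.samples = saved
        self.restart()
```

Build a path, call restart(), build it again: you get the exact same path. That replay property is the whole trick.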

steckles fucked around with this message at 18:42 on Feb 12, 2014

Boz0r
Sep 7, 2006
The Rocketship in action.
Thanks, those're good points.

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
Am I missing something here or what with initializing GL?

I need a valid context to even get wglChoosePixelFormatARB() ... but to get a valid context, I need to have a pixel format set. And then SetPixelFormat() expects a filled out PIXELFORMATDESCRIPTOR anyway.

:psyduck:

Spite
Jul 27, 2001

Small chance of that...

slovach posted:

Am I missing something here or what with initializing GL?

I need a valid context to even get wglChoosePixelFormatARB() ... but to get a valid context, I need to have a pixel format set. And then SetPixelFormat() expects a filled out PIXELFORMATDESCRIPTOR anyway.

:psyduck:

PIXELFORMATDESCRIPTOR is a Windows-specific struct, not a general OpenGL struct. You fill it out yourself and use your Windows device context to get a pixel format (via ChoosePixelFormat).
Then, with the pixel format it returns, you call SetPixelFormat, then wglCreateContext.

code:
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,  // flags
    PFD_TYPE_RGBA,   // pixel format
    32,              // color bits
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,              // bits for depth
    8,               // bits for stencil
    0,               // aux buffers
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};

HDC hDC = GetDC(hWnd);

int pixfmt = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, pixfmt, &pfd);

HGLRC ctx = wglCreateContext(hDC);
wglMakeCurrent(hDC, ctx);
That's one way to do it, anyway.

Because Microsoft didn't want to implement OpenGL 3, and OpenGL is a messy API in general, you have to create an OpenGL context using the Windows APIs first. Then you can request the function pointers for the newer OpenGL context creation methods.
So you'll need to call
wglGetProcAddress
to find the address of wglCreateContextAttribsARB.
You'll then end up with two contexts, so you'll have to destroy the temporary one.

Or use one of the utilities to do it for you.

Boz0r
Sep 7, 2006
The Rocketship in action.
I tried moving my raytracing project to my desktop Windows PC to get some more power, but when I try running it, I get the most vague error message:
code:
The application was unable to start correctly (0xc000007b).
From googling, this seems to be a general error, so I have no idea where to begin troubleshooting. Any ideas?

Max Facetime
Apr 18, 2009

Boz0r posted:

I tried moving my raytracing project to my desktop Windows PC to get some more power, but when I try running it, I get the most vague error message:
code:
The application was unable to start correctly (0xc000007b).
From googling, this seems to be a general error, so I have no idea where to begin troubleshooting. Any ideas?

File permissions, perhaps? See if your user account needs to take ownership of the new files. Making a copy of them in your Documents folder and trying with that should show whether this is the cause.

Boz0r
Sep 7, 2006
The Rocketship in action.
If I run the executable as administrator from Windows Explorer I still get the error. I pretty much assume it has something to do with OpenGL, glut, glew, or something.

ynohtna
Feb 16, 2007

backwoods compatible
Illegal Hen

Boz0r posted:

If I run the executable as administrator from Windows Explorer I still get the error. I pretty much assume it has something to do with OpenGL, glut, glew, or something.

Have you looked in the system event logs, and/or tried starting the executable with a debugger attached?

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Why would a raytracing app depend on OpenGL stuff anyway? I'd suspect maybe a more general C runtime problem or something, but that's really guessing.

Boz0r
Sep 7, 2006
The Rocketship in action.
I use OpenGL as a wireframe renderer where I can move the camera around, and then click a button to have it render the raytraced image. It's a framework we got from our instructor.

I've tried starting the program with a debugger, but I get the error before it triggers the first line in main(). I don't know how to view system event logs.

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

Boz0r posted:

I use OpenGL as a wireframe renderer where I can move the camera around, and then click on a button and have it render the raytracing. It's a framework we got from our instructor.

I've tried starting the program with a debugger, but I get the error before it triggers the first line in main(). I don't know how to view system event logs.

You'd probably be better off in the tech support forum, but from a quick look around, at least one person got this issue from mixing a 64-bit freeglut.dll with a 32-bit Visual Studio build, which can happen if you just kinda threw things in there without installing the versions your machine needs.

Boz0r
Sep 7, 2006
The Rocketship in action.
Yeah, I was thinking I might need to delete all that poo poo and reinstall it, but I wanted to check for other solutions first.

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
Sounds like it could be a bad executable or binary dependency -- have you built the executable on that machine or just copied it there?

Boz0r
Sep 7, 2006
The Rocketship in action.
I got it working by deleting everything related to glew and glut and reinstalling it :D

Raenir Salazar
Nov 5, 2010

College Slice
A'ight, so while I couldn't get picking to work in time, my arcball rotation was *perfect*, and I'm pretty drat happy to know that most of my classmates couldn't figure that out. It fills me with some confidence that I'm not out of my depth or anything.

The next step, for the now-current assignment, is to render animations using a skeleton system. A *lot* has been provided already: the keyframes, transformations, bones and so on all come in files, so I just need to import and read them and apply the transformations.

I'm looking at this tutorial, though it uses some SDL stuff we're not bothering with and has a way to manually move 2D bones to create its own keyframes; does it seem like as good a guide as any for ancient OpenGL, and does anyone have any suggestions?

I'm currently shifting the code as I go from a struct-based thing to a class-based implementation, as I have this irrational hatred of structs that I can never understand.

For this implementation of keeping track of child bones I'm thinking a vector will do the job.

Raenir Salazar
Nov 5, 2010

College Slice
Okay, so an actual question: in the tutorial I linked above, the guy defines his bones to have an angle and a length, but he's working in 2D.

In 3D this puzzles me; I suppose I'd need to define an angle for all three planes (well, two, since the "roll" of a line is probably irrelevant)? Unlike his example, I don't have angles given to me, just the points between two bones.

I imagine it's likely less headache to use an implementation that just uses defined points for both ends of the bone, but the tutorial uses angles, so there's a time sink either way.

So, suppose I have points A and B, do I derive the vector between them and then derive a "fake" vector from its magnitude on the X,Y (or Z?) planes and find the angles from there?

haveblue
Aug 15, 2005



Toilet Rascal
The vector from B to A is <Ax - Bx, Ay - By, Az - Bz>.

Yes, you'll need 3 angles per bone for 3D bones. You should probably make the assumption that when the model is first loaded all of the angles are zero.

Raenir Salazar
Nov 5, 2010

College Slice

haveblue posted:

The vector from B to A is <Ax - Bx, Ay - By,Az - Bz>.

Yes, you'll need 3 angles per bone for 3D bones. You should probably make the assumption that when the model is first loaded all of the angles are zero.

The initial frame is given to us and it's a standard/simple skeleton; all of its bones appear to be angled in some form. Here's what my tutor says:

quote:

Right, an angle has to be *between* two lines/rays/segments.

In 2D space you have one position angle, theta, between the vector AB and the positive x-axis. The angle to the y axis is by default 90 degrees minus theta.

In 3D space you have *three* position angles, alpha between vector and positive x axis, beta between vector and positive y axis, and gamma between vector and positive z axis.

Weirdly enough, I now have an online student with an Arabic name who is studying in Japan, whose text *starts out* by defining vectors in terms of these angles.

Anyhow, take a box, an ordinary box like a shoebox. The edges leaving from ONE chosen corner give the x, y, z axes. If you can cut open three sides of the box and leave the other three, it will help you to see.
Now put a pencil in the box with one end at the corner you have chosen for your origin. Do you see the three angles? Do you see how you can change them by rotating the vector to different positions?

Go back to the 2D xy plane to see the next bit. In the xy plane, if we put vector v at the origin
[using |v| as the magnitude of v]:
x/|v| = cos(theta)
So x = |v| cos(theta)


In 3D we use the same thing:
x = |v| cos(alpha)
y = |v| cos(beta)
z = |v| cos(gamma)

I think this confirms my hunch: I need to derive a second vector to measure the angles against for the initial frame.
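The tutor's formulas are easy to check numerically: the three position angles fall straight out of the components of the bone vector divided by its magnitude. A small Python sketch (hypothetical function name) derives them from two joint positions:

```python
import math

def direction_angles(a, b):
    """Angles (alpha, beta, gamma) between vector AB and the +x, +y, +z axes.

    cos(alpha) = x/|v|, cos(beta) = y/|v|, cos(gamma) = z/|v|,
    where v = B - A.
    """
    v = [b[i] - a[i] for i in range(3)]
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(math.acos(c / mag) for c in v)
```

Multiplying |v| by the cosine of each angle gives back the x, y, z components, which is exactly the reconstruction the tutor describes.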

shodanjr_gr
Nov 20, 2007

Boz0r posted:

I tried moving my raytracing project to my desktop Windows PC to get some more power, but when I try running it, I get the most vague error message:
code:
The application was unable to start correctly (0xc000007b).
From googling, this seems to be a general error, so I have no idea where to begin troubleshooting. Any ideas?

If you just copied the executable from your development machine to the desktop, chances are you are missing either some DLLs for any of the external dependencies (e.g. GLUT) or you are missing the visual studio runtime that your raytracer was compiled against. Each version of visual studio has a different set of DLLs that implement various bits and pieces of C/C++ functionality and those have to be either in your PATH system variable or in the working directory of the application (generally, the same directory where your executable is). If you Google "Visual Studio 20XX runtime" you will find download links directly from Microsoft.

You might wanna consider trying to build your code locally on your desktop and running it with a debugger attached, that should give you more information about what's happening.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

shodanjr_gr posted:

Each version of visual studio has a different set of DLLs that implement various bits and pieces of C/C++ functionality and those have to be either in your PATH system variable or in the working directory of the application (generally, the same directory where your executable is). If you Google "Visual Studio 20XX runtime" you will find download links directly from Microsoft.
You can also (more sensibly in my opinion) configure it to do static linking so that you don't have to distribute those files with your exe, for any project that isn't going to be made of a bunch of modules. Assuming you're not using MFC or something.

(I expect you'd still need the GLUT, and there's no static linking DirectX either, but the runtime libraries I'm pretty sure still do.)

shodanjr_gr
Nov 20, 2007

roomforthetuna posted:

You can also (more sensibly in my opinion) configure it to do static linking so that you don't have to distribute those files with your exe, for any project that isn't going to be made of a bunch of modules. Assuming you're not using MFC or something.

(I expect you'd still need the GLUT, and there's no static linking DirectX either, but the runtime libraries I'm pretty sure still do.)

That's actually a better approach for something small-scale. I also believe that GLUT can be statically linked.


On another note, is there an OpenGL debugger that works properly in Windows 8.1? I used to use gRemedy's gDebugger in Windows 7 and it worked great for me, but since I moved to Win 8.1, it crashes whenever I try to look at textures/buffers when at a breakpoint. I tried AMD's version as well, but it crashes the same way. NVidia's NSight graphics debugger doesn't want to debug non-core GL contexts.

edit:

To answer my own question, AMD's GPU PerfStudio 2 seems to work fine on NVidia cards (including GLSL on-the-fly editing and all the nice HUD injection stuff). The UI is somewhat more janky than gDebugger's (slower, .NET based, and it talks to a server over HTTP) but it gets the job done and the profiling tools are better. However, it doesn't do other stuff that gDebugger does, like letting you break on certain GL calls, and it doesn't seem to be able to show you the stack traces that lead to API calls either.

edit 2: Actually, the Frame Debugger and API Trace functionality work, but the frame profiler doesn't, since it seems to need access to low-level hardware counters. Bummer...

shodanjr_gr fucked around with this message at 07:14 on Feb 26, 2014

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm currently trying to convert a pbrt scene to fit into our own ray tracing framework. Reading the .pbrt file, I see some values get set while other values, I'm guessing, assume defaults - and there are a lot of translates back and forth.

Would it be possible to set a flag or parameter so that pbrt runs super-verbose - telling me the actual world coordinates of the camera, the up vector, the vertex coordinates, and so on?

Colonel J
Jan 3, 2008
I'm working with Peter Pike Sloan's paper Stupid Spherical Harmonics Tricks right now, trying to implement a Monte Carlo integrator for the rendering equation.

In appendix A he gives a recurrence equation for calculating the associated Legendre Polynomials:


and then states you increment m in the outer loop and l in the inner, which leads me to believe that to find P(l,m) you have to start with P(0,0) = 1 and work your way up the l's to the one you want, then do P(1,1) and work your way up the l's again, etc.

It works for the P(l,0) terms, but as soon as I hit P(1,1) it's like the equations up there break. Mathematica tells me P(1,1) = 0, but if you use the equation in the paper you find P(1,1) = (1 - 2*1)*P(0,0) = -1. Who's in the wrong here? (Probably me.)
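If it helps: with the Condon-Shortley phase, P(1,1)(x) = -sqrt(1 - x^2), so it's 0 at x = +/-1 and -1 at x = 0; the (1 - 2m) factor in that kind of recurrence multiplies a sqrt(1 - x^2) term as well, which is easy to drop by accident. Here's a Python sketch of the standard upward recurrence (the textbook algorithm, not the paper's exact code):

```python
import math

def assoc_legendre(l, m, x):
    """Associated Legendre polynomial P_l^m(x) by upward recurrence,
    including the Condon-Shortley phase (-1)^m."""
    # P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = 1.0
    if m > 0:
        somx2 = math.sqrt((1.0 - x) * (1.0 + x))
        fact = 1.0
        for _ in range(m):
            pmm *= -fact * somx2
            fact += 2.0
    if l == m:
        return pmm
    # P_{m+1}^m(x) = x (2m+1) P_m^m(x)
    pmmp1 = x * (2.0 * m + 1.0) * pmm
    if l == m + 1:
        return pmmp1
    # climb l upward with the three-term recurrence
    for ll in range(m + 2, l + 1):
        pll = ((2.0 * ll - 1.0) * x * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pmmp1
```

With this, P(1,1) at x = 0 is -1 and at x = 1 it's 0, so both answers can be "right" depending on where you evaluate and whether the sqrt(1 - x^2) factor made it in.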

Boz0r
Sep 7, 2006
The Rocketship in action.
My MLT is actually looking pretty good, the only problem is that the image keeps getting brighter and brighter the longer it runs. I assume this is wrong as it should converge to f(x), right?

steckles
Jan 14, 2006

Boz0r posted:

My MLT is actually looking pretty good, the only problem is that the image keeps getting brighter and brighter the longer it runs. I assume this is wrong as it should converge to f(x), right?
Yeah, MLT should produce the same results as random path tracing. Brightness variation could be caused by any number of things. If you're not accounting for start-up bias, the brightness of the image can oscillate around the true value, but that should even out over time. Did you implement the multiple importance sampling algorithm in the paper for weighting the large and small step values? Incorrectly computing the weights could lead to a gradual brightening or darkening of the image over time as well.

Twernmilt
Nov 11, 2005
No damn cat and no damn cradle.
I've been playing around with rendering resolution independent 2D shapes on the GPU. Now that I've got a few primitives like circles and bezier curves, I want to start casting dynamic shadows. The approach I'm currently working on is to use OpenCL to take my geometry and lights as inputs and return shadow geometry as the output. The shadow geometry would be triangles representing shadows cast by the objects. Then I'm going to render the original geometry and finally render the shadow geometry on top of it. I'm just curious what you all think of this scheme? I haven't had a lot of luck finding existing information on GPU accelerated 2D shadows, just bits and pieces, but that might be because I'm pretty new to this and I don't know exactly what I'm looking for.

Fellatio del Toro
Mar 21, 2009

Ok so I'm trying to build a simple 2D engine in OpenGL (JOGL) and I'm running into a bit of trouble. I'll try to avoid posting all of the source here but here's what I think are the relevant bits:

My test program currently does this:
Java code:
public void init(Simple2DEngine e) {
        e.loadGraphic("mario.png", "mario");
        e.loadGraphic("block.png", "block");
        mario = e.newGraphicObject("mario");
        block = e.newGraphicObject("block");
        mario.x(100); mario.y(yPos);
        block.x(100); block.y(400);
    }

    public void update(Simple2DEngine e) {
        mario.y(yPos);
        yPos += yVelocity;
        yVelocity -= 1;
        if (yPos < 0) {
            yVelocity = 25;
            yPos *= -1;
        }
What's supposed to happen here is a simple animation of mario bouncing up and down against a block. What actually happened when I added the block texture, though, is that both sprites show up with the block texture.

loadGraphic uses JogAmp to load the texture and sticks it into a tree keyed by name:
Java code:
protected boolean loadGraphic(String path, String name) {
        try {
            if (textureTree.get(name) == null) {
                Texture newText = TextureIO.newTexture(new File(path), false);
                textureTree.set(name, newText);
                return true;
            }
            else return false;
        }
        catch(Exception e) {
            return false;
        }
    }
newGraphicObject generates a "Graphic2D" object that contains a texture (pulled from the tree) and the position/scale/etc of the to-be-rendered sprite.

I'm currently using immediate rendering like so:
Java code:
private void drawImmediate() {
        int size = graphicList.getSize();
        
        gl.glClearColor(0, 0, 0, 0);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glBegin(GL2.GL_QUADS);
        for (int i = 0; i < size; i++) {
            graphicList.getAt(i).draw();
        }
        gl.glEnd();
    }
And the Graphic2D objects draw like so:
Java code:
protected void draw() {
        texture.bind(gl);
        gl.glColor4f(r, g, b, a);
        gl.glTexCoord2f(textX0, textY0);
        gl.glVertex3f(xPos, yPos, 0);
        gl.glTexCoord2f(textX0, textY1);
        gl.glVertex3f(xPos, yPos + height * scale, 0);
        gl.glTexCoord2f(textX1, textY1);
        gl.glVertex3f(xPos + width * scale, yPos + height * scale, 0);
        gl.glTexCoord2f(textX1, textY0);
        gl.glVertex3f(xPos + width * scale, yPos, 0);
    }
I can confirm through the debugger that there are two Textures in the tree with different texture IDs, and that the Texture being bound when draw() is called does in fact alternate between texID 1 and 2 as I'd expect. Still, for some reason the end result looks like this:



EDIT: Hmm. Manually setting the texID to 1/2 shows the block for both. Somehow the second texture is ending up at both texIDs.

DOUBLE EDIT: I'm stupid and thought texture binding was done after glBegin(). :v:

Fellatio del Toro fucked around with this message at 09:33 on Mar 8, 2014

shodanjr_gr
Nov 20, 2007
I haven't used JOGL so I'm not sure what it does internally, but I'm noticing a couple of things wrong with your code.

You start a draw call (gl.glBegin()) and then, inside it, submit the vertices for both of your primitives. Non-vertex state cannot be changed within a draw call (between glBegin() and glEnd()).

You should do your bind call outside the glBegin()/glEnd() block and only submit vertex geometry within it. What probably happens in your situation is that the second glBindTexture() call within your draw call ends up getting applied after the draw call ends, and is used as the active texture for the draw calls in the next frame. You probably want to rewrite your Graphic2D draw() function like this:

code:
texture.bind(gl);
gl.glBegin(GL2.GL_QUADS);
// vertex submission goes here
gl.glEnd();
and your drawImmediate should look like this (notice that the glbegin/glend has been removed):

code:
private void drawImmediate() {
        int size = graphicList.getSize();
        
        gl.glClearColor(0, 0, 0, 0);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        for (int i = 0; i < size; i++) {
            graphicList.getAt(i).draw();
        }
   }

Fellatio del Toro
Mar 21, 2009

Yeah, that did it. I never noticed the issue until now because I'd never tried more than one texture.



Hurrah!

Fellatio del Toro fucked around with this message at 09:36 on Mar 8, 2014

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Does JOGL not have shaders and VBOs and stuff? glBegin and friends have been dead since forever.

Fellatio del Toro
Mar 21, 2009

I'm pretty sure it does, but every resource I've found on OpenGL starts with immediate mode and gets into that stuff later. I started trying to work with VBOs yesterday, but I'm having a hard time finding a good reference.

Scaevolus
Apr 16, 2007

Suspicious Dish posted:

Does JOGL not have shaders and VBOs and stuff? glBegin and friends have been dead since forever.

When you're only pushing a few hundred triangles, using deprecated immediate mode APIs isn't a big deal.

Raenir Salazar
Nov 5, 2010

College Slice
So I'm trying to figure out linear blend skinning and nothing's working.

This is the formula we're given:



And I thought I understood it but now the more I look at it the more I'm starting to think I have it completely wrong.

code:
for (int vertex_i = 0; vertex_i < mesh->nfaces.size(); vertex_i++)
{
    for (int k = 0; k < 3; k += 1) {
        int j = k; //2 - k;

        //cout << "Vertex: " << vertex_i << endl;
        pointp.x = 0;
        pointp.y = 0;
        pointp.z = 0;
        for (int t = 0; t < mySkeleton->vertex.at(vertex_i).size(); t++)
        {
            point.x = mesh->vertex[mesh->faces[vertex_i][j]][0];
            point.y = mesh->vertex[mesh->faces[vertex_i][j]][1];
            point.z = mesh->vertex[mesh->faces[vertex_i][j]][2];

            //glPushMatrix();
            pointp += mySkeleton->vertex[vertex_i][t] *
                      myTranslationMatrix * myRotationMatrix * point;

            mesh->vertex[mesh->faces[vertex_i][j]][0] = pointp.x;
            mesh->vertex[mesh->faces[vertex_i][j]][1] = pointp.y;
            mesh->vertex[mesh->faces[vertex_i][j]][2] = pointp.z;
            //myPointPrime += MyXformations * myPoint;
        }
    }
}
I'm trying to take each point in my mesh, and adjust it relative to my skeleton structure, its transformations are further up and not posted here.

Thoughts? Either my matrix multiplication isn't working the way I thought it should work or I completely misunderstood the formula.

Zerf
Dec 17, 2004

I miss you, sandman

Raenir Salazar posted:

*skinning stuff*

It's kind of hard to know exactly what your transformations look like just from this code, but in general this is what you want to do for each joint:

code:
jointTransform = ...(parent transforms here)... * transform[parentJoint] * transform[thisJoint] * inverseBindPose[thisJoint]
Note that inverseBindPose should only occur once in a jointTransform and not be "inherited" from the parent. Also, don't rely exactly on the matrix multiplication order, because I'm too tired to think about the correct order.

Once you have a transform for each joint, you can upload all those to a shader and use the formula you posted, something like this(if you limit influence to four transforms):

code:
for( int i = 0; i < 4; i++ ) {
  skinnedPosition += mul( jointTransform[jointIndex[i]], Position ) * Weight[i];
}
It's maybe not the best of explanations, but if you are missing any of these parts, it could perhaps give you a hint on where your bugs are hiding.

Raenir Salazar
Nov 5, 2010

College Slice

Zerf posted:

It's kind of hard to know exactly what your transformations look like just from this code, but in general this is what you want to do for each joint:

code:
jointTransform = ...(parent transforms here)... * transform[parentJoint] * transform[thisJoint] * inverseBindPose[thisJoint]
Note that inverseBindPose should only occur once in a jointTransform and not be "inherited" from the parent. Also, don't rely exactly on the matrix multiplication order, because I'm too tired to think about the correct order.

Once you have a transform for each joint, you can upload all those to a shader and use the formula you posted, something like this(if you limit influence to four transforms):

code:
for( int i = 0; i < 4; i++ ) {
  skinnedPosition += mul( jointTransform[jointIndex[i]], Position ) * Weight[i];
}
It's maybe not the best of explanations, but if you are missing any of these parts, it could perhaps give you a hint on where your bugs are hiding.

By shaders, do you mean modern OpenGL stuff? We've been using immediate mode/old OpenGL so far.

What actually outputs the mesh is:

quote:

glBegin(GL_TRIANGLES);
for (int i = 0; i < mesh->nfaces.size(); i += 1)
    for (int k = 0; k < 3; k += 1) {
        int j = k; //2 - k;

        glNormal3f(mesh->normal[mesh->nfaces[i][j]][0],
                   mesh->normal[mesh->nfaces[i][j]][1],
                   mesh->normal[mesh->nfaces[i][j]][2]);

        glVertex3f(mesh->vertex[mesh->faces[i][j]][0],
                   mesh->vertex[mesh->faces[i][j]][1],
                   mesh->vertex[mesh->faces[i][j]][2]);
    }
glEnd();

So what I've been trying to do is transform the mesh, upload the new mesh, and then that gets drawn.

I don't have a real hierarchy for my skeleton; I just draw lines between two points for each bone, and each bone is a pair of two joints kept in a vector array.

So to clarify, my skeleton animation works perfectly, but making the jump from that to my mesh is what's difficult.

Zerf
Dec 17, 2004

I miss you, sandman

Raenir Salazar posted:

by shaders do you mean modern opengl stuff? We've been using immediate mode/old opengl so far.

What actually outputs the mesh is:


So what I've been trying to do is transform the mesh, upload the new mesh, and then that gets drawn.

I don't have a real heirarchy for my skeleton, I just draw lines between two points for each bone and each bone is a pair of two joints kept in a vector array.

So to clarify, my skeleton animation works perfectly but making the jump from that to my mesh is whats difficult.

Then I think what you're missing is the inverse bind pose transform. Is this a school assignment? Is that transform mentioned anywhere? Here's a simple example of why it's needed:

Imagine that we have two joints, one at position j1abs(5,0,0) and one at position j2abs(5,2,0). j2abs has a relative transform to j1abs, which looks like j2rel(0,2,0). Now we have a vertex we want to skin, placed at v1abs(5,3,0). For simplicity, we want to attach this vertex only to the j2 joint. We cannot apply j2's absolute transform to the vertex position right away (that would give us a new position v1'abs(5+5,2+3,0+0)=(10,5,0), which is not what we want). Therefore, we define the inverse bind pose transform as the transformation from a joint to the origin of the model. In other words, we want a transform which takes a position in the model into the local space of a joint. With translations this is simple; we can invert the transform by negating it, giving us j2invBindPose(-5,-2,0).

Now, let's try to apply these transformations. First we take the vertex v1 and multiply by the inverse bind pose for j2. This results in position (0,1,0) (see, we are now in joint local space). Now we can simply apply j1rel(5,0,0) and j2rel(0,2,0), which gives us v1'abs(0+5+0,1+0+2,0+0+0)=(5,3,0), right where we started.

Now imagine we change j1rel's transform to j1rel(6,1,0). We again take v1abs(5,3,0)*j2invBindPose(-5,-2,0)*j1rel(6,1,0)*j2rel(0,2,0) = (5 + -5 + 6 + 0, 3 + -2 + 1 + 2, 0+0+0+0) = v1'abs(6,4,0), which is exactly what we want.

So, does this explanation make sense to you or have I succeeded in making you more confused? :)
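That translation-only walkthrough is easy to sanity-check numerically: since pure translations compose by addition, a few lines of Python (a toy sketch with made-up names) reproduce the numbers above:

```python
def add(*vecs):
    """Compose translation-only transforms by component-wise addition."""
    return tuple(sum(c) for c in zip(*vecs))

def skin_translation(v, inv_bind_pose, *joint_transforms):
    """Inverse bind pose first, then the joint chain, applied to vertex v."""
    return add(v, inv_bind_pose, *joint_transforms)

v1 = (5, 3, 0)
j2_inv_bind = (-5, -2, 0)   # negated joint position, translations only

# bind pose: j1rel = (5,0,0), j2rel = (0,2,0) -> the vertex stays put
rest = skin_translation(v1, j2_inv_bind, (5, 0, 0), (0, 2, 0))

# move j1 to (6,1,0) -> the vertex follows to (6,4,0)
posed = skin_translation(v1, j2_inv_bind, (6, 1, 0), (0, 2, 0))
```

With rotations involved you'd multiply matrices in the same order instead of adding offsets, but the bookkeeping is identical.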


Raenir Salazar
Nov 5, 2010

College Slice
I think I see what you mean, but isn't that handled by assigning weights?



Example, vertex[0] = <0.0018 0.0003 0.8716 0.0003 0.0004 0 0 0.0006 0.0007 0.0001 0 0.0063 0.0046 0 0.0585 0.0546 0.0002>

We're given a file that has the associated weights for every vertex. Each float is for a particular bone from 0 to 16; the file has the 17 weights for each of the 6669 vertices.

e: Out of curiosity, Zerf, do you have Skype, and any chance I could add you? Not just for figuring this out, but for OpenGL help and advice in general. :)
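The weights handle the blending step: each skinned vertex is the weight-sum of the per-bone transformed positions, as in Zerf's shader loop earlier. A translation-only toy in Python (made-up numbers, just to show the shape of the computation):

```python
def skin_vertex(v, weights, bone_translations):
    """Blend per-bone transformed positions: sum_i w_i * (T_i applied to v).

    Toy version where each bone 'transform' is just a translation offset;
    a real T_i would also fold in the inverse bind pose.
    """
    out = [0.0, 0.0, 0.0]
    for w, off in zip(weights, bone_translations):
        for i in range(3):
            out[i] += w * (v[i] + off[i])
    return tuple(out)

# two bones with equal influence: one stays put, one moves +2 in x
blended = skin_vertex((1.0, 0.0, 0.0), (0.5, 0.5),
                      [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```

The weights only do the blending, though; each per-bone transform still needs the inverse bind pose folded in, which is the part Zerf is pointing at.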

Raenir Salazar fucked around with this message at 21:06 on Mar 11, 2014
