Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum

Avenging Dentist posted:

One of the first Google results is for a DirectDraw overlay, which is probably functionally similar to Direct2D: http://www.gamedev.net/community/forums/topic.asp?topic_id=359319
Cool, thanks! I didn't know what it was called, so I kept searching for things like "on top"!

Well, unfortunately it seems that on Windows Vista/7 DirectDraw overlays don't work with 3D applications. Guess it's back to searching!

It looks like the only way to do this is to detour the Direct3D functions the program calls?

Sweeper fucked around with this message at 20:55 on Jan 16, 2010
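
That's the usual conclusion, for what it's worth: on Vista/7, drawing over another process's 3D output generally means hooking its Direct3D calls. A minimal sketch with Microsoft Detours (the Detours calls are real; everything else, including how you obtain the real EndScene pointer from the device vtable, is hand-waved):
code:
#include <windows.h>
#include <detours.h>
#include <d3d9.h>

void DrawMyOverlay(IDirect3DDevice9* device); // hypothetical overlay drawing

typedef HRESULT (WINAPI *EndScene_t)(IDirect3DDevice9*);
// Must be set to the game's real EndScene (e.g. fished out of the
// IDirect3DDevice9 vtable of a dummy device) before InstallHook runs.
static EndScene_t RealEndScene = nullptr;

// Replacement: draw the overlay, then let the game present as usual.
static HRESULT WINAPI HookedEndScene(IDirect3DDevice9* device)
{
    DrawMyOverlay(device);
    return RealEndScene(device);
}

void InstallHook()
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)RealEndScene, HookedEndScene);
    DetourTransactionCommit();
}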

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

not without doing some pixel shader tricks, no.
One thing you CAN do is tile trims from an atlas though, i.e. textures that are clamped on one axis and repeated on another. ET:QW does this.
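
For reference, the pixel-shader trick for a trim usually boils down to a fract() on the repeating axis, something like this (a sketch; the uniform names are made up, and it assumes fixed-function texcoords are passed through):
code:
uniform sampler2D atlas;
uniform vec2 trimOffset; // hypothetical: top-left of the trim in atlas UVs
uniform vec2 trimSize;   // hypothetical: size of the trim in atlas UVs

void main()
{
    // Repeat along s with fract(), clamp t, then map into the trim's
    // rectangle so neighbouring atlas entries don't bleed in.
    vec2 local = vec2(fract(gl_TexCoord[0].s),
                      clamp(gl_TexCoord[0].t, 0.0, 1.0));
    gl_FragColor = texture2D(atlas, trimOffset + local * trimSize);
}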

newsomnuke
Feb 25, 2007

This is a bit of a ridiculous question because it seems so basic, but: I can't get smooth scrolling in OpenGL (using ortho projection). It seems to be because the floating point coords are getting rounded down (which makes sense) and I wouldn't have thought it would be noticeable at 60 fps, but it is. When I use integer coords, everything is fine.

Alcool Tue
Nov 6, 2009

by Ozma

ultra-inquisitor posted:

This is a bit of a ridiculous question because it seems so basic, but: I can't get smooth scrolling in OpenGL (using ortho projection). It seems to be because the floating point coords are getting rounded down (which makes sense) and I wouldn't have thought it would be noticeable at 60 fps, but it is. When I use integer coords, everything is fine.

From my limited perspective on general programming, properly declared float values aren't going to round themselves down unless you're converting them to ints OR you're doing math that mixes ints and floats, which can either truncate your floats or promote your ints to floats depending on the expression.

Try tracing your values and making sure that you're only operating on floats with floats and ints with ints. Sorry if this didn't help very much.

newsomnuke
Feb 25, 2007

Alcool Tue posted:

From my limited perspective on general programming, properly declared float values aren't going to round themselves down unless you're converting them to ints OR you're doing math that mixes ints and floats, which can either truncate your floats or promote your ints to floats depending on the expression.
The rounding occurs when the positions are rasterised to screen, which is a set of discrete pixels.
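
For what it's worth, there are two usual fixes: snap the scroll offset to whole pixels each frame so geometry never straddles a pixel boundary, or turn on linear filtering so fractional positions are blended instead of snapped. A sketch of both (assuming an ortho projection where one unit equals one pixel; scrollX/scrollY and spriteTex are made-up names):
code:
/* Option 1: keep the scroll position as a float, but snap the camera
   translation to the nearest whole pixel before drawing. */
float camX = floorf(scrollX + 0.5f);
float camY = floorf(scrollY + 0.5f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-camX, -camY, 0.0f);

/* Option 2: let the texture sampler smooth sub-pixel positions. */
glBindTexture(GL_TEXTURE_2D, spriteTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);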

a slime
Apr 11, 2005

I've never worked with GLSL before but I spent some time reading about it tonight and it sounds pretty interesting. I'm working on a little 2D game that currently uses a bunch of textured quads and renders them using VBOs. The quads are never rotated and never change size, but the texture coordinates change fairly often for animated sprites and that kind of thing.

Is it possible to store texture coordinates and the height/width of the quad in the normal or some other attribute I don't use, and instead render everything as a bunch of point sprites? I'm thinking I can set up the texture coordinates and quad size in a vertex/fragment shader... Am I way off base? Am I likely to see much of an increase in throughput by doing so?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

not a dinosaur posted:

I've never worked with GLSL before but I spent some time reading about it tonight and it sounds pretty interesting. I'm working on a little 2D game that currently uses a bunch of textured quads and renders them using VBOs. The quads are never rotated and never change size, but the texture coordinates change fairly often for animated sprites and that kind of thing.

Is it possible to store texture coordinates and the height/width of the quad in the normal or some other attribute I don't use, and instead render everything as a bunch of point sprites? I'm thinking I can set up the texture coordinates and quad size in a vertex/fragment shader... Am I way off base? Am I likely to see much of an increase in throughput by doing so?

Yep. In fact, you should have a bunch of TEXCOORDn attributes (for n = [0, 7) I think) which are commonly used to pack those things in. The performance increase you'd see would depend almost entirely on where your current bottlenecks are; however, I'd be willing to say that you're likely bottlenecked on the CPU/driver interface, in which case making fewer and less frequent API calls and offloading more work to the shaders would almost certainly be a win.
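
Here's roughly what that could look like with point sprites, for what it's worth (a sketch; attribute names are invented, and it assumes glEnable(GL_POINT_SPRITE) and glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) on the C side; note that point sprites are screen-aligned and have an implementation-defined maximum size, which is the usual gotcha):
code:
// Vertex shader: sprite size and atlas rectangle packed into
// otherwise-unused attributes.
attribute float spriteSize; // quad size in pixels
attribute vec4  atlasRect;  // (u0, v0, width, height) in atlas UVs
varying vec4 vAtlasRect;

void main()
{
    gl_Position  = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_PointSize = spriteSize;
    vAtlasRect   = atlasRect;
}

// Fragment shader: gl_PointCoord sweeps 0..1 across the sprite.
uniform sampler2D atlas;
varying vec4 vAtlasRect;

void main()
{
    vec2 uv = vAtlasRect.xy + gl_PointCoord * vAtlasRect.zw;
    gl_FragColor = texture2D(atlas, uv);
}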

thepostaldude
Jan 18, 2009

I regret nothing.
I am trying to do shadow mapping in OpenGL, but can't really seem to get it right.
What I do is render the scene from the light's POV into an FBO/depth texture (seen in the top right corner). I save the projection/modelview matrix in gl_TextureMatrix[7].

When rendering the scene I don't do anything other than call gluLookAt and then render a few VBOs; still, the projection does not seem to be right. Any ideas?

Saving matrix to texture 7:
code:
    float modelview[16];
    float projection[16];
    
    glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
    glGetFloatv(GL_PROJECTION_MATRIX, projection);

    
    glMatrixMode(GL_TEXTURE);
    glActiveTexture(GL_TEXTURE7);
    
    glLoadIdentity();        

    glMultMatrixf(projection);
    glMultMatrixf(modelview);
    
    glMatrixMode(GL_MODELVIEW);
Vertex shader:
code:
varying vec4 shadowCoord;

void main(void)
{ 
    mat4 bias = mat4(0.5, 0.0, 0.0, 0.5,
                     0.0, 0.5, 0.0, 0.5,
                     0.0, 0.0, 0.5, 0.5,
                     0.0, 0.0, 0.0, 1.0);
					 
    shadowCoord = gl_Vertex * gl_TextureMatrix[7] * bias; 
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment shader:
code:
uniform sampler2D colorTexture;
uniform sampler2DShadow shadowMap;

varying vec4 shadowCoord;

void main(void)
{
    vec4 color = texture2D(colorTexture, gl_TexCoord[0].xy);
    if(color.a <= 0.6) discard;
    gl_FragColor = vec4(shadow2DProj(shadowMap, shadowCoord) * color.rgb, color.a);
}
Here is what I get from that; like I said, the projection does not seem to be right:

[Screenshot: 1286x1049 image of the incorrectly projected shadows.]

heeen
May 14, 2005

CAT NEVER STOPS
I have an OpenGL bug that I just can't get a grip on: I render stuff into an FBO, which I can clear to whatever color correctly, but everything I render turns up white.
I have:
code:
fbo->Bind();
CHECK_GL_ERROR();
glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);

glClearColor(0.0, 0.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glDisable(GL_CULL_FACE);

glUseProgram(0);

glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

glDisable(GL_LIGHTING);
glDisable(GL_LIGHT0);

Am I missing something?

The1ManMoshPit
Apr 17, 2005

You seem to be missing a glEnable(GL_TEXTURE_whatever) in the code you posted.

Edit: whoops, you explicitly disable them. I have no idea then without seeing more code.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Make sure you enable color writes with glColorMask before clearing?

Martman
Nov 20, 2006

I'm working on a game engine for a class project (we weren't specifically assigned this project, we chose it, so I don't really see it as an academic integrity issue to get help with this certain issue), and we're using OpenGL.

The idea is that the interface is essentially something like Diablo 2 or most RTSes; i.e. isometric view, movement is done by mouse clicks. The ground is made up of square tiles, each tile having four corners whose heights are read from a map file.

Here's a simple mockup of the scenario we're wondering about:

[mockup image]

I just wanted to make that clear, but I guess the problem is kind of simple: essentially we're wondering how to find, based on where you've clicked on the screen, exactly which point on the map you should be trying to move to. We can use picking to find the specific tile that was clicked on, but otherwise we're a bit confused. We have the location and angle of the camera and the definition of the 2D plane that represents the appropriate tile, so I feel like it should be possible to find the intersection between the ray defined by the click and the tile; I'm just not sure exactly how to construct that ray. Or maybe this is the wrong way to go about it?

haveblue
Aug 15, 2005



Toilet Rascal

Martman posted:

I'm working on a game engine for a class project (we weren't specifically assigned this project, we chose it, so I don't really see it as an academic integrity issue to get help with this certain issue), and we're using OpenGL.

The idea is that the interface is essentially something like Diablo 2 or most RTSes; i.e. isometric view, movement is done by mouse clicks. The ground is made up of square tiles, each tile having four corners whose heights are read from a map file.

Here's a simple mockup of the scenario we're wondering about:

[mockup image]

I just wanted to make that clear, but I guess the problem is kind of simple: essentially we're wondering how to find, based on where you've clicked on the screen, exactly which point on the map you should be trying to move to. We can use picking to find the specific tile that was clicked on, but otherwise we're a bit confused. We have the location and angle of the camera and the definition of the 2D plane that represents the appropriate tile, so I feel like it should be possible to find the intersection between the ray defined by the click and the tile; I'm just not sure exactly how to construct that ray. Or maybe this is the wrong way to go about it?

You can use gluUnProject to find the world-space vector of the mouse click (the function gives you the location on the near clip plane; just subtract the camera position from that). From there, you can use the camera location and that vector to find the intersection with the tile plane. The obvious way to do that would be to find the distance of the camera from the plane (plug the camera location into the plane equation) and then scale the click vector by (that distance / click vector magnitude) / (cosine of the angle between the click vector and the plane normal) and add it to the camera position. There's probably a more efficient formula; that's just off the top of my head.

haveblue fucked around with this message at 07:09 on Feb 25, 2010
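
A sketch of that approach in code, for what it's worth (gluUnProject is real GLU; the function and parameter names here are made up, and the tile plane is given as n.p = d):
code:
#include <GL/glu.h>

/* Build a world-space ray from a mouse click, then intersect it with
   a plane. Returns 1 and fills hit[] on success. */
int ClickToGround(int mouseX, int mouseY,
                  const double n[3], double d, /* plane: n.p = d */
                  double hit[3])
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    /* Unproject the click at the near and far planes to get a ray. */
    GLdouble nearP[3], farP[3];
    GLdouble winY = viewport[3] - mouseY; /* GL's window origin is bottom-left */
    gluUnProject(mouseX, winY, 0.0, model, proj, viewport,
                 &nearP[0], &nearP[1], &nearP[2]);
    gluUnProject(mouseX, winY, 1.0, model, proj, viewport,
                 &farP[0], &farP[1], &farP[2]);

    double dir[3] = { farP[0]-nearP[0], farP[1]-nearP[1], farP[2]-nearP[2] };
    double denom = n[0]*dir[0] + n[1]*dir[1] + n[2]*dir[2];
    if (denom == 0.0) return 0; /* ray parallel to the plane */

    double t = (d - (n[0]*nearP[0] + n[1]*nearP[1] + n[2]*nearP[2])) / denom;
    if (t < 0.0) return 0; /* intersection behind the camera */

    hit[0] = nearP[0] + t*dir[0];
    hit[1] = nearP[1] + t*dir[1];
    hit[2] = nearP[2] + t*dir[2];
    return 1;
}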

Gart
Feb 24, 2010

heeen posted:

I have an OpenGL bug that I just can't get a grip on: I render stuff into an FBO, which I can clear to whatever color correctly, but everything I render turns up white.
I have:
code:
fbo->Bind();
CHECK_GL_ERROR();
glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);

glClearColor(0.0, 0.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glDisable(GL_CULL_FACE);

glUseProgram(0);

glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

glDisable(GL_LIGHTING);
glDisable(GL_LIGHT0);

Am I missing something?

Where's your draw call? If you disable texturing and lighting and aren't using a shader, the fixed-function pipeline falls back to vertex colors, and the default vertex color is white.
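
i.e. if the untextured, unlit path is intended, a color just needs to be set before drawing; a trivial sketch:
code:
glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glColor3f(1.0f, 0.0f, 0.0f); /* applies to every vertex that follows */
/* ... draw calls here; without the glColor call, everything is white */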

Gart
Feb 24, 2010

thepostaldude posted:

I am trying to do shadow mapping in OpenGL, but can't really seem to get it right.

code:
    shadowCoord = gl_Vertex * gl_TextureMatrix[7] * bias; 

Judging by how you construct your bias matrix and how you saved your projection*view*model matrix, are you sure you mean to do vector*matrix instead of matrix*vector? In other words:

code:
    shadowCoord = bias * gl_TextureMatrix[7] * gl_Vertex; 
The best thing to do when debugging shadow map issues is to render your shadow map coordinates as color.

Also, bake your bias matrix into the texture matrix instead of constructing it in the shader, but don't do that until everything works.
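
Something like this, for instance (a debugging sketch reusing the shadowCoord varying from above):
code:
varying vec4 shadowCoord;

void main()
{
    // Visualize the projected shadow-map coordinates; after the
    // perspective divide they should sweep 0..1 across the lit area.
    vec3 coord = shadowCoord.xyz / shadowCoord.w;
    gl_FragColor = vec4(coord.xy, 0.0, 1.0);
}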

Mata
Dec 23, 2003
I'm using OpenGL's bézier evaluators, and they break when I pass more than 64 control points (an 8x8 grid) to glMap2f. The maximum number of control points seems to be implementation-specific, but can it really be that low? I'd like to send in 65025 at least :(

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Mata posted:

I'm using OpenGL's bézier evaluators, and they break when I pass more than 64 control points (an 8x8 grid) to glMap2f. The maximum number of control points seems to be implementation-specific, but can it really be that low? I'd like to send in 65025 at least :(
:what: I promise you that you don't want to use anywhere near that many. A 60,000-degree polynomial will not evaluate to anything sensible. Hell, even 64 terms is overkill for double-precision floats.

Dijkstracula fucked around with this message at 19:25 on Mar 2, 2010

Mata
Dec 23, 2003

Dijkstracula posted:

:what: I promise you that you don't want to use anywhere near that many. A 60,000-degree polynomial will not evaluate to anything sensible. Hell, even 64 terms is overkill for double-precision floats.

Heh, yeah, I forgot how bézier curves are actually calculated, even though I read about them just this morning, which is why I tried this in the first place (modelling terrain as a bézier surface, I tried to make every byte in a 256x256 heightmap a control point :v:).
Calling glGetIntegerv with GL_MAX_EVAL_ORDER tells you the maximum number of control points you can pass to glMap2f, which was 8x8 on my computer.
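
For reference, that query looks like this (the spec only guarantees a minimum of 8):
code:
GLint maxOrder = 0;
glGetIntegerv(GL_MAX_EVAL_ORDER, &maxOrder);
/* maxOrder is the largest per-axis order glMap2f will accept,
   i.e. maxOrder x maxOrder control points per patch. */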

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

I'm asking because I don't know and I'm curious - why are you using bezier curves if you've got a heightmap? It's not obvious to me what your control points will be. Are you sure you don't simply want a run-of-the-mill interpolation/tessellation algorithm instead?

Contero
Mar 28, 2004

I'm trying to get rid of this annoying diamond artifact:



I believe it's mach banding, but I could be mistaken. It's a heightmap where the worldspace normal is stored in a varying vec3 and then the diffuse color is calculated in the frag shader.

Is there a straightforward way of getting rid of this problem? It seems like it should be fairly common.

emf
Aug 1, 2002



Contero posted:

I'm trying to get rid of this annoying diamond artifact:



I believe it's mach banding, but I could be mistaken. It's a heightmap where the worldspace normal is stored in a varying vec3 and then the diffuse color is calculated in the frag shader.

Is there a straightforward way of getting rid of this problem? It seems like it should be fairly common.
I don't know if this is straightforward, but if the banding always has a known period, you can do a DFFT, apply a ring filter of appropriate radius, and then do an inverse DFFT. If you know that the angle of the diamond is also constant, you should be able to get away with spot filters.

Mata
Dec 23, 2003

Dijkstracula posted:

I'm asking because I don't know and I'm curious - why are you using bezier curves if you've got a heightmap? It's not obvious to me what your control points will be. Are you sure you don't simply want a run-of-the-mill interpolation/tessellation algorithm instead?

I just passed in the heightmap as the control points, in the form of a two-dimensional grid of x,y,z coordinate data.

Here's more about bézier surfaces: http://www.opengl.org/resources/code/samples/mjktips/grid/index.html
I've since taken a different approach, but if I still wanted to render the whole terrain as a bézier surface I'd have to divide it into 4x4 grids and render them one at a time or something like that.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Mata posted:

I just passed in the heightmap as the control points, in the form of a two-dimensional grid of x,y,z coordinate data.
No, I understood that, but typically a heightmap's entries correspond to the actual height values you want your terrain to have, not bézier control points, which in general are not the same thing. This is why the whole thing sounded dicey to me. But if you've figured out another way, then fine :)

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Contero posted:

I'm trying to get rid of this annoying diamond artifact:



I believe it's mach banding, but I could be mistaken. It's a heightmap where the worldspace normal is stored in a varying vec3 and then the diffuse color is calculated in the frag shader.

Is there a straightforward way of getting rid of this problem? It seems like it should be fairly common.

Can you post a wireframe screenshot? I have a suspicion of what's going on.

The most common way of converting the height map to geometry is by making squares out of each four adjacent vertices, then bisecting the squares to form two triangles. However, if you do this, sometimes the bisection direction will run counter to the actual contour of the surface. What you really want to do is calculate the normals of the two triangles in each bisection direction (top-left to bottom-right, and bottom-left to top-right) and find the dot product between them. Then, use the bisection where the dot product is the least (i.e. where the normals are most different). This will ensure that your geometry matches the contours of the underlying terrain as closely as possible.
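
In code, picking the bisection might look something like this (a sketch; Vec3, Dot, TriNormal, and v(x, y), which builds the vertex at a grid coordinate, are all stand-ins for your own types and helpers):
code:
/* Normals of both triangles under each candidate diagonal. */
Vec3 nA1 = TriNormal(v(x, y),     v(x + 1, y),     v(x + 1, y + 1)); /* TL-BR, tri 1 */
Vec3 nA2 = TriNormal(v(x, y),     v(x + 1, y + 1), v(x, y + 1));     /* TL-BR, tri 2 */
Vec3 nB1 = TriNormal(v(x, y),     v(x + 1, y),     v(x, y + 1));     /* TR-BL, tri 1 */
Vec3 nB2 = TriNormal(v(x + 1, y), v(x + 1, y + 1), v(x, y + 1));     /* TR-BL, tri 2 */

/* Use the diagonal whose normals differ most (smallest dot product). */
int useTopLeftToBottomRight = Dot(nA1, nA2) < Dot(nB1, nB2);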

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Contero posted:

Is there a straightforward way of getting rid of this problem? It seems like it should be fairly common.
Yes. Right now you're using a diagonal that's always going the same direction, i.e. if you have a 2x2 section of the heightmap representing the points:

0 1
2 3

You're doing something like for example, always using (0 1 3) (0 3 2) as the triangles. Don't do this. Instead, alternate which direction the diagonal goes.

i.e. if (heightmap x coord + heightmap y coord) is even, use (0 1 3) (0 3 2), and if it's odd, use (0 1 2) (1 3 2)

OneEightHundred fucked around with this message at 06:00 on Mar 4, 2010
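
The index generation might look something like this, for what it's worth (a sketch for a grid of w x h cells over (w+1) x (h+1) row-major vertices; indices is assumed to hold w*h*6 entries):
code:
unsigned int idx = 0;
for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
        unsigned int v0 = y * (w + 1) + x; /* 0: top-left     */
        unsigned int v1 = v0 + 1;          /* 1: top-right    */
        unsigned int v2 = v0 + (w + 1);    /* 2: bottom-left  */
        unsigned int v3 = v2 + 1;          /* 3: bottom-right */
        if ((x + y) % 2 == 0) {
            /* even cell: diagonal 0-3, triangles (0 1 3) (0 3 2) */
            indices[idx++] = v0; indices[idx++] = v1; indices[idx++] = v3;
            indices[idx++] = v0; indices[idx++] = v3; indices[idx++] = v2;
        } else {
            /* odd cell: diagonal 1-2, triangles (0 1 2) (1 3 2) */
            indices[idx++] = v0; indices[idx++] = v1; indices[idx++] = v2;
            indices[idx++] = v1; indices[idx++] = v3; indices[idx++] = v2;
        }
    }
}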

Contero
Mar 28, 2004

Hubis posted:

Can you post a wireframe screenshot? I have a suspicion of what's going on.

The most common way of converting the height map to geometry is by making squares out of each four adjacent vertices, then bisecting the squares to form two triangles. However, if you do this, sometimes the bisection direction will run counter to the actual contour of the surface. What you really want to do is calculate the normals of the two triangles in each bisection direction (top-left to bottom-right, and bottom-left to top-right) and find the dot product between them. Then, use the bisection where the dot product is the least (i.e. where the normals are most different). This will ensure that your geometry matches the contours of the underlying terrain as closely as possible.

Unfortunately this is from Nvidia's ocean FFT example, and the map is going to change every frame. Changing up my vertex ordering every frame might be a little complicated / slow.

OneEightHundred posted:

Yes. Right now you're using a diagonal that's always going the same direction, i.e. if you have a 2x2 section of the heightmap representing the points:

0 1
2 3

You're doing something like for example, always using (0 1 3) (0 3 2) as the triangles. Don't do this. Instead, alternate which direction the diagonal goes.

i.e. if (heightmap x coord + heightmap y coord) is even, use (0 1 3) (0 3 2), and if it's odd, use (0 1 2) (1 3 2)



:sigh: I could have sworn you said change the order based on just the Y coord before. It looks less regular, but it's still there.

Here's the code they're using to calculate the normal, if it's at all relevant:

code:
 vec3 normal      = normalize(cross( vec3(0.0, slope.y*heightScale, 2.0 / size.x), vec3(2.0 / size.y, slope.x*heightScale, 0.0)));
Where the slope is K from here (actually now I'm not so sure...). The entire example seems to be a straightforward implementation of this article, and sadly doesn't even use FFT. It #includes <cufft.h> but then never uses it.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Contero posted:

:sigh: I could have sworn you said change the order based on just the Y coord before. It looks less regular, but it's still there.
No, order by (X+Y), meaning it'll alternate on both axes.

Something like this:

[diagram]

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Contero posted:

Unfortunately this is from Nvidia's ocean FFT example, and the map is going to change every frame. Changing up my vertex ordering every frame might be a little complicated / slow.

Ohhh... for some reason I thought this was a static heightmap. Hmm.

Could you just use normals from the normal map instead of using interpolated triangle normals? That would free your lighting model from the underlying geometry division.

OneEightHundred posted:

No, order by (X+Y), meaning it'll alternate on both axes.

Something like this:

[diagram]

You'll still see the artifacts (the underlying problem of the quads being non-planar remains), but it will reduce their visibility.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

You'll still see the artifacts (the underlying problem of the quads being non-planar remains), but it will reduce their visibility.
This'll reduce the "diamond" artifacting. The next thing to do is normalize the normal in the pixel shader so it stops degenerating from interpolation.
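
i.e. in the fragment shader (variable names assumed):
code:
uniform vec3 lightDir; // assumed: normalized world-space light direction
varying vec3 vNormal;  // assumed: the interpolated world-space normal

void main()
{
    // Interpolation shortens the normal between vertices; renormalizing
    // per fragment removes the faceted diffuse falloff.
    vec3 n = normalize(vNormal);
    float diffuse = max(dot(n, lightDir), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}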

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
This is a stupid question that I should be able to figure out for myself, but it's just lurking in a blind-spot in my brain and making me crazy.

What I have is a hierarchical 'skeleton' object, in which each bone has a length (float) and a rotation (quaternion); when it is animated, a 'recalculate' function is called that recurses its way down the hierarchy setting the appropriate bone positions like so (where "FullTransform" is the transformation of meshes that are attached to the bone; they attach to the base of the bone, so they are rotated but not moved up the length, and "FullSubTransform" is the transformation of bones attached to the bone, which are both rotated and translated):
code:
void RBONE::Recalc(DWORD depth,const R3DMATRIX *m) {
  //m = parent's FullSubTransform, depth is the max depth to recalc
  R3DMATRIX rotate; D3DXMatrixRotationQuaternion(&rotate,&AnimatedOrientation);
  D3DXMatrixMultiply(&FullTransform,&rotate,m);
  R3DMATRIX temp; D3DXMatrixTranslation(&temp,0.0f,Length,0.0f);
  R3DMATRIX bonetrans; D3DXMatrixMultiply(&bonetrans,&temp,&rotate);
  D3DXMatrixMultiply(&FullSubTransform,&bonetrans,m);
  DWORD ndepth=depth-1;
  if (ndepth>0) {
    foreach (SubBone as p) { //this is pseudocode since you don't care to see my lists
      p->Recalc(ndepth,&FullSubTransform);
    }
  }
}
So far this all works fine. The problem is when a mesh is attached to more than one bone, for the purpose of skinning. For the second bone it's still pretty easy: the secondary bone's influence transformation is just primary->FullTransform followed by primary->TranslateLength, secondary->Rotate, primary->TranslateBackByLength (that last because the bone influence should *not* include offsetting by the bone length, only rotating around the joint).

My problem is how do I calculate this for the third bone - I can't just do primary->length, secondary->rotate, secondary->length, tertiary->rotate, secondary->unlength, primary->unlength, because then the effect of primary->unlength is altered by the tertiary rotate. Nor can I do 1length/2rotate/1unlength, 2length/3rotate/2unlength, because then the tertiary rotation is 2length offset from the base of the primary bone instead of from the base of the secondary bone. Nor can I just find the final bone positions of secondary and tertiary and just perform the additional rotations around those points, because then the rotations are done with untransformed axes of rotation.

As I say, I'm pretty sure this should be much easier than I'm making it, I've just got myself stuck in a loop of dumb wrongness. Help!

(If the question is unclear, I'll happily answer questions to clarify - I was trying to keep it as succinct as I could.)

Edit: actually, I think it probably does need to be clearer. Here is a picture!

[picture]

If I had an 'arm' mesh, affixed to bone b2 and skinned also onto b3 and b4, and let's say I can get the length and rotation matrices from a member function, the transformation to get the arm into place, assuming the rear end is the root of the skeleton, would be:
b1.length * b2.rotation
The transformation for a separate forearm object, on b3, if it were unskinned, would be
b1.length * b2.rotation * b2.length * b3.rotation
The transformation for the b3 'skin' on the b2 'arm', however, must omit that b2.length because we don't want to stretch the arm mesh out, we only want to bend it. But it still wants a rotation around point 2, so it's
b1.length * b2.rotation * (b2.length.inverse * b3.rotation * b2.length)
And the transformation for the b4 'skin' would be...
b1.length * b2.rotation * (b2.length.inverse * b3.rotation * b2.length) * (how the gently caress do I get the correctly transformed rotation around point 3?)

Edit2: corrected for doing the length operations in the right direction in the edit, it's still wrong in the original question but pretend it isn't.

Edit3: Found the answer myself eventually, guess this thread's pretty useless.

roomforthetuna fucked around with this message at 05:53 on Mar 21, 2010
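
For reference, the standard way to sidestep this chaining problem is to give each bone a single skinning matrix: its animated transform multiplied by the inverse of its bind-pose transform. A sketch with invented names (not necessarily the answer found above):
code:
// Each bone stores one matrix that takes bind-pose model space to
// animated model space, so influences never need un-chaining.
struct SkinBone {
    R3DMATRIX FullTransform;   // animated model-space transform, as above
    R3DMATRIX InverseBindPose; // inverse of the bind-pose FullTransform,
                               // computed once when the skeleton is built
};

void SkinMatrix(const SkinBone& b, R3DMATRIX* out)
{
    // With D3DX's row-vector convention:
    //   v_animated = v_bindpose * InverseBindPose * FullTransform
    D3DXMatrixMultiply(out, &b.InverseBindPose, &b.FullTransform);
}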

numeric_atrophy
Oct 18, 2003

Download Geared - FREE
(17 Million Downloads)

Windows 8
iOS
Android
WinPhone 7
This is probably a dumb question, but I can't figure out how to do it even after searching Google.

I am working on improving my iPhone game's performance, and I wanted to change it from 32-bit to 16-bit textures. The app uses PNG images with alpha transparency. Here is the current code:

EDIT: Had to repack the pixel data apparently.

numeric_atrophy fucked around with this message at 05:44 on Mar 21, 2010
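
For reference, the repack usually looks something like this (a sketch converting RGBA8888 to RGBA4444 for upload with GL_UNSIGNED_SHORT_4_4_4_4):
code:
/* Convert 32-bit RGBA pixels to 16-bit RGBA4444 before upload. */
void RepackRGBA4444(const unsigned char* src, unsigned short* dst, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i, src += 4) {
        dst[i] = (unsigned short)(((src[0] >> 4) << 12) |  /* R */
                                  ((src[1] >> 4) <<  8) |  /* G */
                                  ((src[2] >> 4) <<  4) |  /* B */
                                   (src[3] >> 4));         /* A */
    }
}

/* Then upload with the matching format/type:
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, dst); */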

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

haveblue posted:

You can use gluUnProject to find the world-space vector of the mouse click (the function gives you the location on the near clip plane; just subtract the camera position from that). From there, you can use the camera location and that vector to find the intersection with the tile plane. The obvious way to do that would be to find the distance of the camera from the plane (plug the camera location into the plane equation) and then scale the click vector by (that distance / click vector magnitude) / (cosine of the angle between the click vector and the plane normal) and add it to the camera position. There's probably a more efficient formula; that's just off the top of my head.

I have a similar problem to Martman, but I'm using XNA. Does anyone know how you find the world space vector for the mouse click? I'm pretty sure I can do the rest from there.

Edit: Google has been most unhelpful. I probably don't know what I need to search for.

Edit 2: Never mind, here is the solution: http://msdn.microsoft.com/en-us/library/bb203905.aspx

HappyHippo fucked around with this message at 20:21 on Mar 22, 2010

Whilst farting I
Apr 25, 2006

This might be better asked here than in the C++ thread: I'm trying to get the square to move to the left and rotate around its center when the d key is pressed twice. It's really simple; I'm just too new at OpenGL to know exactly what I'm loving up. For more context: http://forums.somethingawful.com/showthread.php?threadid=2773485&userid=95987&perpage=40&pagenumber=2#post374137410

Here's the full file:

http://pastebin.ca/1848509

I'm trying to, upon hitting d a second time, get the square to launch to the left while rotating about its center. I can only get it to move to the left by manually subtracting from the x coordinates, which isn't translation and won't rotate anything.

Removing rectX1 and rectX2 and replacing them with actual numbers does not change the outcome.

guenter
Dec 24, 2003
All I want out of life is to be a monkey of moderate intelligence who wears a suit. That's why I've decided to transfer to business school!
I think you're misunderstanding translation in this context. OpenGL doesn't know where your rectangle was last frame, so saying something like "move 0.5 pixels to the right" or "rotate -.75 units on the x axis" won't get you the behavior you expect. You need to tell OpenGL what the square's final or "world" position should be each frame.

You should probably also read up on exactly what those glPushMatrix and glLoadIdentity calls are actually doing, because understanding that would probably clear up your confusion on the first part.

At any rate, you'll want to keep your square's position and rotation stored somewhere, a global or whatever. Then modify your rectMove function to something like this:

code:
void rectMove()
{
    glPushMatrix();   // save the current matrix
    glLoadIdentity(); // reset the current matrix to the identity

    glTranslatef(g_rect_x, g_rect_y, -6.0f);

    glRotatef(g_rect_rot, 0.0f, 0.0f, 1.0f);

    glBegin(GL_QUADS);
        glColor3f(0.5, 1.0, 0.5);
        glVertex3f(rectX1, 110.0, 0.0);
        glVertex3f(rectX2, 110.0, 0.0);
        glVertex3f(rectX2, 245.0, 0.0);
        glVertex3f(rectX1, 245.0, 0.0);
    glEnd();

    glPopMatrix(); // restore the saved matrix
}
So now, if you want your rectangle to appear to move right, you just increment g_rect_x, and if you want it to rotate, you increment g_rect_rot. In your case, you'll want to do this each frame depending on currentBarState.
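
i.e. somewhere in the per-frame update, something like this (the state constant and speeds are made up; dt is assumed to be the frame time in seconds):
code:
/* Hypothetical per-frame update using the globals above: */
if (currentBarState == BAR_LAUNCHED) { /* assumed: "d pressed twice" state */
    g_rect_x   -= 200.0f * dt;         /* slide left at 200 units/sec */
    g_rect_rot += 360.0f * dt;         /* one full revolution per second */
}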

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
Here's a question that's not so much "fix my poo poo" - I gather that DirectX has two ways of doing skinning; one way uses up to four 'system' transformations, and each vertex has weightings for each matrix (with the last one not included because it's 1.0 minus all the others). The other way uses up to 256 transformations, and each vertex has weightings for up to four of them, and four byte-sized indices to indicate which matrices to assign those weights to.

Questions about this that I haven't been able to dig up answers to:

1. I'm pretty sure the former method is well supported, but how well supported is the latter?

2. Is there a significant difference in speed between the two methods?

3. If you're rendering a complex object, is it better to have one big mesh with four weights per vertex (most vertices just being 100% on one bone, so the weighting is overkill and unused for many of the vertices), or to render the object as a number of meshes so as to reduce the weighting to two or three per vertex (so there's a separate mesh around each joint), or even to render the object as even more meshes so that blocks which are bound to only a single bone are rendered unweighted, and only the triangles which need their vertices skinned are rendered with weights?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

roomforthetuna posted:

1. I'm pretty sure the former method is well supported, but how well supported is the latter?
How is the former even workable with only 4 matrices for the entire mesh?

The latter is much more common as far as I know, precisely because you can cram more bones in per draw call.

quote:

3. If you're rendering a complex object, is it better to have one big mesh with four weights per vertex (most vertices just being 100% on one bone, so the weighting is overkill and unused for many of the vertices), or to render the object as a number of meshes so as to reduce the weighting to two or three per vertex (so there's a separate mesh around each joint), or even to render the object as even more meshes so that blocks which are bound to only a single bone are rendered unweighted, and only the triangles which need their vertices skinned are rendered with weights?
D3D9's biggest bottleneck by far is draw calls, and vertex processing is becoming progressively more negligible. Above all else, do as much per draw call as you can.

If vertex processing speed is an issue, then consider weight pre-blending: i.e. instead of sending the bone matrices as uniforms, send each unique matrix/weight combination, and index that per-vertex instead. SSE can be used to speed this up a bit.
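
A sketch of the pre-blending idea, for what it's worth (names and types are stand-ins; Matrix4 is assumed to support scaling by a float and +=): each unique bone/weight combination gets one pre-multiplied matrix, and each vertex stores just an index into that table.
code:
#include <vector>

struct WeightCombo {
    int   bone[4];
    float weight[4]; // unused slots carry weight 0
};

// Build one blended matrix per unique combination, once per frame.
void PreBlend(const std::vector<WeightCombo>& combos,
              const std::vector<Matrix4>& boneMatrices,
              std::vector<Matrix4>& blended)
{
    blended.resize(combos.size());
    for (size_t i = 0; i < combos.size(); ++i) {
        Matrix4 m = boneMatrices[combos[i].bone[0]] * combos[i].weight[0];
        for (int j = 1; j < 4; ++j)
            m += boneMatrices[combos[i].bone[j]] * combos[i].weight[j];
        blended[i] = m; // each vertex transforms by blended[its combo index]
    }
}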

krysmopompas
Jan 17, 2004
hi

OneEightHundred posted:

D3D9's biggest bottleneck by far is draw calls, and vertex processing is becoming progressively more negligible. Above all else, do as much per draw call as you can.
As a side-note, if you want to get fancy, http://www.emergent.net/GameFest2008 is an interesting way of bypassing the draw call limitations.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

krysmopompas posted:

As a side-note, if you want to get fancy, http://www.emergent.net/GameFest2008 is an interesting way of bypassing the draw call limitations.
I really don't get this; there isn't really any reason that throwing calls into a work queue would do anything about the overhead of making those calls.

Incidentally this sort of thing isn't even new, Quake 3 did it.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

OneEightHundred posted:

How is the former even workable with only 4 matrices for the entire mesh?

The latter is much more common as far as I know, precisely because you can cram more bones in per draw call.

D3D9's biggest bottleneck by far is draw calls, and vertex processing is becoming progressively more negligible. Above all else, do as much per draw call as you can.

If vertex processing speed is an issue, then consider weight pre-blending: i.e. instead of sending the bone matrices as uniforms, send each unique matrix/weight combination, and index that per-vertex instead. SSE can be used to speed this up a bit.
Thanks, that answers everything pretty well. As far as I can tell, the 4-matrices method works by splitting your mesh as I half-suggested in question 3 - if you have a separate mesh for 'arm' then 4 skinning transforms is plenty. But if it's going to be quicker the other way, that's cool and will make things much easier - shame I spent ages making my data files parse-compatible with the 4-matrices way before I found out the other, much easier way existed! Thanks, D3D documentation!

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
You should make your data files store indexed weight blends, because you can easily convert those to any of the several ways of processing them. I can name at least three major ways of processing skeletal deformation, and indexed weight blends are the only format that can be trivially loaded as all three.

i.e. see AX3SkeletalDeform in this:
http://svn.icculus.org/*checkout*/teu/trunk/tertius/docs/tmrf.html?revision=227
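
i.e. a layout along these lines (a sketch; field names invented):
code:
/* Indexed weight blends: vertices reference a shared influence table. */
struct Influence  { int bone; float weight; };

struct WeightBlend {            /* one entry per unique combination */
    int       numInfluences;
    Influence influences[4];
};

struct SkinnedVert {
    float pos[3];
    float uv[2];
    int   blendIndex;           /* index into the WeightBlend table */
};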
