roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
You just want to write a little function that does the conversion you need. You can get the width and height of the image, so the function would simply be something like (probably really as a member function to something):
code:
struct {
 float x1,y1,x2,y2;
} Sprite[MAXSPRITE];

float pixwidth=1.0f/IMAGEWIDTH;    //really these would be a member variable 
float pixheight=1.0f/IMAGEHEIGHT;  //and set when you load the file

void LoadSprite(int spriteindex,int xpix,int ypix,int width=32,int height=32) {
  Sprite[spriteindex].x1=xpix*pixwidth;
  Sprite[spriteindex].y1=ypix*pixheight;
  Sprite[spriteindex].x2=(xpix+width)*pixwidth;
  Sprite[spriteindex].y2=(ypix+height)*pixheight;
}
Then, like your old code, you'd just do LoadSprite(RED_MAN,96,0); or LoadSprite(RED_PILL,0,0,64,0);
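Editor's note: for illustration, a minimal sketch of the drawing side using those precomputed coordinates (immediate-mode OpenGL here; DrawSprite, x, y, w and h are made-up names, not from the post):
code:
void DrawSprite(int spriteindex, float x, float y, float w, float h) {
  // Assumes the sprite sheet texture is already bound.
  glBegin(GL_QUADS);
  glTexCoord2f(Sprite[spriteindex].x1, Sprite[spriteindex].y1); glVertex2f(x,   y);
  glTexCoord2f(Sprite[spriteindex].x2, Sprite[spriteindex].y1); glVertex2f(x+w, y);
  glTexCoord2f(Sprite[spriteindex].x2, Sprite[spriteindex].y2); glVertex2f(x+w, y+h);
  glTexCoord2f(Sprite[spriteindex].x1, Sprite[spriteindex].y2); glVertex2f(x,   y+h);
  glEnd();
}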

Anything a library did for you before, you'll probably have to implement yourself now that you're not using one.

Edit: Or apparently you can do it by changing the texture scale in OpenGL? I'm a DirectX user.
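Editor's note: the texture-scale approach the edit mentions would look roughly like this in fixed-function OpenGL (a sketch only, reusing the IMAGEWIDTH/IMAGEHEIGHT constants from above):
code:
// Scale the texture matrix once so texture coordinates can be given in pixels.
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(1.0f/IMAGEWIDTH, 1.0f/IMAGEHEIGHT, 1.0f);
glMatrixMode(GL_MODELVIEW);
// After this, glTexCoord2f(96.0f, 0.0f) etc. refer to raw pixel positions.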


Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

roomforthetuna posted:

You just want to write a little function that does the conversion you need. You can get the width and height of the image, so the function would simply be something like (probably really as a member function to something):

Is it not possible to make each texture a subset of the main one?

UraniumAnchor
May 21, 2006

Not a walrus.
Possible? Yes. Probably not desirable, though. If you make sure the base image is a power of 2 you're pretty much guaranteed to be able to get pixel accuracy even with floating point. And then you don't have to set up a hundred superfluous textures.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

UraniumAnchor posted:

If you make sure the base image is a power of 2 you're pretty much guaranteed to be able to get pixel accuracy even with floating point.
Not even "pretty much", you ARE guaranteed, power-of-two textures are used precisely because it lets FP coordinates be converted to texture coordinates with nothing but bit shifts at no loss of precision.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

UraniumAnchor posted:

Possible? Yes. Probably not desirable, though. If you make sure the base image is a power of 2 you're pretty much guaranteed to be able to get pixel accuracy even with floating point. And then you don't have to set up a hundred superfluous textures.

The sprites are 22x22 so I figured I would use 32x32 textures and just draw from there.

I guess I will just continue by writing wrapper functions around what I was doing to draw the sprites right now. I have to adjust my thinking since I haven't done any of this stuff since the DOS days.

Spite
Jul 27, 2001

Small chance of that...
Why not just use a rectangle texture?
http://www.opengl.org/registry/specs/ARB/texture_rectangle.txt
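Editor's note: a minimal sketch of what that looks like, assuming the extension is available (spriteSheet, x and y are placeholder names). Rectangle textures take unnormalized pixel coordinates and don't require power-of-two sizes, but they also don't support mipmaps or GL_REPEAT:
code:
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, spriteSheet);
glBegin(GL_QUADS);
// Texture coordinates are in pixels, not 0..1.
glTexCoord2f( 0.0f,  0.0f); glVertex2f(x,    y);
glTexCoord2f(22.0f,  0.0f); glVertex2f(x+22, y);
glTexCoord2f(22.0f, 22.0f); glVertex2f(x+22, y+22);
glTexCoord2f( 0.0f, 22.0f); glVertex2f(x,    y+22);
glEnd();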

UraniumAnchor
May 21, 2006

Not a walrus.

OneEightHundred posted:

Not even "pretty much", you ARE guaranteed, power-of-two textures are used precisely because it lets FP coordinates be converted to texture coordinates with nothing but bit shifts at no loss of precision.

Well, under 99% of use cases, yes, but I'm sure somewhere there's an FP implementation that uses something besides power of 2.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

UraniumAnchor posted:

Well, under 99% of use cases, yes, but I'm sure somewhere there's an FP implementation that uses something besides power of 2.
I suppose someone at some point probably created such a floating point format for fun, but that doesn't mean you'll ever encounter it on a platform that supports OpenGL. I doubt there are even any platforms that support a vaguely recent version of OpenGL, support floating point arithmetic, and use something other than IEEE floats.

PenisOfPathos
May 10, 2007
Damn good device.
While writing a GLSL shader I've run into something I can't quite figure out:

I have a floating value (float, vec3 etc.) and want to attenuate its values.

val *= 0.5 yields all zeroes, as does everything else as long as it's not 1.0.

It works when I divide instead, but even so, the value gets truncated such that

val /= 2.5 == val /= 2.0

Note that I make sure I'm using floats. I believe the problem is the graphics card on my laptop; it's an Intel X3100 with a GM965 chip.

Any thoughts?

PenisOfPathos
May 10, 2007
Damn good device.

PenisOfPathos posted:

While writing a GLSL shader I've run into something I can't quite figure out:

I have a floating value (float, vec3 etc.) and want to attenuate its values.

val *= 0.5 yields all zeroes, as does everything else as long as it's not 1.0.

It works when I divide instead, but even so, the value gets truncated such that

val /= 2.5 == val /= 2.0

Note that I make sure I'm using floats. I believe the problem is the graphics card on my laptop; it's an Intel X3100 with a GM965 chip.

Any thoughts?
Okay, it seems to be a problem with the GLSL compiler on Linux. Apparently the one that's in current versions of Mesa is a hack, which they've completely replaced in the development/testing versions.

Eponym
Dec 31, 2007
I am trying to draw anti-aliased lines, and this is how to do it, or so I've read:

code:
     // Enable line antialiasing
     glEnable(GL_LINE_SMOOTH);
     glEnable(GL_BLEND);
     glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
     glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
However, I am trying to draw antialiased lines into a framebuffer object. Without the above, my lines draw fine. With the above, nothing draws.

I'm not sure what other details to provide. I am using textures and shaders.

Spite
Jul 27, 2001

Small chance of that...

Eponym posted:

I am trying to draw anti-aliased lines, and this is how to do it, or so I've read:

code:
     // Enable line antialiasing
     glEnable(GL_LINE_SMOOTH);
     glEnable(GL_BLEND);
     glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
     glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
However, I am trying to draw antialiased lines into a framebuffer object. Without the above, my lines draw fine. With the above, nothing draws.

I'm not sure what other details to provide. I am using textures and shaders.

What OS, what GPU? AA lines are an odd duck.
Does it work if you draw to GL_BACK?

Eponym
Dec 31, 2007

Spite posted:

What OS, what GPU? AA lines are an odd duck.
Does it work if you draw to GL_BACK?

Mac OS X 10.6.4, using an Nvidia 9400m.

Nothing changed when I rendered using GL_BACK. However, I did manage to fix things, although I don't know why it worked.

code:
     // This was here before I decided to add antialiased lines
     glClearColor (125.0f/255.0f, 219.0f/255.0f, 127.0f/255.0f, 0.0f);
     glShadeModel(GL_SMOOTH);
     glEnable(GL_MULTISAMPLE);

     // Enable line antialiasing
     glEnable(GL_LINE_SMOOTH);

     // Apparently this is not needed to enable line antialiasing
     //glEnable(GL_BLEND);
     //glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
     //glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
When I had the bottom 3 lines uncommented, nothing rendered. In fact, it didn't even look like the OpenGL commands were running correctly. When my program first runs, the viewport should be filled with the clear color, but instead garbage is displayed. However, gluErrorString wasn't printing out any errors. Even when I drew to GL_BACK, the same garbage displayed.

On a whim, I commented the bottom 3 lines, and my lines render, antialiased.

Spite
Jul 27, 2001

Small chance of that...
Well, for one you aren't actually clearing the buffer.
You have to call glClear(GL_COLOR_BUFFER_BIT)
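Editor's note: e.g. something like this at the top of each frame, reusing the clear color from the code above (glClearColor only sets the color; glClear does the actual clearing):
code:
glClearColor(125.0f/255.0f, 219.0f/255.0f, 127.0f/255.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);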

And I think there are a bunch of AA line bugs, I'd have to check. Turning off blending fixes it? Have you installed the GFX update?

zynga dot com
Nov 11, 2001

wtf jill im not a bear!!!

A dossier and a state of melted brains: The Jess campaign has it all.
I never took linear algebra and so, while I'm teaching myself the relevant parts now, I'm having trouble with a ray-triangle intersection test. I'm also not that great at OpenGL yet, but it's coming along. The task is simple enough: given a click on the screen, and a ray using the x- and y-coordinates of the click cast from the near plane to the far plane, find the closest triangle intersected and the point of intersection.

Here's how I'm going about it currently:
I use gluUnProject with mouse_x and mouse_y at the z-axis points -1.f and 1.f to get the model coordinates of the click at each plane, which creates my ray's origin and destination. Then I use the following code to calculate the intersection point.

code:
        // editor's note: tr is a triangle object containing 
        // (x, y, z) coords as well as its normal vector

	// First, calculate the distance of the plane from the origin.
	
	float distance = distance_3d(tr->normal_vector->x, tr->normal_vector->y, 
			 	     tr->normal_vector->z, 0.f, 0.f, 0.f);

	// Next, calculate the value of t for the line's intersection
        // with the triangle's plane.

	float t1 = 	(ray_origin[0] * tr->normal_vector->x) + 
			(ray_origin[1] * tr->normal_vector->y) +
			(ray_origin[2] * tr->normal_vector->z) +
			distance;

	float t2 =	(tr->normal_vector->x * (ray_destination[0] - ray_origin[0])) + 
			(tr->normal_vector->y * (ray_destination[1] - ray_origin[1])) + 
			(tr->normal_vector->z * (ray_destination[2] - ray_origin[2]));

	float t = -(t1 / t2);

	// Using the value of t above, calculate the coordinates of the point of
	// intersection with the plane.

	intersection_point[0] = ray_origin[0] + ((ray_destination[0] - ray_origin[0]) * t);
	intersection_point[1] = ray_origin[1] + ((ray_destination[1] - ray_origin[1]) * t);
	intersection_point[2] = ray_origin[2] + ((ray_destination[2] - ray_origin[2]) * t);
The math sort of works, and my intersection calculations aren't wildly off, but the points end up being too high, e.g. they're maybe 30 pixels above my actual click location. Accuracy improves as I zoom in, but something isn't quite right. My guess is that I'm using gluUnProject improperly, but I'm not sure. Does anyone see anything wildly off?
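Editor's note: for reference, the textbook segment/plane intersection the code above is aiming for looks like this (variable names here are made up; N is the triangle normal, V any vertex of the triangle, O and D the unprojected origin and destination):
code:
float d  = -(N[0]*V[0] + N[1]*V[1] + N[2]*V[2]);   // plane: dot(N, P) + d = 0
float t1 =   N[0]*O[0] + N[1]*O[1] + N[2]*O[2] + d;
float t2 =   N[0]*(D[0]-O[0]) + N[1]*(D[1]-O[1]) + N[2]*(D[2]-O[2]);
// No hit if t2 == 0 (segment parallel to plane) or t falls outside [0,1].
float t  = -(t1 / t2);
// intersection = O + (D - O) * t, then test whether that point actually lies
// inside the triangle (e.g. with barycentric coordinates).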

Also, a second question: if I use OpenGL to pick the triangles nearest the click and sort for the closest triangle to the click by z-value, could I guarantee that the click passes through this triangle? That would save some work if I could know that for sure.

zynga dot com fucked around with this message at 22:44 on Sep 11, 2010

PnP Bios
Oct 24, 2005
optional; no images are allowed, only text

Flashdance posted:

I never took linear algebra and so, while I'm teaching myself the relevant parts now, I'm having trouble with a ray-triangle intersection test. I'm also not that great at OpenGL yet, but it's coming along. The task is simple enough: given a click on the screen, and a ray using the x- and y-coordinates of the click cast from the near plane to the far plane, find the closest triangle intersected and the point of intersection.

Here's how I'm going about it currently:
I use gluUnProject with mouse_x and mouse_y at the z-axis points -1.f and 1.f to get the model coordinates of the click at each plane, which creates my ray's origin and destination. Then I use the following code to calculate the intersection point.

code:
        // editor's note: tr is a triangle object containing 
        // (x, y, z) coords as well as its normal vector

	// First, calculate the distance of the plane from the origin.
	
	float distance = distance_3d(tr->normal_vector->x, tr->normal_vector->y, 
			 	     tr->normal_vector->z, 0.f, 0.f, 0.f);

	// Next, calculate the value of t for the line's intersection
        // with the triangle's plane.

	float t1 = 	(ray_origin[0] * tr->normal_vector->x) + 
			(ray_origin[1] * tr->normal_vector->y) +
			(ray_origin[2] * tr->normal_vector->z) +
			distance;

	float t2 =	(tr->normal_vector->x * (ray_destination[0] - ray_origin[0])) + 
			(tr->normal_vector->y * (ray_destination[1] - ray_origin[1])) + 
			(tr->normal_vector->z * (ray_destination[2] - ray_origin[2]));

	float t = -(t1 / t2);

	// Using the value of t above, calculate the coordinates of the point of
	// intersection with the plane.

	intersection_point[0] = ray_origin[0] + ((ray_destination[0] - ray_origin[0]) * t);
	intersection_point[1] = ray_origin[1] + ((ray_destination[1] - ray_origin[1]) * t);
	intersection_point[2] = ray_origin[2] + ((ray_destination[2] - ray_origin[2]) * t);
The math sort of works, and my intersection calculations aren't wildly off, but the points end up being too high, e.g. they're maybe 30 pixels above my actual click location. Accuracy improves as I zoom in, but something isn't quite right. My guess is that I'm using gluUnProject improperly, but I'm not sure. Does anyone see anything wildly off?

Also, a second question: if I use OpenGL to pick the triangles nearest the click and sort for the closest triangle to the click by z-value, could I guarantee that the click passes through this triangle? That would save some work if I could know that for sure.

Why not just use Picking if you are trying to find what polygon your mouse is hovering over? Basically, it's a Pre-Pass. Draw just the shapes you are going to use, no lighting, no texturing, just a different color for each object, find the color of the pixel your mouse is hovering over, then redraw the scene properly.
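Editor's note: a rough sketch of the readback step in that approach (DrawSceneWithPickingColors and viewport_height are placeholders, not from the thread):
code:
// Pass 1: render each pickable object in a unique flat color (no lighting,
// no texturing), then read the pixel under the mouse before swapping buffers.
unsigned char pixel[3];
DrawSceneWithPickingColors();
glReadPixels(mouse_x, viewport_height - mouse_y - 1,  // GL's origin is bottom-left
             1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
int pickedId = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
// Pass 2: render the scene normally.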

zynga dot com
Nov 11, 2001

wtf jill im not a bear!!!

A dossier and a state of melted brains: The Jess campaign has it all.

PnP Bios posted:

Why not just use Picking if you are trying to find what polygon your mouse is hovering over? Basically, it's a Pre-Pass. Draw just the shapes you are going to use, no lighting, no texturing, just a different color for each object, find the color of the pixel your mouse is hovering over, then redraw the scene properly.

Well, that's why I asked question #2. The code actually does use picking to get the nearest triangle, and the previous code approximated the click by just placing a dot in the middle of that triangle. The problem with that method is that 1) it isn't accurate and 2) it doesn't allow multiple dots per triangle. So regardless of method I still need to calculate the point of intersection, but it would save work if I already knew the triangle intersected (e.g. the nearest picked one).

Spite
Jul 27, 2001

Small chance of that...
You need to unproject from 0 (near) and 1 (far) if you're going to go that way, from what I recall. I highly recommend doing the math yourself though.

Think of it as the pick origin being at 0,0,0 and the pick point being on the near plane, then see what that ray intersects with. Since the screen itself covers the entire near plane you can convert from those coords to eye coords without too much trouble.

http://www.opengl.org/resources/faq/technical/selection.htm

DON'T USE GL_SELECT. Unique colors is ok, but it will force a readback of a buffer which probably isn't desirable.

zynga dot com
Nov 11, 2001

wtf jill im not a bear!!!

A dossier and a state of melted brains: The Jess campaign has it all.
After looking at this for a bit more, I think the formulas are right or very close to it, but my data is wrong. For example, for a random triangle in the model, orthographic projection,

code:
gluUnProject(mouse_x, mouse_y, 0.f, t_modelview, t_projection, t_viewport, &origin[0], &origin[1], &origin[2]);
gives values of (-0.000384, 0.000148, -0.100000). I know the picking code works, so I used the nearest triangle to the mouse click to compare values to. The unprojected coordinates aren't even close to the coordinates for a point in the nearest picked triangle (0.032016, 0.011494, 0.764801).

I notice two problems right away - the x and y values are obviously inaccurate, but the other problem is that unprojecting using the near plane returns a negative (if just barely) z-value, and each triangle vertex has a positive z-value. I have to be doing this wrong, because shouldn't an unprojection of the near plane return a larger positive z-value than any triangle vertex in the model?

Also, I noticed that if I unproject using the above code for the far plane as well (1.f), I get different x- and y-values. If I'm using the same mouse_x and mouse_y for each unprojection and only changing the z-value for each call, shouldn't this in effect give me the start and end points for a ray cast from near to far plane using constant x and y coordinates?

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
I've got a problem with transparency in XNA.

The problem is that when a transparent pixel is drawn, the background shows through OK, but the pixel appears to pick up the z-value of its location in the triangle, which means that later objects drawn behind that point don't show through.

I didn't write the shader myself, I used one from one of the tutorials on https://www.riemers.net (I forget which one atm). How do I fix this? Is it a problem with the shader?

Edit: Looking online I see this isn't possible to solve with general alpha blending without sorting the objects back to front (which I would like to avoid). However I only want a masking effect - fully transparent or fully opaque. Is there a way to do this? It looks like the stencil buffer might work, but I have no experience with them and I can't find any good examples of what I want to do.

HappyHippo fucked around with this message at 02:54 on Sep 16, 2010

Screeb
Dec 28, 2004

status: jiggled

HappyHippo posted:

I've got a problem with transparency in XNA.

The problem is that when a transparent pixel is drawn, the background shows through OK, but the pixel appears to pick up the z-value of its location in the triangle, which means that later objects drawn behind that point don't show through.

I didn't write the shader myself, I used one from one of the tutorials on https://www.riemers.net (I forget which one atm). How do I fix this? Is it a problem with the shader?

Edit: Looking online I see this isn't possible to solve with general alpha blending without sorting the objects back to front (which I would like to avoid). However I only want a masking effect - fully transparent or fully opaque. Is there a way to do this? It looks like the stencil buffer might work, but I have no experience with them and I can't find any good examples of what I want to do.

A handy way to do this in GLSL (OpenGL) is discard in fragment shaders (i.e. if(textureSample.a < 0.5) discard; to throw away the transparent fragments). Not sure what the HLSL equivalent is though. Might be the same even.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I'd never heard of the discard statement for HLSL, so I looked it up and it does exist. It sounds like it's the equivalent of what Screeb described for GLSL.

From The Complete Effect and HLSL Guide:
discard: This keyword is used within a fragment shader to cancel rendering of the current pixel.
example: if(alphalevel < alpha_test) discard;

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
Thanks a lot guys. That looks to be what I need.

Spite
Jul 27, 2001

Small chance of that...
Keep in mind that branching, and especially stuff like discard/texkill tends to run a lot better if you've got a lot of localized pixels that will take the same branch.

On projection and picking:

I wouldn't recommend using Unproject.
Think about it this way:
Your screen maps directly to the near plane. Once you've transformed everything into eye space, you know that your eye is at 0,0,0 and you also know where your near plane is at. (you've specified your frustum's top,bottom, left and right so you know those values). This means there's a simple mapping from a spot on your screen to a point on your near plane. You can then cast a ray from the origin through that point on the near plane to generate your pick ray. You'll have to transform it out of eye space (or everything into eye space), but that's the best way to do picking.
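Editor's note: a sketch of that mapping, assuming a perspective frustum set up with glFrustum(left, right, bottom, top, nearZ, farZ) and a viewport of viewport_w by viewport_h (all names here are placeholders):
code:
// Mouse position -> point on the near plane, in eye space.
float sx = mouse_x / (float)viewport_w;           // 0..1 left to right
float sy = 1.0f - mouse_y / (float)viewport_h;    // flip: GL's +y is up
float px = left   + sx * (right - left);
float py = bottom + sy * (top   - bottom);

// In eye space the eye sits at the origin and looks down -z, so the pick
// ray goes from (0,0,0) through the near-plane point.
float rayOrigin[3] = { 0.0f, 0.0f, 0.0f };
float rayDir[3]    = { px, py, -nearZ };
// Normalize rayDir, then transform origin and direction by the inverse view
// matrix to intersect against geometry stored in world space.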

Kiwillian
Mar 13, 2004

Poor Sheepy :(
I have a weird problem with DirectX, well, SlimDX with C# if it matters.

I have a little test program that renders a cube of cubes very lazily. The base cube is the .X cube model from the DX SDK more or less

My problem is that everything on the left of the viewport looks like poo poo. It is almost as if the resolution of the screen scales down somehow. Even putting my cube thingy in the middle of the screen doesn't add artifacts on the right hand side.

On top of this, moving in the Z (in and out) direction works OK for the most part except for the back face, which seems to also have some sort of lower resolution in the Z than anything else, causing the back face to jump around.

Is there something simple I am missing somehow? I have not had this problem in other frameworks and libraries but I have not used SlimDX for 3D in the past.

Here's a piccy. edit: attachments don't work any more it seems, one sec...

edit2: here


[attached image: 791x257 screenshot]

Kiwillian fucked around with this message at 10:24 on Oct 1, 2010

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.
Do you have a weird depth buffer format which causes a lot of imprecision on the left-hand side of your framebuffer?

What happens if you reverse the order in which you draw the cubes (going from right to left, for instance)?

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
What values are you using for your near plane and far plane distances? If you have a really huge range you might be losing depth resolution which could cause the issue with the back squares jumping around. It might also cause z-fighting between adjacent edges and give you the jaggy lines, but I can't explain why that would matter when moving from left to right.

You could just try setting the z-planes to some smallish range like near=1, far=100 and see if it changes anything.

Kiwillian
Mar 13, 2004

Poor Sheepy :(
Looks like it is z-fighting. Changing the draw order alters the effect and spacing them out removes it.

NotShadowStar
Sep 20, 2000
Okay so this is pretty elementary for all you people doing super texture animation anal mapping with Flargn' Bfar's Optimized gently caress You Algorithm, but...

I've never done anything with any graphics before, and I've given myself a week to see how far I get with a simple Wolf3D style raycaster. I'm surprised that I got all the elementary 2D map stuff down in about a day, but when doing the transform to a 3d perspective I'm having a bit of a rough time (it's been a few years since I've seen trig/linear algebra and I got rid of my books).

I'm in the process of casting rays onto the grid. I'm using an xy coordinate vector and a heading integer in degrees. Using this process should give me any grid intersection points with that information. It works, but I'm not sure how to handle the boundaries of tan(a) when finding horizontal intersection points. The formula they use is x_step = blocksize / tan(a), but that obviously breaks when tracing a ray at tangent boundaries, and strangely nothing I can find even considers that; it's pretty elementary. In the example Java code they fudge it by adding 0.001 degrees when making pre-computed trig tables and flooring. Is that normal? Seems pretty gross.

I did find another discussion that doesn't use angles at all but constructs several vectors to build a projection plane but I'd have to rework large parts of what I'm doing to use a set of vectors instead of just an angle, so I'd rather not. I'd also have to find angles anyway to do 2d rotation and translation.

NotShadowStar fucked around with this message at 23:00 on Oct 5, 2010

Fecotourist
Nov 1, 2008
It's probably worth understanding how to do the job with vector components rather than angles. There is no end to the gross stuff you'll run into. All too often a nasty-looking expression involving cascaded forward and inverse trig functions simplifies to a couple of multiplies and adds. And you'll never again have to wrap an angle back into [0,360).

Allow yourself an inverse trig function if you need to print a value for a human to look at, otherwise use unit vectors to encode directions. For doing rotations, the rotation matrix is trivial to construct from the components of a direction vector.
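Editor's note: a small sketch of what that looks like for the raycaster case (names like heading_radians and blocksize are assumptions, not from the post):
code:
// Store the heading as a unit vector; do the trig once, when the heading changes.
float dx = cosf(heading_radians);
float dy = sinf(heading_radians);

// Rotating a point by that heading is just the 2x2 matrix built from (dx, dy):
//   x' = x*dx - y*dy
//   y' = x*dy + y*dx

// Grid stepping needs no tan() at all.  To reach the next horizontal grid
// line the ray moves blocksize in y, so:
if (dy != 0.0f) {                           // dy == 0: ray never crosses them
    float x_step = dx * (blocksize / dy);   // same value as blocksize / tan(a)
}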

Fecotourist fucked around with this message at 06:29 on Oct 6, 2010

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

NotShadowStar posted:

Okay so this is pretty elementary for all you people doing super texture animation anal mapping with Flargn' Bfar's Optimized gently caress You Algorithm, but...

I've never done anything with any graphics before, and I've given myself a week to see how far I get with a simple Wolf3D style raycaster. I'm surprised that I got all the elementary 2D map stuff down in about a day, but when doing the transform to a 3d perspective I'm having a bit of a rough time (it's been a few years since I've seen trig/linear algebra and I got rid of my books).

I'm in the process of casting rays onto the grid. I'm using an xy coordinate vector and a heading integer in degrees. Using this process should give me any grid intersection points with that information. It works, but I'm not sure how to handle the boundaries of tan(a) when finding horizontal intersection points. The formula they use is x_step = blocksize / tan(a), but that obviously breaks when tracing a ray at tangent boundaries, and strangely nothing I can find even considers that; it's pretty elementary. In the example Java code they fudge it by adding 0.001 degrees when making pre-computed trig tables and flooring. Is that normal? Seems pretty gross.

I did find another discussion that doesn't use angles at all but constructs several vectors to build a projection plane but I'd have to rework large parts of what I'm doing to use a set of vectors instead of just an angle, so I'd rather not. I'd also have to find angles anyway to do 2d rotation and translation.

If you insist on using the method described (and I strongly urge you to consider Fecotourist's post), then simply realize that when the angle is zero, no horizontal intersection is possible. Think about it for a second and it should be obvious why. So in that that case you can simply not search for one.

NotShadowStar
Sep 20, 2000
Yeah, I understand that; it's the asymptotes of the tangent function that were screwing with me. Also, I've never really dealt with trig functions in a language outside of Maple, but I've discovered the tangent function gets hilariously wrong around 90 ± 10 degrees. So even if I just do if(sin(a) == 0) { //ignore horizontal intersections }, the math is completely off when tan(a) is around 80 degrees.

I guess I've been horribly confused, as I've seen a number of raycasting implementations, including Wolf3D, that just use a location vector and a degree heading to find the first intersection for DDA and it works fine. One implementation I found did a really weird thing, creating pre-built trig tables by translating degrees into some internal hexadecimal system, and it also did weird things with the tangent function, pre-computing it from 0..45 and then using 135..180 - 180 for tan(45..90) to avoid the asymptote imprecision weirdness.

I'll give vectors a shot sometime.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
I guess you could subtract 90 degrees from the heading, and then swap the coordinates around... (if after change in heading you get x'a, y'a then xa = -y'a and ya = x'a) I think that's probably what that implementation you found did.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I'm having a problem with textures scaling shittily. I start with a texture that looks like this



and as I zoom in and out it converts between looking OK and having poor sampling on the seams between tiles:



I assume this is caused by the texture sampler having problems detecting the one pixel of dark border on some of the edges. Are there any tricks to get around this problem?


e: VVV I will give that a shot, thanks!

PDP-1 fucked around with this message at 23:42 on Oct 16, 2010

haveblue
Aug 15, 2005



Toilet Rascal

PDP-1 posted:

I'm having a problem with textures scaling shittily. I start with a texture that looks like this



and as I zoom in and out it converts between looking OK and having poor sampling on the seams between tiles:



I assume this is caused by the texture sampler having problems detecting the one pixel of dark border on some of the edges. Are there any tricks to get around this problem?

Mipmapping, especially combined with trilinear filtering.
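Editor's note: in OpenGL terms that's roughly the following (a sketch; tileTexture is a placeholder, and if this is XNA the equivalent is generating mip levels for the Texture2D and using a linear-mip-linear sampler state):
code:
glBindTexture(GL_TEXTURE_2D, tileTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);   // after uploading the base level (GL 3.0 / FBO extension)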

El Wastrellio
Oct 5, 2010
Hi All. I have been looking into converting 3D Model files into objects optimised for collisions. I decided to start with .X files as I already have written a loader.

I need to determine the polygon facing, and have been trying to use normals, however, the normals don't always seem to match what I expect for the face. I've also checked the polygon order and that seems inconsistent. Is there a way I can determine polygon facing for this purpose, or is there a different method to allow me to convert .x meshes into collision objects.

By collision objects, I mean objects I'm planning to build that contain infinite planes to represent the faces along with the vertices themselves...

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Use a cross-product.

If you have a triangle A,B,C, then the cross-product of the edge directions, i.e. (B-A)x(C-B) is a vector in the direction of that triangle's normal.

Incidentally, the length of that vector will be the area of the triangle.

Null Pointer
May 20, 2004

Oh no!

OneEightHundred posted:

Incidentally, the length of that vector will be the area of the triangle.

It will be double the area.

El Wastrellio
Oct 5, 2010
Ah, but the order in which you cross product the vectors determines whether the normal you get faces out from the face or into the face. Hence the need for a consistently enforced CCW (anticlockwise) vertex winding. And the length of the cross product is double the face area, which is useful for ordering the polygons by area.
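Editor's note: a minimal sketch of the helper being described (my own function, not from the thread):
code:
// For triangle A,B,C listed counter-clockwise (as seen from outside the face),
// (B-A) x (C-A) points out of the face; swapping B and C flips the direction.
void TriangleNormal(const float A[3], const float B[3], const float C[3], float out[3]) {
    float e1[3] = { B[0]-A[0], B[1]-A[1], B[2]-A[2] };
    float e2[3] = { C[0]-A[0], C[1]-A[1], C[2]-A[2] };
    out[0] = e1[1]*e2[2] - e1[2]*e2[1];
    out[1] = e1[2]*e2[0] - e1[0]*e2[2];
    out[2] = e1[0]*e2[1] - e1[1]*e2[0];
    // The length of 'out' is twice the triangle's area; normalize it before
    // using it as the plane normal for a collision object.
}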


passionate dongs
May 23, 2001

Snitchin' is Bitchin'
Does anyone know what is going on here?

I'm trying to make some tubes. For some reason when I use shade with GL_FLAT the lighting looks right (although not smooth of course). If I turn on GL_SMOOTH, for some reason the shading isn't continuous. Where should I look to fix this? normals? geometry?

Right now each segment is an individual vertex array that is drawn in sequence.
