|
You just want to write a little function that does the conversion you need. You can get the width and height of the image, so the function would simply be something like (probably really as a member function to something):code:
Anything you did with a library before, you'll probably have to implement that functionality yourself if you're not using a library now. Edit: Or apparently you can do it by changing the texture scale in OpenGL? I'm a DirectX user.
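The original code block is lost, but a minimal sketch of the kind of conversion being described might look like this (the `Image` struct and function names are illustrative, not from the post):

```c
#include <assert.h>

/* Hypothetical sketch: convert pixel coordinates to normalized texture
 * coordinates (UVs) given the image dimensions. */
typedef struct {
    int width;
    int height;
} Image;

/* Map a pixel x coordinate to a 0..1 texture u coordinate. */
float pixel_to_u(const Image *img, int px) {
    return (float)px / (float)img->width;
}

/* Map a pixel y coordinate to a 0..1 texture v coordinate. */
float pixel_to_v(const Image *img, int py) {
    return (float)py / (float)img->height;
}
```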
|
# ? Aug 24, 2010 17:26 |
|

roomforthetuna posted:You just want to write a little function that does the conversion you need. You can get the width and height of the image, so the function would simply be something like (probably really as a member function to something): Is it not possible to make each texture a subset of the main one?
|
# ? Aug 24, 2010 17:53 |
|
Possible? Yes. Probably not desirable, though. If you make sure the base image is a power of 2 you're pretty much guaranteed to be able to get pixel accuracy even with floating point. And then you don't have to set up a hundred superfluous textures.
|
# ? Aug 24, 2010 18:51 |
|
UraniumAnchor posted:If you make sure the base image is a power of 2 you're pretty much guaranteed to be able to get pixel accuracy even with floating point.
|
# ? Aug 24, 2010 18:58 |
|
UraniumAnchor posted:Possible? Yes. Probably not desirable, though. If you make sure the base image is a power of 2 you're pretty much guaranteed to be able to get pixel accuracy even with floating point. And then you don't have to set up a hundred superfluous textures. The sprites are 22x22 so I figured I would use 32x32 textures and just draw from there. I guess I will just continue by writing wrapper functions around what I was doing to draw the sprites right now. I have to adjust my thinking since I haven't done any of this stuff since the DOS days.
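The "subset of the main texture" approach amounts to computing a UV sub-rectangle per sprite. A sketch, assuming a power-of-two atlas and illustrative names:

```c
#include <assert.h>

/* UV sub-rectangle for one tile of a sprite atlas. */
typedef struct { float u0, v0, u1, v1; } UVRect;

/* Compute the UVs for tile (col, row), where each tile is tile_w x tile_h
 * pixels inside a tex_w x tex_h atlas. With power-of-two atlas sizes the
 * divisions are exact in floating point. */
UVRect tile_uvs(int col, int row, int tile_w, int tile_h, int tex_w, int tex_h) {
    UVRect r;
    r.u0 = (float)(col * tile_w) / (float)tex_w;
    r.v0 = (float)(row * tile_h) / (float)tex_h;
    r.u1 = (float)(col * tile_w + tile_w) / (float)tex_w;
    r.v1 = (float)(row * tile_h + tile_h) / (float)tex_h;
    return r;
}
```

For the 22x22-in-32x32 case above you'd pass `tile_w = tile_h = 22` padded to 32-pixel cells, or just use 22 directly against the atlas size if the sprites are packed tight.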
|
# ? Aug 24, 2010 19:00 |
|
Why not just use a rectangle texture? http://www.opengl.org/registry/specs/ARB/texture_rectangle.txt
|
# ? Aug 24, 2010 20:56 |
|
OneEightHundred posted:Not even "pretty much", you ARE guaranteed, power-of-two textures are used precisely because it lets FP coordinates be converted to texture coordinates with nothing but bit shifts at no loss of precision. Well, under 99% of use cases, yes, but I'm sure somewhere there's an FP implementation that uses something besides power of 2.
|
# ? Aug 25, 2010 01:11 |
|
UraniumAnchor posted:Well, under 99% of use cases, yes, but I'm sure somewhere there's an FP implementation that uses something besides power of 2.
|
# ? Aug 25, 2010 01:29 |
|
While writing a GLSL shader I've run into something I can't quite figure out: I have a floating value (float, vec3 etc.) and want to attenuate its values. val *= 0.5 yields all zeroes, as does everything else as long as it's not 1.0. It works when I divide instead, but even so, the value gets truncated such that val /= 2.5 == val /= 2.0 Note that I make sure I'm using floats. I believe the problem is the graphics card on my laptop; it's an Intel X3100 with a GM965 chip. Any thoughts?
|
# ? Aug 27, 2010 08:59 |
|
PenisOfPathos posted:While writing a GLSL shader I've run into something I can't quite figure out:
|
# ? Aug 27, 2010 11:12 |
|
I am trying to draw anti-aliased lines, and this is how to do it, or so I've read:code:
I'm not sure what other details to provide. I am using textures and shaders.
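The quoted code block is gone, but the classic fixed-function recipe for antialiased lines, which is presumably close to what was posted, looks like this (sketch only; it needs a live GL context and won't interact with custom shaders the same way):

```c
#include <GL/gl.h>

/* Draw one antialiased line using the old GL_LINE_SMOOTH path.
 * Line smoothing only works with blending enabled. */
void draw_aa_line(float x0, float y0, float x1, float y1) {
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

    glBegin(GL_LINES);
    glVertex2f(x0, y0);
    glVertex2f(x1, y1);
    glEnd();
}
```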
|
# ? Sep 1, 2010 00:22 |
|
Eponym posted:I am trying to draw anti-aliased lines, and this is how to do it, or so I've read: What OS, what GPU? AA lines are an odd duck. Does it work if you draw to GL_BACK?
|
# ? Sep 1, 2010 19:54 |
|
Spite posted:What OS, what GPU? AA lines are an odd duck. Mac OS X 10.6.4, using an Nvidia 9400m. Nothing changed when I rendered using GL_BACK. However, I did manage to fix things, although I don't know why it worked. code:
On a whim, I commented the bottom 3 lines, and my lines render, antialiased.
|
# ? Sep 2, 2010 20:28 |
|
Well, for one you aren't actually clearing the buffer. You have to call glClear(GL_COLOR_BUFFER_BIT). And I think there are a bunch of AA line bugs; I'd have to check. Turning off blending fixes it? Have you installed the GFX update?
|
# ? Sep 3, 2010 18:41 |
|
I never took linear algebra and so, while I'm teaching myself the relevant parts now, I'm having trouble with a ray-triangle intersection test. I'm also not that great at OpenGL yet, but it's coming along. The task is simple enough: given a click on the screen, and a ray using the x- and y-coordinates of the click cast from the near plane to the far plane, find the closest triangle intersected and the point of intersection. Here's how I'm going about it currently: I use gluUnProject with mouse_x and mouse_y at the z-axis points -1.f and 1.f to get the model coordinates of the click at each plane, which creates my ray's origin and destination. Then I use the following code to calculate the intersection point. code:
Also, a second question: if I use OpenGL to pick the triangles nearest the click and sort for the closest triangle to the click by z-value, could I guarantee that the click passes through this triangle? That would save some work if I could know that for sure. zynga dot com fucked around with this message at 22:44 on Sep 11, 2010 |
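The poster's intersection code was stripped, but a standard ray/triangle test (Möller–Trumbore) does the same job; this is a guess at the shape of it, not the original code. It returns the distance t along the ray to the hit, or -1 on a miss:

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Moller-Trumbore ray/triangle intersection. orig/dir define the ray
 * (dir need not be unit length; t is in units of |dir|), a/b/c are the
 * triangle vertices. Returns t >= 0 on hit, -1.0f on miss. */
float ray_triangle(Vec3 orig, Vec3 dir, Vec3 a, Vec3 b, Vec3 c) {
    const float EPS = 1e-6f;
    Vec3 e1 = sub(b, a), e2 = sub(c, a);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return -1.0f;      /* ray parallel to triangle plane */
    float inv = 1.0f / det;
    Vec3 s = sub(orig, a);
    float u = dot(s, p) * inv;               /* first barycentric coordinate */
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;             /* second barycentric coordinate */
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t = dot(e2, q) * inv;              /* distance along the ray */
    return t >= 0.0f ? t : -1.0f;
}
```

The hit point is then `orig + t * dir`, which answers the "point of intersection" half of the question directly.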
# ? Sep 11, 2010 22:41 |
|
Flashdance posted:I never took linear algebra and so, while I'm teaching myself the relevant parts now, I'm having trouble with a ray-triangle intersection test. I'm also not that great at OpenGL yet, but it's coming along. The task is simple enough: given a click on the screen, and a ray using the x- and y-coordinates of the click cast from the near plane to the far plane, find the closest triangle intersected and the point of intersection. Why not just use Picking if you are trying to find what polygon your mouse is hovering over? Basically, it's a Pre-Pass. Draw just the shapes you are going to use, no lighting, no texturing, just a different color for each object, find the color of the pixel your mouse is hovering over, then redraw the scene properly.
|
# ? Sep 11, 2010 23:21 |
|
PnP Bios posted:Why not just use Picking if you are trying to find what polygon your mouse is hovering over? Basically, it's a Pre-Pass. Draw just the shapes you are going to use, no lighting, no texturing, just a different color for each object, find the color of the pixel your mouse is hovering over, then redraw the scene properly. Well, that's why I asked question #2. The code actually does use picking to get the nearest triangle, and the previous code approximated the click by just placing a dot in the middle of that triangle. The problem with that method is that 1) it isn't accurate and 2) it doesn't allow multiple dots per triangle. So regardless of method I still need to calculate the point of intersection, but it would save work if I already knew the triangle intersected (e.g. the nearest picked one).
|
# ? Sep 11, 2010 23:53 |
|
You need to unproject from 0 (near) and 1 (far) if you're going to go that way, from what I recall. I highly recommend doing the math yourself though. Think of it as the pick origin being at 0,0,0 and the pick point being on the near plane, then see what that ray intersects with. Since the screen itself covers the entire near plane you can convert from those coords to eye coords without too much trouble. http://www.opengl.org/resources/faq/technical/selection.htm DON'T USE GL_SELECT. Unique colors are ok, but they will force a readback of a buffer, which probably isn't desirable.
|
# ? Sep 12, 2010 02:52 |
|
After looking at this for a bit more, I think the formulas are right or very close to it, but my data is wrong. For example, for a random triangle in the model, orthographic projection,code:
I notice two problems right away - the x and y values are obviously inaccurate, but the other problem is that unprojecting using the near plane returns a negative (if just barely) z-value, and each triangle vertex has a positive z-value. I have to be doing this wrong, because shouldn't an unprojection of the near plane return a larger positive z-value than any triangle vertex in the model? Also, I noticed that if I unproject using the above code for the far plane as well (1.f), I get different x- and y-values. If I'm using the same mouse_x and mouse_y for each unprojection and only changing the z-value for each call, shouldn't this in effect give me the start and end points for a ray cast from near to far plane using constant x and y coordinates?
|
# ? Sep 13, 2010 04:37 |
|
I've got a problem with transparency in XNA. The problem is that when a transparent pixel is drawn, the background shows through OK, but the pixel appears to pick up the z-value of its location in the triangle, which means that later objects drawn behind that point don't show through. I didn't write the shader myself, I used one from one of the tutorials on https://www.riemers.net (I forget which one atm). How do I fix this? Is it a problem with the shader? Edit: Looking online I see this isn't possible to solve with general alpha blending without sorting the objects back to front (which I would like to avoid). However I only want a masking effect - fully transparent or fully opaque. Is there a way to do this? It looks like the stencil buffer might work, but I have no experience with them and I can't find any good examples of what I want to do. HappyHippo fucked around with this message at 02:54 on Sep 16, 2010 |
# ? Sep 16, 2010 00:28 |
|
HappyHippo posted:I've got a problem with transparency in XNA. A handy way to do this in GLSL (OpenGL) is discard for fragment shaders (ie if(textureSample.a == 0.0) discard; to drop the fully transparent fragments). Not sure what the HLSL equivalent is though. Might be the same even.
|
# ? Sep 16, 2010 06:51 |
|
I'd never heard of the discard statement for HLSL, so I looked it up and it does exist. It sounds like it's the equivalent of what Screeb described for GLSL. From The Complete Effect and HLSL Guide: discard: This keyword is used within a fragment shader to cancel rendering of the current pixel. example: if(alphalevel < alpha_test) discard;
|
# ? Sep 16, 2010 14:18 |
|
Thanks a lot guys. That looks to be what I need.
|
# ? Sep 16, 2010 18:39 |
|
Keep in mind that branching, and especially stuff like discard/texkill, tends to run a lot better if you've got a lot of localized pixels that will take the same branch. On projection and picking: I wouldn't recommend using Unproject. Think about it this way: Your screen maps directly to the near plane. Once you've transformed everything into eye space, you know that your eye is at 0,0,0 and you also know where your near plane is at (you've specified your frustum's top, bottom, left and right, so you know those values). This means there's a simple mapping from a spot on your screen to a point on your near plane. You can then cast a ray from the origin through that point on the near plane to generate your pick ray. You'll have to transform it out of eye space (or everything into eye space), but that's the best way to do picking.
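That screen-to-near-plane mapping can be sketched in a few lines. Assuming a frustum given as in glFrustum(l, r, b, t, n, f), with the eye at the origin looking down -z, and window y growing downward (names are illustrative):

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* Map a window pixel to the corresponding point on the near plane in eye
 * space. Since the eye sits at the origin, this point is also the
 * direction of the pick ray. */
Vec3 pick_ray_dir(int px, int py, int screen_w, int screen_h,
                  float left, float right, float bottom, float top,
                  float near_z) {
    Vec3 d;
    d.x = left + (right - left) * ((float)px / (float)screen_w);
    /* window y grows downward, eye-space y grows upward */
    d.y = bottom + (top - bottom) * (1.0f - (float)py / (float)screen_h);
    d.z = -near_z;  /* OpenGL eye space looks down -z */
    return d;
}
```

Intersect that ray (origin 0,0,0) against your eye-space triangles and you have a full pick without gluUnProject.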
|
# ? Sep 17, 2010 05:19 |
|
I have a weird problem with DirectX, well, SlimDX with C# if it matters. I have a little test program that renders a cube of cubes very lazily. The base cube is the .X cube model from the DX SDK more or less My problem is that everything on the left of the viewport looks like poo poo. It is almost as if the resolution of the screen scales down somehow. Even putting my cube thingy in the middle of the screen doesn't add artifacts on the right hand side. On top of this, moving in the Z (in and out) direction works OK for the most part except for the back face, which seems to also have some sort of lower resolution in the Z than anything else, causing the back face to jump around. Is there something simple I am missing somehow? I have not had this problem in other frameworks and libraries but I have not used SlimDX for 3D in the past. Here's a piccy. edit: attachments don't work any more it seems, one sec... edit2: here Click here for the full 791x257 image. Kiwillian fucked around with this message at 10:24 on Oct 1, 2010 |
# ? Oct 1, 2010 10:21 |
|
Do you have a weird depth buffer format which causes a lot of imprecision on the left-hand side of your framebuffer? What happens if you reverse the order in which you draw the cubes (going from right to left, for instance)?
|
# ? Oct 1, 2010 15:10 |
|
What values are you using for your near plane and far plane distances? If you have a really huge range you might be losing depth resolution which could cause the issue with the back squares jumping around. It might also cause z-fighting between adjacent edges and give you the jaggy lines, but I can't explain why that would matter when moving from left to right. You could just try setting the z-planes to some smallish range like near=1, far=100 and see if it changes anything.
|
# ? Oct 1, 2010 19:14 |
|
Looks like it is z-fighting. Changing the draw order alters the effect and spacing them out removes it.
|
# ? Oct 1, 2010 21:21 |
|
Okay so this is pretty elementary for all you people doing super texture animation anal mapping with Flargn' Bfar's Optimized gently caress You Algorithm, but... I've never done anything with any graphics before, and I've given myself a week to see how far I get with a simple Wolf3D style raycaster. I'm surprised that I got all the elementary 2D map stuff down in about a day, but when doing the transform to a 3d perspective I'm having a bit of a rough time (it's been a few years since I've seen trig/linear algebra and I got rid of my books). I'm in the process of casting rays onto the grid. I'm using an xy coordinate vector and a heading integer in degrees. Using this process should give me any grid intersection points with that information. It works, but I'm not sure how to handle the boundaries of tan(a) when finding horizontal intersection points. The formula they use is x_step = blocksize / tan(a), but that obviously breaks when tracing a ray at tangent boundaries, and strangely nowhere that I can find even considers that, it's pretty elementary. In the example java code they fudge it by adding 0.001 degrees when making pre-computed trig tables and flooring. Is that normal? Seems pretty gross. I did find another discussion that doesn't use angles at all but constructs several vectors to build a projection plane but I'd have to rework large parts of what I'm doing to use a set of vectors instead of just an angle, so I'd rather not. I'd also have to find angles anyway to do 2d rotation and translation. NotShadowStar fucked around with this message at 23:00 on Oct 5, 2010 |
# ? Oct 5, 2010 22:57 |
|
It's probably worth understanding how to do the job with vector components rather than angles. There is no end to the gross stuff you'll run into. All too often a nasty-looking expression involving cascaded forward and inverse trig functions simplifies to a couple of multiplies and adds. And you'll never again have to wrap an angle back into [0,360). Allow yourself an inverse trig function if you need to print a value for a human to look at, otherwise use unit vectors to encode directions. For doing rotations, the rotation matrix is trivial to construct from the components of a direction vector. Fecotourist fucked around with this message at 06:29 on Oct 6, 2010 |
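As one small example of the component approach applied to the raycaster question above: the per-row x step comes straight out of the direction vector, and the degenerate case becomes an explicit zero test instead of a tan() asymptote. The function name and blocksize convention are illustrative:

```c
#include <assert.h>
#include <math.h>

/* x_step = blocksize / tan(a) rewritten with direction components:
 * tan(a) = dir_y / dir_x, so the step is blocksize * dir_x / dir_y.
 * A horizontal ray (dir_y == 0) never crosses a horizontal grid line,
 * so we return NAN to signal "skip horizontal intersections". */
float horizontal_x_step(float dir_x, float dir_y, float blocksize) {
    if (dir_y == 0.0f)
        return NAN;
    return blocksize * dir_x / dir_y;
}
```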
# ? Oct 6, 2010 06:24 |
|
NotShadowStar posted:Okay so this is pretty elementary for all you people doing super texture animation anal mapping with Flargn' Bfar's Optimized gently caress You Algorithm, but... If you insist on using the method described (and I strongly urge you to consider Fecotourist's post), then simply realize that when the angle is zero, no horizontal intersection is possible. Think about it for a second and it should be obvious why. So in that that case you can simply not search for one.
|
# ? Oct 7, 2010 18:18 |
|
Yeah I understand that, it's the asymptotes of the tangent function that were screwing with me. Also I never really dealt with trig functions in a language outside of Maple, but I've discovered the tangent function gets hilariously imprecise around 90±10 degrees. So even if I just do if(sin(a) == 0) { //ignore horizontal intersections } when a is around 80 degrees the math is still completely off. I guess I've been horribly confused as I've seen a number of raycasting implementations, including Wolf3D, that just use a location vector and a degree heading to find the first intersection for DDA and it works fine. One implementation I found did a really weird thing, creating pre-built trig tables by translating degrees to some internal hexadecimal system, and it also did weird things with the tangent function by pre-computing from 0..45, then using 135..180 - 180 for tan(45..90), to avoid the asymptote imprecision weirdness. I'll give vectors a shot sometime.
|
# ? Oct 7, 2010 21:41 |
|
I guess you could subtract 90 degrees from the heading, and then swap the coordinates around... (if after change in heading you get x'a, y'a then xa = -y'a and ya = x'a) I think that's probably what that implementation you found did.
|
# ? Oct 8, 2010 20:05 |
|
I'm having a problem with textures scaling shittily. I start with a texture that looks like this and as I zoom in and out it converts between looking OK and having poor sampling on the seams between tiles: I assume this is caused by the texture sampler having problems detecting the one pixel of dark border on some of the edges. Are there any tricks to get around this problem? e: VVV I will give that a shot, thanks! PDP-1 fucked around with this message at 23:42 on Oct 16, 2010 |
# ? Oct 16, 2010 21:23 |
|
PDP-1 posted:I'm having a problem with textures scaling shittily. I start with a texture that looks like this Mipmapping, especially combined with trilinear filtering.
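In OpenGL terms the suggestion looks like the following sketch (if the poster is on XNA/D3D, the equivalent lives in the sampler/filter states). GL_GENERATE_MIPMAP is the old GL 1.4 automatic-mipmap path; this needs a live GL context and a texture with image data already uploaded:

```c
#include <GL/gl.h>

/* Enable automatic mipmap generation and trilinear filtering on a
 * 2D texture, which smooths out the tile-seam shimmer when minifying. */
void enable_trilinear(GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```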
|
# ? Oct 16, 2010 21:27 |
|
Hi All. I have been looking into converting 3D model files into objects optimised for collisions. I decided to start with .X files as I already have written a loader. I need to determine the polygon facing, and have been trying to use normals; however, the normals don't always seem to match what I expect for the face. I've also checked the polygon winding order and that seems inconsistent. Is there a way I can determine polygon facing for this purpose, or is there a different method to allow me to convert .X meshes into collision objects? By collision objects I mean objects containing infinite planes to represent the faces and points to represent the vertices...
|
# ? Oct 20, 2010 18:00 |
|
Use a cross-product. If you have a triangle A,B,C, then the cross-product of the edge directions, i.e. (B-A)x(C-B) is a vector in the direction of that triangle's normal. Incidentally, the length of that vector will be the area of the triangle.
|
# ? Oct 21, 2010 04:22 |
|
OneEightHundred posted:Incidentally, the length of that vector will be the area of the triangle. It will be double the area.
|
# ? Oct 21, 2010 04:30 |
|
Ah, but the order you cross product the vectors determines whether the normal you get faces out from the face or into the face. Hence the need for enforced CCW (counter-clockwise) vertex listings. The length of the normal is double the face area, which is useful for ordering the polygons by face area.
|
# ? Oct 22, 2010 16:38 |
|
Does anyone know what is going on here? I'm trying to make some tubes. For some reason when I use shade with GL_FLAT the lighting looks right (although not smooth of course). If I turn on GL_SMOOTH, for some reason the shading isn't continuous. Where should I look to fix this? normals? geometry? Right now each segment is an individual vertex array that is drawn in sequence.
|
# ? Oct 27, 2010 23:08 |