|
Say the coordinates of your point in world space are given by the vector P, and it has an orientation vector n. I'm not sure exactly what you mean by "draw a rectangle around it", but I'll assume you want the rectangle in a plane perpendicular to n. You'll also need an "up" direction perpendicular to n to orient the rectangle - we'll say that's u. If you don't have an "up" direction already, you could calculate one based on the closest axis, but for now I'll assume you have one. For ease, we'll assume that both n and u are normalized: |n| = |u| = 1.

You can calculate a "right" direction vector from a cross product: r = n x u. Now r and u are the basis vectors of a set of local x-y coordinates in the plane of your rectangle. Then for a rectangle of size w x h centered on P, the positions of the four corners (in world space) are:

C1 = P + (h/2)u + (w/2)r
C2 = P + (h/2)u - (w/2)r
C3 = P - (h/2)u - (w/2)r
C4 = P - (h/2)u + (w/2)r

I hope that's helpful.

Hobnob fucked around with this message at 17:48 on Sep 11, 2009
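The corner construction above can be sketched in plain C++ (a minimal sketch with hypothetical names, not anyone's actual code; it assumes n and u are unit length and perpendicular, and uses half-extents so the rectangle comes out w x h):

```cpp
#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Corners of a w x h rectangle centered at P, lying in the plane
// perpendicular to the unit normal n, oriented by the unit "up" vector u.
std::array<Vec3, 4> rectangleCorners(Vec3 P, Vec3 n, Vec3 u, double w, double h) {
    Vec3 r = cross(n, u);           // "right" basis vector; unit if n is perpendicular to u
    double hw = w / 2, hh = h / 2;  // half-extents so the rectangle comes out w x h
    return {{ P + hh * u + hw * r,
              P + hh * u - hw * r,
              P - hh * u - hw * r,
              P - hh * u + hw * r }};
}
```

With P at the origin, n along +z, and u along +y, the rectangle lies in the xy-plane and the corners come out at (±w/2, ±h/2, 0) as expected.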
# ? Sep 11, 2009 17:44 |
|
|
|
Goreld posted:It's probably a Gabor filter. There's a recent Siggraph paper that uses Gabor filters to create noise: here. Thanks!
|
# ? Sep 12, 2009 02:50 |
|
I'm using OpenGL ES. I have my world (a flat board) and that's all good. But the background is black. Is there a way to make the default emptiness a different color?
|
# ? Sep 16, 2009 00:05 |
|
Unparagoned posted:I'm using opengles. I have my world or (flat board) that's all good. But the background is black. Is there a way to make the default emptiness a different color? In regular OpenGL this is set by glClearColor(R,G,B,A) - I don't think it will be any different in ES.
|
# ? Sep 16, 2009 00:12 |
|
Hobnob posted:In regular OpenGL this is set by glClearColor(R,G,B,A) - I don't think it will be any different in ES. Thanks, I knew there was something like that, but couldn't find it. I was looking under drawView, instead of drawSetup...
|
# ? Sep 16, 2009 00:27 |
|
I'm really having trouble wrapping my head around duplicating the OpenGL lighting model in a shader in ES 2.0. So far, I think the procedure goes like this (for a directional light):

- Transform the light vector by the modelview matrix and normalize the result, to get the eyespace light vector.
- Transform the vertex normal by the normal matrix (which is the upper-left 3x3 submatrix of the modelview matrix, neglecting nonuniform scaling) and normalize the result, to get the eyespace normal.
- Take the dot product of the eyespace light vector and the eyespace normal vector, clamped to 0, to get the diffuse contribution.
- Multiply the diffuse contribution by the light color to get the diffuse component of the fragment.

I think something is wrong in the light vector transformation - when I display the eyespace normals they seem to be correct, but I can see the diffuse contribution of each vertex changing as I move the camera forward and back, which doesn't seem like something that should happen. I also can't find any good explanations of this online, if anyone has any (all the links I turn up are just OpenGL recipes, and following them doesn't seem to help).

haveblue fucked around with this message at 22:58 on Sep 23, 2009
# ? Sep 23, 2009 22:51 |
|
haveblue posted:I'm really having trouble wrapping my head around duplicating the OpenGL lighting model in a shader in ES

- Eyespace light vector = inverse camera/eye matrix * light vector
- Normal matrix: http://www.lighthouse3d.com/opengl/glsl/index.php?normalmatrix
|
# ? Sep 24, 2009 00:20 |
|
haveblue posted:-Transform the vertex normal by the normal matrix (which is the upper left 3x3 submatrix of the modelview matrix, neglecting nonuniform scaling), normalize the result, to get the eyespace normal. It's the inverse transpose of the upper 3x3 of the modelview. Of course, if the modelview is only rotation and uniform scaling, taking the inverse transpose leaves the matrix unchanged (up to scale), so the plain upper 3x3 works. Your problem is probably applying the translation to the light vector, instead of just the rotation/scale.
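Spite's point about not applying the translation can be shown in plain C++ (a hedged sketch with hypothetical names, not the poster's shader): a direction such as a directional light vector transforms through the upper-left 3x3 only, which is the same as multiplying the full 4x4 matrix with w = 0, while a point uses w = 1 and picks up the translation.

```cpp
struct Vec3 { double x, y, z; };

// 4x4 transform stored row-major for readability: m[row][col],
// with the translation in the last column.
struct Mat4 { double m[4][4]; };

// Points transform with an implicit w = 1, so the translation applies.
Vec3 xformPoint(const Mat4& M, Vec3 p) {
    return { M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3],
             M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3],
             M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3] };
}

// Directions transform with w = 0: only the upper-left 3x3
// (rotation/scale) applies, never the translation column.
Vec3 xformDirection(const Mat4& M, Vec3 v) {
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z };
}
```

A pure-translation matrix moves a point but leaves a direction untouched, which is exactly why a translated light vector makes the lighting change as the camera moves.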
|
# ? Sep 24, 2009 01:33 |
|
Spite posted:Your problem is probably applying the translation to the light vector, instead of just the rotation/scale. Looks like this was exactly it, thanks!
|
# ? Sep 24, 2009 18:11 |
|
OK, now that eye space works, let's try tangent space. I've implemented tangent space generation based on the code segment on this page, but a good chunk of the polygons are coming out as if the normal map is upside-down. Does this mean the handedness is not being handled properly, or is something else wrong? Vertex shader: code:
haveblue fucked around with this message at 20:54 on Sep 29, 2009 |
# ? Sep 29, 2009 18:27 |
|
Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something.
|
# ? Sep 30, 2009 00:08 |
|
Spite posted:Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something. Maybe, but I think I've duplicated the method on the page I linked. The only change I made was generating the bitangent beforehand as a vertex attribute, not computing it in the shader as they suggest. This is the tangent/bitangent generator, in case there's something in it I missed: code:
|
# ? Sep 30, 2009 19:00 |
|
Try creating test normal maps, like one that does nothing but straight normals, and you should get regular smooth shading. If you don't, something's wrong. Then work on the other directions to verify. I've probably hosed up tangent space normal mapping a bajillion times; feel free to PM me if you're having difficulty.
|
# ? Oct 1, 2009 03:52 |
|
Stanlo posted:Try creating test normal maps, like one that does nothing but straight normals and you should get regular smooth shading. If you don't, something's wrong. Then just work onto the other directions to verify. I've probably hosed up tangent space normal mapping a bajillion times, feel free to pm me if you're having difficulty. If I replace the normal map lookup with vec3(0,0,1), I do indeed get smooth shading. So it's got to be something involving texture coordinates.
|
# ? Oct 1, 2009 04:02 |
|
You should get normal smooth shading (i.e. smoothed normals shading) if you shove in (.5, .5, 1), not (0, 0, 1). Unless you're using a signed texture format? Do the other components work?
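The reason (.5, .5, 1) is the "flat" texel is the standard unsigned-texture packing: components are remapped from [-1, 1] into [0, 1] when the map is baked, so the shader has to undo that. A tiny C++ sketch of the decode step (hypothetical names, mirroring the usual `texel * 2.0 - 1.0` GLSL idiom):

```cpp
struct Vec3 { double x, y, z; };

// Undo the [-1,1] -> [0,1] packing used by unsigned normal map formats.
// The "do nothing" texel (0.5, 0.5, 1) decodes to the flat normal (0, 0, 1).
Vec3 decodeNormal(Vec3 texel) {
    return { texel.x * 2 - 1,
             texel.y * 2 - 1,
             texel.z * 2 - 1 };
}
```

With a signed texture format the remap is unnecessary, which is why (0, 0, 1) would only work there.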
|
# ? Oct 1, 2009 18:23 |
|
edit: Actually I'm confused. Is this an indexed array of vertices, or does each consecutive set of three vertices represent one triangle? OneEightHundred fucked around with this message at 06:46 on Oct 3, 2009
# ? Oct 3, 2009 06:39 |
|
It's an interleaved array of consecutive triangles. At each stride there is a position followed by a normal followed by a texcoord (hence the elementStride+6 instead of +3).
|
# ? Oct 3, 2009 18:05 |
|
The tangent/binormal generation code looks OK, though there is a possibility it's being mishandled elsewhere. While the effect of this depends VERY heavily on the scale you're doing things at, I would strongly recommend avoiding the use of oversized vector components, and swizzling away components you don't need so you don't accidentally operate on them. i.e. instead of something like this... quote:tcVarying = vec2(fullTexture.x, fullTexture.y); ...use: quote:tcVarying = fullTexture.xy; The big problem is when you get stuff like this: quote:vec4 tangentEye = normalize(normalMatrix * tangent); The last row/column (depending on convention) of a transform matrix is generally (0,0,0,1), so the W component gets copied through. The side-effect is that the W component of (normalMatrix * tangent) is non-zero, and will consequently affect the result of the normalize. That goes double for this case, where tangent.w may be -1 or 1. The dot product may cause even more issues if lightVector.w is non-zero. Generally speaking, you should use swizzling to remove components you're not using, to prevent unintended side-effects. i.e. something like: quote:vec3 tangentEye = normalize((normalMatrix * tangent).xyz); Best thing you can do for now, though, is make a normal map where the direction things should be facing is unmistakable - i.e. put a white circle on a black background and just run your heightmap-to-normalmap filter of choice on it - and make sure the polygons are actually getting flipped AND getting flipped inconsistently (as opposed to being flipped consistently, in which case you just negate the bitangent or something).
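The W-component pitfall described above can be demonstrated outside GLSL with a small C++ sketch (hypothetical names, not from the thread): normalizing all four components of a vector whose w is nonzero gives a different xyz than swizzling w away first.

```cpp
#include <cmath>

struct Vec4 { double x, y, z, w; };
struct Vec3 { double x, y, z; };

// What GLSL's normalize() does to a vec4: the length includes w.
Vec4 normalize4(Vec4 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z + v.w*v.w);
    return { v.x/len, v.y/len, v.z/len, v.w/len };
}

// Swizzle w away first - the intended result for a tangent whose
// w holds a handedness flag of -1 or 1 rather than geometry.
Vec3 normalize3(Vec4 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}
```

For a unit tangent with w = 1, the four-component normalize shrinks the xyz part by a factor of sqrt(2), which silently dims every lighting term computed from it.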
|
# ? Oct 4, 2009 17:32 |
|
I should be calculating the same handedness for each vertex of a polygon and if I don't then something is terribly wrong, correct? e: Yes, something was terribly wrong. code:
haveblue fucked around with this message at 21:04 on Oct 7, 2009 |
# ? Oct 7, 2009 20:31 |
|
So I'm trying to build a ray tracer that applies translation, rotation, and scale to a sphere, then runs a typical tracing algorithm, but I'm running into a weird issue. It seems that the further away from the center of the field of view I move the sphere, the more stretched the sphere is, but always in the direction towards the center. Anyone have any idea what could be making this happen? I've tried two different intersection solutions (both algebraic and geometric) and my tracing code works when the sphere is at (0,0,0). Here is the code for my tracing algorithm, as I'm thinking the problem is hidden somewhere in here: code:
|
# ? Oct 20, 2009 02:36 |
|
Actually, here are my two different intersection algorithms. I just tested their distance values and one is showing fewer values than the other, yet only one produces the proper lighting (the geometric one). code:
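For comparison, here is a minimal geometric ray-sphere intersection in C++ (a sketch under assumptions, with hypothetical names - not the poster's code). Note the precondition that the ray direction is unit length; forgetting to renormalize the direction after transforming a ray into object space is a common way for t values, and with them the rendered shape, to come out skewed.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the smallest positive ray parameter t at which the ray
// origin + t*dir hits the sphere, or -1 on a miss. dir must be unit length.
double intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 L = { center.x - origin.x, center.y - origin.y, center.z - origin.z };
    double tca = dot(L, dir);               // closest approach along the ray
    double d2  = dot(L, L) - tca * tca;     // squared distance from center to ray
    if (d2 > radius * radius) return -1.0;  // ray passes outside the sphere
    double thc = std::sqrt(radius * radius - d2);
    double t0 = tca - thc;                  // near hit
    double t1 = tca + thc;                  // far hit (used if origin is inside)
    if (t0 > 0) return t0;
    if (t1 > 0) return t1;
    return -1.0;
}
```

A quick sanity check: a ray from the origin down -z should hit a unit sphere centered at (0,0,-5) at t = 4.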
|
# ? Oct 20, 2009 03:11 |
|
I'm having some problems getting vertex buffer objects to work in Windows... At first I was getting undefined reference errors, which I fixed by including glext and setting the GL_GLEXT_PROTOTYPES flag like this in my header file: code:
[Linker error] undefined reference to `glGenBuffers@8'
[Linker error] undefined reference to `glBindBuffer@8'
[Linker error] undefined reference to `glBufferData@16'

I use Dev-C++, so I tried downloading the GLEW devpak and changing my headers around a little, which gives me this error instead:

[Linker error] undefined reference to `_imp____glewGenBuffersARB'

EDIT: Fixed it... It turns out opengl32.dll only exports functions up to OpenGL 1.1 (so no vertex buffer objects), and the workaround is to load the newer entry points at runtime, something like this: code:
Mata fucked around with this message at 07:33 on Nov 10, 2009 |
# ? Nov 10, 2009 07:11 |
|
It's been a while since I've used OGL, but isn't it around version 3.1 or something? Are you using the latest dlls?
|
# ? Nov 10, 2009 17:57 |
|
Static linking to opengl32.dll is something you should never do anyway, since the program will close if a proc lookup fails, making it impossible to support optional features. You can macro it to make things a bit easier, i.e.: code:
code:
|
# ? Nov 10, 2009 20:21 |
|
Thanks for the help above, but my crappy method WorksForMe so fixing it up better is low priority now! My latest OpenGL headache in my long line of trials & tribulations is getting textures to work... I load .obj files, which are plain-text lists of vertices; for example, the vertices in a 10x10 cube look like this: code:
code:
code:
I hope you're with me so far... The problem is OpenGL doesn't seem to support separate texture indices (or normal indices, but whatever) and just uses the same vertex index for the texture and normal. So, to OpenGL, the indices above are essentially 1/1/1 3/3/3 4/4/4 etc. So I figure, that sucks, but I only have to go through the vertex and normal arrays and rearrange them so the vertex index points to the proper texture index, so that 1/1/1 3/3/3 4/4/4 become the correct indices. Problem is, it's not that simple, because .objs reuse the same vertex index for several different texture and normal indices - for example, another one of the indices for our cube will be: code:
I feel like a loving retard because texture mapping shouldn't be this complicated... Please tell me I AM a retard and am approaching this from the wrong angle, because to fix this I would have to make a new vertex for each unique permutation of coordv/coordt/coordn, which is not only a pain in the rear end to program but will also bloat the size of my models to hell. Mata fucked around with this message at 01:27 on Nov 13, 2009
# ? Nov 13, 2009 01:24 |
|
Mata posted:to fix this I would have to make a new vertex for each unique permutation of coordv/coordt/coordn which is not only a pain in the rear end to program but will also bloat the size of my models to hell. Unfortunately this really is what you have to do. Each vertex in OpenGL has exactly one index which is used for all the attribute arrays. You should be able to have your 3D package generate models that already have that property and not have to worry about generating it yourself, although you're right about the increased footprint. What kind of models are you making that you have significant redundancy of normals?
|
# ? Nov 13, 2009 01:31 |
|
haveblue posted:Unfortunately this really is what you have to do. Each vertex in OpenGL has exactly one index which is used for all the attribute arrays. You should be able to have your 3D package generate models that already have that property and not have to worry about generating it yourself, although you're right about the increased footprint. I couldn't find this option in 3dsmax. I didn't really check if my models had significant redundancy of indices; it's just the principle of the thing. Is there any point to using indices at all, then?
|
# ? Nov 13, 2009 01:33 |
|
Mata posted:I couldn't find this option in 3dsmax I didn't really check if my models had significant redundancy of indices, it's just the principle of the thing. Indices reduce the size, both on the file and on the GPU. Most of the time for personal projects when perf isn't an issue you can just cheese your way around and draw everything as unindexed triangle lists. If you want the perf and space boost, a simple optimization would be: code:
Sex Bumbo fucked around with this message at 01:50 on Nov 13, 2009 |
# ? Nov 13, 2009 01:46 |
|
Stanlo posted:Indices reduce the size, both on the file and on the GPU. Yeah, I worked out an algorithm in my head, but it's still funny how something like "create a list of unique vertices", which would be EZ in, say, Python, is difficult for me in C++. But it seems like pretty much EVERY vertex triplet will be unique. I might be wrong, but if there are only a handful of shared vertices then surely using indexed vertex buffer objects won't be much of a performance boost at all. Oh well, I'm not one to pinch every cycle..
|
# ? Nov 13, 2009 02:06 |
|
Stanlo posted:Indices reduce the size, both on the file and on the GPU. Space (or more correctly, bandwidth) is a factor, in that indices allow adjacent triangles using the same vertices to load the vertex data only once (via a pre-transform cache); however, a bigger benefit of using indices is that they allow the GPU to cache the results of the transform/vertex shader stage in a post-transform cache as well, thus saving the cost of the vertex processing. So yeah, it doesn't matter if you're not GPU performance bound, but it's pretty important if you are.
|
# ? Nov 13, 2009 02:17 |
|
Mata posted:Yeah I worked out an algorithm in my head but it's still funny how something like "create a list of unique vertices" which would be EZ in say, python, is difficult for me in C++. If you'd use a dictionary in Python (maybe even if you wouldn't), use a std::map in C++. Something like a map from tuple<vertex_pointer,normal_pointer...> to an index in the vertex array. A map is happy to tell you if it's already seen something, with log complexity. Or I'm completely missing the point. vvvvvvvvvv When I did this, it was at load time. Given raw vertices, normals, etc. and the code to draw the geometry, I replaced glVertex();glNormal()... with a map lookup. If the combination was novel, I added a new interleaved vertex to the array and did a map insertion; otherwise I just took the index the map gave me. Either way the index array got a new index. Fecotourist fucked around with this message at 17:54 on Nov 13, 2009
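The map-based reindexing described above can be sketched like this (a minimal sketch with hypothetical names; it assumes the OBJ faces have already been parsed into v/vt/vn index triplets, one per corner):

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <vector>

// One OBJ corner: (position index, texcoord index, normal index).
using Triplet = std::array<int, 3>;

struct Reindexed {
    std::vector<Triplet> vertices;  // one entry per unique triplet
    std::vector<uint32_t> indices;  // single index stream, as OpenGL expects
};

// Collapse multi-index OBJ corners into single-index form: each unique
// v/vt/vn combination becomes exactly one output vertex, and repeats
// become repeated indices instead of duplicated vertex data.
Reindexed reindex(const std::vector<Triplet>& corners) {
    Reindexed out;
    std::map<Triplet, uint32_t> seen;  // triplet -> output vertex index
    for (const Triplet& t : corners) {
        auto it = seen.find(t);
        if (it == seen.end()) {
            uint32_t idx = static_cast<uint32_t>(out.vertices.size());
            out.vertices.push_back(t);
            it = seen.emplace(t, idx).first;
        }
        out.indices.push_back(it->second);
    }
    return out;
}
```

In a real loader the output vertices would be the interleaved position/texcoord/normal data looked up through the triplet, but the dedup logic is the same.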
# ? Nov 13, 2009 05:39 |
|
Creating a list of unique vertices is something you do during preprocessing, not runtime.
|
# ? Nov 13, 2009 13:21 |
|
Fecotourist posted:Something like a map from tuple<vertex_pointer,normal_pointer...> to index in the vertex array. Make sure you supply custom equality and comparison operators so that you catch floating point errors if you do this. But really, I doubt you need the vertex throughput, just throw it all at the gpu however you want and it will probably be dandy.
|
# ? Nov 13, 2009 20:30 |
|
I had the luxury of just relying on pointer comparison. I was really just going from an independent multiple-index format to single-index.
|
# ? Nov 13, 2009 20:45 |
|
I'm puttering around in XNA as an intro to 3D graphics and I'm getting a really strange error - adding text sprites seems to be messing up my 3D images. The picture on the left below is what I want - some axes and a red triangle sitting on a ground mesh. When I try to put in some text, the letters show up just fine, but the red triangle 'moves' below ground. It isn't actually changing location; the ground plane is drawing on top of it. The code below is the main draw loop; the lines that cause the problem are commented out. The base.Draw call draws the 3D stuff, the spriteBatch calls do the text. code:
Any suggestions? edit: On further research it looks like going into sprite mode sets a bunch of flags that don't get reset when returning to 3D mode on the next frame update. There is a writeup here with more details. Adding the following lines after spriteBatch.End() fixes the problem. code:
PDP-1 fucked around with this message at 04:16 on Nov 16, 2009 |
# ? Nov 16, 2009 00:36 |
|
Most likely candidate would be the depth testing mode being changed, or depth testing being disabled.
|
# ? Nov 16, 2009 03:03 |
|
Can I seriously not tile textures from an atlas in OpenGL
|
# ? Jan 14, 2010 21:28 |
|
not a dinosaur posted:Can I seriously not tile textures from an atlas in OpenGL Not without doing some pixel shader tricks, no.
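The "pixel shader trick" amounts to doing the wrap yourself: hardware GL_REPEAT wraps over the whole texture, not over one tile of an atlas, so the shader takes fract() of the tiling coordinate and then maps it into the tile's rectangle. A C++ sketch of that per-fragment math (hypothetical names, mirroring what the fragment shader would compute):

```cpp
#include <cmath>

struct UV { double u, v; };

// GLSL-style fract(): fractional part, always in [0, 1).
double fract(double x) { return x - std::floor(x); }

// Wrap a tiling coordinate inside one tile of an atlas: fract() the
// local coordinate, then map the result into the tile's sub-rectangle
// given by its origin and size in atlas texture coordinates.
UV atlasUV(UV local, UV tileOrigin, UV tileSize) {
    return { tileOrigin.u + fract(local.u) * tileSize.u,
             tileOrigin.v + fract(local.v) * tileSize.v };
}
```

One caveat worth knowing: the fract() discontinuity at the wrap seam confuses automatic mipmap level selection, so atlases tiled this way often need explicit gradients or padding to avoid seam artifacts.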
|
# ? Jan 14, 2010 21:50 |
|
I crossposted this with the games thread but this seems like a better place for it. I was wondering how simple it was to have a Direct2D application be on top of a game. I want to put my window on top of my game so I can get updates from my program while playing. Unfortunately this seems like it would be harder than I had thought and requires hooking Direct3D functions to write to the screen. Is there an easier way to do this?
|
# ? Jan 15, 2010 06:39 |
|
|
|
One of the first Google results is for a DirectDraw overlay, which is probably functionally similar to Direct2D: http://www.gamedev.net/community/forums/topic.asp?topic_id=359319
|
# ? Jan 15, 2010 06:53 |