|
Cross posting from the iPhone thread, maybe you guys would know: I want to use multitexturing (which I have not done before). I set a color (via glColor), an RGBA texture (1), and an alpha texture (2). I want this behavior:

Color of fragment = glColor color x color from RGBA texture
Alpha of fragment = glColor alpha x alpha from RGBA texture x alpha from alpha texture

And then for the fragment to be drawn. OpenGL initialization: code:
code:
|
# ? Mar 31, 2009 01:52 |
|
|
Small White Dragon posted:Cross posting from the iPhone thread, maybe you guys would know: Make sure the alpha texture is uploaded with ALPHA as the internal representation (when you call TexImage2D), not LUMINANCE or RGB. Using MODULATE with a LUMINANCE/RGB texture will cause the color to be multiplied by the texture and leave the alpha channel alone (which sounds like what you're experiencing), using MODULATE with an ALPHA texture will cause the alpha channel to be multiplied by the texture and leave the color alone. OneEightHundred fucked around with this message at 10:09 on Mar 31, 2009 |
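To make the format distinction concrete, here is the GL_MODULATE arithmetic for the two internal formats written out as plain C (one fragment, values in [0,1]; this mirrors the texture environment table in the GL spec, not any code from the thread):

```c
#include <assert.h>
#include <math.h>

typedef struct { double r, g, b, a; } Color;

/* GL_MODULATE with an ALPHA-format texture: RGB passes through
   untouched, alpha is multiplied by the texel's alpha. */
static Color modulate_alpha_tex(Color frag, double texel_a) {
    Color out = { frag.r, frag.g, frag.b, frag.a * texel_a };
    return out;
}

/* GL_MODULATE with a LUMINANCE-format texture: RGB is multiplied by
   the texel's luminance, alpha passes through untouched. */
static Color modulate_luminance_tex(Color frag, double texel_l) {
    Color out = { frag.r * texel_l, frag.g * texel_l,
                  frag.b * texel_l, frag.a };
    return out;
}
```

So a LUMINANCE upload dims the color and leaves alpha alone, which is exactly the "color gets multiplied, alpha doesn't" symptom; an ALPHA upload gives the alpha-only multiply the original poster wants from unit 2.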
# ? Mar 31, 2009 09:55 |
|
Plus, you need to call glEnable(GL_TEXTURE_2D) separately for each texture unit you want to use (after glActiveTexture)
|
# ? Mar 31, 2009 10:51 |
|
zzz posted:Plus, you need to call glEnable(GL_TEXTURE_2D) separately for each texture unit you want to use (after glActiveTexture) I don't think that's right; you should only need to turn on the texture state once.
|
# ? Apr 1, 2009 09:44 |
|
PnP Bios posted:I don't think thats right, you should only need to turn on the texture state once.
|
# ? Apr 2, 2009 02:03 |
|
Also, you stop doing multitexture by disabling GL_TEXTURE_* on all the texture units except 0, so it has to be per-unit.
|
# ? Apr 2, 2009 02:28 |
|
Suppose this is more of an annoying 2D question, but I need to figure this thing out for my lab, hopefully by tonight. (For OpenGL.) I'm doing a very basic drawing program that essentially takes in 2 points via left click, then draws the appropriate primitive based on a selection from a menu. For some reason I have been having some weird problems with the display function: it displays everything properly according to the coordinate system I've set up with the glOrtho() command, but every time glutPostRedisplay() is called (which I've been using to redisplay the objects after a background color change) the viewport seems to zoom in on a corner of the render space on each redisplay. I'm not sure what's causing this to happen, nor am I sure I'm using the Ortho command in the right place (I have it in the display function right now), but this is making it so that I cannot make any changes to the renders, and will be a fail =( code:
I'd really appreciate some help, because I'm stumped on how to get this working right now. Here is the code in its entirety if you want to see it run. http://www.megaupload.com/?d=AE8HIC5S
|
# ? Apr 21, 2009 00:46 |
|
Do a glLoadIdentity() call before the glOrtho call. The active matrices do not reset on their own.
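A quick numeric sketch of why this happens (not the poster's code): glOrtho multiplies onto the current projection matrix instead of replacing it, so calling it in the display function on every redisplay compounds the transform. A 1-D slice of the math:

```c
#include <assert.h>
#include <math.h>

/* 1-D slice of what glOrtho appends: x_ndc = s*x + t, where
   s = 2/(right-left) and t = -(right+left)/(right-left). */
typedef struct { double s, t; } Map1D;

static Map1D ortho1d(double left, double right) {
    Map1D m = { 2.0 / (right - left),
                -(right + left) / (right - left) };
    return m;
}

/* glOrtho multiplies onto the CURRENT matrix: new(x) = cur(m(x)). */
static Map1D compose(Map1D cur, Map1D m) {
    Map1D r = { cur.s * m.s, cur.s * m.t + cur.t };
    return r;
}

static double apply(Map1D m, double x) { return m.s * x + m.t; }
```

With glOrtho(0, 1, ...) applied once, the world center 0.5 maps to NDC 0; applied twice without glLoadIdentity, the same point maps to -1, i.e. the view has magnified toward a corner, and each further redisplay compounds it again (whether it zooms in or out depends on the ortho extents).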
|
# ? Apr 21, 2009 01:10 |
|
I love you
|
# ? Apr 21, 2009 01:18 |
|
Direct3D beginner here. I've been playing around with vertex buffers but I haven't been able to find any good examples so I'm making a lot of assumptions about how I should be doing things.

I'm working on a simple application that just renders some different shapes to the screen. To do this I create one big vertex buffer and add all the vertices for each shape to this buffer, and keep a list of all the shapes I've added (number of vertices, vertex size and primitive type), so when it comes to rendering everything I can go through the list and render each shape one by one by calculating the number of bytes to offset when reading from the buffer. I hope that makes sense. Is this a good way of doing this or am I being retarded?

If I understand correctly it's best to be using one vertex buffer (as opposed to a separate buffer for each shape). If I want to start animating things the important thing is to minimize the number of locks/unlocks each frame, so I would send the vertices for all the shapes in one batch. What if I want to dynamically change the number of shapes? I would need to also change the size of the vertex buffer. Is there a way of doing this or should I just release it and create a new one whenever I need more/less space?

If somebody could let me know if I'm on the right track or if I'm grossly misunderstanding anything it would be much appreciated!
|
# ? Apr 21, 2009 18:43 |
|
Premature optimization is the root of all evil.
|
# ? Apr 22, 2009 01:28 |
|
Dicky B posted:Direct3D beginner here. I've been playing around with vertex buffers but I haven't been able to find any good examples so I'm making a lot of assumptions about how I should be doing things. How many objects are we talking about here? You won't realize a significant savings unless you are eliminating hundreds or thousands of locks per frame. And it's going to become a huge headache when you want to be able to dynamically add and remove objects from the scene.
|
# ? Apr 22, 2009 02:12 |
|
Avenging Dentist posted:Premature optimization is the root of all evil. Your approach is basically good - you may want to draw more than one shape per draw call though.

There is no reason you need to change the size of the vertex buffer or copy things around in it; modify the index buffer instead. If you aren't drawing any indices that refer to a vertex, then it's as good as deleted, right? Put the vertex indices of the shape you're deleting into a free list, then reuse them when creating a new shape. It won't be the greatest on the post-transform cache, but with this kind of thing you tend to be limited on the CPU side of things anyhow.

As far as the index buffer goes, only modify it when you create a new shape - skip over the shape indices you've deleted. You'll need to use multiple draw calls to jump around the 'holes', but you should be filling those holes ASAP with new shapes. If the list doesn't change that frequently, you may want to move shapes around in the index list to defragment it.

It's a similar strategy to what a lot of memory allocators use, so if you google around for literature along those lines, you might get some other ideas. krysmopompas fucked around with this message at 03:44 on Apr 22, 2009 |
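A minimal sketch of the free-list bookkeeping described above, with invented names and no D3D calls (slots index into the one big vertex buffer; deleting a shape just recycles its slots rather than resizing anything):

```c
#include <assert.h>

/* Hypothetical bookkeeping for the free-list scheme: vertex slots
   live in one big buffer; deleting a shape returns its slots to a
   free stack, and new shapes reuse them before touching fresh space. */
#define POOL_SIZE 1024

typedef struct {
    int free_stack[POOL_SIZE];
    int free_top;      /* slots returned by deleted shapes */
    int next_fresh;    /* never-used slots at the end of the buffer */
} VertexPool;

static void pool_init(VertexPool *p) {
    p->free_top = 0;
    p->next_fresh = 0;
}

/* Returns a slot index into the vertex buffer, or -1 if full. */
static int pool_alloc(VertexPool *p) {
    if (p->free_top > 0) return p->free_stack[--p->free_top];
    if (p->next_fresh < POOL_SIZE) return p->next_fresh++;
    return -1;
}

static void pool_free(VertexPool *p, int slot) {
    p->free_stack[p->free_top++] = slot;
}
```

The index buffer then only ever references live slots; the vertex data for "deleted" shapes stays in place but is never drawn, which is the "as good as deleted" point above.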
# ? Apr 22, 2009 03:37 |
|
I'm pretty sure he's not actually using an index buffer (at least, he didn't mention it), which is part of the problem. With an index buffer, you can get quite a bit more creative with how you use the vertex buffer(s), but without it, I think it would be a pretty classic case of premature optimization, since not all cases would benefit (e.g. highly dynamic data sets) and you'd need to do actual performance benchmarking to determine if it's worthwhile. Optimizing for its own sake might be fun, but unless you know where the bottlenecks in your application are (and "playing around with vertex buffers" does not suggest that to me), you're probably going to end up optimizing the hell out of something that doesn't really matter all that much.

EDIT: Also, keep in mind that even when using index buffers and a free list to do something like the heap, you're still going to run into some important differences from how malloc works. When implementing malloc, it is generally straightforward to request more memory from the operating system when necessary. As far as I know, vertex buffers in DirectX are non-resizable, so you'll run into problems if you ever fill up your vertex buffer. One solution would be to do something similar to what people generally do on consoles, which is to malloc a big ol' block of memory at once and only ever use that, but this isn't necessarily appropriate on PCs, where you don't even know a priori how much memory you have available. Avenging Dentist fucked around with this message at 04:12 on Apr 22, 2009 |
# ? Apr 22, 2009 04:02 |
|
I don't think anyone is seriously advocating just allocating one vertex buffer for an entire app these days; nobody would even do that on a console. You're going to waste far more resources micromanaging fences or stuck in contention than you'd waste changing the comparatively minuscule amount of state to go from a VB of one layout to another of the same layout. In my dynamic geometry system, I allocate blocks of 4k verts and indices. If I fill one of those up, I grab another (I keep an arbitrary number of free blocks around at all times, which is tuned for the particular app or scenario.) Not being able to resize isn't an issue.
|
# ? Apr 22, 2009 04:52 |
|
krysmopompas posted:I don't think anyone is seriously advocating just allocating one vertex buffer for an entire app these days, nobody would even do that on a console. But that's exactly what he was asking about : Dicky B posted:If I understand correctly it's best to be using one vertex buffer...
|
# ? Apr 22, 2009 05:01 |
|
Avenging Dentist posted:But that's exactly what he was asking about : So, whatever. Don't do that.
|
# ? Apr 22, 2009 06:00 |
|
That's fair. (God I hate typing, and yet I always get roped into providing actual advice.)
|
# ? Apr 22, 2009 06:07 |
|
Thanks guys, you basically cleared up my confusion. I do realise the optimisation is premature in this case; as I said, I'm just playing around. Somehow it never clicked that I could use an index buffer for this, which seems obvious now. All the stuff I've read about index buffers is just "oh wow look you can make a square with only four vertices!!!" (Book recommendations?)
|
# ? Apr 22, 2009 08:57 |
|
I've got an OpenGL graphics assignment problem where I draw 3D objects which may have concave faces, and one of the methods I have to implement draws the faces using the stencil buffer. Right now, my implementation only draws one face, and for the life of me I cannot figure out why. A for-loop iterates through a 3D object struct that contains the faces and passes them into this function: code:
|
# ? May 3, 2009 05:07 |
|
What's with objects still being visible in OpenGL even with all of my lights turned off / set to zero? Is there some gl option to say "yes I really want complete darkness"? Or am I just doing something dumb?
|
# ? May 4, 2009 06:49 |
|
Contero posted:What's with objects still being visible in OpenGL even with all of my lights turned off / set to zero? Is there some gl option to say "yes I really want complete darkness"? Or am I just doing something dumb? The default value of the ambient material property is not zero, you're probably seeing that.
|
# ? May 4, 2009 06:51 |
|
sex offendin Link posted:The default value of the ambient material property is not zero, you're probably seeing that. Shouldn't it not matter what the material is set to if ambient light value is set to 0?
|
# ? May 5, 2009 03:49 |
Contero posted:Shouldn't it not matter what the material is set to if ambient light value is set to 0? You can turn off all the lights in a room, glow in the dark paint still lights up. Sounds like you're adjusting a global ambient light instead of the ambient light for the individual materials.
|
|
# ? May 7, 2009 01:04 |
|
Jo posted:You can turn off all the lights in a room, glow in the dark paint still lights up. (bad analogy; that's emissive material)

Since indirect illumination is really complicated to actually simulate properly (as opposed to direct illumination, which has a simple analytical solution once you determine visibility), the 'ambient' component is used and is basically a hack to approximate secondary and greater light bounces off of other surfaces in the scene (as opposed to coming directly from the light source). In most cases, ambient light should just be treated exactly as direct illumination (using the same materials, etc) except that it is completely diffuse and directionless -- i.e. instead of N.L*cLight*cDiffuse + R.L*cLight*cSpecular it would just be cLight*cAmbient.

Glow in the dark paint is emissive, which means light that is emitted directly from the surface; it can be whatever color you want.
|
# ? May 7, 2009 15:26 |
|
Jo posted:You can turn off all the lights in a room, glow in the dark paint still lights up. If the equation for ambient light is light*material, then wouldn't only one of those need to be 0 to make it completely dark? I only have one light turned on in the scene. The only way for me to actually get everything to be black is to set its ambient value to -0.5 instead of 0. To me this means that there is probably some other automatic ambient light that's on by default in OpenGL (so you can actually see something if you're messing around and haven't set up any lights yet). I just want to know how to turn that light off. Setting the material for every object instead of just the light seems backwards to me.
|
# ? May 7, 2009 21:02 |
|
Two things. OpenGL has an ambient lighting state variable that is not tied to a light source. I think it is GL_LIGHT_MODEL_AMBIENT, but don't quote me on that. So make sure that is also set to 0. Also, make sure that you have not set any GL_EMISSION properties on any materials, since that will make them light up on their own. Also, (just checking :P) make sure that GL_LIGHTING is enabled, or else you'll just get flat shading based on the glColor3f properties of the geometry (or the textures, if you are using textures).
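To see why zeroing the light sources alone leaves things visible, here is a one-channel toy version of the fixed-function color sum (specular omitted for brevity; the 0.2 values used in the check below are my recollection of the spec's defaults for both global ambient and material ambient):

```c
#include <assert.h>
#include <math.h>

/* One channel of the fixed-function lighting sum, roughly:
   emissive + globalAmbient*matAmbient
            + lightAmbient*matAmbient
            + max(N.L, 0)*lightDiffuse*matDiffuse
   The globalAmbient term (GL_LIGHT_MODEL_AMBIENT) is added
   independently of every light source. */
static double shade(double emissive,
                    double global_ambient, double mat_ambient,
                    double light_ambient,  double light_diffuse,
                    double mat_diffuse,    double n_dot_l) {
    double ndl = n_dot_l > 0.0 ? n_dot_l : 0.0;
    return emissive
         + global_ambient * mat_ambient
         + light_ambient  * mat_ambient
         + ndl * light_diffuse * mat_diffuse;
}
```

With every light term zeroed but the default global ambient left alone, the result is still 0.2 * 0.2 = 0.04 per channel, a faint gray; zeroing GL_LIGHT_MODEL_AMBIENT is what finally gives black.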
|
# ? May 7, 2009 21:12 |
|
shodanjr_gr posted:OpenGL has a ambient lighting state variable that is not tied to a light source. I think it is GL_LIGHT_MODEL_AMBIENT, but don't quote me on that. So make sure that is also set to 0. I love you. This worked like a charm. Edit: I'm so happy I took a screenshot
|
# ? May 8, 2009 02:05 |
Hubis posted:(bad analogy; that's emissive material) I completely misread his post. Good that it's working though.
|
|
# ? May 8, 2009 03:33 |
|
I am writing a small volume renderer using OpenGL, ObjC and a bit of Cocoa. I've been testing it out with data sets from here: https://www.volvis.org I've tested the typical engine.raw, foot.raw and skull.raw volumes, and they render fine. However, I've tried some of the larger datasets that are available here (http://www.gris.uni-tuebingen.de/edu/areas/scivis/volren/datasets/new.html) and I seem to have trouble. The 3D texture gets created, but it seems really corrupted, as if data is shifted somehow, and I can't for the life of me figure out what's wrong. I'm using the 8-bit downsampled sets. I've tried setting the internal format of the texture to both GL_BYTE and GL_UNSIGNED_BYTE, but it won't fix the issue (I just get the expected "shift" in density values). To read the data I use the NSData class: code:
I know it's a bit of a long shot, but have any of you goons by any chance tried to render these sets?
|
# ? May 11, 2009 23:20 |
|
Just rendered the 8-Bit "Head MRT Angiography," using super-low sampling (slow computer ) and an almost random transfer function and got: You can see the basic structure (although it has no sampling, no isosurface lighting, etc). Here is how I load the data (code snippets): code:
Edit: And here's how I index into the 1-D array: code:
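The quoted snippets didn't survive, but the conventional layout for a tightly packed W x H x D 8-bit volume is x-fastest, which is also what glTexImage3D expects for a tightly packed buffer; a sketch of that indexing (my guess at the shape of the missing code, not the poster's):

```c
#include <assert.h>
#include <stddef.h>

/* Flat index into a w*h*d volume stored x-fastest, then y, then z. */
static size_t voxel_index(size_t x, size_t y, size_t z,
                          size_t w, size_t h) {
    return x + y * w + z * w * h;
}
```

If the assumed dimensions or bytes-per-voxel don't match the file (for example 12-bit data read as 8-bit), every row lands at the wrong offset and the texture looks sheared or shifted, which matches the corruption described a few posts up.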
|
# ? May 13, 2009 19:51 |
|
Thanks for the help, but it turns out I was a moron and was using the wrong datasets (I had the 12-bit and 8-bit ones mixed up on my hard drive). The 8-bit ones render mostly OK (maybe with the exception of a few slices of noise in backpack.raw).
|
# ? May 13, 2009 20:12 |
|
Are you making a volume renderer for the iPhone? That sounds pretty neat. I'm thinking about putting together a little raytracer for Android/the iPhone, just for the experience.
|
# ? May 13, 2009 22:02 |
|
TheSpook posted:Are you making a volume renderer for the iPhone? That sounds pretty neat. Well, I won't lie to you, I was thinking about it, but I seriously doubt that the iPhone has enough oomph to do even rudimentary real-time volume rendering. The CPU is clocked at 400MHz and the graphics chip is not programmable (so stuff like ray marching is out of the question). So I'm developing it using ObjC/Cocoa/GLUT on my white MacBook with a 9400M. A basic raytracer should be much more feasible.
|
# ? May 13, 2009 22:27 |
|
Hi guys. I'm having trouble implementing a "distance between ray and line segment" algorithm in 3D. Currently I'm finding the perpendicular ray between two infinite lines and calculating its length. This seems to work for infinite lines, but I'm not sure if it can be adapted for ray to line segment checks. There's an article here: http://homepage.univie.ac.at/Franz.Vesely/notes/hard_sticks/hst/hst.html But it operates on line segments, where I want to use a line segment (bounded on either end) and a ray (bounded at its origin only).
|
# ? May 13, 2009 22:29 |
|
shodanjr_gr posted:Well, I wont lie to you, I was thinking about it, but I seriously doubt that the iphone has enough omph to do even rudimentary real-time volume rendering. The CPU is clocked at 400mhz and the graphics chip is not programabe (so stuff like ray marching is out of the question). So I'm developing it using ObjC/Cocoa/Glut on my white macbook with a 9400m. I have high confidence you could get this working on an NVIDIA Ion system, which is pretty close to that. What I'd really be interested in is if you could get it working on a Tegra, but I don't know how you'd get a test platform for that.
|
# ? May 14, 2009 01:44 |
|
Or a PowerVR SGX, which is just on the cusp of appearing on store shelves if it isn't there already and is pretty much a lock for a future iPhone.
|
# ? May 14, 2009 01:49 |
|
Hubis posted:I have high confidence you could get this working on an NVIDIA Ion system That is very likely, considering that the GPU is the same as the one in the MacBook I'm currently working on.
|
# ? May 14, 2009 11:33 |
|
I'm having trouble doing mouse picking using the OpenGL selection buffer; for some reason whatever I'm doing wrong is resulting in it returning every object on screen, not just those in the area around the mouse cursor. I think I'm doing the gluPickMatrix call right, so I'm a bit confused. Here's the relevant code:code:
|
# ? May 15, 2009 18:46 |
|
|
Don't clear the matrix between gluPickMatrix and gluPerspective; the pick matrix is supposed to modify the current projection.

I'm having trouble trying to use multitexture and lighting together in OpenGL ES 1.1. When I set the first texture unit to modulate and the second to decal, it looks like the vertex colors are only modulating the first texture unit and the second is going straight through, only modulated by its own alpha. Is there some trick I can do with the texture combiner to calculate C = C1*A1*Cf + C0*(1-A1)*Cf or is this impossible (in a single pass) in the fixed-function pipeline? haveblue fucked around with this message at 18:59 on May 15, 2009 |
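One observation that may help with the combiner question: the requested blend factors as Cf * lerp(C0, C1, A1), i.e. a GL_INTERPOLATE-style mix of the two texture colors followed by a modulate with the vertex color. A quick numeric check of that identity (plain arithmetic, not GL code):

```c
#include <assert.h>
#include <math.h>

/* The requested combine, written two ways. Factoring shows it is the
   vertex color modulating a lerp of the two texture colors:
   C1*A1*Cf + C0*(1-A1)*Cf  ==  Cf * (C0 + A1*(C1 - C0)). */
static double combine_direct(double cf, double c0, double c1, double a1) {
    return c1 * a1 * cf + c0 * (1.0 - a1) * cf;
}

static double combine_lerp(double cf, double c0, double c1, double a1) {
    return cf * (c0 + a1 * (c1 - c0));
}
```

Whether the trailing modulate by Cf fits after the interpolate depends on how many combiner units the hardware exposes; the factoring at least shows only one extra multiply is needed beyond the lerp.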
# ? May 15, 2009 18:53 |