Small White Dragon
Nov 23, 2007

No relation.
Cross-posting from the iPhone thread; maybe you guys would know:

I want to use multitexturing (which I have not done before).

I set a color (via glColor), an RGBA texture (1), and an alpha texture (2). I want this behavior:
Color of fragment = glColor x color from RGBA texture
Alpha of fragment = glColor x alpha from RGBA texture x alpha from alpha texture

And then for the fragment to be drawn.

OpenGL initialization:
code:
//Setup OpenGL projection matrix
glMatrixMode(GL_PROJECTION);
glOrthof(0, glWidth, glHeight, 0, -1, 1); // flip y
glMatrixMode(GL_MODELVIEW);
        
//Setup initial OpenGL states
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
Painting:
code:
/* Setup view and clear to black */
glMatrixMode(GL_MODELVIEW);	
glLoadIdentity();
	
glClearColor(r, g, b, a);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* Paint */
glColor4f(1,1,1, 1);
glVertexPointer(3, GL_FLOAT, 0, vertices);

glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture0);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);

glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture1);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);

glDrawArrays(GL_TRIANGLES, 0, 6*quads);
Instead, somehow pixels that should be opaque end up semi-transparent, and the ones that should be transparent are just black :confused:

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Small White Dragon posted:

Cross-posting from the iPhone thread; maybe you guys would know:

I want to use multitexturing (which I have not done before).

I set a color (via glColor), an RGBA texture (1), and an alpha texture (2). I want this behavior:
Color of fragment = glColor x color from RGBA texture
Alpha of fragment = glColor x alpha from RGBA texture x alpha from alpha texture
Set TEXTURE_ENV_MODE on texture 1 to MODULATE.

Make sure the alpha texture is uploaded with ALPHA as the internal representation (when you call TexImage2D), not LUMINANCE or RGB. Using MODULATE with a LUMINANCE/RGB texture will cause the color to be multiplied by the texture and leave the alpha channel alone (which sounds like what you're experiencing); using MODULATE with an ALPHA texture will cause the alpha channel to be multiplied by the texture and leave the color alone.
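A rough sketch of that setup (the texture IDs, dimensions, and pixel pointer are made-up names, not from the original code):
code:
// Unit 0: RGBA texture, multiplied by the incoming glColor
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, rgbaTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

// Unit 1: the mask, uploaded as ALPHA so MODULATE only touches the alpha channel
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, alphaTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, alphaPixels);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);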

OneEightHundred fucked around with this message at 10:09 on Mar 31, 2009

zzz
May 10, 2008
Plus, you need to call glEnable(GL_TEXTURE_2D) separately for each texture unit you want to use (after glActiveTexture)
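i.e. something along these lines:
code:
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);    // enable texturing on unit 0

glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);    // ...and separately on unit 1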

PnP Bios
Oct 24, 2005
optional; no images are allowed, only text

zzz posted:

Plus, you need to call glEnable(GL_TEXTURE_2D) separately for each texture unit you want to use (after glActiveTexture)

I don't think that's right; you should only need to turn on the texture state once.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

PnP Bios posted:

I don't think that's right; you should only need to turn on the texture state once.
Different layers may use different texture types as sources, so each one requires texture enabling.

haveblue
Aug 15, 2005



Toilet Rascal
Also, you stop doing multitexture by disabling GL_TEXTURE_* on all the texture units except 0, so it has to be per-unit.

MrPeanut
Mar 11, 2005

My word!
I suppose this is more of an annoying 2D question, but I need to figure this thing out for my lab, hopefully by tonight. (For OpenGL.)

I'm doing a very basic drawing program that essentially takes in 2 points via left click, then draws the appropriate primitive based on a selection from a menu.

For some reason I have been having some weird problems with the display function: it displays everything properly according to the coordinate system I've set up with the glOrtho() command, but every time a glutPostRedisplay() is called (which I've been using to redisplay the objects after a background color change), the viewport seems to zoom in on a corner of the render space.

I'm not really sure what's causing this to happen, nor am I sure I'm using the Ortho command in the right place (I have it in the display function right now), but this is making it so that I cannot make any changes to the renders, and that will be a fail =(

code:
void display() 
{ 
  glClear(GL_COLOR_BUFFER_BIT);
  glOrtho(0.0,1.0,0.0,1.0,0.0, 50.0);			//dunno why but this has to go here, won't work in main
								// >_<
  index = 0;
  while(index < MAX_OBJECTS && prim[index].type != NONE)
  {
	  /*printf("gently caress this mother fucker %d\n", index);
	  printf("Object Stats:\n Shape: %d\n Color(RGB): %d%d%d\n Fill: %d\n Points(X1,Y1,X2,Y2): %f, %f, %f, %f\n", 
		  prim[index].type, prim[index].r, prim[index].g, prim[index].b, prim[index].fill, 
		  prim[index].pnts[0][0], prim[index].pnts[0][1], prim[index].pnts[1][0], prim[index].pnts[1][1]);*/
	glColor3f(prim[index].r,prim[index].g,prim[index].b);
	if(prim[index].type == RECTANGLE)
	{
		if(prim[index].fill)
		{
			glBegin(GL_POLYGON);
			 glVertex3f(prim[index].pnts[0][0],
						prim[index].pnts[0][1],
						-10);
			 glVertex3f(prim[index].pnts[0][0],
						prim[index].pnts[1][1],
						-10);
			 glVertex3f(prim[index].pnts[1][0],
						prim[index].pnts[1][1],
						-10);
			 glVertex3f(prim[index].pnts[1][0],
			 			prim[index].pnts[0][1],
						-10);
			glEnd();
		}
		else
		{
			glBegin(GL_LINE_LOOP);
			 glVertex3f(prim[index].pnts[0][0],
						prim[index].pnts[0][1],
						-10);
			 glVertex3f(prim[index].pnts[0][0],
						prim[index].pnts[1][1],
						-10);
			 glVertex3f(prim[index].pnts[1][0],
						prim[index].pnts[1][1],
						-10);
			 glVertex3f(prim[index].pnts[1][0],
			 			prim[index].pnts[0][1],
						-10);
			glEnd();
		}
	}
	else if(prim[index].type == LINE)
	{
		glBegin(GL_LINES);
		 glVertex3f(prim[index].pnts[0][0],
						prim[index].pnts[0][1],
						-10);
		 glVertex3f(prim[index].pnts[1][0],
						prim[index].pnts[1][1],
						-10);
		glEnd();
	}
	else if(prim[index].type == CIRCLE)
	{
		float r = sqrt(pow((prim[index].pnts[0][0] - prim[index].pnts[1][0]), 2)+ 
					   pow((prim[index].pnts[0][1] - prim[index].pnts[1][1]), 2));
		float h = (prim[index].pnts[0][0] + prim[index].pnts[1][0])/2;
		float k = (prim[index].pnts[0][1] + prim[index].pnts[1][1])/2;
		if(prim[index].fill)
		{
			glBegin(GL_TRIANGLE_FAN);
			 glVertex3f(h, k, -10);
				for (int i = 0;i < 360;i++)
				{
					glVertex3f(h + sin(i*(3.14159/180)) * r, k + cos(i*(3.14159/180)) * r, -10); // degrees -> radians
				}
			glEnd();
		}
		else
		{
			glBegin(GL_LINE_LOOP);
			for(int i = 0; i < 360; i++)
			{
				glVertex3f(h + sin(i*(3.14159/180)) * r, k + cos(i*(3.14159/180)) * r, -10); // degrees -> radians
			}
			glEnd();	// was missing; every glBegin needs a matching glEnd
		}
	}
	glFlush();	// this 'flushes' the command buffer
	index++;
  }
} 
This is the entirety of my display function. The menu function that triggers a redisplay calls glClearColor() and then the redisplay, if that makes a difference...

I'd really appreciate some help, because I'm stumped on how to get this working right now.

Here is the code in its entirety if you want to see it run.

http://www.megaupload.com/?d=AE8HIC5S

shodanjr_gr
Nov 20, 2007
Do a glLoadIdentity(); call before the glOrtho call.

The active matrices do not reset on their own ;).
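In other words, something like this at the top of display() (assuming the ortho projection is meant to live on the projection matrix):
code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();                         // reset, so glOrtho doesn't accumulate on every redisplay
glOrtho(0.0, 1.0, 0.0, 1.0, 0.0, 50.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();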

MrPeanut
Mar 11, 2005

My word!
I love you

Dicky B
Mar 23, 2004

Direct3D beginner here. I've been playing around with vertex buffers but I haven't been able to find any good examples so I'm making a lot of assumptions about how I should be doing things.

I'm working on a simple application that just renders some different shapes to the screen. To do this I create one big vertex buffer and add all the vertices for each shape to it, keeping a list of all the shapes I've added (number of vertices, vertex size and primitive type), so when it comes to rendering everything I can go through the list and render each shape one by one, calculating the number of bytes to offset when reading from the buffer. I hope that makes sense.

Is this a good way of doing this or am I being retarded? If I understand correctly it's best to be using one vertex buffer (as opposed to a separate buffer for each shape). If I want to start animating things, the important thing is to minimize the number of locks/unlocks each frame, so I would send the vertices for all the shapes in one batch.

What if I want to dynamically change the number of shapes? I would need to also change the size of the vertex buffer. Is there a way of doing this or should I just release it and create a new one whenever I need more/less space?

If somebody could let me know if I'm on the right track or if I'm grossly misunderstanding anything it would be much appreciated! :)

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
Premature optimization is the root of all evil.

haveblue
Aug 15, 2005



Toilet Rascal

Dicky B posted:

Direct3D beginner here. I've been playing around with vertex buffers but I haven't been able to find any good examples so I'm making a lot of assumptions about how I should be doing things.

I'm working on a simple application that just renders some different shapes to the screen. To do this I create one big vertex buffer and add all the vertices for each shape to it, keeping a list of all the shapes I've added (number of vertices, vertex size and primitive type), so when it comes to rendering everything I can go through the list and render each shape one by one, calculating the number of bytes to offset when reading from the buffer. I hope that makes sense.

Is this a good way of doing this or am I being retarded? If I understand correctly it's best to be using one vertex buffer (as opposed to a separate buffer for each shape). If I want to start animating things, the important thing is to minimize the number of locks/unlocks each frame, so I would send the vertices for all the shapes in one batch.

What if I want to dynamically change the number of shapes? I would need to also change the size of the vertex buffer. Is there a way of doing this or should I just release it and create a new one whenever I need more/less space?

If somebody could let me know if I'm on the right track or if I'm grossly misunderstanding anything it would be much appreciated! :)

How many objects are we talking about here? You won't realize significant savings unless you're eliminating hundreds or thousands of locks per frame.

And it's going to become a huge headache when you want to be able to dynamically add and remove objects from the scene.

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

Premature optimization is the root of all evil.
Belated pessimization is the leaf of no good.

Your approach is basically good - you may want to draw more than one shape per draw call though. There is no reason you need to change the size of the vertex buffer or copy things around in it; modify the index buffer instead. If you aren't drawing any indices that refer to a vertex, then it's as good as deleted, right? ;)

Put the vertex indices of the shape you're deleting into a free list, then reuse them when creating a new shape. It won't be the greatest on the post-transform cache, but with this kind of thing you tend to be limited on the CPU side of things anyhow.

As far as the index buffer goes, only modify it when you create a new shape - skip over the shape indices you've deleted. You'll need to use multiple draw calls to jump around the 'holes', but you should be filling those holes asap with new shapes. If the list doesn't change that frequently, you may want to move shapes around in the index list to defragment it.

It's a similar strategy to what a lot of memory allocators use, so if you google around for literature along those lines, you might get some other ideas.
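A bare-bones sketch of that free-list bookkeeping (fixed-size slots, all names made up; the actual buffer writes and draw calls are left out):
code:
#include <vector>
#include <cstdint>

static const uint32_t kVertsPerSlot = 64;   // every shape gets a fixed-size slot in the big VB

struct SlotAllocator {
    std::vector<uint32_t> freeSlots;        // slots whose indices are no longer referenced
    uint32_t nextSlot = 0;                  // first never-used slot

    uint32_t Alloc() {
        if (!freeSlots.empty()) {           // reuse a hole left by a deleted shape
            uint32_t s = freeSlots.back();
            freeSlots.pop_back();
            return s;
        }
        return nextSlot++;                  // otherwise take fresh space at the end
    }
    // "Deleting" a shape just means its indices stop being drawn and the slot is recycled.
    void Free(uint32_t slot) { freeSlots.push_back(slot); }
};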

krysmopompas fucked around with this message at 03:44 on Apr 22, 2009

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
I'm pretty sure he's not actually using an index buffer (at least, he didn't mention it), which is part of the problem. With an index buffer, you can get quite a bit more creative with how you use the vertex buffer(s), but without it, I think it would be a pretty classic case of premature optimization, since not all cases would benefit (e.g. highly dynamic data sets) and you'd need to do actual performance benchmarking to determine if it's worthwhile.

Optimizing for its own sake might be fun, but unless you know where the bottlenecks in your application are (and "playing around with vertex buffers" does not suggest that to me), you're probably going to end up optimizing the hell out of something that doesn't really matter all that much.

EDIT: Also, keep in mind that even when using index buffers and a free list to do something like the heap, you're still going to run into some important differences from how malloc works. When implementing malloc, it is generally straightforward to request more memory from the operating system when necessary. As far as I know, VBOs in DirectX are non-resizable, so you'll run into problems if you ever fill up your vertex buffer. One solution would be to do something similar to what people generally do on consoles, which is to malloc a big ol' block of memory at once and only ever use that, but this isn't necessarily appropriate on PCs, where you don't even know a priori how much memory you have available.

Avenging Dentist fucked around with this message at 04:12 on Apr 22, 2009

krysmopompas
Jan 17, 2004
hi
I don't think anyone is seriously advocating just allocating one vertex buffer for an entire app these days, nobody would even do that on a console. You're going to waste far more resources micromanaging fences or getting stuck in contention than you'd waste changing the comparatively minuscule amount of state needed to go from one VB to another of the same layout.

In my dynamic geometry system, I allocate blocks of 4k verts and indices. If I fill one of those up, I grab another (I keep an arbitrary number of free blocks around at all times, which is tuned for the particular app or scenario.)

Not being able to resize isn't an issue.
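For what it's worth, the block idea sketched out (sizes and names are made up, and the actual buffer creation is stubbed):
code:
#include <vector>
#include <cstddef>

// Instead of resizing one vertex buffer, keep a pool of fixed-size blocks and
// grab a fresh one when the current block fills up.
struct GeoBlock {
    void*  vb;          // stand-in for the real vertex buffer handle
    size_t capacity;    // e.g. 4096 verts
    size_t used;
};

struct GeoPool {
    std::vector<GeoBlock> blocks;

    // Return a block with room for 'count' more vertices, creating one if needed.
    GeoBlock& Acquire(size_t count, size_t blockSize = 4096) {
        for (GeoBlock& b : blocks)
            if (b.capacity - b.used >= count) return b;
        blocks.push_back(GeoBlock{ nullptr /* create the real buffer here */, blockSize, 0 });
        return blocks.back();
    }
};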

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

krysmopompas posted:

I don't think anyone is seriously advocating just allocating one vertex buffer for an entire app these days, nobody would even do that on a console.

But that's exactly what he was asking about :

Dicky B posted:

If I understand correctly it's best to be using one vertex buffer...

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

But that's exactly what he was asking about :
I interpreted that as "one vertex buffer for this collection of dynamic objects that happen to share the same format"

So, whatever. Don't do that.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
That's fair.

(God I hate typing, and yet I always get roped into providing actual advice.)

Dicky B
Mar 23, 2004

Thanks guys, you basically cleared up my confusion. I do realise the optimisation is premature in this case; as I said, I'm just playing around. Somehow it never clicked that I could use an index buffer for this, which seems obvious now. All the stuff I've read about index buffers is just "oh wow look you can make a square with only four vertices!!!"

(Book recommendations?)

ShinAli
May 2, 2003

The Kid better watch his step.
I've got an OpenGL graphics assignment where I draw 3D objects which may have concave faces, and one of the methods I have to use is drawing the faces via the stencil buffer.

Right now, my implementation only draws one face, and for the life of me I cannot figure out why. A for-loop will iterate through a 3D object struct that contains the faces and pass them into this function:

code:
void gldisplay2(struct poly * p)
{
   struct poly *q;
   int i,j;
   
   glEnable(GL_STENCIL_TEST);
   glStencilFunc(GL_ALWAYS, 0x1, 0x1);
   glStencilOp(GL_INVERT, GL_INVERT, GL_INVERT);
   glClear(GL_STENCIL_BUFFER_BIT);
   glClearStencil(0x0);
   glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
   struct vector ori;
   struct vector pt1;
   struct vector pt2;
   q = p->f[0].p;
   ori.x = q->origin.x + q->f[0].t * q->d.x;
   ori.y = q->origin.y + q->f[0].t * q->d.y;
   ori.z = q->origin.z + q->f[0].t * q->d.z;
   // draw the triangles using GL_INVERT to get a stencil
   // of the face
   for(i = 1; i < p->curnum; i++) {
	q = p->f[i].p;
        pt1.x = q->origin.x + q->f[0].t * q->d.x;
        pt1.y = q->origin.y + q->f[0].t * q->d.y;
        pt1.z = q->origin.z + q->f[0].t * q->d.z;
	pt2.x = q->origin.x + q->f[1].t * q->d.x;
        pt2.y = q->origin.y + q->f[1].t * q->d.y;
        pt2.z = q->origin.z + q->f[1].t * q->d.z;
	if(!(PTEQU(ori, pt1)) && !(PTEQU(ori, pt2))) {
		//glNormal3f(p->n.x, p->n.y, p->n.z);
		glBegin(GL_POLYGON);
		glVertex3f(ori.x, ori.y, ori.z);
		glVertex3f(pt1.x, pt1.y, pt1.z);
		glVertex3f(pt2.x, pt2.y, pt2.z);
		glEnd();
	}
   }
   // redraw the triangles via color buffer
   glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   glStencilFunc(GL_EQUAL, 0x1, 0x1);
   glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
   for(i = 1; i < p->curnum; i++) {
	q = p->f[i].p;
        pt1.x = q->origin.x + q->f[0].t * q->d.x;
        pt1.y = q->origin.y + q->f[0].t * q->d.y;
        pt1.z = q->origin.z + q->f[0].t * q->d.z;
	pt2.x = q->origin.x + q->f[1].t * q->d.x;
        pt2.y = q->origin.y + q->f[1].t * q->d.y;
        pt2.z = q->origin.z + q->f[1].t * q->d.z;
	if(!(PTEQU(ori, pt1)) && !(PTEQU(ori, pt2))) {
		//glNormal3f(p->n.x, p->n.y, p->n.z);
		glBegin(GL_POLYGON);
		glVertex3f(ori.x, ori.y, ori.z);
		glVertex3f(pt1.x, pt1.y, pt1.z);
		glVertex3f(pt2.x, pt2.y, pt2.z);
		glEnd();
	}
   }
   glDisable(GL_STENCIL_TEST);
}
I know it's really inefficient, but I'm just trying to make this damned thing work. It's pretty much the exact method that the Red Book talks about in chapter 14.

Contero
Mar 28, 2004

What's with objects still being visible in OpenGL even with all of my lights turned off / set to zero? Is there some gl option to say "yes I really want complete darkness"? Or am I just doing something dumb?

haveblue
Aug 15, 2005



Toilet Rascal

Contero posted:

What's with objects still being visible in OpenGL even with all of my lights turned off / set to zero? Is there some gl option to say "yes I really want complete darkness"? Or am I just doing something dumb?

The default value of the ambient material property is not zero; you're probably seeing that.

Contero
Mar 28, 2004

sex offendin Link posted:

The default value of the ambient material property is not zero; you're probably seeing that.

Shouldn't it not matter what the material is set to if the ambient light value is set to 0?

Jo
Jan 24, 2005

:allears:
Soiled Meat

Contero posted:

Shouldn't it not matter what the material is set to if the ambient light value is set to 0?

You can turn off all the lights in a room; glow-in-the-dark paint still lights up.

Sounds like you're adjusting a global ambient light instead of the ambient light for the individual materials.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Jo posted:

You can turn off all the lights in a room; glow-in-the-dark paint still lights up.

Sounds like you're adjusting a global ambient light instead of the ambient light for the individual materials.

(bad analogy; that's emissive material)

Since indirect illumination is really complicated to actually simulate properly (as opposed to direct illumination, which has a simple analytical solution once you determine visibility), the 'ambient' component is used and is basically a hack to approximate secondary and greater light bounces off of other surfaces in the scene (as opposed to coming directly from the light source). In most cases, ambient light should just be treated exactly as direct illumination (using the same materials, etc) except that it is completely diffuse and directionless -- i.e. instead of N.L*cLight*cDiffuse+R.L*cLight*cSpecular it would just be cLight*cAmbient. Glow in the dark paint is emissive, which means light that is emitted directly from the surface; it can be whatever color you want.
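Put as (rough) code, per color channel, the terms look something like this (all names are made up, and attenuation/spotlight factors are left out):
code:
#include <algorithm>
#include <cmath>

// emissive needs no light at all ("glow in the dark"); ambient is the directionless bounce hack.
float ShadeChannel(float NdotL, float RdotV, float shininess,
                   float emissive, float globalAmbient,
                   float lightAmbient, float lightDiffuse, float lightSpecular,
                   float matAmbient, float matDiffuse, float matSpecular)
{
    float ambient  = (globalAmbient + lightAmbient) * matAmbient;
    float diffuse  = std::max(NdotL, 0.0f) * lightDiffuse * matDiffuse;
    float specular = std::pow(std::max(RdotV, 0.0f), shininess) * lightSpecular * matSpecular;
    return emissive + ambient + diffuse + specular;
}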

Contero
Mar 28, 2004

Jo posted:

You can turn off all the lights in a room; glow-in-the-dark paint still lights up.

Sounds like you're adjusting a global ambient light instead of the ambient light for the individual materials.

If the equation for ambient light is light*material, then wouldn't only one of those need to be 0 to make it completely dark?

I only have one light turned on in the scene. The only way for me to actually get everything to be black is to set its ambient value to -0.5 instead of 0. To me this means that there is probably some other automatic ambient light that's on by default in OpenGL (so you can actually see something if you're messing around and haven't set up any lights yet). I just want to know how to turn that light off. Setting the material for every object instead of just the light seems backwards to me.

shodanjr_gr
Nov 20, 2007
Two things. OpenGL has an ambient lighting state variable that is not tied to a light source. I think it is GL_LIGHT_MODEL_AMBIENT, but don't quote me on that. So make sure that is also set to 0.

Also, make sure that you have not set any GL_EMISSION properties to any materials, since that will make them light up on their own.

Also, (just checking :P) make sure that GL_LIGHTING is set to on, else you'll just get flat shading based on the glColor3f properties of the geometry (or the textures, if you are using textures).
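For the first point, that would be something along these lines:
code:
GLfloat noAmbient[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, noAmbient);   // kill the global ambient term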

Contero
Mar 28, 2004

shodanjr_gr posted:

OpenGL has an ambient lighting state variable that is not tied to a light source. I think it is GL_LIGHT_MODEL_AMBIENT, but don't quote me on that. So make sure that is also set to 0.

I love you. This worked like a charm.

Edit: I'm so happy I took a screenshot

Jo
Jan 24, 2005

:allears:
Soiled Meat

Hubis posted:

(bad analogy; that's emissive material)

:doh: I completely misread his post. Good that it's working though.

shodanjr_gr
Nov 20, 2007
I am writing a small volume renderer using OpenGL, ObjC and a bit of Cocoa. I've been testing it out with data sets from here: https://www.volvis.org

I've tested the typical engine.raw, foot.raw and skull.raw volumes, and they render fine.

However, I've tried some of the larger datasets that are available here (http://www.gris.uni-tuebingen.de/edu/areas/scivis/volren/datasets/new.html) and I seem to have trouble. The 3D texture gets created, but it seems really corrupted, as if data is shifted somehow, but I can't for the life of me figure out what's wrong....


I'm using the 8-bit downsampled sets. I've tried setting the internal format of the texture to both GL_BYTE and GL_UNSIGNED_BYTE, but it won't fix the issue (I just get the expected "shift" in density values).

To read the data I use the NSData class:

code:
NSData * myData = [[NSData alloc] initWithContentsOfFile:filenameIn options:0 error:myErrors];
and then use the bytes selector to get a pointer to the byte array.


I know it's a bit of a long shot, but have any of you goons by any chance tried to render these sets?

TheSpook
Aug 21, 2007
Spooky!
Just rendered the 8-Bit "Head MRT Angiography," using super-low sampling (slow computer :() and an almost random transfer function and got:



You can see the basic structure (although it has no sampling, no isosurface lighting, etc). Here is how I load the data (code snippets):

code:
unsigned char *volData;
...
fread (*data, 1, size, fp);
Unsigned char = 8 bits of unsigned goodness.


Edit: And here's how I index into the 1-D array:
code:
#define VOLUMEINDEX(x,y,z) (((z)*(DATAX)*(DATAY)) + ((y)*(DATAX)) + (x))
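For reference, the upload step for an 8-bit raw volume would look roughly like this (volTex is a made-up texture ID; DATAX/DATAY/DATAZ and volData as above):
code:
glBindTexture(GL_TEXTURE_3D, volTex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             // raw 8-bit data, no row padding
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE,       // one 8-bit channel per voxel
             DATAX, DATAY, DATAZ, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, volData);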

shodanjr_gr
Nov 20, 2007
Thanks for the help, but it turns out I was a moron and was using the wrong datasets :haw: (had the 12-bit and 8-bit mixed up on my hard drive).

The 8-bit ones render mostly OK (maybe with the exception of a few slices of noise in backpack.raw).

TheSpook
Aug 21, 2007
Spooky!
Are you making a volume renderer for the iPhone? That sounds pretty neat.

I'm thinking about putting together a little raytracer for Android/the iPhone, just for the experience.

shodanjr_gr
Nov 20, 2007

TheSpook posted:

Are you making a volume renderer for the iPhone? That sounds pretty neat.

I'm thinking about putting together a little raytracer for Android/the iPhone, just for the experience.

Well, I won't lie to you, I was thinking about it, but I seriously doubt that the iPhone has enough oomph to do even rudimentary real-time volume rendering. The CPU is clocked at 400MHz and the graphics chip is not programmable (so stuff like ray marching is out of the question). So I'm developing it using ObjC/Cocoa/GLUT on my white MacBook with a 9400M.

A basic raytracer should be more feasible.

Steve Montago
Nov 13, 2004
Trentos the Freshmaker
Hi guys. I'm having trouble implementing a "distance between ray and line segment" algorithm in 3D. Currently I'm finding the perpendicular ray between two infinite lines and calculating its length. This seems to work for infinite lines, but I'm not sure if it can be adapted for ray to line segment checks. There's an article here:

http://homepage.univie.ac.at/Franz.Vesely/notes/hard_sticks/hst/hst.html

But it operates on line segments, where I want to use a line segment (bounded on either end) and a ray (bounded at its origin only).
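One common way to adapt it (essentially the segment/segment closest-point routine with the ray's parameter clamped only at zero; this is just a sketch, with made-up Vec3 helpers):
code:
#include <math.h>

typedef struct { float x, y, z; } Vec3;
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3  Mad(Vec3 a, Vec3 d, float t) { Vec3 r = { a.x+d.x*t, a.y+d.y*t, a.z+d.z*t }; return r; }

/* Distance between a ray (org + s*dir, s >= 0) and a segment [q0, q1].
   Assumes dir and the segment both have nonzero length. */
float RaySegmentDistance(Vec3 org, Vec3 dir, Vec3 q0, Vec3 q1)
{
    Vec3 d2 = Sub(q1, q0);                 /* segment direction */
    Vec3 r  = Sub(org, q0);
    float a = Dot(dir, dir), e = Dot(d2, d2), f = Dot(d2, r);
    float c = Dot(dir, r),   b = Dot(dir, d2);
    float denom = a*e - b*b;               /* zero when ray and segment are parallel */

    float s = (denom != 0.0f) ? (b*f - c*e) / denom : 0.0f;
    if (s < 0.0f) s = 0.0f;                /* the ray is only clamped at its origin */

    float t = (b*s + f) / e;               /* closest parameter on the segment's line */
    if (t < 0.0f)      { t = 0.0f; s = -c / a;      if (s < 0.0f) s = 0.0f; }
    else if (t > 1.0f) { t = 1.0f; s = (b - c) / a; if (s < 0.0f) s = 0.0f; }

    Vec3 d = Sub(Mad(org, dir, s), Mad(q0, d2, t));
    return sqrtf(Dot(d, d));
}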

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

shodanjr_gr posted:

Well, I won't lie to you, I was thinking about it, but I seriously doubt that the iPhone has enough oomph to do even rudimentary real-time volume rendering. The CPU is clocked at 400MHz and the graphics chip is not programmable (so stuff like ray marching is out of the question). So I'm developing it using ObjC/Cocoa/GLUT on my white MacBook with a 9400M.

A basic raytracer should be more feasible.

I have high confidence you could get this working on an NVIDIA Ion system, which is pretty close to that. What I'd really be interested in is if you could get it working on a Tegra, but I don't know how you'd get a test platform for that.

haveblue
Aug 15, 2005



Toilet Rascal
Or a PowerVR SGX, which is just on the cusp of appearing on store shelves if it isn't there already and is pretty much a lock for a future iPhone.

shodanjr_gr
Nov 20, 2007

Hubis posted:

I have high confidence you could get this working on an NVIDIA Ion system

That is very likely, considering that the GPU is the same as the one in the MacBook I'm currently working on.

brian
Sep 11, 2001
I obtained this title through beard tax.

I'm having trouble doing mouse picking using the OpenGL selection buffer. For some reason, whatever I'm doing wrong results in it returning every object on screen rather than just the ones in the area around the mouse cursor. I think I'm doing the gluPickMatrix call right, so I'm a bit confused. Here's the relevant code:

code:
void GPHMain::GLSelectionRender(int x, int y)
{
	GLuint selectionBuffer[64];
	GLint viewport[4];

	glSelectBuffer(64,selectionBuffer);
	glRenderMode(GL_SELECT);

	glMatrixMode(GL_PROJECTION);
	glPushMatrix();
	glLoadIdentity();

	glGetIntegerv(GL_VIEWPORT,viewport);
	gluPickMatrix(x,viewport[3]-y, 2,2,viewport);
	glLoadIdentity();
	gluPerspective(60, 1.0, 0.0001, 1000.0);

	glMatrixMode( GL_MODELVIEW );
	glLoadIdentity();
	m_pCamera->ViewScene();

	glInitNames();	

	for(int i = 0; i < m_vObjects.size(); i++)
	{
		glPushName(m_vObjects[i]->GetName().getHash());
		m_vObjects[i]->Render();
		glPopName();
	}

	glMatrixMode(GL_PROJECTION);
	glPopMatrix();

	glMatrixMode(GL_MODELVIEW);

	glFlush();

	int numHits = glRenderMode(GL_RENDER);

	GLuint names, *ptr, minZ,*ptrNames, numberOfNames;

	ptr = (GLuint *)selectionBuffer;
	minZ = 0xffffffff;
	for (int i = 0; i < numHits; i++) {	
		names = *ptr;
		ptr++;
		if (*ptr < minZ) {
			numberOfNames = names;
			minZ = *ptr;
			ptrNames = ptr+2;
		}

		IPhysicsObject* pObj = GetPhysObject(*(ptr+2));

		ptr += names+2;
	}

	IPhysicsObject* pObj = GetPhysObject(*ptrNames);
	if(pObj)
	{
		if(m_pSelectedObject)
			m_pSelectedObject->SetRenderState(eNormal);
		m_pSelectedObject = pObj;
		m_pSelectedObject->SetRenderState(eSelected);
	}
}
Any help would be fabbo; ask if any more code is needed to work out what's wrong.

haveblue
Aug 15, 2005



Toilet Rascal
Don't clear the matrix between gluPickMatrix and gluPerspective; the pick matrix is supposed to modify the current projection.
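i.e. in GLSelectionRender the projection setup should read roughly:
code:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();

glGetIntegerv(GL_VIEWPORT, viewport);
gluPickMatrix(x, viewport[3] - y, 2, 2, viewport);
// no glLoadIdentity() here; gluPerspective has to multiply onto the pick matrix
gluPerspective(60, 1.0, 0.0001, 1000.0);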

I'm having trouble trying to use multitexture and lighting together in OpenGL ES 1.1. When I set the first texture unit to modulate and the second to decal, it looks like the vertex colors are only modulating the first texture unit and the second is going straight through, only modulated by its own alpha. Is there some trick I can do with the texture combiner to calculate C = C1*A1*Cf + C0*(1-A1)*Cf or is this impossible (in a single pass) in the fixed-function pipeline?

haveblue fucked around with this message at 18:59 on May 15, 2009
