heeen
May 14, 2005

CAT NEVER STOPS

Entheogen posted:

Also, another question: has anybody had any success using geometry shaders in OpenGL? I believe NVIDIA made an extension for them, but I'm not quite sure how to use them. Does it still use GLSL?

Yes and yes. They're quite simple to use. Here's something I googled for you:
http://cirl.missouri.edu/gpu/glsl_lessons/glsl_geometry_shader/index.html


heeen
May 14, 2005

CAT NEVER STOPS

Mithaldu posted:

I'm, for now, trying to display a grid consisting of cubes.

Subdivide the world into chunks of, say, 32x32xdepth cubes.
Collect all cubes of equal appearance in each chunk into a display list/VBO.

Cull chunks against the view frustum, and only render the visible chunks.
Then, for each cube type:
render all cubes of the current type from all visible chunks sequentially.
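To make that concrete, here's a minimal C sketch of the type-outer/chunk-inner loop. The Chunk struct, the visibility flag, and the batch counter are all made up for illustration: they stand in for real per-type display lists/VBOs and a real frustum test.

```c
#define NUM_TYPES 4

/* A chunk knows whether it survived the frustum test and how many cubes
   of each type it holds (stand-ins for per-type display lists/VBOs). */
typedef struct {
    int visible;
    int cubes_of_type[NUM_TYPES];
} Chunk;

/* Outer loop over cube types, inner loop over visible chunks, so the
   per-type state (texture, material) is bound only once per frame.
   Returns the number of draw batches issued, which makes the sketch
   checkable without a GL context. */
int render_chunks(const Chunk *chunks, int n)
{
    int batches = 0;
    for (int type = 0; type < NUM_TYPES; ++type) {
        /* bind the state for this cube type once here */
        for (int c = 0; c < n; ++c) {
            if (chunks[c].visible && chunks[c].cubes_of_type[type] > 0)
                ++batches;   /* glCallList/glDrawElements in real code */
        }
    }
    return batches;
}
```

The point of the loop order is that state changes scale with the number of cube types, not with the number of cubes.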

heeen
May 14, 2005

CAT NEVER STOPS
Has anyone ever had to deal with resuming rendering after Windows comes back from standby or even hibernation?
I have the source for a foobar extension that stops working after I resume from standby. How can I query whether standby has occurred, so I can reclaim the device context or render context?

edit: using OpenGL

heeen
May 14, 2005

CAT NEVER STOPS
Why not make it all triangles and save yourself a state change and an additional vertex and index buffer?

heeen
May 14, 2005

CAT NEVER STOPS
What GPU are you running on? I have no problems outputting dozens of vertices.
There are some OpenGL variables you can query to determine how much data you can output. Keep in mind that vertex attributes count against these limits, so you can output fewer vertices if you assign a hundred varyings to each one.

code:
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES_EXT, &maxGeometryShaderOutVertices);
glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS_EXT, &maxGeometryTotalOutComponents);
Also you should set the maximum number of vertices your shader will output via:
code:
  glProgramParameteriEXT(ProgramObject, GL_GEOMETRY_VERTICES_OUT_EXT, _nVerticesOut);
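For what it's worth, the two limits interact: the total-components budget can cap you below the advertised max vertex count once you add lots of varyings. A toy C helper to compute the effective budget (the numbers in the example are illustrative, not real driver values):

```c
/* The driver caps both the emitted vertex count and the total number of
   output components per geometry shader invocation; the effective vertex
   budget is whichever limit you hit first. */
int effective_max_vertices(int max_out_vertices,
                           int max_total_components,
                           int components_per_vertex)
{
    int by_components = max_total_components / components_per_vertex;
    return by_components < max_out_vertices ? by_components
                                            : max_out_vertices;
}
```

So with a 1024-component budget, going from 4 to 32 components per vertex cuts you from 256 emittable vertices down to 32.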

heeen
May 14, 2005

CAT NEVER STOPS
Why is my OpenGL occlusion query returning almost exactly (±2) half of my viewport resolution?

heeen
May 14, 2005

CAT NEVER STOPS

shodanjr_gr posted:

bonus question:

I'm trying to do some shadow mapping, but I want to avoid using FBOs. Is there a way to read the current depth buffer into a texture using something like glCopyTexSubImage2D? <- fixed this. Turns out that if the target texture is a depth-component one, glCopyTexSubImage2D will read straight from the depth buffer.

Try a pbuffer.

heeen
May 14, 2005

CAT NEVER STOPS

OneEightHundred posted:

Normally you want to use framebuffer objects and multiple render targets for that (using the extensions that conveniently have the same names!), pbuffers involve expensive context switches and kind of suck.

The OP specifically said he didn't want to use FBOs.

heeen
May 14, 2005

CAT NEVER STOPS
I'm having a problem with non-power-of-two textures under OpenGL; this is what I'm getting:

[screenshot attachment]

Somehow all values are off by one channel. I'm using GL_RGB for the data and the internal format.

heeen
May 14, 2005

CAT NEVER STOPS
I'd say store it as a VBO on the GPU and only refer to it by ID.
Also, 8 vertices easily fit into the GPU's post-transform cache, so there's nothing to worry about there.
If you have meshes of several hundred thousand vertices, you want to make sure your geometry reuses indices as efficiently as possible.

heeen
May 14, 2005

CAT NEVER STOPS
The glVertexAttrib family of functions lets you add generic attributes to each vertex.
You can set an index that the data will be associated with. However, you can't choose the index arbitrarily: I discovered that using indices lower than 4 breaks the standard (normal/texcoord) attributes, and index 0 is the vertex position.
How can I find out which is the first actually free index to use for custom attributes?

heeen
May 14, 2005

CAT NEVER STOPS
I seem to have a bug in my per-pixel lighting shader, but I can't wrap my head around it:
code:
mat3 tbn = mat3(tvec, svec, ws_normal);
vec3 normalmap = texture2D(normalmap, gl_TexCoord[0].xy).rgb * 2.0 - 1.0;

vec3 lightcolor = vec3(0.0, 0.0, 0.0);
float NdotL, NdotHV;
vec3 lightdir = gl_LightSource[0].position.xyz - ws_pos.xyz;
float dist = length(lightdir);
lightdir = normalize(lightdir) * tbn;

NdotL = max(dot(normalmap, lightdir), 0.0);

gl_FragColor.rgb = NdotL;
gl_FragColor.a = 1.0;
The source of the problem seems to be the NdotL value.

heeen
May 14, 2005

CAT NEVER STOPS

OneEightHundred posted:

mat3 tbn=mat3(svec, tvec, ws_normal);

Try that.

Thanks, although I found it myself; that was exactly the error :)

heeen
May 14, 2005

CAT NEVER STOPS
I'm trying to do displacement mapping after the projection matrix has been applied, but before the perspective divide.
The math gives me:

(vertex + displacement) * Mproj = vertex * Mproj + displacement * Mproj

However, I'm getting artifacts: without backface culling I have regions where backfaces get a higher z-value than faces that should be in front.
In the attachment you can see the back side of the undisplaced mesh overlaying the displaced mesh, which should lie in front of it.

code:
vec4 displacedir = vec4(
	gl_Normal.x,
	gl_Normal.y,
	gl_Normal.z,
	0
) * projectionmatrix;

gl_Position -= displacedir * 0.05;
(I can't scale the attachment down any more, can a mod help me out?)



heeen
May 14, 2005

CAT NEVER STOPS

shodanjr_gr posted:

Why would you want to do displacement mapping in projection space?

I'm doing adaptive subdivision surfaces: I project the control mesh first and do the subdivision on the already-projected points. This way I can calculate the error to the limit surface depending on perspective, and I save a lot of matrix-vector multiplications.

I found my code works if I write matrix * vector instead of vector * matrix.
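In case anyone else hits this: the vector * matrix vs. matrix * vector difference is just a transpose, and you can see it with plain numbers, no GLSL required. A tiny self-contained C check, assuming column-major storage (the function names are mine, for illustration):

```c
/* v' = M v, with m stored column-major (OpenGL convention). */
void mat_vec(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r) {
        out[r] = 0.0f;
        for (int c = 0; c < 4; ++c)
            out[r] += m[c * 4 + r] * v[c];
    }
}

/* v' = v M, which is the same as multiplying by the transpose of M. */
void vec_mat(const float m[16], const float v[4], float out[4])
{
    for (int c = 0; c < 4; ++c) {
        out[c] = 0.0f;
        for (int r = 0; r < 4; ++r)
            out[c] += v[r] * m[c * 4 + r];
    }
}
```

For a translation matrix the two forms give visibly different results, which is why swapping the order "fixed" the displacement direction.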

heeen
May 14, 2005

CAT NEVER STOPS

Avenging Dentist posted:

Oh come the f8ck on people! sure it looks great, sure its real time, sure its closer to photo realistic, but who cares?!!?
Its not gonna make the game any better, some of the best games I've played looked like poo poo, and that is one of the most talked about issues of this gen, the fact that a LOT of man power goes to the graphics, rather than going on better AI/ideas/physics.
I respect them for making such an advanced engine, but this simply shouldn't be of interest to the gaming industry, we're not watching a movie here, we're playing a game, and as seen in games such as Crysis , graphics dont make a game.

Maybe go rant in the gameplay development thread?

heeen
May 14, 2005

CAT NEVER STOPS

Contero posted:

Where do you guys look for research-y type free models to test out rendering techniques with?

In particular I'm looking for a huge, textured, relatively nice looking landscape mesh to play around with.

There's the Puget Sound terrain dataset; it's not textured per se, though.

heeen
May 14, 2005

CAT NEVER STOPS
I have an OpenGL bug that I just can't get a grip on: I render stuff into an FBO, which I can clear to whatever color correctly, but everything I render turns up white.
I have:
code:
fbo->Bind();
CHECK_GL_ERROR();
glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);

glClearColor(0.0, 0.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glDisable(GL_CULL_FACE);

glUseProgram(0);

glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

glDisable(GL_LIGHTING);
glDisable(GL_LIGHT0);

Am I missing something?

heeen
May 14, 2005

CAT NEVER STOPS

Contero posted:

What is the difference between pixel buffer objects and render buffer objects? I'm trying to render something offscreen, and it looks like I can use either. What are their advantages/disadvantages? Is one more dated than the other?

As far as I know, PBOs require a context switch to render to and are slower because of this. I think the data you copy into a texture subsequently also has to travel over the bus.
All the cool kids use FBOs nowadays. Plus you can use multiple render targets etc.

heeen
May 14, 2005

CAT NEVER STOPS
What would you reckon is the best way to sort render states:
by shaders, uniforms (textures), or VBOs?

heeen
May 14, 2005

CAT NEVER STOPS

OneEightHundred posted:

Generally speaking, shader changes are more expensive but nearly everything else causes a full re-upload of the state on modern hardware.

The real solution is to do things that let you avoid render state changes completely. Merging more things into single draw calls, and overarching stuff like deferred lighting.

Can you cite anything for those claims? I'd love to read about it more in depth.
I thought changing the source of the vertex data wouldn't require a pipeline flush, since vertices must be passed by value anyway, whereas shader and uniform changes require the pipeline to be empty before anything can be changed.

heeen
May 14, 2005

CAT NEVER STOPS

adante posted:

I'm a beginner and am writing a simple engine to display simple geometric shapes - no textures needed. I want to draw points as X's on the screen - am I right in thinking I now need to look into point sprites and textures for this, or is there a simpler way to achieve this?

Just draw two lines for every point: one from (-1,-1) to (1,1) and one from (-1,1) to (1,-1). Scale as needed.
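Something like this gives you the eight floats to feed to GL_LINES (the helper name and layout are mine, coordinates are in whatever space you draw your points in):

```c
/* Fill out[8] with the endpoints of two 2D line segments forming an X
   centered on (x, y) with half-size s, ready for a GL_LINES draw.
   Layout: x0,y0, x1,y1 for the first line, then the second line. */
void make_cross(float x, float y, float s, float out[8])
{
    out[0] = x - s; out[1] = y - s;   /* lower-left  */
    out[2] = x + s; out[3] = y + s;   /* upper-right */
    out[4] = x - s; out[5] = y + s;   /* upper-left  */
    out[6] = x + s; out[7] = y - s;   /* lower-right */
}
```

Then draw the array with glVertexPointer + glDrawArrays(GL_LINES, 0, 4), or the glBegin equivalent.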

heeen
May 14, 2005

CAT NEVER STOPS

Bonus posted:

I'm just using glBegin/glVertex/glEnd for each triangle. Does using a vertex array or VBO really help that much? I have an 8800 GT so the card itself shouldn't be a problem. I'll try using vertex arrays and see how it goes.

Using glVertex calls means sending each triple of floats over the bus to the GPU individually. A 1024 x 1024 grid of quads means 1024*1024*2 triangles, i.e. 1024*1024*2*3 ≈ 6 million calls per frame.

The next step would be glDrawArrays, which means telling OpenGL with a single call to render triangles from those ~6 million vertex positions.

Next you'd use an index buffer, so you have a buffer of roughly 1024*1024 shared vertices (1024*1024*3*sizeof(float)) and an index buffer telling you which vertices each triangle is made up of, which would be 1024*1024*2*3*sizeof(int). Of course you can optimize by using tri-strips, which reuse the previous two vertices plus one new vertex to form each next triangle. You can also exploit the vertex transform cache, which holds the last 24? 36? vertices after they have gone through transformation (or the vertex shader).

On top of that, you'd put the vertex buffer and the index buffer entirely in GPU memory, which is what VBOs are. Basically you request a block of memory from the GPU, upload your data, and tell OpenGL to use the on-GPU buffer for your subsequent render calls.
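If you want to see the numbers, here's some back-of-the-envelope C for a w x h quad grid; flat_bytes/indexed_bytes are made-up names, and the sizes are just raw geometry bytes, ignoring padding and extra attributes:

```c
#include <stddef.h>

/* Bytes for a w x h quad grid rendered as triangles, flat style:
   every corner of every triangle stored as its own 3 floats. */
size_t flat_bytes(int w, int h)
{
    return (size_t)w * h * 2 * 3 * 3 * sizeof(float);
}

/* Indexed style: shared vertices once, plus an unsigned int per
   triangle corner. This is the data an indexed VBO would hold. */
size_t indexed_bytes(int w, int h)
{
    size_t verts   = (size_t)(w + 1) * (h + 1) * 3 * sizeof(float);
    size_t indices = (size_t)w * h * 2 * 3 * sizeof(unsigned int);
    return verts + indices;
}
```

For large grids the indexed layout is roughly half the data, before you even get to tri-strips or the transform cache.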

heeen
May 14, 2005

CAT NEVER STOPS
What's the latest word in shadowing techniques? Does anyone have performance numbers on shadow volumes vs. (omnidirectional) shadow mapping?

Is there a better technique for shadow maps than six passes, one per side of a depth cubemap? I've heard the term "unrolled cubemap" somewhere, what's that about?

OneEightHundred: any chance you know LordHavoc from irc?

heeen
May 14, 2005

CAT NEVER STOPS
Display lists do have great performance, especially on NVIDIA hardware; the compiler does a very good job of optimizing them. But as soon as you're dealing with shaders, things start to get ugly, because there are problems with storing uniforms etc.

While you're at it, stick to the generic glVertexAttrib functions instead of the to-be-deprecated glVertexPointer/glNormalPointer/... functions.
You will probably need to write simple shaders to go with the generic attrib functions, though.

heeen
May 14, 2005

CAT NEVER STOPS

UraniumAnchor posted:

Is there a way to write shader attribs straight from CUDA without having to pass through main memory?

Specifically what I'd like to do is simulate some 'terrain' morphing (in this case shifting water levels) where the morphing computation is done in CUDA, and passes the updated height information right into the vertex pipeline without ever leaving the card.

allocating:
code:
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, pre_numvertices * sizeof(vertex), 0, GL_DYNAMIC_COPY);
CheckGLError(__FILE__, __LINE__);
glBindBuffer(GL_ARRAY_BUFFER, 0);
cutilSafeCall(cudaGLRegisterBufferObject(vbo));
main loop:
code:
cutilSafeCall(cudaGLMapBufferObject((void**)&faces, vbo));
// call kernels to work on "faces"
cutilSafeCall(cudaGLUnmapBufferObject(vbo));
// draw the vbo
deallocating:
code:
cutilSafeCall(cudaGLUnregisterBufferObject(vbo));
glDeleteBuffers(1, &vbo);

heeen
May 14, 2005

CAT NEVER STOPS
Does anyone have a uniform buffer object class I could have a look at? I'm trying to figure out a good way to bring together global uniforms, material-specific uniforms, and maybe surface-specific uniforms.
Standard uniforms were giving me a headache because I didn't want to query uniforms by name every time I set them (hundreds of times per frame), so I had to cache uniform locations somehow, which is tricky: a global uniform can have a different location/index in every shader it's used in, but the material and the "global uniform manager" each need to track them individually.
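For reference, this is roughly how I'd sketch the per-program location cache in C. The LookupFn indirection and fake_lookup are stand-ins I invented so the sketch runs without a GL context; in real code the lookup would just wrap glGetUniformLocation:

```c
#include <string.h>

#define MAX_UNIFORMS 64

/* In real code this wraps glGetUniformLocation(program, name). */
typedef int (*LookupFn)(unsigned program, const char *name);

/* One cache per linked program, since the same global uniform can land
   at a different location in every shader it appears in. Names are
   stored by pointer, so they must outlive the cache (string literals). */
typedef struct {
    const char *names[MAX_UNIFORMS];
    int locations[MAX_UNIFORMS];
    int count;
} UniformCache;

int cached_location(UniformCache *c, unsigned program,
                    const char *name, LookupFn lookup)
{
    for (int i = 0; i < c->count; ++i)
        if (strcmp(c->names[i], name) == 0)
            return c->locations[i];   /* cache hit: no driver call */
    int loc = lookup(program, name);
    c->names[c->count] = name;
    c->locations[c->count++] = loc;
    return loc;
}

/* Demo with a counting stand-in for glGetUniformLocation. */
static int fake_calls;
static int fake_lookup(unsigned program, const char *name)
{
    (void)program; (void)name;
    return fake_calls++;   /* distinct "location" per new name */
}

int demo_lookup_count(void)
{
    UniformCache cache = {0};
    fake_calls = 0;
    cached_location(&cache, 1, "mvp", fake_lookup);
    cached_location(&cache, 1, "mvp", fake_lookup);      /* hit */
    cached_location(&cache, 1, "lightpos", fake_lookup);
    return fake_calls;   /* real lookups actually performed */
}
```

The material and the global-uniform manager can each hold one such cache per shader they touch, which sidesteps the "same name, different location" problem.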

heeen
May 14, 2005

CAT NEVER STOPS

PDP-1 posted:

Are there any tips or tricks for debugging shader files?

I'm using DirectX/HLSL and finding writing custom shaders to be very frustrating since it isn't possible to step through the code like you can in a normal IDE and there's no way to print values to a debug file.

Are there any kind of emulators that will take a .fx file as input, let you specify the extern values and vertex declaration, and walk through the code one line at a time?

There are glslDevil and gDEBugger.


heeen
May 14, 2005

CAT NEVER STOPS
code:
extension GL_ARB_texture_gather not supported in profile gp4fp
Huh?
