shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

GameDev.net's forums are mostly aspiring programmers with no experience, so yes.


So what are some good fora for getting OGL help?



My question is this:

I am working on implementing shadow mapping in some GLSL stuff I'm working on. I figured I'd use a GL_DEPTH_COMPONENT texture attached to a Frame Buffer Object and render the scene from my light's POV into that FBO.

What I want to do now is visualize the light's depth buffer (render it to a quad), and I am a bit clueless as to how I can read a GL_DEPTH_COMPONENT texture from inside a shader (using a texture sampler).

Any ideas?


shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

If you need to visualize the depth, then you'll need to render to a non-depth format. I think the only alternative is to read the depth values using glReadPixels, which is slow as gently caress, but will obviously work for debugging.

Actually, I set the compare mode to GL_NONE, and then I managed to read it using a sampler2D and accessing the R channel.
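
For reference, a minimal sketch of what that looks like (the texture/uniform names here are placeholders, not from my actual code). On the C side, switch off the hardware depth compare for the texture:

code:
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
and then the debug-quad fragment shader just reads the red channel:

code:
uniform sampler2D depthMap;

void main()
{
	/* with compare mode off, the raw depth value shows up in the R channel */
	float d = texture2D(depthMap, gl_TexCoord[0].st).r;
	gl_FragColor = vec4(d, d, d, 1.0);
}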


I've sorta completed my implementation (although it was jaggy as hell, despite running it on NVIDIA hardware). I am using a 16-sample dithering technique now, and the result is better, but not THAT much better...

Also, is it possible to do shadow mapping with point lights? I suppose you could approximate it by giving the light a very large FOV, but my tests show that this doesn't give good results at all (it obviously decreases the shadowmap detail).


edit: also, agreed on OpenGL being a pain... I've been getting increasingly tempted to port my research code over to D3D and XNA, due to the vast amounts of documentation/tutorials and the IDE integration that help you get the technical stuff out of the way easily... Plus I would get to run my stuff on my 360!!

shodanjr_gr
Nov 20, 2007

Stramit posted:

Deferred Shading
HDR



I'd actually love a brief how-to on Multiple Render Targets in OGL, since I'd like to build a basic deferred renderer at some point.

Also, what are you guys using as a scenegraph/model-loading/camera-managing framework? I don't want to do anything too complicated; my aim is basically to load some models, move them/the camera around, and animate them (applying some shader effects to them, of course), and this has been getting kinda clunky using pure OGL.

shodanjr_gr
Nov 20, 2007
Out of curiosity, what are you making exactly?

That looks like a plenty cool engine for an old school Strategy RPG :D

shodanjr_gr
Nov 20, 2007
Say I've got an OpenGL FBO with 4 color attachments, and I want to clear only one of those attachments (that is, do the equivalent of glClear(GL_COLOR_BUFFER_BIT);). How do I do that?

shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

Use glColorMask to disable the channels you don't want to clear, then use glClear

I don't want to mute certain channels, I want to exclude whole buffers from clearing (for instance my FBO has 4 color buffers + a depth buffer, and I only want to clear the first color buffer). How does glColorMask help me with that?
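
One way this is commonly suggested is to temporarily restrict the draw buffers before clearing. A rough sketch, assuming an EXT_framebuffer_object setup with four color attachments:

code:
/* select only the attachment to be cleared, clear it, then restore all four */
GLenum all[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT,
                 GL_COLOR_ATTACHMENT2_EXT, GL_COLOR_ATTACHMENT3_EXT };

glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);  /* glClear only touches the active draw buffers */
glClear(GL_COLOR_BUFFER_BIT);
glDrawBuffers(4, all);                   /* back to writing into all four */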



Also, I am setting glClearColor(1.0, 0.0, 0.0, 1.0) and then calling glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) and it doesn't seem to be "going through" (I don't get a red background color in my color attachments). Any idea what's up with that? edit: Nevermind, fixed.

shodanjr_gr fucked around with this message at 01:37 on Sep 1, 2008

shodanjr_gr
Nov 20, 2007
Thanks for the tut mate (although it came a bit late, I figured it out myself a month or so ago and wrote my renderer :P)!

I got another question, which is quite interesting.


I am rendering a shadowmap, which is of course a depth buffer from the light's point of view. As far as I know, a GL_DEPTH_COMPONENT texture can have 8 bits that are used as a stencil value, when needed.

I want to be able to access those 8 bits and store a value of my own.

The thing is, I can't figure out the best and easiest way to do this.

If I assume that I use an FBO for my shadow map rendering, and then attach the depth texture to the GL_DEPTH_ATTACHMENT point of that FBO, I can't see a way to access the stencil value and WRITE to it from inside a shader (the only depth-buffer-related output variable that GLSL seems to offer is gl_FragDepth, which is a float).

So my alternative is to use a depth buffer for, well, depth buffering, and then attach a separate GL_DEPTH_COMPONENT texture to one of the FBO's GL_COLOR_ATTACHMENT points. I then assume that I will be able to access the depth value via gl_FragData[i].x and the stencil value via gl_FragData[i].a (correct me if I am wrong). However, I am not sure about the actual depth value that needs to be stored so that the texture can be used properly as a shadowmap (using shadow2DProj, so that it can be filtered etc). Does it have to be the eye-space Z value, the eye-space Z/W value, or something else?

Any ideas?
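
A note on the depth value, as far as I understand it (not gospel): a regular depth attachment ends up holding window-space depth, i.e. the light-space clip Z divided by W and remapped to [0,1], so that is the value to replicate if the depth goes into a color target. Roughly this in the light-pass fragment shader (variable names are placeholders):

code:
varying vec4 lightClipPos;   /* = gl_ModelViewProjectionMatrix * gl_Vertex in the vertex shader */
uniform float myStencilValue;

void main()
{
	/* same value the hardware depth buffer would contain */
	float depth = (lightClipPos.z / lightClipPos.w) * 0.5 + 0.5;
	gl_FragData[0] = vec4(depth, 0.0, 0.0, myStencilValue);
}
Bear in mind that the hardware compare/filtering behind shadow2DProj only applies to GL_DEPTH_COMPONENT textures with GL_TEXTURE_COMPARE_MODE enabled, so a color-attachment copy of the depth has to be compared manually in the shader.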


edit:

Extra question:

[800x800 screenshot of the shadow-mapped teapot]


Can anyone explain to me why this artifacting happens? I don't mean the small artifacts (which I assume will go away with extra filtering), I mean the handle of the teapot casting a shadow on the FRONT face of the teapot... it doesn't make much sense to me...

shodanjr_gr fucked around with this message at 05:27 on Sep 5, 2008

shodanjr_gr
Nov 20, 2007
Can anyone give me some guidelines on geometry shader performance? (a paper/article/tutorial, for instance)

I am using GSs to output GL_POINTS. If I output 4 or fewer points per shader invocation, performance is fine. Anything more than that, and I get 1-4 fps (down from > 75). I have a feeling that the shaders MIGHT be running in software for some reason, but I can't figure out what I am doing wrong... Any ideas?

(Also, is there a way to include a "static" vec2 array inside a geometry shader? Such a thing is possible inside a vertex/fragment shader, but the compiler does not get past "OpenGL does not allow C-style initializers" when compiling the GS.)
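
(If the initializer error is what I think it is, it may just be the shading-language version: array initializers are a GLSL 1.20 feature, and the compiler defaults to 1.10 unless told otherwise. A sketch of what I mean, guessing at the relevant directives:)

code:
#version 120
#extension GL_EXT_geometry_shader4 : enable

/* const array initializers are legal from GLSL 1.20 onwards */
const vec2 offsets[4] = vec2[4](vec2(-0.5, -0.5), vec2( 0.5, -0.5),
                                vec2(-0.5,  0.5), vec2( 0.5,  0.5));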

shodanjr_gr
Nov 20, 2007

heeen posted:

What GPU are you running on? I have no problems outputting dozens of vertices.

An NVIDIA 8800 GS.

I've checked those variables, and I am well within their limits (max output is 1024 primitives per call, IIRC).

shodanjr_gr
Nov 20, 2007
Thanks for the input!

Hubis posted:

This is due to the fact that the primitives (triangles, points, etc.) need to be rasterized in issue order, which means that the memory buffer coming out of the geometry shader stage has to match the order of primitives coming into the geometry shader (i.e. primitive A, or any shapes derived from primitive A, must all come before primitive B).

I don't suppose there is a way to disable that? What I am doing does not actually require in-order rasterization; I use additive blending to accumulate the fragments from all the generated geometry, so order does not matter.

I am just weirded out that generating a display list with my high-quality point mesh and sending it over to the card is a lot faster than generating a low-quality mesh display list and expanding it in the shader... (for instance, I want to end up with an 800 * 800 point mesh, so I figured I'd send an 80 * 80 mesh, then generate further points inside the GS).

If anyone else has any clues, please let me know.

e:

quote:

e: To be taken with a grain of salt. Looking at the post again, 75fps to 4fps seems like a very dramatic slowdown for this sort of thing. It could actually be possible that you're running into software mode, but that seems unlikely based on your description.

To give you a measure, let's assume that I send over a 200 * 200 grid display list to the card and generate 1 point in my GS. This runs at just under 75 fps and all is well. If I keep the display list constant and generate 4 points in my GS, I drop to 25 FPS! If I move up to 9 points, I drop to 12 FPS. 16 points brings me into single digits. At 16 points per grid point, I get a total of 640,000 points. Now, if I send an 800 * 800 display list for rendering and only generate a single point (for the same number of total points), I get at least 15 FPS. So for the same amount of geometry, using the GS to expand gives me a 75% reduction in frame rate compared to a display list...

shodanjr_gr fucked around with this message at 18:39 on Sep 26, 2008

shodanjr_gr
Nov 20, 2007

Hubis posted:

ah ok. Then yes, this sounds like you're running into the issue I described above.

I'll look into your suggestion to see if I can get a buff in FPSes... I really don't want to can this idea, since it's been producing very nice results visually.

Can you point me in the right direction for stream-out buffers?


edit:

Is there a chance this is an NVIDIA-only problem? (I assume there are pretty large architectural differences between NVIDIA and ATI GPUs.)

shodanjr_gr fucked around with this message at 20:45 on Sep 26, 2008

shodanjr_gr
Nov 20, 2007
Thanks for the replies, Hubis. I'll check out the spec you linked.

Final question: is there a way to query the driver to see if it's running in software mode?

shodanjr_gr
Nov 20, 2007

sex offendin Link posted:

Can't you request that software fallback be disabled when creating the context? That would cause it to error out instead of silently slowing down, right?

Not sure... I've been using GLUT to handle context creation...
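
GLUT itself doesn't expose a hardware-only flag as far as I know, but you can at least check which implementation the context landed on once it exists (rough sketch, placed right after glutCreateWindow()):

code:
printf("GL_VENDOR:   %s\n", (const char *) glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
/* "Microsoft Corporation" / "GDI Generic" means the unaccelerated software
   implementation; a vendor driver string means hardware acceleration */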

shodanjr_gr
Nov 20, 2007

brian posted:

Ok, another question for you dapper fellas: how do I copy a texture from one handle to another in OpenGL without copying the pixels manually? I want to create a faux motion blur effect by taking the previous frame and doing a simple fragment shader blend with the current one. I can get the current framebuffer using the framebuffer EXT, and it's stored in an image handle, but that handle is linked to the FBO, so whenever the FBO is drawn to it's overwritten. I know I can detach the current image and attach a different one, but that doesn't seem like a particularly good way to go about it, though I could be wrong. Would using an alternating color attachment each frame do the job? Any help would be fabbo.

Using one framebuffer and alternating color attachments sounds very feasible to me. What I am not 100% sure about is whether you can read from and write to the same texture inside the same shader pass. If you could, that would mean you'd only need two textures to do the motion blur (the previous-frame texture and the current framebuffer texture: for each framebuffer pixel you read its value plus the "motion" equivalent from the old buffer, mix them, then write the result back to the same buffer); otherwise you'd need a third texture as a buffer.

Mind you, you can also "fake" motion blur without saving the previous framebuffer at all. Just sample the current framebuffer a few times along the motion direction to get your "previous frame" value. I'm pretty sure it works OK, and it saves memory to boot.
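
Roughly like this, as a fragment-shader sketch (the uniform names and tap weights are just made up for illustration):

code:
uniform sampler2D currentFrame;
uniform vec2 motionVec;   /* screen-space motion direction, e.g. vec2(0.01, 0.0) */

void main()
{
	vec2 uv = gl_TexCoord[0].st;

	/* average a few taps "behind" the pixel along the motion direction */
	vec3 c = texture2D(currentFrame, uv).rgb                   * 0.4
	       + texture2D(currentFrame, uv - motionVec * 1.0).rgb * 0.3
	       + texture2D(currentFrame, uv - motionVec * 2.0).rgb * 0.2
	       + texture2D(currentFrame, uv - motionVec * 3.0).rgb * 0.1;

	gl_FragColor = vec4(c, 1.0);
}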

shodanjr_gr
Nov 20, 2007

brian posted:

Well, I got that working like that, but now I'm having a buttload of trouble getting GLSL to work properly to blend it. The GLSL code is fine (tested in Shader Designer), but applying it correctly is hurting my head. At the moment I get it displaying without any noticeable blur, with the alpha channel not working and a load of flickering. Is there a way to blend the textures without using shaders? Because the whole glTexEnv stuff is hurting my already broken brain :(

Why are you using blending?

Do something like this:


gl_FragColor = vec4(texture2D(currentFrame, gl_TexCoord[0].st).rgb * 0.8 + texture2D(previousFrame, gl_TexCoord[0].st + motionVec * sampOffset).rgb * 0.2, 1.0);

inside your blur shader.

And even if you use alpha blending, it's not hard to do. Set glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and just regulate the alpha from inside the shader to get the desired result.

If you are getting flickering, then you are (probably) doing some weird stuff with your framebuffer objects and/or your texture bindings and/or your clearing of color buffers. Can't tell you much more without looking at your code.

shodanjr_gr fucked around with this message at 00:55 on Oct 10, 2008

shodanjr_gr
Nov 20, 2007

brian posted:

Here's the main GL stuff; the ShooterWorld::draw() function just draws a series of sprites from the world, and here's the sprite drawing code. I tried it with your fragment jobby but it just produces a pure black window, because I assume I'm doing something wrong (I set the motionVec to (5, 0) since that's the speed the camera and ship move, but that's probably wrong too). I don't quite understand why it needs a direction if everything is moving anyway, since the next frame will be further along the sprite than the second one, thus causing the same effect, right?

Any help would be fabbo fella!

I'll look into your code later, but for the moment, if you are getting a totally black picture there are a few things that may be going on:

a) Your previous- and current-frame textures are blank, or they are not bound/initialized properly.
b) You are clearing your before and after buffers before you actually do the blur pass (so the textures are blank when you try to read them).
c) You are using a wrong alpha value inside your blur shader (probably zero).
d) You have enabled backface culling during your post-processing blur pass (and your quad gets culled because it faces the wrong way).
e) You have enabled Z-culling and your quad gets culled (although this isn't too likely, since you are only rendering one primitive).
f) You are doing your transformations wrong. If you want to do a post-processing pass, just set your modelview and projection matrices to identity, throw up a quad from (-1,-1) to (1,1) at some Z value, and it should work.


I'll check your code later :)

shodanjr_gr
Nov 20, 2007

brian posted:

Well, it works (not correctly, and really oddly) if I do a simple gl_FragColor = texColor1 + texColor2 type thing, but then again I have no idea what I'm doing wrong. I've rejigged it so many times just trying to get it to work that I've lost any clue of what I'm doing and the whole file is a mess; it was just to quickly test it before I clean it up, but it's turned awry! There's an issue where if I have the FBO go to one color attachment, the other one gets turned to pure black despite nothing being done to it, which could be the cause, but that wouldn't explain why it works the aforementioned simple way. I'm horribly confused. I don't usually do a lot of graphics programming beyond simplistic sprites and particles, so my understanding of the GL state system is spotty and I just tend to try things until they work instead of thinking about it correctly. That said, I can't seem to wrap my head around the processes involved with FBOs and shaders when it comes to textures, because of the whole removing-the-fixed-function-pipeline jobby.

Post your shader code here if you can.

Also, nothing has been removed so far from the fixed functionality as far as I know. I think some stuff became deprecated in OpenGL 3.0, but it's still there.

An FBO is something OGL can render to. It can have multiple color attachments (for multiple-render-target rendering), a depth attachment, and I think one more type (which eludes me now).

If you want, you can generate a texture (as normal) and attach it to a color attachment point of an FBO. So, for instance, you can generate an FBO, create a GL_DEPTH_COMPONENT texture and attach it to the GL_DEPTH_ATTACHMENT point (so it becomes the FBO's Z-buffer), and a GL_RGBA texture and attach it to the GL_COLOR_ATTACHMENT0 point. Remember to call glDrawBuffers() to specify which color attachments are active for the current FBO. Make SURE that both your textures have exactly the same height and width: OpenGL does NOT support different render-target sizes for the same FBO, and if you try it, your FBO won't initialize. If you have set only one color attachment as the target for an FBO, use gl_FragColor inside your shader to save the result; otherwise you will have to use gl_FragData[0/1/2/3...].

After that, when you want to write to the FBO, you can just bind it (glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, yourNewFBO)) and use it as you would use your normal framebuffer, with the exception that instead of rendering to your screen, you render to a texture, which you can also use later on for whatever you need.
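
Putting the whole thing together, a rough sketch of the setup (EXT_framebuffer_object entry points; w and h stand in for whatever size you want, and error checking is minimal):

code:
GLuint fbo, colorTex, depthTex;

/* color texture */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* depth texture, SAME size as the color texture */
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

/* attach both to the FBO */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);

/* which color attachments the shaders write to */
GLenum bufs[] = { GL_COLOR_ATTACHMENT0_EXT };
glDrawBuffers(1, bufs);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    printf("FBO incomplete\n");

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   /* back to the window framebuffer */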

Another goon posted an FBO tutorial a few pages back (it might also be in the Game Development Megathread... I'm not sure). Go look at it if you can, it will help you out.

shodanjr_gr
Nov 20, 2007

brian posted:

Actually the FBO on its own works fine; if the shaders aren't enabled I can get a mini viewport of the same scene as the main viewport and it works great, when I do two color attachments and alternate frames on which they're assigned by glDrawBuffer()

Don't attach both textures at the same time; I'm not sure how that's handled. Use ONE color attachment and alternate the texture you attach to it.

quote:

test.vert:
code:
void main(void)
{
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_TexCoord[1] = gl_MultiTexCoord1;
	
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
test.frag
code:
uniform sampler2D TextureUnit0;
uniform sampler2D TextureUnit1;

void main()
{
	vec4 value1 = texture2D(TextureUnit0, vec2(gl_TexCoord[0]));
	vec4 value2 = texture2D(TextureUnit1, vec2(gl_TexCoord[1]));
	
	vec4 color = (value1 + value2);

	gl_FragColor = color;
}
This looks fine.

quote:

code:
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0, 0);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0, 0);
glVertex2i(0, 0);

glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0, 1);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0, 1);
glVertex2i(0, 512);

glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1, 1);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1, 1);
glVertex2i(1024, 512);

glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1, 0);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1, 0);
glVertex2i(1024, 0);

I think the way you throw your post-processing quad is wrong. The coordinates should be:

code:
glBegin(GL_QUADS);
			glTexCoord2f(0.0,0.0);				
			glVertex3f(-1.0,-1.0,-1.0);
			glTexCoord2f(1.0,0.0);				
			glVertex3f(1.0,-1.0,-1.0);			
			glTexCoord2f(1.0,1.0);				
			glVertex3f(1.0,1.0,-1.0);
			glTexCoord2f(0.0,1.0);					
			glVertex3f(-1.0,1.0,-1.0);
		glEnd();
since you are working in screen space. Try it that way and let us know how it goes. Disregard the glTexCoords; I'm just using one coordinate set.

quote:

As for the fixed-function pipeline, I was under the impression that by using any vertex or fragment shader it goes around the usual processes that GL performs (like having to transform vertices with ftransform() or modelview * vertex).

Yes that's true.

shodanjr_gr
Nov 20, 2007
I got a small question.

Suppose I want to render slices in front of my view volume for something (a volumetric effect, for instance) and save them to textures. The obvious way to do this would be to set (Zfar - Znear) to something small, then constantly keep pushing the near clipping plane forward, capturing more and more of my volume as I go (and saving to a different texture each time). However, this seems really slow, especially considering that I'd have to reissue my geometry a number of times equal to the number of slices.

Is there some way to make this faster? I remember reading something about an instancing extension, but googling seems to indicate that this is about instancing in the sense of rendering multiple pieces of geometry using the same vertex data, etc.

Any ideas?

shodanjr_gr
Nov 20, 2007
Are those "blue things" as well as the rocks spites (as in textures rendered on billboards)? If so, maybe you are rendering them right on top of each other (which would explain the z-fighting)...

To improve performance, you can look into lots of things, including geometry instancing (you have TONS of similar looking geometry in those pictures) and various occlusion-culling techniques. Furthermore, you could easily save lots of display-list calls, if you created lists for larger block sizes as well (for instance, instead of just keeping 1x1 block size lists, make some 2x2, 3x3, etc etc lists and call them, based on the size of your areas).

I think that the Z accuracy of opengl only depends on the Zfar - Znear value, but i could be wrong...When you create your frame buffer, try using a larger accuracy depth component (32 bits for instance) to rule out Z-issues.

shodanjr_gr
Nov 20, 2007

quote:

How would I do that in OpenGL? I've never seen an explicit option for that, and from what I remember from older games (Giants: Citizen Kabuto) that pretty much depends on the display bit-depth. I think OpenGL actually uses the alpha channel for each pixel to store the depth value.

No, OpenGL uses a separate render-target-sized buffer to store the depth information, and the precision of that buffer is adjustable. Besides, imagine if OGL used the alpha channel to store depth info: how would you do alpha blending AND depth testing at the same time? :)

I only code in C, so I'm not sure if you can do this in Perl, but in C, when you create your Frame Buffer Object, you can do something like this:

glTexImage2D(blah, blah, GL_DEPTH_COMPONENT32, blah, blah, blah, GL_DEPTH_COMPONENT, blah, blah);

before attaching that texture to the depth attachment point, and your depth buffer gets a higher-precision format.
If you aren't using FBOs, use FBOs. It's one of the features of OpenGL that I've grown to ADORE ever since I learned about it.
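
Spelled out a bit more concretely, for the EXT_framebuffer_object path (a sketch with placeholder names):

code:
/* a 32-bit depth texture attached as the FBO's depth buffer */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);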


Also, why are you using Perl?

fakedit: but as has been said already, I find it unlikely that these are depth-precision issues. It's more likely that you have left some sort of Z-test flag off at some point inside your rendering code...

shodanjr_gr fucked around with this message at 22:53 on Oct 25, 2008

shodanjr_gr
Nov 20, 2007

Mithaldu posted:

I'm not, since I have no idea what they are and what they do. I mentioned it earlier in the thread, but this is the first OpenGL app I'm doing and everything I'm using I'm literally learning as I go along. Could you maybe explain briefly, in plain English, what they are and why I should be using them? :)

Framebuffer objects are basically off-screen framebuffers. You can render to them as you would render to your screen, but nothing shows up on the screen; everything is stored in textures. When you create a framebuffer object, you "attach" textures to its "attachment points". You can have many "color attachments" (so you can do something called multiple-render-target rendering), one "depth attachment" (a depth buffer, of varying precision), and an accumulation buffer (I think). So you generate your FBO, generate your textures, bind them to your FBO, then use your FBO in the same way you would normally. This is really useful if you want to do post-processing, or use a render pass's output as input to a shader, etc. Also, it's pretty easy to do. A goon made a tutorial at some point; it's either in here or in the Game Development thread, so you should check it out.

quote:

For the above reason. Hilariously fast prototyping that allows me to concentrate on learning what I'm dealing with instead of worrying about arcane intricacies of C++ memory handling. Aside from that, it also allows me to keep everything Public Domain and increases the chance of someone coming after me and picking it up in case I ever decide to drop it.

Oh well, whatever floats yer boat :P. I am terrible with C, but I haven't had issues coding OpenGL in it (although I mainly focus on shader development, so my C code rarely changes).

quote:

Well, as far as I'm aware these are all the relevant bits:
code:
# called once at the start
sub initialize_opengl {
    [snip]
    glClearDepth(1.0);
    glDepthFunc(GL_LESS);
    [snip]
}

Out of curiosity, change the GL_LESS to GL_LEQUAL (I doubt it will change much, though).

quote:

code:
    render_models();

See anything missing?

Post your render_models() code as well.

shodanjr_gr
Nov 20, 2007
I've got a Screen Space Ambient Occlusion related question (far more about graphics "theory" than OpenGL technicalities). I've written my first SSAO shader, and am quite pleased with the initial results.

However, the results get less pronounced the further the object is inside the view frustum. This is a result of the farther objects receiving less precision, and thus the difference between adjacent Z-values being almost zero. Can anyone suggest ways to "counter" this, other than expanding the sampling kernel (which I am already doing)?
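
(One approach that comes up a lot is to linearize the depth values before doing the occlusion comparisons, so the thresholds can be expressed in eye-space units instead of raw depth-buffer values. A GLSL sketch, where zNear/zFar are placeholder uniforms matching the camera projection:)

code:
uniform float zNear;
uniform float zFar;

/* turn a [0,1] depth-buffer value back into an eye-space distance */
float linearizeDepth(float d)
{
	float zNDC = d * 2.0 - 1.0;
	return (2.0 * zNear * zFar) / (zFar + zNear - zNDC * (zFar - zNear));
}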

shodanjr_gr
Nov 20, 2007

Scarboy posted:

This should be really simple but I can't figure it out. I'm drawing a big black quad over the entire iPhone screen. I want to load a texture of a circle onto the screen to REMOVE the black from that area (e.g. see what's behind the big black quad in that area). Any simple way to get this working with blend functions or something obvious that I can't figure out?



You can do this in a few ways (not with blending, though). The first that comes to mind is to use the stencil buffer to mark the fragments affected by the "circle draw" and only draw the quad on the remaining pixels for the final pic.
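
Roughly like this, in desktop GL names (a sketch; it assumes the context was created with a stencil buffer, and drawCircle()/drawBlackQuad() are placeholders for your own geometry):

code:
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);

/* pass 1: write 1 into the stencil wherever the circle covers the screen */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawCircle();

/* pass 2: draw the black quad only where the stencil is still 0 */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawBlackQuad();

glDisable(GL_STENCIL_TEST);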

shodanjr_gr
Nov 20, 2007
I have an issue with some fixed-pipeline code I am writing. I'm running the code on my Acer Aspire One, with a GMA945. The problem is that it looks like it is extremely fill-rate limited, and it feels like it's not being accelerated by the GMA. Now, I understand the crappiness, but this thing runs Unreal Tournament fine at high resolutions, so it's not a lack of support on the hardware side.

I think that the header files or the DLLs I am using are not talking to the driver properly, and thus everything runs in software mode. I can't track down an "Intel-specific" opengl32.dll anywhere...




Does anyone have any suggestions on where I can go from here?

shodanjr_gr
Nov 20, 2007

Stramit posted:

Try DirectX :S It sounds like a DLL-related issue. Are you running UT in GL or DX? If the former, can you track down the DLL that it is using?

I think it's not using the driver DLL but instead the generic Windows one. I just can't figure out how to change that... At runtime it loads OPENGL32.DLL.

shodanjr_gr
Nov 20, 2007

sex offendin Link posted:

Maybe it's falling back to software? Can you request a hardware-only context and see if it refuses or fails to render?

Is there a way to do this with GLUT?

shodanjr_gr
Nov 20, 2007

StickGuy posted:

What exactly are you trying to do? It sounds like you're attempting to use some GL functions that aren't supported by your hardware, which is causing it to default to software rendering.

I'm writing some OpenGL 1.1 (or it could be 1.2) code, and it doesn't seem to be running in hardware mode (it is extremely fill-rate limited). I don't think I am using any unsupported calls...

shodanjr_gr
Nov 20, 2007

StickGuy posted:

You're also being incredibly vague about your application. What is it and what does it do? What do you mean by fill rate limited (i.e. what and how many things are you drawing that need to be filled)? Are you using any shaders? If so, what are you doing with them? What other things are you using? Lights? Textures? Blending? etc, etc

My app loads a low-poly model off an OBJ file, creates a display list and renders it on the screen.

I think it is fill-rate limited, because the framerate is only affected by the size of the viewport and not by the complexity of the model being rendered (a 500*500 window runs at 5-10 fps, a 100*100 window runs way faster).

I am not using any shaders (this is pure fixed-pipeline code), only one light, no blending; I do use textures, though.

shodanjr_gr
Nov 20, 2007
Does Intel release any sort of OpenGL SDK/headers? I've scoured the internet for them, but no luck, and I need some 1.4 functionality that the "standard" Windows headers (stuck at version 1.1, I think) do not offer.

shodanjr_gr
Nov 20, 2007

Avenging Dentist posted:

Doesn't Mesa 3D compile on Windows?

Wow. Thanks for linking this...

I've been coding in OpenGL for half a year now, and this is the first time I've heard of Mesa 3D. Compiling it now!

edit: I compiled it (had to fiddle around with the GLUT header a bit, though) and ran it. It "works", but the performance is just as bad as with the headers that come bundled with Windows... (when I run any of my code, it hits the CPU, HARD). Should I just chalk this up to the general crappiness of the Intel driver?

bonus question:

I'm trying to do some shadowmapping, but I want to avoid using FBOs. Is there a way to read the current depth buffer into a texture using something like glCopyTexSubImage2D? <- fixed this. Turns out that if the target texture is a depth-component one, then glCopyTexSubImage2D will read straight from the depth buffer.
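
For anyone searching later, the call ends up looking something like this (the texture name and sizes are placeholders):

code:
/* after rendering the light's view into the normal depth buffer */
glBindTexture(GL_TEXTURE_2D, shadowTex);   /* a texture created with GL_DEPTH_COMPONENT */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,      /* target, mip level      */
                    0, 0,                  /* offset inside texture  */
                    0, 0,                  /* lower-left of viewport */
                    shadowWidth, shadowHeight);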

shodanjr_gr fucked around with this message at 11:25 on Dec 17, 2008

shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

As for lovely performance, the 915G chipset (a.k.a. GMA950, the most common Intel IGP right now) performs worse than a GeForce 2, so don't be surprised. They're so alarmingly bad that you really only have two sane design decisions: Make your game look like it was made in 1997, or don't support Intel IGPs.

I'm just writing some demo code for a workshop I'm teaching, and most of the PCs at the lab have crappy Intel IGPs in them.


The thing is that performance is WAY too crappy. Let me put it this way:

I render, at 512 * 512 resolution, a scene consisting of 2 spheres and a quad, 3 times (I'm doing shadowmapping), and this thing draws at LESS than 1 frame per second and hammers the CPU like there is no tomorrow.

shodanjr_gr
Nov 20, 2007

Mithaldu posted:

Just for kicks, try dropping the spheres and stick to cubes for now. See what happens.

Tried that, same thing (maybe sliiiiiiiiiiiiightly faster).


I also tried commenting out all calls that are not related to matrix stuff or to geometry production, but the performance is still equally crappy... I have also reinstalled Intel's GMA drivers...


quote:

Oops.

What's wrong with FBOs? (i.e. why would you ever want to use pbuffers over them?)

While I understand (and love) FBOs, I just want to show how shadowmapping works in principle and don't want to overcomplicate the demo. Plus, I can't be sure that all the lab hardware supports them...

shodanjr_gr
Nov 20, 2007

heeen posted:

How can I find out which will be the first actually free index to use for custom attributes?

http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php

At the bottom of the page, there is a table.

shodanjr_gr
Nov 20, 2007
Any suggestions on how I can do some profiling of GLSL code? I've got an app running various shaders in succession and I want to see how each frame's time is divided between those shaders.

I've installed NVIDIA's PerfKit, which comes with the gDEBugger app, but I can't say I found anything that helps me out (PerfHUD, on the other hand, looks awesome... if only I were using DirectX).

shodanjr_gr
Nov 20, 2007
Do a glLoadIdentity(); call before the glOrtho call.

The active matrices do not reset on their own ;).
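
In other words, something along these lines (width/height stand in for whatever your window size is):

code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();                 /* glOrtho multiplies onto the current matrix */
glOrtho(0.0, width, 0.0, height, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();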

shodanjr_gr
Nov 20, 2007
A few things. OpenGL has an ambient lighting state variable that is not tied to a light source. I think it is GL_LIGHT_MODEL_AMBIENT, but don't quote me on that. So make sure that is also set to 0.

Also, make sure that you have not set any GL_EMISSION properties on any materials, since that will make them light up on their own.

Also (just checking :P), make sure that GL_LIGHTING is enabled, or else you'll just get flat shading based on the glColor3f properties of the geometry (or the textures, if you are using textures).
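
As a sketch of what I mean (the state names are the real GL ones, the rest is placeholder):

code:
GLfloat black[] = { 0.0f, 0.0f, 0.0f, 1.0f };

glLightModelfv(GL_LIGHT_MODEL_AMBIENT, black);        /* global ambient term off  */
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, black);  /* no self-illumination     */

glEnable(GL_LIGHTING);                                /* and lighting actually on */
glEnable(GL_LIGHT0);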

shodanjr_gr
Nov 20, 2007
I am writing a small volume renderer using OpenGL, ObjC and a bit of Cocoa. I've been testing it out with data sets from here: https://www.volvis.org

I've tested the typical engine.raw, foot.raw and skull.raw volumes, and they render fine.

However, I've tried some of the larger datasets that are available here (http://www.gris.uni-tuebingen.de/edu/areas/scivis/volren/datasets/new.html) and I seem to have trouble. The 3D texture gets created, but it seems really corrupted, as if data is shifted somehow, but I can't for the life of me figure out what's wrong....


I'm using the 8-bit downsampled sets. I've tried setting the data type of the texture to both GL_BYTE and GL_UNSIGNED_BYTE, but it doesn't fix the issue (I just get the expected "shift" in density values).

To read the data I use the NSData class:

code:
NSData * myData = [[NSData alloc] initWithContentsOfFile:filenameIn options:0 error:myErrors];
and then use the bytes selector to get a pointer to the byte array.


I know it's a bit of a long shot, but have any of you goons by any chance tried to render these sets?

shodanjr_gr
Nov 20, 2007
Thanks for the help, but it turns out I was a moron and was using the wrong datasets :haw: (had the 12-bit and 8-bit ones mixed up on my hard drive).

The 8-bit ones render mostly OK (maybe with the exception of a few slices of noise in backpack.raw).


shodanjr_gr
Nov 20, 2007

TheSpook posted:

Are you making a volume renderer for the iPhone? That sounds pretty neat.

I'm thinking about putting together a little raytracer for Android/the iPhone, just for the experience.

Well, I won't lie to you, I was thinking about it, but I seriously doubt that the iPhone has enough oomph to do even rudimentary real-time volume rendering. The CPU is clocked at 400 MHz and the graphics chip is not programmable (so stuff like ray marching is out of the question). So I'm developing it using ObjC/Cocoa/GLUT on my white MacBook with a 9400M.

A basic raytracer should be much more feasible.
