|
Thug Bonnet posted:My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs? Real answer: No, there is no reason to. You can just wrap any immediate-mode call into a VBO / dynamic VBO and do it that way. Practical answer: I'm lazy and draw full-screen quads in immediate mode all the time :S. It's only 4 verts, it's not that slow, right...
|
# ¿ Jul 9, 2008 01:24 |
|
|
My awesome shadow mapping tutorial (in OpenGL)

Disclaimer: I whipped this up pretty quickly from some old source code. There are many optimisations I can see just from looking through the source etc., but it's a pretty good starting point for getting shadow mapping up and running.

Okay, to start with, there are three main steps to getting shadow maps working. Each is important, and it's best to tackle the problems incrementally. Before you begin you really need to get acquainted with the coordinate spaces you will be using for texture projection and the like. Make sure you understand the notions of Model Space, World Space, Eye Space, and Texture Space, and how all the matrix transformations relate, before trying to delve into texture projection.

The three main steps to getting shadow mapping working are:
1) Texture projection - projection is what is used to get a shadow aligned with the target surface. The best way to start is just by projecting a generic texture onto a surface.
2) Depth render - rendering a depth buffer from the light's perspective for use in a depth comparison.
3) Performing the comparison, so that surfaces which are in shadow are not coloured.

The example I am going to run through here is quite simple: it is a perspective projection, there are really no lights in the scene, and no filtering is applied to the shadow, so it has rough edges.

Texture projection

Things you will need to get this working:
1) A scene that is working
2) A nice camera / transform system which allows access to the transform and projection matrices for the scene.

Texture projection is a relatively simple concept. Much like a camera, a projector can be considered to have a frustum; that is, it has a volume which is projected into. Because of this, a projector has a projection matrix just like a camera. This matrix determines the volume and type of the projection. A projector can project in either orthographic or perspective mode, just like a camera.
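As a warm-up before the real code: here is a tiny standalone sketch (my own illustration, not the code dumps from this post) of what a projector's matrix conceptually does. It maps a world-space point into the projector's normalized device coordinates, assuming a projector at the origin looking down -Z with a symmetric orthographic volume; a perspective projector would do the same thing but with a divide by w.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Orthographic projector sketch: maps x in [-halfW, halfW] and
// y in [-halfH, halfH] to [-1, 1], and depth along -Z from zNear
// to zFar to [-1, 1]. This is the "volume which is projected into".
Vec3 orthoProjectorNDC(Vec3 p, float halfW, float halfH,
                       float zNear, float zFar) {
    Vec3 ndc;
    ndc.x = p.x / halfW;
    ndc.y = p.y / halfH;
    ndc.z = (-p.z - zNear) / (zFar - zNear) * 2.0f - 1.0f;
    return ndc;
}
```

A point halfway into the volume lands at the middle of the [-1, 1] range; this [-1, 1] range is what the offset matrix (covered below) remaps for the texture lookup.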
For information on how the projection matrices work, check the OpenGL specification if you don't already know. In the following example, two passes happen:
1) The scene is rendered using standard textures.
2) The scene is rendered again using the texture projection with an additive blend. Where the texture is black, no colour is applied; where the texture is coloured, colour is additively blended in.
code:
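(The additive pass is plain fixed-function blending; with glBlendFunc(GL_ONE, GL_ONE) the hardware computes destination + source per channel. A CPU-side sketch of that, my own illustration rather than code from this post:)

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// What an additive blend (glBlendFunc(GL_ONE, GL_ONE)) computes per colour
// channel: destination + source, clamped to [0, 1]. Where the projected
// texture is black (src == 0) the framebuffer is left unchanged.
float additiveBlend(float dst, float src) {
    return std::min(dst + src, 1.0f);
}
```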
Before looking into the shader itself it's probably best to quickly cover the texture projection class. The code is documented and should provide the needed information; the only really interesting thing to note is that an offset matrix is required to convert points from the post-projection range of (-1, 1) into a range of (0, 1) for the texture lookup. code:
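(The offset matrix itself is just a scale-and-translate by one half on each axis. A standalone sketch of what it does per component, my own illustration rather than the class from the dump:)

```cpp
#include <cassert>
#include <cmath>

// The offset matrix, laid out as rows:
//   0.5  0    0    0.5
//   0    0.5  0    0.5
//   0    0    0.5  0.5
//   0    0    0    1
// Applied to a post-divide coordinate it does, per component:
float offsetToTexRange(float ndc) {
    return ndc * 0.5f + 0.5f;   // maps [-1, 1] onto [0, 1]
}
```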
code:
code:
[screenshot]

Writing the depth information from the light's perspective

The first thing you need to do is set up a framebuffer object (FBO) with a depth texture. You probably won't need a colour buffer, as you won't be writing colour during the depth write stage. This is a little codedump on how to do it; if you want an explanation I can provide one, but it's a bit out of the scope of this tutorial. The code simply creates a depth texture which can be written to and read from. code:
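(A sketch of that state setup, reconstructed from memory rather than taken from the dump; it assumes a GL context and function loader are already in place, and uses the core GL 3.0+ entry points rather than the 2008-era EXT suffixes:)

```cpp
#include <cassert>
// Assumes a GL loader header (e.g. <GL/glew.h>) is included and a
// context is current; this is GL state setup, not standalone code.

GLuint depthTex = 0, fbo = 0;

// A depth texture the light pass can render into and the shadow pass can read.
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Depth-only FBO: no colour attachment, so disable colour draws/reads.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
```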
code:
[screenshot]
|
# ¿ Aug 4, 2008 13:30 |
|
(Was a bit long for one post.)

Doing the depth comparison

Now for the fun part, putting it all together! What we need to do here is render the scene from the camera's point of view, but perform a projection (using the depth texture) from the light's point of view. At each texel on the screen we then compare the value in the light's depth buffer to the distance from that texel to the light. If the value in the depth buffer is closer, then that texel is in shadow; otherwise it is not in shadow. I set up my pass as follows. Here we bind the depth texture as well as the colour texture; the shader is written to only apply the diffuse colour if the texel passes the depth test: code:
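(The comparison itself boils down to one test per texel. A CPU-side sketch of what the shader's depth test does, my own illustration with illustrative names, including the small bias you typically need to avoid self-shadowing, one of the gotchas mentioned below:)

```cpp
#include <cassert>

// True if the fragment is in shadow: the light's depth map recorded
// something closer to the light than this fragment. The bias guards
// against self-shadowing from depth precision ("shadow acne").
bool inShadow(float depthFromMap, float fragDepthInLightSpace,
              float bias = 0.0005f) {
    return fragDepthInLightSpace - bias > depthFromMap;
}
```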
code:
[screenshot]

I'm sure you have heaps of questions about all this. It's a pretty decent technique, but there are a few little gotchas along the way. Good luck with the implementation! If anyone else has any 'tutorial' requests I can whip them up pretty easily. I have a decent code base with quite a few nifty little demos. Some I could write up if anyone is interested are:
Screen space ambient occlusion
Deferred shading
Using (and abusing) the Phong shading model
Stencil shadows
Normal / parallax mapping
A simple HDR pipeline
|
# ¿ Aug 4, 2008 13:33 |
|
shodanjr_gr posted:I have an issue with some fixed pipeline code I am writing. I'm running the code on my Acer Aspire One, with a GMA945. The problem is that it looks like it is extremely fill-rate limited, and it feels like it's not being accelerated by the GMA. Now I understand the crappiness, but this thing runs Unreal Tournament fine at high resolutions, so it's not a lack of support on the hardware side. Try DirectX :S It sounds like a DLL-related issue. Are you running UT in GL or DX? If the former, can you track down the DLL that it is using?
|
# ¿ Dec 10, 2008 22:42 |
|
Spite posted:I think it's absolutely horrible, even if you find a binding that doesn't create some over-engineered OO paradigm. You are wrong. LWJGL is a great binding. It uses native buffers to imitate float pointers and the like, and the API is for the most part a direct binding. There are some Display classes and such to get it started that are not GL specific, but all the OpenGL code will be. http://lwjgl.org/
|
# ¿ Jul 9, 2009 07:47 |
|
RussianManiac posted:I also had fun time using JOGL - https://jogl.dev.java.net/. From what I remember, JOGL does a bit more than be a straight-up GL wrapper. LWJGL (the GL portion at least) is a straight-up wrapper: functions have the same names and take the same arguments, and it just pipes the commands down to native code via JNI. I think JOGL tries to do more, but it has been a few years (I jumped on the C++ bus). e: I was wrong about JOGL, it is apparently 'just a binding'. quote:JOGL differs from some other Java OpenGL wrapper libraries in that it merely exposes the procedural OpenGL API via methods on a few classes, rather than attempting to map OpenGL functionality onto the object-oriented programming paradigm. Indeed, the majority of the JOGL code is autogenerated from the OpenGL C header files via a conversion tool named Gluegen, which was programmed specifically to facilitate the creation of JOGL. stramit fucked around with this message at 08:51 on Jul 9, 2009 |
# ¿ Jul 9, 2009 08:48 |
|
And whilst all this is 'good practice', if you are writing a little engine to play around with at home it's mostly overkill, and it's probably better to spend time writing features than optimising the poo poo out of context changes / synchronisation. Not having a go, this is really important stuff. Just trying to say that if you are just learning to program graphics there are better ways to spend your time.
|
# ¿ Apr 30, 2010 01:40 |
|
PDP-1 posted:Are there any tips or tricks for debugging shader files? Use PIX for Windows. It comes with the DirectX SDK and allows you to do a variety of things: you can debug a pixel (step through each render call that wrote to that pixel), check render state, and a whole bunch of other features. If your application is in DX10 then it should be pretty easy to use. DX9 PIX is a bit flaky but will still get the job done. I prefer these to specific 'shader' debuggers as you can check a variety of things and it is based on what YOUR application is doing. stramit fucked around with this message at 00:09 on May 12, 2010 |
# ¿ May 12, 2010 00:07 |
|
tractor fanatic posted:I'm writing a little program with a top-down camera (like Diablo) and I want to add some shadows. Is shadow volume or shadow mapping a better option for this? I'm concerned shadow mapping will produce aliasing effects since the light sources will be close to the ground but cast long shadows, but I've read in this thread that shadow volumes are more or less unused these days? There are some specific situations where shadow volumes are useful, but for the most part they are not really used. You can filter your shadow maps to reduce aliasing effects, or increase the size of your shadow map buffer. If they are good enough for AAA titles then they should be good enough for you.
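(On the filtering point: the simplest option is percentage-closer filtering (PCF), which averages the binary shadow comparison over a small neighbourhood of shadow-map texels instead of sampling just one. A CPU-side sketch over a 3x3 kernel, my own illustration with illustrative names:)

```cpp
#include <cassert>
#include <cmath>

// Percentage-closer filtering sketch: average the in/out comparison over a
// 3x3 neighbourhood of shadow-map texels. Returns a light factor in [0, 1];
// in-between values give the soft shadow edges that reduce aliasing.
// 'map' is a square depth map of side 'size', row-major.
float pcf3x3(const float* map, int size, int x, int y, float fragDepth,
             float bias = 0.0005f) {
    float lit = 0.0f;
    int samples = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx < 0 || sy < 0 || sx >= size || sy >= size) continue;
            // Texel counts as lit if the stored depth is not closer than us.
            if (fragDepth - bias <= map[sy * size + sx]) lit += 1.0f;
            ++samples;
        }
    }
    return samples ? lit / samples : 1.0f;
}
```

In a shader this is the same loop over neighbouring texture coordinates (or a hardware sampler comparison doing it for you); the averaged result is multiplied into the diffuse colour instead of a hard on/off test.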
|
# ¿ Jun 6, 2010 07:11 |
|
|
I found some awesome OpenCL tutorials here: http://macresearch.org/opencl Subscribe in iTunes and they have video presentations with slides and everything... reminds me of being back at uni. Oh how times change! If you know anything about the GPU I would skip the first one and start on the second. Things don't really get interesting until the third, though; the second goes over how OpenCL works (the first lectures just explain what GPGPU is etc.). I'm going to do some particle simulations or a raytracer in GPGPU when my new mac arrives, I think. Yay
|
# ¿ Jun 23, 2010 06:57 |