|
Hubis posted:I have high confidence you could get this working on an NVIDIA Ion system That is very likely, considering that the GPU is the same as the one on the MacBook I'm currently working on.
|
# ¿ May 14, 2009 11:33 |
|
Can't help you with the selection buffer bit, but I caught this while going over your code:
code:
Any particular reason you are doing that?
|
# ¿ May 15, 2009 18:58 |
|
PnP Bios posted:I imagine most of the changes in 3.0 have to do with GLSL rather than the core API. ex, implementation of geometry shaders. Geometry shaders had been in as an extension before 3.0 came out. 3.0 was a horrible release for the most part though. Khronos had been promising blow-jobs and a new API all around, and they ended up with an incremental update instead of a major revision...
|
# ¿ Jun 6, 2009 05:50 |
|
OneEightHundred posted:As for getting tutorials for it: Try to think of OpenGL 3.0 as OpenGL 2.2, except with "GL3/gl3.h" as your include file. What you really want to get in to is using GLSL instead of the fixed-function poo poo, since they hardly changed anything else. I'd actually like to track down a tutorial for the new "direct state access" dealie. I can see that making my OpenGL code a bit more readable. Also, is there a go-to tutorial for using stream-out buffers?
|
# ¿ Jun 6, 2009 17:22 |
|
Since we are talking about point sprites, is it possible to get to the point-sprite generated geometry inside a geometry shader?
|
# ¿ Jun 18, 2009 02:19 |
|
Spite posted:If you are using geometry shaders (and really, they kind of suck since they aren't very performant), why not extrude a single vertex into a quad yourself? It could possibly be faster if done natively by the GPU.
|
# ¿ Jun 18, 2009 09:12 |
|
code:
code:
Why would you want to do displacement mapping in projection space?
|
# ¿ Aug 10, 2009 12:13 |
|
heeen posted:I'm doing adaptive subdivision surfaces, and I'm projecting the control mesh first and do the subdivision on the already projected points. This way I can calculate the error to the limit surface depending on perspective and I save a lot of matrix-vector multiplications. Interesting. So you only subdivide the points that actually matter to the viewer; that makes sense. Do you have any relevant papers that you can link to? I like learning new stuff.
|
# ¿ Aug 10, 2009 20:39 |
|
http://kotaku.com/5335483/new-cryengine-3-demo Anyone got any info/links for the technique demonstrated by Crytek in the linked video? edit: http://www.crytek.com/fileadmin/user_upload/inside/presentations/2009/Light_Propagation_Volumes.pdf white paper here! shodanjr_gr fucked around with this message at 17:40 on Aug 12, 2009 |
# ¿ Aug 12, 2009 17:32 |
|
BattleMaster posted:Wait people still think that graphics and gameplay are tradeoffs? Even though the skills needed to develop either don't really overlap? Fallout 3 would have been so much better if it didn't use 3D graphics. Bethesda should have opted for ASCII visuals instead, they improve gameplay :iamafag:.
|
# ¿ Aug 12, 2009 19:45 |
|
sex offendin Link posted:I could barely follow that Crytek whitepaper, but just enough to see that my guess was completely wrong. I didn't think we had reached the point where a true volumetric effect like that was possible, I thought screenspace tricks were still a necessity. They use a reflective shadowmap (downsampled render target from the light's POV) to capture the first bounce, but I'm trying to figure out exactly what they do after that...it looks like they generate point light sources from the RSM which are then stored into a volume and their radiance iteratively propagated?
|
# ¿ Aug 12, 2009 20:22 |
|
Contero posted:Where do you guys look for research-y type free models to test out rendering techniques with? Can't say I've ever seen any "reference" terrain models (like the Cornell Box/Sponza Atrium/Stanford Bunny/Little Buddha/Dragon etc.). Just do what MasterSlowPoke suggests: displacement mapping on a planar mesh with reads from a Perlin noise texture. It gives some very nice results. If you feel like it, you can do coloring based on the displacement value to simulate water/snowy mountains/whatever.
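Not from the thread, but a minimal sketch of that noise-displaced terrain idea, with a cheap hash standing in for a real Perlin noise texture (fakeNoise, buildTerrain, and the height thresholds are all invented for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in for a real Perlin noise texture read: an integer hash
// mapped to [0, 1]. Swap in proper Perlin/simplex noise for real use.
double fakeNoise(int x, int y) {
    unsigned int h = (unsigned int)x * 374761393u + (unsigned int)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFu) / 65535.0;
}

struct TerrainVertex { double x, y, z; double r, g, b; };

// Build a displaced planar grid: height from noise, color from height
// (blue = water, green = land, white = snowy peaks).
std::vector<TerrainVertex> buildTerrain(int size, double maxHeight) {
    std::vector<TerrainVertex> verts;
    for (int j = 0; j < size; ++j) {
        for (int i = 0; i < size; ++i) {
            double h = fakeNoise(i, j) * maxHeight;   // displacement value
            TerrainVertex v{(double)i, h, (double)j, 0.0, 0.0, 0.0};
            double t = h / maxHeight;                 // normalized height
            if (t < 0.3)      { v.b = 1.0; }            // water
            else if (t < 0.8) { v.g = 1.0; }            // grass
            else              { v.r = v.g = v.b = 1.0; } // snow
            verts.push_back(v);
        }
    }
    return verts;
}
```

On the GPU you would do the same thing in a vertex shader (read the noise texture, offset the vertex, pick a color from the height) instead of on the CPU.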
|
# ¿ Aug 13, 2009 17:03 |
|
Can anyone recommend a "robust" camera class/mini-library with the ability to switch between arcball/flythrough manipulation modes etc. on the fly?
|
# ¿ Apr 23, 2010 00:12 |
|
So I am working on getting Ogre3D to work inside a CAVE environment. I've written some code for calculating off-axis projection matrices for an arbitrary viewport (defined by the bottom-left, top-left and bottom-right corners) and an arbitrary eye position. If I define a viewport with its center at (0,0,-1), extending 1 unit to each side (so that the top frustum edge is at (0,1,-1), the bottom one at (0,-1,-1), etc.) and I place my eye at (0,0,0) in this "frustum space", I get this sort of result: (screenshot: 797x750 image). While I am expecting to get this (which is what gets produced if I just create a viewport with an aspect ratio of 1.0): (screenshot: 800x747 image). I've actually tried two different ways to calculate the off-axis projection matrices (one on my own, one ripped off from Syzygy, a VR library) and I get the same result. My intuition is that the custom projection matrix ends up having a far larger FOV than the non-custom one... Any ideas?
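For reference, a sketch of the kind of off-axis frustum computation being described (a Kooima-style generalized perspective projection; the corner naming and near-plane scaling here are assumptions, not the poster's actual code). Feeding the resulting extents into glFrustum should reproduce the symmetric 90-degree case when the eye sits centered in front of the screen:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Frustum extents at the near plane, suitable for glFrustum(l, r, b, t, n, f).
struct Frustum { double l, r, b, t; };

// pa = bottom-left, pb = bottom-right, pc = top-left screen corners,
// pe = eye position, n = near plane distance.
Frustum offAxisFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 pe, double n) {
    Vec3 vr = normalize(sub(pb, pa));    // screen right axis
    Vec3 vu = normalize(sub(pc, pa));    // screen up axis
    Vec3 vn = normalize(cross(vr, vu));  // screen normal, toward the eye
    Vec3 va = sub(pa, pe), vb = sub(pb, pe), vc = sub(pc, pe);
    double d = -dot(va, vn);             // perpendicular eye-to-screen distance
    double s = n / d;                    // scale extents onto the near plane
    return {dot(vr, va) * s, dot(vr, vb) * s, dot(vu, va) * s, dot(vu, vc) * s};
}
```

A quick sanity check for the FOV suspicion: with the screen at distance 1 extending ±1 and the eye centered, the half-angle is atan(1/1) = 45 degrees, i.e. a 90-degree vertical FOV. If the non-custom path defaults to something like 45 degrees total, the two images would differ exactly the way described.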
|
# ¿ Jul 19, 2010 16:44 |
|
PDP-1 posted:I ran into something I don't understand today while working on a shader - I was sampling a mipmapped texture and the shader ran fine. Then I changed the texture and forgot to generate the mipmap and the framerate absolutely tanked. When I generated the mipmap on the new texture things ran great again. Better data locality. If you map the same region of the texture onto a surface, the mipmapped version requires fewer memory fetches.
|
# ¿ Jul 22, 2011 07:09 |
|
I've run into an OpenGL/OpenCL multithreading/resource sharing question. I have an app that uses multiple threads to poll a bunch of Kinects for depth/rgb frames. It then renders the frames into separate OpenGL contexts (right now there is a 1-1 correspondence between an image and an OpenGL context). To my understanding it is possible to get OpenGL contexts to share display lists and textures (I'm using Qt for the UI and it offers such functionality and bone stock OpenGL does it as well). However, I haven't found it explicitly stated anywhere that more than two contexts can share resources. Additionally, I plan to add some OpenCL functionality that basically does computation on these images and outputs results that I also want to be able to render in the aforementioned OpenGL contexts. Now, OpenCL allows you to interop with OpenGL by defining a single OpenGL context to share resources with. The overarching question is whether I can "chain" resource sharing between contexts. As in, when my application starts, create a single "parent" OpenGL context, then have ALL other OpenGL and OpenCL contexts (that may reside in other threads) actually share resources with that "parent" context and as an extension share resources with each other?
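A sketch of the "parent context" chaining idea, assuming Qt's (now legacy) QGLWidget API — every widget shares with one hidden root widget, so all contexts end up in the same share group; whether the driver actually honors this still has to be checked per widget via isSharing():

```cpp
#include <QGLWidget>

// Created once at application startup, never shown. Name is illustrative.
QGLWidget* g_shareRoot = nullptr;

QGLWidget* makeSharedGLWidget(QWidget* parent) {
    if (!g_shareRoot)
        g_shareRoot = new QGLWidget();  // the "parent" context

    // Second constructor argument requests sharing with the root's context.
    QGLWidget* w = new QGLWidget(parent, g_shareRoot);

    // Sharing can silently fail (e.g. driver refuses); always verify.
    if (!w->isSharing()) {
        // fall back to per-context resource copies here
    }
    return w;
}
```

An OpenCL context created for GL interop would then be pointed at g_shareRoot's GL context, which (if every isSharing() check passed) makes its objects reachable from all the widget contexts too.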
|
# ¿ Nov 1, 2011 23:22 |
|
Paniolo posted:Why are you using separate OpenGL contexts for everything? I have different widget classes that handle visualizing different types of data and Qt makes no guarantee that they will share an OpenGL context.
|
# ¿ Nov 2, 2011 03:08 |
|
Spite posted:If you are using multiple contexts you MUST use a separate context per thread. That's what I ended up doing. I create a context on application launch and then any other contexts that are initialized share resources with that one context. quote:Keep in mind you've now created a slew of race conditions, so watch your locks! quote:You probably want to re-architect your design, as it doesn't sound very good to me. Also, can't Qt pass you an opaque pointer? You can't hold a single context there and lock around its use? Or have a single background thread doing the rendering?
|
# ¿ Nov 3, 2011 19:40 |
|
Spite posted:This is way complicated. Have you tried a single OpenGL context that does all your GL rendering and passes the result back to your widgets? The widgets can update themselves as the rendering comes back. Remember: there's only one GPU so multiple threads will not help with the actual rendering. You mean as in having a single class that manages the context and rendering requests get posted to that class which does Render-To-Texture for each request then returns the texture to a basic widget for display? quote:Also GPU and CPU resources are separate from each other unless you are using CLIENT_STORAGE or just mapping the buffers and using that CPU side. You can track what needs to be uploaded to the GPU by just making dirty bits and setting them. Multiple threads should not be trying to update the same GPU object at once in general - that gets nasty very fast. That's what I plan on doing...basically have each wrapper for my resources carry a CPU-side version ID and a GPU-side version ID and then a function that ensures consistency between the two versions. I am also providing a per-resource lock so technically more than one thread should not be locking the same resource for writing at the same time (either on the CPU or the GPU side).
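A minimal sketch of that CPU/GPU version-ID scheme (class and method names invented for illustration; uploadToGPU() stands in for the real glBufferSubData/glTexSubImage2D work, and the lock is the per-resource lock mentioned above):

```cpp
#include <cassert>
#include <mutex>

// Wrapper around a resource that lives both CPU-side and GPU-side.
// The two version counters are the "dirty bit" mechanism: whenever
// they differ, the GPU copy is stale and needs an upload.
class SharedResource {
public:
    void modifyOnCPU() {
        std::lock_guard<std::mutex> lock(mutex_);
        ++cpuVersion_;                 // CPU copy is now newer than the GPU copy
    }

    // Call with the owning GL context current before using the resource.
    // Returns true if an upload actually happened.
    bool ensureGPUCurrent() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (gpuVersion_ == cpuVersion_) return false;  // already consistent
        uploadToGPU();
        gpuVersion_ = cpuVersion_;
        return true;
    }

private:
    void uploadToGPU() { /* glBufferSubData / glTexSubImage2D / ... */ }

    std::mutex mutex_;                 // per-resource lock, as described above
    unsigned cpuVersion_ = 0;
    unsigned gpuVersion_ = 0;
};
```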
|
# ¿ Nov 6, 2011 02:16 |
|
ickbar posted:I apologize complete Noob here, don't have too much experience with OpenGl but i'm trying to make a hack for an game using a proprietary Opengl engine with no available SDK or source-code to download. Shot in the dark here but maybe they are using some extension wrangler (like GLEW) to get access to various API entry points? (e: thus mangling up the symbols/names)
|
# ¿ Dec 25, 2011 18:32 |
|
I got a question about GLSL tessellation shaders. I got some geometry that I'm rendering either as GL_QUADS (through a vertex shader) or GL_PATCHES (through a vertex -> tess control -> tess eval shader chain). The VBO is exactly the same (same vertex definitions and indices). When I look at the wireframe of the GL_QUADS version of the geometry, it shows, as expected, quads. When I look at the wireframe of the GL_PATCHES version, however, each quad is subdivided into two triangles. My tessellation control shader has layout(vertices=4) out; set at the top and my tessellation evaluation shader is set to layout(quads) in;. Is there some way to work around this issue or am I stuck with looking at my quads with a line in the middle? (I'm asking because I want to make figures for a paper I'm writing, and having to explain that "I swear I'm issuing 4-vertex patches to the GPU instead of two triangles" might not jive very well...)
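For context, a minimal quad-domain tessellation evaluation shader of the kind described (the corner ordering assumed for the bilinear mix is a guess). Note that even with layout(quads), the fixed-function tessellator emits triangles — which is exactly why the diagonal shows up in wireframe:

```glsl
// Tessellation evaluation shader: gl_TessCoord.xy bilinearly
// interpolates the four patch corners. Corner order 0-1-2-3 around
// the quad is an assumption about the input winding.
#version 400
layout(quads, equal_spacing, ccw) in;

void main() {
    vec4 bottom = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x);
    vec4 top    = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x);
    gl_Position = mix(bottom, top, gl_TessCoord.y);
}
```

For paper figures, one common workaround is to skip glPolygonMode wireframe entirely and draw the quad edges yourself in a fragment shader (darkening fragments where gl_TessCoord is near 0 or 1), which hides the internal diagonal.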
|
# ¿ Mar 11, 2012 18:46 |
|
Are const arrays inside shader code supposed to be blatantly slow? I have a tessellation control shader that needs to index the vertices of the input primitive based on the invocation ID (so gl_InvocationID == 0 means that the TCS operates on the edge between gl_in[0] and gl_in[1], etc.). Initially, my code had an if-statement (which I would assume GPUs don't like that much when it diverges inside the same execution unit) to make this determination. I figured that I could flatten the vertex indices out into a single const int[8] and index them based on the invocation ID (so I could say indices[gl_InvocationID * 2] and indices[gl_InvocationID * 2 + 1] and get the stuff that I need). However, doing this seems to hit me with a 66% performance drop compared to using if-statements! Would passing the index array as a uniform yield a performance benefit?
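A sketch of the flattened-index approach in question (the edge layout in edgeVerts and the placeholder tessellation levels are assumptions):

```glsl
// Tessellation control shader: invocation i handles the edge between
// vertices edgeVerts[2i] and edgeVerts[2i+1]. This const array is the
// dynamically-indexed table being discussed.
#version 400
layout(vertices = 4) out;

const int edgeVerts[8] = int[8](0, 1,  1, 2,  2, 3,  3, 0);

void main() {
    // Pass the control point through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    vec4 e0 = gl_in[edgeVerts[gl_InvocationID * 2]].gl_Position;
    vec4 e1 = gl_in[edgeVerts[gl_InvocationID * 2 + 1]].gl_Position;
    // ...derive a per-edge level from e0/e1 (screen-space edge length, etc.)...
    gl_TessLevelOuter[gl_InvocationID] = 4.0;  // placeholder level

    if (gl_InvocationID == 0) {
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelInner[1] = 4.0;
    }
}
```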
|
# ¿ Mar 19, 2012 22:55 |
|
Hubis posted:What graphics card are you seeing this with? Depending on the hardware, the problem probably isn't the const array, but the fact that you are using dynamic indexing into the array. Some GPUs don't really support base+offset indexing, instead mimicking it using registers. Unfortunately, if you index the arrays dynamically, this requires all the accesses to be expanded (either by unrolling loops, or expanding into a giant nasty set of branches). So you could actually be gaining MORE branches, instead of eliminating them. This is on a Quadro 5000. quote:Why do you need to index the edges like you are doing? Your best bet would be to structure the input to your shader so that it doesn't need to branch at all, even if that means just adding extra interpolants. I'm not sure if that would work for you here, though. quote:e: There's no way to see intermediate assembly with GLSL, right? For DirectX, you could use FXC to dump the shader, which might show you if that were happening at the high level (though not if it's being introduced at the machine-code translation stage). I believe there are utilities released by AMD/NVIDIA that will compile GLSL down to machine-level assembly for you...
|
# ¿ Mar 20, 2012 15:16 |
|
Hubis posted:Higher quality bins of chips, much better handling of geometry-heavy scenes (not just lots of triangles, but lots of small triangles), and the driver and technical support commensurate with a workstation-level GPU (not just perf, but some weird/edge-case features that don't matter much for consumers but might for a professional system). That's very true. I work at a research university and I've been able to ping both NVIDIA and AMD engineers regarding weird behaviors/driver issues/optimizations with their professional-level cards. I assume that if you are buying GeForce/Radeon you can't really call 'em up and say "Why do the textures in Rage look like crap? Send me a custom driver that fixes this!".
|
# ¿ Mar 21, 2012 23:25 |
|
quote:glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); I am pretty sure glTexParameter() calls are applied PER TEXTURE and not globally. That means that the currently bound texture (from the last call to glBindTexture) will have its state affected by these. Also, I believe that you need to provide minification/magnification filters for an OpenGL texture to be complete. What ends up happening is, if you comment out the background call (including the glBindTexture), the active texture at the end of render() will be your teapot_texture_id. During the next render, the glTexParameteri calls get applied to teapot_texture_id and the texture becomes valid for usage. Then you switch to the plane texture (which is not properly set up, hence your ground plane being screwed up) and then back to the teapot texture, and your teapot renders fine. Try applying these texture sampler settings to EACH texture during initialization. You generally do not want to be reapplying them at runtime, unless you actually need to change their state.
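A sketch of the "configure each texture once at init" pattern being suggested (GL 2.x-era API, matching the code under discussion; the helper name is made up):

```c
#include <GL/gl.h>

/* Create a texture with its sampler state set once, at init time.
 * With plain GL_LINEAR filters the texture is complete without mipmaps. */
GLuint makeTexture(const void* pixels, int w, int h) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* These stick to THIS texture object, not to global state. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

/* At render time, only glBindTexture(GL_TEXTURE_2D, tex) is needed;
 * the filtering state travels with the texture object. */
```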
|
# ¿ Sep 6, 2012 08:01 |
|
RocketDarkness posted:Dang, you hit the nail on the head. Tossing those lines of code in after each BindTexture call fixed it right up. I really appreciate it! And thanks to everyone else that spent any time looking, as well. No problem. Just to reiterate: there isn't much point in setting the texture filtering modes every time you render (and it probably hurts performance-wise). Just do it once when you initialize the textures and then only change them when you really need to. Also, use GL_LINEAR.
|
# ¿ Sep 6, 2012 18:42 |
|
e: Nevermind, I'm a moron.
shodanjr_gr fucked around with this message at 06:20 on Oct 23, 2012 |
# ¿ Oct 23, 2012 06:09 |
|
Is there an OpenGL debugger that works in Windows 8.1? I've tried gDEBugger (both the Graphics Remedy version and the new AMD one) and they all blow up on me when breaking and trying to inspect textures. NVIDIA's Nsight doesn't seem to like the fact that I'm rendering in OpenGL within a Qt widget (most/all of the fancy HUD stuff does not work).
|
# ¿ Jan 18, 2014 04:17 |
|
Boz0r posted:I tried moving my raytracing project to my desktop Windows PC to get some more power, but when I try running it, I get the most vague error message: If you just copied the executable from your development machine to the desktop, chances are you are missing either some DLLs for your external dependencies (e.g. GLUT) or the Visual Studio runtime that your raytracer was compiled against. Each version of Visual Studio has a different set of DLLs that implement various bits and pieces of C/C++ functionality, and those have to be either in your PATH system variable or in the working directory of the application (generally, the same directory as your executable). If you Google "Visual Studio 20XX runtime" you will find download links directly from Microsoft. You might wanna consider building your code locally on your desktop and running it with a debugger attached; that should give you more information about what's happening.
|
# ¿ Feb 24, 2014 22:13 |
|
roomforthetuna posted:You can also (more sensibly in my opinion) configure it to do static linking so that you don't have to distribute those files with your exe, for any project that isn't going to be made of a bunch of modules. Assuming you're not using MFC or something. That's actually a better approach for something small-scale. I also believe that GLUT can be statically linked. On another note, is there an OpenGL debugger that works properly in Windows 8.1? I used to use gRemedy's gDEBugger in Windows 7 and it worked great for me, but since I moved to Win 8.1, it crashes whenever I try to look at textures/buffers at a breakpoint. I tried AMD's version as well, but it crashes the same way. NVIDIA's Nsight graphics debugger doesn't want to debug non-core GL contexts. edit: To answer my own question, AMD's GPU PerfStudio 2 seems to work fine on NVIDIA cards (including GLSL on-the-fly editing and all the nice HUD injection stuff). The UI is somewhat more janky than gDEBugger's (slower, .NET-based, and talks to a server over HTTP) but it gets the job done, and the profiling tools are better. However, it doesn't do other stuff that gDEBugger does, like allowing you to break on certain GL calls, and it doesn't seem to be able to show you stack traces that lead to API calls either. edit 2: Actually, the Frame Debugger and API Trace functionality works, but the frame profiler doesn't, since it seems to need access to low-level hardware counters. Bummer... shodanjr_gr fucked around with this message at 07:14 on Feb 26, 2014 |
# ¿ Feb 26, 2014 05:07 |
|
I haven't used JOGL, so I'm not sure what it does internally, but I'm noticing a couple of things wrong with your code. You start a draw call (gl.glBegin()) and then inside of it you render the vertices for both of your primitives. I'm pretty sure that non-vertex state cannot be changed within a draw call (between glBegin() and glEnd()). You should do your glBindTexture() call outside of the glBegin()/glEnd() block and only submit vertex geometry within that block. Probably what happens in your situation is that the second glBindTexture() call within your draw call ends up getting applied after the draw call ends and is used as the active texture for the draw calls in the next frame. You probably want to rewrite your Graphics2D draw() function like this: code:
code:
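A hedged reconstruction of the rewrite being described (JOGL-style; the method name and texture fields — draw, tex1, tex2 — are all assumed, not the poster's actual identifiers):

```java
// Bind each texture OUTSIDE its glBegin()/glEnd() block, and use one
// draw call per texture instead of one draw call for both primitives.
public void draw(GL2 gl) {
    gl.glBindTexture(GL2.GL_TEXTURE_2D, tex1);  // state change before the draw call
    gl.glBegin(GL2.GL_QUADS);
    // ...glTexCoord2f/glVertex3f for the first primitive only...
    gl.glEnd();

    gl.glBindTexture(GL2.GL_TEXTURE_2D, tex2);  // rebind between draw calls
    gl.glBegin(GL2.GL_QUADS);
    // ...vertices for the second primitive...
    gl.glEnd();
}
```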
|
# ¿ Mar 8, 2014 09:30 |
|
MarsMattel posted:Not sure if this is the right thread, but it seems the best fit. Which OpenCL version is this? http://www.khronos.org/message_boards/showthread.php/6788-Multiple-host-threads-with-single-command-queue-and-device This is old, but apparently 1.0 does not support thread-safe kernel enqueueing?
|
# ¿ May 11, 2014 08:22 |
|
fritz posted:OK, I'm now using a model/view/projection thing, and every prism has its own model matrix, so it's something like: Also, if you are drawing hundreds/thousands of these, you might wanna look into instanced rendering, since your geometry is the same across all draw calls.
|
# ¿ Oct 19, 2014 20:00 |
|
roomforthetuna posted:It looks like I need to just ask this because outdated answers from the internet are no answers at all. Might be a bit of overkill, but you could use OpenSceneGraph... especially if you are not rolling your own shaders.
|
# ¿ Nov 4, 2014 05:45 |
|
nye678 posted:My guess would be that your VM's graphics driver cannot create a 3.3 context. Try commenting out the hints for the GL version and see if it won't create a window for you. I believe glfw will create the window with whatever the highest possible context version is, so check out what it gives you after the fact. That's most likely the case... I was messing around with OpenGL inside Windows VMs on OS X and I think none were able to generate a context with version > 2.1 (that was a year or so ago)...
|
# ¿ Jan 8, 2015 04:42 |
|
fritz posted:Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok. If you try VMWare Fusion or Parallels, at least you will get a context that you can compile GLSL in.
|
# ¿ Jan 8, 2015 23:03 |
|
Nahrix posted:This is what I'm aiming at: packing multiple meshes into a single draw call. My confusion lies in how the pixel shader would know which index in the texture array to call for that pixel. Right now, I just reference a single texture, and use a texcoords variable to find which pixel color to draw. If there's an array of textures in there, what's a good way of finding which texture to start with when referencing coordinates? Simple solution: pass the texture-array index for each piece of geometry as a vertex attribute, and pass that through to the pixel shader. Less simple solution: encode the index of the texture array into one of the vertex attributes you are already passing to the vertex shader (e.g. use 2 bits from the x tex coord and 2 bits from the y tex coord to encode up to 16 indices). Depending on the size of your atlases and the precision of your vertex attributes, you might be able to get away with this without a loss of image quality. Less less simple solution: pack all your individual textures into a single huge texture (you can go up to 16K by 16K these days) and then post-process the texture coordinates of your meshes to index directly into it. This is what Sex Bumbo meant by "texture atlas".
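A sketch of the "simple solution" on the shader side, assuming a sampler2DArray plus a per-vertex integer index attribute passed through from the vertex shader (all names invented):

```glsl
// Fragment shader: the flat integer selects the texture-array layer.
// vTexIndex originates from a per-vertex attribute and must be
// flat-qualified, since integer varyings cannot be interpolated.
#version 330
in vec2 vTexCoord;
flat in int vTexIndex;
uniform sampler2DArray uTextures;
out vec4 fragColor;

void main() {
    fragColor = texture(uTextures, vec3(vTexCoord, float(vTexIndex)));
}
```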
|
# ¿ Apr 2, 2015 23:48 |
|
Nahrix posted:I see. I think a modified version of your second method ("Less simple") would ideally work for me. Now that I know what an atlas is, I'm not sure it would work, in a practical sense, for a scene that has models dynamically added / removed, as it would require a reprocessing of the atlas, and putting it back on the graphics card. quote:What I mean by "modified version" is using a second TEXCOORD (or other variable in the layout; I'm not familiar with them all yet), so I don't potentially cause issues with drawing textures. Although, I'm thinking that you suggested using a few bits in an existing variable, because it's an unreasonable amount of waste to use an entire other TEXCOORD (or other variable), for a texture index. Exactly! The whole idea of approach #2 is that you don't actually use an additional variable in your vertex layout and you do not generate another array of vertex attributes (which would waste memory). So, if you are using 16-bit texture coordinates, you would pack the 14 most significant bits of the actual coordinate component and then use the other 2 to encode "half" of your texture index. Repeat for the other texture coordinate component to get the remaining 2 bits. Then combine and enjoy. But as Sex Bumbo said, this entire thing will only really matter if you use A LOT of different textures (100s). If you are using 10 or 20 or whatever, just render your meshes in "batches", grouped by texture (assuming that the rest of the render state is the same). shodanjr_gr fucked around with this message at 07:04 on Apr 3, 2015 |
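The 14+2-bit packing described above, sketched as pure integer arithmetic (function and struct names invented; this assumes the texture index fits in 4 bits, i.e. at most 16 textures):

```cpp
#include <cassert>
#include <cstdint>

// Pack a 4-bit texture index into the low 2 bits of each 16-bit
// texcoord component: the top 14 bits keep the (slightly truncated)
// coordinate, the bottom 2 of each component carry half the index.
struct PackedTexCoord { uint16_t u, v; };

PackedTexCoord packIndex(uint16_t u, uint16_t v, uint8_t texIndex) {
    uint16_t pu = (uint16_t)((u & 0xFFFC) | ((texIndex >> 2) & 0x3));
    uint16_t pv = (uint16_t)((v & 0xFFFC) | (texIndex & 0x3));
    return {pu, pv};
}

// The shader-side unpack: recombine the two 2-bit halves.
uint8_t unpackIndex(PackedTexCoord p) {
    return (uint8_t)(((p.u & 0x3) << 2) | (p.v & 0x3));
}
```

The coordinate loss is 2 bits out of 16 per component, which is the "depending on precision, you might get away with it" caveat above.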
# ¿ Apr 3, 2015 06:56 |