|
ultra-inquisitor posted:Ok, that's a bit more drastic than I was expecting, and pretty conclusive. I was actually only considering using one for a full-screen overlay (i.e. using one 1024x768 texture), because using a 1024x1024 is causing slight but noticeable scaling artifacts. I don't think this is going to impact performance - I'm more worried that drivers (especially ATI's) will be dodgy and give me the old white texture routine. edit: Yes, you can use a subset of a larger texture, but get a full-screen overlay on a 1680x1050 screen and tell me if you can find a better use for the 7MB of VRAM you're wasting. OneEightHundred fucked around with this message at 18:20 on Dec 15, 2008 |
# ? Dec 15, 2008 18:13 |
|
Just make a 1024x1024 texture, and set the UV coordinates on your 1024x768 quad to keep it from stretching. You don't have to use the whole texture.
|
# ? Dec 15, 2008 18:15 |
|
OneEightHundred posted:edit: Yes, you can use a subset of a larger texture, but get a full-screen overlay on a 1680x1050 screen and tell me if you can find a better use for the 7MB of VRAM you're wasting. Savagely optimize it and put other (small) textures in the space that would be offscreen for the overlay.
|
# ? Dec 16, 2008 00:44 |
|
As far as I remember, the speed is almost the same for clamped texturing, since you avoid a modulo on every texture lookup.
|
# ? Dec 16, 2008 01:05 |
|
Avenging Dentist posted:Savagely optimize it and put other (small) textures in the space that would be offscreen for the overlay. or split your overlay into multiple pieces, compact the texture to as small as it can go, and draw multiple quads per pass
|
# ? Dec 16, 2008 07:56 |
|
BernieLomax posted:As far as I remember, for clamped texturing the speed is almost the same since you avoid a mod for every texture lookup.
|
# ? Dec 16, 2008 08:50 |
|
OneEightHundred posted:As far as I know, rectangle textures (see: ARB_texture_rectangle) give good speed on all hardware, you just have to deal with the fact that the coordinates aren't normalized and you can't mipmap them.
|
# ? Dec 16, 2008 13:25 |
|
Does Intel release any sort of OpenGL SDK/headers? I've scoured the internet for them with no luck, and I need some 1.4 functionality that the "standard" Windows headers (stuck at version 1.1, I think) don't offer.
|
# ? Dec 17, 2008 03:20 |
|
Doesn't Mesa 3D compile on Windows?
|
# ? Dec 17, 2008 03:37 |
|
Avenging Dentist posted:Doesn't Mesa 3D compile on Windows? Wow, thanks for linking this. I've been coding in OpenGL for half a year now, and this is the first time I've heard of Mesa 3D. Compiling it now! edit: I compiled it (had to fiddle with the GLUT header a bit, though) and ran it. It "works", but the performance is just as bad as with the headers that come bundled with Windows (when I run any of my code, it hits the CPU, HARD). Should I just chalk this up to the general crappiness of the Intel driver? bonus question: I'm trying to do some shadowmapping, but I want to avoid using FBOs. Is there a way to read the current depth buffer into a texture using something like glCopyTexSubImage2D? <- fixed this. Turns out that if the target texture is a depth component one, glCopyTexSubImage2D will read straight from the depth buffer. shodanjr_gr fucked around with this message at 11:25 on Dec 17, 2008 |
# ? Dec 17, 2008 04:10 |
|
Try an extension library like GLEW; it makes accessing all of OpenGL's functionality painless and transparent. Not sure if it differentiates between hardware support and emulation, but it's worth a try.
|
# ? Dec 17, 2008 05:13 |
|
You don't want to compile Mesa 3D, you just want to use it to get OpenGL-compatible headers. It comes with gl.h; get glext.h and wglext.h from here, and either link against opengl32.lib, or load opengl32.dll and grab the entry points manually. As for lovely performance: the 915G chipset (a.k.a. GMA950, the most common Intel IGP right now) performs worse than a GeForce 2, so don't be surprised. They're so alarmingly bad that you really only have two sane design choices: make your game look like it was made in 1997, or don't support Intel IGPs. OneEightHundred fucked around with this message at 10:14 on Dec 17, 2008 |
# ? Dec 17, 2008 10:10 |
|
OneEightHundred posted:As for lovely performance, the 915G chipset (a.k.a. GMA950, the most common Intel IGP right now) performs worse than a GeForce 2, so don't be surprised. They're so alarmingly bad that you really only have two sane design decisions: Make your game look like it was made in 1997, or don't support Intel IGPs. I'm just writing some demo code for a workshop I'm teaching, and most of the PCs in the lab have crappy Intel IGPs. The thing is, performance is WAY too crappy. Let me put it this way: I render a scene comprising 2 spheres and a quad at 512x512 resolution, 3 times (I'm doing shadowmapping), and it draws at LESS than 1 frame per second while hammering the CPU like there's no tomorrow.
|
# ? Dec 17, 2008 11:24 |
|
Apologies if this has been asked before, but does anyone know of a good resource that has properties of common materials? E.g. diffuse, specular, etc. of metals, plastics, and whatnot. Thanks.
|
# ? Dec 17, 2008 14:29 |
|
shodanjr_gr posted:bonus question: try a pbuffer.
|
# ? Dec 17, 2008 16:07 |
|
Normally you want to use framebuffer objects and multiple render targets for that (using the extensions that conveniently have the same names!), pbuffers involve expensive context switches and kind of suck.
|
# ? Dec 17, 2008 16:20 |
|
shodanjr_gr posted:I render at 512 * 512 resolution, a scene comprising of 2 spheres and a quad, 3 times (i'm doing shadowmapping), and this thing draws at LESS than 1 frame per second and hammers the CPU like there is no tomorrow.
|
# ? Dec 17, 2008 19:49 |
|
OneEightHundred posted:Normally you want to use framebuffer objects and multiple render targets for that (using the extensions that conveniently have the same names!), pbuffers involve expensive context switches and kind of suck. The OP specifically said he didn't want to use FBOs.
|
# ? Dec 17, 2008 21:43 |
|
shodanjr_gr posted:I render at 512 * 512 resolution, a scene comprising of 2 spheres and a quad, 3 times (i'm doing shadowmapping), and this thing draws at LESS than 1 frame per second and hammers the CPU like there is no tomorrow. Just for kicks, try dropping the spheres and stick to cubes for now. See what happens.
|
# ? Dec 17, 2008 21:48 |
|
heeen posted:The OP specifically said he didn't want to use FBOs. What's wrong with FBOs? (i.e. why would you ever want to use pbuffers over them?)
|
# ? Dec 17, 2008 22:31 |
|
Mithaldu posted:Just for kicks, try dropping the spheres and stick to cubes for now. See what happens. Tried that, same thing (maybe sliiiiiiiiiiiiightly faster). I also tried commenting out all calls not related to matrix stuff or geometry submission, but the performance is still equally crappy... I have also reinstalled Intel's GMA drivers... quote:Oops. While I understand (and love) FBOs, I just want to show how shadowmapping works in principle and don't want to overcomplicate the demo. Plus I can't be sure that all the lab hardware supports them...
|
# ? Dec 17, 2008 22:38 |
|
just keep cutting stuff until the framerate goes over 1fps
|
# ? Dec 18, 2008 10:43 |
|
This might sound silly, but I got this to work by setting the desktop to 16-bit on my really crap computer. Wild guess?
|
# ? Dec 18, 2008 12:06 |
|
BernieLomax posted:Wild guess?
|
# ? Dec 18, 2008 15:25 |
|
shodanjr_gr posted:Tried that, same thing (maybe sliiiiiiiiiiiiightly faster). How are you implementing the shadow mapping? What does your state look like? Are you copying the depth texture to CPU memory and back to the GPU? Are you letting OpenGL generate the texture coordinates for the shadow mapping? Pbuffers suck horribly, and the Intel chips are crap with FBOs (and in general). What happens if you just draw your scene twice, does performance still suck? Have you tried a profiler?
|
# ? Dec 20, 2008 00:27 |
|
This is a 2D rather than a 3D question, but - if I'm using OpenGL, what's the recommended way(s) of doing an integer scaling of the output to the screen (i.e. either normal size, or exactly double in both dimensions, or exactly triple, etc.), with no filtering (nearest neighbour scaling)? I do want this to be over the entire rendered output, not just individual components (so, for example, if something is rendered rotated at 45 degrees, its pixels still appear aligned to the edges of the screen at 4x scale).
|
# ? Jan 11, 2009 18:26 |
|
dimebag dinkman posted:This is a 2D rather than a 3D question, but - if I'm using OpenGL, what's the recommended way(s) of doing an integer scaling of the output to the screen (i.e. either normal size, or exactly double in both dimensions, or exactly triple, etc.), with no filtering (nearest neighbour scaling)? I do want this to be over the entire rendered output, not just individual components (so, for example, if something is rendered rotated at 45 degrees, its pixels still appear aligned to the edges of the screen at 4x scale). Render the scene to an FBO appropriately smaller than the screen, render the result on a fullscreen quad with the filter set to GL_NEAREST, go hog wild.
|
# ? Jan 11, 2009 19:25 |
|
sex offendin Link posted:go hog wild.
|
# ? Jan 11, 2009 20:33 |
|
sex offendin Link posted:Render the scene to an FBO appropriately smaller than the screen, render the result on a fullscreen quad with the filter set to GL_NEAREST, go hog wild. Why would you do that instead of just changing the viewport appropriately?
|
# ? Jan 11, 2009 22:17 |
|
not a dinosaur posted:Why would you do that instead of just changing the viewport appropriately?
|
# ? Jan 11, 2009 22:25 |
|
dimebag dinkman posted:I wanted the output to be simplistically scaled with no increase in resolution, as you would get if you just change the viewport. So, change the viewport and the projection for the camera.
|
# ? Jan 11, 2009 22:30 |
|
dimebag dinkman posted:I wanted the output to be simplistically scaled with no increase in resolution, as you would get if you just change the viewport. Right. This is what I usually do for integral scaling. code:
edit: should note this will center the viewing area in the middle of the window
|
# ? Jan 11, 2009 22:38 |
|
I'll try to clarify what I mean: I want the one on the left. (Yes, really).
|
# ? Jan 11, 2009 23:00 |
|
not a dinosaur posted:Right. This is what I usually do for integral scaling. Well, yeah, he could do that, but he still has to scale the result up to fill the entire normal viewport, which is included in what I posted.
|
# ? Jan 11, 2009 23:10 |
|
dimebag dinkman posted:I'll try to clarify what I mean: Sample the buffer, and when you call glTexParameter, use GL_NEAREST?
|
# ? Jan 12, 2009 00:18 |
|
PnP Bios posted:Sample the buffer and when you call glTexParameter, use gl_Nearest?
|
# ? Jan 12, 2009 21:19 |
|
dimebag dinkman posted:Could you explain what you mean by "sample the buffer" a bit more? Do you just mean rendering to a framebuffer object, as sex offendin Link suggested? If you're rendering to an FBO, then there's no sampling to be done (assuming you're using a texture as the appropriate color attachment), otherwise, use glReadPixels
|
# ? Jan 13, 2009 00:51 |
|
I'm having a problem with non-power-of-two textures under OpenGL; this is what I'm getting: Somehow all values are off by one channel. I'm using GL_RGB for the data and internal format.
|
# ? Jan 14, 2009 17:50 |
|
heeen posted:I'm having a problem with non power of two textures under OpenGL, this is what I'm getting:
|
# ? Jan 14, 2009 18:01 |
|
Say I have a mesh of vertices that I want to draw as triangle strips or triangle fans. The vertices are packed into (an) array(s) and shared by multiple primitives, of course. 1. Does glVertexPointer(), glNormalPointer(), etc. followed by glDrawElements() have close to optimal performance nowadays, or is there a different interface that's a much better choice? At present, each primitive set is 8 vertices, a hexagon drawn as a trifan. 2. Does it matter whether a) the vertices, normals, and texture coordinates are packed into an array of structs, or b) there are separate vertex, normal, and TC arrays? 3. How much does spatial locality matter to performance on modern hardware? glDrawElements() takes an array of indices; is it faster if the used vertices are close together in memory than if the indices (uint) jump all over the place? I'm basically looking for current common knowledge. I don't want to put a lot of work into using an interface that's about to be obsoleted and/or stupidly inefficient.
|
# ? Jan 18, 2009 04:40 |