|
Woz My Neg rear end posted:Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers. If you have a frequently-updated buffer, create it as DYNAMIC and use discards (pass NULL). e: Oops, you're right, I mentioned the wrong call. OneEightHundred fucked around with this message at 21:45 on Jan 11, 2012 |
# ? Jan 11, 2012 18:48 |
|
OneEightHundred posted:I believe VBOs are mandatory for all geometry in the forward-compatibility contexts. D3D switched to buffer-only ages ago, at least circa D3D8. Yes, use VBOs for sure. VAR (vertex array range) is probably a super-stale code path in most drivers these days. Note: calling glBufferSubData with NULL doesn't discard the buffer. You should use glBufferData with NULL to get that orphaning behavior; glBufferSubData is specified to replace only the data, not reallocate the buffer store. You can also use the unsynchronized mapping parameters to get nonblocking behavior.
|
# ? Jan 11, 2012 21:17 |
|
Woz My Neg rear end posted:Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers. I would think they'd be about the same performance, since vertex arrays always get copied over to the GPU's memory anyway.
|
# ? Jan 12, 2012 18:23 |
|
Arrays are slower because if you don't use compiled vertex arrays, DMA doesn't start until you issue a draw call, which then blocks until the DMA is finished; and if you do use CVAs, your unlocks block until the DMA is finished. Static VBOs only DMA once ever, and dynamic VBOs with discard don't block on DMA because the driver will just give you a new memory region if the old one is in use.
|
# ? Jan 12, 2012 22:06 |
|
Where you get low level info like that? Just from experience?
|
# ? Jan 16, 2012 18:44 |
|
ShinAli posted:Where you get low level info like that? Just from experience? Some of it is inferrable, e.g. the entire point of discard is to ensure that the driver can pull data from a memory buffer that isn't being modified; that can't be done with arrays in system memory unless the array is locked, which means the unlock will block if the driver is still using it (just like it ALWAYS blocks if you don't use compiled arrays or VBOs). A LOT of extensions make more sense if you think of things in terms of the GPU being a high-latency device, and it generally being preferable to do things that don't care when the card gets around to starting OR finishing the operation. OneEightHundred fucked around with this message at 19:09 on Jan 16, 2012 |
# ? Jan 16, 2012 19:06 |
|
It's been like three years since I last did any 3D programming, and back then I didn't know what the gently caress I was doing either. I started using OpenGL again a few days ago and everything is totally different. Is this basically the gist of how to draw an object now?
- bind the array, index and texture buffers (glBindBuffer, glBindTexture)
- use the appropriate shader program (glUseProgram)
- send crap to the shader's uniform variables, like a vector for translation and the texture units for the sampler(s) to use (glProgramUniform****)
- glDrawWhatever(), doing all the matrix operations in the shader
I have this working, but is it efficient? Am I doing anything retarded here? It's so hard to find tutorials that aren't full of deprecated stuff.
|
# ? Jan 18, 2012 14:05 |
|
^^ Also, why isn't all this required stuff in the OpenGL libraries? Right now I had to figure out I needed GLEW to be able to use the ..BufferARB() stuff. It feels really out-there to ban glBegin()/glEnd() altogether. They're slow but are very useful for someone getting to learn 3D programming. Just imagine future graphics classes: "To draw your first basic polygon, poll your graphics card for function pointers to glBufferBlablabla, then create a pre-defined array of..." Bisse fucked around with this message at 15:35 on Jan 18, 2012 |
# ? Jan 18, 2012 15:30 |
|
I agree. I am just starting out on learning OpenGL and am making a terrain generator (a bit more interesting than starting with rotating cubes) and wanted to draw a set of axes in my scene. I used glBegin/glEnd because I didn't want to bind buffers and set up shaders just to draw 3 loving lines. I figure it may be slower than the alternative, but it saved me a lot of effort, and should it cause me problems later on I'll deal with it then. As I said I'm still a novice to this, so if I said something dumb please correct me.
|
# ? Jan 19, 2012 00:27 |
|
Bisse posted:^^ Also why isn't all this required stuff in the OpenGL libraries? Right now I had to figure out I needed GLEW to be able to use the ..BufferARB() stuff. OpenGL is a low level library. It should not be used in a graphics 101 course. There are plenty of high level graphics engines which are much more suited for education. There is only one framework for low level access to the graphics hardware. As for your first question, because OpenGL never bothered to put anything about deployment in the spec.
|
# ? Jan 19, 2012 01:03 |
|
And because those functions are part of an extension that your OpenGL driver may not implement.
|
# ? Jan 19, 2012 01:19 |
|
pseudorandom name posted:And because those functions are part of an extension that your OpenGL driver may not implement.
|
# ? Jan 20, 2012 10:09 |
|
If the function names have a vendor suffix (and ARB or EXT count), then there's no guarantee the driver implements them.
|
# ? Jan 20, 2012 10:39 |
|
Bisse posted:So the OpenGL driver may not implement the only currently allowed way to render? It's a bit more complicated and nasty than that. On Windows, the OS only implements OpenGL 1.1 or so, so that's all you are guaranteed to have. Everything else has to go through wglGetProcAddress, which queries the driver and returns a function pointer. To get a modern (3+) OpenGL context, you have to call wglCreateContextAttribsARB, which is not part of old OpenGL, so you have to actually create an old context, call wglGetProcAddress to get the new creation function, and _then_ create your real context. It's a disaster. And glBegin/End should absolutely be banned. One of OGL's biggest issues is that it has a billion ways to do things, but only one of them is fast, and the others don't tell you they are slow. As was said earlier, a more friendly API could be built on top of OGL to do similar stuff. Begin/End and fixed function are really out-of-date ways of thinking about modern GPUs and graphics: they may be user-friendly, but they have nothing to do with how the hardware works or how you should organize your rendering. A good low-level graphics API should only have fast paths. The ARB is attempting to remove the slow crap. Unfortunately they'll never succeed because there are too many apps and people using the old stuff and not adopting the new stuff.
|
# ? Jan 20, 2012 10:49 |
|
Been trying to work on adding OpenGL text to my little hack, and as usual I'm completely stumped. There's a hack made by much more experienced coders that can get text in the middle of the screen (the game engine has no SDK or open source available, except for a 3rd-party SDK involving animation models etc.). I wonder how they are able to center the text, and what function they are hooking to get there? For this game the most success I've had is hooking the glBegin function and issuing the glPrint command, and funky stuff like turning off textures or whatever I try usually results in fail due to being a dumb noob. From my reading of the OpenGL Red Book, all I know is that I have to push the matrix, set up the modelview matrix and the projection matrix, do something in between to get the text centered on screen, then pop the matrix. The only thing I can get working is this. The game uses glBegin to draw HUD elements and nothing else, but I know the other guy's hack doesn't hook the glBegin function, because when I disable the HUD his text menu doesn't disappear. hookedglbegin() { .... glprint(0,0,"hello!"); orig-glbegin(); } which results in this. This is the code for the glPrint I'm using, which isn't mine btw: void glPrint(int x, int y, const char *fmt, ...) That's the much more elite coder's hack, and they got text working where they want along with a snazzy menu. I'm not looking for anything this fancy, just to be able to draw text where I want it to be, which will be a stepping stone to being able to draw bounding boxes in a world-to-screen function for ESP. I've been able to do a number of successful things on my own with the hack (working wallhack, disabling foliage) since my previous posts; if anyone has any clue how this person is able to do this, I'd appreciate it. ickbar fucked around with this message at 17:18 on Jan 20, 2012 |
# ? Jan 20, 2012 17:05 |
|
ickbar posted:Been trying to work on adding opengl text to my little hack, and as usual been completely stumped. There's a hack made by much more experienced coders that can get text in the middle of the screen (game engine has no sdk or open source available except for 3rd party sdk involving animation models etc..). I'll give you a hint: code:
|
# ? Jan 21, 2012 22:40 |
|
Thank you very much my friend, I'll take a deep look at it once I'm free. So far the most useful thread I've been on in SA.
ickbar fucked around with this message at 00:58 on Jan 22, 2012 |
# ? Jan 22, 2012 00:46 |
|
Bisse posted:It feels really out-there to ban glBegin()/glEnd() altogether. They're slow but are very useful for someone getting to learn 3D programming. Just imagine future graphics classes: "To draw your first basic polygon, poll your graphics card for function pointers to glBufferBlablabla, then create a pre-defined array of..." It honestly isn't that hard to create a VBO, map it, and write a function that just copies the parameters into it and increments a pointer, plus a flush function to call manually to terminate the primitive, or when you try to push too much into the buffer. Congratulations, you just wrote a renderer with Begin/End-like behavior, except now you don't have to rewrite everything when you want to do stuff like batching, which in that case would consist of "if the next set of geometry has the same shading properties, then don't flush," and you can easily upgrade it to pump multiple quantities of complex vertex data without massive amounts of function call overhead. "The slow paths don't tell you they're slow" is part of the problem, but the other part is that you'll hit a brick wall on the limitations. If you want a nice example, circa 2002 it was becoming really obvious that extending fixed-function to the kinds of effects people wanted was turning into a mess, and the only way to fix it was ultimately going to be scrapping fixed-function the same way it was scrapped for the vertex pipeline when people started wanting to do skinning on the GPU. The "added simplicity" of legacy paths is just a trap where you'll get used to doing things the lovely way and then have to relearn everything; better to do it right the first time.
|
# ? Jan 22, 2012 09:19 |
|
It's also a problem of documentation. People have had years to write tutorials and examples for OpenGL 1.1 glBegin/glEnd stuff, the new stuff, not so much. I mean yeah, plenty of "here's how you get a triangle up on screen with VBOs" but it falls off in quality when you get beyond that, like how your approach would change if you had a list of triangle meshes, how you would handle camera stuff, etc etc. But it'll catch up.
|
# ? Jan 22, 2012 12:47 |
|
So all these people have been talking about how using glBegin and glEnd are really bad but that's been the only way I've been taught, and the only method I've seen in opengl tutorials. Does anybody have links to a tutorial of the better way?
|
# ? Jan 22, 2012 13:04 |
|
OneEightHundred posted:It honestly isn't that hard to create a VBO, map it, and write a function that just copies the parameters into it and increments a pointer, and a flush function to call manually to terminate the primitive, or if you try to push too much into the buffer. Well, the other problem with removing glBegin/glEnd (which I am strongly in favor of) is that you run the risk of having the same problem DirectX does, where even simple examples need 500+ lines of cruft to handle all the resource creation, etc. What would be nice is an updated GLUT that creates a VBO in the background and provides an interface to the programmer so they can use something like "glutBegin/glutEnd" for examples, with the clear understanding that this is just for illustrative purposes.
|
# ? Jan 22, 2012 15:42 |
|
Does GL 3+ still provide matrix stacks? I only know that ES 2.0 does not.
|
# ? Jan 22, 2012 17:35 |
|
Compatibility profiles do, but they're only used if you're doing fixed function rendering.
|
# ? Jan 22, 2012 20:43 |
|
Woz My Neg rear end posted:Does GL 3+ still provide matrix stacks? I only know that ES 2.0 does not.
|
# ? Jan 22, 2012 23:54 |
|
I've got a question I've had no luck finding an answer to anywhere. How did old (think Quake1/2, Unreal) games handle object lighting? I know it's per-vertex, but did they determine for each vertex whether it was visible to various light sources? I mean, map geometry obviously affects how the objects in these games are lit, and it even works for dynamic lights such as the Unreal flares: if you throw a flare in front of a pillar, objects behind the pillar won't be lit. From what I understand, the modern approach is to first have a shadow mapping pass and then only light the visible pixels?
|
# ? Jan 28, 2012 19:45 |
|
High Protein posted:I've got a question I've had no luck finding an answer to anywhere. How did old (think Quake1/2, Unreal) games handle object lighting? I know it's per-vertex, but did they determine for each vertex whether it was visible to various light sources? I mean, map geometry obviously affects how the objects in these games are lit, and it even works for dynamic lights such as the Unreal flares: if you throw a flare in front of a pillar, objects behind the pillar won't be lit. Ones that I'm sure of:
- Quake 1 and Quake 2 do no LOS checks on dynamic lights. Fixed lighting is handled by tracing a line straight down from objects and using the lightmap sample it hits. This obviously has some interesting artifacts when jumping over pits where the bottom is much brighter (i.e. lava). Lighting is accentuated in a fixed direction.
- Quake 3 does no LOS checks on dynamic lights; static lights are prebaked into a 3D "light grid" where each point has a dominant light direction, a colorized intensity from that direction, and a fixed-intensity ambient contribution.
- UE1 tracks actual light sources and does line checks for visibility. How they're factored in is a mystery though; UE1's object lighting is heinously bad, with a ton of black bleed.
- Source uses a single line check on each light source, but indirect lighting is baked into a 6-component "ambient cube" in each BSP leaf.
- Halo 3 uses third-order spherical harmonics for everything; indirect (and possibly direct) lighting is baked into a spherical harmonics term stored in a KD-tree that is sparser in areas with no geometry.
|
# ? Jan 29, 2012 02:04 |
|
Thanks, that's exactly the kind of answer I was hoping for!
|
# ? Jan 29, 2012 10:32 |
|
This is a pretty basic math theory question, but I'm taking a graphics class right now and I need a little help. Long story short, my teacher isn't the best at speaking English, much less at explaining things, and I had to go read the book just to figure out what convolution and reconstruction were. I have that figured out, but what I can't wrap my head around is resampling. You know, scaling images/resampling audio. I understand what it's supposed to do, but I don't quite get the math. For simplicity's sake, say I'm doing it on audio or some other 1-dimensional data. If I have this data set: f(x) = 0, 1, 4, 5, 3, 5, 7 And I want to resample this using different filters with a radius of 0.5 in order to figure out, say, f(2.5) and f(2.75), with f(2) having the value 4 in the above data set. My question is, what results should I be getting with my estimates if I use, say, a box filter (1/(2r) if -r <= x < r, 0 otherwise) as opposed to a tent/linear filter (1-abs(x) if abs(x) < 1, 0 otherwise)? I hope I didn't make that too confusing; I'm just not sure how exactly to compute resampled values. The book doesn't make it very clear. It says something about taking the data points, reconstructing and smoothing them, then resampling a new set of data, but I don't understand how you convolve two functions (a reconstruction and a smoothing one, as opposed to a function and a data set) together.
|
# ? Feb 6, 2012 21:43 |
|
I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine. Where should I go? What should I be reading?
|
# ? Feb 8, 2012 02:17 |
|
Contero posted:I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine. I recently went through the same exercise; after I figured a lot of it out, I ran across this book, which summed up most of the tricks I had collected from a lot of other sources: http://www.amazon.com/OpenGL-4-0-Shading-Language-Cookbook/dp/1849514763/ref=sr_1_fkmr2_1?ie=UTF8&qid=1328669428&sr=8-1-fkmr2
|
# ? Feb 8, 2012 03:50 |
|
Contero posted:I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine. One thing that's helped me a lot are the presentations/slides from GDC and SIGGRAPH which tend to include stuff that's been tried in a real-world environment and works well from a cost/impact perspective. Bungie and Valve in particular have put out quite a few papers detailing useful, efficient techniques. e: I'd contrast that with GPU Gems which is much more theoretical, or doing stuff that works great in a demo but not under real-world resource constraints. OneEightHundred fucked around with this message at 23:36 on Feb 8, 2012 |
# ? Feb 8, 2012 22:07 |
|
Yeah, look at how many different approaches are out there just for, say, filtering shadow maps. Real-time graphics and offline graphics are converging and knowledge is moving between them faster than ever before. When I was in college around the turn of the century I was told that the rule of thumb was that real-time is perpetually where offline was ten years ago; that gap has narrowed a lot today.
|
# ? Feb 8, 2012 22:44 |
|
haveblue posted:Yeah, look at how many different approaches are out there just for, say, filtering shadow maps. Real-time graphics and offline graphics are converging and knowledge is moving between them faster than ever before. When I was in college around the turn of the century I was told that the rule of thumb was that real-time is perpetually where offline was ten years ago; that gap has narrowed a lot today. What's especially nice is that there's a lot of stuff that is much more clever than it is complex, and once known, is very easy to implement. Probably the most admirable thing along those lines I've seen recently is pre-integrated skin rendering, which combines three completely different but technically simple techniques to fake one natural phenomenon cheaply and convincingly. The crepuscular rays thing that a lot of games are using today is hilariously unrealistic too, but it's convincing and it's CHEAP. In general, pick something that'll work for your goals, and set your limitations in advance. Most games right now pick any of various solutions for static lighting (single-color, 3-direction basis, spherical harmonics), possibly have a single precalculated dominant light per surface that they can render in the forward pass (often the sun, lets you do better shadowing and specular), or use deferred rendering to have a lot of lights. Don't obsess about supporting every feature under the sun, support the features you need to meet your goals. OneEightHundred fucked around with this message at 02:35 on Feb 9, 2012 |
# ? Feb 9, 2012 00:05 |
|
PalmTreeFun posted:This is a pretty basic math theory question, but I'm taking a graphics class right now and I need a little help. Long story short, my teacher isn't the best at speaking English, much less at explaining things, and I had to go read the book just to figure out what convolution and reconstruction were. I have that figured out, but what I can't wrap my head around is resampling. You know, scaling images/resampling audio. I understand what it's supposed to do, but I don't quite get the math. How's your math? Convolution has different meanings depending on which domain you are in. The easiest way to think about it is as the overlap between two functions. Or you can think of it as multiplying every data point in function A by every data point in function B and adding the results together. Of course, this really isn't feasible in real time, so you use a small 'kernel' as the second function. Take for example a Gaussian blur. You'll typically see something like this: 0.006 0.061 0.242 0.383 0.242 0.061 0.006 That's the kernel; that's function B. Your set is 0, 1, 4, 5, 3, 5, 7. Let's say I want to find the blurred value of the middle element, which is 5: 0*0.006 + 1*0.061 + 4*0.242 + 5*0.383 + 3*0.242 + 5*0.061 + 7*0.006. You repeat that for each value in your set to get the convolved set, i.e. f(x-3)*0.006 + f(x-2)*0.061 + f(x-1)*0.242 + f(x)*0.383 + f(x+1)*0.242 + f(x+2)*0.061 + f(x+3)*0.006. That's for a 1D convolution; it can be extended to any number of dimensions. If you are curious about the math, you should probably take a class on signal processing, as it gets quite complex. Or am I explaining the wrong thing?
|
# ? Feb 9, 2012 23:30 |
|
Spite posted:Or am I explaining the wrong thing? I think so. I understand the part you explained already, but basically what I want to know is how scaling/reconstructing a sound/image works. Like, you convert a set of discrete data to a continuous function somehow, and you can use a kernel (thanks for explaining what that was, I didn't know that that and the "filter" were the same thing, this teacher really sucks at explaining things) to extrapolate new, "in-between" data. Like, if you used something like a simple average to find a value between elements 1 and 2 (1 and 4) in the example I gave, you'd get a new value 2.5, because that's halfway from one to the other. The problem is, I don't get how you convey different ways of getting new values using a kernel. Same in reverse, shrinking the set instead of expanding it. I had an assignment on the last homework where we had to resample a data set using two different kernels, one being a tent and the other a box, and I had no idea how to compute that. E: For what it's worth, here are the lecture slides on the topic: http://pages.cs.wisc.edu/~cs559-1/syllabus/02-01-resampling2/resampling_cont.pdf Scroll down to the page that says "Resampling". E2: I just figured out what exactly the box/triangle filters do, (box is rounding up/down, tent is linear interpolation) but I still don't understand how the process in general is done. Like, I have no clue what's going on with the other filters, like Gaussian, B-Spline cubic, Catmull-Rom cubic, etc. PalmTreeFun fucked around with this message at 00:15 on Feb 10, 2012 |
# ? Feb 10, 2012 00:04 |
|
How do you guys feel about the glm Library? I had rolled my own vector/matrix class, but this seems pretty well implemented.
|
# ? Feb 12, 2012 03:34 |
|
Contero posted:How do you guys feel about the glm Library? I had rolled my own vector/matrix class, but this seems pretty well implemented. I've been using Wild Magic 5 when I needed a proper matrix/vector library because it actually includes Quaternions, something missing from a lot of other libraries (glm included from the looks of it). Of course most of the time I use my homebrew quaternion/vector classes... Edit: This is probably a really stupid reason to like a library, but it actually draws a distinction between Vectors and Points, allowing operations to be performed using both types, and gives expected results. One thing I've been wondering about: is there anything special you need to do when writing a program that is meant to output to an active shutter 3D display? Anaglyphs are easy enough, but I'm not sure whether just configuring the GL context to refresh at 120Hz and alternating frames will produce the desired effect. I don't have the monitor with me to test it, so I need to try to develop the program on my computer using anaglyph output and hope that it works on the 3D display at the client's location. ephphatha fucked around with this message at 03:35 on Feb 13, 2012 |
# ? Feb 13, 2012 03:30 |
|
PalmTreeFun posted:I think so. I understand the part you explained already, but basically what I want to know is how scaling/reconstructing a sound/image works. Like, you convert a set of discrete data to a continuous function somehow, and you can use a kernel (thanks for explaining what that was, I didn't know that that and the "filter" were the same thing, this teacher really sucks at explaining things) to extrapolate new, "in-between" data. It's kind of odd they'd have you do this without giving you the background theory around filtering and time domain vs frequency domain. Forgive me if I'm going a bit overboard with the explanation. The Fourier transform will convert a function in the time domain into its equivalent in the frequency domain. The easiest way to visualize this is to think about a sine wave. Its Fourier transform is simply two peaks, one positive and one negative. They represent the frequency of the wave. Now, every function (at least the ones you'll be interested in) has a transform that converts to this domain. Why is this interesting? Because you can multiply two functions together in the frequency domain to apply a filter. For example, a box function can be a lowpass filter. However, this is a problem since you need all of both functions in order to do the transform. So we would like to apply them in the time domain as they are fed in to us. A multiplication in the frequency domain corresponds to convolution in the time domain. So we can take the filter we are interested in and convert it to the time domain via the inverse Fourier transform, then do the operation with the kernel we get in the time domain. The box/triangle filters are in the frequency domain. If you want to apply them to the dataset, you need to apply the inverse Fourier transform and convolve that with the set. The box filter's inverse Fourier transform is the sinc function, so we can make a kernel out of sin(x)/x and use that if we wanted to.
Gaussian is a special case, since the Fourier transform of a Gaussian is also a Gaussian. The splines are slightly different because they reconstruct points on a path. You can think of something like Catmull-Rom, etc. as simply a function F(t) that happens to pass through the control points.
|
# ? Feb 13, 2012 21:58 |
|
edit: nevermind
passionate dongs fucked around with this message at 01:49 on Feb 16, 2012 |
# ? Feb 16, 2012 01:43 |
|
In gl 3.1+, do vertex array objects give a performance benefit over manually binding VBOs? What is the advantage of using them?
|
# ? Feb 23, 2012 22:01 |