|
OneEightHundred posted: edit: Yes, you can use a subset of a larger texture, but get a full-screen overlay on a 1680x1050 screen and tell me if you can find a better use for the 7MB of VRAM you're wasting. Savagely optimize it and put other (small) textures in the space that would be offscreen for the overlay.
|
# ¿ Dec 16, 2008 00:44 |
|
|
Doesn't Mesa 3D compile on Windows?
|
# ¿ Dec 17, 2008 03:37 |
|
dimebag dinkman posted: I wanted the output to be simplistically scaled with no increase in resolution, as you would get if you just change the viewport.

So, change the viewport and the projection for the camera.
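For what it's worth, a minimal sketch of the projection side of that advice (plain C, no GL context needed; the `ortho` helper and all dimensions below are illustrative, not from the original posts). The idea is that the projection is built from the scene's logical size, while `glViewport` independently controls how big the output appears on screen.

```c
#include <stddef.h>

/* Build a column-major orthographic projection matrix, equivalent to
 * glOrtho(left, right, bottom, top, near, far). */
static void ortho(float *m, float l, float r, float b, float t,
                  float n, float f)
{
    for (size_t i = 0; i < 16; i++) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
}
```

At draw time you'd keep this matrix built from the logical resolution (say 640x480) and call `glViewport` with the scaled output size, so the image scales without the scene's view changing.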
|
# ¿ Jan 11, 2009 22:30 |
|
Fecotourist posted: 2. Does it matter whether

Fecotourist posted: 3. How much does spatial locality matter to performance on modern hardware?

Why are you worrying about locality when you're operating on sets of 8 verts? Also: why do you always hit "enter" after a few words? It's really annoying. Also also: why do people (in general, not just you) always ask for help before they've even started on a project? You'll learn much better if you work on things yourself and maybe consult a book.
|
# ¿ Jan 18, 2009 04:51 |
|
Fecotourist posted: Let me rephrase the question. Given a big vertex array and two possible index arrays, A and B:

It'll probably be a little bit faster, but seriously, it sounds like you're trying to work on micro-optimizations before you even have a working (re)implementation. If it takes zero extra work to maintain locality, do it, but you're seriously overthinking things.

Fecotourist posted: Is it more annoying than a single-line post that is stretched to 50 line equivalents by a big dumb avatar?

Far more. If I want shorter columns I'll resize my browser window.

Fecotourist posted: I don't think that's the case here. My application already runs ok using OpenSceneGraph, [which seems to be using the glDrawElements() path the way I've been doing it]. For various reasons, I'm switching from OSG to SDL plus explicit OpenGL. If I'm doing that work, I just want to follow a good path.

Have you run performance tests to determine where your bottlenecks are? Those would give you way more information than some random internet people who don't know the specifics of your code.
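If anyone actually wants to measure this instead of guessing, a quick sketch: simulate a FIFO post-transform vertex cache and count misses for each candidate index ordering. The function name and the plain-FIFO model are illustrative assumptions; real hardware caches vary in size and replacement policy, so treat this as a rough comparison tool only.

```c
#include <stddef.h>

/* Count post-transform cache misses for an index stream, assuming a
 * simple FIFO cache of cache_size entries (e.g. 16-32 on older GPUs).
 * Fewer misses = better locality in the index array. */
static size_t cache_misses(const unsigned *indices, size_t n,
                           size_t cache_size)
{
    unsigned cache[64];
    size_t used = 0, head = 0, misses = 0;
    if (cache_size > 64) cache_size = 64;  /* fixed local storage */

    for (size_t i = 0; i < n; i++) {
        int hit = 0;
        for (size_t j = 0; j < used; j++)
            if (cache[j] == indices[i]) { hit = 1; break; }
        if (!hit) {
            misses++;
            cache[head] = indices[i];      /* FIFO: overwrite oldest */
            head = (head + 1) % cache_size;
            if (used < cache_size) used++;
        }
    }
    return misses;
}
```

Run it on index arrays A and B with a realistic cache size and compare; if the miss counts are close, reordering isn't worth any extra bookkeeping.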
|
# ¿ Jan 18, 2009 06:42 |
|
I'm pretty sure you don't want to be changing the "up" vector, for one thing.
|
# ¿ Feb 17, 2009 03:17 |
|
MasterSlowPoke posted: Use an unprojection matrix to get the world space coordinates for your quads.

Wait, you seriously can't use pre-transformed coordinates in OpenGL? It's not even a full line of code to do that in DirectX.
|
# ¿ Mar 6, 2009 08:28 |
|
krysmopompas posted: Pre-transformed coordinates suck. If you want clipping, not all hardware supports doing the backprojection into clipping space, so it falls back into software and is slow as balls.

It sounded like he was making a HUD, where that wouldn't be an issue. Though I suppose you could set all the matrices in the pipeline to identity.

Avenging Dentist fucked around with this message at 21:26 on Mar 6, 2009 |
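With all the pipeline matrices set to identity, vertices are interpreted directly in normalized device coordinates, so the only work left is mapping pixel positions to NDC yourself. A sketch of that mapping (the helper name is hypothetical, not from any of the posts):

```c
/* Map a pixel position on a w x h screen to normalized device
 * coordinates, for drawing HUD quads with identity matrices.
 * Screen space is y-down; NDC is y-up, hence the flip. */
static void pixel_to_ndc(float px, float py, float w, float h,
                         float *nx, float *ny)
{
    *nx = 2.0f * px / w - 1.0f;
    *ny = 1.0f - 2.0f * py / h;
}
```

Feed the results straight into your vertex positions (z = 0) and the HUD lands where you expect without any projection math per frame.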
# ¿ Mar 6, 2009 21:18 |
|
Premature optimization is the root of all evil.
|
# ¿ Apr 22, 2009 01:28 |
|
I'm pretty sure he's not actually using an index buffer (at least, he didn't mention it), which is part of the problem. With an index buffer, you can get quite a bit more creative with how you use the vertex buffer(s), but without it, I think it would be a pretty classic case of premature optimization, since not all cases would benefit (e.g. highly dynamic data sets) and you'd need to do actual performance benchmarking to determine if it's worthwhile. Optimizing for its own sake might be fun, but unless you know where the bottlenecks in your application are (and "playing around with vertex buffers" does not suggest that to me), you're probably going to end up optimizing the hell out of something that doesn't really matter all that much.

EDIT: Also, keep in mind that even when using index buffers and a free list to do something like the heap, you're still going to run into some important differences from how malloc works. When implementing malloc, it is generally straightforward to request more memory from the operating system when necessary. As far as I know, vertex buffers in DirectX are non-resizable, so you'll run into problems if you ever fill up your vertex buffer. One solution would be to do something similar to what people generally do on consoles, which is to malloc a big ol' block of memory at once and only ever use that, but this isn't necessarily appropriate on PCs, where you don't even know a priori how much memory you have available.

Avenging Dentist fucked around with this message at 04:12 on Apr 22, 2009 |
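To make the malloc comparison concrete, here's a minimal first-fit free-list sketch over a fixed-capacity vertex pool (all names and sizes are made up for illustration, and freeing/coalescing is omitted for brevity). The key difference from malloc shows up in the failure path: the allocator just fails when the buffer fills up, where a real malloc would ask the OS for more memory.

```c
#include <stddef.h>

#define POOL_VERTS 1024  /* fixed capacity: mimics a non-resizable vertex buffer */

/* A free range of vertices within the one fixed-size buffer. */
typedef struct Block { size_t off, len; struct Block *next; } Block;

static Block blocks[64];  /* static node storage to stay self-contained */
static Block *free_list;

static void pool_init(void)
{
    blocks[0].off = 0;
    blocks[0].len = POOL_VERTS;
    blocks[0].next = NULL;
    free_list = &blocks[0];
}

/* First-fit allocation: returns a vertex offset into the buffer,
 * or (size_t)-1 when no free range is big enough. */
static size_t pool_alloc(size_t nverts)
{
    for (Block **p = &free_list; *p; p = &(*p)->next) {
        if ((*p)->len >= nverts) {
            size_t off = (*p)->off;
            (*p)->off += nverts;
            (*p)->len -= nverts;
            if ((*p)->len == 0) *p = (*p)->next;
            return off;
        }
    }
    return (size_t)-1;  /* pool exhausted -- no "ask the OS" escape hatch */
}
```

The returned offsets are what you'd hand to your draw calls as base-vertex positions; handling the failure case (grow into a second buffer, evict, or just assert) is exactly the design decision malloc never has to make.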
# ¿ Apr 22, 2009 04:02 |
|
krysmopompas posted: I don't think anyone is seriously advocating just allocating one vertex buffer for an entire app these days, nobody would even do that on a console.

But that's exactly what he was asking about:

Dicky B posted: If I understand correctly it's best to be using one vertex buffer...
|
# ¿ Apr 22, 2009 05:01 |
|
That's fair. (God I hate typing, and yet I always get roped into providing actual advice.)
|
# ¿ Apr 22, 2009 06:07 |
|
Serious question: do people actually use OpenGL 3? Maybe id Software does since they're on the committee or whatever...
Avenging Dentist fucked around with this message at 20:53 on Jun 5, 2009 |
# ¿ Jun 5, 2009 20:50 |
|
Is anyone else actually reading what this guy wants? I tend to use DirectX instead of OpenGL, but I'm sure this stuff is pretty similar. The issue is that, depending on which face you're working on, the texture coordinates for a given point are different. I think you'd have to render them as individual quads, though there may also be some tricks in DirectX to make this easier (I'd post them, but I don't have the relevant code at my fingertips right now, and I might be completely wrong anyway).
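A sketch of the individual-quads idea (the positions and UVs below are illustrative): duplicate each cube corner once per face it belongs to, so the same position can carry a different texture coordinate on each face. That's why an unindexed textured cube typically needs 24 vertices (4 per face x 6 faces) rather than 8.

```c
/* One cube corner appears on three faces; each face's quad carries its
 * own UVs, so the corner position is duplicated with different texcoords. */
typedef struct { float pos[3]; float uv[2]; } Vertex;

/* Two faces sharing the corner (1, 1, -1), with different UVs there: */
static const Vertex top_face[4] = {
    {{-1.0f, 1.0f, -1.0f}, {0.0f, 0.0f}},
    {{ 1.0f, 1.0f, -1.0f}, {1.0f, 0.0f}},
    {{ 1.0f, 1.0f,  1.0f}, {1.0f, 1.0f}},
    {{-1.0f, 1.0f,  1.0f}, {0.0f, 1.0f}},
};
static const Vertex right_face[4] = {
    {{ 1.0f, -1.0f, -1.0f}, {0.0f, 0.0f}},
    {{ 1.0f,  1.0f, -1.0f}, {0.0f, 1.0f}},
    {{ 1.0f,  1.0f,  1.0f}, {1.0f, 1.0f}},
    {{ 1.0f, -1.0f,  1.0f}, {1.0f, 0.0f}},
};
```

`top_face[1]` and `right_face[1]` sit at the same position but carry different UVs, which is exactly the situation that forces per-face quads instead of shared corner vertices.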
|
# ¿ Jun 16, 2009 19:38 |
|
Spite posted: I think it's absolutely horrible, even if you find a binding that doesn't create some over-engineered OO paradigm.

Well, at least it's consistent with the rest of the language.
|
# ¿ Jul 7, 2009 21:33 |
|
shodanjr_gr posted: http://kotaku.com/5335483/new-cryengine-3-demo

Oh come the f8ck on people! sure it looks great, sure its real time, sure its closer to photo realistic, but who cares?!!? Its not gonna make the game any better, some of the best games I've played looked like poo poo, and that is one of the most talked about issues of this gen, the fact that a LOT of man power goes to the graphics, rather than going on better AI/ideas/physics. I respect them for making such an advanced engine, but this simply shouldn't be of interest to the gaming industry, we're not watching a movie here, we're playing a game, and as seen in games such as Crysis, graphics dont make a game.
|
# ¿ Aug 12, 2009 18:05 |
|
Also Crysis was awesome.
|
# ¿ Aug 12, 2009 18:13 |
|
OneEightHundred posted: You shouldn't use FVFs ever, they're obsolete with D3D9 and aren't even in D3D10.

They're not that bad when you're just figuring poo poo out, since there's less to think about with FVF.
|
# ¿ Aug 31, 2009 17:39 |
|
|
One of the first Google results is for a DirectDraw overlay, which is probably functionally similar to Direct2D: http://www.gamedev.net/community/forums/topic.asp?topic_id=359319
|
# ¿ Jan 15, 2010 06:53 |