a slime
Apr 11, 2005

Any thoughts on a 3D modeller that meshes well with OpenGL? I want to do some basic low-poly modelling and skeletal (I guess?) animation.

I'm tooling around in Blender right now. Is this a reasonable choice?


a slime
Apr 11, 2005

OneEightHundred posted:

Getting geometry from modeling software is far less about the software than about the FORMAT. The formats used by the modeling packages are horribly unsuitable for being directly loaded by a game engine, so you always want the model exported in a format you can handle, catered to the capabilities of your engine.

The project I'm working on uses its own format and imports that from other easier-to-parse formats like MD5, SMD, or PSK/PSA. COLLADA and FBX are the ultimate intermediate formats right now and are supported by virtually every decent modeling package, but it's still much more complex than game-specialized formats.

Yeah, this answers my question. Sounds like a custom format is the way to go. Thanks everyone :)

a slime
Apr 11, 2005

ultra-inquisitor posted:

How reliable are non-pow2 textures in OpenGL? They've been promoted to the core spec and I've never had any problem with them, but I've only had a very limited range of cards to test on (all nvidia). I've just come back to graphics coding after a pretty lengthy absence - are they still slow, or is the speed difference negligible?

On my laptop with an X300, my test program drops from 190fps to 2fps.

a slime
Apr 11, 2005

sex offendin Link posted:

Render the scene to an FBO appropriately smaller than the screen, render the result on a fullscreen quad with the filter set to GL_NEAREST, go hog wild.

Why would you do that instead of just changing the viewport appropriately?

a slime
Apr 11, 2005

dimebag dinkman posted:

I wanted the output to be simplistically scaled with no increase in resolution, as you would get if you just change the viewport.

Right. This is what I usually do for integral scaling.

code:
void Application::resize(int screen_width, int screen_height)
{
    // Largest integer scale that still fits the window.
    int scaling_factor =
        std::min(screen_width / target_width, screen_height / target_height);
    int render_width = target_width * scaling_factor;
    int render_height = target_height * scaling_factor;
    // Letterbox: center the scaled area in the window.
    int render_x = (screen_width - render_width) / 2;
    int render_y = (screen_height - render_height) / 2;

    glViewport(render_x, render_y, render_width, render_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    glOrtho(0.0, target_width, 0.0, target_height, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
target_width and target_height are set to whatever low resolution you intend to scale up.

edit: should note this will center the viewing area in the middle of the window
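For a sanity check of the arithmetic, here's the same computation as a standalone function (a sketch with the target resolution hardcoded to 320x240; the Viewport struct is just for illustration):

```cpp
#include <algorithm>

// Standalone sketch of the same arithmetic, target resolution hardcoded
// to 320x240. Returns the viewport rect as {x, y, w, h}.
struct Viewport { int x, y, w, h; };

Viewport integral_viewport(int screen_width, int screen_height)
{
    const int target_width = 320, target_height = 240;
    int scale = std::min(screen_width / target_width,
                         screen_height / target_height);
    int w = target_width * scale;
    int h = target_height * scale;
    return { (screen_width - w) / 2, (screen_height - h) / 2, w, h };
}
```

e.g. a 1280x1024 window gets a 4x-scaled 1280x960 area with 32-pixel bars on the top and bottom.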

a slime
Apr 11, 2005

Can I seriously not tile textures from an atlas in OpenGL?

a slime
Apr 11, 2005

I've never worked with GLSL before but I spent some time reading about it tonight and it sounds pretty interesting. I'm working on a little 2D game that currently uses a bunch of textured quads and renders them using VBOs. The quads are never rotated and never change size, but the texture coordinates change fairly often for animated sprites and that kind of thing.

Is it possible to store texture coordinates and the height/width of the quad in the normal or something that I don't use, and instead render everything as a bunch of point sprites? I'm thinking I can set up the texture coordinates and quad size in a vertex/fragment shader. Am I way off base? Am I likely to see much increase in throughput by doing so?
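To make the layout concrete, here's one hypothetical interleaved vertex along the lines of what I mean (every name here is illustrative, not from any real API):

```cpp
#include <cstddef>

// Hypothetical layout for the point-sprite idea: each sprite is a single
// vertex, and the quad's size and atlas rectangle ride along in attributes
// the fixed-function pipeline would have used for the normal, so a vertex
// shader can expand them.
struct SpriteVertex {
    float x, y;            // sprite center position
    float w, h;            // quad size, packed where a normal would go
    float u0, v0, u1, v1;  // atlas rectangle for this animation frame
};

// Byte offsets you'd hand to glVertexAttribPointer for each attribute.
constexpr std::size_t pos_offset  = offsetof(SpriteVertex, x);
constexpr std::size_t size_offset = offsetof(SpriteVertex, w);
constexpr std::size_t uv_offset   = offsetof(SpriteVertex, u0);
```

One caveat I've read about: point sprites are clamped to an implementation-dependent maximum size, so past a certain quad size a geometry shader that emits the quad may be the safer route.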

a slime
Apr 11, 2005

If it's best to interleave vertex data, why doesn't glBufferSubData have a stride parameter? It feels wasteful to send 24 bytes of unchanged data per vertex every time I want to update 8 bytes of texture coordinates.
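The offset bookkeeping a strided update would need is trivial; with the illustrative numbers above (24 bytes of other attributes followed by 8 bytes of texcoords per vertex), it would look like:

```cpp
#include <cstddef>

// Illustrative numbers matching the post: each interleaved vertex is 32
// bytes, with 24 bytes of other attributes followed by 8 bytes of
// texcoords. A "strided" update would be one glBufferSubData call per
// vertex at this byte offset.
std::size_t texcoord_offset(std::size_t vertex_index)
{
    const std::size_t vertex_stride = 32;   // 24 + 8 bytes
    const std::size_t texcoord_start = 24;  // texcoords live at the end
    return vertex_index * vertex_stride + texcoord_start;
}
```

Presumably the answer is that lots of tiny glBufferSubData calls tend to cost more than one big upload, which would make a stride parameter an attractive nuisance.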

a slime
Apr 11, 2005

What lingering OpenGL state fuckery could destroy my ability to draw vertex arrays? I wrote a plugin for some closed source software a while ago, and at some point a new version of their code hosed everything up. I can draw the exact same things fine in immediate mode.

Is there an easy way to get a dump of the current OpenGL state? How do I debug this?

a slime
Apr 11, 2005

Ughhhhhhh I'm having some OpenGL rasterization issues that are driving me crazy. I'm rendering bitmap fonts from a texture atlas, here the individual letters are moving around:

Good frame:


Bad frame:


The effects are much more noticeable out of the box; right now I'm doing a glTranslatef(0.05f, 0.05f, 0.f) before I render anything, and the texture atlas is set up as follows:

code:
const float RASTERIZATION_OFFSET = 0.05f;
tex_coords_[2] = tex_coords_[4] = (GLfloat)(position.x + size_.x - RASTERIZATION_OFFSET) / texture_width;
tex_coords_[0] = tex_coords_[6] = (GLfloat)(position.x + RASTERIZATION_OFFSET) / texture_width;
tex_coords_[1] = tex_coords_[3] = (GLfloat)(position.y + size_.y - RASTERIZATION_OFFSET) / texture_height;
tex_coords_[5] = tex_coords_[7] = (GLfloat)(position.y + RASTERIZATION_OFFSET) / texture_height;
My projection is set up as:
code:
glOrtho(0.f, 320.f, 0.f, 240.f, -1.f, 1.f);
And my viewport is always some integral multiple of 320x240. It's true that rounding the coordinates of textured objects solves the above problem, but I want to work with floats (mainly so that as the window scales up, objects are displayed more precisely).

Any ideas?

a slime
Apr 11, 2005

Ahhh sorry, my post was super incoherent because I wrote it at six in the morning :(

Thanks for the response. I was trying to say in my post above that without the rasterization offset and translate, the problems are even worse. They are less frequent, but much more noticeable. Observe:



a slime
Apr 11, 2005

Sorry, I'm really not explaining myself well.

OneEightHundred posted:

No I mean don't do the rasterization offset thing on the texture coordinates. If you're trying to draw entire pixels from a texture map, then the texture coordinates need to be from the corner of one pixel on the texture to the opposite corner of another pixel.

So they should be stuff like (position.x + size_x) / width and that's it.

Right, unless I'm misunderstanding you now, that's exactly what I'm doing in my most recent post above.

roomforthetuna posted:

I think the problem is you can't do distorted fonts like in the screenshots with "point" sampling (which I assume is what's being used since there's no half-lit pixels) without getting artifacts - in a moving distortion like that there'll always be some point where one pixel's v is 1.001 and the next pixel's v is 1.999, and you only wanted the line one pixel thick but now it's two. (Or alternatively, one pixel's v is 0.999 and the next is 2.001 so you missed that texture pixel entirely.)

It might help to make the font texture bigger. You'd still get artifacts but they'd look less bad. Then with a bigger texture you could also use linear sampling which would get you a slightly blurrier result but without the honking great chunks of illegible.

(Or, only use your font in nice straight lines at a 1:1 scale, in which case point filtering will give you the best results and you won't get artifacts.)

My fault for not explaining: everything is being rendered as an axis-aligned quad at its original size. The wavy text is just because I'm shifting individual letters vertically. Also, the shots above were rendered at 320x240 and magnified 2x just for the sake of posting.

I'm not sure I can really solve the problem the way I wanted to; I think that, due to floating point inaccuracies, you'll always have rasterization problems unless you do some kind of rounding before sending vertex coordinates to OpenGL.
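If anyone else hits this, the rounding in question is nothing fancier than snapping to the logical pixel grid before submission (a sketch, assuming the 320x240 setup above):

```cpp
#include <cmath>

// Snap a vertex coordinate to the nearest pixel of the 320x240 logical
// resolution before it goes to OpenGL. Because the viewport is always an
// integral multiple of 320x240, a snapped coordinate lands on a whole
// device pixel, so the rasterizer can't split a texel across two pixels.
float snap_to_pixel(float v)
{
    return std::floor(v + 0.5f);
}
```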


I have another question. How should one typically use VBOs for dynamic objects that are frequently moving on and off screen? Right now I'm just using glDrawElements with a client-side index array. Every frame, I iterate over all onscreen objects and collect their indices into an array and pass it to glDrawElements. When an object is offscreen, it remains in the VBO, I just don't pass its indices to glDrawElements.

There are so many different ways to do the same thing, I have no intuition about how to compare different approaches and have trouble choosing one. For instance, I could use a server-side index buffer, zero out elements as soon as they go off screen, do some primitive memory management to reuse available parts of the index buffer, keep track of the maximum index used and pass all of that to glDrawRangeElements. Is that better than building a client-side index buffer of visible objects on the heap, passing it to glDrawElements, and then discarding it every single frame?

I realize it depends on the application and I can't find an exact answer without trying it out and doing some profiling, but I need some kind of guiding principle to follow on my initial implementation, otherwise I might as well do anything, right?
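For reference, the client-side approach I described amounts to something like this (a sketch; it assumes each quad owns four consecutive vertices in the VBO):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Each quad occupies four consecutive vertices in the VBO. Every frame,
// gather two triangles' worth of indices for only the visible quads and
// hand the result to glDrawElements.
std::vector<std::uint16_t> collect_indices(const std::vector<bool>& visible)
{
    std::vector<std::uint16_t> indices;
    for (std::size_t q = 0; q < visible.size(); ++q) {
        if (!visible[q]) continue;
        std::uint16_t base = static_cast<std::uint16_t>(q * 4);
        // Two triangles per quad: vertices 0-1-2 and 0-2-3.
        std::uint16_t quad[6] = { base, static_cast<std::uint16_t>(base + 1),
                                  static_cast<std::uint16_t>(base + 2),
                                  base, static_cast<std::uint16_t>(base + 2),
                                  static_cast<std::uint16_t>(base + 3) };
        indices.insert(indices.end(), quad, quad + 6);
    }
    return indices;
}
```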

a slime
Apr 11, 2005

Spite posted:

First, I'm confused. If the object isn't physically changing shape, etc, you don't need to dynamically update its vertex/index data. Unless I'm missing your point...
For text, most people pass a sysmem pointer to index data.

Usually you want to double-buffer VBOs.
Frame 1:
Mod VBO A, Draw with A
Frame 2:
Mod VBO B, Draw with B

Think of it this way:

All GL commands get put into a queue that will be pushed to the GPU (a "command buffer"). However, you can modify stuff in VRAM via DMA as well. So you cannot* modify something that is currently being used to draw. However, if you have 2 VBOs with the same data, you can get around this (and vertex data tends to be small, so it doesn't really hurt your mem usage).
This is true for textures, vertex data, cbuffers, etc (I've seen several DX10 apps that try to use one cbuffer and end up waiting after every draw).

*D3D's LOCK_DISCARD and GL's FlushMappedBufferRange will allow you to modify stuff in use - it's up to you to make sure you don't smash your own memory.

Maybe I'm using VBOs in kind of a strange way. All I'm rendering are a bunch of axis aligned quads that (at least right now) are all textured from the same atlas. Whenever an object in the scene moves, I need to update its vertex coordinates in the VBO. Whenever a sprite animates, I need to update its tex coords. This way I can draw all of the objects in the scene with a single OpenGL call. Is this a bad approach?

OneEightHundred posted:

Why would you do this instead of just using discard?

Also, another option for using discard on OpenGL buffers is to use glBufferData with a NULL data pointer.

Doesn't this mean I'll have to completely repopulate the VBO? If I'm doing that every frame there's no real benefit over something like vertex arrays right?

a slime
Apr 11, 2005

Instancing looks cool, I'll have to play around with that. I don't think it really helps me too much though because most of my quads are different sizes.

Eventually I figured I would write a geometry shader like you said. What's wrong with doing that?

Also, how would double buffering VBOs help me? I'd have to repopulate the back buffer every frame, right?

a slime
Apr 11, 2005

I don't really know where to post this but I think it fits here. I'm having trouble wrapping my head around a problem and I think at this point I've spent too long staring at it to see a simple solution.

I have a 3d polygon displayed with an orthographic projection. For an arbitrary rotation, I want to be able to "snap" the vertices to the nearest point on a 2d grid overlaid on the orthographic projection.

Right now what I do is the following: I use gluProject on three axis aligned unit vectors, then subtract a projected zero vector from each to get x_part, y_part, and z_part: each 3d axis's effect on 2d translation in the projection. Then I take the minimum nonzero component of each to be x_width, y_width, and z_width, and use (2D_GRID_SIZE / foo_width) as the width of a grid on foo's respective axis. Any vertex snapped to this 3d grid will align with the 2d grid on the scene's projection, and I can do this with some simple rounding magic.

First of all, this solution is not very general and makes a million assumptions, the most ugly of which is that everything breaks if the 2d grid has a different size on each axis. Second of all, everything breaks if the origin is not on a gridpoint.

I arrived here by trial and error and I can't really justify anything that I've done so far. Any ideas? Can anyone give me an idea where to start in building a general solution to this problem? I feel like there has to be an obvious solution that I just completely missed.
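(For concreteness, the "rounding magic" at the end is just nearest-multiple snapping once a grid width has been derived for an axis, i.e. grid_width = 2D_GRID_SIZE / foo_width in the scheme above. A sketch:)

```cpp
#include <cmath>

// Snap v to the nearest multiple of grid_width. Applied per axis with the
// per-axis grid widths derived from the projected unit vectors.
double snap_to_grid(double v, double grid_width)
{
    return std::round(v / grid_width) * grid_width;
}
```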

edit: think I got it... Post following shortly

a slime fucked around with this message at 14:53 on May 10, 2011


a slime
Apr 11, 2005

I have a bunch of concave contours whose vertices rest on the surface of a sphere. I want to draw them, filled with color, on that sphere. I was thinking that I could take points from a tessellation of the sphere, calculate the Delaunay triangulation over all of these points and the vertices of my contours, and render that. Is there a simpler way to do this?

Sorry if this is a silly question, I haven't worked much in 3D :)

edit: I guess I also need to determine which of the points from the tessellation are inside which contours (for coloring purposes), but it seems like I could do some spherical analogue of a point in polygon test?

a slime fucked around with this message at 00:42 on Apr 25, 2012
