Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Nuke Mexico posted:



This is an implementation I did of the signed-distance-field pre-processing and shading techniques presented in the paper "Improved Alpha-Tested Magnification for Vector Textures and Special Effects" that Valve put out. It's nothing too spectacular to look at, but the concept is pretty neat (and flexible).

Basically, most game text-rendering systems either (a) render their fonts at a fixed point size to a texture and just scale them up or down as needed, or (b) render each character that is needed at any given point size, outline/shadow mode, etc., on demand to one of several swap textures which are updated each frame. The problem with the former method is that the text scales horribly, so if you deviate too far from the target point size (either larger or smaller) you end up with a blurry mess; the problem with the latter is that it requires you to do a lot of work each frame to manage your texture memory usage, often keeping several versions of the same glyphs in your cache textures because you need them at different point sizes/rendering modes.

The "Signed Distance Field" method basically pre-processes a high resolution version of the characters, then shrinks the sampled data down and stores it in a single, small, compact texture. Essentially, each pixel in the texture corresponds to how "far" (in texels) that given point is from the edge of the character -- that's the 'distance' part. The 'signed' part comes from the fact that you split the range of possible values such that a value < 0.5 is outside the edge, while > 0.5 is inside the edge. You then map and render these characters as you normally would a set of glyphs in an atlas texture, with the only exception that you then need a shader to interpret the values so that you get smooth results.

This has the obvious advantage over the prior methods of needing less texture memory; however, it's also superior to both in that, by virtue of the mathematical properties of the signed distance field and bilinear texture interpolation, it scales up and down far better than simply rendering the character itself to a texture and scaling that. There's also the added bonus that you now have continuous "distance-from-edge" information in the texture, which lets you get neat things like outlining and drop shadows essentially for free.
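For illustration, the shader side can be as small as the fragment program below, written out here as a GLSL source string in C++. The uniform names, the smoothing width, and the assumption that the distance is stored in the texture's alpha channel are mine, not taken from the implementation above.

// Hedged sketch: turn the sampled distance into smooth coverage, plus an
// optional outline band just outside the 0.5 iso-contour.
const char* kSdfFragmentShader = R"(
    uniform sampler2D u_glyphAtlas;   // signed-distance-field glyph atlas
    uniform vec4  u_textColor;
    uniform vec4  u_outlineColor;
    uniform float u_smoothing;        // roughly 1/4 texel, in distance units
    uniform float u_outlineWidth;     // 0.0 disables the outline

    varying vec2 v_texCoord;

    void main()
    {
        float dist = texture2D(u_glyphAtlas, v_texCoord).a;   // < 0.5 outside, > 0.5 inside

        // Coverage of the glyph itself: smooth step around the 0.5 contour.
        float glyphAlpha = smoothstep(0.5 - u_smoothing, 0.5 + u_smoothing, dist);

        // Everything between (0.5 - outlineWidth) and 0.5 becomes outline.
        float outlineAlpha = smoothstep(0.5 - u_outlineWidth - u_smoothing,
                                        0.5 - u_outlineWidth + u_smoothing, dist);

        vec4 color = mix(u_outlineColor, u_textColor, glyphAlpha);
        gl_FragColor = vec4(color.rgb, color.a * outlineAlpha);
    }
)";

A drop shadow falls out of the same idea: sample the field a second time at a small offset and composite the result behind the glyph.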

This is all a lot of :words: for a fairly unexciting picture; however, it's still a pretty cool technique -- especially considering how simple it is, conceptually. It's part of a larger tech demo I'm working on for a job portfolio, so I may post some other elements later on once they're more polished.

Can you explain the advantage of using this technique over a vector based approach (i.e. rendering triangles based on the splines)?

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Nuke Mexico posted:

First, you can get smooth anti-aliasing on your font edges (by manipulating the alpha of the texel based on the distance field instead of just having a hard cut-off), as well as things like outlining and drop shadows, at very little additional cost. You can also, in general, generate much higher quality glyphs with a texture-based system than you could with a direct vector-to-geometry approach, unless you wanted to allocate a fairly sizable number of triangles to rendering text.

You also have the advantage of dividing the work more effectively between your vertex and fragment units, which is important because the graphics hardware is a pipeline, so a bottleneck in one stage means the other stages are likely just sitting idle. For this system, I'm generating 2 triangles and 4 vertices (indexed) per character, and doing (hopefully) an equal amount of work on the vertex and fragment processing stages; depending on your implementation, a vector-to-geometry system could be anywhere from 8 to 50 triangles per character, and likely doing almost nothing at all on the fragment processors. That's a lot of CPU and memory overhead to composite those triangles together into one render list, and REALLY a lot of overhead if you're issuing a separate draw command for each character. Of course, none of this really matters if text rendering is not a bottleneck for you; I just happen to find this implementation to be fairly flexible, straightforward, and all-around elegant.
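As a rough sketch of what "2 triangles and 4 vertices (indexed) per character" means in practice, appending one glyph to a batched buffer looks something like this (C++; the struct and function names are hypothetical, not from the implementation being described):

#include <cstdint>
#include <vector>

struct GlyphVertex { float x, y, u, v; };   // screen position + atlas UV

void appendGlyphQuad(std::vector<GlyphVertex>& verts,
                     std::vector<uint16_t>& indices,
                     float x, float y, float w, float h,      // screen-space rect
                     float u0, float v0, float u1, float v1)  // atlas rect
{
    const uint16_t base = static_cast<uint16_t>(verts.size());

    // Four corners of the glyph quad, carrying the distance-field atlas UVs.
    verts.push_back({x,     y,     u0, v0});
    verts.push_back({x + w, y,     u1, v0});
    verts.push_back({x + w, y + h, u1, v1});
    verts.push_back({x,     y + h, u0, v1});

    // Two indexed triangles covering the quad.
    const uint16_t quad[6] = {
        base, static_cast<uint16_t>(base + 1), static_cast<uint16_t>(base + 2),
        base, static_cast<uint16_t>(base + 2), static_cast<uint16_t>(base + 3)
    };
    indices.insert(indices.end(), quad, quad + 6);
}

Batching every glyph of a string into one vertex/index buffer like this keeps the whole thing to a single draw call.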

It's also worth noting that this technique works with ANY alpha-tested texture that can be converted into a high-resolution binary (white-or-black) image. For example, you can use this method to create HUD elements from source textures that scale nicely and can have drop shadows, pulse, or glow. It would also be very useful for the alpha-tested polygons of, say, a brush/grass/tree system, as a way of creating leaves with crisp, smooth edges even under close inspection, without using a large amount of texture memory for them. It's really just a good all-around solution to the problem of aliasing in alpha-testing.

Thanks a lot for the info. I've been slowly developing an engine out of a codebase I've used for a few projects now, adding features as I need them, and I haven't needed a font rendering system so far since I've been running it inside a GUI app. I've been thinking about taking it to the next level and making it into something that can run full-screen, so I've been looking for a modern font rendering technique to implement. After reading that paper by Valve I might give it a go; it seems clever and pretty straightforward.
There's a chapter by Charles Loop and Jim Blinn in GPU Gems 3 (chapter 25) where they render vector art such as text straight from the spline curves, on the GPU. I haven't read it in detail yet, but maybe Valve's technique is more practical.
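For reference, the core trick of that chapter (Loop and Blinn, "Rendering Vector Art on the GPU") is to draw each quadratic Bézier segment as the triangle of its three control points, assign the canonical texture coordinates (0,0), (0.5,0), (1,1) to those vertices, and let the fragment shader evaluate the curve's implicit form. A minimal sketch of the idea, with anti-aliasing and the cubic case omitted (the sign convention depends on the segment's orientation):

// Illustrative only; see the chapter for the full treatment.
const char* kLoopBlinnFragmentShader = R"(
    varying vec2 v_uv;   // canonical (u, v) assigned per control point

    void main()
    {
        // u^2 - v = 0 exactly on the curve; the sign separates the two sides.
        if (v_uv.x * v_uv.x - v_uv.y > 0.0)
            discard;
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // fill colour
    }
)";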

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Nuke Mexico posted:

Every game I know of or have worked on uses bitmapped/textured fonts, if an appeal to authority is worth anything.

I know, but I don't necessarily want to do what every other game is doing. ;)

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

PlaneGuy posted:

You could use voxel-based fonts. I don't think there are many doing that.

You had me at "voxel."

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Stramit posted:

I'm currently rebuilding my 3D engine from scratch. I used it last year for my honors thesis at university, but by the end there were so many things I wanted to change that I decided to just start over. Here are some shots of the stuff I have been working on.

General Scene:


Screen Space Ambient Occlusion pass (still needs a bit more work/tweaking):


I'm doing this all in OpenGL. Currently the engine supports Bullet physics and a nice configurable render pipeline where I can change how elements plug in and out. I have just started working on the game side of the engine, so I'll actually be able to use it for something other than pretty tech renders.

Looks very cool. What kind of scene/map format are you using?

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Entheogen posted:

I am trying to do some volume rendering using OpenGL and Java.

Here are some preliminary results (there are 6 million triangles in each):





The frame rate is not that great. I use display lists right now. The volume data itself doesn't change. If anyone can suggest a better method, please do.

It's volume rendering but you're rendering millions of triangles? Do explain.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Thug Bonnet posted:

I'm pretty sure display lists are stored outside the GPU's memory, whereas VBOs/PBOs are stored in GPU memory. Another nice thing about VBOs/PBOs is that you can read from and re-write to them whenever you'd like (while incurring a performance hit, of course).

PM me and I'll send the code (there's more than I should probably dump in the thread).

Actually, pretty much all modern OpenGL implementations will store display lists in video memory. The difference is that the spec does not require this (display lists have been in the spec for far longer than vertex buffers). It makes sense: the implementation should do whatever it can to exploit the fact that a display list is static, and since drivers need to support vertex buffers anyway, it's a logical step. So if your geometry is static, you can stick with display lists.
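For comparison, the VBO path for static data that Thug Bonnet mentions is only a handful of calls; a minimal sketch (C++, assuming GL 1.5+ entry points are available, e.g. via GLEW, and with an illustrative function name):

#include <GL/glew.h>

GLuint uploadStaticVertices(const void* data, GLsizeiptr bytes)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW tells the driver the data is written once and drawn many
    // times -- essentially the same promise a display list makes.
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}

// At draw time: bind the buffer, set the vertex pointers, and call
// glDrawArrays / glDrawElements; nothing is re-specified per frame.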
That said, ray casting is fun, so you should really consider it; just google for some papers/tutorials. You're already doing shader programming anyway, right? It shouldn't be that big of a learning curve.
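If it helps, the core of a single-pass volume ray caster is just a loop in the fragment shader, along the lines of the sketch below (again a GLSL string in C++). The variable names, fixed step count, and trivial transfer function are placeholders; a real implementation computes proper entry/exit points and uses a transfer-function lookup.

// Assumes the volume fills the unit cube and the vertex shader passes the
// object-space position of the cube's front faces in v_entry.
const char* kRaycastFragmentShader = R"(
    uniform sampler3D u_volume;      // scalar volume data in [0,1]
    uniform vec3      u_cameraPos;   // camera position in object space
    varying vec3      v_entry;       // ray entry point, in [0,1]^3

    void main()
    {
        vec3 dir = normalize(v_entry - u_cameraPos);
        vec3 pos = v_entry;
        vec4 accum = vec4(0.0);

        // Fixed-step front-to-back compositing; stop when the ray leaves the
        // cube or the accumulated opacity is close to full.
        for (int i = 0; i < 256; ++i)
        {
            float density = texture3D(u_volume, pos).r;
            vec4 src = vec4(density);                 // trivial transfer function
            accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
            accum.a   += (1.0 - accum.a) * src.a;

            pos += dir * (1.0 / 256.0);
            if (accum.a > 0.95 ||
                any(lessThan(pos, vec3(0.0))) || any(greaterThan(pos, vec3(1.0))))
                break;
        }
        gl_FragColor = accum;
    }
)";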

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Entheogen posted:

I followed you guys' advice and used VBOs. While I don't notice any rendering speed improvement, it did allow me to render more than with display lists, because I kept getting a GL_OUT_OF_MEMORY error with display lists when trying to render the same amount of data.

Here is a picture of some data set with a linear vertex shader function:


and here is the same data, from the same camera angle, but with a Gaussian function:


It was surprisingly easy to get VBOs to work with Java OpenGL. I just made a version with vertex arrays first, made sure it worked, and then made a couple of extra GL calls to put the arrays into GPU memory.

What does this data represent, and how have you verified that it is being displayed correctly? Have you tried running your program on some verifiable test data (e.g. medical data), just to see if it looks correct? You seem to be using a very non-standard technique (did you come up with this yourself?), so I'd be interested in seeing how it looks on data that's not just some bright colours.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Zaasakokwaan posted:

What, like this?...



e: Design-wise, that's about all I control on this project. The colors (a zillion shades of blue and grey), fonts, sizing, etc. are all Design By Committee. That looks... a bit odd, but it's been a helluva day and if I can make one person happy, so be it.

Better to leave the image in the centre and move the inputs to the left to line up with it. Please.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Lumpy posted:

I'll stop obsessing now.

Well done, now I can sleep again.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

From Earth posted:



A smoke/fluid simulator I'm working on for a visualization course. Written in GLUT + GLUI.

You're at TU/e, right? I probably know your professor.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

From Earth posted:

Yeah, I am. The professors of the course are Westenberg and Van De Wetering.

Ah, I was so sure that software was Alex Telea's. I guess it might still be. Huub van de Wetering was on my exam committee when I graduated last March.

This thread makes me sad that I can't post stuff I work on at my job, and that I've been too lazy to make anything neat in my own time lately. :(

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

StickGuy posted:

Alex Telea is in Groningen now, I think.

Most of the time, yeah. Last time I talked to him he was still going back and forth because he had a PhD student in Eindhoven. He might have moved over permanently now though.
