Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Jewel posted:

Not sure why you're simultaneously trying to avoid state changes (usually done to avoid sending a lot of data to/from the gpu and keep things fast) while also trying to modify pixels on the cpu (which is the opposite, very incredibly slow and doesn't use the gpu's power at all). Might be an XY problem here. What are you attempting to do, and why can't you do it on the fast gpu instead of doing stuff on the slow cpu?

You should be streaming all your texture data needed up front to the gpu and just binding different textures back and forth with the draw calls, perhaps rendering to texture with rendertargets/framebuffers if you need feedback loops of some kind (like, say, a post process).

The closest thing you get to draw calls in different threads is building sub-command buffers asynchronously and combining them at some synchronous time in the future and submitting it to draw.

Interested to know what your exact goal is!

I want to batch all meshes that use the same shader into the same draw call, and for that I need all textures to be in the same array texture (I'm doing it this way because I'm expecting to have a LOT of visually distinct smaller objects, and for that to be efficient I would have to batch them.) I don't want to modify anything CPU-side, but new textures will have to go through the CPU side from the HDD before it's on the GPU. I want to stream the data asynchronously, because I don't want to stall the application while I'm loading in a new texture.

I don't want different draw calls in different threads. All GL calls will be in a single thread. I just want to stream texture data into an array texture without stalling the application. This isn't a problem with something like the vertex data, because array buffers can be mapped directly to a pointer, and double buffering those would eat at most an extra MB or so.

Absurd Alhazred
Mar 27, 2010

by Athanatos

Joda posted:

I want to batch all meshes that use the same shader into the same draw call, and for that I need all textures to be in the same array texture (I'm doing it this way because I'm expecting to have a LOT of visually distinct smaller objects, and for that to be efficient I would have to batch them.) I don't want to modify anything CPU-side, but new textures will have to go through the CPU side from the HDD before it's on the GPU. I want to stream the data asynchronously, because I don't want to stall the application while I'm loading in a new texture.

I don't want different draw calls in different threads. All GL calls will be in a single thread. I just want to stream texture data into an array texture without stalling the application. This isn't a problem with something like the vertex data, because array buffers can be mapped directly to a pointer, and double buffering those would eat at most an extra MB or so.

Have you tried only threading out the loading from HDD to CPU memory, rather than the texture calls?
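
A rough sketch of that split — decodeImageFromDisk and the RGBA8 format are placeholders, not anything from your code: the worker thread only does the disk I/O and decode, and the GL thread drains a queue once per frame and does the glTexSubImage3D into the array texture.

code:
#include <GL/glew.h>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct PendingTexture {
    int layer;                       // destination layer in the array texture
    int width, height;
    std::vector<unsigned char> rgba; // decoded pixels, filled by the worker
};

std::mutex g_mutex;
std::queue<PendingTexture> g_ready;  // worker -> GL thread

// Placeholder for stb_image / your own loader; this is the slow part.
PendingTexture decodeImageFromDisk(const std::string& path, int layer);

// Run on a std::thread (e.g. std::thread(loaderThread, paths).detach()); never touches GL.
void loaderThread(const std::vector<std::string>& paths) {
    for (int i = 0; i < (int)paths.size(); ++i) {
        PendingTexture tex = decodeImageFromDisk(paths[i], i);
        std::lock_guard<std::mutex> lock(g_mutex);
        g_ready.push(std::move(tex));
    }
}

// Called once per frame on the GL thread; uploads at most one pending image.
void pumpTextureUploads(GLuint arrayTexture) {
    std::lock_guard<std::mutex> lock(g_mutex);
    if (g_ready.empty()) return;
    PendingTexture tex = std::move(g_ready.front());
    g_ready.pop();
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, tex.layer,
                    tex.width, tex.height, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, tex.rgba.data());
}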

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today
Uploading texture data to the GPU in parallel to rendering is a good idea (if you map the memory you can even avoid going through userspace system memory first) and, executed correctly, will provide a much better user experience. Precisely controlling parallelism is much easier with Vulkan than GL, though; in GL you may find yourself relying on the driver correctly guessing what your intention is.
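
For reference, a minimal sketch of the mapped-memory route using a pixel unpack buffer (PBO); the function and parameter names and the RGBA8 format are assumptions, not a specific API. The loader thread writes into the mapped pointer, and the final glTexSubImage3D sources from the PBO so the copy can overlap rendering.

code:
#include <GL/glew.h>
#include <cstddef>

// Hedged sketch: upload one decoded image into layer 'layer' of a
// GL_TEXTURE_2D_ARRAY through a pixel unpack buffer (PBO). 'fillPixels'
// stands in for handing the mapped pointer to a loader thread.
void uploadLayerViaPBO(GLuint arrayTexture, int layer, int width, int height,
                       void (*fillPixels)(void* dst, std::size_t bytes)) {
    const std::size_t layerBytes = std::size_t(width) * std::size_t(height) * 4;

    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, layerBytes, nullptr, GL_STREAM_DRAW);

    // Map driver-owned memory. In a real streamer the loader thread writes here
    // over several frames while the GL thread keeps rendering, and the unmap
    // happens once the loader signals it's done.
    void* dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, layerBytes,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    fillPixels(dst, layerBytes);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // With a PBO bound as the unpack buffer, the pixel pointer is a byte offset
    // into it, so the copy can complete asynchronously on the GPU's schedule.
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, width, height, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}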

Colonel J
Jan 3, 2008
I'm trying to finally wrap up my Master's degree, and I'm looking for a scene to try out the technique I'm developing; surprisingly finding a good free scene on Google is a pretty awful process, and I'm not having much luck finding good stuff. I'm looking mainly for an interior type scene, such as an apartment, ideally with a couple rooms and textures / good color contrast between the different surfaces (I'm working on indirect illumination). Something like this: http://tf3dm.com/3d-model/luxury-house-interior-74731.html , but like most scenes I download from this site a bunch of textures are missing and there's not even a .max file.

What do you guys use as a source for quality scenes? There has to be a modeler's community with quality portfolios or something like that... I'd do it myself but I'm not much of an artist :\ thanks a lot!

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
The standard models for GI papers tend to come from this website. At least the ones I've seen have almost all come from that list, with a few exceptions (like the AAO paper used some clockwork/gears scene that I haven't been able to find.) None of them fit your requirements exactly, but scenes like Crytek Sponza and San Miguel offer enough visual/geometric variety to demonstrate most concepts.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Joda posted:

The standard models for GI papers tend to come from this website. At least the ones I've seen have almost all come from that list, with a few exceptions (like the AAO paper used some clockwork/gears scene that I haven't been able to find.) None of them fit your requirements exactly, but scenes like Crytek Sponza and San Miguel offer enough visual/geometric variety to demonstrate most concepts.

Can confirm. Crytek Sponza is good, San Miguel is great but it's a f'ing 1.8 GB OBJ file because it's plain text and all the instances are unrolled :cripes:

I've been messing around with Blender trying to find a binary format that supports the required materials, etc. If my blender-fu improves I might even try to remake it with instancing, but my real job keeps getting in the way.

Hubis fucked around with this message at 01:31 on Mar 13, 2017

Xerophyte
Mar 17, 2008

This space intentionally left blank
A lot of the recent research is using scenes from this set from Benedikt Bitterli, which has a number of pretty good scenes converted for PBRT and Mitsuba. Great if you're working in Mitsuba or have a reader for their format, less great otherwise.

Beyond that, there is good stuff on BlendSwap if you can wade through the muck of less good stuff and are willing to either use Blender or spend time to de-blender them.

Colonel J
Jan 3, 2008

Hubis posted:

Can confirm. Crytek Sponza is good, San Miguel is great but it's a f'ing 1.8 GB OBJ file because it's plain text and all the instances are unrolled :cripes:

I've been messing around with Blender trying to find a binary format that supports the required materials, etc. If my blender-fu improves I might even try to remake it with instancing, but my real job keeps getting in the way.

Yeah, I used Crytek Sponza and haven't been getting very good results, and San Miguel is just too big for my limited RAM; when it's loaded in G3D with a tritree I just enter swap hell.

And thanks for this ^^ ! trying these out atm.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Colonel J posted:

Yeah, I used Crytek Sponza and haven't been getting very good results, and San Miguel is just too big for my limited RAM; when it's loaded in G3D with a tritree I just enter swap hell.

And thanks for this ^^ ! trying these out atm.

what problem are you having with Sponza? it *should* have good materials

Colonel J
Jan 3, 2008

Hubis posted:

what problem are you having with Sponza? it *should* have good materials

I mean that my algorithm isn't working too well.

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Colonel J posted:

I'm trying to finally wrap up my Master's degree, and I'm looking for a scene to try out the technique I'm developing; surprisingly finding a good free scene on Google is a pretty awful process, and I'm not having much luck finding good stuff. I'm looking mainly for an interior type scene, such as an apartment, ideally with a couple rooms and textures / good color contrast between the different surfaces (I'm working on indirect illumination).

Can you share anything about your work? I've been reading about realtime GI lately and am quite interested in new developments.

Colonel J
Jan 3, 2008

Ralith posted:

Can you share anything about your work? I've been reading about realtime GI lately and am quite interested in new developments.

Of course! I'm working on irradiance probes. You've probably heard about them; they're a pretty standard way of approximating GI by sampling the spherical irradiance function passing through discrete points in the scene and interpolating between the samples at runtime. It's pretty straightforward to create an irradiance volume by placing samples along a 3D regular grid and interpolating trilinearly, but that can easily lead to oversampling as your sample set grows fast that way and irradiance tends to vary pretty smoothly (for distant light sources).

I'm basically trying to automatically construct optimal probe sets by minimizing an error function: I take a much larger number of sample points than my final desired probe set size as a reference, then compute an error term which is the sum of squared differences between the SH projection coefficients of the ground truth samples and the interpolated samples from my probe set. I can then use the gradient of the probes' SH coeffs to find a direction to move them in which will lower the error term, make them take a step in that direction and continue until I've found a minimum.
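
Roughly, in symbols (just a restatement of the above, with p running over the dense reference samples and (l, m) over the SH coefficients):

code:
E = \sum_{p} \sum_{l,m} \left( c_{l,m}^{\mathrm{ref}}(p) - \hat{c}_{l,m}(p) \right)^{2}
% \hat{c}_{l,m}(p): coefficient interpolated at p from the current probe set.
% The probe positions are moved along -\nabla E during the descent pass.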

By following a sequence of 1) placing a new probe by trying out a bunch of locations (it's precomputation, so I can try as many as I want and it's fast) and keeping the best one, followed by 2) a gradient descent pass until I reach a local minimum for the current probe set, I'm able to get pretty consistent results, with an error term half that of a regular trilinear grid using 10x the number of probes.

This sounds like a good result, but honestly I'm not too happy about it; I've been finding out that a lower theoretical error does not necessarily lead to more pleasant shading. The important thing is smoothness and visual consistency, and the way I'm building the probes doesn't really care about that; it cares about lowering the error term by all means possible, and the final shading has obvious flaws. My choice of interpolating between probes by Weighted Nearest Neighbour has advantages; for example, I've been able to derive the equations for gradient descent without too much pain, as it's all continuous and well-behaved for small probe displacements, unlike a 3D grid in which crossing a cell border introduces a discontinuity.
However, I think the disadvantages are greater; the worst thing is that every probe in the structure influences every shading point, which is extremely wrong. I'm thinking I have to separate my scenes into distinct "visibility volumes", which is kind of what they do in the industry, but I'd rather have had a "black box" into which you can feed a polygon soup and get an ideal probe set as output.
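
For concreteness, weighted-nearest-neighbour interpolation of the coefficients looks something like the following; the inverse-squared-distance kernel is one common choice of weight, not necessarily the exact one used here. Since every w_i(x) is nonzero, every probe contributes to every shading point, which is the problem just described.

code:
\hat{c}_{l,m}(x) = \frac{\sum_i w_i(x)\, c_{l,m}^{(i)}}{\sum_i w_i(x)},
\qquad w_i(x) = \frac{1}{\lVert x - p_i \rVert^{2}}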

Still it depends; let's say you're making a racing game, you're gonna place your probes along the track and having a regular grid is probably not a good idea. You could instead just interpolate linearly between the probes (along track coordinates) and set your coefficients up so that the 3 closest probes have the most influence over the result (really just thinking out loud here). I think in that sort of situation a technique like mine could prove useful for optimal probe placement.

So yeah, not too groundbreaking work, but I think I have enough meat for a thesis / good theoretical results but not a usable algorithm in practice. There's lots of things I could have done better in retrospect; for example, a strategy of creating an extremely dense grid and removing nodes while keeping the error function as low as possible, creating some sort of octree, could have been a good solution. I don't think what I did is amazing, especially compared to the fancy stuff they're doing in modern games and cutting-edge research, but this was my first foray into CS (as a physics major) and led to me working in the industry so it's not all bad :)

edit: to redeem myself here's some good modern research on probes, much fancier than what I've got: http://graphics.cs.williams.edu/papers/LightFieldI3D17/

Colonel J fucked around with this message at 14:50 on Mar 13, 2017

Hyvok
Mar 30, 2010
How am I supposed to update my vertices with OpenGL? Everything works fine and dandy the first time I generate my vertices, but if I try to update them they just do not get updated. The only thing that seems to change is the size of the buffer. I've now spent a few hours going through most permutations of binding buffers and unbinding buffers and calling glVertexAttribPointer and glEnableVertexAttribArray and glDisableVertexAttribArray, and it seems nothing will make the vertices change in anything other than size.

My OpenGL thingie is a very simple 2D thing; in init I basically create the VAO and VBO and vertex attribute pointers and then bind them, and then create all the uniforms and shaders and what not. When updating vertices I call glBufferData (yes I know it reallocs and I should use glBufferSubData, but right now I just want something to change on screen) and when drawing just glDrawArrays. This works fine the first time around, but the second time around, if I call glBufferData with new data (different vertices, different amount of them), only the number of vertices displayed on the screen changes but the vertices are not updated. I've tried unbinding the VBO before glBufferData and then binding it back, I've tried calling glVertexAttribPointer after glBufferData, tried disabling and enabling the vertexattribarray and tried practically every combination of these and it works exactly the same each time. What do I need to do to update the vertices?

At one point I was certain I figured it out since some place said buffers cannot be modified when they are bound (and they are bound all the time for me currently, because I just have one), but unbinding VBO and VAO, calling glBufferData and then rebinding makes no difference.

Hyvok fucked around with this message at 19:33 on Mar 13, 2017

Xerophyte
Mar 17, 2008

This space intentionally left blank

Colonel J posted:

So yeah, not too groundbreaking work, but I think I have enough meat for a thesis / good theoretical results but not a usable algorithm in practice. There's lots of things I could have done better in retrospect; for example, a strategy of creating an extremely dense grid and removing nodes while keeping the error function as low as possible, creating some sort of octree, could have been a good solution. I don't think what I did is amazing, especially compared to the fancy stuff they're doing in modern games and cutting-edge research, but this was my first foray into CS (as a physics major) and led to me working in the industry so it's not all bad :)

Hey, it sounds pretty good to me. I used to work in the same office as a team who worked on a lightmap and irradiance probe baker for games (Beast), and better automatic probe placement was always their holy grail. They had an intern who did what sounds like a very similar master's thesis to yours a couple of years back. I think he ended up doing descent on the vertices of a 3D Delaunay tessellation, but light leaking was a constant problem. He had some heuristics for splitting tetrahedrons that crossed geometry and other boundaries, but as I understood it things would get messy for any complex scenes. The thesis is here if you're curious.

Colonel J
Jan 3, 2008

Xerophyte posted:

Hey, it sounds pretty good to me. I used to work in the same office as a team who worked on a lightmap and irradiance probe baker for games (Beast), and better automatic probe placement was always their holy grail. They had an intern who did what sounds like a very similar master's thesis to yours a couple of years back. I think he ended up doing descent on the vertices of a 3D Delaunay tessellation, but light leaking was a constant problem. He had some heuristics for splitting tetrahedrons that crossed geometry and other boundaries, but as I understood it things would get messy for any complex scenes. The thesis is here if you're curious.

Wow, that's amazing, this thesis is one of my main references as there's not too many people who have worked on that yet! And yeah, light leaking is awful, even in modern AAA games it's all over the place. I'd have rather used tetrahedrons for interpolation as well, but I didn't dare venture into finding the derivatives for the weights; I'll take a look through that thesis again 'cause if that guy went through the trouble of doing it, it could be pretty useful to me.

And thanks for the encouragement, I've looked at it so much that at this point all I see is the flaws...

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Static lighting and environments then? I've been particularly interested in totally dynamic solutions; I've played with a very compelling demo based on Radiance Hints (Papaioannou, 2011), which uses a regular grid of "probes" computed live by sampling reflective shadow maps. It uses screen-space techniques to reduce leakage, which isn't perfect but reduces the incidence of visually obvious errors. Someday I want to try applying it to massive environments by using toroidally-addressed clipmaps for the radiance cache.

It surprises me a bit to hear that you can work in terms of individual SH coefficients for gradient-descent and error-bounding. Is that mathematically rigorous? I'm 100% prepared to believe it is, my grasp of SH math is pretty loose.

I wonder if you could improve your results by adjusting your error function. It sounds like you're currently minimizing the global error, but are dissatisfied with the results due to the visual impact of local errors. What if you defined your error function to be the maximum local error? I imagine this might require a more stochastic approach to gradient descent than you currently need, and likely quite a lot more CPU time, but it seems more consistent with your desired results.
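
Spelled out, the suggestion is to swap the summed error for a worst-case one, something like:

code:
E_{\max} = \max_{p} \sum_{l,m} \left( c_{l,m}^{\mathrm{ref}}(p) - \hat{c}_{l,m}(p) \right)^{2}
% Non-smooth in the probe positions, hence the note about needing a more
% stochastic flavour of descent.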

I'm having trouble following why every probe has global influence on shading. That certainly seems like an issue, since then the time complexity of shading a single point is proportional to the size of the entire scene.

Ralith fucked around with this message at 22:35 on Mar 13, 2017

Hyvok
Mar 30, 2010

Hyvok posted:

How am I supposed to update my vertices with OpenGL? Everything works fine and dandy the first time I generate my vertices, but if I try to update them they just do not get updated. The only thing that seems to change is the size of the buffer. I've now spent a few hours going through most permutations of binding buffers and unbinding buffers and calling glVertexAttribPointer and glEnableVertexAttribArray and glDisableVertexAttribArray, and it seems nothing will make the vertices change in anything other than size.

My OpenGL thingie is a very simple 2D thing; in init I basically create the VAO and VBO and vertex attribute pointers and then bind them, and then create all the uniforms and shaders and what not. When updating vertices I call glBufferData (yes I know it reallocs and I should use glBufferSubData, but right now I just want something to change on screen) and when drawing just glDrawArrays. This works fine the first time around, but the second time around, if I call glBufferData with new data (different vertices, different amount of them), only the number of vertices displayed on the screen changes but the vertices are not updated. I've tried unbinding the VBO before glBufferData and then binding it back, I've tried calling glVertexAttribPointer after glBufferData, tried disabling and enabling the vertexattribarray and tried practically every combination of these and it works exactly the same each time. What do I need to do to update the vertices?

At one point I was certain I figured it out since some place said buffers cannot be modified when they are bound (and they are bound all the time for me currently, because I just have one), but unbinding VBO and VAO, calling glBufferData and then rebinding makes no difference.

Finally got it to work by unbinding the VAO and VBO after init, rebinding for the glBufferData call and again unbinding at the end of that, and rebinding/unbinding for render. I think I finally understood it: no data will be transferred while the buffer is bound, so the transfer does not actually happen at the instant glBufferData is called... more like it's scheduled for when the buffer is unbound...? Which is a bit confusing because I had the impression all OpenGL calls are blocking, but I guess this is incorrect then.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
When you call glBufferData on a bound buffer, any future GL calls involving that buffer should act as if the buffer update you made with that call has already happened (even if it hasn't in real-time.) There's a lot of command buffer building and batch calls and such that're hidden and done by the driver in OpenGL, but there should be an absolute guarantee that you can act as if an operation is completed when its function call returns even if the call only queues it up in a command buffer or whatever (certain GL4 specific features notwithstanding.) Based on your original description my guess would be that you update your index buffer, but not your vertex buffer or something to that effect, but it'd be much easier to give you a clear answer if we could see your code.
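
For reference, the boring update path looks something like this (a minimal sketch, assuming 2D position-only vertices at attribute 0, not your actual layout):

code:
#include <GL/glew.h>
#include <vector>

// Minimal sketch: with the VAO and VBO bound, glBufferData takes effect for all
// later GL calls, so no unbind/rebind dance is needed just to update the data.
void updateAndDraw(GLuint vao, GLuint vbo, const std::vector<float>& xy) {
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Re-specifies (and reallocates) the whole buffer; glBufferSubData would
    // update in place without the realloc.
    glBufferData(GL_ARRAY_BUFFER, xy.size() * sizeof(float),
                 xy.data(), GL_DYNAMIC_DRAW);

    // Only needed again if the VBO binding or the vertex layout actually changed.
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(xy.size() / 2));
}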

Joda fucked around with this message at 13:50 on Mar 14, 2017

Luigi Thirty
Apr 30, 2006

Emergency confection port.

I've got some, uh, extra time on my hands so I'm working on my little DOS 3D engine again. I'm trying to get an object to move toward another object. This is harder than it sounds! My code works fine when the object is only moving along the X axis but if it needs to move along the Y axis, it will rotate to face away from the object the closer it gets...



The target object is the cube, which is placed 10 units directly above the arrow thing. Origin/Rotate are the moving object's position and orientation. T Dist is the moving object's distance from the center of the object. Reqd X is how many degrees the function thinks it has to correct by in order to face the object. My code is supposed to have the object face the cube and move toward it until it reaches <1 unit away. I thought I understood the math but apparently not. Anything obviously wrong here?

code:
//obj = the arrow thing. waypoint = the cube
    //called once per frame
    Vector3f target_position = waypoint->transformation.translation;
    float maximum_rotation_degrees = 2.5; //how many degrees can we rotate each frame?

    float x_distance = target_position.x - obj->transformation.translation.x;
    float y_distance = target_position.y - obj->transformation.translation.y;
    float z_distance = target_position.z - obj->transformation.translation.z;
    float xz_distance = std::sqrt((x_distance * x_distance) + (z_distance * z_distance));

    float x_desired_rotation = R2D(-std::atan2(y_distance, xz_distance)); //R2D = radians to degrees
    float y_desired_rotation = R2D(std::atan2(x_distance, z_distance));

    float x_required_rotation = x_desired_rotation - obj->transformation.rotation.x;
    float y_required_rotation = y_desired_rotation - obj->transformation.rotation.y;

    char reqdX[80];
    sprintf(reqdX, "REQD X: %f", x_required_rotation);
    g_screen.layer_text.putString(reqdX, strlen(reqdX), TEXT_XY(0,7), COLOR_GREEN, FONT_4x6);

    float x_this_frame_rotation = 0;
    float y_this_frame_rotation = 0;

    if(std::abs(x_required_rotation) > 5)
    {
        if (x_required_rotation > 0) {
            x_this_frame_rotation += std::fmin(maximum_rotation_degrees, std::abs(x_required_rotation));
        }
        else {
            x_this_frame_rotation -= std::fmin(maximum_rotation_degrees, std::abs(x_required_rotation));
        }

        obj->transformation.rotation.x = obj->transformation.rotation.x + x_this_frame_rotation; //around Y axis
    }

    //omitted: if distance <= 1, then stop

Hyvok
Mar 30, 2010
What is the best way to implement 2D text rendering? I've seen a few tutorials and stuff and the common approach seems to be to render the fonts to native size (with FreeType or something), then make a texture atlas out of them and push to GPU and just render them as textures.

However I would like to support unicode and I need to support multiple "zoom levels" (think of a 2D CAD software), so prerendering ALL symbols for ALL zoom levels is probably unfeasible. I'm thinking of just creating textures for symbols that are visible in the design (and if you type a new one it will render it and update the texture cache) for all zoom levels. Not sure of the final number of zoom levels but it's maybe 20 levels or so (need to do some testing). I wouldn't really expect the user to use a huge amount of different unicode symbols, but that is up to the user I guess... I could also maybe, at close zoom levels, just use a non-antialiased font and start upscaling the textures (you probably don't care how the edges look in your letters if you're zoomed so close that the letters are the size of your screen or something).

Any rules of thumb for the maximum size of textures or the total number of textures? At the very least I don't want to require some high-end GPU just to render some text (currently the only requirement is core 3.3 profile support)...

Xerophyte
Mar 17, 2008

This space intentionally left blank
You could try a non-bitmap approach. The maximum quality solution is to render the TrueType splines directly, but it's quite slow, challenging to handle filtering and aliasing, plus you have annoying corner cases for overlapping patches. I think at least some of our applications at Autodesk used to do fonts by switching between mipmapped bitmaps for small glyphs and tessellation for large glyphs; not really sure if that's still the case though.

Far as I know the current recommended approaches for both good quality and good performance are distance fields or median-of-3 distance fields, both of which effectively boil down to storing the glyphs as filterable, piecewise linear edge data in a low resolution texture. They can provide zoomable and alias-free text rendering at a single modest storage resolution. The drawback is that the fields can be somewhat tricky to generate, especially for the 3-channel median version. There are tools available to do the baking; I have no idea how easy they are to use.
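
The render side of the texture-based SDF approach is tiny, for what it's worth. A sketch of the usual single-channel recipe is below; for the median-of-3 variant you sample RGB and take median(r, g, b) before the same smoothstep. The uniform/varying names and the 0.5 edge convention are assumptions.

code:
// Plain C++ string holding the GLSL; uSdf/uColor/vUv are made-up names.
const char* kSdfFragmentShader = R"GLSL(
#version 330 core
uniform sampler2D uSdf;    // low-res distance field atlas
uniform vec4 uColor;
in vec2 vUv;
out vec4 fragColor;

void main() {
    float d = texture(uSdf, vUv).r;        // 0.5 corresponds to the glyph edge
    float w = fwidth(d);                   // screen-space anti-aliasing width
    float alpha = smoothstep(0.5 - w, 0.5 + w, d);
    fragColor = vec4(uColor.rgb, uColor.a * alpha);
}
)GLSL";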

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Xerophyte posted:

You could try a non-bitmap approach. The maximum quality solution is to render the TrueType splines directly, but it's quite slow, challenging to handle filtering and aliasing, plus you have annoying corner cases for overlapping patches. I think at least some of our applications at Autodesk used to do fonts by switching between mipmapped bitmaps for small glyphs and tessellation for large glyphs; not really sure if that's still the case though.

Far as I know the current recommended approaches for both good quality and good performance are distance fields or median-of-3 distance fields, both of which effectively boil down to storing the glyphs as filterable, piecewise linear edge data in a low resolution texture. They can provide zoomable and alias-free text rendering at a single modest storage resolution. The drawback is that the fields can be somewhat tricky to generate, especially for the 3-channel median version. There are tools available to do the baking; I have no idea how easy they are to use.
I wouldn't recommend distance fields for a dynamic application. They're usually computed by rasterizing your vector data at very high resolution and then running a somewhat expensive preprocessing pass on that output to compute a low-resolution distance field. They're also both slower to render (even ignoring preprocessing!) and lower-quality than a good GPU-accelerated rasterizer, which doesn't even require super recent gpu hardware.

I think for 2D CAD purposes where aesthetics aren't super important people often just approximate text outlines with a series of lines that you can handle the same as any other lines. Definitely don't try to store data for every possible zoom level; that will force you to use discrete zoom levels and require more space than just storing the high-resolution version and using hardware downsampling as necessary.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Hyvok posted:

What is the best way to implement 2D text rendering? I've seen a few tutorials and stuff and the common approach seems to be to render the fonts to native size (with FreeType or something), then make a texture atlas out of them and push to GPU and just render them as textures.
Does nobody do it by transforming the TrueType data into a set of triangles to render? Seems like that would give you pretty good quality (not perfect splines, but also not jagged, and you could set some "number of slices per curve" configurable values for how smooth you care for it to be), and reasonably cheap rendering since we can render a million billion triangles per frame these days and you most often wouldn't want any kind of exciting shader on there.

Hyvok
Mar 30, 2010

Xerophyte posted:

You could try a non-bitmap approach. The maximum quality solution is to render the TrueType splines directly, but it's quite slow, challenging to handle filtering and aliasing, plus you have annoying corner cases for overlapping patches. I think at least some of our applications at Autodesk used to do fonts by switching between mipmapped bitmaps for small glyphs and tessellation for large glyphs; not really sure if that's still the case though.

Far as I know the current recommended approaches for both good quality and good performance are distance fields or median-of-3 distance fields, both of which effectively boil down to storing the glyphs as filterable, piecewise linear edge data in a low resolution texture. They can provide zoomable and alias-free text rendering at a single modest storage resolution. The drawback is that the fields can be somewhat tricky to generate, especially for the 3-channel median version. There are tools available to do the baking; I have no idea how easy they are to use.

Ralith posted:

I wouldn't recommend distance fields for a dynamic application. They're usually computed by rasterizing your vector data at very high resolution and then running a somewhat expensive preprocessing pass on that output to compute a low-resolution distance field. They're also both slower to render (even ignoring preprocessing!) and lower-quality than a good GPU-accelerated rasterizer, which doesn't even require super recent gpu hardware.

I think for 2D CAD purposes where aesthetics aren't super important people often just approximate text outlines with a series of lines that you can handle the same as any other lines. Definitely don't try to store data for every possible zoom level; that will force you to use discrete zoom levels and require more space than just storing the high-resolution version and using hardware downsampling as necessary.

Thanks for the replies! I've heard of distance fields before but not of those median-of-3 versions which look interesting! I was considering the regular versions but they have some artifacts with sharp corners so I thought they are not the best for font rendering. I think I will look in to the median-of-3 distance fields first at least. Looks like you do not have to rasterize anything (or the msdfgen does that for you). I don't think processing time will be an issue unless it takes like ages.

Just rendering the text as a mesh might be an option as well; I just saw some site recommend against that due to the number of vertices you will need to handle, which did sound a bit odd since you can have A LOT of vertices nowadays...

krystal.lynn
Mar 8, 2007

Good n' Goomy
MSPaint drawings/explanations to follow. I'm a graphics neophyte, but we haven't been able to hire anyone better in our area, so I get to do some self learning.

I'm working on the graphics end of a sea ice physics simulation, where the collision area of each individual sheet of ice is represented in our physics engine as a 2D convex polygon of 4-8 sides or so. Rendering the simple polygons is easy, of course, but now I need to take steps to get them to look more realistic than blocky pieces of styrofoam. To that end, I'm hoping to apply some noise patterns to the boundary and surface of each polygon to roughen their appearance. I am just concerned about manipulating the geometry for now; later on I'll figure out textures and lighting and all that jazz. But first, I'm going to need a more detailed mesh than just the convex hull to accomplish anything (see crappy MSPaint): http://i.imgur.com/f0ciHBH.png

The question is, how do I go about generating this mesh using just the convex hull, and perhaps some parameterization to determine the 'resolution' of the internal vertices and the roughness of the outer edges (maybe I could parameterize based on the number of 'fragments' to break each edge into or something like that)?

This problem also has a B part, that I don't necessarily need completely figured out right away, but which might impose constraints on a solution to my first problem. Our physics model also simulates dynamic splitting of ice sheets, albeit in perfectly straight lines and only from vertex to vertex on the collision polygon (though arbitrary edge to edge splits may be supported in the future). These splits too would need some degree of roughness (again, MSPaint):
http://i.imgur.com/yvc9LNj.png

I was thinking maybe that tracing internal vertices closest to the axis of the split would be a good place to start, but I'm not married to the idea if it would be too expensive vs. just picking random points offset from the axis and retessellating the fragments.

Am I barking up the right tree? Any insight would be appreciated. We're using C++ and OpenSceneGraph but I'm happy (happier?) to do straight OpenGL if need be.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

roomforthetuna posted:

Does nobody do it by transforming the TrueType data into a set of triangles to render? Seems like that would give you pretty good quality (not perfect splines, but also not jagged, and you could set some "number of slices per curve" configurable values for how smooth you care for it to be), and reasonably cheap rendering since we can render a million billion triangles per frame these days and you most often wouldn't want any kind of exciting shader on there.

some things do, though the problem is that very small triangles tend to shade inefficiently. The shading for text rendering isn't *usually* that big of a cost, but if you've got a non-trivial amount of text on the screen then it starts to add up.

Signed distance field for glyphs works great, but it only really works for magnification (drawing the text larger than the SDF bitmap). One thing you could do is super-sample the bitmap inside the shader itself to determine coverage.

Zerf
Dec 17, 2004

I miss you, sandman
Distance fields also have some other nice properties when it comes to text effects, like dropshadows, outer/inner glow etc. Doing similar things for meshes/vector fonts is non-trivial and involves computing the straight skeleton or similar.

Ralith posted:

They're also both slower to render (even ignoring preprocessing!)...

Please elaborate on this. If we ignore preprocessing, rendering distance field fonts is just a plain texture lookup and some simple maths (which is essentially free next to the texture lookup).

High Protein
Jul 12, 2009

That's an interesting problem. Looking at your example picture, it seems that what you've effectively done is take a point about halfway along an edge and move it along the edge normal by a random amount (positive or negative). So you could keep doing that iteratively. However, you'll have to make sure you don't end up with intersecting edges.
For the cracks you could do the same thing. The number of iterations you use would decide how ragged the crack ends up being.
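
A sketch of that iteration, in case it helps (self-intersection checks left out, and the roughness/decay knobs are just guesses to tune):

code:
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec2 { float x, y; };

// Each pass splits every edge at its midpoint and nudges the new vertex along
// the edge normal by a random, shrinking amount.
std::vector<Vec2> roughenOutline(std::vector<Vec2> poly, int passes, float roughness) {
    for (int pass = 0; pass < passes; ++pass) {
        std::vector<Vec2> next;
        next.reserve(poly.size() * 2);
        for (size_t i = 0; i < poly.size(); ++i) {
            Vec2 a = poly[i];
            Vec2 b = poly[(i + 1) % poly.size()];
            Vec2 mid{ (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f };

            // Unit normal of edge a->b.
            float ex = b.x - a.x, ey = b.y - a.y;
            float len = std::sqrt(ex * ex + ey * ey);
            Vec2 n{ -ey / len, ex / len };

            // Random offset in [-roughness, roughness], scaled by edge length.
            float t = (float(std::rand()) / RAND_MAX) * 2.0f - 1.0f;
            mid.x += n.x * t * roughness * len;
            mid.y += n.y * t * roughness * len;

            next.push_back(a);
            next.push_back(mid);
        }
        poly = std::move(next);
        roughness *= 0.5f;  // finer detail each pass
    }
    return poly;
}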

haveblue
Aug 15, 2005



Toilet Rascal

Hyvok posted:

However I would like to support unicode

How much unicode do you want to support? The occasional accented character is probably easy enough, but if you want to get really into the weeds with composed characters and bidirectional writing and such, you might want to consider letting the OS text service handle the whole thing and make you one finished texture per label. Rolling your own unicode-aware layout is not something to be entered into lightly.

haveblue fucked around with this message at 21:01 on Apr 13, 2017

krystal.lynn
Mar 8, 2007

Good n' Goomy

High Protein posted:

That's an interesting problem. Looking at your example picture, it seems that what you've effectively done is take a point about halfway along an edge and move it along the edge normal by a random amount (positive or negative). So you could keep doing that iteratively. However, you'll have to make sure you don't end up with intersecting edges.
For the cracks you could do the same thing. The number of iterations you use would decide how ragged the crack ends up being.

Here I was hoping it would be a boring problem :)

I've implemented a 'roughing' algorithm like I've described for the edges in similar situations a few times before, so I'm not extremely worried about that, but I've never had to generate a triangulation for a concave shape with a bunch of vertices inside it. Can I just feed a list of vertices to a tessellation library, like a Delaunay triangulator or something, and get the results I'm looking for, or would I have to take additional care since I'm working with concave shapes?

This is what I'd like to do:


This is what I'd like to avoid:


Also, if anyone knows a good C++ triangulation library that can do what I want (or can speak to the quality of OpenSceneGraph's), I'd appreciate it. Sorry if some of these questions could be answered by my own experimentation, but I'm away from the office for a few days and would like to be able to turn over a few ideas in my head over the weekend and hit the ground running when I return to work.

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Hyvok posted:

Just rendering the text as a mesh might be an option as well; I just saw some site recommend against that due to the number of vertices you will need to handle, which did sound a bit odd since you can have A LOT of vertices nowadays...
Yes, rendering text as geometry should work fine if you don't mind visible pointy bits when people zoom waaaaaay in.

Zerf posted:

Please elaborate on this. If we ignore preprocessing, rendering distance field fonts is just a plain texture lookup and some simple maths (which is essentially free next to the texture lookup).
It surprised me too. I linked experimental data. There's discussion of implementation details as well.

haveblue posted:

How much unicode do you want to support? The occasional accented character is probably easy enough, but if you want to get really into the weeds with composed characters and bidirectional writing and such, you might want to consider letting the OS text service handle the whole thing and make you one finished texture per label. Rolling your own unicode-aware layout is not something to be entered into lightly.
Also, this. Correct text layout is really really hard. Make it someone else's problem if you possibly can. If you can't, you're going to have to go learn how to use HarfBuzz (which isn't really documented at all) or something similar.

Ralith fucked around with this message at 03:58 on Apr 14, 2017

Hyvok
Mar 30, 2010

haveblue posted:

How much unicode do you want to support? The occasional accented character is probably easy enough, but if you want to get really into the weeds with composed characters and bidirectional writing and such, you might want to consider letting the OS text service handle the whole thing and make you one finished texture per label. Rolling your own unicode-aware layout is not something to be entered into lightly.

I mainly mentioned unicode just to express that storing all the symbols in a texture is not viable; otherwise it is not really an important feature right now.

Zerf
Dec 17, 2004

I miss you, sandman

Ralith posted:

It surprised me too. I linked experimental data. There's discussion of implementation details as well.

I skimmed through the link, but where do you come to the conclusion that this is faster than distance field rendering? All the comparisons seem to be against CPU-based rasterizers, and the GPU part seems non-trivial to implement.

It's probably faster than distance fields if you include the preprocessing they require, but ideally you preprocess each glyph once (or once per desired resolution) and end up with a super-low-res image that can easily be cached and is fully satisfactory for most use cases.

Don't get me wrong, that link seems like a good idea when dealing with rasterization of fonts though, but I still believe distance fields provide much more bang for the buck.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Zerf posted:

I skimmed through the link, but where do you come to the conclusion that this is faster than distance field rendering? All the comparisons seem to be against CPU-based rasterizers, and the GPU part seems non-trivial to implement.

GLyphy is a GPU implementation that uses signed distance fields:


It surprises me; last I heard anyone say anything on the subject, Loop-Blinn was considered complicated and (with filtering, at least) pretty slow.

Zerf
Dec 17, 2004

I miss you, sandman

Xerophyte posted:

GLyphy is a GPU implementation that uses signed distance fields:


It surprises me; last I heard anyone say anything on the subject, Loop-Blinn was considered complicated and (with filtering, at least) pretty slow.

Oh, I see, that's why. Thanks. On the other hand, here's an excerpt from the GLyphy Github repo:

quote:

The main difference between GLyphy and other SDF-based OpenGL renderers is that most other projects sample the SDF into a texture. This has all the usual problems that sampling has. Ie. it distorts the outline and is low quality.

GLyphy instead represents the SDF using actual vectors submitted to the GPU. This results in very high quality rendering, though at a much higher runtime cost.

So sure, if you are going to use SDF without computing it to a texture, it's going to be expensive. I still believe regular, texture-based SDF variants will be both simpler and faster than doing any other font rasterization on the GPU (but with the caveat that generating the texture is expensive and sampling artifacts can occur).

Zerf fucked around with this message at 12:17 on Apr 14, 2017

High Protein
Jul 12, 2009

Kanpachi posted:

Here I was hoping it would be a boring problem :)

I've implemented a 'roughing' algorithm like I've described for the edges in similar situations a few times before, so I'm not extremely worried about that, but I've never had to generate a triangulation for a concave shape with a bunch of vertices inside it. Can I just feed a list of vertices to a tessellation library, like a Delaunay triangulator or something, and get the results I'm looking for, or would I have to take additional care since I'm working with concave shapes?

This is what I'd like to do:


This is what I'd like to avoid:


Also, if anyone knows a good C++ triangulation library that can do what I want (or can speak to the quality of OpenSceneGraph's), I'd appreciate it. Sorry if some of these questions could be answered by my own experimentation, but I'm away from the office for a few days and would like to be able to turn over a few ideas in my head over the weekend and hit the ground running when I return to work.

It's old and not the fastest, but I've used GLUTess for concave shapes without any issues. It's easy to use as it probably comes with your OS.
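
Rough idea of how it's driven, from memory, so treat the callback plumbing as a sketch rather than gospel; the immediate-mode calls in the callbacks are just for brevity, you'd normally record the triangles into your own buffers. (On Windows the callbacks technically want the CALLBACK convention.)

code:
#include <GL/glu.h>
#include <vector>

static void tessBegin(GLenum type)  { glBegin(type); }
static void tessVertex(void* data)  { glVertex3dv((const GLdouble*)data); }
static void tessEnd()               { glEnd(); }

// Feeds a single (possibly concave) contour to the GLU tessellator.
void triangulateContour(const std::vector<double>& xy) {  // flat x0,y0,x1,y1,...
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,  (GLvoid (*)()) tessBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (GLvoid (*)()) tessVertex);
    gluTessCallback(tess, GLU_TESS_END,    (GLvoid (*)()) tessEnd);

    // GLU wants persistent 3-component double coords for each vertex.
    std::vector<GLdouble> coords(xy.size() / 2 * 3);
    for (size_t i = 0; i < xy.size() / 2; ++i) {
        coords[i * 3 + 0] = xy[i * 2 + 0];
        coords[i * 3 + 1] = xy[i * 2 + 1];
        coords[i * 3 + 2] = 0.0;
    }

    gluTessBeginPolygon(tess, nullptr);
    gluTessBeginContour(tess);
    for (size_t i = 0; i < coords.size(); i += 3)
        gluTessVertex(tess, &coords[i], &coords[i]);
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}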

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Zerf posted:

Oh, I see, that's why. Thanks. On the other hand, here's an excerpt from the GLyphy Github repo:
So sure, if you are going to use SDF without computing it to a texture, it's going to be expensive. I still believe regular, texture-based SDF variants will be both simpler and faster than doing any other font rasterization on the GPU (but with the caveat that generating the texture is expensive and sampling artifacts can occur).
Ahah, good find! That explains it. I wonder if I can get the Pathfinder guy to contrast against msdfgen or similar as well.

Mofabio
May 15, 2003
(y - mx)*(1/(inf))*(PV/RT)*(2.718)*(V/I)
edit: nevermind! figured it out.

Mofabio fucked around with this message at 05:48 on Apr 16, 2017

krystal.lynn
Mar 8, 2007

Good n' Goomy

Back on this after a few days, but I've looked into GLUTess and it seems like I'm only able to specify the contour of the polygon. What I require is a near-uniform tessellation like the example I gave in my last post, since I need to be able to apply heightmaps. Possible with GLUTess or do I need something else?

High Protein
Jul 12, 2009

Kanpachi posted:

Back on this after a few days, but I've looked into GLUTess and it seems like I'm only able to specify the contour of the polygon. What I require is a near-uniform tessellation like the example I gave in my last post, since I need to be able to apply heightmaps. Possible with GLUTess or do I need something else?

Ah, I hadn't understood correctly then. I don't know any libraries that will do what you want off-hand, but the problem reminds me of rasterization. If you find anything that works, please post it here.
