|
Madox posted:There are two things I'm working on. First is an editor tool for creating landscapes. It's meant to generate, edit and export meshes and textures for use in a game I'm making. I'm currently figuring out how to do proper LOD for terrains. Out of curiosity, are you using any sort of UI toolkit for the terrain editor app, or is that all stuff you did on your own?
|
# ¿ May 6, 2008 15:32 |
|
Nomikos posted:I got sick of how all available music-visualization plugins had really poor spectrograms, so I made my own.
|
# ¿ May 16, 2008 11:43 |
|
Nomikos posted:Thanks for the suggestions about glDrawPixel alternatives. From my initial look at the GLSL tutorials, synchronizing the scrolling action between my program and the shader sounds difficult so I'll probably try the enormous-stream-of-vertices idea first. A fragment shader is probably the "right" way to go, but it's definitely trickier. If you want to give it a try, feel free to PM me if you've got any questions. For the "Enormous Stream of Vertices/Points" version, make sure you use the glDrawArrays or glDrawElements path; you won't save much if you're still issuing 10,000 glVertex() calls each frame
|
# ¿ May 16, 2008 16:42 |
|
Wuen Ping posted:This is an open source 4X game project started a while ago because Master of Orion 3 sucked so very hard. I'm the main developer. These shots are from the 3D combat system, which is still under heavy development (it's essentially nonfunctional so far, but looks pretty). Very neat. I judge from the image names that you're using OGRE as your base engine? If you don't mind me asking, what algorithm are you using for the sky color of the planets?
|
# ¿ May 16, 2008 17:34 |
|
This is an implementation I did of the Signed-Distance Field pre-processing and shading techniques presented in the paper "Improved Alpha-Tested Magnification for Vector Textures and Special Effects" that Valve put out. It's nothing too spectacular to look at, but the concept is pretty neat (and flexible). Basically, most game text-rendering systems either (a) render their fonts at a fixed point size to a texture and just scale them up or down as needed, or (b) render each character that is needed at any given point size, outline/shadow mode, etc., on demand to one of several swap textures which are updated each frame. The problem with the former method is that the text scales horribly, so if you deviate too far from the target point size (either larger or smaller) you end up with a blurry mess; the problem with the latter is that it requires you to do a lot of work each frame to manage your texture memory usage, often having several versions of the same glyphs in your cache textures because you need them at different point sizes/rendering modes. The "Signed Distance Field" method basically pre-processes a high resolution version of the characters, then shrinks the sampled data down and stores it in a single, small, compact texture. Essentially, each pixel in the texture corresponds to how "far" (in texels) that given point is from the edge of the character -- that's the 'distance' part. The 'signed' part comes from the fact that you split the range of possible values such that a value < 0.5 is outside the edge, while > 0.5 is inside the edge. You then map and render these characters as you normally would a set of glyphs in an atlas texture, with the only exception that you then need a shader to interpret the values so that you get smooth results.
This has the obvious advantage over prior methods of needing less texture memory; however, it's also superior to both in that, by virtue of the mathematical properties of the signed distance field and bi-linear texture interpolation, it scales up and down far better than simply rendering the character itself to a texture and scaling that. There's also the added bonus that you now have continuous "distance-from-edge" information in the texture, which lets you get neat things like outlining and drop shadows essentially for free. This is all a lot of explanation for a fairly unexciting picture, but it's still a pretty cool technique -- especially considering how simple it is, conceptually. It's part of a larger tech demo I am working on for a job portfolio, so I may post some other elements later on once they get more polished.
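To make the pre-process concrete, here's a toy sketch of the distance-field generation step (my own illustration, not Valve's actual tool): brute-force the distance from each texel to the nearest texel of the opposite state in a binary glyph bitmap, then remap the result so that 0.5 lands exactly on the edge.

```python
import math

def signed_distance_field(bitmap, spread=3.0):
    """bitmap: list of rows of 0/1 ints (1 = inside the glyph).
    Returns rows of floats in [0, 1], with 0.5 on the glyph edge."""
    h, w = len(bitmap), len(bitmap[0])
    field = []
    for y in range(h):
        row = []
        for x in range(w):
            inside = bitmap[y][x] == 1
            # Brute-force search for the nearest texel of the opposite state,
            # clamped to `spread` texels (a real tool would use a fast sweep).
            best = spread
            for v in range(h):
                for u in range(w):
                    if (bitmap[v][u] == 1) != inside:
                        d = math.hypot(u - x, v - y)
                        if d < best:
                            best = d
            # Inside texels map above 0.5, outside texels below 0.5.
            signed = best if inside else -best
            row.append(0.5 + 0.5 * max(-1.0, min(1.0, signed / spread)))
        field.append(row)
    return field

glyph = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
sdf = signed_distance_field(glyph)
```

In the real pipeline you'd run this on a high-resolution rasterization of the glyph, then downsample the field into the atlas; the shader side is just a smoothstep around 0.5.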
|
# ¿ May 29, 2008 14:21 |
|
Adhemar posted:Can you explain the advantage of using this technique over a vector based approach (i.e. rendering triangles based on the splines)? You also have the advantage of dividing the work more effectively between your vertex- and fragment- units, which is important because one has to keep in mind that the graphics hardware is a pipeline, so a bottleneck in one stage means that the other stages are likely just sitting idle. For this system, I'm generating 2 triangles and 4 vertices (indexed) per character, and doing (hopefully) an equal amount of work on the vertex and fragment processing stages; depending on your implementation, a vector-to-geometry system could be anywhere from 8 to 50 triangles per character, and likely doing almost nothing at all on the fragment processors. That's a lot of CPU and memory overhead to composite those triangles together into one render list, and REALLY a lot of overhead if you're issuing a separate draw command for each character. Of course, none of this really matters if text rendering is not a bottleneck for you; I just happen to find this implementation to be fairly flexible, straightforward, and all-around elegant. It's also worth noting that this technique has the additional benefit of being able to work with ANY alpha-tested texture that can be converted into a high-resolution binary (white-or-black) image. For example, you can use this method to create HUD elements from source textures that scale nicely and can drop shadows/pulse/glow. It would also be very useful for alpha-tested polygons of, say, a brush/grass/tree system, as a way of creating leaves which have crisp, smooth edges even under close inspection without using a large amount of texture memory for them. It's really just a good all-around solution for solving the problem of aliasing in alpha-testing.
|
# ¿ May 29, 2008 15:31 |
|
Adhemar posted:Thanks a lot for the info. I've been slowly developing an engine out of a codebase I've used for a few projects now, adding features as I need them, and I haven't needed a font rendering system so far as I've been running it inside a GUI app. I've been thinking about taking it to the next level and making it into something that can run full-screen, so I've been thinking about a modern font rendering technique to implement. After reading that paper by Valve I might give it a go, it seems clever and pretty straightforward. Every game I know of/worked on uses bitmapped/textured fonts, if an appeal-to-authority is worth anything
|
# ¿ May 29, 2008 16:54 |
|
OneEightHundred posted:I've heard a good number of horror stories about lovely STL implementations, which is probably why practically every major engine I look at has its own versions of STL-ish classes. which is kind of hilarious since that usually makes the problem worse, not better
|
# ¿ Jun 9, 2008 12:34 |
|
Wuen Ping posted:This. Also, the usual motivation for reimplementing part of the STL wasn't implementation quality, it was disagreements over the way the STL does things. For instance, Qt reimplements most (all?) of the STL containers, plus a few more, all with Java-style iterators. This is actually billed as a selling point in their marketing docs! Clearly, this is the work of a genius. NIH syndrome at its worst
|
# ¿ Jun 9, 2008 15:52 |
|
MrMoo posted:I thought the primary goal was to reduce memory consumption for KDE 4, although the root cause seems to be a plethora of STL containers within their widgets. One of the few reasonable STL re-implementations was done for the purpose of exposing more memory-management options, i.e. using a pool or cached allocator of some sort, so I might be able to buy that. There's almost never any excuse for re-implementing your own vector or linked list or hash table "because it is faster", because it almost certainly never is.
|
# ¿ Jun 9, 2008 16:31 |
|
OneEightHundred posted:It's not really NIH when flaws with STL are frequently cited in the reasons for making them. To be fair, though, every case of "Not-Invented-Here" Syndrome I've ever dealt with was rationalized by "flaws" in the 3rd-party system which were either (a) imagined/spawned from ignorance ("STL is so slooow"), (b) superficial ("I don't like the style of STL or templates"), (c) not actually repaired in the replacement implementation, which is usually less-featured ("Well I hate iterators, so I just wrote my own version of the library that just doesn't have them" or "STL was so complicated and totally overkill to use, so I wrote my own memory-managed hash-map system. Here's the PDF describing the interface"), and/or (d) replaced with far worse flaws ("Yeah, it crashes when you try to allocate a new array every so often, so keep an eye on that...")
|
# ¿ Jun 11, 2008 00:06 |
|
OneEightHundred posted:Well, I meant fairly rational reasons, i.e. executable size reduction on the order of several megabytes, bloated interfaces that make extension difficult, and poor allocator support. Fair enough. Code size bloat due to templates and allocation support are definitely the most valid reasons I've heard
|
# ¿ Jun 11, 2008 11:42 |
|
Zakalwe posted:Heh, gotta love the "considered harmful" papers. Everyone I know wants to write a CH paper some day. Your posting considered harmful
|
# ¿ Sep 15, 2008 15:21 |
|
ImDifferent posted:So, there's a core "player" app, which contains all the important stuff, but no UI. The UI for each "page" is retrieved by hitting a merb controller, which reads an XML file, runs it through erb, pipes the result to the AS3 compiler, and serves up the resulting SWF. The SWF is loaded into the main application, and the page is displayed. (Naturally, there's some hashing/caching going on so the compilation only occurs if the source changes). The XML files are "sort of" like mxml, in that they contain primarily layout information as well as snippets of AS3 code, but it's based around our own system - which allows us to do a lot more cool stuff. The nice thing is that when you're working on a single page, recompilation is really fast - our last app had about 100K lines of code, and changing the tiniest thing took forever. this seems pretty sexy. I'll do my best to help crash your precious snowflake system tonight (i.e. using it and futzing around during the debates)
|
# ¿ Sep 26, 2008 18:15 |
|
greatZebu posted:
Oh sure, "Human" vs "Human"... way to cop out
|
# ¿ Oct 22, 2008 01:45 |
|
heeen posted:
I hear the performance overhead for CUDA/GL interop is pretty bad, C/D?
|
# ¿ Apr 15, 2009 15:05 |
|
ih8ualot posted:
pfft, wuss V V V (Image Synthesis was my favorite class in my 4th year) Hubis fucked around with this message at 23:14 on Apr 26, 2009 |
# ¿ Apr 26, 2009 20:59 |
|
ih8ualot posted:I love the title. 48 hours of sleep deprivation debugging refractor-refractor interfaces is a hell of a thing.
|
# ¿ Apr 28, 2009 00:46 |
|
Roflex posted:Putting a bunch of stuff together for a Generations update. Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive.
|
# ¿ May 7, 2009 15:19 |
|
wlievens posted:Fantasy dungeon generator, you could use this in tabletop games. All the textures are procedural, there is no file input at all. Neato! What's the method for the province generator, out of curiosity?
|
# ¿ May 14, 2009 15:14 |
|
OneEightHundred posted:After finally deciphering spherical harmonics to the point where I discovered order-1 spherical harmonics are ridiculously easy to implement, I've started converting lightmaps to use it instead of my existing approach. I'm a little confused -- what do you mean when you say you're using "spherical harmonics" for light-maps? Are you just unrolling the Theta and Phi coordinates into U and V?
|
# ¿ May 18, 2009 21:56 |
|
OneEightHundred posted:Linear spherical harmonics doesn't need the theta/phi coordinates, because you can solve it using Cartesian coordinates. sin(phi), cos(phi), and cos(theta) are the axial values of the normalized sampling direction. Oh, cool -- I hadn't even realized you could linearize spherical harmonics like that. What's the benefit of using spherical harmonics for that? I've seen Wavelet compression used for textures to fantastic effect, usually preserving a lot more detail with fewer terms.
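For anyone else who hadn't seen this before: the linear (order-1) basis really is just a constant term plus the three components of the direction vector. A little toy sketch (my own code, nobody's engine) that projects a function over the sphere into the four coefficients and evaluates it back:

```python
import math

SH_C0 = 0.282095          # Y_0^0 = sqrt(1 / (4*pi))
SH_C1 = 0.488603          # shared constant for Y_1^{-1}, Y_1^0, Y_1^1

def sh_basis(x, y, z):
    # Order-1 SH evaluated straight from a normalized Cartesian direction.
    return (SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x)

def fibonacci_sphere(n):
    # Roughly uniform sample directions over the sphere.
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - z * z))
        t = golden * i
        pts.append((r * math.cos(t), r * math.sin(t), z))
    return pts

def project(fn, n=2000):
    # Monte-Carlo projection: c_i = (4*pi/N) * sum f(d) * Y_i(d)
    coeffs = [0.0] * 4
    for d in fibonacci_sphere(n):
        f = fn(*d)
        for i, b in enumerate(sh_basis(*d)):
            coeffs[i] += f * b
    return [c * 4.0 * math.pi / n for c in coeffs]

def reconstruct(coeffs, x, y, z):
    return sum(c * b for c, b in zip(coeffs, sh_basis(x, y, z)))

# Projecting a constant ambient term should reconstruct to ~1 everywhere.
coeffs = project(lambda x, y, z: 1.0)
value = reconstruct(coeffs, 0.0, 0.0, 1.0)
```

The appeal for lightmaps is that four coefficients per texel give you a directionally-varying light value you can dot against the per-pixel normal, instead of one flat baked colour.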
|
# ¿ May 20, 2009 17:14 |
|
Ocarina of Time clone?
|
# ¿ Jun 6, 2009 14:46 |
|
Cool! Is this view-dependent LOD?
|
# ¿ Dec 22, 2009 04:08 |
|
RussianManiac posted:So pretty much this means that the level of subdivision is controlled by how far away or at which angle you are viewing the object from? Ideally, a view-dependent LOD system will subdivide the geometry based on the screen-space error between the current subdivision level and the "true" model. Usually the exact value of the screen-space error is complicated/expensive to compute, so most methods use some approximation accounting for view distance and angle (both of which affect how large a given geometry change appears in the final image) as well as possibly some shading parameters like normals or proximity to local lights. heeen posted:No, all in CUDA. I'd be very curious to see any more pictures/slides/information if you've got a link to a write-up available somewhere.
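In sketch form, the usual distance/FOV approximation of screen-space error looks like this (my own illustration of the standard heuristic, not any particular engine's code):

```python
import math

def screen_space_error(geometric_error, distance, fov_y_deg, screen_height_px):
    """Approximate size, in pixels, that a world-space geometric error
    covers when viewed from `distance` units away: project the error
    through the perspective scale implied by the vertical FOV."""
    pixels_per_unit_at_depth_1 = screen_height_px / (
        2.0 * math.tan(math.radians(fov_y_deg) / 2.0))
    return geometric_error * pixels_per_unit_at_depth_1 / distance

def pick_lod(geometric_errors, distance, fov_y_deg, screen_height_px,
             max_error_px=1.0):
    """geometric_errors: per-level world-space error, finest (index 0)
    to coarsest. Return the coarsest level whose projected error still
    fits inside the pixel budget."""
    best = 0
    for level, err in enumerate(geometric_errors):
        if screen_space_error(err, distance, fov_y_deg,
                              screen_height_px) <= max_error_px:
            best = level
    return best

# A 1-unit error seen from 100 units with a 90-degree FOV on a 1080px
# screen projects to 5.4 pixels.
err_px = screen_space_error(1.0, 100.0, 90.0, 1080)
level = pick_lod([0.01, 0.05, 0.2, 1.0], 100.0, 90.0, 1080)
```

Angle-dependent terms (silhouette preservation, normal deviation) get layered on top of this in fancier schemes, but the distance/FOV projection is the core of most of them.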
|
# ¿ Dec 29, 2009 04:42 |
|
Snodar posted:
Have you used CUDA? I'd be interested in your opinion of the two APIs (as well as what you think the "uniquely horrible" components of each OpenCL implementation are)
|
# ¿ Jan 18, 2010 23:00 |
|
HappyHippo posted:
Cool! I've been thinking about something along those lines lately myself. How did you map the hexes to a sphere? My thinking was to basically have two hex circles, and overlap them at the equator.
|
# ¿ Mar 25, 2010 02:53 |
|
Factor Mystic posted:Wait, wha- how...? How in the world? Just because it's a famous clip and a lucky guess or did you actually ID it? If you go through it in your head while you trace around the waveform, the beats line up pretty plausibly. Was my first guess too. Barring that, I don't think I would've figured it out, though.
|
# ¿ Apr 13, 2010 03:40 |
|
Optimus Prime Ribs posted:I wanted to see if I could code my own path finding algorithm. I did this once for a project when I was younger, and was so proud of myself until I later learned I'd reinvented A*
|
# ¿ Oct 12, 2010 04:04 |
|
tripwire posted:Nice use of comic sans look at this noob Optimus Prime Ribs posted:I'm sure my version will be a cumbersome and inefficient version of A* (mine doesn't work quite the same, and that will probably be why), but I just wanna see if I can do it. Yeah. My experience was basically that, with every improvement I came up with, I got closer to a "classic" A* implementation. I kind of wonder what the known lower bound is for finding the best path from A to B in a universe of size N, and how far A* is from it... Hubis fucked around with this message at 17:44 on Oct 12, 2010 |
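For reference, the "classic" implementation everyone seems to converge on looks roughly like this (a toy 4-connected grid version of A*, not Optimus Prime Ribs' code):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid: list of strings, '#' = wall.
    Returns the length of the shortest path in steps, or None."""
    h, w = len(grid), len(grid[0])
    def heuristic(p):
        # Manhattan distance: admissible for 4-connected unit-cost moves.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(heuristic(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        f, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        if g > best_g.get((r, c), float("inf")):
            continue  # stale heap entry; a cheaper path was found already
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + heuristic((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "....",
        "...."]
steps = astar(grid, (0, 0), (2, 3))
```

The whole trick is the priority queue ordered by g + h: with an admissible heuristic it never commits to a node until no cheaper route to it can exist.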
# ¿ Oct 12, 2010 17:42 |
|
SlightlyMadman posted:I've been working on the same basic RPG idea for ages now, but I've got another incarnation of an apocalypse survival roguelike together: Are you using libTCOD?
|
# ¿ Jan 4, 2011 23:15 |
|
fletcher posted:Maybe try the random fuzzing, check to see if the result is a valid jpg, and if it's not, try a different random fuzzing. It's gotta get a valid one eventually right? It's like applying Bogosort to data validation!
|
# ¿ Jul 26, 2011 00:39 |
|
steckles posted:
If you've already got spectral representation, implement thin films! http://www.kimdylla.com/computer_graphics/pbrt_iridescence/iridescence.htm
|
# ¿ Sep 9, 2011 14:27 |
|
Internet Janitor posted:I think I'm writing a video game. Why 8x8 and not 2x3? Also, what's your underlying representation for the hexes? I've thought about a few different indexing methods, but each seems to have its downsides.
|
# ¿ Sep 12, 2011 01:10 |
|
steckles posted:The real complexity of spectral rendering doesn't have anything to do with ray tracing; it's in converting the spectral data to RGB for display. You can't store your frame buffer in RGB, as summing spectral colours converted to RGB doesn't work. Taking your spectral rainbow and adding all the colours together in RGB won't equal white. Assuming you don't want to waste memory storing a complete spectral framebuffer, you need to store it in a colour space where summing does happen correctly. I use the CIE XYZ colour space. And the link I posted is to my write-up for implementing spectral rendering for thin-film interactions (in case anyone is interested). This page is where I got a lot of my info on doing spectral rendering. Basically, you are doing discrete integration along the wavelength domain against each of the CIE XYZ colour-matching curves. Spectral rendering is generally kind of unnecessary for most scenes, but thin-film interaction, refraction (as Steckles is doing) and fluorescence/phosphorescence are some cases where it matters. The last part means your ray-tracer can simulate black lights!
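That discrete integration is short enough to sketch. Note the matching functions below are rough analytic Gaussian stand-ins (after Wyman/Sloan/Shirley's fits -- treat the exact coefficients as approximate; a real renderer would use the tabulated CIE 1931 data):

```python
import math

# Approximate CIE 1931 colour-matching functions (NOT the real tables).
def cie_x(nm):
    return (1.065 * math.exp(-0.5 * ((nm - 595.8) / 33.33) ** 2) +
            0.366 * math.exp(-0.5 * ((nm - 446.8) / 19.44) ** 2))

def cie_y(nm):
    return 1.014 * math.exp(
        -0.5 * ((math.log(nm) - math.log(556.3)) / 0.075) ** 2)

def cie_z(nm):
    return 1.839 * math.exp(
        -0.5 * ((math.log(nm) - math.log(449.8)) / 0.051) ** 2)

def spectrum_to_xyz(spectrum, lo=380.0, hi=720.0, steps=85):
    """Discretely integrate a spectral power distribution against the
    matching curves: X = sum S(l) * xbar(l) * dl, likewise Y and Z."""
    dl = (hi - lo) / steps
    X = Y = Z = 0.0
    for i in range(steps):
        nm = lo + (i + 0.5) * dl
        s = spectrum(nm)
        X += s * cie_x(nm) * dl
        Y += s * cie_y(nm) * dl
        Z += s * cie_z(nm) * dl
    return X, Y, Z

# A narrow spike around 600nm (orange) should land mostly in X, not Z.
orange = lambda nm: math.exp(-0.5 * ((nm - 600.0) / 5.0) ** 2)
X, Y, Z = spectrum_to_xyz(orange)
```

Because XYZ is a linear space, samples accumulated this way sum correctly in the framebuffer, and one XYZ-to-RGB matrix multiply at display time finishes the job.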
|
# ¿ Sep 13, 2011 01:32 |
|
Internet Janitor posted:Hubis: Partially because I'd like to port this to a platform that lets me build a background out of 8x8 tiles and doesn't allow me to overlay more than one such tile. So basically a completely arbitrary limitation. The hexes themselves are indeed composed of 3x2 overlapping regions of these tiles. When I said this simplifies drawing, I really meant that having the hexes be the shape they are (squashed vertically) simplifies things because they are composed of just four types of tiles- center, center with baseline and two diagonal tiles. One of the more elegant solutions I've seen is using a redundant 3rd dimension -- one for each of the opposing sides (so like what you have above, but with another axis crossing the diagonal one). The system is under-constrained, but is nice because movement is always one step along one of the axes.
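In code, that redundant third axis looks something like the standard "cube coordinate" scheme (my sketch): every hex is (x, y, z) constrained to x + y + z == 0, and each of the six neighbours adjusts one pair of axes by one.

```python
# The six unit moves; each keeps the x + y + z == 0 invariant.
CUBE_DIRECTIONS = [
    (+1, -1, 0), (+1, 0, -1), (0, +1, -1),
    (-1, +1, 0), (-1, 0, +1), (0, -1, +1),
]

def neighbors(h):
    x, y, z = h
    assert x + y + z == 0, "not a valid cube coordinate"
    return [(x + dx, y + dy, z + dz) for dx, dy, dz in CUBE_DIRECTIONS]

def hex_distance(a, b):
    # The redundant axis makes hex distance a one-liner: the largest
    # per-axis delta is exactly the number of steps needed.
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]), abs(a[2] - b[2]))

d = hex_distance((0, 0, 0), (2, -1, -1))
ns = neighbors((0, 0, 0))
```

Distance, rotation, and line-drawing all fall out of the symmetry, which is why this representation tends to win despite storing one coordinate more than strictly necessary.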
|
# ¿ Sep 13, 2011 02:39 |
|
AlsoD posted:Baby's first game: What's your internal representation of the Hexes? (i.e. coordinate system)
|
# ¿ Dec 28, 2011 18:31 |
|
Astrolite posted:
What are you using to render in Python? pyGame?
|
# ¿ Jan 21, 2012 18:12 |
|
How thick is the glass holding the water? Do you simulate non-air interfaces (i.e. glass-water) properly?
|
# ¿ Apr 12, 2012 12:33 |
|
My project for National Game Development Month It's a Neptune's Pride clone written with Tornado Web/jQuery/HTML5 Canvas. I've got the lobby and basic client/server architecture pretty much done, and now I'm working out some bugs and polishing the basic gameplay. Hopefully more to come towards the end of the month! e: holy image-resizing, Batman!
|
# ¿ Jun 21, 2012 19:06 |