Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Madox posted:

There's two things I'm working on. First is an editor tool for creating landscapes. It's meant to generate, edit and export meshes and textures for use in a game I'm making. I'm currently figuring out how to do proper LOD for terrains.



The next one I will post so you can mock me. It's a deck building helper app for my son who really loves his Yu-Gi-Oh.



Out of curiosity, are you using any sort of UI toolkit for the terrain editor app, or is that just all stuff you did on your own?


Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Nomikos posted:

I got sick of how all available music-visualization plugins had really poor spectrograms, so I made my own.


This is someone playing the piano and singing.


This is tremolo at the end of a guitar solo.

Unfortunately glDrawPixels() takes up an ungodly amount of CPU time for some reason; at the size above it's already using 100% of one core. Still trying to find a good way to draw lots of pixels to the screen very fast...

It's written in Haskell and uses PortAudio for the backend. For the screenshots above I connected it to the output of Amarok using JACK.
glDrawPixels is almost certainly the wrong way to do it, since it doesn't really take advantage of ANY hardware acceleration :) There are a lot of "better" ways of doing it, ranging from running all the processing in a fragment program and using render targets, to just putting your data in vertices and sending a huge stream of verts to the card instead of calling glDrawPixels.
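To make the "huge stream of verts" idea concrete: each spectrogram cell becomes one colored point, and the whole frame goes to the card in a single draw call. The sketch below is illustrative Python (no actual GL calls; `pack_points` is a hypothetical helper) showing the kind of flat interleaved array you'd hand to one glDrawArrays(GL_POINTS, ...) instead of thousands of glVertex() calls:

```python
# Illustrative packing of spectrogram data into a single vertex stream.
# Hypothetical helper; a real version would upload this buffer to GL.

def pack_points(columns):
    """columns: list of lists of magnitudes (one inner list per time slice).
    Returns a flat [x, y, r, g, b, ...] array: position plus grayscale color."""
    stream = []
    for x, col in enumerate(columns):
        for y, mag in enumerate(col):
            m = max(0.0, min(1.0, mag))  # clamp magnitude into [0, 1]
            stream.extend([float(x), float(y), m, m, m])
    return stream

# Two time slices, three frequency bins each:
verts = pack_points([[0.1, 0.5, 0.9], [0.2, 0.4, 1.3]])
assert len(verts) == 2 * 3 * 5   # 6 points, 5 floats per point
assert verts[-1] == 1.0          # out-of-range magnitude was clamped
```

The win is purely in call overhead: one buffer submission per frame rather than one function call per pixel.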

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Nomikos posted:

Thanks for the suggestions about glDrawPixel alternatives. From my initial look at the GLSL tutorials, synchronizing the scrolling action between my program and the shader sounds difficult so I'll probably try the enormous-stream-of-vertices idea first.

I also came across the Wikipedia article on wavelet transforms as an alternative to the Fourier transform, providing better frequency resolution without sacrificing time resolution. "Scalogram" instead of spectrogram. Once I free up some CPU time from drawing this should be a fun area to investigate :)

a fragment shader is probably the "right" way to go, but it's definitely trickier. If you want to give it a try, feel free to PM me if you've got any questions.

For the "Enormous Stream of Vertices/Points" version, make sure you use the glDrawArrays or glDrawElements version; you won't save much by issuing 10,000 glVertex() calls each frame.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Wuen Ping posted:

This is an open source 4X game project started a while ago because Master of Orion 3 sucked so very hard. I'm the main developer. These shots are from the 3D combat system, which is still under heavy development (it's essentially nonfunctional so far, but looks pretty).







Very neat. I judge from the image names that you're using OGRE as your base engine?
If you don't mind me asking, what algorithm are you using for the sky color of the planets?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...


This is an implementation I did of the Signed-Distance Field pre-processing and Shading techniques presented in the paper "Improved Alpha-Tested Magnification For Vector-Textures and Special Effects" that Valve put out. It's nothing too spectacular to look at, but the concept is pretty neat (and flexible).

Basically, most game text-rendering systems either (a) render their fonts at a fixed point size to a texture and just scale them up or down as needed, or (b) render each character that is needed at any given point size, outline/shadow mode, etc., on demand to one of several swap textures which are updated each frame. The problem with the former method is that the text scales horribly, so if you deviate too far from the target point size (either larger or smaller) you end up with a blurry mess; the problem with the latter is that it requires you to do a lot of work each frame to manage your texture memory usage, often having several versions of the same glyphs in your cache textures because you need them at different point sizes/rendering modes.

The "Signed Distance Field" method basically pre-processes a high resolution version of the characters, then shrinks the sampled data down and stores it in a single, small, compact texture. Essentially, each pixel in the texture corresponds to how "far" (in texels) that given point is from the edge of the character -- that's the 'distance' part. The 'signed' part comes from the fact that you split the range of possible values such that a value < 0.5 is outside the edge, while > 0.5 is inside the edge. You then map and render these characters as you normally would a set of glyphs in an atlas texture, with the only exception that you then need a shader to interpret the values so that you get smooth results.

This has the obvious advantages over prior methods of needing less texture memory; however, it's also superior to both in that, by virtue of the mathematical properties of the signed distance field and bi-linear texture interpolation, it scales up and down far better than simply rendering the character itself to a texture and scaling that. There's also the added bonus of the fact that you now have continuous "distance-from-edge" information in the texture, which allows you to get neat things like outlining and drop shadows essentially for free.
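A minimal brute-force version of the pre-processing step might look like the sketch below (illustrative Python with made-up helper names; a real tool, per the Valve paper, would sample a much higher-resolution glyph and downsample). Each texel stores distance to the nearest edge, remapped so values above 0.5 are inside the shape:

```python
# Toy signed-distance-field bake: each output texel is the distance (in
# texels, capped at `spread`) to the nearest opposite-valued texel,
# remapped to [0, 1] with 0.5 sitting exactly on the edge.

def bake_sdf(image, spread=4.0):
    """image: 2D list of 0/1 (binary glyph). Returns floats in [0, 1]."""
    h, w = len(image), len(image[0])

    def nearest_opposite(x, y):
        # distance from (x, y) to the closest texel of the opposite value
        best = float("inf")
        for j in range(h):
            for i in range(w):
                if image[j][i] != image[y][x]:
                    d = ((i - x) ** 2 + (j - y) ** 2) ** 0.5
                    best = min(best, d)
        return best

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            d = min(nearest_opposite(x, y), spread) / spread  # 0..1
            signed = 0.5 + 0.5 * d if image[y][x] else 0.5 - 0.5 * d
            row.append(signed)
        out.append(row)
    return out

# 5x5 image with a filled 3x3 square in the middle:
img = [[0] * 5, [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0] * 5]
sdf = bake_sdf(img)
assert sdf[2][2] > 0.5   # center is inside the shape
assert sdf[0][0] < 0.5   # corner is outside
```

The quadratic scan is obviously too slow for production (you'd use a proper distance transform), but it makes the "distance, then sign" construction explicit.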

This is all a lot of :words: for a fairly unexciting picture; still, it's a pretty cool technique -- especially considering how simple it is, conceptually. It's part of a larger tech demo I am working on for a job portfolio, so I may post some other elements later on once they get more polished.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Adhemar posted:

Can you explain the advantage of using this technique over a vector based approach (i.e. rendering triangles based on the splines)?
First, you can get smooth anti-aliasing on your font edges (by manipulating the alpha of the texel based on the distance field instead of just having a hard cut-off) as well as being able to include things like outlining and drop shadows with very little additional cost. You can also, in general, generate much higher quality glyphs with a texture-based system than you could with a direct vector-to-geometry approach, unless you wanted to allocate a fairly sizable amount of triangles towards rendering text.
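The "manipulating the alpha based on the distance field" step is a couple of smoothstep evaluations. Here it is sketched in Python for clarity (the real thing is a few lines of fragment shader; `shade` and its parameters are illustrative, not from the demo):

```python
# d is the sampled distance-field value: 0.5 sits exactly on the glyph edge.

def smoothstep(e0, e1, x):
    """Standard GLSL-style smoothstep: cubic ramp from 0 at e0 to 1 at e1."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def shade(d, softness=0.05, outline=None):
    """Return (coverage, outline_mask). `outline` is an optional distance
    threshold below 0.5; values between it and 0.5 get the outline color."""
    alpha = smoothstep(0.5 - softness, 0.5 + softness, d)  # soft AA edge
    rim = 0.0
    if outline is not None:
        # band between the outline threshold and the glyph edge
        rim = smoothstep(outline - softness, outline + softness, d) - alpha
    return alpha, rim

a_in, _ = shade(0.9)   # well inside the glyph
a_out, _ = shade(0.1)  # well outside
assert a_in == 1.0 and a_out == 0.0
_, rim = shade(0.4, outline=0.3)
assert rim > 0.5       # this texel falls in the outline band
```

Drop shadows work the same way: sample the field a second time with a small UV offset and composite behind the glyph.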

You also have the advantage of dividing the work more effectively between your vertex and fragment units, which matters because the graphics hardware is a pipeline: a bottleneck in one stage means the other stages are likely just sitting idle. For this system, I'm generating 2 triangles and 4 vertices (indexed) per character, and doing (hopefully) an equal amount of work on the vertex and fragment processing stages; depending on your implementation, a vector-to-geometry system could be anywhere from 8 to 50 triangles per character, and likely doing almost nothing at all on the fragment processors. That's a lot of CPU and memory overhead to composite those triangles together into one render list, and REALLY a lot of overhead if you're issuing a separate draw command for each character. Of course, none of this really matters if text rendering is not a bottleneck for you; I just happen to find this implementation to be fairly flexible, straightforward, and all-around elegant.

It's also worth noting that this technique works with ANY alpha-tested texture that can be converted into a high-resolution binary (white-or-black) image. For example, you can use this method to create HUD elements from source textures that scale nicely and can drop shadows/pulse/glow. It would also be very useful for the alpha-tested polygons of, say, a brush/grass/tree system, as a way of creating leaves which have crisp, smooth edges even under close inspection without using a large amount of texture memory for them. It's really just a good all-around solution to the problem of aliasing in alpha-testing.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Adhemar posted:

Thanks a lot for the info. I've been slowly developing an engine out of a codebase I've used for a few projects now, adding features as I need them, and I haven't needed a font rendering system so far as I've been running it inside a GUI app. I've been thinking about taking it to the next level and making it into something that can run full-screen, so I've been thinking about a modern font rendering technique to implement. After reading that paper by Valve I might give it a go, it seems clever and pretty straightforward.
There's a chapter by Jim Blinn in GPU Gems 3 (chapter 25), where they render vector art such as text straight from the spline curves, on the GPU. I haven't read it in detail yet, but maybe Valve's technique is more practical.

Every game I know of or have worked on uses bitmapped/textured fonts, if an appeal to authority is worth anything.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

I've heard a good number of horror stories about lovely STL implementations, which is probably why practically every major engine I look at has its own versions of STL-ish classes.

which is kind of hilarious since that usually makes the problem worse, not better

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Wuen Ping posted:

This. Also, the usual motivation for reimplementing part of the STL wasn't implementation quality, it was disagreements over the way the STL does things. For instance, Qt reimplements most (all?) of the STL containers, plus a few more, all with Java-style iterators. This is actually billed as a selling point in their marketing docs! Clearly, this is the work of a genius. :master:

NIH syndrome at its worst

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

MrMoo posted:

I thought the primary goal was to reduce memory consumption for KDE 4, although the root cause seems to be a plethora of STL containers within their widgets.

One of the few reasonable STL re-implementations was done for the purpose of exposing more memory-management options, i.e. using a pool or cached allocator of some sort, so I might be able to buy that. There's almost never any excuse for re-implementing your own vector or linked list or hash table "because it is faster", because it almost certainly never is.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

It's not really NIH when flaws with STL are frequently cited in the reasons for making them.

To be fair, though, every case of "Not-Invented-Here" syndrome I've ever dealt with has been rationalized by "flaws" in the 3rd-party system which are either

(a) imagined/spawned from ignorance ("STL is so slooow"),
(b) superficial ("I don't like the style of STL or templates :colbert:"),
(c) not repaired in the replacement implementation, which is usually less-featured ("Well I hate iterators, so I just wrote my own version of the library that just doesn't have them" or "STL was so complicated and totally overkill to use, so I wrote my own memory-managed hash-map system. Here's the PDF describing the interface"), and/or
(d) replaced with far worse flaws ("Yeah, it crashes when you try to allocate a new array every so often, so keep an eye on that...")

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

Well, I meant fairly rational reasons, i.e. executable size reduction on the order of several megabytes, bloated interfaces that make extension difficult, and poor allocator support.

Fair enough. Code-size bloat due to templates and allocator support are definitely the most valid reasons I've heard.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Zakalwe posted:

Heh, gotta love the "considered harmful" papers. Everyone I know wants to write a CH paper some day.

Your posting considered harmful :iceburn:

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

ImDifferent posted:

So, there's a core "player" app, which contains all the important stuff, but no UI. The UI for each "page" is retrieved by hitting a merb controller, which reads an XML file, runs it through erb, pipes the result to the AS3 compiler, and serves up the resulting SWF. The SWF is loaded into the main application, and the page is displayed. (Naturally, there's some hashing/caching going on so the compilation only occurs if the source changes). The XML files are "sort of" like mxml, in that they contain primarily layout information as well as snippets of AS3 code, but it's based around our own system - which allows us to do a lot more cool stuff. The nice thing is that when you're working on a single page, recompilation is really fast - our last app had about 100K lines of code, and changing the tiniest thing took forever.

Cool... hope the video player doesn't crash.
\/\/\/\/

this seems pretty sexy. I'll do my best to help crash your precious snowflake system tonight (i.e. using it and futzing around during the debates)

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

greatZebu posted:



I'm writing a chess program for Macs in my spare time. It's a nice break from developing runtime systems for parallel computing, and a good introduction to objective c/cocoa. All the mac chess apps I've used have been pretty lousy, so I'm hoping this will actually be a worthwhile project in its own right once it's a little more mature.

Oh sure, "Human" vs "Human"... way to cop out :rolleyes:

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

heeen posted:


Subdivision surfaces!

I hear the performance overhead for CUDA/GL interop is pretty bad, C/D?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

ih8ualot posted:


Because my camera has a flat lens. It gives those kinds of weird results, but it's a helluva lot easier to program.

Plus, my professor doesn't care what kind of lens I use.

pfft, wuss



V V V



:cool:

(Image Synthesis was my favorite class in my 4th year)

Hubis fucked around with this message at 23:14 on Apr 26, 2009

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

ih8ualot posted:

I love the title.

48 hours of sleep deprivation debugging refractor-refractor interfaces is a hell of a thing.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Roflex posted:

Putting a bunch of stuff together for a Generations update.

Warning: 1+MB PNGs ahead (in links):



Also working on a 9 minute HD video, so that'll be up soon(ish - it's going to take an hour to render and 3-4 to upload).

Quick rundown of new features for the next release, some still not yet implemented:

-Variety of color coding options: Classic (like 0.16 and prior), Aged (shown above), Monotone, and a couple others. Eventually want to turn it into an equation-based system so you can 'program' your own with various parameters (x,y,z cell location, age, born/died cells, etc.)

-More useful speed control. Right now this just lets you pick what speed to run at (or not run, you can stop the simulation entirely now). I'm not sure if this'll make it into the update, but I'm eventually going to add Reverse and Hyper modes - reverse will let you 'rewind' the simulation, Hyper will do more than one simulation update per display frame. Both require engine changes that, while not tremendous, are substantial and might get put off in favor of a timely release.

-"Save States" (think emulators), being able to clear the field, portions of the field, single cell add/delete, and changing rule sets mid-simulation. That last one may not get in this release as it'll tie in a bit with the Rewind feature (rewinding the simulation and then resuming can (by option) overwrite "future" layers with newly simulated ones).

-Better camera controls.

-More interface components, all options will have clickable controls. There will also be a menu with even more options, like more permanent saves.

At some point I might port the whole thing over to C#/DirectX from FreeBasic/OpenGL but that's a whole project in itself (I need to learn C#, first of all).

Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive.
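As a sketch of what that fogging suggestion means in practice (illustrative Python; the fog color and density constants are made up, and an exponential falloff is just one common choice):

```python
import math

def fog_color(color, depth, fog=(0.9, 0.9, 1.0), density=0.15):
    """Blend a cell color toward the fog color based on depth below the
    top layer. color: (r, g, b) in [0, 1]; depth: layers below top (0 = top)."""
    keep = math.exp(-density * depth)  # 1.0 at the top, falls off below
    return tuple(c * keep + f * (1.0 - keep) for c, f in zip(color, fog))

top = fog_color((1.0, 0.0, 0.0), 0)
deep = fog_color((1.0, 0.0, 0.0), 20)
assert top == (1.0, 0.0, 0.0)    # top layer is untouched
assert abs(deep[2] - 1.0) < 0.1  # deep layers wash out toward the fog
```

Even a very low density gives the eye a monotonic depth cue without obscuring the lower layers entirely.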

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

wlievens posted:

Fantasy dungeon generator, you could use this in tabletop games. All the textures are procedural, there is no file input at all.


Click here for the full 576x864 image.


A random world map generator, fantasy style. Fully procedural.


Click here for the full 1000x1000 image.


It also generates provinces on the map.


Click here for the full 1000x1000 image.


Neato! What's the method for the province generator, out of curiosity?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

After finally deciphering spherical harmonics to the point where I discovered order-1 spherical harmonics are ridiculously easy to implement, I've started converting lightmaps to use it instead of my existing approach.

They do a much better job of handling lighting influences from multiple directions, they work better with gloss, and they'll probably work better with texture compression too.


Click here for the full 1280x800 image.


:toot:

I'm actually kind of annoyed that I didn't use these sooner. Ironically, it turned out that order-1 SH is identical to an approach that I was going to try anyway, but didn't think would work.

I'm a little confused -- what do you mean when you say you're using "spherical harmonics" for light-maps? Are you just unrolling the Theta and Phi coordinates into U and V?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

Linear spherical harmonics doesn't need the theta/phi coordinates, because you can solve it using Cartesian coordinates. sin(phi), cos(phi), and cos(theta) are the axial values of the normalized sampling direction.

nDir = normalize(dir);
sampledValue = sh0 + nDir.x*sh1 + nDir.y*sh2 + nDir.z*sh3;

I think it's possible to solve the quadratic band in Cartesian space too, problem is almost all of the material I can find on SH uses polar coordinates. I'd try figuring out how to resolve it into Cartesian space, but 4 lightmaps per surface is already pushing it so gently caress it.

Oh, cool -- I hadn't even realized you could linearize spherical harmonics like that. What's the benefit of using spherical harmonics for that? I've seen Wavelet compression used for textures to fantastic effect, usually preserving a lot more detail with fewer terms.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Ocarina of Time clone?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

heeen posted:

Finished my final thesis :)



Cool! Is this view-dependent LOD?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

RussianManiac posted:

So pretty much this means that the level of subdivision is controlled by how far away or at which angle you are viewing the object from?

Ideally, a view-dependent LOD system will subdivide the geometry based on the screen-space error between the current subdivision level and the "true" model. In practice the exact screen-space error is complicated/expensive to compute, so most methods use some approximation accounting for view distance and angle (both of which affect how large a given geometry change appears in the final image), as well as possibly some shading parameters like normals or proximity to local lights.

heeen posted:

No, all in CUDA.

I'd be very curious to see any more pictures/slides/information if you've got a link to a write-up available somewhere.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Snodar posted:



Raytracing using OpenCL, obviously just after the critical "holy gently caress this is even worse than GLSL on release" stage. At least that pain is over with; the code works on both ATI/AMD (at least on CPU) and NVIDIA implementations, and I understand how each implementation is terrible in their own special ways, so I can move onto more interesting stuff.

Have you used CUDA? I'd be interested in your opinion of the two APIs (as well as what you think the "uniquely horrible" components of each OpenCL implementation are)

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

HappyHippo posted:



I'm trying to make something like Civilization, but on a real globe. Also, that's a genuine Winkel tripel projection in the corner. Hell yeah!

Cool! I've been thinking about something along those lines lately myself. How did you map the hexes to a sphere? My thinking was to basically have two hex circles, and overlap them at the equator.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Factor Mystic posted:

Wait, wha- how...? How in the world? Just because it's a famous clip and a lucky guess or did you actually ID it?

If you go through it in your head while you trace around the waveform, the beats line up pretty plausibly

Was my first guess too. Barring that, I don't think I would've figured it out though.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Optimus Prime Ribs posted:

I wanted to see if I could code my own path finding algorithm.
My language of choice is C++, and so far it's going pretty well.



Currently it completely ignores walls, and I suppose would get pretty hosed if it ran into a corner.
But if I replace the floor tiles with distance values I can make it look pretty :3:



edit

Woo walls! :toot:



I did this once for a project when I was younger, and was so proud of myself until I later learned I'd reinvented A* :unsmith:

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

tripwire posted:

Nice use of comic sans

look at this noob


Optimus Prime Ribs posted:

I'm sure my version will be a cumbersome and inefficient version of A* (mine doesn't work quite the same, and that will probably be why), but I just wanna see if I can do it.
Plus I imagine if I get this working I'll be able to implement real pathfinding algorithms a lot more easily.

Yeah. My experience was basically that with every improvement I came up with, I got closer to a "classic" A* implementation. I kind of wonder what the known lower bound is for finding the best path from A to B in a universe of size N, and how far A* is from it...
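For reference, the "classic" implementation everyone converges on is only a few lines: a priority queue ordered by g + h, with an admissible heuristic (here Manhattan distance on a 4-connected grid, which guarantees an optimal path). A minimal sketch:

```python
import heapq

def a_star(grid, start, goal):
    """grid: 2D list, 0 = open, 1 = wall; start/goal: (x, y).
    Returns the optimal path length in steps, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    open_set = [(h(start), 0, start)]  # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                    and grid[ny][nx] == 0:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nx, ny)), ng, (nx, ny)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],  # wall row forces a detour around the right side
        [0, 0, 0]]
assert a_star(grid, (0, 0), (0, 2)) == 6
```

With h = 0 this degrades gracefully into Dijkstra, which is a nice way to see what the heuristic is buying you.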

Hubis fucked around with this message at 17:44 on Oct 12, 2010

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

SlightlyMadman posted:

I've been working on the same basic RPG idea for ages now, but I've got another incarnation of an apocalypse survival roguelike together:


Click here for the full 1280x688 image.


I decided to finally say "screw it" and go with ASCII because it seems like I usually give up on a project around the time I try to get the art looking just right. I still may end up getting some tiles together for it, but this at least looks good enough to me until I can finish up the game.

The idea is basically that you need to wander around scavenging supplies and staying alive. There are zombies of course, but they're not the most dangerous thing out there. It uses roguelike perma-death (and death comes very very easily), but the trick is that you can keep playing as a survivor you've rescued, so you have strong motivation to keep them alive.

Ideally, I'd like that sort of succession mechanic to mean that you can keep a game going through many generations of survivors, watching humanity slowly dwindle away in its last days.

Are you using libTCOD?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

fletcher posted:

Maybe try the random fuzzing, check to see if the result is a valid jpg, and if it's not, try a different random fuzzing. It's gotta get a valid one eventually right?

It's like applying Bogosort to data validation!

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

steckles posted:


I added dispersion to my ray tracer. Not too difficult as I had already implemented spectral rendering. Next up, implementing the Sellmeier and Cauchy equations to make it physically accurate.

If you've already got spectral representation, implement thin films!

http://www.kimdylla.com/computer_graphics/pbrt_iridescence/iridescence.htm

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Internet Janitor posted:

I think I'm writing a video game.


I haven't worked with hex grids much, so I figured I'd whip up a little turn-based strategy game in the vein of one of my old favorites, Slay.

I'm not sure how far I'll get into writing an actual game, but I've already learned a lot about how to represent the grid and locate neighbors. These hexes are actually built out of 8x8 tiles, which vastly simplifies drawing over most of the alternatives I could come up with.

Why 8x8 and not 2x3?

Also, what's your underlying representation for the hexes? I've thought about a few different indexing methods, but each seems to have its downsides.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

steckles posted:

The real complexity of spectral rendering doesn't have anything to do with ray tracing; it's in converting the spectral data to RGB for display. You can't store your frame buffer in RGB, as summing spectral colours converted to RGB doesn't work: taking your spectral rainbow and adding all the colours together in RGB won't equal white. Assuming you don't want to waste memory storing a complete spectral framebuffer, you need to store it in a colour space where summing does happen correctly. I use the CIE XYZ colour space.

If you're interested in writing a ray tracer, I suggest you pick up a copy of Physically Based Rendering. There's really no better book on the subject.

And the link I posted is to my write-up for implementing spectral rendering for thin film interactions (in case anyone is interested).

This page is where I got a lot of my info on doing spectral rendering. Basically, you are doing discrete integration along the X-domain on this graph:



for each of the curves shown (which correspond to the CIE XYZ values):



Spectral rendering is generally unnecessary for most scenes, but thin-film interaction, refraction (as steckles is doing), and fluorescence/phosphorescence are some cases where it matters. The last one means your ray tracer can simulate black lights!
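Structurally, the discrete integration looks like the sketch below. Note the matching curves here are crude single-Gaussian stand-ins, NOT the real CIE data (real x-bar is bimodal, and you'd load the published tables); only the shape of the computation is the point:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def matching_functions(wavelength_nm):
    # Stand-ins only: approximate peak locations, not the CIE tables.
    x_bar = gaussian(wavelength_nm, 600.0, 40.0)
    y_bar = gaussian(wavelength_nm, 555.0, 45.0)
    z_bar = gaussian(wavelength_nm, 450.0, 25.0)
    return x_bar, y_bar, z_bar

def spectrum_to_xyz(spectrum, lo=380.0, hi=730.0, n=70):
    """spectrum: function wavelength(nm) -> power.
    Riemann sum of power * matching function over n bins."""
    dw = (hi - lo) / n
    X = Y = Z = 0.0
    for i in range(n):
        w = lo + (i + 0.5) * dw  # bin midpoint
        p = spectrum(w)
        xb, yb, zb = matching_functions(w)
        X += p * xb * dw
        Y += p * yb * dw
        Z += p * zb * dw
    return X, Y, Z

# A narrow blue-ish emission line lands mostly in Z, as expected:
X, Y, Z = spectrum_to_xyz(lambda w: 1.0 if 440 <= w <= 460 else 0.0)
assert Z > X and Z > Y
```

Because XYZ is linear in the spectrum, you can accumulate per-sample XYZ contributions in the framebuffer and only convert XYZ to RGB once at the end.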

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Internet Janitor posted:

Hubis: Partially because I'd like to port this to a platform that lets me build a background out of 8x8 tiles and doesn't allow me to overlay more than one such tile. So basically a completely arbitrary limitation. The hexes themselves are indeed composed of 3x2 overlapping regions of these tiles. When I said this simplifies drawing, I really meant that having the hexes be the shape they are (squashed vertically) simplifies things because they are composed of just four types of tiles- center, center with baseline and two diagonal tiles.

I'm using a hex-cell numbering scheme which resembles a grid in which every other column is shifted down half a tile. This particular arrangement is nice because it maps easily to a 2d array for storage, but makes finding neighbors a little sticky. Basically the relative positions of horizontal neighbors shifts depending on what column you're in. I handle neighbors something like this:

code:
int[][] cells = new int[14][19];

int hex(int x, int y) {
	if (x < 0 || x >= cells[0].length || y < 0 || y >= cells.length) {
		return 5; // default "water" hexes off the edge of the map.
	}
	return cells[y][x];
}

enum Dir { N, NE, SE, S, SW, NW };

int adj(int x, int y, Dir d) {
	// odd columns are shifted down half a tile, so the horizontal
	// neighbors need a (x % 2) row adjustment
	switch(d) {
		case N:  return hex(x,   y-1);
		case NE: return hex(x+1, y-1+(x % 2));
		case SE: return hex(x+1, y  +(x % 2));
		case S:  return hex(x,   y+1);
		case SW: return hex(x-1, y  +(x % 2));
		default: return hex(x-1, y-1+(x % 2)); // NW
	}
}
I should probably replace this with a simple lookup table, I suppose.

As you said, there are several other ways to represent hex grids- another way I was considering was to use a scheme like this, where one of my axes is tilted:



Then neighbor offsets would be consistent but having a square map without wasting space is a little messier.

I'd love to hear more about other people's solutions to this type of problem.

One of the more elegant solutions I've seen is using a redundant 3rd dimension -- one for each of the opposing sides (so like what you have above, but with another axis crossing the diagonal one). The system is under-constrained, but is nice because movement is always one step along one of the axes.
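A common concrete version of that redundant-axis scheme is "cube coordinates": each hex is (q, r, s) with the constraint q + r + s == 0, every neighbor step changes exactly two axes, and grid distance falls out almost for free. A minimal sketch:

```python
# The six unit steps; each changes two of the three axes, preserving
# the q + r + s == 0 invariant.
CUBE_DIRECTIONS = [
    (+1, -1, 0), (+1, 0, -1), (0, +1, -1),
    (-1, +1, 0), (-1, 0, +1), (0, -1, +1),
]

def neighbors(cell):
    q, r, s = cell
    assert q + r + s == 0, "invalid cube coordinate"
    return [(q + dq, r + dr, s + ds) for dq, dr, ds in CUBE_DIRECTIONS]

def distance(a, b):
    # Half the L1 distance, because every unit step moves two axes.
    return (abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])) // 2

origin = (0, 0, 0)
assert len(neighbors(origin)) == 6
assert all(sum(n) == 0 for n in neighbors(origin))  # invariant preserved
assert distance(origin, (2, -1, -1)) == 2
```

For storage you can still drop one axis (s = -q - r) and keep a 2D array; the third coordinate exists only to make movement and distance uniform.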

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

AlsoD posted:

Baby's first game:

Two days into creating a game with Haskell and OpenGL (both of which I'm already familiar with), mostly as something to keep me busy

So far I have:
- hex based movement
- A*ish pathfinding
- impassable terrain
- drawing a board and a path
- changing the ends of the path with the keyboard

https://github.com/Swooshed/Game-of-Hexes

If anybody has any suggestions or code improvements, I'd love to hear them.

What's your internal representation of the Hexes? (i.e. coordinate system)

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Astrolite posted:



I'm learning python, and trying to combine two of my favorite games: XCOM and Missionforce: Cyberstorm.

What are you using to render in Python? pyGame?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...
How thick is the glass holding the water? Do you simulate non-air interfaces (i.e. glass-water) properly?


Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...
My project for National Game Development Month



It's a Neptune's Pride clone written with Tornado Web/jQuery/HTML5 Canvas. I've got the lobby and basic client/server architecture pretty much done, and now I'm working out some bugs and polishing the basic gameplay. Hopefully more to come towards the end of the month!

e: holy image-resizing, Batman!
