|
roomforthetuna posted:I asked the one guy I know who does pixel graphics much better and faster than I do, and he says that yeah, that's pretty much how it's done, and it just gets faster and better with practice. Is that really the only effective way? That only works well with larger graphics though.
|
# ? Oct 11, 2011 07:30 |
|
A quick question about roguelikes and Java. I am interested in making a roguelike in Java, but I can't decide between SDL and a curses implementation. Does anybody have any recommendations or experience?
|
# ? Oct 11, 2011 08:24 |
|
roomforthetuna posted:I think the real problem for me is that I'm not very good at drawing in the first place, so I probably need to work on that before getting into the technical stuff. I'm an illustrator and getting into pixel art was really weird for me, and the idea of having only like 16 by 16 pixels to work on seemed impossible and crazy! What worked for me was simply taking an existing piece of pixel art and straight up copying it. The nice part is that BECAUSE you only have those 16 by 16 pixels, "tracing" pixel art goes really fast. Do this a couple of times and you'll quickly start to understand why certain things are done a certain way. And at that point you can try your hand at your own pieces, and it'll definitely go quicker over time. Of course my main claim to fame here is the grotesque artwork for Soul Jelly so your mileage may vary.
|
# ? Oct 11, 2011 09:39 |
|
re: Sprites - Westborn took the complete opposite, crazy approach for Deadweed. Can't argue with the results though.
|
# ? Oct 11, 2011 09:44 |
|
The Monarch posted:I'm having a weird issue with XNA. I'm trying to load a Texture2D, but it's not working for some reason. Is there a reason you aren't using the Content portion of your solution and using direct paths instead? All of that can be condensed into a single line (or more if you're passing ContentManager around). code:
|
# ? Oct 11, 2011 13:26 |
|
Suran37 posted:A quick question about roguelikes and Java. I haven't gotten any further than the planning stages of the roguelike I want to make, but in my experience the C++ version of curses is a bitch to work with. I don't know anything about SDL, but there you go. I'd also recommend checking out lwjgl, as it seems to handle most graphical stuff pretty easily. Depending on whether or not you're making a graphical roguelike, that seems like it might be useful.
|
# ? Oct 11, 2011 13:50 |
|
Suran37 posted:A quick question about roguelikes and Java. SDL will definitely keep your project open to expansion. You can change it to a graphics-based one at any time with minimal code changes, while the curses implementation limits you to text. The first advice that comes to mind is 'Don't bite off more than you can chew.' Roguelikes are a surprising amount of work, even short or coffeebreak ones. If you happen to know Python/C/C++ or would want to learn, there's a great library that will take care of some of the dirty work and let you focus more on game logic.
|
# ? Oct 11, 2011 14:29 |
|
The other alternative would just be to use Java2D. It's hardware-accelerated on most platforms, portable and quite flexible once you learn how to use it properly.
|
# ? Oct 11, 2011 14:32 |
|
Red Mike posted:SDL will definitely keep your project open to expansion. You can change it to a graphics-based one at any time with minimal code changing, while the curses implementation limits you to text. Too bad Mac support for the Python version is non-existent at the moment
|
# ? Oct 11, 2011 15:04 |
|
How should I go about finding someone to make music for my game? I'd want a couple of tracks, but I've been quoted 200-400 for 30-60 seconds of audio. I can't imagine what several tracks would cost me. How much do y'all generally pay for several tracks of audio?
|
# ? Oct 11, 2011 15:49 |
|
Hubis posted:Too bad Mac support for the Python version is non-existent at the moment Not to mention the fact that I suck at configuring C++ libraries so I've never managed to get it to work with anything other than Python. Sometimes I wish I didn't want to make cross-platform stuff so I could just muck around with XNA.
|
# ? Oct 11, 2011 15:53 |
|
dangerz posted:How should I go about finding someone to make music for my game? I'd want a couple of tracks but I've been quoted 200-400 for 30-60 seconds of audio. I can't imagine what several tracks would cost me. Since I make really small, experimental games, I just go to indiegamemusic.com and either get permission for the free version, or, if there'll be ad revenue, buy a track that fits my game decently. I think having someone actually make it from scratch is either going to be free if they're nice or about what you were talking about if you're paying them.
|
# ? Oct 11, 2011 21:29 |
|
Okay, has anyone messed with Bullet physics at all? I'm trying to find out if it's actually safe to do mass object enables/disables so that asynchronous object updates could actually work with it. i.e. a lot of games use a model of updating players separately from the world simulation, which would ostensibly involve disabling physics (but not force accumulation) on whatever's not being updated and then stepping the world. I'm trying to figure out if this would actually work or not. (Of course, I'm also considering just waiting for Doom 3's source, which has this problem mostly solved already)
|
# ? Oct 11, 2011 22:58 |
|
dangerz posted:How should I go about finding someone to make music for my game? I'd want a couple of tracks but I've been quoted 200-400 for 30-60 seconds of audio. I can't imagine what several tracks would cost me. Audio can cost a lot of money. We were fortunate to meet some musicians who offered their services for free. For our first game, however, almost all of our music came from http://incompetech.com/. Royalty-free music may be your best option.
|
# ? Oct 11, 2011 23:11 |
|
Nalin posted:Audio can cost a lot of money. We were fortunate to meet some musicians who offered their services for free. For our first game, however, almost all of our music came from http://incompetech.com/. Royalty-free music may be your best option. Thanks for the info!
|
# ? Oct 11, 2011 23:58 |
|
Finally had a little free time to tinker with my toy VM again. This afternoon I built a Tetris clone: check out the source! (forth)
|
# ? Oct 13, 2011 04:32 |
|
OneEightHundred posted:Okay, has anyone messed with Bullet physics at all? I'm trying to find out if it's actually safe to do mass object enables/disables so that asynchronous object updates could actually work with it. Have you toyed around with http://www.bulletphysics.com/Bullet/BulletFull/classbtKinematicCharacterController.html ? From what I've read it's a bit bugged, but maybe you can work out your own objects from looking at its source.
|
# ? Oct 14, 2011 23:31 |
|
Zerf posted:Have you toyed around with http://www.bulletphysics.com/Bullet/BulletFull/classbtKinematicCharacterController.html ? From what I've read it's a bit bugged, but maybe you can work out your own objects from looking at its source. My concern is one of stepping the player and the world independently of one another without everything just breaking.
|
# ? Oct 14, 2011 23:55 |
|
Is there any compelling reason not to use Boost::Serialization to implement save game functionality? Is there a better way to handle saving objects that use STL containers and pointers and complex user-defined classes?
|
# ? Oct 15, 2011 01:48 |
|
Wouldn't mind hearing everyone's thoughts on this please: http://www.youtube.com/watch?v=lYZ2C_LN6cQ
|
# ? Oct 15, 2011 04:29 |
|
Vino posted:Wouldn't mind hearing everyone's thoughts on this please: I'm not an expert in graphics by any means, but isn't this just LOD? Also, the demos in the video were highly unimpressive. The textures were blurry at all levels of detail, and there was obvious texture popping and weird issues with gaps between sections of the planet model. The scalable floating point numbers might be interesting? Seems like there would probably be significant overhead in using them that would cause performance issues in a game.
|
# ? Oct 15, 2011 08:04 |
|
Vino posted:Wouldn't mind hearing everyone's thoughts on this please: Also, google "projection space partitioning." That is, more or less, what you're doing there - it's used in space sims to render 3D cockpits, and also for far-distant planetoids et al. The big difference is that usually the partitions do not occupy the same space, whereas yours do. What you're doing also wouldn't work with the depth buffer, unless I'm missing something, so you'd have to painter's algorithm each "segment" of the world yourself, I imagine? Even with partitioning techniques, there is a max depth resolution; all you're doing is choosing to weight the accuracy into those regions that most need it. EDIT: Actually, no, don't google that... drat it, what is the official term. Not sure. Maybe search on 3D cockpits / space sim / etc. ... but I don't really mean to take a poo poo on your project - it's a cool bit of work so far. I'm just asking questions and hoping for good answers / an interesting discussion Shalinor fucked around with this message at 17:58 on Oct 15, 2011 |
# ? Oct 15, 2011 17:53 |
|
So, I am not sure if this is the best place to ask about coding/logic issues, but here goes: What I want to do is render a layer only when the character is not under an occupied space. For example, if the character enters a house with a roof, the layer above stops getting drawn, but if he steps outside, the above layer starts rendering again. For the most part, it works, but it only works when the character's origin makes the condition true, not if any part of the sprite makes it true. code:
GetCellIndex() returns the number (that identifies the tile) at the given position. In this case, it checks if it is -1 (which means the Cell is empty); if it is not, then it stops drawing the layers in the list. I know that this should be really simple, but for some reason my brain just farts when it has to come up with a solution to this.
|
# ? Oct 15, 2011 18:22 |
|
Medieval Medic posted:
There's nothing magical. On this line, you want to call it enough times to cover every position on the sprite. The way I like to do it is something like: code:
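Roughly what that amounts to is sampling every tile the sprite's bounding box overlaps, not just the tile under the origin. A sketch in C++ (the thread's XNA code would be C#, but the idea is identical; `TileMap`, `GetCell`, and the sizes are hypothetical stand-ins, not the poster's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical stand-in for the poster's map type. -1 means an empty cell.
struct TileMap {
    int tileSize;
    int width, height;          // in tiles
    std::vector<int> cells;
    int GetCell(int tx, int ty) const {
        if (tx < 0 || ty < 0 || tx >= width || ty >= height) return -1;
        return cells[ty * width + tx];
    }
};

// True if ANY tile overlapped by the sprite's bounding box is occupied,
// not just the tile under the sprite's origin.
bool UnderOccupiedTile(const TileMap& map, float x, float y, float w, float h) {
    int ts = map.tileSize;
    int x0 = (int)std::floor(x / ts), x1 = (int)std::floor((x + w - 1) / ts);
    int y0 = (int)std::floor(y / ts), y1 = (int)std::floor((y + h - 1) / ts);
    for (int ty = y0; ty <= y1; ++ty)
        for (int tx = x0; tx <= x1; ++tx)
            if (map.GetCell(tx, ty) != -1) return true;
    return false;
}
```

A 16x16 sprite straddling a tile boundary then trips the check as soon as any corner pokes under the roof, even while its origin is still outside.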
iopred fucked around with this message at 18:55 on Oct 15, 2011 |
# ? Oct 15, 2011 18:52 |
|
Gordon Cole posted:I'm not an expert in graphics by any means, but isn't this just LOD? No, not really. LOD would be needed to do that, but what's going on here is a way to render things that are a meter in front of you and also a few lightyears away at the same time, and all areas in between. It's the same idea as how a skybox is rendered, but taken up a notch. And I know the demos look like crap, I didn't want to go any further without having solved the collision problem. Shalinor posted:The floating point solution is somewhat novel, but will be very, very slow when you start simulating physics. I'm not actually using the scalable floats for simulation. I break each section of terrain into a chunk and each chunk has a regular floating point simulation. The game only needs to test against maybe four chunks at a time max, and usually only one. It's all really complicated and I haven't gotten it entirely working yet because it involves all that streaming stuff that would take ages for me to do. (I want to work on an indie project that takes a couple days to prototype, not a couple months like this beast has been.) Re super-distant sims, I'm only simulating things that are near the camera. The plan is that the star/planet that the player is near is simulated and other things aren't. The "astronomy" physics model I'm using is really simple: planets spin and maybe move around their star, but they do so on preset paths. Distant stars can be left out of memory and just rendered as a tiny dot, no collision necessary. And as for memory, as it stands it doesn't really take all that much. The plan is that the game deterministically generates random terrain that it can re-generate again each time. If the player leaves an object there then that must be remembered, but the terrain itself would just be re-generated on the fly. So the entire universe could be explored without having to fit the entire universe in memory.
And lastly, I want to make a game out of it but I haven't decided on what kind yet, no point if I can't get the last big simulation/streaming piece of the puzzle into place. This projection space stuff is interesting, could you tell me more about it? Like remember the name maybe?
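The deterministic-regeneration idea is straightforward to sketch: derive a per-chunk seed from the world seed and the chunk coordinates, then feed it to a small PRNG. Everything here (names, the hash mixing constants) is illustrative, not Vino's actual code:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mix the world seed with chunk coordinates into a per-chunk seed.
// Same inputs always give the same seed, so chunks can be thrown away
// and regenerated on demand.
static uint64_t ChunkSeed(uint64_t worldSeed, int32_t cx, int32_t cy) {
    uint64_t h = worldSeed;
    h ^= (uint64_t)(uint32_t)cx * 0x9E3779B97F4A7C15ULL;
    h = (h ^ (h >> 30)) * 0xBF58476D1CE4E5B9ULL;
    h ^= (uint64_t)(uint32_t)cy * 0x94D049BB133111EBULL;
    h = (h ^ (h >> 27)) * 0x94D049BB133111EBULL;
    return h ^ (h >> 31);
}

// Tiny xorshift-style PRNG seeded per chunk.
struct ChunkRng {
    uint64_t s;
    uint32_t Next() {
        s ^= s << 13; s ^= s >> 7; s ^= s << 17;
        return (uint32_t)(s >> 32);
    }
};

// Regenerable terrain heights for one chunk; only player-made changes
// would need to be persisted separately.
std::vector<uint8_t> GenerateChunk(uint64_t worldSeed, int32_t cx, int32_t cy, int n) {
    ChunkRng rng{ChunkSeed(worldSeed, cx, cy) | 1};  // avoid the all-zero state
    std::vector<uint8_t> heights(n);
    for (int i = 0; i < n; ++i) heights[i] = (uint8_t)(rng.Next() % 64);
    return heights;
}
```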
|
# ? Oct 15, 2011 19:41 |
|
Vino posted:This projection space stuff is interesting, could you tell me more about it? Like remember the name maybe? The typical partition for a deep-space game goes something like: cockpit gets 10-15% of the zbuffer space, near space gets 70% or so, distant gets 10-15%. Or you might skew the distant to be a bit closer to 30%. You just change out your projection matrix as needed for each section. Really depends on your needs, but it works pretty well for mixing "those giant distant planets" with "that ant on your dashboard." EDIT: VV I'm talking about the crazy projection matrix weighting method. The whole point is that you don't have to clear the zbuf / things in the cockpit still depth-check against the things in the rest of space. You literally partition the zbuf. Nothing that is non-cockpit can render with a final Z of less than 0.3 (even if in actual space it's a centimeter from the camera), nothing in cockpit ever renders with above 0.3, etc. Shalinor fucked around with this message at 20:39 on Oct 15, 2011 |
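The partitioning being described can be sketched as a remap: each scope (cockpit, near space, distant) gets its own near/far planes and an exclusive slice of the final [0,1] depth range. The split points and plane distances below are illustrative, not anything from the post:

```cpp
#include <cassert>
#include <cmath>

// One depth-buffer partition: a scope's eye-space range plus the slice of
// final depth it is allowed to write. Values here are made up for the sketch
// (the post suggests ~10-15% for the cockpit; we use 15%).
struct DepthPartition {
    float zNear, zFar;   // eye-space range this scope covers
    float d0, d1;        // exclusive slice of [0,1] final depth
    // Standard perspective depth in [0,1] for this scope's planes,
    // remapped into the scope's slice.
    float FinalDepth(float zEye) const {
        float d = (zFar / (zFar - zNear)) * (1.0f - zNear / zEye);
        return d0 + d * (d1 - d0);
    }
};

const DepthPartition kCockpit = {0.01f,     2.0f,   0.0f,  0.15f};
const DepthPartition kNear    = {2.0f,      5e4f,   0.15f, 0.85f};
const DepthPartition kDistant = {5e4f,      1e12f,  0.85f, 1.0f};
```

The payoff is exactly what the post says: a world object a centimeter from the camera still lands at a final depth of 0.15 or more, so it can never poke through the cockpit, and no clear between scopes is needed.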
# ? Oct 15, 2011 19:58 |
|
That doesn't really get any hits, but is what you're talking about changing your projection matrix and clearing your depth buffer for what is basically another pass? Or is it some crazy weighting for the depth buffer to do it all in one pass? Because what I'm doing is the former. Also what I'm doing is a tad different because typically with the "skybox" approach you know which render scope your objects are going to be in, that far-off gas giant never changes into the "near" scope. In Descent Freespace for example, the game doesn't let you fly outside of a certain bullpen of space. In Codename: Infinite you can fly to that gas giant and it automatically and gradually transfers the terrain on the surface of a planet to nearer and nearer render levels. Or do I have all this wrong?
|
# ? Oct 15, 2011 20:09 |
|
Shalinor posted:The floating point solution is somewhat novel, but will be very, very slow when you start simulating physics. Eve Online does this for in-space simulation. There are effectively two levels of calculation, "grid"-level and system-level. At any given time, you're on a "grid" with an origin point, and if you leave it, it will either create a new grid or expand the one you were in. It doesn't handle grid merges (leading to some pretty hilarious exploits about a year ago), but that's the only real problem with making it scale effectively infinitely. Consequently, you can interact with things on a scale of meters despite being in a system where things are measured in astronomical units. OneEightHundred fucked around with this message at 21:49 on Oct 15, 2011 |
# ? Oct 15, 2011 21:42 |
|
I thought about doing it that way, but I decided against it because of the grid-merging problems you're talking about and because I thought it would be easier to make what I did scale to infinity than a grid system. With a grid system you're essentially going to have a coordinate for each grid (x, y), and then you're going to be limited by the size of the integer that holds those coordinates. With what I did you can always just add another level should you need more, and since it's still partially based on floating points, you can still exceed the maximum range of the base-1000 number by a lot before you start getting into precision problems. I glossed over some of what I did with the scalable floats. They actually look like this: float - int - int - int - int - int - float. The float on the left represents the "overflow": numbers too large to fit into the leftmost int are placed in that float. Same with the right float, it's the underflow. Since that float on the left is actually a double, I'm able to get 48 digits of precision, which is larger than the size of the Milky Way galaxy, which isn't really "universe" sized but it's big enough for now.
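One way to read that digit-chain layout is a positional base-1000 number with carry propagation. This is a hedged interpretation, not Vino's implementation; it drops the overflow/underflow floats and keeps just the base-1000 digits plus a fractional remainder:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// value = digit[0]*1000^5 + ... + digit[4]*1000 + frac, with frac in [0, 1000)
// and each digit normalized into [0, 1000). Adding another level of range is
// just lengthening the digit array.
struct BigCoord {
    int64_t digit[5];  // digit[0] most significant, base 1000
    double frac;       // least-significant place, carries its own fraction

    void Normalize() {
        double whole = std::floor(frac / 1000.0);
        digit[4] += (int64_t)whole;
        frac -= whole * 1000.0;
        for (int i = 4; i > 0; --i) {
            int64_t carry = digit[i] / 1000;
            if (digit[i] % 1000 < 0) --carry;   // floor division for negatives
            digit[i] -= carry * 1000;
            digit[i - 1] += carry;
        }
    }
    void Add(double units) { frac += units; Normalize(); }
};
```

Since only `frac` is floating point and it stays below 1000, precision in the low place never degrades no matter how large the high digits grow, which is the whole appeal of the scheme.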
|
# ? Oct 15, 2011 22:00 |
|
Vino posted:I thought about doing it that way, but I decided against it because of the grid-merging problems you're talking about Integer precision isn't really an issue, because you can use floating point positions for grid locations. You don't have to worry about floating point loss between grids because if you want to find the reinsert location in another grid, you start by subtracting the grid origins, which gives you a much smaller-scale number than the grid origins themselves. OneEightHundred fucked around with this message at 22:35 on Oct 15, 2011 |
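A minimal sketch of that reinsert trick (1D, hypothetical names): subtract the origins first, then add the small local offset, so the huge magnitudes cancel before any small number gets rounded:

```cpp
#include <cassert>

// Each grid stores an origin in world space; objects store small local
// coordinates relative to their grid's origin.
struct Grid { double originX; };  // 1D for brevity

// Move a local position from one grid to another. The origins may be huge,
// but neighboring origins differ by a small amount, so subtracting them
// first keeps the result in a precision-friendly range.
double Reinsert(const Grid& from, const Grid& to, double localX) {
    double originDelta = from.originX - to.originX;  // small-scale number
    return localX + originDelta;
}
```

Going through world space instead (origin + local, then subtract the other origin) rounds the local offset against the origin's magnitude and loses the fraction, which is exactly the loss this ordering avoids.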
# ? Oct 15, 2011 22:17 |
|
Logarithmic depth buffers are a good solution for rendering things at extreme scales. His example with a 24-bit Z-buffer has a resolution of 2 micrometers at 1 meter, and a resolution of 10 meters at 10,000 km. Also, Infinity is an indie space MMO that seems to be aiming at accomplishing a lot of the same things...
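A sketch of one common logarithmic mapping (the form and the plane constants here are illustrative; with these planes the resolution works out close to the figures quoted above):

```cpp
#include <cassert>
#include <cmath>

// Illustrative near/far planes: 10 cm to ~a trillion meters.
const double kNear = 0.1, kFar = 1e12;

// Map eye-space distance z to [0,1] logarithmically.
double LogDepth(double z) {
    return std::log(z / kNear) / std::log(kFar / kNear);
}

// Smallest resolvable z step at distance z with `bits` of depth precision.
// From d'(z) = 1 / (z * ln(far/near)), one depth tick covers roughly
// z * ln(far/near) / 2^bits — i.e. resolution scales with distance.
double Resolution(double z, int bits) {
    return z * std::log(kFar / kNear) / std::pow(2.0, bits);
}
```

With a 24-bit buffer this gives micrometer-scale steps at 1 m and tens-of-meters steps at 10,000 km, which is why the approach holds up at planetary scales.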
|
# ? Oct 15, 2011 23:05 |
|
Today I learned that putting [unroll] before a tiny for loop in your pixel shader can mean the difference between 200 FPS and 1 FPS. Never again will I assume the shader compiler is in any way intelligent.
|
# ? Oct 16, 2011 01:51 |
|
Scaevolus posted:Logarithmic depth buffers are a good solution for rendering things at extreme scales. His example with a 24-bit Z-buffer has a resolution of 2 micrometers at 1 meter, and a resolution of 10 meters at 10,000 km. EVE Online was at one time using a W-Buffer (nVidia PDF) that seemed to have worked well with extreme distances.
|
# ? Oct 16, 2011 08:22 |
|
Lemon King posted:EVE Online was at one time using a W-Buffer (nVidia PDF) that seemed to have worked well with extreme distances. W-Buffering's big thing is that it equalizes the distribution of z-accuracy across the entirety of your render volume. In space, that's actually a bad thing, unless you can guarantee all your objects are above or within a certain scale range. In particular, it would be a poor choice for any kind of cockpit view (which Eve doesn't use), but it's great for cases where your object size is relatively homogenous (which is... kind of, true, for Eve - though it seems like their regular vs cap ships would be quite a size difference). EDIT: Well, no, I fubbed that. It makes z-buffer collisions consistent and predictable, which is a good thing for long view distances involving big objects in the distance that make flickering very noticeable. It's just that it's also a poor choice for long view distances, since the remaining z accuracy you've got up close is now very, very limited compared to usual. So you've got to build your world accordingly, with big objects that mostly have no need of zbuffering / depend on zbuffering only for sorting one large object against another. So yeah, that's a good fit for what Eve does. Less good a fit for other space-type games, though. It really just comes down to what you're trying to do. Log's great for one distribution, W's great for another, and partitioning z space is great for still other reasons (and could, I suppose, be mixed with either of the other approaches too). ... actually, huh. That could be interesting. If you did log + partitions, you'd have multiple bands of z accuracy running through your world. That might be interesting, if it suited your LOD scheme well. Shalinor fucked around with this message at 17:19 on Oct 16, 2011 |
# ? Oct 16, 2011 17:05 |
|
Shalinor posted:W-Buffering's big thing is that it equalizes the distribution of z-accuracy across the entirety of your render volume. In space, that's actually a bad thing, unless you can guarantee all your objects are above or within a certain scale range. In particular, it would be a poor choice for any kind of cockpit view (which Eve doesn't use), but it's great for cases where your object size is relatively homogenous (which is... kind of, true, for Eve - though it seems like their regular vs cap ships would be quite a size difference). It actually wouldn't be too much of a problem with a cockpit view, just because you can manually sort the depth planes -- i.e. render outside the cockpit, clear the buffer and change the near/far distances, and draw the cockpit using a clean Z-buffer.
|
# ? Oct 16, 2011 20:42 |
|
Hubis posted:It actually wouldn't be too much of a problem with a cockpit view, just because you can manually sort the depth planes -- i.e. render outside the cockpit, clear the buffer and change the near/far distances, and draw the cockpit using a clean Z-buffer. Or just render the cockpit as a transparent sprite afterwards with z-buffer disabled, to save on the wasteful clear (and wasteful z-buffer comparisons for that matter).
|
# ? Oct 16, 2011 21:19 |
|
Or you just render the cockpit after a Z-buffer clear because it will never intersect or overlap anything else in the game world.
|
# ? Oct 16, 2011 21:41 |
|
OneEightHundred posted:Or you just render the cockpit after a Z-buffer clear because it will never intersect or overlap anything else in the game world. It can handle shifts in the projection matrix, but when you start clearing or fiddling too much with the display surface, you're eating performance. roomforthetuna posted:Or just render the cockpit as a transparent sprite afterwards with z-buffer disabled, to save on the wasteful clear (and wasteful z-buffer comparisons for that matter). You could do that by rendering off-screen and compositing, but that'd be even worse on the rendering. (EDIT: unless you double/triple buffered and allowed the cockpit to lag a few frames - that'd actually work pretty well, hmmm) EDIT: Oh, wait, you mean 3D sprite. Doable, but only if your cockpit is guaranteed rigid, and either doesn't require depth comparisons in its rendering, or you've done some fancy tricks with the VB to guarantee draw order of overlapping elements. Still, yeah, doable, depending on requirements. Other big issue is if there's stuff in your cockpit that isn't strictly cockpit-related (particles driven by the regular game particle system, etc). Shalinor fucked around with this message at 22:10 on Oct 16, 2011 |
# ? Oct 16, 2011 21:59 |
|
Does anyone know how to export from Autodesk Softimage XSI Mod Tool, or whatever I'm supposed to call it/google, into .fbx? I want to get stuff out of it into UDK but I'm having a hard time finding guides on how to do it. Most of what I find seems outdated. How much better is Blender for this? I just want to get basic shapes in really for now, I'm no 3D artist by any means.
|
# ? Oct 17, 2011 00:42 |
|
Shalinor posted:It can handle shifts in the projection matrix, but when you start clearing or fiddling too much with the display surface, you're eating performance.
|
# ? Oct 17, 2011 19:18 |