OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

roomforthetuna posted:

I asked the one guy I know who does pixel graphics much better and faster than I do, and he says that yeah, that's pretty much how it's done, and it just gets faster and better with practice. Is that really the only effective way?
The alternative is to do vector graphics with snap-to-grid, then scale the grid to 1px when you're done and turn anti-aliasing off.

That only works well with larger graphics though.

Suran37
Feb 28, 2009
A quick question about roguelikes and Java.

I am interested in making a roguelike in Java, but I can't decide between SDL and a curses implementation. Does anybody have any recommendations or experience?

high on life and meth
Jul 14, 2006

Fika
Rules
Everything
Around
Me

roomforthetuna posted:

I think the real problem for me is that I'm not very good at drawing in the first place, so I probably need to work on that before getting into the technical stuff.

I'm an illustrator and getting into pixel art was really weird for me, and the idea of having only like 16 by 16 pixels to work on seemed impossible and crazy!

What worked for me was simply taking an existing piece of pixel art and straight up copying it. The nice part is that BECAUSE you only have those 16 by 16 pixels, "tracing" pixel art goes really fast. Do this a couple of times and you'll quickly start to understand why certain things are done a certain way. And at that point you can try your hand at your own pieces, and it'll definitely go quicker over time.

Of course my main claim to fame here is the grotesque artwork for Soul Jelly so your mileage may vary.

Synthbuttrange
May 6, 2007

re: Sprites

Westborn went with the complete opposite, crazy method for Deadweed.

Can't argue with the results though. :stare:

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

The Monarch posted:

I'm having a weird issue with XNA. I'm trying to load a Texture2D, but it's not working for some reason.

creaturePath can be ignored, and ArtPath is equal to "\\Art\\Dynamic\\Creatures\\Dog\\".

code:
        public void LoadAnimations(ContentManager content, String creaturePath)
        {

            // Load paths to animation
            String[] filelist = Directory.GetFiles(content.RootDirectory + ArtPath);

            // Add animations to animation dictionary
            foreach (String animationPath in filelist)
            {
                // Get file name
                String animationName = System.Text.RegularExpressions.Regex.Replace
                    (animationPath, @"\\", ",");
                animationName = System.Text.RegularExpressions.Regex.Split
                    (animationName, ",")[5];

                animationName = System.Text.RegularExpressions.Regex.Split
                    (animationName, ".xnb")[0];

                String path = ArtPath + animationName;
                // Load animation texture
                Texture2D animationTexture = content.Load<Texture2D>(path); // <--Error

                ...
            }
        }
The specific error I'm getting is this:

Error loading "\Art\Dynamic\Creatures\Dog\dog3_idle_n". Cannot open file.

I've added the files through Visual Studio, and if I remove the files and re-add them, the proper .xnb files are created. Can anyone tell me what's up?

Edit: Never mind. I did more reading about how content managers and stuff work, so now it's working fine. I'm just passing the content service provider from the main "Game1.cs" class to my creature class, and making a new content manager there.

Is there a reason you aren't using the Content portion of your solution, and are using direct paths instead? All of that can be condensed into a single line (or more if you're passing ContentManager around).
code:
Texture2D mySprite = Content.Load<Texture2D>("env/door_wood_ns");
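
For what it's worth, here's a rough sketch of the condensed version, keeping the directory scan but dropping the regex juggling. This is just an illustration: the animations dictionary, the using lines, and the exact paths are my assumptions, not anything from the original code.
code:
using System.IO;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Assumes an 'animations' dictionary field (string -> Texture2D) and built
// .xnb files sitting under Content's RootDirectory + artPath.
public void LoadAnimations(ContentManager content, string artPath)
{
    foreach (string file in Directory.GetFiles(content.RootDirectory + artPath, "*.xnb"))
    {
        // Content.Load wants an asset name relative to RootDirectory, minus the extension.
        string animationName = Path.GetFileNameWithoutExtension(file);
        animations[animationName] = content.Load<Texture2D>(artPath + animationName);
    }
}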

quiggy
Aug 7, 2010

[in Russian] Oof.


Suran37 posted:

A quick question about roguelikes and Java.

I am interested in making a roguelike in Java, but I can't decide between SDL and a curses implementation. Does anybody have any recommendations or experience?

I haven't gotten any further than the planning stages in the roguelike I want to make, but in my experience the C++ version of curses is a bitch to work with. I don't know anything about SDL, but there you go. I'd also recommend checking out lwjgl, as it seems to handle most graphical stuff pretty easily. Depending on whether or not you're making a graphical roguelike, that seems like it might be useful.

Red Mike
Jul 11, 2011

Suran37 posted:

A quick question about roguelikes and Java.

I am interested in making a roguelike in Java, but I can't decide between SDL and a curses implementation. Does anybody have any recommendations or experience?

SDL will definitely keep your project open to expansion. You can change it to a graphics-based one at any time with minimal code changes, while the curses implementation limits you to text.

The first piece of advice that comes to mind is 'Don't bite off more than you can chew.' Roguelikes are a surprising amount of work, even short or coffee-break ones.

If you happen to know Python/C/C++ or want to learn, there's a great library that will take care of some of the dirty work and let you focus more on game logic.

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."
The other alternative would just be to use Java2D. It's hardware-accelerated on most platforms, portable and quite flexible once you learn how to use it properly.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Red Mike posted:

SDL will definitely keep your project open to expansion. You can change it to a graphics-based one at any time with minimal code changes, while the curses implementation limits you to text.

The first piece of advice that comes to mind is 'Don't bite off more than you can chew.' Roguelikes are a surprising amount of work, even short or coffee-break ones.

If you happen to know Python/C/C++ or want to learn, there's a great library that will take care of some of the dirty work and let you focus more on game logic.

Too bad Mac support for the Python version is non-existent at the moment :(

dangerz
Jan 12, 2005

when i move you move, just like that
How should I go about finding someone to make music for my game? I'd want a couple of tracks, but I've been quoted 200-400 for 30-60 seconds of audio. I can't imagine what several tracks would cost me.

How much do y'all generally pay for several tracks of audio?

quiggy
Aug 7, 2010

[in Russian] Oof.


Hubis posted:

Too bad Mac support for the Python version is non-existent at the moment :(

Not to mention the fact that I suck at configuring C++ libraries so I've never managed to get it to work with anything other than Python. Sometimes I wish I didn't want to make cross-platform stuff so I could just muck around with XNA.

ambushsabre
Sep 1, 2009

It's...it's not shutting down!

dangerz posted:

How should I go about finding someone to make music for my game? I'd want a couple of tracks, but I've been quoted 200-400 for 30-60 seconds of audio. I can't imagine what several tracks would cost me.

How much do y'all generally pay for several tracks of audio?

Since I make really small, experimental games, I just go to indiegamemusic.com and either get permission for the free version or, if there'll be ad revenue, buy a track that fits my game decently. I think having someone actually make it from scratch is either going to be free if they're nice, or about what you were quoted if you're paying them.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Okay, has anyone messed with Bullet physics at all? I'm trying to find out if it's actually safe to do mass object enables/disables so that asynchronous object updates could actually work with it.

i.e. a lot of games use a model of updating players separately from the world simulation, which would ostensibly involve disabling physics (but not force accumulation) on whatever's not being updated and then stepping the world. I'm trying to figure out if this would actually work or not.


(Of course, I'm also considering just waiting for Doom 3's source, which has this problem mostly solved already)

Nalin
Sep 29, 2007

Hair Elf

dangerz posted:

How should I go about finding someone to make music for my game? I'd want a couple of tracks, but I've been quoted 200-400 for 30-60 seconds of audio. I can't imagine what several tracks would cost me.

How much do y'all generally pay for several tracks of audio?
Audio can cost a lot of money. We were fortunate to meet some musicians who offered their services for free. For our first game, however, almost all of our music came from http://incompetech.com/. Royalty-free music may be your best option.

dangerz
Jan 12, 2005

when i move you move, just like that

Nalin posted:

Audio can cost a lot of money. We were fortunate to meet some musicians who offered their services for free. For our first game, however, almost all of our music came from http://incompetech.com/. Royalty-free music may be your best option.
That incompetech site is awesome. I'm just working on a little hobby game, so what I need doesn't have to be fancy.

Thanks for the info!

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."
Finally had a little free time to tinker with my toy VM again. This afternoon I built a Tetris clone:


check out the source! (forth)

Zerf
Dec 17, 2004

I miss you, sandman

OneEightHundred posted:

Okay, has anyone messed with Bullet physics at all? I'm trying to find out if it's actually safe to do mass object enables/disables so that asynchronous object updates could actually work with it.

i.e. a lot of games use a model of updating players separately from the world simulation, which would ostensibly involve disabling physics (but not force accumulation) on whatever's not being updated and then stepping the world. I'm trying to figure out if this would actually work or not.


(Of course, I'm also considering just waiting for Doom 3's source, which has this problem mostly solved already)

Have you toyed around with http://www.bulletphysics.com/Bullet/BulletFull/classbtKinematicCharacterController.html ? From what I've read it's a bit bugged, but maybe you can work out your own objects from looking at its source.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Zerf posted:

Have you toyed around with http://www.bulletphysics.com/Bullet/BulletFull/classbtKinematicCharacterController.html ? From what I've read it's a bit bugged, but maybe you can work out your own objects from looking at its source.
That just applies input to discretely move the player; the player would still be subject to physical forces as part of the world simulation.

My concern is one of stepping the player and the world independently of one another without everything just breaking.

Top Bunk Wanker
Jan 31, 2005

Top Trump Anger
Is there any compelling reason not to use Boost::Serialization to implement save game functionality? Is there a better way to handle saving objects that use STL containers and pointers and complex user-defined classes?

Vino
Aug 11, 2010
Wouldn't mind hearing everyone's thoughts on this please:

http://www.youtube.com/watch?v=lYZ2C_LN6cQ

dizzywhip
Dec 23, 2005

Vino posted:

Wouldn't mind hearing everyone's thoughts on this please:

http://www.youtube.com/watch?v=lYZ2C_LN6cQ

I'm not an expert in graphics by any means, but isn't this just LOD? Also, the demos in the video were highly unimpressive. The textures were blurry at all levels of detail, and there was obvious texture popping and weird issues with gaps between sections of the planet model.

The scalable floating point numbers might be interesting? Seems like there would probably be significant overhead in using them that would cause performance issues in a game.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

Vino posted:

Wouldn't mind hearing everyone's thoughts on this please:

http://www.youtube.com/watch?v=lYZ2C_LN6cQ
The floating point solution is somewhat novel, but will be very, very slow when you start simulating physics. It also kind of side-steps the REAL reason you don't simulate the infinite world - because it won't all fit in memory, can't all be updated in real time, etc. You already have to start reducing the LOD of super-distant sims to some representative sim (transition from entity-specific AI to regional AI that sums the approximate actions of known factions in the region), so if you're doing that, why is rendering and representing at an infinite distance desirable? You're also going to hit some serious drawcall limitations eventually, etc. What's your plan for how the tech will be used?

Also, google "projection space partitioning." That is, more or less, what you're doing there - it's used in space sims to render 3D cockpits, and also for far-distant planetoids et al. The big difference is that usually the partitions do not occupy the same space, whereas yours do. What you're doing also wouldn't work with the depth buffer, unless I'm missing something, so you'd have to painter's-algorithm each "segment" of the world yourself, I imagine? Even with partitioning techniques, there is a max depth resolution; all you're doing is choosing to weight the accuracy into those regions that most need it.

EDIT: Actually, no, don't google that... drat it, what is the official term. Not sure. Maybe search on 3D cockpits / space sim / etc.


... but I don't really mean to take a poo poo on your project - it's a cool bit of work so far. I'm just asking questions and hoping for good answers / an interesting discussion :3:

Shalinor fucked around with this message at 17:58 on Oct 15, 2011

Medieval Medic
Sep 8, 2011
So, I am not sure if this is the best place to ask about coding/logic issues, but here goes:

What I want to do is render a layer only when the character is not under an occupied space.
For example, if the character enters a house with a roof, the layer above stops getting drawn, but if he steps outside, the above layer starts rendering again. For the most part, it works, but it only works when the character's origin makes the condition true, not if any part of the sprite makes it true.

code:
int counter = 0;

foreach (TileLayer layer in Layers)
{
    counter++;

    layer.Draw(spriteBatch, camera, min, max);

    // Once the layer the sprite is on has been drawn, check the layer above it.
    if (sprite.Layer == counter && counter < Layers.Count)
    {
        // If the cell at the character's position in the layer above is occupied,
        // stop drawing the remaining (roof) layers.
        if (Layers.ElementAt(counter).GetCellIndex(Engine.ConvertPositionToCell(sprite.Position)) != -1)
        {
            return;
        }
    }
}
Engine.ConvertPositionToCell() takes a Vector2 (in this case the position of the character) and converts it to the position in the array he is standing on.
GetCellIndex() returns the number (which identifies the tile) at the given position.
In this case, it checks whether that number is -1 (meaning the cell is empty); if it is not, it stops drawing the layers in the list.

I know that this should be really simple, but for some reason my brain just farts when it has to come up with a solution to this.

iopred
Aug 14, 2005

Heli Attack!

Medieval Medic posted:


code:
      if (Layers.ElementAt(counter).GetCellIndex(Engine.ConvertPositionToCell(sprite.Position)) != -1)
      {
          return;
      } 

There's nothing magical about this line; you just want to call it enough times to cover every position on the sprite.

The way I like to do it is something like:
code:
for (x = topLeftX; x <= bottomRightX; x++) {
  for (y = topLeftY; y <= bottomRightY; y++) {
    if (getCellIndex(x, y) != -1) {
      return;
    }
  }
}
Of course you don't do it for every single pixel of the sprite, just for each tile area the sprite is in, so define topLeftX as floor(sprite.x/tileWidth) and bottomRightX as floor((sprite.x + sprite.width)/tileWidth) or something similar.
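
In the terms of the snippet above, that might look something like this (Engine.TileWidth/TileHeight, sprite.Width/Height, and a per-tile GetCellIndex overload are my own stand-ins, not anything from the actual code):
code:
// Tile range covered by the sprite's bounding box.
int topLeftX     = (int)Math.Floor(sprite.Position.X / Engine.TileWidth);
int topLeftY     = (int)Math.Floor(sprite.Position.Y / Engine.TileHeight);
int bottomRightX = (int)Math.Floor((sprite.Position.X + sprite.Width) / Engine.TileWidth);
int bottomRightY = (int)Math.Floor((sprite.Position.Y + sprite.Height) / Engine.TileHeight);

for (int x = topLeftX; x <= bottomRightX; x++)
{
    for (int y = topLeftY; y <= bottomRightY; y++)
    {
        // Hide the roof layer if any tile the sprite touches is occupied.
        if (Layers.ElementAt(counter).GetCellIndex(new Point(x, y)) != -1)
        {
            return;
        }
    }
}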

iopred fucked around with this message at 18:55 on Oct 15, 2011

Vino
Aug 11, 2010

Gordon Cole posted:

I'm not an expert in graphics by any means, but isn't this just LOD?

No, not really. LOD would be needed to do that, but what's going on here is a way to render things that are a meter in front of you and also a few light-years away at the same time, and everything in between. It's the same idea as how a skybox is rendered, but taken up a notch.

And I know the demos look like crap, I didn't want to go any further without having solved the collision problem.

Shalinor posted:

The floating point solution is somewhat novel, but will be very, very slow when you start simulating physics.

I'm not actually using the scalable floats for simulation. I break each section of terrain into a chunk and each chunk has a regular floating point simulation. The game only needs to test against maybe four chunks at a time max, and usually only one. It's all really complicated and I haven't gotten it entirely working yet because it involves all that streaming stuff that would take ages for me to do. (I want to work on an indie project that takes a couple days to prototype, not a couple months like this beast has been.)

Re super-distant sims, I'm only simulating things that are near the camera. The plan is that the star/planet that the player is near is simulated and other things aren't. The "astronomy" physics model I'm using is really simple: planets spin and maybe move around their star, but they do so on preset paths. Distant stars can be loaded out of memory and just rendered as a tiny dot, no collision necessary.

And as for memory, as it stands it doesn't really take all that much. The plan is that the game deterministically generates random terrain that it can regenerate each time. If the player leaves an object there, then that must be remembered, but the terrain itself would just be regenerated on the fly. So the entire universe could be explored without having to fit the entire universe in memory.
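
Roughly, the idea is to derive a stable seed from the chunk coordinates plus a world seed, so the same chunk always comes back identical. Something like this sketch (the names and the hash are made up for illustration, not the actual implementation):
code:
// Hypothetical illustration: hash chunk coordinates into a deterministic seed.
int ChunkSeed(int worldSeed, int chunkX, int chunkY)
{
    unchecked
    {
        int hash = worldSeed;
        hash = hash * 31 + chunkX;
        hash = hash * 31 + chunkY;
        return hash;
    }
}

float[,] GenerateChunk(int worldSeed, int chunkX, int chunkY, int size)
{
    var rng = new Random(ChunkSeed(worldSeed, chunkX, chunkY));
    var heights = new float[size, size];
    for (int x = 0; x < size; x++)
        for (int y = 0; y < size; y++)
            heights[x, y] = (float)rng.NextDouble(); // stand-in for real noise
    return heights;
}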

And lastly, I want to make a game out of it but I haven't decided on what kind yet, no point if I can't get the last big simulation/streaming piece of the puzzle into place.

This projection space stuff is interesting; could you tell me more about it? Like, remember the name maybe? ;)

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

Vino posted:

This projection space stuff is interesting, could you tell me more about it? Like remember the name maybe? ;)
Eeeeh... just look for information on how to render 3D cockpits always on top despite things in the game world being super close to your camera - that is the primary use of the approach.

The typical partition for a deep-space game goes something like: cockpit gets 10-15% of the zbuffer space, near space gets 70% or so, distant gets 10-15%. Or you might skew the distant to be a bit closer to 30%. You just change out your projection matrix as needed for each section. It really depends on your needs, but it works pretty well for mixing "those giant distant planets" with "that ant on your dashboard."


EDIT: VV I'm talking about the crazy projection matrix weighting method. The whole point is that you don't have to clear the zbuf / things in the cockpit still depth-check against the things in the rest of space. You literally partition the zbuf. Nothing that is non-cockpit can render with a final Z of less than 0.3 (even if in actual space it's a centimeter from the camera), nothing in the cockpit ever renders with above 0.3, etc.
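
In XNA terms, a rough sketch of that partitioning could look like the following - the percentages, projection matrices, and Draw helpers are purely illustrative:
code:
// Each "band" gets its own slice of [0,1] depth via Viewport.MinDepth/MaxDepth,
// plus its own projection matrix. The bands never overlap, so the cockpit always
// depth-tests in front of near space, which in turn beats distant space.
Viewport vp = GraphicsDevice.Viewport;

// Distant space: planets, skybox-scale stuff, squeezed into the back 15%.
vp.MinDepth = 0.85f; vp.MaxDepth = 1.0f;
GraphicsDevice.Viewport = vp;
DrawDistantSpace(distantProjection);

// Near space: ships, asteroids, everything you can actually bump into.
vp.MinDepth = 0.15f; vp.MaxDepth = 0.85f;
GraphicsDevice.Viewport = vp;
DrawNearSpace(nearProjection);

// Cockpit: the front 15%, so it always wins against the outside world.
vp.MinDepth = 0.0f; vp.MaxDepth = 0.15f;
GraphicsDevice.Viewport = vp;
DrawCockpit(cockpitProjection);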

Shalinor fucked around with this message at 20:39 on Oct 15, 2011

Vino
Aug 11, 2010
That doesn't really get any hits, but is what you're talking about changing your projection matrix and clearing your depth buffer for what is basically another pass? Or is it some crazy weighting for the depth buffer to do it all in one pass? Because what I'm doing is the former.

Also, what I'm doing is a tad different, because typically with the "skybox" approach you know which render scope your objects are going to be in; that far-off gas giant never changes into the "near" scope. In Descent Freespace, for example, the game doesn't let you fly outside of a certain bullpen of space. In Codename: Infinite you can fly to that gas giant, and it automatically and gradually transfers the terrain on the surface of a planet to nearer and nearer render levels. Or do I have all this wrong?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shalinor posted:

The floating point solution is somewhat novel, but will be very, very slow when you start simulating physics.
It seems unnecessary as well. The usual solution to the problem is to just represent all positions as relative to a spatial sector of some sort. That's brought up as the way precision issues are solved for rendering, but it solves physics/collision problems just as well.

Eve Online does this for in-space simulation. There are effectively two levels of calculation, "grid"-level and system-level. At any given time, you're on a "grid" with an origin point, and if you leave it, it will either create a new grid or expand the one you were in. It doesn't handle grid merges (leading to some pretty hilarious exploits about a year ago), but that's the only real problem with making it scale effectively infinitely. Consequently, you can interact with things on a scale of meters despite being in a system where things are measured in astronomical units.

OneEightHundred fucked around with this message at 21:49 on Oct 15, 2011

Vino
Aug 11, 2010
I thought about doing it that way, but I decided against it because of the grid-merging problems you're talking about, and because I thought it would be easier to make what I did scale to infinity than a grid system. With a grid system you're essentially going to have a coordinate for each grid (x, y), and then you're limited by the size of the integer that holds those coordinates. With what I did you can always just add another level should you need more, and since it's still partially based on floating points, you can exceed the maximum range of the base-1000 number by a lot before you start getting into precision problems.

I glossed over some of what I did with the scalable floats. They actually look like this:

float - int - int - int - int - int - float

The float on the left represents the "overflow": numbers too large to fit into the leftmost int are placed in that float. Same with the right float; it's the underflow. Since the float on the left is actually a double, I'm able to get 48 digits of precision, which is larger than the size of the Milky Way galaxy, which isn't really "universe" sized, but it's big enough for now.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Vino posted:

I thought about doing it that way, but I decided against it because of the grid-merging problems you're talking about
Grid merging isn't really a "problem" as much as a design consideration, mostly one of how far away an object can be and still be considered part of the relevant set. That or don't isolate them at all and use any of several possible techniques to interact with objects across grids (a problem many MMOs have already solved).

Integer precision isn't really an issue because you can use floating point positions for grid locations. You don't have to worry about floating point loss between grids because if you want to find the reinsert location in another grid, you start by subtracting the grid origins which gives you a much smaller-scale number than the grid origins themselves.
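
A minimal sketch of what that can look like (the names and layout are mine, not anything from Eve): each entity keeps a small float offset from its grid's double-precision origin, and moving it to another grid just re-bases the offset by subtracting the origins first.
code:
struct GridPosition
{
    public double OriginX, OriginY, OriginZ;   // grid origin, world scale
    public float LocalX, LocalY, LocalZ;       // entity offset within the grid

    public GridPosition Rebase(double newOriginX, double newOriginY, double newOriginZ)
    {
        // The origin difference is computed in doubles and is small for nearby
        // grids, so casting it to float loses essentially nothing.
        return new GridPosition
        {
            OriginX = newOriginX, OriginY = newOriginY, OriginZ = newOriginZ,
            LocalX = (float)(OriginX - newOriginX) + LocalX,
            LocalY = (float)(OriginY - newOriginY) + LocalY,
            LocalZ = (float)(OriginZ - newOriginZ) + LocalZ
        };
    }
}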

OneEightHundred fucked around with this message at 22:35 on Oct 15, 2011

Scaevolus
Apr 16, 2007

Logarithmic depth buffers are a good solution for rendering things at extreme scales. His example with a 24-bit Z-buffer has a resolution of 2 micrometers at 1 meter, and a resolution of 10 meters at 10,000 km.

Also, Infinity is an indie space MMO that seems to be aiming at accomplishing a lot of the same things...
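
For reference, those figures fall out of the usual logarithmic mapping z = log(C*w + 1) / log(C*far + 1). A quick back-of-the-envelope check, assuming C = 1 and a 24-bit buffer (the helper is just an illustration):
code:
// Smallest distinguishable depth step, in world units, at a given view distance.
static double LogDepthResolution(double distance, double far, int bits, double c)
{
    return (c * distance + 1.0) * Math.Log(c * far + 1.0) / (c * Math.Pow(2, bits));
}

// LogDepthResolution(1.0, 1e7, 24, 1.0) ~= 1.9e-6 m  (about 2 micrometers at 1 m)
// LogDepthResolution(1e7, 1e7, 24, 1.0) ~= 9.6 m     (about 10 m at 10,000 km)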

Paniolo
Oct 9, 2007

Heads will roll.
Today I learned that putting [unroll] before a tiny for loop in your pixel shader can mean the difference between 200 FPS and 1 FPS.

Never again will I assume the shader compiler is in any way intelligent.

Lemon King
Oct 4, 2009

im nt posting wif a mark on my head

Scaevolus posted:

Logarithmic depth buffers are a good solution for rendering things at extreme scales. His example with a 24-bit Z-buffer has a resolution of 2 micrometers at 1 meter, and a resolution of 10 meters at 10,000 km.

Also, Infinity is an indie space MMO that seems to be aiming at accomplishing a lot of the same things...

EVE Online was at one time using a W-Buffer (nVidia PDF) that seemed to have worked well with extreme distances.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

Lemon King posted:

EVE Online was at one time using a W-Buffer (nVidia PDF) that seemed to have worked well with extreme distances.
W-Buffering's big thing is that it equalizes the distribution of z-accuracy across the entirety of your render volume. In space, that's actually a bad thing, unless you can guarantee all your objects are above or within a certain scale range. In particular, it would be a poor choice for any kind of cockpit view (which Eve doesn't use), but it's great for cases where your object size is relatively homogeneous (which is... kind of true for Eve - though it seems like their regular vs cap ships would be quite a size difference).

EDIT: Well, no, I flubbed that. It makes z-buffer collisions consistent and predictable, which is a good thing for long view distances involving big objects in the distance that make flickering very noticeable. It's just that it's also a poor choice for long view distances, since the remaining z accuracy you've got up close is now very, very limited compared to usual. So you've got to build your world accordingly, with big objects that mostly have no need of zbuffering / depend on zbuffering only for sorting one large object against another. So yeah, that's a good fit for what Eve does. Less good a fit for other space-type games, though.


It really just comes down to what you're trying to do. Log's great for one distribution, W's great for another, and partitioning z space is great for still other reasons (and could, I suppose, be mixed with either of the other approaches too).

... actually, huh. That could be interesting. If you did log + partitions, you'd have multiple bands of z accuracy running through your world. That might be interesting, if it suited your LOD scheme well.

Shalinor fucked around with this message at 17:19 on Oct 16, 2011

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Shalinor posted:

W-Buffering's big thing is that it equalizes the distribution of z-accuracy across the entirety of your render volume. In space, that's actually a bad thing, unless you can guarantee all your objects are above or within a certain scale range. In particular, it would be a poor choice for any kind of cockpit view (which Eve doesn't use), but it's great for cases where your object size is relatively homogeneous (which is... kind of true for Eve - though it seems like their regular vs cap ships would be quite a size difference).

EDIT: Well, no, I flubbed that. It makes z-buffer collisions consistent and predictable, which is a good thing for long view distances involving big objects in the distance that make flickering very noticeable. It's just that it's also a poor choice for long view distances, since the remaining z accuracy you've got up close is now very, very limited compared to usual. So you've got to build your world accordingly, with big objects that mostly have no need of zbuffering / depend on zbuffering only for sorting one large object against another. So yeah, that's a good fit for what Eve does. Less good a fit for other space-type games, though.


It really just comes down to what you're trying to do. Log's great for one distribution, W's great for another, and partitioning z space is great for still other reasons (and could, I suppose, be mixed with either of the other approaches too).

... actually, huh. That could be interesting. If you did log + partitions, you'd have multiple bands of z accuracy running through your world. That might be interesting, if it suited your LOD scheme well.

It actually wouldn't be too much of a problem with a cockpit view, just because you can manually sort the depth planes -- i.e. render outside the cockpit, clear the buffer and change the near/far distances, and draw the cockpit using a clean Z-buffer.
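
In XNA that two-pass version is only a few lines; a rough sketch, where the draw helpers, matrices, and near/far values are all illustrative:
code:
DrawWorld(worldView, worldProjection);

// Wipe only the depth buffer; keep the color we just rendered.
GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

// Tight near/far range sized to the cockpit itself.
Matrix cockpitProjection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.01f,   // near: centimeters from the camera
    2.0f);   // far: the whole cockpit fits within a couple of meters
DrawCockpit(cockpitView, cockpitProjection);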

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Hubis posted:

It actually wouldn't be too much of a problem with a cockpit view, just because you can manually sort the depth planes -- i.e. render outside the cockpit, clear the buffer and change the near/far distances, and draw the cockpit using a clean Z-buffer.
Or just render the cockpit as a transparent sprite afterwards with z-buffer disabled, to save on the wasteful clear (and wasteful z-buffer comparisons for that matter).

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Or you just render the cockpit after a Z-buffer clear because it will never intersect or overlap anything else in the game world.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

OneEightHundred posted:

Or you just render the cockpit after a Z-buffer clear because it will never intersect or overlap anything else in the game world.
... and in the process, destroy the video card's cache.

It can handle shifts in the projection matrix, but when you start clearing or fiddling too much with the display surface, you're eating performance.

roomforthetuna posted:

Or just render the cockpit as a transparent sprite afterwards with z-buffer disabled, to save on the wasteful clear (and wasteful z-buffer comparisons for that matter).
Aaaaaagh, no, don't do that, it looks like crap. You want a 3D cockpit, so that it can be lit by environmental effects, properly respond to camera movement/shakes, etc.

You could do that by rendering off-screen and compositing, but that'd be even worse on the rendering. (EDIT: unless you double/triple buffered and allowed the cockpit to lag a few frames - that'd actually work pretty well, hmmm)

EDIT: Oh, wait, you mean a 3D sprite. Doable, but only if your cockpit is guaranteed rigid, and either doesn't require depth comparisons in its rendering, or you've done some fancy tricks with the VB to guarantee draw order of overlapping elements. Still, yeah, doable, depending on requirements. The other big issue is if there's stuff in your cockpit that isn't strictly cockpit-related (particles driven by the regular game particle system, etc).

Shalinor fucked around with this message at 22:10 on Oct 16, 2011

BizarroAzrael
Apr 6, 2006

"That must weigh heavily on your soul. Let me purge it for you."
Does anyone know how to export from Autodesk Softimage XSI Mod Tool, or whatever I'm supposed to call it/google, into .fbx? I want to get stuff out of it into UDK, but I'm having a hard time finding guides on how to do it. Most of what I find seems outdated.

How much better is Blender for this? I just want to get basic shapes in, really, for now; I'm not a 3D artist by any means.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shalinor posted:

It can handle shifts in the projection matrix, but when you start clearing or fiddling too much with the display surface, you're eating performance.
I would have thought that coarse Z buffer and GDDR3 fast clears would make this pretty irrelevant.
