krysmopompas
Jan 17, 2004
hi
No reason you shouldn't just start with shaders - you'll get a better feel for what the hardware is actually doing behind the scenes. Fixed function is just a confusing mess of arbitrarily configured black boxes that result in *magic* on the other side.

Plus, while the syntax of the shader language may vary, the mechanics and logic of it are more consistent cross-platform/api than fixed function as well.

krysmopompas
Jan 17, 2004
hi

samiamwork posted:

I really hope that made sense and that this is possible and I've just totally misunderstood things.
You're basically talking about what can be achieved via render to texture, first using it as the render target, then reusing the texture to apply your fancy blend mode to a final render target. The blend mode applied to the render target is most unfortunately fixed-function for the foreseeable future.

krysmopompas fucked around with this message at 04:00 on Dec 7, 2007

krysmopompas
Jan 17, 2004
hi
Or OpenTNL maybe?

krysmopompas
Jan 17, 2004
hi

Vinlaen posted:

Unfortunately, those are both for C++ and will not work with Delphi or C#
They will work, both languages have some degree of native interoperability. It may involve writing a thin wrapper, but you're not going to find a hell of a lot in anything that isn't C++.

Vinlaen posted:

I understand that the basic idea is to have a texture that gets modified (eg. a render target, etc) but I'm confused on how large the texture needs to be.
Don't do this.

Use a fixed size, resolution independent grid for gameplay purposes, then render the map by dynamically generating a mesh using triangles that connect contiguous spans of "ground" in that map. Once you get that working, you can start worrying about view dependent refinement of the mesh as well as optimizing it.

krysmopompas
Jan 17, 2004
hi

Vinlaen posted:

However, I'm not sure what you mean by the second part (eg. generating a mesh using triangles, etc).
Think of each pixel as a 2d quad. For each cell in your map, indicated by nX and nY, where w and h are respectively the width and height of each cell, the vertex locations are something like:
code:
{ (nX * w), ((nY + 1) * h) }  { ((nX + 1) * w), ((nY + 1) * h) }
{ (nX * w), (nY * h)       }  { ((nX + 1) * w), (nY * h)       }
But that's a lot of quads to draw, so you need to reduce the count by merging adjacent quads.

A very easy way to do this is to walk each row of cells from left to right. When you find the first cell that is solid ground, set the left 2 vertices of the quad that will represent this span. Continue walking the row until cell nX + 1 is not solid ground, then set the right vertices at nX. Repeat this process until you're done with this row, then go on to the next one. Materials and texturing are just a further elaboration of breaking up the quad spans, except this time relative to the material or texture of the cells.

There are far better ways to do it, but it works.
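The row walk above can be sketched like this (a toy version with invented names; the cell-to-vertex math is as in the earlier code block and is omitted here):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Merge runs of contiguous "solid" cells in one row into (left, right)
// cell spans; each span becomes a single quad instead of one quad per cell.
std::vector<std::pair<int, int>> MergeRowSpans(const std::vector<bool>& row)
{
    std::vector<std::pair<int, int>> spans;
    const int width = static_cast<int>(row.size());
    int nX = 0;
    while (nX < width) {
        if (!row[nX]) { ++nX; continue; }            // skip empty cells
        const int left = nX;                         // left edge of the quad
        while (nX + 1 < width && row[nX + 1]) ++nX;  // extend while solid
        spans.emplace_back(left, nX);                // right edge of the quad
        ++nX;
    }
    return spans;
}
```

A row like solid, solid, empty, solid comes out as two spans, (0, 1) and (3, 3), i.e. two quads instead of three.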

krysmopompas
Jan 17, 2004
hi

Vinlaen posted:

@krysmopompas: Is that the algorithm for transforming resolution coordinates into terrain coordinates? (eg. a 1920x1200 resolution into the 3000x1000 terrain resolution)
Er, no...that's for transforming your terrain into 'world' coordinates, which would then be transformed into 'view' coordinates in a typical 3d pipeline.

krysmopompas
Jan 17, 2004
hi

Mr Dog posted:

This has been doing my head in for most of a day so far. Can anyone please tell me why this minimal Direct3D app is chewing nearly 100% of my CPU? IDirect3DDevice9::Present() is the culprit, but it shouldn't be spinning like that. I've fiddled with practically every line of code and I'm at my wits' end here...
Just off the top of my head, D3DPRESENT_INTERVAL_ONE uses a higher precision timer than blah_DEFAULT which will lead to higher cpu usage, or possibly your driver settings are overriding the setting and forcing it to blah_IMMEDIATE.

krysmopompas
Jan 17, 2004
hi

Mr Dog posted:

Sadly, I've already tried using _DEFAULT instead, and that didn't seem to help. Also, I've asked a few other people to run an EXE that I built out of this code, and they all observed the same thing :\
At the same time, it's not like you're yielding the thread at all; do you really expect it to use less than 100%?

There's nothing in d3d that says the vsync wait has to actually sleep or anything.

krysmopompas
Jan 17, 2004
hi

tyrelhill posted:

A Sleep(1) would help it go down, but that's something you don't want in any game loop.
Sure you would.

In general with vsync on, you want to run the game at even multiples of the refresh rate, say 60/30/15 if we were targeting NTSC, and not ping pong all over the place since it leads to a pretty jerky, inconsistent experience for the player. You may also need to run fixed multiples for your physics to behave nicely as well, especially in a networked situation.

Besides, idling the cpu as much as possible is a way to play nice with laptop users.

krysmopompas
Jan 17, 2004
hi

ehnus posted:

Using Sleep() to enforce specific game refresh rates operates on two incorrect assumptions. First, it assumes that the computer runs faster than the game loop -- if you have assumptions in place that your game updates at 60Hz but the computer can only execute the game loop at 40Hz then you have a problem. Second, it assumes that it will sleep for that exact amount of time which is incorrect because you're at the mercy of the OS's thread scheduler.

Try wrapping various calls to Sleep() with a high resolution timer, you may be surprised with what you see.
That's not at all what I was saying. What I was saying was to clamp framerates at some multiple of the vsync, not hardcode some call to sleep. If you're running at 40hz, then you'd lock yourself down to 30hz, but then you wouldn't go back up to 60hz until you've crossed some threshold count of frames that exceeded 60hz.

Replace Sleep with whatever doesn't suck on your platform, or only call sleep for 'safe' thresholds and spin the rest off with high-res timers; the idea is the same.
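The clamping idea above could look something like this toy sketch (all names invented; real code would measure frame time with a high-res timer and tie the tiers to the actual refresh rate):

```cpp
#include <cassert>

// Lock the frame rate to an even divisor of a 60hz refresh (60/30/15).
// Drop a tier the moment we miss the current target; only climb back up
// after a threshold count of consecutive frames fast enough for the
// higher tier, so we don't ping pong all over the place.
class VsyncClamp {
public:
    explicit VsyncClamp(int thresholdFrames) : threshold_(thresholdFrames) {}

    // frameMs: how long the last frame took to simulate and render.
    // Returns the target interval in ms (16 ~ 60hz, 33 ~ 30hz, 66 ~ 15hz).
    int Update(double frameMs)
    {
        if (frameMs > targetMs_) {
            if (targetMs_ < 66.0) targetMs_ *= 2.0;  // missed it: drop a tier
            fastFrames_ = 0;
        } else if (frameMs <= targetMs_ / 2.0) {
            if (++fastFrames_ >= threshold_ && targetMs_ > 16.6) {
                targetMs_ /= 2.0;                    // earned a faster tier
                fastFrames_ = 0;
            }
        } else {
            fastFrames_ = 0;                         // fast, but not fast enough
        }
        return static_cast<int>(targetMs_);
    }

private:
    double targetMs_ = 16.6;  // start optimistic, at 60hz
    int fastFrames_ = 0;
    const int threshold_;
};
```

One slow frame knocks you from 60hz to 30hz immediately, but it takes a run of fast frames to get back up, which is the hysteresis that keeps the experience consistent for the player.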

krysmopompas fucked around with this message at 06:51 on Apr 14, 2008

krysmopompas
Jan 17, 2004
hi

MasterSlowPoke posted:

What I'm doing now is to split any vertex with multiple sets of UV coordinates into that many separate vertices. The problem with this method is that it makes it impossible to accurately determine the normals as the model animates. I have to position the model, calculate the normals, then resplit the vertices every frame. That seems a tad bulky.
Just split the verts once at load time, generate a lookup table that maps split vert->original vert. Animate & recalc normals on the original data set then populate your pre-split vb with the new data using the lookup table.

krysmopompas
Jan 17, 2004
hi

dsage posted:

How would you get around having opengl textures that are not 64, 128, or 256 pixels in width/height?
ARB_texture_non_power_of_two

Otherwise, round up and waste the extra space (or find a creative use for it.)
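The round-up is the standard power-of-two bit trick; a minimal sketch (you'd then scale your UVs by actual/padded size so the wasted border never shows):

```cpp
#include <cassert>

// Round v up to the next power of two; exact powers of two map to themselves.
unsigned NextPow2(unsigned v)
{
    if (v == 0) return 1;
    --v;                              // so exact powers don't get doubled
    v |= v >> 1; v |= v >> 2; v |= v >> 4;
    v |= v >> 8; v |= v >> 16;        // smear the high bit down
    return v + 1;
}
```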

krysmopompas
Jan 17, 2004
hi

more falafel please posted:

You don't write AI in savagely optimized C.
Christ, I thought all this Kynapse stuff I've been staring at had something to do with AI...no wonder these bugs never get fixed.

The needs of Epic in Gears are vastly different from the needs of the general industry. Go ask someone from Midway about native vs. unrealscript performance.

krysmopompas fucked around with this message at 01:00 on Jul 22, 2008

krysmopompas
Jan 17, 2004
hi

Scaevolus posted:

more falafel please works at Midway Chicago.
That was my point, he should already know this.

krysmopompas
Jan 17, 2004
hi

ValhallaSmith posted:

Keeping with the scripting flame war.
I'm not trying to contribute to a scripting flame war, I'm only taking issue with the assertion that "You don't write AI in savagely optimized C" since it's quite apparent that AI does in fact ship in many games in some savagely optimized low level language all the time.

krysmopompas fucked around with this message at 03:44 on Jul 22, 2008

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

... which is why I said that synchronizing physics is no harder?? Once you have player synchronization working, physics synchronization pretty much just falls out (though you may want to put in some optimizations for physics objects).
Yeah, but when you have to consider that someone will be hosting a game over dsl or cable, a lot of physics synchronization just isn't going to happen.

Fixed timesteps are more about being able to do things without synchronization than with it. If there is anything any game physics code is really, really, bad at (this includes PhysX, Havok and anything custom I've ever seen) it is achieving the exact same result in two slightly complex but identical situations when the timestep is different.

It doesn't matter too much when you're talking about ragdolls, which never match up on any game, or any other simple, non-gameplay relevant effects. But it's really limiting in terms of gameplay to have to limit yourself to extremely simple scenarios involving physics because of the glitchyness of correcting on the clients, or simply due to bandwidth concerns.

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

How often is that actually an issue, though? I can't think of any games except Garry's Mod where there are a large number of gameplay-affecting physics objects at once. Usually it's either a handful of crates and/or barrels, which are stationary except for short bursts of activity (and even the most generic FPS only has so many crates), or it's what amounts to particle effects that can bounce off walls but don't really affect anything anyway.
It's more that you get crates and barrels tossed around because it's pretty much all we can realistically do with the given tools. Weird multiplayer physics ideas come up all the time in brainstorming sessions but tend to get shot down pretty quickly for that reason.

krysmopompas
Jan 17, 2004
hi
Data is code. lisp lol

It's really quite irrelevant if you hardcode it or if you have some crazily over-engineered data system - iteration times and the suitability of the editing method to your audience are what matter. If you've got 15 programmers and 5 artists, then you're better off hand-coding shaders; if you've got 5 programmers and 15 artists, then you're better off making some kind of fancy schmancy DAG editor thing.

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

If you actually had to rewrite a game because you changed graphics engines, it would speak way more to your talent (or lack thereof) as a programmer than to your choice in graphics engine.
Right, because you can simply compile on XNA or Irrlicht at the flick of a switch if your programmer dick is big enough.

The things he listed are a lot more than just "graphics" engines. They impose a lot of constraints from the asset pipeline all the way to how entities are defined, processed and how they communicate; and there is little, if any, common ground between any of them.

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

I'm more speaking of "what kind of programmer would 99% finish a game and then say hey wait I just spent six months writing code for an engine that I knew was crap and unsuitable to my needs". If you're using a hypothetical awful engine and you get more than 10% of the way through a project before realizing that the engine is crap, then you've got problems.
Is there a Godwin-like law for invoking 3drealms yet?

I'm not sure how someone new to development is going to be able to tell the difference between an engine being unsuitable, and not knowing what needs to be done or how best to achieve it. Hell, you have tech directors of major companies unable to tell the difference as well.

Anyhow, I think the key thing from his post isn't "pretty graphics" but the combination of the rts and fps view modes. Moving a camera around is an extremely simple task, but most engines do their damnedest to assume that only one way of moving it will work with 99% of the code already written.

p.s. I am glad for your penis.

krysmopompas
Jan 17, 2004
hi
The GOF book is bad for making the 'schlubs' think that any given pattern needs to be implemented in some predetermined manner, or that they are bound to some kind of language feature or lack thereof. Every language has patterns, and every programmer/workplace/project has patterns. Most people just don't spend the time to formalize and document them.

Ultimately, design patterns are just a naming convention for common scenarios, but the word has simply become too tainted by too much head up the assery.

krysmopompas
Jan 17, 2004
hi
Or just use Assimp and use whatever format(s) you happen to find the best exporter for.

krysmopompas
Jan 17, 2004
hi

Rupert Buttermilk posted:

I'm a sound/music guy working on a mac. I really, really want to get into working with game developers in programming sound. What's my best approach? Should I learn programming, and if so, what language? Is there any way to test my work, say if I were working in something like FMOD? I have a bit of experience and understanding of Xact. I get how it works, and what sort of things it can do, albeit not completely.
As a sound designer, the most useful programming you can learn is that which can automate tasks for you - batch renaming files, batch converting, import and merge VO drops, etc. I've seen people use everything from Perl to batch files to Python to do this; so it's not really important what language you use so long as you use it well and it can accomplish the tasks you need.

Nobody uses XACT; it's worthless. We used it once for debugging a reverb issue, but that's all.

Whether you're using wwise, fmod or some in-house solution for the audio backend isn't that important since the brunt of your time is going to be spent in the engine's editor. If you're using UE3, you'll be using their sound cue editor, kismet and level editor; if you're in id tech, you'll be using the level editor and notepad; some people actually use the ex component of fmod ex - it's going to be completely different depending on the game, target platforms and engine being used. So, it's only important to be technical enough and able to learn quickly since you're not going to really know what the tools and process are until your first day on the job.

Just be good with pro tools/soundforge/etc, understand enough of the math behind audio, learn how to use the engine/editor and understand enough scripting to be self sufficient and not bother programmers when you want a bunch of files renamed or something; that's all we really want out of a good sound designer.

krysmopompas
Jan 17, 2004
hi

Addict posted:

So is everyone using XAudio2 for sound now, or are people still sticking with DirectSound? I have been working in DX9 for graphics to avoid vista lockin.
If you were starting from scratch, or wanting to do a serious revamp of what you've got, you'd be doing yourself a disservice by not going with it.

It's just built on top of dsound on XP, so it avoids any compatibility issues. These days, they've got XWMA support solid, fixed various bugs relating to pitch shifting and routing voices of different rates to one another and added the ability to change a source voice's format on the fly*; so it's a really, really solid api.

*Lean on the ability to change the source voice format, use a pool of voices and don't rely on CreateSourceVoice/DestroySourceVoice. Creating and destroying a source voice stalls until the XAudio2 thread is idle, and will result in you having a very bad day.

krysmopompas
Jan 17, 2004
hi

Avenging Dentist posted:

This is probably a good idea and would resolve the issue I had where exiting the app before a sound effect stopped playing would cause a crash (because I destroyed the XAudio interface before I destroyed the voice) :barf:. Not that I ever work on my game anymore.
You should always clean up all source voices and submix voices before killing the mastering voice and shutting down the device - in the debug libs it asserts if you kill a mastering voice without disconnecting all child submix or source voices attached to it.

Pooling isn't going to be a magic bullet. A new scenario you'll need to consider is a crash when the data you've submitted via SubmitSourceBuffer is still in use, but you've deleted it.

DestroyVoice has been safe since it stalls until the audio thread is idle, making it ok to delete the source buffer data immediately afterwards. When you get pooling in you're not going to have that certainty and you'll have to hook into the callbacks to make absolutely sure that the buffer isn't being used any longer before you can nuke it.

krysmopompas
Jan 17, 2004
hi
There is so much frame-to-frame consistency in that sprite sheet that you can exploit it in a dumb, easy fashion.

Divide each sprite into a grid - say, 4x8 or something. For each frame, compare each grid cell to the same grid cell in the previous frame. If they differ, write the cell out to a texture atlas, and record which cells, and their UVs, correspond to each frame. At runtime, you simply reconstruct each frame by drawing its cells using the UVs from the atlas.

If you want to get fancy about it, hash each cell, use the hash to compare against the atlas contents, and you will be able to reuse cells across discontiguous frames.
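The hash-based dedup could be sketched like this (a toy with invented names; it hashes raw pixel bytes with FNV-1a and ignores hash collisions for brevity, which real code would verify with a byte compare):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

using Cell = std::vector<uint8_t>;  // raw pixels of one grid cell

struct Atlas {
    std::vector<Cell> cells;                         // unique cells, stored once
    std::unordered_map<uint64_t, int> hashToIndex;   // pixel hash -> atlas index

    // Returns the atlas index for this cell, reusing an existing entry
    // when an identical cell (from any frame) was already added.
    int Add(const Cell& c)
    {
        uint64_t h = 1469598103934665603ull;         // FNV-1a offset basis
        for (uint8_t b : c) { h ^= b; h *= 1099511628211ull; }
        auto it = hashToIndex.find(h);
        if (it != hashToIndex.end()) return it->second;
        cells.push_back(c);
        const int index = static_cast<int>(cells.size()) - 1;
        hashToIndex[h] = index;
        return index;
    }
};
```

Each frame then just stores a list of (grid position, atlas index) pairs, and identical cells from any two frames, adjacent or not, collapse to one atlas entry.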
