|
No reason you shouldn't just start with shaders - you'll get a better feel for what the hardware is actually doing behind the scenes. Fixed function is just a confusing mess of arbitrarily configured black boxes that result in *magic* on the other side. Plus, while the syntax of the shader language may vary, the mechanics and logic of it are more consistent cross-platform/api than fixed function as well.
|
# ¿ Dec 5, 2007 04:25 |
|
samiamwork posted:I really hope that made sense and that this is possible and I've just totally misunderstood things. krysmopompas fucked around with this message at 04:00 on Dec 7, 2007 |
# ¿ Dec 7, 2007 03:43 |
|
Or OpenTNL maybe?
|
# ¿ Mar 1, 2008 03:36 |
|
Vinlaen posted:Unfortunately, those are both for C++ and will not work with Delphi or C# Vinlaen posted:I understand that the basic idea is to have a texture that gets modified (eg. a render target, etc) but I'm confused on how large the texture needs to be. Use a fixed-size, resolution-independent grid for gameplay purposes, then render the map by dynamically generating a mesh of triangles that connect contiguous spans of "ground" in that grid. Once you get that working, you can start worrying about view-dependent refinement of the mesh as well as optimizing it. |
|
# ¿ Mar 1, 2008 18:44 |
|
Vinlaen posted:However, I'm not sure what you mean by the second part (eg. generating a mesh using triangles, etc).
A very easy way to do this is to walk each row of cells from left to right. When you find the first cell that is solid ground, set the left two vertices of the quad that will represent this span. Continue walking the row until cell nX + 1 is not solid ground, then set the right vertices at nX. Repeat this process until you're done with the row, then go on to the next one. Materials and texturing are just a further elaboration of breaking up the quad spans, except this time relative to the material or texture of the cells. There are far better ways to do it, but this works. |
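To make the row walk concrete, here's a minimal Python sketch (not from the original post; all names are made up) that collapses each row's runs of solid cells into quad spans:

```python
def rows_to_quads(grid):
    """Collapse each row's runs of solid cells into quad spans.

    grid: list of rows; truthy cells are solid ground.
    Returns (x_left, x_right, y) tuples, one per contiguous run.
    """
    quads = []
    for y, row in enumerate(grid):
        x = 0
        while x < len(row):
            if row[x]:
                x_left = x
                # extend the span while the next cell is also solid
                while x + 1 < len(row) and row[x + 1]:
                    x += 1
                quads.append((x_left, x + 1, y))  # right edge is one past nX
            x += 1
    return quads
```

Each tuple becomes one textured quad (two triangles); splitting spans further by material is the same loop with an extra equality check on the cell's material.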
|
# ¿ Mar 1, 2008 20:50 |
|
Vinlaen posted:@krysmopompas: Is that the algorithm for transforming resolution coordinates into terrain coordinates? (eg. a 1920x1200 resolution into the 3000x1000 terrain resolution)
|
# ¿ Mar 2, 2008 11:51 |
|
Mr Dog posted:This has been doing my head in for most of a day so far. Can anyone please tell me why this minimal Direct3D app is chewing nearly 100% of my CPU? I've fiddled with practically every line of code and I'm at my wits' end here... IDirect3DDevice9::Present() is the culprit, but it shouldn't be spinning like that. |
|
# ¿ Apr 14, 2008 00:06 |
|
Mr Dog posted:Sadly, I've already tried using _DEFAULT instead, and that didn't seem to help. Also, I've asked a few other people to run an EXE that I built out of this code, and they all observed the same thing :\ There's nothing in d3d that says the vsync wait has to actually sleep or anything.
|
# ¿ Apr 14, 2008 00:21 |
|
tyrelhill posted:A Sleep(1) would help it go down, but that's something you don't want in any game loop. In general, with vsync on you want to run the game at an even divisor of the refresh rate - say 60/30/15 if we were targeting NTSC - and not ping-pong all over the place, since that leads to a pretty jerky, inconsistent experience for the player. You may also need to run at fixed rates for your physics to behave nicely, especially in a networked situation. Besides, idling the CPU as much as possible is a way to play nice with laptop users. |
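The "even divisor of the refresh" idea can be sketched like this (hypothetical helper, not from the post): pick the highest rate whose frame budget your measured frame time fits into, instead of letting the rate float frame to frame.

```python
def pick_rate(frame_ms, refresh_hz=60, max_div=4):
    """Pick the highest refresh-rate divisor the frame time can sustain.

    frame_ms: measured worst-case frame time in milliseconds.
    Returns a stable target rate (60, 30, 20, 15 on a 60 Hz display)
    rather than a rate that ping-pongs between vsync intervals.
    """
    for div in range(1, max_div + 1):
        rate = refresh_hz / div
        if frame_ms <= 1000.0 / rate:  # fits in this rate's frame budget
            return rate
    return refresh_hz / max_div        # floor: run at the slowest tier
```

A 20 ms frame can't hold 60 Hz but comfortably holds 30 Hz, so it locks to 30 instead of oscillating.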
|
# ¿ Apr 14, 2008 04:08 |
|
ehnus posted:Using Sleep() to enforce specific game refresh rates operates on two incorrect assumptions. First, it assumes that the computer runs faster than the game loop -- if you assume your game updates at 60Hz but the computer can only execute the game loop at 40Hz then you have a problem. Second, it assumes that it will sleep for that exact amount of time, which is incorrect because you're at the mercy of the OS's thread scheduler. Replace Sleep with whatever doesn't suck on your platform, or only call sleep for 'safe' thresholds and spin off the rest with high-res timers; the idea is the same. krysmopompas fucked around with this message at 06:51 on Apr 14, 2008 |
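The "sleep for the safe part, spin the rest" pattern looks roughly like this (a Python stand-in for illustration; a real game would use the platform's high-resolution timer directly):

```python
import time

def wait_until(deadline, safe_margin=0.002):
    """Wait until time.perf_counter() reaches deadline (seconds).

    Sleeps while comfortably ahead of the deadline, ceding the CPU to
    the OS scheduler, then busy-spins the final safe_margin seconds so
    scheduler jitter can't make us overshoot.
    """
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > safe_margin:
            time.sleep(remaining - safe_margin)  # coarse, scheduler-dependent
        # else: spin on the high-resolution timer until the deadline
```

The margin is the tunable: too small and the scheduler can still blow the deadline; too large and you burn CPU spinning.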
# ¿ Apr 14, 2008 06:26 |
|
MasterSlowPoke posted:What I'm doing now is to split any vertex with multiple sets of UV coordinates into that many separate vertices. The problem with this method is that it makes it impossible to accurately determine the normals as the model animates. I have to position the model, calculate the normals, then resplit the vertices every frame. That seems a tad bulky. |
|
# ¿ May 20, 2008 22:25 |
|
dsage posted:How would you get around having opengl textures that are not 64, 128, or 256 pixels in width/height? If the hardware supports non-power-of-two textures, just use them; otherwise, round up and waste the extra space (or find a creative use for it). |
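Rounding up to the next power of two is a one-liner; the only extra bookkeeping is scaling your UVs so sampling covers the sub-image, not the padding. A small sketch (helper names are made up):

```python
def next_pow2(n):
    """Smallest power of two >= n (for n >= 1)."""
    p = 1
    while p < n:
        p <<= 1
    return p

def padded_size_and_uv(w, h):
    """Pad a w x h image to power-of-two dimensions.

    Returns the padded texture size and the UV extents that cover
    just the original pixels within it.
    """
    tw, th = next_pow2(w), next_pow2(h)
    return (tw, th), (w / tw, h / th)
```

So a 100x60 image lands in a 128x64 texture and is drawn with UVs from (0, 0) to (100/128, 60/64).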
|
# ¿ Jun 10, 2008 20:50 |
|
more falafel please posted:You don't write AI in savagely optimized C. The needs of Epic in Gears are vastly different from the needs of the general industry. Go ask someone from Midway about native vs. unrealscript performance. krysmopompas fucked around with this message at 01:00 on Jul 22, 2008 |
# ¿ Jul 22, 2008 00:58 |
|
Scaevolus posted:more falafel please works at Midway Chicago.
|
# ¿ Jul 22, 2008 02:32 |
|
ValhallaSmith posted:Keeping with the scripting flame war. krysmopompas fucked around with this message at 03:44 on Jul 22, 2008 |
# ¿ Jul 22, 2008 03:42 |
|
Avenging Dentist posted:... which is why I said that synchronizing physics is no harder?? Once you have player synchronization working, physics synchronization pretty much just falls out (though you may want to put in some optimizations for physics objects). Fixed timesteps are more about being able to do things without synchronization than with it. If there is anything game physics code is really, really bad at (this includes PhysX, Havok, and anything custom I've ever seen), it is achieving the exact same result in two identical, slightly complex situations when the timestep is different. It doesn't matter too much when you're talking about ragdolls, which never match up in any game, or other simple, non-gameplay-relevant effects. But it's really limiting, gameplay-wise, to have to restrict yourself to extremely simple physics scenarios because of the glitchiness of correcting on the clients, or simply due to bandwidth concerns. |
|
# ¿ Dec 7, 2008 08:48 |
|
Avenging Dentist posted:How often is that actually an issue, though? I can't think of any games except Garry's Mod where there are a large number of gameplay-affecting physics objects at once. Usually it's either a handful of crates and/or barrels, which are stationary except for short bursts of activity (and even the most generic FPS only has so many crates), or it's what amounts to particle effects that can bounce off walls but don't really affect anything anyway.
|
# ¿ Dec 7, 2008 09:28 |
|
Data is code (lisp lol). It's really quite irrelevant whether you hardcode it or have some crazily over-engineered data system - iteration times and the suitability of the editing method to your audience are what matter. If you've got 15 programmers and 5 artists, then you're better off hand-coding shaders; if you've got 5 programmers and 15 artists, then you're better off making some kind of fancy-schmancy DAG editor thing. |
|
# ¿ Dec 27, 2008 21:14 |
|
Avenging Dentist posted:If you actually had to rewrite a game because you changed graphics engines, it would speak way more to your talent (or lack thereof) as a programmer than to your choice in graphics engine. The things he listed are a lot more than just "graphics" engines. They impose a lot of constraints from the asset pipeline all the way to how entities are defined, processed and how they communicate; and there is little, if any, common ground between any of them.
|
# ¿ Feb 3, 2009 23:43 |
|
Avenging Dentist posted:I'm more speaking of "what kind of programmer would 99% finish a game and then say hey wait I just spent six months writing code for an engine that I knew was crap and unsuitable to my needs". If you're using a hypothetical awful engine and you get more than 10% of the way through a project before realizing that the engine is crap, then you've got problems. I'm not sure how someone new to development is going to be able to tell the difference between an engine being unsuitable and not knowing what needs to be done or how best to achieve it. Hell, you have tech directors of major companies unable to tell the difference as well. Anyhow, I think the key thing in his post isn't "pretty graphics" but the combination of the RTS and FPS view modes. Moving a camera around is an extremely simple task, but most engines do their damnedest to ensure that only one way of moving it will work with 99% of the code already written. p.s. I am glad for your penis. |
|
# ¿ Feb 4, 2009 00:58 |
|
The GoF book is bad for making the 'schlubs' think that any given pattern needs to be implemented in some predetermined manner, or that patterns are bound to some kind of language feature or the lack thereof. Every language has patterns, and every programmer/workplace/project has patterns; most people just don't spend the time to formalize and document them. Ultimately, design patterns are just a naming convention for common scenarios, but the word has simply become too tainted by too much head-up-the-assery. |
|
# ¿ Mar 7, 2009 21:11 |
|
Or just use Assimp and pick whatever format(s) you happen to find the best exporter for. |
|
# ¿ Apr 14, 2009 23:03 |
|
Rupert Buttermilk posted:I'm a sound/music guy working on a mac. I really, really want to get into working with game developers in programming sound. What's my best approach? Should I learn programming, and if so, what language? Is there any way to test my work, say if I were working in something like FMOD? I have a bit of experience and understanding of XACT. I get how it works, and what sort of things it can do, albeit not completely. Nobody uses XACT; it's worthless. We used it once for debugging a reverb issue, but that's all. Whether you're using Wwise, FMOD, or some in-house solution for the audio backend isn't that important, since the brunt of your time is going to be spent in the engine's editor. If you're using UE3, you'll be using their sound cue editor, Kismet, and the level editor; if you're in id Tech, you'll be using the level editor and Notepad; some people actually use the Ex component of FMOD Ex. It's going to be completely different depending on the game, target platforms, and engine being used. So it's only important to be technical enough and able to learn quickly, since you're not really going to know what the tools and process are until your first day on the job. Just be good with Pro Tools/Sound Forge/etc., understand enough of the math behind audio, learn how to use the engine/editor, and understand enough scripting to be self-sufficient and not bother programmers when you want a bunch of files renamed or something; that's all we really want out of a good sound designer. |
|
# ¿ Jun 12, 2009 17:18 |
|
Addict posted:So is everyone using XAudio2 for sound now, or are people still sticking with DirectSound? I have been working in DX9 for graphics to avoid Vista lock-in. XAudio2 is just built on top of dsound on XP, so it avoids any compatibility issues. These days they've got XWMA support solid, fixed various bugs relating to pitch shifting and routing voices of different rates to one another, and added the ability to change a source voice's format on the fly*; so it's a really, really solid API. *Lean on the ability to change the source voice format, use a pool of voices, and don't rely on CreateSourceVoice/DestroySourceVoice. Creating and destroying a source voice stalls until the XAudio2 thread is idle, and will result in you having a very bad day. |
|
# ¿ Jul 23, 2009 06:28 |
|
Avenging Dentist posted:This is probably a good idea and would resolve the issue I had where exiting the app before a sound effect stopped playing would cause a crash (because I destroyed the XAudio interface before I destroyed the voice). Not that I ever work on my game anymore. Pooling isn't going to be a magic bullet. A new scenario you'll need to consider is a crash when the data you've submitted via SubmitSourceBuffer is still in use but you've already deleted it. DestroyVoice has been safe because it stalls until the audio thread is idle, making it OK to delete the source buffer data immediately afterwards. Once you pool voices you won't have that certainty, and you'll have to hook into the callbacks to make absolutely sure the buffer is no longer in use before you can nuke it. |
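The discipline described here - mark a buffer busy on submit, and only free it once the completion callback fires - can be sketched generically (plain Python stand-ins, not real API calls; the real thing would be the buffer-end callback on a pooled XAudio2 source voice):

```python
class PooledBuffer:
    """Audio data plus an in-use flag owned by the completion callback."""
    def __init__(self, data):
        self.data = data
        self.in_use = False

class VoicePool:
    """Sketch of the submit/complete handshake for pooled voices."""
    def submit(self, buf):
        buf.in_use = True      # set before the audio thread can touch it

    def on_buffer_end(self, buf):
        # the engine's buffer-completion callback fires here;
        # only now is it safe to free or recycle buf.data
        buf.in_use = False

    def can_free(self, buf):
        return not buf.in_use
```

The point is the ordering: the flag flips inside the callback, so "can I free this?" is never answered by guessing about what the audio thread is doing.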
|
# ¿ Jul 24, 2009 20:35 |
|
|
There is so much frame-to-frame consistency in that sprite sheet that you can exploit it in a dumb, easy fashion. Divide each sprite into a grid - say, 4x8 or something. For each frame, compare each grid cell to the same cell of the previous frame. If they differ, write the cell out to a texture atlas, and record which cells, and their UVs, correspond to each frame. At runtime, you simply reconstruct each frame by drawing its cells using the UVs from the atlas. If you want to get fancy about it, hash each cell and compare the hash against the atlas contents, and you'll be able to reuse cells across discontiguous frames. |
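A compact Python sketch of the cell-dedup pass (made-up names; cells are compared by value via a dict, which is effectively the hashing trick from the post):

```python
def build_atlas(frames, cell_w, cell_h):
    """Dedupe animation frames into shared grid cells.

    frames: equally sized 2D pixel grids (lists of rows).
    Returns (atlas, layouts): atlas is the list of unique cells; each
    layout maps a cell's (x, y) origin to its index in the atlas, so a
    frame is reconstructed by drawing its cells out of the atlas.
    """
    atlas, seen, layouts = [], {}, []
    for frame in frames:
        layout = {}
        for cy in range(0, len(frame), cell_h):
            for cx in range(0, len(frame[0]), cell_w):
                cell = tuple(tuple(row[cx:cx + cell_w])
                             for row in frame[cy:cy + cell_h])
                idx = seen.get(cell)        # dict lookup = the hash compare
                if idx is None:
                    idx = seen[cell] = len(atlas)
                    atlas.append(cell)      # new cell: add it to the atlas
                layout[(cx, cy)] = idx
        layouts.append(layout)
    return atlas, layouts
```

Because the lookup is global rather than previous-frame-only, this already gets the "fancy" version: identical cells are reused across discontiguous frames for free.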
|
# ¿ Oct 9, 2009 00:30 |