haveblue
Aug 15, 2005



Toilet Rascal

Vinlaen posted:

It seems like no game engines include automatic network synchronization... :(

It's very difficult to implement this in a completely general-purpose "free" manner. Too much of it is tied to the expected behavior of the value you're trying to replicate, and there's no room to add metadata to the network traffic. Unreal can do it because they control the virtual machine your game logic runs upon, and because they've been working on it for over ten years.

Vinlaen
Feb 19, 2008

Are you guys talking about Epic's Unreal Engine? (eg. the commercial engine)

I somewhat understand the reason that a general network synchronization library can't be developed, but I would think that a simple position, velocity/acceleration system could be worked out?

Vinlaen fucked around with this message at 00:30 on Mar 1, 2008

The Monarch
Jul 8, 2006

Can anyone point me to a good, recent directX tutorial/website that won't make me pay for anything? I have a copy of visual c++ and the directx sdk, but a lot of the tutorials I'm finding are pretty out of date.

Also, does anyone know of a good website based around the quake 3 source code? I'm looking through it but most of it's flying over my head, and some good explanations about what's doing what would be nice.

Subotai
Jan 24, 2004

Vinlaen posted:

Are there any game engines (or frameworks/libraries) that include automatic network synchronization of game objects?

For example, if I create a Worms or Scorched Earth game I would like to shoot a projectile (using simple physics) and have it follow the same trajectory on every game client. I'd also like to include environment physics objects like black holes, etc, etc.

It seems like no game engines include automatic network synchronization... :(

You could try raknet.

krysmopompas
Jan 17, 2004
hi
Or OpenTNL maybe?

more falafel please
Feb 26, 2005

forums poster

Vinlaen posted:

Are you guys talking about Epic's Unreal Engine? (eg. the commercial engine)

I somewhat understand the reason that a general network synchronization library can't be developed, but I would think that a simple position, velocity/acceleration system could be worked out?

Yeah, it would be prohibitively expensive and not what you're looking for at all, except the network thing.

But Tim has written a very nice (if a bit self-horn-tooting) description of the network architecture of Unreal, including actor replication, prediction, etc: http://unreal.epicgames.com/Network.htm

This is based on UE2, from what I can tell, but the basics remain the same.

more falafel please fucked around with this message at 04:26 on Mar 1, 2008

SnakeByte
Mar 20, 2004
FUCK THIS COMPANY THAT HASNT PRODUCED THE GAME IN QUESTION FOR YEARS BECAUSE THEY SUSPENDED ME FOR EXPLOITING A BUG FUCK THEM IN THE ASS I AM A MORON

The Monarch posted:

Can anyone point me to a good, recent directX tutorial/website that won't make me pay for anything? I have a copy of visual c++ and the directx sdk, but a lot of the tutorials I'm finding are pretty out of date.

SnakeByte posted:

https://www.directxtutorial.com/Tutorial9/tutorials.aspx

Someone recommended this website to me. These tutorials are really well thought out, and good for those wanting to transfer their skills to DirectX. I dunno about the "professional" tutorials though.

Did you miss this? The free tutorials on this site are ace. They're recent, easy to understand, and they all compile.

Vinlaen
Feb 19, 2008

Subotai posted:

You could try raknet.

krysmopompas posted:

Or OpenTNL maybe?
Unfortunately, those are both for C++ and will not work with Delphi or C# (sorry, I should have been more specific in my request). I really don't like C++ and will probably be programming in Delphi (it's the language we use at work) or C#.

Let's forget about that for a second and let me ask a different question...

How would you do simple destructible terrain like Worms or Scorched Earth?

I understand that the basic idea is to have a texture that gets modified (eg. a render target, etc) but I'm confused on how large the texture needs to be.

For example, let's say I'm using a tiny repeatable texture to create the terrain image. Even though the repeatable texture is 128x128, I need a MUCH larger texture to achieve 1:1 resolution on my 1920x1200 native resolution monitor.

This causes the texture size to be huge (especially if the terrain will span 3 screens wide and 2 screens high) and causes the game to be unplayable on older video cards.

Can anybody help me out with this, or do I just need to require newer video cards with large texture memory?

more falafel please
Feb 26, 2005

forums poster

Vinlaen posted:

Unfortunately, those are both for C++ and will not work with Delphi or C# (sorry, I should have been more specific in my request). I really don't like C++ and will probably be programming in Delphi (it's the language we use at work) or C#.

Let's forget about that for a second and let me ask a different question...

How would you do simple destructible terrain like Worms or Scorched Earth?

I understand that the basic idea is to have a texture that gets modified (eg. a render target, etc) but I'm confused on how large the texture needs to be.

For example, let's say I'm using a tiny repeatable texture to create the terrain image. Even though the repeatable texture is 128x128, I need a MUCH larger texture to achieve 1:1 resolution on my 1920x1200 native resolution monitor.

This causes the texture size to be huge (especially if the terrain will span 3 screens wide and 2 screens high) and causes the game to be unplayable on older video cards.

Can anybody help me out with this, or do I just need to require newer video cards with large texture memory?

Can't you tile the texture? The card won't store more than one copy of it.

IcePotato
Dec 29, 2003

There was nothing to fear
Nothing to fear
Latest frustration: Implementing the A* algorithm (great page!), and I just found out .net doesn't have a BinaryTree implementation in the library, balanced or unbalanced. Or a heap implementation. Or basically anything that would be really sweet to have with an A* algorithm. Argh. Any advice?

e: Oh, never mind because I just found http://www.codeguru.com/csharp/csharp/cs_misc/designtechniques/article.php/c12527/
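
For reference, here's a minimal binary min-heap sketch in C#, enough to back an A* open list keyed by f-score. All the names are illustrative; it's not from any particular library.
code:
using System;
using System.Collections.Generic;

// Minimal binary min-heap keyed by a float priority (e.g. an A* f-score).
// Not thread-safe, no decrease-key; just enough for an open list.
public class MinHeap<T>
{
    private readonly List<KeyValuePair<float, T>> items = new List<KeyValuePair<float, T>>();

    public int Count { get { return items.Count; } }

    public void Push(float priority, T value)
    {
        items.Add(new KeyValuePair<float, T>(priority, value));
        int i = items.Count - 1;
        while (i > 0)                                 // bubble the new item up
        {
            int parent = (i - 1) / 2;
            if (items[parent].Key <= items[i].Key) break;
            Swap(parent, i);
            i = parent;
        }
    }

    public T Pop()
    {
        if (items.Count == 0) throw new InvalidOperationException("heap is empty");
        T top = items[0].Value;
        items[0] = items[items.Count - 1];            // move the last item to the root
        items.RemoveAt(items.Count - 1);
        int i = 0;
        while (true)                                  // sift it down
        {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < items.Count && items[left].Key < items[smallest].Key) smallest = left;
            if (right < items.Count && items[right].Key < items[smallest].Key) smallest = right;
            if (smallest == i) break;
            Swap(i, smallest);
            i = smallest;
        }
        return top;
    }

    private void Swap(int a, int b)
    {
        KeyValuePair<float, T> tmp = items[a];
        items[a] = items[b];
        items[b] = tmp;
    }
}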

IcePotato fucked around with this message at 19:27 on Mar 1, 2008

krysmopompas
Jan 17, 2004
hi

Vinlaen posted:

Unfortunately, those are both for C++ and will not work with Delphi or C#
They will work; both languages have some degree of native interoperability. It may involve writing a thin wrapper, but you're not going to find a hell of a lot in anything that isn't C++.

Vinlaen posted:

I understand that the basic idea is to have a texture that gets modified (eg. a render target, etc) but I'm confused on how large the texture needs to be.
Don't do this.

Use a fixed size, resolution independent grid for gameplay purposes, then render the map by dynamically generating a mesh using triangles that connect contiguous spans of "ground" in that map. Once you get that working, you can start worrying about view dependent refinement of the mesh as well as optimizing it.

Vinlaen
Feb 19, 2008

more falafel please posted:

Can't you tile the texture? The card won't store more than one copy of it.
Yes, but when I destruct parts of the terrain how will I know not to render those spots? I could use the stencil buffer or something, but I was planning to generate the terrain using many smaller textures, etc...

krysmopompas posted:

Use a fixed size, resolution independent grid for gameplay purposes, then render the map by dynamically generating a mesh using triangles that connect contiguous spans of "ground" in that map. Once you get that working, you can start worrying about view dependent refinement of the mesh as well as optimizing it.
If I understand the first part you're saying to use a terrain size that is the same regardless of resolution (eg. let's say 1000x3000 "terrain units"). I agree with this and was planning to do that...

However, I'm not sure what you mean by the second part (eg. generating a mesh using triangles, etc). I always forget what a mesh is because I've been trying to do 2D programming and "mesh" is usually used when talking about 3D programming...

Vinlaen fucked around with this message at 20:01 on Mar 1, 2008

gibbed
Apr 10, 2006

Couldn't you do a mask for the image rather than modifying the original image? You could compress it down to bytes (or even bits) too, I think?

Though a mesh would probably work.

krysmopompas
Jan 17, 2004
hi

Vinlaen posted:

However, I'm not sure what you mean by the second part (eg. generating a mesh using triangles, etc).
Think of each pixel as a 2d quad. For each cell in your map, indicated by nX and nY, where w and h are respectively the width and height of each cell, the vertex locations are something like:
code:
{ (nX * w), ((nY + 1) * h) }  { ((nX + 1) * w), ((nY + 1) * h) }
{ (nX * w), (nY * h)       }  { ((nX + 1) * w), (nY * h)       }
But that's a lot of quads to draw, so you need to reduce the count by merging adjacent quads.

A very easy way to do this is to walk each row of cells from left to right. When you find the first cell that is solid ground, set the left 2 vertices of the quad that will represent this span. Continue walking the row until cell nX + 1 is not solid ground, then set the right vertices for nX. Repeat this process until you're done with this row, then go on to the next one. Materials and texturing are just a further elaboration of breaking up the quad spans, except this time relative to the material or texture of the cells.

There's a lot better ways to do it, but it works.
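
In rough C# terms, one row walk might look like this (purely a sketch - the bool[,] map, cell size, and RectangleF spans are just assumptions for the example):
code:
using System.Collections.Generic;
using System.Drawing;

static class TerrainMesh
{
    // Merge contiguous solid cells in one row of the map into quad spans.
    // map[x, y] is true where the cell is solid ground; w and h are the cell
    // size in world units. Each RectangleF span gets drawn as two triangles.
    public static List<RectangleF> BuildRowSpans(bool[,] map, int nY, float w, float h)
    {
        List<RectangleF> spans = new List<RectangleF>();
        int width = map.GetLength(0);
        int nX = 0;
        while (nX < width)
        {
            if (!map[nX, nY]) { nX++; continue; }      // skip empty cells
            int start = nX;
            while (nX < width && map[nX, nY]) nX++;    // walk to the end of the solid run
            spans.Add(new RectangleF(start * w, nY * h, (nX - start) * w, h));
        }
        return spans;
    }
}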

deck
Jul 13, 2006

The terrain in the original Scorched Earth was implemented using a basic "sand" engine.

There's a 1-bit "mask" bitmap that defines the terrain, and you deform it by just cutting holes in the mask. If the mask is altered, run passes of the sand algorithm until the terrain has settled.

The sand algorithm is simple. For each pixel in the mask, if there's empty space immediately below, shift the pixel down one row. Else, if there's empty space to the lower-left or lower-right, shift there instead. (If both are empty, use a random number to break the tie.) It's like a cellular automaton. With these basic rules, the sand will tend to form pyramid-shaped piles. You can extend the process with more rules (sideways motion) to cause the piles to flatten more.

Watch for bias issues when doing this: you may need to cycle from bottom to top, and alternate left-to-right then right-to-left as you process all the pixels. Not doing this may cause your sand to settle with a left or right bias, or get spaced out as it falls.

Count the number of pixels that shift with each update, and when you fall below a threshold (1 is probably the right choice given how fast computers can do this today), you're done.
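
A rough C# sketch of one settling pass, assuming a bool[,] mask with y increasing downward (all names are invented for the example):
code:
using System;

static class SandSim
{
    // One settling pass over the 1-bit terrain mask. mask[x, y] is true where
    // there is sand; y increases downward. Returns how many pixels moved, so
    // the caller keeps running passes until the count drops below a threshold.
    // Alternate leftToRight between passes to avoid a directional bias.
    public static int SettlePass(bool[,] mask, Random rng, bool leftToRight)
    {
        int width = mask.GetLength(0), height = mask.GetLength(1);
        int moved = 0;
        for (int y = height - 2; y >= 0; y--)        // bottom-up; the last row can't fall
        {
            for (int i = 0; i < width; i++)
            {
                int x = leftToRight ? i : width - 1 - i;
                if (!mask[x, y]) continue;

                if (!mask[x, y + 1])                 // empty directly below: fall
                {
                    mask[x, y] = false; mask[x, y + 1] = true; moved++;
                    continue;
                }
                bool leftOpen = x > 0 && !mask[x - 1, y + 1];
                bool rightOpen = x < width - 1 && !mask[x + 1, y + 1];
                if (leftOpen && rightOpen)           // both open: random tie-break
                {
                    int dx = rng.Next(2) == 0 ? -1 : 1;
                    mask[x, y] = false; mask[x + dx, y + 1] = true; moved++;
                }
                else if (leftOpen) { mask[x, y] = false; mask[x - 1, y + 1] = true; moved++; }
                else if (rightOpen) { mask[x, y] = false; mask[x + 1, y + 1] = true; moved++; }
            }
        }
        return moved;
    }
}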

Vinlaen
Feb 19, 2008

@gibbed: You're basically saying to have a huge array (1 bit per pixel) that says it's solid or not-solid, right? (eg. used for collision detection?)

@krysmopompas: Is that the algorithm for transforming resolution coordinates into terrain coordinates? (eg. a 1920x1200 resolution into the 3000x1000 terrain resolution)

@pee: I think that's the same idea gibbed is talking about...

I think I understand everything, but I'm still unsure of how to build the visual part of the terrain.

For example, I want to use several textures to create the terrain (eg. a mountain texture, then place river textures through it, etc, etc). This "visual terrain texture" would need to be HUGE for resolutions like 1920x1200 which is where I'm getting confused. The reason I'm thinking it will be huge is because instead of re-drawing my small terrain textures (eg. the mountain texture) over and over, I'll actually be creating a new texture composed of all the other textures...

gibbed
Apr 10, 2006

Vinlaen posted:

@gibbed: You're basically saying to have a huge array (1 bit per pixel) that says it's solid or not-solid, right? (eg. used for collision detection?)
Yes and it could double as an image mask to hide what's "destroyed".
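
For illustration, a 1-bit-per-pixel mask like that is basically just a BitArray indexed by x + y * width (names made up for the sketch):
code:
using System.Collections;

// 1 bit per terrain pixel: true = solid ground. Doubles as the collision map
// and as the mask that decides what still gets drawn. Illustrative sketch.
class TerrainMask
{
    private readonly BitArray bits;
    public readonly int Width;
    public readonly int Height;

    public TerrainMask(int width, int height)
    {
        Width = width;
        Height = height;
        bits = new BitArray(width * height, true);   // start fully solid
    }

    public bool IsSolid(int x, int y) { return bits[x + y * Width]; }

    // Punch a circular hole, e.g. for an explosion of the given radius.
    public void CutCircle(int cx, int cy, int radius)
    {
        for (int y = cy - radius; y <= cy + radius; y++)
        {
            if (y < 0 || y >= Height) continue;
            for (int x = cx - radius; x <= cx + radius; x++)
            {
                if (x < 0 || x >= Width) continue;
                int dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius)
                    bits[x + y * Width] = false;
            }
        }
    }
}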

krysmopompas
Jan 17, 2004
hi

Vinlaen posted:

@krysmopompas: Is that the algorithm for transforming resolution coordinates into terrain coordinates? (eg. a 1920x1200 resolution into the 3000x1000 terrain resolution)
Er, no...that's for transforming your terrain into 'world' coordinates, which would then be transformed into 'view' coordinates in a typical 3d pipeline.

Vinlaen
Feb 19, 2008

I still need a massive 1920x1200x32bit * (amount of screens of terrain) texture though, right? (when using 1920x1200 resolution)

The reason I keep coming back to this is because I'm going to build the visual terrain using many smaller textures, etc. I'm concerned because 1920x1200x24 * 5 screens is a massive amount of texture memory and may not work on older video cards :(

Vinlaen fucked around with this message at 02:22 on Mar 3, 2008

Bizarro Buddha
Feb 11, 2007

Vinlaen posted:

I still need a massive 1920x1200x32bit * (amount of screens of terrain) texture though, right? (when using 1920x1200 resolution)

The reason I keep coming back to this is because I'm going to build the visual terrain using many smaller textures, etc. I'm concerned because 1920x1200x24 * 5 screens is a massive amount of texture memory and may not work on older video cards :(

The point is that if you keep the status of the terrain in main memory and use that to build a set of 2D triangles covering what you need to render, you only need one or a few small textures, because you can repeat them across the triangles you've built.

So you don't need tons of video memory, just enough system memory to store the terrain grid.

Pfhreak
Jan 30, 2004

Frog Blast The Vent Core!

Vinlaen posted:

I still need a massive 1920x1200x32bit * (amount of screens of terrain) texture though, right? (when using 1920x1200 resolution)

The reason I keep coming back to this is because I'm going to build the visual terrain using many smaller textures, etc. I'm concerned because 1920x1200x24 * 5 screens is a massive amount of texture memory and may not work on older video cards :(

There are a number of schemes to prevent the appearance of tiling. You can flip and rotate textures, to start, and you can also add a few 'decals' on top of the texture (like big rocks, grass blurbs, etc.) that break up the visual repetition without destroying your video memory.

SnakeByte
Mar 20, 2004
FUCK THIS COMPANY THAT HASNT PRODUCED THE GAME IN QUESTION FOR YEARS BECAUSE THEY SUSPENDED ME FOR EXPLOITING A BUG FUCK THEM IN THE ASS I AM A MORON

Vinlaen posted:

I still need a massive 1920x1200x32bit * (amount of screens of terrain) texture though, right? (when using 1920x1200 resolution)

The reason I keep coming back to this is because I'm going to build the visual terrain using many smaller textures, etc. I'm concerned because 1920x1200x24 * 5 screens is a massive amount of texture memory and may not work on older video cards :(

Well older video cards can't handle screens that big. But then again, older video cards won't be doing 1920x1200 either. I think I understand what you're missing here. When you create a texture, it doesn't HAVE to go to the video card. It can be kept in system memory until it is ready to be drawn. So your video card will only have to keep enough texture information to cover one screen. There's no sense in drawing what you can't see. Does that make sense to you?

Dr. Poz
Sep 8, 2003

Dr. Poz just diagnosed you with a serious case of being a pussy. Now get back out there and hit them till you can't remember your kid's name.

Pillbug

The Monarch posted:

Can anyone point me to a good, recent directX tutorial/website that won't make me pay for anything? I have a copy of visual c++ and the directx sdk, but a lot of the tutorials I'm finding are pretty out of date.

Also, does anyone know of a good website based around the quake 3 source code? I'm looking through it but most of it's flying over my head, and some good explanations about what's doing what would be nice.

http://ultimategameprogramming.com/

It assumes you know nothing and works up from there.

Vinlaen
Feb 19, 2008

SnakeByte posted:

Well older video cards can't handle screens that big. But then again, older video cards won't be doing 1920x1200 either. I think I understand what you're missing here. When you create a texture, it doesn't HAVE to go to the video card. It can be kept in system memory until it is ready to be drawn. So your video card will only have to keep enough texture information to cover one screen. There's no sense in drawing what you can't see. Does that make sense to you?
Yeah, I've seen that you can actually create a texture in system memory (with D3D) and then transfer it to the video card when needed. Is this what you're talking about?

If so, I'm guessing that you are talking about tiling a bunch of large textures, and then as the user scrolls around the map you re-upload the textures to the video card as needed. All that will be stored on the video card at any time is one texture the same size as the screen resolution plus maybe some "partial" textures so that if the user scrolls around it doesn't need to load a new texture until they scroll past a certain point.

Does that sound right?

The only problem with this is that you can't use textures in system memory as render targets (which would make drawing explosion circles or whatever a LOT easier...)

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Vinlaen posted:

The only problem with this is that you can't use textures in system memory as render targets (which would make drawing explosion circles or whatever a LOT easier...)

So don't have off-screen explosions; just pan the screen to follow the active projectile?

Vinlaen
Feb 19, 2008

So you're saying to destruct the active terrain tile texture (which is on the video card) and then copy it back to system memory?

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Vinlaen posted:

So you're saying to destruct the active terrain tile texture (which is on the video card) and then copy it back to system memory?

If you're absolutely committed to using the graphics cards for your processing, sure. I don't know why you would bother, though, when various scorched earth clones seem to do just fine on sub-hundred-megahertz machines without accelerated graphics.

HauntedRobot
Jun 22, 2002

an excellent mod
a simple map to my heart
now give me tilt shift
Is there an easy way to render the contents of a block of memory to a texture in OpenGL as if it was a traditional framebuffer? This is for a different project, but I have a 256x192 block of memory where each byte is a palette-indexed colour; sadly I can't change its structure, as it's out of my control.

chips
Dec 25, 2004
Mein Führer! I can walk!

Vinlaen posted:

So you're saying to destruct the active terrain tile texture (which is on the video card) and then copy it back to system memory?

I'm not quite sure what your idea of how things work is, but you don't need a 1920x1200 "texture" - that's the framebuffer. Whenever you play a game at 1920x1200 resolution, the graphics card has allocated one or more framebuffers, with a certain bit depth per pixel (often RGB, plus depth and so on). When you render a triangle with a texture on it, the graphics card transforms the triangle vertices into screen space (your 1920x1200 window) and then converts it to pixels, then the pixels are processed and textured appropriately.

Don't worry about needing one-to-one correspondence of textures; the video card blends textures when you're too close, and you can apply distance-dependent detail textures that give more detail when you're close up to something. Basically no 3D game has a 1:1 texel-to-pixel relation.

Terrain destruction is something that happens on the CPU side of things: you store the grid of vertices that makes up your terrain and shift them up and down as the terrain gets destroyed, then transfer the updated terrain across to the card in a vertex buffer or suchlike. You could implement the displacement in a vertex shader, I suppose - but it's nothing to do with textures (other than maybe adding some texture splats/decals to simulate the damaged terrain surface).

HauntedRobot posted:

Is there an easy way to render the contents of a block of memory to a texture in OpenGL as if it was a traditional framebuffer? This is for a different project, but I have a 256x192 block of memory where each byte is a palette-indexed colour; sadly I can't change its structure, as it's out of my control.

What do you mean "render the contents of a block of memory to a texture"? Do you mean copy the contents into a texture for use on the graphics card? You could just make a single-channel texture out of it; OpenGL doesn't mind that it isn't really a texture - just be careful about interpolation when you're using it on the card. If you want the colours to be correct, I'd assume you'd need to process the texture yourself first, or use one of the indexed-colour functions in OpenGL (I seem to remember that there are some).

chips fucked around with this message at 21:02 on Mar 6, 2008

Doc Block
Apr 15, 2003
Fun Shoe
He's talking about 2D sidescrolling terrain with destruction, a la Worms, not 3D terrain.

The question was how to do pixel-perfect terrain destruction like in Worms, where a bomb "eats" the terrain, then the terrain collapses. That was answered already.

But I think now he's wondering how to actually draw it. Tiling alone isn't enough, since you'd need a tile for each possible way each piece of terrain could be destroyed, which would require a huge amount of video memory, defeating the whole point of tiling in the first place.

Honestly, the easiest way is to forgo using the 3D hardware and just do it all via software blitting. Then you can store your whole level as one big image in memory and destroy it as you go. But then you don't get to have pretty explosions, free alpha blending, etc.

What I'd do is decide if having the terrain collapse after the bomb eats away at it is necessary. If not, you can still use the 3D hardware as a fast 2D blitter:

1)When an explosion occurs, eat away the terrain in the destruction bitmap. Use the destruction bitmap for collision detection.

2)Save the center point and radius of the explosion in a list.

3)Draw your background

4)Draw explosions (even ones that have already happened) into the stencil buffer. Just draw a circle, using the center point and radius from your list.

5)Draw the terrain using tiling, but with the stencil buffer check turned on. Wherever an explosion has taken place, the terrain won't get drawn.

You should probably prune the explosion list periodically, looking for things like one explosion that completely overlaps another, to reduce overdraw in the stencil buffer.
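
For the pruning step, the containment check is just circle geometry; something along these lines (the Explosion struct and names are made up for the sketch):
code:
using System;
using System.Collections.Generic;

struct Explosion { public float X, Y, Radius; }

static class ExplosionPruning
{
    // Drop any explosion circle that lies completely inside another one; it
    // adds nothing extra to the stencil buffer. The struct is illustrative.
    public static List<Explosion> Prune(List<Explosion> explosions)
    {
        bool[] removed = new bool[explosions.Count];
        for (int i = 0; i < explosions.Count; i++)
        {
            for (int j = 0; j < explosions.Count; j++)
            {
                if (i == j || removed[j]) continue;
                float dx = explosions[i].X - explosions[j].X;
                float dy = explosions[i].Y - explosions[j].Y;
                float dist = (float)Math.Sqrt(dx * dx + dy * dy);
                if (dist + explosions[i].Radius <= explosions[j].Radius)
                {
                    removed[i] = true;               // circle i lies entirely inside circle j
                    break;
                }
            }
        }
        List<Explosion> kept = new List<Explosion>();
        for (int i = 0; i < explosions.Count; i++)
            if (!removed[i]) kept.Add(explosions[i]);
        return kept;
    }
}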

30.5 Days
Nov 19, 2006

kewlpc posted:

He's talking about 2D sidescrolling terrain with destruction, a la Worms, not 3D terrain.

The question was how to do pixel-perfect terrain destruction like in Worms, where a bomb "eats" the terrain, then the terrain collapses. That was answered already.

But I think now he's wondering how to actually draw it. Tiling alone isn't enough, since you'd need a tile for each possible way each piece of terrain could be destroyed, which would require a huge amount of video memory, defeating the whole point of tiling in the first place.

Honestly, the easiest way is to forgo using the 3D hardware and just do it all via software blitting. Then you can store your whole level as one big image in memory and destroy it as you go. But then you don't get to have pretty explosions, free alpha blending, etc.

What I'd do is decide if having the terrain collapse after the bomb eats away at it is necessary. If not, you can still use the 3D hardware as a fast 2D blitter:

1)When an explosion occurs, eat away the terrain in the destruction bitmap. Use the destruction bitmap for collision detection.

2)Save the center point and radius of the explosion in a list.

3)Draw your background

4)Draw explosions (even ones that have already happened) into the stencil buffer. Just draw a circle, using the center point and radius from your list.

5)Draw the terrain using tiling, but with the stencil buffer check turned on. Wherever an explosion has taken place, the terrain won't get drawn.

You should probably prune the explosion list periodically, looking for things like one explosion that completely overlaps another, to reduce overdraw in the stencil buffer.

This is good, but I would really recommend a mesh deformation solution instead. Reason being that you still have to couple your modifications here with something to fix up the collision detection. With mesh deformation, you give your character sphere or capsule colliders, and you can do collision detection against the terrain (as an arbitrary mesh) no problem.

EDIT: I understand we're talking about a 2d game, but with an orthographic view matrix, you can still do it with a 3d renderer.

chips
Dec 25, 2004
Mein Führer! I can walk!
Oh I see, sorry. I'd still agree that mesh deformation is probably the best route, perhaps even a method similar to Marching Cubes but in 2D. Or you could use alpha-testing to give it pixel-level accuracy by anti-aliasing, similar to the method used in TF2 for high-resolution GUI elements - and it wouldn't have to be full screen resolution.

Doc Block
Apr 15, 2003
Fun Shoe
Right, but just deforming a mesh would look like rear end unless you had a really high resolution mesh.

Maybe I'm just misunderstanding...

Captain Pike
Jul 29, 2003

I am trying to write a (simple) multiplayer server in C#.

From reading various tutorials and articles, I have cobbled together a server which lets multiple clients connect, send, and receive messages via UDP.

(The server is for an action game that requires timely messages)

I have recently read an article which states that while UDP should be used for most messages (such as player movements) TCP should be used for very important messages, such as player-deaths. (UDP messages aren't 100% guaranteed. If a player gets killed, but never gets this message, he will still be wandering around the map)

I do not know how to use both TCP and UDP.

In addition, all of the low-level networking details are very confusing to me. (non-blocking sockets vs threads, ports, IPs, etc).

Questions:

1. Does an open-source game server exist?
2. Are there any good books that will help me?
3. Should I start an open-source game server project?

Captain Pike fucked around with this message at 09:48 on Mar 10, 2008

pseudopresence
Mar 3, 2005

I want to get online...
I need a computer!

Captain Pike posted:

I have recently read an article which states that while UDP should be used for most messages (such as player movements) TCP should be used for very important messages, such as player-deaths. (UDP messages aren't 100% guaranteed. If a player gets killed, but never gets this message, he will still be wandering around the map)

I do not know how to use both TCP and UDP.

The important thing to note is that UDP does not guarantee the delivery of your packets, but there is lower space and time overhead per packet than for TCP. This makes it appropriate for things like updates in player position which can be interpolated and extrapolated so dropped packets will not be a problem (under sane conditions). TCP is quite different; it's stream-oriented rather than packet-oriented, so you have to frame your messages yourself. TCP guarantees that if you feed in a stream of bytes at one end, you get it back out again in the same order at the other end, unless the connection dies.
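
To illustrate what framing your own messages on TCP means, a common pattern is a length prefix - roughly this, in C# (a simplified sketch, not a complete implementation):
code:
using System.IO;
using System.Net.Sockets;

// Length-prefixed framing over a TCP NetworkStream: a 2-byte big-endian
// length, then the payload. Simplified sketch with minimal error handling.
static class Framing
{
    public static void SendMessage(NetworkStream stream, byte[] payload)
    {
        byte[] header = new byte[2];
        header[0] = (byte)(payload.Length >> 8);   // high byte first (big-endian)
        header[1] = (byte)(payload.Length & 0xFF);
        stream.Write(header, 0, 2);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReceiveMessage(NetworkStream stream)
    {
        byte[] header = ReadExactly(stream, 2);
        int length = (header[0] << 8) | header[1];
        return ReadExactly(stream, length);
    }

    // TCP may hand you fewer bytes than you asked for, so loop until done.
    private static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("connection closed");
            offset += read;
        }
        return buffer;
    }
}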

You can implement your own acknowledgement and retransmission mechanism on top of UDP; for each connection between two endpoints, each endpoint should include a sequence number in the message header that increases each time a message is sent by that endpoint on that connection. Then the receiving endpoint can tell if it is missing any messages, and send some form of acknowledgement back. You can get arbitrarily clever with how you do this, although TCP essentially does the same thing and you don't have to worry about the details if you use it.

If you want to use both UDP and TCP connections, you'll need a separate socket for each connection. Just set up one socket to each player for UDP and one to each player for TCP.

quote:

In addition, all of the low-level networking details are very confusing to me. (non-blocking sockets vs threads, ports, IPs, etc).

TCP and UDP both work on top of IP (Internet Protocol). This uses unique addresses to identify endpoints on the network. TCP and UDP each then add a further level of addressing on each endpoint, called "ports". The combination of protocol (TCP or UDP), IP and port on one end, and IP and port on the other end uniquely identifies a connection. Ports are a way of addressing different applications on the same machine; if you didn't have them, either a machine could only have a single connection at a time, or every application would receive every packet and you'd need some way to work out which application each packet was meant for - at which point you'd have reinvented ports, or something equivalent but more heavyweight.

In a server-client architecture, only the server needs to listen on fixed ports; when the client connects (TCP) or first sends a message (UDP) the server will know what ip:port it came from.
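
In C# terms, the UDP side of that can be as simple as this sketch (the port number and echo loop are just placeholders):
code:
using System.Net;
using System.Net.Sockets;

// Minimal UDP echo-server sketch: bind one socket to a well-known port and
// learn each client's address from the packets it sends.
class UdpServerSketch
{
    static void Main()
    {
        UdpClient server = new UdpClient(27500);        // fixed, well-known port (made up)
        IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = server.Receive(ref remote);   // remote now holds the sender's ip:port
            // ... decode the message, update game state ...
            server.Send(data, data.Length, remote);     // reply to wherever it came from
        }
    }
}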

quote:

Questions:

1. Does an open-source game server exist?
2. Are there any good books that will help me?
3. Should I start an open-source game server project?

There are open-source game servers, but these are tied to a specific framework, engine or game. You could make a generic server that just routes messages to the various clients, but this won't be much use for many kinds of games; for an action game especially, the server needs to handle lots of the game logic.

In terms of books, the only one I've seen that's half-decent is "Networked Virtual Environments: Design and Implementation", but I'm sure there are others. Several all-in-one game programming books have short sections on networked games. There are plenty of articles on gamedev.net and gamasutra.com, so I recommend having a poke around there.

more falafel please
Feb 26, 2005

forums poster

Fib posted:

You can implement your own acknowledgement and retransmission mechanism on top of UDP; for each connection between two endpoints, each endpoint should include a sequence number in the message header that increases each time a message is sent by that endpoint on that connection. Then the receiving endpoint can tell if it is missing any messages, and send some form of acknowledgement back. You can get arbitrarily clever with how you do this, although TCP essentially does the same thing and you don't have to worry about the details if you use it.

Let me expand on this a little bit.

TCP is great, but it's generally overkill for games. It's not incredibly difficult to write a reliable UDP layer that gives you most of the benefits of TCP without the overhead.

At the most basic, you define a message -- for instance, a 2-byte length in big-endian, then a 4-byte sequence number, then the payload. Assuming your application needs more than one type of message, you probably want to devote another byte or two of the payload to specifying what kind of message it is. To keep things as efficient as you can, you want to keep all messages within the size limit of a single ethernet frame (or, if possible, half the size, so you can send out 2 messages at a time, but half as frequently to save on overhead).

The application sends a message to a client by specifying a payload. The reliability layer adds a sequence number and length, calls sendto() to send the packet, and puts the message in its output queue, recording the time it was sent.

When a message is received with recvfrom(), it's placed in the input queue. The application is responsible for checking for messages in the input queue and popping them off for processing.

Here's where it gets interesting. You have no guarantee that your packets will get to the other side, and if they do, you have no guarantee as to what order they will come in. So let's say you've received messages 1-1000, so you're now expecting message 1001. If you receive message 1002, you know you've missed 1001. So you put it in your input queue, but in a separate "unordered" queue that the application can't access. When you get a message, you record that you've received the message with that sequence number. The next time you send a message out, you encode (at the beginning, or end, or whatever) ACKs to specify the lowest sequence number you haven't received, the highest sequence number you have received, and for each message between lowest and highest, whether you've received that message or not. When you receive a message with those ACKs attached, remove the messages the other side has seen from the output queue -- they've been received.
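
To make that ACK encoding concrete, here's a rough C# sketch of the receiver-side bookkeeping (field names are invented; a real version would cap the range and handle sequence-number wraparound):
code:
using System.Collections.Generic;

// Receiver-side bookkeeping for the ACK scheme described above. Tracks which
// sequence numbers have arrived and encodes them as: lowest sequence number
// not yet received, highest received, and one flag per message in between.
class AckTracker
{
    private readonly HashSet<uint> received = new HashSet<uint>();
    private uint lowestMissing = 0;     // everything below this has been received
    private uint highestReceived = 0;

    public void OnMessageReceived(uint sequence)
    {
        received.Add(sequence);
        if (sequence > highestReceived) highestReceived = sequence;
        while (received.Contains(lowestMissing))   // advance the contiguous watermark
        {
            received.Remove(lowestMissing);
            lowestMissing++;
        }
    }

    // Append the ACK block to an outgoing message (big-endian, like the length).
    public void WriteAcks(List<byte> outgoing)
    {
        WriteUInt32(outgoing, lowestMissing);
        WriteUInt32(outgoing, highestReceived);
        for (uint seq = lowestMissing; seq <= highestReceived; seq++)
            outgoing.Add(received.Contains(seq) ? (byte)1 : (byte)0);
    }

    private static void WriteUInt32(List<byte> outgoing, uint value)
    {
        outgoing.Add((byte)(value >> 24));
        outgoing.Add((byte)(value >> 16));
        outgoing.Add((byte)(value >> 8));
        outgoing.Add((byte)value);
    }
}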

You now have data on how long each message took from being sent to receiving the ACK for it. Averaging these times, you get a decent estimate of the round-trip time to the other side. Every time a message is going out, if there's room in it, attach any message from your output queue that's been there longer than the average round-trip time. This way messages that don't get received will be resent.

It seems a lot more complicated than it is. With enough debug logging, you should be able to hack up a basic implementation in a day or two. What this gives you over TCP is the ability to control how messages are acknowledged and resent. The most important part, though, may be that you control the message size. With TCP, a connection is represented as an ordered stream of bytes. But your messages are already discrete chunks of data, and if you encode them as a stream, the TCP stack will be taking that stream and breaking it into discrete chunks anyway. This means that your TCP stack will receive incomplete messages, then wait for the rest of it to come in. When you guarantee that your message size is less than the size of an ethernet frame, you're essentially guaranteeing that it all comes through in one piece.

This allows you to run a peer-to-peer game using about 20kbits/sec of bandwidth, which can perform robustly with 10% packet loss and up to about 100ms of latency. For reference, console games are generally required to perform well with as little as 64kbits/sec of bandwidth, 2% overall packet loss with up to 10% spikes, and 100ms average latency.

The Oid
Jul 15, 2004

Chibber of worlds

Captain Pike posted:

I have recently read an article which states that while UDP should be used for most messages (such as player movements) TCP should be used for very important messages, such as player-deaths. (UDP messages aren't 100% guaranteed. If a player gets killed, but never gets this message, he will still be wandering around the map)

I do not know how to use both TCP and UDP.

As has been said, you can implement your own reliable messaging framework on top of UDP. I know some commercial engines do it this way, and if I recall correctly from reading the docs, Microsoft's XNA framework implements its reliable networking this way too.

This book isn't specifically about game networking, but if I recall correctly, it has a chapter on networking which is useful for understanding different techniques (non-blocking sockets, threaded, etc)

http://www.amazon.co.uk/Core-Techniques-Algorithms-Game-Programming/dp/0131020099

king_kilr
May 25, 2007
I'm looking to build a game similar to Operation Space Hog (for those of you who haven't played it: a) check it out, it's free; b) it's a 2D scrolling space game where you have a ship and get upgrades and stuff), with the intention of ultimately making it online co-op. I'd like to build it in Python (by far my strongest language). Any recommendations for which library to use? My understanding is that it would be either PyGame or PySoy. Any other advice would be great.

eshock
Sep 2, 2004

HauntedRobot posted:

Is there an easy way to render the contents of a block of memory to a texture in OpenGL as if it was a traditional framebuffer? This is for a different project, but I have a 256x192 block of memory where each byte is a palette-indexed colour; sadly I can't change its structure, as it's out of my control.

This seems like something you'd want to write a shader for.

haveblue
Aug 15, 2005



Toilet Rascal

eshock posted:

This seems like something you'd want to write a shader for.

Especially since indexed color is practically deprecated and probably won't work right on modern hardware.
