Beelzebub
Apr 17, 2002

In the event that you make sense, I will still send you to the 7th circle.
I found this thread two months ago and decided to give Python a shot. So far this is what I've managed to accomplish with the Pyglet and Rabbyt modules.

http://youtu.be/F_R-WEK_Xfo
http://youtu.be/dXUTT9J_r7o

I plan on working in about 6 to 10 levels/zones with three acts per level. The bloody art sucks up most of the time. But I'm pretty dedicated to seeing this through.


Red Mike
Jul 11, 2011

Beelzebub posted:

I found this thread two months ago and decided to give Python a shot. So far this is what I've managed to accomplish with the Pyglet and Rabbyt modules.

http://youtu.be/F_R-WEK_Xfo
http://youtu.be/dXUTT9J_r7o

I plan on working in about 6 to 10 levels/zones with three acts per level. The bloody art sucks up most of the time. But I'm pretty dedicated to seeing this through.


Oh, hey. Fancy meeting you here. Looking forward to new videos of it.

I have a question for people who are familiar with OpenGL. Is there a way, whether a library or drivers, to wrap native OpenGL calls so that they work against a context held purely in memory? I'm basically looking for a way to create a context and draw scenes on my server, but any attempt at OpenGL there generally gives me errors, and I'm not sure how else to go about it. I understand the newer OpenGL specs have a command for creating a context without a window, but the example code I've got doesn't work.

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

Red Mike posted:

Oh, hey. Fancy meeting you here. Looking forward to new videos of it.

I have a question for people who are familiar with OpenGL. Is there a way, whether a library or drivers, to wrap native OpenGL calls so that they work against a context held purely in memory? I'm basically looking for a way to create a context and draw scenes on my server, but any attempt at OpenGL there generally gives me errors, and I'm not sure how else to go about it. I understand the newer OpenGL specs have a command for creating a context without a window, but the example code I've got doesn't work.

On the Linux side I've dabbled with "OSMesa", which is basically off-screen OpenGL. You create a context much as you would in other OpenGL implementations, allocate a buffer, make the context current with that buffer, and then issue your draw commands normally. From there you can take the buffer and save it to an image or do whatever.

http://www.mesa3d.org/osmesa.html

code:
#include <GL/osmesa.h>

OSMesaContext ctx = OSMesaCreateContext(GL_RGBA, NULL);
unsigned char *buffer = new unsigned char[256 * 256 * 4]; // width * height * bytes per pixel
OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, 256, 256);
// Draw commands
// Do something with the buffer
OSMesaDestroyContext(ctx);
delete[] buffer; // array delete to match the new[]

HiriseSoftware fucked around with this message at 15:20 on Jul 26, 2011

Red Mike
Jul 11, 2011
Oh brilliant, thanks. I'll see if I can manage to work it so that I can pass the context to Pyglet. Sadly, I'll probably need a Python wrapper for the thing, but I'll see what I can do.

Mr.Hotkeys
Dec 27, 2008

you're just thinking too much
Does anyone have a good tutorial, or can anyone offer a good explanation, of how to handle transformed collision (namely between objects with arbitrary rotation) in XNA? I've got basic collision down (just find where the two bounding boxes overlap and test each pixel in the overlap to see if it's occupied in both), but the only examples I've found for collision between rotated objects use matrix math to do the transformation, and my code really isn't set up to handle that. I'm too dumb at matrices to translate it into how mine works (the points are stored already rotated instead of being rotated every time they're drawn).

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
There's not much getting around matrix math if you want to do collision detection between rotated objects via bounding boxes.

If you're OK with dropping the per-pixel checks you could use bounding circles instead - the collision check for those is just seeing whether the distance between the centers is less than the sum of the radii of the colliding circles. Visually it won't make a huge difference if you make the bounding circle's radius equal to half the width of the bounding square.
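In code it's just a distance check; comparing squared distances avoids the square root. A rough sketch, assuming XNA's Vector2 (names are only illustrative):
code:
// Returns true if two bounding circles overlap.
public static bool CirclesIntersect(Vector2 centerA, float radiusA,
                                    Vector2 centerB, float radiusB)
{
    float radiusSum = radiusA + radiusB;
    // Compare squared distance against squared radius sum - no sqrt needed.
    return Vector2.DistanceSquared(centerA, centerB) <= radiusSum * radiusSum;
}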

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I reread a lot of stuff about component-based designs and realized I really wasn't getting it right at all. I'm still doing a bunch of stuff with interfaces, and I think that's probably just staving off the inevitable. Already I get into the uglies with switching between human and AI control. It's embarrassing to me because I've been doing a lot more design-type work in my day job, yet I couldn't wrap my head around the general idea of it all. I suppose the big problem is in assuming there's a lot of OOP involved in it at all. From what I can tell, a lot of component-based development could probably be done with C structs.
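The shape I keep seeing described is something like this toy C# sketch - components as plain data, systems as loops over them (all names made up, and C structs would look much the same):
code:
// Components are just data...
public struct Position { public float X, Y; }
public struct Velocity { public float X, Y; }

// ...and a "system" is just a loop over every entity that has both.
public class MovementSystem
{
    public Position[] Positions;   // parallel arrays indexed by entity id
    public Velocity[] Velocities;

    public void Update(float dt)
    {
        for (int i = 0; i < Positions.Length; i++)
        {
            Positions[i].X += Velocities[i].X * dt;
            Positions[i].Y += Velocities[i].Y * dt;
        }
    }
}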

EagleEye
Jun 4, 2011
I'm trying to render a .map file generated by the latest version of GoldSrc-era Hammer, but it only stores information for one triangle per face. I'm having a hard time getting it to draw the "inverse" complementary triangle so that it's drawing the full face. I kicked butt in Algebra, but I barely managed to scrape out an A in calculus, and that was due to being a good programmer, not a good calculus... person...? Here's the main part of the function that I need help with. At the moment, I'm hardcoding special cases for the top and bottom that work (I got these through trial and error, and they only work if it's a box and not a slope or something):
code:
foreach (BrushDef b2 in b.brushes)
{
    int lolol = 0;
    foreach (BrushSideDef b3 in b2.brush)
    {
        if (lolol == 0)
         {
             vertices[temp + 0].Position = xyz(b3.p1_2, b3.p1_1, b3.p1_3);
             vertices[temp + 0].Color = Color.Green;
             vertices[temp + 1].Position = xyz(b3.p2_2, b3.p2_1, b3.p2_3);
             vertices[temp + 1].Color = Color.Green;
             vertices[temp + 2].Position = xyz(b3.p3_2, b3.p3_1, b3.p3_3);
             vertices[temp + 2].Color = Color.Green;
             vertices[temp + 3].Position = xyz(b3.p1_1, b3.p3_1, b3.p3_3);
             vertices[temp + 3].Color = Color.Green;
             vertices[temp + 4].Position = xyz(b3.p1_1, b3.p3_2, b3.p2_3);
             vertices[temp + 4].Color = Color.Green;
             vertices[temp + 5].Position = xyz(b3.p1_2, b3.p1_1, b3.p1_3);
             vertices[temp + 5].Color = Color.Green;
             temp += 3;
         }
         else if (lolol == 1)
         {
             vertices[temp + 0].Position = xyz(b3.p1_2, b3.p1_1, b3.p1_3);
             vertices[temp + 0].Color = Color.Pink;
             vertices[temp + 1].Position = xyz(b3.p2_2, b3.p2_1, b3.p2_3);
             vertices[temp + 1].Color = Color.Pink;
             vertices[temp + 2].Position = xyz(b3.p3_2, b3.p3_1, b3.p3_3);
             vertices[temp + 2].Color = Color.Pink;
             vertices[temp + 3].Position = xyz(b3.p3_2, b3.p3_1, b3.p3_3);
             vertices[temp + 3].Color = Color.Pink;
             vertices[temp + 4].Position = xyz(b3.p2_1, b3.p2_2, b3.p2_3);
             vertices[temp + 4].Color = Color.Pink;
             vertices[temp + 5].Position = xyz(b3.p1_2, b3.p1_1, b3.p1_3);
             vertices[temp + 5].Color = Color.Pink;
             temp += 3;
         }
         else
         {
             vertices[temp + 0].Position = xyz(b3.p1_2, b3.p1_1, b3.p1_3);
             vertices[temp + 0].Color = Color.Red;
             vertices[temp + 1].Position = xyz(b3.p2_2, b3.p2_1, b3.p2_3);
             vertices[temp + 1].Color = Color.Yellow;
             vertices[temp + 2].Position = xyz(b3.p3_2, b3.p3_1, b3.p3_3);
             vertices[temp + 2].Color = Color.Green;
             vertices[temp + 3].Position = xyz(b3.p1_2, b3.p1_1, b3.p3_3);
             vertices[temp + 3].Color = Color.Red;
             vertices[temp + 4].Position = xyz(b3.p2_2, b3.p2_1, b3.p2_3);
             vertices[temp + 4].Color = Color.Yellow;
             vertices[temp + 5].Position = xyz(b3.p3_2, b3.p3_1, b3.p1_3);
             vertices[temp + 5].Color = Color.Green;
             temp += 3;
        }
        lolol += 1;
        temp += 3;
    }
}
How would I go about making it so it just gives me a triangle and its complementary triangle? From there I can implement index sharing, and then either collision or textures, depending on which mood of masochism I'm in. But until I get this stupid code working, I'm looking at triangles instead of rectangular prisms, which hurts my near-end-of-summer morale even more than when I only had 1 triangle.

Here are the code files, since I know a foreach is probably pretty hard to understand out of context: test.map, CustomMapLoader.cs, and Game1.cs.

RichardA
Sep 1, 2006
.
Dinosaur Gum
^^^^^
Isn't GoldSrc the Quake level format? If so, the three vertices on a line of the Quake .map file define a plane - not a triangle. To get the triangles you need to find the points where three of the planes intersect, test whether each point lies on the brush, and use those points to define the triangles.
This link might be helpful - mattn.ninex.info/files/MAPFiles.pdf
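The intersection step is just the standard three-plane formula. A rough sketch, assuming each plane is stored as a unit normal n and distance d (n·x = d), borrowing XNA's Vector3 for convenience:
code:
// Point where three planes meet, or null if two of them are (nearly) parallel.
// Candidate vertices still need to be tested against every plane of the brush.
public static Vector3? IntersectPlanes(Vector3 n1, float d1,
                                       Vector3 n2, float d2,
                                       Vector3 n3, float d3)
{
    Vector3 c23 = Vector3.Cross(n2, n3);
    float denom = Vector3.Dot(n1, c23);
    if (Math.Abs(denom) < 1e-6f)
        return null;   // no single intersection point

    return (d1 * c23
          + d2 * Vector3.Cross(n3, n1)
          + d3 * Vector3.Cross(n1, n2)) / denom;
}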

RichardA fucked around with this message at 22:58 on Jul 27, 2011

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
If this is for the game itself, do not use the .map file, use the BSP. The BSP has the processed triangle data, lighting, visibility, etc. while the .map is just a totally raw format designed for editing.

That said, you should try using the Q3 format instead as it is generally much easier to render (except for parametric surfaces, which are really annoying), and doesn't have the collision detection inflexibilities of the Q1-based formats (including HL1's).

EagleEye
Jun 4, 2011
@RichardA GoldSrc is the unofficial name of the Half Life 1 engine. HL1 maps are almost exactly Q1 maps. Thanks for the link. I'm probably not going to use the information (though I'm sure as heck going to read it; these kinds of docs are my favorite) because of 108's advice, but that UFO: Alien Invasion game looks fun.

@OneEightHundred I decided against using Q3 .BSPs because I can't tell a GtkRadiant from an UnrealEd, but now that I think about it, I could probably open up the .map file in Gtk/QuArK/some other Q3 map editor and compile it. Wouldn't be too much more of a pain than manually saving something as a .map.

Still, does anyone know of a way to make Hammer compile Q3 .BSPs (via custom executable settings or something)?

Also, is there a best collision library for XNA, or is it a Pros/Cons sort of deal? And is there any standard for models? As far as I'm concerned, .bsp-style is the map standard, but I know Q1's model format sucked, and I'm not sure whether to go with HL1's, Q3's, or something else entirely.

I'd like to take a moment to thank you guys. I'm glad I joined Something Awful; $10 well spent, and I usually regret all of my purchase decisions. :)

Edit: I might be able to jury-rig Gearcraft to use the same textures my engine will use, and hook up the Q3 tools to it or something like that. Does anyone have any clue if there is a release of any version of Hammer's source code? I searched, but since the name of the modern Hammer's engine is the Source engine, "hammer source code" gets me no relevant results.

EagleEye fucked around with this message at 01:14 on Jul 28, 2011

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

EagleEye posted:

@RichardA GoldSrc is the unofficial name of the Half Life 1 engine. HL1 maps are almost exactly Q1 maps. Thanks for the link. I'm probably not going to use the information (though I'm sure as heck going to read it; these kinds of docs are my favorite) because of 108's advice, but that UFO: Alien Invasion game looks fun.

@OneEightHundred I decided against using Q3 .BSPs because I can't tell a GtkRadiant from an UnrealEd, but now that I think about it, I could probably open up the .map file in Gtk/QuArK/some other Q3 map editor and compile it. Wouldn't be too much more of a pain than manually saving something as a .map.

Still, does anyone know of a way to make Hammer compile Q3 .BSPs (via custom executable settings or something)?

For what it's worth, I'd second using Q3 maps -- they're fairly well documented, and not that hard to load/render (depending on how many of their features you want to support). Q3's tools are all command-line (aside from the map editor) so at the worst you could compile them with a batch file.

quote:

I'd like to take a moment to thank you guys. I'm glad I joined Something Awful; $10 well spent, and I usually regret all of my purchase decisions. :)

SomethingAwful: Like StackOverflow, but with more catchphrases!

Physical
Sep 26, 2007

by T. Finninho
I am looking for reference material for the HLSL data type TextureCube. I found an example of a skybox online and tried to add it to my code, but I'm getting an error: "error X3000: syntax error: unexpected token 'tex'". The line of code that I think is causing it is

TextureCube tex;

Specifically I don't think TextureCube exists in an HLSL XNA 3.1 environment. Is that correct? And if so, where can I find some reference material (from msdn hopefully) referring to this.

The example is from here: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Skydome.php and http://iloveshaders.blogspot.com/2011/05/creating-sky-box.html, and I am trying to put it in an XNA 3.1 project. Upgrading is not an option; I just want to know why it isn't working and where I can find reference material for this. I googled TextureCube in a bunch of different ways and I haven't really found any verbose info on it.

Is there any way to convert the TextureCube to something that works in my XNA 3.1 environment? At the very least I know I can split the texture up and do it that way (currently the skybox texture file is a .dds 6-sided image)

Physical fucked around with this message at 05:06 on Jul 28, 2011

EagleEye
Jun 4, 2011

Physical posted:

Specifically I don't think TextureCube exists in an HLSL XNA 3.1 environment. Is that correct? And if so, where can I find some reference material (from msdn hopefully) referring to this.

I'm sorry that I can't help you with this, but I was wondering why you're using XNA 3.1. Is it due to targeting a system that only has .NET 3.0 installed, performance issues on older systems, or something else entirely?

Here's an article on SkySpheres targeted at XNA 3.1, if it helps. It uses the TextureCube class in it. Speaking of which, I will definitely be using (the 4.0 version of) that tutorial when I'm done getting Q3 .bsps loading, so I'd like to thank you for somewhat indirectly helping me.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
It's been a while since I used XNA 3.1 but I don't think much has changed in 4.0 in this area.

You might be confusing the TextureCube class in the C# code with the texCUBE function in HLSL.

The TextureCube class in the C# code is used to load in a 6-sided .dds texture like you seem to be doing. In the HLSL code you pass the TextureCube in as a regular texture data type (not as a TextureCube!) and then use the texCUBE function in the pixel shader to get the pixel color.

Your C# code would look like
code:
TextureCube tex = Game.Content.Load<TextureCube>(skybox_texture);
This tells the content importer how to handle the .dds file at build time since it will need different processing than normal textures.

Your HLSL would look like
code:
float4x4 World;
float4x4 View;
float4x4 Projection;
texture Texture;

// ***********************************
//  Texture Cube Sampler
// ***********************************
sampler CubeSampler = sampler_state
{
	texture = <Texture>;
	magfilter = LINEAR;
	minfilter = LINEAR;
	mipfilter = LINEAR;
	AddressU = mirror;
	AddressV = mirror;
};

// *** Vertex shader omitted

// ***********************************
//  Pixel Shader
// ***********************************
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
     return texCUBE(CubeSampler, input.Direction);
}
The data referenced by the TextureCube type gets passed in as a regular texture, then the texCUBE function takes the texture sampler and the direction to sample in and returns the pixel color.

e: I didn't read through the Reimers link you posted in great detail, but it looks like he is just making a skydome out of a regular model mesh and then turning z-buffer writes off/on while drawing it. This is different from a normal skybox which assumes that you have a cube wrapped around the player's camera and want to sample points on that cube based on direction.

PDP-1 fucked around with this message at 05:00 on Jul 28, 2011

Physical
Sep 26, 2007

by T. Finninho
I posted the wrong link, here is the right one: http://iloveshaders.blogspot.com/2011/05/creating-sky-box.html

OK, well, in the (XNA 4) HLSL .fx file there is a TextureCube tex; declaration, and the code compiles and runs fine. So it's weird that it works in that version and not in mine, isn't it? Taking into consideration what you say, it should error in XNA 4 as well, shouldn't it?

I am sticking with XNA 3.1 because upgrading the library I have would be a nightmare of going through it by hand and addressing each error the upgrade causes.

Physical fucked around with this message at 05:26 on Jul 28, 2011

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Apparently I am wrong and TextureCube is a valid HLSL type in 4.0 - I just ran the code from the new link and it worked fine, and then changed 'texture' to 'TextureCube' in my own (4.0) skybox code and it worked there too. I don't have 3.1 on this system anymore so I can't see if it breaks going from 4.0 to 3.1.

In your 3.1 code, if you change 'TextureCube' to 'texture' can you get it to compile and does it look OK? I went through the pain of upgrading my 3.1 stuff to 4.0 and I don't recall having to mess with my skybox shader, I think I just converted stuff to the new vertex declaration formats.

Here's my shader code if it helps. It has the change from texture->TextureCube but if you revert that it still works.

If that doesn't work then, welp, I dunno.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

For what it's worth, I'd second using Q3 maps -- they're fairly well documented, and not that hard to load/render (depending on how many of their features you want to support).
Big changes with Q3 BSP compared to Q1/Q2 are that lightmaps are prepacked, texture coordinates are precomputed (including lightmap coordinates), face polygons are stored as index triplets into the vertex list, visibility isn't compressed, and there are more surface types. Everything about it except for parametric surfaces is significantly easier to load into a modern rendering API.

Physical
Sep 26, 2007

by T. Finninho

PDP-1 posted:

Apparently I am wrong and TextureCube is a valid HLSL type in 4.0 - I just ran the code from the new link and it worked fine, and then changed 'texture' to 'TextureCube' in my own (4.0) skybox code and it worked there too. I don't have 3.1 on this system anymore so I can't see if it breaks going from 4.0 to 3.1.

In your 3.1 code, if you change 'TextureCube' to 'texture' can you get it to compile and does it look OK? I went through the pain of upgrading my 3.1 stuff to 4.0 and I don't recall having to mess with my skybox shader, I think I just converted stuff to the new vertex declaration formats.

Here's my shader code if it helps. It has the change from texture->TextureCube but if you revert that it still works.

If that doesn't work then, welp, I dunno.

When I compile, I get a compiler error saying "unexpected token ';'"

In your 3.1 skybox did you use textureCube?

Physical
Sep 26, 2007

by T. Finninho
code:
Vertex Buffer

            cubeVertices[0] = new Vector3(0, 0, 0);
            cubeVertices[1] = new Vector3(0, 0, 144);
            cubeVertices[2] = new Vector3(144, 0, 144);
            cubeVertices[3] = new Vector3(144, 0, 0);
            cubeVertices[4] = new Vector3(0, 144, 0);
            cubeVertices[5] = new Vector3(0, 144, 144);
            cubeVertices[6] = new Vector3(144, 144, 144);
            cubeVertices[7] = new Vector3(144, 144, 0);

Index Buffer

    //bottom face
            cubeIndices[0] = 0;
            cubeIndices[1] = 2;
            cubeIndices[2] = 3;
            cubeIndices[3] = 0;
            cubeIndices[4] = 1;
            cubeIndices[5] = 2;

            //top face
            cubeIndices[6] = 4;
            cubeIndices[7] = 6;
            cubeIndices[8] = 5;
            cubeIndices[9] = 4;
            cubeIndices[10] = 7;
            cubeIndices[11] = 6;

            //front face
            cubeIndices[12] = 5;
            cubeIndices[13] = 2;
            cubeIndices[14] = 1;
            cubeIndices[15] = 5;
            cubeIndices[16] = 6;
            cubeIndices[17] = 2;

            //back face
            cubeIndices[18] = 0;
            cubeIndices[19] = 7;
            cubeIndices[20] = 4;
            cubeIndices[21] = 0;
            cubeIndices[22] = 3;
            cubeIndices[23] = 7;

            //left face
            cubeIndices[24] = 0;
            cubeIndices[25] = 4;
            cubeIndices[26] = 1;
            cubeIndices[27] = 1;
            cubeIndices[28] = 4;
            cubeIndices[29] = 5;

            //right face
            cubeIndices[30] = 2;
            cubeIndices[31] = 6;
            cubeIndices[32] = 3;
            cubeIndices[33] = 3;
            cubeIndices[34] = 6;
            cubeIndices[35] = 7;


//Call to actually draw stuff

            graphicsDevice.Vertices[0].SetSource(_vertices, 0, (3*8));
            graphicsDevice.Indices = indices;


            skyEffect.Parameters["Texture"].SetValue(skyTex);
            //skyEffect.Parameters["WVP"].SetValue(Matrix.Identity * viewMatrix * projectionMatrix);
            skyEffect.Parameters["World"].SetValue(Matrix.Identity);
            skyEffect.Parameters["View"].SetValue(viewMatrix);
            skyEffect.Parameters["Projection"].SetValue(projectionMatrix);
            skyEffect.CurrentTechnique = skyEffect.Techniques["Technique1"];

            skyEffect.Begin();            
            skyEffect.CurrentTechnique.Passes[0].Begin();

            graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, number_of_vertices, 0, number_of_indices / 3);

            skyEffect.CurrentTechnique.Passes[0].End();
            skyEffect.End();

It just makes a weird plane in my game. It looks like this:

http://imgur.com/DqXVH
http://imgur.com/FimGq

edit: drat, I updated some code and had it running once for at least one plane, but I commented and uncommented and compiled and now I don't get the same effect.
edit2: OK, it seems that in the Vertices[0].SetSource call I didn't have the stride right. I thought it was the size of the vertex buffer; instead it's the size of one vertex. So I did sizeof(float)*3 to set it right, since sizeof(Vector3) errors out. Still having some other weird issues though.

Physical fucked around with this message at 20:35 on Jul 28, 2011

EagleEye
Jun 4, 2011
Does anyone know why create.msdn.com treats game development like porn, insisting that, even though I've verified that I've completed 2 semesters of college, "You do not meet the requirement for App Hub. You must be 18 to register"? It makes it a bit of a PITA, since most of my XNA-related searches turn up App Hub, which is the official XNA forums. Just another reason a SA membership wound up being a good idea, I guess.

Edit: Whoops, I deleted the first paragraph before I hit submit because it wound up a long rant about the GPL, and I forgot to rewrite the relevant part. In a nutshell, I want to make my MIT-licensed engine 360-compatible. I want to use this Quake 3 asset loading library, but it's under the Eclipse Public License, and it's a couple of .dlls. If my knowledge of Xbox 1 applies, I can't really use .dlls, so that means I'll need to add them to a subfolder or something. Could I add them to various subfolders and note that they're under a different license, or would I have to convert my project to the Eclipse Public License? (Luckily, the EPL is about as restrictive as the MIT; that is, not very.)

EagleEye fucked around with this message at 19:31 on Jul 28, 2011

Rupert Buttermilk
Apr 15, 2007

🚣RowboatMan: ❄️Freezing time🕰️ is an old P.I. 🥧trick...

I'm just starting in game development, and have decided, at least for now, to try my hand at Stencyl. Now, I know that this can come across as 'Baby's First Game Engine', but whatever; I don't have much programming know-how, but I'm willing to work at it, learn more as time goes on, and just get better all around. I'm not kidding myself into thinking that I'm going to be some overnight millionaire, or even close, but goddammit, do I want to make my own game.

That being said, I question how far one can go with something like Stencyl. I see that you're not limited JUST to using pre-made stuff - if you know programming (ActionScript?) you can get a lot of custom poo poo done. I'm sort of thinking of making a game that is, I won't lie, heavily inspired by Mario 3. For clarification, I mean multiple overworlds with a beginning, an end, and levels along the way, usable items, bonuses... stuff like that. Does anyone know if an engine like Stencyl can do this?

Also, I plan on dedicating a lot of my time to getting the physics of movement just right, so that it doesn't feel stiff or too floaty. Can this be done with Stencyl, or are there a lot of limitations with it?

Physical
Sep 26, 2007

by T. Finninho
Should the "World" matrix be set to the camera position? I notice some things don't seem to use the world matrix, they just set it to Matrix.Identity.

So I am a little confused on when and why to set it, it seems like the view matrix has the world transform built into it. Or am I misunderstanding something?

Hobnob
Feb 23, 2006

Ursa Adorandum

Physical posted:

Should the "World" matrix be set to the camera position? I notice some things don't seem to use the world matrix, they just set it to Matrix.Identity.

So I am a little confused on when and why to set it, it seems like the view matrix has the world transform built into it. Or am I misunderstanding something?

The world matrix transforms from the model coordinate system to the world coordinate system. Generally because your model mesh will be created in your modelling program centred around the origin. Then each mesh uses a (different) world matrix to rotate the mesh and place it somewhere in the world.

If you use the identity matrix as the world matrix, then you are effectively placing the mesh unrotated at the origin of the world. This would be fine for, say, a model viewer or similar demo, so that's why you'd see it in some things.

[Edit to make things a little clearer]
The view matrix is in effect the position and orientation of the camera (actually, it transforms the world coordinate system to the local coordinates of the camera view, so the inverse of the view matrix is like a world matrix for a camera object).

If you have only one object in the world, it doesn't really matter if you move around by keeping the view matrix (camera position) fixed and change the world matrix of the object, or you keep the world matrix constant and change the view matrix. Once you have several objects in the world, each with their own world matrix, it's easier to just change the view matrix to move the camera around.

You might find this page useful as a reference.
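As a rough sketch with XNA's matrix helpers (meshPosition, heading, cameraPosition, cameraTarget and aspectRatio are just placeholders):
code:
// World: place/orient one mesh in the world (model space -> world space).
Matrix world = Matrix.CreateRotationY(heading) * Matrix.CreateTranslation(meshPosition);

// View: where the camera sits and what it looks at (world space -> camera space).
Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);

// Projection: camera space -> clip space.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, aspectRatio, 0.1f, 1000f);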

Hobnob fucked around with this message at 20:56 on Jul 28, 2011

Physical
Sep 26, 2007

by T. Finninho

Hobnob posted:

The world matrix transforms from the model coordinate system to the world coordinate system. Generally because your model mesh will be created in your modelling program centred around the origin. Then each mesh uses a (different) world matrix to rotate the mesh and place it somewhere in the world.

If you use the identity matrix as the world matrix, then you are effectively placing the mesh unrotated at the origin of the world. This would be fine for, say, a model viewer or similar demo, so that's why you'd see it in some things.

[Edit to make things a little clearer]
The view matrix is in effect the position and orientation of the camera (actually, it transforms the world coordinate system to the local coordinates of the camera view, so the inverse of the view matrix is like a world matrix for a camera object).

If you have only one object in the world, it doesn't really matter if you move around by keeping the view matrix (camera position) fixed and change the world matrix of the object, or you keep the world matrix constant and change the view matrix. Once you have several objects in the world, each with their own world matrix, it's easier to just change the view matrix to move the camera around.

You might find this page useful as a reference.

Ok, that's exactly what I thought. So when I render a model, I would want to set its world matrix to whatever its position is in the world.

For the skybox, I was setting its world matrix to the camera.position, which in turn constantly kept it very far away from me. I set it to the identity matrix and now it seems to get the desired effect. I think that is because I want the skybox centered around the origin. Does that sound right?

edit: Now there are the appropriate 6 sides, but the textures don't align right. But when I make the stride width the whole big vertex buffer it allows me to see the bottom. I am so lost. I wish I could work with someone in person to show this to and help troubleshoot, because so much of this requires editing, compiling, looking, playing, and going back to the code, and a forum is a little too slow for figuring this out and representing ideas.

UPDATE
Ok, so I have a cube now. But the only texture that gets used from the .dds file is the middle one, the one that is supposed to be for the top part of the skybox.

Physical fucked around with this message at 21:19 on Jul 28, 2011

Hobnob
Feb 23, 2006

Ursa Adorandum

Physical posted:

Ok, that's exactly what I thought. So when I render a model, I would want to set its world matrix to whatever its position is in the world.

For the skybox, I was setting its world matrix to the camera.position, which in turn constantly kept it very far away from me. I set it to the identity matrix and now it seems to get the desired effect. I think that is because I want the skybox centered around the origin. Does that sound right?

That sounds right to me. I don't know if this is the best practice or anything, but one way I've seen the skybox done is to render it first, every frame, with depth writing turned off. Then render the rest of the world as normal. This ensures that it always appears further away than everything else, and you don't get any problems with distant objects clipping through the skybox or have to set your far-plane distance too large.
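Order-wise, something like this rough sketch (XNA 3.1-style render states; DrawSkybox/DrawWorld stand in for your own draw calls):
code:
graphicsDevice.Clear(Color.Black);

graphicsDevice.RenderState.DepthBufferWriteEnable = false;
DrawSkybox();   // drawn first; writes color but no depth

graphicsDevice.RenderState.DepthBufferWriteEnable = true;
DrawWorld();    // everything else depth-tests normally and draws over the sky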

Physical
Sep 26, 2007

by T. Finninho

Hobnob posted:

That sounds right to me. I don't know if this is the best practice or anything, but one way I've seen the skybox done is to render it first, every frame, with depth writing turned off. Then render the rest of the world as normal. This ensures that it always appears further away than everything else, and you don't get any problems with distant objects clipping through the skybox or have to set your far-plane distance too large.

Yes, I am experimenting with getting that right. Right now I also have a cloud layer that I am trying to get to appear correctly. If I am above the cloud layer, I want to be able to look down and see the clouds over the ground. But right now I can see the clouds THROUGH the ground.

Hey, you wouldn't happen to know in which order the faces get rendered? Right now I only have 3 working faces; the others get jumbled up.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Only seeing one panel of a cubemap and having it look like a bunch of lines zeroed in on a single point usually means you're passing 2D texture coordinates.

Physical
Sep 26, 2007

by T. Finninho
But why is it only working for 3 of the 6 faces? I'm also not even defining UV points the way the example does.

Physical fucked around with this message at 01:44 on Jul 29, 2011

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Are the three faces that you see contiguous sides? If so your problem may be that you are actually outside of the skybox looking in at it and the other three sides are being culled. The quick check for this is to set the render mode (sorry, I don't recall exactly what they called it in 3.1) to CullMode.None and see if you get all six sides. Don't leave it in that culling mode, it's just a quick test to see if that's your problem.

I'd also suggest setting the skybox's world matrix to track the camera position. Think of the skybox as a cube that surrounds the camera at all times - when the camera moves you want the skybox to move along with it. If you are using an identity world matrix your camera will get outside of the skybox and you'll only see three sides due to culling like described above.

As for the textures getting jumbled up, I think that's an issue due to the DirectX tool using a slightly different coordinate system than what XNA uses. I had some notes on this that I can't find at the moment; if they turn up I'll post them. I think it was a left-handed vs. right-handed coordinate system change, but don't quote me on that. Making a set of six colored squares with big text numbers and 'top', 'bottom', etc. written inside can help sort that out.

Here's my 4.0 skybox code for reference if that helps. It's what goes along with the shader I posted last night. The vertex declaration is in 4.0 syntax but most of the rest of it should be recognizable.
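Both checks are one-liners, roughly (a sketch; 'camera' is whatever object holds your camera position):
code:
// 1. Temporary test: disable culling to see whether back-face culling is hiding three sides.
graphicsDevice.RenderState.CullMode = CullMode.None;

// 2. Keep the skybox centered on the camera so you're always inside it.
skyEffect.Parameters["World"].SetValue(Matrix.CreateTranslation(camera.Position));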

Physical
Sep 26, 2007

by T. Finninho
Wait a minute, I don't think I have enough vertices declared. I just fired this up with a fresh pair of eyes and I can actually recognize the texture, even though it is all distorted and coming to a point. What I realized is that only the top half of each side is being rendered, so I only have the top part of the cube. It was kind of a weird realization to be looking at the distorted mess and realize "oh, that's the water reflection, and that small white blob is the thing I remember from the .dds file".

edit: Yea, I definitely need more vertices to get this thing to work.

edit2: PDP, your code doesn't use an index buffer; the one I'm using does, so that extra layer of data for the skybox added to the complication of things, I think.

edit3: Also, the distance between the vertices makes a difference in how much of the area gets mapped. Maybe I should add UV position data to these and get this over with faster.

edit4: How do I use an index buffer but with different texture coordinates? For example, the vertex (0,0,0) will be a vertex with UV info for both (0,0) and (1,0).
[I would think that you can't, and a quick Google shows that at least one person agrees: http://www.gamedev.net/topic/571060-question-on-index-buffer-and-texture-coordinate/ Unless I am missing something]

Physical fucked around with this message at 16:25 on Jul 29, 2011

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Physical posted:

edit4: How do I use an index buffer but with different texture coordinates?
You are correct, you don't do that, you just duplicate the position in another vertex. (Generally - for all I know there might be some sort of multiple-indexed system available these days, but I've never seen it done any way other than separate vertices with the same position coordinates. I know Anim8or's file format stores things with separate indices for position/normal/tex-coords, though. But it also does faces with more than 3 sides, so it evidently bears little resemblance to the rendering format.)
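For example, a rough sketch with XNA's VertexPositionTexture (coordinates are made up):
code:
Vector3 corner = new Vector3(0, 0, 0);

VertexPositionTexture[] vertices =
{
    new VertexPositionTexture(corner, new Vector2(0, 0)), // corner as used by one face
    new VertexPositionTexture(corner, new Vector2(1, 0)), // same position, the other face's UV
    // ... remaining vertices
};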

roomforthetuna fucked around with this message at 18:27 on Jul 29, 2011

Physical
Sep 26, 2007

by T. Finninho
Do I have to tell the texture to scale or something?

Physical
Sep 26, 2007

by T. Finninho
I abandoned that other method and went with a different one. Not sure why it wasn't working or why it behaved the way it did. Maybe when I get more experienced with XNA/HLSL I will revisit it, but in the last month I have become more about getting results than research and development. I want production, not learning, and this kind of thing threw me off for a couple of days on a stupid error. Thanks for the help.

Physical
Sep 26, 2007

by T. Finninho
Ok, so I render in multiple passes. I have a clouds layer, and then an earth layer.

But the clouds are getting eclipsed by the ground when they are over the ground, and I can't get the alpha blending right. What's the right way to do this?

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Physical posted:

Ok, so I render in multiple passes. I have a clouds layer, and then an earth layer.

But the clouds are getting eclipsed by the ground when they are over the ground, and I can't get the alpha blending right. What's the right way to do this?

If you're rendering in that order then you've got it backwards. Think of it like paint - the stuff you paint earlier gets covered up by the stuff you paint later. So if you're rendering the clouds first, they will always be rendered over by the earth.

I'm not too sure what sort of game you're making here, but if it's a top-down view kind of thing, then you'll want the clouds to always render later. If it's something that might vary, then it gets a bit more complicated because you have to determine which faces are closer to the camera's current position and orientation in order to decide render order.

Making a 3D engine from scratch is very, very difficult. If that's what you're trying to learn how to do then of course keep at it. If you're just trying to make a game though, you might be better off using a premade engine like OGRE or getting into UDK/Source.

Physical
Sep 26, 2007

by T. Finninho

The Cheshire Cat posted:

If you're rendering in that order then you've got it backwards. Think of it like paint - the stuff you paint earlier gets covered up by the stuff you paint later. So if you're rendering the clouds first, they will always be rendered over by the earth.

I'm not too sure what sort of game you're making here, but if it's a top-down view kind of thing, then you'll want the clouds to always render later. If it's something that might vary, then it gets a bit more complicated because you have to determine which faces are closer to the camera's current position and orientation in order to decide render order.

Making a 3D engine from scratch is very, very difficult. If that's what you're trying to learn how to do then of course keep at it. If you're just trying to make a game though, you might be better off using a premade engine like OGRE or getting into UDK/Source.

Yea, I already have my own engine up; it's just some small stuff that I am dealing with. I am an advanced user here. How do I get a depth mask or something for figuring out which parts of the cloud layer to display? I'd like to build an alpha mask based on the height of the terrain; I could then use that for the clouds layer. But I don't know how to make a mask like that based on geometry. Hmmm, maybe I could dump it to a texture.

edit: Ok, I guess I can do it with shaders like this one http://forums.create.msdn.com/forums/p/24194/130810.aspx

I need to be able to generate a depth mask from a given perspective. Maybe I can create one from a top-down axis and then transform it to the camera coords to get me what I need.

Physical fucked around with this message at 03:49 on Jul 30, 2011

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:

Physical posted:

Yea, I already have my own engine up; it's just some small stuff that I am dealing with. I am an advanced user here. How do I get a depth mask or something for figuring out which parts of the cloud layer to display? I'd like to build an alpha mask based on the height of the terrain; I could then use that for the clouds layer. But I don't know how to make a mask like that based on geometry. Hmmm, maybe I could dump it to a texture.

edit: Ok, I guess I can do it with shaders like this one http://forums.create.msdn.com/forums/p/24194/130810.aspx

I need to be able to generate a depth mask from a given perspective. Maybe I can create one from a top-down axis and then transform it to the camera coords to get me what I need.

http://en.wikipedia.org/wiki/Z-buffering sounds like it's what you're looking for. OpenGL and DirectX should do all the depth buffer stuff for you if you just look up how to turn it on. No need to reinvent the wheel.
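In XNA 3.1 terms it's roughly just flipping the render states on (a sketch; property names per the 3.1-era RenderState):
code:
graphicsDevice.RenderState.DepthBufferEnable = true;       // depth testing
graphicsDevice.RenderState.DepthBufferWriteEnable = true;  // depth writes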

Vino
Aug 11, 2010
And if you're rendering transparent objects, keep in mind that you need to sort them and render them back to front for them to appear correctly.
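Roughly like this sketch (transparentObjects, Position and Draw are placeholders; uses System.Linq):
code:
// Sort transparent objects back to front by distance from the camera, then draw.
var sorted = transparentObjects
    .OrderByDescending(o => Vector3.DistanceSquared(o.Position, cameraPosition))
    .ToList();

foreach (var obj in sorted)
    obj.Draw();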


Physical
Sep 26, 2007

by T. Finninho
I already have RenderState.DepthBufferWriteEnable = true set; what more is there to do to get a z-buffer working?

Here are a couple of pictures describing what I am trying to do. The order in which things get rendered is the skybox (that is, both the skybox and the clouds) and then the world (the voxel boxes). This looks fine until the view is above the cloud layer. Then the ground layer eclipses the clouds, which it shouldn't, because the ground is technically 20 pixels below the cloud layer.


So maybe the way I should render this is: the skybox, then the ground, creating a z-buffer/depth stencil at that point, and then the clouds applying that buffer/stencil?

Physical fucked around with this message at 19:42 on Feb 9, 2013
