Shameproof
Mar 23, 2011

Well, it is a minelike.

Also, ugh, why isn't there a single SVO demo that isn't hideous?


PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.

Unormal posted:

It's really not that hard to render a pretty huge cube-world in pre-generated vertex buffers. The real trick, I found, is re-calculating them in real time without hitching when cubes are created and destroyed.

I made a voxel world recently that used pre-generated vertex buffers and allowed realtime terrain modification. The main gimmick was to divide the world into 16x16x16 unit blocks and generate a separate vertex buffer for each block that contained geometry (occasionally large open rooms might contain empty blocks). Then, when the player did anything that modified the world, all affected blocks would be regenerated on a background thread and swapped into the live vertex buffer set as a group when they were all done. Swapping the whole group at once was needed so that you wouldn't see flickering cracks in the geometry from partial updates. The block regeneration usually took about 20-50 ms depending on how many blocks needed updates, so it felt 'instant' when playing the game without skipping any frames.
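
A minimal sketch of that regenerate-then-swap pattern (Chunk, ChunkMesher, and liveBuffers are invented names, not the actual code):

code:
// Hedged sketch: rebuild all affected chunks on a worker thread, then
// swap the finished buffers in as one group so partial updates never
// show up as flickering cracks between chunks.
void OnWorldModified(List<Chunk> affected)
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        var rebuilt = new Dictionary<Chunk, VertexBuffer>();
        foreach (Chunk c in affected)
            rebuilt[c] = ChunkMesher.BuildVertexBuffer(c);  // ~20-50 ms total

        lock (liveBuffers)                // swap the whole group at once
        {
            foreach (var kv in rebuilt)
                liveBuffers[kv.Key] = kv.Value;
        }
    });
}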

Shameproof
Mar 23, 2011

Well that doesn't sound bad at all! Was it able to load quickly to the point where you could move around at, say, 50 m/s? Because that's another goal of mine.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
It's hard to say if you could move around at a specific speed, since the scale of the nodes and the volume of new material you'd have to generate to keep up would come into play. I was generating a fully 3D world with interior cave systems, so things might go faster if you were doing more 2D, surface-terrain-like geometry or traveling down a restricted tunnel-like volume.

Here's a demo video from when the thing was about half done. The grid texture on the walls is about 1:1 with the voxel nodes; destructible geometry starts at about 1:20. There's a sort of speed run at about 2:10 that suggests at least a lower bound on how fast things could be generated while moving. The world is generated between about 0:08 and 0:11, as can be seen from the green progress bar at the bottom of the screen, but that's with both CPU cores lit up to 100% processing geometry blocks in parallel, so it wouldn't be a good indication of how fast something could run during a live gameplay session.

https://www.youtube.com/watch?v=qQBRtkF10Tc

One thing to note is that the player's maximum velocity will also be limited by collision detection. If you have a destructible world you can't really make any assumptions about the state of the solid objects from frame to frame, and if the player moves quickly you have to check collisions over a swept volume. I generated a list of geometry objects that the player might possibly touch and checked the player physics model against that list on each frame. If you allow the player to move faster you need to do more collision checks.
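
As a rough sketch of that candidate-list idea (the helper names here are invented, not my actual API): expand the player's bounds by this frame's displacement and only test the solids inside that swept region.

code:
// Hedged sketch of swept-volume candidate gathering; GetBlocksIn and
// ResolveCollision are illustrative names only.
BoundingBox swept = player.Bounds;
Vector3 move = player.Velocity * dt;
swept.Min = Vector3.Min(swept.Min, swept.Min + move);  // grow the box to
swept.Max = Vector3.Max(swept.Max, swept.Max + move);  // cover this frame's motion

foreach (Solid s in world.GetBlocksIn(swept))  // a faster player sweeps a bigger
    ResolveCollision(player, s, move);         // box, so more checks per frame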

This was done in C#/XNA and held a steady 60fps on my old-ish computer; the biggest bottleneck was how many triangles I could pump through the GPU at long view distances.

Synthbuttrange
May 6, 2007

You had better be developing that further!

FlyingDodo
Jan 22, 2005
Not Extinct
I have a question regarding calculating texture coordinates for a quake3 level (.map NOT .bsp). The textures appear to be scaled and rotated correctly (about 99.9% of the time), but the translation is often wrong. Sometimes textures do end up in the correct position, but mostly not.



In the code below m_texX and m_texY are an orthonormal basis for the plane that the polygon is on. The basis vectors are rotated according to the texture rotation angle. position is the 3d vertex position, m_translation is the 2d translation for the texture, m_scale is the 2d scaling and m_dimensions is the width/height of the texture.
code:
	double s = ((position.dot(m_texX)) + (m_translation.getX())) / (m_scale.getX() *  m_dimensions.getX());
	double t = ((position.dot(m_texY)) + (m_translation.getY())) / (m_scale.getY() * m_dimensions.getY());
What could I be doing wrong? It seems strange to be so close to correct but not quite.
This problem has been bugging me for a very long time; I've been trying to figure out why it doesn't work properly but have absolutely no idea.

dangerz
Jan 12, 2005

when i move you move, just like that

PDP-1 posted:

I made a voxel world recently that used pre-generated vertex buffers and allowed realtime terrain modification. The main gimmick was to divide the world into 16x16x16 unit blocks and generate a separate vertex buffer for each block that contained geometry (occasionally large open rooms might contain empty blocks). Then when the player did anything that modified the world all affected blocks would be regenerated on a background thread and swapped into the live vertex buffer set as a group when they were all done. Swapping the whole group at once was needed so that you wouldn't see flickering cracks in the geometry from partial updates. The block regeneration usually took about 20-50 ms depending on how many blocks needed updates, so it felt 'instant' when playing the game without skipping any frames.
Me too. Same basic format and concept as PDP-1, except my chunks were 16x128x16. Plenty of old videos here: http://youtu.be/ZwWPLdvno1A

I stopped working on mine a little bit ago since I'd learned all that I wanted to from it.

Physical
Sep 26, 2007

by T. Finninho

dangerz posted:

Me too. Same basic format and concept as PDP-1, except my chunks were 16x128x16. Plenty of old videos here: http://youtu.be/ZwWPLdvno1A

I stopped working on mine a little bit ago since I'd learned all that I wanted to from it.

16x128x16 is what Minecraft (and my game) are using. I just recently got my server/client setup to allow players to walk out infinitely while the server generates the chunks. It also remembers what you put down! So I walked like 10 chunks out and laid down something, walked back to spawn, laid something else down, and kept going back and forth to see if it showed up, and it did! Pretty sweet. I gotta work on some optimization, but then I will start working on cool stuff like AI.

One thing that I don't get is how to make terrain that can have overhangs like cliffs and such. Dangerz, would you share your terrain algorithm with me? Mine uses 2D Perlin noise along the x,z plane (looking down) and fills in all the blocks below it. Each index holds a value representing the highest Y value of that column, so if noise[2,2] = 13 then I put a block at Y = 13 and fill everything directly below it. This inherently means I will never have an overhang or cliff. Should I use 3D Perlin noise instead, with a threshold that decides whether each block is solid or empty?
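
Roughly, the two approaches side by side (a sketch; Noise2/Noise3 stand in for whatever Perlin implementation is actually in use):

code:
// 2D heightmap: one sample per column, fill everything below it.
// A column is solid from bedrock up to its height, so no overhangs.
int height = (int)(Noise2(x, z) * maxHeight);
for (int y = 0; y <= height; y++)
    blocks[x, y, z] = BlockType.Dirt;

// 3D density: one sample per cell, solid wherever the noise clears a
// threshold. Air can now sit underneath rock, which is exactly what
// overhangs, cliffs, and caves are.
for (int y = 0; y < maxHeight; y++)
    blocks[x, y, z] = Noise3(x, y, z) > threshold ? BlockType.Dirt : BlockType.Air;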

Physical fucked around with this message at 19:01 on Sep 13, 2011

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
More component-based design stuff, but I think this is more in the fun realm than anything else. I was curious to see some examples of how people are splitting things up into components.

For a basic human-controlled character right now I have something like:

Health: for managing health. It services requests for taking damage or getting healed, and emits a death signal if the player is considered dead.
AnimationState: for mapping movements to frames of some kind. This stores whether the player is walking, running, slumping backwards, and whatnot.
Position: for position, but this is an interface implemented by a PhysicsPosition which knows how to interact with the physics engine for the most up-to-date position. There's a stub one for dealing with positions not in the physics engine.
Facing: which pretty much just stores where the entity is facing.
Velocity: like Facing, but for movement, and this is tied to the physics engine.
Faction: which side they're on, who their friends and enemies are.
HumanControl: translating controller messages into movement and facings. This emits signals that a lot of other components listen to. For example, Velocity would take signals for movement and try to make good on them.
AnimatedMesh: to be added, since AnimationState manages the mesh too much right now.
Physical: physics information like mass. This interacts directly with the physics subsystem, but takes a lot of stuff from other components.

I intend to add something like a GhostPhysics for bounding boxes and such for things like melee and ranged attacks, but I haven't scribbled them out yet.

Is this looking like the right idea?

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Physical posted:

16x128x16 is what Minecraft (and my game) are using. I just recently got my server/client setup to allow players to walk out infinitely while the server generates the chunks. It also remembers what you put down! So I walked like 10 chunks out and laid down something, walked back to spawn, laid something else down, and kept going back and forth to see if it showed up, and it did! Pretty sweet. I gotta work on some optimization, but then I will start working on cool stuff like AI.

One thing that I don't get is how to make terrain that can have overhangs like cliffs and such. Dangerz, would you share your terrain algorithm with me? Mine uses 2D Perlin noise along the x,z plane (looking down) and fills in all the blocks below it. Each index holds a value representing the highest Y value of that column, so if noise[2,2] = 13 then I put a block at Y = 13 and fill everything directly below it. This inherently means I will never have an overhang or cliff. Should I use 3D Perlin noise instead, with a threshold that decides whether each block is solid or empty?

I'm not that knowledgeable about algorithms so unfortunately I can't give you a very detailed answer, but I believe Minecraft may use a similar system for terrain generation to Dwarf Fortress: it creates terrain from random noise much like you're doing, but then runs simulations on that terrain to give it a more natural look - things like modelling "erosion" near water by digging out spaces next to lakes (which could give you the cliff overhangs you're looking for), or digging out big tunnels and caves, which will often break right out of the ground, creating cave entrances, etc.

Essentially, the key to procedural generation is running multiple algorithms, using the output of one as the input for the next. The trick is that procedural generation does not mean random generation - you'll usually have to start with some randomized input, but the point is that the output feels natural because the system that generates it is deterministic. Simply generating terrain from random noise is only ever going to get you terrain that looks like it was generated with random noise (this is why all those optional worlds in Mass Effect feel so generic). How complex you want to make your system is up to you, but Minecraft uses a lot of different rules and simulations to get that look.
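
As a sketch of that shape (every pass name here is made up; the point is just the chaining):

code:
// One seeded random input, then deterministic passes feeding each other.
var world = GenerateBaseTerrain(seed);  // raw noise, like a heightmap
CarveCaves(world);                      // tunnels that can break the surface
ErodeNearWater(world);                  // dig out lake edges => overhangs
PlaceVegetation(world);                 // reads what the earlier passes made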

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

Rocko Bonaparte posted:

More component-based design stuff, but I think this is more in the fun realm than anything else. I was curious to see some examples of how people are splitting things up into components.

For a basic human-controlled character right now I have something like:

Health: for managing health. It services requests for taking damage or getting healed, and emits a death signal if the player is considered dead.
AnimationState: for mapping movements to frames of some kind. This stores whether the player is walking, running, slumping backwards, and whatnot.
Position: for position, but this is an interface implemented by a PhysicsPosition which knows how to interact with the physics engine for the most up-to-date position. There's a stub one for dealing with positions not in the physics engine.
Facing: which pretty much just stores where the entity is facing.
Velocity: like Facing, but for movement, and this is tied to the physics engine.
Faction: which side they're on, who their friends and enemies are.
HumanControl: translating controller messages into movement and facings. This emits signals that a lot of other components listen to. For example, Velocity would take signals for movement and try to make good on them.
AnimatedMesh: to be added, since AnimationState manages the mesh too much right now.
Physical: physics information like mass. This interacts directly with the physics subsystem, but takes a lot of stuff from other components.

I intend to add something like a GhostPhysics for bounding boxes and such for things like melee and ranged attacks, but I haven't scribbled them out yet.

Is this looking like the right idea?

With your components you have actions, right? You should probably discuss both. My setup is similar to yours, but my game is a roguelike so there's no need for much physics.

Health: HP, yeah; has actions that heal or take damage.
AnimationState: This is a biggie for me. It stores the sprite or sprite sheet for the character as well as the direction it's facing. It also says whether it's animated and whether it loops (torches on walls flicker). So a player moving in the opposite direction will have its sprite flipped so it's facing the other direction.
Position: Pretty standard. Has actions to change position by a delta (move up 1) or to move outright (teleport).
HumanControl: This is something I might look into later. Right now I let XNA's keyboard stuff worry about movement and just tell the player entity to do an action for the position component. Of course my only keys at the moment are up, down, left, right, so this might get changed later.
AI: Mobs have a component for A* pathfinding. The action is "MoveTowards". It takes the target position as an argument (in case I want mobs to pick up gold and not just steam towards the player). The action actually does the pathfinding and then tells the position component to move.

The next part of development will be the actual combat so I'm sure I'll have weapon and armor components that will talk to the HP component to determine actual damage taken.

Staggy
Mar 20, 2008

Said little bitch, you can't fuck with me if you wanted to
These expensive
These is red bottoms
These is bloody shoes


I gave a component-based system a go after seeing your earlier posts, poemdexter, and it was fun - until I got things all tangled up together.

I do have a question: is there any advantage to using "smart" actions and "dumb" components? I was doing things the other way around - messages were simply an ID string and some data (a position/float/etc.) passed to each component. Each component would then check the ID string and - if it was one they needed to react to - do something with/about the data.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

Staggy posted:

I gave a component-based system a go after seeing your earlier posts, poemdexter, and it was fun - until I got things all tangled up together.

I do have a question: is there any advantage to using "smart" actions and "dumb" components? I was doing things the other way around - messages were simply an ID string and some data (a position/float/etc.) passed to each component. Each component would then check the ID string and - if it was one they needed to react to - do something with/about the data.

When I create the entity, I add components and actions. That way when I say player.Do("TakeDamage", 5), the action does all the important stuff. And by important stuff, I mean the action will go through the entity checking for other components it might need to determine what kind of damage it will actually take. This keeps components separate from one another so that no component needs to know about another component. The actions do all the heavy lifting.
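
A rough sketch of that pattern (class and member names invented for illustration):

code:
// Hedged sketch of "smart actions, dumb components".
class TakeDamageAction : EntityAction
{
    public override void Execute(Entity entity, object arg)
    {
        int damage = (int)arg;

        // The action inspects whatever components happen to be present;
        // the components never need to know about each other.
        var armor = entity.GetComponent<ArmorComponent>();
        if (armor != null)
            damage = Math.Max(0, damage - armor.Mitigation);

        entity.GetComponent<HealthComponent>().Hp -= damage;
    }
}

// usage: player.Do("TakeDamage", 5);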

If you're talking about messages across all the entities, I have an entity manager for each level of the dungeon. In this manager, I have a dictionary of doors and door locations. I also have a list of monsters and a list of items on the ground. If the player moves, then I just enumerate the list and tell each entity which action they need to take depending on what state they are in.

I think this answers your question. It doesn't seem like you have actions like I do to go along with your components, just messages with an ID that say "do this if you see this string". But then you run the risk of having components depend on other components to get things done.

Your Computer
Oct 3, 2008




Grimey Drawer

poemdexter posted:

When I create the entity, I add components and actions. That way when I say player.Do("TakeDamage", 5), the action does all the important stuff. And by important stuff, I mean the action will go through the entity checking for other components it might need to determine what kind of damage it will actually take. This keeps components separate from one another so that no component needs to know about another component. The actions do all the heavy lifting.

If you're talking about messages across all the entities, I have an entity manager for each level of the dungeon. In this manager, I have a dictionary of doors and door locations. I also have a list of monsters and a list of items on the ground. If the player moves, then I just enumerate the list and tell each entity which action they need to take depending on what state they are in.

I think this answers your question. It doesn't seem like you have actions like I do to go along with your components, just messages with an ID that say "do this if you see this string". But then you run the risk of having components depend on other components to get things done.
I don't really "get" component systems (because I've never seen them used, and never tried it myself), but I don't see how it's easier to do string parsing and error-checking and then delegate method calls to different components instead of just subclassing and calling, say, 'player.takeDamage(5)'.

It'd be fun to try a system like that though. Any good tutorials (and not just talks about what it is/why it's the best/XNA)?

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

Your Computer posted:

I don't really "get" component systems (because I've never seen them used, and never tried it myself), but I don't see how it's easier to do string parsing and error-checking and then delegate method calls to different components instead of just subclassing and calling, say, 'player.takeDamage(5)'.

It'd be fun to try a system like that though. Any good tutorials (and not just talks about what it is/why it's the best/XNA)?

I see it as more of a way to get away from crazy inheritance trees. I'm sure delegating methods to different pieces of classes would work just fine. It's really just a different way of doing things when you know you're going to have a giant mess of different objects in the game that are sorta related but not quite, and it really gives you the power to create some pretty interesting creatures on the fly without having to figure out where they fit on the inheritance tree or, worse, creating a new subclass to extend. As far as tutorials go, I just used this barebones project and expanded on it: http://dl.dropbox.com/u/11893120/EntityPrototype.zip

You can also google "component based game development" and find more links than you could possibly want. Most of it is pretty abstracted out so it's hard to find concrete examples, but that EntityPrototype project should be pretty good at showing what it does.

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe

Your Computer posted:

I don't really "get" component systems (because I've never seen them used, and never tried it myself), but I don't see how it's easier to do string parsing and error-checking and then delegate method calls to different components instead of just subclassing and calling, say, 'player.takeDamage(5)'.

One of the best parts of component-based design, in my opinion, and one of the purposes behind one of the first well-described systems (Scott Bilas' FUBI from Dungeon Siege) is that it lends itself incredibly well to data-driven object blueprinting and definition.

I used a component based system for Caves of Qud for this reason, knowing that I'd want a massive variety of objects with varying properties. A component based architecture allows me to easily specify and specialize objects in data with no code changes at all.

For instance, I could take a simple wooden buckler:

code:
  <object Name="Wooden Buckler" Inherits="BaseShield">
    <part Name="Render" DisplayName="wooden buckler"></part>
    <part Name="Shield" AV="1" DV="-1" RF="0" WornOn="Arm"></part>
    <part Name="Description" Short="A small wooden shield meant to be strapped to the arm, leaving the hands free."></part>
    <part Name="Commerce" Value="5"></part>
    <tag Name="Tier" Value="0"></tag>
    <part Name="Physics" Weight="5"></part>
  </object>
and turn it into a metal buckler by adding a new component ("part" in my nomenclature) that handles metal-based physics (the "Metal" part) and changing some data values:

code:
  <object Name="Iron Buckler" Inherits="BaseShield">
    <part Name="Render" DisplayName="iron buckler"></part>
    <part Name="Shield" AV="2" DV="-3" RF="0" WornOn="Arm"></part>
    <part Name="Description" Short="A small iron shield meant to be strapped to the arm, leaving the hands free."></part>
    <part Name="Commerce" Value="10"></part>
    <part Name="Metal"></part>
    <tag Name="Tier" Value="1"></tag>
    <part Name="Physics" Weight="9"></part>
  </object>
I use reflection for all component creation, so when I say ObjectFactory.CreateObject("Iron Buckler"), the system goes through each part tag, instantiates a class whose name matches the Name attribute of that tag, and then via reflection sets each public property whose name matches one of the remaining attributes to the value of that attribute. After the values for that part are set, it adds the 'part' to the GameObject (which is basically just a container for holding parts and sending them messages). Once all the parts have been added, I fire an "ObjectCreated" message so parts can do any post-creation initialization, and then the object is returned from the factory, ready to use.

Simple, and really effective.
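
A stripped-down sketch of that factory loop (GameObject, IPart, and the XML handling here are simplified stand-ins, not the real thing):

code:
GameObject CreateObject(XElement blueprint)
{
    var obj = new GameObject();
    foreach (XElement partTag in blueprint.Elements("part"))
    {
        // Instantiate the class whose name matches the Name attribute...
        string typeName = (string)partTag.Attribute("Name");
        var part = (IPart)Activator.CreateInstance(Type.GetType("Parts." + typeName));

        // ...then set each public property matching a remaining attribute.
        foreach (XAttribute attr in partTag.Attributes())
        {
            if (attr.Name == "Name") continue;
            PropertyInfo prop = part.GetType().GetProperty(attr.Name.LocalName);
            if (prop != null)
                prop.SetValue(part, Convert.ChangeType(attr.Value, prop.PropertyType), null);
        }
        obj.AddPart(part);
    }
    obj.SendMessage("ObjectCreated");  // post-creation init hook
    return obj;
}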

E: Random FYI, the <tag> part is a per-blueprint (Iron Buckler) property that is stored at a global level, so it doesn't take memory per-object.

Unormal fucked around with this message at 22:02 on Sep 13, 2011

The Fool
Oct 16, 2003


C#/XNA question.

What kind of pitfalls do I need to be aware of if I use Texture2D.FromStream() instead of ContentManager.Load() for loading .png files?

Unless there's a better way to look at a folder that holds an unspecified number of PNGs and corresponding XML files, and load them up as whatever sprites the XML file specifies.

edit: XNA 4.0 doesn't have FromFile anymore

The Fool fucked around with this message at 02:01 on Sep 14, 2011

AntiPseudonym
Apr 1, 2007
I EAT BABIES

:dukedog:

Your Computer posted:

I don't really "get" component systems (because I've never seen them used, and never tried it myself), but I don't see how it's easier to do string parsing and error-checking and then delegate method calls to different components instead of just subclassing and calling, say, 'player.takeDamage(5)'.

It'd be fun to try a system like that though. Any good tutorials (and not just talks about what it is/why it's the best/XNA)?

I use a component system, but I don't use strings for anything except saving, loading, and displaying in the editor; all of the logic takes place through standard code. I don't think using strings to set data from within code is a good idea at all, honestly. It means that all of the niceties we get with modern IDEs (code completion, hover variable inspection, etc.) become either harder to use or just flat-out useless. Not to mention the obvious processing overhead of constantly parsing and constructing strings, which is completely pointless when you consider that a lot of the time you're just going from typed data to string and back to typed data that already exists.

I've got several templated helper functions (in C++ land here) that allow me to query entities for specific components and create them if they don't exist. When a component is added to an entity, it either gets or creates the components it needs and caches them, and then I just use the components like standard members. I've never really hit a problem with this system.
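
In C# terms (the original being templated C++, and these names invented), the get-or-create helper amounts to something like:

code:
// Components call this once when attached, then cache the result
// and use it like a standard member.
T GetOrCreate<T>(Entity e) where T : Component, new()
{
    T c = e.GetComponent<T>();
    if (c == null)
    {
        c = new T();
        e.AddComponent(c);
    }
    return c;
}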

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

poemdexter posted:

With your components you have actions, right? You should probably discuss both. My setup is similar to yours, but my game is a roguelike so there's no need for much physics.
I didn't do actions--I think. I guess we should make sure I understand the nomenclature here: by "actions," I assume you mean code that is applied to an entity and does whatever it wants with the components. I think with everything I have in place I could, but it didn't really cross my mind as a thing to do. Instead, I'm doing stuff with signals and requests. When components are added to an entity, they register the signals they care about and the requests they can accommodate. So eventually another component may request a signal, and the servicing component gets notified with the request ID and sends back a message containing the relevant data.

HumanControl works particularly with signals, with the idea that I could swap it out for an AIControl component that would control the entity using the exact same signals, and all the components listening for them would be none the wiser.
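
Very roughly, with invented names, the wiring looks like:

code:
// Hedged sketch of the signal/request wiring described above.
// When a component is attached, it declares what it listens for
// and which requests it can answer.
void OnAttach(Entity entity)
{
    entity.ListenFor("Move", this);     // signals this component cares about
    entity.Serve("GetPosition", this);  // requests it can accommodate
}

// Later, any component can ask without knowing who answers:
// Vector3 pos = (Vector3)entity.Request("GetPosition");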

Bizarro Buddha
Feb 11, 2007

The Fool posted:

C#/XNA question.

What kind of pitfalls do I need to be aware of if I use Texture2D.FromStream() instead of ContentManager.Load() for loading .png files?

Unless there's a better way to look at a folder that holds an unspecified number of PNGs and corresponding XML files, and load them up as whatever sprites the XML file specifies.

edit: XNA 4.0 doesn't have FromFile anymore

I think the only issue is that you won't be able to do that on XBox 360, because you can't access random parts of the filesystem there.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

Rocko Bonaparte posted:

I didn't do actions--I think. I guess we should make sure I understand the nomenclature here: by "actions," I assume you mean code that is applied to an entity and does whatever it wants with the components. I think with everything I have in place I could, but it didn't really cross my mind as a thing to do. Instead, I'm doing stuff with signals and requests. When components are added to an entity, they register the signals they care about and the requests they can accommodate. So eventually another component may request a signal, and the servicing component gets notified with the request ID and sends back a message containing the relevant data.

HumanControl works particularly with signals, with the idea that I could swap it out for an AIControl component that would control the entity using the exact same signals, and all the components listening for them would be none the wiser.

Yah, I think we're on the same page. I actually have an action class that all actions extend but they pretty much operate in the same fashion as your signals and requests.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.

The Fool posted:

C#/XNA question.

What kind of pitfalls do I need to be aware of if I use Texture2D.FromStream() instead of ContentManager.Load() for loading .png files?

Unless there's a better way to look at a folder that holds an unspecified number of PNGs and corresponding XML files, and load them up as whatever sprites the XML file specifies.

edit: XNA 4.0 doesn't have FromFile anymore

It will be slower than the normal pipeline because it does its processing at runtime rather than at compile time, and FromStream doesn't support some features like DXT compression, mipmapping, or color-key transparency.

I don't really know how to do that kind of loading otherwise, though. I suppose you could try to import some of the custom content processor classes into your game project, instead of letting them live in the content node like usual, to get access to the advanced features at runtime. It would still be slower than pre-processing and you'd have to clean up the generated .xnb files when you were done. This is just speculation; I haven't tried it or seen it done before.
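
For reference, the basic runtime load would look something like this (a sketch based on the folder scan you described, not tested code):

code:
// Scan a folder and load each .png at runtime; the folder layout and
// sprites dictionary are just illustrative.
foreach (string path in Directory.GetFiles("Content/Sprites", "*.png"))
{
    using (FileStream stream = File.OpenRead(path))
    {
        Texture2D tex = Texture2D.FromStream(GraphicsDevice, stream);
        sprites[Path.GetFileNameWithoutExtension(path)] = tex;
    }
}
// Another caveat: FromStream doesn't premultiply alpha the way the content
// pipeline does, so translucent PNGs look wrong under SpriteBatch's default
// BlendState.AlphaBlend unless you premultiply the pixel data yourself.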

FlyingDodo
Jan 22, 2005
Not Extinct
I have changed how I'm calculating texture coordinates; it has improved, but it still isn't right. I have two screenshots, one from me and one from the quake3 level editor.

http://imgur.com/a/xTnCg

The top image is mine; it has one incorrect texture alignment that stands out, but if you compare it to the quake3 editor screenshot you will notice more inconsistencies.

One thing I noticed is that reversing a texture axis can fix an incorrect-looking texture, but will break others that looked correct before. My only theory so far is that how I get the initial axes differs from how the quake3 editor does it; sometimes they match and sometimes they don't, giving incorrect results.

I get the texture axes for a polygon like this:

code:
		MathLib::Plane p(normal,0);
		p.orthoBasis(&m_texX,&m_texY);	

		m_texX = m_rotation.rotatePoint(m_texX);
		m_texY = MathLib::Vector3::ZERO - m_rotation.rotatePoint(m_texY);
and calculate the texture coordinates like this:

code:
	double s = ((position.dot(m_texX) / m_dimensions.getX()) / m_scale.getX()) + (m_translation.getX() / m_dimensions.getX());
	double t = -((position.dot(m_texY) / m_dimensions.getY()) / m_scale.getY()) - (m_translation.getY() / m_dimensions.getY());

Without the negatives in there the texture coordinates end up even worse, with some upside down and even more shifted to the wrong place.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
The way Quake 3 calculates brush vectors is based on some extreme legacy code and does not work intuitively.

Take a look at the following functions:

http://www.nanobit.net/doxy/quake3/q3map_2map_8c-source.html#l00680
http://www.nanobit.net/doxy/quake3/textures_8c-source.html#l00075
http://www.nanobit.net/doxy/quake3/surface_8c-source.html#l00062

FlyingDodo
Jan 22, 2005
Not Extinct
That looks very useful, thank you. Hopefully I can get this working properly. I've had this program working for a while except for the texture coordinates, and it has been bugging me for a very long time.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?
Ok, I feel like an idiot... this should be a simple problem:

In a shader, I want to effectively render out the texture map. Per-pixel, I want to render out normal/positional/etc. data. Imagine I'm just trying to make a shader that calculates the normals and writes out the normal map of a target object, which I then save to disk - that covers the core of the problem.


No matter how I set the shader up, I see nothing. I thought I had a pretty good handle on screen-space, but clearly, I'm nowhere near the mark in this case.

It seems like I should be able to do this without any camera or projection matrix whatsoever - I have the U, I have the V, and I have the texture dims. So in the vertex shader, I output the position (U * width, V * height, Z, W)... but that doesn't work. I fiddled with Z and W, tried values ranging from sub-1.0 to 10 to 100, thinking I'd botched the math... nothing. I tried throwing in an orthographic projection matrix with the texture dims, etc., to try to make sure the Z and W were within range, and still nothing.

What am I missing / screwing up?

Shalinor fucked around with this message at 19:49 on Sep 14, 2011

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shalinor posted:

It seems like I should be able to do this without any camera or projection matrix whatsoever - I have the U, I have the V, and I have the texture dims. So in the vertex shader, I output the position (U * width, V * height, Z, W)... but that doesn't work.
I'm having a hard time telling what you're trying to do. What do you mean by "render out the texture map?"

A couple possible pitfalls:
- Texture rectangles do not use normalized texture coordinates, they use coordinates from 0..(dimension-1)
- Screen coordinates are -1..1, not 0..1
- The W coordinate is not used by pixel shaders unless you're using a projected texture lookup (i.e. tex2Dproj). If you're using a non-projected lookup (i.e. tex2D), you need to divide S/T by W.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

OneEightHundred posted:

I'm having a hard time telling what you're trying to do. What do you mean by "render out the texture map?"

A couple possible pitfalls:
- Texture rectangles do not use normalized texture coordinates, they use coordinates from 0..(dimension-1)
- Screen coordinates are -1..1, not 0..1
- The W coordinate is not used by pixel shaders unless you're using a projected texture lookup (i.e. tex2Dproj). If you're using a non-projected lookup (i.e. tex2D), you need to divide S/T by W.
I have an object with UV coords, a tree. I want to render out its normal map. To do this, I render to a square off-screen surface, and I need to, per-vertex / in the VS, output screen coordinates equivalent to its input UVs. In another member in the output struct, I will also output the per-vertex normal. I will then, per-pixel / in the PS, calculate the per-pixel normal and draw it out.

The resultant render buffer will then be saved to disk. When I open it, I should see the normal map for the tree.


... and I realize that screen coordinates are offset, but that's the thing: I should then at least be seeing a quarter of the result texture with the expected data in place, at which point I'd realize the problem, adjust, bam. Instead I see, literally, nothing. The best I can figure is that my coordinate calculations are an order of magnitude off (but U * tex_width / V * tex_height really, truly should map into the right space), or my Z is completely off and the result is being clipped.

EDIT: Am I completely off base in thinking that (U * 512, V * 512, 1, 1) should work pretty much regardless? With maybe some tweaking of the Z to 10, depending on how the near plane is set?

Shalinor fucked around with this message at 20:12 on Sep 14, 2011

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shalinor posted:

EDIT: Am I completely off base in thinking that (U * 512, V * 512, 1, 1) should work pretty much regardless? With maybe some tweaking of the Z to 10, depending on how the near plane is set?
Are these the final values you're outputting from the vertex shader, or the values you're feeding into a matrix multiply?

Normally converting from texture UV to screen space is just float4(uv*2 - 1, 0, 1)

edit: -1, not +1

OneEightHundred fucked around with this message at 20:46 on Sep 14, 2011

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

OneEightHundred posted:

Are these the final values you're outputting from the vertex shader, or the values you're feeding into a matrix multiply?

Normally converting from texture UV to screen space is just float4(uv*2 - 1, 0, 1)

edit: -1, not +1
This is the final position being output from the vertex shader, yes.

And - that's exactly what I thought.

... dammit. I think the problem here may actually be that my data-set is totally corrupt. I'm working with an old test asset here that may have totally garbage UV data, or negative UV coords, etc. Trying other objects now, and it looks like some of them at least work better.

Thanks. I knew I was too close to the problem.

FlyingDodo
Jan 22, 2005
Not Extinct

OneEightHundred posted:

The way Quake 3 calculates brush vectors is based on some extreme legacy code and does not work intuitively.

Take a look at the following functions:

http://www.nanobit.net/doxy/quake3/q3map_2map_8c-source.html#l00680
http://www.nanobit.net/doxy/quake3/textures_8c-source.html#l00075
http://www.nanobit.net/doxy/quake3/surface_8c-source.html#l00062

Well, now it works, although I don't quite understand what on earth is going on in the quake3 code. I pretty much had to do a copy-paste. I suppose there isn't much choice, though: if the texture coordinates have to be calculated like that, that's just how it has to be.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
What on earth is going on is mainly that textures default to axial projections based on the surface normal. They probably did it so that shallow-angled surfaces remained contiguous by default.
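
The gist of that default, as a rough sketch rather than the actual idTech source:

code:
// Pick whichever of the six cardinal directions the face normal lines
// up with best, and use that axis's fixed U/V vectors as the texture basis.
int best = 0;
float bestDot = 0f;
for (int i = 0; i < 6; i++)
{
    float d = Vector3.Dot(normal, baseAxes[i].Normal);
    if (d > bestDot) { bestDot = d; best = i; }
}
Vector3 texX = baseAxes[best].U;  // neighbouring shallow-angled faces pick the
Vector3 texY = baseAxes[best].V;  // same axis, so textures stay contiguous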

There's very little reason to deal with .map unless you're writing an editor, though; using the compiled .bsp files is staggeringly easier since everything is pretty much calculated out and ready to go.


As an aside, I kind of want to slap the Blender Foundation. It had what was more or less a good non-commercial alternative to Beast, then removed Radio in 2.5, while both Yafray and Luxrender consider baking low-priority. :sigh:

OneEightHundred fucked around with this message at 20:29 on Sep 15, 2011

unSavory
Sep 26, 2004
fellow
Here's an update on the music-based FPS I'm working on. Getting all the audio sorted out, along with a few other nifty things.

http://www.youtube.com/watch?v=bCLRUS-vOn4

Morham
Jun 4, 2007

unSavory posted:

Here's an update on the music-based FPS I'm working on. Getting all the audio sorted out, along with a few other nifty things.

http://www.youtube.com/watch?v=bCLRUS-vOn4

This is really cool; I can see myself picking and choosing which enemies to kill based on their beats! :D Are you going to blend the effects in a bit? I notice it jars a bit when you hit the ground again after jumping, ending up with a little distortion as the low end kicks back in. I much prefer the EQ effects to the slowdown you had in the original demo for crouching and stuff.

Paniolo
Oct 9, 2007

Heads will roll.

unSavory posted:

Here's an update on the music-based FPS I'm working on. Getting all the audio sorted out, along with a few other nifty things.

http://www.youtube.com/watch?v=bCLRUS-vOn4

This is pretty awesome; I hope you can find some really good gameplay to integrate with the music idea. I think you're going to want to focus on fairly short levels or else it'll get overwhelming. The concept is great, though.

edit: A while back I had an idea for combining a roguelike with DOOM-like shooter gameplay. I'll bet your music concept would go fantastically well with that.

Paniolo fucked around with this message at 17:04 on Sep 17, 2011

unSavory
Sep 26, 2004
fellow
Yeah it sounds like it would.

The effects and stuff will be blended and smoothed out as I move along. Just getting the basic premise down at the moment.

Vino
Aug 11, 2010
I thought it was in Unreal? You switched to Unity? Or was I mistaken?

If you don't mind me throwing ideas at you: it seems to me like there should be some kind of "sample inventory" for the player. Each time you kill an enemy, a sample is added to your inventory. Then you choose which parts of the mix you want active, and that helps you kill certain enemies. For example, you have these categories:

Bass
Snare/hat
Synth
Melody lead

and so on, and perhaps you need a certain kind of bass to kill this certain kind of enemy, or something like that. So you need to collect samples from some enemies to kill others, and mix and match on the fly, creating different songs as you go.

Anyhow you're doing great, keep it up!

unSavory
Sep 26, 2004
fellow
It was Unreal, but UDK's audio management is loving awful. Just by switching to Unity so that I can code in a language I know rather than UnrealScript, I've gotten ten times farther in a few days than I did over weeks of pushing through UDK.

On the sample-bank: great minds think alike :)

The way I have it laid out in my sketches and designs is a 9-track sample bank, corresponding to the 1-9 keys at the top of the keyboard or on the numpad. Each sample you pick up is added to one of the slots and carries a different ability or attribute based on the type of enemy it came from and the timbre of the sound (say, high-timbre sounds make you run faster, while low-timbre sounds make your weapons stronger, etc.).

If you have all 9 slots filled, you can't take on any more and therefore can't kill a loop-based enemy (some are just one-shots and sound effects), so you have to drop one by pressing a number key, opening up that slot for a new track.

Doing it this way encourages strategy because oh-god-i-need-another-slot-theyre-everywhere but if you drop that weapon upgrade you might not be able to take them all on and if you drop that speed upgrade they're going to be coming at you even faster. etc etc etc
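
The bank itself is basically just a little fixed array; a sketch with invented names:

code:
class SampleBank
{
    private Sample[] slots = new Sample[9];  // keys 1-9

    public bool TryPickUp(Sample s)          // false = bank full, so loop-based
    {                                        // enemies can't be killed yet
        for (int i = 0; i < slots.Length; i++)
            if (slots[i] == null) { slots[i] = s; return true; }
        return false;
    }

    public void Drop(int key)                // a number key frees its slot
    {
        slots[key - 1] = null;
    }
}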

There is background story stuff that explains the reasoning behind the system, but I'll save that for now.

Thanks for the comments dudes.

Morham
Jun 4, 2007
I fully expect guns to fire in rhythms...the salsa shotgun, or the rumba rocket launcher. A maybe pointless but amusing idea.


The Cheshire Cat
Jun 10, 2008

Fun Shoe
I don't really have much to say about the UDK vs. Unity issue, since I don't know enough about them to have an informed opinion, but I just wanted to mention that the game idea looks really awesome and I'm looking forward to seeing more progress on it.
