devilmouse
Mar 26, 2004

It's just like real life.
We finally got around to writing up how we use Haxe with Unity for maximum cross-platform development fun-times. It's not exactly straightforward, but once it's set up, it's pretty great for doing anything client/server.

http://blog.proletariat.com/post/63641237563/free-hugs

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

Lork posted:

Does anybody have any tips/advice for getting bullets/projectiles to line up with your crosshairs in a third person game? Right now I'm spawning bullets from the barrel of the character's gun and then rotating them to face the end point of a raytrace from the center of the screen, which sort of works, but can result in situations where the gun seems to miss objects that are right in front of the player, "firing past" them instead.
If I remember correctly, there's no clean way of solving bullets coming from the gun in third person. You always get edge cases if you don't originate bullets from the camera, assuming you use an on-camera reticle. Even using a 3D laser originating from the gun itself is problematic, because you have to dynamically point the gun at whatever you think the player is looking at based on a ray cast from the camera - so you still get into a lot of weird situations where your dude can't actually point the gun in that direction without breaking an arm.

That said, the problems are directly proportional to the delta between the camera ray and the gun ray, so one way of fixing this is to scooch the camera in closer to the shoulder for an aimed fire mode. If your "slow" bullets can only be fired when aimed, that might work. That's also how they made the laser work in RE4 - no pointing the laser at what they thought was the target; they just locked the laser to the camera in an aim mode and let the player figure it out.

The other common trick is to originate bullets from the camera, but don't start drawing them until they're further out, and give the gun a ludicrously big muzzle flash to obscure the difference.

Don't forget that you also need to do a ray cast from gun to target, to catch cases where you'd otherwise be shooting through a corner. If the gun ray hits anything, you shoot that instead.
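
In Unity terms, the whole thing boils down to something like this (untested sketch; "muzzle" and "maxRange" are made-up names):

code:
// Two raycasts: one from the camera to find the aim point, one from the gun
// to catch geometry between the muzzle and that point.
Vector3 GetShotTarget(Transform muzzle, float maxRange)
{
	// 1) What is the player actually looking at? Cast from the screen center.
	Ray camRay = Camera.main.ScreenPointToRay(new Vector3(Screen.width / 2, Screen.height / 2, 0));
	RaycastHit camHit;
	Vector3 target = Physics.Raycast(camRay, out camHit, maxRange)
		? camHit.point
		: camRay.origin + camRay.direction * maxRange;

	// 2) Re-cast from the gun to that point; if you'd clip a corner, shoot the corner.
	Vector3 toTarget = target - muzzle.position;
	RaycastHit gunHit;
	if (Physics.Raycast(muzzle.position, toTarget.normalized, out gunHit, toTarget.magnitude))
		target = gunHit.point;

	return target;
}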

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

Lork posted:

Does anybody have any tips/advice for getting bullets/projectiles to line up with your crosshairs in a third person game? Right now I'm spawning bullets from the barrel of the character's gun and then rotating them to face the end point of a raytrace from the center of the screen, which sort of works, but can result in situations where the gun seems to miss objects that are right in front of the player, "firing past" them instead.

It sounds like the ray you're casting from the screen isn't being cast correctly and is missing the first target, or you're not rotating the bullets to hit exactly the same point that ray does.

As others have mentioned, there are also edge cases involving objects that are ridiculously close to you, or where there's a wall/object between your gun and the screen's ray; although you can just cheese the trajectories if you have to. (If you're shooting a shotgun at point-blank range, it's not like you're going to see the bullets anyway.)

xgalaxy
Jan 27, 2004
i write code

Zaphod42 posted:

It sounds like the ray you're casting from the screen isn't being cast correctly and is missing the first target, or you're not rotating the bullets to hit exactly the same point that ray does.

As others have mentioned, there are also edge cases involving objects that are ridiculously close to you, or where there's a wall/object between your gun and the screen's ray; although you can just cheese the trajectories if you have to. (If you're shooting a shotgun at point-blank range, it's not like you're going to see the bullets anyway.)

Take a look at this code to get an idea of how to change the projectile direction to account for a third-person camera.
The code is from Torque and really specific to that engine, but it should give you sort of an idea of how to tackle the problem.
https://github.com/GarageGames/Torque3D/blob/master/Engine/source/T3D/shapeImage.cpp#L2118

Lork
Oct 15, 2007
Sticks to clorf

Shalinor posted:

That said, the problems are directly proportional to the delta between the camera ray and the gun ray, so one way of fixing this is to scooch the camera in closer to the shoulder for an aimed fire mode. If your "slow" bullets can only be fired when aimed, that might work. That's also how they made the laser work in RE4 - no pointing the laser at what they thought was the target; they just locked the laser to the camera in an aim mode and let the player figure it out.
I'm actually requiring the player to go into an "aim" mode to shoot already, so this seems like the best solution for me.

As it turns out, a lot of the problems I was experiencing were due to the way collision detection is handled in Unity (poorly). The bullets were actually passing through objects when they shouldn't have been, and it looked weird due to the difference in angle between the gun and the camera, but it would've been fine if they had hit when they were supposed to. Fixing that and attaching a primitive "laser" to the gun instead of using crosshairs seems to have mostly resolved the issue. I'd like to try drawing crosshairs or a "targeting cursor" at the point where the gun is aiming someday, but drawing a 2D sprite in Unity appears to be a herculean task for some reason, so I'm just going to stick with the laser sight for now.

xgalaxy posted:

Take a look at this code to get an idea of how to change the projectile direction to account for a third-person camera.
The code is from Torque and really specific to that engine, but it should give you sort of an idea of how to tackle the problem.
https://github.com/GarageGames/Torque3D/blob/master/Engine/source/T3D/shapeImage.cpp#L2118
That seems pretty close to what I'm doing, other than the extra stuff accounting for edge cases.

(In Unity)
code:
// Ray from the center of the screen.
Ray ray = Camera.main.ScreenPointToRay(new Vector3(Screen.width/2, Screen.height/2, 0));
RaycastHit rayHit;
Vector3 aimPoint;

// Aim at whatever the ray hits; if it hits nothing, aim maxDistance out along the ray.
if(Physics.Raycast(ray, out rayHit, maxDistance))
	aimPoint = rayHit.point;
else
	aimPoint = ray.origin + ray.direction * maxDistance;

// Point the bullet at the aim point.
bullet.transform.LookAt(aimPoint);
Unless I'm missing something obvious.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I finally got around to committing this to paper: some rough but thorough notes breaking down how Quake III manages world state, timing, and prediction in network sessions. Its core techniques are still very common in modern games and work well in pretty much any client-server real-time game.

https://www.dropbox.com/s/w5k1ecwiei9chrm/q3net_rev2.pdf

There are some other writeups on it, but most cover low-level transmission. This covers a topic I think isn't covered nearly enough: how that information is converted into what you actually see on your screen in a way that looks good.
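
If you haven't seen the central trick before, the flavor of it: the client renders slightly in the past and blends between the two snapshots that bracket the render time. A bare-bones sketch with invented names (not code from the PDF):

code:
// Snapshot interpolation in miniature; the real thing also handles entity
// creation/removal, dropped snapshots, extrapolation, etc.
struct Snapshot
{
	public float serverTime;
	public Vector3[] positions;
}

Vector3[] InterpolatedPositions(Snapshot older, Snapshot newer, float clientTime, float interpDelay)
{
	// Render ~100ms behind the newest data so two snapshots always bracket renderTime.
	float renderTime = clientTime - interpDelay;
	float t = Mathf.InverseLerp(older.serverTime, newer.serverTime, renderTime);

	Vector3[] result = new Vector3[older.positions.Length];
	for (int i = 0; i < result.Length; i++)
		result[i] = Vector3.Lerp(older.positions[i], newer.positions[i], t);
	return result;
}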

OneEightHundred fucked around with this message at 09:08 on Oct 11, 2013

OzyMandrill
Aug 12, 2013

Look upon my words
and despair

Farchanter posted:

I'm really sorry if this belongs in another thread. I'm doing a really basic single-player Android adventure game, and all of my little guys on the screen are Actors. I'm putting in touch controls so that one particular Actor, the Player, moves around depending on what button is pushed.

I'm curious about best practices here:

Should a control element have a property referring to the player, and it can manipulate the player object directly, like so:

code:
public class UIControl
{
     protected Actor player;
.....
Or would it be better to have the control add any of its information to a data store, which the player Actor then checks against when changing its own position?

Thanks in advance!

I always tend to have the player's object look up the user input state as part of the player's logic update. The UI manager then maintains the current state - for a touch screen this can be an array of touch positions. It is also useful to keep the last positions - so you can tell what the delta was for the last frame, and so you can match touches up to their previous index when the hardware decides to swap multitouch points around. This helps stop the controls going squiffy if, say, the corner of your palm brushes the edge of the screen.

By having the player's object look this up as part of its update, you keep your main loop cleaner (read user input, update simulation, draw, repeat), and your UI code can be written to be reusable in future projects if there is no dependence on the specifics of your game classes. It also allows you to do clever things like replays - just record the UI state and play it back, as long as the game is fully deterministic (i.e. use an isolated random number generator for all the game logic, and record the seed with the replay, so the values come out identical on playback).
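
Roughly, in Unity terms (sketch with invented names; the touch-index matching I mentioned is omitted for brevity):

code:
using UnityEngine;

public class UIManager
{
	public Vector2[] currentTouches = new Vector2[0];
	public Vector2[] lastTouches = new Vector2[0];

	// Called once per frame from the main loop, before the simulation update.
	public void Poll()
	{
		lastTouches = currentTouches;
		currentTouches = new Vector2[Input.touchCount];
		for (int i = 0; i < Input.touchCount; i++)
			currentTouches[i] = Input.GetTouch(i).position;
	}
}

public class Player
{
	public UIManager ui;

	// The player looks the input up itself; the main loop never pushes it in.
	public void LogicUpdate()
	{
		if (ui.currentTouches.Length > 0 && ui.lastTouches.Length > 0)
		{
			Vector2 delta = ui.currentTouches[0] - ui.lastTouches[0];
			// ...steer the player based on delta...
		}
	}
}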

shimmy
Apr 20, 2011
I need some advice.
I have been working on a prototype in Unity but I've hit a big wall. It's a Wipeout-style racing game, but the track needs to be dynamic. I got these tools that can create one from a spline and do so in realtime, but it isn't close to fast enough. Only the track near the player needs to exist, so I have tried only creating that bit, but it's still slow. I have tried separating out the collision mesh and creating one that only extends a few meters from the player, but it doesn't help much.
Now maybe these tools are slow, but I doubt I can do better, and I don't like reinventing the wheel anyway. That's why I liked Unity until now: it was really nice being able to get my vehicle handling set up and tweaked, and to race through a static track, without doing work I could only ever think of as redundant.
But with a static track this game is nothing special, so I do need something different. What are my choices? UDK? Do I need to go outside my comfort zone and use geometry shaders or sorcery like that? (Can I do geometry shaders with Unity free? Would collision work?)

Pham Nuwen
Oct 30, 2010



I'm investigating technology for a project. I'm looking for software where kids (K-12) can build models of real-world things like hydroelectric dams, locks (like for shipping), train tracks, etc., and add dynamic actions. So one kid might decide to make streetlights, and when you press the virtual crosswalk button, the light changes and the WALK sign turns on.

I've looked at OpenSim, a Second Life clone, for this, but the lack of any decent water physics makes a lot of these ideas (hydro dam with turbines, shipping simulation, etc) more of a pain in the rear end. Otherwise, though, the in-game object creation and scriptability makes things really nice; I was able to prototype some moving gates and such within 15 minutes, and I think that sort of thing is within the grasp of kids.

Does anyone know of something similar-ish, but ideally with better water/other physics? I looked at Garry's Mod, but it doesn't seem to support flowing water? I also considered Minecraft, but it's not really scriptable enough; you have to get super spergy with that half-assed circuit stuff, rather than just saying "when button 3d51fc is pressed, set light 87a51 state to ON".

Sorry if this question sounds pretty clueless, because I am. I don't have any experience with game development, although I am experienced with programming in general. I don't mind doing more hard-core setup work on the back-end, so long as I can eventually provide relatively simple building blocks and scripting capabilities to the kids.

Edit: Ooh, forgot, free is better. Getting a school to pay for 30 Minecraft licenses... nawwww

Centripetal Horse
Nov 22, 2009

Fuck money, get GBS

This could have bought you a half a tank of gas, lmfao -
Love, gromdul

shimmy posted:

I need some advice.
I have been working on a prototype in Unity but I've hit a big wall. It's a Wipeout-style racing game, but the track needs to be dynamic. I got these tools that can create one from a spline and do so in realtime, but it isn't close to fast enough. Only the track near the player needs to exist, so I have tried only creating that bit, but it's still slow. I have tried separating out the collision mesh and creating one that only extends a few meters from the player, but it doesn't help much.
Now maybe these tools are slow, but I doubt I can do better, and I don't like reinventing the wheel anyway. That's why I liked Unity until now: it was really nice being able to get my vehicle handling set up and tweaked, and to race through a static track, without doing work I could only ever think of as redundant.
But with a static track this game is nothing special, so I do need something different. What are my choices? UDK? Do I need to go outside my comfort zone and use geometry shaders or sorcery like that? (Can I do geometry shaders with Unity free? Would collision work?)

Which tool are you using? I've seen this done in realtime (not just at runtime, which seems to be what you meant in your post).

I am fiddling with an infinite runner right now. The solution I am using is to pre-design dozens of different sections of the world, and to have those sections randomly spawn obstacles and whatnot each time a section is activated and added to the world. You could pre-design many pieces of track, but you seem to be avoiding that.

I don't know which tool you're using, or its workflow, or the resources each piece of track eats, but would it be possible to create hundreds of random track pieces, then keep them waiting until it's time to texture and activate them?

Alternatively, could you break creation down into small enough pieces that a constantly running coroutine creating track could keep up, at the cost of one or two FPS?
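
To be concrete, I mean something in this vein (untested sketch, invented names):

code:
using System.Collections.Generic;
using UnityEngine;

public class TrackPool : MonoBehaviour
{
	public GameObject[] piecePrefabs;
	private Queue<GameObject> ready = new Queue<GameObject>();

	void Start()
	{
		// Build a few hundred random pieces up front and keep them deactivated.
		for (int i = 0; i < 300; i++)
		{
			GameObject piece = (GameObject)Instantiate(piecePrefabs[Random.Range(0, piecePrefabs.Length)]);
			piece.SetActive(false);
			ready.Enqueue(piece);
		}
	}

	// Texture/activate a waiting piece only when it's about to become visible.
	public GameObject NextPiece(Vector3 position, Quaternion rotation)
	{
		GameObject piece = ready.Dequeue();
		piece.transform.position = position;
		piece.transform.rotation = rotation;
		piece.SetActive(true);
		return piece;
	}
}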

shimmy
Apr 20, 2011

Centripetal Horse posted:

Which tool are you using? I've seen this done in realtime (not just at runtime, which seems to be what you meant in your post).

I am fiddling with an infinite runner right now. The solution I am using is to pre-design dozens of different sections of the world, and to have those sections randomly spawn obstacles and whatnot each time a section is activated and added to the world. You could pre-design many pieces of track, but you seem to be avoiding that.

I don't know which tool you're using, or its workflow, or the resources each piece of track eats, but would it be possible to create hundreds of random track pieces, then keep them waiting until it's time to texture and activate them?

Alternatively, could you break creation down into small enough pieces that a constantly running coroutine creating track could keep up, at the cost of one or two FPS?
I have Megashapes http://www.west-racing.com/mf/?page_id=2208

Unless I'm wrong, runtime means building it before playing it, like Audiosurf, and realtime means rebuilding it during gameplay in one or a few frames' worth of time.
You're right, I am avoiding ideas like the ones you suggested. I guess I should explain the game a bit more. The idea is basically Audiosurf, but instead of going along at a speed set by the game, you're playing Wipeout. I need constant realtime rebuilding because, unlike Audiosurf, I can't know where the player is, but I still want the track to react to the music. It would have to rebuild every frame to be fast enough for that. Even skipping one frame would probably be too slow when the average speed is 700kph.
Reacting to music in this way also means the track has to change where the player can see it happen. Splines are smooth, so that's OK, but I think prebuilt track components would be too crude.

FuzzySlippers
Feb 6, 2009

Doing bullets as objects for collision detection will always tend to gently caress up due to the speed of the projectiles, so even with slow combat (melee) I like to back up the collision detection with raycasts. Since I wanted bullets to be visible in slowmo, I use a raycast/collision combination. The raycast determines a hit point; the bullet is sent to that hit point as a straight move with no physics; once it reaches the end point, it turns off its renderer and hangs out for a few extra update cycles to check for collisions. This has worked out well enough for me, but if you had faster-moving players or something, you could do raycasts from the bullet en route to determine hits as well. Either way, relying on OnCollisionEnter or whatever for anything fast-moving is going to suck.
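
Roughly like this (Unity-flavored sketch, invented names):

code:
using UnityEngine;

public class Bullet : MonoBehaviour
{
	public float speed = 100f;
	private Vector3 endPoint;
	private int lingerFrames;

	public void Fire(Vector3 origin, Vector3 direction, float maxRange)
	{
		// The raycast decides the hit point up front...
		RaycastHit hit;
		endPoint = Physics.Raycast(origin, direction, out hit, maxRange)
			? hit.point
			: origin + direction * maxRange;
		transform.position = origin;
	}

	void Update()
	{
		// ...and the visible bullet just moves straight toward it, no physics.
		transform.position = Vector3.MoveTowards(transform.position, endPoint, speed * Time.deltaTime);
		if (transform.position == endPoint)
		{
			// Hide the bullet, then linger a few updates so trigger checks still run.
			renderer.enabled = false;
			if (++lingerFrames > 3)
				Destroy(gameObject);
		}
	}
}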

Flownerous
Apr 16, 2012

shimmy posted:

I need some advice.
I have been working on a prototype in Unity but I've hit a big wall. It's a Wipeout-style racing game, but the track needs to be dynamic. I got these tools that can create one from a spline and do so in realtime, but it isn't close to fast enough. Only the track near the player needs to exist, so I have tried only creating that bit, but it's still slow. I have tried separating out the collision mesh and creating one that only extends a few meters from the player, but it doesn't help much.
Now maybe these tools are slow, but I doubt I can do better, and I don't like reinventing the wheel anyway. That's why I liked Unity until now: it was really nice being able to get my vehicle handling set up and tweaked, and to race through a static track, without doing work I could only ever think of as redundant.
But with a static track this game is nothing special, so I do need something different. What are my choices? UDK? Do I need to go outside my comfort zone and use geometry shaders or sorcery like that? (Can I do geometry shaders with Unity free? Would collision work?)

Do you know which part exactly is too slow? E.g. is it in the script code where it's updating the vertices, or within Unity when you set the vertices on the mesh? And is it the collision mesh or the rendering mesh that's causing problems? Until you've profiled to find the slow bit, it's hard to speculate; there are a lot of different things that will be very slow if not done "right" when dealing with dynamic meshes and collisions. In theory it's nothing Unity can't handle about as well as most other engines.

Flownerous fucked around with this message at 00:52 on Oct 12, 2013

shimmy
Apr 20, 2011
I actually don't know. It's not my code, and I'm sure the guy who wrote it is way better than me; I don't think there's anything I can do to make it run 10-20x faster, which is what I need.
I did think Unity was slower than other engines (could be I played the wrong demos), but I'm not even sure any engine could do it, which is why I was already thinking about geometry shaders.

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
I just spent like 8 hours total debugging a graphics issue. I have never even HEARD of glBlendFuncSeparate in any tutorial and didn't expect it to solve my issue.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

Orzo posted:

I just spent like 8 hours total debugging a graphics issue. I have never even HEARD of glBlendFuncSeparate in any tutorial and didn't expect it to solve my issue.
Graphics bug rabbit holes are the best rabbit holes :hfive:

(I'm 8 hours in on trying to do layered rendering in Unity's deferred path... turns out it isn't flexible enough for what I had in mind, so have to refactor my rendering code to brute force it instead :suicide:)

Chunderstorm
May 9, 2010


legs crossed like a buddhist
smokin' buddha
angry tuna
Hey all, I've got a problem with OnTriggerEnter not detecting when it should.



Basically, I've got my "monster" player spitting out a collision gameObject to detect other players. It sends out the object like so:
code:
if (!attackStarted && Input.GetKeyDown(KeyCode.Space))
{
	attackObjClone = Instantiate(attackObject, attackBeginPos.transform.position, playerModel.transform.rotation) as GameObject;
	attackObjClone.transform.parent = playerModel.transform;
	Destroy(attackObjClone, attackTime);
	attackStarted = true;
}

if (attackStarted)
{
	if (attackObjClone != null)
	{
		attackObjClone.transform.position = Vector3.MoveTowards(attackObjClone.transform.position, attackTarget.transform.position, attackSpeed * Time.deltaTime);
	}
	else
	{
		attackStarted = false;
	}
}
Normally this works just fine, and when it hits the Warrior player, he speeds off, sometimes hits an object and reacts to the physics, whatever:



However, if the Warrior hits an object and bounces back into the Monster's range, and neither player moves before the Monster attacks again, the collision will go straight through. This also happens if multiple hits are detected while the Warrior is shielding, which flat-out destroys the Monster's attack object.



My code for checking the Trigger is as follows:
code:
void OnTriggerEnter(Collider other)
{
	Debug.Log (isCurrentlyAttacked);

	if (other.gameObject.tag == "monsterAttack")
	{
		if (!isCurrentlyAttacked && !shielding)
		{
			// we have a hit
			isCurrentlyAttacked = true;
			
			// find monster's position in relation to me, create direction out of it
			GameObject monsterObj = GameObject.FindGameObjectWithTag("monster");
			Vector3 monsterPos = monsterObj.transform.position;
			Vector3 heroPos = this.gameObject.transform.position;
			this.gameObject.rigidbody.velocity = Vector3.zero;
			float heroYPos = heroPos.y;
			knockbackDirection = monsterPos - heroPos;
			knockbackDirection.Normalize ();
				
			// multiply direction, apply force, reset knockbackDirection
			knockbackDirection *= -35;
			knockbackDirection = new Vector3(knockbackDirection.x, heroYPos, knockbackDirection.z);
			this.gameObject.rigidbody.AddForce(knockbackDirection, ForceMode.Impulse);
			knockbackDirection = Vector3.zero;
		}
		if (shielding)
		{
			Destroy(other.gameObject);
		}
	}
}
And I set isCurrentlyAttacked to false after the Warrior has been pushed back far enough:
code:
void CountHitCooldown()
{
	wasHitCooldown += Time.deltaTime;
	if (wasHitCooldown >= stunnedTime)
	{
		wasHitCooldown = 0.0f;
		isCurrentlyAttacked = false;
	}
}
Sorry if this doesn't make sense, I'm happy to clarify anything. I've been at this for hours, asked friends for help, and haven't found anything worthwhile on Google.

e: vvvvv
Sounded reasonable, but no dice. :/

Chunderstorm fucked around with this message at 05:29 on Oct 14, 2013

Obsurveyor
Jan 10, 2003

If I understand correctly, it's because OnTriggerEnter only fires when a physics object enters the volume of another. If the two volumes never separate from each other, which is what it sounds like is happening, no subsequent OnTriggerEnter calls are fired. OnTriggerStay, though, is called every frame the volumes remain in contact.
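
In other words, something like this (sketch; HandleHit is a hypothetical shared handler for the logic in your OnTriggerEnter):

code:
void OnTriggerStay(Collider other)
{
	// Fires every frame the volumes overlap, so it catches the
	// already-overlapping case OnTriggerEnter misses.
	if (other.gameObject.tag == "monsterAttack" && !isCurrentlyAttacked && !shielding)
		HandleHit(other);
}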

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?
Bringing this in from the Kickstarter thread, as per Orzo's request...

Alright. Let's talk palette rendering.

To the best of my knowledge, the only way of doing this in today's world is as follows:

1.) Pick your palette, and store it in a 1D texture, 1 pixel per color.
2.) Render everything in your game with an ID buffer. That is, per pixel, you don't render a color - you render an ID. This ID is a coordinate into the 1D texture that indexes a specific color. It either has to be crazy accurate, or you need a very big palette buffer to avoid cross-sampling colors.
3.) Switch the ID buffer to being your source buffer; per pixel, read in the ID, use it to offset into the 1D palette texture, sample the color, and render the final color.

For this to work, you'd have to turn off fog, lighting of any kind, anything that could distort the ID buffer. At least for that pass of rendering. You might even need to turn off anisotropic filtering, not entirely sure. You'd also either end up using one of the exotic buffer types, or only using a single channel - if the latter, you could probably find something creative to stuff into the other 3 channels. Kind of a mini deferred shader.

Another way (the way I originally had in mind) would be to use RGB values instead of ID values. In THAT case, you'd actually store your palette as a CLUT (color look-up table), and think of your palette as less an absolute set, and more a mapping of standard coloration to palette space. Or you can make your palette texture 2D, etc, there's a range of options here.

... in any case, does anyone know of a better way? I kind of want to experiment with this - with palette rotation effects especially - but it's only doable if it isn't stupid expensive. In this case, I think you could actually get away with it pretty well if you double-buffered the ID buffer. Still, maybe there's a better way.

EDIT: Huh, the 1D version has a bonus. You could do a palette rotation with WRAP sampling and a per-frame increasing U offset. It wouldn't let you rotate only a subset of the colors, but it'd be enough for a lava demo, at least. For anything more complicated, you'd need to have a set of 1D textures, or a 2D texture set up with multiple horizontal bands which you rotate through by incrementing your V offset.
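
For what it's worth, the C# half of step 3 (plus the rotation trick) would look something like this in Unity - untested sketch, and the lookup material/shader and its property names are hypothetical:

code:
using UnityEngine;

public class PalettePost : MonoBehaviour
{
	public Texture2D palette;    // 1 pixel per color, FilterMode.Point, WRAP for rotation
	public Material lookupMat;   // shader samples palette at U = (id * 255 + 0.5) / paletteWidth
	public float paletteOffset;  // increment per frame to rotate the palette

	void OnRenderImage(RenderTexture idBuffer, RenderTexture dest)
	{
		lookupMat.SetTexture("_Palette", palette);
		lookupMat.SetFloat("_Offset", paletteOffset);
		Graphics.Blit(idBuffer, dest, lookupMat); // per-pixel: ID in, final color out
	}
}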

Shalinor fucked around with this message at 05:55 on Oct 14, 2013

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
I might be missing something--I don't see what filtering, fog, lighting, or anything like that has to do with it, the way I was imagining things. At some point in your shader you're doing a tex2d(u,v) call and receiving a raw pixel color from your source texture.

Now, if all of your source images conform to a palette (and if they don't, what are we even talking about?), you can predictably shift based on that. You could either do this globally (like NES does, I think) as a uniform on the shader, or you could pass a swap value in each vertex (should be uniform across a single triangle) for the thing you're rendering to indicate how to shift it in a lookup table.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?

Orzo posted:

I might be missing something--I don't see what filtering, fog, lighting, or anything like that has to do with it, the way I was imagining things. At some point in your shader you're doing a tex2d(u,v) call and receiving a raw pixel color from your source texture.

Now, if all of your source images conform to a palette (and if they don't, what are we even talking about?), you can predictably shift based on that. You could either do this globally (like NES does, I think) as a uniform on the shader, or you could pass a swap value in each vertex (should be uniform across a single triangle) for the thing you're rendering to indicate how to shift it in a lookup table.
What you're rendering are palette indices, encoded as single floating-point values, probably in the R channel or whatever. If you apply filtering, fog, or lighting, it will distort the base data you just rendered for that pixel. What you just rendered isn't a color, it's a solitary piece of data, so adding to or subtracting from it is a completely nonsensical operation unless the thing doing it knows about your palette / how the colors are ordered.

Though anisotropic filtering would only matter if you were doing palette-based 3D. If you're using camera-oriented quads, neither fogging nor filtering is likely to have any effect at all.

As for shift... no, not really. Colors in a palette aren't just quantized R/G/B in order or whatever; they are specific, artist-defined colors. Shifting even 1 ID value off your pixel may get you an entirely different color, and it certainly won't get you a consistently hue-shifted color in a particular direction.

EDIT: Consider the popular DB16 palette. It's a 4/4 (4 colors at 4 shades). Unless your palette was 2D and sorted in one direction by tone and clamped along that axis, shifting your ID along either axis would have unpredictable effects. Though if you did arrange it that way (say colors along U and shades along V, clamped on both), then yes, you could reasonably expect either a U or a V shift to produce a sane, useful result in terms of varying shade.

Shalinor fucked around with this message at 06:12 on Oct 14, 2013

Zizi
Jan 7, 2010

Shalinor posted:

... in any case, does anyone know of a better way? I kind of want to experiment with this - with palette rotation effects especially - but it's only doable if it isn't stupid expensive. In this case, I think you could actually get away with it pretty well if you double-buffered the ID buffer. Still, maybe there's a better way.

I've messed around with this in shaders, and it depends on what aspect of palette rendering you're interested in. If you want the full-scene palette-swapping that, say, Starfox (the first one) used for the engine glow effects, then you need to do something like you're proposing to make it work, I'd think. Ditto for some of the psychedelic full-screen palette-swapping effects and so on.

On the other hand, I was looking at trying to get the effects of palette-swapping similar to, say, Street Fighter character colors. That gives you WAY more options. Personally, I've got a couple of different techniques I've played with.

One used the RGB channels as arbitrary color channels - you'd set three colors in the shader to whatever you want, and it multiplied each channel by its color. This allowed for some interesting blends, and I was using it to produce many different-colored "hot" engine flares with only one texture - where the flare would be red, but have a white or yellow core, for instance.
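
(For the curious, here's the same math done on the CPU over a readable texture - the real version is a one-line multiply-and-add in the shader, and "source", "result", and the tint colors are invented names:)

code:
// R/G/B in the source act as masks for three arbitrary tint colors.
Color[] px = source.GetPixels();
for (int i = 0; i < px.Length; i++)
{
	Color c = px[i];
	px[i] = c.r * tintA + c.g * tintB + c.b * tintC;
	px[i].a = c.a; // keep the original alpha
}
result.SetPixels(px);
result.Apply();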

Another thing I'm playing around with right now is for character customization. The plan is to have one, say, hair texture, convert the RGB values into HSV, do a hue shift, then convert back to RGB - and probably cache the results to a new spritesheet, at least for player characters.

Neither of these is a real palette swap, of course, but the question is "What do I want a palette for, and can I fake it?"

(Also, isn't there a texture format that's basically floating-point greyscale, these days?)

Jewel
May 2, 2009

The way Skullgirls does it: the artists paint solid colors with the pencil tool (no aliasing) under the lineart, using specific "palette" colors - fairly extreme values ((0, 0, 255), (255, 0, 0), etc.), but mostly just "different colors" - which the shader maps to a 1D image (pure red? third pixel across).



The lineart is disabled before saving, and that data is used as the palette (but the lineart is enabled in the pictures you see so it's easier to know what's what).

quote:

I begin to input these color values into the game. Thanks to the amazing depth map engine, creating palettes is less of painting by the numbers and more of an actual illustration approach. Instead of the traditional 2D sprite approach where specific sections of the sprite are strictly designated to one color, the depth map allows me to choose a range of colors that will be shown on a portion of the sprite. Through some techno-wizardry, the colors I pick are represented through some averaging, creating that illustrated look that Skullgirls is known for. Utilizing this, I can decide on the sort of appropriate texture with the color.



With that decided and the basic shades down, next is choosing color shifts for the highlights and darker tones. Sometimes, I feel it’s more appropriate to have cooler highlights and warm shadows. Other times, I think the opposite works better, with redder lights and bluer shades. With the palette engine, I can especially create interesting combinations, such as ambient lighting and core shadows that pop out. But the most important detail that I try to keep in mind is to always maintain a good level of contrast with individual colors and as a character as a whole. This emphasizes good readability, important in a fighting game, and also reflects Alex’s own unique and bright coloring style. Once all the colors are settled in, Alex and I go over the colors once more for some final tweaking and testing, pass it around the office for a quick approval, and then the palettes are imported into the game.


All from http://skullgirls.com/2011/09/a-rather-refined-palette/ and what I've gathered from looking at some streams they've done of drawing the palettes.

The end result is the ability to create lots of really unique palettes that creatively do stuff like add stockings or gloves:




Excuse the breast shots, that's Skullgirls for you :shobon:

Jewel fucked around with this message at 08:54 on Oct 14, 2013

SuicideSnowman
Jul 26, 2003
Can anybody with some Unity3D shader knowledge help me out a little?

Am I understanding correctly that if you have multiple passes in a shader, they will all be run if you have a material using that shader?

If so, is there a way to only run a certain pass until specified otherwise? Similar to how with XNA you loop through all the passes in a shader and only apply the ones you want. Or even something like being able to set the current technique via "CurrentTechnique"? Basically I have a shader that needs to do multiple things, but not all right away.

stramit
Dec 9, 2004
Ask me about making games instead of gains.

Shalinor posted:

(I'm 8 hours in on trying to do layered rendering in Unity's deferred path... turns out it isn't flexible enough for what I had in mind, so have to refactor my rendering code to brute force it instead :suicide:)

How / what are you trying to do exactly?

quote:

1.) Pick your palette, and store it in a 1D texture, 1 pixel per color.
2.) Render everything in your game with an ID buffer. That is, per pixel, you don't render a color - you render an ID. This ID is a coordinate into the 1D texture that indexes a specific color. It either has to be crazy accurate, or you need a very big palette buffer to avoid cross-sampling colors.
3.) Switch the ID buffer to being your source buf, per-pixel read in the ID, use that to offset into the 1D palette texture, sample color, render final color.

Why bother with step 2? Why not just look up the color straight away? If you want to rotate the palette etc., just do a UV offset on the sampler?

stramit fucked around with this message at 14:40 on Oct 14, 2013

stramit
Dec 9, 2004
Ask me about making games instead of gains.

SuicideSnowman posted:

Can anybody with some Unity3D shader knowledge help me out a little?

Am I understanding correctly that if you have multiple passes in a shader, they will all be run if you have a material using that shader?

If so, is there a way to only run a certain pass until specified otherwise? Similar to how with XNA you loop through all the passes in a shader and only apply the ones you want. Or even something like being able to set the current technique via "CurrentTechnique"? Basically I have a shader that needs to do multiple things, but not all right away.

You can use subshaders and shader LOD to do this :)
http://docs.unity3d.com/Documentation/Components/SL-ShaderLOD.html
http://docs.unity3d.com/Documentation/ScriptReference/Shader-maximumLOD.html
http://docs.unity3d.com/Documentation/Components/SL-SubShader.html

If you are using surface shaders you can actually put multiple subshaders in there... it's just not well documented.

code:
Shader "Transparent/Diffuse" 
{
    Properties 
    {
        _Color ("Main Color", Color) = (1,1,1,1)
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
    }
     
    SubShader 
    {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
        LOD 200
     
        CGPROGRAM
        #pragma surface surf Lambert alpha
         
        sampler2D _MainTex;
        fixed4 _Color;
         
        struct Input 
        {
            float2 uv_MainTex;
        };
         
        void surf (Input IN, inout SurfaceOutput o) 
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    Fallback "Transparent/VertexLit"
}
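
Then from script you flip between subshaders by capping the LOD, something like this (untested, but Shader.maximumLOD and Shader.globalMaximumLOD are the relevant knobs):

code:
// Cap the LOD so Unity skips the LOD 200 subshader and falls through
// to the next subshader (or the Fallback).
Shader s = Shader.Find("Transparent/Diffuse");
s.maximumLOD = 100;

// Or cap every shader at once:
Shader.globalMaximumLOD = 100;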

schme
May 28, 2013
I don't know if this has been asked before or not, sorry for that.

Why do most games opt for signed integers instead of unsigned for storing money? If you want the player to be able to have debt, the answer would be obvious, but most games don't allow you to, or it's just not really part of the game. You usually can't buy things when you have no money, and that's it. The only other reason I can think of is to make it more difficult to underflow. I searched for some articles or more insight into this, but I couldn't find much. Am I just wrong in saying that most games use signed integers over unsigned ones?

xgalaxy
Jan 27, 2004
i write code

schme posted:

I don't know if this has been asked before or not, sorry for that.

Why do most games opt for signed integers instead of unsigned for storing money? If you want the player to be able to have debt, the answer would be obvious, but most games don't allow you to, or it's just not really part of the game. You usually can't buy things when you have no money, and that's it. The only other reason I can think of is to make it more difficult to underflow. I searched for some articles or more insight into this, but I couldn't find much. Am I just wrong in saying that most games use signed integers over unsigned ones?

Dunno about other languages, but in C++ the general guideline is to use the appropriate type, and if you don't know what that is, use an int until you do. There is actually very little reason to use unsigned - usually special math or bit fiddling. Prefer named sized types in structures; in C++11 these would be int8_t, int16_t, int32_t, etc. - especially when you get down to optimization, though that kind of optimization is going to be overkill for 99% of indie games.

Math with unsigned gets messy quickly, and you should know beyond doubt that what you're doing can't produce a negative number; otherwise you are better off using a signed value. If I'm working in some code and I see unsigned values being subtracted from each other, that's a red flag that says either this code is doing something clever, or there is probably a bug here.

xgalaxy fucked around with this message at 15:34 on Oct 14, 2013

Paniolo
Oct 9, 2007

Heads will roll.
I've definitely done a 180 on using unsigned ints. I used to feel that you should use them whenever the quantity you're representing doesn't make sense to be signed.

However, now I use signed values almost everywhere, simply because unsigned integers wrapping around can result in some really nasty, hard-to-detect bugs. With signed integers, you can just assert that the value is >= 0 in cases where a negative value does not make sense.
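
The failure mode in miniature (C# here, but C++ unsigned behaves the same way):

code:
uint gold = 5;
gold -= 10;        // silently wraps around: gold is now 4294967291

int signedGold = 5;
signedGold -= 10;  // -5: obviously wrong, and trivial to assert on
if (signedGold < 0)
	Debug.LogError("negative gold - caught the bug here");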

OzyMandrill
Aug 12, 2013

Look upon my words
and despair

Shalinor posted:

What you're rendering are palette indices...
:words:
Essentially, you always have a palette index -> RGB conversion, so it depends where in the rendering pipeline you put it.

a) In the shader, just after you read the source texture, you use it to look up into a second texture as a CLUT. You can then alpha blend/fog/etc. all you like. Boring & standard.
Edit: You just need to ensure the texture is POINT sampled; then you don't have to worry about colour bleed. However, texel centres are offset, so the 'index' U value for the first color is actually (0.5 / 256.0) for a 256-pixel-wide texture, or generally ((index + 0.5) / fTexelWidth) - see the helper sketch after (c) below.

b) 'Deferred' rendering
As you describe. Since the palette index only uses 8 bits, you can also use the other channels for other information, i.e. depth, lighting, or even normals. You then have a second pass that reads this in, does stuff, and outputs the RGB values. Essentially it's an 8-bit version of deferred rendering: if you store the depth as one of the values, you could do some fogging & depth-of-field effects, or take the lighting value and make each palette 2D, with the other axis being the 'tone', for warm/cool shadowing & highlighting. You can always re-render the 2D palette texture dynamically with different shades for high/mid/shadow tones depending on the scene.
Lots of possibilities for interesting stuff.

c) As a full-screen post-effect. Just render as normal and have a nice posterise-style shader run once at the very end of the scene to give a palettised or pixellated effect. (OpenXcom, for example, has a rather nice smoothed-pixel style post-effect I really like.)
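
(And the trivial helper for the texel-centre math in (a) - invented name:)

code:
// U coordinate for palette entry 'index' in a texture paletteWidth pixels wide.
// Entry 0 of a 256-wide palette sits at 0.5/256, not at 0.
float PaletteU(int index, int paletteWidth)
{
	return (index + 0.5f) / paletteWidth;
}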

OzyMandrill fucked around with this message at 16:33 on Oct 14, 2013

Chunderstorm
May 9, 2010


legs crossed like a buddhist
smokin' buddha
angry tuna

Obsurveyor posted:

If I understand correctly, it's because OnTriggerEnter only fires when a physics object enters the volume of another. If the two volumes never separate from each other, which is what it sounds like is happening, no subsequent OnTriggerEnters are fired. OnTriggerStay is called every frame the volumes remain in collision though.

That shouldn't be the case, as the attack object gets re-instantiated every time the monster attacks. It gets created at the monster's position, goes out, then gets destroyed.

ninjaedit: I did try it regardless, and it made no difference to the behavior.

Chunderstorm fucked around with this message at 17:18 on Oct 14, 2013

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."
I think at the end of the day you have to ask yourself whether the potential headache of unsigned math biting you in the rear end is worth doubling the maximum value you can represent. Is there any game that truly needs the player to be able to have four billion gold pieces and would be irreparably damaged if that limit were a mere two billion?

Paniolo
Oct 9, 2007

Heads will roll.
Yes, Diablo 3 :)

e: for the unaware, Diablo 3's economy was massively wrecked due to an exploit caused by unsigned integer math.

Paniolo fucked around with this message at 20:51 on Oct 14, 2013

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe
US Budgetslayer Roguelike

Obsurveyor
Jan 10, 2003

Chunderstorm posted:

That shouldn't be the case, as the attack object gets re-instantiated every time the monster attacks. It gets created at the monster's position and goes out, then gets destroyed.

Why are you implementing the knockback over time instead of just doing it all at once from the perspective of the projectile? If it hits a player (that's not the monster) who isn't shielding, apply an impulse to that player once to knock them back, and destroy itself. It also seems that setting the projectile's transform parent to the monster means that if the monster moves while the projectile is mid-flight, the projectile will be moved as well; not sure if that is intended behavior or not.

I don't know, maybe it's all intentional, but it seems strange that the code calculates vectors between the player and the monster rather than the player and the projectile. You can hit me up on a chat client if you want to talk through it live.

Shalinor
Jun 10, 2002

Can I buy you a rootbeer?
Just wanted to say this was a fantastic post, thanks dude. Had no idea Skullgirls was playing with this too. That's so neat!

Zizi
Jan 7, 2010

Shalinor posted:

Just wanted to say this was a fantastic post, thanks dude. Had no idea Skullgirls was playing with this too. That's so neat!

+1

I am so bookmarking this for later and cribbing heavily. It's very similar to what I'd planned to do, but I hadn't even considered the possibility of using the color-swapping to do things like costume changes (like adding stockings and straps!).

SharpenedSpoonv2
Aug 28, 2008
So, I've been programming for a long time, and experimenting with games for a while as well, but I finally "got" Unity during a game jam these past few weeks (oh god, it is so amazing, how did I never dive into this). For the game jam, my team and I were all new, and I recently got a raise, so I went a little crazy just buying assets. Now that that's all over, I was curious - how do the "pros" use Unity? Is everything built from the ground up, or are assets used, or what? Art I can imagine being created all for specific games, but what about, say, Playmaker? Or maybe something like "Ultimate FPS" (which I got for the jam) - it seems like there is very little it doesn't do in terms of the basic shooter framework. What do you all do when making a game with Unity? All custom-made assets/content/scripts/code?

Zizi
Jan 7, 2010

SharpenedSpoonv2 posted:

So, I've been programming for a long time, and experimenting with games for a while as well, but I finally "got" Unity during a game jam these past few weeks (oh god, it is so amazing, how did I never dive into this). For the game jam, my team and I were all new, and I recently got a raise, so I went a little crazy just buying assets. Now that that's all over, I was curious - how do the "pros" use Unity? Is everything built from the ground up, or are assets used, or what? Art I can imagine being created all for specific games, but what about, say, Playmaker? Or maybe something like "Ultimate FPS" (which I got for the jam) - it seems like there is very little it doesn't do in terms of the basic shooter framework. What do you all do when making a game with Unity? All custom-made assets/content/scripts/code?

Most of the pros/studios I know do art assets and stuff in-house; scripts are a mix of the in-house stuff you need and some awesome Asset Store scripts that save you a bunch of time and are useful (e.g. 2D Toolkit, Playmaker, NGUI, etc.).

xzzy
Mar 5, 2009

Isn't Unity overhauling their GUI features in the next release, making NGUI potentially redundant?
