baby puzzle
Jun 3, 2011

I'll Sequence your Storm.
At Disney we had tests that recorded video that we could look at to see which change broke lighting or textures or whatever. Not an automated unit test, but it did the job.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Shalinor posted:

Unit testing in games is something most large orgs WANT, but the practical reality is that unit testing a game properly is difficult to impossible. I'd assume mobile thin client stuff has a way easier time of it, though? I don't see what would prevent unit testing in most f2p stuff there.

The part you can't test is (most) rendering, and stuff with visually dependent systems. Also almost anything touching physics. So it's that thing where, if you can only unit test half your systems, is it still useful, or no?

There's also debate on how useful it is vs. the cost of adding and especially maintaining tests. It definitely makes zero sense for a small team making a game with minimal post-release content. I wish we had time to instrument our procgen game for it, but it's unlikely.

I unit tested my netcode, and I don't want to think about the hell I would have lived through if I hadn't.

I also unit test core AI stuff, if not the behaviors themselves.

Knowing that the things you're trying to build everything else on top of actually work makes life so much simpler. Minimized examples of expected behavior not working can usually become tests fairly easily (see the sketch at the end of this post). And you never end up chasing your tail fixing the same bugs, because your systems just don't regress that way.

I'm less sure of the benefit to testing non-core systems.
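
As a concrete sketch of that "minimized repro becomes a test" idea: pin down a pure helper your netcode depends on. SeqNum here is an invented example, not anyone's real code; the attributes are plain NUnit, which is also what Unity's test runner is built on.
code:
using NUnit.Framework;

// Hypothetical core netcode helper: wrap-around-aware "is a newer than b"
// for 16-bit packet sequence numbers.
public static class SeqNum
{
    public static bool IsNewer(ushort a, ushort b) =>
        (ushort)(a - b) != 0 && (ushort)(a - b) < 32768;
}

[TestFixture]
public class SeqNumTests
{
    [Test]
    public void WrapAroundStillComparesAsNewer()
    {
        Assert.IsTrue(SeqNum.IsNewer(2, 65530));   // wrapped past 65535
        Assert.IsFalse(SeqNum.IsNewer(65530, 2));
    }
}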

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Does anybody know how to define an object in Unity as blocking light? I am doing a top-down 3D game and want the roof to block my directional sunlight. I was under the impression that setting my roof's mesh renderer's Cast Shadows field to "Shadows Only" would do it. That made the material transparent, but it's letting everything through--including the directional sunlight.

Mr Shiny Pants
Nov 12, 2012

Rocko Bonaparte posted:

Does anybody know how to define an object in Unity as blocking light? I am doing a top-down 3D game and want the roof to block my directional sunlight. I was under the impression that setting my roof's mesh renderer's Cast Shadows field to "Shadows Only" would do it. That made the material transparent, but it's letting everything through--including the directional sunlight.

Isn't that what an object does? Or do you need to see through it?

stramit
Dec 9, 2004
Ask me about making games instead of gains.
https://docs.unity3d.com/ScriptReference/Light-cullingMask.html

This is a better way than shadows; you won't have the overhead of rendering shadows with this method.
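
A minimal sketch of that setup, assuming the indoor stuff lives on a layer named "Interior" (the layer name is a placeholder; Light.cullingMask and LayerMask.NameToLayer are real Unity API):
code:
// Keep the directional sun from lighting anything on the "Interior" layer.
Light sun = GetComponent<Light>();
sun.cullingMask &= ~(1 << LayerMask.NameToLayer("Interior"));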

munce
Oct 23, 2010

Re: Godot chat

I've been using Godot recently and I have to say it's pretty good. I'm using 2D and 3D stuff together in one project and it handles them both well. The language does suck, but it's not hard to learn and it gets the job done. They say the next release (in a few months, I think) will have C#, which will be excellent.

RabidGolfCart
Mar 19, 2010

Excellent!
Boxels:



Based on the technique described here.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Mr Shiny Pants posted:

Isn't that what an object does? Or do you need to see through it?

Heh, yeah, I want it to be transparent otherwise. Think of it like a house where I want to see the player moving around inside, but without the overhead sun bearing down through the roof.


Strumpy posted:

https://docs.unity3d.com/ScriptReference/Light-cullingMask.html

This is a better way than shadows; you won't have the overhead of rendering shadows with this method.

I'll look at it. However, I do want to make something like a roof shape and have the exterior cast the appropriate shadow.

Ranzear
Jul 25, 2013


I was just thinking to do a stack of offset layers for my botes project last week, and now you go and post layery botes before me.

I'm still hung up on my water effect anyway.

FuzzySlippers
Feb 6, 2009

Rocko Bonaparte posted:

Does anybody know how to define an object in Unity as blocking light? I am doing a top-down 3D game and want the roof to block my directional sunlight. I was under the impression that setting my roof's mesh renderer's Cast Shadows field to "Shadows Only" would do it. That made the material transparent, but it's letting everything through--including the directional sunlight.

You can do it with a mesh and a shader that doesn't render it but still blocks light. I don't know of another way.
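
For reference, the "Shadows Only" setting Rocko mentioned can also be applied from script; this much is standard Unity API:
code:
// Make the roof mesh invisible while it still casts (and therefore blocks) light.
GetComponent<MeshRenderer>().shadowCastingMode =
    UnityEngine.Rendering.ShadowCastingMode.ShadowsOnly;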

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shalinor posted:

Unit testing in games is something most large orgs WANT, but the practical reality is that unit testing a game properly is difficult to impossible. I'd assume mobile thin client stuff has a way easier time of it, though? I don't see what would prevent unit testing in most f2p stuff there.

The part you can't test is (most) rendering, and stuff with visually dependent systems. Also almost anything touching physics. So it's that thing where, if you can only unit test half your systems, is it still useful, or no?

There's also debate on how useful it is vs. the cost of adding and especially maintaining tests. It definitely makes zero sense for a small team making a game with minimal post-release content. I wish we had time to instrument our procgen game for it, but it's unlikely.
The problem with testing games is that gameplay is a bunch of complex inputs into a system with a constantly-changing spec (which is often an agglomeration of high-level concepts with conflicting requirements), often with aspects that are deliberately random, and outputs that are difficult to articulate. A lot of the things you'd really want to test also don't have a cleanly-defined test plan, and the required execution of the test constantly changes. For example, you really want to know if a mission becomes uncompletable due to a change, but how you complete the mission depends on almost every other gameplay system and the world geometry.

The most promising thing seems to be bots.

Jewel
May 2, 2009

OneEightHundred posted:

The most promising thing seems to be bots.

Absolutely. Bots are the closest you'll get to something actually useful on the game side of things, and not just "does the code return what I want it to". The Witness, IIRC, had a walker-bot network which would traverse the map every time they changed it and make sure everything was walkable and the terrain didn't have bumpy edges preventing the character from moving up it.

Serenade
Nov 5, 2011

"I should really learn to fucking read"
I use very high code coverage in all of my game hobby projects. It's the only way I can be sure they work.

Also, I have never managed to make a screenshot Saturday in time. Coincidence? I think not.

RabidGolfCart
Mar 19, 2010

Excellent!

Ranzear posted:

I was just thinking to do a stack of offset layers for my botes project last week, and now you go and post layery botes before me.

I'm still hung up on my water effect anyway.

It's fairly straightforward to implement with simpler objects. However, trying to scale and offset the layers in a way that gives the scene perspective continues to be mind-numbing.

Also, I want to look into creating a simple pipeline for this kind of art; so far I have to hand-pixel the layers. Ideally I'd need a utility that converts .vox files into image slices, but that's also difficult to figure out for someone like me with very little programming experience.

Ranzear
Jul 25, 2013

RabidGolfCart posted:

It's fairly straightforward to implement with simpler objects. However, trying to scale and offset the layers in a way that gives the scene perspective continues to be mind-numbing.

Also, I want to look into creating a simple pipeline for this kind of art; so far I have to hand-pixel the layers. Ideally I'd need a utility that converts .vox files into image slices, but that's also difficult to figure out for someone like me with very little programming experience.

It'd be the very definition of orthographic projection, yeah. I think trying to do perspective is just a few scaling tricks, but there's a maximum you can 'skew' an object (along the z-axis) before it'll look weird.

I could bang something together using a JS canvas that'll spit out PNGs later today if you can link me the vox file format.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Regarding see-through, light-blocking materials and all that: it appears that setting "Shadows Only" really did work, but something really wonky was happening with ProBuilder. I updated it and it seemed to behave again. I have to toy with it a little bit more, but its behavior right after the update was much more consistent than before.

I am still pondering an in-editor material so I can tell I've put roofs down in the right spots. I'm assuming I'd just put them on one layer and toggle it off if I wanted to meddle inside the buildings or something.

BirdOfPlay
Feb 19, 2012

THUNDERDOME LOSER
A word of advice for anyone using the static methods to display flags in the editor: they assume your [Flags] enum allots one bit to each named value, in declaration order. For example:
code:
[Flags]
enum Colors
{
None = 0,
Red = 1,
Green = 2,
Blue = 4,
Yellow = Green | Blue
}
If you select None, the returned value is 0x1, not 0x0. Likewise, Yellow is 0x10, not 0x6. This took me far too long on the train to get a working solution for.

Essentially, the values in the enum are assumed to be in the right order, to begin at 0x1, and to each represent a single new bit. Also, none of the helpful Enum static methods are any use for reworking this. My solution was to shift the property's initial value right by 1, give it to the popup, and then handle special cases and shift left as needed.
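
A sketch of one way that remap can look, assuming an enum declared as None = 0 followed by single-bit flags in ascending order, inside a property drawer where property is the usual SerializedProperty. EditorGUILayout.MaskField is real editor API; the rest is illustrative, and composite values like Yellow still need their own special-casing:
code:
// The drawer's bit i selects the i-th *name*, not the i-th value, so with
// None occupying name index 0, the real flag bits sit one position below
// the drawer's bits.
string[] names = System.Enum.GetNames(typeof(Colors));
int drawerMask = property.intValue == 0
    ? 1                                   // map 0 to the "None" entry
    : property.intValue << 1;             // real bits -> name-index bits
drawerMask = UnityEditor.EditorGUILayout.MaskField("Colors", drawerMask, names);
property.intValue = (drawerMask & 1) != 0
    ? 0                                   // "None" wins if it's ticked
    : drawerMask >> 1;                    // name-index bits -> real bits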

For a question about AI behaviors, what's the idea/concept behind having characters just amble around? Specifically at the design level of things. I'm not scared of cranking out an implementation, but I just don't know how to plan out how they plan.

BirdOfPlay fucked around with this message at 19:23 on Mar 27, 2017

RabidGolfCart
Mar 19, 2010

Excellent!

Ranzear posted:

It'd be the very definition of orthographic projection, yeah. I think trying to do perspective is just a few scaling tricks, but there's a maximum you can 'skew' an object (along the z-axis) before it'll look weird.

I could bang something together using a JS canvas that'll spit out PNGs later today if you can link me the vox file format.

Well if you want to take a crack at it, I was planning on using MagicaVoxel to make the models. Much to my surprise, the creator of MagicaVoxel also provided the specification for the .Vox format.
I got stuck because I was expecting the data to be stored similarly to a bitmap, but it actually stores the individual voxels as coordinate points with a palette index. I couldn't think of an efficient way to read the data and plot it to a bitmap with my limited programming knowledge.
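
Since the data is just a list of points, the plotting step can be quite short. A sketch, assuming the .vox XYZI chunk has already been parsed into voxel records and the model size is known from the SIZE chunk (the types here are illustrative, not part of any library):
code:
struct Voxel { public byte X, Y, Z, ColorIndex; }

// Turn a point cloud of voxels into one palette-indexed image per Z layer.
static byte[][,] ToSlices(Voxel[] voxels, int sizeX, int sizeY, int sizeZ)
{
    var slices = new byte[sizeZ][,];
    for (int z = 0; z < sizeZ; z++)
        slices[z] = new byte[sizeX, sizeY];   // 0 = empty voxel
    foreach (var v in voxels)
        slices[v.Z][v.X, v.Y] = v.ColorIndex; // plot each point into its layer
    return slices;
}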

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.
I want to find the screen coordinates for an object in 3D space. I'm trying to make sure I have the right idea.

This is the code in my vertex shader, which works great and I love it:

code:
	gl_Position = projection * view * model * vec4( position, 1.0 );
So, I thought I could do the exact same thing in my C++ code (kinda pseudocode)

code:
	glm::vec4 objectPos( position.x, position.y, position.z, 1.0f );
	glm::vec4 screenPos = m_ProjectionMatrix * m_ViewMatrix * m_ModelMatrix * objectPos;
These matrix variables hold the same values that I'm setting in the shader. I expect screenPos.x and screenPos.y to fall between -1 and 1, but that isn't what I get.

Do I have the right idea?? Is the order of operations for the glm library going to be the same as in GLSL?

FuzzySlippers
Feb 6, 2009

I've always defined [Flags] enums like this for simplicity's sake, so I don't have to remember to do it correctly

code:
    [Flags]
    public enum ItemTypes {
        Inventory = 1 << 0,
        Equipment = 1 << 1,
        Ability = 1 << 2,
        Weapon = 1 << 3,
        Usable = 1 << 4,
        Spell = 1 << 5,
        Special = 1 << 6,
    }
(this is for a half-assed component definition so I can flag an item as multiple types on a ScriptableObject; you can do components on SOs, but it's a pain)

and yeah, if you declare your own None, I think that places two None entries in Unity's default mask drawer.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

baby puzzle posted:

I want to find the screen coordinates for an object in 3D space. I'm trying to make sure I have the right idea.

This is the code in my vertex shader, which works great and I love it:

code:
	gl_Position = projection * view * model * vec4( position, 1.0 );
So, I thought I could do the exact same thing in my C++ code (kinda pseudocode)

code:
	glm::vec4 objectPos( position.x, position.y, position.z, 1.0f );
	glm::vec4 screenPos = m_ProjectionMatrix * m_ViewMatrix * m_ModelMatrix * objectPos;
These matrix variables hold the same values that I'm setting in the shader. I expect screenPos.x and screenPos.y to fall between -1 and 1, but that isn't what I get.

Do I have the right idea?? Is the order of operations for the glm library going to be the same as in GLSL?

You need to do W-division to get [-1;1]. I.e. glm::vec3 deviceCoord = glm::vec3(screenPos) / screenPos.w;

Understandable that you would miss this, since it is not an explicit part of the pipeline but rather a hardwired operation performed between the vertex and fragment shaders, along with the transformation from normalized device coordinates to viewport coordinates.

E: Note, you're still gonna get vertices that are outside that range; all that means is that they're outside clip space (that is to say, not inside the viewport).

Joda fucked around with this message at 21:37 on Mar 27, 2017

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.
Your magic works great. Cheers.

e:

While I did need that solution, I just figured out that I can't use it for what I am attempting to do right at this moment. I'm trying to draw indicators at the edges of the screen to indicate that there are important objects outside of view. So, for objects that aren't actually on the screen, I don't get valid/useful values from that. Or, I don't understand how to read the results to get what I want.

I'm using this stuff to determine if a thing is actually in the view, and this works fine:
-w < x < w
-w < y < w
0 < z < w

But I really want to know a screen position for things that aren't actually on the screen, so I know where to draw the indicator at the edge of the screen.

e: maybe the value I get is still useful to me.. I just need to look at Z. I was getting strange values when things are behind me. You know what, I'll figure this out.

baby puzzle fucked around with this message at 22:59 on Mar 27, 2017

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.
No I don't get it.

code:
bool ViewTransform::GetScreenPos( const glm::vec3& worldPos, glm::vec3& out ) const
{
	glm::vec4 objectPos( worldPos.x, worldPos.y, worldPos.z, 1.0f );
	glm::vec4 pos = ( m_ProjectionMatrix * m_ViewMatrix * m_ModelMatrix ) * objectPos;

	out = glm::vec3( pos.x, pos.y, pos.z ) / pos.w;

	return ( -pos.w < pos.x && pos.x < pos.w &&
		-pos.w < pos.y && pos.y < pos.w &&
		0.0f < pos.z && pos.z < pos.w );
}
This is my little function that works great for things that are on the screen, but when things are NOT on the screen, the values become meaningless to me.

I think it is the case where pos.z < pos.w, or maybe when pos.w is negative? I'm having a hard time because I don't understand what these numbers mean.

e: I get it now. w is negative so things are .. like... inside-out or something. Adding this seems to give me what I want but I don't know if it is "correct".

code:
	bool umm = pos.w > 0.0f;
	if ( umm )
	{
		// normal fine numbers
		out = glm::vec3( pos.x, pos.y, pos.z ) / pos.w;
	}
	else
	{
		// crazy lovely numbers?
		out = glm::vec3( pos.x, pos.y, pos.z ) * -pos.w;
	}

baby puzzle fucked around with this message at 03:15 on Mar 28, 2017

BirdOfPlay
Feb 19, 2012

THUNDERDOME LOSER

FuzzySlippers posted:

I've always defined [Flags] enums like this for simplicity's sake, so I don't have to remember to do it correctly

I wasn't sure if that was good or not, which is why I went with the explicit definitions. Regardless, that's not the problem. The problem is that the default mask drawer doesn't look at values and, instead, only looks at the list of named values. It determines the value for the selection by doing 1 << selectionIndex, if that makes sense.

quote:

and yeah if you declare your own None I think that places two None entries in Unity's default mask drawer.

Nope! The "Nothing" and "Everything" options are always there by default and equal 0 and -1 respectively. This means that my flag drawer spits out something like this:
Nothing
Everything
None
Arrow
...

Really though, I was just venting because it's stupid.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

baby puzzle posted:

No I don't get it.

code:
bool ViewTransform::GetScreenPos( const glm::vec3& worldPos, glm::vec3& out ) const
{
	glm::vec4 objectPos( worldPos.x, worldPos.y, worldPos.z, 1.0f );
	glm::vec4 pos = ( m_ProjectionMatrix * m_ViewMatrix * m_ModelMatrix ) * objectPos;

	out = glm::vec3( pos.x, pos.y, pos.z ) / pos.w;

	return ( -pos.w < pos.x && pos.x < pos.w &&
		-pos.w < pos.y && pos.y < pos.w &&
		0.0f < pos.z && pos.z < pos.w );
}
This is my little function that works great for things that are on the screen, but when things are NOT on the screen, the values become meaningless to me.

I think it is the case where pos.z < pos.w, or maybe when pos.w is negative? I'm having a hard time because I don't understand what these numbers mean.

e: I get it now. w is negative so things are .. like... inside-out or something. Adding this seems to give me what I want but I don't know if it is "correct".

code:
	bool umm = pos.w > 0.0f;
	if ( umm )
	{
		// normal fine numbers
		out = glm::vec3( pos.x, pos.y, pos.z ) / pos.w;
	}
	else
	{
		// crazy lovely numbers?
		out = glm::vec3( pos.x, pos.y, pos.z ) * -pos.w;
	}

XYZ/W on a projection-view-model-transformed vertex gives you values between -1 and 1 for any vertex that is inside the viewing frustum. If you are only interested in window coordinates, the Z and W coordinates are meaningless to you beyond that.

Say you have a vertex that is outside the viewing frustum and want to indicate it on an 800*600 window. You would first do tVector = PVM*Vertex; dcVector = tVector.xyz/tVector.w. At this point you have a 3-dimensional vector in normalized device coordinates. If any value is not in the range [-1;1], the original vertex is outside the frustum. If your window maps to [-1;1], making an indicator at the edge of the screen would simply be vec2 indicator = glm::clamp(dcVector.xy, glm::vec2(-1.0f), glm::vec2(1.0f));
If, on the other hand, you want the pixel coordinates, you can remap to [0;1] (ndcVector = (dcVector + 1.0f)/2.0f) and transform that to pixel/fragment coordinates with pixelPos = ndcVector.xy * glm::vec2(800,600). Then you find the clamped pixel pos with vec2 pixelIndicator = glm::clamp(pixelPos, glm::vec2(0.0f), glm::vec2(800.0f,600.0f));

E: Also, the W-divided Z coordinate is ALSO between -1 and 1. The mapping to the range [0;1] is part of the pipeline (and that range can be changed with glDepthRange().)

E2: Also note, be careful doing it like this, since vertices that are very close to or on the plane through your view position perpendicular to your view direction will get huge/infinite values after the division. I only really recommend using it for determining IF the vertex is inside your frustum. Making the indicator itself should probably be handled separately.

Joda fucked around with this message at 18:24 on Mar 28, 2017

Ranzear
Jul 25, 2013

RabidGolfCart posted:

Well if you want to take a crack at it, I was planning on using MagicaVoxel to make the models. Much to my surprise, the creator of MagicaVoxel also provided the specification for the .Vox format.
I got stuck because I was expecting the data to be stored similarly to a bitmap, but it actually stores the individual voxels as coordinate points with a palette index. I couldn't think of an efficient way to read the data and plot it to a bitmap with my limited programming knowledge.

The RIFF spec was what I was missing to get anywhere on this. Now I understand what the chunking is about. I'll give it a whirl this weekend probably.

Luigi Thirty
Apr 30, 2006

Emergency confection port.

One good thing about writing Amiga software is that functionality for reading IFF data files is built into the latest versions of the operating system since every game and application used them.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Can somebody get me into the ballpark for setting usable residential-style light parameters in Unity? I'm just bombarding the scene with light, and the bake cycles are long enough to make trial and error fussy. I was hoping somebody could just nudge me along:



That light is basically 4 feet from the house and it's just obliterating the side of it in light. I was hoping for something larger, less intense, and more gradual. For other lights, I've halved the intensity twice already and not seen much difference.

TheresaJayne
Jul 1, 2011

Rocko Bonaparte posted:

Can somebody get me into the ballpark for setting usable residential-style light parameters in Unity? I'm just bombarding the scene with light, and the bake cycles are long enough to make trial and error fussy. I was hoping somebody could just nudge me along:



That light is basically 4 feet from the house and it's just obliterating the side of it in light. I was hoping for something larger, less intense, and more gradual. For other lights, I've halved the intensity twice already and not seen much difference.

Maybe change the color to something more yellowish and reduce the saturation; you are currently using halogen where you should be using tungsten or sodium.

https://en.wikipedia.org/wiki/Color_temperature

Jo
Jan 24, 2005

:allears:
Soiled Meat

munce posted:

Re: Godot chat

I've been using Godot recently and I have to say it's pretty good. I'm using 2D and 3D stuff together in one project and it handles them both well. The language does suck, but it's not hard to learn and it gets the job done. They say the next release (in a few months, I think) will have C#, which will be excellent.

Similar boat, but I'd like to add my lamentations over the lovely tile editor. I really like it as a whole, but it does feel really unpolished in that respect.

I'm going to see if I can get Kotlin or another JVM language to interact via JNI and some plugins on the 3.0 branch.

BirdOfPlay
Feb 19, 2012

THUNDERDOME LOSER
Physics question: is this simple motion formula good enough to figure out how to aim projectiles?

0 = initVelocityY - gravity * time

Asking cause I didn't see any helper methods in Unity's Physics class or other ways to "ask" the physics system to run a calculation.

Ranzear
Jul 25, 2013

Time squared?

LLSix
Jan 20, 2010

The real power behind countless overlords

BirdOfPlay posted:

Physics question: is this simple motion formula good enough to figure out how to aim projectiles?

0 = initVelocityY - gravity * time

Asking cause I didn't see any helper methods in Unity's Physics class or other ways to "ask" the physics system to run a calculation.

Can you provide more details? What are you trying to calculate?

From a real-world perspective, all solving that equation will do is tell you when your projectile stops moving upward, assuming you shot it straight up. That's not very useful without using other equations to find out how far it went. If you're not shooting straight up, the variables in that equation are poorly named, since gravity only pulls objects toward the ground. The main thing that slows bullets/arrows down is drag/air resistance.

I found this discussion which might be related to what you are trying to do: http://answers.unity3d.com/questions/677957/projectile-motion-of-an-arrow-after-applying-force.html
http://answers.unity3d.com/questions/49195/trajectory-of-a-projectile-formula-does-anyone-kno.html


Ranzear posted:

Time squared?
Not time squared. The units for velocity are meters/second, and gravity is an acceleration with units of meters/second/second, so multiplying by time once to get meters/second is correct.

BirdOfPlay
Feb 19, 2012

THUNDERDOME LOSER

LLSix posted:

Can you provide more details? What are you trying to calculate?

I'm trying to calculate the velocity required to launch something and have it hit a target. I know the target, where I am, and the heading that I'll be shooting along. This is all in Unity.

I got pretty close using this, but it freaks out when the z-coordinate is too small:
code:
float velocity = Mathf.Sqrt (((-Physics.gravity.y) * travelDistance) / (2 * cannon.forward.z * cannon.forward.y));
I know part of my problem was assuming that, since the cannon was tracking the player/target, the x-coordinate would zero out. I'm pretty sure I'm almost there though.

EDIT: Stupid error on my part, it should've looked like this:
code:
float velocity = Mathf.Sqrt ((Physics.gravity.y * travelDistance.z) / (2 * _cannon.forward.z * _cannon.forward.y));
Previously, travelDistance was the distance between the cannon and the target, which I assumed was the same as the z value because I was thinking in local space. By grabbing the distance along the z-axis, everything lines up correctly. For those that are really confused, the derivation of this formula comes from the one below and from solving for Time in the distance formula along the z-axis; it's sketched at the end of this post.

quote:

From a real word perspective all solving that equation will do is tell you when your projectile stops moving upwards, assuming you shot it straight up. That's not very useful without using other equations to find out how far it went. If you're not shooting straight up, the variables in that equation are poorly named since gravity only pulls objects towards the ground. The main thing that slows bullets/arrows down is drag/air resistance.

I was shorthanding FinalVelocityY = InitialVelocityY - Gravity * Time, which was the wrong formula. I was thinking of this distance formula: DistanceY = InitialVelocityY * Time - (Gravity * Time^2)/2. Solving for DistanceY = 0 gives the states where the projectile is on the ground: usually Time = 0, and the moment it lands.
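
To spell out that derivation (a sketch, assuming launch and landing at the same height; g is the positive gravity magnitude, forward is the normalized cannon direction, and distanceZ is the gap to the target along the z-axis):
code:
using UnityEngine;

// Vertical:   0 = v*forward.y*t - (g*t^2)/2   =>   flight time t = 2*v*forward.y/g
// Horizontal: distanceZ = v*forward.z*t
// Substitute: distanceZ = 2*v^2*forward.z*forward.y/g
// Solve:      v = sqrt(g*distanceZ / (2*forward.z*forward.y))
static float LaunchSpeed(float g, float distanceZ, Vector3 forward)
{
    return Mathf.Sqrt(g * distanceZ / (2f * forward.z * forward.y));
}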

BirdOfPlay fucked around with this message at 05:24 on Mar 31, 2017

Ranzear
Jul 25, 2013

LLSix posted:

Not time squared. The units for velocity are meters/second, and gravity is an acceleration with units of meters/second/second, so multiplying by time once to get meters/second is correct.

Right, when trying to get a velocity instead of a position.

BirdOfPlay posted:

DistanceY = InitialVelocityY * Time - (Gravity * Time^2)/2

I knew it was in there somewhere though, for what he wanted.

I'd probably just sim it.

Ranzear fucked around with this message at 06:31 on Mar 31, 2017

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

TheresaJayne posted:

Maybe change the color to something more yellowish and reduce the saturation; you are currently using halogen where you should be using tungsten or sodium.

https://en.wikipedia.org/wiki/Color_temperature

There isn't anything to overtly set saturation on with the built-in Unity lights. It does look like reducing the intensity below 1 helps a lot. However, some of my floors are still saturated, but that's some other inconsistency. I actually moved the lights out, rebaked, and still had bright white floors.

TheresaJayne
Jul 1, 2011

Rocko Bonaparte posted:

There isn't anything to overtly set saturation on with the built-in Unity lights. It does look like reducing the intensity below 1 helps a lot. However, some of my floors are still saturated, but that's some other inconsistency. I actually moved the lights out, rebaked, and still had bright white floors.

What is your ambient light set to?

oliveoil
Apr 22, 2016
Unity question: Is there any reason I should prefer to use multiple scenes when I can get away with one? I made a scrolling shmup where the game world continues to scroll in the background while the player is navigating menus, so when you start and view the title screen, you can still see the background moving. It seems to me that there's no need to use multiple scenes when I can put everything in one scene and enable/disable the GUI objects as necessary.

On the other hand, multiple scenes do seem to offer some help: if I use them, I can use Unity itself as a level editor, treating each scene as a complete level. If I do everything in one scene, then I have to generate each level in code, which is fine for procedurally generated content but probably becomes a pain if I want to handcraft levels, because then I'll have to find some other tool to create them.

Stick100
Mar 18, 2003

oliveoil posted:

Unity question: Is there any reason I should prefer to use multiple scenes when I can get away with one?

I've usually used one scene (granted, very small games) and just reset/set up stuff myself. I hate the time it takes to load from scene to scene in Unity. Using multiple scenes does help segregate your code/objects so you don't have to keep cleaning stuff up.

If you're generating all of the objects, then having one scene is fine; if you have everything built in advance, having multiple scenes will make a big difference for occlusion/light baking.

SupSuper
Apr 8, 2009

At the Heart of the city is an Alien horror, so vile and so powerful that not even death can claim it.
Multiple scenes are better for big games or for splitting up lots of content; single scenes are better for code-driven games. Note that you can combine scenes, so they can be more versatile than prefabs depending on what you're using them for (you can't nest prefabs, and they're harder to edit).

For example, I prefer using scenes for big static object hierarchies that I just wanna swap in and out like UI screens, and prefabs for stuff that's gonna be instantiated or repeated a lot like gameplay objects.
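
A minimal sketch of that additive swapping, using Unity's real SceneManager API (the scene name is a placeholder):
code:
using UnityEngine;
using UnityEngine.SceneManagement;

// Swap a UI screen in and out without leaving the gameplay scene.
public class TitleScreenToggle : MonoBehaviour
{
    public void Show() { SceneManager.LoadScene("TitleScreen", LoadSceneMode.Additive); }
    public void Hide() { SceneManager.UnloadSceneAsync("TitleScreen"); }
}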
