|
At Disney we had tests that recorded video that we could look at to see which change broke lighting or textures or whatever. Not an automated unit test but it did the job.
|
# ? Mar 26, 2017 00:15 |
|
Shalinor posted:Unit testing in games is something most large orgs WANT, but the practical reality is that unit testing a game properly is difficult to impossible. I'd assume mobile thin client stuff has a way easier time of it, though? I don't see what would prevent unit testing in most f2p stuff there. I unit tested my netcode, and I don't want to think about the hell I would have lived through if I hadn't. I also unit test core AI stuff, if not the behaviors themselves. Knowing that the things you're trying to build everything else on top of actually work makes life so much simpler. Minimized examples of expected behavior not working can usually become tests fairly easily. And you never start chasing your tail fixing the same bugs, because your systems just don't regress that way. I'm less sure of the benefit of testing non-core systems.
|
# ? Mar 26, 2017 00:22 |
|
Does anybody know how to define an object in Unity as blocking light? I am doing a top-down 3d game and want the roof to block my directional sun light. I was under the impression that setting my roof's mesh renderer's cast shadow field to "shadows only" would do it. That made the material transparent, but it's letting everything through--including the directional sun light.
|
# ? Mar 26, 2017 05:39 |
|
Rocko Bonaparte posted:Does anybody know how to define an object in Unity as blocking light? I am doing a top-down 3d game and want the roof to block my directional sun light. I was under the impression that setting my roof's mesh renderer's cast shadow field to "shadows only" would do it. That made the material transparent, but it's letting everything through--including the directional sun light. Isn't that what an object does? Or do you need to see through it?
|
# ? Mar 26, 2017 09:10 |
|
https://docs.unity3d.com/ScriptReference/Light-cullingMask.html This is a better way than shadows; you won't have the overhead of rendering shadows with this method.
|
# ? Mar 26, 2017 09:17 |
|
Re: Godot chat I've been using Godot recently and I have to say it's pretty good. I'm using 2d and 3d stuff together in one project and it handles them both well. The language does suck, but it's not hard to learn and it gets the job done. They say the next release (in a few months, I think) will have C#, which will be excellent.
|
# ? Mar 26, 2017 09:27 |
|
Boxels: Based on the technique described here.
|
# ? Mar 26, 2017 12:02 |
|
Mr Shiny Pants posted:Isn't that what an object does? Or do you need to see through it? Heh yeah I want it to be transparent otherwise. Think of it like a house where I want to see the player moving around inside but without the overhead sun bearing down through the roof. Strumpy posted:https://docs.unity3d.com/ScriptReference/Light-cullingMask.html I'll look at it. However, I do want to make something like a roof shape and have the exterior cast the appropriate shadow.
|
# ? Mar 26, 2017 13:58 |
|
I was just thinking to do a stack of offset layers for my botes project last week, and now you go and post layery botes before me. I'm still hung up on my water effect anyway.
|
# ? Mar 26, 2017 19:01 |
|
Rocko Bonaparte posted:Does anybody know how to define an object in Unity as blocking light? I am doing a top-down 3d game and want the roof to block my directional sun light. I was under the impression that setting my roof's mesh renderer's cast shadow field to "shadows only" would do it. That made the material transparent, but it's letting everything through--including the directional sun light. You can do it with a mesh and a shader that doesn't render it but still blocks light. I don't know of another way.
|
# ? Mar 26, 2017 20:21 |
|
Shalinor posted:Unit testing in games is something most large orgs WANT, but the practical reality is that unit testing a game properly is difficult to impossible. I'd assume mobile thin client stuff has a way easier time of it, though? I don't see what would prevent unit testing in most f2p stuff there. The most promising thing seems to be bots.
|
# ? Mar 26, 2017 20:40 |
|
OneEightHundred posted:The most promising thing seems to be bots. Absolutely. Bots are the closest you'll get to something actually useful on the game side of things and not just the "does the code return what I want it to". The Witness iirc had a walker-bot network which would traverse the map every time they changed it and make sure everything was walkable and the terrain didn't have bumpy edges preventing the character from moving up it.
|
# ? Mar 26, 2017 20:50 |
|
I use very high code coverage in all of my game hobby projects. It's the only way I can be sure it works. Also, I have never managed to make a screenshot Saturday in time. Coincidence? I think not.
|
# ? Mar 26, 2017 21:10 |
|
Ranzear posted:I was just thinking to do a stack of offset layers for my botes project last week, and now you go and post layery botes before me. It's fairly straightforward to implement with simpler objects. However trying to scale and offset the layers in a way that gives the scene perspective continues to be mind-numbing. Also I want to look into creating a simple pipeline for this kind of art, so far I have to hand-pixel the layers. Ideally I would need a utility that converts vox files into image slices, but that's also difficult to figure out for someone like me with very little programming experience.
|
# ? Mar 27, 2017 00:47 |
|
RabidGolfCart posted:It's fairly straightforward to implement with simpler objects. However trying to scale and offset the layers in a way that gives the scene perspective continues to be mind-numbing. It'd be the very definition of orthographic projection, yeah. I think trying to do perspective is just a few scaling tricks, but there's a maximum you can 'skew' an object (in the z-axis) before it'll look weird. I could bang something together using a JS canvas that'll spit out PNGs later today if you can link me the vox file format.
|
# ? Mar 27, 2017 15:19 |
|
Regarding see-through, light-blocking materials and all that: it appears that setting "Shadows Only" really did work, but something really wonky was happening with ProBuilder. I updated it and it seemed to behave again. I have to toy with it a little bit more, but the behavior of it right after the update was much more consistent than before. I am still pondering an in-editor material so I can know I put down roofs in the right spots. I'm assuming I'd just put them on one layer and toggle it off if I wanted to meddle inside the buildings or something.
|
# ? Mar 27, 2017 18:41 |
|
A word of advice for anyone that's using the static methods to display flags in the editor: it assumes your [Flags] enum assigns one bit per named value, in declaration order. For example:code:
Essentially, the values in the enum are assumed to be in the right order, to begin at 0x1, and for each value to represent a new bit. Also, none of the helpful Enum static methods are there to help rework this. My solution to this was to shift right the initial value of the property by 1, give it to the popup, and then handle special cases and shift left as needed. For a question about AI behaviors, what's the idea/concept behind having characters just amble around? Specifically at the design level of things. I'm not scared of cranking out an implementation, but I just don't know how to plan out how they plan. BirdOfPlay fucked around with this message at 19:23 on Mar 27, 2017 |
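For illustration, here's a tiny model of that drawer behavior in C++ (a hypothetical stand-in, not Unity's actual editor code): the mask is built from each name's position in the list, so any enum value with a gap in it never round-trips.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical model of the drawer: the returned mask is built from
// 1 << index for each selected name, ignoring the declared enum values.
uint32_t maskFromSelection(const std::vector<int>& selectedIndices) {
    uint32_t mask = 0;
    for (int i : selectedIndices)
        mask |= 1u << i;  // bit comes from list position, not the enum value
    return mask;
}

// An enum with a gap: "Stone" is declared as 0x8, but it is the third name.
enum Flag : uint32_t { Arrow = 0x1, Bolt = 0x2, Stone = 0x8 };
```

Selecting only Stone (the third name, index 2) yields 1 << 2 = 4 rather than the declared 0x8, which is why the values have to start at 0x1 and advance exactly one bit per name.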
# ? Mar 27, 2017 19:20 |
|
Ranzear posted:It'd be the very definition of orographic projection, yeah. I think trying to do perspective is just a few scaling tricks, but there's a maximum you can 'skew' an object (in z-axis) before it'll look weird. Well if you want to take a crack at it, I was planning on using MagicaVoxel to make the models. Much to my surprise, the creator of MagicaVoxel also provided the specification for the .Vox format. I got stuck because I was expecting the data to be stored similar to a bitmap, but it actually stores the individual voxels as coordinate points with a pallet index. I couldn't think of an efficient way to read the data and plot it to a bitmap with my limited programming knowledge.
|
# ? Mar 27, 2017 19:51 |
|
I want to find the screen coordinates for an object in 3d space. I'm trying to make sure I have the right idea. This is the code in my vertex shader, which works great and I love it: code:
code:
Do I have the right idea?? Is the order of operations for the glm library going to be the same as in GLSL?
|
# ? Mar 27, 2017 20:08 |
|
I've always defined [Flags] enums like this for simplicity's sake so I don't have to remember to do it correctly:code:
and yeah if you declare your own None I think that places two None entries in Unity's default mask drawer.
|
# ? Mar 27, 2017 20:18 |
baby puzzle posted:I want to find the screen coordinates for an object in 3d space. I'm trying to make sure I have the right idea. You need to do W-division to get [-1;1]. I.e. glm::vec3 deviceCoord = objectPos.xyz/objectPos.w; Understandable that you would miss this, since it is not an explicit part of the pipeline, but rather a hardwired operation performed between the vertex and fragment shaders, along with the transformation from device coordinates to viewport coordinates. E: Note, you're still gonna get vertices that are outside that range; all that means is that they're outside the clip space (that is to say, not inside the viewport). Joda fucked around with this message at 21:37 on Mar 27, 2017 |
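A minimal sketch of the divide and the clip-space test, with a plain struct standing in for the glm types (illustrative only, not glm's actual API):

```cpp
#include <array>

struct Vec4 { float x, y, z, w; };

// The hardwired step between the vertex and fragment stages: divide
// clip-space xyz by w. Points inside the frustum land in [-1, 1].
std::array<float, 3> clipToDevice(const Vec4& p) {
    return { p.x / p.w, p.y / p.w, p.z / p.w };
}

// Frustum test done in clip space, before the divide, which avoids
// dividing by a tiny or negative w for points near or behind the camera.
bool insideFrustum(const Vec4& p) {
    return p.w > 0.0f
        && -p.w <= p.x && p.x <= p.w
        && -p.w <= p.y && p.y <= p.w
        && -p.w <= p.z && p.z <= p.w;
}
```

Testing in clip space first is the safer order: the divide is only meaningful once you know the point is in front of the camera.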
|
# ? Mar 27, 2017 21:33 |
|
Your magic works great. Cheers. e: While I did need that solution, I just figured out that I can't use it for what I am attempting to do right at this moment. I'm trying to draw indicators at the edges of the screen to indicate that there are important objects outside of view. So, for objects that aren't actually on the screen, I don't get valid/useful values from that. Or, I don't understand how to read the results to get what I want. I'm using this stuff to determine if a thing is actually in the view, and this works fine: -w < x < w -w < y < w 0 < z < w But I really want to know a screen position for things that aren't actually on the screen, so I know where to draw the indicator at the edge of the screen. e: maybe the value I get is still useful to me.. I just need to look at Z. I was getting strange values when things are behind me. You know what, I'll figure this out. baby puzzle fucked around with this message at 22:59 on Mar 27, 2017 |
# ? Mar 27, 2017 21:41 |
|
No I don't get it.code:
I think it is the case where pos.z < pos.w, or maybe when pos.w is negative? I'm having a hard time because I don't understand what these numbers mean. e: I get it now. w is negative so things are .. like... inside-out or something. Adding this seems to give me what I want but I don't know if it is "correct". code:
baby puzzle fucked around with this message at 03:15 on Mar 28, 2017 |
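For reference, a sketch of the kind of sign fix being described (hypothetical; the original snippet isn't preserved above): when w is negative the point is behind the camera and the divide mirrors both axes, so flipping the signs keeps the on-screen direction usable for an edge indicator.

```cpp
#include <utility>

// If w < 0 the point is behind the camera and x/w, y/w come out
// mirrored ("inside-out"); negating everything undoes the flip so the
// projected direction still points toward the object.
std::pair<float, float> projectedXY(float x, float y, float w) {
    if (w < 0.0f) { x = -x; y = -y; w = -w; }
    return { x / w, y / w };
}
```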
# ? Mar 28, 2017 02:14 |
|
FuzzySlippers posted:I've always defined [Flag] enums like this for simplicity's sake so I don't have to remember to do it correctly I wasn't sure if that was good or not, which is why I went with the explicit definitions. Regardless, that's not the problem. The problem is that the default mask drawer doesn't look at values and, instead, only looks at the list of named values. It determines the value for the selection by doing 1 << selectionIndex, if that makes sense. quote:and yeah if you declare your own None I think that places two None entries in Unity's default mask drawer. Nope! The "Nothing" and "Everything" options are always there by default and equal 0 and -1 respectively. This means that my flag drawer spits out something like this: Nothing Everything None Arrow ... Really though, I was just venting because it's stupid.
|
# ? Mar 28, 2017 06:55 |
baby puzzle posted:No I don't get it. XYZ/W on a projection-view-model transformed vertex gives you values between -1 and 1 for any vertex that is inside the viewing frustum. If you are only interested in window coordinates, the Z and W coordinates are meaningless to you beyond that. Say you have a vertex that is outside the viewing frustum and want to indicate it on an 800*600 window. You would first do tVector = PVM*Vertex; dcVector = tVector.xyz/tVector.w. At this point you have a 3-dimensional vector in clip space. If any value is not in the range [-1;1] the original vertex is outside the frustum. At this point, if your window maps to [-1;1], making an indicator at the edge of the screen would simply be vec2 indicator = glm::clamp(dcVector.xy,glm::vec2(-1.0f,-1.0f),glm::vec2(1.0f,1.0f)); If, on the other hand, you want the pixel coordinates, you can get normalized device coordinates (ndcVector = (dcVector + 1.0f)/2.0f) and transform these to pixel/fragment coordinates by pixelPos = ndcVector * glm::vec2(800,600). Then you find the clamped pixel pos by vec2 pixelIndicator = glm::clamp(pixelPos,glm::vec2(0.0f,0.0f),glm::vec2(800.0f,600.0f)); E: Also, the W-divided Z coordinate is ALSO between -1 and 1. The clamping to the range [0;1] is part of the pipeline (and that range can be changed with glDepthRange().) E2: Also note, be careful doing it like this, since vertices that are very close to or inside the plane through your view position with your view direction as its normal will get huge/infinite values. I only really recommend you use it for determining IF the vertex is inside your frustum. Making the indicator itself should probably be handled separately. Joda fucked around with this message at 18:24 on Mar 28, 2017 |
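The device-to-pixel-to-clamp chain above can be sketched like this, with plain floats in place of glm vectors (the window size is a parameter, not anything from the posts):

```cpp
#include <algorithm>
#include <utility>

// Device coords in [-1, 1] -> pixel coords on a width x height window,
// then clamp to the window edge to place an off-screen indicator.
std::pair<float, float> edgeIndicator(float dcX, float dcY,
                                      float width, float height) {
    float px = (dcX + 1.0f) / 2.0f * width;   // [-1, 1] -> [0, width]
    float py = (dcY + 1.0f) / 2.0f * height;  // [-1, 1] -> [0, height]
    return { std::clamp(px, 0.0f, width),
             std::clamp(py, 0.0f, height) };
}
```

An on-screen point like (0, 0) maps to the window center, while a far off-screen point sticks to the nearest edge, which is exactly what an indicator wants.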
|
# ? Mar 28, 2017 18:09 |
|
RabidGolfCart posted:Well if you want to take a crack at it, I was planning on using MagicaVoxel to make the models. Much to my surprise, the creator of MagicaVoxel also provided the specification for the .Vox format. The RIFF spec was what I was missing to get anywhere on this. Now I understand what the chunking is about. I'll give it a whirl this weekend probably.
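A sketch of what walking those chunks can look like, based on the published spec's layout (4-byte id, 4-byte content size, 4-byte children size, all little-endian); treat the details as assumptions to check against the spec:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// One RIFF-style chunk header as found in a .vox file.
struct Chunk {
    std::string id;
    uint32_t contentSize = 0;
    uint32_t childrenSize = 0;
    size_t contentOffset = 0;
};

// Read a little-endian 32-bit value from a byte buffer.
uint32_t readU32(const std::vector<uint8_t>& d, size_t off) {
    return d[off] | (d[off + 1] << 8) | (d[off + 2] << 16)
         | (uint32_t(d[off + 3]) << 24);
}

// Walk sibling chunks between off and end: header, content bytes,
// then child chunks (which can be walked recursively the same way).
std::vector<Chunk> readChunks(const std::vector<uint8_t>& data,
                              size_t off, size_t end) {
    std::vector<Chunk> out;
    while (off + 12 <= end) {
        Chunk c;
        c.id.assign(data.begin() + off, data.begin() + off + 4);
        c.contentSize = readU32(data, off + 4);
        c.childrenSize = readU32(data, off + 8);
        c.contentOffset = off + 12;
        out.push_back(c);
        // Skip this chunk's content and all of its children.
        off = c.contentOffset + c.contentSize + c.childrenSize;
    }
    return out;
}
```

Per the spec, an XYZI chunk's content is a voxel count followed by four bytes per voxel (x, y, z, palette index), so plotting a slice to a bitmap is just iterating those records and writing pixel (x, y) for every voxel whose z matches the slice.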
|
# ? Mar 29, 2017 15:44 |
|
One good thing about writing Amiga software is that functionality for reading IFF data files is built into the latest versions of the operating system since every game and application used them.
|
# ? Mar 30, 2017 05:20 |
|
Can somebody get me into the ballpark for setting usable residential-style light parameters in Unity? I'm just bombarding it, and the bake cycles are annoying enough to make it fussy to figure it out. I hoped somebody could just nudge me along: That light is basically 4 feet from the house and it's just obliterating the side of it in light. I was hoping for something larger, less intense, and more gradual. For other lights, I've halved the intensity twice already and not seen much difference.
|
# ? Mar 30, 2017 06:32 |
|
Rocko Bonaparte posted:Can somebody get me into the ballpark for setting usable residential-style light parameters in Unity? I'm just bombarding it, and the bake cycles are annoying enough to make it fussy to figure it out. I hoped somebody could just nudge me along: maybe change the color to something more yellowish and reduce the saturation. You are currently using halogen where you should be using tungsten or sodium: https://en.wikipedia.org/wiki/Color_temperature
|
# ? Mar 30, 2017 08:37 |
munce posted:Re: Godot chat Similar boat, but I'd like to add my lamentations over the lovely tile editor. I really like it as a whole, but it does feel really unpolished in that respect. I'm going to see if I can get Kotlin or another JVM language to interact via JNI and some plugins on the 3.0 branch.
|
|
# ? Mar 30, 2017 13:00 |
|
Physics question: is this simple motion formula good enough to figure out how to aim projectiles? 0 = initVelocityY - gravity * time Asking cause I didn't see any helper methods in Unity's Physics class or other ways to "ask" the physics system to run a calculation.
|
# ? Mar 30, 2017 22:36 |
|
Time squared?
|
# ? Mar 31, 2017 03:57 |
|
BirdOfPlay posted:Physics question: is this simple motion formula good enough to figure out how to aim projectiles? Can you provide more details? What are you trying to calculate? From a real world perspective all solving that equation will do is tell you when your projectile stops moving upwards, assuming you shot it straight up. That's not very useful without using other equations to find out how far it went. If you're not shooting straight up, the variables in that equation are poorly named since gravity only pulls objects towards the ground. The main thing that slows bullets/arrows down is drag/air resistance. I found this discussion which might be related to what you are trying to do: http://answers.unity3d.com/questions/677957/projectile-motion-of-an-arrow-after-applying-force.html http://answers.unity3d.com/questions/49195/trajectory-of-a-projectile-formula-does-anyone-kno.html Ranzear posted:Time squared? Not time squared. The units for velocity are meters/second and gravity is an acceleration, which has units of meters/second/second, so multiplying by time once to get meters/second is correct.
|
# ? Mar 31, 2017 04:48 |
|
LLSix posted:Can you provide more details? What are you trying to calculate? I'm trying to calculate the velocity required to launch something and have it hit a target. I know the target, where I am, and the heading that I'll be shooting. This is all in Unity. I got pretty close using this, but it freaks out when the z-coordinate is too small: code:
EDIT: Stupid error on my part, it should've looked like this: code:
quote:From a real world perspective all solving that equation will do is tell you when your projectile stops moving upwards, assuming you shot it straight up. That's not very useful without using other equations to find out how far it went. If you're not shooting straight up, the variables in that equation are poorly named since gravity only pulls objects towards the ground. The main thing that slows bullets/arrows down is drag/air resistance. I was shorthanding FinalVelocityY = InitialVelocityY - Gravity * Time, which was the wrong formula. I was thinking of this distance formula: DistanceY = InitialVelocityY * Time - (Gravity * Time ^ 2)/2. Solving DistanceY = 0 gives the times the projectile is at ground level: Time = 0 and when it lands. BirdOfPlay fucked around with this message at 05:24 on Mar 31, 2017 |
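Those two equations are enough to aim: pick a flight time, then solve each axis independently. A sketch under the same assumptions (flat ground, no drag; the names are made up for illustration):

```cpp
#include <utility>

// Given horizontal distance to the target, a chosen flight time, and
// gravity, solve the constant-acceleration equations for the launch
// velocity. Assumes start and end at the same height and no drag.
//   horizontal: distance = vx * T            =>  vx = distance / T
//   vertical:   0 = vy * T - g * T * T / 2   =>  vy = g * T / 2
std::pair<float, float> launchVelocity(float distance, float flightTime,
                                       float gravity) {
    float vx = distance / flightTime;
    float vy = gravity * flightTime / 2.0f;
    return { vx, vy };
}
```

With gravity 10 and a two-second flight to a target 20 units away this gives (vx, vy) = (10, 10); plugging vy back into DistanceY returns it to 0 exactly at Time = 2, which is the landing condition from the post.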
# ? Mar 31, 2017 05:03 |
|
LLSix posted:Not time squared. The units for velocity are meters/second and gravity is an acceleration which has units of meters/second/second, so multiplying by time once to get meters/second is correct. Right, when trying to get a velocity instead of a position. BirdOfPlay posted:DistanceY = InitialVelocityY * Time - (Gravity * Time ^ 2)/2 I knew it was in there somewhere though, for what he wanted. I'd probably just sim it. Ranzear fucked around with this message at 06:31 on Mar 31, 2017 |
# ? Mar 31, 2017 06:24 |
|
TheresaJayne posted:maybe change the color to more yellowish and reduce the saturation you are currently using halogen where you should be using tungsten or sodium There isn't anything to overtly set saturation with the built-in Unity lights. It does look like reducing the intensity below 1 helps a lot. However, some of my floors are still oversaturated, but that's some other inconsistency. I actually moved out the lights, rebaked, and still had bright white floors.
|
# ? Mar 31, 2017 06:39 |
|
Rocko Bonaparte posted:There isn't anything to overtly set saturation with the built-in Unity lights. It does look like reducing the intensity below 1 does help a lot. However, I'm still saturated with some of my floors, but that is some other inconsistency. I actually moved out the lights, rebaked, and still had bright white floors. What is your ambient light set to?
|
# ? Mar 31, 2017 11:18 |
|
Unity question: Is there any reason I should prefer to use multiple scenes when I can get away with one? I made a scrolling shmup where the game background continues to scroll in the background while the player is navigating menus, so when you start and view the title screen, you can still see the background moving. It seems to me that there's no need to use multiple scenes when I can put everything in one scene and enable/disable the GUI objects as necessary. On the other hand, it seems like there's some helpfulness that it offers: If I use multiple scenes, then I can use Unity itself as a level editor, treating each scene as a complete level. If I do everything in one scene, then I have to generate each level in code, which is fine for procedurally generated content, but probably becomes a pain if I want to handcraft levels because then I'll have to find some other tool to create them.
|
# ? Apr 2, 2017 17:28 |
|
oliveoil posted:Unity question: Is there any reason I should prefer to use multiple scenes when I can get away with one? I've usually used one scene (granted, very small games) and just reset/set up stuff myself. I hate the time it takes to load from scene to scene in Unity. Using multiple scenes does help segregate your code/objects so you don't have to keep cleaning stuff up. If you're generating all of the objects then having one scene is fine; if you have everything done in advance, having multiple scenes will make a big difference for occlusion/light baking.
|
# ? Apr 2, 2017 21:56 |
|
Multiple scenes are better for big games or splitting lots of content, single scenes are better for code-driven games. Note that you can combine scenes, so they can be more versatile than prefabs depending on what you're using them for (you can't nest prefabs and they're harder to edit). For example, I prefer using scenes for big static object hierarchies that I just wanna swap in and out like UI screens, and prefabs for stuff that's gonna be instantiated or repeated a lot like gameplay objects.
|
# ? Apr 2, 2017 22:21 |