|
That kind of thing is way easier if you: 1) approximate your spline as a series of line segments, and 2) map those line segments to points on the spline. Then you just use the distance along the line segments and linearly interpolate between points on the spline.
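That two-step trick, as a quick sketch (Python just for illustration; the curve and helper names here are made up, and a quadratic Bezier stands in for "your spline"):

```python
import math

def flatten(curve, n=64):
    """Step 1: approximate the spline by sampling it at n+1 parameter values."""
    return [curve(i / n) for i in range(n + 1)]

def point_at_distance(points, dist):
    """Step 2: walk the polyline and linearly interpolate the point
    `dist` units along it."""
    walked = 0.0
    for p0, p1 in zip(points, points[1:]):
        seg = math.dist(p0, p1)
        if walked + seg >= dist and seg > 0:
            f = (dist - walked) / seg
            return (p0[0] + f * (p1[0] - p0[0]),
                    p0[1] + f * (p1[1] - p0[1]))
        walked += seg
    return points[-1]

# A quadratic Bezier standing in for the spline
def bezier(t, a=(0.0, 0.0), b=(1.0, 2.0), c=(2.0, 0.0)):
    u = 1.0 - t
    return (u*u*a[0] + 2*u*t*b[0] + t*t*c[0],
            u*u*a[1] + 2*u*t*b[1] + t*t*c[1])

points = flatten(bezier)
```

More segments gets you a better arc-length approximation; for even spacing along the curve this is usually plenty.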
|
# ? Aug 22, 2017 13:23 |
|
I decided to load up the Unity docs and see how the new Sprite Masks are used. The demos use Unity-chan* and 2 other animes I don't know. What the hell Unity? *At least, I assume it's her. She has the Unity logo on her necklace.
|
# ? Aug 23, 2017 00:48 |
|
BirdOfPlay posted:I decided to load up the Unity docs and see how the new Sprite Masks are used. The demos use Unity-chan* and 2 other animes I don't know. What the hell Unity? That being said, I mean, you're right. REALLY, they have one obvious choice for all future demo content, and it's a crying shame it isn't their universal default humanoid. https://www.youtube.com/watch?v=AJ6Mkx1KEns&t=39s I mean, just, look at him. He's perfect. Shalinor fucked around with this message at 05:56 on Aug 23, 2017 |
# ? Aug 23, 2017 05:52 |
|
I’m doing some homemade 3D stuff again on ancient computers and totally forgot how to rotate my camera orientation. I’ve got a model with position/rotation/scale vectors and a “camera” orientation with a position vector and a rotation vector, all in Euler angles. Right now my view matrix is generated by multiplying (Perspective * Camera orientation * Model orientation) which works for applying the camera translation but doesn’t do any camera rotation. It turns out that didn’t work right in my old code either... I feel like a dingus being able to get my Amiga to draw wireframe 3D but not rotate the drat view.
|
# ? Aug 23, 2017 08:52 |
|
Luigi Thirty posted:I’m doing some homemade 3D stuff again on ancient computers and totally forgot how to rotate my camera orientation. I handmade a 3D engine in college and while it rendered everything nicely, there was a bug where, if you rotated the camera around over and over and over, some slight error in the math somewhere would accumulate and everything would slowly get weird. Didn't manage to figure out the bug in time for the project and I never ended up going back. I mean, you already know this, but you're just missing some transform somewhere. Just gotta go over the math.
|
# ? Aug 23, 2017 09:02 |
|
Luigi Thirty posted:Right now my view matrix is generated by multiplying (Perspective * Camera orientation * Model orientation) Normally the MVP matrix is generated by multiplying the model (world transformations), view (camera transformations) and projection matrices (converting 3d -> 2d). I'm missing a lot of details to be sure, but if the things you call "Perspective", "camera orientation" and "model orientation" are all matrices then maybe what you are generating here isn't the view matrix, but the MVP matrix? The view matrix is normally what you call the camera. The camera can usually be described by 3 vectors (the camera position, the up vector, and the forward vector/camera target) which can then be converted to a view matrix, for example like this. You can describe the camera as a translation and a rotation too, of course. Mata fucked around with this message at 09:35 on Aug 23, 2017 |
# ? Aug 23, 2017 09:29 |
|
Luigi Thirty posted:I’ve got a model with position/rotation/scale vectors and a “camera” orientation with a position vector and a rotation vector, all in Euler angles. Right now my view matrix is generated by multiplying (Perspective * Camera orientation * Model orientation) which works for applying the camera translation but doesn’t do any camera rotation. It turns out that didn’t work right in my old code either... I feel like a dingus being able to get my Amiga to draw wireframe 3D but not rotate the drat view. Not sure exactly what you mean by "view matrix" vs "camera orientation" here. clip_space_vertex_pos = projection_transform * world_to_view_transform * local_to_world_transform * local_vertex_pos is how this is normally set up, at least assuming column vectors. The product of all the matrices is generally called the model-view-projection matrix, with the view matrix being specifically the middle submatrix. The view matrix contains both the rotation and translation for the camera, so not just orientation. It is independent of the projection you use or model to worldspace transform for the current mesh. Possible checks: - What does your local_to_world_transform look like as you rotate? The expectation is that the top left 3x3 submatrix forms an orthonormal basis that maps world space axes to the axes of your camera. A 90 degree rotation should result in the same numbers with positions and signs switched around, for instance. - Eliminate all the other junk. Change the local to world transform for your mesh to identity and the projection to a trivial orthographic (e.g. also identity). Put the camera at origin, looking down -Z and put your model z = -1 or something like that. Still no rotation? E: Well, I'm slow.
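The "eliminate all the other junk" check above, sketched numerically (Python stand-in, helper names made up; trivial identity projection and model transform, camera at (0, 0, 5) looking down -Z with column vectors):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(x, y, z):
    return [[1.0, 0, 0, x], [0, 1.0, 0, y], [0, 0, 1.0, z], [0, 0, 0, 1.0]]

IDENTITY = translate(0, 0, 0)

# Camera sitting at (0, 0, 5), no rotation. The view matrix is the INVERSE
# of the camera's world transform, so it translates by the negated position.
view = translate(0, 0, -5)
model = IDENTITY   # mesh already in world space
proj = IDENTITY    # trivial "projection", per the checklist above

mvp = mat_mul(proj, mat_mul(view, model))

# A vertex at the world origin ends up 5 units in front of the camera (-Z).
print(mat_vec(mvp, [0.0, 0.0, 0.0, 1.0]))
```

If the vertex doesn't land at z = -5 in a setup this stripped down, the bug is in how the camera matrix is built, not in the projection or model transforms.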
|
# ? Aug 23, 2017 09:41 |
|
Shalinor posted:... I mean would you prefer they used that one grimdark space marine again? On a balance, Unity-chan seems as fine as anything else. I was fine with the zombies for 2D. It was mainly that I'm now at a coworking space. Everyone else is working on their serious business apps, and I'm here with my XBone controller and loading this doc page. quote:That being said, I mean, you're right. REALLY, they have one obvious choice for all future demo content, and it's a crying shame it isn't their universal default humanoid. I want him walking everywhere to that music.
|
# ? Aug 23, 2017 18:39 |
|
Mata posted:Normally the MVP matrix is generated by multiplying the model (world transformations), view (camera transformations) and projection matrices (converting 3d -> 2d). Hmm. Here's what I have. I know all my GetXMatrixForOrientation functions work properly because they get applied to the model. This matrix then gets multiplied by the model's vertices. Looks like I'm generating the MVP matrix. The camera orientation here is a translation and rotation struct in Euler angles. I guess that orientation is not quite what I need here. code:
|
# ? Aug 23, 2017 21:01 |
|
Can't really speak to the other issues you're having, but you're wasting a ton of time making and then multiplying matrices for each Euler angle and the translation separately. Just build the matrix directly. You can work it out on paper yourself or look here for example for a typical x then y then z convention.
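To make Lime's point concrete, here's what "build the matrix directly" looks like versus the three-multiplies version, for an x-then-y-then-z convention with column vectors (Python sketch, function names made up; both produce the same matrix, the direct one just skips two full matrix multiplies):

```python
import math

def euler_xyz_direct(rx, ry, rz):
    """3x3 rotation applying X, then Y, then Z (column vectors: R = Rz Ry Rx),
    written out directly instead of multiplying three separate matrices."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    return [
        [cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx],
        [sz*cy, sz*sy*sx + cz*cx, sz*sy*cx - cz*sx],
        [-sy,   cy*sx,            cy*cx           ],
    ]

def mul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz_naive(rx, ry, rz):
    """The same rotation built the slow way: three matrices, two multiplies."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    return mul3(Rz, mul3(Ry, Rx))
```

On a machine with no SIMD and a slow FPU, skipping two 3x3 multiplies per object per frame is a real saving.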
|
# ? Aug 23, 2017 21:44 |
|
Agreed with Lime, it's simplest to calculate as few matrices as possible, and a single 4x4 matrix can hold the translation, rotation, and scale all at once, so there's no reason to build a separate matrix for each of them. There's a lot of things it could be, but uh... Try reversing the order you multiply the world * view * projection matrices?
|
# ? Aug 23, 2017 23:11 |
|
Yeah, especially since I have no SIMD instructions and an FPU from 1989. I was mostly just trying to get it to work, then worry about optimizing it once I know what the results are supposed to be.
|
# ? Aug 23, 2017 23:35 |
|
Sure, optimization is irrelevant, but simplifying code that is more complex than it needs to be can be very useful for reducing the surface area for bugs. Remember that when multiplying matrices the way you are doing, ordering is important - if you build the world matrix like translation * rotation * scale that's not the same matrix as scale * translation * rotation.
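The ordering point is easy to demonstrate numerically. A 2D sketch (Python, 3x3 homogeneous matrices, names made up): the same translate/rotate/scale pieces multiplied in two different orders send the same point to two different places.

```python
import math

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(m, p):
    """Apply a 3x3 homogeneous matrix to a 2D point."""
    v = [p[0], p[1], 1.0]
    r = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (r[0], r[1])

T = [[1, 0, 2], [0, 1, 0], [0, 0, 1]]   # translate +2 in x
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, -s, 0], [s, c, 0], [0, 0, 1]]  # rotate 90 degrees
S = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]   # uniform scale by 2

trs = mul(T, mul(R, S))   # scale first, then rotate, then translate
s_t_r = mul(S, mul(T, R)) # rotate first, then translate, then scale

print(apply(trs, (1.0, 0.0)))    # ~(2, 2)
print(apply(s_t_r, (1.0, 0.0))) # ~(4, 2): the translation got scaled too
```

Same three building blocks, different composite matrix; the usual world-matrix convention is translation * rotation * scale exactly so the scale doesn't leak into the translation.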
|
# ? Aug 24, 2017 00:21 |
|
Luigi Thirty posted:Hmm. Here's what I have. I know all my GetXMatrixForOrientation functions work properly because they get applied to the model. This matrix then gets multiplied by the model's vertices. Looks like I'm generating the MVP matrix. It's hard to tell from the code what numbers you have, but have you checked that your camera matrix contains the inverse rotation and translation? You want objects to move to camera space, so if the camera is at {2, 5, 10}, your camera matrix should contain {-2,-5,-10} in the translate part. Same applies to rotation. Also, if this is the case, remember that the inverse rotation matrix == transpose of the rotation matrix. Zerf fucked around with this message at 05:31 on Aug 24, 2017 |
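Zerf's transpose trick in sketch form (Python, names made up): for a rigid camera transform (rotation + translation only), the view matrix is the transposed 3x3 block plus the negated translation pushed through it, and multiplying it against the original transform gives back the identity.

```python
import math

def mul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s, 0.0], [0.0, 1.0, 0.0, 0.0],
            [-s, 0.0, c, 0.0], [0.0, 0.0, 0.0, 1.0]]

def translate(x, y, z):
    return [[1.0, 0, 0, x], [0, 1.0, 0, y], [0, 0, 1.0, z], [0, 0, 0, 1.0]]

def rigid_inverse(m):
    """Invert a rotation+translation matrix: transpose the 3x3 block and
    run the negated translation through it. No general 4x4 inverse needed."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]   # transpose
    t = [m[0][3], m[1][3], m[2][3]]
    nt = [-(r[i][0]*t[0] + r[i][1]*t[1] + r[i][2]*t[2]) for i in range(3)]
    return [r[0] + [nt[0]], r[1] + [nt[1]], r[2] + [nt[2]],
            [0.0, 0.0, 0.0, 1.0]]

camera = mul4(translate(2, 5, 10), rot_y(0.8))  # camera's world transform
view = rigid_inverse(camera)                    # what you actually render with

check = mul4(view, camera)  # should come out (numerically) the identity
```

This only works because the 3x3 block is a pure rotation; throw scale into the camera transform and the transpose shortcut stops being the inverse.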
# ? Aug 24, 2017 05:23 |
|
I rewrote it to do all the model rotations at once, use a right-handed coordinate system, and fixed my screwed up perspective. Much faster and better.
|
# ? Aug 26, 2017 01:01 |
|
Hi all, I've been trying to learn C# so I can develop in Unity, and I've run into some confusion over how to handle certain functions. I'm trying to build a strategy game similar to XCOM, and I'm currently focusing on the strategic, base-management side of things. I want to have certain behaviours trigger under certain conditions, such as resources being gained at the start of each month.

My original plan was to create a monobehaviour script called "Event Handler", containing a list of methods associated with a given event (such as OnDayPassed or OnMonday), which all outside scripts would call to whenever they triggered an event. I created this system prior to learning about C# events, but now I understand how to create and use C# events, I'm wondering if they would even be worth using in the first place. My original system keeps all the event-related methods in one central location, and I can't think of a way of using C# events that wouldn't just do the same thing, which begs the question of why even bother using them?

I'm probably explaining this really badly, but my question is, what are the benefits of using C# events, compared to direct method calls?
|
# ? Aug 30, 2017 13:07 |
|
Boron_the_Moron posted:Hi all, I've been trying to learn C# so I can develop in Unity, and I've run into some confusion over how to handle certain functions. Similarly, for updating global resources, you'd maybe have a single resources object for each player, so it would save you coding a loop or an array of storage states; instead, each one "updates itself" when the event occurs.
|
# ? Aug 30, 2017 15:25 |
|
Because with events, the thing raising the event doesn't need to know or care about what objects are subscribed to it. This is important if you're calling a method instead of a function with your event. Let's say you want to implement a feature where all units turn on a flashlight at night. Without events, the code making this happen would need a list of references to every instance of your unit class, and that list would need to be updated constantly. It gets really messy really fast.
|
# ? Aug 30, 2017 15:31 |
|
Yeah events are good for instances where a simple call is needed to lots of different classes. But IMHO once you start passing a lot of data and getting into using them for critical tasks they can get real messy and hard to follow real fast.
|
# ? Aug 30, 2017 15:59 |
|
Ooookay. I guess I don't actually understand how to use events. Or at least, I don't understand how events differ between C# standalone programs and Unity scripts. I've got a "teach yourself C#" book open in front of me, and I just watched some Unity tutorials, and the two use very different code to explain how to use events. The example in my book seems much more complicated, and seems to massively under-sell what you can do with events, to the point where they seem no more useful than direct method calls. Is that unusual? Most of the things I've learned from my book have been directly applicable to Unity scripting, but now I'm worried that there might be more things in here that don't match up, and will confuse me going forward. I can post the example code from my book, if anyone's curious.
|
# ? Aug 31, 2017 00:24 |
|
Sure, post the code and we'll break it down.
|
# ? Aug 31, 2017 00:44 |
|
code:
However, now I look at it, I think I see the problem. The thing that was tripping me up was the fact that the Main() method has to create a CharChecker object before it can subscribe an event handler to it. To my mind, that seemed no better than just having the CharChecker do a direct method call to Drop_A(). But having looked at the Unity tutorials, I realise that if the event was declared static, it could be subscribed to whether there was an object associated with it or not. With that realisation, events actually seem stupidly simple now. Have I got it right?
|
# ? Aug 31, 2017 11:31 |
|
Boron_the_Moron posted:It's a very simple program, in which a series of characters are set to a char variable in a class, created by the Main() method. But every time the char variable is set, an event is triggered, which calls the Drop_A method in the main program. Drop_A() checks which character was sent, and if the character was an "A" or "a", it is replaced with an "X". The program then prints the character that was assigned to the char variable (either the original character, or the new X character). You understand what the code does, but I think you're still missing why it's powerful, even without static classes. Let's imagine we're building an RTS with the classic "buildings and units blow up when you lose" feature. How might we program this? Without events: code:
code:
Note how with events, you don't need to keep a list of all the objects you want to do something with - the objects register themselves with the event. Without events, you need to explicitly call every object with loops, which becomes a huge PITA when you have lots of different kinds of things that need to be called. Edit: Here's a working example: https://dotnetpad.net/ViewPaste/52RiHWXjwUOF54KPz_4ZgQ KillHour fucked around with this message at 16:11 on Aug 31, 2017 |
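The "objects register themselves" idea above, sketched in Python since the pattern itself is language-neutral (in real C# you'd use the built-in event/delegate keywords instead of this hand-rolled Event class; all the names here are made up):

```python
class Event:
    """Bare-bones analogue of a C# event: a list of subscriber callbacks."""
    def __init__(self):
        self._handlers = []

    def add(self, fn):       # like C#'s +=
        self._handlers.append(fn)

    def remove(self, fn):    # like C#'s -=
        self._handlers.remove(fn)

    def invoke(self, *args):
        for fn in list(self._handlers):
            fn(*args)

on_player_defeated = Event()   # hypothetical game-wide event

class Building:
    def __init__(self, name, owner):
        self.name, self.owner = name, owner
        on_player_defeated.add(self.on_defeat)   # registers ITSELF
    def on_defeat(self, player):
        if player == self.owner:
            print(f"{self.name} explodes")

Building("Barracks", "player 2")
Building("Factory", "player 1")
on_player_defeated.invoke("player 2")   # no list of buildings needed here
```

The caller just fires the event; nothing anywhere keeps a master list of buildings, which is exactly what makes the without-events version such a PITA.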
# ? Aug 31, 2017 15:31 |
|
I was mostly asking if I understood the syntax correctly, but thank you for the detailed answer. Believe me, I realise exactly how powerful events are. I've got a giant monobehaviour script full of loops, from when I tried to work without them. One other question, hopefully the last: in C# standalone programs, events require an object and an EventArgs declared as parameters, but in Unity scripts events can be declared with any parameters, even none at all. Why is this? Is it something I need to pay attention to?
|
# ? Aug 31, 2017 20:40 |
|
Boron_the_Moron posted:I was mostly asking if I understood the syntax correctly, but thank you for the detailed answer. Believe me, I realise exactly how powerful events are. I've got a giant monobehaviour script full of loops, from when I tried to work without them. I was subtly trying to imply that you shouldn't use static classes that hold data that's going to change (like time of day). I'm guilty of this and I learned the hard way that it's a bad idea. Boron_the_Moron posted:One other question, hopefully the last: in C# standalone programs, events require an object and an EventArgs declared as parameters, but in Unity scripts events can be declared with any parameters, even none at all. Why is this? Is it something I need to pay attention to? Not strictly required, just part of the style guidelines for .NET. Unity technically doesn't use .NET, so it's up to you. https://msdn.microsoft.com/en-us/library/aa645739(v=vs.71).aspx posted:Although the C# language allows events to use any delegate type, the .NET Framework has some stricter guidelines on the delegate types that should be used for events. If you intend for your component to be used with the .NET Framework, you probably will want to follow these guidelines. KillHour fucked around with this message at 23:01 on Aug 31, 2017 |
# ? Aug 31, 2017 22:50 |
|
Boron_the_Moron posted:One other question, hopefully the last: in C# standalone programs, events require an object and an EventArgs declared as parameters, but in Unity scripts events can be declared with any parameters, even none at all. Why is this? Is it something I need to pay attention to? KillHour hits the main note that it's not required. The only thing events require is a delegate that defines what the event will look like. For health behaviors, I have an EventHandler that looks like this: code:
BirdOfPlay fucked around with this message at 14:46 on Sep 1, 2017 |
# ? Sep 1, 2017 05:04 |
|
Unity friends, are there any good reads on bounds? Wait, nvm, I figured out the obvious after typing this out: localScale. Still, I'm not sure why bounds appears so much for measuring objects in the usual unity forums/stackexchange/whatever google place. Man Musk fucked around with this message at 09:59 on Sep 3, 2017 |
# ? Sep 2, 2017 21:36 |
|
Man Musk posted:Unity friends, are there any good reads on bounds, What have you done? Hint: Use "fixed" tags to make words look like this not...whatever you did. Seriously, my post box is full of directives. As for what you're talking about, the Scripting API is my go to for this sort of thing, though some classes are very poorly explained. That said, Bounds isn't one of them. So, first things first, you're dealing with 2 completely different concepts here. Bounds describes a volume in world space and is a variable/property of MeshRenderer and all the variations of Collider used for physics. Transform.localScale is the scale factor of the GameObject itself, and excludes the scale of any parent Transforms. Consider a "House" GameObject scaled by 2 in all directions. The localScale of the "Door" would still be 1, but its actual size would be 2x its local size. In this case, if you wanted to find out how big the "Door" is, you'd be better served by using bounds.size.
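The House/Door point can be sketched numerically (Python stand-in with made-up names; Unity's real equivalent of world_scale here is Transform.lossyScale, and this only models the uniform-scale case):

```python
class Transform:
    """Minimal stand-in for a Unity-style transform hierarchy."""
    def __init__(self, local_scale=1.0, parent=None):
        self.local_scale = local_scale
        self.parent = parent

    @property
    def world_scale(self):
        """Multiply local scales all the way up the parent chain."""
        s = self.local_scale
        p = self.parent
        while p:
            s *= p.local_scale
            p = p.parent
        return s

house = Transform(local_scale=2.0)                # "House" scaled by 2
door = Transform(local_scale=1.0, parent=house)   # "Door" inside it

print(door.local_scale)   # 1.0: localScale alone misses the parent's scale
print(door.world_scale)   # 2.0: the door really renders twice its local size
```

Which is why measuring the door via its localScale lies to you, while a world-space measure like bounds.size doesn't.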
|
# ? Sep 2, 2017 23:44 |
|
Man Musk posted:Unity friends, are there any good reads on bounds, This is really hard to read. To answer, bounds and scale are two very separate things. You can make a gigantic object in your 3D software of choice and it'll be gigantic with a scale of 1. You can then make a much smaller object in the same scene, export that object, and that object will also have a scale of 1, but be arbitrarily smaller. The scale doesn't mean anything in regards to the actual size of the object, just how much you're scaling that size. Bounds.size measures the size of the object in world space. So bounds are not gonna correspond to the values in Scale; they have no reason to. Their only correlation is that if you increase the scale you also increase the Bounds.size, but other than that they can be wildly different values, because a scale of 1 can mean any bounds.size. Elentor fucked around with this message at 01:23 on Sep 3, 2017 |
# ? Sep 3, 2017 01:16 |
|
Man Musk posted:Unity friends, are there any good reads on bounds, why would you post like this
|
# ? Sep 3, 2017 01:56 |
|
Is that seat taken? I come bearing fruit
|
# ? Sep 3, 2017 06:36 |
|
I've come bearing garbage. I'm awful at predicting how hard things are going to be. I wanted to improve the visuals of the map, and I wanted to use a noise function to give heights. I figured implementing the noise would be some silly dumb interpolation thing that'd mostly just be a copy/paste job of some thirty year old algorithm. But I dreaded the thought of trying to modify a 3D object via code, because it's basically, to my primitive understanding, baleful wizardry reliant on having to actually do math. Then I tried to implement Perlin noise and got confused immediately by about every single aspect of it. I didn't know what a gradient actually was, what a dot product was or what it was used for, what a "lattice" was and how I could "combine" it with my hex grid, how the fractal aspect of Perlin noise was done, what any of the parameter words meant, or what values I was getting from the function and what I was meant to do with them. It took me basically all of loving August to get to the point where I felt like I could write my own implementation of Perlin and describe what every part was doing (I'm not trying to reinvent the wheel, but gently caress man, if I'm writing a game that relies on procedural generation I figured understanding Perlin noise and the principles of noise generation was sort of important). And it turned out to really be kind of embarrassingly simple to understand once I got over my avoidance of actually studying it. Meanwhile, giving my tile meshes a bottom took maybe 15 minutes of casual napkin math, and 15 minutes of coding. gently caress everything.
|
# ? Sep 3, 2017 07:42 |
|
A lattice is another word for a grid. A Perlin noise function takes spatial coordinates (usually in 2D but could be 3D or higher) as input and produces a smoothly varying random value as output, generally as a floating point value between 0 and 1 (or -1 and 1 depending on the implementation.) That is to say if you take the value of two points which are very close to each other, they will have similar values. That is not a property that true random has. That's all you really need to know, trying to understand the guts of how it works is unnecessary.
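The "nearby inputs give nearby outputs" property is easy to see in miniature. A sketch of 1D value noise (simpler cousin of Perlin, which uses gradients instead of values at the lattice points; the hashing constants here are arbitrary, made up for the example):

```python
import random

def value_noise_1d(x, seed=0):
    """Smooth 1D value noise: a fixed pseudo-random value at each integer
    lattice point, eased between with Perlin's quintic fade, so two nearby
    inputs always produce similar outputs."""
    x0 = int(x // 1)          # lattice point at or below x
    t = x - x0                # position within the cell, 0..1

    def lattice(i):
        # Deterministic per-lattice-point value in [0, 1)
        return random.Random(i * 374761393 + seed * 668265263).random()

    fade = t * t * t * (t * (t * 6 - 15) + 10)   # the quintic smoothstep
    a, b = lattice(x0), lattice(x0 + 1)
    return a + fade * (b - a)
```

Call it at 3.50 and 3.51 and you'll get nearly identical values; call random.random() twice and you get no such guarantee, which is the whole difference between noise and raw randomness.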
|
# ? Sep 3, 2017 08:05 |
|
Paniolo posted:That's all you really need to know, trying to understand the guts of how it works is unnecessary. Yeah, too late. I know it in depth enough to write a good portion of it from scratch. Maybe not the Smooth(t) bit, with the whole 6t^5 - 15t^4 + 10t^3 or whatever that function is. Since I'm going to be procedurally generating like, every single map, I wanted to be able to modify noise algorithms to get the results I wanted, in particular Perlin since it's already very good at making terrain heights. Even ignoring that, if I'm going to be reliant on something, I don't want to have to guess at its behavior. I understand that's not a very conducive trait for programming, but in my defense, I've never tried to give the impression that I am not a stubborn idiot.
|
# ? Sep 3, 2017 08:16 |
|
BirdOfPlay posted:What have you done? Hint: Use "fixed" tags to make words look like this not...whatever you did. Seriously, my post box is full of directives. Thanks for that hehehe. I agree, what I'm trying to do would be much better served by bounds, since I would at some point be attaching a collider. However, this leads into the issue where, if I tilt an object on its side and find the y-length by multiplying sine * hypotenuse (using bounds.size as the hypotenuse), I get a different value than if I hardcode the hypotenuse myself and then perform the operation. Elentor posted:This is really hard to read. I've been working with Unity placement cubes up until now. This might be a limitation of that, and a good time to learn blender? Actually I might possibly be disregarding the item width. Let me try that, will post trip report. Man Musk fucked around with this message at 10:13 on Sep 3, 2017 |
# ? Sep 3, 2017 10:07 |
|
Man Musk posted:Thanks for that hehehe Yeah, if you're using the same object everywhere, and the object is a cube of unit 1, then your scale values are gonna align with their size. Working with 1x1x1 primitives will yield that, but it's just coincidence that you started testing with them. If you had started with any other kind of model the difference between bounds and scale would be more obvious.
|
# ? Sep 3, 2017 11:01 |
|
I'm boggled that a 2010 doesn't know how to use code tags!
|
# ? Sep 3, 2017 14:54 |
|
Man Musk posted:Thanks for that hehehe First, that expression doesn't make sense. If you meant sine * bounds.size.y = hypotenuse, that's incorrect: it's bounds.size.y / sine = hypotenuse. If it means something else, I can't rightly figure out what "hypotenuse" you are referring to. So I missed a big point about Bounds: it's an axis-aligned bounding box, or AABB. The axis-aligned distinction means that the only thing it does is expand and contract along the axes, never rotating or anything like that. The reason your values are varying wildly is probably because of this fact. I drew up two diagrams to show you what I mean, using a 2 unit high, 1 unit wide Eiffel tower. The dashed lines represent the AABB for the tower, and the numbers are the size in the appropriate axis. The left shows the tower in normal orientation. In this case, the height and width of the tower match that of its AABB. The tower on the right is where things get interesting, because it's been rotated 45 degrees (along the z-axis in 3D space). The tower itself hasn't been transformed at all, but the AABB has shrunk in the y-axis and expanded in the x-axis! Also, notice how the height of the tower is no longer directly related to either extent, and finding it based on rotation angle and the extents is a nontrivial task. Which brings me to the big point here: what are you trying to do? I don't think you're trying to accomplish something overly complicated, but this route you're going down is more complicated than needed. For example, if you're trying to spawn effects on the top of the tower, it'd be easier to just create a child GameObject where it needs to be and reference that in code. I do this pretty regularly when I need a reference point to spawn or tether something. I imagine that these empty children are fairly lightweight, and they stay put without any work on my end. quote:I've been working with Unity placement cubes up until now. This might be a limitation of that, and a good time to learn blender?
Will modelling poo poo ever be your job? Like, I wouldn't worry about messing with Blender if: A) the project is not meant for public consumption or 7) somebody else will be doing the final assets. What you can do is make your primitive cubes more tower like by making them taller than they are wide. I typically have my mesh set as a child of the main GameObject; this way any scaling done to the mesh doesn't affect the other components. This may or may not be the usual practice. Tiler Kiwi posted:I've come bearing garbage Aww, it's cute. Also, your river's going uphill.
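The AABB behavior from the diagrams can be checked numerically. A 2D sketch (Python, name made up; a plain rectangle stands in for the tower, so unlike the tapered Eiffel silhouette its box can only grow or swap extents under rotation, but the "rotation changes the AABB without touching the object" point is the same):

```python
import math

def rect_aabb(w, h, angle):
    """Axis-aligned bounding box size of a w-by-h rectangle (centered at
    the origin) after rotating it by `angle`: rotate the four corners,
    then take the min/max along each axis -- exactly what an AABB does."""
    c, s = math.cos(angle), math.sin(angle)
    corners = [(sx * w / 2, sy * h / 2) for sx in (-1, 1) for sy in (-1, 1)]
    pts = [(x * c - y * s, x * s + y * c) for x, y in corners]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (max(xs) - min(xs), max(ys) - min(ys))

print(rect_aabb(1, 2, 0))             # ~(1.0, 2.0): AABB matches the object
print(rect_aabb(1, 2, math.pi / 2))   # ~(2.0, 1.0): extents swap, nothing rotates
```

So reading height off bounds.size only works while the object sits axis-aligned, which is why the child-GameObject reference point is the saner route.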
|
# ? Sep 3, 2017 20:24 |
|
Tiler Kiwi posted:Yeah, too late. I know it in depth enough to write a good portion of it from scratch. Maybe not the Smooth(t) bit, with the whole 6t-15t-10 or whatever that function is. No you really don't want to modify the noise algorithm. You really, really, really don't. That's how people go down the road of "I'm just going to tweak this crypto algorithm slightly" and then everyone's bank accounts are stolen. Do whatever you want with the output of the function-- scale it, run it through a dozen other functions, combine it with multiple noise samples taken at different resolutions. That's where all of the juice is in procedural generation. But for your own sake don't gently caress with the noise function.
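The "combine it with multiple noise samples taken at different resolutions" idea is the classic fractal/octave sum, and it never touches the noise function itself. A sketch (Python; the smooth() stand-in is made up — swap in a real Perlin sampler):

```python
import math

def fbm(noise, x, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal (octave) sum: sample the SAME untouched noise function at
    rising frequency and falling amplitude, then normalize back into the
    function's original output range."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * noise(x * freq)
        norm += amp
        amp *= gain       # each octave contributes half as much...
        freq *= lacunarity  # ...at twice the frequency
    return total / norm

def smooth(x):
    """Hypothetical stand-in noise in [0, 1]."""
    return 0.5 + 0.5 * math.sin(x)

terrain_height = fbm(smooth, 3.7)
```

All the terrain character lives in the octave count, lacunarity, and gain (plus whatever shaping you do to the output afterwards); the noise function underneath stays pristine.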
|
# ? Sep 3, 2017 21:00 |
|
Paniolo posted:No you really don't want to modify the noise algorithm. You really, really, really don't. Well, except if you play around with a noise function, the worst you might get is some hosed up contours on your landscape, or something breaks if you go too far from the origin. Like sure, as in crypto you need a very firm understanding of some very advanced maths to get it completely right, but unlike crypto, getting it wrong will be an inconvenience at most and you might learn something.
|
|
# ? Sep 3, 2017 23:13 |