|
Can you treat the building itself as pathable (but only for the unit that's trying to build it)? That way they'll take the shortest path to the center instead of going around the outside, and you can detect when they reach the edge in order to stop them and switch from movement to building.
|
# ¿ Sep 10, 2019 10:57 |
|
sqrMagnitude is three multiplies and two adds, it's cheap as chips if you're just doing it once for each agent, especially compared to everything else you have that agent doing. The thing to watch out for is superlinear stuff, like if you were to start checking the distance between each agent and every other agent.
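For a sense of scale, here's a toy sketch (Python for brevity, not Unity code) contrasting the per-agent cost with the all-pairs cost:

```python
# Illustrative sketch, not the poster's code: the point is the growth rate,
# not the exact operation counts.
def sqr_magnitude(v):
    # Squared length of a 3D vector: three multiplies and two adds, no sqrt.
    x, y, z = v
    return x * x + y * y + z * z

def per_agent_checks(n):
    # One sqrMagnitude per agent: linear in the number of agents.
    return n

def all_pairs_checks(n):
    # Distance between every pair of agents: superlinear (quadratic).
    return n * (n - 1) // 2
```

With 1000 agents, the per-agent check is 1000 distance computations per frame, while the all-pairs check is nearly half a million.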
|
# ¿ Sep 14, 2019 06:25 |
|
Essentially, you need to include the version in the replay, and also ship every old version of the game logic so it can load the replay using the appropriate version. And you also need to handle the case where someone plays a bunch on version A, saves, then loads it up in version B and carries on from there. Probably with some sort of "switched game versions" special action.
|
# ¿ Oct 11, 2019 03:26 |
|
It seems like Welzl's algorithm (which computes the bounding circle on a Euclidean plane) is trivially portable to your spherical scenario? I guess the big complication with a sphere is that any circle on the surface could be drawn in two ways, but you can resolve that easily if you can constrain your points such that the true bounding circle is always less than half the sphere.
Jabor fucked around with this message at 09:23 on Dec 19, 2019
# ¿ Dec 19, 2019 09:21 |
|
It occurs to me that on a sphere, finding the bounding circle is equivalent to finding the largest circle that doesn't contain any of the points, so you might be able to do better by constructing the Voronoi diagram on the sphere and checking which vertex allows you to place the largest circle. That would also work for the case where the bounding circle is larger than a single hemisphere.
|
# ¿ Dec 19, 2019 09:48 |
|
It doesn't matter what the rest of your rotation is as long as your camera is (as you specified in the problem) at a fixed 45 degree angle. You could spin it in circles maintaining that 45 degree angle and it doesn't seem like the answer you're looking for would change? So, forget about all 3d poo poo for now. Simplify it down to a 2d scenario, work out exactly what you want there, and then generalize it back up to the 3d world (which should be pretty straightforward). I still have no idea what you're actually trying to accomplish, so perhaps try drawing a 2d diagram of your situation and what you actually want to achieve?
|
# ¿ Feb 8, 2020 06:29 |
|
It seems like you should be able to do what you want.
- You know the height of the camera above the Y-plane.
- You know the angle of the camera (45 degrees).
- Hence, you know how far away your blue square is from the camera (it's the height above the Y-plane / cos(45)).
- Hence, in camera space, your ray starts at [0, 0, (camera.y - yplane.y) / cos(45)].
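As a sanity check of that last step, a quick numeric sketch (Python, with a made-up camera height):

```python
import math

# Hypothetical numbers: a camera 10 units above the plane, pitched 45 degrees.
camera_y = 10.0
plane_y = 0.0

# Distance along the view ray from the camera down to the plane point.
distance = (camera_y - plane_y) / math.cos(math.radians(45))

# In camera space, the ray toward that plane point starts at [0, 0, distance].
ray_start = [0.0, 0.0, distance]
```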
# ¿ Feb 8, 2020 11:10 |
|
Will it actually give you the same decision every time? What happens if circumstances change while you're performing those tasks? Surely your AI will make different decisions in different circumstances? How "sticky" you want tasks to be (equivalently: how eager agents should be to drop their current task when a higher-priority one becomes available) is a design decision you can think about and iterate on, but regardless you'll find it easier to strike a good balance if you look at it in terms of macro-tasks. If task stickiness depends inherently on how finely you've sliced things up into subtasks, you're going to have a bad time keeping it in a good place as you iterate on other things.
|
# ¿ Mar 5, 2020 23:57 |
|
You mention this allocation happens "on" the first frame ... Does that mean it has not yet happened when your first bit of code for that frame runs? If so, you could consider setting a breakpoint there, and then capturing a profile of just the first frame itself, not any of the startup stuff.
|
# ¿ Apr 6, 2020 06:15 |
|
Transform it to a polar system (target is r distance away at an angle θ, current speed is v at angle φ), and you can do things exactly the same regardless of how many dimensions you have and will wind up with an orientation-independent result.
|
# ¿ Apr 8, 2020 04:21 |
|
If you wanted to track positions internally in spline space, but have momentum and user inputs and stuff act in world space, then you need to convert these things from world to spline space at the point you're actually moving the character.

I'm assuming your spline space involves a position along the spline, and at each position you have height and horizontal position axes that are mutually orthogonal to each other and to the direction the spline is travelling.

So, you've got a vector indicating the worldspace displacement you want to apply, and you have the current position in spline space. Assuming your spline is smooth and your displacement is small, you can use the current normals as an estimate for the normals at the new position, guess what the t-value at the new position will be, look up the actual spline normals at that point, and then do a few Newton-Raphson steps to get your estimate of t sufficiently close to the actual value. Then it's a simple change of basis to get the spline coordinates from the worldspace coordinates.

Note that this mostly precludes the possibility of jumping off the spline and landing back on it at a much later point, so if you feel that is (or could be) important to your game then you might not want to go this route.

Trying to calculate where you are in spline space based purely off worldspace position, without any concept of where you previously were in spline space, is challenging - in the abstract, you have to deal with degenerate points that could reasonably be at many different possible spline-space coordinates.
|
# ¿ May 27, 2020 08:09 |
|
That's a bit of a hacky way to do it, and you're kind of fighting against the component there - have you considered just having the input field cover the editable part of the prompt, and adding the static parts outside of it?
|
# ¿ Jun 4, 2020 07:00 |
|
Can you use a sprite that always faces the camera instead of your 3d rendering pipeline? (This is what Doom did for enemies and pickups).
|
# ¿ Jun 15, 2020 02:28 |
|
With divide, you're going to have some huge values where a large number was divided by a very tiny one, and that's going to screw with your scaling. Try using a logarithmic normalisation instead of a linear one, or something like that.
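A minimal sketch of what I mean by logarithmic normalisation (Python for illustration; log1p is just log(1+x), so zeros don't blow up):

```python
import math

def log_normalize(values):
    # Compress a huge dynamic range with a logarithm before scaling to [0, 1],
    # so one enormous outlier doesn't push everything else down to black.
    logs = [math.log1p(v) for v in values]
    lo, hi = min(logs), max(logs)
    if hi == lo:
        return [0.0 for _ in logs]
    return [(l - lo) / (hi - lo) for l in logs]
```

With values like [1, 10, 100, 1000000], a linear scale puts the 100 at 0.0001 (pure black), while the log scale keeps it comfortably visible.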
|
# ¿ Sep 12, 2020 08:01 |
|
Raenir Salazar posted:
Wouldn't it be all white in that case? Because the "big numbers" are white pre-scaling.

The very biggest number is going to be white, but it's so much bigger than every other number (even the moderately-big ones) that with a linear scaling, the whole rest of the picture ends up black.
|
# ¿ Sep 12, 2020 17:27 |
|
Instead of generating colours from the entire spectrum, you could partition the range of hues into five different buckets, with gaps in between them. Assign each region to a bucket so that no two adjacent regions are in the same bucket, give each region a colour from its bucket, and now you've guaranteed that adjacent regions are sufficiently far apart hue-wise.
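A quick sketch of the bucketing idea (Python, with made-up bucket count and gap size):

```python
def bucket_hue(bucket_index, t, buckets=5, gap=0.4):
    # Partition the hue circle [0, 1) into `buckets` slices, each with an
    # unused gap at its end, so colours drawn from different buckets can
    # never be closer together than the gap. t in [0, 1] picks a hue within
    # the bucket's usable portion.
    slice_width = 1.0 / buckets
    usable = slice_width * (1.0 - gap)
    return bucket_index * slice_width + t * usable
```

Then it's just graph colouring: assign buckets so that no two adjacent regions share one, and pick each region's hue freely within its bucket.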
|
# ¿ Oct 13, 2020 03:08 |
|
It seems strange that your supposedly normalized fireVec does not have overall length 1.
|
# ¿ Aug 19, 2021 03:09 |
|
I was writing something up about what sort of debugging strategies I'd employ to try and figure this out, but had a little spark of insight while I was doing so. Why are you calculating fireVec to be from the barrel's current position to the target?
|
# ¿ Aug 19, 2021 04:11 |
|
Notice that the code you've written is symmetric - if you swapped the character position and the aim point in your code, you'd get the same number back out, so how is it ever going to be weighted towards one of them over the other? The basic formula for a weighted average is (aX + bY + ...)/(a + b + ...). With two vectors, wanting to find the midpoint (i.e. weighting them evenly), this works out to (X+Y)/2, as you might expect. If you want to weight it 85% towards one option, you could do (85X + 15Y)/100.
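In code, the general formula looks something like this (Python sketch, 2d vectors as lists):

```python
def weighted_average(x, y, weight_x, weight_y):
    # Weighted average of two 2D points: (aX + bY) / (a + b).
    # Note it is NOT symmetric unless the weights are equal.
    return [
        (weight_x * x[0] + weight_y * y[0]) / (weight_x + weight_y),
        (weight_x * x[1] + weight_y * y[1]) / (weight_x + weight_y),
    ]
```

Weighting 85/15 pulls the result strongly toward the first point; equal weights give you the plain midpoint.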
|
# ¿ Aug 22, 2021 15:54 |
|
Assuming you're not deliberately making a Resident Evil callback by using unplayable tank controls, you should determine your absolute input by multiplying the controller input by the camera rotation, and not caring at all about the player rotation.
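A sketch of what "multiplying the controller input by the camera rotation" boils down to in 2d (Python, yaw in radians):

```python
import math

def camera_relative_input(stick_x, stick_y, camera_yaw):
    # Rotate the 2D controller input by the camera's yaw, so that "up" on
    # the stick always means "away from the camera" regardless of which way
    # the player character happens to be facing.
    c, s = math.cos(camera_yaw), math.sin(camera_yaw)
    return (c * stick_x - s * stick_y, s * stick_x + c * stick_y)
```

In an engine you'd typically do the same thing with the camera's rotation quaternion flattened onto the ground plane.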
|
# ¿ Oct 6, 2021 00:43 |
|
It's going to be pretty minor compared to silently allocating memory every single time you call Equals, but I also noticed you're iterating through your label array like:code:
A modern CPU can execute billions of instructions per second - with merely 33 million pixels to look at, I'd expect you could get times comfortably under a second with just a single thread if you can avoid the CPU stalling because it needs to access memory instead of local caches.
|
# ¿ Oct 26, 2021 00:25 |
|
Basically, accessing main memory is really really slow. Hundreds of clock cycles slow. Instead of waiting for main memory all the time, the processor will have a local cache that can be accessed much more quickly. This cache is organized as a series of 64-byte cache lines, and it will contain both lines that have been recently accessed, and sometimes lines that a (rather simple) prefetcher thinks will be used next.

When you're doing a bulk operation, where you do the exact same thing to very many bits of data, you really really want things to be arranged so that you access memory in a linear sequence - that way you access memory that's already cached as much as possible, and when you step from one cache line to the immediately subsequent line, that's probably already been prefetched.

Using a 1d array is one way to make it clear whether you're accessing the elements linearly, or if you're making big jumps between widely-spaced elements. But you can do it with a 2d array as well - the implementation of multidimensional arrays is pretty straightforward.
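To make the row-major layout concrete, here's a tiny sketch (Python) of the index arithmetic a multidimensional array does under the hood:

```python
def index_2d(row, col, width):
    # Row-major layout: element (row, col) of a conceptual 2D grid lives at
    # this offset in the flat 1D array.
    return row * width + col

# Walking col in the inner loop touches consecutive addresses (cache
# friendly); walking row in the inner loop jumps `width` elements per step.
width, height = 4, 3
linear_order = [index_2d(r, c, width) for r in range(height) for c in range(width)]
```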
|
# ¿ Oct 26, 2021 01:04 |
|
Now you've got the opposite problem - you're jumping all over map instead of accessing it in a linear fashion! You need to transpose labelMap so that it gets walked in the same way as map - with y positions first and x positions second - then you'll be able to set up your loops so that both arrays are always accessing adjacent elements.
|
# ¿ Oct 26, 2021 01:53 |
|
0,0 is still the top left. Right now 0,1 is the leftmost column, one pixel down from the top, while 1,0 is top row, one pixel in from the left. You want it to be the other way around, so 0,1 is top row, one pixel in. This will make it so that map[i*width + j] and labelMap[i, j] correspond to the same position.
|
# ¿ Oct 26, 2021 03:06 |
|
Raenir Salazar posted:
Hrm, IIRC in Unity3D for texture coordinates the origin 0,0 is Bottom Left no? So when I import my data from my Texture2D its 0,0 assumes bottom left? I think it's been a couple of times now I said bottom left and you said top left and I want to clear this up in case it matters.

I'm not familiar with Unity3D specifically, so you're probably correct here. It doesn't really matter in this case though - the important thing is that the coordinates should be the same for both your labelMap array and your imported texture data. In the code you've presented they're swapped around (labelMap[j, i] corresponds to the map pixel map[i*width + j]); you need to un-swap them.
|
# ¿ Oct 26, 2021 03:28 |
|
Yup, that sounds about right. The way you have it is correct - note that immediately after processing, say, element 2037, you then proceed on to element 2038, which will be right next to element 2037 in both arrays. That gives you very good locality.

I think further optimizations are possible, but you'd need to change the structure you're outputting (but at a guess, the "bag of coordinates" you're currently outputting is going to be difficult to do anything performant with downstream anyway...).

My first thought would be that the whole "finding which label is smaller when there are two possibilities" part is totally redundant - if you're going to be using UnionFind to merge the labels together, it doesn't actually matter which one you picked for this node. So you could use an algorithm like: code:
e: I guess swap "above" with "below" for your use case, I know I'm going to keep messing that up so I won't even try to fix it.
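For anyone following along, here's an illustrative Python version of that two-pass approach - take any labelled neighbour's label, and union only when both neighbours are labelled and disagree (my sketch, not the actual thread code):

```python
def label_components(grid):
    # Two-pass connected-component labelling (4-connectivity) on a binary
    # grid. First pass: scan in row-major order, looking only at the "up"
    # and "left" neighbours, taking any neighbour's label and recording
    # merges in a union-find forest. Second pass: collapse to roots.
    height, width = len(grid), len(grid[0])
    parent = [0]  # parent[i] = parent of label i; index 0 is background

    def find(label):
        # Iterative find with path halving.
        while parent[label] != label:
            parent[label] = parent[parent[label]]
            label = parent[label]
        return label

    labels = [[0] * width for _ in range(height)]
    next_label = 1
    for y in range(height):
        for x in range(width):
            if not grid[y][x]:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)  # fresh label is its own root
                labels[y][x] = next_label
                next_label += 1
            else:
                labels[y][x] = up or left  # take any labelled neighbour
                if up and left and find(up) != find(left):
                    parent[find(up)] = find(left)  # merge the two trees
    for y in range(height):
        for x in range(width):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```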
# ¿ Oct 26, 2021 04:57 |
|
Generally speaking, vectorizing with 4-way SIMD instructions will improve performance somewhere between "not at all" and a 4x improvement depending on the problem. I'm not too sure it'll help in this case - critically, the label you assign to the fourth value you look at is going to depend on what label gets assigned to the preceding three values - that dependency is going to heavily limit how much benefit you can get from vectorizing. It's more useful for totally-parallel operations where the output from one element only depends on a defined set of inputs, and not on the output from other elements. You could perhaps look at a diagonal stripe all at once - say, process (1,4),(2,3),(3,2) and (4,1) as one unit, followed by (1,5),(2,4),(3,3) and (4,2), that way all of the outputs each one depends on will have already been written. You'd need to write some special-case code to handle the start and end of each row. It's questionable whether that would actually improve things though - if your existing solution was constrained by memory bandwidth instead of CPU cycles, it probably wouldn't. If you really wanted more performance, you could learn about using profiling tools to identify what parts of your code are the bottlenecks where improving them would net the biggest gains.
|
# ¿ Oct 26, 2021 05:34 |
|
What to replace the bag collection with is going to depend a lot on what you intend to do with this output. For a lot of use cases, a single point within the region is equally as useful as the entire bag of points, because what the consuming code really wants to do is something like find all the boundary points, or work with a 2d bitmask of in vs. out points, or something like that. But it really depends on what you want to do.
|
# ¿ Oct 26, 2021 05:39 |
|
Your Union implementation is wrong - when you join two elements, you need to find the roots of each tree and join those, rather than just joining the elements you're given.

The Find implementation is correct, but probably isn't optimal - I wouldn't expect the compiler or jitter to attempt to inline recursive calls, so you're paying function call overhead every time even when you're only one step from the root. Look up and implement one of the iterative Find implementations instead.

Lastly, you're allocating each label in a separate allocation which means they're going to be scattered around the heap. Consider making Label a value type, have an array that holds all the labels, and refer to each label purely by index in that array. If you set up your struct so that when it's zero-initialized, that correctly indicates a new label that hasn't been joined with anything yet, you can totally skip initializing labels and just keep a counter referring to the first unused one. For example: code:
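To illustrate that flat layout (Python here for brevity - the real thing would be a C# struct array; I'm storing each parent index offset by one so that a zero entry means "fresh label, my own root" and new labels need no initialization at all):

```python
class LabelTable:
    # Every label is just an index into parallel flat arrays - no per-label
    # object allocations scattered around the heap.
    def __init__(self, capacity):
        self.parent = [0] * capacity  # 0 = "I am my own root"; else root+1
        self.rank = [0] * capacity
        self.next_unused = 0

    def new_label(self):
        # No initialization needed: the zero state already means "fresh".
        label = self.next_unused
        self.next_unused += 1
        return label

    def find(self, label):
        # Iterative find with path halving - no recursive call overhead.
        while self.parent[label] != 0:
            p = self.parent[label] - 1
            if self.parent[p] != 0:
                self.parent[label] = self.parent[p]  # skip to grandparent
            label = self.parent[label] - 1
        return label

    def union(self, x, y):
        # Join the ROOTS of the two trees, not the elements themselves.
        root_x, root_y = self.find(x), self.find(y)
        if root_x == root_y:
            return
        if self.rank[root_x] < self.rank[root_y]:
            root_x, root_y = root_y, root_x  # swap so root_x is the deeper
        self.parent[root_y] = root_x + 1  # union by rank
        if self.rank[root_x] == self.rank[root_y]:
            self.rank[root_x] += 1
```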
(If you're clever, you can then notice that parentIndex and rank are never used at the same time, and you just need the bool and one int. If you're really clever, you can store the bool in one bit of the int as long as you're never going to need more than 2^31 different pre-merge labels).
|
# ¿ Oct 28, 2021 06:38 |
|
To be clear, the Union implementation in that CodeProject article is wrong. It might be sufficient for the exact use case the article author applies it to - proving that it is could be tricky. It's not correct as a general implementation. The Wikipedia article HappyHippo linked is fantastic, it's a much better reference on UnionFind than any random hands-on tutorials you're going to see floating around. I would recommend implementing it yourself based on the Wikipedia article - it's a good learning experience.

On performance, the big secret (and it's not really a secret) is that having lots of tiny little object instances sucks for performance. They're great for having flexibility and being able to rapidly iterate on things through the design phase, but once you know exactly what you're doing and you know it needs to be high-performance, you really want to get away from objects and focus on a more data-oriented programming model. As a rather trite example, when you have an image, you don't model it as a big list of pointers to individually-allocated Pixel objects - instead it's a big array containing the pixel data directly.
|
# ¿ Oct 29, 2021 00:53 |
|
Raenir Salazar posted:
@Jabor I'm not quite sure what isChild is for, as it seems like whenever I assign a new label I can just set the parentIndex to itself and that should work?

The idea is that you don't need to initialize anything at all - since the 0 state already represents "a self-contained tree that hasn't been joined to anything". That way you don't need to touch the labels array if you're just creating new values, only when you're actually unioning things. If you're not using it just delete it from the struct - making your array fit the same number of entries in less memory will improve your cache hit rate and thus performance.

Raenir Salazar posted:
This is wrong - if you're joining a small X tree to a larger Y tree, now all the parents of the actual node being joined will be dropped. You could just do a three-line swap with a temporary variable: code:
code:
|
# ¿ Oct 29, 2021 09:58 |
|
Your structure seems fine, it sounds like what you need is better tools for working with it. For example, you could write a converter that takes Ink's data files and converts them to your format, then use all the Ink tooling to actually do the writing.
|
# ¿ Jan 19, 2022 10:12 |
|
When you start nesting multiple levels of collections like this, it's good to take a step back and think about whether defining some new classes would make things clearer or simplify some implementations. For example, if you defined an EventManager class that handled dispatching an event to a set of observers, and made the dictionary in your top-level registry just a Dictionary<Type, EventManager>, that would separate those concerns and allow you to focus on them individually. You could also define a RegistrationRecord struct that encapsulates the listener and function to invoke (and sort key, if you choose to go that route) and simply have the EventManager hold a single List<RegistrationRecord>. When dispatching events, just iterate the list and call the functions - when adding or removing listeners (relatively rare in comparison), iterate the list until you find the right spot to insert or listener to remove. You only have a small number of listeners so it's not a big deal.
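A rough sketch of that shape (Python stand-in for the C# classes; names like EventRegistry are mine, not from your code):

```python
from collections import defaultdict

class EventManager:
    # Handles dispatch for one event type: a flat list of registration
    # records, each pairing a listener with the function to invoke.
    def __init__(self):
        self.records = []  # list of (listener, callback) tuples

    def add(self, listener, callback):
        self.records.append((listener, callback))

    def remove(self, listener):
        # Removals are rare, so a linear scan is fine.
        self.records = [r for r in self.records if r[0] != listener]

    def dispatch(self, event):
        # Copy so a callback that (un)registers doesn't break iteration.
        for _, callback in list(self.records):
            callback(event)

class EventRegistry:
    # Top-level registry: just a mapping from event type to its manager.
    def __init__(self):
        self.managers = defaultdict(EventManager)

    def subscribe(self, event_type, listener, callback):
        self.managers[event_type].add(listener, callback)

    def publish(self, event):
        self.managers[type(event)].dispatch(event)
```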
|
# ¿ Feb 3, 2022 02:25 |
|
Not game dialog specifically, but our UI is translated into a variety of languages and if you just tell the translators what specific placeholders mean then they'll do a pretty good job of keeping them in the right place. I wouldn't expect them to learn, understand, and rewrite your dialog control scripts to make them do new things for a particular language though, especially if this is a one-off translation job rather than an ongoing contract - you'll probably get similar results regardless of which mechanism you go for.
|
# ¿ Feb 22, 2022 05:06 |
|
What's with the complete lack of comments and meaningful names in your code? Clear and understandable code is easy code to identify higher-level improvements in. Higher-level improvements like "not using an n^2 deduping algorithm where you add all the verts one-at-a-time to an array and then check the entire array for duplicates with every new vert you see".
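For contrast, a dictionary-based dedupe is linear time - a Python sketch of the idea (hash-friendly vertex tuples assumed):

```python
def dedupe_verts(verts):
    # Linear-time dedupe: a dictionary maps each vertex to its index in the
    # output list, replacing the quadratic "scan the whole array for every
    # new vert" approach.
    index_of = {}
    unique = []
    remap = []  # remap[i] = index of verts[i] in the deduped list
    for v in verts:
        if v not in index_of:
            index_of[v] = len(unique)
            unique.append(v)
        remap.append(index_of[v])
    return unique, remap
```

For floating-point vertices you'd usually quantize coordinates before hashing so nearly-equal verts land in the same bucket.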
|
# ¿ Apr 11, 2022 18:06 |
|
Are you sure it's not just that you're computing which direction is "down", and then the orientation of the cube is being updated before anything is actually drawn on screen? That would match your observation that the amount of error is proportional to how fast the cube is spinning.
|
# ¿ Apr 17, 2022 11:07 |
|
A "2d vector drawing lib" is a pretty good description of the Canvas API. Do you need to put a library on top of it?
|
# ¿ Aug 12, 2022 14:18 |
|
Gotta add "date handling" to the joke I guess.
|
# ¿ Nov 1, 2022 06:05 |
|
Have you considered antialiasing your lines? Instead of having a hard "close to a line -> set colour to black; not close enough -> do nothing", you'd set a shade of grey based on how close the pixel is to the line.
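A sketch of the idea (Python; the half-width and feather values are made up):

```python
def line_shade(distance, line_half_width, feather=1.0):
    # Coverage-style antialiasing: fully dark inside the line, fading
    # linearly out to the background over a feather band at the edge.
    # Returns 0.0 (black) through 1.0 (background/white).
    edge = distance - line_half_width
    if edge <= 0.0:
        return 0.0
    if edge >= feather:
        return 1.0
    return edge / feather
```

A smoothstep instead of the linear ramp gives slightly nicer results, but the principle is the same.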
|
# ¿ Nov 2, 2022 02:50 |
|
Experience, mostly. You gently caress it up, see what went badly, and make different mistakes next time. There's a lot to be said for getting far enough into the prototype stage that you can evaluate whether your game design is actually fun to play before knuckling down to rewrite the dodgiest parts - then your rewrite can focus on supporting your next design iteration instead of the one that you're just about to throw out.
|
# ¿ Nov 16, 2022 01:50 |