Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense

Jabor posted:

It seems like you should be able to do what you want.

- You know the height of the camera above the Y-plane.
- You know the angle of the camera (45 degrees)
- Hence, you know how far away your blue square is from the camera (it's the height above the y plane / cos(45))
- Hence, in the camera space, your ray starts at [0, 0, (camera.y - yplane.y) / cos(45) ]

it does seem so doesn't it.

Now I've got the camera and ray separated because I got sick of trying to make that work. The ray and camera are both children of the rig, and both are rotated -45 degrees on x. Which shouldn't mean anything; it's just the direction the ray is pointed. So now I'm trying to place the cock making GBS threads ray at tan(45) * y_distance.

The ray is in space for all I know.


Xerophyte
Mar 17, 2008

This space intentionally left blank

Nolgthorn posted:

I mean sure here's a picture.



Far as I can tell from a very quick googling, Unity's ray type consists of just an origin o and a direction d, no tmin or tmax. If that's right then you're basically looking to update the origin directly so it's just below the plane, correct?

That means you're looking to find a value t such that an origin change o' = o + td puts the new o' just below the plane. The speed of the ray in the y direction is |d.y|, so the time it takes to end up just below the plane is the height above the plane divided by that. I.e.

o' = o + (plane_height - o.y) / d.y * d

Does that make sense?


E: Incidentally, d.y = -cos(45°) if the direction is a unit vector so this is pretty much the same thing as suggested on the last page. You could also express it in terms of dot products for a generic ray-plane intersection.

E2: Likewise, if the ray origin is in camera space then the plane_height needs to be the height in camera space, which is camera.y - plane.y as per last page.
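The update above can be sketched in a few lines of engine-agnostic Python (plain 3-tuples standing in for Unity's Ray, with an illustrative 45° camera):

```python
import math

# Xerophyte's origin update, o' = o + t*d with t = (plane_height - o.y) / d.y,
# on plain 3-tuples (illustrative, not Unity's actual Ray type).
def advance_ray_origin(o, d, plane_height):
    t = (plane_height - o[1]) / d[1]
    return tuple(oc + t * dc for oc, dc in zip(o, d))

# Camera-space ray: 100 units above the plane, pitched down 45 degrees.
o = (0.0, 100.0, 0.0)
d = (0.0, -math.cos(math.radians(45)), -math.cos(math.radians(45)))
print(advance_ray_origin(o, d, 10.0))  # y lands on 10, z slides back 90
```

With a unit direction, d.y = -cos(45°) exactly as the edit notes say, so the origin drops the full 90 units of height while shifting -90 in z.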

Xerophyte fucked around with this message at 15:53 on Feb 8, 2020

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

Nolgthorn posted:

I'm not sure how that equation works, I think it's confusing because the aim vector is at an angle in 3d space but your equation only takes into account y. Not to say it isn't correct, but what I really think I need is to calculate the intersection point between a line and a plane. The best (simplest) I can come up with is something like this:

The aim vector is 3D, but you don't need to consider all of the dimensions to determine the "how far do I march along this line" bit. You do take them into account when actually positioning the ray.

Let's take a 2D example. Say you have a camera whose look direction is (.707, -.707) (i.e. its slope is -1, which happens to be a 45 degree angle). The camera's position is X=50, Y=100. You want to position your ray 10 units above the Y=0 line. But you know that your ray can only be moved along the line that has a slope of (.707, -.707) and passes through (50,100). In other words, your question is "how far do I move along this line in order for the remaining distance in Y to be 10?"

Because you're only moving along the line, the amount you move in Y determines the amount you move in X. You can't move 50 in X and -45 in Y; that's not a position on the line. What this means, very conveniently, is that you can solve the "distance to move along this line" for one axis and the apply the same solution to all axes.

In this case, your camera is at Y=100 and you want the ray to start at Y=10. So it needs to move -90 in Y. In order for that movement to be legal it must also move a specified amount in X; that amount is 90 because our slope dictates that for every negative unit we move in Y we must move a positive unit in X. So the ray's position in this example would be (X=140, Y=10).
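That worked example can be checked directly; a small Python sketch (function name mine, numbers from the post) solves the march distance on Y and applies the same parameter to both axes:

```python
# Solve "how far along the line" on the Y axis only, then apply the same
# parameter to every axis (name and tuple layout are mine).
def march_to_y(pos, direction, target_y):
    steps = (target_y - pos[1]) / direction[1]
    return (pos[0] + steps * direction[0], pos[1] + steps * direction[1])

# Camera at (50, 100) looking along (.707, -.707); ray should start at Y=10.
print(march_to_y((50.0, 100.0), (0.707, -0.707), 10.0))  # ≈ (140.0, 10.0)
```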

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

TooMuchAbstraction posted:

In this case, your camera is at Y=100 and you want the ray to start at Y=10. So it needs to move -90 in Y. In order for that movement to be legal it must also move a specified amount in X; that amount is 90 because our slope dictates that for every negative unit we move in Y we must move a positive unit in X. So the ray's position in this example would be (X=140, Y=10).
The bit you're missing there is that the ray-origin is specified in camera-space, not global-space. So you could either figure that it's at (140,10) and then run a translation from global to camera-local space, or you could transform your y distance to the hypotenuse and then give the ray-origin a local position (0,0,hypotenuse_length). Where in this example hypotenuse_length is 90/cos(45)
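One detail worth double-checking in camera-local space: a vertical drop of 90 along a 45° line travels farther than 90, so the leg-to-hypotenuse conversion divides by the cosine:

```python
import math

# A drop of 90 units along a line tilted 45 degrees from vertical travels
# leg / cos(45), not leg * cos(45) -- the hypotenuse is the longer side.
drop = 90.0
hypotenuse = drop / math.cos(math.radians(45))
print(hypotenuse)  # ≈ 127.28, i.e. 90 * sqrt(2)
```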

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

roomforthetuna posted:

The bit you're missing there is that the ray-origin is specified in camera-space, not global-space. So you could either figure that it's at (140,10) and then run a translation from global to camera-local space, or you could transform your y distance to the hypotenuse and then give the ray-origin a local position (0,0,hypotenuse_length). Where in this example hypotenuse_length is 90/cos(45)

I feel like my approach is simpler than solving a triangle, especially if you move from 2D to 3D. But that is of course a subjective opinion, so you're free to disagree.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

TooMuchAbstraction posted:

I feel like my approach is simpler than solving a triangle, especially if you move from 2D to 3D. But that is of course a subjective opinion, so you're free to disagree.
Absolutely your approach is great if you're trying to calculate the global position of the ray start, I really liked it! But in the framework he's using he wants it in camera-local coordinates, which means the hypotenuse length is the number he needs to calculate, and you can't do that the easy and cool way.
(Your way reminds me of the Bresenham algorithm for lines.)

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
I am still working on this.

code:
var origin: Vector3 = camera.project_ray_origin(screen_point)
var target_y: = float(Game.current_layer + 1) * CONSTANTS.LAYER_HEIGHT
var yy: float = origin.y - target_y
var zz: float = yy * tan(45)
var offset: Vector3 = Vector3(0, -yy, -zz).rotated(Vector3.UP, rotation.y)
ray.global_transform.origin = origin + offset
Even when I break it down into drag my knuckles on the concrete levels of simplicity as seen above. Where I don't even really care anymore how long the hypotenuse is, and I'm manually rotating the point. It ends up in the correct y position, but z is off by a fairly significant margin. Even stranger the closer I move my mouse toward the top of the screen the worse the z offset is.

I imagine this has something to do with the camera being at an angle. So when I'm moving the mouse up the screen, the y is increasing on the origin and the z is getting smaller. Which apparently results in a bad calculation. I thought it made sense that with an orthographic camera the 45 degree calculation would apply across the whole viewport.

If the z was getting larger you would think that might help so I tried swinging the camera around and pointing it in reverse. That doesn't help and even if it did it wouldn't explain why there's a z offset when my cursor is in the middle of the screen.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
This fixes it

code:
var zz: float = yy * tan(deg2rad(45))
Turns out these operations use radians not degrees. That drove me nuts.
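The same pitfall is easy to demonstrate outside Godot; Python's math.tan also takes radians:

```python
import math

# Python's math.tan has the same convention as Godot's tan: radians in.
print(math.tan(45))                # 45 radians, ≈ 1.62 -- the bug
print(math.tan(math.radians(45)))  # 45 degrees, ≈ 1.0 -- the fix
```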

Nolgthorn fucked around with this message at 11:58 on Feb 9, 2020

Kibbles n Shits
Apr 8, 2006

burgerpug.png


Fun Shoe
Is Unity's tilemap system good for procedural world generation and/or dynamic worlds, or is it more geared towards "painting" tiles into a static scene?

Edit: Assuming the world size and number of objects is within reason, of course.

Kibbles n Shits fucked around with this message at 04:36 on Feb 10, 2020

novamute
Jul 5, 2006

o o o

Kibbles n Shits posted:

Is Unity's tilemap system good for procedural world generation and/or dynamic worlds, or is it more geared towards "painting" tiles into a static scene?

Edit: Assuming the world size and number of objects is within reason, of course.

Easy enough to do both

Ruzihm
Aug 11, 2010

Group up and push mid, proletariat!


Kibbles n Shits posted:

Is Unity's tilemap system good for procedural world generation and/or dynamic worlds, or is it more geared towards "painting" tiles into a static scene?

Edit: Assuming the world size and number of objects is within reason, of course.

Unitystation uses it to do both (please ignore the failed unit tests, that's my fault :kiddo:)

Ruzihm fucked around with this message at 19:58 on Feb 14, 2020

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
Anyone interested in chatting about AI?

I've got a management game going and I need to start designing the AI system. I know I don't want to design an entire finite state machine for all the different possible things and how to do them, I came across a pretty nice solution called Goal Oriented Action Planning (GOAP). It basically boils down to: Each action has a set of prerequisites and effects. Given a list of actions, figure out how to achieve a goal. It's essentially a finite state machine but with all the edges being programmatic instead of well defined.

An agent that is supposed to build a thing knows it can do that by going to the warehouse to pick up a box, bring it to the location, and it's done. Where I am now, is trying to figure out which goal any given agent should have. So I've got an agent with memory x at location y and I've got a simulation with state z. What's the "select a goal" part?

How do I weigh the options, what is the common name for this?
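The prerequisites-and-effects idea boils down to a search problem. A minimal sketch of the warehouse example (action names and the flat fact-set world model are mine), using breadth-first search:

```python
from collections import deque

# A minimal GOAP-style planner: each action has preconditions and effects
# over a flat set of world facts; plan with breadth-first search.
ACTIONS = {
    "go_to_warehouse": (set(),            {"at_warehouse"}),
    "pick_up_box":     ({"at_warehouse"}, {"has_box"}),
    "deliver_box":     ({"has_box"},      {"box_delivered"}),
}

def plan(state, goal):
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:
                nxt = frozenset(facts | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan(set(), {"box_delivered"}))
# → ['go_to_warehouse', 'pick_up_box', 'deliver_box']
```

A real planner would usually search backwards from the goal with a heuristic (A* is typical for GOAP), but the shape is the same.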

Nolgthorn fucked around with this message at 12:39 on Feb 19, 2020

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Nolgthorn posted:

Anyone interested in chatting about AI?

I've got a management game going and I need to start designing the AI system. I know I don't want to design an entire finite state machine for all the different possible things and how to do them, I came across a pretty nice solution called Goal Oriented Action Planning (GOAP). It basically boils down to: Each action has a set of prerequisites and effects. Given a list of actions, figure out how to achieve a goal. It's essentially a finite state machine but with all the edges being programmatic instead of well defined.

An agent that is supposed to build a thing knows it can do that by going to the warehouse to pick up a box, bring it to the location, and it's done. Where I am now, is trying to figure out which goal any given agent should have. So I've got an agent with memory x at location y and I've got a simulation with state z. What's the "select a goal" part?

How do I weigh the options, what is the common name for this?

Goal selection
Decision making

Beef
Jul 26, 2004
Forward chaining? I remember FEAR's GOAP being derived from STRIPS.

LordSaturn
Aug 12, 2007

sadly unfunny

I think what you're describing is usually handled by a list of task categories in priority order, sometimes per job role, sometimes per agent. somewhere there's a big list of all the tasks that have been ordered but nobody's started doing them. the agent finds their top category with a job in it, and then identifies the nearest job in that category, and then pulls it off the big list to do themselves. if they get interrupted, the job's gotta get back in the big queue somehow.
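That priority-category scheme might look something like this sketch (field names and the distance metric are made up for illustration):

```python
# Per-agent priority-ordered task categories plus a shared list of open
# jobs; the agent claims the nearest job in its top non-empty category.
def pick_job(agent_pos, categories, open_jobs):
    for cat in categories:                        # highest priority first
        jobs = [j for j in open_jobs if j["category"] == cat]
        if jobs:
            job = min(jobs, key=lambda j: abs(j["pos"] - agent_pos))
            open_jobs.remove(job)                 # claim it off the big list
            return job
    return None                                   # nothing to do

jobs = [{"category": "haul", "pos": 5}, {"category": "build", "pos": 2},
        {"category": "haul", "pos": 1}]
print(pick_job(0, ["build", "haul"], jobs))
```

An interrupted job would get re-appended to open_jobs, as the post says.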

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
For the time being I have an idle goal and a work goal. If there's any work to do, then the goal is to improve the base, otherwise do whatever that fulfils the needs of the agent. I suppose that's pretty simple for now and I'll just expand on it as I add more stuff the agent can do.

I'm a little bit bewildered about where the distinction should lie. It all seems so arbitrary. Why isn't there, for example, only one goal that all agents pursue, with actions selected based on all the little pieces of information that are available? What about tiered action selection, like I can select an action and that action has a subset of actions it can choose from and so on. That way goal selection is calculated the same way as anything else. Currently in my system if their goal is to work but all the jobs are taken or they can't actually work for some reason then it shifts them to idle and plans again.

Is that even useful you know what I mean?

Would I ever need any depth whatsoever to the actions an agent can perform? It seems like it would be the same as selecting one from a master list of actions anyway. What if the action is combat, for example, should that be many different actions? Should I build out combat as a completely separate, unrelated system to the given goal, or should it be a subset of actions that are evaluated after it's decided they will engage in combat? Should combat flow with the rest of the actions towards a higher goal, or is the goal combat?

There's just a lot to think about; there doesn't seem to be any canonical way to do any of this. It's up to the designer to implement AI unique to the game they are building.

Honestly I'm leaning toward removing goals altogether. Making the final leaf of the tree be "win the game" or similar with a bunch of prerequisites that just filter down the tree forever. Combat can consist of many different actions all with the effect of "kill the enemy". Just basically strip this whole drat thing right down to the bone and trust my ability to design good algorithms for an action's weight.

I hope that won't be too computationally expensive.

Nolgthorn fucked around with this message at 21:31 on Feb 24, 2020

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Nolgthorn posted:

Honestly I'm leaning toward removing goals altogether. Making the final leaf of the tree be "win the game" or similar with a bunch of prerequisites that just filter down the tree forever. Combat can consist of many different actions all with the effect of "kill the enemy". Just basically strip this whole drat thing right down to the bone and trust my ability to design good algorithms for an action's weight.

I hope that won't be too computationally expensive.
That's kind of how chess AI mostly works. You calculate a "how do you rate this world" score, try each of the possible moves, "how do you rate *that* world?" and then you pick the move that led to the best-scoring world.

But it's tricky for a larger real-time world because how far ahead in time should you look? If an agent can score 1 point in half a second, or 10 points in 4 seconds, how do you decide between those options, things might have changed in 3 seconds, etc.

Edit: if you can ask the Caves of Qud guy that would probably get you the best advice because his agent-based AI is really fun.
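The rate-each-resulting-world idea, sketched with a stand-in scoring function (all names and numbers here are made up for illustration):

```python
# One-ply "rate the resulting world" search; rate() is a stand-in scoring
# function and the two candidate actions are invented.
def rate(world):
    return world["points"] - world["danger"]

def best_action(world, actions):
    # actions maps a name to a function producing the world after that action.
    return max(actions, key=lambda name: rate(actions[name](world)))

world = {"points": 0, "danger": 0}
actions = {
    "grab_coin": lambda w: {**w, "points": w["points"] + 1},
    "charge_in": lambda w: {**w, "points": w["points"] + 3, "danger": w["danger"] + 5},
}
print(best_action(world, actions))  # → grab_coin
```

The time-horizon question from the post lives inside rate(): scoring "points per second" versus "total points" gives very different agents.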

M2tt
Dec 23, 2000
Forum Veteran
This is as much for me as for Nolgthorn but I have recently implemented a hacky GOAP-inspired AI for my management game and the breakthrough for me was realizing that the agents weren't the only things that had goals. Specifically I've got a number of machines that need things done (as a result of player input) and those goals (like build a widget) are broken down into tasks and then added to the job board as required. Each Agent also has its own internal job board to deal with its selfish needs like food/sleep/etc and then eventually lands on the "Work" goal, at which point it grabs a task from the job board that it's qualified to do (i.e. only a Chef can cook food but anyone can deliver ingredients). Basically in the structure I ended up on, each goal has a series of tasks that are required for the goal to be complete, each task has code behind it, but the goals are just dictionary entries. Couple of examples:

Eat (Goal)
1. Supply food to Self
2. Find Seating (optional)
3. Eat

Build Widget (Goal)
1a. Supply Material A to Machine
1b. Supply Material B to Machine
2. Process Material
3. Remove Finished Widget

Priority atm is just each agent finishing its internal job board first, and then checking the global job board for things to do, and in the case of a/b/c/etc tasks can be completed in parallel so those jobs can be assigned to a single agent or multiple (all assignments are kept track of by the Job Board). Currently it's all FIFO but it won't be too hard to add more involved priority logic when I'm ready to do that. All the Supply code is the same function though, whether the agent is feeding itself or delivering material to a machine, and as such I've found it really extensible. To take Nolgthorn's example of upgrading the base, each element of the base that needs to be upgraded would have its own goal of being upgraded, so that series of tasks gets added to the global job board as each prerequisite gets completed (Supply upgrade module to base bit, install upgrade module). When it comes to combat, each invader would add its own set of jobs to the board (Supply weapon to Self, Find Invader, Attack Invader). To keep performance in check I adjust the Tick Rate of every requester based on what they're requesting at the time...so a relaxing agent isn't going to stop relaxing for at least 5 seconds, while a machine will check every 2 seconds to see if the material it needs has been delivered.

Anyways hope that's helpful and/or someone can tell me what's going to go disastrously wrong with my approach. I'm still blown away every time I add a new machine and some dude walks over and loads it up for work.
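A data-only version of that structure might look like this sketch: goals as plain dictionary entries of ordered task lists, feeding a FIFO job board (task names are from the examples above, the code itself is mine):

```python
from collections import deque

# Goals are just data: ordered task lists. The code lives behind the task
# names; a FIFO job board hands tasks out to agents.
GOALS = {
    "Eat":         ["supply_food_to_self", "find_seating", "eat"],
    "BuildWidget": ["supply_material_a", "supply_material_b",
                    "process_material", "remove_finished_widget"],
}

job_board = deque()

def post_goal(goal):
    # Break the goal into its tasks and queue them on the global board.
    job_board.extend(GOALS[goal])

def take_task():
    return job_board.popleft() if job_board else None

post_goal("BuildWidget")
print(take_task())  # → supply_material_a
```

Assignment tracking, qualification checks, and parallel a/b tasks would layer on top of take_task(), but the goal definitions stay pure data.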

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense

M2tt posted:

I'm still blown away every time I add a new machine and some dude walks over and loads it up for work.

It's fun isn't it? I'm having a blast watching my creations walk around, this is my first attempt at building AI.

What seems to simplify things a lot is to have the agents be as 'naked' as possible with regards to scripting. All of their commands come in from the actions that reference them. Have those actions stored pretty deep somewhere inside of a planning manager so that you have generalised access to information about the simulation.

When your agents need to move somewhere is that a separate action?

Currently for me I've got an Action base class and an extended ActionTargeted base class. If my action requires them to be somewhere then I use the latter and it's responsible for getting them there before they start. Which is ignoring to a certain extent the single responsibility principle. But when you're talking about OOP, what are you really going to do? It'll never be perfect, right.

But I was finding the calculation of whether they needed to travel somewhere and creating a separate action for it all a bit confusing, since it doesn't seem to lend itself well to GOAP. At least currently I don't have a way to say "prerequisite: be at this location", and it doesn't seem like GOAP is designed to handle specifics like that.

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
I'm picturing something like adding costs for tasks and values for jobs to weight them, where traveling is separate task with a cost and the value of the job increases depending on age and/or how much the product of the job is needed. Basically sum up the costs of all the tasks required to complete a job and compare it to the value of the job to prioritize which jobs are worth doing.

So if you need water to survive, but you have nearby water storage and a personal hydration value, then that job to refill your water store that is expensive because of the travel time increases in value based on how low your water storage/hydration value get. You could probably even compare the cost of going to the distant water source vs going to the water storage and determine when it made sense to refill.

Goddammit, now I want to try out a system like that but it probably doesn't mesh with my current project at all. <:mad:>
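That cost-vs-value weighting could be sketched like so (the numbers and the urgency formula are invented for illustration):

```python
# A job is worth doing when its value, scaled by how badly it's needed,
# beats the summed cost of its tasks (all numbers illustrative).
def job_score(task_costs, base_value, urgency):
    return base_value * urgency - sum(task_costs)

# Refilling distant water storage: pricey travel, but urgency grows as
# hydration drops, so the job eventually wins out.
travel_and_fill = [8.0, 2.0]
for hydration in (0.9, 0.5, 0.1):
    print(hydration, job_score(travel_and_fill, 5.0, 1.0 / hydration))
```

Comparing job_score for the distant source versus the nearby storage gives exactly the "when does the refill trip make sense" decision described above.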

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
I'm thinking I need a separate movement system. When an agent is moving it isn't performing its actions, those are just waiting in the background. Which means I need the tiniest of finite state machines, which I would also take the opportunity to use for a true idle state when nothing else makes sense. So states would be idle, movement, and action. Determined essentially by whatever the planner has spat out. If the current action has a target then you're in movement, when that finishes then resume.

I can see how that would massively tidy some of this up. I'd also be able to extend the movement system to include whatever complexities it needs as well. I've also still got to set up the whole "holy poo poo abort what you're doing" part of everything.
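The tiny three-state machine described above can be sketched in a few lines (the dict shape of an action, with an optional "target", is a hypothetical stand-in):

```python
from enum import Enum, auto

# Idle / movement / action, driven by whatever the planner spat out.
class State(Enum):
    IDLE = auto()
    MOVEMENT = auto()
    ACTION = auto()

def next_state(current_action, at_target):
    if current_action is None:
        return State.IDLE                 # nothing makes sense: truly idle
    if current_action.get("target") is not None and not at_target:
        return State.MOVEMENT             # travel first, action waits
    return State.ACTION                   # in place: perform the action

print(next_state({"name": "haul", "target": (3, 4)}, at_target=False))
print(next_state({"name": "haul", "target": (3, 4)}, at_target=True))
print(next_state(None, at_target=False))
```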

ZombieApostate posted:

Goddammit, now I want to try out a system like that

Management simulation is where it's at yo.

M2tt
Dec 23, 2000
Forum Veteran

Nolgthorn posted:

When your agents need to move somewhere is that a separate action?

It's included in the task function itself, so for Supply X to Y Goal for example it's basically:

Do I have the X? No? Find the X. Move to X and pick it up when I get close. Yes? Go to Y and deliver it when I get there.

Granted I'm using UE4 so a MoveTo is abstracted enough that it's just a single function call and doesn't feel as convoluted as it may actually be, and the Find(X) function does find the closest X before deciding where to go. Also admittedly there's some mini-FSMs in the tasks, but I'm pretty sure that's unavoidable and they're all sequential in my case so I haven't run into any spaghetti disasters yet. That said it's also not as robust as ZombieApostate's example, like my dudes are never going to stop by the vending machine for a snack on their way to pick up materials because they know they'll need it later. That would be awesome to pull off, though.
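The Supply X to Y flow described above, sketched sequentially (find/move are stand-ins for the engine calls, e.g. UE4's MoveTo, and the world model is invented):

```python
# "Do I have the X? No? Find the closest X, go pick it up. Then go to Y
# and deliver." Positions are 1D numbers purely for illustration.
def supply(agent, item, destination, world):
    if item not in agent["carrying"]:
        source = min(world[item], key=lambda p: abs(p - agent["pos"]))  # Find(X)
        agent["pos"] = source                                           # MoveTo X
        agent["carrying"].append(item)                                  # pick up
    agent["pos"] = destination                                          # MoveTo Y
    agent["carrying"].remove(item)                                      # deliver
    return "delivered"

agent = {"pos": 0, "carrying": []}
print(supply(agent, "ingredient", 10, {"ingredient": [3, 7]}))  # → delivered
```

The same function serves "feed self" and "load machine", which is where the extensibility in M2tt's post comes from.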

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
Oh ok so you're doing movement similar to the way I'm doing it. I still think I'm going to abstract that out of the actions (tasks). While they are moving around they can just survey their surroundings. Since their action plan is waiting in the background anyway, it shouldn't be too difficult to have them grab something they see on their way, as long as it won't slow them down too much.

I'm using Godot and my movement is calculated every physics frame. I'm not familiar with UE but essentially I'm calculating how much force to apply to a kinematic body and in what direction all the time.

Nolgthorn fucked around with this message at 16:31 on Feb 27, 2020

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
Quick question on action plans.

After you've figured out what any particular agent should do, like "go get crate", "put crate here". Do you store the entire plan or just what the agent should currently be doing. Like after I've figured out it should do those two things, if I calculated it again, it would give me the same result, and after it has the crate if I calculated it again it would still give me "put crate here".

Therefore shouldn't I just recalculate whatever the current action to perform is and throw the rest out? All games do this don't they, many of them re-calculate what the agent is supposed to be doing multiple times a second. It feels like I shouldn't be too concerned about all the things the agent will do.

jizzy sillage
Aug 13, 2006

You could store the entire plan, and test the current action for viability. If the current action needs to change, go back and redo the plan.
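That store-and-validate approach, as a sketch (make_plan and still_viable are hypothetical stand-ins for the real planner and checks):

```python
# Keep the whole plan; each tick, re-check only the current step and do a
# full replan just when that step stops being viable.
def tick(agent, world, make_plan, still_viable):
    if not agent["plan"] or not still_viable(agent["plan"][0], world):
        agent["plan"] = make_plan(agent, world)   # replan, rarely
    return agent["plan"].pop(0) if agent["plan"] else None

agent = {"plan": ["go_get_crate", "put_crate_here"]}
step = tick(agent, {}, make_plan=lambda a, w: [], still_viable=lambda s, w: True)
print(step)  # → go_get_crate
```

This is much cheaper than replanning every frame while still reacting when the world invalidates the current step.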

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Will it actually give you the same decision every time? What happens if circumstances change while you're performing those tasks? Surely your AI will make different decisions in different circumstances?

How "sticky" you want tasks to be (equivalently: how eager agents should be to drop their current tasks when a higher-priority one becomes available) is a design decision you can think about and iterate on, but regardless you'll find it easier to strike a good balance if you look at it in terms of macro-tasks. If task stickiness depends inherently on how finely you've sliced things up into subtasks, you're going to have a bad time keeping it in a good place as you iterate on other things.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
It won't necessarily come up with the same thing but that's the point. But I see what you're saying about iteration on stickiness. Rather than replanning the whole thing, just check whether the box can still be placed where the agent was going to put it for example. Unless something really important has happened, in which case replanning is necessary.

This all seems to hark back to what I was saying a while ago which is there doesn't seem to be a canonical way to do this stuff. The harder I try to do things "the right way" the further out in the ocean I am.

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

Nolgthorn posted:

Quick question on action plans.

After you've figured out what any particular agent should do, like "go get crate", "put crate here". Do you store the entire plan or just what the agent should currently be doing. Like after I've figured out it should do those two things, if I calculated it again, it would give me the same result, and after it has the crate if I calculated it again it would still give me "put crate here".

Therefore shouldn't I just recalculate whatever the current action to perform is and throw the rest out? All games do this don't they, many of them re-calculate what the agent is supposed to be doing multiple times a second. It feels like I shouldn't be too concerned about all the things the agent will do.

https://en.wikipedia.org/wiki/Robotic_paradigm
(this article isn't great, sorry)

The thing you're trying to describe here is what computer scientists call "robotic paradigms". One of the classic robotic paradigm problems is exactly what you're asking: do I plan out all the steps right away, or do I plan one step and then re-plan when that one step is completed?

The bad news here is that the best or most efficient answer to this question depends on one's use case, i.e. it's not a thing that is trivially solvable for all cases. The good news is that game AI agents have to contend with a lot less complexity and unknowns than the researchers who were trying to put these theories into practice by building physical robots decades ago.

Stick100
Mar 18, 2003
E3 has been cancelled.
https://arstechnica.com/gaming/2020/03/e3-2020-has-been-canceled/

Argue
Sep 29, 2005

I represent the Philippines
I was watching this video about how a manga artist uses Unreal to set up environments, and I was thinking, this would be pretty handy for me too, as painting reference. Are there any tutorials generally regarded as the best, for the specific use case of just setting up environments, without regard for game-related features? I found a couple of tutorials, but they either have a lot of game stuff mixed in that I'd have to sort through, or are crazy long, and I get the feeling there must be more efficient resources to learn from if I just want to--for example--set up a small city block with prefabbed models.

loaf
Jan 25, 2004



Someone from HTC's content team contacted me about "greenlight" funding future development of my little combat flight sim, Sopwith VR. I made it last year for fun in my spare time, but I haven't updated the public build in almost a year. I've been exploring porting it to JavaScript and WebGL, but this might tempt me to dust off the Unity codebase and start adding some of the stuff I never had time for.

Does anyone have experience with these funding arrangements? Sounds like I'd create a presentation to outline the costs and scheduling, and if it's approved they want a period of exclusivity but I'd retain the rights.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
didn't htc go bankrupt

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS

Argue posted:

I was watching this video about how a manga artist uses Unreal to set up environments, and I was thinking, this would be pretty handy for me too, as painting reference. Are there any tutorials generally regarded as the best, for the specific use case of just setting up environments, without regard for game-related features? I found a couple of tutorials, but they either have a lot of game stuff mixed in that I'd have to sort through, or are crazy long, and I get the feeling there must be more efficient resources to learn from if I just want to--for example--set up a small city block with prefabbed models.

I wish I could offer you a better tip than being very general but you're basically gonna just want to learn the fundamentals of 3D environmental art, lighting and (if applicable) cel/toon shading and post-processing.

If you know where to get the assets and prefabs, then just understanding the basics on how to set up the lighting in the engine can get you a long way.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

loaf posted:

Someone from HTC's content team contacted me about "greenlight" funding future development of my little combat flight sim, Sopwith VR.
I don't have any helpful comments about the funding, but your game looks fun, and reminds me of a game on the Acorn Archimedes that did split-screen two player. One thing that was fun there was that it was *just barely* possible to fly your plane under the rugby goal posts, you even had to go through at an angle. I recommend having some 'interactive' landscape features like that as you expand your game (also, bridges to fly under). I definitely always had more fun in WW2 plane games than in modern missile launching games. Congratulations on offers of money!

Dirty
Apr 8, 2003

Ceci n'est pas un fabricant de pates

roomforthetuna posted:

I don't have any helpful comments about the funding, but your game looks fun, and reminds me of a game on the Acorn Archimedes that did split-screen two player. One thing that was fun there was that it was *just barely* possible to fly your plane under the rugby goal posts, you even had to go through at an angle. I recommend having some 'interactive' landscape features like that as you expand your game (also, bridges to fly under). I definitely always had more fun in WW2 plane games than in modern missile launching games. Congratulations on offers of money!

I had the same thought - Chocks Away, right? Great game and lots of fun co-op split screen, despite the appalling frame rate.

loaf posted:

Someone from HTC's content team contacted me about "greenlight" funding future development of my little combat flight sim, Sopwith VR.
Congratulations - it looks fun!

loaf
Jan 25, 2004



Thanks, I hadn't seen Chocks Away before but it looks great for 1990. I made sure the factories in my game were big enough to fly through, and someday I'll spawn ammo pickups or something in there. Bridges would be a nice way to spice up the scenery too.

I've never been happy with my terrain in Unity. I've been getting some interesting results generating Perlin noise terrain in Three.js and Ammo.js. I wish I could get flat-shaded terrain in Unity without learning how to write shaders.



That terrain has a full heightmap collider and I'm seeing decent performance, so I originally wanted to see if I could port the whole game over and use WebRTC for eventual multiplayer support. But I may backburner my WebGL experiments if I really get funds to resume work on the original Sopwith VR.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

loaf posted:

That terrain has a full heightmap collider and I'm seeing decent performance, so I originally wanted to see if I could port the whole game over and use WebRTC for eventual multiplayer support. But I may backburner my WebGL experiments if I really get funds to resume work on the original Sopwith VR.
It is possible to use WebRTC without being an HTML thing, in case you didn't know that.
There's even a Unity plugin for it.
The setup process sucks balls, but it sucks balls in a web app too, you still always need a server to handle the lobby part of things and the handshake process is gross as gently caress.
(And shaders aren't as horrible to write as you think!)
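To give a rough idea of what the lobby/handshake part boils down to (this is a conceptual sketch in plain C#, not any real WebRTC API -- all the names here are made up): before two peers can talk directly they have to swap SDP offer/answer blobs and ICE candidates through a server they can both reach, and that relay can be as dumb as a per-room message queue.

```csharp
using System;
using System.Collections.Generic;

// Minimal in-memory signaling relay: peers in the same room post blobs
// (SDP offers/answers, ICE candidates) and poll for the other side's.
// In a real setup this sits behind WebSockets or HTTP, but the relay
// logic itself really is about this dumb.
class SignalingRelay
{
    private readonly Dictionary<string, Queue<string>> rooms =
        new Dictionary<string, Queue<string>>();

    public void Post(string room, string blob)
    {
        if (!rooms.TryGetValue(room, out var queue))
            rooms[room] = queue = new Queue<string>();
        queue.Enqueue(blob);
    }

    // Returns null when nothing is waiting.
    public string Poll(string room)
    {
        return rooms.TryGetValue(room, out var queue) && queue.Count > 0
            ? queue.Dequeue()
            : null;
    }
}

class Demo
{
    static void Main()
    {
        var relay = new SignalingRelay();
        relay.Post("match-42", "host's SDP offer goes here"); // host posts
        Console.WriteLine(relay.Poll("match-42"));            // client picks it up
    }
}
```

Once both sides have each other's blobs, the WebRTC stack takes over and the server is out of the picture (modulo TURN relaying if NAT traversal fails).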

Spek
Jun 15, 2012

Bagel!
C# & Unity

Is there a good reason not to use partial classes to encapsulate a large chunk of classes, basically like a namespace? I have class A which needed to access some private members of class B, and since C# seems to lack a friend keyword I had to nest A inside B. I then needed to bundle a bunch of other classes related to A and B into a namespace, and it occurred to me that there didn't really seem to be a good reason not to just shove them into the same partial class I already have, especially as it would make the code more concise. I hate the idea of having to go namespace.B.A to access A, which is kind of the cornerstone of the codebase and has members that are accessed all over the place. But this seems like a gross misuse of the partial keyword and I'm wondering if there are any actual downsides.

Also, does anyone know of an easy way to profile shaders, specifically compute shaders? And for that matter, know of any good generic resources for learning to write more performant shader code? Even really basic stuff I just don't know. Like, does calling a function have overhead, or do shaders basically inline them all the time? I know branching is bad, but how bad is it? If I can do 10 arithmetic operations to avoid a branch, is that worth it? 40? I have a giant array of ints I use in a generic compute buffer; would I get better performance if I encoded them in a texture? Etc. etc.

I feel like I know almost nothing about what to do to write good shaders. It certainly doesn't help that everything I know about shaders I learnt 10+ years ago, so it's most likely both horribly out of date and half forgotten.

xgalaxy
Jan 27, 2004
i write code
C# partial classes have no runtime cost. It's purely an organizational nicety.
If you look at the .NET Fx libraries on GitHub, it's actually not that uncommon to see them use partial classes, some even in the same way you're talking about.
I don't see anything wrong with what you want to do.

At the end of the day, it's just games development. As long as your code is easy to understand and can be maintained easily, I don't think it's worth twisting yourself in knots over the technical design / architecture of your classes too much. If the solution works, is easy to read, easy to maintain, and didn't take a long time to implement, then I would choose it 10 times out of 10, regardless of whatever programming-pattern snobbery opinion someone may have.
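For concreteness, here's roughly the pattern in question -- split the partial class across files however you like, the nested type gets at the privates, no friend keyword needed. (Class and member names here are made up for illustration; shown in one file so it compiles standalone.)

```csharp
using System;

// One class, notionally split across two source files via `partial`.

// --- WorldState.Core.cs ---
public partial class WorldState
{
    private int secretSeed = 1234;  // the "friend-only" data
}

// --- WorldState.Simulation.cs ---
public partial class WorldState
{
    // A nested class can touch the containing class's private members,
    // and callers reach it as WorldState.Simulation rather than
    // SomeNamespace.B.A.
    public class Simulation
    {
        public int ReadSeed(WorldState world) => world.secretSeed;
    }
}

class Demo
{
    static void Main()
    {
        var world = new WorldState();
        var sim = new WorldState.Simulation();
        Console.WriteLine(sim.ReadSeed(world)); // 1234
    }
}
```

The compiler merges all partial declarations into a single type before anything else happens, which is why there's no runtime cost.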

xgalaxy fucked around with this message at 18:16 on Mar 15, 2020

Corla Plankun
May 8, 2007

improve the lives of everyone
I'm working through the Roguelike Unity tutorial in an effort to learn C# in an applied manner, and I've hit a roadblock. In this chapter he sets up some controls for moving the player, but I cannot for the life of me figure out why my implementation doesn't work.

Here's the bit that seems to be broken:
code:
protected IEnumerator SmoothMovement(Vector3 end)
    {
      Debug.Log($"Variable moveTime value: {moveTime}");
      Debug.Log($"Variable inverseMoveTime value: {inverseMoveTime}");
      Debug.Log($"Variable end value: {end}");
      Debug.Log($"Initial transform.position value: {transform.position}");
      float sqrRemainingDistance = (transform.position - end).sqrMagnitude;
      // TODO remove count logic because this doesn't need to give up if it actually works
      int stepMax = 4;
      int stepCount = 0;
      // while (sqrRemainingDistance > float.Epsilon && stepCount <stepMax)
      // {
      //   Debug.Log($"Distance remaining (squared) = {sqrRemainingDistance}");
      //   Vector3 newPosition = Vector3.MoveTowards(rb2D.position, end, inverseMoveTime * Time.deltaTime);
      //   Debug.Log($"Variable newPosition value: {newPosition}");
      //   rb2D.MovePosition(newPosition);
      //   Debug.Log($"Variable rb2D.position value: {rb2D.position}");
      //   sqrRemainingDistance = (transform.position - end).sqrMagnitude;
      //   stepCount++;
      //   yield return null;
      // }
      while (sqrRemainingDistance > float.Epsilon)
      {
        Debug.Log($"Distance remaining (squared) = {sqrRemainingDistance}");

        // Find a new position proportionally closer to the end, based on the moveTime
        Vector3 newPosition = Vector3.MoveTowards(rb2D.position, end, inverseMoveTime * Time.deltaTime);
        Debug.Log($"Variable newPosition value: {newPosition}");

        // Call MovePosition on the attached Rigidbody2D and move it to the calculated position.
        rb2D.MovePosition(newPosition);
        Debug.Log($"Variable rb2D.position value: {rb2D.position}");

        // Recalculate the remaining distance after moving.
        sqrRemainingDistance = (transform.position - end).sqrMagnitude;

        // Return and loop until sqrRemainingDistance is close enough to zero to end the function
        yield return null;
      }
      if (stepCount == stepMax)
        Debug.Log("Gave up on smooth movement");
    }
And according to the logs, "transform.position" and "rb2D.position" never, ever change for the player or for the enemy. I have verified that newPosition is correct, and I am at a total loss as to why rb2D.MovePosition(newPosition) seems to do nothing.

I don't have "Freeze" checked on the player prefab but I suspect that maybe I have configured it wrong, because I'm using Unity 2018 instead of Unity 5 and things look different, so I'm attaching a picture of its settings.

PS Is this the right place to ask about this? I couldn't find a Unity thread but maybe it is named something hard to search for.

Only registered members can see post attachments!
