Kibbles n Shits
Apr 8, 2006

Fun Shoe
How should I approach pathfinding on a large terrain that is "chunked"? My terrain consists of a grid of small meshes. I was thinking of doing a high level search first (calculating which chunks to path through) and then calculating a path to the next chunk using a smaller resolution grid, but I'm not sure how that could generate an optimal overall path.

xgalaxy
Jan 27, 2004
i write code
Here is a first look at what the new Entity Component System API will be like in Unity3d moving forward:

https://github.com/Unity-Technologies/EntityComponentSystemSamples/tree/master/TwoStickShooter/Pure/Assets/GameCode


Docs:
https://github.com/Unity-Technologies/EntityComponentSystemSamples/blob/master/Documentation/index.md

Ranzear
Jul 25, 2013

Can an entire chunk be blocked?

Otherwise that sounds fine. I can think of a slightly deeper method that solves entry and exit from each chunk:

I'm assuming you can look at at least two chunks at a time, while all at once is untenable.

0. Do the macro path from chunk to chunk.
1. Take the next (or first) pair of chunks, pathfind from the center of the first to the center of the second.
1a. If you can't find a path into the other chunk (ignoring the case where the chunk centers themselves are blocked), mark that chunk blocked and repeat step 0. Note that prior edge points (from the next step) are still good for a given pair of chunks.
2. Note the place where the path crosses between the chunks. As mentioned in 1a you can cache these plus any wholly blocked chunks if nothing that blocks pathing ever changes.
3. Pathfind from the previous 'edge point' (or the initial position) to this new edge point to get the path across the current chunk.
~ Repeat 1-3 for the next and next-next chunks, using the edge points for the final path.

This will still tend to follow the edges, but should at least avoid having to path through the center of each chunk. This is all just to guarantee that the boundary between the chunks is walkable.

Regarding 1a, you might need a little logic if the center of the chunk can't reach an edge. This may be as simple as ignoring unwalkable until you find a walkable point, then pathing from there, but there could be funny edge cases.

With this, even a stairstep of chunks should become a nice diagonal, though your pathing may still have an 'eight-way' bias. Caching the edge points should speed it up a little, but it's probably worthwhile to regenerate them each time with a little bit of 'wiggle' or randomness so you don't get a bunch of single file dudes.
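
Rough sketch of the above in code, if it helps. ChunkPath, LocalPath and friends are just stand-ins for whatever your actual grid search is, not real APIs:
C# code:
// Sketch only. Assumed helpers (not real APIs):
//   ChunkAt(p)            -> which chunk a point falls in
//   ChunkPath(a, b)       -> step 0: macro A* over the chunk grid
//   LocalPath(a, b)       -> grid A* restricted to the chunk(s) containing a and b, or null
//   Center(chunk), FirstPointIn(path, chunk), MarkBlocked(chunk)
List<Vector2> PathAcrossChunks(Vector2 start, Vector2 goal)
{
    var fullPath = new List<Vector2>();
    var chunks = ChunkPath(ChunkAt(start), ChunkAt(goal));     // step 0
    var entry = start;                                         // previous edge point (or the start)

    for (int i = 0; i + 1 < chunks.Count; i++)
    {
        // Step 1: center-to-center across the pair, just to prove the boundary is crossable.
        var crossing = LocalPath(Center(chunks[i]), Center(chunks[i + 1]));
        if (crossing == null)
        {
            // Step 1a: treat the next chunk as blocked and redo the macro path from here.
            MarkBlocked(chunks[i + 1]);
            fullPath.AddRange(PathAcrossChunks(entry, goal));
            return fullPath;
        }

        // Step 2: where that path first enters the next chunk is the 'edge point' (cacheable per pair).
        var edgePoint = FirstPointIn(crossing, chunks[i + 1]);

        // Step 3: the real path segment, previous edge point -> new edge point.
        fullPath.AddRange(LocalPath(entry, edgePoint));
        entry = edgePoint;
    }

    fullPath.AddRange(LocalPath(entry, goal));                 // last leg inside the goal chunk
    return fullPath;
}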

Ranzear fucked around with this message at 03:17 on Mar 21, 2018

Mata
Dec 23, 2003
Personally I went with this approach to hierarchical pathfinding: http://www.gameaipro.com/GameAIPro/GameAIPro_Chapter23_Crowd_Pathfinding_and_Steering_Using_Flow_Field_Tiles.pdf

Basically you create a graph of portals (the traversable edges between each chunk) and do your high level pathfinding on that. Then, your lower-level pathfinding just has to steer your agent to the next portal until you've reached the goal.

The flowfield/line of sight functionality took me a long time to get right, as there were a lot of edge cases that could arise, and it is probably overkill unless you have hundreds of steering agents. But I'm convinced the high level approach in that PDF is solid.
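
For reference, the portal graph itself isn't much code. Something like this, where the names are mine and not the paper's:
C# code:
// Sketch only; Chunk, WalkableSpansOnSharedEdge, ShareAChunk and ReachableWithinChunk
// are placeholders for whatever your chunk representation actually exposes.
class Portal
{
    public Vector2 Center;                               // midpoint of a walkable span on a chunk edge
    public List<Portal> Neighbors = new List<Portal>();  // portals reachable through a shared chunk
}

List<Portal> BuildPortalGraph(List<Chunk> chunks)
{
    var portals = new List<Portal>();
    foreach (var chunk in chunks)
        foreach (var neighbor in chunk.Neighbors)
            foreach (var span in WalkableSpansOnSharedEdge(chunk, neighbor))
                portals.Add(new Portal { Center = span.Midpoint });   // dedupe per edge pair in practice

    // Connect any two portals that touch the same chunk and can reach each other inside it.
    foreach (var a in portals)
        foreach (var b in portals)
            if (a != b && ShareAChunk(a, b) && ReachableWithinChunk(a, b))
                a.Neighbors.Add(b);

    return portals;
}

// High level: A* over this graph. Low level: steer the agent toward the next portal's Center
// until you're in the goal chunk, then straight to the goal.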

Mata fucked around with this message at 08:35 on Mar 21, 2018

Mr Shiny Pants
Nov 12, 2012

Is this to get "hot" data like transforms, which might change every frame compared to some other data, grouped together, making it faster to execute?

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Mata posted:

Personally I went with this approach to hierarchical pathfinding: http://www.gameaipro.com/GameAIPro/GameAIPro_Chapter23_Crowd_Pathfinding_and_Steering_Using_Flow_Field_Tiles.pdf

Basically you create a graph of portals (the traversable edges between each chunk) and do your high level pathfinding on that. Then, your lower-level pathfinding just has to steer your agent to the next portal until you've reached the goal.
Doesn't that have the same problem that was supposed to be avoided?
For example, if you have three large squares, arranged
code:
12
 3
And you're at the top left of square 1, and want to get to the bottom right of square 3, the shortest path to the portal between 1 and 2 goes to the top of it, then the shortest path to the portal between 2 and 3 goes to the left of it, then the final stretch goes diagonally, when you'd want the whole path to be diagonal (assuming the chunks are completely empty for the purposes of this example).

The portals thing may be advantageous for not assuming you can get from anywhere on a chunk to anywhere else on a chunk though. That combined with Ranzear's answer seems like it would be pretty good.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
One thing I've found with pathfinding is that once you have a path, even if it's not ideal, you should refine it by walking backwards from the end point and, for each node, checking whether there's a line-of-sight path to the start. Once there is, that's your first step. It tends to eliminate artifacts in the initial path.
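
Something like this, assuming you already have a HasLineOfSight check against your grid (not shown):
C# code:
// Sketch: collapse a node path into fewer waypoints by scanning backwards from the end
// for the farthest node that's directly visible from the current position.
List<Vector2> SmoothPath(List<Vector2> path)
{
    var result = new List<Vector2> { path[0] };
    int current = 0;
    while (current < path.Count - 1)
    {
        int next = current + 1;                              // fallback: just take the next node
        for (int i = path.Count - 1; i > current; i--)       // walk backwards from the end point
        {
            if (HasLineOfSight(path[current], path[i]))      // assumed helper
            {
                next = i;                                    // first visible node becomes the next step
                break;
            }
        }
        result.Add(path[next]);
        current = next;
    }
    return result;
}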

Dr. Stab
Sep 12, 2010
👨🏻‍⚕️🩺🔪🙀😱🙀

roomforthetuna posted:

Doesn't that have the same problem that was supposed to be avoided?
For example, if you have three large squares, arranged
code:
12
 3
And you're at the top left of square 1, and want to get to the bottom right of square 3, the shortest path to the portal between 1 and 2 goes to the top of it, then the shortest path to the portal between 2 and 3 goes to the left of it, then the final stretch goes diagonally, when you'd want the whole path to be diagonal (assuming the chunks are completely empty for the purposes of this example).

The portals thing may be advantageous for not assuming you can get from anywhere on a chunk to anywhere else on a chunk though. That combined with Ranzear's answer seems like it would be pretty good.

In this particular instance, the LOS pass means that the agent will walk straight towards the goal.

Mata
Dec 23, 2003

roomforthetuna posted:

Doesn't that have the same problem that was supposed to be avoided?

Yeah, the PDF algo achieves this by marking tiles as having line of sight to the goal - when you enter such a tile, you don't have to worry about pathfinding and can just steer straight toward the goal. But in my experience, the LoS pass was the hardest part of the algo to get right.

It might be easier to just run a JPS or A* to the goal once you get within 3 chunks of it, assuming you don't have too many steering agents. I guess this was suggested..
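
FWIW the LoS test itself can just be a grid walk between the two cells - it's the bookkeeping around it that hurts. Rough sketch:
C# code:
// Sketch of a grid line-of-sight check via Bresenham's line algorithm.
// IsWalkable is whatever your grid exposes; a stricter version would also check
// the diagonal-crossing cells so the line can't slip between two blocked corners.
bool HasLineOfSight(int x0, int y0, int x1, int y1)
{
    int dx = Math.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -Math.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    while (true)
    {
        if (!IsWalkable(x0, y0)) return false;
        if (x0 == x1 && y0 == y1) return true;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}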

Mata fucked around with this message at 16:26 on Mar 21, 2018

LLSix
Jan 20, 2010

The real power behind countless overlords

Mata posted:


The debug-output from my procedural terrain generator. I thought it looked cool :)

Looks sweet.

xzzy
Mar 5, 2009

Hey cool everyone's buzzing about real time ray tracing!

https://www.youtube.com/watch?v=J3ue35ago3Y

Hey lame the hardware to do it has a $150,000 price tag.

Eh, guess I'll wait another 10 years. Just like when ray traced quake was announced. :smith:

Crap, it was 14 years ago when that hit the internet.

https://www.youtube.com/watch?v=CS3RvLhfBCM

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
it's actually 4 DGX-1's so more like a $60,000 price tag

xzzy
Mar 5, 2009

Oops, I c&p'd the wrong number from the article I read. :angel:

Still out of my league.

Stick100
Mar 18, 2003

xzzy posted:

Oops, I c&p'd the wrong number from the article I read. :angel:

Still out of my league.

Well, it is real-time, which probably means that even on a single 1080 it would only take a matter of minutes to render seconds of footage that good, which is still insane. It was a couple of years ago that Sweeney said we were at photoreal static scenes in real-time, without characters. I think this shows we can get to photoreal in real-time with characters, as long as they don't have human faces.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
yeah to be honest this is still super impressive. someone that makes offline stuff looks at those numbers and goes "whoa that's cheap" because your classic render farm is like 10x as expensive and takes 200x as long to render a frame.

still has nothing to do with games though.

xzzy
Mar 5, 2009

Ok, sourpuss.

I know when I see "real time ray tracing!" I'm thinking about using it in game development.

xgalaxy
Jan 27, 2004
i write code
What's the state of the art in machine learning for upscaling images?
You could do raytracing into a small render target and upscale to 1080p :D
See:


The Microsoft/Nvidia stuff already uses machine learning for the final output but I don't recall what it's doing.

xgalaxy fucked around with this message at 22:32 on Mar 21, 2018

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS

xgalaxy posted:

What's the state of the art in machine learning for upscaling images?
You could do raytracing into a small render target and upscale to 1080p :D
See:


The Microsoft/Nvidia stuff already uses machine learning for the final output but I don't recall what it's doing.

https://www.youtube.com/watch?v=YjjTPV2pXY0

This is probably relevant.

xgalaxy
Jan 27, 2004
i write code
The really crazy thing about these machine learning models is that once you've trained the model the predicted output (in this case your final image) can be calculated really fast.
The training is what takes up the most time and is the most calculation heavy part of it and that can be done offline and the final model stored on the GPU or whatever.

Ranzear
Jul 25, 2013

http://waifu2x.udp.jp
https://wiki.archlinux.org/index.php/Waifu2x

I've been wanting to throw this at old games' textures.

Ranzear fucked around with this message at 23:21 on Mar 21, 2018

xgalaxy
Jan 27, 2004
i write code
So I've been reading about the Unity3d ECS stuff and I'm curious about something.

quote:

An EntityArchetype is a unique array of ComponentType. EntityManager uses EntityArchetypes to group all Entities using the same ComponentTypes in chunks.
code:
// Using typeof to create an EntityArchetype from a set of components
EntityArchetype archetype = EntityManager.CreateArchetype(typeof(MyComponentData), typeof(MySharedComponent));


quote:

EntityManager is where you find APIs to create Entities, check if an Entity is still alive, instantiate Entities and add or remove components.
code:
// Adding a component at runtime
EntityManager.AddComponent(entity, new MyComponentData());

Is an Archetype the set of components that an entity must have, or only what an entity may have? At first, from my reading of this overview, I was under the impression that these archetypes were the set of components an entity must have. But then that got me curious how they handle adding components to an existing entity at runtime - to, for instance, act as a temporary flag for some behavior. Does the system implicitly create a new archetype if you add components to an entity that doesn't fit an existing archetype, or does it disallow it completely?

Also, it's not very evident how the Archetypes interplay with the Systems. You would think that if you have to declare these Archetypes upfront, the Systems would be declaring interest in particular archetypes, but instead the Systems just individually list the components of interest.

If you can just add whatever components you want to an entity at runtime using AddComponent, and the Systems are declaring their interest in individual components instead of at the archetype level, then what is really the purpose of Archetypes other than possibly a memory layout enhancement for the cases where you do declare them upfront? And if that is the case then I guess it's just a leaky abstraction / detail - which would be kind of disappointing, but I guess that's how it is sometimes.

xgalaxy fucked around with this message at 23:09 on Mar 23, 2018

Nude
Nov 16, 2014

I have no idea what I'm doing.

xgalaxy posted:

So I've been reading about the Unity3d ECS stuff and I'm curious about something.



Is an Archetype the set of components that an entity must have, or only what an entity may have? At first, from my reading of this overview, I was under the impression that these archetypes were the set of components an entity must have. But then that got me curious how they handle adding components to an existing entity at runtime - to, for instance, act as a temporary flag for some behavior. Does the system implicitly create a new archetype if you add components to an entity that doesn't fit an existing archetype, or does it disallow it completely?

Also, it's not very evident how the Archetypes interplay with the Systems. You would think that if you have to declare these Archetypes upfront, the Systems would be declaring interest in particular archetypes, but instead the Systems just individually list the components of interest.

If you can just add whatever components you want to an entity at runtime using AddComponent, and the Systems are declaring their interest in individual components instead of at the archetype level, then what is really the purpose of Archetypes other than possibly a memory layout enhancement for the cases where you do declare them upfront? And if that is the case then I guess it's just a leaky abstraction / detail - which would be kind of disappointing, but I guess that's how it is sometimes.
Yeah, I got excited too. But it seems like their purpose is not 'must have' or 'may have' but rather 'starts with', as in a weird Entity-constructor-like thing. I assume at runtime you can then remove or add any amount of components to whatever entity, but if you want the entity to start with a box collider, transform, and rigidbody, you might create an archetype. Or if it's only for one instance, use their archetype wrapper, as shown in their last example. I say this because, when searching the repository, it seems like that is how they use it:
C# code:
// Example 1
public static EntityArchetype PlayerArchetype;
...
PlayerArchetype = entityManager.CreateArchetype(
                typeof(Position2D), typeof(Heading2D), typeof(PlayerInput),
                typeof(Health), typeof(TransformMatrix));
...
Entity player = entityManager.CreateEntity(PlayerArchetype);
So it seems like it's just for convenience, instead of having to do something like:
C# code:
// Example 2
Entity player1 = new Entity();
player1.AddComponent(typeof(Position2D));
player1.AddComponent(typeof(Heading2D));
player1.AddComponent(typeof(Health));
//... etc
// You can just do
Entity player1 = entityManager.CreateEntity(PlayerArchetype);
Entity player2 = entityManager.CreateEntity(PlayerArchetype);
If this is the case I would assume player1 and 2 aren't referencing the archetype. Rather CreateEntity is returning a new Entity with a new Position2D() and new Heading2D() etc. Um weirdly enough I can't seem to find in their repository an example where they actually reuse the archetype object more than once. All the archetypes are used like in example 1.
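
If reuse works the way that snippet implies, I'd guess it just looks like the same CreateEntity call in a loop, e.g.:
C# code:
// Guessing from the docs above: one archetype object, many entities,
// each presumably getting its own freshly allocated component data.
var players = new Entity[100];
for (int i = 0; i < players.Length; i++)
    players[i] = entityManager.CreateEntity(PlayerArchetype);   // same archetype reused every time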

Nude fucked around with this message at 04:48 on Mar 24, 2018

xgalaxy
Jan 27, 2004
i write code
Ya, the documentation is oddly inconsistent, and I'm unsure if it's because the API has changed between revisions of their documentation on the repo, or if there are just multiple ways of going about things and it's just unclear because this isn't really documentation but a demo writeup.

I think the archetype thing is more involved in the system than just an easy way to "construct an entity", though, because in one of the other documents they talk about how the components are laid out in memory, and it appears archetypes are an important element in that process.

Nude
Nov 16, 2014

I have no idea what I'm doing.

xgalaxy posted:

Ya, the documentation is oddly inconsistent, and I'm unsure if it's because the API has changed between revisions of their documentation on the repo, or if there are just multiple ways of going about things and it's just unclear because this isn't really documentation but a demo writeup.

I think the archetype thing is more involved in the system than just an easy way to "construct an entity", though, because in one of the other documents they talk about how the components are laid out in memory, and it appears archetypes are an important element in that process.

It's possible. I mean, there are only a few sentences of actual documentation. I can see how "EntityManager uses EntityArchetypes to group all Entities using the same ComponentTypes in chunks" implies way more than just construction. I'm thrown off, though, because the examples they give of course just construct one entity.

Linear Zoetrope
Nov 28, 2011

A hero must cook
It almost certainly gives you a new instance for each one. ECS optimizations generally involve using some arenas to allocate components/entities that are accessed frequently together near each other for caching purposes, so archetypes probably serve as a hint as to where things should go for performance. From a semantic perspective, I'd wager it acts just like constructing a new one.

E: Basically, if you're writing a physics system, you want all your Position/Heading/AABB/whatever components next to each other for an entity, and then entities with those components used for that purpose near each other, so the cache can exploit this locality when you're iterating over all of them. That's what it probably means by "chunks": it allocates big chunks of units near each other. The naive way to do it is just a separate list, or worse, a separate new for each entity or component type, and you just get tons of cache misses and fragmentation.
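
To make "chunks" concrete, a toy version of that layout (nothing to do with Unity's actual internals) would be something like:
C# code:
// Toy illustration of archetype-chunk storage: one struct-of-arrays block per archetype,
// so a system iterates tightly packed memory instead of chasing object references.
struct Position { public float X, Y; }
struct Heading  { public float X, Y; }

class ArchetypeChunk
{
    public int Count;
    public Position[] Positions = new Position[1024];   // parallel arrays: same index = same entity
    public Heading[]  Headings  = new Heading[1024];
}

static void MoveSystem(ArchetypeChunk chunk, float dt)
{
    // Linear sweep over contiguous arrays - the cache-friendly access pattern
    // that grouping entities by archetype is presumably there to enable.
    for (int i = 0; i < chunk.Count; i++)
    {
        chunk.Positions[i].X += chunk.Headings[i].X * dt;
        chunk.Positions[i].Y += chunk.Headings[i].Y * dt;
    }
}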

Linear Zoetrope fucked around with this message at 06:44 on Mar 24, 2018

Goreld
May 8, 2002

"Identity Crisis" MurdererWild Guess Bizarro #1Bizarro"Me am first one I suspect!"

Linear Zoetrope posted:

The naive way to do it is just a separate list, or worse, a separate new for each entity or component type, and you just get tons of cache misses and fragmentation.

Using a list in engine coding is a good way to get the attention of a lead engineer very quickly. Unless the alternatives are far more onerous, it's almost always better to use a vector (usually with a custom fixed size allocator) in a game engine. Even small lists can cause havoc with respect to fragmentation if you're altering them every frame.

In cases where we really needed a linked list, it was still often better to manually do links with offsets into a vector and swap-removals within that vector to keep the heap allocation globally contiguous.
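
The swap-removal bit, for reference (C# here to match the thread, same idea with a std::vector):
C# code:
// O(1) removal that keeps the backing array dense, at the cost of not preserving order.
static void SwapRemoveAt<T>(List<T> items, int index)
{
    int last = items.Count - 1;
    items[index] = items[last];   // overwrite the removed slot with the last element
    items.RemoveAt(last);         // shrink from the end: no shifting, no reallocation
}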

Goreld fucked around with this message at 15:37 on Mar 24, 2018

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

xzzy posted:

Hey cool everyone's buzzing about real time ray tracing!

https://www.youtube.com/watch?v=J3ue35ago3Y

Hey lame the hardware to do it has a $150,000 price tag.

Eh, guess I'll wait another 10 years. Just like when ray traced quake was announced. :smith:

Crap, it was 14 years ago when that hit the internet.

https://www.youtube.com/watch?v=CS3RvLhfBCM
Real-time ray tracing is in my "I'll believe it when I see it" pile with voxels and blockchain.

The problem is that rasterization is extremely cheap and most of what ray tracing helps with are phenomena that humans are really bad at judging the accuracy of. There's been some renewed interest in it because of missing geometry artifacts with screen-space reflection, but there are tricks that can compensate for that like depth peeling and multiple scene renders that are expensive, but still vastly cheaper than ray tracing.

Linear Zoetrope
Nov 28, 2011

A hero must cook

Goreld posted:

Using a list in engine coding is a good way to get the attention of a lead engineer very quickly. Unless the alternatives are far more onerous, it's almost always better to use a vector (usually with a custom fixed size allocator) in a game engine. Even small lists can cause havoc with respect to fragmentation if you're altering them every frame.

In cases where we really needed a linked list, it was still often better to manually do links with offsets into a vector and swap-removals within that vector to keep the heap allocation globally contiguous.

I was using "list" in the general sense of "a linear collection of data, (but in this case presumably allocated contiguously)", not as in what you're talking about. I assume C# or C++'s "list" type means "linked list"? I refer to any linear collection of data as a "list", somewhat because it's the most language-agnostic way of talking about it. To me, a linked list is a type of list, but so is a vector/dynamic array, static array, skip list, distributed list, or whatever other means of linearly storing data (linearly as in, not a heap or tree or hash map or whatever, just laid out in some order keyed only by their order of access/place in storage, sorted or unsorted).

What I meant was the naive way was just to have a vector or big array that you store each component or entity in, but it quickly fragments as entities die and doesn't do cache locality too well when you're using different components together frequently, so more complex (and low-level memory-fiddly) block-allocating strategies are usually needed.
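
i.e. instead of new-ing per component, something like a pooled slot array with a free list (rough sketch), so storage stays in one block and handles stay stable:
C# code:
// Rough sketch: a fixed-capacity pool that recycles dead slots via a free list,
// so component storage never fragments and an index handle stays valid until freed.
class ComponentPool<T> where T : struct
{
    private readonly T[] _data;
    private readonly Stack<int> _free = new Stack<int>();
    private int _next;

    public ComponentPool(int capacity) { _data = new T[capacity]; }

    public int Allocate(T value)
    {
        int slot = _free.Count > 0 ? _free.Pop() : _next++;   // reuse a dead slot if possible
        _data[slot] = value;
        return slot;                                          // the caller keeps this as a handle
    }

    public ref T Get(int slot) { return ref _data[slot]; }

    public void Free(int slot) { _free.Push(slot); }          // recycled by the next Allocate
}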

Linear Zoetrope fucked around with this message at 23:54 on Mar 25, 2018

Mr Shiny Pants
Nov 12, 2012

OneEightHundred posted:

Real-time ray tracing is in my "I'll believe it when I see it" pile with voxels and blockchain.

The problem is that rasterization is extremely cheap and most of what ray tracing helps with are phenomena that humans are really bad at judging the accuracy of. There's been some renewed interest in it because of missing geometry artifacts with screen-space reflection, but there are tricks that can compensate for that like depth peeling and multiple scene renders that are expensive, but still vastly cheaper than ray tracing.

Same here, but that AO, oh man.

stramit
Dec 9, 2004
Ask me about making games instead of gains.

OneEightHundred posted:

Real-time ray tracing is in my "I'll believe it when I see it" pile with voxels and blockchain.

The real advantages of the new tech are going to come from what it can do on the tooling and workflow side while the hardware catches up, over however long it takes to become reality... and even if it stays just for tooling (fast lightmap bakes), baking lighting on level load instead of as part of the build, or allowing (non-prohibitive) lightmapping in procedural games, I'm all for that.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Strumpy posted:

and even if it stays just for tooling (fast lightmap bakes), baking lighting on level load instead of as part of the build, or allowing (non-prohibitive) lightmapping in procedural games, I'm all for that.
One of the Larrabee demos was ETQW ray-traced; GPU power has vastly increased since then, yet we're still using rasterization. It's not a problem of power being available, it's a problem of rasterization being vastly more efficient in terms of utilizing available power than ray tracing in the vast majority of circumstances.

As far as lightmapping at runtime goes, doing it without GI is fairly simple. With GI, GPU Gems 2 has an approach that does photon mapping by basically using rasterization and depth peeling to shoot a bunch of parallel bounces; there are probably better/improved options now.

OneEightHundred fucked around with this message at 09:55 on Mar 26, 2018

Xerophyte
Mar 17, 2008

This space intentionally left blank
Real time ray tracing has been a thing for a while over here in cadland. I have coworkers who made realtime CPU raytracing software for the automotive industry in 2004, using instant radiosity to bake GI back in those days. It's just not been a thing in games, because games allow way more approximation error -- for lack of a better term -- than film or design visualization, and require much higher frame rates. Design visualizers have to produce something close enough to reality that they can replace physical prototyping in order to be useful, but it's perfectly fine to have them run at 5 FPS or so.

I expect the target markets for the real time ray tracing tech are more things like previz for film studios (hence ILM) and software like Enscape, not so much direct use in games.

Xerophyte
Mar 17, 2008

This space intentionally left blank
I was at the GTC presentations for the various real time ray tracing implementations today. I imagine this was also mentioned at GDC but I just render buildings and toasters so I wasn't at GDC. Anyhow:
- Scene updates cost on the order of 1 ms in acceleration structure rebuild for 1M-3M tris if I heard correctly, which is nice. I'd expect a second or two for that in offline land, but we optimize the hierarchies way harder.
- There will be hardware ray tracing API extensions to vulkan and dx12, with shading in glsl and hlsl respectively. Optix will still be better if you really want to build a full path tracer in cuda, and not just toss some rays in a game.
- Focus for the real time work that Epic and EA did seems to almost entirely be on using ray casts to better evaluate specific distributed features. Soft shadows, ambient occlusion and glossy reflections were the constant refrain. Probably for the good reason that screen space techniques and shadow maps are the shittiest looking hacks remaining in a modern rasterizer, and pretty expensive to boot.
- This said, performance is not exactly stellar compared to the lovely hacks. The Star Wars demo hit 140 ms for the most expensive frame on a single Titan V. They managed to hit their mild 24 FPS target by throwing 4 Titan Vs at the problem and being super careful about scheduling. And that's on what's basically a toy scene: an elevator box with two area lights where all materials are either plastic or metals.
- The results are entirely dependent on temporal reprojection and cheap filtering. The 3 features they ray trace produce highly noisy 1 spp buffers that are smoothed using a cheap, separable cross bilateral (not the ML final frame filter that's part of optix and was mentioned upthread). This system will have awful failure cases. There were some pretty obvious flickering and smearing issues in the demo scenes and they tried to mostly isolate the features. Hopefully less awful failure cases than the screen space techniques but, still, this is partly built on praying that your shadows and reflections are sufficiently low-frequency that no one notices all the blurring.

It's still leagues ahead of shadow maps and screenspace AO and reflections in fidelity for most cases, but it's not going to hit a AAA near you any time soon I don't think. The cost in frame budget is going to be prohibitive for the majority of games when the hardware acceleration itself has trickled down to consumers. Design and architectural visualization still seems like a better first use case to me since we're OK with spending 150 ms/frame and tend to have a lot of time where neither the scene nor the camera change. I'm probably biased, of course, but architectural was mentioned as one of the situations the Epic types felt was promising too so I don't think I'm completely off.
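
For the curious, one axis of a "cheap separable cross bilateral" is roughly the below (toy single-channel version guided by depth only; the real filters also feed in normals, temporal history and so on):
C# code:
// Toy sketch: one horizontal pass of a depth-guided cross-bilateral blur over a noisy 1 spp buffer.
// Run the same thing vertically afterwards to approximate the full 2D filter separably.
static float[] CrossBilateralHorizontal(float[] color, float[] depth, int width, int height,
                                        int radius, float sigmaSpatial, float sigmaDepth)
{
    var result = new float[color.Length];
    for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        int center = y * width + x;
        float sum = 0f, weightSum = 0f;
        for (int dx = -radius; dx <= radius; dx++)
        {
            int sx = x + dx;
            if (sx < 0) sx = 0; else if (sx >= width) sx = width - 1;   // clamp to the row
            int sample = y * width + sx;

            // Spatial falloff times a similarity term from the (noise-free) depth buffer,
            // so the blur stops at depth discontinuities instead of smearing across edges.
            float dz = depth[sample] - depth[center];
            float w = (float)(Math.Exp(-(dx * dx) / (2f * sigmaSpatial * sigmaSpatial))
                            * Math.Exp(-(dz * dz) / (2f * sigmaDepth * sigmaDepth)));

            sum += color[sample] * w;
            weightSum += w;
        }
        result[center] = sum / weightSum;   // center weight is 1, so weightSum > 0
    }
    return result;
}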

Dr Monkeysee
Oct 11, 2002

just a fox like a hundred thousand others
Nap Ghost

Linear Zoetrope posted:

I was using "list" in the general sense of "a linear collection of data, (but in this case presumably allocated contiguously)", not as in what you're talking about. I assume C# or C++'s "list" type means "linked list"?

C++'s list is a linked list but the C# list is a vector (there is a separate LinkedList type called out as such).

In my experience most programming languages' default "list" structure is some sort of vector. It's perfectly reasonable to refer to "contiguous allocation data structures" as lists except in the specific context of C++ where everything is fiddly and specific.

ryde
Sep 9, 2011

God I love young girls
I'm looking to play around with gamedev by working on simple 2d platformers with the eventual goal of working up to a Metroidvania style game. My day-to-day programming is mostly in the Java world. I haven't done a lot of C++ in a while and would welcome an opportunity to refresh. I'm looking to start with Godot (using C# and moving to C++ as needed), but I see that people use Unity a lot, and of course there's Unreal. Is there an engine I should be eyeing instead of Godot?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

ryde posted:

I'm looking to play around with gamedev by working on simple 2d platformers with the eventual goal of working up to a Metroidvania style game. My day-to-day programming is mostly in the Java world. I haven't done a lot of C++ in a while and would welcome an opportunity to refresh. I'm looking to start with Godot (using C# and moving to C++ as needed), but I see that people use Unity a lot, and of course there's Unreal. Is there an engine I should be eyeing instead of Godot?

Biggest reason not to use Godot is that the industry slants far more heavily toward Unity and Unreal. If your goal is to transition your career, having worked in the engine you're applying to work in is a big plus.

As a solo dev, unity’s asset store can be very nice if you’re willing to put some money into it. The engine is much worse if you are not.

drgnvale
Apr 30, 2004

A sword is not cutlery!
What makes it worse?

Ranzear
Jul 25, 2013

drgnvale posted:

What makes it worse?

Godot? It doesn't have ... wait they added that in 3.0.

It's missing ... wait no, 3.0 has that, again.

Etc.

Godot got way better just a month ago. Can't fault anyone for not keeping tabs. It now looks like UE4 but is more flexible and extensible than Unity. It's just a matter of learning one thing versus another, like if you learned Cryengine these days.

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
I think he meant that Unity has some features that aren't very good by default, but have better solutions on their marketplace written by other people, which generally aren't free.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Forum poster Xibanya is hosting a 10-day game jam: https://itch.io/jam/dogpit-jam-2018



The Jam posted:

A game jam from 2018-04-20 to 2018-04-30 hosted by Team Dogpit.
The Dogpit Jam is a chill competition in which you make a game based on the theme (announced April 20th)

There is a Dogpit Discord where you can find team-mates and chat about the jam.

taqueso fucked around with this message at 00:18 on Apr 7, 2018
