sunaurus
Feb 13, 2012

Oh great, another bookah.

Cuntpunch posted:

Related: In discussing how he's solved the problem of shared state in his personal project - some game - he proudly tried to explain the plan was basically to dump all global state into, well, a big fat global object and just make sure nothing touched things they weren't supposed to. :psyduck:

I'm not saying he wasn't an idiot, but are you sure he wasn't just talking about using entity component system architecture (which is a typical and good way of making games)?

sunaurus fucked around with this message at 07:57 on Jun 14, 2019


Hollow Talk
Feb 2, 2014

necrobobsledder posted:

If you mean the cultural practice of code review, I treat it like test coverage - start with new work instead of trying to go through the accumulated technical-debt mountain. Depending on the churn rate of the codebase, this could mean it never amounts to anything materially helpful, but at least newly written code should be better understood by the team.

I think this is more what I had in mind, really. Some good answers here, so I might see if I can introduce a more informal system for new projects (I've talked to my boss/the boss about it, so that side should be fine).

The issue in this company has traditionally been that, since it's consulting, billing by the minute and various pressures in a (very segmented) multi-project environment mean that enforcing lots of standards (including tests...) is left up to whoever happens to be involved in the project. So part of an overall effort is to standardise more, connect project members/teams better, and approach new projects with a different, less siloed mindset.

:sigh:

Xarn
Jun 26, 2015

Volmarias posted:

If you don't have a code review system in TYOOL 2019 you need to find a different employer
This.


There are definitely changes we don't review, but I can't imagine not having a review process for non-trivial changes.

sunaurus
Feb 13, 2012

Oh great, another bookah.

Xarn posted:

This.


There are definitely changes we don't review, but I can't imagine not having a review process for non-trivial changes.

Even trivial changes should always be reviewed.

I know of a bank in my country where *everybody's* remaining loan balance was set to the same amount for a while (until a rollback) because of a "trivial" one-line SQL migration that was supposed to change only a couple of values but was missing a WHERE clause.
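For illustration, here is a sketch of how that class of bug plays out, using sqlite3 and invented table and column names (the actual bank's schema is unknown):

```python
import sqlite3

# Hypothetical schema; all names are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO loans (id, balance) VALUES (?, ?)",
                 [(1, 50000), (2, 120000), (3, 7500)])

# Intended migration: fix two specific accounts.
# conn.execute("UPDATE loans SET balance = 10000 WHERE id IN (1, 3)")

# What actually shipped: the WHERE clause is missing, so every
# customer's remaining balance becomes 10000.
conn.execute("UPDATE loans SET balance = 10000")

print(sorted(conn.execute("SELECT id, balance FROM loans")))
# → [(1, 10000), (2, 10000), (3, 10000)]
```

A one-line diff, and a reviewer scanning for a WHERE clause on any UPDATE or DELETE would have caught it.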

Xarn
Jun 26, 2015
I am not saying you shouldn't test your changes before yoloing them into prod (WTF????), but yes, fixing spelling of poo poo in logs does not need review.

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

sunaurus posted:

I'm not saying he wasn't an idiot, but are you sure he wasn't just talking about using entity component system architecture (which is a typical and good way of making games)?

From passing discussions around stuff he runs into, it is intended that everything can alter gamestate arbitrarily. The logic seems to be "i want to be flexible" - so everything just lives in a global state blob and all entities do what they need with it.
It is not architecture as I would describe it.

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Kevin Mitnick P.E. posted:

Spring doesn't do it. It's a hard problem with no right answer. Which means whatever you do will be wrong; instead, remove the problem causing you to think that changing your DB params after startup is something you need to do.
Spring does do this, using Spring Cloud Config. It can watch for external configuration changes and will re-instantiate whatever beans need to be changed (including your datasources).

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Jabor posted:

If you've specifically engineered a part of your system to be configured without a restart, then sure, it can be okay to configure your running instances instead of starting new ones. That's only if you've specifically designed this part of your system to handle that, and hopefully you've comprehensively tested that it handles it correctly. It's not applicable for general properties that you haven't actually designed around being changed during execution.

Having it automagically happen based on a change in the repo seems problematic too. If I were designing the system, I probably wouldn't do it that way.

yeah obviously the only properties we have as dynamic are the ones we've designed specifically to be dynamic. changes to properties are gated via code review.

i guess i'm not sure how else you'd handle stuff like this. i mean say for instance you're releasing a new feature which requires a number of changes in a bunch of different microservices. how do you handle the release?

do you block up changes to all of the services involved until everything is done and ready to release? do you stagger releases with a feature flag and then re-release with the flag off when everything is done? how do you handle the race condition this presents?

this system lets us feature flag stuff and then toggle those flags when all of the other pieces are in place. it allows us to decouple the releases of services that others are dependent on and allows continuous development and delivery even while larger parts of feature work are still ongoing.

again, all of this stuff feels like a no-brainer to me but i've been working in this system for a bit and i'm sincerely curious how other places handle this stuff.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Cuntpunch posted:

I'm not even joking, the only other dev I'd evaluate as semi-competent at my (small) office unfortunately fits this paradigm. We were having a discussion about best practices a few months back - and during the exchange, we got onto the topic of OO best practices. His take is 'OO is garbage, unit testing is a waste of time, etc'. I suggested that the reason I spend a lot of time reading up on stuff is because I think there's a lot to be learned from people who have been writing software for longer than I've been alive. They aren't always right, but there are lessons to be learned from their experiences and ideas. I don't feel like I'm at a point in my knowledge and skillset where I'm in a position to somehow redefine software development paradigms. He said emphatically and sincerely that he felt like he had - in something like 5 years as a developer - reached that point and was ready to really ignore all the rules in order to make something better. I had to hold back a full body shudder.

Related: In discussing how he's solved the problem of shared state in his personal project - some game - he proudly tried to explain the plan was basically to dump all global state into, well, a big fat global object and just make sure nothing touched things they weren't supposed to. :psyduck:

Cuntpunch posted:

From passing discussions around stuff he runs into, it is intended that everything can alter gamestate arbitrarily. The logic seems to be "i want to be flexible" - so everything just lives in a global state blob and all entities do what they need with it.
It is not architecture as I would describe it.

OOP is often bad for games because of what the object model means for the utilization of your cache. The argument is usually that virtual calls and non homogenous data mean I can’t pipe as much data through my systems and therefore artificially reduce performance. The architectural pattern roughly becomes arrays of homogenous data. Different systems need different data, and will then manage their own arrays. Some other thing handles relationships between chunks of data.

If your game doesn’t drive performance as a concern (so.. most games) then the overhead implicit in object models isn’t a death knell. The thing people who say “OOP is bad” probably mean to say is “OOP has costs at runtime for the benefit of more natural reasoning to developers. Not all systems can support this artificially increased cost, so OOP is not a universal solution.”

ECS is a far less flexible paradigm than typical OOP. The implementation of systems is often simple, but a change in data may mean re-implementing a system. The goal effectively becomes to have your systems be simple enough that their requirements won’t change often. Conversely, a feature may involve multiple systems, and changes probably affect only a subset. The maintenance burden difference is analogous to monolithic applications vs microservices. Monolithic applications are easier to build and maintain up to some point, after which it’s far easier to have split everything into smaller pieces to reason about independently while consuming the overhead of reasoning about their interdependencies. It’s probably easier to intuit the total behavior of a monolith, but also probably easier to tune performance of microservices.
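The "arrays of homogeneous data, with each system managing its own" idea can be sketched minimally like this (all names invented; dicts stand in for the packed arrays a real ECS implementation would use for cache friendliness):

```python
# Minimal data-oriented sketch: components live in parallel stores,
# and each system iterates only over the data it needs.

positions = {}   # entity id -> (x, y)
velocities = {}  # entity id -> (dx, dy)
healths = {}     # entity id -> hit points

def spawn(eid, pos=None, vel=None, hp=None):
    """An entity is just an id plus whichever components it has."""
    if pos is not None: positions[eid] = pos
    if vel is not None: velocities[eid] = vel
    if hp is not None: healths[eid] = hp

def movement_system(dt):
    # Touches only entities that have BOTH position and velocity;
    # it never sees health or any other unrelated state.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, pos=(0.0, 0.0), vel=(1.0, 0.0))
spawn(2, pos=(5.0, 5.0))   # static scenery: no velocity component
spawn(3, hp=10)            # pure gameplay data: no position at all

movement_system(dt=1.0)
print(positions[1])  # → (1.0, 0.0)
```

The contrast with the "big global state blob" is that each system's inputs and outputs are explicit, so nothing can reach into state it was never declared to need.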

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

necrobobsledder posted:

I thought a pull request implies a code review?

It implies somebody gets a notification that there's a pull request, they click the link to it, they hit the green button without looking at it, and get back to whatever they were doing.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Blinkz0rz posted:

do you block up changes to all of the services involved until everything is done and ready to release? do you stagger releases with a feature flag and then re-release with the flag off when everything is done? how do you handle the race condition this presents?

you use a proper runtime configuration system like etcd, vault, ssm or i dunno, env vars. you definitely don't use some kind of webhook that watches github to reconfigure things

really tho you should only need to resort to coordinating releases like this in extreme situations. version your apis and your data and you should never have this issue

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Sorry should have been clearer again, it's not webhook based, the whole thing wraps over Consul and a homebaked S3 store. We just keep the configuration in GitHub for auditing.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Runtime configuration changes are pretty distinct from ones a machine is baked with. However, any configuration change needs some form of review anyway if you want to get SOX and SOC-2 right, so hopefully you'd allow temporary credentials only when a ticket is used to justify them. Properly enforcing this stuff becomes more important as you get bigger and an insider attack becomes much more realistic.

Also, even in shipped COTS software rather than SaaS, I've been advocating for feature flag usage that's simply never properly standardized, so you can't accidentally enable the whole thing without decompiling and decrypting binaries. This is simply to alleviate a culture of fearing early merges, rather than merging as long as nothing breaks. Heck, if Apple iOS releases have unreleased crap in them all the time, I think a smaller company with a lot less scrutiny and fewer secrecy risks can probably ship some dead code once in a while.

Lumpy posted:

It implies somebody gets a notification that there's a pull request, they click the link to it, they hit the green button without looking at it, and get back to whatever they were doing.
Alright, so they consented to it; so be it. Blameless post-mortems are important as a follow-up to this. If people are so busy that they can't review changes much, it's time to start adding at least static analysis and integration tests to do some basic checking.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

necrobobsledder posted:

I thought a pull request implies a code review? If you didn't do a PR you'd just commit it right into trunk I figure.
It definitely implies it, but people could just hit the button and let it through.

qsvui posted:

Drawing from a small sample size of my current company, senior more often seems to mean "stuck around for a while" and rarely has anything to do with expertise.
This is definitely an older fellow.

Something else I discovered is another older fellow running something of a fiefdom with one of his tools is doing code reviews. I haven't checked but I am pretty sure for him "code review" means "I review your stuff. You don't review mine because my poo poo doesn't stink."

The project I'm personally in has been doing code reviews since a month or two after I walked into it. The first group involved got very butthurt about the whole process. One of them was ostensibly the tech lead or something and moved their stuff offline for two months into a separate remote branch. We eventually had to figure out how to combine all this crap, and it was a tire fire.

Something I decided in retrospect, when dealing with these divas, is to get all your automated controls in place, because they're more likely to respect a robot than another human being. If they submit the code and it fails the formatting and QA suite, they're more likely to take it than me pointing out that a certain line of code will break everything--and did you even test this?

What's left of the team includes somebody who came into the job from an actual software engineering background, and we've had absolutely zero problems together. The funny thing there is he'll sometimes do something a little fancy, and I'll comment about it being novel and cool, and then he'll realize he hosed it up. So I learned to also comment on things I think are "neat" because they might also be time bombs.

vonnegutt
Aug 7, 2006
Hobocamp.

Rocko Bonaparte posted:

What's left of the team includes somebody who came into the job from an actual software engineering background, and we've had absolutely zero problems together. The funny thing there is he'll sometimes do something a little fancy, and I'll comment about it being novel and cool, and then he'll realize he hosed it up. So I learned to also comment on things I think are "neat" because they might also be time bombs.

As I've gotten more experienced I use code review more for understanding what the other people on my team are doing and less for finding errors. I always try to find something to praise and/or ask for more detail about, so that I learn new patterns / diversify my approach / etc.

I also try not to criticize code unless it's truly a breaking change. If there's multiple small bugs, I'll address the 2-3 that have the biggest customer impact and ignore the rest. Also, I try not to do multiple rounds of review unless it's truly, utterly broken. Basically, I want code checkin to be as quick as possible, with the idea that any small bugs can be found and dealt with quickly later on. Rubber-stamp code review is stupid, but I've also seen toxic environments use code review as a way to punish people and prevent them from getting code pushed.

Hughlander
May 11, 2005

Rocko Bonaparte posted:


What's left of the team includes somebody who came into the job from an actual software engineering background, and we've had absolutely zero problems together. The funny thing there is he'll sometimes do something a little fancy, and I'll comment about it being novel and cool, and then he'll realize he hosed it up. So I learned to also comment on things I think are "neat" because they might also be time bombs.

Brian Kernighan (as in K&R C) has a famous quote: Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?


I always try to keep that in mind for 99% of my code.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

vonnegutt posted:

If there's multiple small bugs, I'll address the 2-3 that have the biggest customer impact and ignore the rest.

What? No! As a reviewee, please, please, please don't ignore bugs you see in my code at review time. It will be way more annoying for me to fix them later.

Harriet Carker
Jun 2, 2009

Eggnogium posted:

What? No! As a reviewee, please, please, please don't ignore bugs you see in my code at review time. It will be way more annoying for me to fix them later.

Agreed. That’s crazy. Code should not be approved if it’s not correct.

I think approving stuff that could still use some refactoring is occasionally ok but buggy code should never be merged.

Xarn
Jun 26, 2015
There are edge cases where merging buggy code is OK, but please do not merge it without a set of tests that shows exactly when it fails, and add documentation of why it is OK for it to fail like that.
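In Python terms, one way to pin down a known failure like that is `unittest.expectedFailure` (a hedged sketch; `parse` and its limitation are invented for illustration):

```python
import unittest

def parse(s):
    # Hypothetical helper with a known limitation: it only
    # understands comma-separated input.
    return [int(x) for x in s.split(",")]

class TestParse(unittest.TestCase):
    def test_comma_separated(self):
        self.assertEqual(parse("1,2"), [1, 2])

    # Documented known failure: semicolon input is not supported yet.
    # Merging is tolerable only because this test records exactly
    # when and how the code fails; if someone fixes it, the suite
    # flags the test as an unexpected pass and forces a cleanup.
    @unittest.expectedFailure
    def test_semicolon_separated(self):
        self.assertEqual(parse("1;2"), [1, 2])

# run with: python -m unittest this_module
```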

Nomnom Cookie
Aug 30, 2009



Hughlander posted:

Brian Kernighan (as in K&R C) has a famous quote: Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?


I always try to keep that in mind for 99% of my code.

At a point before I encountered that quote, over about 4 hours I wrote some crazy code that did n-dimensional indexing on some very large flat arrays to collect a bunch of poo poo, then immediately munged all that together into a packed form in a byte array for a serialization scheme I invented on the spot. Forward references and a string table were involved. It was very fast as intended, and for three days or so I was so proud of my ultimate cleverness. Then there was a bug, I went back to the code, and it was immediately obvious what was about to happen: either it was an off-by-one error I could fix by poking at loop indices and seeing if the problem went away, or I'd have to rewrite the whole thing. Understanding was impossible. Poking at loop indices did fix it, so that was good. Ever since then I've written code that even an idiot could debug, because that's who's gonna be debugging it.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Xarn posted:

This.


There are definitely changes we don't review, but I can't imagine not having a review process for non-trivial changes.

In 2014 I went to work at a technology company that had just been acquired for multiple billions of dollars, and they had no code review process. I’m sure there’s another one out there.

Macichne Leainig
Jul 26, 2012

by VG

Kevin Mitnick P.E. posted:

Ever since then I've written code that even an idiot could debug, because that's who's gonna be debugging it.

If I've learned anything, it's this. Write code that's easy to understand and well-commented, because someone (me) is going to be coming back to it in a year and otherwise not understand a loving thing.

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

Currently I am blocking a PR from merging because I very much disagree with the design decision to centralize internal models for microservices in a common library. Either you merge the services, if these models must be the same at all times, or you keep your services as separate entities with their own models and only share the DTOs. For some reason, I seem to be the only one of the very experienced devs who sees it this way.
Tell me I am wrong, please, because this PR is unmergeable as long as I keep it on "request changes".

Doom Mathematic
Sep 2, 2008

Hughlander posted:

Brian Kernighan (as in K&R C) has a famous quote: Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?


I always try to keep that in mind for 99% of my code.

The catch is: writing the simpler code is harder and takes longer than writing the ultra-clever version.

Volguus
Mar 3, 2009

Keetron posted:

Currently I am blocking a PR from merging because I very much disagree with the design decision to centralize internal models for microservices in a common library. Either you merge the services, if these models must be the same at all times, or you keep your services as separate entities with their own models and only share the DTOs. For some reason, I seem to be the only one of the very experienced devs who sees it this way.
Tell me I am wrong, please, because this PR is unmergeable as long as I keep it on "request changes".

The universe is so much simpler in monolith applications. But I can see both sides of this story being right: two services, doing two very different things to the same entities. There's an argument to keep them separate and there are arguments to unite them. Good luck. Whatever you choose, it's a clusterfuck.

If you go with "unite them" soon enough you'll find yourself with a monolith. So, whatever reasons were there from the beginning to move to microservices, are they gone now?

Hughlander
May 11, 2005

Doom Mathematic posted:

The catch is: writing the simpler code is harder and takes longer than writing the ultra-clever version.

Construction is often the least amount of time spent on a code base so I’m fine with that. I’m also quick with the jet brains refactor tool.

Taffer
Oct 15, 2010


Doom Mathematic posted:

The catch is: writing the simpler code is harder and takes longer than writing the ultra-clever version.

But will save you countless hours down the road. It's always, always worth it to pay that up-front planning cost.

vonnegutt
Aug 7, 2006
Hobocamp.

Eggnogium posted:

What? No! As a reviewee, please, please, please don't ignore bugs you see in my code at review time. It will be way more annoying for me to fix them later.

I should've been more specific. Actual Bugs are not okay, but small/minor inefficiencies, mediocre UI decisions, less-than-totally-optimal performance...I won't block it if the sum total of the code is improving the product / adding a feature. There are degrees of "wrong" and when you have a solid CI system and frequent releases, I never want to block "good" code because it's not "perfect" code.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Blinkz0rz posted:

yeah obviously the only properties we have as dynamic are the ones we've designed specifically to be dynamic. changes to properties are gated via code review.

i guess i'm not sure how else you'd handle stuff like this. i mean say for instance you're releasing a new feature which requires a number of changes in a bunch of different microservices. how do you handle the release?

do you block up changes to all of the services involved until everything is done and ready to release? do you stagger releases with a feature flag and then re-release with the flag off when everything is done? how do you handle the race condition this presents?

this system lets us feature flag stuff and then toggle those flags when all of the other pieces are in place. it allows us to decouple the releases of services that others are dependent on and allows continuous development and delivery even while larger parts of feature work are still ongoing.

again, all of this stuff feels like a no-brainer to me but i've been working in this system for a bit and i'm sincerely curious how other places handle this stuff.

We typically use request flags. If the flag is set, it runs the new code for the new feature, otherwise it does the old thing. Your test code, qa process etc. can exercise that new code by setting the flag on their requests, but it doesn't immediately change anything when you roll it out to production.

Once all the individual services are released, you can configure your user-facing frontend to start setting that flag on appropriate backend requests. You can even do a staged rollout while keeping an eye on your monitoring to make sure there's nothing unexpected going wrong.

Finally, once the rollout is complete you clean up the request flag and make all the systems just do the new behaviour.

I'm curious how you handle ensuring consistency with your system? It doesn't seem like you have anything that ensures a request will never hit a mix of services where some have the feature on and others have it off?

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

Volguus posted:

So, whatever reasons were there from the beginning to move to microservices, are they gone now?

I have no idea, and I am going to ask that question. I think it boils down to the usual downsides of a monolith combined with hip new technology.

sunaurus
Feb 13, 2012

Oh great, another bookah.

Keetron posted:

I have no idea, and I am going to ask that question. I think it boils down to the usual downsides of a monolith combined with hip new technology.

The only good reason I've ever seen for microservices is being able to split up a large project between a bunch of small teams, because with normal tooling, it's much easier to have one team working on a small service than 10 teams working on a monolith. That's the usual downside of a monolith.

Any arguments about how "microservices promote better code quality" are absolute bullshit in my experience, some of the worst code I've ever seen has been in microservice form.
"More flexibility in scaling" is another argument I hear often, but I don't think this is really worth the overhead of maintaining multiple services for most teams, and if you really need more flexibility in scaling one or two parts of your monolith, just extract those parts instead of breaking your whole monolith into tons of microservices.

This is the way I view microservices: each new microservice adds tons of overhead for deployment and maintenance but removes tons of overhead from having multiple teams working on the same project. If you're not splitting work between multiple teams, then all you're getting is the negative part.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Jabor posted:

I'm curious how you handle ensuring consistency with your system? It doesn't seem like you have anything that ensures a request will never hit a mix of services where some have the feature on and others have it off?

System boundaries and domains. For the most part APIs are versioned consistently and feature flags are on the client side. Teams publish Pact contracts alongside javadocs and changing the behavior of an API that's already being consumed has, from memory, only been done a handful of times in the nearly 4 years I've been here and it was mostly early on.

It gets a little dicier with some of our services that work with queues or with object stores as the primary means of communication. In those cases both producers and consumers work off the same feature flag ensuring that changes from the producing side are reflected in the handling code on the consumer side. This doesn't happen often and is treated much more carefully than a regular release with appropriate retry handling and dead letter queue usage.
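One way to keep the producing and consuming sides in lockstep, beyond sharing the flag itself, is to stamp the flag state (as a schema version) into each message so the consumer dispatches on what was actually produced rather than on its own current flag value. A hedged sketch with invented names:

```python
# Sketch: the producer records the feature-flag state as a message
# version, so messages already in the queue when the flag flips are
# still handled by the code that matches how they were produced.
import json

NEW_FORMAT_ENABLED = True  # shared feature flag, e.g. from Consul

def produce(payload):
    if NEW_FORMAT_ENABLED:
        return json.dumps({"v": 2, "data": payload, "meta": {}})
    return json.dumps({"v": 1, "data": payload})

def consume(message):
    msg = json.loads(message)
    # Dispatch on the message's own version, not the current flag.
    if msg["v"] == 2:
        return handle_v2(msg)
    return handle_v1(msg)

def handle_v1(msg):
    return ("v1", msg["data"])

def handle_v2(msg):
    return ("v2", msg["data"], msg["meta"])

print(consume(produce({"id": 7})))
# → ('v2', {'id': 7}, {})
```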

smackfu
Jun 7, 2004

Rocko Bonaparte posted:

Something else I discovered is another older fellow running something of a fiefdom with one of his tools is doing code reviews. I haven't checked but I am pretty sure for him "code review" means "I review your stuff. You don't review mine because my poo poo doesn't stink."

I’ve also seen teams where one senior person is really picky in code reviews and so people rubber stamp that persons code on some assumption of quality. Not good.

Hughlander
May 11, 2005

My biggest problem with microservices has been that most of what I deal with is backed by so much data in the database that if our external cache isn't primed, we're hosed, and keeping hits in the internal LRU cache is a huge benefit. Breaking that up would be a huge undertaking of unknown complexity, to the point where I don't see it as possible.

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

We work with three teams on one product that has some 20+ services as a backend. All teams work on all services.
However, the most recently added team, which is on another continent, works on a service that is part of the larger product but can function standalone.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





microservices are fine if your org is mature enough to treat them basically as third party services with backwards compatible apis and data and auth and all that other stuff you need

they loving suck if they're just a big ball of garbage that happens to run smeared across multiple network hops

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'll stick with the "you must be this tall before attempting microservices" approach https://martinfowler.com/bliki/MicroservicePrerequisites.html - the rest is purely cultural: the ability to handle decoupling well and to hold each other responsible effectively. As a more ops-oriented engineer, I would also add "each service that is used by more than one other service must be highly available, support automatic rollbacks, and have a zero-downtime deployment architecture," because at a certain rate of change you can't have 5 services all in different states of being hard down throwing 50x errors more than a few times.

Far, far too many times I've also seen places implement "microservices" where they are all reading from the same exact database, or constantly querying the same service (like "customer") everywhere, leading to a hub-and-spoke kind of SOA that is pretty much the worst scenario possible, operationally and architecturally. I mean, there's some room for sharing the same database server with a completely different schema just to save on costs, but if you have trouble affording different databases, I really think you can't afford microservices either.

For code review principles, I go with "humans can evaluate general logic and higher-level constructs" while we leave full-blown analysis to machines. Running linters, formatters, and static analyzers in your CI system should be mandatory to avoid bikeshedding that destroys productivity. Beyond that, you need to establish a set of coding convention guidelines, sort of like a constitution or credo. You don't need to go this far when you're starting out, but it doesn't hurt to start with a document like https://github.com/elastic/engineering/blob/master/development_constitution.md

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
While there are a few good points in that document, I find it ever amusing that each new project establishes the same fundamental failures: "We wanted to be fast, we learned we hosed it all up, we made some choices about replacements versus redesigns versus repairs, please help us try to ingrain these arbitrary guidelines that we just made up". Along the way they routinely (excessively) invoke "as long as it's good enough!!".

What I see as the biggest philosophical failure of agile software development is not the "good enough" or "faster than fast!!" desires, but the blatant opposition to "more than just 'good'" and "fast enough". None of these documents start with "We decided to start with established practices from the most successful software projects, thus differentiate between proofs-of-concept, prototypes, and production code, and most value building something that handles 99% of cases accurately and correctly, because we don't want to lose customers and peers while they wait around endlessly for us to fix crap libraries, and spending the extra 10-20% on that is worth the time (because it actually takes 3x as long to get to the same point if we try to take shortcuts)."

Of course, any development methodology can support whatever time/cost/resource balance that is desirable, but what people actually follow will be the most lazy parts of the method.

lifg
Dec 4, 2000
<this tag left blank>
Muldoon
It's weird to read older books on software development, specifically from before the agile revolution of 2000, and realize that there was a whole body of knowledge that was simply tossed out and forgotten. I've been reading stuff by Capers Jones, who did a lot of the hard work of analyzing and reporting on large projects, and I'm convinced I can use his insights to help plan the next thing I get put on.


Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

leper khan posted:

OOP is often bad for games because of what the object model means for the utilization of your cache. The argument is usually that virtual calls and non homogenous data mean I can’t pipe as much data through my systems and therefore artificially reduce performance. The architectural pattern roughly becomes arrays of homogenous data. Different systems need different data, and will then manage their own arrays. Some other thing handles relationships between chunks of data.

If your game doesn’t drive performance as a concern (so.. most games) then the overhead implicit in object models isn’t a death knell. The thing people who say “OOP is bad” probably mean to say is “OOP has costs at runtime for the benefit of more natural reasoning to developers. Not all systems can support this artificially increased cost, so OOP is not a universal solution.”

ECS is a far less flexible paradigm than typical OOP. The implementation of systems is often simple, but a change in data may mean re-implementing a system. The goal effectively becomes to have your systems be simple enough that their requirements won’t change often. Conversely, a feature may involve multiple systems, and changes probably affect only a subset. The maintenance burden difference is analogous to monolithic applications vs microservices. Monolithic applications are easier to build and maintain up to some point, after which it’s far easier to have split everything into smaller pieces to reason about independently while consuming the overhead of reasoning about their interdependencies. It’s probably easier to intuit the total behavior of a monolith, but also probably easier to tune performance of microservices.

Your points are completely valid and reasonable, but I've been out of the gamedev world for a while. Even in a 'keep the class tree as flat as possible' world, don't you still end up with *some* sort of structured orchestration of the game, where various systems are coordinated and things are largely kept away from one another?

It seems like chaos to me - in the sense that the only orchestration he's even spoken about tends to just be the main loop telling everything to update.
The player entity has full access to the gamestate. When it, for example, 'shoots' - it takes responsibility for spawning the projectile object and giving that entity access to this glob of state. The projectile then takes responsibility each frame for tracking where it's at, testing its own collision, and knowing how to resolve a collision. It hits a monster enemy? It reaches into the glob of state to manually goof with the health of that enemy.
