YanniRotten
Apr 3, 2010

We're so pretty,
oh so pretty
Starve the beast - stand up a replacement in the technology you want, incrementally add the API support you need, use it, delete the functionality from your Rails app. Ideally you have a ton of frontend tests that will tell you if you screw up the new APIs.

Yes, this will take forever and probably nobody will agree to let you do it.

12 rats tied together
Sep 7, 2006

prom candy posted:

Also let's say that the centrepiece of your backend is an SQL database that is the single source of truth for just about everything.

Probably helps to start by noting that this is the true source of the monolith, rails really doesn't have anything to do with it. Breaking up the monolith is going to mean, mainly, breaking up these foreign key relationships and coming up with a less tightly-coupled database schema.

If you change the service logic from rails to whatever new thing you want to work with more, but they both still depend on the "monolith of the row", it's now a distributed monolith which is way worse than it was before for no benefit.

Bongo Bill
Jan 17, 2012

Data is the basis of everything. Start by identifying which data in that database is independent from which other data. Any data that belongs to just a single domain can move to a database of its own, connected to exclusively by a single service; anything that's cross-cutting you can slap an HTTP API in front of.

That'll help you identify your destination. For the process of getting there, take each of those unrelated clients and give each one a backend of its own. They can start off as simple gateways that just forward all requests to the original monolith, but you can then incrementally move behavior to them until at last they do everything their client needs on their own.
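That gateway idea can be sketched in a few lines of Ruby. Everything here is invented for illustration: the `Gateway` class, and a lambda standing in for an HTTP client call to the monolith.

```ruby
# Minimal sketch of a strangler-style gateway: every route starts as a
# pass-through to the legacy monolith, and handlers are migrated one at a time.
class Gateway
  def initialize(monolith)
    @monolith = monolith   # fallback for anything not yet migrated
    @migrated = {}         # path prefix => new local handler
  end

  # Move one route's behavior out of the monolith and into this service.
  def migrate(prefix, &handler)
    @migrated[prefix] = handler
  end

  def call(path)
    prefix, handler = @migrated.find { |pfx, _| path.start_with?(pfx) }
    handler ? handler.call(path) : @monolith.call(path)
  end
end

# Start with everything forwarded, then peel off /files.
legacy = ->(path) { "monolith handled #{path}" }
gw = Gateway.new(legacy)
gw.migrate("/files") { |path| "files service handled #{path}" }
```

In a real deployment the fallback would be an actual proxy call to the monolith; the point is that routes get peeled off one prefix at a time while everything else keeps working unchanged.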

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Look up the strangler pattern. That's a standard approach to problems like this.

prom candy
Dec 16, 2005

Only I may dance

12 rats tied together posted:

Probably helps to start by noting that this is the true source of the monolith, rails really doesn't have anything to do with it. Breaking up the monolith is going to mean, mainly, breaking up these foreign key relationships and coming up with a less tightly-coupled database schema.

If you change the service logic from rails to whatever new thing you want to work with more, but they both still depend on the "monolith of the row", it's now a distributed monolith which is way worse than it was before for no benefit.

I don't think I really understand more modern approaches for architecting data. All the data in our database feels very interrelated. When you say breaking up foreign key relationships and coming up with a less tightly coupled database schema what does that actually look like?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
As a concrete example, let's suppose you have a Users table and a File table, and Users can upload Files.

It might seem like your data is interrelated, because your File table has a foreign key to the Users table indicating who uploaded it, but is it really? When you request information about a File, do any of those queries actually want to join against the Users table and grab things like the uploader's hashed+salted password?

It might make sense to pull a lot of that User information into a separate database, that is only ever accessed by your authentication service. Once you've done that, you can consider also moving stuff like the user's display name and profile picture to your User database, and have your File-displaying code look up that user information via a purpose-built user information API rather than doing a database join to include it in their File results.
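The split above can be sketched with plain Ruby stubs. None of these class or field names come from a real API; the two "services" are just in-memory objects, to show the shape of the call that replaces the SQL join.

```ruby
# Instead of joining Files against Users in SQL, the file-listing code asks a
# (hypothetical) user-info API for just the public fields it needs.
UserInfo = Struct.new(:id, :display_name, :avatar_url)

class UserService
  def initialize
    @users = { 1 => UserInfo.new(1, "prom candy", "/avatars/1.png") }
  end

  # Purpose-built endpoint: returns display data only, never credentials.
  def public_profile(id)
    @users.fetch(id)
  end
end

class FileService
  def initialize(user_service)
    @user_service = user_service
    @files = [{ id: 10, name: "report.pdf", uploader_id: 1 }]
  end

  def list_files
    @files.map do |f|
      profile = @user_service.public_profile(f[:uploader_id])
      f.merge(uploader_name: profile.display_name)
    end
  end
end

files = FileService.new(UserService.new).list_files
```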

redleader
Aug 18, 2005

Engage according to operational parameters
and after a couple of rounds of that you're deep into microservices

prom candy
Dec 16, 2005

Only I may dance

Jabor posted:

As a concrete example, let's suppose you have a Users table and a File table, and Users can upload Files.

It might seem like your data is interrelated, because your File table has a foreign key to the Users table indicating who uploaded it, but is it really? When you request information about a File, do any of those queries actually want to join against the Users table and grab things like the uploader's hashed+salted password?

It might make sense to pull a lot of that User information into a separate database, that is only ever accessed by your authentication service. Once you've done that, you can consider also moving stuff like the user's display name and profile picture to your User database, and have your File-displaying code look up that user information via a purpose-built user information API rather than doing a database join to include it in their File results.

Got it, thanks! What happens when the PM says we need to add search to the file list and it should match against the user name? I guess that's when you cache a bunch of denormalized data in ElasticSearch or Algolia?

Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer
Is the pain point Rails, or it being a monolith, or the single SQL database?
What do you want to be better or easier once you've fixed this?

prom candy
Dec 16, 2005

Only I may dance

Destroyenator posted:

Is the pain point Rails, or it being a monolith, or the single SQL database?
What do you want to be better or easier once you've fixed this?

The pain point for me is 90% Rails. I want end to end type safety and a language with better DX. And no more inheritance nonsense and no more magic. I've been using Rails for 15 years so it's not like I don't understand it, I just don't believe it's a good solution for 2022's problems.

Maybe it would make sense to spin some things off into microservices as well, for example so I could use faster languages for our background services that run 24/7 and save some money on infra, but the biggest pain point for me is just working in a language and framework that I've come to loathe. I see people building these projects that are e2e typescript where they have type safety from the database schema all the way to their React components. My setup doesn't even have type safety from the database to the model.

Our org is also really really small so I'm not sure if some of the benefits of a microservices architecture would be lost on us. We'll probably grow a bit but not likely to the point where we'll be spinning out teams that would just own one or two pieces of the whole.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

prom candy posted:

Got it, thanks! What happens when the PM says we need to add search to the file list and it should match against the user name? I guess that's when you cache a bunch of denormalized data in ElasticSearch or Algolia?

If that's what you'd do in the original structure then yeah, you could keep doing that. An alternative would be to just separately search the files and the users and then merge the results before you present them back. (This also helps you do things like not only show results from people named June when someone searches for "June Sales Report", which can help make your searching more useful).
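A toy version of the search-each-side-and-merge approach, with both "indexes" stubbed as one in-memory array (real versions would query two services or two search indexes):

```ruby
# One query against file names, one against uploader names, combined and
# de-duplicated before presenting results back to the user.
FILES = [
  { id: 1, name: "June Sales Report", uploader: "alice" },
  { id: 2, name: "Q3 Budget",         uploader: "june"  },
]

def search_files(term)
  FILES.select { |f| f[:name].downcase.include?(term.downcase) }
end

def search_by_uploader(term)
  FILES.select { |f| f[:uploader].downcase.include?(term.downcase) }
end

def merged_search(term)
  # Name matches rank first, uploader matches after; duplicates removed by id.
  (search_files(term) + search_by_uploader(term)).uniq { |f| f[:id] }
end

results = merged_search("june")
```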

Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer

prom candy posted:

The pain point for me is 90% Rails. I want end to end type safety and a language with better DX. And no more inheritance nonsense and no more magic. I've been using Rails for 15 years so it's not like I don't understand it, I just don't believe it's a good solution for 2022's problems.

Maybe it would make sense to spin some things off into microservices as well, for example so I could use faster languages for our background services that run 24/7 and save some money on infra, but the biggest pain point for me is just working in a language and framework that I've come to loathe. I see people building these projects that are e2e typescript where they have type safety from the database schema all the way to their React components. My setup doesn't even have type safety from the database to the model.

Our org is also really really small so I'm not sure if some of the benefits of a microservices architecture would be lost on us. We'll probably grow a bit but not likely to the point where we'll be spinning out teams that would just own one or two pieces of the whole.

In that case I would start finding pieces you can cut off and make into another service, but keep the same data store for now.

When you have a new big feature or rework of an area, see if you can draw a line around that part of the data model and business logic and try to build it out as a separate service. Maybe it's the user profile functionality: let the rest of the code still join for presenting search results if you need, but get an agreement that this new service "owns" the tables involved and is the only place you can write to them or do migrations. (You can always create a new db user for the new service and set up table permissions if that helps with discipline.) Then you build your typed data models for the users and profiles and whatever in the service that owns those tables, and present the new version of those APIs to your apps.

That should give you an idea of how nice or not it is having a separate service. You'll have to address having two sets of logs, monitoring and alarms, deployments and all those fun things, so don't underestimate the overhead of setting up the first one of these. Tying it to ongoing feature work will make it easier to say something is "done" for now.

Further down the road you can think about splitting the data up, once the data models and ownership are more defined it will be easier to think about what needs to be colocated or has interesting data access patterns. You also might find it's fine just migrating slowly to another monolith.

Riven
Apr 22, 2002
Yeah there’s nothing inherently wrong with a monolith. If Ruby/Rails is the problem implement a strangler pattern where you slowly shift to a new monolith rather than microservices.

asur
Dec 28, 2012

Jabor posted:

As a concrete example, let's suppose you have a Users table and a File table, and Users can upload Files.

It might seem like your data is interrelated, because your File table has a foreign key to the Users table indicating who uploaded it, but is it really? When you request information about a File, do any of those queries actually want to join against the Users table and grab things like the uploader's hashed+salted password?

It might make sense to pull a lot of that User information into a separate database, that is only ever accessed by your authentication service. Once you've done that, you can consider also moving stuff like the user's display name and profile picture to your User database, and have your File-displaying code look up that user information via a purpose-built user information API rather than doing a database join to include it in their File results.

Just because you don't join doesn't mean there isn't a FK. If File is owned by a User(s) then it has a relationship that is easily represented by a FK and SQL has support for constraints, deletion, etc. Referential integrity is much harder to support across multiple databases.

I don't really agree that the base of the monolith is the data. If the data is in separate tables and you can have ACLs based on table, which I think SQL supports, then moving to separate dbs doesn't gain a lot in this situation and loses functionality. There are definitive reasons to break a database into multiple, but prom candy isn't giving any problems that would indicate his company is there yet.

Breaking the code monolith is a separate question and ownership is a very common reason. The reasons prom candy gives though seem lackluster and are unlikely to give a good RoI that will get management support. The path with the most success is probably to only rewrite code that is changing.

asur fucked around with this message at 17:59 on Aug 7, 2022

CPColin
Sep 9, 2003

Big ol' smile.
If you sit on the monolith service until it goes numb you can implement the Stranger pattern

meatbag
Apr 2, 2007
Clapping Larry

CPColin posted:

If you sit on the monolith service until it goes numb you can implement the Stranger pattern

New thread title right there.

12 rats tied together
Sep 7, 2006

asur posted:

I don't really agree that the base of the monolith is the data. If the data is in separate tables and you can have ACLs based on table, which I think SQL supports, then moving to separate dbs doesn't gain a lot in this situation and loses functionality. There are definitive reasons to break a database into multiple, but prom candy isn't giving any problems that would indicate his company is there yet.

Breaking the code monolith is a separate question and ownership is a very common reason. The reasons prom candy gives though seem lackluster and are unlikely to give a good RoI that will get management support. The path with the most success is probably to only rewrite code that is changing.

"Monolith" comes with a lot of implied context these days, but in the case I'm discussing, it absolutely stems from the data in the database. You can run a dozen different BE and FE technology stacks in front of the same models in the same database, sure, but then when you're running migrations every tech stack sees the update (and potentially the outage) immediately. That's still a monolith, it's just a way more complicated and brittle one for no benefit.

I don't think a "code monolith" is a real thing, it seems like you're describing a call stack, which is not really related to application architectures. I do agree with you that OP seems like they just want to rewrite off of rails because they don't like it, which is fair, but not really a question of where rails falls on monolithic vs service oriented design.

If this were my workplace I would implore OP to consider just writing a normal service, in ruby, and to use sorbet for static typing. This has the best path forward assuming that they get it right, and it has the least expensive path back into the monolith if the service boundary was not drawn properly and it ends up just being left join over HTTP.

smackfu
Jun 7, 2004

prom candy posted:

Got it, thanks! What happens when the PM says we need to add search to the file list and it should match against the user name? I guess that's when you cache a bunch of denormalized data in ElasticSearch or Algolia?

In my experience, you say “that will be hard because of technical limitations” and it will get moved down the priority list and never done.

prom candy
Dec 16, 2005

Only I may dance
Is monolith + a few extra services a common pattern? Overall I do kinda like having a monolith for our org, but I don't want to be tied to Typescript if, for example, I could write some kind of parsing or filtering service in Rust for some big gains. We do already have a couple of small services running in lambdas that receive thousands of webhooks and then forward the 1% we actually care about to the Rails app. There's other sections of the codebase that could probably benefit similarly.

One nice thing is we already have a k8s cluster set up and all our infra is managed by terraform so it's not a complete poo poo show or anything. Just the app at the center of it all kinda sucks to work with.

Edit: missed a bunch of posts, catching up now. Just wanted to also say thanks for all the ideas!

prom candy fucked around with this message at 23:01 on Aug 7, 2022

12 rats tied together
Sep 7, 2006

It's very common but it's not especially good, which is probably why a bunch of people (myself included) jumped out of the woodwork to issue warnings of various stripes. The big trouble spot is usually something like: if your service provides an HTTP GET, it's usually just going to end up as join-over-HTTP, because someone will write some "SomeModel's Foo attribute is the result of http get, my-service, /things/, whatever id".

That's the distributed monolith. What happens when the service is down? or slow? Calls to this handler are unavailable now because we can't generate valid serializations of SomeModel. This means that anybody else who depended on us is now also down and we've officially stepped on the "cascading failure" rake.

You can end up writing an incredible amount of defensive architecture (circuit breakers, dead letter queues, backoff timers, deadlines and deadline propagation, and so on) for a problem that really didn't need to exist in the first place.

The benefit of monolithic design in this area is twofold: one, you don't require that everyone simulate an entire galaxy of HTTP APIs on their laptops with docker compose just to be able to run a test suite, and two, everything goes down for an upgrade at the exact same time, and it comes back up at the exact same time.
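For flavor, here is a minimal version of one piece of that defensive architecture, a circuit breaker, sketched in Ruby. It shows the shape only; real breakers also add a half-open state and reset timers so the circuit can recover.

```ruby
# After N consecutive failures the breaker opens and calls fail fast with a
# fallback value instead of hammering a service that is already down.
class CircuitBreaker
  def initialize(threshold: 3)
    @threshold = threshold
    @failures = 0
  end

  def open?
    @failures >= @threshold
  end

  def call(fallback:)
    return fallback if open?           # fail fast, don't cascade
    begin
      result = yield
      @failures = 0                    # success resets the count
      result
    rescue StandardError
      @failures += 1
      fallback
    end
  end
end

breaker = CircuitBreaker.new(threshold: 2)
2.times { breaker.call(fallback: "cached") { raise "service down" } }
```

This is exactly the kind of machinery the monolith never needed: an in-process method call either works or raises, and the whole app deploys and fails as one unit.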

prom candy
Dec 16, 2005

Only I may dance

12 rats tied together posted:

It's very common but it's not especially good, which is probably why a bunch of people (myself included) jumped out of the woodwork to issue warnings of various stripes. The big trouble spot is usually something like: if your service provides an HTTP GET, it's usually just going to end up as join-over-HTTP, because someone will write some "SomeModel's Foo attribute is the result of http get, my-service, /things/, whatever id".

That's the distributed monolith. What happens when the service is down? or slow? Calls to this handler are unavailable now because we can't generate valid serializations of SomeModel. This means that anybody else who depended on us is now also down and we've officially stepped on the "cascading failure" rake.

You can end up writing an incredible amount of defensive architecture (circuit breakers, dead letter queues, backoff timers, deadlines and deadline propagation, and so on) for a problem that really didn't need to exist in the first place.

The benefit of monolithic design in this area is twofold: one, you don't require that everyone simulate an entire galaxy of HTTP APIs on their laptops with docker compose just to be able to run a test suite, and two, everything goes down for an upgrade at the exact same time, and it comes back up at the exact same time.

Thanks, this is good stuff. The services I'm thinking of are more like background jobs that run on a queue or are triggered by cron. We have a bunch of processes that are basically running 24/7 and I do wonder if I could save a bunch of money if I didn't have to spin up my entire Rails app for each one. We're a social media company so we do a lot of just getting stats from a user's profile and saving it in a DB so we can create charts for them, stuff like that.

After reading all of your wonderful posts and thinking it over a lot I think really the only responsible thing to do in my position is stick it out with Rails except in cases where something can really and truly be cleanly separated. We are launching some new products so I might see if I can let my Rails app provide auth to a secondary API that handles everything else for those products. I can also probably do more to make developing in the Rails environment less painful. Grass is greener where you water it and so on. This will be my last Rails job though, next time I'm on the hunt I'm taking it off my LinkedIn profile.

JawnV6
Jul 4, 2004

So hot ...

Pollyanna posted:

I…what :psyduck: what is even happening here

The PM's asking a perfectly reasonable question. "Delete after 30 days" doesn't tell you when the check is implemented or when a user would actually observe a deletion occurring. Do we check the entire spam folder every morning at 8 AM, so a spam email arriving at 8:01 could potentially sit there for 30.999 days? Is that operation done more frequently, like every 15 minutes? Or is there a continuous loop checking every spam email every second to ensure, against all reason and practicality, that the "delete after 30 days" requirement is met precisely? It might sound silly but for certain applications, legal requirements around retention policy might make that the preferred method.

But, for a certain kinda nerd, it's really fun to pretend like you can't possibly understand how a "human" uses a computer and continue talking past each other in increasingly histrionic ways!

12 rats tied together
Sep 7, 2006

prom candy posted:

Thanks, this is good stuff. The services I'm thinking of are more like background jobs that run on a queue or are triggered by cron. We have a bunch of processes that are basically running 24/7 and I do wonder if I could save a bunch of money if I didn't have to spin up my entire Rails app for each one. We're a social media company so we do a lot of just getting stats from a user's profile and saving it in a DB so we can create charts for them, stuff like that.

After reading all of your wonderful posts and thinking it over a lot I think really the only responsible thing to do in my position is stick it out with Rails except in cases where something can really and truly be cleanly separated. We are launching some new products so I might see if I can let my Rails app provide auth to a secondary API that handles everything else for those products. I can also probably do more to make developing in the Rails environment less painful. Grass is greener where you water it and so on. This will be my last Rails job though, next time I'm on the hunt I'm taking it off my LinkedIn profile.

Just to be clear, my current employer has this particular footgun on full auto and it's going to make my work life measurably worse in the next 6-9 months, so, if I seemed like I was projecting a little I absolutely was. I think you've got it that "really and truly be cleanly separated" is the key, but unfortunately it's very hard, it (correctly factoring a system, or decomposing a system to fit new requirements) is probably the hardest problem in all of software engineering, IMO.

It's easy to pull out "left join over HTTP" as a way to shoot down an idea, but I also want to be clear, a service that doesn't meaningfully interact with any other service is just an "app". Even if you did SOA/microservices perfectly and factored your subsystems and modules exactly correct, if you end up with 1 service, that's still a monolith. All service architectures must react to other services (software that doesn't call other software is largely useless). I'm not an expert but to me, the sign for "there is potentially a clean break point here" is whether or not your service needs to perform a sql insert with data from another service.

Like, if you're doordash, and you have table Orders in your monolith and then your "delivery service", which is a different web stack with its own postgres, needs to GET monolith/orders/order_id -> find a valid driver -> and then save an instance of DeliveryOrder which is an OrderId plus a DriverAssignment, and then you had a status updates service that queried both for push notifications/websocket events, that's bad. That's the distributed monolith again, "go and GET delivery/orders/1234", we're talking about data and we want the data that the "delivery service" "owns".

There already exists an architectural term for "a place where data is", so (to be slightly pedantic) the delivery service doesn't really own any data, the delivery service is the behavior ("find a valid driver"). If you can cleanly cut your architecture at separation points between behavior domains, you will theoretically be able to create a good service. If your service is just a different database in disguise, that will usually be bad.

In my experience, the main hint for whether or not a service has a clean separation point is how easy it is to test. If it's any more complicated than "a series of mock DriverRequests", it's probably a bad idea. If it needs a dependent service (which, again, is likely not a "service" but a database in disguise) running locally for tests, it's probably a bad idea, and so on.
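The "easy to test" smell can be sketched with Ruby's bundled Minitest. `DriverRequest` and `assign_driver` are made-up names following the doordash example above; the point is that the behavior takes plain request objects, so the test needs nothing running locally.

```ruby
# A service that owns behavior ("find a valid driver"), not storage, can be
# exercised with a series of plain mock requests and no dependent services.
require "minitest/autorun"

DriverRequest = Struct.new(:order_id, :available_drivers)

def assign_driver(request)
  # Behavior, not storage: pick a driver, return the assignment.
  driver = request.available_drivers.first
  driver && { order_id: request.order_id, driver: driver }
end

class AssignDriverTest < Minitest::Test
  def test_assigns_first_available_driver
    req = DriverRequest.new(1234, ["dasher_7"])
    assert_equal({ order_id: 1234, driver: "dasher_7" }, assign_driver(req))
  end

  def test_no_drivers_means_no_assignment
    assert_nil assign_driver(DriverRequest.new(1234, []))
  end
end
```

If the setup for a test like this starts needing a second database or a stubbed HTTP galaxy, that's the hint the boundary was drawn through data instead of behavior.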

brand engager
Mar 23, 2011

Are estimates supposed to be the entire time spent including time due to other people reviewing the code and QA testing? It's kinda hard to estimate the parts that I'm not going to be doing

Truman Peyote
Oct 11, 2006



assuming you're talking scrum, estimates are supposed to be effort before the entire story is done, and are specifically not supposed to estimate time. or at least that's the "official" scrum line. when i get asked for time estimates i emphasize that there's a lot we don't know and that i refuse to be held to it, but if you need a number you can use X, where X is significantly longer than i actually expect it to take.

12 rats tied together
Sep 7, 2006

in kanban the estimation would come from your PM/lead and it would be based on historical data about how long it has taken to move a similar work item through the entire production chain in the past, where you might be tasked with grouping items of similar effort together, but you would never be asked how long it will take to complete TEAMNAME-1234

but, a key aspect of it is that the forecasting includes the entire production chain and all potential blocked states, which includes review from other people.

e: the other key aspect of it is that it is explicitly based on time, not some abstraction for level of effort that everyone at all stages converts into time implicitly in their heads

12 rats tied together fucked around with this message at 21:07 on Aug 9, 2022

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

brand engager posted:

Are estimates supposed to be the entire time spent including time due to other people reviewing the code and QA testing? It's kinda hard to estimate the parts that I'm not going to be doing

Estimates are supposed to be put in place by consensus of the team responsible for delivering the story, not by an individual.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

Truman Peyote posted:

assuming you're talking scrum, estimates are supposed to be effort before the entire story is done, and are specifically not supposed to estimate time. or at least that's the "official" scrum line. when i get asked for time estimates i emphasize that there's a lot we don't know and that i refuse to be held to it, but if you need a number you can use X, where X is significantly longer than i actually expect it to take.

I've found that really small stories can much more easily be translated into time. As complexity and uncertainty (and therefore points) go up, it gets a lot fuzzier and becomes "probably by end of sprint" but not much better than that.

YanniRotten
Apr 3, 2010

We're so pretty,
oh so pretty

New Yorp New Yorp posted:

Estimates are supposed to be put in place by consensus of the team responsible for delivering the story, not by an individual.

No, they're supposed to be squeezed out of the person trying to do 80% of the work on the "top priority" without effective help from their team, by product, under duress. If possible a new estimate should be requested at least daily and under absolutely no circumstances should product go to standup or look at Jira.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Che Delilas posted:

I've found that really small stories can much more easily be translated into time. As complexity and uncertainty (and therefore points) go up, it gets a lot fuzzier and becomes "probably by end of sprint" but not much better than that.

Coding, Fast and Slow

lifg
Dec 4, 2000
<this tag left blank>
Muldoon
All the scrum teams I’ve been on eventually estimate everything as “medium”, using whatever number of points they’ve decided means medium. Stories that feel too large to be medium are broken down until they’re medium.

I don’t like scrum points. They take up a lot of mental space during planning meetings without adding a lot of use.

Making everything medium mostly works.

brand engager
Mar 23, 2011

The estimates are for our sprints, we've been doing them before and during the kickoff meeting

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
If this is for sprint planning, the useful thing to estimate is most likely to be "when will I be freed up from this task so I can commit to something else". If your QA and release process requires constant babying by the feature developer to push things through, and you won't be able to pick anything else up until after it's done, then you should include that time. (Really you should fix your process to involve less bureaucratic makework, but let's suppose that's off the table for now). If it's a more hands-off process where you can send it to QA, spend five minutes next week pushing the "yes release it" button once QA signs off, and otherwise not worry about it at all once you've handed it over, then you should just include the actual feature building time.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Jabor posted:

If this is for sprint planning, the useful thing to estimate is most likely to be "when will I be freed up from this task so I can commit to something else". If your QA and release process requires constant babying by the feature developer to push things through, and you won't be able to pick anything else up until after it's done, then you should include that time. (Really you should fix your process to involve less bureaucratic makework, but let's suppose that's off the table for now). If it's a more hands-off process where you can send it to QA, spend five minutes next week pushing the "yes release it" button once QA signs off, and otherwise not worry about it at all once you've handed it over, then you should just include the actual feature building time.

But if QA is the bottleneck -- and it always is anytime there's explicit manual QA -- then you're just releasing more work to get held up at the bottleneck.

That's why QA is still part of estimates. If you can't release your stories because they're still being QAed, the story isn't complete.

The estimate should include ALL the effort, not just the developer's effort.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

New Yorp New Yorp posted:

But if QA is the bottleneck -- and it always is anytime there's explicit manual QA -- then you're just releasing more work to get held up at the bottleneck.

That's why QA is still part of estimates. If you can't release your stories because they're still being QAed, the story isn't complete.

The estimate should include ALL the effort, not just the developer's effort.

Again, this depends on who the estimates are for.

If this is for planning your own sprints, then your own time is what's relevant to that planning. Are you going to sit around twiddling your thumbs for the second half of the sprint because you finished your part of it and are waiting on QA?

And for what it's worth, we do QA on all our releases and major features and QA resources have never been "the bottleneck" for us - they're a step in the process that takes a pretty consistent amount of time. If a lack of inexpensive QA resources is bottlenecking your expensive developer resources, then whoever's in charge of your staffing is an idiot.

brand engager
Mar 23, 2011

We don't have personal sprints, it's just one for everyone. There's only 5 developers also

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
The hell is a personal sprint?

brand engager
Mar 23, 2011

Volmarias posted:

The hell is a personal sprint?

Whatever the post above my last one meant by "planning your own sprints"

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
what? i'm talking about the sprints that you as a team of engineers are doing in order to deliver on engineering goals. the estimates that matter for those sprints are the engineering work that will be required from your team. work that will not be carried out by an engineer on your team does not factor in to what your team can accomplish during the sprint, and should not be part of your estimate.

these are different from product-focused estimates like "a major customer wants this feature, what estimated ship date should we give them" which of course should include everything in the path to get that feature fully shipped.

Xguard86
Nov 22, 2004

"You don't understand his pain. Everywhere he goes he sees women working, wearing pants, speaking in gatherings, voting. Surely they will burn in the white hot flames of Hell"
Think the clarification point is if QA is "part of the team"* or if it also has to go to some kind of external QA.

*Even if QA is done by an engineer.

Usually your estimation is tied to what you control. So if you QA in the team it counts, but a separate department or team does not. That applies to estimation up front or Kanban/lean style measurements to derive estimates.

Ideally team owns everything straight through to the customer or user.

Not perfect but OK: someone measuring the full cycle and focusing improvements where they'll actually help.

The rest is various bullshit, theater, and malpractice.
