|
I'm convinced that the primary reason to choose a microservices architecture is organisational and not technical. Once you're dealing with a system that's too complex for one (small) team to manage, and you want teams to be able to operate independently, you kinda have to embrace Conway's law and split the system into independently deployable chunks, even (to some extent) to the detriment of the system from a purely technical viewpoint.
|
# ? Jan 2, 2019 16:38 |
|
Messyass posted:I'm convinced that the primary reason to choose a microservices architecture is organisational and not technical. It's 100% this
|
# ? Jan 2, 2019 17:04 |
|
People already split a service into database/persistence, frontend, and middle tier without needing multiple teams to do it effectively. The more dysfunctional you are, the more you start splitting team members by function rather than by service, although you do need a group of people who are experts at certain functional parts of your stack. Also, to improve stability, it makes sense to separate the parts of a system that keep changing from the parts that don’t change as often. You don’t blow away an entire rack of servers when simply invalidating a cache can do the trick; isolating changes like that can help improve availability. My company separated components out, not necessarily into microservices, because taking out availability in multiple areas with cowboy deploys in prod was too much risk to handle even at 3 engineers. Most companies should probably get SOA going somewhat well before thinking they can break it down finer.
|
# ? Jan 2, 2019 17:08 |
|
The operational requirements themselves largely spring out of Conway's Law, because it becomes too complex for disconnected teams to maintain awareness of each other. Lots of monolithic applications have parts that change less frequently than other parts. The virtual memory subsystems of some Unixes have been virtually unchanged for thirty years, but the relative infrequency of change versus drivers for recent hardware hasn't prompted a move to message-passing microkernels. The components have vastly different stability guarantees even though they exist within the same kernel. Operationally, this doesn't matter for most use cases because the reliability is ensured at other levels of the system: sure, you need to reboot a perfectly functional VM subsystem to upgrade a RAID driver, but it doesn't matter because you have load balancers able to route around the failure. The industry has bought into this idea that there's a dichotomy between singular monoliths and extremely fine-grained microservices. There's a whole wealth of possibilities that exists in between those two extremes—much like layering user-space programs on top of a monolithic kernel—and that space is exactly what's right for most mid-sized projects.
|
# ? Jan 2, 2019 21:17 |
|
Messyass posted:I'm convinced that the primary reason to choose a microservices architecture is organisational and not technical. This is why we do it at my current job. Before that we used a badly implemented microservice architecture for CV building.
|
# ? Jan 3, 2019 06:55 |
|
The worst part about coming back from PTO is seeing a new bug assigned to you while you were gone. Even worse is when it's about the error message returned when a workflow is configured improperly, an error message written by the BA, with approval from the PO, and with no objections from the team when reviewing the user story, copied and pasted directly into a string literal with no modifications. I am not looking forward to having this conversation immediately after returning from a break. Even more infuriating is that our QA team members are constantly bitching about never having enough time and then they open bogus bugs like this.
|
# ? Jan 7, 2019 16:47 |
|
Protocol7 posted:I am not looking forward to having this conversation immediately after returning from a break. Even more infuriating is that our QA team members are constantly bitching about never having enough time and then they open bogus bugs like this. I used to have a QA manager who would always make this complaint and say we weren't limiting our scope properly and identifying where changes were made, which led to wasted time as QA tested everything, just to be sure. So I started being extremely specific about what I changed and listing what needed to be tested, plus a little extra around the edges. The QA person on my team still repeatedly tested random other poo poo and filed urgent bugs against Test about behavior that was also on Production (and wasn't touched). This person was completely oblivious, too, and just completely failed to internalize any of the "we need to spend less time testing unrelated functionality" feedback in our sprint retrospectives. The QA manager, meanwhile, was the one who repeatedly objected to our proposal of deploying all the cron jobs with every release (they had been separated from the main code base and each other years ago because we had no review process and kept breaking poo poo) unless we were willing to document what every job did, when it ran, what parameters it could take, etc. This was so QA would know how to test all the jobs, because the manager threatened to have the team test all of them, for every deploy. So I added all that documentation (and made it required in the future) and we started deploying all of them. QA never added a procedure to do any additional testing of them, unless we called out specific jobs in our changelogs. We devs knew all along how that would end up.
|
# ? Jan 7, 2019 17:43 |
|
We now have weekly “deployment managers” (read: team member hats) who run deployments to staging and production, depending on the day of the week, every day of the week except Friday. Groan.
|
# ? Jan 7, 2019 17:49 |
|
CPColin posted:I used to have a QA manager who would always make this complaint and say we weren't limiting our scope properly and identifying where changes were made, which led to wasted time as QA tested everything, just to be sure. So I started being extremely specific about what I changed and listing what needed to be tested, plus a little extra around the edges. The QA person on my team still repeatedly tested random other poo poo and filed urgent bugs against Test about behavior that was also on Production (and wasn't touched). That's... almost exactly what's happening with this project, funnily enough. We're removing all of the unused legacy code, most of which is COM that we want gone so we can stop doing nasty registry poo poo in the installer and move to a more standard xcopy-style installer. With the help of an architect I wrote a LINQPad script that recursively identifies COM dependencies for a specific project in a BFS-style search, so we used that script to break the work down into discrete user stories, and we all agreed that it was the developer's responsibility to identify which features of the application needed testing based on what code was changed or removed. They're still testing random poo poo, and since it's basically a system-wide regression, they're finding a bunch of existing bugs and can't keep them separated from the work we've actually completed.
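The BFS-style dependency walk described above can be sketched in a few lines. This is a Python sketch of the idea, not the actual LINQPad script, and the component names in the example map are made up for illustration:

```python
from collections import deque

def find_com_dependencies(project, deps):
    """Breadth-first walk of a dependency graph, collecting every
    transitive dependency of `project`. `deps` maps a component name
    to the list of components it references directly."""
    seen = set()
    queue = deque([project])
    while queue:
        current = queue.popleft()
        for dep in deps.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Hypothetical dependency map for illustration
deps = {
    "App": ["ReportEngine", "Auth"],
    "ReportEngine": ["ComWrapper"],
    "ComWrapper": ["LegacyCom"],
}
print(find_com_dependencies("App", deps))
```

Running this per project gives you the full transitive set of COM components each one pulls in, which is exactly the kind of list you can slice into discrete user stories.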
|
# ? Jan 7, 2019 18:50 |
|
I don't understand the pain and I would like to. Bugs are bugs; what does it matter why they exist or where they were found? Sprint planning, or the person responsible for sorting bug reports by priority, should be routing them to the right person and making sure the most important ones get fixed, regardless of whether the bugs have been around for years or were just introduced. I agree it's bad that QA is wasting their time, and thus under-testing new code, but that shouldn't really be an issue for a frontline developer. If anything it should mean less work, since the old code has to have at least pretended to be stable in order to get shipped, right?
|
# ? Jan 7, 2019 19:58 |
|
LLSix posted:I don't understand the pain and I would like to. Bugs are bugs; what does it matter why they exist or where they were found? Sprint planning or the person responsible for sorting bug reports by priority should be routing them to the right person and making sure the most important ones get fixed, regardless of whether the bugs have been around for years or were just introduced. In my case, the problem was that all these bugs were being pulled straight into the sprint and cluttering everything up, because the QA person thought they were related to the changes we were trying to push through. They weren't, but I still had to spend the time to look at them and eject them from the sprint and back onto the Triage board. No big deal as an every-now-and-then kind of thing, but this person just kept doing it with stuff that was barely tangentially related to the functionality I'd actually changed, even when I would put "do not test feature Y; it didn't change" right in the ticket. Also, a lot of the bugs being filed were in the format, "I think feature X should act like Z instead!" and I always wanted to close those with, "That's your opinion, not a loving bug, you idiot!"
|
# ? Jan 7, 2019 20:05 |
|
CPColin posted:In my case, the problem was that all these bugs were being pulled straight into the sprint and cluttering everything up, because the QA person thought they were related to the changes we were trying to push through. They weren't, but I still had to spend the time to look at them and eject them from the sprint and back onto the Triage board. No big deal for an every-now-and-then kind of thing, but this person just kept doing it with stuff that was barely tangentially related to the functionality I'd actually changed, even when I would put do not test feature Y; it didn't change right in the ticket. Are we coworkers? This sounds bizarrely familiar. Logging UI bugs in a part of the application we have not even touched is useful but not pleasant when the bug has existed for years and they think you broke it. Macichne Leainig fucked around with this message at 20:41 on Jan 7, 2019 |
# ? Jan 7, 2019 20:39 |
I think that's just endemic to QA in general "hepl i found bug" "Oh yeah that's existed for a while, don't worry about it for now" "fix bug." "Don't worry it's not relevant to this release, we can leave it for now" "fix b.ug" "We have more important things to worry about and want to keep our scope fixed for this release" "fx bug" "Seriously this isn't important right now, it's in the backlog, we'll get to it later" -bug escalated to critical. release goes amber. in the distance, sirens-
|
|
# ? Jan 7, 2019 21:51 |
|
CPColin posted:The QA person on my team still repeatedly tested random other poo poo and filed urgent bugs against Test about behavior that was also on Production (and wasn't touched).
|
# ? Jan 7, 2019 22:12 |
|
Cool, GitHub now allows users to make unlimited private repos on the free tier. There are some things you can't do, like add more than 3 contributors, but still. I often have personal projects I'm working on that I don't want public yet but would like to store on GitHub, and this is perfect.
|
# ? Jan 7, 2019 22:23 |
|
Also, bugs sitting unnoticed in production for long periods of time might turn out to be features, depending on the specific case.
|
# ? Jan 7, 2019 22:31 |
|
Carbon dioxide posted:Cool, GitHub now allows users to make unlimited private repos on the free tier. I have a heap of stuff on gitlab because of this. Nice to see.
|
# ? Jan 7, 2019 22:43 |
|
LLSix posted:I don't understand the pain and I would like to. Bugs are bugs; what does it matter why they exist or where they were found? Sprint planning or the person responsible for sorting bug reports by priority should be routing them to the right person and making sure the most important ones get fixed, regardless of whether the bugs have been around for years or were just introduced. This ends up being a dog chasing its tail. Forever. Like my CEO, who is incapable of not sidetracking everything and then wonders why things aren't ready yet. Time moves in a straight line you gently caress!
|
# ? Jan 8, 2019 00:32 |
raminasi posted:Also, bugs sitting unnoticed in production for long periods of time might turn out to be features, depending on the specific case. Fifth Axiom of API development: all API behaviour, intentional or not, must be supported
|
|
# ? Jan 8, 2019 00:38 |
|
this is probably a stupid question, but: if a bug exists and is known, why is there not a backlogged ticket filed for it so that any future tickets can be closed as dupes?
|
# ? Jan 8, 2019 00:42 |
|
vonnegutt posted:this is probably a stupid question, but: if a bug exists and is known, why is there not a backlogged ticket filed for it so that any future tickets can be closed as dupes? I worked at a place where the dev lead would go through and blindly close any ticket more than 60 days old "because it was probably fixed by now anyway." Nobody ever made long-term bug reports.
|
# ? Jan 8, 2019 00:55 |
|
My boss partially quit over nobody trying to fix issues that had been open for years and caused production outages. In fact, we would occasionally go look in ticket history and find “closed” tickets that were never resolved, where there was some speculation that the issue could cause a problem, but because it never caused an outage it was closed by product. The worst ones have been recurring production outages that destroy our SLA budget, while none of the suggested actions are ever followed through on. I suggested an APM solution and instrumentation of our JVMs and filed a ticket for it - nothing over a year later, and about 20 hours of outages that we can’t explain.
|
# ? Jan 8, 2019 01:41 |
|
necrobobsledder posted:My boss partially quit Is this a software stroke? Like the half of his body that doesn't browse hacker news is paralyzed?
|
# ? Jan 8, 2019 01:52 |
|
Gildiss posted:Is this a software stroke? Like the half of his body that doesn't browse hacker news is paralyzed? Worse, the half that does has become libertarian.
|
# ? Jan 8, 2019 01:58 |
|
What’s the solution when your bug tracker is drowning in unreproducible bugs?
|
# ? Jan 8, 2019 02:25 |
|
I'm going to wildly mix the various discussions in the most bad-faith reading here. Buuuuuut if QA is running an invalid setup and kicking those bugs over to one of four (4) human developers, there's zero 'process' solution available. Who, exactly, is cloning that to the known issue in the backlog? If there were a PM capable of triage, one of four (4) human devs wouldn't be picking it up either.
|
# ? Jan 8, 2019 02:35 |
|
smackfu posted:What’s the solution when your bug tracker is drowning in unreproducible bugs?
|
# ? Jan 8, 2019 03:25 |
|
Seat Safety Switch posted:I worked at a place where the dev lead would go through and blindly close any ticket more than 60 days old "because it was probably fixed by now anyway." Our VP of Engineering made that the official policy, but for a decent reason: POs have 60 days to get a bug to a resolved state, as a way of balancing bugs against feature development. If after 4 sprints you didn't prioritize it to be fixed, then it's obviously not important enough to ever fix, so it gets resolved as won't-fix.
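That 60-day policy is simple enough to sketch. This is a hypothetical illustration in Python; the ticket structure and field names are made up:

```python
from datetime import datetime, timedelta

def auto_resolve(tickets, now, max_age_days=60):
    """Resolve as won't-fix any open ticket older than max_age_days,
    mirroring the 60-day (roughly 4-sprint) policy described above."""
    cutoff = now - timedelta(days=max_age_days)
    for ticket in tickets:
        if ticket["status"] == "open" and ticket["opened"] < cutoff:
            ticket["status"] = "wont_fix"
    return tickets

now = datetime(2019, 1, 8)
tickets = [
    {"id": 1, "opened": datetime(2018, 10, 1), "status": "open"},   # past the cutoff
    {"id": 2, "opened": datetime(2018, 12, 20), "status": "open"},  # still fresh
]
auto_resolve(tickets, now)
```

Whether blindly running something like this is a good idea is, of course, the entire argument of the thread.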
|
# ? Jan 8, 2019 03:52 |
|
Hughlander posted:Our VP of Engineering made that the official policy but for a decent reason. POs have 60 days to get a bug to a resolved state as a way of balancing bugs vs feature development. If after 4 sprints you didn't prioritize it to be fixed then it's obviously not important enough to ever fix so it gets resolved as won't fix. See, where I work, the project managers would love this because when we're not fixing bugs, we're developing new features! Until a customer screams and then it's "this has been a known issue for months why haven't we fixed it?"
|
# ? Jan 8, 2019 05:20 |
|
Gildiss posted:Is this a software stroke? Like the half of his body that doesn't browse hacker news is paralyzed?
|
# ? Jan 8, 2019 16:09 |
|
Carbon dioxide posted:Cool, GitHub now allows users to make unlimited private repos on the free tier. A week after a group of peers and I switched to Bitbucket for this exact reason. Ah well, they're both good Git services in their own right.
|
# ? Jan 8, 2019 17:13 |
|
Che Delilas posted:See, where I work, the project managers would love this because when we're not fixing bugs, we're developing new features! The PM now wants it to be hotpatched, so I'm telling him no. It feels good to tell the PM that his bug will not get fixed immediately because he should have thought about that during planning. And, I know exactly which customer is screaming about it and I want them to suffer.
|
# ? Jan 9, 2019 05:33 |
|
return0 posted:I disagree. There are clear operational wins to be had at scale by breaking a monolith apart in any circumstance where different logical parts of the monolith have different functional characteristics. For example, a user identity service component which is called very frequently contrasted with some rarely called leaf system; a highly compute intensive component contrasted with an I/O intensive component; a subsystem with a data model best suited to a relational DB contrasted with another using a document store, etc. Isolation has benefits here. I'm a little curious what your definition of "monolith" is here. My understanding is that traditional monoliths are three tiered architectures, so they still have the data layer as a separate system. If you consider a monolith to be a single system that is committed and deployed at the same time then there are big companies that run off monoliths.
|
# ? Jan 10, 2019 04:06 |
|
NovemberMike posted:I'm a little curious what your definition of "monolith" is here. My understanding is that traditional monoliths are three-tiered architectures, so they still have the data layer as a separate system. If you consider a monolith to be a single system that is committed and deployed at the same time then there are big companies that run off monoliths. I think that's a pretty accurate definition of "monolith" but it's missing a couple of other elements. Monoliths are also usually projects large enough to have multiple teams with little day-to-day interaction committing to them, and a backing DB with a large number of tables. Not every monolith needs to be broken into microservices, though. There is a lot of admin/engineering overhead that comes with the microservice pattern, and it mostly arises as a response to large concurrent user bases in a web service context. The Windows OS would probably be considered a monolith, but it is deployed (installed) at a more or less 1:1 ratio of running instance to user. The issues of development at scale there are solved by breaking things into packages, and the notion of user scale is mostly non-existent (since each user has their own dedicated instance). Microservices, though, give you finer granularity when you are dealing with, say, a million+ users hitting the same login/registration/home endpoints in a web context at the same time. You want to be able to scale those user-edge components under load without necessarily scaling up backend or business-logic components that operate async or in the background. But you also want to preserve the ability to scale up those async processes as their demand ramps up. Those two things usually have different usage/scale patterns. It also allows you to make DB data models a little more fluid by trying to pin one service to one table, so you don't have to worry about changes to one service's model having unintended knock-on effects on other services in the ecosystem.
|
# ? Jan 10, 2019 04:42 |
|
ChickenWing posted:Fifth Axiom of API development: all API behaviour, intentional or not, must be supported AKA Hyrum's Law
|
# ? Jan 10, 2019 11:32 |
|
Microservices can also be defined in terms of risk/churn profile, where a component under heavy development can put other services’ availability at risk if rolled together with them. Loosely coupled monoliths, to me, include services that internally use forms of service discovery and indirection to determine how to call each other. Well-designed monoliths from decades ago could be broken out into other services living in the same box, and moving them to other machines would simply mean settling on appropriate protocols and conventions to communicate with them. What makes microservices (or any modularization) worse, beyond the overhead issues, is when you’re sharing more libraries than you realize and that library is under constant development. Every other company has some core util library that people use everywhere that’s also insufficiently tested, and when a bug is found you have to re-deploy everything.
|
# ? Jan 10, 2019 14:40 |
|
necrobobsledder posted:What makes microservices (or any modularization) worse, beyond the overhead issues, is when you’re sharing more libraries than you realize and that library is under constant development. Every other company has some core util library that people use everywhere that’s also insufficiently tested, and when a bug is found you have to re-deploy everything. Doesn't that get largely solved with some sort of versioned package management? "We fixed a bug in the Util library affecting FooService; version 1.6 of Util has been published." "Okay, but I'm developing BarService, and 1.5 is fine because that bug doesn't affect us; we'll put a story on the backlog to upgrade in our next sprint."
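The version-pinning flow in that exchange can be sketched as a toy resolver. This is a minimal sketch only (real package managers handle full semver ranges, lockfiles, and transitive dependencies); FooService, BarService, and Util are the hypothetical names from the post:

```python
def newest_satisfying(available, spec):
    """Return the newest available version matching a simple spec:
    ('==', v) pins exactly; ('>=', v) floats to the newest release.
    Versions are (major, minor) tuples for simplicity."""
    op, pin = spec
    if op == "==":
        return pin if pin in available else None
    candidates = [v for v in available if v >= pin]
    return max(candidates) if candidates else None

available = [(1, 4), (1, 5), (1, 6)]  # published Util releases

# FooService is hit by the bug fixed in 1.6, so it floats forward
foo = newest_satisfying(available, (">=", (1, 5)))  # picks (1, 6)

# BarService pins 1.5 and upgrades on its own sprint schedule
bar = newest_satisfying(available, ("==", (1, 5)))  # stays on (1, 5)
```

The whole point of versioning the shared library is that FooService and BarService make those choices independently, instead of a single Util deploy forcing everyone to redeploy at once.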
|
# ? Jan 10, 2019 15:38 |
Yes that, couldn't remember where I'd read it I found that article about a week after experiencing it as fully as humanly possible and it will stick with me for the rest of my goddamn professional career
|
|
# ? Jan 10, 2019 15:44 |
|
That reminds me of the time the WinSCP guy fixed a UI bug that I basically developed a workflow around using because I thought it was a feature, so I asked him to turn it into a UI option. Dunno if he ever did, though, since I haven't used it heavily since I got out of LAMP hell.
|
# ? Jan 10, 2019 16:05 |
|
New Yorp New Yorp posted:Doesn't that get largely solved with some sort of versioned package management? "We fixed a bug in the Util library affecting FooService; version 1.6 of Util has been published".
|
# ? Jan 10, 2019 16:12 |