|
rt4 posted:What's the deal with microservices? Sounds like a good way to end up with an incoherent, fragmented codebase. They can be good if the alternative is a 10MM LOC monstrosity with nothing decoupled. Basically, extremes are bad and something in the middle is probably good.
|
# ? Dec 19, 2016 17:14 |
|
You start with an incoherent monolithic codebase, and isolate each piece. Now you have microservices. Declare victory.
|
# ? Dec 19, 2016 17:15 |
|
rt4 posted:What's the deal with microservices? Sounds like a good way to end up with an incoherent, fragmented codebase. If done well, you end up with small, independently deployable, independently versionable services that communicate via a clearly defined API that is easy to isolate for unit testing.
|
# ? Dec 19, 2016 17:18 |
|
In practice most people seem to end up with a bunch of tightly coupled services that have to all be updated in lockstep.
|
# ? Dec 19, 2016 19:33 |
|
Programming is hard.
|
# ? Dec 19, 2016 19:52 |
|
Plorkyeran posted:In practice most people seem to end up with a bunch of tightly coupled services that have to all be updated in lockstep.
|
# ? Dec 19, 2016 19:55 |
|
KoRMaK posted:I've gotten a lot better about this, and have pushed the ideology out to my team. Our code is far less rigid and fragile now because of it. I owe a lot of that to Sandi Metz and her youtube talks and vids. Thanks Sandi. Thandi! I really should learn about decoupling and all that, cause I like the idea of microservices, I just always get tripped up on where the lines should be drawn. Might as well, I haven't watched a Sandi talk in a while anyway.
|
# ? Dec 19, 2016 20:56 |
|
microservices are great if you follow three rules: 1. the scope of a service is whatever has direct access to the state it encompasses. if you have a sql db, anything that directly queries that db is part of a single service and should be treated as such. if you have multiple tightly coupled sources of state (shards, or sql + some sort of cache, or whatever) those are the same service too. 2. publish apis (even if only informally) and never ever break backwards compatibility if you can avoid it. if you have to break backwards compatibility it's probably better to write a new service than to update the old one. 3. share application models. if you have a domain object (like a transaction for an ecommerce platform or a post for a content platform) all services should share the model. this makes it way easier to write clients and servers. gRPC + protobufs, or thrift, or something similar can help here
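rule 3 can be sketched in a few lines of Python; the Transaction model and its fields here are hypothetical stand-ins for whatever domain object your services actually share:

```python
from dataclasses import dataclass, asdict
import json

# One shared module defines the wire format; every client and server imports
# it, so the serialization can't silently drift between services.
@dataclass
class Transaction:
    id: str
    amount_cents: int
    currency: str

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "Transaction":
        return cls(**json.loads(payload))

# the "server" side serializes...
wire = Transaction(id="t-1", amount_cents=1999, currency="USD").to_json()
# ...and the "client" side deserializes with the exact same model
assert Transaction.from_json(wire).amount_cents == 1999
```

in practice you'd generate this from a .proto or thrift IDL instead of hand-writing it, but the principle is the same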
|
# ? Dec 19, 2016 21:08 |
|
the talent deficit posted:microservices are great if you follow three rules: Versioned APIs are nice for #2, but in practice you'll never be able to shut down your old API. It does help keep people from using deprecated things going forward, though.
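A minimal sketch of what versioning buys you (handlers and routes here are hypothetical): the /v1 handler stays frozen while /v2 changes the response shape, so old clients keep working whether or not you ever retire v1:

```python
def get_user_v1(user_id):
    # frozen contract: old clients expect a flat "name" string
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id):
    # new contract: structured name; note that v1 above is untouched
    return {"id": user_id, "name": {"first": "Ada", "last": "Lovelace"}}

# toy routing table standing in for whatever framework you actually use
ROUTES = {
    ("GET", "/v1/users"): get_user_v1,
    ("GET", "/v2/users"): get_user_v2,
}

def dispatch(method, path, user_id):
    return ROUTES[(method, path)](user_id)

assert dispatch("GET", "/v1/users", 7)["name"] == "Ada Lovelace"
```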
|
# ? Dec 19, 2016 22:28 |
|
Use kafka as a central data hub, enforce protobuf as a message format and publish all your data in kafka. microservices for free.
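Not actual Kafka client code, just a toy in-memory stand-in for the pattern: producers publish every domain event to a named topic, and any number of services subscribe independently without the producer knowing about them:

```python
from collections import defaultdict

# Minimal pub/sub hub: each topic fans out to every registered consumer,
# which is the decoupling property the kafka-as-data-hub setup relies on.
class Hub:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

hub = Hub()
billing_seen = []
analytics_seen = []
hub.subscribe("orders", billing_seen.append)    # hypothetical billing service
hub.subscribe("orders", analytics_seen.append)  # hypothetical analytics service
hub.publish("orders", {"order_id": 1, "total_cents": 4200})
assert billing_seen == analytics_seen == [{"order_id": 1, "total_cents": 4200}]
```

The real thing adds durability, ordering per partition, and consumer offsets, which is what makes it viable as the system of record between services.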
|
# ? Dec 19, 2016 23:02 |
|
BabyFur Denny posted:Use kafka as a central data hub, enforce protobuf as a message format and publish all your data in kafka. microservices for free.
|
# ? Dec 20, 2016 02:42 |
|
the talent deficit posted:microservices are great if you follow three rules: 1. Tracing requests is still really important, but it's much harder in a microservice architecture. Latency aside, your chances of suffering a partial failure are several orders of magnitude higher than in a setup where your calls never leave the machine. If you can, use a distributed tracing framework like Zipkin. If you can't, make sure your requests are tagged at the point where they enter the system so you can filter them sensibly in whatever log aggregator you're using. 1a. Good lord, sync your server times. 2. Performance is far less predictable in a distributed system than in the same layout of components talking over local sockets. Instrument appropriately. The upside is that while it's far more work to instrument, and far harder to make sense of the results, each step of that instrumentation is far easier to capture from an external monitoring component. Your explicit contracts ensure that the inner services are not black boxes.
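A hedged sketch of the tag-at-the-edge approach (the header name and logger are hypothetical): mint a correlation id once when the request enters the system, reuse the caller's id if one is already present, and propagate it on every log line and downstream call so the aggregator can stitch the path back together:

```python
import logging
import uuid

def handle_inbound(headers):
    # reuse the upstream id if the caller sent one, otherwise mint it here,
    # at the edge -- never deeper in the call graph
    request_id = headers.get("X-Request-Id") or str(uuid.uuid4())

    # attach the id to every log record this request produces
    log = logging.LoggerAdapter(logging.getLogger("svc"), {"request_id": request_id})
    log.info("handling request")

    # headers to forward on any downstream service call
    return {"X-Request-Id": request_id}

# an edge request with no id gets one; an internal hop keeps the caller's
assert handle_inbound({"X-Request-Id": "abc-123"})["X-Request-Id"] == "abc-123"
```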
|
# ? Dec 20, 2016 03:01 |
|
Plorkyeran posted:In practice most people seem to end up with a bunch of tightly coupled services that have to all be updated in lockstep. Like anything else, you need to consider the business requirements of what you're pushing out and the nature of what you're building. If all your components are capable of enough backward compatibility to facilitate zero-downtime deploys, and you're operating with a small enough team that you have adequate visibility into breaking changes, you probably have very little reason to go with a full-on microservice architecture to get the benefits you're looking for. A healthy path forward is to build components with sensible boundaries -- standard engineering practice for the last several decades -- so that making one into a microservice (or a smaller monolithic service, if that makes more sense for your sub-project) later just involves shimming a network interface onto it. One of the problems with microservices is that it can be very hard to build a minimum viable product with them, depending on your project/developer boundaries. Setting hard contracts too early can seriously impinge on your ability to iterate quickly, especially if your change has to touch a lot of different components. They're great when you have reasonably final API requirements, but that isn't good for exploratory work. It can make a lot more sense to keep everything in a monolithic repository/construct so that, for example, code reviews can comfortably span projects and preserve the correct context. But this approach can be dangerous if the team lacks the discipline to keep things reasonably orthogonal so they can be separated out later. Vulture Culture fucked around with this message at 03:13 on Dec 20, 2016 |
# ? Dec 20, 2016 03:11 |
|
Bongo Bill posted:Programming is hard. Programming is easy. Programming WELL is hard.
|
# ? Dec 20, 2016 04:08 |
|
How big of a loving system do I have to have to start getting into caring about distributed logging??? Something the size of a loving star destroyer?
|
# ? Dec 20, 2016 04:21 |
|
Vulture Culture posted:Except for, you know, actually operating them. Ops problem now! Nagios event handler to restart service on any problem. Next! (This is not a serious reply)
|
# ? Dec 20, 2016 04:21 |
|
KoRMaK posted:How big of a loving system do I have to have to start getting into caring about distributed logging??? Distributed logging is great on systems of all sizes, especially if you don't have casual access into the guts of the running system (containers, Heroku, etc). You just, you know, use one of the ten million packages that already exist for doing it, or use a cloud service like Papertrail (after making sure your logs are scrubbed of credentials), or whatever else sane instead of rolling your own.
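One low-effort version of this, sketched with the Python stdlib only: log structured JSON to stdout and let the platform (container runtime, Heroku, a Papertrail agent, whatever) ship it somewhere central; the app never needs to know where the logs end up:

```python
import json
import logging

# Emit one JSON object per log line so the aggregator can index fields
# instead of regex-ing free text.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()  # stdout/stderr; the platform does the shipping
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("user signed in")
```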
|
# ? Dec 20, 2016 10:18 |
|
rt4 posted:What's the deal with microservices? Sounds like a good way to end up with an incoherent, fragmented codebase. That's not necessarily a negative. A major benefit of decoupled microservices is that "ownership" of a service belongs to a particular team, who can use the technologies most suitable for the task, not just the ones every dev in the organization is familiar with. Just remember the rules: 1. Each service takes care of its own persistence; no sharing allowed. Physically you might share a single db or schema between them, but NO looking at or touching another service's data. That would be integration by database, which defeats the decoupling. 2. Never start with microservices from the ground up. Build a monolith first, then separate parts as needed and sensible. Practice good modular programming and you can have "microservices" within your application.
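A sketch of rule 1 using sqlite3 as a stand-in (table and service names are hypothetical): two services may physically share one database, but billing never SELECTs from the orders tables directly; it goes through the owning service's API, faked here as a plain function call:

```python
import sqlite3

# one physical database, logically partitioned by owning service
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders_orders (id INTEGER, total INTEGER)")    # owned by orders
db.execute("CREATE TABLE billing_invoices (id INTEGER, paid INTEGER)")  # owned by billing

def orders_get_total(order_id):
    # the ONLY supported way for other services to read order data;
    # the table behind it can change without breaking anyone
    row = db.execute(
        "SELECT total FROM orders_orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row[0]

db.execute("INSERT INTO orders_orders VALUES (1, 4200)")
# billing calls the orders service instead of querying orders_orders itself
assert orders_get_total(1) == 4200
```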
|
# ? Dec 20, 2016 12:44 |
|
Microservices: function call ABIs over the network, AKA SOA 2.0. In the worst case you get the worst of networked services (bad ops is common when you're heavy on devs or have little investment in making prod work well) and the worst of tightly coupled systems; people are hoping for the best case, where the network is performant or at least a known variable and each component is loosely coupled from the others while still being easy to maintain. The "never" rules always have exceptions, but over the past few years it's been really, really rare to see a greenfield project succeed going microservices-first, and the few stories I've read where the organization concluded that an iterative approach to system architecture would have failed are places where the software was already super simple to decouple and decompose, or places moving so fast that they're going to run into problems unrelated to microservice early-adoption pains that could tank the project anyway. My take is that if you have the discipline to succeed at microservices, you probably had the discipline to properly modularize your code and operate it effectively. Refactoring microservices, on the other hand (services A and B split into C and D, taking parts of A and B with them), is a subject I'm morbidly curious about, because I suspect that's the real spaghetti-code problem of a microservice architecture: you'll have a lot of services in production at the same time that may compete with services taking on newer roles. Netflix handles this with instrumentation that tells you who is still calling your legacy service so you can go yell at them to stop using your endpoint, but there's gotta be a better way to do this without O(m * n^2) meetings. This approach is common in enterprise situations, possibly one of the top reasons enterprise software is awful, and nobody sane wants to program like they're in an enterprise.
|
# ? Dec 20, 2016 15:24 |
|
necrobobsledder posted:My take is that if you have the discipline to succeed at microservices, you probably had the discipline to properly modularize your code and operate it effectively. Exactly. Just last week I advised a company to think about modularizing their application as if they were using microservices, but to not use microservices. I'm sure that microservices can be very useful when you're dealing with scaling challenges like Netflix's or Spotify's, but for most other companies, the best thing you can take away from it all is that shared databases are bad.
|
# ? Dec 20, 2016 15:47 |
|
Hey guys, I've got a great idea. Let's build a well-modularized monolith, and then hire new "architects" that decide to force a split into microservices for purely theoretical future value while simultaneously moving to an entirely new, untested alpha-level deployment platform, re-write all our tests in a different test framework, then act really surprised when (as the engineering teams said 6 months ago), there's no way to meet deliverable deadlines and it ends up costing way more than the monolith while being poo poo at performance and stability. Oh, and the monolith? Let's stop any new feature development there, but still actively support it and apply bug fixes. When everyone starts realizing this was perhaps not such a great idea, chalk it up to a learning experience and start the process again but with a different deployment platform
|
# ? Dec 20, 2016 16:34 |
|
A very tangible benefit from microservices for many companies is that they can use geographically distant teams to develop different services. The teams are also focused on doing their part, and don't need to care quite as much about the domain and technology choices of others. Makes everyone's job much easier organizationally. That, as well as the scaling, hot deployments, etc.
|
# ? Dec 20, 2016 16:59 |
|
No-sharing of databases has long been an anti-pattern (and therefore a pattern) in enterprise space, where you have 10+ different products by different vendors all using different versions of databases, even from the same company, and the integration points (read: microservices calling each other) are impossible, resulting in an O(n^2) number of functions to maintain (read: integration test) while your actual time to do maintenance is O(1), and the industry push by sales teams is to grow n polynomially to meet myopic growth quotas (growing k is more common here, thankfully). It's not cost-effective whatsoever to build out literally 50+ database instances where each service is running maybe 2 queries / second, so most projects I've been on had shared databases where different services had different tables (there are tablespaces and views to do the job too). Eventually, though, you have to stop bludgeoning that poor database, cache effectively, and talk to each other for data, incurring the latency penalties locally instead of making your database(s) slower and hurting everyone. It's premature optimization to give everyone a special snowflake database unless you are absolutely sure sharing is not going to work (like what I want: 1M+ sub-4KB writes / second with latency-sensitive bulk, streamed reads). But really, a third of the places doing microservices (rather than just... SOA) are under direction to expect "hypergrowth" and build accordingly, because if they don't they're probably going to fold. Another third are engineers who want to get something on their resume that's cool for their next job and are loving around. The remainder are enterprise bureaucracies that are following another silver-bullet technical solution to social problems (see also: "devops").
|
# ? Dec 20, 2016 17:05 |
|
necrobobsledder posted:It's not cost-effective whatsoever to build out literally 50+ database instances Why not?
|
# ? Dec 21, 2016 08:38 |
|
We could all just program in Elixir and leverage OTP from now on and worry less about how to implement microservices.
|
# ? Dec 21, 2016 14:42 |
|
One True Pairing?
|
# ? Dec 21, 2016 14:43 |
|
HardDiskD posted:One True Pairing? All my apps are powered by fanfiction.
|
# ? Dec 21, 2016 14:46 |
|
Not sure about the fan, but my code sure is fiction.
|
# ? Dec 21, 2016 14:48 |
|
necrobobsledder posted:No-sharing of databases has long been an anti-pattern (and therefore a pattern) in enterprise space where you have 10+ different products by different vendors all using different versions of databases even from the same company, and the integration points (read: microservices calling each other) are impossible resulting in an O(n^2) number of functions to maintain (read: integration test) while your actual time to do maintenance is O(1) and the industry push by sales teams is to grow n polynomially to meet myopic growth quotas (growing k is more common here, thankfully). If you are using commercial databases like Oracle, good luck paying that bill, or getting someone high up to pay it.
|
# ? Dec 21, 2016 15:46 |
|
HFX posted:If you are using commercial databases like Oracle, good luck paying that bill, or getting someone high up to pay it. e: and this, of course, assumes that all of your data belongs in Oracle in the first place. Not overloading your most expensive database instances with data that doesn't need to be there is a good way to cut costs, not increase them. Vulture Culture fucked around with this message at 17:17 on Dec 21, 2016 |
# ? Dec 21, 2016 17:11 |
|
Docjowles posted:Nagios event handler to restart service on any problem. Next! I've seriously seen this deployed to production.
|
# ? Dec 21, 2016 17:29 |
|
Vulture Culture posted:You don't need separate RAC instances to keep your data logically separate and use authentication/ACLs to enforce that separation. On the other hand, cross-database joins are expensive, so it all has to be looked at holistically in the context of your overall ETL/BI strategy. When dealing with enterprise architects, everything belongs in Oracle. You should see the hoops I had to jump through to get Solr allowed (with the backing of 3 application architects).
|
# ? Dec 21, 2016 18:04 |
|
Is it too much to ask designers to read about and understand the limitations of the front-end CSS framework they want to use and not submit designs that are wonky as gently caress to pull off in said framework? why is the button all the way over there now aaargh
|
# ? Dec 21, 2016 20:17 |
|
necrobobsledder posted:No-sharing of databases has long been an anti-pattern (and therefore a pattern) in enterprise space where you have 10+ different products by different vendors all using different versions of databases even from the same company, and the integration points (read: microservices calling each other) are impossible resulting in an O(n^2) number of functions to maintain (read: integration test) while your actual time to do maintenance is O(1) and the industry push by sales teams is to grow n polynomially to meet myopic growth quotas (growing k is more common here, thankfully). so it sounds like you're hitting a slightly different use case (managing contracted-out/bought software), but how much does it cost your business when that single database goes out? single points of failure w/in infrastructure or w/in process should be avoided when the cost of global failure is greater than the cost of redundancy. i would also say writing services to front data stores is cool and good because now you have a coupling to the service interface, not the underlying infrastructure. this lets you grow/change as necessary over time, but it still requires discipline because you still have a contract w/ your consumers.
|
# ? Dec 21, 2016 20:30 |
|
Pollyanna posted:why is the button all the way over there now aaargh Boy do I have a mug for you!
|
# ? Dec 22, 2016 02:54 |
|
HFX posted:When dealing with enterprise architects, everything belongs in Oracle. You should see the hoops I had to jump through to get Solr allowed (with the backing of 3 application architects).
|
# ? Dec 22, 2016 03:29 |
|
FamDav posted:so it sounds like you're hitting a slightly different use case (managing contracted-out/bought software), but how much does it cost your business when that single database goes out? single points of failure w/in infrastructure or w/in process should be avoided when the cost of global failure is greater than the cost of redundancy. i would also say writing services to front data stores is cool and good because now you have a coupling to the service interface, not the underlying infrastructure. this lets you grow/change as necessary over time, but it still requires discipline because you still have a contract w/ your consumers. Yes, any service interface in a non-greenfield environment requires incredible commitment to old API versions.
|
# ? Dec 22, 2016 03:34 |
|
Saw someone mention a while back that you'd better have memorized Enterprise Integration Patterns if you want to successfully implement a microservice architecture. Great book, but very, very non-trivial stuff. Pollyanna posted:Is it too much to ask designers to read about and understand the limitations of the front-end CSS framework they want to use and not submit designs that are wonky as gently caress to pull off in said framework? Every loving design I get has zero consideration for difficulty of implementation. They don't want to be "restricted".
|
# ? Dec 22, 2016 08:53 |
|
Iverron posted:Saw someone mention a while back that you'd better have memorized Enterprise Integration Patterns if you want to successfully implement a microservice architecture. Great book, but very very non-trivial stuff. Lol do you also get ones that don't read well, like light grey on white? I do. UI work is the worst, but at least Unity gives me a WYSIWYG that lets me click on things and move them over a pixel. Wish my projects were large enough for a dedicated UI person.
|
# ? Dec 22, 2016 13:30 |
|
One real problem with UI design is that somebody with design skills probably lacks technical skills while somebody with technical skills probably lacks design skills. People with both are pretty hard to come by.
|
# ? Dec 23, 2016 03:20 |