leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

rt4 posted:

What's the deal with microservices? Sounds like a good way to end up with an incoherent, fragmented codebase

they can be good if the alternative is a 10MM LOC monstrosity with nothing decoupled.

basically, extremes are bad and something in the middle is probably good

lifg
Dec 4, 2000
<this tag left blank>
Muldoon
You start with an incoherent monolithic codebase, and isolate each piece. Now you have microservices. Declare victory.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

rt4 posted:

What's the deal with microservices? Sounds like a good way to end up with an incoherent, fragmented codebase

If done well, you end up with small, independently deployable, independently versionable services that communicate via a clearly defined API that is easy to isolate for unit testing.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
In practice most people seem to end up with a bunch of tightly coupled services that have to all be updated in lockstep.

Bongo Bill
Jan 17, 2012

Programming is hard.

KoRMaK
Jul 31, 2012



Plorkyeran posted:

In practice most people seem to end up with a bunch of tightly coupled services that have to all be updated in lockstep.
I've gotten a lot better about this, and have pushed the ideology out to my team. Our code is far less rigid and fragile now because of it. I owe a lot of that to Sandi Metz and her youtube talks and vids. Thanks Sandi. Thandi!

Pollyanna
Mar 5, 2005

Milk's on them.


KoRMaK posted:

I've gotten a lot better about this, and have pushed the ideology out to my team. Our code is far less rigid and fragile now because of it. I owe a lot of that to Sandi Metz and her youtube talks and vids. Thanks Sandi. Thandi!

I really should learn about decoupling and all that, since I like the idea of microservices but I always get tripped up on where the lines should be drawn. Might as well, I haven't watched a Sandi talk in a while anyway :v:

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





microservices are great if you follow three rules:

1. the scope of a service is whatever has direct access to whatever state it encompasses. if you have a sql db anything that directly queries that db is a single service and should be treated as such. if you have multiple sources of state (shards or sql + some sort of cache or whatever) that are tightly coupled those are the same service too

2. publish apis (even if it's informally) and never ever break backwards compatibility if you can avoid it. if you have to break backwards compatibility it's probably better to write a new service rather than update the old one

3. share application models. if you have a domain object (like a transaction for an ecommerce platform or a post for a content platform) you should have all services sharing the model. this makes it way easier to write clients and servers. gRPC + protobufs or thrift or something similar can help here
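
for #3, the shape of it is roughly this, a minimal sketch assuming you've already run protoc over some shared transaction.proto (the module and field names are made up, not from any real project):

code:
# Every service imports the exact same generated model module.
from shared_models import transaction_pb2

def build_transaction(order_id, amount_cents):
    # The ecommerce service and the reporting service construct and parse
    # the same message type, so wire compatibility is automatic.
    txn = transaction_pb2.Transaction()
    txn.order_id = order_id
    txn.amount_cents = amount_cents
    return txn.SerializeToString()

def parse_transaction(raw_bytes):
    txn = transaction_pb2.Transaction()
    txn.ParseFromString(raw_bytes)
    return txn
writing a new client is then mostly just importing that module and calling the published api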

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

the talent deficit posted:

microservices are great if you follow three rules:

1. the scope of a service is whatever has direct access to whatever state it encompasses. if you have a sql db anything that directly queries that db is a single service and should be treated as such. if you have multiple sources of state (shards or sql + some sort of cache or whatever) that are tightly coupled those are the same service too

2. publish apis (even if it's informally) and never ever break backwards compatibility if you can avoid it. if you have to break backwards compatibility it's probably better to write a new service rather than update the old one

3. share application models. if you have a domain object (like a transaction for an ecommerce platform or a post for a content platform) you should have all services sharing the model. this makes it way easier to write clients and servers. gRPC + protobufs or thrift or something similar can help here

Versioned APIs are nice for #2, but in practice you'll never be able to shut down your old API. It does at least help keep people from building against deprecated things going forward.
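
The boring path-prefix version of that looks about like this; Flask here is just a sketch, and the routes and payloads are invented:

code:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/orders/<int:order_id>")
def get_order_v1(order_id):
    # Old contract: flat shape, never changes, stays up for the stragglers.
    return jsonify({"id": order_id, "total": 1999})

@app.route("/v2/orders/<int:order_id>")
def get_order_v2(order_id):
    # New contract: totals broken out; v1 clients are untouched.
    return jsonify({"id": order_id, "total": {"amount": 1999, "currency": "USD"}})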

BabyFur Denny
Mar 18, 2003
Use kafka as a central data hub, enforce protobuf as a message format and publish all your data in kafka. microservices for free.
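
Which, as a rough sketch with kafka-python and a hypothetical generated events_pb2 module (topic name and broker address are placeholders):

code:
from kafka import KafkaProducer
from shared_models import events_pb2  # hypothetical protoc output shared by all services

producer = KafkaProducer(bootstrap_servers="localhost:9092")

def publish_order_created(order_id):
    event = events_pb2.OrderCreated()
    event.order_id = order_id
    # Anything that cares about orders just consumes this topic.
    producer.send("orders.events", value=event.SerializeToString())
    producer.flush()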

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

BabyFur Denny posted:

Use kafka as a central data hub, enforce protobuf as a message format and publish all your data in kafka. microservices for free.
Except for, you know, actually operating them. Ops problem now!

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

microservices are great if you follow three rules:

1. the scope of a service is whatever has direct access to whatever state it encompasses. if you have a sql db anything that directly queries that db is a single service and should be treated as such. if you have multiple sources of state (shards or sql + some sort of cache or whatever) that are tightly coupled those are the same service too

2. publish apis (even if it's informally) and never ever break backwards compatibility if you can avoid it. if you have to break backwards compatibility it's probably better to write a new service rather than update the old one

3. share application models. if you have a domain object (like a transaction for an ecommerce platform or a post for a content platform) you should have all services sharing the model. this makes it way easier to write clients and servers. gRPC + protobufs or thrift or something similar can help here
There are a few important rules around operability too.

1. Tracing requests is still really important, but it's much harder in a microservice architecture. Latency aside, your chances of suffering a partial failure are several orders of magnitude higher than in a setup where your calls never leave the machine. If you can, use a distributed tracing framework like Zipkin. If you can't, make sure your requests are tagged at the point where they enter the system so you can filter them sensibly in whatever log aggregator you're using (there's a rough sketch of the tagging part at the end of this post).
1a. Good lord, sync your server times.

2. Performance is far less predictable in a distributed system than in the same layout of components talking over local sockets. Instrument appropriately. The upside is that while it's far more work to instrument, and far harder to make sense of the results, each step of that instrumentation is far easier to observe from an external monitoring component. Your explicit contracts ensure that the inner services are not a black box.
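
For the request-tagging part of rule 1, the cheap version is just: mint an ID at the edge, log it, and forward it downstream. Rough sketch only; Flask and the internal URL are stand-ins, not anything real:

code:
import logging
import uuid

import requests
from flask import Flask, g, request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edge")
app = Flask(__name__)

@app.before_request
def tag_request():
    # Honor an upstream ID if one exists, otherwise mint one here.
    g.request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))

@app.route("/checkout")
def checkout():
    log.info("checkout started request_id=%s", g.request_id)
    # Forward the same ID so the downstream service logs it too.
    resp = requests.get(
        "http://payments.internal/charge",
        headers={"X-Request-ID": g.request_id},
        timeout=2,
    )
    return resp.text, resp.status_code
Grep one ID across every service's logs and you get a poor man's trace.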

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Plorkyeran posted:

In practice most people seem to end up with a bunch of tightly coupled services that have to all be updated in lockstep.
And this isn't always a bad thing. It's how Google maintains nearly all of their internal projects, with the exception of Android and Chromium/Chrome.

Like anything else, you need to consider the business requirements of what you're pushing out and the nature of what you're building. If all your components are capable of enough backward compatibility to facilitate zero-downtime deploys, and you're operating with a small enough team that you have adequate visibility into breaking changes, you probably have very little reason to go with a full-on microservice architecture to get the benefits you're looking for. A healthy path forward is to build components with sensible boundaries -- standard engineering practice for the last several decades -- so that turning one into a microservice (or a smaller monolithic service, if that makes more sense for your sub-project) later just involves shimming a network interface onto it.

One of the problems with microservices is that it can be very hard to build a minimum viable product with them, depending on your project/developer boundaries. Setting hard contracts too early can seriously impinge on your ability to iterate quickly, especially if your change has to touch a lot of different components. They're great when you have reasonably final API requirements, but that's not good for exploratory work. It can make a lot more sense to keep everything in a monolithic repository/construct so that, for example, code reviews can comfortably span projects and preserve the correct context. But this approach can be dangerous if the team lacks the discipline to keep things reasonably orthogonal so they can be separated out later.
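
The "shim a network interface onto it later" part, as a toy sketch with invented names: the logic sits behind a plain in-process interface, and the microservice version is a thin transport wrapper bolted on later.

code:
from flask import Flask, jsonify

class InventoryService:
    """Plain component with an explicit boundary; no transport anywhere."""

    def __init__(self, repo):
        self._repo = repo  # any object with get_stock(sku) -> int

    def in_stock(self, sku):
        return self._repo.get_stock(sku) > 0

def make_app(service):
    # The later "microservice": same object, wrapped in HTTP.
    # Flask is used purely as an example transport.
    app = Flask(__name__)

    @app.route("/in-stock/<sku>")
    def in_stock(sku):
        return jsonify({"sku": sku, "in_stock": service.in_stock(sku)})

    return app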

Vulture Culture fucked around with this message at 03:13 on Dec 20, 2016

FlapYoJacks
Feb 12, 2009

Bongo Bill posted:

Programming is hard.

Programming is easy.
Programming WELL is hard.

KoRMaK
Jul 31, 2012



How big of a loving system do I have to have to start getting into caring about distributed logging??? :eyepop:

Something the size of a loving star destroyer?

Docjowles
Apr 9, 2009

Vulture Culture posted:

Except for, you know, actually operating them. Ops problem now!

Nagios event handler to restart service on any problem. Next!

(This is not a serious reply)

Roadie
Jun 30, 2013

KoRMaK posted:

How big of a loving system do I have to have to start getting into caring about distributed logging??? :eyepop:

Something the size of a loving star destroyer?

Distributed logging is great on systems of all sizes, especially if you don't have casual access into the guts of the running system (containers, Heroku, etc).

You just, you know, use one of the ten million packages that already exist for doing it, or use a cloud service like Papertrail (after making sure your logs are scrubbed of credentials), or whatever else is sane, instead of rolling your own.
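
If the app is using stdlib logging, pointing it at a hosted aggregator is about this much work (hostname and port below are placeholders, not real credentials):

code:
import logging
from logging.handlers import SysLogHandler

# Ship logs to a remote syslog endpoint (Papertrail-style) instead of local files.
handler = SysLogHandler(address=("logsN.papertrailapp.com", 12345))
handler.setFormatter(logging.Formatter("%(asctime)s myapp: %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("worker started")  # lands in the aggregator, not just the container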

pigdog
Apr 23, 2004

by Smythe

rt4 posted:

What's the deal with microservices? Sounds like a good way to end up with an incoherent, fragmented codebase

That's not necessarily a negative. A major benefit of decoupled microservices is that the "ownership" of a service belongs to a particular team, who can use the technologies most suitable for the task, not just the ones that every dev in the organization is familiar with.

Just remember the rules:

1. Each service takes care of its own persistence - no sharing allowed. Physically you might share a single db or schema between them, but NO looking at or touching other services' data. That would be integration by database, which defeats decoupling. (There's a sketch of what this looks like after these rules.)

2. Never start with microservices from the ground up. Build a monolith first, then separate out parts as needed and sensible. Practice good modular programming and you can have "microservices" within your application.
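
For rule 1, "no touching other services' data" in practice means the billing code asks the orders service over its published API instead of querying its tables. The URL and field names here are made up:

code:
import requests

def get_order_total(order_id):
    # NOT: SELECT total FROM orders.orders WHERE id = ...
    resp = requests.get(f"http://orders.internal/v1/orders/{order_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()["total"]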

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Microservices: function call ABIs over the network, AKA SOA 2.0. In the worst case, you get the worst of networked services (bad ops is common when you're heavy on devs or have little investment in making prod work well) and the worst of tightly coupled systems; what people are hoping for is the best case, where the network is performant or at least a known variable and each component is loosely coupled from the others while still being easy to maintain.

The "never" rules always have exceptions but it's really, really rare to see a greenfield project succeed using microservices-first over the past few years and the few stories I've read where the organization came to the conclusion that the iterative approach of system architecture would have resulted in failure are places where the software is super simple to decouple and decompose already or are moving so fast that they're going to be running into problems unrelated to microservice early adoption problems that could tank the project anyway. My take is that if you have the discipline to succeed at microservices, you probably had the discipline to properly modularize your code and operate it effectively.

Refactoring microservices, on the other hand (services A and B split into C and D, taking parts of A and B with them), is a subject I'm morbidly curious about, because I suspect that's the real spaghetti-code problem of a microservice architecture: you'll have a lot of services in production at the same time that may compete with services taking on newer roles. Netflix handles this with instrumentation that tells you who is still calling your legacy service so you can go yell at them to stop using your endpoint, but there's gotta be a better way to do this without O(m * n²) meetings. That approach is common in enterprise situations, it's possibly one of the top reasons enterprise software is awful, and nobody sane wants to program like they're in an enterprise.

Messyass
Dec 23, 2003

necrobobsledder posted:

My take is that if you have the discipline to succeed at microservices, you probably had the discipline to properly modularize your code and operate it effectively.

Exactly. Just last week I advised a company to think about modularizing their application as if they were using microservices, but to not use microservices.

I'm sure that microservices can be very useful when you're dealing with the scaling challenges that Netflix or Spotify have to deal with, but for most other companies, the best thing you can take away from it all is that shared databases are bad.

baquerd
Jul 2, 2007

by FactsAreUseless
Hey guys, I've got a great idea. Let's build a well-modularized monolith, and then hire new "architects" that decide to force a split into microservices for purely theoretical future value while simultaneously moving to an entirely new, untested alpha-level deployment platform, re-write all our tests in a different test framework, then act really surprised when (as the engineering teams said 6 months ago), there's no way to meet deliverable deadlines and it ends up costing way more than the monolith while being poo poo at performance and stability. Oh, and the monolith? Let's stop any new feature development there, but still actively support it and apply bug fixes. When everyone starts realizing this was perhaps not such a great idea, chalk it up to a learning experience and start the process again but with a different deployment platform :suicide:

pigdog
Apr 23, 2004

by Smythe
A very tangible benefit from microservices for many companies is that they can use geographically distant teams to develop different services. The teams are also focused on doing their part, and don't need to care quite as much about the domain and technology choices of others. Makes everyone's job much easier organizationally. That, as well as the scaling, hot deployments, etc.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Not sharing databases has long been an anti-pattern (and therefore a pattern) in the enterprise space, where you have 10+ different products by different vendors all using different versions of databases, even from the same company, and the integration points (read: microservices calling each other) are impossible, resulting in an O(n²) number of functions to maintain (read: integration test) while your actual time to do maintenance is O(1), and the industry push by sales teams is to grow n polynomially to meet myopic growth quotas (growing k is more common though, thankfully).

It's not cost-effective whatsoever to build out literally 50+ database instances where each service is running maybe 2 queries/second, so most projects I've been on had shared databases where different services had different tables (there are tablespaces and views to do the job too). Eventually, though, you have to stop bludgeoning that poor database, cache effectively, and talk to each other for data, incurring the latency penalties locally instead of making your database(s) slower for everyone. It's premature optimization to give everyone a special snowflake database unless you're absolutely sure a shared one isn't going to work (like what I want, with 1M+ sub-4KB writes/second plus latency-sensitive bulk, streamed reads).

But really, a third of the places doing microservices (rather than just... SOA) are under direction to expect "hypergrowth" and build accordingly because if they don't they're probably going to fold. Another third are engineers that want to get something on their resume that's cool for their next job and are loving around. The remainder are enterprise bureaucracies that are following another silver bullet technical solution to social problems (see also: "devops").

return0
Apr 11, 2007

necrobobsledder posted:

It's not cost-effective whatsoever to build out literally 50+ database instances

Why not?

Pollyanna
Mar 5, 2005

Milk's on them.


We could all just program in Elixir and leverage OTP from now on and worry less about how to implement microservices.

Space Kablooey
May 6, 2009


One True Pairing?

Pollyanna
Mar 5, 2005

Milk's on them.


HardDiskD posted:

One True Pairing?

All my apps are powered by fanfiction.

Space Kablooey
May 6, 2009


Not sure about the fan, but my code sure is fiction.

HFX
Nov 29, 2004

necrobobsledder posted:

Not sharing databases has long been an anti-pattern (and therefore a pattern) in the enterprise space, where you have 10+ different products by different vendors all using different versions of databases, even from the same company, and the integration points (read: microservices calling each other) are impossible, resulting in an O(n²) number of functions to maintain (read: integration test) while your actual time to do maintenance is O(1), and the industry push by sales teams is to grow n polynomially to meet myopic growth quotas (growing k is more common though, thankfully).

It's not cost-effective whatsoever to build out literally 50+ database instances where each service is running maybe 2 queries/second, so most projects I've been on had shared databases where different services had different tables (there are tablespaces and views to do the job too). Eventually, though, you have to stop bludgeoning that poor database, cache effectively, and talk to each other for data, incurring the latency penalties locally instead of making your database(s) slower for everyone. It's premature optimization to give everyone a special snowflake database unless you're absolutely sure a shared one isn't going to work (like what I want, with 1M+ sub-4KB writes/second plus latency-sensitive bulk, streamed reads).

But really, a third of the places doing microservices (rather than just... SOA) are under direction to expect "hypergrowth" and build accordingly because if they don't they're probably going to fold. Another third are engineers that want to get something on their resume that's cool for their next job and are loving around. The remainder are enterprise bureaucracies that are following another silver bullet technical solution to social problems (see also: "devops").

If you are using commercial databases like Oracle, good luck paying that bill, or getting someone high up to pay it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

HFX posted:

If you are using commercial databases like Oracle, good luck paying that bill, or getting someone high up to pay it.
You don't need separate RAC instances to keep your data logically separate; you can use authentication/ACLs to enforce that separation within a shared instance. On the other hand, cross-database joins are expensive, so it all has to be looked at holistically in the context of your overall ETL/BI strategy.

e: and this, of course, assumes that all of your data belongs in Oracle in the first place. Not overloading your most expensive database instances with data that doesn't need to be there is a good way to cut costs, not increase them.

Vulture Culture fucked around with this message at 17:17 on Dec 21, 2016

OWLS!
Sep 17, 2009

by LITERALLY AN ADMIN

Docjowles posted:

Nagios event handler to restart service on any problem. Next!

(This is not a serious reply)

I've seriously seen this deployed to production.

HFX
Nov 29, 2004

Vulture Culture posted:

You don't need separate RAC instances to keep your data logically separate; you can use authentication/ACLs to enforce that separation within a shared instance. On the other hand, cross-database joins are expensive, so it all has to be looked at holistically in the context of your overall ETL/BI strategy.

e: and this, of course, assumes that all of your data belongs in Oracle in the first place. Not overloading your most expensive database instances with data that doesn't need to be there is a good way to cut costs, not increase them.

When dealing with enterprise architects, everything belongs in Oracle. You should see the hoops I had to jump through to get Solr allowed (with the backing of 3 application architects).

Pollyanna
Mar 5, 2005

Milk's on them.


Is it too much to ask designers to read about and understand the limitations of the front-end CSS framework they want to use and not submit designs that are wonky as gently caress to pull off in said framework? :shepicide:

why is the button all the way over there now aaargh

FamDav
Mar 29, 2008

necrobobsledder posted:

Not sharing databases has long been an anti-pattern (and therefore a pattern) in the enterprise space, where you have 10+ different products by different vendors all using different versions of databases, even from the same company, and the integration points (read: microservices calling each other) are impossible, resulting in an O(n²) number of functions to maintain (read: integration test) while your actual time to do maintenance is O(1), and the industry push by sales teams is to grow n polynomially to meet myopic growth quotas (growing k is more common though, thankfully).

It's not cost-effective whatsoever to build out literally 50+ database instances where each service is running maybe 2 queries/second, so most projects I've been on had shared databases where different services had different tables (there are tablespaces and views to do the job too). Eventually, though, you have to stop bludgeoning that poor database, cache effectively, and talk to each other for data, incurring the latency penalties locally instead of making your database(s) slower for everyone. It's premature optimization to give everyone a special snowflake database unless you're absolutely sure a shared one isn't going to work (like what I want, with 1M+ sub-4KB writes/second plus latency-sensitive bulk, streamed reads).

so it sounds like you're hitting a slightly different use case (managing contracted out/bought software) but how much is the cost to your business when that single database goes out? single points of failure w/in infrastructure or w/in process should be avoided when costs of global failure are greater than redundancy. i would also say writing services to front data stores is cool and good because now you have a coupling to the service interface, and not the underlying infrastructure. this lets you grow/change as necessary over time but still requires discipline because you still have a contract w/ your consumers.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Pollyanna posted:

why is the button all the way over there now aaargh

Boy do I have a mug for you!

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

HFX posted:

When dealing with enterprise architects, everything belongs in Oracle. You should see the hoops I had to jump through to get Solr allowed (with the backing of 3 application architects).
I feel your pain, and the pain felt by tens or hundreds of thousands of developers worldwide in the same situation, but this is a problem of the organization setting incompatible priorities (or protecting fragile egos), not an issue with microservices and separation of data privileges.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FamDav posted:

so it sounds like you're hitting a slightly different use case (managing contracted out/bought software) but how much is the cost to your business when that single database goes out? single points of failure w/in infrastructure or w/in process should be avoided when costs of global failure are greater than redundancy. i would also say writing services to front data stores is cool and good because now you have a coupling to the service interface, and not the underlying infrastructure. this lets you grow/change as necessary over time but still requires discipline because you still have a contract w/ your consumers.
If your business process involves data flowing across a pipeline, it doesn't matter if the entire pipeline is broken or just a piece of it -- you're down. From that perspective, it often makes more sense to have fewer potential points of failure, provided that the number of eggs in a single basket doesn't cause the cluster to fall over under the load.

Yes, any service interfaces in non-greenfield environments require incredible commitment to old API versions.

Iverron
May 13, 2012

Saw someone mention a while back that you'd better have memorized Enterprise Integration Patterns if you want to successfully implement a microservice architecture. Great book, but very very non-trivial stuff.

Pollyanna posted:

Is it too much to ask designers to read about and understand the limitations of the front-end CSS framework they want to use and not submit designs that are wonky as gently caress to pull off in said framework? :shepicide:

why is the button all the way over there now aaargh

Every loving design I get has zero consideration for difficulty of implementation. They don't want to be "restricted".

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Iverron posted:

Saw someone mention a while back that you'd better have memorized Enterprise Integration Patterns if you want to successfully implement a microservice architecture. Great book, but very very non-trivial stuff.


Every loving design I get has zero consideration for difficulty of implementation. They don't want to be "restricted".

Lol do you also get ones that don't read well, like light grey on white? I do. UI work is the worst, but at least Unity gives me a WYSIWYG that lets me click on things and move them over a pixel. Wish my projects were large enough for a dedicated UI person.

ToxicSlurpee
Nov 5, 2003

-=SEND HELP=-


Pillbug
One real problem with UI design is that somebody with design skills probably lacks technical skills while somebody with technical skills probably lacks design skills. People with both are pretty hard to come by.
