12 rats tied together
Sep 7, 2006

Vulture Culture posted:

Yeah, that's one trivial case. Most of the ones in the real world rely on a multitude of resources, like "we don't permit use of the account-default encryption key, so create a new CMK for this resource if none was provided", or "ensure this S3 bucket writes access logs to the standard location for the account", or "create a new backup vault for this file system instead of using the account-default one that every tenant shares".
I assert that these things don't live in terraform modules because module usage isn't enforceable or auditable. The terraform runs as a principal that either has permission or doesn't. If it has permission, it can choose to use or not use the module, so any effort put into baking governance into the module is wasted.

If it's critical that the account-default encryption key is not used, there are other services in AWS that are better equipped to evaluate/enforce this, which should be deployed by their own team, using their own terraform, not injected into every other workspace that exists.

In the simplest case you can run e.g. Sentinel to provide guidance in the CI phase, but that's not governance, and guidance shouldn't result in remote teams making authoritative calls about the structure of everyone's terraform state.

Vulture Culture posted:

Root state is fine if you have few Terraform authors and few important governance rules to implement consistently. It doesn't necessarily scale [...]
Maybe a terminology problem on my end here: root state always exists, it's the state from which modules are invoked. I think TFE calls this "the workspace root", but that means something else in terraform CLI, so I don't use that phrasing.

Vulture Culture posted:

Import-as-code is something Terraform definitely needs in order to work in the kinds of GitOps workflows people imagine Terraform would actually be good at, but I'm not connecting it to this problem. Say more?
I'm asserting that moved{} is useless without imported{}, and that I disagree with you that 1.3's inclusion of this feature makes module sprawl more manageable, or module-heavy architectures more desirable.
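For reference, the moved block being discussed is config-only refactoring; a minimal sketch with hypothetical addresses (there is no config-side equivalent for imports, which still have to happen out-of-band via the terraform import CLI):

code:
# the bucket used to live at the top level:
#   resource "aws_s3_bucket" "logs" { ... }
# after wrapping it in a module, a moved block tells terraform it is the same
# object rather than a destroy-and-recreate:
moved {
  from = aws_s3_bucket.logs
  to   = module.logging.aws_s3_bucket.logs
}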

The Fool
Oct 16, 2003


Actual governance and policy should be done outside of terraform using whatever native tools your service provides. I come from Azure and we use Azure Policy heavily for enforcement. Sometimes those tools can be managed by terraform/iac but that should be a completely separate process from your normal infrastructure.
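A minimal sketch of what that can look like if the policy objects themselves are managed as code, kept in their own separate workspace as described above; the resource names and the example rule are hypothetical, and it assumes the azurerm provider's azurerm_policy_definition / azurerm_subscription_policy_assignment resources:

code:
# standalone "governance" workspace, not mixed into application infrastructure
data "azurerm_subscription" "current" {}

resource "azurerm_policy_definition" "deny_public_ip" {
  name         = "deny-public-ip"   # hypothetical
  policy_type  = "Custom"
  mode         = "All"
  display_name = "Deny public IP addresses"

  policy_rule = jsonencode({
    if = {
      field  = "type"
      equals = "Microsoft.Network/publicIPAddresses"
    }
    then = {
      effect = "deny"
    }
  })
}

resource "azurerm_subscription_policy_assignment" "deny_public_ip" {
  name                 = "deny-public-ip"
  policy_definition_id = azurerm_policy_definition.deny_public_ip.id
  subscription_id      = data.azurerm_subscription.current.id
}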

The Fool
Oct 16, 2003


FISHMANPET posted:

Well, success. That was, all-in-all, much easier than I thought it would be. Each "pipeline" is defined via 8 values in a local map, and my actual resource definition is only about 60 lines of code.

I'll have to do a little work because of an edge case I just discovered with provider weirdness, but this is all pretty slick.

Is there a way to force terraform to verify that its current stored state actually aligns with the state of the actual objects? I know it should be doing that, but because of provider weirdness things got out of sync.

Basically, I used the same group for two different items, and then removed one of the items. So it removed the group definition entirely, but unfortunately, it doesn't know that, and so it removed the access permissions I set, and a group membership I set. I'm going to work around this in a way that should prevent it from happening entirely, but I'm still kind of curious if there's a way to force terraform to sync its state.

Unless you were modifying something that terraform did not know about, this is literally what terraform does by default.

Terraform stores an internal state representing what it thinks things should be like. When you run terraform plan, it evaluates your configuration, the internal state, and what is actually deployed, and calculates the changes needed for reality to match your configuration.

However, it can only do this for things that it knows about.
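To answer the "force a sync" part directly: refresh-only mode exists for exactly this in recent Terraform (0.15.4+); older versions only have the standalone refresh command.

code:
# show drift between the state file and real infrastructure, without proposing config changes
terraform plan -refresh-only

# accept what it found and update only the state file
terraform apply -refresh-only

# older one-shot equivalent, still available
terraform refresh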

The Fool
Oct 16, 2003


I've never used the ADO provider, but for AAD, groups and group members are separate resources, so you can modify the membership of a group without terraform knowing or caring about it.
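Concretely, with the azuread provider that split looks something like this (names and the member id are hypothetical); terraform only tracks the memberships it created, so anything added outside of it is invisible:

code:
resource "azuread_group" "docs_readers" {
  display_name     = "docs-site-readers"   # hypothetical
  security_enabled = true
}

# membership is its own resource, separate from the group itself
resource "azuread_group_member" "alice" {
  group_object_id  = azuread_group.docs_readers.object_id
  member_object_id = "00000000-0000-0000-0000-000000000000"   # hypothetical user object id
}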

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Governance needs both guardrails and UX on top of policy/GxP guidance. You can't just write OPA or Sentinel policies to say "this is hosed, you figure it out" on every run (unless you're a security consultant).

Vulture Culture fucked around with this message at 21:44 on Feb 21, 2023

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Like I said, it's... provider weirdness.

You can create an azuredevops_group resource, which can be either a group internal to Azure DevOps, or a reference to an Azure AD group. Each element in my local map had a reference to a particular Azure AD group. When I created it, terraform saw it as two distinct group resources, but in Azure DevOps it was only a single group resource. In Terraform I imagine they had the same id, but terraform saw them as two distinct objects.

If I remove one of my items from the local map, terraform sees that there's something in state that should no longer be there - the second reference to that group. So it deletes the group. As far as it knows, nothing has happened to the group reference in the first element. So it's happily stored the descriptor for that reference in the state for the first element. But it no longer maps to anything that actually exists in Azure DevOps.

And the provider is... not smart enough to realize that. So I guess that's really a provider bug that it's not actually syncing state the way terraform should be.

12 rats tied together
Sep 7, 2006

Vulture Culture posted:

Governance needs both guardrails and UX on top of policy/GxP guidance. You can't just write OPA or Sentinel policies to say "this is hosed, you figure it out" on every run (unless you're a security consultant).

I don't disagree but I also don't think modules are the answer. "It will be fine because everyone will use the module" is so far out of my observed reality that it feels impossible to suggest except as a mean-spirited joke.

I've gone into my preferences a little bit ITT. The best answer here IMO is that the devops silo writes the terraform and knows of/is accountable for the security policies.

I have seen this model work well at scales of roughly 2 silo members : 100 developers, but obviously it's not as simple as a ratio in reality.

The Fool
Oct 16, 2003


12 rats tied together posted:

I don't disagree but I also don't think modules are the answer. "It will be fine because everyone will use the module" is so far out of my observed reality that it feels impossible to suggest except as a mean-spirited joke.

We do this and it is only possible because we spun up our cloud services as a greenfield and:

- We use TFE and have a sentinel policy that enforces the PMR
- We have 100% management buy-in from the VP level
- We have a dedicated team running support while maintaining and doing development on platform tooling

And even then it requires an insane amount of janitoring and if any of those 3 things stopped being true it would all fall apart in a day

Methanar
Sep 26, 2013

by the sex ghost
I put in a DC hardware purchase request for 60 000 cores worth of server today.

Methanar fucked around with this message at 08:03 on Feb 22, 2023

tortilla_chip
Jun 13, 2007

k-partite
So like 10 racks of ARM?

Docjowles
Apr 9, 2009

Nowadays I just click OK on big orders of RIs and savings plans and it is much less satisfying (for me. I'm sure Amazon is quite satisfied)

Warbird
May 23, 2012

America's Favorite Dumbass

I've been running up against something odd at work and I'm hoping someone here might have an idea. We have an ancient Jenkins host server that is being migrated away from, to a much more recent version. Due to reasons, the existing pool of agents is being lifted and shifted over to the new host with the usual list of tire kicking and canary builds to make things work. The only real change on the agent side was the inclusion of Java 11 (from 8) as the default, since that's required for the newer Jenkins host.

However, while they connect just fine and so on, we have some ant builds that are failing for reasons I can't deduce. Gut feeling was that the jobs have stuff hard coded to require Java 8, but the scripting is set up to provide overrides pointed at Java 8, and as much has been confirmed from some debugging.

Any notions of what might be the delta here? I can't see anything that would cause builds to stop working from how this is architected out. Hell, moving the agents back to the legacy host and kicking off the builds with the same parameters works as expected, even with the user still pointed at Java 11. It doesn't add up.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Possibly a red herring or absolutely the wrong path but I had some issues with Java 17's security model that requires me to set JVM properties that are probably way too insecure and embarrassing for me to post. Eh... we're going to burn this Jenkins server down anyway

code:
--add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.util.concurrent=ALL-UNNAMED
There are all sorts of possible configuration differences that didn't quite transfer over from your prior settings under Java 8, though, that may be completely unrelated to the Java version, such as weird scripts that rely upon certain path entries. You should take a look at the logs on the Jenkins controller and kick the log levels up to an obnoxious level to compare what's going on between the builds that succeed and the ones that don't. Welcome to my hell for the past year I guess, enjoy your stay.

Junkiebev
Jan 18, 2002


Feel the progress.

java was a mistake imho

Docjowles
Apr 9, 2009

Junkiebev posted:

java was a mistake imho

My eyes absolutely glaze over looking at Java code but I guess they did something right. Because despite all us hipsters out here trying to write in Go or Python or TypeScript or Rust or whatever, an ungodly percentage of the world runs on Java.

Maybe not in the open source world. But in ~the enterprise~, oh boy, it's Java or Microsoft poo poo (or COBOL lol for the truly critical systems if your company is an OG) all the way down

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Docjowles posted:

My eyes absolutely glaze over looking at Java code but I guess they did something right. Because despite all us hipsters out here trying to write in Go or Python or TypeScript or Rust or whatever, an ungodly percentage of the world runs on Java.

Maybe not in the open source world. But in ~the enterprise~, oh boy, it's Java or Microsoft poo poo (or COBOL lol for the truly critical systems if your company is an OG) all the way down

We use golang now actually.

12 rats tied together
Sep 7, 2006

java is good, fast, runs on the majority of OSes you're likely to use at work. it's one of the better languages

Methanar
Sep 26, 2013

by the sex ghost
I played with making a Java swing app and video game mods last summer as a form of self-harm.
After a few weeks I was mostly okay with it. But only because of how good intellij is.
The main difficulty with Java is learning all of the poo poo like Google Guice and Spring boot to work around the Java-isms.

Made me appreciate how simple and minimalistic go can be.

Maven is trash though I'm not going to be a Maven apologist.

Methanar fucked around with this message at 08:21 on Feb 23, 2023

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Methanar posted:

Maven is trash though I'm not going to be a Maven apologist.
Citation needed, because Maven owns.

Methanar
Sep 26, 2013

by the sex ghost

Sagacity posted:

Citation needed, because Maven owns.

My brain is too small for it

xzzy
Mar 5, 2009

Docjowles posted:

My eyes absolutely glaze over looking at Java code but I guess they did something right. Because despite all us hipsters out here trying to write in Go or Python or TypeScript or Rust or whatever, an ungodly percentage of the world runs on Java.

The only thing it did right was timing. It got momentum by dazzling c levels with the "write once run anywhere" buzzword in an era when the internet was exploding. Once it got embedded into web browsers our fate was sealed.

Sun had great penetration in colleges so it was easy to infect young brains with it too. Legions of kids graduated thinking Java was the square peg for every round hole.

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Methanar posted:

My brain is too small for it
No brainshaming in this thread!

But genuinely curious, because Maven is usually pretty straightforward: it's not a fully programmable mess like, say, Gradle.

Warbird
May 23, 2012

America's Favorite Dumbass

necrobobsledder posted:

Possibly a red herring or absolutely the wrong path but I had some issues with Java 17's security model that requires me to set JVM properties that are probably way too insecure and embarrassing for me to post. Eh... we're going to burn this Jenkins server down anyway

code:
--add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.util.concurrent=ALL-UNNAMED
There are all sorts of possible configuration differences that didn't quite transfer over from your prior settings under Java 8, though, that may be completely unrelated to the Java version, such as weird scripts that rely upon certain path entries. You should take a look at the logs on the Jenkins controller and kick the log levels up to an obnoxious level to compare what's going on between the builds that succeed and the ones that don't. Welcome to my hell for the past year I guess, enjoy your stay.

Cool. Great. Awesome. gently caress. I had managed to avoid Java up until now so this is double fun.

Chatting with the fellow who helped architect the solution makes it look like something in the node build portion is funky, so who even knows. I'm going to get a dump of env vars periodically through the process and see if a diff points something out. Failing that, off to verbosity land. This all isn't helped by the fact that most of the people here have been around for a decade or more and my rear end has been here all of a month. Government projects are weird.
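The env-dump-and-diff approach is about as simple as it sounds; something like this dropped into the job at the same step on each host (file names are whatever you pick):

code:
env | sort > "env-$(hostname).txt"
diff env-old-agent.txt env-new-agent.txt   # hypothetical names for the two captured files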

The Fool
Oct 16, 2003


xzzy posted:

The only thing it did right was timing. It got momentum by dazzling c levels with the "write once run anywhere" buzzword in an era when the internet was exploding. Once it got embedded into web browsers our fate was sealed.

i was about to do a java is not javascript post but then a bunch of repressed memories started flooding back and now I hate you

DkHelmet
Jul 10, 2001

I pity the foal...


Timely conversation- I'm standing up a Terraform practice. Where are some good references on avoiding (or at least hopefully minimizing) footguns and style/methods/tools for modern tf development? I've got a sprawl of old tf from 0.7 with various styles and nary a module in sight.

Any good books or standards?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

FISHMANPET posted:

Like I said, it's... provider weirdness.

You can create an azuredevops_group resource, which can be either a group internal to Azure DevOps, or a reference to an Azure AD group. Each element in my local map had a reference to a particular Azure AD group. When I created it, terraform saw it as two distinct group resources, but in Azure DevOps it was only a single group resource. In Terraform I imagine they had the same id, but terraform saw them as two distinct objects.

If I remove one of my items from the local map, terraform sees that there's something in state that should no longer be there - the second reference to that group. So it deletes the group. As far as it knows, nothing has happened to the group reference in the first element. So it's happily stored the descriptor for that reference in the state for the first element. But it no longer maps to anything that actually exists in Azure DevOps.

And the provider is... not smart enough to realize that. So I guess that's really a provider bug that it's not actually syncing state the way terraform should be.

Terraform is 100% the wrong tool for managing Azure devops. I can't even imagine why someone would want to try.

It seems like classic "everything is a nail" syndrome. Same for the kubernetes provider for terraform, or the people who wrap every single thing in ansible for no discernible reason.

New Yorp New Yorp fucked around with this message at 19:34 on Feb 23, 2023

The Fool
Oct 16, 2003


DkHelmet posted:

Timely conversation- I'm standing up a Terraform practice. Where are some good references on avoiding (or at least hopefully minimizing) footguns and style/methods/tools for modern tf development? I've got a sprawl of old tf from 0.7 with various styles and nary a module in sight.

Any good books or standards?

This isn't terrible: https://developer.hashicorp.com/terraform/cloud-docs/recommended-practices

Do you have management buy in to be able to enforce standards? It doesn't do any good if everyone ignores you.

I'm sure I'll think of more, but off the top of my head: use tflint, and if you decide to do modules, use a PMR.
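A starting point for the tflint config, using only the bundled ruleset so there's nothing to pin; provider-specific rulesets (azurerm, aws, ...) are separate plugins you add later, and the file name and rule choice here are just an example:

code:
# .tflint.hcl
plugin "terraform" {
  enabled = true
  preset  = "recommended"
}

# example of enabling an individual rule beyond the preset
rule "terraform_naming_convention" {
  enabled = true
}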

12 rats tied together
Sep 7, 2006

the documentation these days is good, so i would definitely give them a full read as well (they're not very long)

a piece of learned advice i would offer is to resist the temptation to make a module for as long as you can. when you create and invoke a module you're making a firm declaration about what shape your state has everywhere the module will be invoked from.

you're going to need to satisfy every required input, and you're going to need to be satisfied by every output, or else you'll gum the inputs and outputs up with conditions.

inlining tons of conditions into your modules is not, by itself, bad terraform, but it is step 1 on the "how to create unmaintainable garbage" checklist, so you can simply not do it!

resource for_each is the way to avoid tediously repeating yourself in almost all circumstances (sketch after this list) because:

- it doesn't hide logic or locals in module scope (which is hidden from you in `terraform console`)
- supports arbitrary mangling by using expressions as the input to the resource for_each
- can be empty or evaluate to empty, unlike modules, which require setting and handling optionals everywhere, and then imposing "ability to handle optional outputs" requirements on every root state you have
- is easier to refactor later because you don't need to excise or inject into module scope, you can just `state mv`, `state rm`, `import`, etc
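a minimal sketch of that shape, with a hypothetical resource type and naming, just to show the moving parts:

code:
locals {
  # shape the input however you like with plain expressions
  queues = {
    ingest  = { delay_seconds = 0 }
    retries = { delay_seconds = 30 }
  }
}

resource "aws_sqs_queue" "this" {
  for_each      = local.queues              # an empty map simply means zero resources
  name          = "app-${each.key}"         # hypothetical naming scheme
  delay_seconds = each.value.delay_seconds
}

# refactoring later is plain state surgery, no module scope to excise from, e.g.:
#   terraform state mv 'aws_sqs_queue.this["ingest"]' aws_sqs_queue.ingest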

12 rats tied together
Sep 7, 2006

New Yorp New Yorp posted:

Terraform is 100% the wrong tool for managing Azure devops. I can't even imagine why someone would want to try.

It seems like classic "everything is a nail" syndrome. Same for the kubernetes provider for terraform, or the people who wrap every single thing in ansible for no discernible reason.

The main reason to wrap all of your management crap in ansible is that OP would be able to tackle this problem in it without issue, but it's a way heavier lift and you have to read 3x as much documentation and probably some python too.

It's devops.txt to be like, I'm gonna learn this tool to expand my repertoire, and then immediately step on a rake. These tools aren't good; they make trade-offs and assumptions that are huge and foundational, and the only way to really get your head around them is to try and fail repeatedly.

It's noble to try and use one thing to manage all of your azure stuff. OP is not wrong to have this expectation. They've just come away from the exercise with a new understanding that terraform actually sucks, instead of a solution to their problem, which I guess is also devops.txt.

Docjowles
Apr 9, 2009

DkHelmet posted:

Timely conversation- I'm standing up a Terraform practice. Where are some good references on avoiding (or at least hopefully minimizing) footguns and style/methods/tools for modern tf development? I've got a sprawl of old tf from 0.7 with various styles and nary a module in sight.

Any good books or standards?

Just a couple random tips, but:

Do not let people run "terraform apply" from their local workstations. Put it into some sort of CI/CD pipeline which applies for you once whatever approvals you want to put in place pass (automated tests, manual code review, etc). We use Atlantis for this, which is free, and it's not perfect but not the worst? Hashicorp also offers Terraform Cloud/Enterprise if you have a budget. People will still manage to push out bad changes like this. But you're less likely to have assholes stomping on each others' changes as they gleefully "terraform apply" from a branch on their laptop that's 3 months behind main without looking at the plan at all.
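If you go the Atlantis route, the repo-level config stays small; a sketch (project name/dir are placeholders, and apply_requirements has to be allowed by the server-side config):

code:
# atlantis.yaml at the repo root
version: 3
projects:
  - name: network            # hypothetical
    dir: terraform/network   # hypothetical
    autoplan:
      when_modified: ["*.tf", "*.tfvars"]
      enabled: true
    apply_requirements: [approved]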

Terraform has a canonical style and running "terraform fmt -recursive" will "fix" all your files. So put that in your pipeline as well, and/or as a pre-commit hook. Here are some other hooks you may or may not find useful. Running these can spot simple mistakes early, and hopefully head off "NO YOU IDIOT THERE ARE SUPPOSED TO BE 2 SPACES NOT 4" shouting matches. Just tell everyone that the formatter is god and they don't get to hold up code reviews for this crap.
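The pipeline/pre-commit gate for that boils down to a few commands, something like the following (tool versions vary; tflint's --recursive flag only exists in newer releases):

code:
terraform init -backend=false     # resolve providers/modules without touching state
terraform fmt -check -recursive   # fail the build if anything isn't canonically formatted
terraform validate                # catch syntax and basic reference errors
tflint --recursive                # lint every directory; needs a recent tflint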

12 rats tied together posted:

It's devops.txt to be like, I'm gonna learn this tool to expand my repertoire, and then immediately step on a rake.

Also to be fair he asked "hey how do I get some more marketable stuff on my resume" and I/we said he should get experience with Terraform or some other IaC tool. So from a resume driven development standpoint, I can't argue :v: Also even if TF isn't ideal it's still way better than just building poo poo in the GUI.

12 rats tied together
Sep 7, 2006

Docjowles posted:

Also to be fair he asked "hey how do I get some more marketable stuff on my resume" and I/we said he should get experience with Terraform or some other IaC tool. So from a resume driven development standpoint, I can't argue :v: Also even if TF isn't ideal it's still way better than just building poo poo in the GUI.
i'm phone posting so i wanted to be clear / reiterate that this wasn't a dig or anything, your suggestion was good, OP's instincts were good, everything about this is fine and normal and part of working in this field

whether you go "this is a poor fit for terraform so it's a problem with the provider" or "this is a poor fit for terraform which means terraform sucks", and the degree you go for each, is up to the human behind the computer

i lean really hard towards the second one and i have a bunch of strong opinions about it that aren't really appropriate for me to voice again ITT unprompted

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
It is kind of a weird fit for terraform, though it might still be easier than other alternatives...

Basically I need an easy repeatable way to create a pipeline definition with defined access control. The "offering" I'm building out is an internal service that builds and serves documentation static sites built with mkdocs. The building is done using a template in Azure DevOps that their repo connects to, and as long as they're using that template, the pipeline will automatically build and deploy their site when they make a commit. So whenever we onboard a new "site" I need to create a pipeline definition, create a "Group" inside Azure DevOps, and grant that group specific permissions onto the pipeline definition so they can see their linting failures.

I could do it all with their API obviously, since this provider is just a wrapper around the API. I'm deploying the assets in Azure that actually run the service (Azure App Service, Storage Account, etc) via Terraform, so it wasn't a huge lift for me to try terraform here as well. Not sure how I'll proceed at this point. I'm pretty sure I know how to work around the "provider weirdness" (which is just a subset of the general Terraform problem of storing the state for the same object in multiple places) to prevent the issue I ran into. I could use terraform in a "stateless" way and just use it to create new resources, instead of writing a script to do the same thing via API.
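For what it's worth, one way to dodge the duplicate-group problem is to key a single azuredevops_group resource per distinct AAD object id and have every site reference that shared instance. A rough sketch, assuming the provider's azuredevops_group takes origin_id for AAD-backed groups and exports descriptor; the ids and names below are made up:

code:
locals {
  sites = {
    docs_site_a = { aad_group_object_id = "11111111-1111-1111-1111-111111111111" }  # hypothetical
    docs_site_b = { aad_group_object_id = "11111111-1111-1111-1111-111111111111" }  # same group, on purpose
  }

  # one entry per distinct AAD group, no matter how many sites use it
  aad_group_ids = toset([for s in values(local.sites) : s.aad_group_object_id])
}

resource "azuredevops_group" "aad" {
  for_each  = local.aad_group_ids
  origin_id = each.value
}

# a site looks up its group through the shared resource:
#   azuredevops_group.aad[local.sites["docs_site_a"].aad_group_object_id].descriptor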

And yeah, the tale of "it works perfectly except for this weird rough spot" is pretty much a tale as old as time in IT, you'll never get rid of all the rough spots, you just decide what rough spots are easiest for you to deal with and build in that direction.

luminalflux
May 27, 2005



Docjowles posted:

Do not let people run "terraform apply" from their local workstations. Put it into some sort of CI/CD pipeline which applies for you once whatever approvals you want to put in place pass (automated tests, manual code review, etc). We use Atlantis for this, which is free, and it's not perfect but not the worst? Hashicorp also offers Terraform Cloud/Enterprise if you have a budget. People will still manage to push out bad changes like this. But you're less likely to have assholes stomping on each others' changes as they gleefully "terraform apply" from a branch on their laptop that's 3 months behind main without looking at the plan at all.

Terraform has a canonical style and running "terraform fmt -recursive" will "fix" all your files. So put that in your pipeline as well, and/or as a pre-commit hook. Here's some other hooks you may or may not find useful. Running these can spot simple mistakes early, and hopefully head off "NO YOU IDIOT THERE ARE SUPPOSED TO BE 2 SPACES NOT 4" shouting matches. Just tell everyone that the formatter is god and they don't get to hold up code reviews for this crap.

100% agree with both of these. Having something like Atlantis or TF Enterprise also lets non-infra engineers open PRs and see the plan without you having to hold their hands through "how do I get AWS creds? How do I install terraform?" questions. Even more valuable if you work in several accounts.

With regards to pre-commit/linters - this should be a must for all repos. They should also be part of the test pipeline so that you can't even merge the PR if the linter is complaining. Being able to use a politer version of "Sorry, can't argue with CI, go fix your poo poo" stomps out a lot of linter/formatting discussion.

Docjowles
Apr 9, 2009

A dev asked me if there was some mechanism they could use in their front end app to control which backend in a pool of k8s pods a particular request would get routed to and it felt like a good time to close slack for the weekend and find some whiskey. If this is how you are thinking about kubernetes you have already lost the game.

Docjowles fucked around with this message at 05:21 on Feb 25, 2023

luminalflux
May 27, 2005



This week I had someone try to rules-lawyer me about "well, how often do pods get restarted? if it's not too often can we just run MongoDB in k8s???" when I explained that running MongoDB in Kubernetes on ephemeral storage means that the data goes away and they should use DocumentDB and call it a day.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

luminalflux posted:

This week I had someone try to rules-lawyer me about "well, how often do pods get restarted? if it's not too often can we just run MongoDB in k8s???" when I explained that running MongoDB in Kubernetes on ephemeral storage means that the data goes away and they should use DocumentDB and call it a day.

I mean, that's the wrong question and the wrong answer with a correct conclusion. It doesn't matter; you can use persistent volumes, but you're taking on a lot of burden for no good reason, so don't do that and use DocumentDB.
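For completeness, the persistent-volume version is a StatefulSet with volumeClaimTemplates, which is exactly the burden being warned against; a bare-bones sketch (image, size, and names are placeholders):

code:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb                 # hypothetical
spec:
  serviceName: mongodb          # headless Service providing stable per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongod
          image: mongo:6        # hypothetical tag
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # one PVC per pod; survives restarts and rescheduling
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi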

Docjowles
Apr 9, 2009

Yeah... I mean for both of us, StatefulSets exist. But why are you running this app in k8s at all except to say you can.

edit: I guess this is a good time to ask the audience if any of you are running important databases or elasticsearch clusters or something in k8s and are happy about it or doing it at gunpoint. We run a lot of k8s but really try to limit it to just stateless services here.

Docjowles fucked around with this message at 05:50 on Feb 25, 2023

Methanar
Sep 26, 2013

by the sex ghost

Docjowles posted:

A dev asked me if there was some mechanism they could use in their front end app to control which backend in a pool of k8s pods a particular request would get routed to and it felt like a good time to close slack for the weekend and find some whiskey. If this is how you are thinking about kubernetes you have already lost the game.

There are genuine software architecture reasons for wanting to do this. I went over one of them today with a dev. He had some easily cacheable but expensive to compute db views. He was putting together a little distributed cache system with a consistent hash algorithm that needed to know about all pods of a service.

I ended up telling him to use headless services. What he really needed was a service discovery mechanism. He could have done zk or something, but just using regular old k8s dns made sense in his case and involved less bs.

Sometimes it's okay to route traffic to specific backends and not just lb it. We have many cases of devs doing this for various reasons.
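A headless Service is just clusterIP: None; DNS then returns the individual pod IPs instead of a single virtual IP, which is what the consistent-hash cache needed. Minimal sketch, names and port made up:

code:
apiVersion: v1
kind: Service
metadata:
  name: cache-peers            # hypothetical
spec:
  clusterIP: None              # headless: DNS resolves to the pod IPs directly
  selector:
    app: cache                 # hypothetical label
  ports:
    - port: 6379               # hypothetical port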

Docjowles
Apr 9, 2009

It is really loving jarring for you to finally have an avatar, Methanar :glomp:

luminalflux
May 27, 2005



Docjowles posted:

Yeah... I mean for both of us, StatefulSets exist. But why are you running this app in k8s at all except to say you can.

edit: I guess this is a good time to ask the audience if any of you are running important databases or elasticsearch clusters or something in k8s and are happy about it or doing it at gunpoint. We run a lot of k8s but really try to limit it to just stateless services here.

We've called out that we don't support services that need persistent storage, as we're early in our kates migration story. In this case, it's a third party service they want to run, with no helm chart or anything, that uses MongoDB, a database we have no idea how to manage. We've had other inbound requests for another third party app that needs Elasticsearch and maybe Cassandra or MySQL, and our response is similar - run the stateful services the way we currently run them (via AWS services or in EC2).
