|
Progressive JPEG posted:oh god are you using aks

well, some of our customers are. they're not the worst ones though--AKS at least has a somewhat active OSS cloud provider repo and responded to the PR i opened to add an override (but has since gone silent since saying "yep this'll work", so vov).

today i encountered, wonder of wonders, someone using kubernetes on the loving IBM CLOUD (nee Bluemix). unlike AKS, IBM Cloud does not make iffy assumptions about standard behaviors of HTTP applications. IBM Cloud simply rejects any LoadBalancer Service that sets appProtocol outright. yes, any value whatsoever. i guess some dumbfuck saw a standard field and, knowing that their infra couldn't do anything more complicated than a basic TCP or UDP load balancer, said "nuh uh, we're not gonna fail gracefully and just create a basic L4 LB following the protocol field anyway, we're gonna force you to remove that config".

while they do have an OSS repo for their cloud provider, it has received an astonishing 0 issues and PRs over its 2 or so years of existence. this is simultaneously not encouraging and hilarious--IBM's cloud offering is so irrelevant that nobody even bothers to report problems despite it being poo poo.

kudos to <other vendor in our space> who just accepted a PR to make this configurable, because why ask other vendors in the ecosystem to not be garbage when we can instead just turn the ostensibly vendor-agnostic parts of the system into a minefield of "you shouldn't actually need to change this, but you need this one weird trick if you use this vendor".

im not sure why im surprised--it's basically the same as the rest of the software ecosystem--but it is grating when we get continuous complaints about "ugh too many config options i thought you were supposed to make this simple for us!" followed by "oh but could you add a config option for this thing that shouldn't need it? we need it for uh... reasons". the reason there are too many config options is u.

or rather the reason is that some middle manager let IBM wine and dine them, knowing that they were gonna jump ship and not have to deal with their lovely vendor choice
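for reference, the field in question looks like this--a minimal sketch (names and values invented) of a LoadBalancer Service carrying an appProtocol hint, which a provider that only does L4 could simply ignore:

```shell
# write an illustrative manifest; the only load-bearing detail is appProtocol
cat > /tmp/example-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: LoadBalancer
  ports:
  - port: 443
    protocol: TCP        # the L4 protocol the LB actually has to implement
    appProtocol: https   # optional hint; a basic L4 LB can just ignore it
EOF
grep appProtocol /tmp/example-svc.yaml
```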
|
# ? Oct 29, 2022 05:43 |
|
gonna guess based on usual software quality that there have been tons of problem reports against ibm cloud; it's just that since they have zero non-megacorp users, they all go through paid support contracts
|
# ? Oct 29, 2022 14:43 |
|
i think this question is relevant to this thread: im trying to set up multiple environments (e.g. dev, staging, prod) and i think i'm doing it wrong. the experts im working with recommended terraform to set up the eks cluster and other poo poo, but they suggested that each environment have its own branch in the terraform repository. i didn't know enough about tf to do anything else but this feels extremely stupid and annoying to maintain. is this normal?
|
# ? Nov 15, 2022 15:30 |
|
its in the documentation but its extremely stupid, your experts are dumbasses (1 for suggesting terraform in the first place, 2 for whatever this is)

are the environments just logical distinctions or do they need to be present in physically separate networks? if its the former, you can just have 1 eks cluster but chuck 3 namespaces onto it, one for each env

if you need to have 3 EKS clusters you can just... create 3 of them. if you're stuck with terraform that's as simple as "mkdir prod". you could also click the button 3 times. or create 3 cloudformation stacks.
|
# ? Nov 15, 2022 18:53 |
|
we wrap pretty much all our terraform resources up in modules, then have a directory for each "environment" where those shared modules are called
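a minimal sketch of that layout (module path and variable names invented; real modules obviously carry more than a name):

```shell
# one shared module, plus a thin directory per environment that calls it
mkdir -p modules/eks envs/dev envs/staging envs/prod
for env in dev staging prod; do
cat > "envs/$env/main.tf" <<EOF
module "eks" {
  source       = "../../modules/eks"  # shared implementation
  cluster_name = "$env"               # only the per-env differences vary here
}
EOF
done
ls envs
```

each directory gets its own state, so you plan/apply one environment at a time instead of juggling branches.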
|
# ? Nov 15, 2022 20:10 |
|
i want prod to be in its own cluster because a few jobs ago we had a situation where a non prod system created too much 'cluster metadata stuff' and brought the whole thing down even though it was just a dumb batch process (i dont know what exactly the "stuff" was, but using Argo for batch processes that were too small created a shitload of pods an hour that lived for like 4 minutes each and somehow this overtaxed some k8s system that was supposed to keep track of data about the cluster. this was apparently an unrecoverable issue because all of the k8s stuff was unresponsive)
|
# ? Nov 15, 2022 20:16 |
|
nudgenudgetilt posted:we wrap pretty much all our terraform resources up in modules, then have a directory for each "environment" where those shared modules are called

the better terraform implementations that ive seen have done it this way as well.
|
# ? Nov 15, 2022 20:20 |
|
nudgenudgetilt posted:we wrap pretty much all our terraform resources up in modules, then have a directory for each "environment" where those shared modules are called

this seems very good and coherent
|
# ? Nov 15, 2022 20:28 |
|
creating a module for everything is usually bad. i highly recommend you read the documentation for when to write a module, module composition and (most importantly) dependency inversion.

if you're a software developer by trade this will probably be fairly remedial to you. its important for ops engineers who are conned into using terraform and who inevitably create a workspace root that invokes a single module called "thething" that invokes nested submodules that each contain 70+ optional resources that are toggled off and on with boolean parameters or by the presence of other related optional resources, and which results in configuration that is impossible to inspect or reason about without just running a plan and seeing what breaks.

rules of thumb: do not put a count = ??? 1 : 0 in your module without thinking about it really hard. if you must have a conditional resource, always explicitly specify the on and off states. never, for any reason, call another module from within a module.
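to make the conditional-resource rule concrete, a sketch (variable and resource names invented, not a working config) of spelling out both states right where the resource is declared:

```shell
# write an illustrative .tf fragment demonstrating an explicit on/off toggle
cat > /tmp/conditional.tf <<'EOF'
variable "access_logs_enabled" {
  type    = bool
  default = false   # off state is stated explicitly, not implied
}

resource "aws_s3_bucket" "access_logs" {
  # on state: exactly one bucket when enabled; off state: none
  count  = var.access_logs_enabled ? 1 : 0
  bucket = "example-access-logs"
}
EOF
grep 'count' /tmp/conditional.tf
```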
|
# ? Nov 15, 2022 20:38 |
|
Corla Plankun posted:i want prod to be in its own cluster because a few jobs ago we had a situation where a non prod system created too much 'cluster metadata stuff' and brought the whole thing down even though it was just a dumb batch process (i dont know what exactly the "stuff" was, but using Argo for batch processes that were too small created a shitload of pods an hour that lived for like 4 minutes each and somehow this overtaxed some k8s system that was supposed to keep track of data about the cluster. this was apparently an unrecoverable issue because all of the k8s stuff was unresponsive)

agreed, not least because of how dumb you'll look when an issue on staging brings down prod. everyone outside engineering is going to see it as an obviously avoidable fuckup
|
# ? Nov 15, 2022 20:42 |
|
12 rats tied together posted:creating a module for everything is usually bad.

hell yeah, thank you for this!
|
# ? Nov 15, 2022 22:32 |
|
do not use namespaces for prod/test/dev separation, you will hate your existence. don't even have a separate prod and then a multi-environment non-prod cluster if you can avoid it. plenty of poo poo is cluster-wide and not really possible to isolate. namespaces are for (kinda lovely) isolation between applications and account permissions.

afaik back in the day it was an official gke recommendation to even spin up separate clusters for some levels of application isolation cause hey, free control instances. then they started charging for control instances. oh no.
|
# ? Nov 16, 2022 07:22 |
|
I’m a heretic but I really like terraform workspaces for making GBS threads out identical EKS clusters. makes dealing with 20+ clusters across multiple regions way easier than the folder per env, or worse, per cluster. locals and sane defaults can go a long way imo.

from there, if you’re doing EKS I’d use native kubernetes constructs via something like argocd or flux to customize the cluster. for your own sanity don’t use terraform if at all possible to install stuff inside the cluster
|
# ? Dec 2, 2022 02:05 |
|
i destroyed an eks cluster today. replaced it with old school ec2 instances in an asg behind an alb with a fully prebaked ami for deployment. felt loving nice doing that where i could, even if i cant do it everywhere.
|
# ? Dec 2, 2022 02:50 |
|
nudgenudgetilt posted:i destroyed an eks cluster today. replaced it with old school ec2 instances in an asg behind an alb with fully prebaked ami for deployment.

god we are going in the opposite direction right now with some stuff and it suuuuucks soooo badddd
|
# ? Dec 2, 2022 03:14 |
|
freeasinbeer posted:I’m a heretic but I really like terraform workspaces for making GBS threads out identical EKS clusters. makes dealing with 20+ clusters across multiple regions way easier than the folder per env, or worse per cluster. locals and sane defaults can go a long way imo.

works ok with the kustomize provider. you’re using terraform as a templating layer then, basically. I wouldn’t trust something running inside the cluster to manage the infrastructure gunk that kube needs to be usable in production, and terraform is already there, so why not.

the thing about workspaces is that you could have a module, a list, a foreach, and achieve the same result in a much more easily discoverable way. some might object to managing 20 eks clusters in a single state file, but to those people I say that glory shuns a coward
|
# ? Dec 2, 2022 05:01 |
|
if you have any sort of cross-cluster dependencies or shared data you are going to wish you had them in the same file later anyway

e: dont use a module though, just use a resource foreach
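sketched out (names invented, nearly everything elided), the "resource foreach" version looks like:

```shell
cat > /tmp/clusters.tf <<'EOF'
locals {
  clusters = {
    use1-prod = { region = "us-east-1" }
    euw1-prod = { region = "eu-west-1" }
  }
}

# one resource block stamps out every cluster; adding one is a new map
# entry, and they all share a single state file
resource "aws_eks_cluster" "this" {
  for_each = local.clusters
  name     = each.key
  # role_arn, vpc_config, etc. elided
}
EOF
grep for_each /tmp/clusters.tf
```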
|
# ? Dec 2, 2022 05:25 |
|
how does eks + kubectl work? my cluster is in a vpc and i gotta connect to the bastion host to mess with anything. how come kubectl works fine from my machine? does `aws eks update-kubeconfig` automatically route it through a bastion or something?
|
# ? Dec 14, 2022 15:50 |
|
Corla Plankun posted:how does eks + kubectl work?

the eks control plane isn't inside your vpc, only the nodes are. the control plane has an external api endpoint of something.region.eks.amazonaws.com. take a look at your ~/.kube/config
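to illustrate (cluster name and endpoint invented): `aws eks update-kubeconfig` just writes an entry like this into your kubeconfig, pointed at that public endpoint, which is why no bastion is involved:

```shell
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: demo
  cluster:
    # public EKS control plane endpoint, reachable without touching the vpc
    server: https://ABC123EXAMPLE.gr7.us-east-1.eks.amazonaws.com
EOF
grep server /tmp/demo-kubeconfig
```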
|
# ? Dec 14, 2022 16:27 |
|
you can also put the control plane inside the vpc though
|
# ? Dec 17, 2022 01:31 |
|
more fun helm poo poo: someone tried to suggest that a values.yaml key use the `tpl` function when rendering its value for https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core because they have various environments with different topologies.

clearly the only way to fill this is to inject arbitrary template output into the field, in case, idk, maybe you need the otherwise strongly-typed field to contain a value from template. like sure, maybe you might need to template TopologySpreadConstraint because maybe there's some unknown field inside that may contain _the entire universe_ as generated by a template, or a 🌴 emoji. who knows? 🌴🌴🌴🌴🌴

loving templates inside templates. interpreting config values as templates so you can template while you template. never give an engineer a tool that can perform multiple functions ever. hammer nail nail hammer palm tree palm tree palm tree 🌴🌴🌴🌴🌴🌴

https://www.youtube.com/watch?v=k1Zwhi6ag7g
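for anyone who hasn't hit this: `tpl` renders a string from values.yaml *as a template*, so config values can themselves contain template syntax. a sketch of the proposed pattern (field name from the real API, everything else invented):

```shell
# the chart-template side: feed the values string back through the renderer
cat > /tmp/snippet.yaml.tpl <<'EOF'
topologySpreadConstraints:
  {{- tpl .Values.topologySpreadConstraints . | nindent 2 }}
EOF
# the values.yaml side: a block of template text masquerading as config
cat > /tmp/values-snippet.yaml <<'EOF'
topologySpreadConstraints: |
  - maxSkew: 1
    topologyKey: {{ .Values.zoneKey }}
EOF
grep tpl /tmp/snippet.yaml.tpl
```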
|
# ? Jan 23, 2023 21:12 |
|
why doesn’t k8s just JCL
|
# ? Jan 25, 2023 10:19 |
|
eschaton posted:why doesn’t k8s just JCL

you need knative for that
|
# ? Jan 25, 2023 16:31 |
|
carry on then posted:you need knative for that

nobody uses knative lol
|
# ? Jan 25, 2023 18:45 |
|
VSOKUL girl posted:nobody uses knative lol

they're missing out on recreating that mainframe experience. can knative be configured to take down the entire cluster if you don't remember to go in periodically and purge the spools?
|
# ? Jan 26, 2023 00:36 |
|
ask your mom about purging spool
|
# ? Jan 26, 2023 11:24 |
|
I've been spending the last two weeks having a special kind of hatefest at helm. Set up makefiles so that when a developer builds the repo it builds all their docker images tagged as developer images and pushes them to artifactory, then does a helm lint/template/package on their helm charts, updating the values.yaml, and pushes up into artifactory. Works great.

Update helm, everything blows up. Helm package no longer supports --set. Check out github to find out why they are breaking my repo... apparently they accidentally included that functionality in helm 3 and decided to remove it?! People are complaining about broken CI/CD pipelines and writing bash wrappers just to provide fairly simple functionality.

Package also apparently shits itself if you have chart dependencies that are packaged charts? Because they can't be bothered to untar a chart? And they straight up don't respect disabled chart dependencies when they package.

It blows my mind they don't have competition. I'm literally at the point of writing commits to their github out of frustration.
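the bash wrappers in question look roughly like this (paths, keys, and tag invented; sed standing in for whatever yaml tool you trust): since `helm package` no longer takes --set, you bake the value into values.yaml before packaging.

```shell
TAG="dev-abc1234"   # hypothetical developer image tag
mkdir -p chart
printf 'image:\n  tag: placeholder\n' > chart/values.yaml
# rewrite the placeholder, then package as usual
sed "s/tag: placeholder/tag: $TAG/" chart/values.yaml > chart/values.tmp
mv chart/values.tmp chart/values.yaml
grep 'tag:' chart/values.yaml
# helm lint chart && helm package chart   # needs helm and a valid Chart.yaml
```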
|
# ? Jan 29, 2023 06:45 |
|
VSOKUL girl posted:do not use namespaces for prod/test/dev separation, you will hate your existence. don't even have a separate prod and then a multi-environment non-prod cluster if you can avoid it. plenty of poo poo is cluster-wide and not really possible to isolate. namespaces are for (kinda lovely) isolation between applications and account permissions

Any particular reason? I mean, rancher bases a lot of its isolation on namespacing and I honestly don't hate it. We run a couple rke2 clusters and we do namespacing down to individual users, Jenkins tasks, build pipelines, etc. Maybe this is a bad take but it's... eh? Fine?

BedBuglet fucked around with this message at 07:20 on Jan 29, 2023 |
# ? Jan 29, 2023 06:58 |
|
BedBuglet posted:I've been spending the last two weeks having a special kind of hatefest at helm. Set up makefiles so that when a developer builds the repo it builds all their docker images tagged as developer images and pushes them to artifactory then does a helm lint/template/package on their helmcharts, updating the values.yaml and pushes up into artifactory. Works great.

i've switched a bunch of helm stuff to terraform and it's been a lot smoother overall despite the usual caveats around terraform itself like needing to deal with safe tfstate storage etc
|
# ? Jan 29, 2023 17:55 |
|
what should i be monitoring on eks? i tried to read the docs for ContainerInsights but they direct me to places that don't exist in the aws gui anymore. lol nice

i think i only care about pod crashloops but idk if there's pieces of the actual EKS system that i need to keep an eye on or not
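fwiw the crashloop half is just a restart/status check--on a live cluster `kubectl get pods -A | grep CrashLoopBackOff`, or alert on restart counts in whatever metrics stack you have. demonstrated against canned output (no cluster here):

```shell
cat > /tmp/pods.txt <<'EOF'
NAMESPACE  NAME      READY  STATUS            RESTARTS
default    web-abc   1/1    Running           0
batch      job-xyz   0/1    CrashLoopBackOff  12
EOF
# on a real cluster: kubectl get pods -A | grep CrashLoopBackOff
grep CrashLoopBackOff /tmp/pods.txt
```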
|
# ? Feb 24, 2023 19:37 |
|
|
poo poo you find in what are ostensibly supposed to be technical standards documents

quote:We use the term “metaresource” to describe the class of objects that only augment the behavior of another Kubernetes object, regardless of what they are targeting.

if your copy has even a hint that you may need to consult a classics professor to understand how to implement a technology standard, please, reconsider what you are writing. fire half the engineers and give me a squad of technical writers for the love of god.

the same document proceeds into some sort of stage play between hypothetical users. it's like someone heard the joke about the paxos paper flying under the radar for decades because nobody gives a gently caress about hypothetical extinct greek island political systems and thought it was a template for success
|
# ? Oct 20, 2023 04:33 |