|
I’ve spent the last week evaluating different terrible tools that generate insane CloudFormation yaml files, and I can’t believe there’s something out there that makes me miss helm’s dumbass yaml templating. I imagine in a week I’ll be back to helmin and eating every single one of these words
|
# ? Jun 3, 2022 08:35 |
|
|
just use cdk and ignore the cfn files it makes
|
# ? Jun 3, 2022 08:57 |
|
CMYK BLYAT! posted:i really do hate that every CD tool on the market has added a helm option and that they universally use helm template instead of helm install, each with its own idiosyncrasies

We hit all these issues, and I'd still take any other tool managing the deployment vs doing a helm install. Something like ArgoCD will give me way more visibility into what's going on vs helm's install-and-pray approach. All those template problems have flags to change the behaviour and do what you want. If someone requests changes to a chart because of their template output, tell them to fix their poo poo.
|
# ? Jun 3, 2022 12:37 |
|
Gentle Autist posted:just use cdk and ignore the cfn files it makes

yes, that is what I’m leaning towards. of course, our company doesn’t allow use of IAM users; it’s all STS AssumeRole for everything, including our (on-prem) CICD. so I need to figure out how to get cross-account CDK working, which it supports but which is much more poorly documented and, of course, requires reading and understanding the raw CFN template the CDK uses for bootstrapping

point being, dehumanize yourself and face to yaml
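for anyone fighting the same cross-account setup, the rough shape of it (account IDs and the execution policy here are placeholders, and this is a sketch of the documented flow, not a battle-tested recipe) is to bootstrap the target account with a trust back to the account your CI/CD assumes roles from:

```sh
# bootstrap the TARGET account (123456789012), telling it to trust
# deployments initiated from the tools/CI account (999999999999)
cdk bootstrap aws://123456789012/us-east-1 \
  --trust 999999999999 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
```

after that, `cdk deploy` run from the tools account can STS-AssumeRole into the roles the bootstrap stack created in the target account, which is at least consistent with the AssumeRole-everywhere model.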
|
# ? Jun 3, 2022 20:43 |
|
DELETE CASCADE posted:the day i use a loving yaml template is the day i eat a shotgun

your jaw isn’t strong enough to chew through a shotgun. gonna have to call bullshit
|
# ? Jun 3, 2022 22:19 |
|
trap sprung: yaml templates itself so every yaml document is technically a template
|
# ? Jun 3, 2022 22:42 |
|
Cerberus911 posted:All those template problems have flags to change the behaviour and do what you want. If someone requests changes to a chart because of their template output, tell them to fix their poo poo.

they do, yes. sadly this cool code that builds the template command for you with a hard-coded subset of available arguments doesn't let you use them. this is more a complaint about dealing with OSS community bullshit. jenkins may be the party at fault here, but that doesn't stop people telling me to paper over its failings and complaining when i tell them to submit an issue elsewhere, because other chart authors have already acquiesced to just duplicating features
|
# ? Jun 4, 2022 01:32 |
|
Progressive JPEG posted:imo the pro move is to pipe the output of "helm template" into regular "kubectl apply" and sidestep helm's logic for that entirely, using it only for a template rendering stage

I’m feeling blessed not stressed that I can do all my simple poo poo with `envsubst < farts.yaml | kubectl apply -f -`. if I ever need helm I will try it out as you describe op. blessings upon ye
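for anyone who hasn't used it: envsubst just substitutes `${VAR}`-style placeholders from the environment into stdin, and the rendered text is what gets piped to `kubectl apply -f -`. a stand-in sketch of the same behavior in Python using `string.Template` (same `${VAR}` syntax, though envsubst's edge cases differ slightly; the manifest and variable names here are made up):

```python
import os
from string import Template

# a tiny manifest with ${VAR}-style placeholders, like you'd feed to envsubst
manifest = """\
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  image_tag: ${IMAGE_TAG}
"""

os.environ["IMAGE_TAG"] = "v1.2.3"

# envsubst equivalent: fill placeholders from the environment;
# the rendered output is what you'd pipe into `kubectl apply -f -`
rendered = Template(manifest).substitute(os.environ)
print(rendered)
```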
|
# ? Jun 4, 2022 06:15 |
|
Gentle Autist posted:just use cdk and ignore the cfn files it makes

pro move. at my last company i built out a framework and rendering pipeline using troposphere to write cfn in python. it worked well enough given their constraints, and i stole some cdk ideas along the way, but at newjob i'm 100% cdk and it is just so, so much more fluid and easier to work with. fingers crossed that cdk8s works out, i'd use that over helm any day
|
# ? Jun 4, 2022 06:41 |
|
we use argo at work and I’m moderately sure piping helm template to kubectl apply is how it works behind the scenes; nobody actually wants to use helm
|
# ? Jun 4, 2022 07:12 |
|
we're currently doing the helm install routine, but at some point may have to move to the apply routine if we figure out a non-lovely way to deal with gitops for automated deploys and build promotion. so far, though, it's helpful for some conditional checks: running a helm diff to know if we need to run pre-deploy checks on preprod environments when doing hourly automated deploys, and using helm's hooks mechanism to annotate observability data with deploy times.
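for reference, the hooks mechanism mentioned above is just an annotation on a manifest inside the chart. a minimal sketch of a post-deploy Job that could stamp deploy times into an observability tool (the Job name and the idea of what it does are made up; the annotation keys are helm's):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mark-deploy
  annotations:
    # run this Job after every install and upgrade
    "helm.sh/hook": post-install,post-upgrade
    # clean up the previous hook run before creating a new one
    "helm.sh/hook-delete-policy": before-hook-creation
```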
|
# ? Jun 4, 2022 17:56 |
|
Ok so I spent a lot of the afternoon dorking around with k8s, moving components from docker-compose to get them to run one by one, since kompose left me with a jumbled mess.

One thing I'm going to have a problem with in the future is how do I do local dev? I have a postgres DB and the underlying app is 12-factor, so I can just swap out some environment variables for RDS, but for local I've been running minikube and will want to spin up a local postgres... I think. If there's a guide or a book that outlines this I'm happy to read it. So far I read Mastering Kubernetes, which is fine but I don't think covers some of this.
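fwiw the usual minikube answer for the postgres half is to run it in-cluster and point the app's connection string at the Service name locally, and at RDS everywhere else. a bare-bones sketch (names and the password are obviously made up, and this skips persistence entirely):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels: { app: postgres }
  template:
    metadata:
      labels: { app: postgres }
    spec:
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_PASSWORD
              value: localdev   # fine for local dev, never for anything real
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector: { app: postgres }
  ports:
    - port: 5432
```

since the app is 12-factor, local dev just sets the DB env vars to point at host `postgres` on 5432, and prod sets them to the RDS endpoint.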
|
# ? Aug 30, 2022 02:34 |
|
Hed posted:Ok so I spent a lot of the afternoon dorking around with k8s, moving components from docker-compose to get them to run one by one since kompose left me with a jumbled mess

we have a harness called 'devpop' and when you run it, it builds a k8s stack that's identical to any of our stacks in CDN prod, and you can just curl localhost to see how it works. you just change this environment variable to `local` and it only rolls your dumb garbage code to local
|
# ? Aug 30, 2022 02:36 |
|
I’ve been using microk8s for local stuff. Tilt is generally good, and afaict you can point tilt at whatever local cluster you want to work with.
|
# ? Aug 30, 2022 03:07 |
|
when should i use ecs vs eks? we hired a consultant to tell us but tbh they haven't helped at all
|
# ? Aug 30, 2022 15:15 |
|
you should use eks if you have a legitimate need for the k8s api, for example because you or someone in your org couldn't figure out how to do something and started using a public helm chart. if you just need to launch containers in AWS, use ecs instead
|
# ? Aug 30, 2022 16:51 |
|
thank you
|
# ? Aug 30, 2022 17:37 |
|
i've used helm charts at past orgs but didn't really care about them one way or the other. they seemed pretty arbitrary but i was good at putting my little numbers in the places they were supposed to go
|
# ? Aug 30, 2022 17:38 |
|
i literally work for a kubernetes business: avoid it if you can. and any time helm charts are involved, everything gets hosed up.
|
# ? Aug 30, 2022 17:38 |
|
helm charts have the same smells/problems as puppet modules from the forge, chef cookbooks from the supermarket, and ansible collections from the galaxy. each is a killer abstraction to use within an organization, but very few orgs can actually use that published $whatever without having to alter it to their needs. instead of maintaining simple internal whatever that describes just the organization's needs, they end up maintaining a fork of a complex whatever that tries to do everything for everyone.
|
# ? Aug 30, 2022 17:48 |
|
I felt pretty good when I read a helm chart and figured out why it was building some pods wrong and how to fix it
|
# ? Aug 30, 2022 18:15 |
|
my previous org had someone loving incredible at their job set up a helm chart repo, so we just had a properties file we'd fill out, and helm and k8s did the rest. it was loving awesome. made me love k8s almost single handedly
|
# ? Aug 30, 2022 18:23 |
|
I’m a K8s architect, so I’m biased, but every time I use ECS I’m annoyed that once you step outside the golden path you end up building so much stuff yourself, and it doesn’t have a ton of the nicer primitives built in that K8s has. Kubernetes also has such a huge ecosystem that whatever you’re doing has probably been solved countless times.
|
# ? Aug 30, 2022 18:26 |
|
helm templates are go with lisp in them
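"go with lisp in them" is about right: helm charts are Go text/templates plus the sprig function library, and the parenthesized calls do a convincing lisp impression. a representative made-up snippet (the chart name and values structure are invented for illustration):

```
{{- range $name, $svc := .Values.services }}
{{- if (and $svc.enabled (not (eq $name "legacy"))) }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" (dict "root" $ "suffix" $name) }}
{{- end }}
{{- end }}
```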
|
# ? Sep 4, 2022 04:20 |
|
Bored Online posted:helm templates are go with lisp in them

i'm glad we transitioned to publishing operators for our own products
|
# ? Sep 4, 2022 04:25 |
|
nudgenudgetilt posted:very few orgs can actually use that published $whatever without having to alter it to their needs. instead of maintaining simple internal whatever that describes just the organization's needs, they end up maintaining a fork of a complex whatever that tries to do everything for everyone.

forking and modifying was the ostensible original use model for helm, and it probably would have been okay-ish, but nobody does it, between a combination of "UGH ITS SO MUCH EXTRA WORK" (it's not, unless you're doing massive modifications to a bunch of core templates) and "i am babby who has never done a kubernetes, i just wanna fill in value and get Deployment, no i do not understand what a Deployment is, please, i do not want to learn new things". so every official chart ends up being a mess of every possible feature ever, to the point of fully replicating every field in every resource it spawns in values.yaml, but organized differently than the underlying resources, because lol organic growth. it's a crime that helm lacks first-class support for kustomize to let it handle the 85% of requests that are "just add this additional field to the Deployment or w/e" without writing a post-processing script. after spurning all these alternatives, everyone proceeds to complain that the values.yaml is too complex.

carry on then posted:i'm glad we transitioned to publishing operators for our own products

months deep into trying to replace a chart with one, i am unconvinced, at least for basic deploying-app poo poo (once you're taking action based on custom resources it's a different story). now you have:

- the same values.yaml problem, where it's each vendor organizing core k8s resources in their own way (i wanted to try and avoid this; the more cavalier engineer on the team has rammed through config design with the barest review possible to get ready for Big Sales Event, promising we can change it after. we won't)
- redoing all of Helm's state management from scratch, because controller-runtime and kubebuilder aren't really prescriptive about how to do that, using something akin to Tiller, which everyone rightly wanted to get rid of
- an additional layer of red hat bullshit with unclear, conflicting documentation and guidance, because there are apparently 3 people who understand the additional poo poo red hat added, all of whom have left the company. red hat is literally paying us to comply with all the extra openshift requirements, and still can't find someone who can answer our questions authoritatively. choice moments include:
  - someone acknowledging that the original OLM config design was poo poo, so they changed it, but offered no migration path (rather, they promised to find someone who could describe the migration path, and we never heard from them again). we also didn't hear back on recommended approaches for handling both a "community" and a "certified" operator, where the design makes it functionally impossible to easily maintain both from the same git repo because they use mutually incompatible config instead of an overlay
  - an engineer saying something to the effect of "yeah, our validation servers just uh... break, a lot. you gotta just retry, and open a support ticket if the retry just breaks it further"
  - a red hat person asking why a helm feature (we originally started with the comedy helm-based operator poo poo) wasn't working, when it wasn't working because red hat's docs say "you must override this in such a fashion that the standard approach doesn't work", with no response to our questions about what their recommendations were to make it less unintuitive. we've received this same question on three separate occasions

ultimately it's not clear wtf operators offer that isn't just kubebuilder and controller-runtime (which, to be fair, do have a significant degree of involvement from red hat afaik). really, they add on OLM (poo poo? idk, at least from the app dev perspective idk what it's doing or how to best use it, and it's cumbersome to work with) and a CLI tool that i have no obvious use for (jfc just give me a flat file format) that makes backwards-incompatible changes every few versions
|
# ? Sep 7, 2022 10:02 |
|
freeasinbeer posted:I’m a K8s architect, so I’m biased, but every time I use ECS I’m annoyed that once you step outside the golden path you end up building so much stuff. and it doesn’t have a ton of the nicer primitives built in that K8s has.

if you're doing something really basic (like running some stateless web apis) then ECS (via fargate) is really nice and has decent docs. and it has some features that are much harder to do in k8s - I think one of essentialContainer and containerDependency is a real PITA to recreate in k8s and requires adding some custom bash code or magic shared files.
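for the curious, those two are task-definition fields: `essential` means "if this container dies, kill the whole task", and `dependsOn` gates one container's start on another's state. a rough sketch of the shape (container names invented, and this is a fragment, not a complete task definition):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "essential": true,
      "dependsOn": [
        { "containerName": "migrations", "condition": "SUCCESS" }
      ]
    },
    {
      "name": "migrations",
      "essential": false
    }
  ]
}
```

the "wait for this other container to exit successfully" part is what takes custom bash or shared files to recreate in k8s.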
|
# ? Sep 7, 2022 10:56 |
|
VSOKUL girl posted:once you're taking action based on custom resources it's a different story

i mean this is the crux of it. it sounds like it's way overkill for what you're trying to do, but for us (delivering a java application server image that users are going to take and customize with their own config and apps, and allowing them to reconfigure on the fly to capture debug data) the extra flexibility of CRs and a running agent managing things is worth the extra complexity in development, because the end-deployer experience does get simpler.
|
# ? Sep 7, 2022 15:30 |
|
kubernetes: probably too complex for your use case
|
# ? Sep 7, 2022 15:31 |
|
carry on then posted:kubernetes: probably too complex for your use case
|
# ? Sep 7, 2022 15:35 |
|
please alternate daily with "kubernetes: actually, it's super simple"
|
# ? Sep 7, 2022 15:35 |
|
is it too complex or is this the usual "we use only 5% of the features so surely this can be simplified a *lot*" where everyone uses a slightly different 5%?
|
# ? Sep 7, 2022 17:12 |
|
i think kubernetes is only complex if you need persistent storage or if you do something stupid like install a service mesh
|
# ? Sep 7, 2022 22:11 |
|
carry on then posted:i mean this is the crux of it, it sounds like it's way overkill for what you're to do but for us (delivering a java application server image that users are going to take and customize with their own config and apps, and allowing them to reconfigure on the fly to capture debug data) the extra flexibility of CRs and a running agent managing things is worth the extra complexity in development, because the end-deployer experience does get simpler.

we do also update configuration on the fly based on API resource changes, but via a controller that essentially predates the operator framework and controller-runtime. we've since ported it over to use controller-runtime. i was kinda expecting the operator framework stuff to actually provide something beyond what we were doing already, but nah, not really. it's just more of the same, with a lot of marketing fluff.

we're now managing Deployments also (because we're implementing a standard that requires spawning Deployments when someone creates another API resource), but beyond being able to react to resource CRUD instead of requiring something external to run "helm install", it doesn't seem like there's much on that side we couldn't do with Helm--the basic "fill this envvar with the name of some other resource you're creating" glue work is entirely doable with templates, even if actually writing the templates sucks.

that last part matters a lot though--being able to use a proper type system, write unit tests, and get failure reports more useful than "couldn't parse the output YAML, good luck finding the source of the problems in the templates" is arguably far more useful than anything it's providing capability-wise
|
# ? Sep 7, 2022 22:46 |
|
when i was a young boy
And way bored
And looking for some fun
i built an etcd
Exported the configs
Of some dumb Python code
For a b b s

But then I
Realized
We needed
Persistence
For some old s q l's

So I built
An engine
That leveraged
Bittorrent
To keep those bits at haaaaaaaaaand

<guitar riff>

(true story.)
|
# ? Sep 7, 2022 22:57 |
|
my homie dhall posted:i think kubernetes is only complex if you need persistent storage or if you do something stupid like install a service mesh

i can't hate on kubernetes too hard, as much as it ruins my life, because it is job security
|
# ? Sep 8, 2022 05:02 |
|
my homie dhall posted:i think kubernetes is only complex if you need persistent storage or if you do something stupid like install a service mesh

maybe using a service mesh is the root of our problems (we've certainly spent a lot of time messing with config values after the infrastructure team added it and we started getting random networking errors). but it's also hard to say no to something described like this:

quote:Kubernetes supports a microservices architecture through the Service construct. It allows developers to abstract away the functionality of a set of Pods, and expose it to other developers through a well-defined API. It allows adding a name to this level of abstraction and performing rudimentary L4 load balancing. But it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, etc.
|
# ? Sep 8, 2022 07:11 |
|
that wasn't our experience of it, btw; we did have to change a bunch of things. e.g. our apps now all have little loops at the beginning that sleep until they confirm that the envoy sidecar is up and fully functional before they try to do anything
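the wait-for-envoy loop in question is only a few lines. a sketch in Python (istio's sidecar conventionally serves readiness at `http://127.0.0.1:15021/healthz/ready`, but the port and path are assumptions about your mesh config; the function name is made up):

```python
import time
import urllib.error
import urllib.request


def wait_for_sidecar(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll the sidecar's readiness endpoint until it returns 200 or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # sidecar not up yet; keep polling
        time.sleep(interval)
    return False
```

the app's entrypoint would then call something like `wait_for_sidecar("http://127.0.0.1:15021/healthz/ready")` and bail out if it returns False.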
|
# ? Sep 8, 2022 07:15 |
|
istio seems like an insanely overcomplicated solution that ends up being much bigger than the problem it claims to solve. I’m real glad the team at work that wanted us to standardize on it just kinda gave up and moved on
|
# ? Sep 8, 2022 07:36 |
|
|
dads friend steve posted:istio seems like an insanely overcomplicated solution that ends up being much bigger than the problem it claims to solve

in one of the other threads people were making fun of someone for not wanting to learn how deployments work on k8s (which is fine; as an app developer you should care about how your deployments work and be able to change things about them). but the combination of needing to know about k8s and a service mesh (and therefore a load of details about how networking works, because there will definitely be hosed up bugs and weird config params not interacting well) and your own special snowflake CI/CD pipeline is basically impossible for a junior dev, and takes up a lot of the mental capacity of anyone else who is trying to deliver application features.

i want to say that the problem is that "self serve" devops systems are being chosen by people who dedicate their jobs to infrastructure, not by the people focussing on application and feature development, but i don't have much confidence in that statement.
|
# ? Sep 8, 2022 09:00 |