DkHelmet
Jul 10, 2001

I pity the foal...


Erwin posted:

Absolutely, Ruby sucks. Salt sucks too. Ansible is the least annoying of the four, but for this:

...I'd do immutable infrastructure and not worry about a config management tool, unless I really need one in the build pipeline.

Yeah, I'm in this camp. Unless you have need of long-term maintenance of instances, just seal them and deal with Launch Templates or something. You can do fun things with rolling upgrades when a new version of the template releases via pipelines.

You also might get some joy with AWS Config if you're on that platform.

Hadlock posted:

In 2024 should I be using services or ingress with external dns? Nginx seems to support services in addition to ingress and whatever "nginx virtual server" is. Amazon load balancer controller also supports services.

2019 it was still all about ingresses. Did the world move on and I just didn't notice?

Also: what's everyone's preferred way to handle secrets in terraform in AWS? Looks like one lone developer is maintaining the wildly popular sops terraform provider

You're a little spun around. Services in k8s are (real roughly) cluster-based named endpoints. They're only discoverable in-cluster with CoreDNS or an analog. Ingresses allow external access to cluster Services. ExternalDNS glues known Ingress endpoints to a DNS service so external entities can connect to the Ingress. This is all mildly confusing until you diagram it out. :) You generally should stick to Services of type ClusterIP (think DHCP for Services) and glue them to Ingresses as needed. You can get external access to a Service with a Service type of NodePort, but it's generally a bad idea.
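To make that concrete, here's a minimal sketch of the ClusterIP Service plus Ingress pairing (the names, host, and the nginx ingressClassName are placeholders; use whatever your controller registers):

```yaml
# ClusterIP Service: an in-cluster named endpoint for the pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Ingress: exposes the Service to traffic from outside the cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx   # placeholder: whichever IngressClass your controller registers
  rules:
    - host: web.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```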

ExternalDNS is still quite useful in TYOOL24. When using the LBC, you'll get a random AWS CNAME attached to the ALB/NLB. ExternalDNS is a controller that runs, sniffs out Ingresses that are ready, and slaps the rando AWS CNAME into, say, R53. Without it you'd have to have a checklist to add the ALB address into R53 whenever it changes.

I had to spend some quality time with ExternalDNS. It's easy to think about and reason about by just running it as a CLI command. It's just a Controller, and really doesn't have much magic on its own. It iterates over Services (with various options), pulls data from the DNS system (stored sideband as TXT records), and then does a reconciliation action to CRUD what it needs to in DNS. You can do dry runs with verbose flags as needed to get a grip on it.
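For reference, a sketch of the flags that dry-run exercise involves (these are container args from a typical external-dns Deployment; the owner ID and domain filter are placeholders):

```yaml
# Excerpt from an external-dns Deployment: watch Ingresses, reconcile into Route 53,
# track ownership via sideband TXT records, and only log what it *would* do.
args:
  - --source=ingress
  - --provider=aws
  - --registry=txt
  - --txt-owner-id=my-cluster        # placeholder
  - --domain-filter=example.com      # placeholder
  - --log-level=debug
  - --dry-run
```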

There's a new hotness in k8s called Gateway that's a general improvement over the Ingress system, which just graduated to 1.0.0 GA in October. There's no plans to deprecate Ingress, so no rush unless the new features make you happy.


edit: some clarification. Ingress is abstract; it has various implementations as defined by IngressClass. The in-tree one is nginx, but you can also use traefik. The AWS native one is managed out of tree and is the AWS LBC. If you're on AWS, you can definitely use a nginx IngressClass, but you'd be avoiding the AWS ALB for a pod with nginx in it.

edit edit: k8s Ingress comparison table

DkHelmet fucked around with this message at 18:38 on Dec 13, 2023

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
With Ansible, it's important to understand that it's basically 2 parts: (1) a way of maintaining an inventory of configuration (i.e. what data applies to what hosts, and how you organize it so that there's maximum sanity & minimal duplication), and (2) applying that configuration to a bunch of hosts, via the playbooks/roles/tasks/modules and whatnot. The first step ultimately renders a giant JSON data structure that contains the final per-host config, and then the 2nd step uses that data as input to the playbooks which apply it to all the hosts.
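As a tiny illustration of part (1), an inventory file plus group_vars is just layered data that Ansible flattens into that per-host blob (hostnames and variables here are invented):

```yaml
# inventory.yml -- groups and hosts (YAML inventory plugin format)
all:
  children:
    webservers:
      hosts:
        web01.example.com:
        web02.example.com:
    databases:
      hosts:
        db01.example.com:
          # host-level value, wins over anything set at the group level
          postgres_max_connections: 500
---
# group_vars/webservers.yml -- data applied to every host in the webservers group
nginx_worker_processes: 4
app_env: production
```

Running `ansible-inventory --list` shows the merged per-host result, which is exactly the "giant JSON data structure" described above.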

The thing that bugged me at first was that they didn't really prescribe a way to manage the inventory side of things, you're kind of left to feel that out for yourself, so that took a bit of trial and error. But since the end result of the first step is just "render a big data blob", you can replace their built-in way of doing things with your own code (so-called "inventory sources"). This helped a lot because I no longer had to try to shoehorn my particular situation's weird structure into Ansible's config format, and I could also easily leverage oddball data sources like spreadsheets and databases.

The "apply the data to the hosts" side of things also took a bit of getting used to. It's really an overly-simple programming language, and it's really not cut out for doing data manipulation, so it's best if all your data is rendered into an easy-to-consume form in the first step. But it is really good at applying the same steps in parallel to lots of hosts, gathering status & errors, retrying on failed hosts etc.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
DKhelmet is on the money with k8s services with the notable exception of services of type: LoadBalancer. These integrate with your cloud provider and will create internal/public load balancers that point traffic to your kubernetes service and from there onto your pods matching the service labelselector. This is how your Ingress controller is typically exposed to applications outside the cluster calling resources inside the cluster. Your DNS record for service-foo.contoso.com would resolve to the nginx ingress controller ILB, which would then read the ingress object for service foo which directs traffic for the host service-foo.contoso.com to the ClusterIP type service “foo”. This works because nginx is a proxy and can communicate with ClusterIP services since it too is inside the same cluster as service foo.
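For reference, a type: LoadBalancer Service is just a normal Service with a different type; a rough sketch (the name and selector are placeholders, and cloud-specific behavior comes from annotations your provider or the LBC understands):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # placeholder: typically how the ingress controller itself is exposed
spec:
  type: LoadBalancer   # the cloud provider / controller provisions an external or internal LB for this
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder label selector
  ports:
    - name: https
      port: 443
      targetPort: 443
```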

Hadlock
Nov 9, 2004

The Iron Rose posted:

DKhelmet is on the money with k8s services with the notable exception of services of type: LoadBalancer. These integrate with your cloud provider and will create internal/public load balancers that point traffic to your kubernetes service and from there onto your pods matching the service labelselector.

Yeah thank you

Yeah this is how I have it set up currently, routing live traffic over the Internet to an ELB via CNAME to a service. This seems to be the default way of doing a tutorial for AWS these days, I suppose:

1) it has fewer moving parts and
2) you don't have to cover setting up the nginx controller in the tutorial, it Just Works™

The big downside I'm seeing with no ingress is that instead of one load balancer per cluster with an ingress controller, you end up with one ~$20/mo load balancer per service, per cluster. I have like 8 production services I need to support immediately with plans for 10+ more next year. So that's like $300/mo/cluster just for routing. Plus development and tooling clusters. That's a fair amount of unnecessary cost and exposed surface area, even if load balancers are probably the most security hardened products AWS offers.

The other plus of an ingress controller is that setting up cert-manager + Let's Encrypt is more straightforward, I suppose.

The big upside is that with no ingress controller, there's less to configure which in theory means lower overhead but in practice I think ingress is still the correct way to do it.

I'll switch over to API gateway in like, 18 months when more things support it. Nginx ingress controller is the gold standard and I'm too busy standing up greenfield everything to go reinvent the wheel right now

Double edit: I don't care that API gateway is technically v1 now, the tooling around it is still immature

Hadlock fucked around with this message at 20:49 on Dec 13, 2023

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
Using an ingress is absolutely the right way to go for exposing HTTP services, especially because of integrations with cert-manager and/or external-dns. Use it basically across the board.

If you’ve got non-http services, you should probably still go with a LoadBalancer type service.


There’s also two nginx ingress controllers. Use the in tree one: https://github.com/kubernetes/ingress-nginx, not the nginx one: https://docs.nginx.com/nginx-ingress-controller/

The Iron Rose fucked around with this message at 21:27 on Dec 13, 2023

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
The AWS Load Balancer Controller is perfectly capable of homing multiple applications onto the same ALB/NLB if you tell it which ones to target via Ingress/Service annotations, though you may run into some complications if you need to support a lot of clusters in different VPCs.
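A rough sketch of that grouping with LBC annotations (hostname and service name invented); Ingresses that share the same group.name get merged onto a single ALB as separate listener rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb     # Ingresses with the same group share one ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
```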

DkHelmet
Jul 10, 2001

I pity the foal...


The Iron Rose posted:

notable exception of services of type: LoadBalancer. These integrate with your cloud provider and will create internal/public load balancers that point traffic to your kubernetes service and from there onto your pods matching the service labelselector.

Service type LoadBalancer uses (potentially) legacy, in-tree drivers. k8s has been actively trying to move all of the old, we-copied-and-pasted-some-binaries code called in-tree out to external, vendor-maintained plugins. If you're on AWS, it's honestly much, much better to install the LBC and explicitly set it as your default IngressClass.

From the AWS documentation:

AWS posted:

When you create a Kubernetes Service of type LoadBalancer, the AWS cloud provider load balancer controller creates AWS Classic Load Balancers by default, but can also create AWS Network Load Balancers. This controller is only receiving critical bug fixes in the future. For more information about using the AWS cloud provider load balancer, see AWS cloud provider load balancer controller in the Kubernetes documentation. Its use is not covered in this topic.

Don't use in-tree LoadBalancers if you can avoid it. It's going away. A lot of in-tree is going away.


The Iron Rose posted:

There’s also two nginx ingress controllers. Use the in tree one:

This is more nuanced. If you're on AWS, don't: use the LBC. If you're not, then if your company is big on nginx, or you want support, use the nginx vendor controller. The in-tree one is community maintained. It's not bad, but there are implications to think about for production-ready workloads and your enterprise.

Fun tip: the AWS LBC can also create NLBs for your UDP goodness. It also integrates with ACM to eliminate the need for cert management.
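The ACM part is (roughly) a couple of annotations on the Ingress, so the ALB terminates TLS with an ACM certificate instead of you running cert-manager. An excerpt of what that looks like (the ARN is obviously a placeholder):

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111122223333:certificate/REPLACE-ME
```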

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

DkHelmet posted:

There's a new hotness in k8s called Gateway that's a general improvement over the Ingress system, which just graduated to 1.0.0 GA in October. There's no plans to deprecate Ingress, so no rush unless the new features make you happy.

One of those new features is the AWS Gateway API Controller for VPC Lattice, and who doesn't love getting service mesh for free with their ingress configurations?

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Vulture Culture posted:

The AWS Load Balancer Controller is perfectly capable of homing multiple applications onto the same ALB/NLB if you tell it which ones to target via Ingress/Service annotations, though you may run into some complications if you need to support a lot of clusters in different VPCs.

This is what we do when we have a root dns record and a bunch of services that handle various paths. It's super easy. When you create the ingress just set the target for the path to the service and beep boop it's set up as a route on the alb.
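A sketch of that path fanout on one Ingress (service names invented); each path becomes a rule on the ALB pointing at a different backend Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: root-domain
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api          # requests for /api go to the api Service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # everything else goes to the frontend Service
                port:
                  number: 80
```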

DkHelmet
Jul 10, 2001

I pity the foal...


Following up, I hope I'm not coming across as confrontational. I'm just very, very familiar with this stuff.

It may not seem like it, but k8s is very forgiving. Especially with Ingresses, since there's no real magic there. You can install all of them if you want; they're all just IngressControllers. You can front a Service with as many Ingresses as you want concurrently. I've done this during the big v1beta1 deprecation back in 1.19, when I rolled from a hodgepodge of nginx/traefik to the LBC.

You can set spec.ingressClassName on a dozen Ingresses pointing to different implementations, all heading back to one Service.

There's an annotation, ingressclass.kubernetes.io/is-default-class, that you set on an IngressClass to make it the default for any Ingress that doesn't specify one.
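Concretely, something like this (the controller string shown is the LBC's; swap in whatever implementation you actually run):

```yaml
# Mark one IngressClass as the cluster default; any Ingress without an ingressClassName uses it
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb
---
# Or pin a specific Ingress to a specific implementation explicitly
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: web      # placeholder Service
      port:
        number: 80
```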

DkHelmet fucked around with this message at 00:06 on Dec 14, 2023

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Also I picked up a project to build an OVA with Ansible vs a shitload of hairy shell scripts so we can build various flavors of Ubuntu and centos/rhel and I thought of this thread.

Doesn't make the Ansible any less frustrating or testing it any less tedious though (lol lmao qemu/libvirt is the only way to do amd64 on a m1 with molecule and it does it badly)

Hadlock
Nov 9, 2004

The Iron Rose posted:

There’s also two nginx ingress controllers. Use the in tree one: https://github.com/kubernetes/ingress-nginx, not the nginx one: https://docs.nginx.com/nginx-ingress-controller/

Yeah I think my current upgrade path will be load balancer service -> k8s nginx Q1 '24 -> API gateway whatever blerg Q1 '25 as a stretch goal to move towards current best practices

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

DkHelmet posted:

Service type LoadBalancer uses (potentially) legacy, in-tree drivers. k8s has been actively trying to move all of the old, we-copied-and-pasted-some-binaries code called in-tree out to external, vendor-maintained plugins. If you're on AWS, it's honestly much, much better to install the LBC and explicitly set it as your default IngressClass.

From the AWS documentation:

Don't use in-tree LoadBalancers if you can avoid it. It's going away. A lot of in-tree is going away.

This is more nuanced. If you're on AWS, don't: use the LBC. If you're not, then if your company is big on nginx, or you want support, use the nginx vendor controller. The in-tree one is community maintained. It's not bad, but there are implications to think about for production-ready workloads and your enterprise.

Fun tip: the AWS LBC can also create NLBs for your UDP goodness. It also integrates with ACM to eliminate the need for cert management.

We use the load balancer controller in all our AWS clusters, because death to classic load balancers. It's ridiculous it's not built in, honestly. Highly, highly recommend it, and it's not hard to set up. There are also Kubernetes services we need to expose that don't use HTTP; syslog receivers and PowerDNS servers, to name two examples. Both use the load balancer controller to provision NLBs, and it's great.
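A rough sketch of the syslog case (name, port, and labels are placeholders; the annotations are the LBC ones for an internal NLB with IP targets):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: syslog-receiver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external          # hand this Service to the LBC, not the legacy in-tree controller
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  type: LoadBalancer
  selector:
    app: syslog-receiver
  ports:
    - name: syslog
      protocol: UDP
      port: 514
      targetPort: 514
```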

I’m still a huge fan of using nginx over the LBC for ingress resources, however, because the ingress object (not the nginx LoadBalancer service, mind you) is consistent no matter where you deploy it. We are a heavily multicloud company though - I’ve got literally dozens of GKE and AKS clusters too, so having the consistent config and, more importantly, the consistent data flow paradigm is huge.

The Iron Rose fucked around with this message at 01:18 on Dec 14, 2023

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Hit me up for all my tag-based EKS controller IAM policies if you're running in a multi-tenant account where the managed ones are insecure

Junkiebev
Jan 18, 2002


Feel the progress.

Blinkz0rz posted:

Also I picked up a project to build an OVA with Ansible vs a shitload of hairy shell scripts so we can build various flavors of Ubuntu and centos/rhel and I thought of this

Is packer the move here?

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Junkiebev posted:

Is packer the move here?

Packer is how we eventually end up with the OVA but right now it's provisioned using shell scripts written exclusively for CentOS.

My biggest complaint is that apple migrating to arm has hosed virtualization that isn't docker pretty badly.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Anyone have good cheatsheets for common controllers' Prometheus metrics? Most stuff seems to be undocumented and a lot of metric names/samples won't show up in metrics output until a specific error case is actually encountered. Mostly interested in AWS stuff (Load Balancer Controller, CSI drivers, etc.).

22 Eargesplitten
Oct 10, 2010



Not having K8s / Terraform experience seems like it's really holding me back from a lot of jobs in my latest job search. Are there generally accepted/respected certs for it? I'm seeing Terraform Associate and CKA, are there recommended courses?

Warbird
May 23, 2012

America's Favorite Dumbass

Have you tried being a functional alcoholic OP? Correlation is not causation but it couldn’t hurt to try.

Hadlock
Nov 9, 2004

CKA seems to be the only widely recognized cert (due to being sponsored directly by CNCF)

Not sure if AWS and Azure have EKS/AKS-specific certs, but maybe worth looking into for larger corporate gigs

Warbird posted:

Have you tried being a functional alcoholic OP? Correlation is not causation but it couldn’t hurt to try.

:hmmyes:

Hadlock
Nov 9, 2004

In late 2020 we hired (against my advice) some guy who had a working, up-to-date GitHub repo that could spin up an opinionated EKS cluster and security groups

He was terrible, and fixing/killing his projects is used as the prime example in my "tell me about a conflict you had with a co-worker and how you fixed it" stories in interviews, but the point is we still hired him largely on the basis of his GitHub repo (incredibly)

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

22 Eargesplitten posted:

Not having K8s / Terraform experience seems like it's really holding me back from a lot of jobs in my latest job search. Are there generally accepted/respected certs for it? I'm seeing Terraform Associate and CKA, are there recommended courses?

Certs aren’t needed for terraform at all and you can learn it in an afternoon or two. Do that, put some silly hobby project on your GitHub, and put it on your resume under your “consulting” section

The CKA is usually the standard for kube certs but again, not sure how meaningful the certs part of it is for getting work. I’ve never seen someone with it in the wild, but my coworker did get his a few moons back.

Hadlock
Nov 9, 2004

The Iron Rose posted:

Certs aren’t needed for terraform at all and you can learn it in an afternoon or two. Do that, put some silly hobby project on your GitHub, and put it on your resume under your “consulting” section

The CKA is usually the standard for kube certs but again, not sure how meaningful the certs part of it is for getting work. I’ve never seen someone with it in the wild, but my coworker did get his a few moons back.

Strong agree on all points

Everyone I've seen hired for IaC K8S stuff either pushed for it at work and got it up and running or were on the team when it happened

I'm sure there's a comparison to screen actors guild here, but I'm not cultured enough to make it

Edit: oh it's "you have to have already worked on a SAG film to get hired for future SAG films"

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
oh also pro tip never use terraform for deploying kubernetes objects.

It’s fine for provisioning cloud resources: the control plane, the node groups, networking, etc. but the existence of the terraform kubernetes provider is a cruel lie that’s set up entirely to deceive junior devops engineers.

For deploying k8s objects, do it with plain old kubectl apply for a while to understand the object formats, but once you’ve got that, helm is the de facto standard for templating and deploying k8s objects. There are a few rough edges regarding custom resources/CRDs, but it’s both simple and very helpful. Helmfile is great for orchestration of multiple helm releases in concert.
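A minimal helmfile.yaml sketch of that "multiple releases in concert" idea (chart versions and values files are placeholders you'd pin yourself):

```yaml
# helmfile.yaml -- declare a set of releases and apply them together with `helmfile apply`
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: jetstack
    url: https://charts.jetstack.io

releases:
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: 4.8.3                  # placeholder: pin whatever you've actually tested
    values:
      - values/ingress-nginx.yaml   # placeholder values file
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager
    version: v1.13.3                # placeholder
    values:
      - installCRDs: true           # inline values also work
```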

The Iron Rose fucked around with this message at 01:04 on Dec 17, 2023

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
From firsthand experience, the terraform cert is loving pathetic. Don't bother with it.

Hadlock
Nov 9, 2004

The Iron Rose posted:

oh also pro tip never use terraform for deploying kubernetes objects.

It’s fine for provisioning cloud resources: the control plane, the node groups, networking, etc. but the existence of the terraform kubernetes provider is a cruel lie that’s set up entirely to deceive junior devops engineers.

loving firing on all cylinders today, mostly agree

I'm currently setting up greenfield terraform for new company. Besides the cluster I'm using the helm provider to ONLY do nginx, external dns, cert manager and ArgoCD

The first three are really to only expose ArgoCD, all my other helm nonsense gets deployed via ArgoCD (or, that's the plan)
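That bootstrap-only pattern looks roughly like this with the Terraform helm provider (repo URLs are the public chart repos; versions and the values file are placeholders):

```hcl
# Bootstrap-only Helm releases via Terraform; everything else gets handed off to Argo CD afterwards.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
  version          = "4.8.3" # placeholder
}

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
  version          = "5.51.6" # placeholder

  # placeholder values file for ingress/hostname settings
  values = [file("${path.module}/values/argocd.yaml")]
}
```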

At my last job my predecessor used terraform to deploy short lived helm charts in a terraform repo that was somehow a prod deploy dependency. hosed up secrets file in develop? Hope you enjoy debugging a hosed prod deploy :smithicide:

The Fool
Oct 16, 2003


Matt Zerella posted:

From firsthand experience, the terraform cert is loving pathetic. Dont bother with it.


I did it at hashiconf this year because I was there and it was free

it was very easy

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

Hadlock posted:


At my last job my predecessor used terraform to deploy short lived helm charts in a terraform repo that was somehow a prod deploy dependency. hosed up secrets file in develop? Hope you enjoy debugging a hosed prod deploy :smithicide:

I can compete with this! My predecessor used makefiles to template and apply terraform code to do deployments from laptops with no remote state at all.

The dev team in question is just the worst, but at least they’ve moved to using makefiles to deploy kube objects directly from their laptops to prod instead of having terraform in the middle of it too. Two years people have tried to force them to use CI/CD (or rootless containers, or our artifactory repo with vuln scanning, or having more than one service account with owner permissions across dev/staging/prod, or stop creating GCP projects and elastic/kube clusters costing 10k a month for every single developer, or even delete said resources when people leave, etc) and they’ve just never done poo poo and management doesn’t seem inclined to force them.

Right now they’re trying to do a remote reindex of 4TB of elasticsearch data from prod to staging every two weeks and are freaking out about the slightest pushback. And they have the audacity to be condescending assholes about it too! Just the loving worst.

Hadlock
Nov 9, 2004

The Iron Rose posted:

I can compete with this! My predecessor used makefiles to template and apply terraform code to do deployments from laptops with no remote state at all.

Right before I left my job two jobs ago, my coworker (who was angling hard for a management job of some sort) introduced ansible (via make!) as a templating solution for cloudformation :psyduck:, and not just single templates, but like, literally 5, and oftentimes as many as 7 layers of templates deep :psypop:

Thankfully he accepted a job at a blockchain company and timed it just perfectly for his negotiated Bitcoin-backed options to drop from the high of $70k down to about $35k. Couldn't have happened to a nicer guy

TL;DR if you need more than one layer of template overrides you've really hosed up

Going all the way back to 2015, I'm extremely wary when someone has a Makefile sitting in the root directory of a repo; usually it was over-designed and largely unmaintainable by anyone other than the original author

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Oh god, one of the first major projects I worked on at this one company was building, extending, and maintaining a Ruby DSL over CloudFormation back in 2015. I asked the 10xer who wrote it why and he never gave me a good answer.

It did cement an age-old adage: the more layers of abstraction, the bigger and more opaque the stack trace.

barkbell
Apr 14, 2006

woof
VP of engineering got fired and I got a bunch of his responsibilities, so I've taken over as lead of the "devops" team, which as far as I can tell just does whatever nonsense every app dev team requests without question, like spin up 10k/mo databases for product's pet projects that aren't even in production yet. Thinking the correct play here is to just turn it all off, but thought I'd check with the experts here first

Hadlock
Nov 9, 2004

Goondolences

I would gate it with a ticketing system and then con one of the project managers into "handling the devops team as a 5% of your time project" to act as a firewall and maybe throw in a new jira required field that amounts to "will this ticket result in a higher $opex?"

Even if (when) you get enormous pushback it should have done the job of setting expectations going forward and drawing a line in the sand

Vrih
Apr 4, 2004
:)

The Iron Rose posted:

The CKA is usually the standard for kube certs but again, not sure how meaningful the certs part of it is for getting work. I’ve never seen someone with it in the wild, but my coworker did get his a few moons back.

It's one of those annoying things where the CKA is probably the best cert you can get, but it's already highly irrelevant. It focuses a lot on things like backing up etcd and deploying your own cluster without any kind of framework. If you're running on any cloud platform, then it's pretty much irrelevant. I only took it because I could, and had some experience of deploying Kops clusters and blowing up etcd from a few years ago, and to banish some demons from that.

What it doesn't touch at all is the practicalities of administering a cluster, all of the additional services you need to deploy in a real-world scenario to manage ingress, meshing, certs, managing resilience of the nodes, etc.

That being said, having the cert shows intent, and at least it's a very fun and practical exam to do.

Hadlock
Nov 9, 2004

13 posts in the last decade, welcome back, I guess

drunk mutt
Jul 5, 2011

I just think they're neat
I'm not a big fan of helm as it drives some pretty nasty anti-patterns in how the resources are rendered down through templates. This starts to get hairy with even the slightest divergence between deployment environments; especially if a small subset needs a fairly drastic difference in what resources are being provisioned.

Been trying to get my team to shift their focus over to kustomize, but I guess it's hard for them to wrap their heads around it; I am very much enjoying it paired with Argo and GitHub, though. Guess I should add that I do consider "trunk-based" development a required sensible default in this pattern.
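For anyone who hasn't seen it, the kustomize shape is a base plus per-environment overlays that patch only what actually differs (paths and patch names here are invented):

```yaml
# base/kustomization.yaml -- the shared definition of the app
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml -- only the prod-specific delta lives here
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml   # e.g. bump replicas/resources for prod only
```

`kubectl apply -k overlays/prod` (or pointing Argo CD at that directory) renders and applies the overlay.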

Junkiebev
Jan 18, 2002


Feel the progress.

The Iron Rose posted:

oh also pro tip never use terraform for deploying kubernetes objects.

It’s fine for provisioning cloud resources: the control plane, the node groups, networking, etc. but the existence of the terraform kubernetes provider is a cruel lie that’s set up entirely to deceive junior devops engineers.

For deploying k8s objects, do it with plain old kubectl apply for a while to understand the object formats, but once you’ve got that, helm is the de facto standard for templating and deploying k8s objects. There are a few rough edges regarding custom resources/CRDs, but it’s both simple and very helpful. Helmfile is great for orchestration of multiple helm releases in concert.

this has been the opposite of my experience but I have already mastered being a functional alcoholic so I might be uniquely positioned for success

Junkiebev
Jan 18, 2002


Feel the progress.

Versioned cert-manager CRDs, for instance

http data object to grab yaml list by release

Split by the document separator into list

for_each the list with yamldecode() as a Kubernetes Manifest on the Kubernetes provider

The only downside is it removes # Comments, but otherwise it works like a charm
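A sketch of that approach in Terraform, under the assumption it's the http data source feeding kubernetes_manifest (the release URL and version are placeholders):

```hcl
# Fetch the versioned multi-document CRD manifest over HTTP (URL/version are placeholders)
data "http" "cert_manager_crds" {
  url = "https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.crds.yaml"
}

locals {
  # Split on the YAML document separator and drop empty chunks
  crd_docs = [
    for doc in split("---", data.http.cert_manager_crds.response_body) : doc
    if trimspace(doc) != ""
  ]
}

resource "kubernetes_manifest" "cert_manager_crd" {
  # for_each wants a map, so key each document by its index
  for_each = { for i, doc in local.crd_docs : tostring(i) => doc }

  # yamldecode() turns each document into an object; note this is where # comments get dropped
  manifest = yamldecode(each.value)
}
```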

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

Junkiebev posted:

Versioned cert-manager CRDs, for instance

http data object to grab yaml list by release

Split by the document separator into list

for_each the list with yamldecode() as a Kubernetes Manifest on the Kubernetes provider

The only downside is it removes # Comments, but otherwise it works like a charm

:barf:

Obviously YAML has its problems, but HCL is somehow still even worse! Some abstractions are bad! HCL is a garbage language for garbage people! Death to state files!!

The Fool
Oct 16, 2003


I did a handful of AoC exercises in HCL/Terraform this year

I stopped because I decided I didn't hate myself that much

Hadlock
Nov 9, 2004

drunk mutt posted:

I'm not a big fan of helm as it drives some pretty nasty anti-patterns in how the resources are rendered down through templates. This starts to get hairy with even the slightest divergence between deployment environments; especially if a small subset needs a fairly drastic difference in what resources are being provisioned.

What happened that your environments are so different? Genuinely curious.


For all the complaints I hear about YAML, it seems like every one of them is fixed by using a modern IDE and/or linting

Hadlock fucked around with this message at 20:27 on Dec 17, 2023
