Woof Blitzer
Dec 29, 2012

[-]
What kind of sick individual invented YAML anyways


duck monster
Dec 15, 2004

Woof Blitzer posted:

What kind of sick individual invented YAML anyways

Yet Another Maladjusted Lout.

duck monster
Dec 15, 2004

This is fun. Deploy script that uses the IMAGEVERSION var in .deploy to drive a few things in k8s

code:
source .deploy
export API_IMAGE=registry.digitalocean.com/<stuff goes here>:$IMAGEVERSION
doctl registry login
docker build -t <stuff here>:$IMAGEVERSION .
docker tag <stuff here>:$IMAGEVERSION $API_IMAGE
docker push $API_IMAGE
pushd <k8s dir>
yq e -i '.spec.template.spec.containers[0].image = strenv(API_IMAGE)' api/api-deployment.yaml   # update the k8s yaml
kubectl replace -f api/api-deployment.yaml
popd
There's a bit more to it that'd get me in trouble to reveal. But that's a nifty little script that we put on a git hook on a deploy branch and magic! Instant deployments. Just update IMAGEVERSION in .deploy in the git repo, push to del, and you're good to go.

Next step is to get some CI in on it (probably Jenkins) to run the tests and make sure we're not pushing hot garbage. I think. If the boss will let me
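For anyone following along at home: the `.deploy` file the script sources is presumably just a shell fragment carrying the version pin. A sketch (the filename comes from the post, the version number is made up):

```shell
# Hypothetical contents of .deploy -- the deploy script only needs
# IMAGEVERSION out of it.
cat > .deploy <<'EOF'
IMAGEVERSION=1.4.2
EOF

# Same as the first line of the deploy script:
source .deploy
echo "image version: $IMAGEVERSION"
```

Bumping the version in that one file and pushing is then the whole deploy interface.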

12 rats tied together
Sep 7, 2006

yaml is a totally fine markup language, it's basically at feature parity with XML, including all the fun stuff that makes both formats into RCE vectors

its problem is that it looks so simple that most people don't bother to read the documentation or implement even a small fraction of the specification. python's standard yaml library is particularly bad about this, and for a while vs code's default syntax plugin flagged yaml type tags as syntax errors, for instance
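As an illustration of the RCE point: a YAML type tag can construct arbitrary objects at parse time. This is the classic PyYAML payload; old-style `yaml.load(stream)` (the pre-5.1 default loader) would execute it, while `yaml.safe_load` rejects it:

```yaml
# never feed untrusted input to an unsafe loader:
# this tag runs os.system("id") while the document is being parsed
exploit: !!python/object/apply:os.system ["id"]
```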

Hadlock
Nov 9, 2004

Yeah yaml is fine in TYOOL 2022, everyone uses linters now

I don't think I've ever had a problem with it :confused:

The big advantage it has over json is that comments are explicitly allowed, and if you're a big baby who must have everything in json there are in-ide conversion plugins

Zorak of Michigan
Jun 10, 2006


I started writing scripts in Perl 4. The idea of a human-readable file format that can encode complex data structures and be parsed by almost any language feels vaguely miraculous to me.

Volguus
Mar 3, 2009
We have TOML now for configuration. It has everything one (me) could ever want (probably RCE vectors as well). And it does not throw a tantrum over spaces vs tabs.

Hadlock
Nov 9, 2004

Volguus posted:

We have TOML ... does not throw an tantrum over spaces vs tabs.

Person who doesn't use a linter/works with people who don't use linters spotted

Volguus
Mar 3, 2009

Hadlock posted:

Person who doesn't use a linter/works with people who don't use linters spotted

Yeah, we only use linters on python and c++ (with go under evaluation I think, no idea about that one) for now. Conf files (whatever format they're in), no. Mainly because we don't use yaml, so there's no need to.

Junkiebev
Jan 18, 2002


Feel the progress.

duck monster posted:

This is fun. Deploy script that uses IMAGEVERSION var in .deploy to drive a few things in k8 ..... magic! Instant deployments.

you should use kustomize for this imho - it's k8s native and base+overlays is slick and easy to understand

it spits out all k8s manifests and you publish that as a release artifact

Junkiebev fucked around with this message at 21:38 on Jun 17, 2022
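The base+overlays split Junkiebev means looks roughly like this (paths, image name, and tag are made up for illustration):

```yaml
# overlays/prod/kustomization.yaml
resources:
  - ../../base          # base holds the plain api-deployment.yaml
images:
  - name: registry.digitalocean.com/example/api   # hypothetical image name
    newTag: "1.4.2"     # the one field a deploy hook bumps, instead of yq-editing the manifest
```

`kubectl apply -k overlays/prod` (or `kustomize build overlays/prod`) then emits the fully-rendered manifests, which is what you'd publish as the release artifact.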

Hadlock
Nov 9, 2004

Volguus posted:

Yeah, we only use linters on..... conf files , no.

:barf:

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

12 rats tied together posted:

basically at feature parity with XML

lol

Dukes Mayo Clinic
Aug 31, 2009
it has been so many I lost count zero days since last thinking about xmlstarlet

:mad:

Warbird
May 23, 2012

America's Favorite Dumbass

Yaml is fine with a decent editor to keep the “fun” in check, but the real pita is the lack of logical linting integration. I miss intellisense when working in Yaml files, especially in mature ansible/terraform workflows where references may be pulled in from any number of files and places.

Had a cloud migration under Microsoft some time back with an entirely custom Terraform framework that was both black box and barely documented. It was a loving nightmare that you had to check in and push through a pipeline every 3 minutes to see if you had achieved whatever specific set of key value pairs it wanted. Official guidance was to “go look at an already migrated app to see what it wants”. Naturally they also kept updating it without warning so things would break and also render the reference pipelines useless. Goddamn project put me in the hospital.

The Fool
Oct 16, 2003


Warbird posted:

Had a cloud migration under Microsoft some time back with an entirely custom Terraform framework that was both black box and barely documented. ..... Goddamn project put me in the hospital.

this sounds a lot like our legacy stack

we replaced it with TFE before I started and I absolutely dread working with any apps that are still on the old pipelines

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
I store all my mark up and configuration in MS Access and convert it to DynamoDB when I'm ready to deploy

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Storing configuration? Yeah sure, no problem, just use a complex distributed messaging service with a bunch of moving parts like consul and hook it into everything, nothing will go wrong and this is a fantastic idea.

e: Alternatively: you can use salt, like rhn products do under the hood now and get an underperformant multi-part eventual-consistency registration handshake powered by pixie dust and prayers.

Bhodi fucked around with this message at 16:10 on Jun 19, 2022

Hadlock
Nov 9, 2004

Bhodi posted:

Storing configuration? Yeah sure no problem just use a complex distributed messaging service with a bunch of moving parts like consul and hook it into everything, nothing will go wrong and this is a fantastic idea.

I legit can't tell if you're making fun of k8s and etcd or not

Dukes Mayo Clinic
Aug 31, 2009
With roughly equal exposure I have spent more time cussing at Zookeeper than etcd, and this somehow correlates to why everyone loves k8s and forgets mesos.

LochNessMonster
Feb 3, 2005

I need about three fitty


Dukes Mayo Clinic posted:

With roughly equal exposure I have spent more time cussing at Zookeeper than etcd, and this somehow correlates to why everyone loves k8s and forgets mesos.

There are more than enough reasons to hate Mesos. Don’t put it all in Zookeeper’s shoes.

Junkiebev
Jan 18, 2002


Feel the progress.

it’s fun explaining promise-theory to BC/DR staff and I get to do it 2 times a week

LochNessMonster
Feb 3, 2005

I need about three fitty


edit: Wrong thread

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Hadlock posted:

I legit can't tell if you're making fun of k8s and etcd or not
I mean, etcd is the most reliable and fastest of the bunch

FWIW

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
What’s the current hotness for feature flagging in k8s? Still configmaps with the app hitting the watch api and reading the change?
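For reference, the file-mount variant of that configmap pattern needs no watch API at all: mount the ConfigMap as a volume and re-read the file, since the kubelet swaps the projected file (via a symlink flip) shortly after the ConfigMap changes. A minimal sketch, with a hypothetical mount path:

```shell
# Reads a boolean flag from a file as mounted from a ConfigMap,
# e.g. /etc/flags/enable_beta. Re-reading on every check picks up
# ConfigMap edits without a pod restart.
flag_enabled() {
  [ "$(cat "$1" 2>/dev/null)" = "true" ]
}

if flag_enabled /etc/flags/enable_beta; then
  echo "beta on"
else
  echo "beta off"
fi
```

The caveat is propagation lag (the kubelet sync period), which is why apps wanting instant flips hit the watch API or use a dedicated flag service instead.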

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Check out flagger

Wicaeed
Feb 8, 2005
We use LaunchDarkly, it seems to work well.

Question Time: What's the most sane way to deploy Helm charts these days?

I'm building a Rancher cluster for an IT-Ops team to run some hosted apps (Atlassian Jira/Confluence/Stash) and some day also run monitoring tools like Prometheus as well; however, nobody on this team wants to use the existing Platform-team-owned CI/CD environment that has historically been used with K8s thus far.

I'm thinking of exploring GitHub Actions, GitLab Runners and maybe even BitBucket Agents to see if any of these will meet their requirements:

* Should be able to be hosted internally (i.e., on a private, internet-connected subnet in our datacenter) as a VM
* Ideally, only the runner itself would need to be hosted on-prem. The management plane can live completely in the cloud w/o issue.

Zephirus
May 18, 2004

BRRRR......CHK

Wicaeed posted:

Question Time: What's the most sane way to deploy Helm charts these days?

I'm building a Rancher cluster for an IT-Ops team to run some hosted apps (Atlassian Jira/Confluence/Stash) ..... nobody on this team wants to use the existing Platform-team owned CI/CD environment that has historically been used with K8s thus far.

Github/bitbucket work well in this form in my experience; gitlab is IMO a total shitshow for running on-prem agents from cloud. If you don't have a k8s cluster you have to rely on their wonky fork of docker machine, and if you do have one you have to rely on their k8s executor, which is just as shoddy.

I would implore you to reconsider running atlassian apps as containers unless you rarely change the versions. Jira and confluence are particularly fussy about upgrading between container versions - both have required manual sql fuckery between minor versions for us recently.

If you want to keep running them licensed beyond next year you'll need to move to datacentre sku which requires shared storage (and a metric wheelbarrow of cash) which may be an issue depending on your container stack.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

Wicaeed posted:

Question Time: What's the most sane way to deploy Helm charts these days?

I'm building a Rancher cluster for an IT-Ops team to run some hosted apps (Atlassian Jira/Confluence/Stash) ..... nobody on this team wants to use the existing Platform-team owned CI/CD environment that has historically been used with K8s thus far.

I implore you to use the existing CI/CD environment unless it’s truly awful. Will probably save you - and your security team - a lot of trouble and get you a lot more support.

As far as deployment goes, I’ve used both helm and ansible to manage releases, and while helm is definitely *better*, especially now that tiller is dead, I’m still kinda meh on it. As far as orchestrating releases goes, you can build your own test/apply logic really easily into your runners, or use something like helmfile to do the heavy lifting for you. Honestly though, I’m open to some alternatives here too, Helm has been thoroughly so-so so far. I think it’s because I just don’t love the extra layer of abstraction between me and the deployment manifests, I’d rather set envvars and write HPAs myself. I guess the utility goes up when using third party services, but still.

Haven’t used bitbucket, but gitlab or GitHub will certainly work with self hosted runners and a non-hosted VCS. I don’t mind the gitlab k8s executor, but we don’t do anything very complicated with it. You should probably use whichever option the rest of your developers are already using. Frankly I’m surprised you even have the option.

The Iron Rose fucked around with this message at 06:50 on Jun 28, 2022
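The helmfile option mentioned above is just a declarative release list that `helmfile apply` diffs and upgrades in one shot. A minimal sketch (chart, version pin, and values path are illustrative, not a recommendation):

```yaml
# helmfile.yaml
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: prometheus
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
    version: 36.2.0          # pin assumed; use whatever you've actually tested
    values:
      - values/prometheus.yaml
```

That buys you the test/apply orchestration you'd otherwise hand-roll in runner scripts, without adding another templating layer on top of the charts themselves.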

Wicaeed
Feb 8, 2005

The Iron Rose posted:

I implore you to use the existing CI/CD environment unless it’s truly awful. ..... You should probably use whichever option the rest of your developers are already using. Frankly I’m surprised you even have the option.

(Un)Fortunately there really is no capital-S Security team here :v:

The use case for this is going to be just this Rancher environment and just this team & their (small) set of requirements, so I'm not feeling too shy about diverging from our existing CI/CD (Jenkins) & tooling, as that's focused mainly on our product teams' ci/cd environment for putting code onto servers.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
The fat tail usually isn't terribly difficult if you're writing your pipelines in a fairly generic way (i.e. using shell scripts or equivalents instead of too many CI platform-specific constructs). The long tail is often the more difficult thing: having agents work correctly with your infrastructure's bespoke DNS configurations, getting access to secrets, or putting your audit logs where you're required to have them. If none of that stuff matters to you, off-roading might actually be the best option. But most organizations don't invest into paved paths and internal platforms because they think they can solve the obvious 99% use case better than a SaaS vendor can; they do it because the integration points between systems are hard, and most developers have better things to do than gently caress around at the undocumented fringes.

Vulture Culture fucked around with this message at 19:59 on Jul 3, 2022

Hadlock
Nov 9, 2004

We have EKS on AWS running kube2iam, which is deprecated

Roadmap says to move to IRSA

One guy had IRSA deployed to a sandbox env using a helm operator, which has now been deleted from the public Internet

There is some janky operator from some fly-by-night kubernetes consultant that one of my coworkers found; it's 4 months old and I'm not super confident running it in prod

Is the only correct way to run IRSA using IaC to manage it 100% through terraform? I am not finding a tremendous amount of information on the topic

22 Eargesplitten
Oct 10, 2010



I need to learn CI/CD if I'm going to get into any kind of DevOps position (just got fired from my last cloud system engineer position) and I vaguely know what it is, but I don't know enough to talk about it in an interview. Are there any good paint-by-numbers projects out there that I could work on to learn better?

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Hadlock posted:

We have EKS on AWS running kube2iam which is deprecated ..... Is the only correct way to run IRSA using IaC is to manage it 100% through terraform?

In theory there is also crossplane and ACK for trying to do it kubernetes native.

But yes it’s as janky as you imagine. And yea you have to peer it to each individual account to assume a role in that account. Yes all 112 of them.

luminalflux
May 27, 2005



Literally was having a conversation with my AWS SA about this today. IRSA is really gross to deal with because you need to add each cluster to the trust policy for each IAM role - there's no ability to trust something like "eks-pods.amazonaws.com" the way you can with ecs-tasks. https://github.com/aws/containers-roadmap/issues/1408 is the roadmap issue, please yell at your TAM/SA about this since it's gross as gently caress.
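Concretely, the per-cluster coupling lives in each role's trust policy: the role federates to one specific cluster's OIDC provider and pins a namespace:serviceaccount subject. The usual Terraform shape looks roughly like this (resource names, namespace, and service account are placeholders):

```hcl
data "aws_iam_policy_document" "irsa_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn] # one provider per cluster
    }

    # This is the bit that can't be wildcarded to a service principal:
    # the role trusts exactly one cluster's OIDC issuer URL.
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:api"] # namespace:serviceaccount
    }
  }
}

resource "aws_iam_role" "api" {
  name               = "api-irsa"
  assume_role_policy = data.aws_iam_policy_document.irsa_trust.json
}
```

Multiply that condition block by every cluster (and every account you peer to) and you get the jank described above.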

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

22 Eargesplitten posted:

I need to learn CI/CD if I'm going to get into any kind of DevOps position (just got fired from my last cloud system engineer position) and I vaguely know what it is, but don't know enough to talk about it in an interview. Are there any good paint by numbers projects out there that I could work on to learn better?

It's not really complicated and there's practically nothing to learn. Take poo poo that people do manually that shouldn't be manual and automate it, then plug it into a robust task-running platform like Azure Pipelines or GitHub Actions.

Look at any well-run OSS project on github and you'll see how they do it.

Docjowles
Apr 9, 2009

Sorry to hear about your job :(

GitHub Actions is a good and free way to experiment with CI/CD. As others said it’s really not a difficult concept. CI is automatically building and testing code when you push it. CD is actually automatically deploying that code once it passes tests. There are a billion tools for this including GitHub actions, Jenkins, CircleCI, Travis, AWS CodePipeline, Azure DevOps, ArgoCD, GitLab pipelines etc etc. Doesn’t really matter which one you pick to play with cause it’s all the same idea, and most are configured via YAML files (though of course they are all different formats :argh:). Definitely a good area to have on your resume.

Docjowles fucked around with this message at 05:44 on Jul 13, 2022

Hadlock
Nov 9, 2004

22 Eargesplitten posted:

I need to learn CI/CD if I'm going to get into any kind of DevOps position ..... Are there any good paint by numbers projects out there that I could work on to learn better?

Find your favorite open source project, fork it, then use GitHub actions and/or circle ci to run the test suite, update the test suite pass/fail badge, then build and push the container and also deploy it to k8s using either flux or argocd

Bonus points if you can orchestrate any of this in terraform, or at least lie about it convincingly.
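That exercise, sketched as a GitHub Actions workflow (the registry, `make test` target, and branch name are assumptions; the flux/argocd deploy step is left out):

```yaml
# .github/workflows/ci.yml
name: ci
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test        # assumption: the fork has a test target

  build-and-push:
    needs: test               # only runs if the test job passed
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
```

From there a flux or argocd instance watching the repo (or the image tag) picks up the new SHA, which is the CD half.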

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Seriously, I do tons of CI/CD projects these days and the hardest one I've ever worked on was still less difficult than the easiest software I've ever implemented.

Of course, being able to do both is a huge benefit. Too many orgs have decided these are separate skillsets, so you have people who are terrified of or incapable of reading or debugging code responsible for ensuring it compiles and deploys.

Docjowles
Apr 9, 2009

Yeah I see a LOT of job postings that boil down to “you will build and maintain our CI/CD pipelines”. And I have to wonder, what the gently caress is the current state of things there? Does a designated senior dev drag and drop code to the production server with an FTP client like it’s 2004?


LochNessMonster
Feb 3, 2005

I need about three fitty


Docjowles posted:

Yeah I see a LOT of job postings that boil down to “you will build and maintain our CI/CD pipelines”. And I have to wonder, what the gently caress is the current state of things there? Does a designated senior dev drag and drop code to the production server with an FTP client like it’s 2004?

I've worked at a few places like this. It's not that the devs have a really outdated way of deploying stuff, it's more that they can't be bothered to maintain and improve CI/CD pipelines.

This usually comes from bottlenecks like incomplete or inconsistent testing, improperly scaled infra, dependency updates, or security vulnerabilities that need to be patched. If you've got a large enough codebase and a slow, hierarchical organization, poo poo can take a long time to resolve. For devs this just distracts them from what they like most: writing code.

Managers might "just hire a DevOps guy" to take care of all these things as a dedicated resource.
