Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Docjowles posted:

Zookeeper 3.5 seems to have been in prerelease since dinosaurs roamed the earth so I am not holding my breath for that one.

(We still run 3.5 in production lol because it actually has functional support for certificate authentication)

It went alpha->beta a couple months ago

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


What are y’all using for docker garbage collection? Just docker prune?

freeasinbeer
Mar 26, 2015

by Fluffdaddy

jaegerx posted:

What are y’all using for docker garbage collection? Just docker prune?

Delete the nodes every so often.

Hughlander
May 11, 2005

jaegerx posted:

What are y’all using for docker garbage collection? Just docker prune?

spotify/docker-gc
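
If you stick with plain docker prune instead, a minimal sketch of scheduling it; the 3 a.m. slot and the 168h retention window are arbitrary assumptions:

code:
# Nightly cleanup with stock Docker. --all removes every unused image, not just
# dangling ones; add --volumes only if nothing stateful uses anonymous volumes.
0 3 * * * /usr/bin/docker system prune --all --force --filter "until=168h"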

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

StabbinHobo posted:

this somewhat impossible recursive chasing of a way to abstract away a state assumption is, in large part, why kafka was invented.

jury is still out on whether that's a good thing (i lean yes).

sorry that doesn't really help you though, because "rewrite everything to upgrade from rmq to kafka" is about as helpful as "install linux, problem solved".
Funny joke: some java dev on the monolithic app I'm talking about decided this exact thing about two years ago, so now we run two problems. Only half of the app has been ported, so we have both zk and kafka for the foreseeable future during the "transition period".

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
HACK = How Actual Code is Kept

Kafka still uses Zookeeper anyway though I thought? At least that's how I remember Kafka being deployed when I was doing it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

HACK = How Actual Code is Kept

Kafka still uses Zookeeper anyway though I thought? At least that's how I remember Kafka being deployed when I was doing it.
Yes. It's possible to run Kafka with an embedded Zookeeper rather than a discrete ZK cluster, but this is not recommended in a production deployment scenario.

Hadlock
Nov 9, 2004

I have a database-backed webapp that's deployed to k8s using helm.

QA is now a consumer of this product.

Right now, if you want to upgrade the webapp or db, we use helm upgrade myapp --set build=123 to upgrade the container version.

However, QA needs to reset the db periodically; our system can handle this if you delete the old PVC and let k8s auto-provision a new one.

What's the best way to handle this programmatically? Find the pvc that matches the helm deployment, then just kubectl delete pvc $myappDB?

Volguus
Mar 3, 2009

Hadlock posted:

I have a database-backed webapp that's deployed to k8s using helm.

QA is now a consumer of this product.

Right now, if you want to upgrade the webapp or db, we use helm upgrade myapp --set build=123 to upgrade the container version.

However, QA needs to reset the db periodically; our system can handle this if you delete the old PVC and let k8s auto-provision a new one.

What's the best way to handle this programmatically? Find the pvc that matches the helm deployment, then just kubectl delete pvc $myappDB?

Jesus, I know some of these words. Is kubernetes the holy grail of app deployment nowadays, and am I (a developer, with no interest whatsoever in managing build and deployment infrastructure, but I have to do it sometimes) just gonna have to get familiar with all this crap?

Hadlock
Nov 9, 2004

Download and install minikube or microk8s on your laptop; you can prototype all this stuff in your local env, and 99% of it will carry over to your cloud-based infra.
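
A minimal local loop, assuming minikube and kubectl are already installed (the manifest name is made up):

code:
minikube start                     # boot a local single-node cluster
kubectl get nodes                  # confirm the node reports Ready
kubectl apply -f deployment.yaml   # iterate on your manifests locally
minikube delete                    # tear the sandbox down when finished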

I'm developing one environment in the cloud and plan to have 6 set up by Wednesday; then we'll be programmatically spinning up and down six environments per branch (our QA department is insane) via an API call, and we'll probably have 100 environments live at any one time.

PVC is a persistent volume claim, i.e. an EBS volume, which is just a cloud hard drive.

Helm is sort of the deployment wrapper for kubernetes; people call it a package manager.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Volguus posted:

Jesus, I know some of these words. Is kubernetes the holy grail of app deployment nowadays, and am I (a developer, with no interest whatsoever in managing build and deployment infrastructure, but I have to do it sometimes) just gonna have to get familiar with all this crap?

yes

Hadlock posted:

I have a database-backed webapp that's deployed to k8s using helm.

QA is now a consumer of this product.

Right now, if you want to upgrade the webapp or db, we use helm upgrade myapp --set build=123 to upgrade the container version.

However, QA needs to reset the db periodically; our system can handle this if you delete the old PVC and let k8s auto-provision a new one.

What's the best way to handle this programmatically? Find the pvc that matches the helm deployment, then just kubectl delete pvc $myappDB?

For QA purposes, why bother with PV/PVCs? Mount the DB directory from an emptyDir and it all goes away when you trash the pod. If you absolutely need to use PVCs, maybe put a label on that PVC and then you can delete it directly through the selector.
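
A sketch of both options, with hypothetical names (the app label, pod name, and postgres image are stand-ins):

code:
# Option 1: have the chart label the PVC, then reset by selector.
# Note: deletion waits until no pod is using the claim (pvc-protection).
kubectl delete pvc -l app=myapp

# Option 2: skip PVCs for QA entirely; an emptyDir dies with the pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qa-db
spec:
  containers:
    - name: db
      image: postgres:11
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      emptyDir: {}
EOF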

Hadlock
Nov 9, 2004

Our QA cycle is long, and the selenium guys need a week or more to develop tests and get developers to write code that will pass them. Our QA teams are on three different continents, and engineering is on the same continents but in different countries, so communication sucks and the dev cycle is slow, but it's cheaper than valley engineers #globalism

We have ephemeral QA environments already for smoke tests and db migration validation, plus the named-pet systems; these new per-branch environments are the Goldilocks middle ground.

The selector sounds good. Here's hoping k8s isn't smart enough to stop me from shooting my foot off by deleting the disk out from under a live db pod.

Hadlock fucked around with this message at 10:42 on Dec 15, 2018

Mao Zedong Thot
Oct 16, 2008


Yeah, the answer to most things in k8s is labels. And when it's not, it's probably annotations.
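
For example (the PVC name and keys here are hypothetical):

code:
kubectl label pvc myapp-db env=qa                  # labels are selectable...
kubectl get pvc -l env=qa                          # ...so you can target groups
kubectl annotate pvc myapp-db reset-policy=weekly  # annotations are just metadata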

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Hadlock posted:

Our QA cycle is long, and the selenium guys need a week or more to develop tests and get developers to write code that will pass them. Our QA teams are on three different continents, and engineering is on the same continents but in different countries, so communication sucks and the dev cycle is slow, but it's cheaper than valley engineers #globalism

We have ephemeral QA environments already for smoke tests and db migration validation, plus the named-pet systems; these new per-branch environments are the Goldilocks middle ground.

The selector sounds good. Here's hoping k8s isn't smart enough to stop me from shooting my foot off by deleting the disk out from under a live db pod.

I think you'd be able to get away with emptyDir more often than not, unless they are restarting the DB often.

We use terraform talking natively to an external DB instance (that has all of the dev environments as separate internal DBs) to do this, and the workflow around terraform destroy is pretty simple to grasp.
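
A sketch of that reset workflow, assuming a workspace-per-environment layout (the workspace name is made up):

code:
terraform workspace select qa-feature-123   # one workspace per dev environment
terraform destroy -auto-approve             # drop that environment's database
terraform apply -auto-approve               # recreate it from scratch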

FamDav
Mar 29, 2008

Cancelbot posted:

We had one team use Spinnaker and it's clunky and very slow, and we are a .NET/Windows company, which doesn't fit as nicely into some of the products or practices available. Fortunately something magical happened last week: our infrastructure team obliterated the Spinnaker server because it had a "QA" tag, and they deleted all the backups as well. So right now that one team is being ported to ECS as the first goal in our "move all poo poo to containers" strategy.

Edit: We're probably going to restore Spinnaker but make it more ECS-focused than huge-Windows-AMI-focused.

So anecdotally, there are some things I worked on as part of the design for https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/ that should eventually make integration with Spinnaker a snap. Spoilers!

freeasinbeer
Mar 26, 2015

by Fluffdaddy

FamDav posted:

So anecdotally, there are some things I worked on as part of the design for https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/ that should eventually make integration with Spinnaker a snap. Spoilers!

If only ECS had feature parity with k8s.

Votlook
Aug 20, 2005
I'm using ansible for the first time today, and wow it feels scary! I can't stop thinking that running glorified shell scripts over SSH is a terrible idea.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Ansible is soooo much better than not having anything, though. And Chef/Puppet are conceptually nicer, but in my experience they have more issues.

With that said, the new hotness is kubernetes, which largely makes most of that tooling redundant.

Votlook
Aug 20, 2005
Time to start lobbying for Kubernetes then! Is it any good?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Votlook posted:

Time to start lobbying for Kubernetes then! Is it any good?

Those tools aren't equivalent. Kubernetes is a container orchestration platform. Chef/Puppet/Ansible are for infrastructure configuration. Not every application can be containerized smoothly.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
They might not be exactly equivalent, but they address similar problem domains. I use cloud-init (kops) to get my worker nodes talking to the masters and then handle all my config management in kubernetes. If I need to patch, I roll out a new worker AMI.

Nomad and Mesos can run uncontainerized workloads if your stuff really can't be dockerized. But I truly think that Ansible/puppet/chef are not the right tools to control your workloads. They are super fiddly at times, and I'd rather control everything up a level than worry about individual nodes. And if you are gonna build something equivalent on those platforms, you should look at k8s/Mesos/Nomad.

This presumes you are running software on Linux. If not, then condolences/ignore me.
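
A sketch of that patch-by-replacement flow, assuming kops manages the instance groups:

code:
kops edit ig nodes                  # point the instance group at the new AMI
kops update cluster --yes           # push the change to the cloud provider
kops rolling-update cluster --yes   # drain and replace workers one at a time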

freeasinbeer fucked around with this message at 18:23 on Dec 18, 2018

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Votlook posted:

Time to start lobbying for Kubernetes then! Is it any good?

devops.txt

Mao Zedong Thot
Oct 16, 2008


New Yorp New Yorp posted:

Those tools aren't equivalent. Kubernetes is a container orchestration platform. Chef/Puppet/Ansible are for infrastructure configuration. Not every application can be containerized smoothly.

Plus you have to configure your kubernetes nodes with something.

Chef and Puppet are great in theory, and anywhere from okay to :suicide: in practice. Ansible is much less horrible. Salt is really dope too, but I haven't used it for anything significant personally.

tortilla_chip
Jun 13, 2007

k-partite
Salt is best, it really shines when you actually want to orchestrate something.

Docjowles
Apr 9, 2009

Salt is cool and good, and it bums me out that it's a very, very distant also-ran in terms of market share. I enjoyed working with it so much more than Chef.

Votlook posted:

Time to start lobbying for Kubernetes then! Is it any good?

https://twitter.com/dril/status/473265809079693312

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
it's all https://github.com/brandonhilkert/fucking_shell_scripts

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Mao Zedong Thot posted:

Plus you have to configure your kubernetes nodes with something.
The "immutable hosts" train of thought suggests that you configure it once on first boot with a tool like Ignition, and then you never touch it after that. If it's something like ContainerLinux then it'll auto-update itself with kernel upgrades. Any significant config change means nuking the cattle node and spinning up a new one. Which is totally fine if you've got a system like Kubernetes behind it to manage the rescheduling of workloads across nodes; not so much if you don't.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

minato posted:

The "immutable hosts" train of thought suggests that you configure it once on first boot with a tool like Ignition, and then you never touch it after that. If it's something like ContainerLinux then it'll auto-update itself with kernel upgrades. Any significant config change means nuking the cattle node and spinning up a new one. Which is totally fine if you've got a system like Kubernetes behind it to manage the rescheduling of workloads across nodes; not so much if you don't.

And now you have 2 problems

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's very hard to buy into containers halfway and see any kind of benefit whatsoever

Votlook
Aug 20, 2005

minato posted:

The "immutable hosts" train of thought suggests that you configure it once on first boot with a tool like Ignition, and then you never touch it after that. If it's something like ContainerLinux then it'll auto-update itself with kernel upgrades. Any significant config change means nuking the cattle node and spinning up a new one. Which is totally fine if you've got a system like Kubernetes behind it to manage the rescheduling of workloads across nodes; not so much if you don't.

In my previous job I used immutable servers (without kubernetes, unfortunately), and while it takes more effort upfront, I loved that once I had a tested AMI, it was pretty much guaranteed to work in production.
Pushing updates with Ansible just feels so loving brittle in comparison.

Votlook
Aug 20, 2005

Vulture Culture posted:

It's very hard to buy into containers halfway and see any kind of benefit whatsoever

True. At my new job they embrace docker on the laptop and banish docker on the server; it's strange.
All our applications come with a fancy script that builds a docker-compose file for their dependencies, so I can run all the services I need for development on my laptop with very little effort.
For our servers, however, Ansible is used to muck around with jars and shell scripts.
Oh, and the versions used for services in the docker-compose files and in Ansible are totally not in sync.

Vanadium
Jan 8, 2005

Docker is cool because my team lead built a feature by writing a bunch of ruby scripts in ~ on some snowflake server, and I just grabbed them and put them into a repo next to a Dockerfile and some ECS json and now we're devops.

Blinkz0rz posted:

And now you have 2 problems

If I could keep anything about our deployment setups down to 2 problems, I'd be ecstatic :v:

Methanar
Sep 26, 2013

by the sex ghost
My deployment scheme is actually like 9 different processes depending on what you're pushing; it's really cool.

One of the processes is something like: Chef generates ansible manifests with erb templates, then ansible uses the chef inventory as a dynamic inventory source, then ansible executes shell scripts on remote boxes and sometimes messes with haproxy to drain backends.

The cloud overflow stuff is a shell script in rc.local that runs on boot.

Also, there's a thing with openresty and some lua scripts dynamically rewriting urls to s3 based on the build that's supposed to be deployed to a certain environment/region/rack.

Methanar fucked around with this message at 21:46 on Dec 18, 2018

12 rats tied together
Sep 7, 2006

Votlook posted:

I'm using ansible for the first time today, and wow it feels scary! I can't stop thinking that running glorified shell scripts over SSH is a terrible idea.
[...]
Time to start lobbying for Kubernetes then! Is it any good?

If your level of involvement in the application is "how do I perform <some configuration task>?", it's worse in every possible way. Minor correction also: you're not running a glorified shell script over SSH, you're invoking a module against a machine. It's a shell script in the same way "/bin/bash -c 'python script.py args'" is a shell script, I guess.

It's pretty much push-mode chef, except instead of writing a cookbook you feed the orchestration a list of objects serialized to yaml. It's a way better approach, because making assertions about a data structure is way easier than trying to infer meaning from arbitrary ruby/python/golang/whatever.
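
A minimal sketch of that "objects serialized to yaml" idea; the group, inventory, and package names are hypothetical:

code:
cat > web.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: nginx is installed     # an assertion about desired state,
      apt:                         # not a shell command to run
        name: nginx
        state: present
    - name: nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: true
EOF
ansible-playbook -i inventory.ini web.yml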

Votlook posted:

In my previous job I used immutable servers (without kubernetes, unfortunately), and while it takes more effort upfront, I loved that once I had a tested AMI, it was pretty much guaranteed to work in production.
Pushing updates with Ansible just feels so loving brittle in comparison.

It doesn't have to. It's totally possible to write really lovely ansible (it's just a task orchestration tool -- configuring a server is only one of the many tasks you might choose to orchestrate). It's also possible to write extremely robust ansible -- at my previous role, a pure ansible shop, we did immutable infrastructure except without the images, just by having really good habits.

I've jokingly referred to it as "deterministic infrastructure" in this very thread, in that we assume a fresh server, when fed the same input as another fresh server, ends up in an identical state. It's like rolling an AMI, except instead of baking an AMI you configure the server every time. Packer etc. can run ansible for you and save the result into a machine image, or you can just run ansible yourself; it's the same thing +/- a few minutes of bootstrap time.
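
A sketch of the Packer variant, using the amazon-ebs builder and the ansible provisioner; the region, source AMI, and playbook name are placeholders:

code:
cat > image.json <<'EOF'
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t3.micro",
    "ssh_username": "ubuntu",
    "ami_name": "web-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "web.yml"
  }]
}
EOF
packer build image.json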

It seems like the thing you're worried about here is that ansible can be run at any time? It's 100% on you guys to enforce some kind of procedure or policy there. Ansible modules are idempotent, but it's up to you to write playbooks that don't take down your application at random points throughout the day. There's no fundamental design choice in ansible that makes this any more dangerous than any other type of automation; it's totally possible to accidentally brick your kubernetes application in pretty much the same way.

You could even use the ansible helm module to brick your kubernetes application if you wanted.

12 rats tied together
Sep 7, 2006

freeasinbeer posted:

But I truly think that Ansible/puppet/chef are not the right tools to control your workloads. They are super fiddly at times, and I'd rather control everything up a level than worry about individual nodes.

I agree with you in theory but again, in practice, ansible can only help here. If you have a list of steps in a readme somewhere, that should be a playbook, even (especially) if those steps are kubectl apply, helm create, or whatever else.

There's a reasonable argument that ansible is not worth the extra complexity compared to a makefile with your scheduler orchestration commands in it, or even just having them in that readme, but pretending that ansible is a fundamentally different approach or a solution to a different problem entirely is kind of missing the mark a bit.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Ansible only works when you can easily ssh to stuff. I have been unable to set up even a scripted ssh configuration anywhere I've worked for the past 3 jobs, due to how horribly mismanaged and opaque systems can get while being "cloud". There's almost zero point doing any tooling unless you also have basics like log aggregation, monitoring, etc. in place. That's what has mostly kept me from doing much more with Ansible. Ansible in pull mode might as well be push-mode Chef/Puppet.

The number of shops I'm seeing deploy a Kubernetes cluster without having solved basic sysadmin needs is frightening.

Methanar
Sep 26, 2013

by the sex ghost

necrobobsledder posted:

Ansible only works when you can easily ssh to stuff. I have been unable to set up even a scripted ssh configuration anywhere I've worked for the past 3 jobs, due to how horribly mismanaged and opaque systems can get while being "cloud". There's almost zero point doing any tooling unless you also have basics like log aggregation, monitoring, etc. in place. That's what has mostly kept me from doing much more with Ansible. Ansible in pull mode might as well be push-mode Chef/Puppet.

The number of shops I'm seeing deploy a Kubernetes cluster without having solved basic sysadmin needs is frightening.

Hot take: deploying kubernetes (properly) and throwing everything else into the trash is easier than making an existing system better.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The entire reason things oftentimes can't be moved to Kubernetes is that the available resources are sucked up maintaining the existing system in the first place, and management is unwilling to put resources into anything that doesn't deliver shiny features to maintain or increase funds. If you could put it into a container, you probably would have done it by now. Otherwise, it's stateful abominations relying on weird crap that makes no sense left and right (I saw code that checked MAC address prefixes to determine a datacenter region, because that's what was set up in freakin' VMware, for example).

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Methanar posted:

Hot take: deploying kubernetes (properly) and throwing everything else into the trash is easier than making an existing system better.

Hot take: deploying kubernetes (properly) and maintaining deployment systems on top of it takes more work (and reaps fewer rewards) than keeping a mostly working existing system.

This side of the industry loves new toys, but gently caress me if kubernetes adoption for its own sake isn't the loving dumbest thing I've ever seen.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
We've used Kubernetes for years and haven't felt the need to automate anything with Ansible (or Chef, Puppet, etc). We use Jenkins to monitor our gitops repos, which contain the kubernetes manifest files; changes there in turn trigger Helm/Tiller re-deployments. It works very well for 95% of the apps we run. We use AWS RDS for databases and EBS for persistent storage (which Kubernetes supports).
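
The redeploy step in that kind of pipeline might look something like this (release and chart names are made up; GIT_COMMIT is the standard Jenkins variable):

code:
# --install makes the first deploy and later upgrades the same command
helm upgrade --install myapp ./charts/myapp \
  --set image.tag="${GIT_COMMIT}" \
  --wait   # block until the rollout is healthy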
