|
Docjowles posted:Zookeeper 3.5 seems to have been in prerelease since dinosaurs roamed the earth so I am not holding my breath for that one. It went alpha->beta a couple months ago
|
# ? Dec 13, 2018 20:49 |
|
What are y’all using for docker garbage collection? Just docker prune?
|
# ? Dec 14, 2018 03:47 |
|
jaegerx posted:What are y’all using for docker garbage collection? Just docker prune? Delete the nodes every so often.
|
# ? Dec 14, 2018 03:54 |
|
jaegerx posted:What are y’all using for docker garbage collection? Just docker prune? spotify/docker-gc
|
# ? Dec 14, 2018 05:33 |
|
StabbinHobo posted:this somewhat impossible recursive chasing of a way to abstract away a state assumption is, in large part, why kafka was invented.
|
# ? Dec 14, 2018 14:46 |
|
HACK = How Actual Code is Kept. Kafka still uses Zookeeper anyway though, I thought? At least that's how I remember Kafka being deployed when I was doing it.
|
# ? Dec 14, 2018 20:31 |
|
necrobobsledder posted:HACK = How Actual Code is Kept
|
# ? Dec 14, 2018 21:53 |
|
I have a database-backed webapp that's deployed to k8s using helm. QA is now a consumer of this product. Right now if you want to upgrade the webapp or db, we use helm upgrade myapp --set build=123 to upgrade the container version. However QA needs to reset the db periodically; our system can handle this if you delete the old PVC and let k8s auto provision a new one. What's the best way to handle this programmatically? Find the pvc that matches the helm deployment, then just kubectl delete pvc $myappDB ?
|
# ? Dec 15, 2018 02:11 |
|
Hadlock posted:I have a database-backed webapp that's deployed to k8s using helm Jesus, I know some of these words. Is kubernetes the holy grails of app deployment nowadays and I (a developer, no interest whatsoever in managing build and deployment infrastructures, but i have to do it sometimes) just gonna have to get familiar with all this crap?
|
# ? Dec 15, 2018 02:25 |
|
Download and install minikube or microk8s on your laptop; you can prototype all this stuff on your local env and 99% of it will transition to your cloud-based infra. I'm deving one environment in the cloud, and plan to have 6 set up by Wednesday, and then we'll be programmatically spinning up and down six environments per branch (our qa department is insane) via an api call and probably have 100 environments at any one time. PVC is persistent volume claim, i.e. an ebs volume, which is just a cloud hard drive. Helm is sort of the deployment wrapper for kubernetes; people call it a package manager.
|
# ? Dec 15, 2018 03:47 |
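For anyone following along, the helm upgrade --set pattern above works because the chart reads the value out of its values file. A minimal sketch, with the chart layout, registry, and key names all hypothetical:

```yaml
# values.yaml -- default build, overridden per deploy with:
#   helm upgrade myapp ./chart --set image.tag=123
image:
  repository: registry.example.com/myapp   # made-up registry
  tag: latest
```

The deployment template would then reference it as image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}".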
|
Volguus posted:Jesus, I know some of these words. Is kubernetes the holy grails of app deployment nowadays and I (a developer, no interest whatsoever in managing build and deployment infrastructures, but i have to do it sometimes) just gonna have to get familiar with all this crap? yes Hadlock posted:I have a database-backed webapp that's deployed to k8s using helm For QA purposes, why bother with PV/PVCs? Mount the DB directory from an emptyDir and it all goes away when you trash the pod. If you absolutely need to use PVCs, maybe put a label on that PVC and then you can delete it directly through the selector.
|
# ? Dec 15, 2018 03:48 |
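Both suggestions can be sketched in manifest form (names invented for illustration). Option 1 throws the data away with the pod; option 2 labels the PVC so it can be deleted with kubectl delete pvc -l app=myapp-db:

```yaml
# Option 1: emptyDir -- storage lives and dies with the pod
apiVersion: v1
kind: Pod
metadata:
  name: myapp-db
spec:
  containers:
    - name: postgres
      image: postgres:11
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      emptyDir: {}
---
# Option 2: a labeled PVC, deletable by selector
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-db
  labels:
    app: myapp-db
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```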
|
Our QA cycle is long and the selenium guys need a week or more developing tests and getting developers to write code that will pass the tests. Our QA are in three different continents and engineering in the same continents but different countries, so communication sucks and the dev cycle is slow, but it's cheaper than valley engineers #globalism. We have ephemeral QA environments already for smoke tests and db migration validation, and also the named pet systems; these are the Goldilocks environments. The selector sounds good. Here's hoping k8s isn't smart enough to prevent me from shooting my foot off by deleting the disk out from under a live db pod. Hadlock fucked around with this message at 10:42 on Dec 15, 2018 |
# ? Dec 15, 2018 10:39 |
|
Yeah the answer to most things in k8s is labels. And when it's not it's probably annotations.
|
# ? Dec 15, 2018 17:17 |
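Roughly: labels identify objects so selectors can match them, while annotations carry non-identifying metadata that tooling reads but selectors can't use. A hypothetical metadata block:

```yaml
metadata:
  name: myapp
  labels:            # selectable, e.g. kubectl get pods -l app=myapp,env=qa
    app: myapp
    env: qa
  annotations:       # informational only; made-up keys for illustration
    example.com/git-commit: "abc123"
    example.com/deployed-by: jenkins
```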
|
Hadlock posted:Our QA cycle is long and the selenium guys need a week or more developing tests and getting developers to write code that will pass the tests. Our QA are in three different continents and engineering in same continents but different countries so communication sucks and the dev cycle is slow, but it's cheaper than valley engineers #globalism I think you’d be able to get away with emptyDir more often than not, unless they are restarting the DB often. We use terraform talking natively to an external DB instance (that has all of the dev environments as separate internal DBs) to do this, and the workflow around terraform destroy is pretty simple to grasp.
|
# ? Dec 15, 2018 18:45 |
|
Cancelbot posted:We had one team use Spinnaker and it's clunky and very slow, and we are a .NET/Windows company which doesn't fit as nicely into some of the products or practices available. Fortunately something magical happened last week: our infrastructure team obliterated the Spinnaker server because it had a "QA" tag and deleted all the backups as well. So right now that one team is being ported into ECS as the first goal in our "move all poo poo to containers" strategy. So anecdotally there’s some things I worked on as part of the design for https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/ that should eventually make integration with spinnaker a snap. Spoilers!
|
# ? Dec 16, 2018 00:52 |
|
FamDav posted:So anecdotally there’s some things I worked on as part of the design for https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/ that should eventually make integration with spinnaker a snap. Spoilers! If only ECS had feature parity with k8s.
|
# ? Dec 16, 2018 01:53 |
|
I'm using ansible for the first time today, and wow it feels scary! I can't stop thinking that running glorified shell scripts over SSH is a terrible idea.
|
# ? Dec 18, 2018 13:50 |
|
Ansible is soooo much better than not having anything though. And Chef/Puppet are conceptually nicer but in my experience have more issues. With that said, the new hotness is kubernetes, which largely makes most of that tooling redundant.
|
# ? Dec 18, 2018 14:30 |
|
Time to start lobbying for Kubernetes then! Is it any good?
|
# ? Dec 18, 2018 16:12 |
|
Votlook posted:Time to start lobbying for Kubernetes then! Is it any good? Those tools aren't equivalent. Kubernetes is a container orchestration platform. Chef/Puppet/Ansible are for infrastructure configuration. Not every application can be containerized smoothly.
|
# ? Dec 18, 2018 17:05 |
|
They might not be exactly equivalent but they solve similar problem domains. I use cloud-init (kops) to get my worker nodes talking to the masters and then handle all my config management in kubernetes. If I need to patch I roll out a new worker AMI. Nomad and Mesos can run uncontainerized workloads if your stuff really can’t be dockerized. But I truly think that Ansible/puppet/chef are not the right tools to control your workloads. They are super fiddly at times and I’d rather control everything up a level than worrying about nodes. And if you are gonna build something that’s the equivalent in those platforms you should look at k8s/Mesos/Nomad. This presumes you are running software in Linux. If not then condolences/ignore me. freeasinbeer fucked around with this message at 18:23 on Dec 18, 2018 |
# ? Dec 18, 2018 18:19 |
|
Votlook posted:Time to start lobbying for Kubernetes then! Is it any good? devops.txt
|
# ? Dec 18, 2018 18:31 |
|
New Yorp New Yorp posted:Those tools aren't equivalent. Kubernetes is a container orchestration platform. Chef/Puppet/Ansible are for infrastructure configuration. Not every application can be containerized smoothly. Plus you have to configure your kubernetes nodes with something. Chef and Puppet are great in theory, and anywhere from okay to bad in practice. Ansible is much less horrible. Salt is really dope too, but I haven't used it for anything significant personally.
|
# ? Dec 18, 2018 18:32 |
|
Salt is best, it really shines when you actually want to orchestrate something.
|
# ? Dec 18, 2018 18:44 |
|
Salt is cool and good, and it bums me out that it's a very, very distant also-ran in terms of market share. I enjoyed working with it so much more than Chef. Votlook posted:Time to start lobbying for Kubernetes then! Is it any good? https://twitter.com/dril/status/473265809079693312
|
# ? Dec 18, 2018 18:48 |
|
it's all https://github.com/brandonhilkert/fucking_shell_scripts
|
# ? Dec 18, 2018 18:49 |
|
Mao Zedong Thot posted:Plus you have to configure your kubernetes nodes with something. The "immutable hosts" train of thought suggests that you configure it once on first boot with a tool like Ignition, and then you never touch it after that. If it's something like ContainerLinux then it'll auto-update itself with kernel upgrades. Any significant config change means nuking the cattle node and spinning up a new one. Which is totally fine if you've got a system like Kubernetes behind it to manage the rescheduling of workloads across nodes; not so much if you don't.
|
# ? Dec 18, 2018 19:03 |
|
minato posted:The "immutable hosts" train of thought suggests that you configure it once on first boot with a tool like Ignition, and then you never touch it after that. If it's something like ContainerLinux then it'll auto-update itself with kernel upgrades. Any significant config change means nuking the cattle node and spinning up a new one. Which is totally fine if you've got a system like Kubernetes behind it to manage the rescheduling of workloads across nodes; not so much if you don't. And now you have 2 problems
|
# ? Dec 18, 2018 20:13 |
|
It's very hard to buy into containers halfway and see any kind of benefit whatsoever
|
# ? Dec 18, 2018 20:47 |
|
minato posted:The "immutable hosts" train of thought suggests that you configure it once on first boot with a tool like Ignition, and then you never touch it after that. If it's something like ContainerLinux then it'll auto-update itself with kernel upgrades. Any significant config change means nuking the cattle node and spinning up a new one. Which is totally fine if you've got a system like Kubernetes behind it to manage the rescheduling of workloads across nodes; not so much if you don't. In my previous job I did use immutable servers (without kubernetes unfortunately), and while it takes more effort upfront I loved that when I had an AMI that was tested, it was pretty much ensured to work in production. Pushing updates with Ansible just feels so loving brittle in comparison
|
# ? Dec 18, 2018 21:04 |
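The configure-once-at-first-boot idea above is usually expressed as an Ignition config, nowadays typically written in Butane-style YAML and compiled to Ignition JSON. A purely illustrative fragment (field names follow the Fedora CoreOS Butane schema; the key and unit are placeholders):

```yaml
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 AAAA... ops@example.com"   # placeholder key
systemd:
  units:
    - name: kubelet.service    # assumes the image ships a kubelet unit
      enabled: true
```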
|
Vulture Culture posted:It's very hard to buy into containers halfway and see any kind of benefit whatsoever True, at my new job they embrace docker on the laptop, and banish docker on the server, it's strange. All our applications come with a fancy script that builds a docker-compose file for its dependencies, so I can run all the services I need for development on my laptop with very little effort. For our server however Ansible is used to muck around with jars and shellscripts. Oh and the versions used for services in the docker-compose files and in Ansible are totally not in sync.
|
# ? Dec 18, 2018 21:18 |
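The laptop half of that setup might look something like this generated compose file (services and versions invented for illustration):

```yaml
version: "3.7"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: dev    # local-only credential
    ports:
      - "5432:5432"
  redis:
    image: redis:5
    ports:
      - "6379:6379"
```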
|
Docker is cool because my team lead built a feature by writing a bunch of ruby scripts in ~ on some snowflake server, and I just grabbed them and put them into a repo next to a Dockerfile and some ECS json and now we're devops. Blinkz0rz posted:And now you have 2 problems If I could keep anything about our deployment setups down to 2 problems, I'd be ecstatic
|
# ? Dec 18, 2018 21:26 |
|
My deployment scheme is actually like 9 different processes depending on what you're pushing; it's really cool. One of the processes is something like Chef generates ansible manifests with erb templates and then ansible uses the chef inventory as a dynamic inventory source and then ansible executes shell scripts on remote boxes and sometimes messes with haproxy to drain backends. The cloud overflow stuff is a shell script in rc.local that runs on boot. Also there's a thing with openresty and some lua scripts dynamically rewriting urls to s3 based on the build supposed to be deployed to a certain environment/region/rack. Methanar fucked around with this message at 21:46 on Dec 18, 2018 |
# ? Dec 18, 2018 21:43 |
|
Votlook posted:I'm using ansible for the first time today, and wow it feels scary! I can't stop thinking that running glorified shell scripts over SSH is a terrible idea. If your level of involvement in the application is "how do I perform <some configuration task>?" it's worse in every possible way. Minor correction also, you're not running a glorified shell script over SSH, you're invoking a module against a machine. It's a shell script in the same way "/bin/bash -c "python script.py args" is a shell script I guess. It's pretty much push-mode chef except instead of writing a cookbook you feed the orchestration a list of objects serialized to yaml. It's a way better approach because making assertions about a data structure is way easier than trying to infer meaning from arbitrary ruby/python/golang/whatever. Votlook posted:In my previous job I did use immutable servers (without kubernetes unfortunately), and while it takes more effort upfront I loved that when I had an AMI that was tested, it was pretty much ensured to work in production. It doesn't have to. It's totally possible to write really lovely ansible (it's just a task orchestration tool -- configuring a server is only one of the many tasks you might choose to orchestrate). It's also possible to write extremely robust ansible -- we did immutable infrastructure except without the images by just having really good habits in a pure ansible shop at my previous role. I've jokingly referred to it as "deterministic infrastructure" in this very thread in that we assume a fresh server, when fed the same input as another fresh server, ends up in an identical state. It's like rolling an AMI except instead of baking an AMI you configure it every time. Packer/etc can run ansible for you and save the results into a machine image or you can just run ansible yourself, it's the same thing +/- a few minutes of bootstrap time. It seems like the thing you're worried about here is that ansible can be run at any time? 
This is 100% on you guys to enforce some kind of procedure or policy here. Ansible modules are idempotent, but it's up to you to write playbooks that don't take down your application at random throughout the day. There's no fundamental design choice in ansible that makes this any more dangerous than any other type of automation; it's totally possible to accidentally brick your kubernetes application in pretty much the same way. You could even use the ansible helm module to brick your kubernetes application if you wanted.
|
# ? Dec 18, 2018 23:11 |
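To make the "assertions about a data structure" point concrete: a playbook is just YAML describing desired state, and the idempotent modules converge each host toward it on every run. A minimal sketch (hosts group and package names are hypothetical):

```yaml
- hosts: webservers
  become: true
  tasks:
    - name: nginx is installed             # no-op if already present
      package:
        name: nginx
        state: present
    - name: nginx is running and enabled   # asserts state rather than blindly restarting
      service:
        name: nginx
        state: started
        enabled: true
```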
|
freeasinbeer posted:But I truly think that Ansible/puppet/chef are not the right tools to control your workloads. They are super fiddly at times and I’d rather control everything up a level then worrying about nodes. I agree with you in theory but again, in practice, ansible can only help here. If you have a list of steps in a readme somewhere, that should be a playbook, even (especially) if those steps are kubectl apply, helm create, or whatever else. There's a reasonable argument that ansible is not worth the extra complexity compared to a makefile with your scheduler orchestration commands in it, or even just having them in that readme, but pretending that ansible is a fundamentally different approach or a solution to a different problem entirely is kind of missing the mark a bit.
|
# ? Dec 18, 2018 23:26 |
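The readme-steps-as-playbook idea, sketched below. Release and chart names are hypothetical, build_number is assumed to be passed in with -e, and the command module just shells out the same way the readme would:

```yaml
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: apply the raw manifests
      command: kubectl apply -f manifests/
    - name: roll the release to a new build
      command: helm upgrade myapp ./chart --set build={{ build_number }}
```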
|
Ansible only works when you can easily ssh to stuff. I have been unable to setup even a scripted ssh configuration anywhere I’ve worked for the past 3 jobs due to how horribly mismanaged and opaque systems can get while being “cloud”. There’s almost zero point doing any tooling unless you have basics like log aggregation, monitoring, etc. also involved. That’s what has mostly kept me from doing much more with Ansible. Ansible in pull mode might as well be like push mode Chef / Puppet. The number of shops I’m seeing deploying a Kubernetes cluster without having solved basic sysadmin needs is frightening.
|
# ? Dec 19, 2018 01:02 |
|
necrobobsledder posted:Ansible only works when you can easily ssh to stuff. I have been unable to setup even a scripted ssh configuration anywhere I’ve worked for the past 3 jobs due to how horribly mismanaged and opaque systems can get while being “cloud” There’s almost zero point doing any tooling unless you have basics like log aggregation, monitoring, etc. also involved. That’s what has mostly kept me from doing much more with Ansible. Ansible in pull mode might as well be like push mode Chef / Puppet Hot take: deploying kubernetes (properly) and throwing everything else into the trash is easier than making an existing system better.
|
# ? Dec 19, 2018 01:06 |
|
The entire reason things oftentimes can’t be moved to Kubernetes is that the available resources are sucked up maintaining an existing system in the first place and management is unwilling to put resources into anything that doesn’t deliver shiny features to maintain or increase funds. If you could put it into a container, you probably would have done it by now. Otherwise, it’s stateful abominations relying on weird crap that makes no sense (saw something in code that checked for MAC address prefixes to determine a datacenter region because it’s what was setup in freakin’ VMware, for example) left and right.
|
# ? Dec 19, 2018 01:25 |
|
Methanar posted:Hot take: deploying kubernetes (properly) and throwing everything else into the trash is easier than making an existing system better. Hot take: deploying kubernetes (properly) and maintaining deployment systems on top of it takes more work (and reaps fewer rewards) than a mostly working existing system. This side of the industry loves new toys but gently caress me if kubernetes adoption for its own sake is the loving dumbest thing I've ever seen.
|
# ? Dec 19, 2018 01:41 |
|
We've used Kubernetes for years and haven't felt the need to automate anything with Ansible (or Chef, Puppet, etc). We use Jenkins to monitor our gitops repos that contain the kubernetes manifest files, which in turn triggers Helm/Tiller re-deployments. It works very well for 95% of the apps we run. We use AWS RDS for databases and EBS for persistent storage (which Kubernetes supports).
|
# ? Dec 19, 2018 01:52 |