|
EAT THE EGGS RICOLA posted:Docker has had a few crazy bugs recently that would make it insane to use in production, hasn't it? Yeah, I was working at a govt department with a *lot* of Django apps and we tried to set up a hg -> jenkins -> docker -> test -> deploy kind of chain, but Docker was just too fragile to really be worth it. It's awesome in theory, but in practice it's just not solid enough to do what we wanted it to, which ultimately was to build an in-house Heroku type setup so the various coders in the sub-departments could deploy their poo poo without being given the keys to the castle on our various servers. Plus, as far as security goes, it's actually *less* secure than a chroot jail. There's also that CoreOS thing that you're supposed to deploy it onto, but we found etcd to be completely flaky.
|
# ¿ Feb 1, 2015 09:20 |
|
NovemberMike posted:What about Saltstack? I've been playing around with it and it seems nice, anyone have real opinions? We ran a very large government department with it. Science clusters, Windows servers, various Linux boxes, virtual hosts and servers, the lot. It's very nice. Like all of these things, there's a bit of a learning curve, but honestly I found it much easier than Puppet.
|
# ¿ Mar 10, 2015 01:34 |
|
So recently I started a new job that partially involves inheriting a giant Kubernetes cluster on DigitalOcean. I've never used Kubernetes, so it's all a massive learning curve. This morning I got into the office and realised the entire cluster was down, with all the pods in "Pending" mode (including about a bazillion cronjob containers that seemed to be piling up). It would seem at some point in the night, for reasons I'm completely unsure of, the whole drat thing was reset, causing it to reissue a whole bunch of nodes which were in an unlabelled state. So after labelling them, it all came back up, although I had to delete the node spec for the cronjobs because there were literally hundreds of the bloody things trying to be created. That was followed by a slow recycling of nodes to get the drat things to exit the "Terminating" state. Massive and disruptive pain in the arse. Is there a way to tell Kubernetes how to label nodes after a rebuild? Because this *sucks*
|
# ¿ Nov 22, 2021 07:31 |
|
my homie dhall posted:The process you should be looking at is kubelet. Looks like you can modify the kubelet config to have the kubelet come up with whatever node labels you want. That's not a thing you can do with DigitalOcean's managed Kubernetes system.
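For reference, on a self-managed cluster where you *can* touch the kubelet, the advice above does work: labels passed at startup get re-applied every time the node registers. A sketch, with invented label values (and, as noted, not an option on DOKS, where the platform owns the kubelet flags):

```shell
# Self-managed / kubeadm-style node only; label values here are made up.
# Persist the labels in a systemd drop-in so they survive node rebuilds:
#   /etc/systemd/system/kubelet.service.d/20-labels.conf
#     [Service]
#     Environment="KUBELET_EXTRA_ARGS=--node-labels=env=prod,pool=web"
#
# Then reload and restart so the node re-registers with those labels:
#   systemctl daemon-reload && systemctl restart kubelet
#   kubectl get node <node-name> --show-labels
```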
|
# ¿ Nov 22, 2021 08:53 |
|
necrobobsledder posted:According to this Github issue there's a CLI tool you can use to set the node labels https://github.com/digitalocean/DOKS/issues/3 Yeah, I ended up figuring out that DigitalOcean's CLI can do it too: doctl kubernetes cluster node-pool update <cluster name> <nodepool name> --label <key>=<value>. I probably have to learn how taints and affinities and poo poo work. ("Hey love, guess what I did at work today? I put labels on taints! Do you have an affinity for taint? Better label that too! I'll go drain the pool.").
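In case it helps the next person who inherits one of these: a node-pool label lands on every node in the pool, and the scheduling pieces hang off it roughly like this. Node and label names below are invented for illustration, and these need a live cluster to run against:

```shell
kubectl get nodes --show-labels                          # confirm the pool label stuck
kubectl taint nodes worker-1 dedicated=batch:NoSchedule  # repel pods lacking a matching toleration
kubectl label nodes worker-1 dedicated=batch             # pair the taint with a label for nodeSelector/affinity
kubectl drain worker-1 --ignore-daemonsets               # evict pods before recycling a node
```

Roughly: taints keep the wrong pods off a node, while labels plus a nodeSelector or node affinity pull the right pods onto it; most "dedicated pool" setups use both together.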
|
# ¿ Nov 24, 2021 06:32 |
|
Woof Blitzer posted:What kind of sick individual invented YAML anyways Yet Another Maladjusted Lout.
|
# ¿ Jun 17, 2022 01:25 |
|
|
This is fun. Deploy script that uses an IMAGEVERSION var in .deploy to drive a few things in the k8s code:
Next step is to get some CI in on it (probably Jenkins) to run the tests and make sure we're not pushing hot garbage. I think. If the boss will let me.
|
# ¿ Jun 17, 2022 01:35 |