|
Spring Heeled Jack posted:I’m using swarm as well and I think with replicas:2 and parallelism:2 it is killing both replicas at the same time. Try removing parallelism? When I do stack updates it shows in the console that it is only updating one replica at a time. Obviously this wouldn’t be an issue if you had like 8 replicas. You're not seeing the issue because your external load balancer and (I'm assuming) the API manager both check for life before directing traffic there. I'm going to spend a day or so on kubernetes to compare since I'm not locked into anything yet. E: context for the new page. Harik fucked around with this message at 21:19 on Jan 16, 2019 |
# ? Jan 16, 2019 21:15 |
|
|
|
In kubernetes you'd put a "readiness probe" on your pod, which is a check that has to pass before the system considers it ready to receive traffic and proceed with tearing down the next old pod in your deployment. It looks like Docker has built in health check functionality these days that might do that for you, too? https://blog.newrelic.com/engineering/docker-health-check-instruction/
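For reference, a minimal sketch of what that readiness probe looks like in a pod spec — the image name, port, and /healthz endpoint here are placeholders, not anything from the thread:

```yaml
# Hypothetical pod with a readiness probe; traffic is withheld
# until the probe passes, which is what gates the rolling update.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: myapp:latest        # placeholder image
    ports:
    - containerPort: 5000
    readinessProbe:
      httpGet:
        path: /healthz         # placeholder health endpoint
        port: 5000
      initialDelaySeconds: 5
      periodSeconds: 10
```

Until the probe succeeds, the pod stays out of the Service's endpoints, so the deployment won't proceed to tearing down the next old pod.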
|
# ? Jan 16, 2019 21:26 |
|
Docjowles posted:In kubernetes you'd put a "readiness probe" on your pod, which is a check that has to pass before the system considers it ready to receive traffic and proceed with tearing down the next old pod in your deployment. It looks like Docker has built in health check functionality these days that might do that for you, too? That's exactly what I was looking for, I was looking in the service configuration and not the dockerfile. Is k8s the way to go for greenfield stuff?
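A sketch of what that looks like on the Dockerfile side, assuming the app exposes some HTTP health endpoint (the path, port, and files are made up, and curl has to exist in the image for the check to work):

```dockerfile
# Hypothetical Dockerfile HEALTHCHECK; Swarm waits for the container
# to report healthy before shifting traffic to it during an update.
FROM python:3.7-slim
COPY app /app
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:5000/healthz || exit 1
CMD ["python", "/app/server.py"]
```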
|
# ? Jan 17, 2019 01:18 |
|
Harik posted:That's exactly what I was looking for, I was looking in the service configuration and not the dockerfile. It’s the de facto industry choice at the moment. Swarm is barely a thing anymore. However it’s super complicated and easy to mess up. Also I’d say stay away from using openshift or spinnaker.
|
# ? Jan 17, 2019 01:36 |
|
freeasinbeer posted:It’s the de facto industry choice at the moment. Swarm is barely a thing anymore. What's wrong with openshift?
|
# ? Jan 17, 2019 02:09 |
|
Methanar posted:What's wrong with openshift It has started to diverge too far from “stock” kubernetes for my tastes. It even has its own kubectl replacement. Edit: and to steal from the IT thread: if I walk into a shop wearing suits (running openshift) then I figure I might not like working there. freeasinbeer fucked around with this message at 02:35 on Jan 17, 2019 |
# ? Jan 17, 2019 02:31 |
|
If you have like An Application to deploy and already have a working way of deploying it via Swarm or whatever, I would not bother with the overhead of kubernetes. k8s is cool, and good, and the way the industry is heading. But it's also a lot of hassle to run. Run your single app in whatever you like, Swarm or some cloud provider's managed container thing. When you have a bunch of containerized services to run, start talking about kubernetes.
|
# ? Jan 17, 2019 03:24 |
|
OpenShift is to Kubernetes as Fedora is to the Linux kernel. It takes a tightly-focused software project and makes it usable by the masses. You can run vanilla Kubernetes by itself, but you'll need to add a bunch of stuff to turn it into a PaaS.
|
# ? Jan 17, 2019 03:25 |
|
Docjowles posted:If you have like An Application to deploy and already have a working way of deploying it via Swarm or whatever, I would not bother with the overhead of kubernetes. k8s is cool, and good, and the way the industry is heading. But it's also a lot of hassle to run. Run your single app in whatever you like, Swarm or some cloud provider's managed container thing. I've got dozens of sites or services that should probably be properly containerized that this is a pilot for. I've been at this since the mid 90s so I'm usually behind on best practices, and I'm taking some time to catch up. The long history impacts a lot of things: dev/prod separation is cleaner on newer projects, but the older ones don't even have the capability of a separate dev environment, so everything is live. Automated testing is all over the map. Tech is an eclectic mix of perl, php, python and probably some arcane poo poo I won't realize is running until I try to pack it up. Secret management was almost entirely "edit a config file" back then, and bespoke deployment is subject to bitrot and institutional memory loss. I've been packaging more and more, but doing the container deployments manually with docker run is not going to scale, especially when you have to manually place them and edit the proxies to redirect to a different endpoint. Isolation is heavy pet VMs, and because of that "related" things end up on the same VM with unix permissions to keep them separate. It also means resource allocation is a mess, being both overcommitted and insufficient at the same time because there's limited ability to borrow from another VM outside of CPU overcommit. Oh, and upgrades to keep the VM up-to-date may or may not break any number of services that all have to be tested. Yes, I could just deploy the new one on docker swarm, but now I have N+1 bespoke deployments.
I need to be able to really push this forward, merging VMs into better-resourced container nodes and more central and standard deployment and management.
|
# ? Jan 17, 2019 11:07 |
|
honestly, just use k8s.
|
# ? Jan 17, 2019 16:32 |
|
Swarm done been dead
|
# ? Jan 17, 2019 17:44 |
|
The problem with brownfield specifically when it comes to Kubernetes isn't even the K8S part but the containerization part. When I did a number of VMware based migrations, not a lot of folks were rewriting old, crusty applications from the 90s to handle even a virtualized environment and that's a big part of how they grew pretty well. OpenStack tried to carry this vision out further but hit a snag when the reality of it all couldn't be delayed much more - applications that are unsuitable for cloud environments will need to be rewritten / rearchitected unless you have some pretty lax SLAs / SLOs (a number of really poorly running applications are held back because their DCs are run by clowns / technical debt belt-tightening armies and lift & shift can get some value then). If your application is properly containerized (meets enough 12F design constraints that basically means "stop putting goddamn state on your boxes") then you should be able to pretty effectively swap your orchestration around between a bunch of providers. Problem is that a lot of people rely upon external services that their cloud providers offer like S3, SQS, Big Query, etc. and creating an indirection layer for that isn't the same thing as "look up another internal REST service."
|
# ? Jan 17, 2019 17:52 |
|
Gyshall posted:Swarm done been dead uncurable mlady posted:honestly, just use k8s. The difference in complexity between Swarm and k8s is staggering though. A minikube is more complex than a production-ready Swarm stack. Swarm literally takes a docker-compose.yml file and gives you a few basic but important features like zero-downtime upgrades and health checking almost for free, even before you get into scaling and distributed workloads. It's extremely straightforward, to the point that docker stack can be used from the start in place of the old docker-compose, and you can go through the entire documentation in an hour. K8S is an entire friggin' ecosystem. Sure it can do all those things better, but if you're one dude deploying a handful of containers, learning about Ingresses and StatefulSets and ReplicationControllers and a bunch of other new PascalCased concepts along with a completely different YAML configuration file and a completely different CLI tool is a hell of an adoption cost. It can't replace Swarm for the 'just give me some basic orchestration' use case. Maybe if someone came up with a simplified abstraction on top of it, the application layer to its transport layer, but it looks like every tool meant to be used on top of K8S is designed to add complexity and power, rather than reduce it. NihilCredo fucked around with this message at 18:26 on Jan 17, 2019 |
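As a rough illustration of that "almost for free" claim, this is approximately all the Swarm config being described — a hypothetical stack file with replicas, a health check, and rolling updates (image name and endpoint are invented):

```yaml
# Hypothetical docker-compose.yml, deployable as-is with
# `docker stack deploy -c docker-compose.yml mystack`.
version: "3.7"
services:
  web:
    image: myapp:latest          # placeholder image
    ports:
      - "80:5000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        parallelism: 1           # replace one replica at a time
        delay: 10s
        order: start-first       # start the new task before stopping the old
```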
# ? Jan 17, 2019 18:15 |
|
"Just use K8s" is this year's "Install Linux, problem solved." I'd sooner roll my own deployment something if I was at a small enough scale where Swarm made sense, though. At least that way I understand where the footguns are and I'm not at the mercy of someone else's opinionated software stack.
|
# ? Jan 17, 2019 19:59 |
|
Does anyone have any opinions about Digital Ocean's managed K8s offering?
|
# ? Jan 17, 2019 20:15 |
|
NihilCredo posted:The difference in complexity between Swarm and k8s is staggering though. A minikube is more complex than a production-ready Swarm stack. As someone who has been crash coursing into K8s the past week, I agree with this 100%. Swarm has been so stupidly simple to setup and administer. Vulture Culture posted:"Just use K8s" is this year's "Install Linux, problem solved" This and I feel like every reply to complaining about the complexity around the 'net is 'git gud'. Spring Heeled Jack fucked around with this message at 20:58 on Jan 17, 2019 |
# ? Jan 17, 2019 20:54 |
|
There's always comedy options like Nomad and Cloud Foundry, but from a usability standpoint Docker Swarm pretty much nailed it. As much as Nomad isn't really used much, getting up to speed on it probably takes less time than getting dropped into some random place's K8S cluster and fumbling with kubectl and whatever CI/CD system is set up for days.
|
# ? Jan 17, 2019 21:44 |
|
necrobobsledder nailed it when he said legacy apps assume an environment that containers may not provide and making that work is absolutely the hard part. Fortunately, most of them don't have to scale, just be manageable. So dumb poo poo like local filesystem state storage can be solved by a volume - only one instance, no problem with sharing. Since packaging all that is going to be the hard part, I'll spend my time on that and load it into docker swarm. I'll play with k8s on my own time later, and if we keep growing beyond what swarm can manage the hard part is already finished. Thanks for all the replies, especially the tip that healthcheck is a container option and not a service option, that fixed the downtime problem during cutover.
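A sketch of that volume approach for a hypothetical legacy service — single instance, local filesystem state pinned to a named volume (service name and path are made up):

```yaml
# Sketch: a legacy app that writes state to disk keeps working
# because its state dir is a named volume, and with replicas: 1
# there's never more than one writer.
version: "3.7"
services:
  legacyapp:
    image: legacyapp:latest       # placeholder image
    deploy:
      replicas: 1                 # one instance, no sharing problem
    volumes:
      - appstate:/var/lib/legacyapp
volumes:
  appstate:
```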
|
# ? Jan 17, 2019 22:05 |
|
you probably don’t need to know statefulsets and all the vagaries of the podspec to get started with k8s imo. like, no poo poo it does a lot of things, it’s an object db that someone built a container orchestration platform on. but most applications aren’t really that complicated, and learning how to decompose them into containers for k8s imo makes more sense than trying to use swarm at all. that’s just my 0.02, ymmv.
|
# ? Jan 18, 2019 17:18 |
|
The Fool posted:Does anyone have any opinions about Digital Ocean's managed K8s offering? I had it up and running and doing some basic volume stuff inside an hour, it's a nice, simple, functioning offering. Only roadblock I hit was they don't do calico/networkpolicy, so it's not for larger-team/real-company/multi-tenant environments yet where "there should be firewall rules or something" is a requirement.
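For context, this is the sort of NetworkPolicy that silently does nothing without Calico or another enforcing CNI — the labels and names here are invented for illustration:

```yaml
# Only pods labeled app=frontend may reach the backend pods.
# Without an enforcing network plugin, this object is accepted
# by the API server but has no effect.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```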
|
# ? Jan 18, 2019 17:45 |
|
Hi. I'm a docker newbie here. I have a simple Flask app that talks to a database. I've successfully dockerized the app, and have my app running on port 5000 and my MySQL instance running on another container. How do I go about getting this onto a Linux box? I literally cannot find a tutorial that shows me how to do it.
|
# ? Jan 20, 2019 09:05 |
|
There's a couple of options. The easiest is if you've got a docker registry running somewhere to store your images: build and tag the image on your dev box, push it to the registry, then pull and run it on the server. If you don't have a registry, you can export the image to a tarball with docker save and load it on the server with docker load.
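A sketch of both approaches, with made-up registry, server, and image names — option 1 goes through a registry, option 2 ships the image over ssh without one:

```shell
# Option 1: push to a registry, pull on the server
# (registry.example.com and myserver are hypothetical)
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
ssh myserver docker run -d -p 5000:5000 registry.example.com/myapp:1.0

# Option 2: no registry - export the image as a tarball and load it remotely
docker save myapp:1.0 | ssh myserver docker load
ssh myserver docker run -d -p 5000:5000 myapp:1.0
```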
|
# ? Jan 20, 2019 09:15 |
|
minato posted:There's a couple of options. Unfortunately, this is going way over my head. I know so very little about docker. And had to rely on a very hand-holdy tutorial to get a working docker-compose file Like... do I still need nginx or apache? Or am I just running docker-compose on the server? And the request to the IP address will just work?
|
# ? Jan 20, 2019 09:19 |
|
Grump posted:Unfortunately, this is going way over my head. I know so very little about docker. And had to rely on a very hand-holdy tutorial to get my a working docker-compose file Your image is a template. You can export it to a registry (or tar file) and import it to any other computer/server in the world and a container created from that template will run exactly the same on each machine. Flask is your web/app server and is already installed in your container. If you start a container with that image it’ll come up on port 5000 on that machine. If you use multiple containers with the same ports exposed you want to start diving deeper into docker (ingress and orchestration) but for now this should be enough.
|
# ? Jan 20, 2019 09:53 |
|
Grump posted:I know so very little about docker. The key to learning any new thing is to first understand what the thing is. Saying "I don't know what docker is or what it's for or how it works, but I'm going to follow a tutorial to use it" is going to get you into exactly this situation: You're asking questions that you wouldn't be asking if you just read https://www.docker.com/resources/what-container
|
# ? Jan 20, 2019 16:26 |
|
So we had that golden turning moment when the vp of engineering and cto have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s and switch our dev/qa systems (well, half of qa was on k8s already) over to k8s on aws, and ditch our third rate bare metal hosting provider. The problem is that they want to get rid of config completely: every "stack" gets its own namespace and uses hard-coded dns/user/pass... And then in production, we're going to use a different method of supplying dns and credentials. This goes pretty much against the whole idea of "same container in dev, same in qa and prod"... Vp of engineering has no experience with ops and wants to simplify things and improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same dns/cred mechanisms in dev as we do in prod. Thoughts?
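One sketch of the counter-proposal being argued for — the same image in every environment, with dns/creds injected from a per-namespace Secret so only the Secret's contents differ between dev and prod (all names here are invented):

```yaml
# Hypothetical Deployment: the config *mechanism* is identical in
# dev/qa/prod; each namespace just carries a different api-credentials
# Secret. No hard-coded values baked into the stack.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: api:1.0            # same image everywhere
        env:
        - name: DB_HOST
          valueFrom:
            secretKeyRef:
              name: api-credentials
              key: db-host
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: api-credentials
              key: db-password
```

In dev the Secret can hold throwaway values; in prod it's populated from the real credential source. The container never knows the difference.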
|
# ? Jan 21, 2019 01:05 |
|
Hadlock posted:So we had that golden turning moment when the vp of engineering and cto have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s and switch our dev/qa systems (well, half of qa was on k8s already) over to k8s on aws, and ditch our third rate bare metal hosting provider. "Where do you want to find out that something is hosed up? Where it is invisible to our users and has minimal cost to fix, or where it's highly visible and costs us money/credibility?"
|
# ? Jan 21, 2019 01:26 |
|
Hadlock posted:So we had that golden turning moment when the vp of engineering and cto have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s and switch our dev/qa systems (well, half of qa was on k8s already) over to k8s on aws, and ditch our third rate bare metal hosting provider. quit
|
# ? Jan 21, 2019 02:50 |
|
Hadlock posted:So we had that golden turning moment when the vp of engineering and cto have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s and switch our dev/qa systems (well, half of qa was on k8s already) over to k8s on aws, and ditch our third rate bare metal hosting provider. use terraform
|
# ? Jan 21, 2019 02:59 |
|
Hadlock posted:So we had that golden turning moment when the vp of engineering and cto have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s and switch our dev/qa systems (well, half of qa was on k8s already) over to k8s on aws, and ditch our third rate bare metal hosting provider. What's the concern your VP is trying to address by doing it this way?
|
# ? Jan 21, 2019 03:01 |
|
Vulture Culture posted:What's the concern your VP is trying to address by doing it this way? I let a vault token expire in prod that caused the VP's pet non-critical microservice to flap for a couple of days Not confidence inspiring but I've been building up the cloud infrastructure on an island by myself and something was bound to fail at some point
|
# ? Jan 21, 2019 03:21 |
|
Hadlock posted:Vp of engineering has no experience with ops and wants to simplify things and improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same dns/cred mechanisms in dev as we do in prod. I wound up in a situation where I'm implementing designs other people wanted (people who are no longer with the company) against my wishes, out of the spirit of "let's see how well it works out!", and it is causing massive problems where I spend half my time auditing every line of a 4000+ line change on 3+ month release cycles (never, ever use git-flow for infrastructure repositories - learn from my lack of backbone, friends) or having to poke holes in security groups and routing tables constantly for no additional business value (the worst kind of technical debt ever - no business value gained short or long term, hurts productivity, customer visible). Even in the spirit of agreeability and positive thinking you need to have the ability to reverse course on technical decisions, and you need to have some pretty clear desired results and revisit them - that feedback loop exists beyond 2 sprints in a company with vision, direction, and leadership. So I say the kind of decision that must be strongly opposed is any kind that can't be corrected or measured in effectiveness.
|
# ? Jan 21, 2019 04:13 |
|
necrobobsledder posted:(never, ever use git-flow for infrastructure repositories - learn from my lack of backbone, friends) I’m using git flow for our (small scale) infra repos and am wondering what kind of problems I’d be running into in the future and what alternatives there are. Care to elaborate on this?
|
# ? Jan 21, 2019 07:09 |
|
LochNessMonster posted:I’m using git flow for our (small scale) infra repos and am wondering what kind of problems I’d be running into in the future and what alternatives there are. Care to elaborate on this? Agree, we used git flow at our ~125 person company (50+ engineers)... We bought in so hard to git flow, the documentation is even using git flow. Works loving great. You do need at least one release czar to keep things straight and uphold principles, otherwise it (or any release strategy, really) is going to go to poo poo fast necrobobsledder posted:My summation is you were hired as someone that is an expert in an important area like operations and yet every major decision is being overridden for "business value" and yet the value delivered is consistently negative mostly because of the decisions made against the SME's advice. Therefore, you don't need the SME and should just hire 2+ junior engineers to implement everything instead because the leadership of the SME is of no importance. This is wise advice, need to think on this some more
|
# ? Jan 21, 2019 08:06 |
|
honestly if you're getting away with a namespace per stack instead of a separate k8s cluster per stack, or worse per-tier-per-stack, that's a pretty big win. take it, get it done, then come back to config debates.
|
# ? Jan 21, 2019 16:13 |
|
LochNessMonster posted:I’m using git flow for our (small scale) infra repos and am wondering what kind of problems I’d be running into in the future and what alternatives there are. Care to elaborate on this? What's the difference between the head of develop and master? Infra should only have a finalised state, and unless you're doing infra smoke tests using develop, it's just another place where things can drift and conflicts can arise.
|
# ? Jan 21, 2019 18:14 |
|
Hadlock posted:I've been building up the cloud infrastructure on an island by myself
|
# ? Jan 21, 2019 18:31 |
|
uncurable mlady posted:honestly, just use k8s. Glances nervously at ECS pipeline we're 90% close to completing. As soon as I can upgrade Octopus deploy we can ride the k8 wave as well!
|
# ? Jan 21, 2019 19:40 |
|
StabbinHobo posted:honestly if you're getting away with a namespace per stack instead of a separate k8s cluster per stack, or worse per-tier-per-stack, thats a pretty big win. take it, get it done, then come back to config debates. Something has gone horribly wrong with the implementation if you can only deploy one thing to one cluster
|
# ? Jan 21, 2019 22:40 |
|
|
|
I'm trying to deploy the docker example voting app on minikube. Using ingress-nginx, I'm routing / to the voting service and /result/ to the result service, rewriting to /. When I access the result service via http://.../result, it can't find the stylesheets, for example, because they are linked at /stylesheets/ as opposed to ./stylesheets/. So this brings me to the general question: is it possible to let each container think it lives at / while still getting things to work, using nginx routing?
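A sketch of the routing being described, using the 2019-era ingress-nginx annotation (service names follow the example voting app; the exact manifests may differ):

```yaml
# The rewrite-target annotation strips the matched prefix on the way
# in, so /result reaches the result app as /. But the HTML that comes
# back still links assets at the absolute path /stylesheets/..., and
# those requests match the / rule and hit the vote backend instead.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voting
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /result
        backend:
          serviceName: result
          servicePort: 80
      - path: /
        backend:
          serviceName: vote
          servicePort: 80
```

The rewrite only fixes request paths, not the absolute URLs the app emits, so the usual fixes are teaching the app about its subpath (so it emits relative or prefixed links) or rewriting the response HTML at the proxy.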
|
# ? Jan 22, 2019 17:01 |