Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Spring Heeled Jack posted:

I’m using swarm as well and I think with replicas:2 and parallelism:2 it is killing both replicas at the same time. Try removing parallelism? When I do stack updates it shows in the console that it is only updating one replica at a time. Obviously this wouldn’t be an issue if you had like 8 replicas.

We have an upstream load balancer (F5) to direct traffic into our API manager running in the swarm, which in turn directs traffic to the APIs. We're still figuring some things out but it seems to work well enough for our needs and is pretty straightforward. This was my main reason for going with Swarm at this time: it was very quick to get up and running on-prem (and we're running Windows containers). Now that K8s support is supposedly better with 2019 I'm going to spend more time investigating what is involved with running it.
I tried swapping that, but it still ends up with a few seconds of downtime for half the connections as each service rolls over. The problem is that docker thinks the new task is live and starts directing traffic to it before it's actually ready, not that it's updating multiple replicas at a time. I don't know that there's a good way to solve that in the docker model: normally you fork and exit the parent to signal you're ready, but in docker that terminates the container.

You're not seeing the issue because your external load balancer (and, I'm assuming, the API manager) checks for liveness before directing traffic there.

I'm going to spend a day or so on kubernetes to compare since I'm not locked into anything yet.

E: context for the new page.

Harik fucked around with this message at 21:19 on Jan 16, 2019

Docjowles
Apr 9, 2009

In kubernetes you'd put a "readiness probe" on your pod, which is a check that has to pass before the system considers it ready to receive traffic and proceed with tearing down the next old pod in your deployment. It looks like Docker has built in health check functionality these days that might do that for you, too?

https://blog.newrelic.com/engineering/docker-health-check-instruction/
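
For reference, a readiness probe in a deployment spec looks something like this fragment (the image name, path, port, and timings here are made up for illustration):
code:
containers:
- name: api
  image: my-registry.example/api:latest
  readinessProbe:
    httpGet:
      path: /healthz        # only receives traffic once this returns 2xx
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10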

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Docjowles posted:

In kubernetes you'd put a "readiness probe" on your pod, which is a check that has to pass before the system considers it ready to receive traffic and proceed with tearing down the next old pod in your deployment. It looks like Docker has built in health check functionality these days that might do that for you, too?

https://blog.newrelic.com/engineering/docker-health-check-instruction/

That's exactly what I was looking for, I was looking in the service configuration and not the dockerfile.

Is k8s the way to go for greenfield stuff?

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Harik posted:

That's exactly what I was looking for, I was looking in the service configuration and not the dockerfile.

Is k8s the way to go for greenfield stuff?

It's the de facto industry choice at the moment. Swarm is barely a thing anymore.

However it’s super complicated and easy to mess up. Also I’d say stay away from using openshift or spinnaker.

Methanar
Sep 26, 2013

by the sex ghost

freeasinbeer posted:

It's the de facto industry choice at the moment. Swarm is barely a thing anymore.

However it’s super complicated and easy to mess up. Also I’d say stay away from using openshift or spinnaker.

What's wrong with openshift

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Methanar posted:

What's wrong with openshift

It has started to diverge too far from “stock” kubernetes for my tastes. It even has its own kubectl replacement.

Edit: and to steal from the IT thread: if I walk into a shop where everyone's wearing suits (running openshift), I figure I might not like working there.

freeasinbeer fucked around with this message at 02:35 on Jan 17, 2019

Docjowles
Apr 9, 2009

If you have like An Application to deploy and already have a working way of deploying it via Swarm or whatever, I would not bother with the overhead of kubernetes. k8s is cool, and good, and the way the industry is heading. But it's also a lot of hassle to run. Run your single app in whatever you like, Swarm or some cloud provider's managed container thing.

When you have a bunch of containerized services to run, start talking about kubernetes.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
OpenShift is to Kubernetes as Fedora is to the Linux kernel. It takes a tightly-focused software project and makes it usable by the masses. You can run vanilla Kubernetes by itself, but you'll need to add a bunch of stuff to turn it into a PaaS.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Docjowles posted:

If you have like An Application to deploy and already have a working way of deploying it via Swarm or whatever, I would not bother with the overhead of kubernetes. k8s is cool, and good, and the way the industry is heading. But it's also a lot of hassle to run. Run your single app in whatever you like, Swarm or some cloud provider's managed container thing.

When you have a bunch of containerized services to run, start talking about kubernetes.
I guess the question is "what is a bunch?", for both services and the systems they run on.

I've got dozens of sites and services that should probably be properly containerized; this is a pilot for them. I've been at this since the mid 90s so I'm usually behind the best practices, and I'm taking some time to catch up.

The long term impacts a lot of things. The dev/prod separation is cleaner on newer projects; the older ones don't even have a separate dev environment, so everything is live. Automated testing is all over the map. Tech is an eclectic mix of perl, php, python and probably some arcane poo poo I won't realize is running until I try to pack it up. Secret management was almost entirely "edit a config file" back then, and bespoke deployment is subject to bitrot and institutional memory loss. I've been packaging more and more, but doing the container deployments manually with docker run is not going to scale, especially when you have to manually place them and edit the proxies to redirect to a different endpoint.

Isolation is heavyweight pet VMs, and because of that "related" things end up on the same VM with Unix permissions to keep them separate. It also means resource allocation is a mess: both overcommitted and insufficient at the same time, because there's limited ability to borrow from another VM outside of CPU overcommit. Oh, and upgrades to keep the VM up to date may or may not break any number of services that all have to be tested.

Yes, I could just deploy the new one on docker swarm, but now I have N+1 bespoke deployments.

:words: I need to be able to really push this forward: merging VMs into better-resourced container nodes, and more centralized, standardized deployment and management.

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
honestly, just use k8s.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Swarm done been dead

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The problem with brownfield specifically, when it comes to Kubernetes, isn't even the K8S part but the containerization part. When I did a number of VMware-based migrations, not a lot of folks were rewriting old, crusty applications from the 90s to handle even a virtualized environment, and that's a big part of how VMware grew so well. OpenStack tried to carry this vision out further but hit a snag when the reality of it couldn't be put off any longer: applications that are unsuitable for cloud environments will need to be rewritten / rearchitected unless you have some pretty lax SLAs / SLOs (a number of really poorly running applications are held back because their DCs are run by clowns / technical-debt belt-tightening armies, and in those cases lift & shift can still get you some value).

If your application is properly containerized (meets enough 12-factor design constraints, which basically means "stop putting goddamn state on your boxes") then you should be able to pretty effectively swap your orchestration around between a bunch of providers. The problem is that a lot of people rely on external services that their cloud providers offer, like S3, SQS, BigQuery, etc., and creating an indirection layer for that isn't the same thing as "look up another internal REST service."

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Gyshall posted:

Swarm done been dead

uncurable mlady posted:

honestly, just use k8s.

The difference in complexity between Swarm and k8s is staggering though. A minikube is more complex than a production-ready Swarm stack.

Swarm literally takes a docker-compose.yml file and gives you a few basic but important features like zero-downtime upgrades and health checking almost for free, even before you get into scaling and distributed workloads. It's extremely straightforward, to the point that docker stack can be used from the start in place of the old docker-compose, and you can go through the entire documentation in an hour.
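
For example (a minimal sketch, assuming you already have a working docker-compose.yml; the stack name is made up):
code:
docker swarm init                                # turn this node into a single-node swarm
docker stack deploy -c docker-compose.yml myapp  # deploy the compose file as a stack
docker service ls                                # watch the replicas converge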

K8S is an entire friggin' ecosystem. Sure it can do all those things better, but if you're one dude deploying a handful of containers, learning about Ingresses and StatefulSets and ReplicationControllers and a bunch of other new PascalCased concepts along with a completely different YAML configuration file and a completely different CLI tool is a hell of an adoption cost.

It can't replace Swarm for the 'just give me some basic orchestration' use case. Maybe if someone came up with a simplified abstraction on top of it (the application layer to its transport layer), but it looks like every tool meant to be used on top of K8S is designed to add complexity and power, rather than reduce it.

NihilCredo fucked around with this message at 18:26 on Jan 17, 2019

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
"Just use K8s" is this year's "Install Linux, problem solved"

I'd sooner roll my own deployment tooling if I was at a small enough scale where Swarm made sense, though. At least that way I understand where the footguns are and I'm not at the mercy of someone else's opinionated software stack

The Fool
Oct 16, 2003


Does anyone have any opinions about DigitalOcean's managed K8s offering?

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

NihilCredo posted:

The difference in complexity between Swarm and k8s is staggering though. A minikube is more complex than a production-ready Swarm stack.

Swarm literally takes a docker-compose.yml file and gives you a few basic but important features like zero-downtime upgrades and health checking almost for free, even before you get into scaling and distributed workloads. It's extremely straightforward, to the point that docker stack can be used from the start in place of the old docker-compose, and you can go through the entire documentation in an hour.

K8S is an entire friggin' ecosystem. Sure it can do all those things better, but if you're one dude deploying a handful of containers, learning about Ingresses and StatefulSets and ReplicationControllers and a bunch of other new PascalCased concepts along with a completely different YAML configuration file and a completely different CLI tool is a hell of an adoption cost.

It can't replace Swarm for the 'just give me some basic orchestration' use case. Maybe if someone came up with a simplified abstraction on top of it (the application layer to its transport layer), but it looks like every tool meant to be used on top of K8S is designed to add complexity and power, rather than reduce it.

As someone who has been crash-coursing K8s for the past week, I agree with this 100%. Swarm has been so stupidly simple to set up and administer.

Vulture Culture posted:

"Just use K8s" is this year's "Install Linux, problem solved"

This, and I feel like every reply to complaints about the complexity around the 'net is 'git gud'.

Spring Heeled Jack fucked around with this message at 20:58 on Jan 17, 2019

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There are always comedy options like Nomad and Cloud Foundry, but from a usability standpoint Docker Swarm pretty much nailed it. As much as Nomad isn't really used much, getting up to speed on it probably takes less time than getting dropped into some random place's K8S cluster and fumbling with kubectl and whatever CI/CD system is set up for days.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
necrobobsledder nailed it: legacy apps assume an environment that containers may not provide, and making that work is absolutely the hard part. Fortunately, most of them don't have to scale, just be manageable. So dumb poo poo like local filesystem state storage can be solved by a volume - with only one instance, there's no problem with sharing.

Since packaging all that is going to be the hard part, I'll spend my time on that and load it into docker swarm. I'll play with k8s on my own time later, and if we keep growing beyond what swarm can manage, the hard part will already be finished.

Thanks for all the replies, especially the tip that healthcheck is a container option and not a service option; that fixed the downtime problem during cutover.
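
For anyone else who hits this: the check goes in the Dockerfile via the HEALTHCHECK instruction, something like the sketch below (the endpoint, port, and timings are made up, and it assumes curl exists in the image):
code:
# hypothetical health endpoint; adjust to whatever your app actually exposes
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:5000/health || exit 1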

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
you probably don’t need to know statefulsets and all the vagaries of the podspec to get started with k8s imo. like, no poo poo it does a lot of things, it’s an object db that someone built a container orchestration platform on. but most applications aren’t really that complicated, and learning how to decompose them into containers for k8s imo makes more sense than trying to use swarm at all.

that’s just my 0.02, ymmv.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

The Fool posted:

Does anyone have any opinions about DigitalOcean's managed K8s offering?

I had it up and running and doing some basic volume stuff inside an hour; it's a nice, simple, functioning offering. The only roadblock I hit was that they don't do calico/NetworkPolicy, so it's not for larger-team/real-company/multi-tenant environments yet, where "there should be firewall rules or something" is a requirement.
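
(By "firewall rules" I mean k8s NetworkPolicy objects, which need a CNI like calico to actually be enforced. A minimal one looks roughly like this; the names and labels are made up:)
code:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: api          # policy applies to the api pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only frontend pods may connect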

teen phone cutie
Jun 18, 2012

last year i rewrote something awful from scratch because i hate myself
Hi. I'm a docker newbie here. I have a simple Flask app that talks to a database.

I've successfully dockerized the app, and have my app running on port 5000 and my MySQL instance running on another container.

How do I go about getting this onto a Linux box? I literally cannot find a tutorial that shows me how to do it

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
There's a couple of options.

The easiest way is if you've got a docker registry running somewhere to store your images. Then you can do something like:
code:
docker build -t my-registry.com/blah/mycontainer:latest .
docker push my-registry.com/blah/mycontainer:latest
then on the Linux box pull it from the registry:
code:
docker pull my-registry.com/blah/mycontainer:latest
If you don't have a registry, then build the image and save it to a tar file, then copy the tar file over and load it:
code:
docker build -t my-registry.com/blah/mycontainer:latest .
docker save -o somefile.tar my-registry.com/blah/mycontainer:latest
scp somefile.tar your.linux.host:
then on the host:
code:
docker load -i somefile.tar
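Then, assuming your app listens on port 5000, you'd start it on the box with something like:
code:
docker run -d -p 5000:5000 my-registry.com/blah/mycontainer:latest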

teen phone cutie
Jun 18, 2012

last year i rewrote something awful from scratch because i hate myself

minato posted:

There's a couple of options.

Unfortunately, this is going way over my head. I know so very little about docker, and had to rely on a very hand-holdy tutorial to get a working docker-compose file.

Like.....do I still need nginx or apache? Or am I just running docker-compose on the server? And the request to the IP address will just work?

LochNessMonster
Feb 3, 2005

I need about three fitty


Grump posted:

Unfortunately, this is going way over my head. I know so very little about docker, and had to rely on a very hand-holdy tutorial to get a working docker-compose file.

Like.....do I still need nginx or apache? Or am I just running docker-compose on the server? And the request to the IP address will just work?

Your image is a template. You can push it to a registry (or export it to a tar file) and import it on any other computer/server in the world, and a container created from that template will run exactly the same on each machine.

Flask is your web/app server and is already installed in your container. If you start a container with that image and publish the port, it'll come up on port 5000 on that machine. If you want to run multiple containers that expose the same ports, you'll want to start diving deeper into docker (ingress and orchestration), but for now this should be enough.
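
A rough sketch of what that looks like in a compose file (the service names, image tag, and credentials here are made up):
code:
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"   # publish container port 5000 on the host
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example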

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Grump posted:

I know so very little about docker.

The key to learning any new thing is to first understand what the thing is. Saying "I don't know what docker is or what it's for or how it works, but I'm going to follow a tutorial to use it" is going to get you into exactly this situation: You're asking questions that you wouldn't be asking if you just read https://www.docker.com/resources/what-container

Hadlock
Nov 9, 2004

So we had that golden turning moment: the VP of engineering and CTO have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s, switch our dev/QA systems (well, half of QA was on k8s already) over to cloud/k8s in AWS, and ditch our third-rate bare metal hosting provider.

The problem is that they want to get rid of config completely: every "stack" gets its own namespace and uses hard-coded DNS/user/pass... and then in production, we're going to use a different method of supplying DNS and credentials.

This goes pretty much against the whole idea of "same container in dev, same in QA and prod"... The VP of engineering has no experience with ops and wants to simplify things to improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same DNS/cred mechanisms in dev as we do in prod.

Thoughts?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Hadlock posted:

So we had that golden turning moment: the VP of engineering and CTO have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s, switch our dev/QA systems (well, half of QA was on k8s already) over to cloud/k8s in AWS, and ditch our third-rate bare metal hosting provider.

The problem is that they want to get rid of config completely: every "stack" gets its own namespace and uses hard-coded DNS/user/pass... and then in production, we're going to use a different method of supplying DNS and credentials.

This goes pretty much against the whole idea of "same container in dev, same in QA and prod"... The VP of engineering has no experience with ops and wants to simplify things to improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same DNS/cred mechanisms in dev as we do in prod.

Thoughts?

"Where do you want to find out that something is hosed up? Where it is invisible to our users and has minimal cost to fix, or where it's highly visible and costs of money/credibility?"

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Hadlock posted:

So we had that golden turning moment: the VP of engineering and CTO have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s, switch our dev/QA systems (well, half of QA was on k8s already) over to cloud/k8s in AWS, and ditch our third-rate bare metal hosting provider.

The problem is that they want to get rid of config completely: every "stack" gets its own namespace and uses hard-coded DNS/user/pass... and then in production, we're going to use a different method of supplying DNS and credentials.

This goes pretty much against the whole idea of "same container in dev, same in QA and prod"... The VP of engineering has no experience with ops and wants to simplify things to improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same DNS/cred mechanisms in dev as we do in prod.

Thoughts?

quit

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Hadlock posted:

So we had that golden turning moment: the VP of engineering and CTO have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s, switch our dev/QA systems (well, half of QA was on k8s already) over to cloud/k8s in AWS, and ditch our third-rate bare metal hosting provider.

The problem is that they want to get rid of config completely: every "stack" gets its own namespace and uses hard-coded DNS/user/pass... and then in production, we're going to use a different method of supplying DNS and credentials.

This goes pretty much against the whole idea of "same container in dev, same in QA and prod"... The VP of engineering has no experience with ops and wants to simplify things to improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same DNS/cred mechanisms in dev as we do in prod.

Thoughts?

use terraform

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

So we had that golden turning moment: the VP of engineering and CTO have been playing with my prototypes for a couple of months and finally decided to convert the entire stack over to k8s, switch our dev/QA systems (well, half of QA was on k8s already) over to cloud/k8s in AWS, and ditch our third-rate bare metal hosting provider.

The problem is that they want to get rid of config completely: every "stack" gets its own namespace and uses hard-coded DNS/user/pass... and then in production, we're going to use a different method of supplying DNS and credentials.

This goes pretty much against the whole idea of "same container in dev, same in QA and prod"... The VP of engineering has no experience with ops and wants to simplify things to improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same DNS/cred mechanisms in dev as we do in prod.

Thoughts?
"Same container in dev, same in QA and prod" is a nice platitude, but probably isn't close to the top 20 tech problems your org is having or it would have more visibility at the VP level. As an engineer it's cool to have everything align with your vision, but this probably doesn't matter and isn't worth fighting anyone about as long as it dovetails into an otherwise half-sane build/release management process.

What's the concern your VP is trying to address by doing it this way?

Hadlock
Nov 9, 2004

Vulture Culture posted:

What's the concern your VP is trying to address by doing it this way?

I let a vault token expire in prod that caused the VP's pet non-critical microservice to flap for a couple of days

Not confidence inspiring but I've been building up the cloud infrastructure on an island by myself and something was bound to fail at some point

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Hadlock posted:

The VP of engineering has no experience with ops and wants to simplify things to improve deployment speed/reliability in dev, and I can't seem to convince him that we should use the same DNS/cred mechanisms in dev as we do in prod.

Thoughts?
For the same reasons I'm quitting my job (among others), there are bigger problems than the credentials mechanism that led you here. My summation: you were hired as an expert in an important area like operations, yet every major decision is being overridden for "business value," and the value delivered is consistently negative, mostly because of the decisions made against the SME's advice. The implied conclusion is that you don't need the SME and should just hire 2+ junior engineers to implement everything instead, because the leadership of the SME is of no importance.

I wound up in a situation where I'm implementing designs other people wanted (people who are no longer with the company) against my wishes, in the spirit of "let's see how well it works out!", and it is causing massive problems: I spend half my time auditing every line of a 4000+ line change on 3+ month release cycles (never, ever use git-flow for infrastructure repositories - learn from my lack of backbone, friends) or poking holes in security groups and routing tables constantly for no additional business value (the worst kind of technical debt ever: no business value gained short or long term, hurts productivity, customer-visible).

Even in the spirit of agreeability and positive thinking, you need the ability to reverse course on technical decisions, and you need some pretty clear desired results that you revisit - that feedback loop exists beyond 2 sprints in a company with vision, direction, and leadership. So I'd say the kind of decision that must be strongly opposed is any kind that can't be corrected or measured for effectiveness.

LochNessMonster
Feb 3, 2005

I need about three fitty


necrobobsledder posted:

(never, ever use git-flow for infrastructure repositories - learn from my lack of backbone, friends)

I'm using git flow for our (small-scale) infra repos and am wondering what kind of problems I'd be running into in the future and what alternatives there are. Care to elaborate on this?

Hadlock
Nov 9, 2004

LochNessMonster posted:

I'm using git flow for our (small-scale) infra repos and am wondering what kind of problems I'd be running into in the future and what alternatives there are. Care to elaborate on this?

Agree, we used git flow at our ~125-person company (50+ engineers)... We bought in so hard to git flow that even the documentation uses git flow. Works loving great.

You do need at least one release czar to keep things straight and uphold principles, otherwise it (or any release strategy, really) is going to go to poo poo fast

necrobobsledder posted:

My summation: you were hired as an expert in an important area like operations, yet every major decision is being overridden for "business value," and the value delivered is consistently negative, mostly because of the decisions made against the SME's advice. The implied conclusion is that you don't need the SME and should just hire 2+ junior engineers to implement everything instead, because the leadership of the SME is of no importance.

This is wise advice, need to think on this some more

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
honestly if you're getting away with a namespace per stack instead of a separate k8s cluster per stack, or worse per-tier-per-stack, that's a pretty big win. take it, get it done, then come back to config debates.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

LochNessMonster posted:

I'm using git flow for our (small-scale) infra repos and am wondering what kind of problems I'd be running into in the future and what alternatives there are. Care to elaborate on this?

What's the difference between the head of develop and master? Infra should only have a finalised state and unless you're doing infra smoke tests using develop it's just another place where things can drift and conflicts can arise.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

I've been building up the cloud infrastructure on an island by myself
This does actually sound like a serious problem worth addressing for a platform of any complexity

Cancelbot
Nov 22, 2006

Canceling spam since 1928

uncurable mlady posted:

honestly, just use k8s.

Glances nervously at the ECS pipeline we're 90% of the way to completing.

As soon as I can upgrade Octopus Deploy we can ride the k8s wave as well!

Hadlock
Nov 9, 2004

StabbinHobo posted:

honestly if you're getting away with a namespace per stack instead of a separate k8s cluster per stack, or worse per-tier-per-stack, that's a pretty big win. take it, get it done, then come back to config debates.

Something has gone horribly wrong with the implementation if you can only deploy one thing to one cluster :psyduck:

busfahrer
Feb 9, 2012

Ceterum censeo
Carthaginem
esse delendam
I'm trying to deploy the docker example voting app on minikube. Using ingress-nginx, I'm routing / to the voting service and /result/ to the result service, rewriting to /. When I access the result service via http://.../result, it can't find the stylesheets, for example, because they are linked at /stylesheets/ as opposed to ./stylesheets/. So this brings me to the general question: is it possible to let each container think it lives at / while still getting things to work, using nginx routing?
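
For reference, the ingress I'm describing looks roughly like this (service names and ports approximate):
code:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: voting-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # strip the matched prefix before proxying
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: vote
          servicePort: 80
      - path: /result
        backend:
          serviceName: result
          servicePort: 80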
