Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
ISTG docker is going to make me jump off a building

"Hey we killed your entire stack with nothing in the logs, no OOM nah we just felt like it, cool right?"

code:
Nov 17 12:15:23 brain dockerd[678]: time="2020-11-17T12:15:23.163656004-05:00" level=info msg="NetworkDB stats brain(d27a6fd2c2e9) - netID:zhiu5b26idkphoxezt55d3ycj leaving:false netPeers:1 entries:10 Queue qLen:0 netMsg/s:0"
Nov 17 12:15:23 brain dockerd[678]: time="2020-11-17T12:15:23.174464323-05:00" level=info msg="NetworkDB stats brain(d27a6fd2c2e9) - netID:8xtqjl42xi0ae0d92iz66xwqw leaving:false netPeers:1 entries:13 Queue qLen:0 netMsg/s:0"
Nov 17 12:15:24 brain dockerd[678]: time="2020-11-17T12:15:23.356434003-05:00" level=error msg="heartbeat to manager { } failed" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" method="(*session).heartbeat" module=node/agent node.id=lr0b8uyy1cuxz23sm2go8d31m session.id=upy6sixmo1s7zn96kor1bmpua sessionID=upy6sixmo1s7zn96kor1bmpua
Nov 17 12:15:24 brain dockerd[678]: time="2020-11-17T12:15:24.051547439-05:00" level=error msg="agent: session failed" backoff=100ms error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node/agent node.id=lr0b8uyy1cuxz23sm2go8d31m
Nov 17 12:15:24 brain dockerd[678]: time="2020-11-17T12:15:24.417521455-05:00" level=error msg="agent: session failed" backoff=300ms error="rpc error: code = Canceled desc = context canceled" module=node/agent node.id=lr0b8uyy1cuxz23sm2go8d31m
Nov 17 12:15:24 brain dockerd[678]: time="2020-11-17T12:15:24.420876430-05:00" level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=lr0b8uyy1cuxz23sm2go8d31m
Nov 17 12:15:24 brain dockerd[678]: time="2020-11-17T12:15:24.421016896-05:00" level=info msg="waiting 147.592795ms before registering session" module=node/agent node.id=lr0b8uyy1cuxz23sm2go8d31m
Nov 17 12:15:25 brain dockerd[678]: time="2020-11-17T12:15:25.767349504-05:00" level=info msg="worker lr0b8uyy1cuxz23sm2go8d31m was successfully registered" method="(*Dispatcher).register"
Nov 17 12:15:46 brain dockerd[678]: sync duration of 7.659315671s, expected less than 1s
Nov 17 12:16:05 brain dockerd[678]: sync duration of 1.152709946s, expected less than 1s
Nov 17 12:16:09 brain dockerd[678]: time="2020-11-17T12:16:09.819246098-05:00" level=info msg="Container 2983d2bb842c32a40dbc0905e842245092da4c9950158ac4eb5ea90b40a79d5f failed to exit within 10 seconds of signal 15 - using the force"
Nov 17 12:16:09 brain dockerd[678]: time="2020-11-17T12:16:09.838358685-05:00" level=info msg="Container 35a696f803d4b9765ea85b3a1f7d3914a21337c443c2b0fccfe9e0d044e4b60c failed to exit within 10 seconds of signal 15 - using the force"
Nov 17 12:16:09 brain dockerd[678]: time="2020-11-17T12:16:09.838893049-05:00" level=info msg="Container 6aa620827e96174913cb962a26cbc5dfd74b409894e788065d3d381803f4735f failed to exit within 10 seconds of signal 15 - using the force"
Nov 17 12:16:09 brain dockerd[678]: time="2020-11-17T12:16:09.926186167-05:00" level=info msg="Container 11112d6c2d1be166a01283e4bcba5bfbbc47a2bb56ea48a2093d9a6a258df092 failed to exit within 10 seconds of signal 15 - using the force"
Nov 17 12:16:09 brain dockerd[678]: time="2020-11-17T12:16:09.929450175-05:00" level=info msg="Container a443981017104a691015314167aeb5935e4d7017a500078ab36cbe8e50828435 failed to exit within 10 seconds of signal 15 - using the force"
Nov 17 12:16:09 brain dockerd[678]: time="2020-11-17T12:16:09.930908454-05:00" level=info msg="Container bab7828911252c16012807a44361ea377de37bf479676eef1f5a791516ea4cdd failed to exit within 10 seconds of signal 15 - using the force"
motherfucker i don't remember asking you to kill a goddamned thing


LochNessMonster
Feb 3, 2005

I need about three fitty


Harik posted:

ISTG docker is going to make me jump off a building

"Hey we killed your entire stack with nothing in the logs, no OOM nah we just felt like it, cool right?"

motherfucker i don't remember asking you to kill a goddamned thing

Sounds like this bug: https://github.com/moby/moby/issues/38203

How many stacks are you running? We had a similar issue at $client-1 at around 500 stacks on a 20-30 node swarm cluster. No OOM warnings either but from time to time a node just evicted all containers and then got stuck until we rebooted it. Wasn’t VM related as far as I could tell as we rebuilt the whole cluster from scratch but kept having this issue from time to time.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

LochNessMonster posted:

Sounds like this bug: https://github.com/moby/moby/issues/38203

How many stacks are you running? We had a similar issue at $client-1 at around 500 stacks on a 20-30 node swarm cluster. No OOM warnings either but from time to time a node just evicted all containers and then got stuck until we rebooted it. Wasn’t VM related as far as I could tell as we rebuilt the whole cluster from scratch but kept having this issue from time to time.


Two years old, no developer has glanced at it, and the last "me too" was last week.

yup, that's the troubleshooting we know and love.

AFAICT it's docker failing to talk to itself due to a hiccough and deciding "gently caress it the best thing to do now is to explode"

I told it to wait a year for heartbeats to time out because it's clearly too loving stupid to know what to do.
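
For reference, the knob being cranked there is presumably the swarm dispatcher heartbeat; a sketch, with the duration taken literally:

code:
# run on a manager node; the default heartbeat period is 5s
docker swarm update --dispatcher-heartbeat 8760h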

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Anyone have suggestions on IDM solutions for on premise airgapped/unreliable internet connections where we're using Gsuite/okta for identity?

We have a bunch of bare metal Linux servers we wanted to see about federating SSH and whatnot to. We also have a small cluster of webapps using SAML and some LDAP.

Was eyeing keycloak but I'm not sure that's what I want. I'd like to allow Gsuite/okta users to my on premise stuff, even if it's just a routine sync

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Teleport? https://goteleport.com/teleport/

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time.

Their networks aren't connected to the host but is there a way to temporarily connect to one / tell docker to proxy me a single socket? There's statistics available on a pull basis I'd like to access for each individual instance.

Right now I'm working around it with docker exec <instanceid> wget but for anything more complicated that's not going to work, and the exec attachment is slow and expensive for a single http hit.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

This looks sweet but not exactly what I'm after. Right now I'm going to try IPA server with Google directory sync/okta and see how that goes. Heh.

Re: docker, if those networks aren't visible to the host, you can also directly expose a port on the host and connect that way. If you're using swarm, just add the port as 8080:8080 or whatever and away you go. I assume that's what you mean by socket.

The downside is you have to have those ports available on the host, they can't overlap.
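
The CLI equivalent for an already-running swarm service, with the service name and ports as placeholders:

code:
# publish container port 8080 on host port 8080 via the routing mesh;
# in the stack file this is just  ports: ["8080:8080"]
docker service update --publish-add published=8080,target=8080 mystack_myservice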

Failing that we use Traefik and are very happy with it in our swarms. Pretty easy to set up.

LochNessMonster
Feb 3, 2005

I need about three fitty


Harik posted:

Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time.

Their networks aren't connected to the host but is there a way to temporarily connect to one / tell docker to proxy me a single socket? There's statistics available on a pull basis I'd like to access for each individual instance.

Right now I'm working around it with docker exec <instanceid> wget but for anything more complicated that's not going to work, and the exec attachment is slow and expensive for a single http hit.

Not sure if this is what you want, but you could spin up a container, connect it to the network with docker network connect, run a command to gather stats, and then disconnect it from the network again.
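
A rough sketch of that (network and service names made up; the stack's overlay network has to be created attachable, e.g. attachable: true in the stack file, or the connect will be refused):

code:
docker run -d --name statsbox alpine sleep infinity
docker network connect mystack_default statsbox
docker exec statsbox nslookup tasks.myservice               # lists each task's IP
docker exec statsbox wget -qO- http://10.0.3.5:8080/stats   # <task IP from the lookup above>
docker network disconnect mystack_default statsbox
docker rm -f statsbox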

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Gyshall posted:

Anyone have suggestions on IDM solutions for on premise airgapped/unreliable internet connections where we're using Gsuite/okta for identity?

We have a bunch of bare metal Linux servers we wanted to see about federating SSH and whatnot to. We also have a small cluster of webapps using SAML and some LDAP.

Was eyeing keycloak but I'm not sure that's what I want. I'd like to allow Gsuite/okta users to my on premise stuff, even if it's just a routine sync
My last company did this with JumpCloud. Worked well enough for us

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Harik posted:

Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time.

Their networks aren't connected to the host but is there a way to temporarily connect to one / tell docker to proxy me a single socket? There's statistics available on a pull basis I'd like to access for each individual instance.

Right now I'm working around it with docker exec <instanceid> wget but for anything more complicated that's not going to work, and the exec attachment is slow and expensive for a single http hit.

You can expose a service's port only to a particular IP. ports: "127.0.0.1:8888:80" will let you call localhost:8888 from the local machine and have it redirected to the service's port 80, but the rest of the world won't be able to use it.
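
In the compose file that's just the following (service name is a placeholder; note that swarm's ingress routing mesh doesn't honour the host-IP part, so this applies to plain docker-compose or mode: host publishing):

code:
cat > docker-compose.override.yml <<'EOF'
services:
  myservice:
    ports:
      - "127.0.0.1:8888:80"   # host IP : host port : container port
EOF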

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

LochNessMonster posted:

Not sure if this is what you want, but you could spin up a container and connect it to the network with docker network connect run command to gather stats and disconnect from the network again.

Not much different than using docker exec to run curl inside the containers like I have running now. I could write something that exposed its own service that let me bounce IPs, but at that point I might as well just have each service push instead of pull. Probably the better long-term plan anyway.

NihilCredo posted:

You can expose a service's port only to a particular IP. ports: "127.0.0.1:8888:80" will let you call localhost:8888 from the local machine and have it redirected to the service's port 80, but the rest of the world won't be able to use it.

It's N-scaled services in a stack using the docker proxy to load balance. They do only show up on localhost, with SSL termination done outside the container. I don't think I can tell it to give me one port per container, and I'm not sure how I'd manage that anyway.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Oh, I misunderstood your problem then.

It's looking like a bit of an X-Y problem here though. If you're running a scaled service, you should absolutely not care about the distinction between one instance versus another - they should be stateless. So ideally the service itself should poo poo out those statistics into a mounted volume or whatever, and not wait for them to be pulled.

If you can't do that, then you're looking at essentially a pod design - each instance combines one container running the service and one container to pull out the logs periodically. Swarm doesn't support pods natively like k8s, but there's at least one project to emulate them.

Or you could take a more hacky route and write a Dockerfile that runs both your service itself and a cronjob that pulls the statistics. Uglier than pods but certainly simpler.
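
A sketch of that hacky route, assuming the service already serves its stats on a local HTTP endpoint (paths, port, and interval are all made up):

code:
cat > entrypoint.sh <<'EOF'
#!/bin/sh
# start the real service in the background
/usr/local/bin/myservice &
SVC_PID=$!
# poor man's cron: scrape the pull-based stats once a minute and dump them
# to stdout so they land in the container log before the task is recycled
while kill -0 "$SVC_PID" 2>/dev/null; do
  sleep 60
  wget -qO- http://localhost:8080/stats || true
done
EOF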

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
Yes, push is the way to go and what I'll be using in the future.

The X-Y is that I'm hunting a soft lockup where everything appears to be running smoothly but the number of processed requests/s falls off a cliff in a single instance. The server containers are ephemeral and stateless, but when one fails I need to be able to pull up the lifetime metrics for that single instance to hunt for clues.

I may just write it to the service's stdout periodically, and if I care what happened with eaf58702a I can look at the log before it gets recycled.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Harik posted:

Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time.

Their networks aren't connected to the host but is there a way to temporarily connect to one / tell docker to proxy me a single socket? There's statistics available on a pull basis I'd like to access for each individual instance.

Right now I'm working around it with docker exec <instanceid> wget but for anything more complicated that's not going to work, and the exec attachment is slow and expensive for a single http hit.
You can expose each service instance on a different forwarded port without impacting your container-side networking.

The other answers about the push model are, of course, also viable and good and right.


Spring Heeled Jack
Feb 25, 2007

If you can read this you can read
I have a question regarding on-prem k8s. Up until now I've just been dealing with clusters run by cloud providers that wire everything up for me.

I have a 3 node cluster running in my vSphere environment (the cluster was created with Rancher). I have another server running nginx that is doing load balancing for all 3 nodes. This is a flat network as I am just testing things out.

In the cluster I have deployed the ingress-nginx ingress controller. I have a test deployment and service running in the cluster and ingress seems to be working normally, at least as I would expect it to.

Essentially: test-service.domain.int > external nginx lb IP > cluster > ingress controller > service


My question is, am I missing some additional component to the load balancing? I have been reading about MetalLB, but in this scenario is it required, since I have this nginx server doing external load balancing to the cluster nodes? Or is it still required in some capacity?

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
your nginx controller is probably running as a daemonset in hostnetwork mode or with hostports which means all you need to do is get the traffic to the nodes and you’re set (for ingress resources)

needing non-http services is when you’d want something like metallb (although nodeports can do in a pinch)
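
A quick way to check which flavour the chart gave you (names assume a default ingress-nginx install):

code:
# a daemonset plus hostNetwork=true (or hostPorts) means the controller listens
# on the nodes directly, so an external LB pointed at the nodes is enough
kubectl -n ingress-nginx get daemonset,deployment
kubectl -n ingress-nginx get pods -o jsonpath='{.items[*].spec.hostNetwork}'; echo
kubectl -n ingress-nginx get svc -o wide   # a NodePort/LoadBalancer service shows up here instead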

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

my homie dhall posted:

your nginx controller is probably running as a daemonset in hostnetwork mode or with hostports which means all you need to do is get the traffic to the nodes and you’re set (for ingress resources)

needing non-http services is when you’d want something like metallb (although nodeports can do in a pinch)

I’ll have to check but I did a stock install of the latest helm chart from the official repo. Right now we only do http services (or at least stick everything behind an api gateway) so that works out for now.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Spring Heeled Jack posted:

I have a question regarding on-prem k8s. Up until now I've just been dealing with clusters run by cloud providers that wire everything up for me.

I have a 3 node cluster running in my vSphere environment (the cluster was created with Rancher). I have another server running nginx that is doing load balancing for all 3 nodes. This is a flat network as I am just testing things out.

In the cluster I have deployed the ingress-nginx ingress controller. I have a test deployment and service running in the cluster and ingress seems to be working normally, at least as I would expect it to.

Essentially: test-service.domain.int > external nginx lb IP > cluster > ingress controller > service


My question is, am I missing some additional component to the load balancing? I have been reading about MetalLB, but in this scenario is it required, since I have this nginx server doing external load balancing to the cluster nodes? Or is it still required in some capacity?
MetalLB is really for n-way scaling of your network services. Whatever HA configuration you have on your Nginx cluster will work just fine. In a pinch, if you needed HA straight to your ingress controllers without a pile of BGP, you could set up something like a VRRP active/passive IP failover without too much headache.
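
A minimal sketch of the VRRP option with keepalived (interface, VIP, and router ID are placeholders; the standby node gets state BACKUP and a lower priority):

code:
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance INGRESS_VIP {
    state MASTER
    interface eth0            # NIC facing the flat network
    virtual_router_id 51
    priority 150              # standby uses e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.10.250/24     # the floating IP your DNS points at
    }
}
EOF
systemctl restart keepalived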

LochNessMonster
Feb 3, 2005

I need about three fitty


What's the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy)? Kops, Kubeadm or Kubespray? Or should I just run k3s?

Erwin
Feb 17, 2006

LochNessMonster posted:

What's the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy)? Kops, Kubeadm or Kubespray? Or should I just run k3s?

To me kubeadm is simple enough that the opinionation of the other tools isn't worth the hassle.
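
For a PoC it's roughly this much typing (pod CIDR and CNI choice are just examples; the join command comes from kubeadm init's output):

code:
# on the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f <your-CNI-manifest>   # flannel, calico, whatever
# on each worker, paste the join command that kubeadm init printed:
# sudo kubeadm join <control-plane>:6443 --token ... --discovery-token-ca-cert-hash sha256:...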

Docjowles
Apr 9, 2009

Kops is pretty much just for AWS with some other options kinda-sorta supported, which rules that one out.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
We've been using Kubespray/Digital Rebar, which may be overkill for your use case.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Is it Kube-spray or Kubes-pray though

Erwin
Feb 17, 2006

I always thought Kubespray sounded gross. Like “ah jeez grab a mop, I got Kubernetes everywhere. Ew, it’s dripping off of the ceiling!”

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Erwin posted:

I always thought Kubespray sounded gross. Like “ah jeez grab a mop, I got Kubernetes everywhere. Ew, it’s dripping off of the ceiling!”

Checks out

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Canonical MaaS is underrated IMO, I run it in my homelab

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Latest OpenShift has a bare metal install option too.

Hadlock
Nov 9, 2004

LochNessMonster posted:

What's the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy)? Kops, Kubeadm or Kubespray? Or should I just run k3s?

Actual bare metal, or EC2 instances? For true bare metal, Rancher v2 is pretty painless but you have to define your own pvc against an NFS server or whatever

Option B as you mentioned, if you're not doing anything exotic, is k3s which is also by the Rancher folks

For AWS stuff kops is dead reliable, or at least it was last time I used it two years ago
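
For reference, the k3s route really is about this short (get.k3s.io is the official installer):

code:
curl -sfL https://get.k3s.io | sh -     # installs and starts a single-node server
sudo k3s kubectl get nodes
# extra nodes: grab /var/lib/rancher/k3s/server/node-token from the server, then
# run the installer with K3S_URL=https://<server>:6443 K3S_TOKEN=<token> set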

12 rats tied together
Sep 7, 2006

Vulture Culture posted:

Canonical MaaS is underrated IMO, I run it in my homelab

Agreed, I pushed for it at last-job but group consensus was that storing the OS images as Postgres blobs was a non-starter. I liked the product overall, though.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
I used MaaS and Juju about five (!?) years ago and liked it, but holy poo poo was it a mess to inherit.

LochNessMonster
Feb 3, 2005

I need about three fitty


Hadlock posted:

Actual bare metal, or EC2 instances? For true bare metal, Rancher v2 is pretty painless but you have to define your own pvc against an NFS server or whatever

Option B as you mentioned, if you're not doing anything exotic, is k3s which is also by the Rancher folks

For AWS stuff kops is dead reliable, or at least it was last time I used it two years ago

Unfortunately I'm dealing with on-prem VMs, otherwise I'd just have used EKS/AKS/GKE.

Thanks for the feedback everyone.

vanity slug
Jul 20, 2010

LochNessMonster posted:

Unfortunately I'm dealing with on-prem VMs, otherwise I'd just have used EKS/AKS/GKE.

Thanks for the feedback everyone.

https://aws.amazon.com/eks/eks-anywhere/

Fresh off the press.

Hed
Mar 31, 2004

Fun Shoe
Ok that’s neat.

LochNessMonster
Feb 3, 2005

I need about three fitty



Cool, gonna check that out for sure.

drunk mutt
Jul 5, 2011

I just think they're neat
Putting together plans for the new architecture, let me know what y'all think:

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Corp por

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


kal vas flam



i did not know those memory circuits were still active.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

drunk mutt posted:

Putting together plans for the new architecture, let me know what y'all think:



No token ring.

No Safe Word
Feb 26, 2005

drunk mutt posted:

Putting together plans for the new architecture, let me know what y'all think:



You're missing a bus off the bottom cloud.


Hadlock
Nov 9, 2004

Does anyone have a roadmap for running 12-factor apps (particularly the config-stored-in-the-environment part) on AWS ECS?

In the K8s world I'd just encrypt the secrets in a sops secrets file, and then deploy it via helm-secrets.
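
For comparison, the K8s flow described above is roughly this (file names are placeholders; keys come from a .sops.yaml or the --kms/--pgp flags):

code:
sops --encrypt --in-place secrets.values.yaml
helm secrets upgrade --install myapp ./chart -f values.yaml -f secrets.values.yaml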

  • Reply