|
ISTG docker is going to make me jump off a building. "Hey, we killed your entire stack with nothing in the logs. No OOM, nah, we just felt like it, cool right?"
|
# ? Nov 17, 2020 18:40 |
|
|
|
Harik posted:ISTG docker is going to make me jump off a building Sounds like this bug: https://github.com/moby/moby/issues/38203 How many stacks are you running? We had a similar issue at $client-1 at around 500 stacks on a 20-30 node swarm cluster. No OOM warnings either, but from time to time a node just evicted all containers and then got stuck until we rebooted it. Wasn't VM-related as far as I could tell, as we rebuilt the whole cluster from scratch but kept having this issue from time to time.
|
# ? Nov 17, 2020 21:11 |
|
LochNessMonster posted:Sounds like this bug: https://github.com/moby/moby/issues/38203 Two years old, no developer has glanced at it, last "me too" last week. Yup, that's the troubleshooting we know and love. AFAICT it's docker failing to talk to itself due to a hiccough and deciding "gently caress it, the best thing to do now is to explode." I told it to wait a year for heartbeats to time out because it's clearly too loving stupid to know what to do.
|
# ? Nov 17, 2020 21:19 |
|
Anyone have suggestions on IDM solutions for on premise airgapped/unreliable internet connections where we're using Gsuite/okta for identity? We have a bunch of bare metal Linux servers we wanted to see about federating SSH and whatnot to. We also have a small cluster of webapps using SAML and some LDAP. Was eyeing keycloak but I'm not sure that's what I want. I'd like to allow Gsuite/okta users to my on premise stuff, even if it's just a routine sync
|
# ? Nov 19, 2020 03:31 |
|
Teleport? https://goteleport.com/teleport/
|
# ? Nov 19, 2020 03:53 |
|
Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time. Their networks aren't connected to the host, but is there a way to temporarily connect to one / tell docker to proxy me a single socket? There are statistics available on a pull basis I'd like to access for each individual instance. Right now I'm working around it with docker exec <instanceid> wget, but for anything more complicated that's not going to work, and the exec attachment is slow and expensive for a single HTTP hit.
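If it helps in the meantime, the exec workaround can at least be scripted across every replica on a node — a rough sketch, assuming the service is named myservice and exposes stats on :8080/stats (both placeholders):

```shell
# Loop over every local container belonging to one swarm service
# (swarm sets the service-name label on each task container)
# and pull its stats endpoint via the existing exec workaround.
for id in $(docker ps -q --filter "label=com.docker.swarm.service.name=myservice"); do
  echo "== $id =="
  docker exec "$id" wget -qO- http://localhost:8080/stats
done
```

Still one exec per hit, so it doesn't fix the cost problem, but it at least tags each result with the instance it came from.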
|
# ? Nov 19, 2020 04:01 |
|
This looks sweet but not exactly what I'm after. Right now I'm going to try IPA server with Google directory sync/okta and see how that goes. Heh. Re: docker, if those networks aren't visible to the host, you can also directly expose a port on the host and connect that way. If you're using swarm, just add the port as 8080:8080 or whatever and away you go. I assume that's what you mean by socket. The downside is you have to have those ports available on the host, they can't overlap. Failing that we use Traefik and are very happy with it in our swarms. Pretty easy to set up.
|
# ? Nov 19, 2020 11:28 |
|
Harik posted:Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time. Not sure if this is what you want, but you could spin up a container, connect it to the stack's network with docker network connect, run a command to gather stats, and then disconnect it from the network again.
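As a sketch (network and service names are placeholders, and this assumes the stack's overlay network was created with --attachable, which swarm requires before a standalone container can join it):

```shell
# Attach a throwaway container to the stack's network, pull stats, detach.
docker run -d --name stats-probe alpine sleep 3600
docker network connect mystack_default stats-probe
# tasks.<service> is swarm's DNS name that resolves to every task's IP
docker exec stats-probe wget -qO- http://tasks.myservice:8080/stats
docker network disconnect mystack_default stats-probe
docker rm -f stats-probe
```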
|
# ? Nov 19, 2020 14:47 |
|
Gyshall posted:Anyone have suggestions on IDM solutions for on premise airgapped/unreliable internet connections where we're using Gsuite/okta for identity?
|
# ? Nov 19, 2020 15:54 |
|
Harik posted:Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time. You can expose a service's port only to a particular IP. ports: "127.0.0.1:8888:80" will let you call localhost:8888 from the local machine and have it redirected to the service's port 80, but the rest of the world won't be able to use it.
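In compose-file terms that looks something like this (a sketch with placeholder names; note this is for plain docker-compose — swarm's routing mesh handles published ports differently):

```yaml
services:
  myservice:
    image: myservice:latest
    ports:
      - "127.0.0.1:8888:80"   # reachable from the host only, not the outside world
```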
|
# ? Nov 19, 2020 16:53 |
|
LochNessMonster posted:Not sure if this is what you want, but you could spin up a container, connect it to the network with docker network connect, run a command to gather stats, and disconnect from the network again. Not much different than using docker exec to run curl inside the containers like I have running now. I could write something that exposed its own service that let me bounce IPs, but at that point I might as well just have each service push instead of pull. Probably the better long-term plan anyway. NihilCredo posted:You can expose a service's port only to a particular IP. ports: "127.0.0.1:8888:80" will let you call localhost:8888 from the local machine and have it redirected to the service's port 80, but the rest of the world won't be able to use it. It's N-scaled services in a stack using docker proxy to load balance. They do only show up on localhost, with SSL termination done outside the container. I don't think I can tell it to give me one port per container, and I'm not sure how I'd manage that anyway.
|
# ? Nov 20, 2020 10:25 |
|
Oh, I misunderstood your problem then. It's looking like a bit of an X-Y problem here, though. If you're running a scaled service, you should absolutely not care about the distinction between one instance versus another - they should be stateless. So ideally the service itself should poo poo out those statistics into a mounted volume or whatever, and not wait for them to be pulled. If you can't do that, then you're looking at essentially a pod design - each instance combines one container running the service and one container to pull out the logs periodically. Swarm doesn't support pods natively like k8s, but there's at least one project to emulate them. Or you could take a more hacky route and write a Dockerfile that runs both your service itself and a cronjob that pulls the statistics. Uglier than pods but certainly simpler.
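The hacky route might look roughly like this (a sketch only — the service binary, stats URL, and log path are all placeholders; busybox crond ships in alpine):

```dockerfile
FROM alpine:3.18
COPY myservice /usr/local/bin/myservice
# Pull the stats endpoint every minute and append it to a mounted volume
RUN echo '* * * * * wget -qO- http://localhost:8080/stats >> /stats/stats.log' \
    > /etc/crontabs/root
# Start crond in the background, then run the service itself
CMD crond && exec /usr/local/bin/myservice
```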
|
# ? Nov 20, 2020 11:44 |
|
Yes, push is the way to go and what I'll be using in the future. The X-Y is I'm hunting a soft lockup where everything appears to be running smoothly but the number of processed requests/s falls off a cliff in a single instance. The server containers are ephemeral and stateless, but when one fails I need to be able to pull up the lifetime metrics for that singular instance to hunt for clues. I may just write it to the service's stdout periodically, and if I care what happened with eaf58702a I can look at the log before it gets recycled.
|
# ? Nov 25, 2020 10:48 |
|
Harik posted:Anyway, docker. I've got a stack running with multiple copies of a service using the docker internal load balancing. Yes, I know. I'm going to move to proper service discovery and an actual load balancer. One thing at a time. The other answers about the push model are, of course, also viable and good and right.
# ? Nov 27, 2020 16:17 |
|
I have a question regarding on-prem k8s. Up until now I've just been dealing with clusters run by cloud providers that wire everything up for me. I have a 3-node cluster running in my vSphere environment (the cluster was created with Rancher). I have another server running nginx that is doing load balancing for all 3 nodes. This is a flat network, as I am just testing things out. In the cluster I have deployed the ingress-nginx ingress controller. I have a test deployment and service running in the cluster, and ingress seems to be working normally, at least as I would expect it to. Essentially: test-service.domain.int > external nginx LB IP > cluster > ingress controller > service. My question is: am I missing some additional component to the load balancing? I have been reading about MetalLB, but in this scenario is it required, since I have this nginx server doing external load balancing to the cluster nodes? Or is it still required in some capacity?
|
# ? Nov 30, 2020 21:42 |
|
your nginx controller is probably running as a DaemonSet in hostNetwork mode or with hostPorts, which means all you need to do is get the traffic to the nodes and you're set (for ingress resources). needing non-HTTP services is when you'd want something like MetalLB (although NodePorts can do in a pinch)
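For reference, in the ingress-nginx helm chart that maps to values along these lines (a sketch — double-check the key names against your chart version):

```yaml
# values.yaml snippet for the ingress-nginx chart
controller:
  kind: DaemonSet      # one controller pod per node
  hostNetwork: true    # bind :80/:443 directly on each node's network
  # alternatively, keep pod networking but pin host ports:
  hostPort:
    enabled: true
```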
|
# ? Nov 30, 2020 23:42 |
|
my homie dhall posted:your nginx controller is probably running as a daemonset in hostnetwork mode or with hostports which means all you need to do is get the traffic to the nodes and you’re set (for ingress resources) I’ll have to check, but I did a stock install of the latest helm chart from the official repo. Right now we only do HTTP services (or at least stick everything behind an API gateway), so that works out for now.
|
# ? Dec 1, 2020 00:02 |
|
Spring Heeled Jack posted:I have a question regarding on-prem k8s. Up until now I've just been dealing with clusters run by cloud providers that wire everything up for me.
|
# ? Dec 1, 2020 01:00 |
|
What's the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy)? Kops, kubeadm, or Kubespray? Or should I just run k3s?
|
# ? Dec 1, 2020 15:29 |
|
LochNessMonster posted:Whats the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy). Kops, Kubeadm or Kubespray? Or should I just run k3s? To me kubeadm is simple enough that the opinionation of the other tools isn't worth the hassle.
|
# ? Dec 1, 2020 16:27 |
|
Kops is pretty much just for AWS with some other options kinda-sorta supported, which rules that one out.
|
# ? Dec 1, 2020 16:31 |
|
We've been using Kubespray/digital rebar which may be overkill for your use case.
|
# ? Dec 1, 2020 17:10 |
|
Is it Kube-spray or Kubes-pray though
|
# ? Dec 1, 2020 19:47 |
|
I always thought Kubespray sounded gross. Like “ah jeez grab a mop, I got Kubernetes everywhere. Ew, it’s dripping off of the ceiling!”
|
# ? Dec 1, 2020 22:50 |
|
Erwin posted:I always thought Kubespray sounded gross. Like “ah jeez grab a mop, I got Kubernetes everywhere. Ew, it’s dripping off of the ceiling!” Checks out
|
# ? Dec 1, 2020 23:20 |
|
Canonical MaaS is underrated IMO, I run it in my homelab
|
# ? Dec 2, 2020 00:12 |
|
Latest OpenShift has a bare metal install option too.
|
# ? Dec 2, 2020 00:47 |
|
LochNessMonster posted:Whats the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy). Kops, Kubeadm or Kubespray? Or should I just run k3s? Actual bare metal, or EC2 instances? For true bare metal, Rancher v2 is pretty painless, but you have to define your own PVC against an NFS server or whatever. Option B, as you mentioned, if you're not doing anything exotic, is k3s, which is also by the Rancher folks. For AWS stuff kops is dead reliable, or at least it was last time I used it two years ago.
|
# ? Dec 2, 2020 01:29 |
|
Vulture Culture posted:Canonical MaaS is underrated IMO, I run it in my homelab Agreed, I pushed for it at last-job, but group consensus was that storing the OS images in postgres blob format was a non-starter. I liked the product overall, though.
|
# ? Dec 2, 2020 03:33 |
|
I used MaaS and Juju about five (!?) years ago and liked it, but holy poo poo was it a mess to inherit.
|
# ? Dec 2, 2020 03:37 |
|
Hadlock posted:Actual bare metal, or EC2 instances? For true bare metal, Rancher v2 is pretty painless but you have to define your own pvc against an NFS server or whatever Unfortunately I'm dealing with on-prem VMs, otherwise I'd just have used EKS/AKS/GKE. Thanks for the feedback everyone.
|
# ? Dec 2, 2020 12:14 |
|
LochNessMonster posted:Unfortunately I'm dealing with on prem VM's, otherwise I'd just have used EKS/AKS/GKE. https://aws.amazon.com/eks/eks-anywhere/ Fresh off the press.
|
# ? Dec 2, 2020 15:20 |
|
Ok that’s neat.
|
# ? Dec 2, 2020 16:05 |
|
Jeoh posted:https://aws.amazon.com/eks/eks-anywhere/ Cool, gonna check that out for sure.
|
# ? Dec 2, 2020 19:17 |
|
Putting together plans for the new architecture, let me know what y'all think:
|
# ? Dec 3, 2020 05:08 |
|
Corp por
|
# ? Dec 3, 2020 05:43 |
|
Kal vas flam. I did not know those memory circuits were still active.
|
# ? Dec 3, 2020 06:13 |
|
drunk mutt posted:Putting together plans for the new architecture, let me know what y'all think: No token ring.
|
# ? Dec 3, 2020 06:23 |
|
drunk mutt posted:Putting together plans for the new architecture, let me know what y'all think: You're missing a bus off the bottom cloud.
|
# ? Dec 3, 2020 06:39 |
|
|
|
Does anyone have a roadmap for running 12-factor apps (particularly the "config stored in the environment" part) on AWS ECS? In the k8s world I'd just encrypt the secrets with sops, and then deploy them via helm-secrets.
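For what it's worth, the closest ECS-native mechanism is the task definition's secrets block, which injects Secrets Manager or SSM Parameter Store values as environment variables at container start — a sketch with placeholder names and ARNs:

```json
{
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myapp:latest",
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-url"
        }
      ]
    }
  ]
}
```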
|
# ? Dec 7, 2020 23:21 |