|
ah, the thread of my people
|
# ¿ Feb 12, 2020 11:53 |
|
|
any of y’all tenants trying to get you to run a god dang “service mesh”? idk what real problems these things are trying to solve, I think they’re just inventing stuff for themselves to do
|
# ¿ Feb 12, 2020 12:04 |
|
Nomnom Cookie posted:
load balancing and canaries. basically, threading a single end user request back through a load balancer 10 times because of microservices is obviously horrible. service mesh isn’t obviously horrible

well it definitely seems like more of the former! love too pass all dataplane traffic through userspace
|
# ¿ Feb 12, 2020 21:41 |
|
imo on-prem is the best place for it. if you already have nice APIs for controlling infrastructure, k8s mostly duplicates features you don’t need. for example, if I’m using EC2 I can already use things like security groups for restricting access between VMs or ELBs for L4/L7 load balancing. kubernetes gives you nice APIs like NetworkPolicy, Ingress, and LoadBalancer for doing the same thing on-prem in a standardized way, and you don’t have to worry about it being some janky API written by your network team. you also get things like service discovery via dns.

it’s also ideal on-prem operationally speaking, because when a vm goes bad in the cloud you delete it and the ASG creates a fresh one on a different hypervisor. you obviously can’t do that on-prem, so you’d like to have a pool of workers that can be pulled for maintenance and returned to service in a way where workloads get rescheduled, which is what k8s offers
|
# ¿ Jun 9, 2020 22:55 |
|
Nomnom Cookie posted:
yeah, you’re not saying anything I don’t know. it’s not feasible to guarantee that requests never fail. I’m bitching about an aspect of k8s that guarantees some requests will fail, when they could have made different choices to avoid the failures. architectural purity is valued more highly than proper operation, and that pisses me off

the authors of kubernetes would probably say you’re going to have to solve this problem (requests dropping) eventually somewhere, and also that the problem you want them to solve is intractable. you’re asking them to somehow come up with a routing model that’s a) consistent/atomic across distributed nodes, b) supports dynamic scaling events, and c) never drops requests. how would you do this with any other provider, or even conceptually, other than a sleep after scaling down?
|
# ¿ Jun 11, 2020 16:23 |
|
Nomnom Cookie posted:
your a) is stronger than I need. what I’m looking for is an ordering guarantee between iptables updates and SIGKILL. you don’t need consensus for that. write a txid into endpoints when they’re updated, and write the most recent txid visible in iptables into node status. the rest is a pretty simple pre-stop hook. paging someone to unfuck the cluster when a node breaks is acceptable to me. god knows we do that enough with loving docker as it is

just because a host no longer has an iptables rule for a target doesn’t mean it’s not going to route traffic to that target, though. existing conntrack flows will continue to use that route, which means you’re not just waiting on iptables to update, but also on all existing connections (which may or may not even be valid) to that pod to terminate before you can delete it. what happens if neither side closes the connection, should the pod just never delete? it might be workable for your specific use case (in which case, write your own kube-proxy! there are already cni providers that replace it), but I don’t think it’s as trivial as you’re making it sound
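for reference, the pre-stop hook pattern people actually deploy is usually just a delay, so the endpoint removal has time to propagate into every node's iptables before the container gets SIGTERM. a minimal sketch, all values illustrative; note this only papers over the race with a timer, it doesn't actually order anything:

```yaml
# illustrative only: the endpoint is removed when termination starts,
# then the sleep gives kube-proxy on each node a chance to drop the
# pod from its rules before the process is signaled. the sleep length
# is a guess, not a guarantee.
spec:
  terminationGracePeriodSeconds: 45
  containers:
  - name: app
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "15"]
```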
|
# ¿ Jun 11, 2020 22:49 |
|
fwiw I think relying on iptables as the core mechanism in your data plane is a pretty questionable decision, but I’m not sure there are/were better alternatives available for kernels without nftables. I have only limited exposure to ipvs
my homie dhall fucked around with this message at 00:21 on Jun 12, 2020 |
# ¿ Jun 11, 2020 22:53 |
|
abigserve posted:
Any load balancing solution requires the person operating it to have a beyond-cursory understanding of the apps they are load balancing and that's an unreasonable request for a network team that may have to look after several thousand virtual servers so you get a lot of "tcp port alive" health checks and poo poo like that

even better is the ping health check
|
# ¿ Jun 13, 2020 05:02 |
|
my stepdads beer posted:
I thought docker did serious nat fuckery instead of ipv6

typically the host running the containers has a subnet like 192.168.0.0/24 that each container gets an IP out of. all of the containers have their own veth device in their net namespace with their IP on it, and they're all connected to the same bridge device in the root network namespace (docker0), so all the containers on the same host see each other as directly connected. any traffic that needs to go elsewhere (ie the rest of your network) will get NAT'd with the host's IP
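a sketch of how you'd poke at each of those pieces on a docker host (device/network names are the docker defaults; the out-of-the-box bridge subnet is typically 172.17.0.0/16 rather than a 192.168.x.x one):

```shell
docker network inspect bridge          # the bridge subnet and each container's IP
ip addr show docker0                   # the bridge device in the root namespace
bridge link show                       # container veth peers attached to docker0
sudo iptables -t nat -S POSTROUTING    # the MASQUERADE rule doing the NAT
```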
|
# ¿ Nov 13, 2020 01:10 |
|
if you're doing kube or some other orchestration where your containers get real IPs (from the perspective of the host), then instead of all containers being directly connected they'll each get their own veth interface in the host root namespace which will make everything routed
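the routing table makes the difference visible. a sketch of what you might see on each kind of host (addresses invented, device names will vary by CNI):

```shell
ip route show

# bridged docker: one route covering the whole container subnet via the bridge
#   172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
#
# routed CNI: a /32 host route per pod, each via that pod's own veth
#   10.244.1.2 dev veth1a2b3c4d scope link
#   10.244.1.3 dev veth5e6f7a8b scope link
```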
|
# ¿ Nov 13, 2020 01:14 |
|
Hed posted:
is there any way to dump an interface into a container like I can with LXC? or at least make a user-defined bridge that has a real device in it?

yeah, LXC and docker use the same mechanism for interface isolation. you should be able to look up the net namespace of the docker container and move whatever interface you want inside of it. I'd assume docker also doesn't mind if you add an interface to a user-defined bridge
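a sketch of the netns dance, assuming a running container named `web` and a spare NIC `eth1` (both names made up here):

```shell
# expose the container's net namespace to iproute2, then move the NIC in
pid=$(docker inspect -f '{{.State.Pid}}' web)
sudo mkdir -p /var/run/netns
sudo ln -sf "/proc/$pid/ns/net" /var/run/netns/web
sudo ip link set eth1 netns web
sudo ip netns exec web ip addr add 192.0.2.10/24 dev eth1
sudo ip netns exec web ip link set eth1 up
```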
|
# ¿ Nov 13, 2020 23:36 |
|
Bored Online posted:
it was decided that we are gonna adopt kubernetes

hope you’re using a managed offering
|
# ¿ Feb 12, 2021 06:24 |
|
[infra person voice]: if the infra people are the ones pushing it, it might be ok. if it’s devs you should be very scared
|
# ¿ Feb 12, 2021 06:30 |
|
the raison d'etre for kubernetes is ostensibly getting economies of scale for your ops and release teams. if you're trying to solve those two specific problems, then it's not bad. if you don't have either of those problems I don't know why you would use it, which is also why you should be suspicious of devs who are pushing it: kubernetes is not *for* them.
|
# ¿ Feb 13, 2021 10:39 |
|
Nomnom Cookie posted:
is k8s better for this than other container orchestrators, though? I’m thinking of nomad, specifically, because that’s what I’ve had exposure to. it seemed simpler to set up and operate

no clue, I only have experience with k8s, but it’s the one that “won” so it’s going to be here for a while. also, at least in terms of its logical model, it’s not very difficult to understand: you have workload units (pods) and then a bunch of abstractions to create and manage them in different ways. there’s incidental complexity, mostly in implementing networking and storage and everything that’s involved in initially standing a cluster up, but if you can make those things Someone Else’s Problem then it’s not too bad imo
|
# ¿ Feb 14, 2021 02:19 |
|
more like no users

kinda hard to believe that what’s exciting to everyone right now is basically just a process scheduler, but here we are
|
# ¿ Feb 21, 2021 08:06 |
|
minato posted:
It also abstracts away the underlying $cloud (also for the benefit of ops of release teams). It's not uncommon to use (say) AWS for public-facing workloads and some internal vSphere or RHV setup for more secure workloads. Having the same k8s interface for both is nice for the release/ops teams.

yeah, definitely this too. also you can shift teams to / from different types of underlying machines (eg moving to or from VMs) with much less hassle
|
# ¿ Feb 25, 2021 12:59 |
|
Bored Online posted:
luv 2 augment my college degree with cert tests that cost hundreds of dollars a piece

personally I would find that repulsive, but I’m glad you’re managing to keep a good attitude about it
|
# ¿ Mar 16, 2021 12:51 |
|
what kind of certs are useful nowadays? like if I see an applicant with a ccna my thought is just “cool, I won’t have to teach them networking” but it seems like you can’t just have one “thing” nowadays
|
# ¿ Mar 16, 2021 12:54 |
|
Dear Mister “I don’t route or bridge my LANs” This will be the last frame I ever send your rear end I’ve sent six ARPs and still no word, I don't deserve it? I know you got my last two packets, I wrote the addresses on 'em perfect
|
# ¿ Apr 10, 2021 02:14 |
|
my stepdads beer posted:
today i accidentally got one of our transit providers to give me transit over their peering exchange, oops

time to accidentally leak some routes
|
# ¿ Apr 21, 2021 14:09 |
|
does anyone have experience with HA VIPs in an L3 ECMP environment? I know there is glb director, which is supposed to solve this, but it looks a bit complicated to set up, so I was wondering if there are any other projects/reading I should look at before I try to implement a hopefully more dumb + simple POC using something like conntrackd. what I want is tcp over anycast that can survive a change of paths/endpoints
|
# ¿ May 12, 2021 00:19 |
|
tortilla_chip posted:
Is that a mandatory requirement due to long lived flows? Resilient hashing works decently well and there's not a huge state penalty.

not long-lived flows, just a fairly dynamic network, so flows would be breaking all the time without resilient/consistent hashing or some other mechanism. and unfortunately (although imo probably correctly) the network guys have so far refused to put anything smart into the network, and something like this would require them enabling it everywhere. will check these vids out after work though, thanks!
|
# ¿ May 13, 2021 01:00 |
|
ate poo poo on live tv posted:
Maybe I'm naive, but I wouldn't expect TCP to survive changing endpoints (changing paths should be fine though) however on the application side you should be able to identify the same user session so that a drained endpoint doesn't disrupt the front end.

my stepdads beer posted:
yeah typically the app has to have some shared state to accommodate the VIP changing between nodes

yeah, what I'd like to have is a proxy/VIP service that lives across multiple nodes, where traffic can land on any of them and get forwarded to the correct service.

normally traffic for a single flow/connection will always take the same path in a network, even in ECMP environments, but this is a result of the way l3 ecmp is implemented: at every hop the 5-tuple (sport, sip, dip, dport, proto) is hashed into buckets equal to however many next hops are available, to determine what the next hop should be. so if the network is completely static, a given flow/connection will always wind up at the same place (because the 5-tuple doesn't change and the number of buckets isn't changing at any hop along the way) and this would be easy. our network changes all the time though, which breaks this behavior: whenever it happens, a bunch of flows that were previously going along one path and ending up at one endpoint get reshuffled to a different path/endpoint, and the connection breaks because the new endpoint won't know about it
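that bucket arithmetic is easy to see in a toy model. a sketch (the hash choice and hop names are made up, real routers use their own hardware hash functions): plain mod-N ECMP reshuffles most flows when a next hop disappears, while a consistent-hash scheme, in the spirit of the "resilient hashing" mentioned above, only moves the flows that were on the lost hop:

```python
import bisect
import hashlib

def pick_mod_n(flow, next_hops):
    # plain ECMP: hash the 5-tuple, mod the number of available next hops
    h = int(hashlib.md5(repr(flow).encode()).hexdigest(), 16)
    return next_hops[h % len(next_hops)]

def build_ring(next_hops, vnodes=100):
    # consistent hashing: each hop owns many points on a hash ring
    ring = []
    for hop in next_hops:
        for v in range(vnodes):
            point = int(hashlib.md5(f"{hop}:{v}".encode()).hexdigest(), 16)
            ring.append((point, hop))
    ring.sort()
    return ring

def pick_consistent(flow, ring):
    # a flow goes to the owner of the next point clockwise on the ring
    h = int(hashlib.md5(repr(flow).encode()).hexdigest(), 16)
    i = bisect.bisect(ring, (h, "")) % len(ring)
    return ring[i][1]

# 1000 synthetic flows: (sport, sip, dip, dport, proto)
flows = [(40000 + i, "10.0.0.1", "10.9.9.9", 443, 6) for i in range(1000)]

hops4 = ["hopA", "hopB", "hopC", "hopD"]
hops3 = hops4[:3]  # one path withdrawn

moved_mod = sum(pick_mod_n(f, hops4) != pick_mod_n(f, hops3) for f in flows)
ring4, ring3 = build_ring(hops4), build_ring(hops3)
moved_ring = sum(pick_consistent(f, ring4) != pick_consistent(f, ring3) for f in flows)

print(f"mod-N: {moved_mod}/1000 flows re-pathed, consistent: {moved_ring}/1000")
```

with mod-N, losing one of four hops remaps roughly three quarters of the flows, because nearly every hash lands in a different bucket once the modulus changes; with the ring, only the roughly one quarter of flows that were on the withdrawn hop move.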
|
# ¿ May 13, 2021 01:14 |
|
SamDabbers posted:
What kind of service are you running on those VIPs? This is probably better accomplished at the application layer to direct traffic to different IPs rather than this "anycast TCP" at the network layer. Your network peeps are correct to

the motivation is not for any specific application, but for building something like ELB on-prem, so you have some pool of servers holding a bunch of VIPs fronting the backends of various teams who need load balancing. you definitely need VIPs, but it'd also be nice if a node failing over and breaking all existing connections wasn't a thing, which it would be without some kind of connection state sharing. the fact that you don't need consistent hashing in the network if you solve this problem is just a bonus I guess

but given the reaction from everyone here, perhaps asking for chashing + tolerating mass connection death might be a more rational way to go
|
# ¿ May 13, 2021 13:18 |
|
cheque_some posted:
so i'm kind of out of my depth on this, but what you were talking about kinda reminded me of google's maglev system: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/44824.pdf

yeah, maglev is one implementation, the others that I know about are katran and glb director. I was hoping there might be a more accessible thing to play around with
|
# ¿ May 14, 2021 13:34 |
|
content delivery notwork
|
# ¿ Jun 8, 2021 12:44 |
|
what do people think about cumulus?
|
# ¿ Jun 12, 2021 08:48 |
|
I dunno if any of youse have had to deal with it yet, but I found out this weekend that iptables feels positively ergonomic compared to its successor. nftables has an interface that could only have been developed on extreme linux brain
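for anyone who hasn't hit it yet, a taste of the difference: the same "accept established, default drop" input policy in both (the table/chain names are whatever you pick, since nftables ships with no tables or chains at all until you create them):

```shell
# iptables: built-in chains already exist, you just append rules
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP

# nftables: first declare a table, then a chain hooked into input,
# then the rule, in its own expression grammar
nft add table inet filter
nft 'add chain inet filter input { type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
```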
|
# ¿ Jun 14, 2021 06:05 |
|
my stepdads beer posted:
https://www.fastly.com/blog/debunking-cloudflares-recent-performance-tests

this is just like ford vs ferrari
|
# ¿ Dec 7, 2021 03:37 |
|
have been driven into a situation where we'd like to do some kernel introspection, and systemtap is the only real option for us due to the age of our kernels. in local testing it looks fine, but i'm extremely gunshy about deploying it at scale. anyone have any experience doing so? we're staying within the "safe zone" of systemtap, ie no guru mode, and our kernel-side code is extremely simple, but bugs in systemtap or the kernel could obviously still gently caress us, and unfortunately even a single kernel lockup would be a Bad Time in our environment
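for a sense of scale of the "safe zone" stuff meant here, a sketch (the probe point is illustrative and assumes the vfs tapset exists for the kernel in question):

```shell
# count vfs reads per process for 10s, then print the top talkers.
# no guru mode, no embedded C, just tapset probes and aggregates.
sudo stap -e '
global reads
probe vfs.read { reads[execname()] <<< 1 }
probe timer.s(10) { exit() }
probe end {
  foreach (name in reads- limit 10)
    printf("%s: %d\n", name, @count(reads[name]))
}
'
```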
|
# ¿ Dec 26, 2021 11:00 |
|
CMYK BLYAT! posted:
im guessing you can't just make the case to move to something new enough to support ebpf

us being able to upgrade to something new (here a very liberal definition of new) is dependent on this project
|
# ¿ Dec 26, 2021 13:22 |
|
Nomnom Cookie posted:
do it in phases then. do real testing not just "yup script seems to do something"

yeah, we'll certainly roll it out in stages and test it under synthetic load, etc. unfortunately in this part of the shop everything is pets, so there's a limited amount of confidence we can gain by doing so
|
# ¿ Dec 27, 2021 05:03 |
|
|
didn't deploy the systemtap poo poo, think it was for the best. our environment still sucks so much poo poo though, it's unbelievable, we have just recently managed to mostly move off of a kernel version that was released when I was in high school.
|
# ¿ Jul 24, 2023 14:39 |
|
The Iron Rose posted:
The fact tf has statefiles at all is the problem

don’t think so, terraform needs a way to figure out existing state and it can’t do that without storing the previously applied state
|
# ¿ Sep 16, 2023 14:33 |
|
Progressive JPEG posted:
the tfstate file is a state cache, terraform is missing a way to refresh/populate its cache. like "tf import --all" to fetch current content for each listed resource in the config

how is it a cache? if I declare a resource, apply, and then stop declaring the resource and apply again, tf needs to know the resource was created in the first place so it can remove it. probably there are a lot of resources in a lot of providers that could do something like scanning for tags that were applied on creation, but this is not going to work for everything
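the declare/apply/undeclare/apply sequence is easy to model. a toy sketch (resource names invented, nothing terraform-specific): without the state recorded at the first apply, the second plan has no way to know the subnet should be destroyed:

```python
def plan(declared, recorded_state):
    # diff the declared config against what we recorded at the last apply
    to_create = [r for r in declared if r not in recorded_state]
    to_delete = [r for r in recorded_state if r not in declared]
    return to_create, to_delete

state = []                             # nothing applied yet
config = ["vpc", "subnet"]

create, delete = plan(config, state)   # first apply: creates both
state = list(config)                   # the "statefile" now records them

config = ["vpc"]                       # stop declaring the subnet
create, delete = plan(config, state)   # second apply
print(create, delete)                  # with state = [], the delete is impossible to derive
```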
|
# ¿ Sep 18, 2023 09:36 |
|
Progressive JPEG posted:
i think it's fine for state recovery to be best-effort. if there's a moronic write-only service that wouldn't work for this, then it can continue not working

it’s not about being write-only, it’s about there being an entire universe of resources in a given provider and knowing which ones are associated with which resource in this particular tf set. the state file might not be implemented in a great way, but it’s kinda the whole reason anyone uses tf. if you just want a boto script that can’t manage state (or just pretends to), you can write one!
|
# ¿ Sep 21, 2023 02:53 |
|
what do your routing tables look like
|
# ¿ Mar 2, 2024 07:01 |
|
|
yeah, i’d try to figure out whether all traffic between the two hosts is using the wrong routes or just the smb traffic. also would be interested in the routing table of the receiving host, and also the arp tables
|
# ¿ Mar 3, 2024 05:37 |