|
Nomnom Cookie posted:kubernetes is real, real bad actually. it was designed on the assumption that you could use etcd to provide every kubelet and every kube-proxy and every controller in the cluster with a globally consistent view of cluster state

ahem, it's not globally consistent, it's eventually globally consistent. everything works fine so long as nothing ever changes in your cluster and, if it does, so long as those changes don't result in poo poo fighting to account for a state that only showed up due to something else trying to account for a change, before settling into an equilibrium that never actually happens.

the primitives generally make sense though! too bad most everyone lacks the OS theory and distributed systems background to try and use them well tho
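to make the complaint concrete: a kube controller is, at bottom, a level-triggered reconcile loop that diffs desired state against observed state and nudges. a toy sketch in python (all names invented, nothing to do with the real client-go API) — note that in this toy it converges in one pass, precisely because nothing else is mutating the observed state concurrently, which is the luxury real clusters don't have:

```python
# toy level-triggered reconcile loop (hypothetical names, not a real API):
# the controller never reacts to events directly, it just repeatedly
# compares desired state to observed state and emits corrective actions
def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move observed toward desired."""
    actions = []
    for name, spec in desired.items():
        if observed.get(name) != spec:
            actions.append(f"apply {name}")
            observed[name] = spec          # pretend the apply succeeded
    for name in list(observed):
        if name not in desired:
            actions.append(f"delete {name}")
            del observed[name]
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}

first = reconcile(desired, observed)   # converges in a single pass here
second = reconcile(desired, observed)  # steady state: nothing left to do
```

with several controllers each mutating `observed` while the others loop, you get the poo-poo-flinging equilibrium chase described above.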
|
# ? Feb 20, 2024 06:42 |
|
|
# ? Apr 27, 2024 08:41 |
|
this tracks with what ive seen - on-prem implementations of kube are effectively 1 app per cluster and as stateless as possible. a static kube cluster is a reliable kube cluster: no upgrades or security fixes, just make a new one, deploy the current version of the app to it, and shoot the old cluster in the face.
|
# ? Feb 20, 2024 16:45 |
|
Nomnom Cookie posted:kubernetes is real, real bad actually. it was designed on the assumption that you could use etcd to provide every kubelet and every kube-proxy and every controller in the cluster with a globally consistent view of cluster state. as anyone who has actually scaled a distributed system before would have guessed, this lasted for about five seconds after hitting a real use case and has only gotten worse since. a "properly" functioning production kube cluster is nothing more or less than an enormous pile of poo poo covered in monkeys, and all of the monkeys are constantly grabbing handfuls of poo poo to fling at each other and to different places on the pile. you see all these monkeys being extremely busy and get impressed by how much is going on, but in the end its still monkeys flinging poo poo and you hope occasionally a splat lands in the right spot to make something happen

i work on a pretty big kube deployment as a managed platform used by the rest of the company and it's ok. things are divided into groups of a few thousand nodes each and then have some basic federation on top of that. but we don't expose the kube apis to users, it's effectively an implementation detail on our end

also buddy if you think kube is bad then take a look at mesos lol
|
# ? Feb 20, 2024 19:27 |
|
kube is certainly A Solution to the problem of "how do i bin pack a bunch of garbage onto these computers" imagine though if you didn't have this problem. imagine what kind of world that would be.
|
# ? Feb 20, 2024 19:40 |
|
any design based on polling loops, no service dependencies but instead random retries and sleeps with eventual giving up, and otherwise timing windows on top of timing windows, is a terrible design. it's like the stupidest possible design, one that could only work if you have effectively infinite cores available and a network with infinite bandwidth and 0 latency.
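the pattern being complained about, sketched: poll, sleep a random amount, eventually give up. a toy version (not any particular project's code) of exponential backoff with full jitter:

```python
import random
import time

def retry_with_backoff(op, attempts=5, base=0.01, cap=1.0):
    """Call op() until it succeeds; sleep a random, exponentially growing
    amount between tries; give up (re-raise) after `attempts` failures."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # the "eventual giving up" part
            # exponential backoff with full jitter
            time.sleep(random.uniform(0, min(cap, base * 2 ** i)))

calls = 0
def flaky():
    """A dependency that isn't ready for the first two polls."""
    global calls
    calls += 1
    if calls < 3:
        raise RuntimeError("not ready yet")
    return "ok"

result = retry_with_backoff(flaky)
```

every component doing this independently is where the "timing windows on top of timing windows" comes from: nothing knows about its dependencies, everything just polls and hopes.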
|
# ? Feb 20, 2024 19:53 |
|
Progressive JPEG posted:also buddy if you think kube is bad then take a look at mesos lol

yeah. jesus christ, mesos. I ran a mesos cluster for a few years because we thought "kubernetes isn't mature enough" in 2016 or whatever. I couldn't throw that stack into the trash fast enough once we had kubernetes working.
|
# ? Feb 20, 2024 20:04 |
|
i dunno if we should be listening to the guy trying to run ospf between vms for rdma traffic for what is a bad design
|
# ? Feb 20, 2024 20:04 |
|
nobody should listen to me ever for any reason but forget rdma: when you have 2 network adapters, one is fast and talks to nearby stuff, and one is slow and talks to the entire world, how do you decide when you can use the fast network? no, you cannot bridge or route the fast network to the world.

a) /etc/hosts overrides
b) split horizon dns
c) dynamic routing
d) unique host names for all viable paths and configure the path in the application config
e) ?????
|
# ? Feb 20, 2024 20:18 |
|
chuck it all in the bin and go play pinball
|
# ? Feb 20, 2024 20:27 |
|
seconded
|
# ? Feb 20, 2024 20:32 |
|
see? this is why i ask you excellent people, because you can see both forest and trees.
|
# ? Feb 20, 2024 20:42 |
|
the answer though OP is the route table. it's what it was designed for and it has all the features you need to solve that problem.
|
# ? Feb 20, 2024 21:20 |
|
fresh_cheese posted:nobody should listen to me ever for any reason

in the situation where the two networks aren't connected, d would be the objectively correct solution. in some hosed up stupid situation where you've got two networks, where one can access the internet but the other can't, but they both use the same addressing, the only shortest path you need to calculate is out the door and away from that situation
|
# ? Feb 20, 2024 21:29 |
|
12 rats tied together posted:the answer though OP is the route table. it's what it was designed for and it has all the features you need to solve that problem.

invent my own dynamic routing automation based on internally discoverable topology facts. check. … gotta do both ends tho so i dont end up with asymmetric routes, so ill need a notification mechanism… aaaaand i just reinvented ospf. greaaaat.
|
# ? Feb 20, 2024 21:33 |
|
abigserve posted:in the situation where the two networks aren't connected d would be the objectively correct solution

nah, the two networks do not overlap in terms of ip addresses. the slow network is 10.0.0.0/8 and the fast network is 192.168.0.0/24
|
# ? Feb 20, 2024 21:36 |
|
no just use ospf. or configure a 0.0.0.0 route out the slow nic and a more specific route out the internal one. probably start with the second one
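the second suggestion is just longest-prefix match: among the routes that contain a destination, the most specific wins, so a /24 out the fast nic beats the /0 default out the slow one. a toy lookup with python's ipaddress (interface names invented):

```python
import ipaddress

# toy route table: default (0.0.0.0/0) out the slow nic, plus one more
# specific route for the fast network out the fast nic
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "slow-nic",       # the entire world
    ipaddress.ip_network("192.168.0.0/24"): "fast-nic",  # the fast network
}

def pick_interface(dst: str) -> str:
    """Longest-prefix match: most specific containing route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

fast = pick_interface("192.168.0.5")  # hits the /24
slow = pick_interface("10.1.2.3")     # falls through to the default
```

same rule the kernel applies; no daemon, no protocol, just a table.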
|
# ? Feb 20, 2024 21:36 |
|
12 rats tied together posted:no just use ospf. or configure a 0.0.0.0 route out the slow nic and a more specific route out the internal one. probably start with the second one

exactly. that's literally all you need to do bruz
|
# ? Feb 20, 2024 21:54 |
|
if i hack a route for outbound traffic on host1 to use the fast network when it wants to talk to host2, i need a corresponding return path on host2 to also use the fast path to get the responses back to host1. it's the coordination of both those changes across not just a pair of nodes, but every node running on the fast network, that makes this effectively a "use ospf or reimplement ospf poorly" situation.

i was hoping i was just too dumb to see an easy solution. nope, just dumb. situation is also dumb. everything is dumb except naps. naps rule.
|
# ? Feb 20, 2024 22:13 |
|
outhole surfer posted:can ospf do anycast?

yes, it does it quite well (ECMP), but remember that ECMP routing is based on C for 'cost'. ospf links can have different costs, which means you may need to be careful. eg: one server has a 1 gbps nic, the other has a 10 gbps nic; those are not equal 'cost' by default, so the 10 gbps server will get 100% of the traffic until it goes offline. you can adjust the 'cost' of links manually to smooth that out.

also, your implementation of ospf may not have different costs between fast links (eg >10 gbps), because they made cost go down with link speed instead of up, hit the bottom around 10 gbps, and so when 25/40/100 came out they didn't have any lower numbers and just shrugged and said "well those are all close enough". but that's a specific ospf implementation detail that can be tuned/corrected, just fyi
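the classic cost formula is reference bandwidth divided by link bandwidth, floored at 1, which is exactly where the "close enough" collapse comes from. a quick sketch (the 10 gbps reference here is an assumed tuning, not a universal default; cisco's historical default reference is 100 mbps, which collapses even sooner):

```python
def ospf_cost(link_bps: int, reference_bps: int = 10_000_000_000) -> int:
    """Classic OSPF interface cost: reference bandwidth / link bandwidth,
    floored at 1 since cost must be a positive integer."""
    return max(1, reference_bps // link_bps)

# with a 10 gbps reference, everything at or above 10 gbps bottoms out
# at cost 1 -- 10/25/100 gbps links all look identical to ospf
costs = {bw: ospf_cost(bw) for bw in
         (1_000_000_000, 10_000_000_000, 25_000_000_000, 100_000_000_000)}
```

raising the reference bandwidth is the usual fix, so faster links get distinct (lower) costs again.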
|
# ? Feb 20, 2024 22:14 |
|
fresh_cheese posted:if i hack a route for outbound traffic on host1 to use the fast network when it wants to talk to host2, i need a corresponding return path on host2 to also use the fast path to get the responses back to host1

is the fast network (192.168.0.0/24) a single contiguous network? hosts will automatically send traffic for that local network out the local (fast) nic. if not contiguous, just point the supernet (192.168.0.0/24) at the gateway on said network. the reply will do the same since it'll be sourcing from the ip on the fast nic.

i've used additional dns records for this before, eg: host1 and host1-fast, host2 and host2-fast, and if i want to use the good link, i put `host2-fast` into the app, and it works because everybody with a NIC in the fast network will prefer that due to the local route always being installed with the interface
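the two-names trick above can even be automated a little: pick the `-fast` name only when the local machine actually has a leg on the fast subnet. a toy sketch (names and subnets invented for illustration):

```python
import ipaddress

# assumed fast subnet from upthread
FAST_NET = ipaddress.ip_network("192.168.0.0/24")

def pick_name(base: str, local_addrs) -> str:
    """Use the hypothetical <base>-fast name only if this machine holds an
    address inside the fast network; otherwise fall back to the slow name."""
    on_fast = any(ipaddress.ip_address(a) in FAST_NET for a in local_addrs)
    return f"{base}-fast" if on_fast else base

near = pick_name("host2", ["10.0.5.1", "192.168.0.1"])  # dual-homed box
far = pick_name("host2", ["10.0.7.9"])                  # slow-only box
```

the return path takes care of itself because, as noted, connections sourced from the fast-nic address come back over the fast network.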
|
# ? Feb 20, 2024 22:20 |
|
fresh_cheese posted:if i hack a route for outbound traffic on host1 to use the fast network when it wants to talk to host2, i need a corresponding return path on host2 to also use the fast path to get the responses back to host1

what the gently caress are you talking about, you've got two nodes that are connected to the same network, you don't need to hack anything. it's literally just "have a route to the other network that uses the faster interface"

madsushi posted:is the fast network (192.168.0.0/24) a single contiguous network? hosts will automatically send traffic for that local network out the local (fast nic). if not contiguous, just point the supernet (192.168.0.0/24) at the gateway on said network. the reply will do the same since it'll be sourcing from the ip on the fast nic

this. you need one extra route (assuming one supernet) on all your hosts and they should have it anyway lmao
|
# ? Feb 20, 2024 22:39 |
|
Asymmetric POSTer posted:i dunno if we should be listening to the guy trying to run ospf between vms for rdma traffic for what is a bad design

well listen to me instead then
|
# ? Feb 20, 2024 23:25 |
|
abigserve posted:what the gently caress are you talking about, you've got two nodes that are connected to the same network, you don't need to hack anything it's literally just "have a route to the other network that uses the faster interface"

mfs will literally deploy ospf to avoid learning about route tables
|
# ? Feb 20, 2024 23:27 |
|
ip route add 192.168.0.0/24 dev fast-nic

done
|
# ? Feb 20, 2024 23:30 |
|
very painfully piecing together what our hero has splayed out across 12 posts, here is my guess as to the situation:

1. VMs are deployed to various hosts
2. every VM is on two networks
3. 10.0.0.0/8 is the real network and puts packets on wires
4. 192.168.0.0/24 is the fake network that can only talk to VMs on the same machine and puts packets directly in socket buffers
5. the requirement is to transparently use the intra-VM fake nic when possible so that packets dont have to hairpin through the host's network stack to reach the destination when that destination happens to be on the same physical machine
6. therefore you cant just make a route table because the problem is that the correct destination IP varies depending on the sender address

currently they are doing fucky things with /etc/hosts to make names resolve to the intra-VM address when that is available. what they should be doing is giving each VM host a chunk of the real network to call its own so that you can make a route table

edit: what they really should be doing is none of this bullshit because it doesn't matter and isn't load bearing. or if it is load bearing they need to stop with this opportunistic bullshit and come up with a scheme that guarantees VM placement so as to get the perf they need to hit SLAs
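the "chunk of the real network per host" idea, sketched: carve 10.0.0.0/8 into per-host subnets, and then a VM's address alone tells you which physical host it lives on, so a plain route table works. subnet size and host names here are invented for illustration:

```python
import ipaddress

# carve the real network into per-host /24s (sizing is illustrative)
real_net = ipaddress.ip_network("10.0.0.0/8")
host_subnets = dict(zip(["vmhost-a", "vmhost-b", "vmhost-c"],
                        real_net.subnets(new_prefix=24)))
# -> vmhost-a: 10.0.0.0/24, vmhost-b: 10.0.1.0/24, vmhost-c: 10.0.2.0/24

def vm_host_for(vm_addr: str) -> str:
    """The address now encodes placement, so lookup is pure prefix match."""
    addr = ipaddress.ip_address(vm_addr)
    for host, subnet in host_subnets.items():
        if addr in subnet:
            return host
    raise LookupError(vm_addr)

same_host = vm_host_for("10.0.1.7") == vm_host_for("10.0.1.200")
```

with that, "is the destination on my machine?" is a single route-table entry instead of /etc/hosts fuckery.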
|
# ? Feb 20, 2024 23:36 |
|
Nomnom Cookie posted:ip route add 192.168.0.0/24 fast-nic

lol ok im done youre right you win jesus gently caress goddamn
|
# ? Feb 21, 2024 01:01 |
|
if you aren’t rdmaing, does a vswitch transit matter? extremely not my area, but I don’t understand a functional difference between a vswitch internal-only transit vs a vswitch transit with an uplink. or does this scenario require some internal non-vswitch networking?
|
# ? Feb 21, 2024 01:09 |
|
Nomnom Cookie posted:4. 192.168.0.0/24 is the fake network that can only talk to VMs on the same machine and puts packets directly in socket buffers

hmm. in that case i would simply stop using VMs, run 2 processes in the same OS (the OS is good at this im told) and then use pipes instead of sockets.
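the "just use pipes" suggestion, sketched: two processes on one machine talking over stdin/stdout pipes, no network stack involved at all. toy child process invented for illustration:

```python
import subprocess
import sys

# a trivial 'service' process: reads one line, shouts it back
child_code = "print(input().upper())"

# the pipe is the whole transport: stdin in, stdout out
proc = subprocess.run(
    [sys.executable, "-c", child_code],
    input="hello\n", capture_output=True, text=True,
)
reply = proc.stdout.strip()
```

no addresses, no routes, no asymmetric-return-path problem, because there is no path.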
|
# ? Feb 21, 2024 01:18 |
|
hey whats a reasonable way to handle two ISPs in a home situation? like if i had a WISP and a 4g modem that have similar speeds. thinking load could be distributed across both, rather than doing a priority failover setup. don't really know what i should be looking for here
|
# ? Feb 21, 2024 01:39 |
|
[dusting off ccna] GLBP lets you do load balancing. it is a type of first-hop redundancy protocol which normally is for active/standby-ing your gateways. i would start by googling FHRP + load balancing / sharing / etc. words. you will need some form of managed router.
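for flavor, the trick GLBP uses to load-balance: there's one virtual gateway IP, but the active router answers each ARP request with a different forwarder's virtual MAC (round-robin is one of its selectable policies), so hosts spread traffic across gateways without knowing it. a toy sketch, MACs invented:

```python
from itertools import cycle

# virtual MACs of the hypothetical GLBP forwarders behind one gateway IP
FORWARDER_MACS = ["00:07:b4:00:01:01", "00:07:b4:00:01:02"]
_rr = cycle(FORWARDER_MACS)

def arp_reply_for(client_ip: str) -> str:
    """Round-robin policy: each ARP for the gateway IP gets the next
    forwarder's MAC (client_ip is ignored under this policy)."""
    return next(_rr)

# four clients arping the same gateway IP get alternating forwarders
macs = [arp_reply_for(f"192.0.2.{i}") for i in range(1, 5)]
```

GLBP also has weighted and host-dependent policies, but round-robin is the simplest illustration of the FHRP-with-load-sharing idea.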
|
# ? Feb 21, 2024 01:47 |
|
thunderdome it with two dhcp servers on the same subnet
|
# ? Feb 21, 2024 01:50 |
|
two static routes, 128.0.0.0/8 and 129.0.0.0/8
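incidentally, the non-joke cousin of this trick is a pair of /1s rather than /8s: together they cover all of v4, and each one beats a bare default route on prefix length, which is how some VPN clients quietly steal the default route. a quick check with python's ipaddress:

```python
import ipaddress

low = ipaddress.ip_network("0.0.0.0/1")     # 0.0.0.0 - 127.255.255.255
high = ipaddress.ip_network("128.0.0.0/1")  # 128.0.0.0 - 255.255.255.255
default = ipaddress.ip_network("0.0.0.0/0")

# the two halves cover the entire v4 address space...
covered = low.num_addresses + high.num_addresses

# ...and either half is more specific than the default route, so both
# win longest-prefix match against it
beats_default = low.prefixlen > default.prefixlen
```

point one /1 at each ISP and you have the crudest possible 50/50 split, asymmetry and all.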
|
# ? Feb 21, 2024 02:02 |
|
lol
|
# ? Feb 21, 2024 02:07 |
|
Without provider-independent address space your options will be limited. The shape of the idea is something that determines which interface/provider to use and then source-NATs based off that to keep return traffic symmetric.
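that shape, sketched: hash each flow's 5-tuple to pick an exit, then NAT the source to that exit's address so replies come back the same way. a toy version (WAN names and addresses invented for illustration):

```python
import hashlib

# hypothetical exits: a WISP link and an LTE link, each with its own
# provider-assigned address
WANS = {"lte": "203.0.113.9", "wisp": "198.51.100.7"}

def pick_wan(src, sport, dst, dport, proto="tcp"):
    """Hash the 5-tuple so a given flow always maps to the same exit."""
    key = f"{src}:{sport}-{dst}:{dport}-{proto}".encode()
    names = sorted(WANS)
    return names[hashlib.sha256(key).digest()[0] % len(names)]

def snat(src, sport, dst, dport):
    """Rewrite the source address to the chosen exit's address; the
    reply is addressed to that exit, keeping the return path symmetric."""
    wan = pick_wan(src, sport, dst, dport)
    return (WANS[wan], sport, dst, dport), wan

# the same flow deterministically maps to the same exit and NAT state
a, wan_a = snat("192.168.1.10", 40000, "1.1.1.1", 443)
b, wan_b = snat("192.168.1.10", 40000, "1.1.1.1", 443)
```

per-flow (rather than per-packet) balancing is what keeps TCP from seeing reordering and the far end from seeing a mid-connection address change.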
|
# ? Feb 21, 2024 02:47 |
|
deez nats
|
# ? Feb 21, 2024 03:12 |
|
Most modern enterprisey firewalls will support something like that via session-based ECMP, which should also handle NAT based on outgoing interface. I assume that sorta thing has trickled down into the prosumer market already.

Jabor posted:deez nats
|
# ? Feb 21, 2024 04:41 |
|
12 rats tied together posted:hmm. in that case i would simply stop using VMs, run 2 processes in the same OS (the OS is good at this im told) and then use pipes instead of sockets.

making something that's simple and works isn't fun. getting to gently caress with routing protocols is fun. but yeah, if what you want is a set of processes that run on the same host and have extremely convenient communication between them, docker compose is right there. just use that
|
# ? Feb 21, 2024 05:30 |
|
Progressive JPEG posted:hey whats a reasonable way to handle two ISPs in a home situation

multi-wan is the term you're looking for
|
# ? Feb 21, 2024 05:55 |
|
Jabor posted:deez nats
|
# ? Feb 21, 2024 06:09 |
|
Somebody fucked around with this message at 06:29 on Feb 21, 2024 |
|
# ? Feb 21, 2024 06:24 |