|
in all honesty id feel better about being called an nft crypto scammer than an hft it manager
|
# ? Feb 15, 2024 14:15 |
|
that’s how you lose your virginity ngl
|
# ? Feb 16, 2024 07:20 |
|
fresh_cheese posted: "hft? gently caress no this is generic moving money around bank poo poo."

from my almost-entirely uneducated perspective, the environment you've described seems incredibly bodged-together for "generic moving money around bank" poo poo. like, is this some startup whose gimmick is "we're spending x% less on IT than the competition because we're always near or at max hardware utilization"? hence the criticality of "we can't do routing because that would eat a CPU cycle, and every cycle not spent on 'moving money' costs us money"?
|
# ? Feb 16, 2024 17:14 |
|
sorry, i know that's a bit cynical of me. i really am curious to know more about what motivated the decisions that brought you to this point. i also understand if you can't get into much more detail due to security or nda concerns.
|
# ? Feb 16, 2024 17:19 |
|
Farmer Crack-rear end posted: "what motivated the decisions that brought you to this point"

with computer nerds its always whatever makes them feel clever
|
# ? Feb 16, 2024 17:57 |
|
a) not my design - im just the QA guy - im supposed to be making sure the design and technology works even when its stupid

b) finance companies are run by finance people who measure things financially and put financiers in charge of everything, including IT. %utilization of an expensive asset is trivial to measure, and there are generations of pressure to run it at 100% because otherwise youre "wasting money." these are the people who until the past 5 years said poo poo like "if you are not paging you bought too much memory."

c) buddy, if you think the entire it backbone of the world is well designed and not just a bunch of bodged together crap someone drew on a chalkboard 40 years ago i dunno what to tell you
|
# ? Feb 17, 2024 16:13 |
|
fresh_cheese posted: "a) not my design - im just the QA guy - im supposed to be making sure the design and technology works even when its stupid"

bullshit. ive worked with QA guys and QA guys will side-eye you if you say some nerd poo poo like "why arent we using OSPF to select links". admit it, youre a ccna-aboo
|
# ? Feb 17, 2024 20:27 |
|
lol i have no certifications whatsoever. ospf is just fukin cool. apparently im the only person alive who doesnt think its gross and that it has no place on multi homed servers.
|
# ? Feb 18, 2024 16:41 |
|
ospf on servers is fine as long as the configs on the switch side are suitably defensive in case of misconfiguration. stick the servers in a stub area and put route maps on the interfaces for which prefixes the server is allowed to advertise. might be more config but multipathing is much nicer at layer 3 than layer 2 (m)lag
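for anyone picturing it, a minimal sketch of that defensive switch-side setup in FRR-style syntax (the area number and the 10.5.x.x prefixes are invented for illustration):

```
! switch/ABR side: servers live in a stub area, and only their
! /32 service addresses are allowed out of it
router ospf
 area 0.0.0.5 stub
 area 0.0.0.5 filter-list prefix SERVER-VIPS out
!
! permit only /32s out of the designated service range, drop everything else
ip prefix-list SERVER-VIPS seq 10 permit 10.5.100.0/24 ge 32
ip prefix-list SERVER-VIPS seq 99 deny any
```

so even if a server daemon is misconfigured and tries to advertise something it shouldn't, the junk never leaves its area.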
|
# ? Feb 18, 2024 18:08 |
|
lagg/teaming/bonding is great for physical availability and load balancing within a single layer2 broadcast domain. multipathing at layer3 is great on top of that: it helps when you need to worry about spanning tree loops taking down a whole L2 due to misconfigured bonds and bridges, it handles core router outages and maintenance, and it magically handles relocation of virtual machines when your network cant do VXLAN because the BISO read an article on the jet that said it was a security vulnerability or some other bs
|
# ? Feb 18, 2024 19:02 |
|
networks are cool and good because they let computers talk to each other so you can play multiplayer myst and stuff. networks are terrible because they let computers talk to each other, and that will be the downfall of us all
|
# ? Feb 18, 2024 19:16 |
|
why ospf over ibgp
|
# ? Feb 18, 2024 19:26 |
|
its easier. with ospf you just turn it on and put everything in area 0.
|
# ? Feb 18, 2024 19:40 |
|
i dunno, never done ibgp. the "put it all in area 0" guy is right, as long as you get that thats a joke. the network people own area 0 and the routers between that and your stubby area. they give you a stubby area 5 that you put all your crap in, and then yea, it works great. stubby just means your area only talks to itself and the routers the network team connects your junk to - your stubby area does not provide transit routing to other areas adjacent to or behind it.
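the server side of that arrangement might look roughly like this in FRR (router-id, area number, and subnets are all made up here):

```
! FRR ospfd on a multi-homed server dropped into stub area 5
router ospf
 ospf router-id 10.5.100.7
 area 0.0.0.5 stub                    ! must match the area type on the network team's ABRs
 network 10.5.1.0/24 area 0.0.0.5     ! uplink to switch A
 network 10.5.2.0/24 area 0.0.0.5     ! uplink to switch B
```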
|
# ? Feb 18, 2024 19:51 |
|
can ospf do anycast? my main use case for ibgp at the server is ha dns and such
|
# ? Feb 18, 2024 21:51 |
|
i don't see why not. you would want to make sure that you dont accidentally install ECMP routes to anycast destinations and end up 50/50ing your traffic to 2 random nodes, but i'd be really surprised if there wasn't a config param on your routers for that
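on FRR/Cisco-style boxes the knob is usually `maximum-paths` under the ospf process - a sketch, worth verifying on whatever your platform actually is:

```
router ospf
 maximum-paths 1   ! install only the single best route; no ECMP splitting across anycast nodes
```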
|
# ? Feb 18, 2024 22:13 |
|
outhole surfer posted: "can ospf do anycast?"

no idea, never tried that! you could try adding the same vipa dns service address to all the dns servers and let ospf select the closest one with a viable path. that may get you where you wanna be. youll have the same ip reachable on (dns hosts * interfaces per host) paths in the ospf routing tables.

its chatty though by default - thats part of why it converges fast. youll maybe want to tune the link advertisement intervals if your environment is heavily virtualized and a hundred ospf daemons talking to each other on one core is gonna be too much.
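a minimal sketch of the per-host side of that, FRR-style (10.9.9.53/32 is an invented shared service address):

```
! identical on every dns host: the anycast address sits on loopback
interface lo
 ip address 10.9.9.53/32
!
router ospf
 network 10.9.9.53/32 area 0.0.0.5   ! advertise the shared /32
 network 10.5.1.0/24  area 0.0.0.5   ! real uplink subnet(s)
```

each host advertises the same /32, ospf delivers queries to whichever advertiser is topologically closest, and when a host dies its LSAs age out and traffic shifts to the next one.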
|
# ? Feb 18, 2024 22:25 |
|
yea, ospf can do anycast. you don't even need a routing protocol at all - you just have the same route in multiple places and/or destined to multiple places, no reason they can't be static or whatever; routing protocols just enable better fault tolerance. blurring the lines between systems and networks is good fun, and imo that setup is sane enough (although I work in telecom), but you'll spend the rest of your life teaching every new hire the realities of ip routing
|
# ? Feb 18, 2024 23:35 |
|
I set up anycast for a print service once; we had servers in every state and routing took you to the closest one. The servers had a normal NIC and a loopback interface, we used a static route to point to the loopback interface and redistributed it into ospf in each state. It worked really well, with the only downside being that if a print server had a system issue, like the service had crashed, there's no fault tolerance. If the server itself went down the static route would be removed, so that was fine for maintenance etc.
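roughly what that looks like on the router adjacent to each print server, in FRR/Cisco-style config (all addresses invented):

```
! static route to the anycast /32 that lives on the server's loopback;
! next hop is the server's NIC address on the connected subnet
ip route 10.77.0.9/32 10.77.12.5
!
router ospf
 redistribute static   ! leak the /32 into ospf so the closest instance wins
```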
|
# ? Feb 18, 2024 23:51 |
|
fresh_cheese posted: "lol i have no certifications whatsoever."

ha, nailed it. i meant ccna-aboo as a parallel to weeaboo, i.e. someone who is a fan of the thing, wishes they were the thing, dreams of being the thing. good news for you: getting a ccna is a hell of a lot easier than becoming japanese
|
# ? Feb 19, 2024 20:39 |
|
have you considered making a home lab with a 6 raspberry pi kubernetes cluster, a dual socket sandy bridge VM host, and triple-NUC san. cause then you could run all the weirdo network protocols you want without bothering the people at work about it
|
# ? Feb 19, 2024 20:41 |
|
ospf is the normal routing protocol. ibgp and is-is are the kubernetes cringe of the networking world. is-is is maybe closer to solaris, in that if you are running it you're probably a Knower and are using it to solve a problem that actually exists instead of a fake problem like kubernetes
|
# ? Feb 19, 2024 20:44 |
|
Nomnom Cookie posted: "ha nailed it. i meant ccna-aboo as a parallel to weeaboo, i.e. someone who is a fan of the thing, wishes they were the thing, dreams of being the thing. good news for you getting a ccna is a hell of a lot easier than becoming japanese"

yea na gently caress everything about that. im shitpostin about wacky network stuff in the networkin thread and you invite me to self harm by going for a cisco cert?
|
# ? Feb 19, 2024 20:59 |
|
better than harming your coworkers by exposing them to routing protocols uninvited
|
# ? Feb 19, 2024 21:02 |
|
Nomnom Cookie posted: "have you considered making a home lab with a 6 raspberry pi kubernetes cluster, a dual socket sandy bridge VM host, and triple-NUC san. cause then you could run all the weirdo network protocols you want without bothering the people at work about it"

lol
|
# ? Feb 19, 2024 23:57 |
|
I literally have ceph running on a triple nuc cluster under Proxmox
|
# ? Feb 20, 2024 00:01 |
|
i couldnt get enough rpis for the kube cluster but you bet the dual sandy bridge lives in my closet looking for a reason to exist
|
# ? Feb 20, 2024 01:03 |
|
did you know 10gbe fcoe switches are so cheap on ebay youd have to be stupid NOT to buy one
|
# ? Feb 20, 2024 01:04 |
|
12 rats tied together posted: "ospf is the normal routing protocol. ibgp and is-is are the kubernetes cringe of the networking world"

kubernetes is good tho, at least until all the vendors in the "cloud native" space get ahold of it and try to make it "easier" for people that use it but refuse to learn any of the configuration. don't want to understand what a Deployment is? dont worry, we've got a lovely abstraction layer over top that somehow ends up being more complex and won't let you fix poo poo when our hardcoded automation makes bad decisions
|
# ? Feb 20, 2024 02:18 |
|
Qtotonibudinibudet posted: "kubernetes is good tho"

vendors like this exist to tell upper management they can fire the devops dude who makes 280k and doesn't pay attention in meetings and replace him with a portal. no director can resist such a siren song.
|
# ? Feb 20, 2024 02:33 |
|
Nomnom Cookie posted: "have you considered making a home lab with a 6 raspberry pi kubernetes cluster, a dual socket sandy bridge VM host, and triple-NUC san. cause then you could run all the weirdo network protocols you want without bothering the people at work about it"

you mutherfuckers are perverts, all trynna run routing protocols on rasberries pi when they just wanna dhcp themselves a default route like every other normal big girl computer.
|
# ? Feb 20, 2024 03:02 |
|
seriously tho are you even doing real networking if all your routes are static?
|
# ? Feb 20, 2024 03:15 |
|
lol if you use rpi $250k supermicros or gtfo
|
# ? Feb 20, 2024 03:20 |
|
outhole surfer posted: "lol if you use rpi"

$250k? thats it? wheres the real computers?
|
# ? Feb 20, 2024 03:30 |
Qtotonibudinibudet posted: "don't want to understand what a Deployment is? dont worry, we've got a lovely abstraction layer over top that somehow ends up being more complex and won't let you fix poo poo when our hardcoded automation makes bad decisions"

oh hey when did you come consult on my project?
|
|
# ? Feb 20, 2024 03:51 |
|
Qtotonibudinibudet posted: "kubernetes is good tho"

kubernetes is real, real bad actually. it was designed on the assumption that you could use etcd to provide every kubelet and every kube-proxy and every controller in the cluster with a globally consistent view of cluster state. as anyone who has actually scaled a distributed system before would have guessed, this lasted for about five seconds after hitting a real use case and has only gotten worse since.

a "properly" functioning production kube cluster is nothing more or less than an enormous pile of poo poo covered in monkeys, and all of the monkeys are constantly grabbing handfuls of poo poo to fling at each other and to different places on the pile. you see all these monkeys being extremely busy and get impressed by how much is going on, but in the end its still monkeys flinging poo poo and you hope occasionally a splat lands in the right spot to make something happen
|
# ? Feb 20, 2024 05:33 |
|
that quite literally sounds like a skill issue on the part of people using it wrong
|
# ? Feb 20, 2024 05:49 |
|
that tracks with how i saw ppl using it at that lovely startup though. they used the 'restart container if the watchdog dies' thing to enable themselves to deploy poo poo code. that was about it.
|
# ? Feb 20, 2024 05:49 |
|
Jonny 290 posted: "that quite literally sounds like a skill issue on the part of people using it wrong"

theres no way to use it right. kubernetes is broken by design, based on the assumption that latency could be kept low enough that building the entire thing out of race conditions would be fine in practice. but! latency could not be kept low enough. who could have foreseen this
|
# ? Feb 20, 2024 06:02 |
|
and we know a thing or two about load-bearing race conditions around these parts
|
# ? Feb 20, 2024 06:31 |