|
Schadenboner posted: The UDM Pro is similar to the UDM but it's rack-mount; it doesn't have wireless but it's got a 10GbE SFP+, an 8 port switch rather than 4, the same cloud key and routing poo poo, and it also has a 'lil babby touchscreen. It's currently in Early Access: https://ubntwiki.com/products/unifi/unifi_dream_machine_pro

lol so it’s $80 more and then you gotta buy WiFi separately? seems like the opposite of what I need at home
|
# ? Dec 25, 2019 17:21 |
|
|
k8s let’s me think about compute as pools or slabs of ram and cpu, and let’s me sleep by having a bunch of smart auto recovery stuff built in. disk fills, or healthcheck fails? route around it. just in general there is a lot of good poo poo built in that I’d spend a decent amount of time trying to replicate and now don’t have to. also kubectl is magic compared to the poo poo I used to have to do. I can do things like get logs and system events with a simple, decently put together syntax.

typical issues I see: java shops pinned to 8 not turning the experimental flag on to detect cgroup stuff or not using the downward api to pass resource requests through manually; apps having no handlers for graceful termination; or doing something real terrible where they fork processes on containers and don’t get why that’s bad. also devs are terrible about putting requests on pods in general, and you get “hot” nodes where something isn’t limited and is slamming a node.
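to make the downward api bit concrete, here’s a hedged python sketch: it assumes the pod spec wired the container’s memory limit into a MEM_LIMIT_BYTES env var via resourceFieldRef (the env var name and the 75% headroom ratio are made up for illustration, not any standard):

```python
import os

def heap_budget_bytes(default=512 * 1024 * 1024, fraction=0.75):
    """Size an in-process memory budget from the container's limit.

    Assumes the pod spec exposed the limit via the downward API, e.g.:
      env:
        - name: MEM_LIMIT_BYTES
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    Falls back to a default when the env var is absent (e.g. local dev).
    """
    raw = os.environ.get("MEM_LIMIT_BYTES")
    if raw is None:
        return default
    # leave headroom so the process isn't OOM-killed right at the cgroup limit
    return int(int(raw) * fraction)
```

same idea as what the jvm cgroup flag does for you automatically on newer javas: size yourself from the cgroup, not from the host’s total ram.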
|
# ? Dec 26, 2019 01:27 |
|
freeasinbeer posted: or doing something real terrible where they fork processes on containers and don’t get why that’s bad.

Elaborate?
|
# ? Dec 26, 2019 04:28 |
|
freeasinbeer posted: k8s let’s me think about compute as pools or slabs of ram and cpu, and let’s me sleep by having a bunch of smart auto recovery stuff built in.

“let’s” is possessive, you want “lets”
|
# ? Dec 26, 2019 04:54 |
|
abigserve posted: Elaborate?

you generally don’t want background processes within a container because if the background process dies and it doesn’t take down the main process with it, you effectively end up with a zombie container that’s hiding the fact that it’s unhealthy from the management layer

ideally any fatal faults in a container result in the entire container exiting. then it will be relaunched automatically, and a restart counter will be incremented somewhere, making the flake visible in your monitoring

zombie containers can be ameliorated by having a health check that polls the container for liveness, but that’s really just a workaround option for when you’ve got something that you can’t get to exit when it’s died for whatever reason
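the “take the whole container down with it” part, as a hedged python sketch (the function and names are mine, not any real library): the entrypoint runs the background work in a thread and converts any worker death into a nonzero exit for the whole process, so the orchestrator sees the failure instead of a half-dead container:

```python
import sys
import threading

def run_with_fail_fast(worker):
    """Run `worker` in a thread; return a nonzero exit code if it dies.

    A container entrypoint would call sys.exit(run_with_fail_fast(job))
    so that a dead background worker kills the container instead of
    leaving it half-alive and hiding the failure from the orchestrator.
    """
    failures = []

    def wrapper():
        try:
            worker()
        except BaseException as exc:  # record any death of the worker
            failures.append(exc)

    t = threading.Thread(target=wrapper)
    t.start()
    t.join()  # a real entrypoint might poll several workers here
    return 1 if failures else 0
```

the restart counter then does the monitoring for you: crashlooping pods are visible, zombies aren’t.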
|
# ? Dec 26, 2019 05:11 |
|
Unix processes do have a thing where the parent can get messages about misbehaving children (SIGCHLD), like in erlang, but it’s not normalized in use like it is in erlang
|
# ? Dec 26, 2019 05:35 |
|
Cocoa Crispies posted: “let’s” is possessive, you want “lets”

I’m phone posting drunk from a beach in the South Pacific, so

zombie processes are bad, but as mentioned liveness and readiness probes can kinda glaze over it by taking them out of the service IP or by restarting the container they are configured on. Importantly not the whole pod tho, and they also won’t move hosts unless the pod is evicted for some reason like an OOM or running out of disk.

more commonly I see folks not catching the sigterm when a pod is terminated, like with a new version of a deployment, and even if they do, pid1 often doesn’t cleanly exit the child process. by default k8s sends a sigkill after 30 seconds. now the termination notification of the pod also removes it from the service IP, so you should have 30 seconds of no traffic before the sigkill but I find in general that apps really don’t like being abruptly killed.

simply put, child processes are a smell to me typically, one that lets me know there are likely deeper issues that k8s is gonna aggravate, because processes/pods/nodes are more ephemeral. if I can’t roll a node because processes can’t handle sigterms cleanly it makes it a pain in the rear end to do upgrades or installs of new tooling.
|
# ? Dec 26, 2019 05:43 |
|
It's been a while since I wrote anything that forked but I seem to remember getting into the zombie process state was hard as gently caress and you had to jump through a bunch of hoops to get there. i.e. if the parent process died or was killed, I remember (could be wrong) its children being automatically reaped or killed along with it by default. This would have been perl. Am I crazy?
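not crazy, just remembering the other case: when the parent dies, the children get reparented to pid 1, which reaps them, so you never see the zombie. zombies show up when the parent stays alive and never wait()s, which is exactly the in-container situation where your app is pid 1. a linux-only python sketch (it reads /proc, so it won't fly on macOS):

```python
import os
import time

# fork a child that exits immediately while the parent stays alive
pid = os.fork()
if pid == 0:
    os._exit(0)          # child: die right away
else:
    time.sleep(0.2)      # parent: give the child time to exit
    # /proc/<pid>/stat is "pid (comm) state ..."; split after the
    # closing paren to survive spaces in the command name
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().rsplit(")", 1)[1].split()[0]
    # the child sits in state 'Z' until the parent reaps it
    os.waitpid(pid, 0)
```

between the sleep and the waitpid, `state` is 'Z': a zombie, purely because the living parent hadn't called wait() yet.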
|
# ? Dec 26, 2019 09:28 |
|
ive seen people try to DIY their own init system within containers, with like a bunch of processes under a parent process that tries to keep them running. of course they do it wrong, so the parent process ends up silently missing failures and the container turns into a zombie where half of it isn't actually running anymore

normally this is driven by them wanting to keep a bunch of processes fully adjacent to each other. they should instead put the procs in separate containers in the same pod, with a shared emptyDir volume or similar for any stuff they actually need to share. this then allows the container management to manage the containers, and if any of the processes fails then the pod gets cleanly and reproducibly reset as the lord intended
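the separate-containers-one-pod shape, as a hedged yaml sketch (all names and images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-procs            # placeholder name
spec:
  volumes:
    - name: shared
      emptyDir: {}           # scratch space both containers can see
  containers:
    - name: app              # placeholder image/names throughout
      image: example/app:1.0
      volumeMounts:
        - name: shared
          mountPath: /shared
    - name: worker
      image: example/worker:1.0
      volumeMounts:
        - name: shared
          mountPath: /shared
```

if either container's process dies, the kubelet restarts it per the restartPolicy and the failure shows up in the restart count, instead of being swallowed by a homemade init.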
|
# ? Dec 26, 2019 09:59 |
|
rework a bit of systemd so it will run within containers and manage their local processes. problem solved
|
# ? Dec 26, 2019 16:24 |
|
CMYK BLYAT! posted: rework a bit of systemd so it will run within containers and manage their local processes.

people (terrible, horrible people) have been doing this for several years, since approximately 5 seconds after it became possible. of course they have

freeasinbeer posted: now the termination notification of the pod also removes it from the service IP, so you should have 30 seconds of no traffic before the sigkill

this is not always true and it can take a while for the NotReady to propagate, at least for NodePort services
|
# ? Dec 26, 2019 17:17 |
|
Cocoa Crispies posted: “let’s” is possessive, you want “lets”

nobody care’s
|
# ? Dec 26, 2019 17:52 |
|
we can't even put code on VM's properly --> K8s will fix our problems
|
# ? Dec 27, 2019 01:15 |
|
I got 10 Gbit at my work desk recently and I started getting audibly angry when a file transfer was only going at 2 Gbit the other day before I checked my privilege.
|
# ? Dec 27, 2019 04:49 |
|
a genuine question for all the k8s wranglers: what was wrong with docker and fleetctl?
|
# ? Dec 27, 2019 05:00 |
|
Jonny 290 posted: a genuine question for all the k8s wranglers:

fleetctl? idk, maybe nothing. by the time I started mangling containers for money k8s had already won, so I’ve never touched fleet

what’s wrong with docker? that whole thing where it’s a pile of poo poo. that’s pretty bad. cri-o will save us someday
|
# ? Dec 27, 2019 07:25 |
|
Jimmy Carter posted: I got 10 Gbit at my work desk recently and I started getting audibly angry when a file transfer was only going at 2 Gbit the other day before I checked my privilege.

all our switches and nics are gigabit (as you would expect), but my ip phone has a 100mbps switch in it so that's all i get. it sucks
|
# ? Dec 27, 2019 08:23 |
|
I discovered that speedtest dot net sucks dick because it stops being accurate above ~800 mbit. fast.com on the other hand will have zero complaints maxing out your 40 gbit nic
|
# ? Dec 27, 2019 09:34 |
|
You won't be able to push most consumer storage devices past 2gbps for actual real data transfers
|
# ? Dec 27, 2019 09:55 |
|
Jonny 290 posted: a genuine question for all the k8s wranglers:

docker the image format is fine. dockerd the runtime implementation is loving garbage and there’s zero reason to use it anymore, because there are roughly 500 compatible implementations that aren’t rubbish these days

off the top of my head dockerd have pulled poo poo like minor point releases with hideous breaking changes, long-running bugs where containers would just turn into zombies with no network for a few hours, etc. these days the main thing that sucks with dockerd, other than how flaky it is at scales beyond a single workstation, is that for marketing reasons it hardcodes docker hub as its default registry, which is bullshit if you’re running an airgapped setup with your own separate registry. other runtimes like containerd let you simply configure the default registry to point to your own on-prem instance, but if you’re some poor soul still running dockerd then you’ve got to inject a registry prefix into all your image names everywhere, or else dockerd tries to hit up docker hub and then throws up its hands when it’s unreachable

also docker the company was sold for scrap a few weeks ago so I fully expect monetization to be cranked to 11 shortly

never heard of fleetctl
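the containerd version of that is a couple of lines of toml; a sketch (the mirror URL is a placeholder, and the exact keys have moved around between containerd releases, so check the docs for your version):

```toml
# /etc/containerd/config.toml (containerd 1.x CRI plugin, older-style keys)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://registry.internal.example:5000"]
```

every pull of an unprefixed image name then goes to your on-prem registry instead of docker hub, no renaming required.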
|
# ? Dec 27, 2019 11:38 |
|
i like aws ecs. if i don't want to be hemmed into aws only i need to get better at k8s generally speaking though - same with terraform over cloudformation
|
# ? Dec 27, 2019 16:38 |
|
let's talk eigrp - it sucks and I hate it
|
# ? Dec 27, 2019 17:09 |
|
Forums Medic posted: let's talk eigrp - it sucks and I hate it

i would simply turn on ospf and remove eigrp. have you considered this op
|
# ? Dec 27, 2019 18:00 |
|
I use protocols that follow my projects. that’s why I use RIP
|
# ? Dec 27, 2019 18:30 |
|
Forums Medic posted: let's talk eigrp - it sucks and I hate it

From the Before Time when CISCO legitimately thought they would never have competition
|
# ? Dec 28, 2019 00:22 |
|
if you’re on AWS you qualify as a network engineer for knowing what a default route is
|
# ? Dec 28, 2019 00:55 |
|
what is radius
|
# ? Dec 28, 2019 22:50 |
|
Bloody posted: what is radius

authentication server for dial-up, wpa enterprise, some other stuff i think

wpa enterprise is nice because stations can authenticate base stations, and also managing credentials on a per-person basis scales better than a single shared key for big installs
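the protocol itself is tiny: md5 plus a shared secret between the NAS and the server. a python sketch of the Response Authenticator computation from RFC 2865 (the test values below are arbitrary, not from any real exchange):

```python
import hashlib
import struct

def response_authenticator(code, ident, attrs, request_auth, secret):
    """RFC 2865: MD5(Code + ID + Length + RequestAuth + Attributes + Secret).

    `request_auth` is the 16-byte Request Authenticator from the matching
    request; `secret` is the RADIUS shared secret (bytes) between the
    NAS and the server.
    """
    length = 20 + len(attrs)  # 20-byte fixed header + attribute bytes
    header = struct.pack("!BBH", code, ident, length)
    return hashlib.md5(header + request_auth + attrs + secret).digest()
```

this is why the shared secret matters: without it you can't forge a valid Access-Accept (modulo md5 being md5, which is its own conversation).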
|
# ? Dec 28, 2019 22:55 |
|
radius is a dead protocol that relies on obsolete design parameters like fixed client ip addresses and the sooner network/cli embrace oidc the better off we will all be.
|
# ? Dec 28, 2019 23:02 |
|
everyone who keeps pushing radius over a restful protocol that can do all it does and more is the reason why most enterprise practitioners keep designing new services on windows servers. it's like they love tending their beautiful bonsai tree garden of infrastructure instead of realizing that anything ops related is poo poo-tier work that needs to be minimized so we can get a good night's sleep and focus on more interesting problems.
|
# ? Dec 28, 2019 23:03 |
|
Turnquiet posted:everyone who keeps pushing radius over a restful protocol that can do all it does and more is the reason why most enterprise practitioners keep designing new services on windows servers. its like they love tending their beautiful bonsai tree garden of infrastructure instead of realizing that anything ops related is poo poo-tier work that needs to be minimized so we can get a good night's sleep and focus on more interesting problems.
|
# ? Dec 29, 2019 04:47 |
|
radius works fine and switching to oidc of all things would require such a massive rewrite of literally everything, and idk what the benefit would be. oidc only exists because web "developers" didn't want to use any of the existing federation standards that already worked.
|
# ? Dec 29, 2019 05:07 |
|
MFA with radius is a shitshow
|
# ? Dec 29, 2019 05:17 |
|
works great for me. im not using some Linux bullshit though, so maybe that's why
|
# ? Dec 29, 2019 05:18 |
|
sort of related, can't figure out how to get my hostname to point correctly to my sql database for public facing so i gave up. cname and a records don't seem to work properly
|
# ? Dec 29, 2019 06:10 |
|
Turnquiet posted: everyone who keeps pushing radius over a restful protocol that can do all it does and more is the reason why most enterprise practitioners keep designing new services on windows servers. it's like they love tending their beautiful bonsai tree garden of infrastructure instead of realizing that anything ops related is poo poo-tier work that needs to be minimized so we can get a good night's sleep and focus on more interesting problems.

the only problem with this argument is that those more interesting problems will be solved by somebody else. radius is job preservation, and i can respect that.
|
# ? Dec 29, 2019 06:11 |
|
Fundamentally there is no way to move to a "rest like" protocol for the functionality that RADIUS provides, because the primary function of RADIUS is to carry EAP messages. EAP messages are layer 2 only and typically are not forwarded past the switchport which means you need another protocol whose only role is a carrier for said messages to the layer 3 endpoint that serves as the AAA server.

Because EAP is end-to-end between the client and the authentication server for obvious reasons, there is no plausible way you could lift a framework like OIDC into the role that RADIUS provides. If you're thinking "why not use something other than EAP", consider that your clients have literally no network access at all prior to authentication. That is the primary use case for EAP/RADIUS.

The end of RADIUS is actually the end of traditional networking, which is the very slow, plodding shift for enterprise to move towards zero-trust networking via massive overlay networks, the standards for which are still not agreed upon let alone implemented.
|
# ? Dec 29, 2019 07:58 |
|
i dunno why everybody wants everything to be "restful" anyway. it's a message format that fits pretty well with data exploration apis that are expected to have many clients hitting a single dataset but it's also for some reason how you do rpc now and is supposed to be the only thing you do with any kind of http/s interface which is supposed to be the only way anything should communicate over the network including elaborate systems layered on top of it to replace lower level interfaces. even if nothing you're doing needs to be routable
|
# ? Dec 29, 2019 23:50 |
|
Phobeste posted: i dunno why everybody wants everything to be "restful" anyway. it's a message format that fits pretty well with data exploration apis that are expected to have many clients hitting a single dataset but it's also for some reason how you do rpc now and is supposed to be the only thing you do with any kind of http/s interface which is supposed to be the only way anything should communicate over the network including elaborate systems layered on top of it to replace lower level interfaces. even if nothing you're doing needs to be routable

because you can jank together stuff with curl
|
# ? Dec 30, 2019 00:39 |
|
|
abigserve posted: Fundamentally there is no way to move to a "rest like" protocol for the functionality that RADIUS provides, because the primary function of RADIUS is to carry EAP messages. EAP messages are layer 2 only and typically are not forwarded past the switchport which means you need another protocol whose only role is a carrier for said messages to the layer 3 endpoint that serves as the AAA server. Because EAP is end-to-end between the client and the authentication server for obvious reasons, there is no plausible way you could lift a framework like OIDC into the role that RADIUS provides.

it's not the network client to radius client thing that people want rest for, it's the radius client to radius server part, and i bet you could design it so your authentication/accounting services would work in an edge cloud or something because *faaaart*
|
# ? Dec 30, 2019 00:44 |