Qtotonibudinibudet
Nov 7, 2011
spent like 2 hours today figuring out why some k8s pod couldn't write to a file: the docker image had a standard entrypoint that did a chown followed by a suexec to another user. the chown hardcoded the default value of the data location, which i had changed to make other poo poo work, and only that entrypoint script did the chown, so any initcontainers we ran with commands other than that script wrote the same files as the default user, i.e. root. turns out emptydir mounts persist after initcontainers exit or something, so the main container got stuck with root-owned files it couldn't touch.
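
for posterity, the rough shape of the entrypoint in question, sketched in python (the real thing is a shell script; the path, user name, and final binary here are all made up):

code:
import os
import pwd

# the default data dir is hardcoded -- the setting i'd changed is ignored here
DEFAULT_DATA_DIR = "/var/lib/app"   # hypothetical path
RUNTIME_USER = "app"                # hypothetical user

uid = pwd.getpwnam(RUNTIME_USER).pw_uid
gid = pwd.getpwnam(RUNTIME_USER).pw_gid

# chown only the *default* location, recursively
for root, dirs, files in os.walk(DEFAULT_DATA_DIR):
    for name in dirs + files:
        os.chown(os.path.join(root, name), uid, gid)
os.chown(DEFAULT_DATA_DIR, uid, gid)

# then drop privileges and exec the real server (the suexec part). anything an
# initcontainer already wrote as root to the *relocated* dir on the shared
# emptydir is still root-owned at this point, hence the failed writes
os.setgid(gid)
os.setuid(uid)
os.execvp("app-server", ["app-server"])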

this poo poo isn't half as annoying as helm rendering a template, failing to convert it to json, and then reporting an error on a line in the rendered yaml, which it doesn't show you.


Qtotonibudinibudet
Nov 7, 2011

abigserve posted:

Can you expand on why it isn't? All I ever hear is "valid" k8s use cases but even under the smallest scrutiny it sounds like a lot of care and feeding for it to work. Be good to hear a case where it's the other side of the coin.

windows software is perhaps a bit more of a special case, but for us, kubernetes manages to surface a lot of unfortunate shortcuts that didn't cause issues on dedicated VMs. these are probably all just generic issues with adapting to containerized deployments, but k8s has made them more visible:

* worker process count is determined by core count by default. this doesn't work very well if you run on a beefy kubelet node with many cores but only allocate 2-4 CPUs to the pod, since the core count the program sees is the underlying host's, not the pod's limit. doubly so since each of those workers allocates a baseline amount of RAM (see the sketch after this list)
* things that assume static IPs are poo poo in general in modern infrastructure, and kubernetes' pod lifecycle model demonstrates this quite well
* we have some temporary directories that default to a directory that also holds some static files. kubernetes makes it easy to do read-only root FS for security purposes, and while we have a setting to move the temporary files elsewhere, it turns out we hardcoded the default location loving everywhere
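
re: the first bullet, what apps actually need to do is size their worker pool off the cgroup CPU quota instead of the host core count. a minimal sketch of that, assuming the usual cgroup v2/v1 paths are mounted where they normally are:

code:
import os

def pod_cpu_limit():
    """Best-effort CPU limit from the pod's cgroup, falling back to host cores."""
    try:  # cgroup v2: cpu.max is "max 100000" or e.g. "200000 100000"
        with open("/sys/fs/cgroup/cpu.max") as f:
            quota, period = f.read().split()
            if quota != "max":
                return max(1, int(int(quota) / int(period)))
    except OSError:
        pass
    try:  # cgroup v1: quota is -1 when unlimited
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
        if quota > 0:
            return max(1, quota // period)
    except OSError:
        pass
    return os.cpu_count() or 1  # host view, i.e. the thing that burns you

workers = pod_cpu_limit()  # 2-4 on a limited pod instead of the node's 64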

the largest issue, honestly, is that kubernetes operational experience is in fairly short supply, and a lot of people are being dragged kicking and screaming into working with it because their higher-ups wanted to implement it (not without good reason, mind you, but in typical modern american corporate fashion, they want to do so without training anyone, under arbitrary, too-short timelines). doing vendor support for poo poo that runs in and heavily integrates with kubernetes, more than half my time ends up being spent explaining poo poo that's covered in the kubernetes documentation and reminding people that "kubectl logs" and "kubectl describe" will explain the cause of most of their issues.

Qtotonibudinibudet
Nov 7, 2011
today in kubernetes bullshit: a pod getting repeatedly OOM killed, but with a cryptic "Pod sandbox changed, it will be killed and re-created" error that everything on the internet suggests is Docker dying, which obviously wasn't the case since everything else on the kubelet was fine.

kudos to engineering for not having any sort of continuous or regularly scheduled performance testing. someone removed an "arbitrary" limit that nothing had ever hit. turns out that, without this limit, the program instead allocates some poo poo based on a system-level setting (which should always be set well above the old limit), and configurations that used to run happily with 128MB of RAM now consume nearly 1GB while doing absolutely nothing. there is, of course, no way to set your own limit below the system-level one.

Qtotonibudinibudet
Nov 7, 2011
someday i will figure out why the gently caress my pfsense box sends SLAAC RAs with ABSURDLY short lifetimes, to the point that my laptop will routinely just lose ipv6 connectivity for a few seconds on the reg since it has to refresh every 60s.

long long ago i tried to look into this, couldn't find where they'd modified radvd, and gave up. i am p lazy wrt home networking.
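
if i ever stop being lazy, the check itself is cheap: listen for the RAs and print the lifetimes instead of guessing. rough sketch, needs root, field offsets straight out of RFC 4861, not battle-tested:

code:
import socket
import struct

ICMPV6_ROUTER_ADVERT = 134

sock = socket.socket(socket.AF_INET6, socket.SOCK_RAW, socket.IPPROTO_ICMPV6)
while True:
    data, addr = sock.recvfrom(2048)
    if not data or data[0] != ICMPV6_ROUTER_ADVERT:
        continue
    router_lifetime = struct.unpack_from("!H", data, 6)[0]
    print(f"RA from {addr[0]}: router lifetime {router_lifetime}s")
    offset = 16  # options start after the fixed RA header
    while offset + 8 <= len(data):
        opt_type, opt_len = data[offset], data[offset + 1]
        if opt_len == 0:
            break
        if opt_type == 3:  # prefix information option
            valid, preferred = struct.unpack_from("!II", data, offset + 4)
            print(f"  prefix: valid lifetime {valid}s, preferred {preferred}s")
        offset += opt_len * 8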

Qtotonibudinibudet
Nov 7, 2011
i am continuously astounded by how many people work in this field without the slightest understanding of how it works and why.

:haw: hey why are the paths in our URLs case-sensitive?
:eng101: well, most filesystems have case-sensitive paths, and URL paths often map to filesystem paths. it'd be pretty hard to support those if the URL path were case-insensitive. here's the relevant bit of RFC 3986 where it's codified.
:haw: yes, but why did WE choose to have case-sensitive URLs?
:eng101: we didn't. the IETF chose it for everyone.
:haw: that's a bad answer.

okay man, you do you.

it's like we still have quacks in the era of science-based medicine or astrologers after centuries of actual astrophysics.

not to say that we *don't*, but they're relegated to pushing supplements that don't do anything and writing advice columns, whereas ours are still charged with performing the work of someone qualified.
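
(for the record, the :eng101: answer in runnable form, using python's urllib since it's handy: the scheme and host get normalized, the path does not)

code:
from urllib.parse import urlsplit

a = urlsplit("HTTPS://Example.COM/Static/Logo.png")
b = urlsplit("https://example.com/static/logo.png")

print(a.scheme, a.hostname)               # "https example.com" -- normalized, per RFC 3986
print(a.path == b.path)                   # False: /Static/Logo.png is a different resource
print(a.path.lower() == b.path.lower())   # only equal if *you* decide case doesn't matter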

Qtotonibudinibudet
Nov 7, 2011

Malcolm XML posted:

fix your app server/java version/whatever to be container-aware and the core/memory poo poo will go away

have you ever tried to get nginx to accept a patch. it's not fun.

we know the cgroup-based worker count inference is out there, it's just :effort:, so it gets relegated to documentation in the hopes that someone reads it.

Qtotonibudinibudet
Nov 7, 2011
rework a bit of systemd so it will run within containers and manage their local processes.

problem solved :smuggo:

Qtotonibudinibudet
Nov 7, 2011
why do so many linux systems still default to absurdly low open file limits. who is running anything approximating a multiuser system in tyool 2020.

you would think overriding this would be a thing infra people bake into their images, but nope. not even if you put a nice "HEY THIS SETTING IS hosed" message into application startup on the assumption that they haven't.
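
the startup nag is all of five lines, for reference (the 65536 threshold is an arbitrary pick, substitute your own idea of sane):

code:
import resource
import sys

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < 65536:
    print(f"HEY THIS SETTING IS hosed: nofile soft limit is {soft} (hard {hard})",
          file=sys.stderr)
    # best effort: raise our own soft limit to the hard limit, no root required
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))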

Qtotonibudinibudet
Nov 7, 2011

Ploft-shell crab posted:

any of y’all tenants trying to get you to run a god dang “service mesh”? idk what real problems these things are trying to solve, I think they’re just inventing stuff for themselves to do

worse, i work for a company that PRODUCES a service mesh

Qtotonibudinibudet
Nov 7, 2011

the talent deficit posted:

service meshes are for when you've given up on your developers giving a gently caress about monitoring, reliability or observability

me when i hear customers ask "can't we just log the whole request body cause otherwise we won't be able to figure out what went wrong with our apps"

if the only way you can figure out what went wrong in upstream applications is logging the full request body to try and reconstruct the problem, you have bigger problems than this will solve

Qtotonibudinibudet
Nov 7, 2011

Nomnom Cookie posted:

lol

what are you doing that this matters. $75/cluster/mo is basically nothing in any sane k8s deployment scenario

we use it primarily to test and develop k8s tooling in a realistic environment, so it's just 3 workers and the extra cost is significant for that

Qtotonibudinibudet
Nov 7, 2011
my rear end

Qtotonibudinibudet
Nov 7, 2011

animist posted:

like say ive got some tensorflow code. so instead of running the code in a tensorflow vm on top of a python vm on top of a service mesh on top of a docker container on top of a kubernetes pod on top of a kubelet on top of a linux kernel on top of a VMWare hypervisor on top of a linux kernel i could simply run the code directly ???

yes but you carry the pager

Qtotonibudinibudet
Nov 7, 2011

Progressive JPEG posted:

the point of service meshes is to get you to pay more to the cloud provider of your choice by adding overhead to your resource utilization

curiously the people putting the most effort into service meshes happen to also be cloud providers

the service mesh pattern is fine insofar as it's not unreasonable to offload some things (TLS client auth, basic telemetry spans, whatever) to a generic HTTPS app-level layer. half this poo poo everyone was already doing in some form via reverse proxies and now they're just doing it for forward proxies too: it made sense to shove inbound requests through a common HTTPS layer when that became easily doable, and it makes sense to do it for outbound requests too now that there are tools to enforce it.

proxies always add compute and latency overhead. the argument is that the technical overhead is usually much cheaper than the human overhead of making sure all your Java apps and all your Python apps and all your ancient legacy apps do all that poo poo natively, because nobody wants to deal with 5 different languages' ecosystems for adding it, if they even can (the legacy service that only understands HTTP basic auth, because that's what the now-defunct contractor used in ObscureLang back in the day, cannot, and nobody wants to retrofit it now). you can't quantify the human cost as easily as the (clearly higher than before) technical cost, but that doesn't mean the human cost is therefore $0--it's still there, and it will often outweigh the technical cost, because human costs are both inherently expensive and more difficult to pare down

marketing departments are gonna push it with glossy nonsense because everyone wants a piece of that new market pie, but fundamentally the concept is a sane way to shift human cost into technical cost. lots of management persons are going to sit in a conference talk audience, hear the marketing fluff, and take it at face value that they can just cargo cult install some service mesh solution for instant massive gains without understanding the why or how, but that's lovely leadership in general. people that expect turnkey solutions to their exact problems make everything poo poo because they lack understanding of what they're trying to implement, but that's true no matter what you're doing. people will mcmansion their architectural ineptitude in any paradigm, and nobody will ever provide a technical solution for inept leadership

the current offerings aren't great yet because all implementations are new, but the concepts are sound. there isn't much in the way of guard rails and rough edges abound, but there are capable people working on smoothing them and trying to make them easier to use because there's a lot of money in that.

doomisland posted:

k8s is a google troll imo

k8s isn't a troll: google wants to provide some sort of lingua franca for managing computing resources in modern environments, based on their practical experience running one. everyone else has done so on their particular cloud compute platform in myriad ways, and there are legion sysadmins saying "by god we can continue to use provider-native tools to do the same poo poo", and they're not wrong, but they're not providing a lingua franca, they're providing an AWS or GCP or Azure or Tencent or what have you way of doing things set up to their own preferences. they may well be talented and capable of managing that system, but if you go that route, the onus is on you to provide and maintain the poo poo that makes that infrastructure work effectively. google has a specific market interest in k8s because they want to shear off as much AWS-specific poo poo as possible to make it easier to migrate off the market leader, but that doesn't mean they've created something that's fundamentally wrong

i am a vendor and i do not want to deal with whatever bespoke system your ops people came up with, i want to say "this is how you deploy our app in a cluster based on common standards" same as we do elsewhere. if i ask for a port, i get a port, and i give zero fucks as to exactly how that port is exposed on the internet. it might be an AWS NLB or Google's NLB equivalent, but gently caress it, it's an addressable network port. a Service of type LoadBalancer effectively expresses as much. sure, there's plenty of unknown space filled with crazy provider-specific spaghetti, but that's part of the process of figuring out how to do it well. that k8s concept will probably endure, and you can probably fix your bad implementation of the k8s concept as easily as or more easily than you can fix whatever bespoke solution your current senior devops engineer set up before they retired and were replaced with incompetent bodyshop mooks
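
e.g. with the official python client, the entire "give me a port" conversation is about ten lines (names and ports here are placeholders):

code:
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="app"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",  # "give me an addressable port"
        selector={"app": "app"},
        ports=[client.V1ServicePort(port=443, target_port=8443)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
# whether that lands on an AWS NLB, a GCP forwarding rule, or MetalLB in a
# homelab is the provider's problem, not the manifest's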

there's gonna be bullshit and confusion for a long time. i am not at all happy that AWS have decided to repurpose Ingress path rules as a means to add their particular HTTP to HTTPS redirect implementation, but they did so in a vacuum of official guidance and i can only fault them so much--they chose a terrible implementation that doesn't work elsewhere and is stupid, but so flows the marketplace of ideas--sometimes you get proposals that suck, but such is the way you determine what the new standard needs to do in a less stupid way going forward

tl;dr the "there must be a simpler way" thought isn't wrong, but that simpler way exists only in your head or your team's tribal memory. someone else will have to deal with your simpler way going forward, and you better hope they can work with your simpler way indefinitely or can transition off it easily if need be. you'll probably want to have your voice heard during development of the more common, more complex way regardless, because recruiting people for your bespoke ivory tower stack is gonna be hard

Qtotonibudinibudet
Nov 7, 2011

animist posted:

i contemplate your ops wisdom and am enlightened.

question: if the problem is that ObscureLang doesn't support authentication, tracing, etc, how does a service mesh help? it seems to me that those things interact with actual functionality in complex ways. so you either need to wire the service mesh in at the ObscureLang source code layer, or you'd need some hella complicated request inspection code at the service mesh layer.

like, how do you trace a request through a language that doesn't support tracing? do you just correlate incoming and outgoing requests by time received or something?

a distributed trace having a basic "entered this service, exited this service" span in the chain of services doing something with a request is still more than nothing. ideally your application does tracing and adds its own "also i made a db query in 0.03s" span, but you take what you can get

Qtotonibudinibudet
Nov 7, 2011
ah, yeah, good point. copy-pasting an existing trace-id header is easier than properly instrumenting, but still :effort:
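
the copy-paste version, roughly -- flask and requests here just as an example, and the header list and upstream URL are whatever your mesh and environment actually use:

code:
import requests
from flask import Flask, request

app = Flask(__name__)

# trace headers the mesh/edge proxy may have injected (W3C and B3 flavors)
TRACE_HEADERS = ("traceparent", "x-request-id", "x-b3-traceid",
                 "x-b3-spanid", "x-b3-parentspanid", "x-b3-sampled")

@app.route("/thing")
def thing():
    # no real instrumentation, just don't break the trace on the way through
    fwd = {h: request.headers[h] for h in TRACE_HEADERS if h in request.headers}
    upstream = requests.get("http://backend.internal/api", headers=fwd, timeout=5)
    return upstream.content, upstream.status_code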

Qtotonibudinibudet
Nov 7, 2011

uncurable mlady posted:

auto-instrumentation is the path of sadness

auto-anything is generally the path of sadness.

however, we live in a sad reality where some poor, pollyanna ops team wants to dance around an ossified bigcorp app team that ain't gonna do poo poo.

there's a lot of things you probably shouldn't do in the middleware that get done in the middleware because some tinpot fiefdom doesn't want to do it properly

Qtotonibudinibudet
Nov 7, 2011

Jimmy Carter posted:

OkCupid ran their entire site on 5 servers in 2012 how did we stray so far from the god's light

and in C to boot

Qtotonibudinibudet
Nov 7, 2011
i lol at the recent spate of "we didn't choose kubernetes" posts on hn from companies that are mature enough to employ a team of infra people qualified to make and implement that decision

realistically 80% of the audience is immature companies that have maybe one person working infra full time, who will fail to implement kubernetes or the non-kubernetes alternative effectively regardless, because lol at having one person responsible for that

Qtotonibudinibudet
Nov 7, 2011

GenJoe posted:

wait what

yep.

Qtotonibudinibudet
Nov 7, 2011

Nomnom Cookie posted:

it’s not actually possible to guarantee that a pod stops receiving requests before it stops

"possible" is a strong word. there are some things that are indeed not possible unless you throw a whole lot of things out the window--if you want to exceed c and violate causality, for instance, some very fundamental things need to change, so that's de facto not possible. computers are usually a bit more flexible.

there's a lot going into why your pod is still receiving requests, and depending on exactly which in-flight requests are the problem and why they're still being forwarded to a dying pod, there's probably some way to reduce the traffic going to it or stop it entirely. sure, that's complex, but welcome to kubernetes, lots of things are complex and have defaults that you may not want
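
the usual app-side half of that dance, sketched out (the numbers are guesses, and the whole thing has to fit inside terminationGracePeriodSeconds):

code:
import signal
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

draining = threading.Event()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/readyz":
            # fail readiness once we've been told to stop
            self.send_response(503 if draining.is_set() else 200)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

def handle_sigterm(signum, frame):
    draining.set()  # readiness fails -> endpoints drop the pod -> traffic tapers off
    def later():
        time.sleep(15)     # let the endpoint removal propagate, finish in-flight work
        server.shutdown()
    threading.Thread(target=later, daemon=True).start()

server = ThreadingHTTPServer(("", 8080), Handler)
signal.signal(signal.SIGTERM, handle_sigterm)
server.serve_forever()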

https://www.youtube.com/watch?v=0o5C12kzEDI&t=1m10s is a good watch

Qtotonibudinibudet
Nov 7, 2011
clearly the solution is a service mesh

checkm8 mfers

Qtotonibudinibudet
Nov 7, 2011

Captain Foo posted:

just yeet your packets to nullroute, who cares

the edge network guy from mickens' monitorama preso is the hero we need

Qtotonibudinibudet
Nov 7, 2011

12 rats tied together posted:

i wont even pretend for a little bit that k8s isn't at least half marketing the google brand to get people to work there. SRE as a role sucks rear end in general, the best way to get people to stick with it at your multinational megacorp would be to convince them that they are special in some way

are there experienced SREs in the world that are not disillusioned and resigned to the touch computer forever for capitalism life

Qtotonibudinibudet
Nov 7, 2011

abigserve posted:

Any load balancing solution requires the person operating it to have a beyond-cursory understanding of the apps they are load balancing and that's an unreasonable request for a network team that may have to look after several thousand virtual servers so you get a lot of "tcp port alive" health checks and poo poo like that

"requires" here is perhaps more "should have, if you want to balance it well".

you can totally set up a load balancer with little understanding of the network protocols involved or how to troubleshoot them, people do it every day

the results may be less than ideal, but hey, welcome to infrastructure

Qtotonibudinibudet
Nov 7, 2011

kitten emergency posted:

k8s is pretty decent if you design your app to run on it.

and so, everyone proceeded to take their existing applications, which were not designed with kubernetes in mind, tack on strange hacks and middleware, and make them run in kubernetes despite the applications' many protestations

Qtotonibudinibudet
Nov 7, 2011

minato posted:

The whole "Hybrid Cloud" concept is a bit of a hype train, but it will be interesting if it becomes a reality. In the same way that most people don't care much about the brand of HW they run their Linux on, k8s makes it so you don't need to care what cloud you run on.

But cloud providers really don't want this; they'd prefer you lock into their cloud. So to differentiate themselves, they'll have to provide better/cheaper service, reliability, unique features, etc. It seems good for the consumer.

k8s poo poo is like 60% dealing with the weird idiosyncrasies of the different providers and places where the k8s spec doesn't fill in all the details

it does a good job of providing a standard language for managing and interacting with container fleets, but the stock infrastructure glue is only the bare minimum

Qtotonibudinibudet
Nov 7, 2011
fun sunday afternoon bullshit: attempting to figure out why the gently caress some site almost always errors out in firefox

apparently firefox's QUIC validation will panic and give up if you include a host header in the server response

naturally any error you can find for this is obtuse as hell, and you just get a bunch of generic protocol error/closing stream messages if you look in the firefox about:networking log or at decrypted wireshark QUIC dissector output (which at this point can't even show you the contents of the HTTP stream inside the QUIC payload)

tooling for debugging protocol errors and implementations for new stuff is reliably dogshit :|
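
the dumbest possible check, over h1/h2 via requests since the h3 tooling is the problem (URL is a placeholder): is the origin sticking a host header into its responses at all?

code:
import requests

resp = requests.get("https://example.org/", timeout=10)
if "host" in resp.headers:
    print("response carries a host header:", resp.headers["host"])
else:
    print("no host header in the response, look elsewhere")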

while the tools are all crap atm though, you can just tweet people working on QUIC stacks and they'll be like "oh yeah, that's a thing" so who needs computers to actually tell you why they're broken

https://twitter.com/SimmerVigor/status/1409265636262518784

Qtotonibudinibudet fucked around with this message at 00:37 on Jun 28, 2021

Qtotonibudinibudet
Nov 7, 2011
im guessing you can't just make the case to move to something new enough to support ebpf

in related news i finally got the kubernetes ebpf tool to stop being a butt and then promptly realized that my use case (adding some additional logging to some go code on the fly) is kinda hamstrung by no magic support for go structs like delve has

Qtotonibudinibudet
Nov 7, 2011
i have a good ipsec joke

it's just ipsec. all of it.

Qtotonibudinibudet
Nov 7, 2011
a startup without proper process controls, what a surprise

Qtotonibudinibudet
Nov 7, 2011

well-read undead posted:

trying to imagine the tiny dev shop with no secret storage or ci/cd tooling but a robust terraform management layer

and failing

even for medium shops, it's way less work to let them self-service buy a subscription to a hosted service and start using it immediately than it is to navigate a quarter-long sales cycle with attendant staff, followed by a post-sales slog to handhold the often-incompetent customer staff through installing and maintaining the thing before they can start using it

plus, with open core software, better lock-in. if they're on your SaaS there's less risk they'll just switch back to the free version once they're comfortable running it themselves

Qtotonibudinibudet
Nov 7, 2011

Progressive JPEG posted:

the tfstate file is a state cache, terraform is missing a way to refresh/populate its cache. like "tf import --all" to fetch current content for each listed resource in the config

you would think this would be more or less possible given a target state in most cases

provider APIs aren't _so_ bad that they can't tell you whether such and such instance exists with X properties, right?

realistically there will be plenty of "oops, this attribute is a (load-bearing) artifact of a particular sequence of steps, and you can't reliably expect or inspect it unless you know that sequence was run" minefields, but you could at least try
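
the lookup half of a hypothetical "tf import --all" already exists in every provider SDK, e.g. boto3 (the tag value is made up):

code:
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[{"Name": "tag:Name", "Values": ["web-1"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["InstanceType"], inst["State"]["Name"])

# the missing piece is terraform writing answers like that into state for every
# address in the config, instead of one `terraform import <addr> <id>` at a time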

Qtotonibudinibudet
Nov 7, 2011
im trying to figure out ipv6 poo poo on my home router and am loving how uninformative all the UX is. i configure pfsense LAN to track the WAN interface that is definitely getting an address, can see a router advertisement coming through and... have absolutely no feedback as to why the router is not picking up the advertised prefix and assigning addresses from it

go figure, i actually finally need it sorta, to help diagnose behavior for ipv6-only customers from a home lab

this is annoyingly complicated by my router inexplicably losing the ability to negotiate the media type correctly after boot. the first negotiation properly selects gigabit and gets addresses for both families; any attempt to release and renew afterwards makes the interface flap constantly and somehow only get one address family at a time

Qtotonibudinibudet
Nov 7, 2011

nudgenudgetilt posted:

which provider are you on?

sometimes you need to use various stupid tricks to get ipv6 to anything but the router -- sometimes 6rd, sometimes dhcp-pd

webpass, so not a whole lot of info out there, especially for pfsense

Qtotonibudinibudet
Nov 7, 2011

12 rats tied together posted:

ospf is the normal routing protocol. ibgp and is-is are the kubernetes cringe of the networking world

kubernetes is good tho

at least until all the vendors in the "cloud native" space get ahold of it and try to make it "easier" for people that use it but refuse to learn any of the configuration

don't want to understand what a Deployment is? don't worry, we've got a lovely abstraction layer over top that somehow ends up being more complex and won't let you fix poo poo when our hardcoded automation makes bad decisions

Qtotonibudinibudet
Nov 7, 2011

Nomnom Cookie posted:

kubernetes is real, real bad actually. it was designed on the assumption that you could use etcd to provide every kubelet and every kube-proxy and every controller in the cluster with a globally consistent view of cluster state

ahem, it's not globally consistent, it's eventually globally consistent

everything works fine so long as nothing ever changes in your cluster and, when something does change, so long as those changes don't result in poo poo fighting to account for a state that only showed up because something else was mid-way through accounting for an earlier change, chasing an equilibrium that never actually arrives

the primitives generally make sense though! too bad most everyone lacks the OS theory and distributed systems background to use them well
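
(the core primitive, for anyone who hasn't internalized it, is just a level-triggered reconcile loop: observe, compare against desired, nudge, re-observe. a toy version:)

code:
import time

desired_replicas = 3
cluster = []  # stand-in for state you can only observe, possibly stale

def observe():
    return list(cluster)  # in real life: a list/watch against the apiserver

def reconcile():
    observed = observe()
    if len(observed) < desired_replicas:
        cluster.append(f"pod-{time.monotonic_ns()}")  # create one, re-observe later
    elif len(observed) > desired_replicas:
        cluster.remove(observed[-1])                  # delete one, re-observe later
    # equal -> do nothing; that's the equilibrium everything converges toward

while True:
    reconcile()
    time.sleep(1)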

Qtotonibudinibudet
Nov 7, 2011
go rogue and run your home network off djbdns


Qtotonibudinibudet
Nov 7, 2011

shackleford posted:

yeah if we're talking about the mikrotik CLI it's a little bit quirky and bespoke but it probably compares favorably to the rat's nest of dozens of quirky and bespoke formats that is /etc/* on a linux box with equivalent services?

embrace BSD. consistent. thoughtfully designed. clean.

well, at least until you get into the PHP-script-generated config from the pfsense-specific metaconfig that comes with that
