Progressive JPEG
Feb 19, 2003

some days we're the hosters, other days we're the hostees

Progressive JPEG
Feb 19, 2003

abigserve posted:

Elaborate?

you generally don’t want background processes within a container because if a background process dies without taking down the main process, you effectively end up with a zombie container that hides the fact that it’s unhealthy from the management layer

ideally any fatal faults in a container result in the entire container exiting. then it will be relaunched automatically, and a restart counter will be incremented somewhere, making the flake visible in your monitoring

zombie containers can be ameliorated by having a health check that polls the container for liveness, but that’s really just a workaround for when you’ve got something that you can’t get to exit when it dies, for whatever reason
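
for reference a liveness check is only a few lines in the pod spec. rough sketch, every name/port/path here is made up:

code:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.internal/myapp:1.0   # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz     # assumes the app exposes a health endpoint
        port: 8080
      periodSeconds: 10
      failureThreshold: 3  # kubelet kills and restarts the container after 3 straight failures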

Progressive JPEG
Feb 19, 2003

ive seen people try to DIY their own init system within containers, with like a bunch of processes under a parent process that tries to keep them running

of course they do it wrong so the parent process ends up silently missing failures and the container turns into a zombie where half of it isnt actually running anymore

normally this is driven by them wanting to keep a bunch of processes colocated with each other. they should instead put the procs in separate containers in the same pod, with a shared emptyDir volume or similar for any stuff they actually need to share. this then allows the container management to manage the containers, and if any of the processes fails then the pod gets cleanly and reproducibly reset as the lord intended
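
sketch of what that layout looks like, with made-up names:

code:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
  - name: scratch
    emptyDir: {}          # shared scratch space, wiped when the pod is reset
  containers:
  - name: main
    image: registry.example.internal/main:1.0    # hypothetical images
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  - name: helper
    image: registry.example.internal/helper:1.0
    volumeMounts:
    - name: scratch
      mountPath: /scratch

if either proc dies the kubelet actually sees it and handles the restart, instead of some hand-rolled init script silently eating the failure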

Progressive JPEG
Feb 19, 2003

Jonny 290 posted:

a genuine question for all the k8s wranglers:

what was wrong with docker and fleetctl?

docker the image format is fine

dockerd the runtime implementation is loving garbage and there’s zero reason to use it anymore because there are roughly 500 compatible implementations that aren’t rubbish these days

off the top of my head dockerd has pulled poo poo like minor point releases with hideous breaking changes, long-running bugs where containers would just turn into zombies with no network for a few hours, etc.

these days the main thing that sucks with dockerd, other than how flaky it is at scales beyond a single workstation, is that for marketing reasons it hardcodes docker hub as its default registry, which is bullshit if you’re running an airgapped setup with your own separate registry. other runtimes like containerd let you simply configure the default registry to point to your own on-prem instance, but if you’re some poor soul still running dockerd then you’ve got to inject a registry prefix into all your image names everywhere or else dockerd tries to hit up docker hub and then throws up its hands when it’s unreachable
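
for the containerd case it's a couple lines of config. sketch from memory, assuming the older 1.x cri mirrors format (newer releases moved this out to hosts.toml files), and the registry url is made up:

code:

# /etc/containerd/config.toml: send unprefixed "docker.io" image refs to your own registry
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://registry.example.internal"]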

also docker the company was sold for scrap a few weeks ago so I fully expect monetization to be cranked to 11 shortly

never heard of fleetctl

Progressive JPEG
Feb 19, 2003

Ploft-shell crab posted:

any of y’all tenants trying to get you to run a god dang “service mesh”? idk what real problems these things are trying to solve, I think they’re just inventing stuff for themselves to do

it’s purely to increase latency and resource overhead and to put more money in bezos’ pocket

Progressive JPEG
Feb 19, 2003

istio specifically is google trying to regain control over (some aspect of) k8s regardless of whether the functionality even makes sense at that layer

also the istio code is a mass of spaghetti. for example all external-dns 32 bit builds panic after running for several minutes, because they import some istio client library that launches static background timer threads when the client module itself is imported (as opposed to when it’s actually inited/used), and those threads eventually crash due to unaligned atomics in some istio base library


Progressive JPEG
Feb 19, 2003

animist posted:

so i dont actually know what service meshes are so i googled it and... is it just people reimplementing packet switching in software over a bunch of VPNs?? whyy does anybody need this

if i were building a network of communicating computers i would simply delegate routing and traffic control to the network layer

the point of service meshes is to get you to pay more to the cloud provider of your choice by adding overhead to your resource utilization

curiously the people putting the most effort into service meshes happen to also be cloud providers

animist posted:

i have an idea. what if, instead of repeatedly reinventing the operating system layer of our software on top of previous operating system layers, we made one operating system layer, and then stopped.

like say ive got some tensorflow code. so instead of running the code in a tensorflow vm on top of a python vm on top of a service mesh on top of a docker container on top of a kubernetes pod on top of a kubelet on top of a linux kernel on top of a VMWare hypervisor on top of a linux kernel i could simply run the code directly ???

food for thought

- the docker container is effectively just the method of passing the filesystem image around as a series of tar files. btw no sane person still uses dockerd in the context of running a cluster, so use containerd or similar instead. dockerd is still acceptable for dev purposes on your local workstation but thats pretty much the only remaining use case for it, since in every other context it's too much of a flaky piece of poo poo and docker the company is dead
- the pod is a kernel cgroup which was created by the kubelet (or strictly speaking, by the container engine attached to the kubelet). your container is effectively still running as a normal process on the host, it's just in a kernel-managed resource sandbox that was created by the kubelet as part of starting the process

so in that setup, the order boils down to: python runtime -> process within cgroup -> kernel -> vmware hypervisor. so pretty much the same as a normal process, just with cgroup rules applied to the process. to illustrate this, if you ran 'ps aux' on the host, you'd see all the container processes in there too
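
e.g. on any node, with a made-up process name:

code:

# the "containerized" processes are right there in the host process table
ps aux | grep myapp
# and /proc shows the kubepods cgroup the kubelet parked them in
cat /proc/$(pgrep -f myapp | head -1)/cgroup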

Progressive JPEG
Feb 19, 2003

suffix posted:

you don't need to use a service mesh but it could make sense if you want to encrypt internal traffic, do full request tracing or have a whitelist of services that can talk?
or you could build that into each service, that also works

most of this can just be handled by the CNI provider, with better performance and without the insane resource overhead of hacks like adding sidecars to every pod or whatever istio is making GBS threads out these days

for example networkpolicies provide a generic and compatible-across-clusters path for declaring L3/L4 (host/port) rules for blocking/allowing connections between pods. if your CNI provider (calico or weave, maybe others idk) supports it then the rules are enforced, or if your CNI provider doesn't (e.g. flannel) then they're just ignored. the rules are normally implemented via iptables on the host so they're low-overhead to boot
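
a bare-bones networkpolicy looks like this (sketch, all labels/ports made up): only pods labeled app=frontend get to talk to the backend pods on 8080, everything else to those pods gets dropped:

code:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:           # the pods being protected
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:       # the pods allowed in
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080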

meanwhile if you want something like protocol-level rules (blocking/allowing specific HTTP paths for example) then you could use cilium for that, but i've only gone as far as using host/port networkpolicies with calico so idk how good that is

Progressive JPEG
Feb 19, 2003

could put the requests on a queue, and commit the queue as requests are successfully processed. but i think this would usually give "at least once" guarantees when it sounds like you want "exactly once"?

another option could be to just add a client retry for the occasional flake, and hopefully it'd get kicked to a different backend instance that isn't being shut down. this redirect might be configurable via the service object's session affinity
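
the affinity bit is just a field on the service. sketch with made-up names:

code:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP      # pin each client ip to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600        # how long the pin lasts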

but idk it sounds like you may have tried some of this already

Progressive JPEG
Feb 19, 2003

im not holding it wrong, YOUR holding it wrong!!!

Progressive JPEG
Feb 19, 2003

Gaukler posted:

but now the requirements have changed slightly

switching from a background processing model to a real time SLA is not a slight change in requirements, and it's a shame that their management apparently can't be convinced of that fact

instead they've apparently decided to have nomnom figure out how best to make a best-effort peg fit a guaranteed response hole

Progressive JPEG
Feb 19, 2003

we tried using the wrong tools for the job and we're all out of ideas!

Progressive JPEG
Feb 19, 2003

it's called redshift because you're supposed to run away from it as fast as possible

Progressive JPEG
Feb 19, 2003

y'all made the right choice using ER, i have a unifi usg-pro-4 and i can only assume it's short for Ultra lovely Garbage

picturing ubiquiti walking past all the reasonable fans at the fan store and paying extra for the loudest shittiest ones they could find, then going around the corner to the power supply store and deciding to just wedge an external power brick inside the switch chassis instead

not to mention unifi requires you to hand edit a json config blob if you want to have BGP

Progressive JPEG
Feb 19, 2003

my last free isp router maxed out at 50mbps or so

which is pretty bullshit given that it was for gigabit fiber service

Progressive JPEG
Feb 19, 2003

hey im routin over heah

Progressive JPEG
Feb 19, 2003


quote:

A fairer test on this point would have compared Rust on Compute@Edge with JavaScript on Cloudflare Workers, which are at more comparable stages of the product lifecycle.

err why not just do rust on both

i guess the reason as they point out is that cloudflare bans running benchmarks in their own tos lol

Progressive JPEG
Feb 19, 2003

for something that's both infra and networking:

as an istp i'm exporting my unifi configuration to terraform. mainly just trying to get a snapshot of current state so that i have an easy backup/recovery path if my controller self-destructs someday, but i also like the idea of managing future changes via tf edits. i don't change things that much anyway

but getting that initial snapshot has so far involved guessing what's already configured, writing piecemeal resource declarations for those things, and then running "terraform import unifi_type.name id" for each resource. ideally there'd be some kind of bulk import to, idk, just auto detect all of the current configuration/state and print out the equivalent resources for me to put into an initial tf file?
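
for anyone following along, the per-resource grind looks roughly like this. sketch, the resource type/fields are from memory of the unifi provider so the details may be off:

code:

# 1) hand-write a stub declaration for a thing that already exists on the controller
resource "unifi_network" "iot" {
  name    = "iot"
  purpose = "corporate"
}
# 2) then adopt the live object into the state file, with the id dug out of the controller:
#      terraform import unifi_network.iot <id>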

Progressive JPEG
Feb 19, 2003

if this is what you mean by state management, for the tfstate stuff im using git-crypt in the git repo holding the tf files. works fine when it's just me
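
the relevant .gitattributes is just this (sketch):

code:

# keep state files encrypted at rest in the repo
*.tfstate         filter=git-crypt diff=git-crypt
*.tfstate.backup  filter=git-crypt diff=git-crypt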

Progressive JPEG
Feb 19, 2003

the backup is just a big ol blob containing a raw copy of the mongodb content iirc, figured i could go with something a lil more manageable than that

also if i do want to e.g. add another vlan at some point it'd be way easier to just copy/edit that block in a text file, rather than trying to remember the correct disjoint sections of the UI i need to go through to set it up

Progressive JPEG
Feb 19, 2003

my homie dhall posted:

don’t think so, terraform needs a way to figure out existing state and it can’t figure it out without storing the previously applied state

the tfstate file is a state cache, terraform is missing a way to refresh/populate its cache. like "tf import --all" to fetch current content for each listed resource in the config

Progressive JPEG
Feb 19, 2003

just thought to look it up and it looks like this is what


does

Progressive JPEG
Feb 19, 2003

my homie dhall posted:

how is it a cache? if I declare a resource, apply, and then stop declaring the resource and apply again, tf needs to know the resource was created in the first place so it can remove it. probably there are a lot of resources in a lot of providers that could do something like scan for tags that were applied on creation or something, but this is not going to work for everything

i think it's fine for state recovery to be best-effort. if there's a moronic write-only service that wouldn't work for this, then it can continue not working

i think the real motivation is that hashicorp wants to sell their tfstate manager service thing

Progressive JPEG
Feb 19, 2003

my homie dhall posted:

it’s not about being write only, it’s about there being an entire universe of resources in a given provider and needing to know which ones are associated with which resource in this particular tf set.

do you have a scenario where the resource statement doesn't identify things uniquely already?

Progressive JPEG
Feb 19, 2003

like what is this resource that doesn't have a name or any other identifying label

Progressive JPEG
Feb 19, 2003

dads friend steve posted:

when you create an ec2 instance you do not know its id until AWS tells you in the response. so without some state somewhere I don’t see how you can go backwards and say “this instance exists because of this chunk of code/whatever was run sometime in the past”

I guess you could set up some sort of tagging scheme to attach whatever data to the instance itself, but that sounds like reinventing CFN / TF, poorly

i'm not saying "no tfstate ever", i'm saying "best-effort tfstate init"

Progressive JPEG
Feb 19, 2003

echinopsis posted:

the sparky was laying the power in the trench and I went out and asked if he’d remembered the network cable and he was like gently caress imma have to dig that poo poo up lol

wait were they putting networking in the same trench as the power?

hopefully different conduit at least

Progressive JPEG
Feb 19, 2003

dont you need a gui for any kind of windows stuff anyway

its called windows not walls

Progressive JPEG
Feb 19, 2003

what kind of fiber cable is correct for a short distance, say 20 meters or less? both in terms of the ends to use and the wavelength for the transceivers

i assume the cable style to use would be this, paired with some sfp transceivers:



wanting to connect a garage to a house in a few months, targeting 10gbit because why not, but fine with going overkill to avoid needing early replacement. i assume trying to shove a prebuilt SFP cable through conduit is Incorrect

Progressive JPEG
Feb 19, 2003

the labor for fishing a replacement line through conduit between two buildings is greater than the cost of just getting the good fiber+transceivers anyway

Progressive JPEG
Feb 19, 2003

tim, where is the 10gbit appletv

Progressive JPEG
Feb 19, 2003

post your 1.6tbit rigs

Progressive JPEG
Feb 19, 2003

Trimson Grondag 3 posted:

I did some certs on the Juniper website once when I was a sales engineer and they sent me a 10cm Perspex cube that said NETWORKING EXPERT on it. it was great for annoying the actual engineers by putting it on the table during meetings etc.

that’s my Juniper story.

pics

Progressive JPEG
Feb 19, 2003

the ospf wikipedia article is a real gem

Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol (IP) networks. It uses a link state routing (LSR) algorithm and falls into the group of interior gateway protocols (IGPs), operating within a single autonomous system (AS).

OSPF gathers link state information from available routers and constructs a topology map of the network. The topology is presented as a routing table to the internet layer for routing packets by their destination IP address.

...

OSPF is an interior gateway protocol (IGP) for routing Internet Protocol (IP) packets within a single routing domain, such as an autonomous system. It gathers link state information from available routers and constructs a topology map of the network. The topology is presented as a routing table to the internet layer which routes packets based solely on their destination IP address.

Progressive JPEG
Feb 19, 2003

Redistribution into an NSSA area creates a special type of LSA known as type 7, which can exist only in an NSSA area. An NSSA ASBR generates this LSA, and an NSSA ABR router translates it into a type 5 LSA, which gets propagated into the OSPF domain.

A newly acquired subsidiary is one example of where it might be suitable for an area to be simultaneously not-so-stubby and totally stubby if the practical place to put an ASBR is on the edge of a totally stubby area. In such a case, the ASBR does send externals into the totally stubby area, and they are available to OSPF speakers within that area. In Cisco's implementation,

Progressive JPEG
Feb 19, 2003

Nomnom Cookie posted:

kubernetes is real, real bad actually. it was designed on the assumption that you could use etcd to provide every kubelet and every kube-proxy and every controller in the cluster with a globally consistent view of cluster state. as anyone who has actually scaled a distributed system before would have guessed, this lasted for about five seconds after hitting a real use case and has only gotten worse since. a "properly" functioning production kube cluster is nothing more or less than an enormous pile of poo poo covered in monkeys, and all of the monkeys are constantly grabbing handfuls of poo poo to fling at each other and to different places on the pile. you see all these monkeys being extremely busy and get impressed by how much is going on, but in the end its still monkeys flinging poo poo and you hope occasionally a splat lands in the right spot to make something happen

i work on a pretty big kube deployment as a managed platform used by the rest of the company and it's ok

things are divided into groups of a few thousand nodes each and then have some basic federation on top of that

but we don't expose the kube apis to users, its effectively an implementation detail on our end

also buddy if you think kube is bad then take a look at mesos lol

Progressive JPEG
Feb 19, 2003

hey whats a reasonable way to handle two ISPs in a home situation

like if i had a WISP and a 4g modem that have similar speeds. thinking load could be distributed across both, rather than doing a priority failover setup

don't really know what i should be looking for here
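
the closest thing i've turned up so far is the linux multipath default route, which hashes flows across both uplinks. sketch, addresses and interface names made up, and it doesn't handle detecting a dead uplink by itself:

code:

ip route add default scope global \
  nexthop via 192.0.2.1    dev wisp0 weight 1 \
  nexthop via 198.51.100.1 dev lte0  weight 1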

Progressive JPEG
Feb 19, 2003

i just put all my internal dns entries on public dns since i don't care if the internet is able to resolve weatherstation.<domain> to 172.27.0.2

also means i can set up a real letsencrypt wildcard *.<domain> cert and have it work fine in all clients
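
the wildcard needs the dns-01 challenge, so something like this. sketch assuming certbot's cloudflare plugin, swap in whichever plugin matches your dns host:

code:

certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d 'example.com' -d '*.example.com'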

Progressive JPEG
Feb 19, 2003

i ended up diying a linux router (debian stable) on a protectli box. running systemd-networkd and firewalld, with all the configuration in a basic ansible config in a git repo making it easy to rebuild later or revert if i break something. wanted something that could serve as a wireguard gateway and also run arbitrary docker containers

got everything working in about a day, with the one annoyance currently being that the built-in dhcp server in systemd-networkd is technically functional but extremely barebones. like afaict theres no way to list the hostnames used by clients to figure out where things are. the best i can find is a list of ips/mac addresses in "networkctl status <iface>"
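
for reference the entire server config is basically this. sketch, interface and addresses made up:

code:

# /etc/systemd/network/20-lan.network
[Match]
Name=lan0

[Network]
Address=192.168.1.1/24
DHCPServer=yes

[DHCPServer]
PoolOffset=100      # hand out .100 onward
PoolSize=100
EmitDNS=yes
DNS=192.168.1.1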

meanwhile i've already got it running adguard home in a docker container for the dns server, and that has a dhcp server feature, so i might just try using that for the dhcp service too. however the file format for tracking static leases is undocumented which doesn't give me much confidence that it's much better

also firewalld is good but its docs are awful, and requiring "--" for command parameters in firewall-cmd is haram. thinking that ill want to switch to direct nftables someday but firewalld had everything i needed for now and was quick to get functioning despite the docs situation. for example i ended up creating my own zones from scratch because i didn't get the point of the built-in ones. i also don't know when you would want zone rules vs a separate policy, the latter seems to be newer and usually unnecessary
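
for posterity, rolling a zone from scratch is at least quick. sketch with made-up names:

code:

firewall-cmd --permanent --new-zone=lan
firewall-cmd --permanent --zone=lan --add-interface=lan0
firewall-cmd --permanent --zone=lan --add-service=dns
firewall-cmd --permanent --zone=lan --add-service=dhcp
firewall-cmd --reload    # make the permanent config live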

Progressive JPEG
Feb 19, 2003

i have a mikrotik dish for getting wifi to distant locations and i can now say that its config is more inscrutable than just doing it in linux from scratch

  • Reply