|
Makin this thread for those of us who are only programmer adjacent. talk about your artisanal k8s and docker setups, or whether "the cloud" is really going to "take off". also talk about putting packets in pipes and how you have to put in a major change request to scratch your own taint. catalyst for this thread was cisco announcing a technology that someone else has had out for multiple years as "the first ever"
|
# ? Dec 16, 2019 05:31 |
|
abigserve posted:also talk about putting packets in pipes and how you have to put in a major change request to scratch your own taint feelin' real called out right now. had to put in a change request the other day to get someone to pull an unplugged, powered off device from a rack, because it's in our HQ and we don't want to do anything that could possibly disturb the bean counters. our security architect caught wind of this, went into the closet, and just pulled the thing out at noon on a friday.

also the cloud is bad unless you're partnered-with-amazon level of pouring money into it, and I'm disappointed we're moving towards it as a secondary "datacenter" instead of spending the nine hundo a month for a quarter rack colo and just shoving a couple pizza boxes and storage arrays in there with a router for DMVPN
|
# ? Dec 16, 2019 05:48 |
|
computers seem bad op
|
# ? Dec 16, 2019 05:52 |
|
Kazinsal posted:feelin' real called out right now what services will this affect? the network yeah but what services
|
# ? Dec 16, 2019 05:57 |
|
theadder posted:computers seem bad op Try connecting them together, that's when it really turns to poo poo
|
# ? Dec 16, 2019 08:34 |
i had an ethernet cord stuck in my laptop that i needed pliers to get out today. i have no idea why it was so tight. that’s the extent of my networking issues lately, thanks for listening.
|
|
# ? Dec 16, 2019 21:39 |
|
big mtu boiz 4 lyfe
|
# ? Dec 16, 2019 21:45 |
|
Laslow posted:i had an ethernet cord stuck in my laptop that i needed pliers to get out today. i have no idea why it was so tight. it was hammered into your modem port op
|
# ? Dec 16, 2019 22:00 |
|
abigserve posted:Try connecting them together, that's when it really turns to poo poo id never connect one computer to another op
|
# ? Dec 16, 2019 22:55 |
|
theadder posted:id never connect one computer to another op the most important lesson from battlestar galactica
|
# ? Dec 16, 2019 23:02 |
|
today i terminated some machines
|
# ? Dec 17, 2019 03:27 |
|
i connected a switch with a mismatched spanning tree type to some distribution stuff during the day. things went poorly; tickets were opened, ports were shut down and described. then my senior laughed, said to delete the descriptions and never speak of it again. that's my story
|
# ? Dec 17, 2019 03:53 |
|
Spanning-tree is one of those things where it's technically very easy, but it relies on a bunch of diligence at virtually every layer of the IT department, so it's rarely correct.
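the diligence in question mostly amounts to checking that every switch in the layer 2 domain speaks the same flavor before anything gets uplinked. a minimal IOS-style sketch (the mode choice here is an assumption, not a recommendation from the posts above):

```
! on each switch in the domain, check the running mode first
show spanning-tree summary
! pvst / rapid-pvst / mst mismatches cause exactly the kind of incident
! described above; standardizing on one mode everywhere avoids the surprise
configure terminal
 spanning-tree mode rapid-pvst
```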
|
# ? Dec 17, 2019 05:31 |
|
don't computer just don't
|
# ? Dec 17, 2019 05:40 |
|
spent like 2 hours today figuring out that some k8s pod couldn't write to a file because the docker image had a standard entrypoint that did a chown followed by a suexec to another user. the chown hardcoded the default value of this location, which i had changed to make other poo poo work. only the script did this, so all the initcontainers we ran with poo poo other than that script wrote the same files as the default user, i.e. root. turns out emptydir mounts persist after initcontainers exit or something. this poo poo isn't half as annoying as helm rendering a template, failing to convert it to json, and then reporting an error on a line in the rendered yaml, which it doesn't show you.
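a minimal sketch of the emptyDir behavior described above (pod/container names and images are made up): the emptyDir volume outlives the initContainer, and the initContainer runs as root by default while the app container does not, so files seeded during init arrive owned by uid 0.

```shell
# Write a hypothetical manifest demonstrating the ownership mismatch.
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  initContainers:
    - name: seed                  # root by default; writes root-owned files
      image: busybox
      command: ["sh", "-c", "echo seeded > /work/state"]
      volumeMounts:
        - { name: work, mountPath: /work }
  containers:
    - name: app                   # non-root; sees the same emptyDir contents
      image: busybox
      command: ["sh", "-c", "ls -ln /work && sleep 3600"]
      securityContext: { runAsUser: 1000 }
      volumeMounts:
        - { name: work, mountPath: /work }
  volumes:
    - name: work
      emptyDir: {}                # persists for the pod's lifetime, not the
EOF                               # initContainer's
# kubectl apply -f pod.yaml && kubectl logs emptydir-demo -c app
# would list /work/state owned by uid 0, not 1000
```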
|
# ? Dec 17, 2019 05:40 |
|
CMYK BLYAT! posted:
powerful curse
|
# ? Dec 17, 2019 05:42 |
|
theadder posted:computers seem bad op computers are fine, the mistake was letting them talk to each other.
|
# ? Dec 17, 2019 19:16 |
|
openshift more like openSHIT. nah actually it seems like it might be kinda nice
|
# ? Dec 17, 2019 19:49 |
|
rotor posted:computers are fine, the mistake was letting them talk to each other. unsure op some problems with even a solitary computer
|
# ? Dec 17, 2019 23:13 |
|
theadder posted:unsure op some problems with even a solitary computer yes but in the single computer case the benefits outweigh the drawbacks
|
# ? Dec 17, 2019 23:51 |
|
420Gbps or gtfo
|
# ? Dec 18, 2019 00:02 |
|
in a 69-port link aggregate
|
# ? Dec 18, 2019 00:02 |
|
my managers all want us to use kubernetes but our software is 100% not designed for it lol
|
# ? Dec 18, 2019 03:08 |
|
akadajet posted:my managers all want us to use kubernetes but our software is 100% not designed for it lol Can you expand on why it isn't? All I ever hear is "valid" k8s use cases but even under the smallest scrutiny it sounds like a lot of care and feeding for it to work. Be good to hear a case where it's the other side of the coin.
|
# ? Dec 18, 2019 03:24 |
|
abigserve posted:Can you expand on why it isn't? All I ever hear is "valid" k8s use cases but even under the smallest scrutiny it sounds like a lot of care and feeding for it to work. Be good to hear a case where it's the other side of the coin. It's a bunch of windows services and asp.net sites tied to a big ol' self-hosted sql server instance. Maybe you could host it in Windows containers, but I was under the impression that k8s is really a linux game.
|
# ? Dec 18, 2019 03:34 |
|
We're on Azure so we'd likely want to use AKS for hosting. Even for Microsoft, Windows Servers support is considered a "preview" technology.
|
# ? Dec 18, 2019 03:39 |
|
akadajet posted:Windows Servers haha. lol
|
# ? Dec 18, 2019 04:43 |
|
websphere in kubernetes is a real thing that some people actually want to do
|
# ? Dec 18, 2019 04:48 |
|
BangersInMyKnickers posted:big mtu boiz 4 lyfe
|
# ? Dec 18, 2019 04:58 |
|
carry on then posted:websphere in kubernetes is a real thing that some people actually want to do giant thonking
|
# ? Dec 18, 2019 05:30 |
|
abigserve posted:Can you expand on why it isn't? All I ever hear is "valid" k8s use cases but even under the smallest scrutiny it sounds like a lot of care and feeding for it to work. Be good to hear a case where it's the other side of the coin. windows software is perhaps a bit more of a special case, but for us, kubernetes manages to surface a lot of unfortunate shortcuts that didn't cause issues on dedicated VMs. these are probably all more just generic issues with adapting to containerized deployments, but k8s has made them more visible:

* worker process count is determined from core count by default. this doesn't work very well if you run on a beefy kubelet with many cores but only allocate 2-4 CPU to the pod, since the "how many cores?" the program sees is the underlying host's core count. doubly so since these workers all allocate a baseline amount of RAM
* things that assume static IPs are poo poo in general in modern infrastructure, and kubernetes' pod lifecycle model demonstrates this quite well
* we have some temporary directories that default to a directory that also holds some static files. kubernetes makes it easy to run a read-only root FS for security purposes, and while we have a setting to move the temporary files elsewhere, it turns out we hardcoded the default location loving everywhere

the largest issue, honestly, is that kubernetes operational experience is in fairly short supply, and there are a lot of people being dragged kicking and screaming into working with it because their higher-ups wanted to implement it (not without good reason, mind you, but in typical modern american corporate fashion, they want to do so without training anyone, under arbitrary, too-short timelines).

doing vendor support for poo poo that runs in and heavily integrates with kubernetes, more than half my time ends up being spent explaining poo poo that's covered in the kubernetes documentation and reminding people that "kubectl logs" and "kubectl describe" will explain the cause of most of their issues.
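the worker-count problem is easy to see from inside a pod. a small sketch, assuming a cgroup v2 node (the file path differs on cgroup v1 hosts, which use cpu.cfs_quota_us/cpu.cfs_period_us instead): the usual core-count probes report the host, while the cgroup files hold what the pod was actually allocated.

```shell
# The naive probe sees the kubelet host's cores, which is what mis-sizes
# worker pools inside a 2-4 CPU pod.
echo "host cores: $(nproc)"

# cgroup v2 publishes the pod's real allocation as "quota period" in cpu.max
if [ -r /sys/fs/cgroup/cpu.max ]; then
  read -r quota period < /sys/fs/cgroup/cpu.max
  if [ "$quota" != "max" ]; then
    # e.g. quota=200000 period=100000 -> a 2-CPU limit
    limit=$((quota / period))
    if [ "$limit" -lt 1 ]; then limit=1; fi
    echo "pod cpu limit: $limit"
  fi
fi
```

sizing workers from the cgroup value instead of the raw core count avoids both the oversubscription and the per-worker baseline RAM blowup mentioned above.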
|
# ? Dec 18, 2019 05:41 |
|
Experience is important, and so is having people motivated to learn about k8s. In Windows shops you'll run into a lack of both.
|
# ? Dec 18, 2019 05:51 |
|
has anyone moved from a network where your core speaks bgp to mpls where only your edge needs to? any pitfalls? I am tempted as the QFX range seems to be a bargain but won't take a full table
|
# ? Dec 18, 2019 05:58 |
|
my stepdads beer posted:has anyone moved from a network where your core speaks bgp to mpls where only your edge needs to? any pitfalls? I am tempted as the QFX range seems to be a bargain but won't take a full table i mean kinda? fastly doesn't use routers. all of our switches get peering/transit jammed into them and we run bird on all the cache nodes to wrangle bgp bc commodity cpu is cheap. it's saved us millions in worthless cisco expenses
|
# ? Dec 18, 2019 06:05 |
|
CMYK BLYAT! posted:windows software is perhaps a bit more of a special case, but for us, kubernetes manages to surface a lot of unfortunate shortcuts that didn't cause issues on dedicated VMs. these are probably all more just generic issues with adopting to containerized deployments, but k8s has made those more accessible: Ta, this makes sense. The static IP thing is definitely something I've seen in prod, so this confirms that, but the worker process count is new to me and I'd have to do some research on it.

my stepdads beer posted:has anyone moved from a network where your core speaks bgp to mpls where only your edge needs to? any pitfalls? I am tempted as the QFX range seems to be a bargain but won't take a full table you still need iBGP or whatever other protocol to advertise the routes throughout the core to populate the LIB. MPLS doesn't replace a routing protocol; it just changes the way the lookup is performed for a packet as it transits the network. That doesn't mean you have to publish the full table into your core though, so I'd be considering why that was ever a requirement.
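for the curious, a hedged Junos-flavored sketch of what a BGP-free core looks like on the PE side (interface names and addresses are invented): the core-facing interfaces run the IGP plus LDP for label distribution, and iBGP sessions exist only between PE loopbacks, so the P boxes in between never need the table.

```
set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols mpls interface ge-0/0/0.0
set protocols ldp interface ge-0/0/0.0
set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 10.0.0.1
set protocols bgp group ibgp neighbor 10.0.0.2
```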
|
# ? Dec 18, 2019 06:09 |
|
rotor posted:yes but in the single computer case the benefits outweigh the drawbacks further study is needed op
|
# ? Dec 18, 2019 06:25 |
|
I have a unifi dream machine in my house. it's cool and has lots of options I don't understand. also it has an app
|
# ? Dec 18, 2019 06:52 |
|
Jonny 290 posted:i mean kinda? fastly doesn't use routers. all of our switches get peering/transit jammed into them and we run bird on all the cache nodes to wrangle bgp bc commodity cpu is cheap. it's saved us millions in worthless cisco expenses hmm I am having trouble understanding how this works, so your l3 switches get a summary table from bird?

abigserve posted:you still need iBGP or whatever other protocol to advertise the routes throughout the core to populate the LIB. MPLS doesn't replace a routing protocol it just changes the way the lookup is performed for a packet as it transits the network. yeah I've labbed it in VMs and have iBGP between my PE routers and OSPF with MPLS hanging off that. our current 'core' is .. please don't make me talk about our current core. it was before my time.
|
# ? Dec 18, 2019 09:07 |
|
my stepdads beer posted:hmm I am having trouble understand how this works, so your l3 switches get a summary table from bird? they're not even layer 3. so we have X switches which are per-transit, and then each machine gets 1 10g pipe to each of X switches. bgp is on the node so it knows which interface is the preferred path for Y IP block. this is super fun and powerful, esp if you can visualize bgp updates being pushed globally in a second or two. given that we have about 12% of web traffic behind us, it's a bit of a firehose. we never use it maliciously of course but we can route around outages really hard
|
# ? Dec 18, 2019 09:12 |
|
ohhh yeah that's a great idea. gives me something to think about. I'm not keen on MPLS as i don't want any of the advanced features at this stage. thanks
|
# ? Dec 18, 2019 09:21 |