|
I just finished validating an EVPN design for the Nexus 9k. It's a definite step forward in solving a lot of my l2 / l3 requirements but it's certainly complicated in the underlying protocol stack. There's so many protocols it has to rely on that I'm sure we'll find all the bugs at some point. I haven't ever really managed a nexus stack (always been engineering on the ASR-9000 router line) so I guess we'll see!
|
# ? Jun 17, 2020 21:20 |
|
|
# ? Jun 11, 2024 08:59 |
|
ping is objectively the best health check
|
# ? Jun 17, 2020 21:30 |
|
the talent deficit posted:
> ping is objectively the best health check

pong
|
# ? Jun 17, 2020 21:55 |
|
Ploft-shell crab posted:
> just because a host no longer has an iptables rule for a target doesn’t mean it’s not going to route traffic to that target though. existing conntrack flows will continue to use that route which means you’re not just waiting on iptables to update, but also all existing connections (which may or may not even be valid) to that pod to terminate before you can delete it. what happens if neither side closes the connection, should the pod just never delete?

sure, no L4 load balancer can help you much there. it doesn't know how to gracefully shut down a connection, but that's fine. kube gives you the tools out of the box to tell a process it's about to die, so it can drain itself and shut down gracefully assuming it will never be sent a new connection. what i want and don't have is a guarantee that a pod won't die until after every node has stopped sending it syn packets. you can't do a graceful shutdown without that, not safely. i'm willing to accept pods living longer than they need to or occasionally hanging in terminating with manual intervention required to fix, so yeah I think the process i outlined would work without adding too much write load on the apiserver.
|
# ? Jun 18, 2020 00:33 |
|
When doing peering with AWS they only support 4-byte ASNs up to 32767.65535. Why? Who knows, but it's a fuckup. If you are supporting 4-byte ASNs you should support all of the bytes, not cap them at some dumb signed 4-byte integer boundary like it's the 90's and we still need to differentiate between signed and unsigned numbers.
|
# ? Jun 24, 2020 16:46 |
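for reference, asdot notation is just high * 65536 + low, so the 32767.65535 cutoff lands exactly on the signed 32-bit boundary, which is easy to check (plain python, no AWS specifics assumed):

```python
# 4-byte ASNs written in "asdot" notation as high.low have the plain
# ("asplain") value high * 65536 + low. The cap mentioned above,
# 32767.65535, is exactly the largest signed 32-bit integer, which is
# what you'd expect if something in the stack stores the ASN as a
# signed int32 instead of an unsigned one.

def asdot_to_asplain(high, low):
    return high * 65536 + low

def asplain_to_asdot(asn):
    return divmod(asn, 65536)

print(asdot_to_asplain(32767, 65535))  # 2147483647 == 2**31 - 1
print(asdot_to_asplain(65535, 65535))  # 4294967295, the real 4-byte max
```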
|
abigserve posted:
> I can't think of a single bad thing to say about that platform.

Ah, allow me to bitch at length about the 6500/7600 BU split.
|
# ? Nov 10, 2020 23:09 |
|
tortilla_chip posted:
> Ah, allow me to bitch at length about the 6500/7600 BU split.

Uh hello? The 6500 is a SWITCH and the 7600 is a ROUTER, GOD. gb2ccna
|
# ? Nov 10, 2020 23:41 |
|
i hate terraform and azure networking is the devil. thanks for reading and god bless
|
# ? Nov 11, 2020 03:40 |
|
last week i was terraforming and considering today i was ansibling id rather go back
|
# ? Nov 11, 2020 04:54 |
|
Warbird posted:
> i hate terraform and azure networking is the devil. thanks for reading and god bless

i found out this week that you can't do ipv6 between azure vms and basically anything that isn't a load balancer

what's the loving deal with cloud providers half-assing their ipv6 implementations anyway
|
# ? Nov 11, 2020 08:51 |
|
Jeoh posted:
> i found out this week that you can't do ipv6 between azure vms and basically anything that isn't a load balancer

saves money. no point properly implementing something the overwhelming majority of people aren't going to fully utilize.

ate poo poo on live tv posted:
> Uh hello? The 6500 is a SWITCH and the 7600 is a ROUTER, GOD. gb2ccna

the 12000 was a "gigabit switch router"!

Kazinsal fucked around with this message at 09:22 on Nov 11, 2020 |
# ? Nov 11, 2020 09:15 |
|
Bored Online posted:
> last week i was terraforming and considering today i was ansibling id rather go back

The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
|
# ? Nov 11, 2020 17:45 |
|
Jeoh posted:
> i found out this week that you can't do ipv6 between azure vms and basically anything that isn't a load balancer

Some combination of:

* a 10.0.0.0/8 is a lot of addresses. how many million things do you have running?
* for migration purposes you'll probably have to dual stack for somewhere between awhile and forever, eliminating a lot of the benefits of deploying ipv6 internally
* as a consequence of the above, network regionalization/isolation can be a much more tractable solution to extending your private ipv4 footprint. In AWS for example you could have teams deploy into their own dedicated VPCs that expose applications via privatelink, which lets you hook a loadbalancer into another VPC

that being said, some reasons why ipv6 makes sense:

* your model requires or prefers a common globally routable private infrastructure, and you want to minimize or remove all roadblocks to making that architecture work for you
* cloud providers give you ipv6 allocations from publicly routable ip ranges rather than the allocated private range, eliminating the need for NATs ($$$) and instead leveraging just routing rules for ingress/egress control
* the expanded address range makes it easier to do smaller allocations of ranges to, say, VMs for containers without having untenable waste
* customer rate of change is increasing and ip reuse is becoming more and more of a problem. a lot of existing software assumes ip reuse doesn't occur within too tight a time window, and even with cold pooling in your friendly local VPC control plane, if you start pushing significant rates of change you may see traffic get sent to the wrong endpoints

my personal opinion is that we'll start to see more support for dualstack (iirc aws is the only one that supports it) across clouds, and that large customers will drive cloud providers to offer a single-stack ipv6 vpc option in the next few years. k8s single-stack ipv6 will probably be the bellwether here
|
# ? Nov 11, 2020 21:08 |
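to put rough numbers on the "smaller allocations without untenable waste" point above, a sketch using python's ipaddress module (both prefixes are made up for the example; real cloud allocations vary):

```python
# Carving per-VM container ranges out of one VPC allocation: with v4,
# handing each VM a /24 exhausts even a /16 quickly; with v6, a single
# /56 yields millions of per-VM chunks that are each still enormous.
import ipaddress

vpc_v4 = ipaddress.ip_network("10.0.0.0/16")          # 65536 addresses total
vpc_v6 = ipaddress.ip_network("2600:1f00:a000::/56")  # example v6 allocation

# a /24 of v4 per VM: the whole /16 is gone after 256 VMs
v4_per_vm = list(vpc_v4.subnets(new_prefix=24))
print(len(v4_per_vm))    # 256

# a /80 of v6 per VM (each still 2**48 addresses): 2**24 VM-sized
# chunks come out of one /56
print(2 ** (80 - 56))    # 16777216
```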
|
is it possible to tell cloud vms that want to be ipv4 that they are ipv4, and use v6 for your infra underneath? Or is that not a thing that is possible / a good idea
|
# ? Nov 12, 2020 04:25 |
|
animist posted:
> is it possible to tell cloud vms that want to be ipv4 that they are ipv4, and use v6 for your infra underneath? Or is that not a thing that is possible / a good idea

Absolutely. Everything in AWS is virtualized anyway (i.e. customers have no knowledge of what IP is actually carrying their traffic between VPCs/regions) and I'm pretty sure most of AWS internal infra is ipv6 already. Plus isn't that how docker works? All the apps think they are running on 192.168.0.1 or whatever.
|
# ? Nov 12, 2020 05:16 |
|
animist posted:
> is it possible to tell cloud vms that want to be ipv4 that they are ipv4, and use v6 for your infra underneath? Or is that not a thing that is possible / a good idea

so like jeoh mentioned most clouds don't offer anything in the way of ipv6 dualstack let alone single stack, but while you could almost certainly build a working ipv4 <-> ipv6 overlay in a few hours or days you have to ask yourself:

* what are you getting out of that abstraction? if you're just mapping your private ipv4 range onto ipv6, then you're still stuck with almost all of the limitations of ipv4.
* how much effort are you going to invest in ensuring that your abstraction remains correct over time? what's the blast radius if you gently caress up that abstraction?
* how much time, effort, and compute are you willing to invest in optimizing this thing when you want to run something that sends/receives a large volume of packets?
|
# ? Nov 12, 2020 05:17 |
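a toy version of the address mapping such an overlay would do: embed each v4 address into a dedicated v6 prefix, the way NAT64's 64:ff9b::/96 well-known prefix works. the ULA prefix here is arbitrary, and this is just the address math, none of the routing/encap work the post is warning about:

```python
# Embed a private ipv4 address in the low 32 bits of an ipv6 prefix,
# so the infra can route v6 while the VMs keep thinking in v4.
import ipaddress

OVERLAY_PREFIX = ipaddress.ip_network("fd00:4::/96")  # made-up ULA prefix

def v4_to_overlay_v6(v4_str):
    v4 = ipaddress.ip_address(v4_str)
    return ipaddress.ip_address(int(OVERLAY_PREFIX.network_address) + int(v4))

def overlay_v6_to_v4(v6_addr):
    return ipaddress.ip_address(int(v6_addr) - int(OVERLAY_PREFIX.network_address))

v6 = v4_to_overlay_v6("10.1.2.3")
print(v6)                    # fd00:4::a01:203
print(overlay_v6_to_v4(v6))  # 10.1.2.3
```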
|
ate poo poo on live tv posted:
> Absolutely. Everything in AWS is virtualized anyway (i.e. customers have no knowledge of what IP is actually carrying their traffic between VPCs/regions) and I'm pretty sure most of AWS internal infra is ipv6 already. Plus isn't that how docker works? All the apps think they are running on 192.168.0.1 or whatever.

while it's a little outdated and very high level, i always recommend eric brandwine's talk that goes over how vpc is implemented in aws (linked to when he goes into the example) https://www.youtube.com/watch?v=3qln2u1Vr2E&t=1055s

and if you're interested in some of the work that goes into stateful flow tracking for things like efs, nlb, and nat gateway, here's a similar talk from colm maccarthaigh https://www.youtube.com/watch?v=8gc2DgBqo9U&t=1490s
|
# ? Nov 12, 2020 05:40 |
|
ate poo poo on live tv posted:
> Absolutely. Everything in AWS is virtualized anyway (i.e. customers have no knowledge of what IP is actually carrying their traffic between VPCs/regions) and I'm pretty sure most of AWS internal infra is ipv6 already. Plus isn't that how docker works? All the apps think they are running on 192.168.0.1 or whatever.

I thought docker did serious nat fuckery instead of ipv6
|
# ? Nov 12, 2020 09:33 |
|
my stepdads beer posted:
> I thought docker did serious nat fuckery instead of ipv6

typically the host running the containers has a subnet like 192.168.0.1/24 that each container gets an IP out of. all of the containers have their own veth device in their net namespace with their IP on it and they're all connected to the same bridge device in the root network namespace (docker0) so all the containers on the same host see each other as directly connected. any traffic that needs to go elsewhere (ie the rest of your network) will get NAT'd with the host's IP
|
# ? Nov 13, 2020 01:10 |
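the forwarding decision described above can be modeled in a few lines (the host IP is made up; 172.17.0.0/16 is docker's usual default bridge subnet):

```python
# Toy model of the split the post describes: traffic between containers
# on the same docker0 bridge is delivered directly, anything leaving
# the host gets its source address NAT'd to the host's IP.
import ipaddress

BRIDGE_SUBNET = ipaddress.ip_network("172.17.0.0/16")  # docker's default
HOST_ADDR = "203.0.113.10"  # made-up host address

def egress(src, dst):
    """Return the (src, dst) pair as seen on the wire."""
    if ipaddress.ip_address(dst) in BRIDGE_SUBNET:
        return (src, dst)        # same bridge: directly connected
    return (HOST_ADDR, dst)      # leaving the host: source-NAT'd

print(egress("172.17.0.2", "172.17.0.3"))  # ('172.17.0.2', '172.17.0.3')
print(egress("172.17.0.2", "8.8.8.8"))     # ('203.0.113.10', '8.8.8.8')
```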
|
if you're doing kube or some other orchestration where your containers get real IPs (from the perspective of the host), then instead of all containers being directly connected they'll each get their own veth interface in the host root namespace which will make everything routed
|
# ? Nov 13, 2020 01:14 |
|
this is timely, i was literally just doing an interview and the interviewer was telling me how they are trying to get MS to support ipv6 in Azure because they already moved most of their internal infra to ipv6 and they don't wanna change it when they shift to azure

also, how much should it scare me that part of the job involves supporting hundreds of bind dns servers?
|
# ? Nov 13, 2020 02:14 |
|
depends on their automation and monitoring I guess
|
# ? Nov 13, 2020 02:18 |
|
my stepdads beer posted:
> depends on their automation and monitoring I guess

sounded like getting that set up would be part of my job
|
# ? Nov 13, 2020 02:33 |
|
cheque_some posted:
> this is timely, i was literally just doing an interview and the interviewer was telling me how they are trying to get MS to support ipv6 in Azure because they already moved most of their internal infra to ipv6 and they don't wanna change it when they shift to azure

tell them to move to gods favorite cloud jeff bezos' amazon web services, op
|
# ? Nov 13, 2020 02:37 |
|
FamDav posted:
> tell them to move to gods favorite cloud jeff bezos' amazon web services, op

lol, in tyool 2020 rds doesn't support ipv6, and a whole bunch of other services either don't or only do in a limited way
|
# ? Nov 13, 2020 08:54 |
|
cheque_some posted:
> sounded like getting that set up would be part of my job

sounds like a fun challenge to me tbh
|
# ? Nov 13, 2020 08:58 |
|
my homie dhall posted:
> if you're doing kube or some other orchestration where your containers get real IPs (from the perspective of the host), then instead of all containers being directly connected they'll each get their own veth interface in the host root namespace which will make everything routed

is there any way to dump an interface into a container like I can with LXC? or at least make a user-defined bridge that has a real device in it?
|
# ? Nov 13, 2020 22:58 |
|
Hed posted:
> is there any way to dump an interface into a container like I can with LXC? or at least make a user-defined bridge that has a real device in it?

Yeah, LXC and docker use the same mechanism for interface isolation; you should be able to look up the net namespace of the docker container and move whatever interface you want inside of it. I'd assume docker also doesn't mind if you add an interface to a user-defined bridge.
|
# ? Nov 13, 2020 23:36 |
|
it was decided that we are gonna adopt kubernetes
|
# ? Feb 12, 2021 06:20 |
|
Bored Online posted:
> it was decided that we are gonna adopt kubernetes

hope you’re using a managed offering
|
# ? Feb 12, 2021 06:24 |
|
[infra person voice]: if the infra people are the ones pushing it it might be ok. if it’s devs you should be very scared
|
# ? Feb 12, 2021 06:30 |
|
Bored Online posted:
> it was decided that we are gonna adopt kubernetes

is there a clearly articulated business need, or is it trend following
|
# ? Feb 12, 2021 06:44 |
|
Nomnom Cookie posted:
> is there a clearly articulated business need, or is it trend following

itd be a move to a managed service which would theoretically be easier to hire for and less complicated than the byzantine artifice the previous person made with no input from anyone else. either way damned if you do damned if you dont in this situation i think
|
# ? Feb 12, 2021 07:10 |
|
Bored Online posted:
> it was decided that we are gonna adopt kubernetes
|
# ? Feb 12, 2021 09:09 |
|
|
# ? Feb 12, 2021 10:17 |
|
|
# ? Feb 12, 2021 10:31 |
|
hope you like yaml
|
# ? Feb 12, 2021 10:32 |
|
we're thinking about moving to kubernetes specifically, cf-for-k8s
|
# ? Feb 12, 2021 11:41 |
|
pointsofdata posted:
> hope you like yaml

Hope you enjoy fun chats about CNIs, or: why the gently caress does my networking randomly explode all the time?
|
# ? Feb 12, 2021 12:44 |
|
|
|
Bored Online posted:
> itd be a move to a managed service which would theoretically be easier to hire for and less complicated than the byzantine artifice the previous person made with no input from anyone else. either way damned if you do damned if you dont in this situation i think

on the other hand, how many people claiming k8s experience have just spent a year running helm install with no understanding, and no ability to fix the problems they cause
|
# ? Feb 12, 2021 15:57 |