FalseNegative
Jul 24, 2007

2>/dev/null
I just finished validating an EVPN design for the Nexus 9k. It's a definite step forward in solving a lot of my L2/L3 requirements, but the underlying protocol stack is certainly complicated.

There are so many protocols it has to rely on that I'm sure we'll find all the bugs at some point. I haven't ever really managed a Nexus stack (I've always been engineering on the ASR-9000 router line) so I guess we'll see!


the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





ping is objectively the best health check

FalseNegative
Jul 24, 2007

2>/dev/null

the talent deficit posted:

ping is objectively the best health check

pong

Nomnom Cookie
Aug 30, 2009



Ploft-shell crab posted:

just because a host no longer has an iptables rule for a target doesn’t mean it’s not going to route traffic to that target though. existing conntrack flows will continue to use that route which means you’re not just waiting on iptables to update, but also all existing connections (which may or may not even be valid) to that pod to terminate before you can delete it. what happens if neither side closes the connection, should the pod just never delete?

it might be workable for your specific use case (in which case, write your own kube-proxy! there are already cni providers that replace it), but I don’t think it’s as trivial as you’re making it sound

sure, no L4 load balancer can help you much there. it doesn't know how to gracefully shut down a connection, but that's fine. kube gives you the tools out of the box to tell a process it's about to die, so it can drain itself and shut down gracefully assuming it will never be sent a new connection. what i want and don't have is a guarantee that a pod won't die until after every node has stopped sending it syn packets. you can't do a graceful shutdown without that, not safely. i'm willing to accept pods living longer than they need to, or occasionally hanging in terminating with manual intervention required to fix, so yeah I think the process i outlined would work without adding too much write load on the apiserver.
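something like this is the shape of the drain-on-sigterm part (rough python sketch; the port is made up, and in kube you'd still pair it with endpoint removal and a long enough terminationGracePeriodSeconds):

code:
# minimal sketch of "process gets told it's about to die, drains, exits";
# the port is arbitrary and the handler is a stand-in for real work
import signal
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

server = HTTPServer(("0.0.0.0", 8080), Handler)

def drain(signum, frame):
    # SIGTERM is what the kubelet sends first; stop taking new requests and
    # let the serve loop wind down. SIGKILL only comes after the grace period.
    threading.Thread(target=server.shutdown, daemon=True).start()

signal.signal(signal.SIGTERM, drain)
server.serve_forever()  # returns once shutdown() has been requested
server.server_close()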

ate shit on live tv
Feb 15, 2004

by Azathoth
When doing peering with AWS they only support 4-byte ASNs up to 32767.65535. Why? Who knows, but it's a fuckup. If you're supporting 4-byte ASNs you should support all of the bytes, not cap it at a dumb signed 4-byte integer like it's the 90s and we still need to differentiate between signed and unsigned numbers.
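For reference, asdot X.Y is just X * 65536 + Y, so that cap works out to exactly the signed 32-bit ceiling:

code:
# quick check on the cap above: asdot "X.Y" maps to X * 65536 + Y
high, low = 32767, 65535
asplain = high * 65536 + low
print(asplain)                 # 2147483647
print(asplain == 2**31 - 1)    # True -- the signed 32-bit max
print(2**32 - 1)               # 4294967295 -- what a full 4-byte ASN space allows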

tortilla_chip
Jun 13, 2007

k-partite

abigserve posted:

I can't think of a single bad thing to say about that platform.

Ah, allow me to bitch at length about the 6500/7600 BU split.

ate shit on live tv
Feb 15, 2004

by Azathoth

tortilla_chip posted:

Ah, allow me to bitch at length about the 6500/7600 BU split.

Uh hello? The 6500 is a SWITCH and the 7600 is a ROUTER, GOD. gb2ccna

Warbird
May 23, 2012

America's Favorite Dumbass

i hate terraform and azure networking is the devil. thanks for reading and god bless

Bored Online
May 25, 2009

We don't need Rome telling us what to do.
last week i was terraforming and considering today i was ansibling id rather go back

vanity slug
Jul 20, 2010

Warbird posted:

i hate terraform and azure networking is the devil. thanks for reading and god bless

i found out this week that you can't do ipv6 between azure vms and basically anything that isn't a load balancer

what's the loving deal with cloud providers half-assing their ipv6 implementations anyway

Kazinsal
Dec 13, 2011

Jeoh posted:

i found out this week that you can't do ipv6 between azure vms and basically anything that isn't a load balancer

what's the loving deal with cloud providers half-assing their ipv6 implementations anyway

saves money. no point properly implementing something the overwhelming majority of people aren't going to fully utilize.

ate poo poo on live tv posted:

Uh hello? The 6500 is a SWITCH and the 7600 is a ROUTER, GOD. gb2ccna

the 12000 was a "gigabit switch router"! :eng101:

Kazinsal fucked around with this message at 09:22 on Nov 11, 2020

Nomnom Cookie
Aug 30, 2009



Bored Online posted:

last week i was terraforming and considering today i was ansibling id rather go back


The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

FamDav
Mar 29, 2008

Jeoh posted:

i found out this week that you can't do ipv6 between azure vms and basically anything that isn't a load balancer

what's the loving deal with cloud providers half-assing their ipv6 implementations anyway

Some combination of

* a 10.0.0.0/8 is a lot of addresses. how many million things do you have running?
* for migration purposes you'll probably have to dual stack for somewhere between awhile and forever, eliminating a lot of the benefits of deploying ipv6 internally
* as a consequence of the above, network regionalization/isolation can be a much more tractable solution to extending your private ipv4 footprint. In AWS for example you could have teams deploy into their own dedicated VPCs that expose applications via privatelink, which lets you hook a loadbalancer into another VPC

that being said, some reasons why ipv6 makes sense

* your model requires or prefers a common globally routable private infrastructure, and you want to minimize or remove all roadblocks to making that architecture work for you
* cloud providers give you ipv6 allocations from publicly routable ip ranges rather than the allocated private range, eliminating the need for NATs ($$$) and instead leveraging just routing rules for ingress/egress control
* the expanded address range makes it easier to do smaller allocations of ranges to, say, VMs for containers w/o having untenable waste
* customer rate of change is increasing and ip reuse is becoming more and more of a problem. a lot of existing software has assumptions that ip reuse doesnt occur within too tight a time window, and even with cold pooling in your friendly local VPC control plane if you start pushing significant rates of change you may see traffic get sent to the wrong endpoints

my personal opinion is that we'll start to see more support for dualstack (iirc aws is the only one that supports it) across clouds and that large customers will drive cloud providers to offer a single stack ipv6 vpc option in the next few years. k8s singlestack ipv6 will probably be the bellwether here
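rough numbers on the address-space points above (python stdlib only; 2001:db8::/56 is just the documentation prefix, not something a cloud actually hands out):

code:
# back-of-envelope math for the ipv4-vs-ipv6 sizing argument
import ipaddress

v4 = ipaddress.ip_network("10.0.0.0/8")
print(v4.num_addresses)        # 16777216 -- a lot, but you can run out

# carving per-VM /64s out of an illustrative /56 still leaves 256 of them,
# each with 2**64 addresses for containers, so reuse pressure basically vanishes
v6 = ipaddress.ip_network("2001:db8::/56")
per_vm = list(v6.subnets(new_prefix=64))
print(len(per_vm), per_vm[0].num_addresses)   # 256 18446744073709551616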

animist
Aug 28, 2018
is it possible to tell cloud vms that want to be ipv4 that they are ipv4, and use v6 for your infra underneath? Or is that not a thing that is possible / a good idea

ate shit on live tv
Feb 15, 2004

by Azathoth

animist posted:

is it possible to tell cloud vms that want to be ipv4 that they are ipv4, and use v6 for your infra underneath? Or is that not a thing that is possible / a good idea

Absolutely. Everything in AWS is virtualized anyway (i.e. customers have no knowledge of what IP is actually carrying their traffic between VPCs/regions) and I'm pretty sure most of AWS internal infra is ipv6 already. Plus isn't that how docker works? All the apps think they are running on 192.168.0.1 or whatever.

FamDav
Mar 29, 2008

animist posted:

is it possible to tell cloud vms that want to be ipv4 that they are ipv4, and use v6 for your infra underneath? Or is that not a thing that is possible / a good idea

so like jeoh mentioned most clouds don't offer anything in the way of ipv6 dualstack let alone single stack, but while you could almost certainly build a working ipv4 <-> ipv6 overlay in a few hours or days you have to ask yourself

* what are you getting out of that abstraction? if you're just mapping your private ipv4 range onto ipv6, then you're still stuck with almost all of the limitations of ipv4.
* how much effort are you going to invest in ensuring that your abstraction remains correct over time? what's the blast radius if you gently caress up that abstraction?
* how much time, effort, and compute are you willing to invest in optimizing this thing when you want to run something that sends/receives a large volume of packets?

FamDav
Mar 29, 2008

ate poo poo on live tv posted:

Absolutely. Everything in AWS is virtualized anyway (i.e. customers have no knowledge of what IP is actually carrying their traffic between VPCs/regions) and I'm pretty sure most of AWS internal infra is ipv6 already. Plus isn't that how docker works? All the apps think they are running on 192.168.0.1 or whatever.

while it's a little outdated and very high level, i always recommend eric brandwine's talk that goes over how vpc is implemented in aws (linked to when he goes into the example)

https://www.youtube.com/watch?v=3qln2u1Vr2E&t=1055s

and if youre interested in some of the work that goes into stateful flow tracking for things like efs, nlb, and nat gateway here's a similar talk from colm maccarthaigh

https://www.youtube.com/watch?v=8gc2DgBqo9U&t=1490s

cowboy beepboop
Feb 24, 2001

ate poo poo on live tv posted:

Absolutely. Everything in AWS is virtualized anyway (i.e. customers have no knowledge of what IP is actually carrying their traffic between VPCs/regions) and I'm pretty sure most of AWS internal infra is ipv6 already. Plus isn't that how docker works? All the apps think they are running on 192.168.0.1 or whatever.

I thought docker did serious nat fuckery instead of ipv6

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

my stepdads beer posted:

I thought docker did serious nat fuckery instead of ipv6

typically the host running the containers has a subnet like 192.168.0.0/24 that each container gets an IP out of. all of the containers have their own veth device in their net namespace with their IP on it, and they're all connected to the same bridge device in the root network namespace (docker0), so all the containers on the same host see each other as directly connected. any traffic that needs to go elsewhere (i.e. the rest of your network) will get NAT'd with the host's IP
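if you want to see it on a host, something like this works (python sketch, assumes the docker SDK is installed and you can talk to the daemon):

code:
# prints the bridge subnet and each container's address out of it;
# the SNAT to the host IP happens in iptables and isn't shown here
import docker

client = docker.from_env()

bridge = client.networks.get("bridge")  # backed by the docker0 device
print("bridge subnet:", bridge.attrs["IPAM"]["Config"][0]["Subnet"])

for container in client.containers.list():
    for net_name, cfg in container.attrs["NetworkSettings"]["Networks"].items():
        print(container.name, net_name, cfg["IPAddress"])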

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
if you're doing kube or some other orchestration where your containers get real IPs (from the perspective of the host), then instead of all containers being directly connected they'll each get their own veth interface in the host root namespace which will make everything routed
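the difference shows up in the host routing table, roughly like this (python sketch on a linux host; matching on veth/cali/docker0 names is just illustrative):

code:
# bridged docker: one route for the whole docker0 subnet.
# routed CNI: one host route per pod pointing at its veth in the root namespace.
import subprocess

routes = subprocess.run(["ip", "-4", "route", "show"],
                        capture_output=True, text=True, check=True).stdout

for line in routes.splitlines():
    if any(dev in line for dev in ("docker0", "veth", "cali")):
        print(line)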

cheque_some
Dec 6, 2006
The Wizard of Menlo Park
this is timely, i was literally just doing an interview and the interviewer was telling me how they are trying to get MS to support ipv6 in Azure because they already moved most of their internal infra to ipv6 and they don't wanna change it when they shift to azure


also how much should it scare me that part of the job involves supporting hundreds of bind dns servers :stonk:

cowboy beepboop
Feb 24, 2001

depends on their automation and monitoring I guess

cheque_some
Dec 6, 2006
The Wizard of Menlo Park

my stepdads beer posted:

depends on their automation and monitoring I guess

sounded like getting that set up would be part of my job

FamDav
Mar 29, 2008

cheque_some posted:

this is timely, i was literally just doing an interview and the interviewer was telling me how they are trying to get MS to support ipv6 in Azure because they already moved most of their internal infra to ipv6 and they don't wanna change it when they shift to azure


also how much should it scare me that part of the job involves supporting hundreds of bind dns servers :stonk:

tell them to move to gods favorite cloud jeff bezos' amazon web services, op

vanity slug
Jul 20, 2010

FamDav posted:

tell them to move to gods favorite cloud jeff bezos' amazon web services, op

lol, in tyool 2020 rds doesn't support ipv6, and a whole bunch of other services also don't, or only in a limited way

cowboy beepboop
Feb 24, 2001

cheque_some posted:

sounded like getting that set up would be part of my job

sounds like a fun challenge to me tbh

Hed
Mar 31, 2004

Fun Shoe

my homie dhall posted:

if you're doing kube or some other orchestration where your containers get real IPs (from the perspective of the host), then instead of all containers being directly connected they'll each get their own veth interface in the host root namespace which will make everything routed

is there any way to dump an interface into a container like I can with LXC? or at least make a user-defined bridge that has a real device in it?

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

Hed posted:

is there any way to dump an interface into a container like I can with LXC? or at least make a user-defined bridge that has a real device in it?

Yeah LXC and docker use the same mechanism for interface isolation, you should be able to look up the net namespace of the docker container and move whatever interface you want inside of it

I'd assume docker also doesn't mind if you add an interface to a user defined bridge
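Something like this should do it (python sketch, needs root; "mycontainer" and "eth1" are placeholders for whatever you actually have):

code:
# look up the container's pid, then move a host interface into its net namespace
import subprocess
import docker

client = docker.from_env()
pid = client.containers.get("mycontainer").attrs["State"]["Pid"]

# after this the interface disappears from the host and shows up inside the
# container; you still need to configure an address/routes from in there
subprocess.run(["ip", "link", "set", "dev", "eth1", "netns", str(pid)], check=True)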

Bored Online
May 25, 2009

We don't need Rome telling us what to do.
it was decided that we are gonna adopt kubernetes

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

Bored Online posted:

it was decided that we are gonna adopt kubernetes

hope you’re using a managed offering

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
[infra person voice]: if the infra people are the ones pushing it it might be ok. if it’s devs you should be very scared

Nomnom Cookie
Aug 30, 2009



Bored Online posted:

it was decided that we are gonna adopt kubernetes

is there a clearly articulated business need, or is it trend following

Bored Online
May 25, 2009

We don't need Rome telling us what to do.

Nomnom Cookie posted:

is there a clearly articulated business need, or is it trend following

itd be a move to a managed service which would theoretically be easier to hire for and less complicated than the byzantine artifice the previous person made with no input from anyone else. either way damned if you do damned if you dont in this situation i think

jre
Sep 2, 2011

To the cloud ?



Bored Online posted:

it was decided that we are gonna adopt kubernetes

:rip:

distortion park
Apr 25, 2011


hope you like yaml

vanity slug
Jul 20, 2010

we're thinking about moving to kubernetes

specifically, cf-for-k8s

:(

jre
Sep 2, 2011

To the cloud ?



pointsofdata posted:

hope you like yaml

Hope you enjoy fun chats about CNIs, or "why the gently caress does my networking randomly explode all the time?"


Nomnom Cookie
Aug 30, 2009



Bored Online posted:

itd be a move to a managed service which would theoretically be easier to hire for and less complicated than the byzantine artifice the previous person made with no input from anyone else. either way damned if you do damned if you dont in this situation i think

on the other hand, how many people claiming k8s experience have just spent a year running helm install with no understanding, and no ability to fix the problems they cause
