wolrah
May 8, 2006
what?

Bob Morales posted:

Would setting up a /24 in the same ip scheme work at all? 10.1.99.x or something. It seems like that would be asking for trouble, mixing a 24 and 16 with the same possible ip’s

If your current disaster scheme is 10.1.0.0/16, you definitely don't want to use 10.1.99.0/24 for a new network if you want anything to be able to speak between them or to both: hosts on the /16 will consider 10.1.99.x addresses local and ARP for them directly instead of sending the traffic to their gateway. If you can't take the opportunity to fix the current configuration into something reasonable, then your new subnet will need to be somewhere outside the 10.1.0.0/16 range. Don't make another /16, that'd be silly, but use a /24 that's not within 10.1.x.x.

If you want the new subnet to be within 10.1.x.x for organizational purposes you're going to have to fix the existing setup first.
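If you want to sanity-check a candidate subnet against the existing mess, the Python stdlib will settle it for you (a minimal sketch, using the addresses from this thread):

pre:
import ipaddress

old = ipaddress.ip_network("10.1.0.0/16")

# 10.1.99.0/24 sits inside the /16: overlapping, asking for trouble
print(ipaddress.ip_network("10.1.99.0/24").subnet_of(old))   # True

# 10.2.99.0/24 is outside it: safe to route between the two
print(ipaddress.ip_network("10.2.99.0/24").overlaps(old))    # False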

ate shit on live tv
Feb 15, 2004

by Azathoth
Where is the router? Does it literally have a directly connected network of 10.1.0.0/16, with devices on the network configured with a 255.255.0.0 subnet mask?

If so, you can probably put a virtual router address (HSRP/VRRP etc.) for the smaller subnet on the router's SVI. I.e. if you want a new network in 10.1.99.0/24, you can just do something like this:

#arista
interface Vlan122
   ip address 10.11.0.2/16
   ip helper-address 10.11.122.15
   ip virtual-router address 10.11.120.1   # for subnet 10.11.120.0/24
   ip virtual-router address 10.11.122.1   # for subnet 10.11.122.0/24
   ip virtual-router address 10.11.255.254 # for subnet 10.11.0.0/16

Or for you:
pre:
interface Vlan100
   ip address 10.1.0.2/16
   ip virtual-router address 10.1.99.1 # gateway for the new 10.1.99.0/24
   ip virtual-router address 10.1.0.1  # gateway for the old 10.1.0.0/16
Migrating away from the dumb setup will be difficult, but unfortunately that is always the case when you are dealing with static IPs and changing subnets.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Ended up going with 172.16.1.x/24

Making the stupidnets into real subnets is now project #3 in line

falz
Jan 29, 2005

01100110 01100001 01101100 01111010
I'd take a step back and do an entire IPAM plan for everything before making more changes. Then move everything toward that plan over time.

ate shit on live tv
Feb 15, 2004

by Azathoth

falz posted:

I'd take a step back and do an entire IPAM plan for everything before making more changes. Then move everything toward that plan over time.

Yea this is step 0 before you start defining new IP space.

MF_James
May 8, 2008
I CANNOT HANDLE BEING CALLED OUT ON MY DUMBASS OPINIONS ABOUT ANTI-VIRUS AND SECURITY. I REALLY LIKE TO THINK THAT I KNOW THINGS HERE

INSTEAD I AM GOING TO WHINE ABOUT IT IN OTHER THREADS SO MY OPINION CAN FEEL VALIDATED IN AN ECHO CHAMBER I LIKE

You should also have your current space in there so you don't miss things.

Partycat
Oct 25, 2004

Honestly, a /16 for 100 hosts is dumb, but it could be a /8 or whatever and it's fine. New stuff you can segment and build on another scope for sure. I would do that personally, label the old VLAN as Shitlandia or whatever, and let it die.

Very few things need to be within the same L3 network or L2 domain to work nowadays, and if finding everything and having it reconfigured is not your problem, don't make it one. They'll blame you for subsequent issues no matter how unrelated.

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k
Is there anything out there like Aruba AirWave but in IoT form? Basically I'm looking to deploy hundreds of devices around offices that monitor wireless coverage and alarm if it drops below a threshold.

thesurlyspringKAA
Jul 8, 2005
I have a Cisco industrial switch running a bunch of VLANs hooked up to a bunch of systems.

I have a computer connected to port 24 that is
A. Incredibly sensitive to excess traffic across its NIC
B. A highly specialized piece of equipment I don’t have the license to change settings on
C. Might be attempting to pass all network traffic it sees over a wireless link, but I can’t quite tell if that’s the case

Is there a way I can throttle or eliminate TCP traffic across port 24?

Is there a way I can throttle ALL outgoing traffic on port 24? Like have the box feed me data and the switch pass nothing back? The box is only passing UDP multicast traffic.

Thanks for your help. This is my last stop before incurring thousands of dollars in Cisco support calls.

tortilla_chip
Jun 13, 2007

k-partite
You can use an ACL applied to the specific interface in order to filter traffic.
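Something along these lines would be the shape of it (a sketch: the interface number is made up, and note that on a lot of access switches port ACLs can only be applied inbound, so you may have to filter the traffic headed toward the box on its ingress ports instead):

pre:
ip access-list extended NO-TCP-TO-BOX
 remark block TCP toward the sensitive box, let its UDP multicast feed through
 deny   tcp any any
 permit ip any any
!
interface GigabitEthernet1/24
 ip access-group NO-TCP-TO-BOX out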

thesurlyspringKAA
Jul 8, 2005
Is that accessible through the GUI or is that an IOS function?

tortilla_chip
Jun 13, 2007

k-partite
Not sure about the GUI, can definitely do it from the IOS CLI.

I am unclear what your sensitivity to outages is, but it may be worth paying Cisco or an MSP for some hand-holding.

FatCow
Apr 22, 2002
I MAP THE FUCK OUT OF PEOPLE
After a month of working with Cisco TAC and the optical BU, one of our NCSs can pass SONET frames. Why did they decide the 454 needed replacing again?

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Anyone run Kubernetes and have any opinions on network models / overlays? Looking at a new K8S deployment and there's like a dozen different network models and I'm trying to figure out which isn't terrible.

Methanar
Sep 26, 2013

by the sex ghost

madsushi posted:

Anyone run Kubernetes and have any opinions on network models / overlays? Looking at a new K8S deployment and there's like a dozen different network models and I'm trying to figure out which isn't terrible.

Calico is cool. I also hear really good things about kube-router. I like them because it's real IP routing over BGP with no vxlan bullshit. Kube-router can peer with the rest of your network so you have real routes propagating around to normally cluster-internal resources like services or direct pod IPs.

Also, if you're new to kubernetes networking: this and the two follow-up articles for services and ingress are basically mandatory reading. https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727
Read it twice. And then read it again when you start actually building things on kubernetes.

The different network models are just different implementations of how each node gets a pod subnet and how all the nodes learn about each other's subnets.
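You can see that allocation directly on the node objects (assuming the controller-manager is handing out podCIDRs; plugins running their own IPAM, like Calico can, do it differently):

pre:
# one line per node: its name and the pod subnet it was handed
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR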

Methanar fucked around with this message at 20:41 on Jul 13, 2018

FatCow
Apr 22, 2002
I MAP THE FUCK OUT OF PEOPLE
The container ecosystem not being v6 native is the stupidest design decision I've ever seen. Making GBS threads all over a Red Hat product guy about it was one of my best moments.

Methanar
Sep 26, 2013

by the sex ghost

FatCow posted:

The container ecosystem not being v6 native is the stupidest design decision I've ever seen. Making GBS threads all over a Red Hat product guy about it was one of my best moments.

What would ipv6 solve? You'd still need some kind of overlay management.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Methanar posted:

Calico is cool. I also hear really good things about kube-router. I like them because it's real IP routing over BGP with no vxlan bullshit. Kube-router can peer with the rest of your network so you have real routes propagating around to normally cluster-internal resources like services or direct pod IPs.

Also, if you're new to kubernetes networking: this and the two follow-up articles for services and ingress are basically mandatory reading. https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727
Read it twice. And then read it again when you start actually building things on kubernetes.

The different network models are just the implementation of how each node gets a subnet and all nodes are aware of each other's subnets.

Thanks, really appreciate the info and link.

What's the big difference between Calico and Kube-router? It seems like both are L3 (no overlay) and use BGP. Is there something I'm missing?

Methanar
Sep 26, 2013

by the sex ghost

madsushi posted:

Thanks, really appreciate the info and link.

What's the big difference between Calico and Kube-router? It seems like both are L3 (no overlay) and use BGP. Is there something I'm missing?

Somehow, there isn't an actual article that I've found describing that. But these two are pro reads as well. Most notable is eBGP peering with the rest of your network, so you can have real routes to kubernetes' service constructs, which is useful if you're split-brained between some things inside of the cluster and some things outside of the cluster. Especially if you're on-prem and can't use an ELB to handle ingress for you. (like me :( )
https://www.kube-router.io/docs/introduction/#what-is-kube-router
https://www.kube-router.io/docs/see-it-in-action/

My understanding is kube-router is a drop-in replacement for Calico as a BGP node subnet manager, but it also replaces kube-proxy (the thing doing the magic behind service constructs) and can do microsegmentation, all in one binary. Unclear exactly why this is better than its competitors, but apparently it's good. It comes pre-built with prometheus metric endpoints, so you easily get pretty graphs out of it that calico and kube-proxy don't give you.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Methanar posted:

Somehow, there isn't an actual article that I've found describing that. But these two are pro reads as well. Most notable is eBGP peering with the rest of your network, so you can have real routes to kubernetes' service constructs, which is useful if you're split-brained between some things inside of the cluster and some things outside of the cluster. Especially if you're on-prem and can't use an ELB to handle ingress for you. (like me :( )

Yeah, I'm on-prem too.

This Calico doc seems to imply that you can have BGP peering with your network:
https://docs.projectcalico.org/v3.1/usage/external-connectivity
https://docs.projectcalico.org/v3.1/usage/configuration/bgp

I'll do some more research. It seems like IPVS is a big feature (vs kube-proxy).

Thanks again!

Methanar
Sep 26, 2013

by the sex ghost

madsushi posted:

Yeah, I'm on-prem too.


Tell me when you start to hate the very concept of ingress from the internet.

madsushi posted:


This Calico doc seems to imply that you can have BGP peering with your network:
https://docs.projectcalico.org/v3.1/usage/external-connectivity
https://docs.projectcalico.org/v3.1/usage/configuration/bgp

I'll do some more research. It seems like IPVS is a big feature (vs kube-proxy).

Thanks again!

Calico can do eBGP peering as well, you're right. I'm bad at phrasing. I meant the eBGP peering part as a point in favor of a BGP-based network vs vxlan.
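For reference, a global BGP peer in Calico v3 is just a small resource you apply with calicoctl; a sketch, with the peer IP and AS number made up:

pre:
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack-tor-peer
spec:
  peerIP: 192.0.2.1   # your upstream router (placeholder)
  asNumber: 64512     # its AS (placeholder)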

kube-proxy actually has an ipvs mode too, since 1.9, which was December I believe.

https://kubernetes.io/docs/concepts/services-networking/service/

quote:

Proxy-mode: ipvs
FEATURE STATE: Kubernetes v1.9 beta
In this mode, kube-proxy watches Kubernetes Services and Endpoints, calls netlink interface to create ipvs rules accordingly and syncs ipvs rules with Kubernetes Services and Endpoints periodically, to make sure ipvs status is consistent with the expectation. When Service is accessed, traffic will be redirected to one of the backend Pods.

Similar to iptables, Ipvs is based on netfilter hook function, but uses hash table as the underlying data structure and works in the kernel space. That means ipvs redirects traffic much faster, and has much better performance when syncing proxy rules. Furthermore, ipvs provides more options for load balancing algorithm, such as:

rr: round-robin
lc: least connection
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue


quote:

Unclear exactly why this is better than its competitors, but apparently it's good.

At the moment I'm using calico with iptables-mode kube-proxy. It's been working well for me. If you do choose something else, let me know how it goes.
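If you do try ipvs mode, it's just a kube-proxy flag on 1.9+ (a sketch; it assumes the ipvs kernel modules are loaded on the node):

pre:
# run kube-proxy with the ipvs backend and round-robin scheduling
kube-proxy --proxy-mode=ipvs --ipvs-scheduler=rr

# then on the node, inspect the virtual services it programmed
ipvsadm -Ln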

FatCow
Apr 22, 2002
I MAP THE FUCK OUT OF PEOPLE

Methanar posted:

What would ipv6 solve? You'd still need some kind of overlay management.

Unless I'm mistaken there is a ridiculous amount of NAT going on within Kubernetes. We're trying to put a VoIP application in containers, and the media flows are an absolute nightmare with how it currently exists, to the point where I consider it broken for UDP use cases. I'm not an engineer on the Kubernetes side, but every conversation I've had with the people working directly in it has been disappointing. We'll likely be running anything UDP/RTP in VMs while everything else goes to containers.

It's 2018, I should be able to have a globally unique IPv6 address for every container. These are the use cases IPv6 was designed for. And yes I know you shouldn't be directly accessing your containers, but that is what ACLs are for.

FatCow fucked around with this message at 17:54 on Jul 14, 2018

Methanar
Sep 26, 2013

by the sex ghost

FatCow posted:

Unless I'm mistaken there is a ridiculous amount of NAT going on within Kubernetes. We're trying to put a VoIP application in containers, and the media flows are an absolute nightmare with how it currently exists, to the point where I consider it broken for UDP use cases. I'm not an engineer on the Kubernetes side, but every conversation I've had with the people working directly in it has been disappointing. We'll likely be running anything UDP/RTP in VMs while everything else goes to containers.

It's 2018, I should be able to have a globally unique IPv6 address for every container. These are the use cases IPv6 was designed for. And yes I know you shouldn't be directly accessing your containers, but that is what ACLs are for.

Okay I'll give you that.

My main application is webrtc, which is directly accessed by users on the public internet, and it is a loving abomination to make work in kubernetes. Like you say, to the point of it being legitimately broken. What's the point if I need to operate in the host network namespace anyway? But of course even that's not good enough, because the webrtc thing is not on its own. It's got 3 different redises, it needs something to term ssl (whether that's an ingress controller or an haproxy container in the pod), and there's some really stupid endpoint that requires :443 on a public endpoint (looked at rewriting this so the pod could advertise a port and not have a statically required one, but lol). But then operating in the host network namespace fucks up everything.

I'm on-prem and have a fixed number of machines to work with. When I do a deployment everything eats poo poo, because if I try to deploy a new pod onto a machine with an existing pod, it fails because the ports are already in use. So then the issue becomes that I need two separate deployments for one application, with different network configurations, working together, and I have to figure out how to do service discovery so the webrtc piece in the host network can find its other parts in the real cluster network.

But of course that's not all either. I'm on-prem and doing many tens of gbps of video transmission, so data locality becomes important. Like I guess for the http control traffic I can have an ingress controller in front and do host-header matching to direct traffic to the correct container backends, based off of how each one registers to the core.

But then I need dynamic, public DNS for my pods. I was thinking I could delegate a subzone for CoreDNS to be authoritative over, but I guess that's actually not possible for the externalIPs of services unless I write in the capability to do that. Which I might need to, because the incubator project external-dns literally just does not work for ingress controllers at the moment; their slack channel helpfully told me I should just use an ELB instead. https://github.com/coredns/coredns/issues/1851

nvm you're right gently caress kubernetes.


ElCondemn
Aug 7, 2005


FatCow posted:

Unless I'm mistaken there is a ridiculous amount of NAT going on within Kubernetes. We're trying to put a VoIP application in containers, and the media flows are an absolute nightmare with how it currently exists, to the point where I consider it broken for UDP use cases. I'm not an engineer on the Kubernetes side, but every conversation I've had with the people working directly in it has been disappointing. We'll likely be running anything UDP/RTP in VMs while everything else goes to containers.

It's 2018, I should be able to have a globally unique IPv6 address for every container. These are the use cases IPv6 was designed for. And yes I know you shouldn't be directly accessing your containers, but that is what ACLs are for.

I just switched the clusters I manage to the AWS VPC CNI driver, and every pod now has a concrete IP. It's made using Kubernetes a lot easier, especially for stateful services and services that aren't just web servers.

If I were running kubernetes on-prem I'd probably switch back to Calico, distribute my overlay network into my iBGP, and just route it like any other network, making all the NAT and ingress poo poo way less complicated. Don't use kube-router; I had weird issues with it, but that might've been due to IP-in-IP across AZs in AWS. Also it would hang and pods would be unreachable until the container was cycled, but maybe they fixed that in the past 6 months.

ElCondemn fucked around with this message at 22:13 on Jul 14, 2018

ate shit on live tv
Feb 15, 2004

by Azathoth
So I'm seeing a lot of talk itt about Kubernetes and containers, but all of this is apparently happening within AWS. Is it really worthwhile to create a monolithic server running like 100 services, such that you actually need BGP to announce them all?

What are you guys' applications where this makes sense, as opposed to having dedicated servers or AWS services do DB lookups or whatever? Or is it one of those things where you are scaling within AWS, but only for a few hours, and then you destroy all the servers?

Apex Rogers
Jun 12, 2006

disturbingly functional

This isn't really a Cisco question, it's about setting up some basic routing functionality in Windows 10. Please point me to another thread if this isn't the right one for the question.

I want to set up my Windows 10 PC as a router with NAT. Basically, the PC has two ethernet ports. One faces the public network and one faces my private network. I want to hang a switch off the PC on the private network and connect some devices to it. The hope is that the PC will do NAT between the private and public networks and allow the connected devices to talk to a TFTP server sitting in the public space.

I have already enabled IP routing on the PC via a registry setting, as described here: https://superuser.com/questions/394505/how-can-i-setup-a-win-7-pc-as-a-router/394564#394564

I'm having trouble finding the right instructions for the NAT side of things. I see a lot of instructions about setting up a virtual NAT switch using PowerShell, but those use cases seem to be about setting up a private network locally on the PC for Hyper-V virtual machines. Will this same approach work for my case?

In any case, I tried the powershell command mentioned here:
https://4sysops.com/archives/native-nat-in-windows-10-hyper-v-using-a-nat-virtual-switch/ , but I am getting a message about how "The term 'New-VMSwitch' is not recognized". I have not gone any further than that.

Thanks in advance for any help.

MF_James
May 8, 2008
I CANNOT HANDLE BEING CALLED OUT ON MY DUMBASS OPINIONS ABOUT ANTI-VIRUS AND SECURITY. I REALLY LIKE TO THINK THAT I KNOW THINGS HERE

INSTEAD I AM GOING TO WHINE ABOUT IT IN OTHER THREADS SO MY OPINION CAN FEEL VALIDATED IN AN ECHO CHAMBER I LIKE

Apex Rogers posted:

In any case, I tried the powershell command mentioned here:
https://4sysops.com/archives/native-nat-in-windows-10-hyper-v-using-a-nat-virtual-switch/ , but I am getting a message about how "The term 'New-VMSwitch' is not recognized". I have not gone any further than that.

Thanks in advance for any help.

New-VMSwitch is a Hyper-V module command; it's likely you don't have Hyper-V installed and, by extension, the PowerShell cmdlets that go with it, which is why you're getting that error.

As for how to get NAT working via Windows routing, it looks like it's done through the "Routing and Remote Access" MMC snap-in: https://technet.microsoft.com/pt-pt/library/cc776909%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396

I have never attempted to turn a Windows device into a router, so perhaps that link isn't going to help, but it seems to be what you're looking for.
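Windows 10 also ships a built-in NAT provider that doesn't need the Hyper-V virtual switch at all. I haven't tried it against a physical NIC myself, but something like this might work (the interface alias and prefix here are placeholders; adjust to your setup):

pre:
# give the private-facing NIC a static address that will act as the gateway
New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 192.168.137.1 -PrefixLength 24

# NAT that prefix out through whichever interface holds the default route
New-NetNat -Name "PrivateLabNAT" -InternalIPInterfaceAddressPrefix "192.168.137.0/24"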

ElCondemn
Aug 7, 2005


ate poo poo on live tv posted:

So I'm seeing a lot of talk itt about Kubernetes and containers, but all of this is apparently happening within AWS. Is it really worthwhile to create a monolithic server running like 100 services, such that you actually need BGP to announce them all?

What are you guys' applications where this makes sense, as opposed to having dedicated servers or AWS services do DB lookups or whatever? Or is it one of those things where you are scaling within AWS, but only for a few hours, and then you destroy all the servers?

I can't speak for anyone else but the reason we use Kubernetes and containers where I work (and at previous employers) is to make deployment quicker and easier for devs.

We don't run monolithic servers; we run maybe 30 pods per node. And the reason we would need to use BGP instead of, like, OSPF or something is because both AWS and the popular Kubernetes network drivers don't support anything else.

All of this stuff is automated using ASGs and Kubernetes controllers (Horizontal Pod Autoscaler, Cluster Autoscaler, etc.).

madsushi
Apr 19, 2009

Baller.
#essereFerrari

ate poo poo on live tv posted:

So I'm seeing a lot of talk itt about Kubernetes and containers, but all of this is apparently happening within AWS. Is it really worthwhile to create a monolithic server running like 100 services, such that you actually need BGP to announce them all?

What are you guys' applications where this makes sense, as opposed to having dedicated servers or AWS services do DB lookups or whatever? Or is it one of those things where you are scaling within AWS, but only for a few hours, and then you destroy all the servers?

I think Methanar and I are both running on-prem, not AWS. So we have big beefy hosts and this is a better way to carve up resources. Making new VMs isn't as fast or easy for devs.

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
I'm using k8s in AWS, and it's basically for the reason already mentioned: it makes deployment easier. It's easier and quicker to work with docker images than full AMIs. We can scale services more quickly by adding pods than by spinning up new EC2s, and when we do need to scale nodes we can turn on a single new node and scale several services at once instead of waiting for multiple EC2s to come up. It's quicker to roll back broken changes, and HA/fail-over is faster too. I'm still just an SRE babby, and I'm sure someone else on my team could explain it in more detail, but the up-front cost of getting kubernetes configured correctly and running has paid off for us in stability and velocity. Our kubernetes environment is more stable and more flexible than either our traditional EC2 fleet or our ECS-managed container infra.

I'm starting to gently caress around with kubeless now too, which so far looks like it's a solution in desperate search of a problem but is still fun to play with. Sorry, I know this has strayed away from Cisco chat, but I'd be very interested to hear if any of you on-prem k8s folks are using kubeless. It seems like it might make more sense there than in AWS.

FatCow
Apr 22, 2002
I MAP THE FUCK OUT OF PEOPLE

madsushi posted:

I think Methanar and I are both running on-prem, not AWS. So we have big beefy hosts and this is a better way to carve up resources. Making new VMs isn't as fast or easy for devs.

Same here. It is a method to automate and standardize deployments for us. We do RESTful services for our internal stuff so a huge amount of what we have fits the container model very well.

Methanar
Sep 26, 2013

by the sex ghost
I'm running kubernetes because I hate myself.

ate shit on live tv
Feb 15, 2004

by Azathoth
Yea, if you are spinning up your own infrastructure, I'm sure running kubernetes makes a lot of sense for scaling. I'm just confused as to why you'd do that in the cloud; if you are in the cloud, it would seem like a "serverless" architecture would be better, rather than janitoring your own bespoke cloud platform on someone else's cloud.


BallerBallerDillz posted:

I'm starting to gently caress around with kubeless now too, which so far looks like it's a solution in desperate search of a problem but is still fun to play with. Sorry, I know this has strayed away from Cisco chat, but I'd be very interested to hear if any of you on-prem k8s folks are using kubeless. It seems like it might make more sense there than in AWS.

This hasn't been a Cisco-exclusive thread since like 2007 (when it was made, wow...)

ElCondemn
Aug 7, 2005


ate poo poo on live tv posted:

Yea, if you are spinning up your own infrastructure, I'm sure running kubernetes makes a lot of sense for scaling. I'm just confused as to why you'd do that in the cloud; if you are in the cloud, it would seem like a "serverless" architecture would be better, rather than janitoring your own bespoke cloud platform on someone else's cloud.

Now that every provider has hosted Kubernetes (EKS became GA maybe a month or two ago) there isn't much reason to run your own, but a lot of us have legacy deployments.

If your question is why you would run Kubernetes instead of using ECS, plain docker or just plain instances... well that's just a silly question.

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
Well, the question of moving to a purely serverless architecture is valid for some workloads, but that's going to require a major shift in the way many services are designed. Lots of people have containers that they're using right now, and using kubernetes to orchestrate them in the interim makes sense, especially since serverless is pretty new and hardly mature. Containers in general are much more portable than serverless is. I know that multi-cloud is mostly a myth and data gravity is the main driver of cloud lock-in, but I'd still be hesitant to tie my entire code-base to a cloud-vendor-specific serverless implementation at this point in the game.

FatCow
Apr 22, 2002
I MAP THE FUCK OUT OF PEOPLE

ate poo poo on live tv posted:

Yea, if you are spinning up your own infrastructure, I'm sure running kubernetes makes a lot of sense for scaling. I'm just confused as to why you'd do that in the cloud; if you are in the cloud, it would seem like a "serverless" architecture would be better, rather than janitoring your own bespoke cloud platform on someone else's cloud.

For us it's so our workloads can be location-agnostic and we can put things wherever we need them to be.

Also, we're using k8s because that's what OpenShift uses, and rolling your own everything is for people who don't have INNOVATION revenue chasing to do.

Thanks Ants
May 21, 2004

#essereFerrari


I have some servers for which network uptime is important, but throughput is a few hundred Mb/s at most. Is connecting these via aggregate links to a pair of Juniper EX3300 switches in a virtual chassis and distributing the links across the two switches going to provide what I want? The EX3300 seems to be the lowest model in the range that does NSSU, and the budget can accommodate them.

I don't need the QSFP+ option that the EX3400 has, can't see a future need for MC-LAG (we'll just buy new switches if that happens), don't need them to route. Are people happy with these devices? Anything else I should be looking at?

Bruno_me
Dec 11, 2005

whoa

Thanks Ants posted:

I have some servers for which network uptime is important, but throughput is a few hundred Mb/s at most. Is connecting these via aggregate links to a pair of Juniper EX3300 switches in a virtual chassis and distributing the links across the two switches going to provide what I want? The EX3300 seems to be the lowest model in the range that does NSSU, and the budget can accommodate them.

I don't need the QSFP+ option that the EX3400 has, can't see a future need for MC-LAG (we'll just buy new switches if that happens), don't need them to route. Are people happy with these devices? Anything else I should be looking at?

We use non-VC (so far) EX3300-48Ps in our offices, and I'm happy with them, plus a variety of other VC/non-VC EX and QFX elsewhere. A couple EX3300s should fit nicely in that role.
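For what it's worth, the cross-member LAG you described is only a few lines once the VC is formed; roughly this, with the member/port numbers invented for the example (EX3300 is non-ELS, hence port-mode):

pre:
set chassis aggregated-devices ethernet device-count 4
# one member link from each VC member, so either switch can die
set interfaces ge-0/0/10 ether-options 802.3ad ae0
set interfaces ge-1/0/10 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode access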

ate shit on live tv
Feb 15, 2004

by Azathoth
Bakeoff!

Charliegrs
Aug 10, 2009
I posted this in the IT Cert thread, but I think that thread might be kind of dead, so apologies if this isn't the right thread for this question.

I'm working on getting a CCNA Wireless cert, and I was wondering: is there any WLC GUI to play around with? Like a virtual WLC? I see that the current version of Packet Tracer has a few WLCs, but I haven't been able to figure out whether they have a full GUI or not.
