|
jwh posted:500kpps at 64 bytes would probably explode an ASA, based on what I'm seeing. I'm almost tempted to set up a FreeBSD server with top-shelf Xeons and a high-performance 10GbE NIC like a Solarflare 6122 to see what performance I can get from pf.
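For a sense of scale, here's a quick back-of-the-envelope sketch (assuming the standard 20 bytes of per-frame wire overhead on Ethernet - preamble, SFD, and inter-frame gap):

```python
# Back-of-the-envelope math for the 500 kpps @ 64-byte figure.
# Assumes 20 bytes of on-the-wire overhead per frame
# (7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap).

def throughput_mbps(pps, frame_bytes, wire_overhead=20):
    """Return (payload Mbps, on-the-wire Mbps) for a given packet rate."""
    payload = pps * frame_bytes * 8 / 1e6
    on_wire = pps * (frame_bytes + wire_overhead) * 8 / 1e6
    return payload, on_wire

payload, wire = throughput_mbps(500_000, 64)
print(payload)  # 256.0 Mbps of 64-byte frames
print(wire)     # 336.0 Mbps consumed on the wire
```

So 500 kpps of minimum-size frames is only about a quarter gig of payload - it's the packet rate, not the bandwidth, that hurts.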
|
# ? May 21, 2013 18:07 |
|
|
Bluecobra posted:I'm almost tempted to set up a FreeBSD server with top-shelf Xeons and a high-performance 10GbE NIC like a Solarflare 6122 to see what performance I can get from pf. IIRC, pf is still single-threaded, though there's someone working on a patch to make it multithreaded. That's not to say it performs poorly, but I'd go for more GHz over more cores when choosing a CPU.
|
# ? May 21, 2013 18:42 |
|
Also I hear the Chelsio cards have the best BSD support.
|
# ? May 22, 2013 21:13 |
|
Cenodoxus posted:Ah, the security-through-obscurity bullshit. "If someone breaks into your network, they won't know where they are!" The same could be said for your replacement when you're hit by a bus and your dumb rear end didn't do your due diligence with Visio. One time I was told that I couldn't break up the subnets and implement layer 3 switching because "the next guy might not know how to work on it".
|
# ? May 23, 2013 03:07 |
|
Bluecobra posted:Also, this is the sole reason you should buy Arista: Can you explain this to me like I'm five? What's arista?
|
# ? May 24, 2013 13:41 |
|
ToG posted:Can you explain this to me like I'm five? What's arista? Arista makes some really killer switches. The 'show donkeys' command appears to be an easter egg.
|
# ? May 24, 2013 14:56 |
|
Does anyone around have any experience troubleshooting packet loss on Nexus 7Ks? I've recently joined a team that manages a couple, I haven't had any past experience with them, and the techs who had all the experience have left, so I'm stuck trying to figure out what is going on. I'm seeing a lot of packets vanish somewhere in the backplane of the Nexus - they leave a port-channel interface on one VDC destined for another VDC, but they seem to disappear in the backplane and never make it to the egress interface. The interfaces are all operating well below capacity, and no errors of any kind are being recorded on them. In fact the only thing I've found so far that looks suspicious is in the CoPP policy on the admin VDC: code:
To make matters worse, the software is fairly old - 5.1(3) - and because of some interesting design choices (this Nexus hosts gateway interfaces for public-facing servers in other data centres using OTV...) management are very leery of any kind of software upgrade/reboot. Given the software's age there's a very good chance that our gold partner's support channel is going to tell us they can't help until we upgrade... The only useful document I've found so far is this one: http://docwiki.cisco.com/wiki/Cisco_Nexus_7000_Series_NX-OS_Troubleshooting_Guide_--_Troubleshooting_Packet_Flow_Issues which has helped a bit but doesn't tell me what I need to do to fix it. Anyone with more experience than me who can offer some suggestions?
|
# ? May 26, 2013 22:35 |
|
There is an ASA 5510 up on my local auction site. Is it worth getting for a lab? Also I see it has 5 ports, now the white paper says that 2 can be made gigabit with the correct license, so I assume they are gig ports in hardware but crippled in software? Also I don't have any ASA experience at all, so are the five ports routed ports, or one WAN routed and 4 switchports?
|
# ? May 26, 2013 23:39 |
|
This is going to be a really stupid question, but I know nothing about Cisco's more robust switch offerings. I'm going through datasheets and specs, but can anyone sum up, in like a paragraph, the difference between Cisco's larger Catalysts and their Nexus line? In the product selector they seem to be interspersed so is there a good 10,000 foot high overview of both lines? I realize this is really vague and I apologize, but I hope someone understands what I'm getting at. This is just for personal knowledge, not for any project.
|
# ? May 27, 2013 06:35 |
|
the biggest thing about the nexus line, to me, is that they can control additional nexus switches like they are line cards, and they support virtual port channels, so you can effectively eliminate spanning tree and utilize multiple uplinks to diverse switches. There are probably a lot of other features, but those are the two sellers for me.
|
# ? May 27, 2013 06:43 |
|
Martytoof posted:This is going to be a really stupid question, but I know nothing about Cisco's more robust switch offerings. I'm going through datasheets and specs, but can anyone sum up, in like a paragraph, the difference between Cisco's larger Catalysts and their Nexus line? In the product selector they seem to be interspersed so is there a good 10,000 foot high overview of both lines? Nexus: data center oriented, higher speeds, FCoE, DC interconnect. Catalyst: campus oriented, more features in the edge/aggregation/wireless/auto-qos area.
|
# ? May 27, 2013 11:09 |
|
Thanks guys. Just having that bit of differentiation will make it easier to do my own research from here on out.
|
# ? May 27, 2013 18:17 |
|
The Nexus line also runs NX-OS. But all you really need to know about the Nexus vs Catalyst is that they're not Juniper.
|
# ? May 27, 2013 20:20 |
|
Oh yeah, that one I knew. Do the high-end Cats run IOS proper or is it some sort of modified environment? I gather it's IOS running on a Unix kernel or something like that, but is the admin-facing interface much different from a regular IOS switch?
|
# ? May 27, 2013 21:06 |
|
Martytoof posted:Oh yeah, that one I knew. Do the high-end Cats run IOS proper or is it some sort of modified environment? I gather it's IOS running on a Unix kernel or something like that, but is the admin-facing interface much different from a regular IOS switch? Higher-end Cats run IOS. It's identical to any router IOS with the addition of a bunch of switching poo poo and stuff for line cards.
|
# ? May 27, 2013 22:52 |
|
Martytoof posted:I gather it's IOS running on a Unix kernel or something like that. I think you're thinking of IOS XR, which is a QNX-based system used on the big routers like the HFR, 12000, and such.
|
# ? May 28, 2013 02:16 |
|
adorai posted:the biggest thing about the nexus line, to me, is that they can control additional nexus switches like they are line cards, and they support virtual port channels, so you can effectively eliminate spanning tree and utilize multiple uplinks to diverse switches. There are probably a lot of other features, but those are the two sellers for me. i've been out of the loop on nexus for a couple years as we moved to an inhouse solution. Has Cisco figured out how to get 10gig backplanes for all ports instead of just a handful of 10gig ports? To me the biggest limitation of their line card offerings was that they were super expensive for the lack of high-speed port configuration.
|
# ? May 28, 2013 04:21 |
|
|
BigT posted:i've been out of the loop on nexus for a couple years as we moved to an inhouse solution. The 5500 series is line rate 10G, and so is the N7K with the release of the F2 series of line cards. You can also buy 40 and 100 gig cards for the 7k.
|
# ? May 28, 2013 08:46 |
|
wolrah posted:I think you're thinking of IOS XR, which is a QNX-based system used on the big routers like the HFR, 12000, and such. xr is more of a microkernel design under qnx; sounds like Marty was describing xe (used on the ASR 1k), which is a Linux host OS running a monolithic IOS kernel as a process.
|
# ? May 28, 2013 13:34 |
|
ragzilla posted:xr is more of a microkernel design under qnx; sounds like Marty was describing xe (used on ASR 1k) which is a Linux host OS running a monolithic IOS kernel as a process. Yup, Cisco is moving towards IOS XE for their upcoming switches - the newer sups for the 4500s, the 4500-Xs, and the 3850s are already using it. I was told it has something to do with single-core processors becoming less common: since regular IOS can only run on a single core, they're doing what ragzilla said above and running IOS as a process. It's also supposed to open up the possibility of running other software on the switches, like Wireshark. They're also bringing out some new 2960s which will supposedly be upgradeable to IOS XE at some point - since they've got dual-core processors, only one core would be usable until then. They're even supposed to be Netflow capable on every single port! The next supervisors for the upcoming 6800 chassis should also get IOS XE at some point if they don't come out with it already.
|
# ? May 28, 2013 19:18 |
|
ragzilla posted:xr is more of a microkernel design under qnx; sounds like Marty was describing xe (used on ASR 1k) which is a Linux host OS running a monolithic IOS kernel as a process. Yeah I think this is what I'm thinking of, but what I was reading made it sound like there were more devices using the new setup. e: Ah, yeah, what chestnut said
|
# ? May 28, 2013 20:01 |
|
ragzilla posted:ASA 5510 single flow testing with my HST-3000s, single flow 120mbps L1: Single flow 5515 @ 235mbps 64b packets: Running the 'show interface' command seems to drop a few packets consistently, but in steady state it wasn't dropping (according to the HSTs) code:
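Working backwards from that number (my arithmetic, counting payload bits only, no wire overhead):

```python
# Convert the 235 Mbps @ 64-byte-packet result above into packets/sec.
def pps_from_mbps(mbps, frame_bytes):
    """Packets per second for a given payload rate and frame size."""
    return mbps * 1e6 / (frame_bytes * 8)

print(round(pps_from_mbps(235, 64)))  # 458984, i.e. roughly 459 kpps
```

so a single flow on the 5515 works out to somewhere around 459 kpps, if I've done that right.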
|
# ? May 28, 2013 20:03 |
|
All I've managed to learn from those whitepaper documents so far is that they quote advertised rates and capacities under the best of conditions - usually meaning that's as far as you can max one thing or another if you're doing nothing else of any complexity. If those numbers ever come into question, talking to the account team and getting a demo in place seems to be the only way to know for sure. Very rarely does anything describe the hit on resources that one feature or another will cause, so the only way to find out is to try, or to run into a problem later. Even with the Juniper stuff we ran, we could certainly do basic ACL-type firewalling up to gig throughput, but they had issues with session limiting, and any sort of logging on high-flow traffic creamed the box. From my position and limited exposure to the 5520/40 units, they work rather well for VPN, which is about all we use them for, but they don't appear to scale as high-volume firewalls. It makes me wary of considering them for VoIP usage.
|
# ? May 28, 2013 20:48 |
|
ragzilla posted:Single flow 5515 @ 235mbps 64b packets: That seems crazy to me. I wish I had captures of the ~10mbit of SYNs that were causing 50% overruns.
|
# ? May 28, 2013 20:52 |
|
BigT posted:i've been out of the loop on nexus for a couple years as we moved to an inhouse solution. The F2 module is nuts, if you have the money and 7000s. It's still a bit "new" and I have already hit a bug that forced us to upgrade to 6.1.3. We have one of these in each of 2x 7000s. At this time, the F2 has to be in its own VDC and cannot mix with M1, F1, or M2 cards in the same VDC... unfortunately our sales engineer did not know this until it was too late. We have them in their own VDC (in each 7K), in a vPC domain with 40Gb port channels into 4500s, and they've been working great since the code upgrade. My understanding is that you get 48 ports at 1 or 10Gb, with 1.1 Tbps full duplex capable if you have five Fab2 modules. There are definitely caveats with this one. http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html Cue the "Fabulous" jokes. BoNNo530 fucked around with this message at 00:56 on May 29, 2013 |
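If I'm reading those numbers right, the bandwidth math works out like this (the 110 Gbps/slot per Fab2 module is my assumption from the datasheet, so double-check it):

```python
# Sanity-check of the F2 module / Fab2 fabric bandwidth figures.
ports = 48
port_gbps = 10
fab2_per_slot_gbps = 110  # assumed per-slot bandwidth of one Fab2 module
fab_modules = 5

# 48 x 10G ports, counted full duplex
module_fullduplex_gbps = ports * port_gbps * 2                 # 960 Gbps
# five Fab2 modules behind the slot, counted full duplex
fabric_fullduplex_gbps = fab2_per_slot_gbps * fab_modules * 2  # 1100 Gbps

print(module_fullduplex_gbps, fabric_fullduplex_gbps)  # 960 1100
```

which matches the "1.1 Tbps full duplex with five Fab2s" claim, with headroom over the 960 Gbps the ports themselves can generate.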
# ? May 29, 2013 00:54 |
|
The Nexus is the most amazing and amazingly frustrating product line I've ever dealt with. VDCs, vPCs, FEXes - all great things, as long as you have the undocumented flow chart of what you can and can't do based on the generation of line card and the level of NX-OS you're running.
|
# ? May 29, 2013 02:25 |
|
The only problem with Nexus is that they are of course super expensive and usually have a lot of features that you either don't need or can get with other commodity hardware. Overseas commodity switch hardware is insane right now and really starting to penetrate the American market. No need to spend all that cash when you can get cheap hardware overseas. Cisco gotta watch their market share getting slowly snipped away.
|
# ? May 29, 2013 03:34 |
|
SDN is coming. Be Afraid, be very afraid.
|
# ? May 29, 2013 16:58 |
|
Wired has done a series of puff articles about how it's going to change networking as we know it. But, uh, at least as far as I can tell it won't change much outside of large datacenters that need that flexibility. I'm also wondering how SDN will be more efficient than an ASIC when it comes to packet switching.
|
# ? May 29, 2013 17:22 |
|
The theory behind SDN is that you can expand your datacenter with commodity hardware. You program the flows, and basically magic makes it work. Obviously the devil is in the details, but commodity hardware built on merchant silicon is the biggest concern for the major players, who get their profit via markup, not volume.
|
# ? May 29, 2013 17:37 |
|
psydude posted:Wired has done a series of puff articles about how it's going to change networking as we know it. But, uh, at least as far as I can tell it won't change much outside of large datacenters who need that flexibility. I'm also wondering how SDN will be more efficient than an ASIC with regards to packet switching. SDN is about the control layer, not the forwarding layer. You're still using ASICs on your forwarding plane and that's still happening on dedicated/purpose-built switching hardware. You're not stacking a whitebox with a lot of 4-port NICs. The difference is replacing IOS/Junos/etc with an open software platform that allows you to control your network in ways that you couldn't before.
|
# ? May 29, 2013 18:06 |
|
Think of SDN as an API for programming your routers so you're not limited to things like BGP/OSPF. I'm debating using it for our DR solution, which would let me do DR by product instead of having to cut over full networks. The TLDR is you're putting PBR into your router from software you can write.
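To make that concrete, here's a toy sketch of the idea - not any real controller API, just the shape of it: match on arbitrary fields, pick a next hop, fall back to normal routing on no match (all the names here are made up):

```python
import ipaddress

# Hypothetical flow table: (source prefix, destination port or None) -> next hop.
flow_table = [
    (ipaddress.ip_network("10.1.0.0/16"), 443, "dr-site-router"),
    (ipaddress.ip_network("10.2.0.0/16"), None, "primary-router"),
]

def next_hop(src_ip, dst_port, default="default-gateway"):
    """Return the next hop for a packet, PBR-style: first matching rule wins."""
    src = ipaddress.ip_address(src_ip)
    for prefix, port, hop in flow_table:
        if src in prefix and (port is None or port == dst_port):
            return hop
    return default  # no rule matched: fall back to the normal routing table

print(next_hop("10.1.5.9", 443))    # dr-site-router
print(next_hop("192.168.1.1", 80))  # default-gateway
```

In a real SDN setup the controller pushes rules like these down into hardware instead of evaluating them in software per-packet, but that's the gist of "DR by product": steer just the flows you care about.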
|
# ? May 29, 2013 18:19 |
|
Circuit switching worked out so well before.
|
# ? May 29, 2013 18:28 |
|
madsushi posted:SDN is about the control layer, not the forwarding layer. You're still using ASICs on your forwarding plane and that's still happening on dedicated/purpose-built switching hardware. You're not stacking a whitebox with a lot of 4-port NICs. So it'll effectively allow you to make layer 3/4 decisions without regard to routing domain?
|
# ? May 29, 2013 18:38 |
|
tortilla_chip posted:Circuit switching worked out so well before. If you want to read some words about SDN by a smart network engineer check out this blog. http://blog.ioshints.info/ It's really good, I believe he uses that exact same analogy, you might have even gotten it from him, who knows.
|
# ? May 29, 2013 20:14 |
|
I still don't understand SDN in any practical way. I think I get that it's some kind of holistic control plane abstraction, but beyond that I don't really understand the use case.
|
# ? May 29, 2013 20:27 |
|
As far as real world implementation of SDN, I only know of Google and Goldman Sachs that have a fully deployed business critical infrastructure based on it. Both are home-grown though, so their specific implementations probably have their own gotchas and issues. The basic idea is easy enough to understand. Highly Scalable, vendor agnostic, programmable networking. But there isn't a practical solution yet. I've never even messed around with it in any kind of hands-on environment yet. Who knows how long until I do, maybe never.
|
# ? May 29, 2013 20:34 |
|
Here's my perspective on SDN: the current operating systems (IOS, Junos, etc) and the current protocols (BGP, OSPF, etc) are too limited by the need to interop with tons of legacy gear/software and aren't designed with today's high-end datacenter needs in mind. SDN allows you to have your smart programmers write their own code for the hardware so that now you have something like OSPF++ which more accurately reflects how you want your routing policies done and STP++ for switching from active/blocked to active/active, etc. Rather than waiting on Cisco to figure out TRILL or being locked to any given vendor, you run your own software on their hardware and thus you get custom code and you can push new features much faster. The idea is that the current control planes (the OS itself and the protocols they run) are limited and that SDN lets you replace those with something that's more flexible/modular/etc.
|
# ? May 29, 2013 21:32 |
|
|
|
Met this dude while hiking one time. Super chill, even put up with some of my questions.
|
# ? May 29, 2013 22:50 |