|
Met this dude while hiking one time. Super chill, even put up with some of my questions.
|
# ¿ May 29, 2013 22:50 |
|
|
# ¿ May 14, 2024 21:11 |
|
tortilla_chip posted:
Anyone attending Cisco Live in Orlando? Ixia unleashed a horrible blight on the conference with their fedora giveaways. The shortest blue haired girl at the VSS Monitoring booth was super cute, though; I definitely feigned a lot of interest in their product.
|
# ¿ Jun 28, 2013 04:01 |
|
veedubfreak posted:
This isn't really a short question because it requires a little bit of background, but I'm guessing it's something simple I'm overlooking and it's getting on my last god damned nerve.

I'm pretty sure the whole point of including this feature is to just make you wish you bought UCCX, too.
|
# ¿ Oct 3, 2013 23:37 |
|
So I've got the following output of a CBWFQ QoS policy, applied in the outbound direction on an interface:

code:
single-mode fiber fucked around with this message at 23:50 on Dec 19, 2013 |
# ¿ Dec 12, 2013 03:25 |
|
I ended up biting the bullet and calling TAC. Originally the problem was that BGP to the RRs on the far end was flapping like wild whenever the site started hitting like 10% of their circuit capacity. So I was expecting to find tail drops in the P2 queue (presumably the keepalives), since that was explicitly just for CS6 kind of stuff, not the P4 queue. But, after looking over some other sites, they're having a lot of weird QoS poo poo going on, they just haven't noticed yet (or don't care).
|
# ¿ Dec 13, 2013 01:09 |
|
The overall setup looked like this. Curious part is that the P2 queue does get an awful lot of matches (far more transmitted packets than what showed up as class 6 transmit packets in the P4 queue), and 0 drops of any kind.

code:
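The actual output didn't survive the archive, but for reference, a CBWFQ policy with an LLQ voice class and a CS6 control class generally looks something along these lines. This is a generic sketch only — the class names, percentages, and interface are made up, not the ones from the original post:

code:
! Hypothetical sketch; tune class names/percentages to your environment
class-map match-any CONTROL          ! P2-style queue for network control (BGP, IGP)
 match dscp cs6
class-map match-any VOICE            ! P4-style priority queue
 match dscp ef
policy-map WAN-OUT
 class VOICE
  priority percent 20                ! LLQ: strict priority, implicitly policed
 class CONTROL
  bandwidth percent 5                ! CBWFQ: guaranteed bandwidth, no policing
 class class-default
  fair-queue
  random-detect dscp-based
interface GigabitEthernet0/1
 service-policy output WAN-OUT

Then show policy-map interface GigabitEthernet0/1 output is where you'd look for the per-class match/transmit/drop counters being discussed above.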
# ¿ Dec 13, 2013 03:40 |
|
The bug is on some NICs with Intel chipsets: it causes flooding of IPv6 MLD packets when the machine hosting the NIC is put to sleep but also has wake-on-LAN enabled. There is also a bug on 2960s prior to the X model, which may not be published yet: if the router alert flag is set in those bogus MLD packets, the switch will kick each one to the processor path (in addition to flooding it everywhere throughout the ingress VLAN, if you don't have MLD snooping enabled), even if you have no IPv6 routing going on.
|
# ¿ Nov 16, 2014 06:04 |
|
Yeah, updating the NIC drivers is the best way to go, but, depending on the requirements of your environment, MLD snooping, an IPv6 ACL that drops inbound v6 traffic (may require changing the SDM template depending on which switches you have), storm control, and control plane policing will all work and keep your switch CPUs from melting down. When I encountered this in the wild, it was for an org that insisted on having thousands of users concentrated on just a couple VLANs, so, whenever the NIC bug would occur, it would be slam job dot com.
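As a rough sketch of a couple of those stopgaps on a 2960-X-class switch (the interface range and storm-control threshold are made up for illustration; exact commands vary by platform and release):

code:
! Enable MLD snooping so bogus MLD isn't flooded to every port in the VLAN
ipv6 mld snooping
! Blunt backstop: cap multicast on user-facing ports
interface range GigabitEthernet1/0/1 - 48
 storm-control multicast level 1.00
! Applying an IPv6 ACL on these platforms may first require an
! SDM template change, which needs a reload to take effect
sdm prefer dual-ipv4-and-ipv6 default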
|
# ¿ Nov 18, 2014 05:06 |
|
Is it CUCM or CME? If it's the former, we used to have the best luck just putting it on the CUCM node serving TFTP (don't even need to do the cop.sgn install, you can just download the raw .loads file and friends if you poke around on Cisco's site) and changing the phone's load name on the phone configuration page (or on the device pool, or on the phone default settings page, depending on how widespread the upgrade was supposed to be).
|
# ¿ Mar 9, 2015 04:23 |
|
Docjowles posted:
One due to "vandalism" (we never found out what this meant)

Lots of fiber in that area is aerial because it's too expensive to bury in the mountains; it's not uncommon for people to try to break the fiber or hit repeaters with buckshot.
|
# ¿ Mar 25, 2015 14:54 |
|
If call quality is exceedingly important to you for calls from one office to another, then I would stick with your MPLS VPN for the sake of getting to maintain your DSCP/CoS tags. However, I'll also say that, even in an MPLS VPN, where troubleshooting call quality can theoretically be done end to end, it can quickly become a total nightmare to try to pull this off, especially if you have centralized border controllers in an offsite data center. If the cost differential between commercial broadband and MPLS is pretty large, then I'd say go for it, especially if you can dump the trouble of maintaining and troubleshooting all those different telco circuits onto a chump-rear end MSP like the one funk_mata is describing. Cell phones, and their attendant call quality, are so ubiquitous now that, as long as your VoIP calls don't sound any worse than a cell phone, users aren't really going to complain.
|
# ¿ Jul 11, 2015 18:33 |
|
Hate to be that guy but is it Unity or Unity Connection?
|
# ¿ Sep 5, 2015 00:18 |
|
I read it as: the bolded part was not in the command, and, with the disaster now in hindsight, the emphasis is on the crucial word to prevent similar mistakes in the future.
|
# ¿ Dec 7, 2015 03:01 |
|
We have it on 7Ks acting as collapsed core and did not have any trouble with failover. I think one or two pings from workstations to Level 3 were lost, but phones certainly never lost enough heartbeats to go into SRST.
|
# ¿ Oct 5, 2016 15:38 |
|
Sepist posted:
Are you using FEX's on your 7k's? Apparently it was a 4-minute full outage but the onsite guy failed to tell me that. They have tested this successfully in the past but they have added FEX since that test. I know the 7k locks up during dual-homed FEX sync, not sure if the 5k does. I hope not.

Looks like your problem is resolved, but our FEXs are not dual homed; we did the topology where dual uplinks go to different line cards on one 7K chassis, and there's 2 FEXs in top of rack.
|
# ¿ Oct 5, 2016 19:11 |
|
mythicknight posted:
Sanity check since I haven't dealt with switch stacks in a long while. I have a 2960 that I want to add another 2960 to to make a stack. The current switch has priority 10 and is already operating and the one being prepped is wiped/at default (1). Am I wrong in thinking I just rack the second switch, hook up the stack cables, and power the switch up?

Software versions need to match or it won't join the stack. Newer switches support automatic software upgrade, but I don't think the 2960 line does. You can save yourself a little bit of time by doing a switch 2 provision (model) on the one to be your new stack master, so you can configure interfaces ahead of time.
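A minimal sketch of that pre-provisioning idea, on the existing stack master (the model string is a placeholder — substitute your actual 2960 SKU):

code:
switch 2 provision ws-c2960x-48ts-l
! Member 2's interfaces can now be configured before it ever joins
interface GigabitEthernet2/0/1
 description pre-provisioned uplink
 switchport mode access

Once the new member powers up with a matching software version and joins, the pre-provisioned interface config is applied to its ports automatically.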
|
# ¿ Feb 1, 2017 00:14 |
|
Apparently the part in question is the Intel Atom C2000 series, so there may be quite a few things that'll be toast if there's no way to do a firmware patch.
|
# ¿ Feb 7, 2017 00:13 |
|
Heard the same from a rep in the federal space a few months ago. Told us that the 6500 and 6800 were getting abandoned sooner rather than later.
|
# ¿ Feb 10, 2017 15:43 |
|
If it's one or two specific numbers harassing you, you might just drop them via inbound dial peer
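On an IOS voice gateway, that looks roughly like the following — a sketch only, with a placeholder number and made-up profile/dial-peer names:

code:
voice translation-rule 1
 rule 1 reject /5551234567/
!
voice translation-profile BLOCK_LIST
 translate calling 1
!
dial-peer voice 100 pots
 incoming called-number .
 call-block translation-profile incoming BLOCK_LIST
 call-block disconnect-cause incoming call-reject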
|
# ¿ Mar 16, 2017 23:17 |
|
This happened to us on Monday back before they made the bug public. It was pretty concerning watching them all fail not instantly, but in succession in the span of a couple of hours. I'm glad I thought to sanity check the network aspect from console because I was afraid it would end in a call to US-CERT.
|
# ¿ Mar 31, 2017 23:43 |
|
Judge Schnoopy posted:
Is ISE a worthwhile product to look in to? I'm a sole admin, 25ish cisco network devices spread over 6 locations. Network administration is typically outsourced per hour and I'm trying to cut costs by doing more management myself, allowing for critical hardware upgrades to be purchased. Roughly 120 end points.

I love ISE, but for only 120 endpoints it's probably massive overkill. I wouldn't recommend it until you have a few thousand endpoints or you absolutely need some functionality in it that nothing else can provide. The common things like using it as your RADIUS server for 802.1x and the subsequent dynamic VLAN assignment can be done even by a Windows server running NPS.

Also, the ASA bug ID calls out the 5500-X but it definitely can affect the previous platform too.
|
# ¿ Apr 1, 2017 15:55 |
|
Sepist posted:
I've dealt with something similar. I can't remember if it was NX-OS or IOS-XR but after an update some of the config was correct but wouldn't apply. TAC had me run "configure reload ascii" to reload the box with a ascii converted config, as the normal file is apparently binary.

This was probably NX-OS; I just had to do this recently at TAC's suggestion for a totally different bug on the 7K platform. When we did that reload, it destroyed all config in the VDC below a certain point, which conveniently included all the routing config.
|
# ¿ Apr 28, 2017 20:35 |
|
BaseballPCHiker posted:
Does anyone have any experience with any ASA to Firepower migration tools?

Done it several times with the vFMC and haven’t really had any problems.
|
# ¿ Aug 31, 2018 16:10 |
|
You do have to be careful with deployment; even making changes that only affect LINA and not Snort can cause an outage if something fails during deployment for any number of reasons. It doesn't happen much, less than 1% of deployments for sure, but I’ve seen a deployment fail, redeploy with no changes (like just click into the name or description of a policy, save, redeploy), and it works the 2nd time — but you did cause a small outage while the rollback takes place. For this reason you may want to only do deployments after hours for stuff that wasn’t exempted through the change control process.
|
# ¿ Aug 31, 2018 21:42 |
|
BaseballPCHiker posted:
This is true but you will still have to keep track of FXOS code underneath even if running ASA on top
|
# ¿ Sep 1, 2018 06:29 |
|
IPv6 multicast is definitely a CPU punt on 2960-X platform so, sight unseen, it seems likely to also be true for a 3560
|
# ¿ Apr 30, 2019 14:25 |
|
Kazinsal posted:
FTDs? No issues.

That's a first
|
# ¿ Aug 3, 2019 21:36 |
|
Bob Morales posted:
We have a Fortinet, but I guess this is a generic networking/failover question:

If you were learning the full DFZ from each ISP, that probably would've fixed the problem you described naturally (maybe add some dampening if the card is flapping or something like that) without the use of a track object. If you're just learning a default from each one, then I don't think the Fortinet has enough knobs to turn where you could use only SLA track objects; you'd have to do like tortilla_chip said and make this logic run elsewhere. E.g., instead of targeting 8.8.8.8 you target the IP(s) of your most important off-net service — but what if the service itself is totally unreachable and you, at best, failed over for nothing (or, at worst, keep failing over in a cycle)?

Depending on how well your NMS tool is integrated with your ticketing system, maybe the best short-term solution is to just set up multiple tracks to external business-critical services, plus a couple other barometers like 8.8.8.8 or 1.1.1.1, and configure the failure of any of those track objects to create a high-priority alarm+ticket for someone to review and make a judgment call as to whether or not to fail over. Automating the logic of when a failover should occur can be dicey in corner cases: what if you have an external accounting service that's absolutely critical some days but not others, or you have external services ABCDE but somehow services ABDE are up on one link and services BCDE are up on the other, stuff like that.
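If the "elsewhere" ends up being a Cisco edge router, the track-object plumbing looks roughly like this. All the IPs here are placeholders; whether you let the track actually swing a floating default route (as sketched) or only drive an alarm for a human to review is exactly the judgment-call part discussed above:

code:
! Probe a business-critical off-net service, not just 8.8.8.8
ip sla 10
 icmp-echo 203.0.113.10 source-interface GigabitEthernet0/0
 frequency 10
ip sla schedule 10 life forever start-time now
!
track 10 ip sla 10 reachability
 delay down 30 up 60        ! dampen brief flaps before acting
!
! Primary default follows the track; backup floats at a worse AD
ip route 0.0.0.0 0.0.0.0 192.0.2.1 track 10
ip route 0.0.0.0 0.0.0.0 198.51.100.1 250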
|
# ¿ Aug 21, 2020 00:57 |
|
GreatGreen posted:
If I rename a Cisco switch, will that require a switch reboot or can I just enter:

You should regenerate your crypto key after changing the hostname as well, to avoid risk of breaking SSH.
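No reboot needed — the hostname change takes effect immediately. A rough sketch of the order of operations (hostname is a placeholder; note that new SSH connections will fail in the window between the zeroize and the generate):

code:
configure terminal
 hostname NEW-SW-01
 ! RSA keypair is derived from hostname + domain name, so regenerate it
 crypto key zeroize rsa
 crypto key generate rsa modulus 2048
 ip ssh version 2
end
write memory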
|
# ¿ Sep 8, 2020 17:40 |
|
Maybe they're better now, but the last time I dealt heavily with FTD, 4 or 5 years ago, they were extremely bad. I've never encountered more critical, enterprise-breaking bugs on a single platform than I did with FTD. These days the only case where I can recommend buying a Firepower is if you're running the ASA code on it, and you're doing that because you have to use AnyConnect instead of some other VPN client for whatever reason.
|
# ¿ Nov 21, 2022 19:31 |
|
|
I think VXLAN EVPN can be a good fit if your environment is one or more of the following:

- Very dynamic (groups of servers are constantly being deployed or torn down)
- Expected to grow aggressively
- Needs a high degree of segmentation between different applications and their components
- Applications need very consistent latency between their components

Another point of consideration is what the labor pool looks like for network people at your company: do they hire all highly-skilled people with lots of prior experience? Or are most of the new hires more on the junior side, with the expectation that they'll develop internally and promote from within? If it's the latter, a complex environment will naturally have a higher learning curve for them. If you don't have a technical or business use case for a certain technology or design, you may want to consider that it's often easier to change a network design than it is to change your company's hiring practices.

One of my little personal soapboxes when it comes to technologies that provide an increasingly high level of abstraction (often through some level of automation) from what's happening underneath: they make your distribution of failure outcomes more leptokurtic. You very tightly control and eliminate a lot of simple errors (typed-the-wrong-VLAN-tag type of stuff), but, when you do have some kind of serious fault, it ends up being a very complex, Byzantine kind of failure. It typically ends up requiring a much higher level of technical expertise to resolve, and that troubleshooting process can take much longer. It's a little bit like the thing of "would you rather fight 50 duck-sized horses, or 1 horse-sized duck?"
|
# ¿ Jan 3, 2024 16:38 |