obsidian440
Apr 15, 2004

Don't question god's choices.

Girdle Wax posted:

If you're stuck on 6.x, you'll need to look into setting the levels of the messages you're trying to pick out, then setting the pix to only send messages of that severity or higher, which might help your signal-to-noise.

If that doesn't help you could look at moving to another syslog server like syslog-ng which has built in filtering capabilities so you could direct your 'interesting' logs to a special log file. If you feel like really going over the top you could then setup SEC (Simple Event Correlator) to watch that log file and take actions on the messages, like warning you if someone fails to log in too many times.

Another alternative if you have the hardware, and can get the software, to do it would be to upgrade to 7.x and use the built in message list filter feature.

As far as logging CPU/Memory, I'd have to recommend setting up cacti on a Linux host somewhere on your network and using that to graph the CPU/Mem/Interface Traffic OIDs.
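
If it helps, the usual OIDs for that (from CISCO-PROCESS-MIB and CISCO-MEMORY-POOL-MIB - I'm going from memory here, so double-check them against your platform before building the cacti graphs) are along these lines:
code:
cpmCPUTotal1min       1.3.6.1.4.1.9.9.109.1.1.1.1.4
cpmCPUTotal5min       1.3.6.1.4.1.9.9.109.1.1.1.1.5
ciscoMemoryPoolUsed   1.3.6.1.4.1.9.9.48.1.1.1.5
ciscoMemoryPoolFree   1.3.6.1.4.1.9.9.48.1.1.1.6
For interface traffic you can just point cacti at the standard IF-MIB counters.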

I hate to come off as lazy, but can I get some links to help me with doing this? I didn't see anything that jumped out at me while messing with the logging options in pix.


Herv
Mar 24, 2005

Soiled Meat

Girdle Wax posted:

Lotta stuff, but STP is killing me.

I haven't tested this in ages, but the last time I had redundant trunks i did a load balancing / failover situation, certainly tested it and did not notice any real convergence time that would make me nervous.

Here's a quick and dirty example of how it worked between core and dist:

code:
Core Switch

Trunk 1  
              Unix Vlan 2 Primary
              Wintel Vlan 3 Secondary (blocked)

Trunk 2
              Wintel Vlan 3 Primary
              Unix Vlan 2 Secondary (blocked)

Data Center Switch
I had both wintel and sparc in the data center, no chance for downtime. I forget if I messed with any STP parameters, but when I dumped trunk 1, the Unix traffic would fail over to the second trunk rather quickly. This was the behavior I saw throughout the campus.

Sorry, time is dulling my memory of this, but I would certainly remember a 45 sec pucker time for sure.
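
For anyone wanting to reproduce it, the usual way to get that per-VLAN primary/secondary split is tweaking the STP cost (or port-priority) per VLAN on the trunks - interface numbers here are made up, so treat this as a sketch rather than a known-good config:
code:
interface GigabitEthernet0/1
 description Trunk 1 - Vlan 2 primary
 switchport mode trunk
 spanning-tree vlan 3 cost 100
!
interface GigabitEthernet0/2
 description Trunk 2 - Vlan 3 primary
 switchport mode trunk
 spanning-tree vlan 2 cost 100
Raising the cost for a VLAN on one trunk makes spanning tree block that VLAN there and forward it on the other trunk, which gets you the blocked/primary layout above.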

inignot
Sep 1, 2003

WWBCD?

Girdle Wax posted:

Waiting 40 seconds for spanning tree to reconverge because a gig link dropped, really, really sucks.

I think you're looking for uplinkfast & backbonefast. PVST has a bunch of knobs besides portfast & bpduguard.

http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a0080094641.shtml
http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a00800c2548.shtml
http://www.cisco.com/en/US/tech/tk389/tk621/tsd_technology_support_protocol_home.html
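
Both of those are single global commands, for anyone wanting to try them (uplinkfast belongs on access switches with blocked uplinks, backbonefast on every switch in the PVST domain or it won't do much):
code:
spanning-tree uplinkfast
spanning-tree backbonefast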

ragzilla
Sep 9, 2005
don't ask me, i only work here



Backbone fast is still going to cost me 30 seconds from what I'm reading - it saves the 20 seconds of max_age timer, but I still have 2x15s for the listening and learning states. The situation it detects would be rare in my environment anyway; uplinkfast would definitely save us some convergence time on the access switches though.

jwh
Jun 12, 2002

Girdle Wax posted:

Cisco goons, you're my only hope
That's not a good sign :)

Girdle Wax posted:

Since they're 3750 stacks though, they have a snowball's chance in hell of doing anything decent BGP wise to get them to make smart routing decisions at distribution
Meaning, TCAM size is too small for full tables?

Girdle Wax posted:

- Get rid of spanning tree. How? Everything coming out of the distribution switches (pondering between 6509/Sup720 and 7609/RSP720 for this - arguments for/against these would also be welcome) is a layer 3 link: layer 3 links up to the core routers, layer 3 links down to the customer access routers, layer 3 links down to the access/aggregation switches, and layer 3 links 'across' to my existing distribution switches to get the legacy network uplinked through the new network.
That's a lot of layer 3. Are you confident that your IGP is going to reconverge AS-wide faster than spanning-tree in event of a link failure? Similarly, would these problems go away if you instead replaced your older gear with switches that supported RPVST+?

As for 6500/7600, your guess is as good as mine. I think people are still feeling this one out.

Girdle Wax posted:

To accommodate getting rid of spanning tree, I want to move the layer 3 termination down into the customer access switch (probably using 3560s): 2 routed uplinks, 1 to each distribution switch, then a handful of SVIs with /28s or /29s serving a handful of customers. These switches will participate in the OSPF loopback program, and limited iBGP as listed below.
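
A minimal sketch of what that routed-access design looks like on a 3560 (interface numbers and addressing invented for illustration):
code:
ip routing
!
interface GigabitEthernet0/49
 description routed uplink to DistA
 no switchport
 ip address 10.0.1.2 255.255.255.252
!
interface Vlan100
 description customer /28
 ip address 192.0.2.1 255.255.255.240
!
interface FastEthernet0/1
 switchport access vlan 100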

I'm not much of an ethernet wizard, honestly, and I spend more time on the WAN side these days, but I almost feel like you're putting a heavy burden on your access layer. More intelligence at the access layer might mean more revenue opportunities, ease of management, and greater provisioning flexibility, but it also means more things that can break. Plus, if you need a customer VLAN to span multiple access layer devices, are you going to have to cross-connect those switches on an as-needed basis?

Girdle Wax posted:

- All internet traffic in the access/distribution switches will be in an internet VRF (vrf-lite in all the switches basically). This is mainly a management thing- I plan to use the main routing table for management access only- haul a single eth from every switch back to an isolated switch for a new management network. So I can actually get into the switches over ssh/telnet when the poo poo hits the fan. Part of this project will also be making sure we have working consoles in case that switch fails *grumble*. This introduces somewhat of a learning curve (remembering to type 'vrf internet' all the time), but I feel the management advantages outweigh the disadvantages.
Have you worked up all of the relevant routing protocol configurations with the vrf stuff? It can be a little screwy, ie, OSPF's vrf per-process instantiation versus mBGP address-family vpnv4.
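
For the curious, a rough sketch of the vrf-lite side of that (names and networks invented) - note how OSPF wants a separate process per VRF, versus BGP doing it with address-families:
code:
ip vrf internet
 rd 65000:1
!
interface Vlan100
 ip vrf forwarding internet
 ip address 192.0.2.1 255.255.255.240
!
router ospf 2 vrf internet
 network 192.0.2.0 0.0.0.15 area 0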

Girdle Wax posted:

- OSPF is optimized as described in the Cisco doc above (200ms hellos for fast failure detection - in the future we will probably also look at doing BFD if the 3560 ever supports it), in addition to ispf to speed up SPF calculations.
Well, you probably saw it too on c-nsp the other day, but apparently BFD has trouble getting below 250ms. That could have been a platform anomaly, I don't remember. If your tuned OSPF dead timer is 4 x hello interval, that's still really quick.
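
For reference, the 200ms hello tuning being discussed is (I believe) the sub-second 'minimal' dead-interval knob per interface, plus ispf under the process - a sketch:
code:
router ospf 1
 ispf
!
interface GigabitEthernet0/1
 ip ospf network point-to-point
 ip ospf dead-interval minimal hello-multiplier 5
hello-multiplier 5 gives you 5 hellos a second (200ms) with a fixed 1 second dead interval.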


Girdle Wax posted:

I've come across a couple of problems, the biggest one was resulting from a distribution switch reboot and ECMP. Basically since OSPF comes up faster than BGP, my core was sending traffic for an access switch down to DistA, which didn't know how to handle it, so it sent it back up to the Core, back down to DistA etc. I managed to resolve this by adding a high metric static in each of the distribution switches, pointing to each other over the link between them. This way when a switch is still 'becoming active' after a reload, it will continue to pass traffic across to the other distribution switch which is still up, which will be able to forward the traffic as normal.
I thought you were getting customer prefixes out of OSPF? Or am I misunderstanding? Are you talking about control plane traffic?


Girdle Wax posted:

My full table BGP customers I can continue to serve out of my customer access routers like I do now. I'll probably hang a dedicated switch off each of those GigE interfaces just for that, and keep them isolated from the distribution/core. There was a proposal to perhaps hang some of these customers off 10/100/1000 aggregation switches, terminate L3 in the agg switch, then have them eBGP multihop to the distribution switches, but this bugs me in 2 ways.
-- we need to redistribute the customer's routes down to that agg switch via BGP. So if they drop their session it'll be 2+ minutes until the path back to them is good.
-- Customers in distribution, get off my lawn.
Yeah, ebgp-multihop is gross unless you're doing the neighbor loopback thing.

You could take a tip from the carriers here, and simply provide the customer a session, and tell them to do whatever they want with it. So, in other words, if they drop their session, that's their own drat fault. It's not polite, but it makes a certain sense.
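
The 'neighbor loopback thing' in config form, roughly (AS numbers and addresses made up):
code:
router bgp 65000
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 ebgp-multihop 2
 neighbor 192.0.2.1 update-source Loopback0
!
ip route 192.0.2.1 255.255.255.255 198.51.100.1
plus the mirror image on the customer side, with a static (or IGP) route so the loopbacks can reach each other.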

Girdle Wax posted:

We do have some colocation customers who do HSRP with us for failover on our and their equipment, to continue to offer this I came up with the following solution:
-- The customer will 'purchase' 4 switchports (in 2 adjacent switches), and a dedicated SVI/VLAN in 2 switches. 1 port out of each switch will go to the customer, the other 2 ports will be a dedicated 'tie' link to connect the 2 islands. Since I don't have any trunks up to the core this is the best I can think of, plus it gives the customer a dedicated link at their purchased speed to carry traffic which comes in over the secondary switch in the unlikely event that we lose all uplink on their primary switch.
This one gives me the shakes. I guess if you can charge customers for the cross-connect, it won't matter, and won't be like you're losing out on revenue producing interfaces.

There's a lot to think about here, and you're obviously much more familiar with what you've worked up than I am, not to mention familiar with your business practices. That said, it sounds like you're in the business of providing colocation at the switchport level, as well as routed interfaces. Maybe an exercise would be to take your layer-2 access-layer out of the picture completely, and develop a framework that could accommodate both layer-2 colocation and layer-3 routed interface customers identically, sans ethernet access-layer infrastructure. Then, attach access-layer devices as if they were customer owned, but provider managed.

I guess what I'm saying is, would it be easier if your colocation access devices were end-of-rack 6500's providing routed interfaces to top-of-rack customer managed switches? They could purchase HSRP and uplink diversity without your having to get stuck in spanning-tree nightmare world, because that'd be up to them.

jwh fucked around with this message at 20:27 on May 18, 2007

ragzilla
Sep 9, 2005
don't ask me, i only work here


jwh posted:

That's not a good sign :)
I know, I know, well at least this is good practice for an E/N on C-NSP if it comes to that :)

jwh posted:

Meaning, TCAM size is too small for full tables?
TCAM and main memory. We're running a mix of G-24T and G-12S 3750s, so we're stuck with the TCAM limits of the -24Ts (the 12Ss have more TCAM), but we can't trivially separate the 2 platforms to get the better TCAM. They also only have 128M (non upgradeable) of DRAM, I'm not sure if I can fit a full table into that y'know :)
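
(Side note for anyone else fighting 3750 TCAM: the carve-up is controlled by the sdm template, so it's worth checking you're on the routing template - it trades MAC/ACL entries for unicast route entries:)
code:
sdm prefer routing
! takes effect after a reload; verify with 'show sdm prefer'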

jwh posted:

That's a lot of layer 3. Are you confident that your IGP is going to reconverge AS-wide faster than spanning-tree in event of a link failure? Similarly, would these problems go away if you instead replaced your older gear with switches that supported RPVST+?

As for 6500/7600, your guess is as good as mine. I think people are still feeling this one out.
Scaled down, it seems to reconverge extremely fast. The IGP is only taking place in a single datacenter, so by keeping it to loopbacks only I'm fairly certain it will be reconverging <200ms per Cisco's docs (which were benchmarked on an MSFC2 I believe). Processors running in the production network will range from 12k GRP-B (R5000 - 200MHz), MSFC3 (R7000 600MHz I think?), to 3560 (PPC405 - 400MHz). LCD here is the GRP-B; they're R5Ks, but so long as I keep the OSPF db below 50 prefixes it should be sub-200msec. The problems may go away if we replaced legacy gear with gear that does RPVST+ (we were actually looking at going to MST, replacing 3500XLs with 2950G-48s), and we'll probably be presenting this as one of our options, but I'm really liking the advantages of moving layer3 down to the access layer in the form of simplified troubleshooting (ping and traceroute), and not having to deal with the headaches that come along with spanning-tree, rapid or no.

jwh posted:

I'm not much of an ethernet wizard, honestly, and I spend more time on the WAN side these days, but I almost feel like you're putting a heavy burden on your access layer. More intelligence at the access layer might mean more revenue opportunities, ease of management, and greater provisioning flexibility, but it also means more things that can break. Plus, if you need a customer VLAN to span multiple access layer devices, are you going to have to cross-connect those switches on an as-needed basis?
Customer VLANs (unless HA) should never have to span multiple access layer devices. If a customer is moving rows (and thus into a new access device), we'll have them renumber. Typically the people moving so far have been ones with larger amounts of gear, and have firewalls/routers at the edge of the network. Colo, both in the past and future, is handled by giving the customer a single address in a shared network (/24s currently, wanting to move to /28s and /29s in the future), then routing a subnet to that address, which they can then use for NAT 1:1 translates, or on an internal ethernet.

jwh posted:

Have you worked up all of the relevant routing protocol configurations with the vrf stuff? It can be a little screwy, ie, OSPF's vrf per-process instantiation versus mBGP address-family vpnv4.
Yeah, I have a working 'lab' setup of all this, complete with my VRF separation of data and management planes (control is still in the data plane- I see no way around that without having to mess with more VRF than I have to, ie importing routes to the internet VRF from the control plane VRF which significantly complicates troubleshooting). As soon as my switches boot up I'll grab some configs off them to see if anyone can poke some holes in that :)

jwh posted:

Well, you probably saw it too on c-nsp the other day, but apparently BFD has trouble getting below 250ms. That could have been a platform anomaly, I don't remember. If your tuned OSPF dead timer is 4 x hello interval, that's still really quick.
BFD was a forward-looking thing; with the fast OSPF hellos I'm not sure it's even necessary, but if I have a feature that could make link failure detection more reliable (fast hellos, udld, bfd), and doesn't cost me much to implement, I'd have to be insane not to use it, right? The 3560/3750 platform doesn't support BFD at this time, but apparently it may make it in sometime this year if enough people ask for it.

jwh posted:

I thought you were getting customer prefixes out of OSPF? Or am I misunderstanding? Are you talking about control plane traffic?
The customer prefixes are kept out of OSPF; the issue was happening due to the following conditions:
1) DistributionA (or B) had just rebooted, OSPF rapidly reconverged, BGP was still converging.
2) The Core switch(es) saw that they now had an extra path (via OSPF) to reach the loopback of the customer access switch (as this is iBGP, I am _not_ setting next-hop-self for BGP sessions or I lose my rapid recovery ECMP).
3) The core switch would now send traffic down to this distribution switch, which did not know how to reach the customer prefix, so it could not forward the traffic.

I mitigated this with a high metric static to the other distribution switch which gets removed from the table when BGP is fully converged. Thinking forward this may be better handled by making these static routes for my prefixes only, so if BGP to the core comes up faster (thus replacing the static default with the BGP one) than the BGP to access, I don't create a Core-Distribution-Core routing loop.
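
In config terms the workaround is just a floating static (addresses invented), e.g. on DistA pointing across the dist-dist link:
code:
ip route 0.0.0.0 0.0.0.0 10.0.0.2 250
With an AD of 250 it sits below everything else, so once iBGP (AD 200) finishes converging, the static drops out of the table on its own. The per-prefix version would be the same thing with our aggregates instead of a default.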

jwh posted:

Yeah, ebgp-multihop is gross unless you're doing the neighbor loopback thing.

You could take a tip from the carriers here, and simply provide the customer a session, and tell them to do whatever they want with it. So, in other words, if they drop their session, that's their own drat fault. It's not polite, but it makes a certain sense.
We always do neighbour loopback for BGP unless it's a point-to-(multi)point connection. And internally we always do it so we can use OSPF ECMP to provide load balancing via CEF. The issue I was thinking we'd run into in this situation (Customers doing eBGP multihop to the core):
-- Access switches would not know what prefixes the customer had until 2+ minutes had elapsed- the time for BGP scanner to run at distribution, and then again at access.
-- I can't do BGP straight out of the access switch since it'll be a 3560/3750 and thus no full tables. Customers who only take default-only could potentially be served out of an access switch though.

jwh posted:

This one gives me the shakes. I guess if you can charge customers for the cross-connect, it won't matter, and won't be like you're losing out on revenue producing interfaces.
We'll build the charge into the contract for the cross-connect. Right now we don't charge HA customers that much extra, which to me is a gross oversight since they currently consume 2x the port resources and 3x the IP resources of a regular customer. I'm guessing it'll be billed out as a recurring VLAN charge, 4 port charges, and maybe a once-off cross connect fee that will be waived almost 100% of the time. That's a sales function though :)

jwh posted:

There's a lot to think about here, and you're obviously much more familiar with what you've worked up than I am, not to mention familiar with your business practices. That said, it sounds like you're in the business of providing colocation at the switchport level, as well as routed interfaces. Maybe an exercise would be to take your layer-2 access-layer out of the picture completely, and develop a framework that could accommodate both layer-2 colocation and layer-3 routed interface customers identically, sans ethernet access-layer infrastructure. Then, attach access-layer devices as if they were customer owned, but provider managed.

I guess what I'm saying is, would it be easier if your colocation access devices were end-of-rack 6500's providing routed interfaces to top-of-rack customer managed switches? They could purchase HSRP and uplink diversity without your having to get stuck in spanning-tree nightmare world, because that'd be up to them.
It would definitely be easier if my end-of-rack devices were 6500s, but there's no way I could get the funding to do that - it's going to be like pulling teeth to get the 2 I want to replace distribution. In lieu of end-of-rack 6500s I'm going with the next best thing: a 3560/3750 end-of-rack device which can do medium-density layer 3 termination at a pretty good speed. We're currently using these to layer 3 switch a lot of our day-to-day traffic, and they should be able to handle the access layer's layer 3 aggregation until we're pressing the limits of 2 GigE uplinks from the access.

Top-of-rack switching has only happened with 1 customer I think so far, and that was a port issue - they had about a half dozen ports into our colo network for different customers of theirs, and didn't want to run their own common firewall/router for all their customers, so we pressed them into letting us land a switch in their cabinet. This is an exception rather than the rule though; 95% of customers are regular-availability, single port customers. We don't have much of an issue where we require top-of-rack anymore, as the majority of our sales now are people who are taking 1/2, or an entire, cabinet. We probably only have a handful (5, 6?) of shared cabinets on a floor of 120+.

tl;dr on this: we'll still be doing layer 2 colocation for the majority of the customers, but we will be limiting the layer 2 domain to only a handful (5-15) of customers, and the broadcast domain will not leave the switch, so we have fewer issues like broadcast storms, or spanning-tree, to worry about. The only customers getting dedicated SVIs in the access layer will be those who have purchased HA services.

-edit-
I forget, was the BFD issue in RRB? I think people have been seeing other issues in there related to CPU, could be bad scheduling. They were reporting BGP usage up to 80% of the CPU on boxes where it used to be 10% back on RRA. Why does the 7600 BU hate us.
-/edit-

Tremblay posted:

As far as 7600/6500 goes the code trains are split now. More switching features will be implemented for 6k while more routing features will be making it into the 7k. I guess it mostly comes down to the capacity you need and topology (collapsed core or distributed).
Anyone have a crystal ball? :)

I guess wrt the 6500/7600 split I'm worried about :
1) Choosing 7600, hopefully getting some RSP720s- though probably not. Finding out that SRC release notes will say "The WS-X67XX switching modules will no longer be supported in SR".
2) Choosing 6500, getting screwed over by the 6500/7600 BU split and not getting decent (service provider) features in SX.

I'm currently leaning toward the 6500 and hoping that SX will continue to add useful features for an iBGP only device. I'm probably going to keep all my eBGP in the GSRs for the foreseeable future.

To make up for all these :words:, here's a picture of a kitten, no wait, my current layer2 setup: http://starshadow.com/~ragnar/731CoLo.png
Every device has a Vl513 interface, used for management on the switches, for data plane on the routers and distribution.
Vl401 and Vl421 terminate L3 in the dist switches, running HSRP for HA.
Vl192 is our offnet management network that I really wish wasn't running in the production network, it's mostly out there to get ethernet to our cameras, console servers and muxes.
Vl402 is one of our gigabit customer networks that we do BGP over (terminates in CustomerC or D I forget which). There's a customer in C201 that takes full tables from us.

ragzilla fucked around with this message at 21:26 on May 18, 2007

Tremblay
Oct 8, 2002
More dog whistles than a Petco
As far as 7600/6500 goes the code trains are split now. More switching features will be implemented for 6k while more routing features will be making it into the 7k. I guess it mostly comes down to the capacity you need and topology (collapsed core or distributed).

jwh
Jun 12, 2002

Girdle Wax posted:

also only have 128M (non upgradeable) of DRAM, I'm not sure if I can fit a full table into that y'know :)
Oh yeah, that's a good point.

Girdle Wax posted:

I'm really liking the advantages of moving layer3 down to the access layer in the form of simplified troubleshooting (ping and traceroute), and not having to deal with the headaches that come along with spanning-tree, rapid or no.
Can't argue with that. More importantly, it sounds like you're comfortable with what you've proposed, and that should translate into operational efficacy. That's never a bad thing.

Girdle Wax posted:

As soon as my switches boot up I'll grab some configs off them to see if anyone can poke some holes in that :)
That'd be neat to look at; I managed to dodge route-target import/exports too, which turned out to be a good thing. I've been meaning to go back and lab the route-target stuff just in case we have a falling-out with Nokia, and decide to move our wan firewalls to another platform.

Girdle Wax posted:

but if I have a feature that could make link failure detection more reliable (fast hellos, udld, bfd), and doesn't cost me much to implement, I'd have to be insane not to use it right?
Well, you know what they say, the road to good intentions is paved with hell. :)

Girdle Wax posted:

Why does the 7600 BU hate us.
I think a better question is why Cisco thought it was a good idea to compete with itself, just to capture a market that used a different nomenclature. It must have felt like a good idea at the time, but nowadays it's yucky for customers. I'm just glad I'm not in the market for one.

ragzilla
Sep 9, 2005
don't ask me, i only work here


jwh posted:

That'd be neat to look at; I managed to dodge route-target import/exports too, which turned out to be a good thing. I've been meaning to go back and lab the route-target stuff just in case we have a falling-out with Nokia, and decide to move our wan firewalls to another platform.
I'm glad we don't deal with a whole lot of firewalling here, but we have a lot of brain power invested in PIX, and looking at providing 'virtual' firewalls out of FWSM's in the 6500s is pretty attractive so long as we can make the money work. We do a similar thing in our GSRs doing virtual routers for customers- landing DS1/DS3/OC3 into a VRF in the GSR, handing a dot1q subif out of that VRF, over a GigE trunk into our VRF switch, then access ports to the customer. That way they can do WAN consolidation in our DC without having to take up 1-3U of space with router(s). We also like it since it greatly reduces the number of cross connects on the floor, and we don't have to buy muxes since we hand off to the GSRs over channelized OC3.
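
That handoff, sketched out (VRF name, VLAN, addressing and interface numbering all invented - the channelized serial naming in particular varies by platform):
code:
ip vrf CUSTA
 rd 65000:100
!
interface Serial1/0.1/1:0
 ip vrf forwarding CUSTA
 ip address 10.100.0.1 255.255.255.252
!
interface GigabitEthernet2/0.402
 encapsulation dot1Q 402
 ip vrf forwarding CUSTA
 ip address 10.100.1.1 255.255.255.0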

jwh posted:

I think a better question is why Cisco thought it was a good idea to compete with itself, just to capture a market that used a different nomenclature. It must have felt like a good idea at the time, but nowadays it's yucky for customers. I'm just glad I'm not in the market for one.
As part of the presentation we're of course going to do multiple options (looking at Foundry BigIron RX / Juniper MX series). But changing vendors doesn't always work out well, since none of us are familiar with the other vendors' quirks. The last time we changed vendors we ended up with a pair of Extreme Summit 48is (the distribution switches before the ones before the current ones), which was just a terrible, terrible experience (they failed and rebooted fairly regularly, and always took over OSPF DR).

ate shit on live tv
Feb 15, 2004

by Azathoth

Girdle Wax posted:

1) Choosing 7600, hopefully getting some RSP720s- though probably not. Finding out that SRC release notes will say "The WS-X67XX switching modules will no longer be supported in SR".
2) Choosing 6500, getting screwed over by the 6500/7600 BU split and not getting decent (service provider) features in SX.

The 7600-S with the RSP already has a completely different code base. I can guarantee that the 67xx cards will work for the foreseeable future; I can also guarantee that the 6500 is not going to get the same support as the new 7600. A safe (but expensive) bet is to go with the 7600. The RSP720 is a pretty awesome Sup, and combined with the new 7600-S, as well as the new cards that will undoubtedly be coming out soon with the Fast Fabric Sync, I would be feeling pretty confident about my failover options.

Of course this may not be feasible for your situation. But if you do have the option, the 7600 is the safe way to go.

Also I'm fairly confident that the new 68xx series cards will not work in the 6500... But that is certainly not the official Cisco position.

E: Actually, to clarify, certain features of the 68xx cards won't work, like the fast fabric switching etc. But it is only a matter of time before some 7600-exclusive cards come out.

ate shit on live tv fucked around with this message at 23:17 on May 18, 2007

ragzilla
Sep 9, 2005
don't ask me, i only work here


Powercrazy posted:

The 7600-S with the RSP already has a completely different code base. I can guarantee that the 67xx cards will work for the foreseeable future; I can also guarantee that the 6500 is not going to get the same support as the new 7600. A safe (but expensive) bet is to go with the 7600. The RSP720 is a pretty awesome Sup, and combined with the new 7600-S, as well as the new cards that will undoubtedly be coming out soon with the Fast Fabric Sync, I would be feeling pretty confident about my failover options.

Of course this may not be feasible for your situation. But if you do have the option, the 7600 is the safe way to go.

Also I'm fairly confident that the new 68xx series cards will not work in the 6500... But that is certainly not the official Cisco position.

Given our past purchasing habits some parts such as the chassis will probably be purchased from a company that does network rebuilds then sells the hardware pulled out to ISPs like us, so the chassis will probably be 7609 rather than 7609-S. Redundant Sups, while nice, may not be on the table (why do we need redundant sups in a manned datacenter, that's why we have a redundant chassis design) so the fast sup failover may not be a strong selling point there.

If 67xx cards are going to stay in SR for the foreseeable future, I'd currently lean toward the 7609 chassis + Sup720 with an option for an RSP720 upgrade (since they're going to act as RRs for the rest of the network, the extra CPU could be useful for pushing out BGP updates and running scanner faster).

7609 chassis, Sup720, SR software? SR isn't going to go RSP-only anytime soon, is it? I don't imagine people with large Sup720+7600 implementations are going to be happy about forklifting all the sups in their network.

jwh
Jun 12, 2002

Girdle Wax posted:

handing a dot1q subif out of that VRF, over a GigE trunk into our VRF switch, then access ports to the customer.
Yeah, isn't that great? I love it. Opened up a whole world of options when that came down the line. Being able to isolate and preserve customer aggregation across a transit area is huge.

Girdle Wax posted:

The last time we changed vendors we ended up with a pair of Extreme Summit 48is (the distribution switches before the ones before the current ones) which was just a terrible, terrible experience (they failed and rebooted fairly regularly, and always took over OSPF DR).
Ugh, that sounds like bad times. Everybody who works with the Juniper M-series seems to really fall for them, but I've never had the chance. I would be interested in seeing how they work. I'm not sure what Foundry's up to these days (besides backdating stock options), but I used to like their ServerIrons a whole bunch. I really want to play with Alcatel gear, but the likelihood of that happening is fairly small unless I move to France, I guess.

ragzilla
Sep 9, 2005
don't ask me, i only work here


obsidian440 posted:

I hate to come off as lazy, but can I get some links to help me with doing this? I didn't see anything that jumped out at me while messing with the logging options in pix.

It looks like if you only want auth fails (basically, to get rid of ACL denies), you'll need to bump the logging up to Critical instead of Error severity. I typically manage via PDM/ASDM, but if you're managing through the console I think the command you're looking for is:
code:
logging trap critical
Which will only log Critical and Emergency priority messages. You can get a list of messages sorted by priority at http://www.cisco.com/en/US/docs/security/pix/pix63/system/message/pixemapa.html

Setting up cacti/syslog-ng is a bit beyond the scope of this thread; some quick googling should get you plenty of information on them.

jwh posted:

That'd be neat to look at; I managed to dodge route-target import/exports too, which turned out to be a good thing. I've been meaning to go back and lab the route-target stuff just in case we have a falling-out with Nokia, and decide to move our wan firewalls to another platform.
This isn't a route import/export demo; like you, I've been lucky enough to avoid that :) But the configs for the test layer3 access network I put together are up at: http://www.starshadow.com/~ragnar/switches/
Configs have (hopefully) been fully sanitized to protect the innocent.

ragzilla fucked around with this message at 01:25 on May 19, 2007

ragzilla
Sep 9, 2005
don't ask me, i only work here


edit: double post

jwh
Jun 12, 2002

Girdle Wax posted:

configs for the test layer3 access network
Looks good! Your OSPF tuning has given me some ideas for my own network. Bear in mind, if you go to more than one VRF, you might have to start using the 'capability vrf-lite' command under each OSPF process to disable the PE checks (your guess is as good as mine as to what those actually are).

I have to admit, I was on the fence about your layer-3 to the access layer after reading your lengthy post, but after seeing the configs, I really like it. Any thought to getting your own ASN from ARIN, or did you sanitize your AS to a private?

conntrack
Aug 8, 2003

by angerbeet
What's your take on the 4500 series? The 4503 chassis with the Sup-II-Plus-TS supervisor looks like a nice price/performance combo to me.

Any opinions on the chassis/supervisor?

NinjaPablo
Nov 20, 2003

Ewww it's all sticky...
Grimey Drawer
I've got a 2620 that I am trying to set up MLPPP across 2 T1s on. Spent over an hour on the phone with the ISP trying to get this working. When I mentioned it was a 2620, he immediately said the IOS version was probably too old.
code:
Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-I-M), Version 12.0(7)T,  RELEASE SOFTWARE (fc2)
Copyright (c) 1986-1999 by cisco Systems, Inc.
Compiled Tue 07-Dec-99 02:12 by phanguye
Image text-base: 0x80008088, data-base: 0x807AAF70

ROM: System Bootstrap, Version 12.1(3r)T2, RELEASE SOFTWARE (fc1)

gts_gateway uptime is 3 days, 17 hours, 59 minutes
System returned to ROM by reload
System image file is "flash:c2600-i-mz.120-7.T"

cisco 2620 (MPC860) processor (revision 0x600) with 26624K/6144K bytes of memory.
I was only able to get it running on 1 T1, leaving that T1 encap as HDLC.

I was able to bring either T1 up as PPP, but they would not pass any traffic. I was also able to bring both T1s up as PPP, add them to a multilink group, and have that multilink show up/up, but not pass traffic.

I'm running a very basic config on this. Any ideas on what I need to do to get this running correctly?

ragzilla
Sep 9, 2005
don't ask me, i only work here


NinjaPablo posted:

I've got a 2620 that I am trying to set up MLPPP across 2 T1s on. Spent over an hour on the phone with the ISP trying to get this working. When I mentioned it was a 2620, he immediately said the IOS version was probably too old.
code:
Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-I-M), Version 12.0(7)T,  RELEASE SOFTWARE (fc2)
Copyright (c) 1986-1999 by cisco Systems, Inc.
Compiled Tue 07-Dec-99 02:12 by phanguye
Image text-base: 0x80008088, data-base: 0x807AAF70

ROM: System Bootstrap, Version 12.1(3r)T2, RELEASE SOFTWARE (fc1)

gts_gateway uptime is 3 days, 17 hours, 59 minutes
System returned to ROM by reload
System image file is "flash:c2600-i-mz.120-7.T"

cisco 2620 (MPC860) processor (revision 0x600) with 26624K/6144K bytes of memory.
I was only able to get it running on 1 T1, leaving that T1 encap as HDLC.

I was able to bring either T1 up as PPP, but they would not pass any traffic. I was also able to bring both T1s up as PPP, add them to a multilink group, and have that multilink show up/up, but not pass traffic.

I'm running a very basic config on this. Any ideas on what I need to do to get this running correctly?

MLPPP is supported in 12.0(7)T; mind showing us your configs? Is your ISP configured for MLPPP? Can you ping across the MLPPP bundle once it comes up/up?

NinjaPablo
Nov 20, 2003

Ewww it's all sticky...
Grimey Drawer
I was able to ping and telnet by IP address only when the MLPPP config was in place, or when I was only using a single T1 as PPP. As soon as I had the ISP change their end from MLPPP back to HDLC and switched back to HDLC on my end, all normal traffic would work.
code:
ip subnet-zero

interface Multilink1
 ip address 192.168.0.1 255.255.255.252
 no ip directed-broadcast
 no cdp enable
 ppp chap hostname gateway
 ppp multilink
 no ppp multilink fragmentation
 multilink-group 1

interface FastEthernet0/0
 ip address 10.0.0.1 255.255.255.224
 no ip directed-broadcast
 speed 100
 full-duplex

interface Serial0/0 (currently running on this alone)
 ip address 192.168.0.1 255.255.255.252
 no ip directed-broadcast
 no fair-queue

interface Serial0/1 (when I had the ISP switch to PPP, both interfaces looked exactly like this)
 ip address 192.168.0.1 255.255.255.252
 no ip directed-broadcast
 encapsulation ppp
 no fair-queue
 ppp chap hostname gateway
 ppp multilink
 multilink-group 1

ip classless
ip route 0.0.0.0 0.0.0.0 Serial0/0 (I changed this to m1 when the MLPPP config was in place)
no ip http server

NinjaPablo fucked around with this message at 23:01 on May 21, 2007

CrazyLittle
Sep 11, 2001





Clapping Larry
What's the newest firmware I can run on a Cisco 2621 with 8mb flash, 24mb dram?

ragzilla
Sep 9, 2005
don't ask me, i only work here


CrazyLittle posted:

What's the newest firmware I can run on a Cisco 2621 with 8mb flash, 24mb dram?

12.1.27b
or
12.2.12m

If you upgraded the RAM to at least 32M you could run the latest which is 12.3.22

NinjaPablo posted:

I was able to ping and telnet by IP address only when the MLPPP config was in place, or when I was only using a single T1 as PPP. As soon as I'd have the ISP update their end to not be MLPPP, and change back to HDLC, and I switched back to HDLC on my end, all normal traffic would work.
Could you ping things _beyond_ the ISP by IP when it was up in MLPPP mode? If so, they may have forgotten to update the route for the subnet they're sending you. Also, you typically remove the IP address from the member interfaces when they're in an MLPPP bundle, so they should say something like 'no ip address'.
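For what it's worth, a cleaned-up version of your config with the member interfaces stripped of their addresses would look roughly like this. Just a sketch reusing the addresses from your post; your ISP's side has to match, of course.

```
interface Multilink1
 ip address 192.168.0.1 255.255.255.252
 ppp chap hostname gateway
 ppp multilink
 multilink-group 1
!
! Member links carry no IP of their own; the bundle owns the address.
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp chap hostname gateway
 ppp multilink
 multilink-group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp chap hostname gateway
 ppp multilink
 multilink-group 1
!
ip route 0.0.0.0 0.0.0.0 Multilink1
```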

Korensky
Jan 13, 2004

conntrack posted:

What's your take on the 4500 series? The 4503 chassis with the Sup-II-Plus-TS supervisor looks like a nice price/performance combo to me.

Any opinions on the chassis/supervisor?

Really poor layer 3 forwarding performance and very limited QOS and routing capabilities. It's ok if it's just one switch in the middle of your network but if you're managing a somewhat large network, I'd almost certainly go for a SupV and run native IOS.

CrazyLittle
Sep 11, 2001





Clapping Larry

Girdle Wax posted:

12.1.27b
or
12.2.12m

If you upgraded the RAM to at least 32M you could run the latest which is 12.3.22

Yeah, drat. I was hoping somebody knew of a "magic" build of 12.3 that would fit in there, but then again 2621's are pretty drat old.

I got a better question actually though. I'm trying to use OER on a 1841 across a DSL connection and a T1 connection. I setup the route maps to send mail traffic over the T1, but for some reason the ACL isn't matching, or the route-map isn't setting the next hop properly:

72.14.253.103 = DSL gateway
72.14.253.206 = T1 gateway

code:
!SIP clients on vlan 2
access-list 5 permit 10.0.10.0 0.0.0.255

!lan pcs
access-list 6 permit 10.0.0.0 0.0.0.255

!extended ACL for lan pcs (seems to catchall)
access-list 102 permit ip 10.0.0.0 0.0.0.255 any

!extended ACL for SIP clients
access-list 110 permit ip 10.0.10.0 0.0.0.255 any

!Set default route for all PC traffic over the DSL
route-map dslnat permit 10
 match ip address 6 5
 match interface ATM0/1/0.1 Serial0/0/0

!Set default route for all SIP traffic over the T1
route-map voice-t1 permit 10
 match ip address 110
 set ip next-hop 72.14.253.103 72.14.253.206

!Same as dslnat?
route-map web-dsl permit 10
 match ip address 102
 set ip next-hop 72.14.253.206 72.14.253.103

!Same as voice-t1?
route-map t1nat permit 10
 match ip address 5 6
 match interface Serial0/0/0 ATM0/1/0.1


1841router#show route-map
route-map dslnat, permit, sequence 10
  Match clauses:
    ip address (access-lists): 6 5
    interface ATM0/1/0.1 Serial0/0/0
  Set clauses:
  Policy routing matches: 0 packets, 0 bytes
route-map voice-t1, permit, sequence 10
  Match clauses:
    ip address (access-lists): 110 101
  Set clauses:
    ip next-hop 72.14.253.103 72.14.253.206
  Policy routing matches: 0 packets, 0 bytes
route-map web-dsl, permit, sequence 10
  Match clauses:
    ip address (access-lists): 102
  Set clauses:
    ip next-hop 72.14.253.206 72.14.253.103
  Policy routing matches: 14444933 packets, 1865843751 bytes
route-map t1nat, permit, sequence 10
  Match clauses:
    ip address (access-lists): 5 6
    interface Serial0/0/0 ATM0/1/0.1
  Set clauses:
  Policy routing matches: 0 packets, 0 bytes

Edit: found the solution.

code:
route-map web-dsl, permit, sequence 9
  Match clauses:
    ip address (access-lists): 101
  Set clauses:
    ip next-hop 72.14.253.103 72.14.253.206 

Herv posted:

You are applying the correct route-map to the correct interface?

You can only have one route-map per interface by the way. Have to use sequence numbers like crypto-maps.

That was the answer - the ACL I added would never get matched because it was trying to match on the wrong VLAN. Adding a route-map on the correct interface with a higher precedence number fixed it.

CrazyLittle fucked around with this message at 16:28 on May 22, 2007

Boner Buffet
Feb 16, 2006
I have a question about spanning tree portfast. How many here make use of it, and where do you use it? As I understand it, I would want to use portfast on ports dedicated to end nodes only. Any sort of port that is linked to a switch in either direction shouldn't have portfast enabled. Am I right in this thinking?

ragzilla
Sep 9, 2005
don't ask me, i only work here


InferiorWang posted:

I have a question about spanning tree portfast. How many here make use of it, and where do you use it? As I understand it, I would want to use portfast on ports dedicated to end nodes only. Any sort of port that is linked to a switch in either direction shouldn't have portfast enabled. Am I right in this thinking?

Yes. It enables a port to still have spanning tree enabled on it, but it skips the Listening/Learning states and heads straight to Forwarding. If you do that on a port facing a switch or hub, congratulations, you probably just created a loop.
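On a Catalyst the usual pattern looks something like the sketch below: portfast only on access ports facing end hosts, plus BPDU guard as a safety net so the port shuts down if somebody plugs a switch in anyway. Interface names are placeholders, and bpduguard availability depends on your platform/IOS version.

```
! Access port to an end host: skip Listening/Learning,
! but err-disable the port if a BPDU ever shows up.
interface FastEthernet0/1
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable
!
! Uplink to another switch: leave portfast off and let
! spanning tree converge normally.
interface GigabitEthernet0/1
 switchport mode trunk
```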

conntrack
Aug 8, 2003

by angerbeet

Korensky posted:

Really poor layer 3 forwarding performance and very limited QOS and routing capabilities. It's ok if it's just one switch in the middle of your network but if you're managing a somewhat large network, I'd almost certainly go for a SupV and run native IOS.

I'm going from a 3750 stack with all our fiber in a 12S model.

Not so hot either, according to the switchperformance.pdf in the first post.

I'm not that schooled on Cisco, but what's native IOS?

Herv
Mar 24, 2005

Soiled Meat

CrazyLittle posted:

Yeah, drat. I was hoping somebody knew of a "magic" build of 12.3 that would fit in there, but then again 2621's are pretty drat old.

I got a better question actually though. I'm trying to use OER on a 1841 across a DSL connection and a T1 connection. I setup the route maps to send mail traffic over the T1, but for some reason the ACL isn't matching, or the route-map isn't setting the next hop properly:



(Config snipped; it was posted without the interface configs showing where the route-maps are applied.)

Well you can get 3rd party memory rather cheap. I still use a 2600 just maxed out the memory/flash.

I use this place, never had a problem with their 3rd party.

http://www.ciscomemoryupgrades.com/cisco-memory.html
code:
c2611#sh ver
Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-J1S3-M), Version 12.3(22), RELEASE SOFTWARE (fc2)

ROM: System Bootstrap, Version 11.3(2)XA4, RELEASE SOFTWARE (fc1)
ROM: C2600 Software (C2600-J1S3-M), Version 12.3(22), RELEASE SOFTWARE (fc2)

System image file is "flash:c2600-j1s3-mz.123-22.bin"

cisco 2611 (MPC860) processor (revision 0x203) with 61440K/4096K bytes of memory.
2 Ethernet/IEEE 802.3 interface(s)
1 Serial network interface(s)
1 ATM network interface(s)
32K bytes of non-volatile configuration memory.
16384K bytes of processor board System flash (Read/Write)
Why not classify the email traffic using TCP 25? (edit) Or the client protocols, whatever you want to policy-route.

You are applying the correct route-map to the correct interface?

You can only have one route-map per interface by the way. Have to use sequence numbers like crypto-maps.

Cisco posted:

Router(config-if)# ip policy route-map map-tag


Identifies the route map to use for PBR. One interface can have only one route map tag; but you can have several route map entries, each with its own sequence number. Entries are evaluated in order of their sequence numbers until the first match occurs. If no match occurs, packets are routed as usual.

Doc
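Putting those two suggestions together, a single route-map with multiple sequence numbers applied to the inside interface might look like the sketch below. It reuses CrazyLittle's next-hop addresses and his ACL 102; the ACL number 120, the route-map name, and the interface name are placeholders I made up.

```
! Classify mail traffic by port instead of by source subnet.
access-list 120 permit tcp 10.0.0.0 0.0.0.255 any eq smtp
!
! One route-map, evaluated by sequence number: mail prefers the T1,
! everything else prefers the DSL, each with the other as fallback.
route-map inside-pbr permit 10
 match ip address 120
 set ip next-hop 72.14.253.206 72.14.253.103
!
route-map inside-pbr permit 20
 match ip address 102
 set ip next-hop 72.14.253.103 72.14.253.206
!
! Apply the single route-map to the LAN-facing interface.
interface FastEthernet0/0
 ip policy route-map inside-pbr
```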

ate shit on live tv
Feb 15, 2004

by Azathoth

conntrack posted:

I'm going from a 3750 stack with all our fiber in a 12S model.

Not so hot either, according to the switchperformance.pdf in the first post.

I'm not that schooled on Cisco, but what's native IOS?

I assume that he is referring to older supervisor cards that used CatOS for the switch processor and IOS for the route processor. You could, with a very convoluted process, upgrade from CatOS to native IOS, where you would have IOS running on both the switch processor and the route processor. Then, even though as a user you would only see the switch processor, any changes to the config on the switch processor would be mirrored onto the route processor, and thus you would be running "native IOS." But I am probably missing something.

I guess it could also mean that the supervisor just came with IOS on it already and that way you wouldn't have to deal with the upgrade from CatOS to IOS.

CrazyLittle
Sep 11, 2001





Clapping Larry

Herv posted:

You are applying the correct route-map to the correct interface?

You can only have one route-map per interface by the way. Have to use sequence numbers like crypto-maps.

That was the answer - the ACL I added would never get matched because it was trying to match on the wrong VLAN. Adding a route-map on the correct interface with a higher precedence number fixed it.

Herv
Mar 24, 2005

Soiled Meat

CrazyLittle posted:

That was the answer - the ACL I added would never get matched because it trying to match on the wrong vlan. Adding a route-map on the correct interface with a higher precedence number fixed it.

Good deal, glad to help.

By the way, have you had luck failing over to the second ip addresses in your set ip next-hop statements?

CrazyLittle
Sep 11, 2001





Clapping Larry

Herv posted:

Good deal, glad to help.

By the way, have you had luck failing over to the second ip addresses in your set ip next-hop statements?

Yeah actually. It takes about 20-30 seconds for the initial hop to "fail" with concrete results, but it actually does roll over. That suggestion came from the Cisco TAC group. It's a shame they're too dumb to implement a -real- OER configuration though :( It turns out the configuration I have running on that 1841 is an orphaned OER border/master that does nothing while the Policy-based routing does all the heavy lifting.
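If that 20-30 second failover ever gets painful, later IOS trains can tie the next-hop choice to an object tracked by an IP SLA probe, roughly as sketched below. Feature availability depends on your 12.3T/12.4 feature set, the probe target here is just the DSL gateway from the earlier config, and the timers are placeholders.

```
! Ping the DSL next-hop every 5 seconds.
ip sla monitor 1
 type echo protocol ipIcmpEcho 72.14.253.103
 frequency 5
ip sla monitor schedule 1 life forever start-time now
!
! Track object 1 follows the probe's reachability state.
track 1 rtr 1 reachability
!
! Only use the DSL next-hop while the track is up; otherwise
! fall through to the T1 next-hop.
route-map web-dsl permit 10
 match ip address 102
 set ip next-hop verify-availability 72.14.253.103 1 track 1
 set ip next-hop 72.14.253.206
```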

GOOCHY
Sep 17, 2003

In an interstellar burst I'm back to save the universe!

jwh posted:

Ugh, that sounds like bad times. Everybody who works with the Juniper M-series seems to really fall for them, but I've never had the chance.

We have a few Juniper m7i's where I work and they are awesome. I think they're super easy to use. We were bought out by a corp that is "powered by Cisco" so now I'm getting used to the 7200 and 7600 series stuff.

Korensky
Jan 13, 2004

conntrack posted:

Im going from a 3750 stack with all our fiber in a 12S model.

Not so hot either acordning to the switchperformande.pdf in the first post.

Im not that schooled on cisco but native IOS?

3750Gs are the pinnacle of awesome. What are your port density and packet forwarding requirements? I actually hadn't checked out the Plus-TS (I instantly summoned memories of a 4500 with Sup2 with an L3SM or vanilla Sup2).

The 3750G-12S only has a lower aggregate packet forwarding rate because it has fewer interfaces to forward on. I doubt you are running this thing at line-rate :)

The only situation where you'd compromise on performance in the 3750G series switches is if you're stacking them and joining the shared 32gig fabric together (in which case you have 2 x 16gig rings between the entire stack).

conntrack
Aug 8, 2003

by angerbeet

Korensky posted:

3750Gs are the pinnacle of awesome. What are your port density and packet forwarding requirements? I actually hadn't checked out the Plus-TS (I instantly summoned memories of a 4500 with Sup2 with an L3SM or vanilla Sup2).

The 3750G-12S only has lower packet forwarding rates due to the total number of packets that can be forwarded on the number of interfaces it has. I doubt you are running this thing at line-rate :)

The only situation where you'd compromise on performance in the 3750G series switches is if you're stacking them and joining the shared 32gig fabric together (in which case you have 2 x 16gig rings between the entire stack).

True on the bandwidth part. We only have a few ports really: 16 LX and a handful of 1000TP. The 4503 solution is a tiny bit cheaper than a new 3750 12S + 24TS setup though.

I like the idea of redundant PSUs and a blade setup. The Sup-II-Plus-TS and the 6-port blade are supposedly wire rate, so I shouldn't lose out on anything with the 4503 over the 3750 setup?

Herv
Mar 24, 2005

Soiled Meat

CrazyLittle posted:

Yeah actually. It takes about 20-30 seconds for the initial hop to "fail" with concrete results, but it actually does roll over. That suggestion came from the Cisco TAC group. It's a shame they're too dumb to implement a -real- OER configuration though :( It turns out the configuration I have running on that 1841 is an orphaned OER border/master that does nothing while the Policy-based routing does all the heavy lifting.

I wasn't sure if NAT was being used as well, saw the word a few times, but didn't know the interface configs. That can add another stick in the spokes for the set ip next hop, at least it did for me.

Go get some ram! :buddy:

ragzilla
Sep 9, 2005
don't ask me, i only work here


Korensky posted:

The only situation where you'd compromise on performance in the 3750G series switches is if you're stacking them and joining the shared 32gig fabric together (in which case you have 2 x 16gig rings between the entire stack).

If you do lots of traffic you'll notice the limitations of the 3750 (non-E variant) fairly quickly; the ring bandwidth is used for _all_ traffic, even if it's between ports on the same switch. Due to the way the original StackWise system was engineered, packets are source-stripped from the ring, i.e. the switch gets a packet, sticks it on the ring so everyone can get it, then strips it from the ring when it comes back around. The newer 3750-Es can do destination stripping, where the packet is stripped by the destination switch. They can also do local switching, so packets going from port 1 to port 2 on the same switch are switched inside that switch instead of going around the ring.

http://www.cisco.com/en/US/products/hw/switches/ps5023/products_white_paper09186a00801b096a.shtml

CrazyLittle
Sep 11, 2001





Clapping Larry

Herv posted:

Go get some ram! :buddy:

Pfft! Why would I upgrade the RAM on a 2621 when I have two more 1841s and a whole box of 1720s in front of me

wolrah
May 8, 2006
what?
Is there some wizard or guide where I can look and discover what model router I need to support a specific set of interfaces? Cisco Feature Navigator is useless in this regard.

If not, what do I need to handle 2x Ethernet, 1x T1, and 1x G.DMT ADSL? I'm looking for a home router (the T1 interface is for testing T1 routers I bring home from work) that I can also learn IOS with, so the cheaper the better. It'll be a fairly simple configuration on the software side (pppoe on the DSL, NAT, and simple routing between the other 3 interfaces), it's just the number of interfaces that make things complicated.

ragzilla
Sep 9, 2005
don't ask me, i only work here


wolrah posted:

Is there some wizard or guide where I can look and discover what model router I need to support a specific set of interfaces? Cisco Feature Navigator is useless in this regard.

If not, what do I need to handle 2x Ethernet, 1x T1, and 1x G.DMT ADSL? I'm looking for a home router (the T1 interface is for testing T1 routers I bring home from work) that I can also learn IOS with, so the cheaper the better. It'll be a fairly simple configuration on the software side (pppoe on the DSL, NAT, and simple routing between the other 3 interfaces), it's just the number of interfaces that make things complicated.

I don't think any 1600/1700 series hardware can do that. Your best bet's probably a 2621 or 2621XM, which has 2 FEs built in and 2 WIC slots (one for a WIC-1DSU-T1 and another for an ADSL WIC). Be forewarned that the ADSL WICs are quite expensive; it might be more cost-effective to add another Ethernet port (with a WIC-1ENET) and plug into a DSL modem/bridge.
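If you do go the ADSL WIC route, the PPPoE-client side on IOS of that era looks roughly like the sketch below. The VPI/VCI, CHAP credentials, and inside subnet are all assumptions your ISP would determine, and very old images may need a vpdn-group instead of the bare pppoe-client command.

```
! ATM interface hosts the PVC; the Dialer carries the PPP session.
interface ATM0/0
 no ip address
 pvc 0/35
  pppoe-client dial-pool-number 1
!
interface Dialer1
 ip address negotiated
 ip mtu 1492
 ip nat outside
 encapsulation ppp
 dialer pool 1
 ppp chap hostname user@isp.example
 ppp chap password secret
!
interface FastEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip nat inside
!
! Overload NAT for the inside LAN out the DSL.
ip nat inside source list 1 interface Dialer1 overload
access-list 1 permit 192.168.1.0 0.0.0.255
ip route 0.0.0.0 0.0.0.0 Dialer1
```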


Herv
Mar 24, 2005

Soiled Meat

Girdle Wax posted:

I don't think any 1600/1700 series hardware can do that. Your best bet's probably a 2621 or 2621XM which has 2 FE's built in, and 2 WIC slots (1 for WIC-1DSU-T1 and another for an ADSL WIC). Be forewarned that the ADSL WICs are quite expensive, it might be more cost effective to add another ethernet (with a WIC-1ENET) and plugging into a DSL modem/bridge.

I just bought a pair of WIC1 DSLs off eBay, never had a problem.
