|
Girdle Wax posted:If you're stuck on 6.x, you'll need to look into setting the levels of the messages you're trying to pick out, then setting the pix to only send messages of that severity or higher, which might help your signal-to-noise. I hate to come off as lazy, but can I get some links to help me with doing this? I didn't see anything that jumped out at me while messing with the logging options in pix.
|
# ? May 18, 2007 13:21 |
|
Girdle Wax posted:Lotta stuff, but STP is killing me. I haven't tested this in ages, but the last time I had redundant trunks I did a load balancing / failover situation, certainly tested it and did not notice any real convergence time that would make me nervous. Here's a quick and dirty example of how it worked between core and dist for example: code:
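Something along these lines — interface names and VLAN numbers here are made up, but the idea is that each trunk is the cheap path for half the VLANs, so both links carry traffic and either one can take over the other's VLANs on failure:

```
! Trunk 1: preferred path for VLANs 10,20; backup for 30,40
interface GigabitEthernet0/1
 switchport mode trunk
 spanning-tree vlan 10,20 cost 4
 spanning-tree vlan 30,40 cost 19
!
! Trunk 2: the mirror image
interface GigabitEthernet0/2
 switchport mode trunk
 spanning-tree vlan 30,40 cost 4
 spanning-tree vlan 10,20 cost 19
```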
Sorry, time is dulling my memory of this, but I would certainly remember the 45 sec pucker time for sure.
|
# ? May 18, 2007 15:18 |
|
Girdle Wax posted:Waiting 40 seconds for spanning tree to reconverge because a gig link dropped, really, really sucks. I think you're looking for uplinkfast & backbonefast. PVST has a bunch of knobs besides portfast & bpduguard. http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a0080094641.shtml http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a00800c2548.shtml http://www.cisco.com/en/US/tech/tk389/tk621/tsd_technology_support_protocol_home.html
|
# ? May 18, 2007 15:41 |
|
inignot posted:I think you're looking for uplinkfast & backbonefast. PVST has a bunch of knobs besides portfast & bpduguard. Backbonefast is still going to cost me 30 seconds from what I'm reading: it saves the 20 seconds of max_age timer, but I still have 2x15 for listening and learning. The situation it would be detecting would be rare in my environment anyway; uplinkfast would definitely save us some convergence time on the access switches though.
|
# ? May 18, 2007 19:35 |
|
Girdle Wax posted:Cisco goons, you're my only hope Girdle Wax posted:Since they're 3750 stacks though, they have a snowball's chance in hell of doing anything decent BGP wise to get them to make smart routing decisions at distribution Girdle Wax posted:- Get rid of spanning tree. How? Everything coming out of the distribution switches (pondering between 6509/Sup720 and 7609/RSP720 for this- arguments for/against these would also be welcome) is a layer 3 link, layer 3 links up to the core routers. layer 3 links down to the customer access routers. layer 3 links down to the access/aggregation switches. layer 3 links 'across' to my existing distribution switches to get the legacy network uplinked through the new network. As for 6500/7600, your guess is as good as mine. I think people are still feeling this one out. Girdle Wax posted:To accommodate getting rid of spanning tree, I want to move the layer3 termination down into the customer access switch (probably using 3560s). 2 routed uplinks, 1 to each distribution switch. Then a handful of SVI's with /28s or /29s serving a handful of customers. These switches will participate in the OSPF loopback program, and limited iBGP as listed below. I'm not much of an ethernet wizard, honestly, and I spend more time on the WAN side these days, but I almost feel like you're putting a heavy burden on your access layer. More intelligence at the access layer might mean more revenue opportunities, ease of management, and greater provisioning flexibility, but it also means more things that can break. Plus, if you need a customer VLAN to span multiple access layer devices, are you going to have to cross-connect those switches on an as-needed basis? Girdle Wax posted:- All internet traffic in the access/distribution switches will be in an internet VRF (vrf-lite in all the switches basically). 
This is mainly a management thing- I plan to use the main routing table for management access only- haul a single eth from every switch back to an isolated switch for a new management network. So I can actually get into the switches over ssh/telnet when the poo poo hits the fan. Part of this project will also be making sure we have working consoles in case that switch fails *grumble*. This introduces somewhat of a learning curve (remembering to type 'vrf internet' all the time), but I feel the management advantages outweigh the disadvantages. Girdle Wax posted:- OSPF is optimized as described in the Cisco doc above (200ms hellos for fast failure detection- in the future we will probably also look at doing BFD if the 3560 ever supports it), in addition to ispf to speed up spf calculations. Girdle Wax posted:I've come across a couple of problems, the biggest one was resulting from a distribution switch reboot and ECMP. Basically since OSPF comes up faster than BGP, my core was sending traffic for an access switch down to DistA, which didn't know how to handle it, so it sent it back up to the Core, back down to DistA etc. I managed to resolve this by adding a high metric static in each of the distribution switches, pointing to each other over the link between them. This way when a switch is still 'becoming active' after a reload, it will continue to pass traffic across to the other distribution switch which is still up, which will be able to forward the traffic as normal. Girdle Wax posted:My full table BGP customers I can continue to serve out of my customer access routers like I do now. I'll probably hang a dedicated switch off each of those GigE interfaces just for that, and keep them isolated from the distribution/core. There was a proposal to perhaps hang some of these customers off 10/100/1000 aggregation switches, terminate L3 in the agg switch then have them eBGP multihop to the distribution switches but this bugs me in 2 ways. 
You could take a tip from the carriers here, and simply provide the customer a session, and tell them to do whatever they want with it. So, in other words, if they drop their session, that's their own drat fault. It's not polite, but it makes a certain sense. Girdle Wax posted:We do have some colocation customers who do HSRP with us for failover on our and their equipment, to continue to offer this I came up with the following solution: There's a lot to think about here, and you're obviously much more familiar with what you've worked up than I am, not to mention familiar with your business practices. That said, it sounds like you're in the business of providing colocation at the switchport level, as well as routed interfaces. Maybe an exercise would be to take your layer-2 access-layer out of the picture completely, and develop a framework that could accommodate both layer-2 colocation and layer-3 routed interface customers identically, sans ethernet access-layer infrastructure. Then, attach access-layer devices as if they were customer owned, but provider managed. I guess what I'm saying is, would it be easier if your colocation access devices were end-of-rack 6500's providing routed interfaces to top-of-rack customer managed switches? They could purchase HSRP and uplink diversity without your having to get stuck in spanning-tree nightmare world, because that'd be up to them. jwh fucked around with this message at 20:27 on May 18, 2007 |
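For reference, the vrf-lite arrangement described above boils down to something like this sketch (the VRF name is from the post; RD, interfaces, and addresses are placeholders):

```
! Internet traffic lives in its own VRF; the global table is
! reserved for the out-of-band management network.
ip vrf internet
 rd 65000:1
!
interface GigabitEthernet0/1
 description uplink to distribution (internet VRF)
 ip vrf forwarding internet
 ip address 192.0.2.1 255.255.255.252
!
interface FastEthernet0/48
 description management uplink (global table)
 ip address 10.255.0.10 255.255.255.0
!
! The learning-curve part: day-to-day commands need the vrf keyword:
! ping vrf internet 192.0.2.2
! show ip route vrf internet
```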
# ? May 18, 2007 19:45 |
|
jwh posted:That's not a good sign jwh posted:Meaning, TCAM size is too small for full tables? jwh posted:That's a lot of layer 3. Are you confident that your IGP is going to reconverge AS-wide faster than spanning-tree in event of a link failure? Similarly, would these problems go away if you instead replaced your older gear with switches that supported RPVST+? jwh posted:I'm not much of an ethernet wizard, honestly, and I spend more time on the WAN side these days, but I almost feel like you're putting a heavy burden on your access layer. More intelligence at the access layer might mean more revenue opportunities, ease of management, and greater provisioning flexibility, but it also means more things that can break. Plus, if you need a customer VLAN to span multiple access layer devices, are you going to have to cross-connect those switches on an as-needed basis? jwh posted:Have you worked up all of the relevant routing protocol configurations with the vrf stuff? It can be a little screwy, ie, OSPF's vrf per-process instantiation versus mBGP address-family vpnv4. jwh posted:Well, you probably saw it too on c-nsp the other day, but apparently BFD has trouble getting below 250ms. That could have been a platform anomaly, I don't remember. If your tuned OSPF dead timer is 4 x hello interval, that's still really quick. jwh posted:I thought you were getting customer prefixes out of OSPF? Or am I misunderstanding? Are you talking about control plane traffic? 1) DistributionA (or B) had just rebooted, OSPF rapidly reconverged, BGP was still converging. 2) The Core switch(es) saw that they now had an extra path (via OSPF) to reach the loopback of the customer access switch (as this is iBGP, I am _not_ setting next-hop-self for BGP sessions or I lose my rapid recovery ECMP). 3) The core switch would now send traffic down to this distribution switch, which did not know how to reach the customer prefix, so it could not forward the traffic. 
I mitigated this with a high metric static to the other distribution switch which gets removed from the table when BGP is fully converged. Thinking forward, this may be better handled by making these static routes for my prefixes only, so if BGP to the core comes up faster (thus replacing the static default with the BGP one) than the BGP to access, I don't create a Core-Distribution-Core routing loop. jwh posted:Yeah, ebgp-multihop is gross unless you're doing the neighbor loopback thing. -- Access switches would not know what prefixes the customer had until 2+ minutes had elapsed- the time for BGP scanner to run at distribution, and then again at access. -- I can't do BGP straight out of the access switch since it'll be a 3560/3750 and thus no full tables. Customers who take default-only could potentially be served out of an access switch though. jwh posted:This one gives me the shakes. I guess if you can charge customers for the cross-connect, it won't matter, and won't be like you're losing out on revenue producing interfaces. jwh posted:There's a lot to think about here, and you're obviously much more familiar with what you've worked up than I am, not to mention familiar with your business practices. That said, it sounds like you're in the business of providing colocation at the switchport level, as well as routed interfaces. Maybe an exercise would be to take your layer-2 access-layer out of the picture completely, and develop a framework that could accommodate both layer-2 colocation and layer-3 routed interface customers identically, sans ethernet access-layer infrastructure. Then, attach access-layer devices as if they were customer owned, but provider managed. 
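The workaround amounts to a floating static on each distribution switch, something like this (the next-hop address stands in for the inter-dist link peer):

```
! On DistA: while BGP is still converging after a reload, punt
! traffic across to DistB instead of bouncing it back to the core.
! Distance 250 sits above iBGP's 200, so the route drops out of
! the table as soon as the BGP-learned default shows up.
ip route 0.0.0.0 0.0.0.0 192.0.2.2 250
```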
Top-of-rack switching has only happened with 1 customer I think so far and that was a port issue- they had about a half dozen ports into our colo network for different customers of theirs, and didn't want to run their own common firewall/router for all their customers so we pressed them into letting us land a switch in their cabinet. This is an exception rather than the rule though, 95% of customers are regular-availability, single port customers. We don't have much of an issue where we require top-of-rack anymore as the majority of our sales now are people who are taking 1/2, or an entire cabinet, we probably only have a handful (5, 6?) of shared cabinets on a floor of 120+. tl;dr on this, we'll still be doing layer2 colocation for the majority of the customers, but we will be limiting the layer2 domain to only a handful (5-15) customers, and the broadcast domain will not leave the switch so we have fewer issues like broadcast storms, or spanning-tree to worry about. The only customers getting dedicated SVIs in the access layer will be those who have purchased HA services. -edit- I forget, was the BFD issue in RRB? I think people have been seeing other issues in there related to CPU, could be bad scheduling. They were reporting BGP usage up to 80% of the CPU on boxes where it used to be 10% back on RRA. Why does the 7600 BU hate us. -/edit- Tremblay posted:As far as 7600/6500 goes the code trains are split now. More switching features will be implemented for 6k while more routing features will be making it into the 7k. I guess it mostly comes down to the capacity you need and topology (collapsed core or distributed). I guess wrt the 6500/7600 split I'm worried about: 1) Choosing 7600, hopefully getting some RSP720s- though probably not. Finding out that SRC release notes will say "The WS-X67XX switching modules will no longer be supported in SR". 
2) Choosing 6500, getting screwed over by the 6500/7600 BU split and not getting decent (service provider) features in SX. I'm currently leaning toward the 6500 and hoping that SX will continue to add useful features for an iBGP only device. I'm probably going to keep all my eBGP in the GSRs for the foreseeable future. To make up for all these, here's a picture of a kitten, no wait, my current layer2 setup: http://starshadow.com/~ragnar/731CoLo.png Every device has a Vl513 interface, used for management on the switches, for data plane on the routers and distribution. Vl401 and Vl421 terminate L3 in the dist switches, running HSRP for HA. Vl192 is our offnet management network that I really wish wasn't running in the production network, it's mostly out there to get ethernet to our cameras, console servers and muxes. Vl402 is one of our gigabit customer networks that we do BGP over (terminates in CustomerC or D I forget which). There's a customer in C201 that takes full tables from us. ragzilla fucked around with this message at 21:26 on May 18, 2007 |
# ? May 18, 2007 21:04 |
|
As far as 7600/6500 goes the code trains are split now. More switching features will be implemented for 6k while more routing features will be making it into the 7k. I guess it mostly comes down to the capacity you need and topology (collapsed core or distributed).
|
# ? May 18, 2007 21:05 |
|
Girdle Wax posted:also only have 128M (non upgradeable) of DRAM, I'm not sure if I can fit a full table into that y'know Girdle Wax posted:I'm really liking the advantages of moving layer3 down to the access layer in the form of simplified troubleshooting (ping and traceroute), and not having to deal with the headaches that come along with spanning-tree, rapid or no. Girdle Wax posted:As soon as my switches boot up I'll grab some configs off them to see if anyone can poke some holes in that Girdle Wax posted:but if I have a feature that could make link failure detection more reliable (fast hellos, udld, bfd), and doesn't cost me much to implement, I'd have to be insane not to use it right? Girdle Wax posted:Why does the 7600 BU hate us.
|
# ? May 18, 2007 22:31 |
|
jwh posted:That'd be neat to look at; I managed to dodge route-target import/exports too, which turned out to be a good thing. I've been meaning to go back and lab the route-target stuff just in case we have a falling-out with Nokia, and decide to move our wan firewalls to another platform. jwh posted:I think a better question is why Cisco thought it was a good idea to compete with itself, just to capture a market that used a different nomenclature. It must have felt like a good idea at the time, but nowadays it's yucky for customers. I'm just glad I'm not in the market for one.
|
# ? May 18, 2007 22:42 |
|
Girdle Wax posted:1) Choosing 7600, hopefully getting some RSP720s- though probably not. Finding out that SRC release notes will say "The WS-X67XX switching modules will no longer be supported in SR". The 7600-S with the RSP already has a completely different code base. I can guarantee that the 67xx cards will work for the foreseeable future, and I can also guarantee that 6500 is not going to get the same support as the new 7600 is. A safe (but expensive) bet is to go with the 7600. The RSP720 is a pretty awesome Sup, and combined with the new 7600-S as well as the new cards that will undoubtedly be coming out soon with the Fast Fabric Sync, I would be feeling pretty confident about my failover options. Of course this may not be feasible for your situation. But if you do have the option, the 7600 is the safe way to go. Also I'm fairly confident that the new 68xx series cards will not work in the 6500... But that is certainly not the official Cisco position. E: Actually to clarify, certain features of the 68xx cards won't work. Like the fast fabric switching etc. But it is only a matter of time before some 7600-exclusive cards come out. ate shit on live tv fucked around with this message at 23:17 on May 18, 2007 |
# ? May 18, 2007 22:51 |
|
Powercrazy posted:The 7600-S with the RSP already has a completely different code base. I can guarantee that the 67xx cards will work for the foreseeable future, and I can also guarantee that 6500 is not going to get the same support as the new 7600 is. A safe (but expensive) bet is to go with the 7600. The RSP720 is a pretty awesome Sup, and combined with the new 7600-S as well as the new cards that will undoubtedly be coming out soon with the Fast Fabric Sync, I would be feeling pretty confident about my failover options. Given our past purchasing habits some parts such as the chassis will probably be purchased from a company that does network rebuilds then sells the hardware pulled out to ISPs like us, so the chassis will probably be 7609 rather than 7609-S. Redundant Sups, while nice, may not be on the table (why do we need redundant sups in a manned datacenter, that's why we have a redundant chassis design) so the fast sup failover may not be a strong selling point there. If 67xx cards are going to stay in SR for the foreseeable future I'd currently lean toward 7609 chassis + Sup720 with an option for RSP720 upgrade (since they're going to act as RRs for the rest of the network the extra CPU could be useful for pushing out BGP updates and running scanner faster). 7609 chassis, Sup720, SR software? SR isn't going to go RSP only anytime soon is it, I don't imagine people with large 720+7600 implementations are going to be happy about forklifting all the sups in their network.
|
# ? May 18, 2007 23:00 |
|
Girdle Wax posted:handing a dot1q subif out of that VRF, over a GigE trunk into our VRF switch, then access ports to the customer. Girdle Wax posted:The last time we changed vendors we ended up with a pair of Extreme Summit 48is (the distribution switches before the ones before the current ones) which was just a terrible, terrible experience (they failed and rebooted fairly regularly, and always took over OSPF DR).
|
# ? May 18, 2007 23:50 |
|
obsidian440 posted:I hate to come off as lazy, but can I get some links to help me with doing this? I didn't see anything that jumped out at me while messing with the logging options in pix. It looks like if you only want auth fails (basically getting rid of ACL denies), you'll need to bump the logging up to Critical instead of Error severity. I typically tend to manage via pdm/asdm, but if you're managing through the console I think the command you're looking for is: code:
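Presumably something along these lines (the syslog host address and interface name are placeholders):

```
! Only send messages at severity critical (2) and above
logging trap critical
! Point the pix at the syslog box on the inside interface
logging host inside 192.168.1.50
logging on
```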
Setting up cacti/syslog-ng is a bit beyond the scope of this thread, a bit of quick googling should get you plenty of information on them. jwh posted:That'd be neat to look at; I managed to dodge route-target import/exports too, which turned out to be a good thing. I've been meaning to go back and lab the route-target stuff just in case we have a falling-out with Nokia, and decide to move our wan firewalls to another platform. Configs have (hopefully) been fully sanitized to protect the innocent. ragzilla fucked around with this message at 01:25 on May 19, 2007 |
# ? May 19, 2007 01:23 |
|
edit: double post
|
# ? May 19, 2007 01:25 |
|
Girdle Wax posted:configs for the test layer3 access network I have to admit, I was on the fence about your layer-3 to the access layer after reading your lengthy post, but after seeing the configs, I really like it. Any thought to getting your own ASN from ARIN, or did you sanitize your AS to a private?
|
# ? May 20, 2007 05:16 |
|
What's your take on the 4500 series? The 4503 chassis with the sup-II-plus-TS supervisor looks like a nice price/performance combo to me. Any opinions on the chassis/supervisor?
|
# ? May 20, 2007 13:49 |
|
I've got a 2620 that I am trying to setup MLPPP across 2 T1s on. Spent over an hour on the phone with the ISP trying to get this working. When I mentioned it was a 2620, he immediately said that it was probably too old of a version of IOS. code:
I was able to bring either T1 up as PPP, but they would not pass any traffic. I was also able to bring both T1s up as PPP, add them to a multilink group, and have that multilink show up/up, but not pass traffic. I'm running a very basic config on this. Any ideas on what I need to do to get this running correctly?
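For reference, a minimal MLPPP bundle on the CPE side looks roughly like this (interface numbers and addressing are placeholders; older IOS uses multilink-group rather than the newer ppp multilink group syntax):

```
interface Multilink1
 ip address 192.0.2.2 255.255.255.252
 ppp multilink
 multilink-group 1
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
```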
|
# ? May 21, 2007 21:38 |
|
NinjaPablo posted:I've got a 2620 that I am trying to setup MLPPP across 2 T1s on. Spent over an hour on the phone with the ISP trying to get this working. When I mentioned it was a 2620, he immediately said that it was probably too old of a version of IOS. MLPPP is supported in 12.0(7)T, mind showing us your configs? Is your ISP configured for MLPPP? Can you ping across the MLPPP bundle once it comes up/up?
|
# ? May 21, 2007 21:45 |
|
I was able to ping and telnet by IP address only when the MLPPP config was in place, or when I was only using a single T1 as PPP. As soon as I'd have the ISP update their end to not be MLPPP, and change back to HDLC, and I switched back to HDLC on my end, all normal traffic would work. code:
NinjaPablo fucked around with this message at 23:01 on May 21, 2007 |
# ? May 21, 2007 22:02 |
|
What's the newest firmware I can run on a Cisco 2621 with 8mb flash, 24mb dram?
|
# ? May 21, 2007 23:05 |
|
CrazyLittle posted:What's the newest firmware I can run on a Cisco 2621 with 8mb flash, 24mb dram? 12.1.27b or 12.2.12m. If you upgraded the RAM to at least 32M you could run the latest, which is 12.3.22. NinjaPablo posted:I was able to ping and telnet by IP address only when the MLPPP config was in place, or when I was only using a single T1 as PPP. As soon as I'd have the ISP update their end to not be MLPPP, and change back to HDLC, and I switched back to HDLC on my end, all normal traffic would work.
|
# ? May 21, 2007 23:22 |
|
conntrack posted:What's your take on the 4500 series? The 4503 chassis with the sup-II-plus-TS supervisor looks like a nice price/performance combo to me. Really poor layer 3 forwarding performance and very limited QoS and routing capabilities. It's ok if it's just one switch in the middle of your network but if you're managing a somewhat large network, I'd almost certainly go for a SupV and run native IOS.
|
# ? May 22, 2007 00:38 |
|
Girdle Wax posted:12.1.27b Yeah, drat. I was hoping somebody knew of a "magic" build of 12.3 that would fit in there, but then again 2621's are pretty drat old. I got a better question actually though. I'm trying to use OER on a 1841 across a DSL connection and a T1 connection. I setup the route maps to send mail traffic over the T1, but for some reason the ACL isn't matching, or the route-map isn't setting the next hop properly: 72.14.253.103 = DSL gateway 72.14.253.206 = T1 gateway code:
code:
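The intent was roughly the following (a sketch using the gateway addresses above; the ACL number, map name, and inside interface are made up):

```
! Match outbound mail...
access-list 101 permit tcp any any eq smtp
!
! ...and push it out the T1 gateway, with the DSL gateway as
! fallback; everything else prefers the DSL gateway.
route-map WANOUT permit 10
 match ip address 101
 set ip next-hop 72.14.253.206 72.14.253.103
route-map WANOUT permit 20
 set ip next-hop 72.14.253.103 72.14.253.206
!
! PBR is applied on the interface where the traffic enters the router
interface Vlan10
 ip policy route-map WANOUT
```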
Herv posted:You are applying the correct route-map to the correct interface? That was the answer - the ACL I added would never get matched because it was trying to match on the wrong VLAN. Adding a route-map on the correct interface with a higher precedence number fixed it. CrazyLittle fucked around with this message at 16:28 on May 22, 2007 |
# ? May 22, 2007 00:58 |
|
I have a question about spanning tree portfast. How many here make use of it, and where do you use it? As I understand it, I would want to use portfast on ports dedicated to end nodes only. Any sort of port that is linked to a switch in either direction shouldn't have portfast enabled. Am I right in this thinking?
|
# ? May 22, 2007 03:42 |
|
InferiorWang posted:I have a question about spanning tree portfast. How many here make use of it, and where do you use it? As I understand it, I would want to use portfast on ports dedicated to end nodes only. Any sort of port that is linked to a switch in either direction shouldn't have portfast enabled. Am I right in this thinking? Yes. It enables a port to still have spanning tree enabled on it, but it skips the Listening/Learning states and heads straight to forwarding. If you do that on a port connected to a switch or hub, congratulations, you probably just created a loop.
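A typical edge-port config pairs it with bpduguard as a safety net in case a switch does get plugged in (interface is a placeholder):

```
interface FastEthernet0/5
 description end node only (PC/server)
 switchport mode access
 spanning-tree portfast
 ! If a BPDU ever arrives here, err-disable the port
 ! rather than letting a loop form
 spanning-tree bpduguard enable
```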
|
# ? May 22, 2007 04:10 |
|
Korensky posted:Really poor layer 3 forwarding performance and very limited QoS and routing capabilities. It's ok if it's just one switch in the middle of your network but if you're managing a somewhat large network, I'd almost certainly go for a SupV and run native IOS. I'm going from a 3750 stack with all our fiber in a 12S model. Not so hot either according to the switchperformance.pdf in the first post. I'm not that schooled on Cisco, but what's native IOS?
|
# ? May 22, 2007 12:43 |
|
CrazyLittle posted:Yeah, drat. I was hoping somebody knew of a "magic" build of 12.3 that would fit in there, but then again 2621's are pretty drat old. Well you can get 3rd party memory rather cheap. I still use a 2600 just maxed out the memory/flash. I use this place, never had a problem with their 3rd party. http://www.ciscomemoryupgrades.com/cisco-memory.html code:
You are applying the correct route-map to the correct interface? You can only have one route-map per interface by the way. Have to use sequence numbers like crypto-maps. Cisco posted:Router(config-if)# ip policy route-map map-tag Doc
|
# ? May 22, 2007 14:36 |
|
conntrack posted:Im going from a 3750 stack with all our fiber in a 12S model. I assume that he is referring to older supervisor cards that used CatOS for the switch processor and IOS for the Route Processor. You could, with a very convoluted process, upgrade from CatOS to Native IOS where you would have IOS running on both the switch processor and the route processor. Then even though as a user you would only see the switch processor any changes to the config in the switch processor would be mirrored onto the route processor, and thus you would be running "native IOS." But I am probably missing something. I guess it could also mean that the supervisor just came with IOS on it already and that way you wouldn't have to deal with the upgrade from CatOS to IOS.
|
# ? May 22, 2007 15:11 |
|
Herv posted:You are applying the correct route-map to the correct interface? That was the answer - the ACL I added would never get matched because it was trying to match on the wrong VLAN. Adding a route-map on the correct interface with a higher precedence number fixed it.
|
# ? May 22, 2007 16:27 |
|
CrazyLittle posted:That was the answer - the ACL I added would never get matched because it was trying to match on the wrong VLAN. Adding a route-map on the correct interface with a higher precedence number fixed it. Good deal, glad to help. By the way, have you had luck failing over to the second IP addresses in your set ip next-hop statements?
|
# ? May 23, 2007 00:23 |
|
Herv posted:Good deal, glad to help. Yeah actually. It takes about 20-30 seconds for the initial hop to "fail" with concrete results, but it actually does roll over. That suggestion came from the Cisco TAC group. It's a shame they're too dumb to implement a -real- OER configuration though. It turns out the configuration I have running on that 1841 is an orphaned OER border/master that does nothing while the policy-based routing does all the heavy lifting.
|
# ? May 23, 2007 00:30 |
|
jwh posted:Ugh, that sounds like bad times. Everybody who works with the Juniper M-series seems to really fall for them, but I've never had the chance. We have a few Juniper m7i's where I work and they are awesome. I think they're super easy to use. We were bought out by a corp that is "powered by Cisco" so now I'm getting used to the 7200 and 7600 series stuff.
|
# ? May 23, 2007 02:49 |
|
conntrack posted:I'm going from a 3750 stack with all our fiber in a 12S model. 3750Gs are the pinnacle of awesome. What are your port density and packet forwarding requirements? I actually hadn't checked out the Plus-TS (I instantly summoned memories of a 4500 with Sup2 with an L3SM or vanilla Sup2). The 3750G-12S only has lower packet forwarding rates due to the total number of packets that can be forwarded on the number of interfaces it has. I doubt you are running this thing at line-rate. The only situation where you'd compromise on performance in the 3750G series switches is if you're stacking them and joining the shared 32gig fabric together (in which case you have 2 x 16gig rings between the entire stack).
|
# ? May 23, 2007 04:17 |
|
Korensky posted:3750Gs are the pinnacle of awesome. What are your port density and packet forwarding requirements? I actually hadn't checked out the Plus-TS (I instantly summoned memories of a 4500 with Sup2 with an L3SM or vanilla Sup2). True on the bandwidth part. We only have a few ports really. 16 LX and a handful of 1000TP. The 4503 solution is a tiny bit cheaper than a new 3750 12S + 24TS setup though. I like the idea of redundant PSUs and a blade setup. The supII-plus-ts and 6p blade is supposedly wire rate, so I shouldn't lose out on anything with the 4503 over the 3750 setup?
|
# ? May 23, 2007 09:54 |
|
CrazyLittle posted:Yeah actually. It takes about 20-30 seconds for the initial hop to "fail" with concrete results, but it actually does roll over. That suggestion came from the Cisco TAC group. It's a shame they're too dumb to implement a -real- OER configuration though It turns out the configuration I have running on that 1841 is an orphaned OER border/master that does nothing while the Policy-based routing does all the heavy lifting. I wasn't sure if NAT was being used as well, saw the word a few times, but didn't know the interface configs. That can add another stick in the spokes for the set ip next hop, at least it did for me. Go get some ram!
|
# ? May 23, 2007 11:54 |
|
Korensky posted:The only situation where you'd compromise on performance in the 3750G series switches is if you're stacking them and joining the shared 32gig fabric together (in which case you have 2 x 16gig rings between the entire stack). If you do lots of traffic you'll notice the limitations of the 3750 (non-E variant) fairly quickly: the ring bandwidth is used for _all traffic_, even if it's between ports on the same switch. Due to the way the original StackWise system was engineered, the packets are source-stripped from the ring, ie the switch gets a packet, sticks it on the ring so everyone can get it, then when it gets back to the source switch it is stripped from the ring. The newer 3750-Es can do destination stripping, where the packet is stripped by the destination switch. They can also do local switching, so packets going from port 1 to port 2 on the same switch are switched inside that switch instead of going around the ring. http://www.cisco.com/en/US/products/hw/switches/ps5023/products_white_paper09186a00801b096a.shtml
|
# ? May 23, 2007 17:04 |
|
Herv posted:Go get some ram! Pfft! why would I upgrade the ram on a 2621 when I have two more 1841's and a whole box of 1720's in front of me
|
# ? May 23, 2007 17:07 |
|
Is there some wizard or guide where I can look and discover what model router I need to support a specific set of interfaces? Cisco Feature Navigator is useless in this regard. If not, what do I need to handle 2x Ethernet, 1x T1, and 1x G.DMT ADSL? I'm looking for a home router (the T1 interface is for testing T1 routers I bring home from work) that I can also learn IOS with, so the cheaper the better. It'll be a fairly simple configuration on the software side (pppoe on the DSL, NAT, and simple routing between the other 3 interfaces), it's just the number of interfaces that makes things complicated.
|
# ? May 23, 2007 17:13 |
|
wolrah posted:Is there some wizard or guide where I can look and discover what model router I need to support a specific set of interfaces? Cisco Feature Navigator is useless in this regard. I don't think any 1600/1700 series hardware can do that. Your best bet's probably a 2621 or 2621XM, which has 2 FEs built in and 2 WIC slots (1 for a WIC-1DSU-T1 and another for an ADSL WIC). Be forewarned that the ADSL WICs are quite expensive; it might be more cost effective to add another ethernet (with a WIC-1ENET) and plug into a DSL modem/bridge.
|
# ? May 23, 2007 19:24 |
|
|
Girdle Wax posted:I don't think any 1600/1700 series hardware can do that. Your best bet's probably a 2621 or 2621XM, which has 2 FEs built in and 2 WIC slots (1 for a WIC-1DSU-T1 and another for an ADSL WIC). Be forewarned that the ADSL WICs are quite expensive; it might be more cost effective to add another ethernet (with a WIC-1ENET) and plug into a DSL modem/bridge. I just bought a pair of WIC1 DSLs off eBay, never had a problem.
|
# ? May 24, 2007 13:32 |