|
This looks like a fairly standard BGP multihoming scenario. Do you have PI or PA space that you're advertising? Either way, you probably want to run iBGP between the two routers in AS3 so you can advertise the best path internally (or influence that choice via BGP policy to achieve your redundancy goals).
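For reference, the iBGP piece between the two AS3 edge routers is only a few lines in IOS. This is a rough sketch, not a full config; the AS numbers and addresses here are invented for illustration:

```
! R1 in AS3 (hypothetical ASN 65003): eBGP to ISP1, iBGP to the other edge router
router bgp 65003
 neighbor 198.51.100.1 remote-as 65001   ! eBGP to the upstream in AS1
 neighbor 10.0.0.2 remote-as 65003       ! iBGP to R2, the other AS3 router
 neighbor 10.0.0.2 next-hop-self         ! so R2 doesn't need a route to the eBGP link
```

Mirror the same thing on R2 toward R1, then layer local-preference (outbound choice) or AS-path prepending / MED (inbound influence) on top to get the redundancy behavior you want.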
|
# ? Jan 10, 2017 20:02 |
|
Yeah I don't see a problem, short of getting yourself a /24 of PA space (or PI, if you are feeling adventurous).
|
# ? Jan 10, 2017 21:37 |
|
What about this:

code:
|
# ? Jan 11, 2017 00:00 |
|
madsushi posted:What about this: I was just trying to show the upstream "provider" router from VIRL; I think everyone else got what's going on. AS1 represented the provider, ISP1, and AS3 represents my customer premise equipment. Thank you, Goon wisdom: iBGP seems like a great idea to make sure the routers can effectively do their duty. I'm on chapter 7 of the Cisco book recommended a few pages back, Advanced Routing Architectures, 2nd edition. A bit dated in the getting-started material, but plenty of useful info. Kind of wish Cisco would publish a 3rd edition. "Time will tell how long IPv6 will take, but it's still very experimental" (Cisco, 2001). "Most enterprises use serial because dedicated links are too expensive."
|
# ? Jan 11, 2017 05:03 |
|
Note: I have not talked to my VAR or Cisco datacenter sales engineer yet. What are the upstream networking requirements for a UCS chassis? I see a lot of mention of a 6100 or 6200; is that integrated into the chassis, or is it a separate item? As a separate question, would it be reasonable to use a single 9504 with two supervisors in a small collapsed-core datacenter network? I am considering a full hardware refresh. My basic research thus far has led me to believe I could purchase a Cisco UCS chassis (or two), connect them to a single 9504 chassis with the proper line cards, and have all the networking and compute my business needs in a nice little package. However, I am concerned that I will actually need to purchase a 6100 fabric interconnect to control the networking function on the 5108 chassis, which seems like overkill at my scale.
|
# ? Jan 15, 2017 05:17 |
|
adorai posted:Note: I have not talked to my VAR or Cisco datacenter sales engineer yet You need fabric interconnects. These can either be external 6100/6200s or, for small installs (up to 20 blades), an internal version (6324) that occupies the IO module slots of one of your chassis and replaces the FEX that would normally be present. Northbound from your FIs can be any typical switching environment. If you're doing a single chassis northbound, you'll likely want to run the UCS FIs in end-host mode (as opposed to switch mode; not sure that's ever a recommended config unless the FIs are your only switches) with LACP groups from the switch to the UCS environment for redundancy.
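As a sketch, the northbound LACP side on a Catalyst-style upstream switch might look like this (interface names and the channel number are made up; repeat per FI):

```
! Upstream switch: bundle two links toward one FI's uplink ports with LACP
interface Port-channel10
 description Uplink to UCS FI-A
 switchport mode trunk
!
interface TenGigabitEthernet1/0/1
 channel-group 10 mode active   ! "active" = initiate LACP negotiation
interface TenGigabitEthernet1/0/2
 channel-group 10 mode active
```

With the FIs in end-host mode the uplinks look like host ports to the switch, so no spanning-tree gymnastics are needed on the port-channel.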
|
# ? Jan 15, 2017 05:26 |
|
ragzilla posted:You need fabric interconnects, these can either be external 6100/6200s, or for small installs (up to 20 blades) there's an internal version (6324) available which occupies the IO module slots of one of your chassis and replaces the FEX that would normally be present. So I CAN go straight from UCS chassis to upstream switch?
|
# ? Jan 15, 2017 05:43 |
|
adorai posted:So I CAN go straight from UCS chassis to upstream switch? Only if you have 6324 FIs in the IO module slots. FIs are mandatory in a UCS deployment. http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6300-series-fabric-interconnects/datasheet-c78-732207.html
|
# ? Jan 15, 2017 05:56 |
|
adorai posted:So I CAN go straight from UCS chassis to upstream switch? With the 6324 FI installed in the chassis, it could connect directly to the upstream switch. You're limited to supported chassis and up to 2 chassis in the configuration.
|
# ? Jan 15, 2017 05:59 |
|
Yeah, piling on: you need FIs in some form. They are the real brains of the whole UCS operation.
|
# ? Jan 15, 2017 06:02 |
|
and it looks like list on those is $22k, and I assume I would want two for each chassis. lol, cisco.
|
# ? Jan 15, 2017 06:05 |
|
Nah, it's not quite that bad, with the caveat that the last time I worked with UCS was like 2014. IIRC you just want two total to act as an HA pair in front of all of your chassis. Maybe you need more if you have a shitload of UCS servers, but we had like a dozen chassis and 2 FIs was sufficient for that setup.

I will add that the only reason we used UCS was that our CTO was able to get a sweetheart deal. We had petabytes of NetApp storage and Cisco really, REALLY wanted poster children for their FlexPod platform. He was able to maneuver this into getting some UCS B blades for a song, in return for Cisco using us in some marketing materials.

They even sent a loving video crew out to film us lol, which was one of the silliest things I've ever been involved with professionally. Unfortunately I think the video is gone from the internet or I would link it for hilarity. I can find the 2-page PDF citing my boss on why UCS is good and cool. But the professionally produced video of me and my coworker kneeling in front of a UCS chassis and pretending to replace a blade--while I whispered something horrifying to him on every take, forcing 400 retries as he broke down laughing--seems to be gone.
|
# ? Jan 15, 2017 06:45 |
|
adorai posted:and it looks like list on those is $22k, and I assume I would want two for each chassis. You know nobody pays that price, right? Big business? Volume discount. Small business? Growth discount. First UCS purchase? Starter discount. Refreshing your obsolete purchase? Loyal-customer discount. Considering another vendor? Drop-your-pants discount. Cisco quotes are like a mad lib: "Since you are ___________ we will give you a special ____________ discount." You can get an 8-blade UCS Mini starter pack (the (former?) term for having the FIs in with the blade chassis) for less than $10k per blade, even with reasonably high core counts and 256+ GB of RAM. If you have a 9504, you can swing UCS. However, as much as I like UCS (we run it in 3 of our 4 data centers), it really doesn't SHINE until you get into the 4-5+ chassis range or you're constantly re-configuring servers every day (which you shouldn't be doing). It is a bit more administrative overhead if you are just going with one chassis.
|
# ? Jan 15, 2017 07:29 |
|
ragzilla posted:You need fabric interconnects, these can either be external 6100/6200s, or for small installs (up to 20 blades) there's an internal version (6324) available which occupies the IO module slots of one of your chassis and replaces the FEX that would normally be present. In general you should almost never run your fabric interconnects in switched mode. You should almost always be using end-host mode, and if you run into a case where you think you should run switched mode, you're probably doing something else wrong.

quote:and it looks like list on those is $22k, and I assume I would want two for each chassis. I do want to point out that a pair of fabric interconnects can be used to manage 160 blades from a single point. It's a little different from most other blade systems, where the intelligence is built into the chassis itself; the UCS blade chassis is mostly just folded aluminium meant to deliver power to blades and keep the IOMs from falling on the floor.

If you're new to UCS you can get into the Smart Play bundle and get a pretty solid discount. There may even be a UCS spiff right now for HP takeout that might make it worth a VAR further dropping their pants. Cisco tends to pay well for spiffs.
|
# ? Jan 15, 2017 08:55 |
|
I know no one pays list. Generally, I expect 50% from Cisco. I'm not new to UCS, we use their C series and have for a few generations. We just aren't at the scale where the centralized management of their blade system is going to be a selling point. I am looking at blades to minimize rack utilization right now, as I am considering a move from our own datacenters to leased racks. If I tie that in with a refresh I would do otherwise, it probably makes sense to go with blades, but maybe not UCS.
|
# ? Jan 15, 2017 15:32 |
adorai posted:I know no one pays list. Generally, I expect 50% from Cisco. I'm not new to UCS, we use their C series and have for a few generations. We just aren't at the scale where the centralized management of their blade system is going to be a selling point. I am looking at blades to minimize rack utilization right now, as I am considering a move from our own datacenters to leased racks. If I tie that in with a refresh I would do otherwise, it probably makes sense to go with blades, but maybe not UCS. I love it even if it's just a UCS Mini sitting out somewhere. If you have a comprehensive and sane network architecture/plan, after you put these guys in you generally don't have to touch them very much beyond the occasional software update or part replacement when some doodad or another goes bad, which in my experience thus far with 14 chassis happens very rarely. They go in easy, don't take much space, and are extremely easy to manage once you are used to them. I suppose you can do an HP blade server or w/e instead if you prefer them, though.

We pay around $25k for a starter Mini with the 6324s and 2 server blades w/ 256GB memory. If you want to go a bit lower budget, you can stick a pair of 10-gig 3850s or 4500-Xs on top of them as your core, with enough ports to connect storage, copper switches, etc. along with the UCSes. I stamp these out for manufacturing facilities and they come in at about $80k for the core switches, UCS chassis, and storage. Nuclearmonkee fucked around with this message at 19:28 on Jan 16, 2017 |
|
# ? Jan 16, 2017 19:21 |
|
Maybe not the right thread for this but I'm going to ask anyway. I've been tasked with setting up the networking for our company's new office, but I'm kind of a noob at networking hardware. Are there any opinions on server rack vendors? I've been looking at this 25U Tripp Lite open frame rack to go in our IT closet: https://www.amazon.com/Tripp-Lite-E...lite+open+frame I'm thinking I would like some kind of managed power strip in there so I can restart stuff when it craps out. Is Tripp Lite a good rack vendor? I figure open frame is OK to save some $$ since it will be locked in its own room. I saw an article about Eaton racks and they look pretty good. Thinking of putting in some switches that support PoE+, some patch panels, and a NAS or two. Access points will be Ubiquiti AC Pro.
|
# ? Jan 16, 2017 20:03 |
|
It will probably be poo poo for putting servers into. Look on Craigslist/similar and see if you can pick up a used HP/Dell (Rittal) cabinet, or an APC one. They are welded rather than bolted, can only be moved on a truck, and are sturdy as. I have no issue with a used cabinet since it's a lump of steel. I'd rather spend a few hundred on a good one that's otherwise going to scrap than the same amount on a thin, flimsy thing.
|
# ? Jan 16, 2017 20:46 |
|
The secret to a good network rack is cable management (horizontal/vertical). Don't worry about side panels if you don't have proper cooling (i.e. a datacenter) or security issues.

Remote managed power strips (PDUs) are basically APC or Servertech; it's all about the interface you use to access it when poo poo's gone awry. Keep in mind electrical code changed a couple of years ago, so a lot of real PDUs now use IEC sockets rather than your standard NEMA ones, so you might have to get different power cables for your gear (or adapters). Also, most vertical PDUs are 40U in height, and the rack you listed is 25U.

Rack depth is always the biggest issue that people forget about. Check on the servers you're installing into them, as sometimes the rails used will only span a specific distance (35-40"), forcing you to use shelves. And levelling feet. You always want levelling feet to come with the rack.
|
# ? Jan 16, 2017 20:52 |
|
Thanks Ants posted:They are welded rather than bolted, can only be moved on a truck, and are sturdy as. drat, what kind of truck? I've got an SUV. Is it going to be a pain in the rear end to move this thing up the elevator and into the server closet?

unknown posted:The secret to a good network rack is cable management (horizontal/vertical). Is this something built into the rack, or something I add? I saw some 1U cable management things you put under the switches. All really good info, thanks for the responses.
|
# ? Jan 16, 2017 21:19 |
|
Heyo, general question RE Cisco hardware lines and what can support this thing we're doing. We're getting a gigabit fiber circuit from our ISP, and the router they quoted us to use (Cisco 4451) was about $25,000 with all the trimmings. This seems high? I don't normally deal with Cisco gear, but we will literally only be using this to connect to the fiber hand-off from the ISP, handle BGP, and that's about it. We have a Juniper SRX that handles all internal routing. Is there a less costly model that would be able to handle a 1Gig link at full throughput on the WAN side? Not to mention that we will soon be upgrading both of our ISP circuits to 1Gig, so we would potentially need two of these.
|
# ? Jan 16, 2017 21:28 |
|
Internet or private WAN?
|
# ? Jan 16, 2017 21:36 |
|
3560-CX with a 1G SFP and IP services. You laugh, but tell me that doesn't do the job.
|
# ? Jan 16, 2017 21:48 |
|
What psydude said, unless you want to terminate a VPN on this router, do QoS/shaping, or some other ISR-ish service. I am seeing $24,996 list price for a base ISR4451 with FL-44-PERF-K9 and $21k without the 2Gbps performance package on CCW, so they're definitely sticking it to you. Edit: Also, a 4431 with the performance package also does 1Gbps and has redundant power supplies; that comes out to $16,857 list price. Sepist fucked around with this message at 22:19 on Jan 16, 2017 |
# ? Jan 16, 2017 21:51 |
|
adorai posted:Internet or private WAN? Internet. Century Link and (eventually) Level3. Here's the full product list on the quote: http://imgur.com/a/gETwb It just seems...excessive for a config that tops out at like 60 lines. I reached out to my CDW rep but if you guys can point to a specific model # that would help greatly.
|
# ? Jan 16, 2017 21:57 |
|
While we're on Cisco router chat - how do you size these things? Cisco seem to be ridiculously conservative for reasons I totally understand.
|
# ? Jan 16, 2017 22:02 |
|
Spring Heeled Jack posted:Internet. Century Link and (eventually) Level3. List for all that is around 29k. The extra software that comes with that CiscoONE bundle seems to be adding quite a bit, but if you don't need it then ask them to quote you for the non-C1 version.
|
# ? Jan 16, 2017 22:09 |
|
If you don't need NAT or something, just get an L3 switch. Do you already have a firewall or plan on one? If you do need NAT and need a firewall, just do it there, but put an L3 switch upstream from it. I work for a provider that delivers line-rate gig fiber services all the time and we just use an ME3400, which, other than being near EOL, works great. Want something non-EOL but still cheapish? ASR920.
|
# ? Jan 16, 2017 22:53 |
|
Spring Heeled Jack posted:Internet. Century Link and (eventually) Level3. Do you need full table BGP for any reason? If not, multilayer switch (3650 or similar) will handle this no problems.
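If a default route from each provider is all that's needed, the filter that keeps a multilayer switch's small FIB/TCAM safe is short. An IOS-style sketch, with documentation addresses and invented ASNs:

```
! Accept only 0.0.0.0/0 from the upstream so the full table never lands in TCAM
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
!
router bgp 64512
 neighbor 203.0.113.1 remote-as 65001
 neighbor 203.0.113.1 prefix-list DEFAULT-ONLY in
```

With only a default (or default plus a handful of more-specifics) accepted from each ISP, a 3650-class switch has plenty of route capacity for this job.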
|
# ? Jan 17, 2017 02:00 |
|
falz posted:If you don't need NAT or something just get a l3switch. Do you already have a firewall or plan on it? If you do need NAT and need a firewall, just do it there but do l3switch upstream from it. We do need NAT. I will look into the ASR line or the 4431, thanks. At the very least the price looks a little better. We're also totally okay with older models or anything 'certified refurbed' so long as they come with official support.
|
# ? Jan 17, 2017 03:05 |
|
Spring Heeled Jack posted:We do need NAT. I will look into the ASR line or the 4431, thanks. At the very least the price looks a little better. You'd almost be better off just buying some layer-3 switches and using whatever remaining money was budgeted to buy dedicated security hardware.
|
# ? Jan 17, 2017 17:11 |
|
jwh posted:You'd almost be better off just buying some layer-3 switches and using whatever remaining money was budgeted to buy dedicated security hardware. We have security hardware, we have an IPS, IDS, and a Juniper SRX for everything else. This would only handle the ISP fiber handoff and BGP.
|
# ? Jan 17, 2017 18:14 |
|
Why not use the SRX for BGP?
|
# ? Jan 17, 2017 18:42 |
|
You don't really need border routers anymore unless you're pulling the entire routing table (don't do this). If you've got a perimeter firewall terminate the ISP links and the BGP sessions there and be done with it.
|
# ? Jan 17, 2017 22:42 |
|
pctD posted:Why not use the SRX for BGP? poo poo, why not use the SRX for NAT? It's already going to be stateful; why add another stateful device to the mix?
|
# ? Jan 17, 2017 22:56 |
|
Has anyone here used Noction or Border6 or another IRP-type solution? Interested in if anyone's seen actual improvements that are worth the cost of the product / buying better transit.
|
# ? Jan 17, 2017 23:40 |
|
madsushi posted:Has anyone here used Noction or Border6 or another IRP-type solution? Interested in if anyone's seen actual improvements that are worth the cost of the product / buying better transit. Don't do it, it will break your poo poo. Put the money into fatter pipes.
|
# ? Jan 18, 2017 00:20 |
|
Then when you leak your broken routes onto the internet the routing police will come after you!
|
# ? Jan 18, 2017 06:31 |
|
falz posted:Don't do it, it will break your poo poo. Put the money into fatter pipes. Seconded. It was part of an RFP we won. It's manipulating outbound traffic only which wasn't needed. Haven't noticed any improvements over some manual local preference settings. We have not used it for inbound manipulation - I just used some more specific announcements when it was a problem.
|
# ? Jan 19, 2017 01:26 |
|
|
pctD posted:What are my options for improving route failover for BGP if my circuit providers don't support shorter keepalive and holdtimers? The default 3 minute holdtime is just too long. One of my providers supports using BFD but unfortunately my routers don't support that right now. Late to this, but here's a tip: BGP hold timers are negotiated to the lowest configured value between neighbors. Regardless of whether your provider supports configuring hold timers on their end for you, if you control the CPE, your equipment supports configurable timers, and you select a value supported by the far-end hardware and OS, the session should negotiate down from their default to your lower configured hold timer. Keep in mind that you'll want to pick a sane value (don't go under 10s for sure, maybe not even that low) unless you know exactly what kind of hardware is on the other end and what OS it's running. Also, when you change the hold timers, the session will bounce to re-negotiate the option. After it comes back up (and before), you should be able to look at the neighbor details to see the configured/negotiated hold timers.
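Concretely, on an IOS CPE the per-neighbor knob looks like this (a sketch; the address and ASNs are invented). Per RFC 4271, the session uses the lower of the two sides' configured hold times:

```
router bgp 64512
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 timers 10 30   ! keepalive 10s, hold 30s; changing this bounces the session
```

Afterwards, `show ip bgp neighbors 192.0.2.1` shows both the configured and the negotiated hold time, so you can confirm the far end accepted your value.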
|
# ? Jan 23, 2017 20:12 |