I can't stand FWSMs and ASAs.
|
|
# ? Dec 13, 2013 05:52 |
|
Is the discrepancy the byproduct of an expanded ACL? `show access-list` will break down object groups line by line. I count 17 and an implicit deny on your third image.
|
# ? Dec 13, 2013 06:39 |
|
I wish it were that simple. It actually breaks out to 286 lines. For instance, rule #9 in the picture is one source, 5 destinations, and 5 ports, so 25 rules plus the group rule. Oh, and it shows as ACL #17. I know that wasn't clear because of the censorship. My apologies, that was stupid. e: hitcnt=0 for every rule on the distribute_inside list. Other hitcnts working normally. Weird.
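For anyone following along, a minimal sketch (hypothetical object names and addresses, not from the original config) of how one object-group ACE balloons in `show access-list` - one source times 5 destinations times 5 ports gives 25 expanded entries plus the group line:

```
object-group network DST-SERVERS
 network-object host 192.0.2.11
 network-object host 192.0.2.12
 network-object host 192.0.2.13
 network-object host 192.0.2.14
 network-object host 192.0.2.15
object-group service APP-PORTS tcp
 port-object eq 80
 port-object eq 443
 port-object eq 8080
 port-object eq 8443
 port-object eq 9000
access-list distribute_inside extended permit tcp host 10.0.0.5 object-group DST-SERVERS object-group APP-PORTS
! show access-list distribute_inside prints the group line plus 5 x 5 = 25
! expanded ACEs, each carrying its own hitcnt counter
```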
# ? Dec 13, 2013 07:17 |
|
Welp, just figured it out. There are four ACLs in the config, not two:

access-list Distribute_Outbound_access_in
access-list Distribute_Inbound_access_in
access-list Distribute_outside
access-list Distribute_inside

Then this gem:

access-group Distribute_Inbound_access_in in interface Distribute_inside
access-group Distribute_Outbound_access_in in interface Distribute_outside

ASDM reports interface names instead of access group names, so it was well hidden. Guess that explains the zero hitcounts.
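A quick sanity check along these lines (sketch; the prompt is hypothetical, the bindings are the ones quoted above): `show running-config access-group` shows which ACLs are actually applied to interfaces -

```
ciscoasa# show running-config access-group
access-group Distribute_Inbound_access_in in interface Distribute_inside
access-group Distribute_Outbound_access_in in interface Distribute_outside
! Distribute_inside and Distribute_outside exist as ACLs but are bound to
! nothing, which is why every ACE on them sits at hitcnt=0
```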
# ? Dec 13, 2013 07:33 |
|
KS posted:I’m having a problem with ASDM I’m really hoping someone here can shed some light on. The device is a FWSM in a 6513. It’s running in transparent mode and there are 8 bridge groups in one context (doubt this matters). This post just gave me a nightmare. I pity you.
|
# ? Dec 13, 2013 16:48 |
|
Any resources/books you guys can point me to for data center design/infrastructure? It's becoming increasingly likely we're going to be doing a complete rebuild of our four labs that total around 24 racks.
|
# ? Dec 16, 2013 22:22 |
|
Argue bitterly and tearfully for colocation. You're likely not in the datacenter business and you probably shouldn't be if you're asking about books on it. Once you're sure capex > opex and the gear has to stay on-site, hire a really solid general contractor who builds datacenters. I don't see that as a DIY project.
|
# ? Dec 16, 2013 22:58 |
|
bort posted:hire a really solid general contractor who builds datacenters And keep them around long enough to document and train a competent day-to-day manager.
|
# ? Dec 16, 2013 23:38 |
|
I'll be a little more forthcoming - I'm part of a small, specialized team within a 'certain' large IT organization that every one of you should be very familiar with. Certain infrastructure costs will be heavily discounted and our purpose is pretty niche (Service Provider Video), so it's not something that can be done off-site. I'm just a recent college graduate who was brought in to support these labs - I want/need to learn as much as possible about data center design.
|
# ? Dec 17, 2013 00:00 |
|
bort posted:Argue bitterly and tearfully for colocation. You're likely not in the datacenter business and you probably shouldn't be if you're asking about books on it. Once you're sure capex > opex and the gear has to stay on-site, hire a really solid general contractor who builds datacenters. I don't see that as a DIY project.
|
# ? Dec 17, 2013 00:09 |
|
Most of DC design is super obvious if you've ever had to support or manage a DC, but if you haven't done it before you'll get caught out on things like air flow design or not running sufficient fibre (run more than you need!). If you are in a large organization do they have any other DCs you can visit? If so is there another group of internal people who can give advice? I think that would be a good place to start.
|
# ? Dec 17, 2013 00:25 |
|
Which part of the DC design do you need help with? The actual build from a raised floor/power/heating-cooling perspective, or the logistical "do I go ToR / End of Row / Collapsed Core" perspective? The former should be handled by a team contracted to do so; the latter depends on the requirements, since one size does not fit all.
|
# ? Dec 17, 2013 00:37 |
|
The latter stuff.
|
# ? Dec 17, 2013 00:57 |
|
ruro posted:Most of DC design is super obvious if you've ever had to support or manage a DC, but if you haven't done it before you'll get caught out on things like air flow design or not running sufficient fibre (run more than you need!). If you are in a large organization do they have any other DCs you can visit? If so is there another group of internal people who can give advice? I think that would be a good place to start. I'm sure they do. I'm just pretty new and the organization is quite large.
|
# ? Dec 17, 2013 00:58 |
|
Ask for a facilities person one level above your position.

e: that's still good advice, but I harped on colocating as a non-specialized company here some more because I sometimes post more than I read. DC work is largely non-technical, but it can get really broad. Even an EE isn't naturally up to it. You can do something as silly as underestimating your need for door swing space and totally screw yourself -- and you don't find out until your racks are placed.

My experience running DCs as a regular firm: have fun when the cottonwoods jettison their sexual gunk into your AC intake filter. Enjoy sweating on a rooftop cleaning the unit on the Fourth of July. You also get to sweat on the inside when the electrician has to move the feed to your UPS unit and you hope you have enough battery life, or when you're driving through the snow because some VP needs some system rebooted instead of calling in and opening a ticket. Generator test during Christmas for Sarbanes-Oxley. Finding out some capacitors in your UPS chassis are end-of-life and not available anymore. Enjoy running a datacenter.

e: and when you finally do hit the big time and build a really nice datacenter, you're confronted with CRAC failover problems, sensor failures, generator transfer switch maintenance, and alarms reporting things like "FIRE CONTROL PREACTION INITIATED" that freak everyone the hell out. The problems are bigger and more expensive when done right, but they don't fail. When you do them cheaply, they fail often and inconveniently.

ee: Bluecobra posted:I would also reserve two cabinets for network gear (one core/distribution switch in each cabinet), patch panels, and cross-connects. Buy racks without using them? Have fun unhooking your PDUs to rack gear, or with your inability to resolve airflow and cabling problems.
# ? Dec 17, 2013 01:26 |
|
sudo rm -rf posted:The latter stuff. I conceptually like the stuff Microsoft showed in a NANOG presentation and a relatively recent Packet Pushers: a leaf-and-spine top-of-rack system with each rack having a BGP AS and a software-based BGP controller peering with and feeding routes to the switch layer. Nanog PDF Packet Pushers That sounds pretty sick to me, since failure convergence is potentially so quick and tunable, and the ability to "drain" traffic from a device or a rack is there.
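A rough IOS-style sketch of that per-rack-AS idea (AS numbers and addresses hypothetical, not taken from the presentation): each ToR leaf runs its own private AS and eBGP-peers with every spine, with ECMP across them -

```
router bgp 65101                          ! this rack's private AS
 bgp bestpath as-path multipath-relax     ! ECMP even when spine AS paths differ
 maximum-paths 4
 network 10.1.1.0 mask 255.255.255.0      ! the rack's server subnet
 neighbor 10.255.1.1 remote-as 65000      ! spine 1
 neighbor 10.255.2.1 remote-as 65000      ! spine 2
! to "drain" the rack, prepend or stop advertising 10.1.1.0/24 and traffic
! shifts to other racks as soon as BGP reconverges
```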
|
# ? Dec 17, 2013 02:39 |
|
bort posted:People will sell you stacking at the ToR, or elsewhere but I'm unconvinced it's such a hot strategy with the software upgrade implications: if you have one large logical switch, it's difficult to swap the software on it. "Quick" is a relative term, based solely on internal tuning of timers. BGP is not inherently a fast-converging protocol, for various and obvious reasons. I'm from the class of "let routers route and switches switch"; the moment you start letting systems get involved in routing protocol decisions, you are asking for trouble, because you have systems guys going at it with network guys. ToR is great given an architecture that needs it. In some places, a collapsed 40/100Gb core is better (us, for instance - an HPC environment). There is no "one size fits all" design; everything is driven by requirements, but ultimately by dollars.
|
# ? Dec 17, 2013 03:20 |
|
H.R. Paperstacks posted:ToR is great given the architecture layout that needs it. In some places, a collapsed 40/100Gb core is better (us for instance, HPC environment). There is no "one size fits all" design, everything is driving by requirements, but ultimately by dollars.
|
# ? Dec 17, 2013 03:41 |
|
adorai posted:Like bort, I am against the stacked ToR, and instead prefer independent ToR switches running cross chassis lacp or etherchannel.
|
# ? Dec 17, 2013 04:15 |
|
adorai posted:Like bort, I am against the stacked ToR, and instead prefer independent ToR switches running cross chassis lacp or etherchannel. How do your independent ToR switches share the same subnet in this design? How would you multi-home your servers to both switches?
|
# ? Dec 17, 2013 07:05 |
|
So, I'm here on the Meraki webinar today in order to get the free AP. Pretty awesome to find out about it from this thread, which makes me wonder, is there a "free products for webinars/surveys/whatever for 'IT professionals'" thread here on SA? And yeah, I know quotation marks shouldn't be nested that way.
|
# ? Dec 17, 2013 19:16 |
|
bort posted:People will sell you stacking at the ToR, or elsewhere but I'm unconvinced it's such a hot strategy with the software upgrade implications: if you have one large logical switch, it's difficult to swap the software on it. There are only a few companies in the world that need that kind of scalability. For most datacenters, using a nice fast IGP like IS-IS/OSPF is ideal.
|
# ? Dec 17, 2013 19:27 |
|
Lately we've simply been consolidating our application server farms to blade chassis and dragging bundled 10GbE back to a redundant core. It's certainly easier (if potentially more expensive) than managing top-of-rack and panel distribution. In my experience, horizontally scaling your application platforms is far more important than what you're doing at the individual rack level.
|
# ? Dec 17, 2013 20:46 |
|
Yeah, ideally if you need to power down a cluster/rack/whatever, you let the application load balancer handle that by taking the affected pod out of the VIP resource pool. Then, once there are no connections, you can do whatever you need to hardware-wise. No reason to bring BGP etc. into it.
|
# ? Dec 17, 2013 20:54 |
|
Question. If I have an IPsec tunnel tested and working with the static endpoint on a connected subnet, is there a reason the tunnel wouldn't come up when the router is no longer on an adjacent subnet? I ask because the tunnel isn't coming up, but there is a NAT boundary in the way, and upon reviewing the config I think I made a mistake. Specifically this: code:
What do you guys think?
|
# ? Dec 17, 2013 21:47 |
|
Powercrazy posted:
I think you should drop the static. I also think you should switch to GRE with `tunnel protection ipsec`, and stop mucking with crypto maps.
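A minimal IOS sketch of that suggestion (peer addresses, key, and names are hypothetical): a GRE tunnel with `tunnel protection`, so what gets encrypted is decided by routing over the tunnel rather than by a crypto map ACL -

```
crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 14
crypto isakmp key MyPreSharedKey address 203.0.113.2
!
crypto ipsec transform-set TS esp-aes 256 esp-sha-hmac
 mode transport              ! GRE already encapsulates; transport mode saves overhead
crypto ipsec profile GRE-PROT
 set transform-set TS
!
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
 tunnel protection ipsec profile GRE-PROT
! phase 2 only has to cover the GRE endpoints; point static routes or an
! IGP at Tunnel0 to select the traffic
```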
|
# ? Dec 17, 2013 23:23 |
|
jwh posted:I also think you should switch to gre with tunnel protection ipsec, and stop mucking with crypto maps
|
# ? Dec 17, 2013 23:28 |
|
bort posted:Isn't that six of one and half-dozen of the other? No, because you don't need matching phase 2 SAs for the inner traffic after getting the GRE tunnel up; all your phase 2 needs to cover are the tunnel endpoints.
|
# ? Dec 17, 2013 23:33 |
|
And then you can have actual interfaces and IGP if you want to. Unless of course you're on an ASA.
|
# ? Dec 18, 2013 00:37 |
|
madsushi posted:How do your independent ToR switches share the same subnet in this design? How would you multi-home your servers to both switches? 2) we are using Nexus switches and utilize virtual port channels. Perhaps I led you to believe that I meant more independence from one another than I actually did. bort posted:Isn't that six of one and half-dozen of the other?
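For reference, a rough NX-OS sketch of the vPC arrangement (domain/port numbers and addresses hypothetical): two independent ToR switches present a single port-channel to a dual-homed, LACP-speaking server -

```
feature vpc
feature lacp
vpc domain 10
 peer-keepalive destination 192.168.100.2 source 192.168.100.1
!
interface port-channel1
 switchport mode trunk
 vpc peer-link                    ! link between the two ToR switches
!
interface port-channel20
 switchport access vlan 100
 vpc 20                           ! same vPC number configured on both switches
interface Ethernet1/1
 switchport access vlan 100
 channel-group 20 mode active     ! the server just runs standard LACP
```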
|
# ? Dec 18, 2013 00:38 |
|
falz posted:And then you can have actual interfaces and IGP if you want to. Unless of course you're on an ASA. Ding, ding. Anyway, I think it's a NAT device that is breaking it. I've sent the router out, and it will automatically try to reach 10.1.10.200, establishing the IPsec tunnel. Obviously I don't have direct access anymore until this starts working, so I'm not positive what the remote end is seeing. The interesting traffic is sourced from an SVI that has a 1424 MTU, so hopefully it's not fragmenting/dropping the IPsec packets or anything esoteric like that. I'll probably have to muck about with the provider equipment and force it to stop blocking/redirecting/screwing with packets coming from/to this device.
|
# ? Dec 18, 2013 02:29 |
|
I would stop with the arcane vagaries of crypto maps and find something more malleable. Not a criticism, just speaking from my own experience. I hate vpns. But if you have to live with them, make your life easier. (Ditch those ASAs)
|
# ? Dec 18, 2013 06:53 |
|
jwh posted:(Ditch those ASAs)
|
# ? Dec 18, 2013 13:52 |
|
If you want cheap + tunnels + likely acceptable performance, a c3825 is $200.
|
# ? Dec 18, 2013 14:16 |
|
Routers for site-to-site tunnels, ASAs for client-based VPNs has been the conventional wisdom for a while (within the Cisco product line, that is).
|
# ? Dec 19, 2013 18:24 |
|
This poo poo is driving me crazy. Running a Cisco ASA 5510 on 8.4(3), trying to do a twice NAT so that when traffic originating from local servers on our 10.157.120.0/28 network (a VLAN) tries to go over an IPsec tunnel, that IP range translates to the 192.168.151.0/28 network so it can hit servers on the other end of the tunnel, which are on a 192.168.151.16/28 network. I have set up a million tunnels on ASAs and I never have to NAT my source IPs to a different range. The customer I am dealing with can only allocate IPs on this 192.168.151.x network, so I split it up into two subnets to avoid a conflict. Cisco packet tracer says the tunnel is up and my IPs are getting properly translated, but when I look at the packet capture, it does not show the IPs being translated. Can't ping/telnet/etc. to their end - attached a packet capture. I'm a bit stuck - shouldn't the server hit the firewall, get translated to the new IP, and then boost off to the tunnel?
|
# ? Dec 19, 2013 21:37 |
|
Voltage posted:This poo poo is driving me crazy - running a cisco asa 5510 8.4(3) trying to do a twice nat so that when traffic originating from local servers on our 10.157.120.0/28 network (a vlan) tries to go over an IPsec tunnel, that IP range translates to the 192.168.151.0/28 network so it can hit servers on the other end of the tunnel which are on a 192.168.151.16/28 network

My customers have to go through a similar hassle to establish a VPN with us. I've only set this up on 8.2, but the principles are probably the same. The ASA uses a crypto ACL to determine whether to send traffic through the VPN. In your scenario, the crypto ACL would be defined as post-NAT source network (192.168.151.0/28) to destination network (192.168.151.16/28).

What to look for with packet-tracer:
1. Is this traffic even being forwarded through the VPN? Without a VPN phase subtype: encrypt, the answer is no.
2. Is this traffic being NAT'd properly? It is not enough to see the appropriate NAT statement referenced, as the output will happily display the NAT statement right after a NAT-exempt statement that bypasses subsequent NAT statements.

You can also use `sh ipsec sa peer x.x.x.x` to confirm traffic is at least being sent through the VPN (the tunnel being up is not enough if keepalives are enabled). Packet capture will not show encrypted egress traffic, but what you can do is set up a capture on the outside interface with VPN peers as respective source and destination IPs, and match packets to ping requests and replies.
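A sketch of the twice NAT plus a crypto ACL matching the post-NAT source, in ASA 8.3+ syntax, using the subnets from the post (object and ACL names hypothetical; verify against your config):

```
object network LOCAL-REAL
 subnet 10.157.120.0 255.255.255.240
object network LOCAL-MAPPED
 subnet 192.168.151.0 255.255.255.240
object network REMOTE-NET
 subnet 192.168.151.16 255.255.255.240
!
! manual (twice) NAT: translate the real /28 to the mapped /28 only when
! the destination is the remote /28 - keep it above any broader NAT rules
nat (inside,outside) source static LOCAL-REAL LOCAL-MAPPED destination static REMOTE-NET REMOTE-NET
!
! the crypto ACL matches the POST-NAT source network
access-list VPN-TRAFFIC extended permit ip 192.168.151.0 255.255.255.240 192.168.151.16 255.255.255.240
```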
|
# ? Dec 20, 2013 02:42 |
|
Today I migrated a Cisco 881W from "c880data-universalk9-mz.150-1.M7.bin" to "c880data-universalk9-mz.154-1.T.bin" IOS and, much to my surprise, remote access was lost. After connecting with my trusty console cable I noticed that "transport input none" had appeared under the vty lines. So just a heads up for anyone doing a remote upgrade to this IOS version. Has anyone else seen this behaviour?
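If it bites you, the fix from the console is just putting the transport back (sketch; adjust the line range and allowed transports to taste):

```
configure terminal
 line vty 0 4
  transport input ssh      ! or "transport input telnet ssh" if you still need telnet
 end
copy running-config startup-config
```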
|
# ? Dec 22, 2013 17:37 |
|
What was there before? nothing, or `transport input ssh`?
|
# ? Dec 22, 2013 19:20 |
|
|
falz posted:What was there before? nothing, or `transport input ssh`? nothing
|
# ? Dec 22, 2013 20:49 |