three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

HP's website is painfully designed.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
The thing that's pissing me off about it right now is trying to share a single LUN to multiple machines. When I do this, it says I probably want to create a server cluster and share the LUN to that instead. Which makes sense. Except, as far as I can tell, there's no way to do that in CMC.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


The option for that is further up. I'll take a look when I get in and help you out if you'd like.

The LeftHand documentation is dogshit though. I'll agree with you on that.

Syano
Jul 13, 2005

FISHMANPET posted:

The thing that's pissing me off about it right now is trying to share a single LUN to multiple machines. When I do this, it says I probably want to create a server cluster and share the LUN to that instead. Which makes sense. Except, as far as I can tell, there's no way to do that in CMC.

You may need to update CMC then. In all recent versions you right-click 'Servers' and choose 'New Server Cluster...'. You can even add your new member servers from that screen. You'll need to get that done before you can add a LUN to multiple initiators at once.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Syano posted:

You may need to update CMC then. In all recent versions you right-click 'Servers' and choose 'New Server Cluster...'. You can even add your new member servers from that screen. You'll need to get that done before you can add a LUN to multiple initiators at once.

Yeah, upgrading is on the plate today.

I can share the LUN to multiple machines, I just have to manually specify each machine, so in theory, when I get a new machine, I'd have to put it in a bunch of places.

And what's more mind-blowing: if the current version of CMC doesn't have server clusters, why would it tell me to make one?

E: gently caress me sideways, I knew I'd seen it in there before, and the one thing I didn't try last night was right-clicking on the root of the servers menu. Welp, cluster is set up now.

FISHMANPET fucked around with this message at 16:07 on Sep 12, 2012

evil_bunnY
Apr 2, 2003

Both management interfaces on our old md3000i (don't laugh) have stopped responding. For the second time in as many weeks.

Doccykins
Feb 21, 2006
Hey storage dudes, I've just had a new file server dumped on me and have been second-guessing myself for the past couple of days about which RAID level to use. The setup is an HP DL380 with 16 bays: bays 1 and 9 are 136GB SAS in RAID 1 for the OS, and bays 2-8 and 10-16 are 14 x 600GB SAS disks. Use is exclusively going to be our new file server. The predecessor is a 2TB RAID 5 array made up of 8 x 300GB SCSI disks and is ripe for decommissioning.

I know my options are

Option a) Doing a RAID 10 for uber performance but only ending up with 3.8TB of space which is going to be eaten pretty fast by users and the inevitable question from up high is going to be 'Why are we so low on disk space, I thought we just bought a whole new array?'

Option b) Putting the 14 x 600GB SAS disks into a RAID 50 array with 3 parity groups of 4 disks and having 2 hot spares which is the safest option and gives us another TB to play with (4.9TB)

Option c) The compromise of RAID 50 with 2 parity groups of 6 disks each and 2 hot spares, which gives me 5.4TB to work with and the ability to have the RAID rebuild whilst a new disk is ordered and swapped in before the whole array shits itself.

Option d) Maximising space by doing a RAID 50 with 2 parity groups of 7 disks each (which the HP Array Config Utility defaults me to when I select 50), but I'm wary of trading a higher risk of failure (the opposite of the RAID 10 trade-off) for the better amount of disk space (6.5TB).

Thinking aloud whilst writing this post, I'm pretty sure RAID 10 is out, as the users almost certainly won't notice the difference between going from RAID 5 SCSI to SAS in RAID 50 versus SCSI to SAS in RAID 10, so I can claw back some of that extra real estate. After discussing it with a colleague who isn't as techy as me, we're leaning towards Option C. I suppose I'm down to being interested in what RAID 50 layout you guys would pick in this situation and whether there are any glaring problems I've missed.
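
Quick sanity check on those capacity numbers (a rough sketch, assuming "600GB" means 600 x 10^9 bytes per disk and ignoring controller/filesystem overhead):
code:
DISK_GB = 600  # "marketing" gigabytes: 600 x 10**9 bytes per disk

def usable_tib(data_disks, disk_gb=DISK_GB):
    # decimal GB -> binary TiB, which is roughly what the OS will report
    return data_disks * disk_gb * 1e9 / 2**40

options = {
    "a) RAID 10, 7 mirrored pairs":       7,   # 14 disks / 2
    "b) RAID 50, 3x(3+1), 2 hot spares":  9,   # 12 disks - 3 parity
    "c) RAID 50, 2x(5+1), 2 hot spares": 10,   # 12 disks - 2 parity
    "d) RAID 50, 2x(6+1), no spares":    12,   # 14 disks - 2 parity
}

for name, data_disks in options.items():
    print(f"{name}: {usable_tib(data_disks):.1f} TiB usable")
# roughly 3.8, 4.9, 5.5 and 6.5 TiB respectively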

Massive thanks for any input!

evil_bunnY
Apr 2, 2003

Leave hot spares.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe
Always hot spares. Believe me you don't want to be holding up a business while waiting for a plane to arrive with replacement hard drives.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Another crosspost from the poo poo that pisses you off megathread:

Thanks, recently-acquired storage vendor, for giving me the runaround for a loving week on a controller that desperately needs replacement before saying "sorry, DDN is having problems sourcing these controllers -- by the way, because of firmware incompatibilities, we need to simultaneously replace every one in every enclosure you have, so you'll have to bring all your storage down so we can gently caress it up."

Storage is the printers of the datacenter.

evil_bunnY
Apr 2, 2003

Misogynist posted:

Storage is the printers of the datacenter.
Hahaha that's true on so many levels.

Long shot: anyone using NetApp ifgroups in combination with Nexus to do VLAN tagging on the storage controllers?
Normally our switches are managed by someone else (so of course I'm borderline incompetent). I'm pretty sure I've got the NetApp config right, but I could use a second pair of eyes on the Cisco config.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

evil_bunnY posted:

Hahaha that's true on so many levels.

Long shot: anyone using NetApp ifgroups in combination with Nexus to do VLAN tagging on the storage controllers?
Normally our switches are managed by someone else (so of course I'm borderline incompetent). I'm pretty sure I've got the NetApp config right, but I could use a second pair of eyes on the Cisco config.
Here is the relevant port config for one of our controllers.

code:
interface port-channel131
  description na3240_a
  switchport mode trunk
  switchport trunk native vlan 2999
  switchport trunk allowed vlan 4,251-252,1111,2999
  speed 10000
  vpc 131

interface Ethernet1/31
  switchport mode trunk
  switchport trunk native vlan 2999
  switchport trunk allowed vlan 4,251-252,1111,2999
  channel-group 131 mode active

It's pretty standard.

evil_bunnY
Apr 2, 2003

Claim a free beer of your choosing next time you're in Stockholm :radcat:

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

Claim a free beer of your choosing next time you're in Stockholm :radcat:

You'll want to add "spanning-tree port type edge trunk" to your port-channel config as well.
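
i.e. adorai's snippet above would end up looking something like this (sketch, not tested):
code:
interface port-channel131
  description na3240_a
  switchport mode trunk
  switchport trunk native vlan 2999
  switchport trunk allowed vlan 4,251-252,1111,2999
  spanning-tree port type edge trunk
  speed 10000
  vpc 131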

evil_bunnY
Apr 2, 2003

NippleFloss posted:

You'll want to add "spanning-tree port type edge trunk" to your port-channel config as well.
You just want a beer too don't you?


Thanks 8)

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

You just want a beer too don't you?


Thanks 8)

Yes, exactly that. Now I've got a perfect excuse to visit Sweden. Can't let free beer go to waste.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Doccykins posted:

Option a) Doing a RAID 10 for uber performance
This is a performance myth. If you're mostly pushing a small number of sequential streams (i.e. you're bound by throughput rather than IOPS), RAID-5 or RAID-6 will be significantly faster on reads when your array is healthy.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So our Compellent SAN has to be set up by a Compellent engineer, and they sent us a survey to fill out beforehand. Under the iSCSI section it says this:

quote:

Best practice for most Operating Systems is to use two dedicated networks for iSCSI traffic (VMWare 3.5 is an exception). Alternately, dedicated subnets can be used by creating VLANs.
One NIC per iSCSI subnet must be dedicated per server.

I've not seen anything about running two separate iSCSI networks, and as far as I can tell, a bunch of the virtual port stuff that Compellent does wouldn't work if interfaces were on multiple subnets. What are we supposed to be doing here?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

FISHMANPET posted:

So our Compellent SAN has to be set up by a Compellent engineer, and they sent us a survey to fill out beforehand. Under the iSCSI section it says this:


I've not seen anything about running two separate iSCSI networks, and as far as I can tell, a bunch of the virtual port stuff that Compellent does wouldn't work if interfaces were on multiple subnets. What are we supposed to be doing here?

That's right, that's how you are supposed to do iSCSI MPIO. You have two separate NICs on your host, two separate switches, and two separate NICs on your SAN for full redundancy. A lot of people skimp and just make two VLANs on one switch, or put the whole thing on one VLAN on one switch and just use different IP addresses.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

madsushi posted:

That's right, that's how you are supposed to do iSCSI MPIO. You have two separate NICs on your host, two separate switches, and two separate NICs on your SAN for full redundancy. A lot of people skimp and just make two VLANs on one switch, or put the whole thing on one VLAN on one switch and just use different IP addresses.

Can't you do all that physical redundancy with a single subnet?

complex
Sep 16, 2003

FISHMANPET posted:

Can't you do all that physical redundancy with a single subnet?

Multiple subnets guard against someone, say, deleting the VLAN in the core. (I've seen it)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Can't you do all that physical redundancy with a single subnet?
...why would you? It's the money for the physical infrastructure that stalls most people, not running out of VLANs.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I can't find any information on how I would set that up. If the Compellent controller has two ports, do I put each port on a separate VLAN? How does this work with Compellent's Virtual Ports? On our VMware servers (since this is all for a VMware deployment) we have two 10GbE NICs, which we're going to trunk into multiple VLANs. Do I put both VLANs on each interface, or do I put one iSCSI VLAN on each interface?

Ugh, none of this makes any sense.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

I can't find any information on how I would set that up. If the Compellent controller has two ports, do I put each port on a separate VLAN? How does this work with Compellent's Virtual Ports? On our VMware servers (since this is all for a VMware deployment) we have two 10GbE NICs, which we're going to trunk into multiple VLANs. Do I put both VLANs on each interface, or do I put one iSCSI VLAN on each interface?

Ugh, none of this makes any sense.
Typically, yes, you would connect one port on each controller to a different fault domain. If you have properly redundant infrastructures, VLANs won't even enter into the equation unless you're routing your iSCSI traffic (don't do this) because the networks are physically separate and don't connect in any way.

If you're running converged networking over the same switches as your storage, you'll want to use separate VLANs.

KS
Jun 10, 2003
Outrageous Lumpwad
The Compellent Fault Domain concept follows your physical infrastructure. You will have one fault domain per physical switch. The two fault domains should have separate subnets. Whether those physical switches are dedicated to storage or are carrying network traffic as well doesn't really matter -- you should have a dedicated VLAN on each switch.

At least one port on each controller goes to each switch -- best practice would be two or more from each controller to each switch. Virtual ports fail IPs from one port to another within the same fault domain to protect against controller failure. Any IP associated with a fault domain can live on any controller port within that fault domain, which should always be on the same switch/in the same VLAN.

Look at page 36 of "Storage Center 5.5 Connectivity Guide" on the KC.

Some operating systems (Win 2003 software iSCSI at least) don't MPIO properly when all interfaces are in the same subnet, so just don't do it.
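
As a rough sketch of that layout (subnet and VLAN numbers are made up, substitute your own):
code:
Fault domain 1 -> switch A, VLAN 101, subnet 10.10.1.0/24   (example values)
  controller 1 port 1 -> switch A,  controller 2 port 1 -> switch A
  host iSCSI NIC / vmkernel A -> 10.10.1.x
Fault domain 2 -> switch B, VLAN 102, subnet 10.10.2.0/24
  controller 1 port 2 -> switch B,  controller 2 port 2 -> switch B
  host iSCSI NIC / vmkernel B -> 10.10.2.x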

KS fucked around with this message at 03:41 on Sep 18, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

KS posted:

Some operating systems (Win 2003 software iSCSI at least) don't MPIO properly when all interfaces are in the same subnet, so just don't do it.
Multiple interfaces in the same subnet is generally a risky proposition regardless, for any application, unless you feel like mastering policy-based routing. Asymmetric routing is a huge pain in the rear end and it's simplest to just avoid it.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Some things with SVC that VPLEX doesn't have:

Thin provisioning
Write cache
Easy Tier (automatic tiering onto better-performing storage)
Advanced features (mirroring, snapshots) can be done across different back-end third-party arrays

This guy tries to defend the VPLEX, but most of his points are weak or trivial. If you have hosts accessing the same data in multiple active data centers, that might be an argument for VPLEX, but SVC can do that too.

SVC is easy to manage as well with the new GUI, and has real-time compression built in.

http://vchetu.blogspot.com/2012/07/emc-vplex-vs-ibm-svc.html another comparison, note that you *can* encapsulate LUNs with SVC Image Mode.

Full disclosure: I work for IBM and have sold several SVCs.

Rhymenoserous
May 23, 2008

ZombieReagan posted:

I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately.

EMC is going to try and push some poo poo like replication manager on you and trust me you'd rather kill yourself than ever try to figure out what the gently caress is wrong with replication manager.

Amandyke
Nov 27, 2004

A wha?

ZombieReagan posted:

I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately.

You could also look at Cisco's DMM on MDS series switches with an SSM module.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

ZombieReagan posted:

I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately.

It depends what you are looking for. If you really just want storage virtualization then there are many different products, including VPLEX and SVC. I think SVC easily wins on the storage virtualization side: it has been around longer and has a lot more features because of it.

However, VPLEX does a few things that SVC doesn't, as SVC is active/passive (as far as I'm aware, I've been out of the game for a while now). The main thing is that VPLEX provides the ability to do true active/active data centers with high availability at both sites.

What does this mean in lay terms?

So traditionally you have site A and site B with data A and data B. Site A fails and you have to fail over to site B and do a little work to get data B up and running. You've got failover plans, RTOs, and staff running around.

With VPLEX you can treat it as site A and site A, and the data has the same identity at both: A and A. Combine this with a technology that can handle active/active, such as VMware HA or Oracle RAC, and you're golden.

- If a site is lost, things continue at the other site without any human intervention or outage. There is no panic and running around while failover options and instructions are considered. This alone is worth its weight in gold.
- Both sites can be used at the same time, meaning you don't have a DR site that sits there doing nothing. It can become an active part of the environment.
- Workloads can easily be pushed to other sites in the event of maintenance or a disaster on the way (fire, flood).
- All the usual features, such as the ability to move data around back-end arrays without downtime, the ability to retire arrays, etc.

One of the main users I know of is Melbourne IT. This is a hosting company in Aus that has VPLEX just so they don't have outages with customer data (and because Australia gets everything from floods to forest fires to hailstones the size of basketballs). A vMotion without having to do a Storage vMotion lets you move things around pretty quickly.

Vanilla fucked around with this message at 08:17 on Sep 19, 2012

evil_bunnY
Apr 2, 2003

I'm having a routing(?) issue on my NetApp system and our network team is being fantastically uncooperative (though the original issue is most probably my doing). Can any of you actually smart people spot any obvious mistakes? I don't understand why this wouldn't work. Right now I can't even ping my default GW.


I can't seem to route on our normal networks, only on my management subnet/VLAN.

This is the controller RC:
code:
#Manually Edited Filer RC file

hostname netappcontroller01


#Management port
ifconfig e0M `hostname`-e0M netmask 255.255.255.0 mtusize 1500


#Virtual interface
#Load balanced by src-dest IP over 10GBE interfaces
ifgrp create lacp ntapifgrp01 -b ip e1a e1b
ifgrp favor e1a
ifconfig ntapifgrp01 partner ntapifgrp01


#Services interfaces on appropriate VLANs
#731 is services VLAN, 739 is VM NFS
vlan create ntapifgrp01 731 739
ifconfig ntapifgrp01-731 public.81.183 netmask 255.255.255.0 mtusize 1500
ifconfig ntapifgrp01-739 192.168.225.11 netmask 255.255.255.0 mtusize 1500


#Routing
route add default public.81.1 1
routed off


#Options
options dns.domainname netappcontroller01
options dns.enable on
options nis.enable off
savecore
These are the routes:

code:
netappcontroller01> netstat -rn
Routing tables

Internet:
Destination      Gateway            Flags     Refs     Use  Interface           
default          public.81.1        UGS         4     9531  ntapifgrp01-731     
127              127.0.0.1          UGS         0        0  lo                  
127.0.0.1        127.0.0.1          UH          1        0  lo                  
127.0.10.1       127.0.20.1         UHS         2      540  losk                
public.81/24     link#13            UC          0        0  ntapifgrp01-731     
public.81.1      link#13            UHL         1        0  ntapifgrp01-731     
192.168.224      link#7             UC          0        0  e0M                 
192.168.224.11   0:a0:98:1d:21:b    UHL         2        4  lo                  
192.168.224.102  a8:20:66:0:df:43   UHL         1        0  e0M                 
192.168.225      link#14            UC          0        0  ntapifgrp01-739
And the Cisco vPC/port config:

code:
interface port-channel1
  description netapp2240_1
  switchport mode trunk
  switchport trunk native vlan 731
  switchport trunk allowed vlan 731,738-739
  spanning-tree port type edge trunk
  speed 10000
  vpc 1

interface port-channel2
  description netapp2240_2
  switchport mode trunk
  switchport trunk native vlan 731
  switchport trunk allowed vlan 731,738-739
  spanning-tree port type edge trunk
  speed 10000
  vpc 2
  
[...]

interface Ethernet1/1
  switchport mode trunk
  switchport trunk native vlan 731
  switchport trunk allowed vlan 731,738-739
  channel-group 1 mode active

interface Ethernet1/2
  switchport mode trunk
  switchport trunk native vlan 731
  switchport trunk allowed vlan 731,738-739
  channel-group 2 mode active
  
[public] is the first two octets of our routable IP block.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Been a long time since I did networking... buuuttt...


route add default public.81.1 1

should the trailing 1 be ntapifgrp01 or e1a?

Even so, you should still be able to ping the default gateway. Can you ping the gateway from another host, and ping all the filer's IPs?

evil_bunnY
Apr 2, 2003

That's the metric 8]
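
For reference, the 7-mode form is route add default <gateway> <metric>, as far as I know, so:
code:
route add default public.81.1 1
# syntax: route add default <gateway> <metric> -- the trailing 1 is the hop count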

I think the problem is that my default gateway routes over my interface group, and there's an issue there.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

That's the metric 8]

I think the problem is that my default gateway routes over my interface group, and there's an issue there.

What do your "ifconfig -a" and "ifgrp status" look like?

Also, while the switch config looks okay to me, it's hard to tell without seeing the config from the vPC peer switch and the vPC config or "show vpc" output from one of the switches.

YOLOsubmarine fucked around with this message at 18:30 on Sep 19, 2012

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Here are my thoughts:

1) You shouldn't be setting the native VLAN on the Ciscos. The native VLAN is still 1. I don't have that value set on any of my ether-channel configs.

2) Make sure your encapsulation type is dot1q; I'm not sure if your Ciscos are defaulting to ISL or whatever (quick way to check below).

3) You want "routed on" if you want to be able to use routes, so turn that on for the NetApp. Belay this order.
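
If they're the Nexus boxes from the configs earlier, I think NX-OS only does 802.1Q anyway, but you can double-check what the switch thinks it's trunking with something like:
code:
show interface trunk
show running-config interface port-channel 1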

madsushi fucked around with this message at 18:48 on Sep 19, 2012

evil_bunnY
Apr 2, 2003

I shall get on that poo poo and report back, SIR. Thanks both of you.

bort
Mar 13, 2003

madsushi posted:

Here are my thoughts:

1) You shouldn't be setting the native VLAN on the Ciscos. The native VLAN is still 1. I don't have that value set on any of my ether-channel configs.
Do you prune VLAN 1 on the trunks? I typically always set the untagged/native VLAN on a trunk. This is because VLAN 1 has all kinds of control traffic on it, unconfigured ports end up on it, and older switches could punt VLAN 1 traffic to the processor, slowing everything down. e: another reason is that if VLAN 1 is tagged on all your trunks, you can end up with a giant spanning tree that spans all switches, takes a long time to converge, can storm, etc.

I'd agree with your original approach, bunnY. You could try another VLAN for your native VLAN to ensure that you're tagging the traffic on 731, but I, at least, stay away from VLAN 1.

edit: How bout this?
code:
  switchport mode trunk
  switchport trunk encapsulation dot1q
  switchport trunk native vlan 731
  switchport trunk allowed vlan 738-739
The native/untagged VLAN is allowed by default (all VLANs are, in Cisco). On Cisco, you'd have to explicitly prune it, so it doesn't need to be in the allowed line and may further confuse things.
The one other thing I can think of is that your NetApp might be expecting tags on the traffic it receives. Cisco strips 802.1Q tags on its native VLAN, but there is a command vlan dot1q tag native that will add tags to native VLANs. This is a global command, so be careful -- you may get unexplained behavior on other trunks if they're expecting untagged traffic.
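
Another option would be to fix the same mismatch on the filer side instead. Something like this in the rc (just a sketch reusing the addresses from your paste, not tested):
code:
#731 arrives untagged (it's the native VLAN on the trunk), so put its address
#on the base ifgrp instead of on a tagged VLAN interface
ifconfig ntapifgrp01 public.81.183 netmask 255.255.255.0 mtusize 1500
vlan create ntapifgrp01 739
ifconfig ntapifgrp01-739 192.168.225.11 netmask 255.255.255.0 mtusize 1500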

bort fucked around with this message at 01:15 on Sep 20, 2012

madsushi
Apr 19, 2009

Baller.
#essereFerrari

bort posted:

Do you prune VLAN 1 on the trunks? I typically always set the untagged/native VLAN on a trunk. This is because VLAN 1 has all kinds of control traffic on it, unconfigured ports end up on it, and older switches could punt VLAN 1 traffic to the processor, slowing everything down.

I'd agree with your original approach, bunnY. You could try another VLAN for your native VLAN to ensure that you're tagging the traffic on 731, but I, at least, stay away from VLAN 1.

Yeah, I use "sw tru all vlan x-y" to exclude VLAN 1 from hitting the trunk.

bort
Mar 13, 2003

You're probably right about it defaulting to ISL, anyway. That's probably the issue.

e: it's actually one of the few things that Force10 does differently that I've come to love. By default, nothing's allowed on a trunk. You put the trunk interface as tagged or untagged in the VLAN interface config. None of this is-it-or-isn't-it nonsense.

bort fucked around with this message at 01:08 on Sep 20, 2012

evil_bunnY
Apr 2, 2003

NippleFloss posted:

What do your "ifconfig -a"
code:
netappcontroller01> ifconfig -a
e0a: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
	ether 00:a0:98:1d:21:06 (auto-unknown-down) flowcontrol full
e0b: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
	ether 00:a0:98:1d:21:07 (auto-unknown-down) flowcontrol full
e0c: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
	ether 00:a0:98:1d:21:08 (auto-unknown-down) flowcontrol full
e0d: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
	ether 00:a0:98:1d:21:09 (auto-unknown-down) flowcontrol full
e1a: flags=0x89f0c867<BROADCAST,RUNNING,MULTICAST,TCPCKSUM,VLAN,LRO> mtu 1500
	ether 02:a0:98:1d:21:06 (auto-10g_twinax-fd-up) flowcontrol full
	trunked ntapifgrp01
e1b: flags=0x89f0c867<BROADCAST,RUNNING,MULTICAST,TCPCKSUM,VLAN,LRO> mtu 1500
	ether 02:a0:98:1d:21:06 (auto-10g_twinax-fd-up) flowcontrol full
	trunked ntapifgrp01
e0M: flags=0x2b4c867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,MGMT_PORT> mtu 1500
	inet 192.168.224.11 netmask 0xffffff00 broadcast 192.168.224.255 noddns
	ether 00:a0:98:1d:21:0b (auto-100tx-fd-up) flowcontrol full
e0P: flags=0x2b4c867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,ACP_PORT> mtu 1500 PRIVATE
	inet 192.168.2.58 netmask 0xfffffc00 broadcast 192.168.3.255 noddns
	ether 00:a0:98:1d:21:0a (auto-100tx-fd-up) flowcontrol full
lo: flags=0x1b48049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160
	inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
	ether 00:00:00:00:00:00 (VIA Provider)
losk: flags=0x40a400c9<UP,LOOPBACK,RUNNING> mtu 9188
	inet 127.0.20.1 netmask 0xff000000 broadcast 127.0.20.1
ntapifgrp01: flags=0xa2f0c863<BROADCAST,RUNNING,MULTICAST,TCPCKSUM,VLAN> mtu 1500
	partner ntapifgrp01 (not in use)
	ether 02:a0:98:1d:21:06 (Enabled interface groups)
ntapifgrp01-731: flags=0x2b4c863<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
	inet 130.238.81.183 netmask 0xffffff00 broadcast 130.238.81.255
	ether 02:a0:98:1d:21:06 (Enabled interface groups)
ntapifgrp01-739: flags=0x2b4c863<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
	inet 192.168.225.11 netmask 0xffffff00 broadcast 192.168.225.255
	ether 02:a0:98:1d:21:06 (Enabled interface groups)

NippleFloss posted:

and "ifgrp status" look like?
code:
netappcontroller01> ifgrp status
default: transmit 'IP Load balancing', Ifgrp Type 'multi_mode', fail 'log'
ntapifgrp01: 2 links, transmit 'IP Load balancing', Ifgrp Type 'lacp' fail 'default'
	 Ifgrp Status	Up 	Addr_set 
	up:
	e1b: state up, since 17Sep2012 16:26:47 (2+20:37:59)
		mediatype: auto-10g_twinax-fd-up
		flags: enabled
		active aggr, aggr port: e1a
		input packets 19729, input bytes 2444536
		input lacp packets 19699, output lacp packets 19770
		output packets 21587, output bytes 2567916
		up indications 7, broken indications 2
		drops (if) 0, drops (link) 0
		indication: up at 17Sep2012 16:26:47
			consecutive 0, transitions 9
	e1a: state up, since 17Sep2012 16:19:49 (2+20:44:57)
		mediatype: auto-10g_twinax-fd-up
		flags: enabled
		active aggr, aggr port: e1a
		input packets 765453, input bytes 76621888
		input lacp packets 19704, output lacp packets 19825
		output packets 20732, output bytes 2503914
		up indications 7, broken indications 4
		drops (if) 0, drops (link) 0
		indication: up at 17Sep2012 16:19:49
			consecutive 0, transitions 11

NippleFloss posted:

Also, while the switch config looks okay to me, it's hard to tell without seeing the config from the vPC peer switch and the vPC config or "show vpc" output from one of the switches.
The config is exactly the same on both switches, the only change is the keepalive destination.
code:
Nex-One# sho vpc
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 1   
Peer status                     : peer adjacency formed ok      
vPC keep-alive status           : peer is alive                 
Configuration consistency status: success 
Per-vlan consistency status     : success                       
Type-2 consistency status       : success 
vPC role                        : primary                       
Number of vPCs configured       : 7   
Peer Gateway                    : Disabled
Dual-active excluded VLANs      : -
Graceful Consistency Check      : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans    
--   ----   ------ --------------------------------------------------
1    Po100  up     1,3-4,50,730-749,920                                      

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- -----------
1      Po1         up     success     success                    731,738-739 
2      Po2         up     success     success                    731,738-739 
11     Po11        down*  Not         Consistency Check Not      -           
                          Applicable  Performed                              
12     Po12        down*  Not         Consistency Check Not      -           
                          Applicable  Performed                              
13     Po13        down*  Not         Consistency Check Not      -           
                          Applicable  Performed                              
14     Po14        down*  Not         Consistency Check Not      -           
                          Applicable  Performed                              
15     Po15        down*  Not         Consistency Check Not      -           
                          Applicable  Performed                              
I'm looking into the encapsulation now.

evil_bunnY fucked around with this message at 13:47 on Sep 20, 2012
