|
Amandyke posted:Like here? http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-3930449-3930449-3930449-4118659.html?dnr=1 HP's website is painfully designed.
|
# ? Sep 12, 2012 01:58 |
|
The thing that's pissing me off about it right now is trying to share a single LUN to multiple machines. When I do this, it says I probably want to create a server cluster and share the LUN to that instead. Which makes sense. Except, as far as I can tell, there's no way to do that in CMC.
|
# ? Sep 12, 2012 15:52 |
|
The option for that is further up. I'll take a look when I get in and help you out if you'd like. The LeftHand documentation is dogshit though. I'll agree with you on that.
|
# ? Sep 12, 2012 15:56 |
|
FISHMANPET posted:The thing that's pissing me off about it right now is trying to share a single LUN to multiple machines. When I do this, it says I probably want to create a server cluster and share the LUN to that instead. Which makes sense. Except, as far as I can tell, there's no way to do that in CMC. You may need to update CMC then. In all recent versions you right-click 'Servers' and then choose 'New Server Cluster...'. You can even add your new member servers from this screen. You'll need to get that done before you can add a LUN to multiple initiators at once.
|
# ? Sep 12, 2012 16:00 |
|
Syano posted:You may need to update CMC then. All recent versions you right click 'servers' and then choose 'New Server Cluster...'. You can even add your new member servers from this screen. You are going to have to get that done before you can add a LUN to multiple initiators at once Yeah, upgrading is on the plate today. I can share the LUN to multiple machines, I just have to manually specify each machine, so in theory, when I get a new machine, I'd have to add it in a bunch of places. And it's even more mind-blowing that, if the current version of CMC doesn't have server clusters, why would it tell me to make one? E: gently caress me sideways, I knew I'd seen it in there before, and the one thing I didn't try last night was right clicking on the root of the servers menu. Welp, cluster is set up now. FISHMANPET fucked around with this message at 16:07 on Sep 12, 2012 |
# ? Sep 12, 2012 16:05 |
|
Both management interfaces on our old md3000i (don't laugh) have stopped responding. For the second time in as many weeks.
|
# ? Sep 13, 2012 13:53 |
|
Hey storage dudes, I've just had a new file server dumped on me and have been second-guessing myself for the past couple of days about which RAID level to use. Setup is an HP DL380 with 16 bays: bays 1 and 9 are 136GB SAS in RAID 1 for the OS, and bays 2-8 and 10-16 are 14 x 600GB SAS disks. Use is exclusively going to be our new file server. The predecessor is a 2TB RAID 5 array made up of 8 x 300GB SCSI disks and is ripe for decommissioning. As I see it my options are:

Option a) RAID 10 for uber performance, but only ending up with 3.8TB of space, which is going to be eaten pretty fast by users, and the inevitable question from up high is going to be 'Why are we so low on disk space, I thought we just bought a whole new array?'

Option b) Putting the 14 x 600GB SAS disks into a RAID 50 array with 3 parity groups of 4 disks and 2 hot spares, which is the safest option and gives us another TB to play with (4.9TB).

Option c) The compromise: RAID 50 with 2 parity groups of 6 disks each and 2 hot spares, which gives me 5.4TB to work with and lets the RAID rebuild onto a spare while a new disk is ordered and swapped in before the whole array shits itself.

Option d) Maximising space with RAID 50 and 2 parity groups of 7 disks each (which the HP Array Config Utility defaults me to when I select 50), but I'm wary of the higher risk of failure (same as the RAID 10, for less performance) against the better amount of disk space (6.5TB).

Thinking aloud whilst writing this post, I'm pretty sure RAID 10 is out as the users will chew through the space too quickly. Massive thanks for any input!
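The capacity figures in the post above can be sanity-checked with a short script. The per-drive usable size is an approximation (600 GB decimal is about 0.55 TiB); real controller numbers will come out slightly lower after formatting overhead:

```python
# Sanity-check the usable-capacity figures for each RAID option.
# Assumes 600 GB (decimal) drives; actual formatted capacity varies by controller.
TIB = 2**40
drive_bytes = 600e9
total_drives = 14

def usable_tib(data_drives):
    """Usable capacity in TiB given the number of data-bearing drives."""
    return data_drives * drive_bytes / TIB

# Option a: RAID 10 across 14 drives -> 7 drives of data
opt_a = usable_tib(7)
# Option b: RAID 50, 3 parity groups of 4 (one parity disk each) + 2 hot spares
opt_b = usable_tib(3 * (4 - 1))
# Option c: RAID 50, 2 parity groups of 6 + 2 hot spares
opt_c = usable_tib(2 * (6 - 1))
# Option d: RAID 50, 2 parity groups of 7, no hot spares
opt_d = usable_tib(2 * (7 - 1))

print(f"a: {opt_a:.1f} TiB  b: {opt_b:.1f} TiB  c: {opt_c:.1f} TiB  d: {opt_d:.1f} TiB")
```

The spread between option a and option d is roughly 2.7 TiB, which is the real trade being made against rebuild risk.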
|
# ? Sep 13, 2012 15:51 |
|
Leave hot spares.
|
# ? Sep 13, 2012 15:55 |
|
Always hot spares. Believe me, you don't want to be holding up a business while waiting for a plane to arrive with replacement hard drives.
|
# ? Sep 14, 2012 01:45 |
|
Another crosspost from the poo poo that pisses you off megathread: Thanks, recently-acquired storage vendor, for giving me the runaround for a loving week on a controller that desperately needs replacement before saying "sorry, DDN is having problems sourcing these controllers -- by the way, because of firmware incompatibilities, we need to simultaneously replace every one in every enclosure you have, so you'll have to bring all your storage down so we can gently caress it up." Storage is the printers of the datacenter.
|
# ? Sep 14, 2012 06:21 |
|
Misogynist posted:Storage is the printers of the datacenter. Long shot: anyone using Netapp ifgroups in combination with Nexus to do VLAN tagging on the storage controllers? Normally our switches are managed by someone else (so of course I'm borderline incompetent). I'm pretty sure I've got the netapp config right, but I could use a second pair of eyes on the Cisco config.
|
# ? Sep 14, 2012 13:23 |
|
evil_bunnY posted:Hahaha that's true on so many levels.
code:
interface port-channel131
  description na3240_a
  switchport mode trunk
  switchport trunk native vlan 2999
  switchport trunk allowed vlan 4,251-252,1111,2999
  speed 10000
  vpc 131

interface Ethernet1/31
  switchport mode trunk
  switchport trunk native vlan 2999
  switchport trunk allowed vlan 4,251-252,1111,2999
  channel-group 131 mode active
It's pretty standard.
|
# ? Sep 14, 2012 15:02 |
|
Claim a free beer of your choosing next time you're in Stockholm
|
# ? Sep 14, 2012 16:13 |
|
evil_bunnY posted:Claim a free beer of your choosing next time you're in Stockholm You'll want to add "spanning-tree port type edge trunk" to your port-channel config as well.
|
# ? Sep 14, 2012 19:46 |
|
NippleFloss posted:You'll want to add "spanning-tree port type edge trunk" to your port-channel config as well. Thanks 8)
|
# ? Sep 15, 2012 09:26 |
|
evil_bunnY posted:You just want a beer too don't you? Yes, exactly that. Now I've got a perfect excuse to visit Sweden. Can't let free beer go to waste.
|
# ? Sep 15, 2012 18:52 |
|
Doccykins posted:Option a) Doing a RAID 10 for uber performance
|
# ? Sep 17, 2012 03:27 |
|
So our Compellent SAN has to be set up by a Compellent engineer, and they sent us a survey to fill out beforehand. Under the iSCSI section it says this: quote:Best practice for most Operating Systems is to use two dedicated networks for iSCSI traffic (VMWare 3.5 is an exception). Alternately, dedicated subnets can be used by creating VLANs. I've not seen anything about running two separate iSCSI networks, and as far as I can tell, a bunch of the virtual port stuff that Compellent does wouldn't work if interfaces were on multiple subnets. What are we supposed to be doing here?
|
# ? Sep 17, 2012 22:38 |
|
FISHMANPET posted:So our Compellent SAN has to be setup by a Compellent engineer, and they sent us a survey to fill out beforehand. Under the iSCSI section it says this: That's right, that's how you are supposed to do iSCSI MPIO. You have two separate NICs on your host, two separate switches, and two separate NICs on your SAN for full redundancy. A lot of people skimp and just make two VLANs on one switch, or put the whole thing on one VLAN on one switch and just use different IP addresses.
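The full-redundancy layout described above can be sketched as a toy model (all component names are hypothetical) showing that no single component failure takes out both paths, because the two fabrics share no hardware:

```python
# Toy model of a dual-fabric iSCSI MPIO layout.
# Each path is (host NIC, switch, SAN port); fabric A and fabric B are
# fully independent, so any one component can fail without losing storage.
fabric_a = [("host-nic1", "switch-a", "san-port-a")]
fabric_b = [("host-nic2", "switch-b", "san-port-b")]
paths = fabric_a + fabric_b

components = {c for path in paths for c in path}
for failed in components:
    surviving = [p for p in paths if failed not in p]
    # Every component lives in exactly one fabric, so the other path survives.
    assert surviving, f"single failure of {failed} would take down all paths"

print("any single component failure leaves at least one path up")
```

Skimping down to one switch collapses the model: "switch-a" then appears in every path, and the assertion fails for that one component — which is exactly the single point of failure being warned about.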
|
# ? Sep 17, 2012 23:10 |
|
madsushi posted:That's right, that's how you are supposed to do iSCSI MPIO. You have two separate NICs on your host, two separate switches, and two separate NICs on your SAN for full redundancy. A lot of people skimp and just make two VLANs on one switch, or put the whole thing on one VLAN on one switch and just use different IP addresses. Can't you do all that physical redundancy with a single subnet?
|
# ? Sep 18, 2012 01:55 |
|
FISHMANPET posted:Can't you do all that physical redundancy with a single subnet? Multiple subnets guard against someone, say, deleting the VLAN in the core. (I've seen it)
|
# ? Sep 18, 2012 02:03 |
|
FISHMANPET posted:Can't you do all that physical redundancy with a single subnet?
|
# ? Sep 18, 2012 02:11 |
|
I can't find any information on how I would set that up. If the Compellent controller has two ports, do I put each port on a separate VLAN? How does this work with Compellent's Virtual Ports? On our VMware servers (since this is all for a VMware deployment) we have two 10GbE NICs, which we're going to trunk into multiple VLANs. Do I put both iSCSI VLANs on each interface, or do I put one iSCSI VLAN on each interface? Ugh, none of this makes any sense.
|
# ? Sep 18, 2012 02:21 |
|
FISHMANPET posted:I can't find any information on how I would set that up. If the Compellent controller has two ports, do I put each port on a separate Vlan? How does this work with Compellent's Virtual Ports? On our VMware servers (since this is all for a VMware deployment) we have two 10Gbe NICs, which we're going to trunk into multiple VLANs. Do I put both VLANs on each interface, or do I put one iSCSI vlan on each interface? If you're running converged networking over the same switches as your storage, you'll want to use separate VLANs.
|
# ? Sep 18, 2012 02:33 |
|
The Compellent Fault Domain concept follows your physical infrastructure. You will have one fault domain per physical switch. The two fault domains should have separate subnets. Whether those physical switches are dedicated to storage or are carrying network traffic as well doesn't really matter -- you should have a dedicated VLAN on each switch. At least one port on each controller goes to each switch -- best practice would be 2+ ports from each controller to each switch. Virtual ports fail IPs from one port to another within the same fault domain to protect against controller failure. Any IP associated with a fault domain can live on any controller port within that fault domain, which should always be on the same switch/in the same VLAN. Look at page 36 of the "Storage Center 5.5 Connectivity Guide" on the KC. Some operating systems (Win 2003 software iSCSI, at least) don't MPIO properly when all interfaces are in the same subnet, so just don't do it. KS fucked around with this message at 03:41 on Sep 18, 2012 |
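As a made-up illustration of that layout -- one fault domain per switch, each on its own subnet, with ports from both controllers in each domain -- the addressing plan can be checked mechanically. The subnets and port names here are assumptions for the sketch, not Compellent defaults:

```python
import ipaddress

# Hypothetical addressing plan: two fault domains, one per physical switch.
# All subnet and port names below are illustrative only.
fault_domains = {
    "fd1": {"switch": "switch-1",
            "subnet": ipaddress.ip_network("10.10.1.0/24"),
            "ports": {"ctrl-A-p1": "10.10.1.11", "ctrl-B-p1": "10.10.1.12"}},
    "fd2": {"switch": "switch-2",
            "subnet": ipaddress.ip_network("10.10.2.0/24"),
            "ports": {"ctrl-A-p2": "10.10.2.11", "ctrl-B-p2": "10.10.2.12"}},
}

# Every port's IP must sit inside its fault domain's subnet, and the two
# subnets must not overlap -- overlapping subnets are what breaks MPIO path
# selection on some initiators (the Win2003 software-iSCSI case above).
for name, fd in fault_domains.items():
    for port, ip in fd["ports"].items():
        assert ipaddress.ip_address(ip) in fd["subnet"], f"{port} outside {name}"

subnets = [fd["subnet"] for fd in fault_domains.values()]
assert not subnets[0].overlaps(subnets[1]), "fault domains need separate subnets"
print("addressing plan is consistent")
```

Note that each controller contributes one port to each fault domain, so a controller failure moves IPs within a domain (virtual ports) while a switch failure loses only one domain.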
# ? Sep 18, 2012 03:38 |
|
KS posted:Some operating systems (Win 2003 software iscsi at least) don't MPIO properly when all interfaces are in the same subnet, so just don't do it.
|
# ? Sep 18, 2012 04:12 |
|
Some things with SVC that VPLEX doesn't have:

- Thin provisioning
- Write cache
- Easy Tier (automatic placement on better-performing storage)
- Advanced features (mirroring, snapshots) across different back-end third-party arrays

This guy tries to defend the VPLEX but most of his points are weak, or trivia. If you have hosts accessing the same data in multiple active data centers then that might be an argument for VPLEX, but SVC can do that too. SVC is easy to manage as well with the new GUI, and has real-time compression built in. http://vchetu.blogspot.com/2012/07/emc-vplex-vs-ibm-svc.html is another comparison; note that you *can* encapsulate LUNs with SVC Image Mode. Full disclosure: I work for IBM and have sold several SVCs.
|
# ? Sep 18, 2012 17:31 |
|
ZombieReagan posted:I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately. EMC is going to try and push some poo poo like replication manager on you and trust me you'd rather kill yourself than ever try to figure out what the gently caress is wrong with replication manager.
|
# ? Sep 18, 2012 17:52 |
|
ZombieReagan posted:I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately. You could also look at Cisco's DMM on MDS series switches with an SSM module.
|
# ? Sep 18, 2012 17:57 |
|
ZombieReagan posted:I'm starting to look into Storage Virtualization appliances to help give us some more flexibility in mirroring data to different sites and dealing with different vendors arrays. Have any of you actually implemented one of these? I've been looking at EMC VPLEX and IBM SVC so far, and on paper they seem great. I'm not going to get any horror stories from anyone in sales, and I don't know if there really are any to be had as long as things are sized appropriately. It depends what you are looking for. If you really just want storage virtualization then there are many different products, including VPLEX and SVC. I think SVC easily wins on the storage virtualization side; SVC has been going for longer and has a lot more features because of it. However, VPLEX does a few things that SVC doesn't, as SVC is active/passive (as far as I'm aware, been out of the game for a while now). The main thing is that VPLEX provides the ability to do true active/active data centers with high availability at both sites. What does this mean in lay terms? Traditionally you have site A and site B with data A and data B. Site A fails and you have to fail over to site B and do a little work to get data B up and running. You've got failover plans, RTOs and staff running around. With VPLEX you can class it as site A and A, and the data has the same identity - A and A. Combine this with a technology that can handle active/active, such as VMware HA or Oracle RAC, and you're golden.

- If a site is lost, things continue at the other site without any human intervention or outage. There is no panic and running around while failover options and instructions are considered. This alone is worth its weight in gold.
- Both sites can be used at the same time, meaning you don't have a DR site that sits there doing nothing. It can become an active part of the environment.
- Workloads can be easily pushed to other sites in the event of maintenance or a disaster on the way (fire, flood).
- All the usual features such as the ability to move data around backend arrays without downtime, ability to retire arrays, etc.

One of the main users I know of is Melbourne IT, a hosting company in Aus that has VPLEX just so they don't have outages with customer data (and because Australia gets everything from floods to forest fires and hail stones the size of basketballs). A vMotion without having to do a Storage vMotion lets you move things around pretty quick. Vanilla fucked around with this message at 08:17 on Sep 19, 2012 |
# ? Sep 19, 2012 08:12 |
|
I'm having a routing(?) issue on my netapp system and our network team is being fantastically uncooperative (though the original issue is most probably my doing). Can any of you actually smart people spot any obvious mistakes? I don't understand why this wouldn't work. Right now I can't ping even my default GW. I can't seem to route on our normal networks, only on my management subnet/VLAN. This is the controller RC: code:
code:
code:
|
# ? Sep 19, 2012 12:11 |
|
Been a long time since I did networking... buuuttt... in route add default public.81.1 1, should the trailing 1 be ntapifgrp01 or e1a? Even so, you should still be able to ping the default gateway. Can you ping the gateway from another host, and ping all the filer's IPs?
|
# ? Sep 19, 2012 15:22 |
|
That's the metric 8] I think the problem is that my default gateway routes over my interface group, and there's an issue there.
|
# ? Sep 19, 2012 15:28 |
|
evil_bunnY posted:That's the metric 8] What do your "ifconfig -a" and "ifgrp status" look like? Also, while the switch config looks okay to me it's hard to tell without seeing the config from the vpc peer switch and the vpc config or show vpc command from one of the switches. YOLOsubmarine fucked around with this message at 18:30 on Sep 19, 2012 |
# ? Sep 19, 2012 17:46 |
|
Here are my thoughts:

1) You shouldn't be setting the native VLAN on the Ciscos. The native VLAN is still 1. I don't have that value set on any of my ether-channel configs.
2) Make sure your encapsulation type is dot1q; I am not sure if your Ciscos are defaulting to ISL or whatever.

madsushi fucked around with this message at 18:48 on Sep 19, 2012 |
# ? Sep 19, 2012 18:37 |
|
I shall get on that poo poo and report back, SIR. Thanks both of you.
|
# ? Sep 19, 2012 21:21 |
|
madsushi posted:Here are my thoughts: I'd agree with your original approach, bunnY. You could try another VLAN for your native VLAN to ensure that you're tagging the traffic on 731, but I, at least, stay away from VLAN 1. edit: How bout this? code:
The one other thing I can think of is that your Netapp might be expecting tags on the traffic it receives. Cisco strips 802.1Q tags on its native VLAN, but there is a command vlan dot1q tag native that will add tags to native VLANs. This is a global command, so be careful -- you may get unexplained behavior on other trunks if they're expecting untagged traffic. bort fucked around with this message at 01:15 on Sep 20, 2012 |
# ? Sep 19, 2012 22:54 |
|
bort posted:Do you prune VLAN 1 on the trunks? I typically always set the untagged/native VLAN on a trunk. This is because VLAN 1 has all kinds of control traffic on it, unconfigured ports end up on it and older switches could drop VLAN 1 traffic to the processor, slowing everything down. Yeah, I use "sw tru all vlan x-y" to exclude VLAN 1 from hitting the trunk.
|
# ? Sep 20, 2012 00:51 |
|
You're probably right about it defaulting to ISL, anyway. That's probably the issue. e: it's actually one of the few things that Force10 does differently that I've come to love. By default, nothing's allowed on a trunk. You put the trunk interface as tagged or untagged in the VLAN interface config. None of this is-it-or-isn't-it nonsense. bort fucked around with this message at 01:08 on Sep 20, 2012 |
# ? Sep 20, 2012 01:04 |
|
NippleFloss posted:What do your "ifconfig -a" code:
NippleFloss posted:and "ifgrp status" look like? code:
NippleFloss posted:Also, while the switch config looks okay to me it's hard to tell without seeing the config from the vpc peer switch and the vpc config or show vpc command from one of the switches. code:
evil_bunnY fucked around with this message at 13:47 on Sep 20, 2012 |
# ? Sep 20, 2012 12:06 |