|
Misogynist posted:my lab advocate
|
# ? Sep 4, 2012 02:26 |
|
evil_bunnY posted:Is that position particular to IBM? Anyway, I'm going to stop complaining about this before I get myself in trouble for running my mouth
|
# ? Sep 4, 2012 02:32 |
|
Compellent MPIO question. I have the following Compellent setup: each controller has two dual-port 1Gb cards, with the top port in each card cabled to our iSCSI switch. I know, we should have two switches; we're getting to that soon. IP addresses are as follows:

If I have VMware hosts mapped to volumes on the Compellent using, on each host, two vmknics bound to one software iSCSI initiator (one vSwitch, with each physical uplink active on one vmknic and unused on the other, per normal, using Round Robin as recommended), how many paths should I see from host to volume? The answer, in my setup, is 6, and I'd like to understand why.

To use a volume that is active on Controller 1 as an example, I'm guessing that ESXi sees 2 paths to the FD control port (.100), 2 paths to slot5/port1 (.110), and 2 paths to slot6/port1 (.111), but I'm just not sure. I'd expect 4 paths, absent this virtual port stuff, but given that our Compellent tech racked the controllers in the wrong order and swapped the chassis faceplates, I'm open to the idea that something's misconfigured.
|
# ? Sep 4, 2012 03:23 |
|
I am pretty sure you should be seeing 4 paths, but it's not a valid MPIO setup. I still maintain you have the vSwitch stuff set up wrong. Reference this article:

quote:There is another important point to note when it comes to the configuration of iSCSI port bindings. On vSwitches which contain multiple vmnic uplinks, each VMkernel (vmk) port used for iSCSI bindings must be associated with a single vmnic uplink. The other uplink(s) on the vSwitch must be placed into an unused state.

This doesn't match the pic you posted in the virtualization thread. Your setup is showing 3 paths each to .110, .111, .112, and .113, all from one vmhba, which makes no sense. It's like you have a 3rd vmknic defined somewhere.

Is your switch VLAN-capable? You could set up two VLANs and two fault domains and be ready to migrate to a second switch when you get it, without host reconfiguration. It's probably a lot easier if you're just getting this into production, because conversion to two fault domains requires a storage interruption.

edit: here's what it looks like for me on a hardware iSCSI setup with dual controllers. All paths show to the control ports. Ignore the warnings on the iSCSI HBAs -- they have two IP addresses each because of a transition from 1Gb to 10Gb, so they show unbalanced until I add a 2nd 10Gb card.

KS fucked around with this message at 17:07 on Sep 4, 2012
# ? Sep 4, 2012 16:56 |
|
I have a case open with VMware so I'll double-check the vSwitch setup, but I'm still disagreeing with you on that one. You quoted:

VMware posted:There is another important point to note when it comes to the configuration of iSCSI port bindings. On vSwitches which contain multiple vmnic uplinks, each VMkernel (vmk) port used for iSCSI bindings must be associated with a single vmnic uplink. The other uplink(s) on the vSwitch must be placed into an unused state.

That's exactly what I've done - here's a shot of one of the two VMkernel portgroups showing that only one vmnic uplink is associated with it:

The switch is VLAN-capable, but since getting the second switch is a bit off, I really want to know why this setup isn't working as-is.

edit: Here's a shot of the Network Configuration tab of vmhba34 showing that it has two VMkernel portgroups bound to it. Two VMkernel portgroups, each with its own distinct and dedicated uplink:

edit: last edit I swear. You say I have a 3rd vmknic defined somewhere, and it's true that I do - it's a management-only one on vSwitch0, and iSCSI port binding is unchecked/greyed out as expected.

Mierdaan fucked around with this message at 17:57 on Sep 4, 2012
# ? Sep 4, 2012 17:43 |
|
Okay, the 6-path thing was a total non-issue. VMware support pinned it on the fact that I had volumes mapped to the hosts at the time I bound the vmknics to the iSCSI initiator, and those two paths that already existed (from vmhba34 to the two SAN ports) don't disappear until you reboot the host. I popped a host into maintenance mode, rebooted, and voila - 4 paths. Also, they say there's absolutely no issue using one vSwitch as I've done, as long as each vmknic has only one active uplink.
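For anyone following along, the expected path count with port binding is just the cross product of bound vmknics and discoverable target portals. A minimal sketch of that arithmetic (purely illustrative - the function and the portal IPs are made up for the example, not anything from the vSphere API):

```python
# Illustrative only: with iSCSI port binding, each bound vmknic logs in to
# every discoverable target portal, yielding one path per (vmknic, portal) pair.

def expected_paths(bound_vmknics, target_portals):
    """Expected path count to a volume: one path per vmknic/portal combination."""
    return len(bound_vmknics) * len(target_portals)

vmknics = ["vmk1", "vmk2"]               # two vmknics, one active uplink each
portals = ["10.0.0.110", "10.0.0.111"]   # hypothetical portal IPs for the two front-end ports

print(expected_paths(vmknics, portals))  # 2 vmknics x 2 portals = 4 paths per volume
```

Which matches the 4 paths that showed up after the reboot cleared the stale pre-binding logins.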
|
# ? Sep 4, 2012 19:19 |
|
Cool. I completely forgot about the ability to override the vswitch failover settings on a per-interface basis, so that looks reasonable.
|
# ? Sep 4, 2012 19:32 |
|
I have a LUN presented via iSCSI from a LeftHand array that I have attached to a two-node file server cluster. At some point in the near future I need to migrate that LUN (or all the data on it) to an EqualLogic array. I have a few ideas on how I'm going to go about it, but if anyone has done this and has an easy method I would sure love to hear it.
|
# ? Sep 4, 2012 19:54 |
|
What's on the LUN, precious?
|
# ? Sep 4, 2012 21:09 |
|
NTFS file shares.
|
# ? Sep 4, 2012 21:13 |
|
Syano posted:NTFS file shares.
|
# ? Sep 4, 2012 21:20 |
|
That's where my logical thought keeps going. I was sort of hoping, though, that there was some magical tool I hadn't heard about yet that could replicate a LUN between two arrays from different vendors... and it's free... and it's easy to use... and uhh, it gives me 20 dollars when I double-click on it
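Absent a magical cross-vendor tool, the usual approach for NTFS file shares is a staged copy: a bulk pass while the share is still live, then a short outage for a final delta pass. A rough sketch of the idea in Python (a real migration would use robocopy or rsync; the skip logic here is the same size-and-mtime check those tools use):

```python
import os
import shutil

def sync_tree(src, dst):
    """Copy files from src to dst, skipping ones whose size and mtime already match.
    The first call does the bulk copy while users are on the share; a second call
    during the cutover window only moves what changed since, keeping the outage short."""
    copied = 0
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            st = os.stat(s)
            if os.path.exists(d):
                dt = os.stat(d)
                if dt.st_size == st.st_size and int(dt.st_mtime) == int(st.st_mtime):
                    continue  # unchanged since the last pass
            shutil.copy2(s, d)  # copy2 preserves mtime so the next pass can skip it
            copied += 1
    return copied

# Pass 1 runs while the share is live; pass 2 runs after you take it offline.
```

The two-pass shape is the whole trick: the expensive copy happens with zero downtime, and the outage only has to cover the delta.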
|
# ? Sep 4, 2012 21:25 |
|
I know Oracle has a feature with their product where you point it at your old storage, and then redirect clients to the new storage. It passes requests for data it doesn't have onto the old storage, all while copying stuff to the new storage. Sounds pretty nifty if it actually works, and no idea if anyone offers a similar product, free or otherwise.
|
# ? Sep 4, 2012 21:48 |
|
FISHMANPET posted:I know Oracle has a feature with their product where you point it at your old storage, and then redirect clients to the new storage. It passes requests for data it doesn't have onto the old storage, all while copying stuff to the new storage. Sounds pretty nifty if it actually works, and no idea if anyone offers a similar product, free or otherwise.
|
# ? Sep 4, 2012 22:42 |
|
Compellent has a really cool copy tool out of the box -- you zone the two arrays together, present LUNs to the controller ports on the Compellent, and the array has a point and click process to claim the external LUN and do a block-by-block copy to a Compellent volume. I used it for a ~40 TB migration off a Hitachi AMS2300 2 years ago over just 3-4 Sundays and I can't imagine having to migrate without it. I guess it would be easier now that I'm 100% virtualized. Dell needs to steal this feature and give it to the EQL arrays.
|
# ? Sep 4, 2012 23:06 |
|
EMC Goon checking in. I'd be happy to check into any SRs/issues any of you seem to be having. Just a lowly CE, but I can help if you think your SR's getting hung up, or at least I could let you know what the current status of it is.
|
# ? Sep 5, 2012 06:51 |
|
Amandyke posted:EMC Goon checking in. I'd be happy to check into any SR's/issues any of you seem to be having. Just a lowly CE but I can help if you think your SR's getting hung up, or at least I could let you know what the current status of it is. Vanilla fucked around with this message at 09:19 on Sep 5, 2012 |
# ? Sep 5, 2012 09:14 |
|
I assume the tigers are Amandyke and his/her coworkers, and the turkey is the problem they are relentlessly tracking down, right? Right?
|
# ? Sep 5, 2012 14:41 |
|
Amandyke posted:EMC Goon checking in. I'd be happy to check into any SR's/issues any of you seem to be having. Just a lowly CE but I can help if you think your SR's getting hung up, or at least I could let you know what the current status of it is.

As far as I'm concerned, CEs are the only people that actually do any work at EMC; everyone else that comes out to the job site is just there to argue about where to go for lunch.
|
# ? Sep 5, 2012 14:44 |
|
Rhymenoserous posted:As far as I'm concerned CE's are the only people that actually do any work at EMC, everyone else that comes out to the job site are just there to argue about where to go for lunch.

This is my experience as well.
|
# ? Sep 5, 2012 15:01 |
|
KS posted:Compellent has a really cool copy tool out of the box -- you zone the two arrays together, present LUNs to the controller ports on the Compellent, and the array has a point and click process to claim the external LUN and do a block-by-block copy to a Compellent volume. I used it for a ~40 TB migration off a Hitachi AMS2300 2 years ago over just 3-4 Sundays and I can't imagine having to migrate without it. I guess it would be easier now that I'm 100% virtualized.
|
# ? Sep 5, 2012 15:30 |
|
EMC folks can't even get credit for funny posts even when witty rejoinders are recycled jokes from up thread...
|
# ? Sep 5, 2012 19:02 |
|
wyoak posted:My impression of the Compellent migration tool is that it took the source LUN offline during the migration - am I wrong about that, and you can leave the source online while it's migrating? That'd be really nice for us, we're about to do something similar.

It does a one-pass read of the LUN, so if your app is up and you're making changes, it won't capture them and you won't get a consistent copy. You're going to need some downtime, but not a ton -- it saturated 8Gb FC when I was doing it, so it happened pretty quick.
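For rough planning, the outage window is just LUN size over sustained link throughput. A back-of-the-envelope sketch (the 0.9 efficiency factor is an assumption for protocol overhead, and it presumes the link actually saturates, as it did above):

```python
def copy_hours(size_tb, link_gbps, efficiency=0.9):
    """Estimated hours to copy size_tb terabytes over a link of link_gbps gigabits/s.
    efficiency is an assumed fudge factor for protocol overhead."""
    size_bits = size_tb * 1e12 * 8                    # decimal TB -> bits
    seconds = size_bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# e.g. a 10 TB volume over saturated 8 Gb FC:
print(round(copy_hours(10, 8), 1))  # about 3.1 hours
```

Which is why a ~40 TB migration fit into a handful of Sundays: each chunk is a few hours of wire time, plus verification.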
|
# ? Sep 5, 2012 20:25 |
|
What does EMC recommend as the amount of free space to leave for LUN snapshots on the VNX platform? 25%?
|
# ? Sep 6, 2012 14:43 |
|
Goon Matchmaker posted:What does EMC recommend as the amount of free space to leave for LUN snapshots on the VNX platform? 25%?

15-25% was what the EMC CERTIFIED INSTRUCTOR told me in class, depending on change rate. If you don't know what your basic change rate is, or it's particularly high, default to 25%.
|
# ? Sep 6, 2012 14:47 |
|
Rhymenoserous posted:15-25% was what the EMC CERTIFIED INSTRUCTOR told me in class. Depending on changerate. If you don't know what your basic changerate is, or it's particularly high default to 25%.

As a CERTIFIED VNX IMPLEMENTATION SPECIALIST, I can confirm that this is correct. As Rhymenoserous said, it is highly dependent on the change rate of your LUN. You don't want to go over that 25%, or else your snapshot suddenly becomes useless.
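The rule of thumb falls out of simple arithmetic: reserve roughly (daily change rate) x (days of snapshots you keep), then sanity-check against the 15-25% band quoted above. A sketch with made-up numbers (the clamp-to-band behavior is my illustration of the guidance, not an EMC formula):

```python
def snapshot_reserve_pct(daily_change_pct, retention_days, floor=15.0, ceiling=25.0):
    """Estimate snapshot reserve as change rate x retention, clamped to the
    15-25% guidance. Inputs and result are percentages of LUN size."""
    raw = daily_change_pct * retention_days
    return max(floor, min(ceiling, raw))

# 3% daily change, 7 days of snapshots -> 21% reserve
print(snapshot_reserve_pct(3.0, 7))
# unknown or high change rate: the clamp pushes you to the 25% default
print(snapshot_reserve_pct(10.0, 7))
```

The ceiling also reflects the failure mode mentioned above: once snapshot space is exhausted, the snapshot is no longer usable, so you size for the worst day, not the average one.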
|
# ? Sep 6, 2012 14:56 |
|
I'm looking for a big storage box. Wants:
- 2U rackmount (can go up to 3U)
- can fill with 2TB SATA drives (such as WD RE4 drives; we provide the drives)
- hot-swappable, hardware RAID 6
- at least 6 drive bays
- redundant power supplies
- some sort of on-board management (to set up the RAID)
- iSCSI
- cheap

Any suggestions?

Edit, an alternative:
- NAS box that runs Windows (Gigabit connection) so we don't need a host.

We have an existing Dell PowerVault NX3000 that works OK. I wasn't sure about going through Dell again: we have a bunch of Server 2008 R2 licenses and piles of 2TB RE4 drives we can use, so we didn't want to pay Dell for another license and more drives.

Xenomorph fucked around with this message at 07:22 on Sep 7, 2012
# ? Sep 6, 2012 17:45 |
|
You say 2Gb+ fibre channel. Are you good at that stuff? Because you can easily run iSCSI off a 10G network and simplify the setup.

Instead of saying the kind of hardware you want, why not post a list of needs instead? I.e., I need X much space, I'll be running X number of VMs and X of those will be Exchange/SQL and the rest file storage, I need to be able to retain X number of snapshots, with or without the ability to replicate.

EDIT: Price range? Most of the 2U storage boxes in a decent range that aren't overglorified DAS really only do iSCSI (which is why I asked above why you want FC).
|
# ? Sep 6, 2012 17:50 |
|
Xenomorph posted:I'm looking for a big storage box.

Block-level storage / NFS/CIFS NAS storage / don't care? Budget?

edit: Actually, I guess if you seriously want fibre channel, that answers #1

Docjowles fucked around with this message at 17:54 on Sep 6, 2012
# ? Sep 6, 2012 17:51 |
|
Also, I wouldn't tie myself to a solution because "I have drives lying around that will fit it." I'd sell that poo poo on eBay and use that to help fund what you actually need.
|
# ? Sep 6, 2012 17:55 |
|
- We currently use some old Apple Xraids - hardware RAID 5 onboard and 2Gb fibre channel connections. We have a bunch of servers with Qlogic fibre channel cards, so I already know that it is as simple as plugging in a USB thumbdrive and sharing a drive on the network.

- We also have a PowerVault NX3000. It has hardware RAID 5, boots Windows, and shares the drives installed into it.

If I haven't actively wired it and configured it, I'm afraid of it, regardless of how much I've read about it and asked about it in the past. iSCSI doesn't sound bad - it's like Fibre Channel over Ethernet, right? If that's the case, I can just add additional Ethernet cards to the servers, right?

All of our existing RAIDs have around 4TB of storage (for everyone). This was fine years ago, but now we have individual people that need 4TB-10TB of storage. So I want to add something like a 10TB RAID. With that much data, I'm guessing RAID 6 would be safer than RAID 5.

I mentioned the WD RE4 drives because we ordered a lot of spares. I'm trying to use the same drives for servers and RAIDs. We've been in situations in the past where we did not have a spare when a RAID failed, because every server and every RAID was ordered with different-size drives and no one thought to purchase a bunch of spares for the dozen different drive configurations they went with. Now we have a pile of 73GB/146GB/300GB SCSI drives for old servers, and 500GB/2TB SATA drives for new servers.

I just want a 10TB RAID 6 drive to share from Windows. Just file storage for some people, so it doesn't have to be fast. One person has been using a 4TB LaCie USB drive. There's no redundancy with that, and we don't back it up.

It must be under $10,000, closer to $5,000 (without disks). Maybe something like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16822108100
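As a sizing aside, RAID 6 spends two drives' worth of capacity on parity, so hitting 10TB usable with 2TB disks takes a couple more bays than you might expect. A quick sketch (raw arithmetic only; real arrays lose a bit more to formatting and filesystem overhead):

```python
import math

def raid6_drives_needed(usable_tb, drive_tb):
    """Smallest RAID 6 set (data drives + 2 parity) giving at least usable_tb."""
    data_drives = math.ceil(usable_tb / drive_tb)
    return data_drives + 2

def raid6_usable_tb(n_drives, drive_tb):
    """Usable capacity of an n-drive RAID 6 set (n - 2 drives hold data)."""
    return (n_drives - 2) * drive_tb

print(raid6_drives_needed(10, 2))  # 7 drives for 10TB usable from 2TB disks
print(raid6_usable_tb(8, 2))       # an 8-bay box full of 2TB disks -> 12TB usable
```

So the "at least 6 drive bays" requirement above is cutting it close: 6 bays of 2TB disks in RAID 6 only yields 8TB usable.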
|
# ? Sep 6, 2012 18:18 |
|
Jesus, don't pay 5 grand for a non-HA controller. If you can live with no FC, then a whitebox running ZFS on FreeBSD is probably your best bet.

iSCSI is SCSI over IP. Depending on the load, you can do SMB on the storage box or through another one. You could also probably mount an iSCSI LUN on your NX and then share that (dunno how that particular unit works).
|
# ? Sep 6, 2012 18:32 |
|
OK then, what is a good & cheap, rack-mountable hardware RAID 6 box I can slap a bunch of SATA drives into and connect via iSCSI to one of my existing servers?
|
# ? Sep 6, 2012 18:48 |
|
Xenomorph posted:OK then, what is a good & cheap, rack-mountable hardware RAID 6 box I can slap a bunch of SATA drives into and connect via iSCSI to one of my existing servers?
|
# ? Sep 6, 2012 18:52 |
|
Xenomorph posted:- We currently use some old Apple Xraids - hardware RAID 5 onboard and 2Gb fibre channel connections. We have a bunch of servers with Qlogic fibre channel cards, so I already know that is as simple as plugging in a USB thumbdrive and sharing a drive on the network. Is anything in your environment still under warranty?
|
# ? Sep 6, 2012 19:04 |
|
Xenomorph posted:OK then, what is a good & cheap, rack-mountable hardware RAID 6 box I can slap a bunch of SATA drives into and connect via iSCSI to one of my existing servers?

Not to put words in his mouth, but I think what evil_bunnY is getting at is: either reliability, features, and performance matter for this application or they don't. If they don't, buy a cheap-rear end SuperMicro enclosure with a bunch of drive bays for like a grand. If they do, spend the extra money for the entry-level product from a reputable vendor with a support contract. Don't spend $5k on a weird prosumer NAS like that Synology that comes with retarded poo poo like an iTunes server and ~~cloud integration~~ but doesn't have redundant controllers. It's the worst of both worlds.

Docjowles fucked around with this message at 18:31 on Sep 7, 2012
# ? Sep 6, 2012 19:28 |
|
Misogynist posted:You already posted about those goddamn Xraids a year ago and everyone who saw your post started yelling that you need to get rid of those loving things. 16 gigabit fibre channel is the current market standard. You are running poo poo that's three generations older than anything being produced today.

Some of our Dell stuff has warranties. I would like to get rid of the Xraids. They're currently in use, and we have a budget I'm trying to stay within. They will probably be in use for another 5 years.

There's no way I'm going to spend $20,000-$50,000 on storage overkill when people have been happy with their $200-$400 USB drives. I simply want something better than those USB drives that has redundant hardware and fits in our racks. I can possibly go up to $10,000 for one item.

We do not need the latest & greatest tech. Something three generations old is perfectly fine for us. Half the servers I've purchased were refurbs and half the upgrade components I've purchased were off eBay. Going with refurbs and eBay items was still an *upgrade* compared to what we had (lots of white boxes under desks).
|
# ? Sep 6, 2012 19:36 |
|
It wouldn't be my first choice (since we have some and I am actively trying to get rid of them since the management UI is garbage) but you could get a dual controller HP MSA2000 loaded with 12 1TB drives for like $5k refurbed.
|
# ? Sep 6, 2012 19:45 |
|
I guess I don't fully understand iSCSI. I've looked at a few NAS servers that advertise "Built-in iSCSI Target Service". It then mentions that it runs Linux and uses EXT4 for its file system. How does that work if a Windows system is the iSCSI initiator?

I thought I could just connect the device and then a drive would show up to Windows that I could then format as NTFS. That's how I've been working with Fibre Channel.
|
# ? Sep 6, 2012 23:11 |
|
Xenomorph posted:I guess I don't fully understand iSCSI.

That's exactly how it works. The NAS box is probably just making a big fat file and then presenting that storage to the initiator as a block device.
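A toy illustration of that "big fat file as a block device" idea: the target just maps block offsets into an ordinary file on its own filesystem, and whatever the initiator formats inside (NTFS or anything else) is opaque bytes to the host. Purely a sketch of the concept, not how any particular NAS implements its target:

```python
import os

class FileBackedLUN:
    """Toy block device backed by a sparse file, the way a Linux NAS can back an
    iSCSI LUN with a file on EXT4 while the Windows initiator formats it NTFS."""

    def __init__(self, path, size_bytes, block_size=512):
        self.block_size = block_size
        self.f = open(path, "w+b")
        self.f.truncate(size_bytes)  # sparse: no space used until blocks are written

    def write_block(self, lba, data):
        assert len(data) == self.block_size
        self.f.seek(lba * self.block_size)
        self.f.write(data)

    def read_block(self, lba):
        self.f.seek(lba * self.block_size)
        return self.f.read(self.block_size)  # unwritten blocks read back as zeros

lun = FileBackedLUN("/tmp/toy_lun.img", size_bytes=1 << 20)
lun.write_block(100, b"\xab" * 512)
print(lun.read_block(100)[:2])  # b'\xab\xab'
print(lun.read_block(0)[:2])    # b'\x00\x00' - never written, reads as zeros
```

The EXT4-vs-NTFS question dissolves at this layer: EXT4 only ever sees one big file, and NTFS only ever sees a run of numbered blocks.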
|
# ? Sep 6, 2012 23:18 |