|
szlevi posted:I never did it but IIRC my values are ~30 secs and my hosts all tolerate failovers just fine... Do other arrays require this, and specifically state this requirement?
|
# ? Aug 17, 2012 02:37 |
|
three posted:Do other arrays require this, and specifically state this requirement? NetApp's various host utilities (SnapDrive, VSC) will set these timeout values for you automatically.
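For anyone doing it by hand instead, on a Linux host the knob is just a sysfs attribute. Here's a minimal sketch of the idea (the device name and the 120-second value are illustrative placeholders, not NetApp's documented settings, and the vendor utilities do considerably more than this):
code:
# Sketch only: raise the per-device SCSI command timeout so in-flight I/O
# survives a controller failover instead of erroring out. Run as root;
# a udev rule would be needed to make this persist across reboots.
from pathlib import Path

def get_scsi_timeout(device: str) -> int:
    """Read the current command timeout (seconds) for a block device."""
    return int(Path(f"/sys/block/{device}/device/timeout").read_text())

def set_scsi_timeout(device: str, seconds: int) -> None:
    """Write a new command timeout for a block device."""
    Path(f"/sys/block/{device}/device/timeout").write_text(str(seconds))

if __name__ == "__main__":
    dev = "sdb"  # placeholder: one path of a multipathed LUN
    print(f"{dev}: timeout was {get_scsi_timeout(dev)}s")
    set_scsi_timeout(dev, 120)  # illustrative value, not a vendor recommendation
    print(f"{dev}: timeout now {get_scsi_timeout(dev)}s")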
|
# ? Aug 17, 2012 03:39 |
|
This talk of configuring iSCSI hosts to accommodate node failover reminds me of one of the reasons why I really prefer FC over iSCSI. Assuming the fabric is configured correctly, hosts will fail over between nodes as soon as they receive an RSCN. Propagation of RSCNs is pretty much instantaneous in a well-configured fabric, which makes everything extremely tolerant to failures.
|
# ? Aug 17, 2012 11:20 |
|
cheese-cube posted:This talk of configuring iSCSI hosts to accommodate node failover reminds me of one of the reasons why I really prefer FC over iSCSI. Assuming the fabric is configured correctly hosts will failover between nodes as soon as they receive an RSCN. Propagation of RSCNs is pretty much instantaneous in a well-configured fabric which makes everything extremely tolerant to failures. http://en.wikipedia.org/wiki/Internet_Storage_Name_Service#State_Change_Notification Of course, very few people out there actually use iSNS, but the functionality is there for iSCSI initiators.
|
# ? Aug 17, 2012 11:33 |
|
szlevi posted:I cannot fathom what they can teach you that you cannot learn yourself in a few days, for free...
|
# ? Aug 17, 2012 11:40 |
|
Misogynist posted:http://en.wikipedia.org/wiki/Internet_Storage_Name_Service#State_Change_Notification What is the main reason that people choose iSCSI over FC? The company that I previously worked for always deployed FC SANs so the bulk of my experience is with FC which I came to prefer over iSCSI. The majority of the talk in this thread seems to be around iSCSI devices so I'm just wondering what is the deciding factor to deploy iSCSI over FC.
|
# ? Aug 17, 2012 11:54 |
|
cheese-cube posted:What is the main reason that people choose iSCSI over FC? The company that I previously worked for always deployed FC SANs so the bulk of my experience is with FC which I came to prefer over iSCSI. The majority of the talk in this thread seems to be around iSCSI devices so I'm just wondering what is the deciding factor to deploy iSCSI over FC. cost and simplicity. Why spend the extra for an fc switch and hba when iscsi works just fine?
|
# ? Aug 17, 2012 12:29 |
|
adorai posted:cost and simplicity. Why spend the extra for an fc switch and hba when iscsi works just fine? See, that's what I thought the main reason would be: the ability to leverage existing switching equipment. However, what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC becomes cost-effective? On that subject, have 16Gb FC HBAs and switches hit the market yet, or are vendors still finalising their designs? Pile Of Garbage fucked around with this message at 12:55 on Aug 17, 2012 |
# ? Aug 17, 2012 12:53 |
|
When your IT crew's never touched FC, it makes a lot of sense not to get into it.
|
# ? Aug 17, 2012 13:23 |
|
cheese-cube posted:See that's what I thought the main reason would be: the ability to leverage existing switching equipment. However what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC become cost-effective? Sometimes not even then, because of port aggregation. I got lucky (or unlucky depending on your ~views) in that my office uses nothing but Dell PowerConnect switches, which all have the option to buy a fairly inexpensive module that you can plug 10G HBAs into. So I slapped together a nice 10G iSCSI backbone fairly quickly. Works like a champ too.
|
# ? Aug 17, 2012 13:26 |
|
cheese-cube posted:See that's what I thought the main reason would be: the ability to leverage existing switching equipment. However what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC become cost-effective?
|
# ? Aug 17, 2012 14:19 |
|
Rhymenoserous posted:fairly inexpensive module that you can plug 10G HBA's into.
|
# ? Aug 17, 2012 14:26 |
|
Dell posted:Finally, please be advised that the best practices for the use of RAID 5 and RAID 50 on Dell EqualLogic arrays have changed. The changes to the RAID policy best practice recommendations are being made to offer enhanced protection for your data.
|
# ? Aug 17, 2012 17:53 |
|
What are considered "class 2" drives?
|
# ? Aug 17, 2012 19:31 |
|
evil_bunnY posted:When your IT crew's never touched FC it makes a lot of sense to not get into it. Sorry, I'm really not sure what your point is here, as the same can be said for iSCSI. From a configuration perspective I've found FC much easier to work with. I've mainly worked with IBM SAN24B-4 FC switches and SAN06B-R MPRs, which are basically re-branded Brocade 300 and 7800 series devices respectively, and they are extremely easy to use (great GUI, very logical CLI, and Brocade provides great documentation). Once you understand the basic concepts of configuring a stable fabric you can easily scale that knowledge out. It only starts to get complicated when you start utilising more advanced features like FC-FC routing, fabric merging or FCIP. In my experience with iSCSI there are way more things that need to be considered even in simple deployments (e.g. VLAN tagging for iSCSI traffic segregation, link aggregation, MPIO drivers, jumbo frame support, etc.). Of course, as I said a few posts ago, my experience with iSCSI is tiny compared to my FC experience, so feel free to shoot me down.
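(A lot of that iSCSI checklist is at least easy to script. Below is a quick illustrative pre-flight check with placeholder interface names; it only proves the host-side MTU setting, not that the switch path actually passes jumbo frames end to end.)
code:
# Sketch of an iSCSI pre-flight check on a Linux host: confirm the
# storage-facing NICs are set for jumbo frames before blaming the array.
# Interface names and the expected MTU are placeholders for this example.
from pathlib import Path

STORAGE_NICS = ["eth2", "eth3"]  # hypothetical dedicated iSCSI interfaces
EXPECTED_MTU = 9000              # common jumbo-frame setting

def nic_mtu(name: str) -> int:
    """Read the configured MTU for a network interface from sysfs."""
    return int(Path(f"/sys/class/net/{name}/mtu").read_text())

for nic in STORAGE_NICS:
    mtu = nic_mtu(nic)
    status = "OK" if mtu >= EXPECTED_MTU else "WARN"
    print(f"{status}: {nic} mtu={mtu} (want >= {EXPECTED_MTU})")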
|
# ? Aug 17, 2012 19:41 |
|
Even if you have no experience with iSCSI, you probably have experience with IP, so already you know something about iSCSI and nothing about FC. Unless you just came out of a pod from another planet or something. There's also the fact that IP speeds are growing faster than FC is. And FC is really an all-or-nothing proposition: you get an FC infrastructure or you don't. iSCSI you can connect to your existing network. Those "way more things" you mention about iSCSI are things people running IP networks already understand. I'm really not seeing why this is so difficult to grasp. It sounds like you're viewing it from the point of view of some enormous enterprise that can easily afford to build out an entire new infrastructure, whereas most of the reasons for iSCSI come from the other end of the spectrum, small shops dipping their feet in the waters of IP storage.
|
# ? Aug 17, 2012 19:54 |
|
cheese-cube posted:Sorry I'm really not sure what your point is here as the same can be said for iSCSI. iSCSI is much easier for most general purpose IT people to grasp. Fabrics and zoning aren't too tricky but there is a learning curve there. Additionally when you get into FC you're also getting into the business of ensuring that you've got solid HBA firmware, that you understand the vendor specific MPIO suite you're using, that you understand the OS specific tools provided to manage those HBAs. And it's still much less likely that you have anyone on staff who knows enough about FC at the protocol layer to troubleshoot difficult issues, while it's quite easy to find IP expertise. FC simply doesn't make sense for most IT shops from a management or performance perspective. When properly configured it's great because it just works seamlessly, but getting it to the "properly configured" point is a non-trivial task for most shops, on top of the added hardware cost.
|
# ? Aug 17, 2012 20:04 |
|
BnT posted:What are considered "class 2" drives? Nearline SAS I believe, stuff that's supposed to be used for bulk storage of infrequently accessed data.
|
# ? Aug 17, 2012 20:05 |
|
We moved from FC to 10Gb iSCSI to support a converged network/storage fabric. When we bought UCS, support for FCoE in the Nexus 5k series was basically nonexistent -- you could present storage to ports locally on the 5k, but you could not trunk into a 6140. Updates have made it better now, but that ship has sailed. It is also considerably cheaper. Switchport cost is relatively equal, but on the HBA side it's not really close. For my DC that isn't UCS, I have 19 ESX hosts. The first 7 we bought with dual 8Gb FC HBAs for $1700 each including cables. The next 12 used 10Gb CNAs for $900, which eliminated 4 1GbE network ports per host as well. The HBA cost savings paid for one of the 10Gb switches. Last, you can present storage direct to VMs, which lets you do all sorts of tricks with snapshotting. VMware's NPIV support sucks. The MS iSCSI initiator does not. KS fucked around with this message at 20:21 on Aug 17, 2012 |
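Taking those figures as per-host prices (the post doesn't say whether $1700/$900 is per host or per adapter, so treat that as an assumption), the back-of-the-envelope math comes out to roughly $9,600, plausibly in the price range of a single 10Gb switch of that era, which squares with the claim about the savings paying for one switch:
code:
# Rough version of the adapter cost comparison above. The per-host reading of
# the quoted prices is an assumption; switch-port and cabling costs are ignored.
FC_COST_PER_HOST = 1700    # dual 8Gb FC HBAs incl. cables (as quoted)
CNA_COST_PER_HOST = 900    # 10Gb CNAs (as quoted)
HOSTS = 12                 # the later batch of ESX hosts
FREED_1GBE_PORTS_PER_HOST = 4

savings = (FC_COST_PER_HOST - CNA_COST_PER_HOST) * HOSTS
print(f"Adapter savings across {HOSTS} hosts: ${savings:,}")               # $9,600
print(f"1GbE switch ports freed up: {FREED_1GBE_PORTS_PER_HOST * HOSTS}")  # 48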
# ? Aug 17, 2012 20:18 |
|
bort posted:We don't have any 5, but we're having to plan some conversion from RAID 50 to RAID 6. Luckily, that's online and we have the headroom. My current clients just finished installing a new Compellent, and the first thing they did was force it to create RAID 5 volumes. And these were for the Oracle database servers... I'm trying to persuade them to choke down the extra cost and use RAID 10, but I dunno, they seem pretty entrenched.
|
# ? Aug 17, 2012 20:30 |
|
Compellent arrays do all writes to RAID-10 and rewrite to RAID-5 in the background. They should be using RAID10/RAID-5 for 15k disks and RAID10-DM/RAID-6 for the bigger 7.2k disks per Compellent's best practices. You can't even specify just RAID-10 without turning on advanced mode, I believe. PM me if you'd like the doc, but they're right. KS fucked around with this message at 20:51 on Aug 17, 2012 |
# ? Aug 17, 2012 20:46 |
|
Nebulis01 posted:Nearline SAS I believe, stuff that's supposed to be used for bulk storage of infrequently accessed data. And as for the RAID 5 on Compellent, it's very different since they assign RAID levels to blocks within a LUN, and those blocks may migrate to RAID 10 or various combinations of RAID5/6, depending on usage. RAID 5 is less attractive when you're configuring an actual disk/volume.
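For the capacity side of that RAID 10 vs. 5/6 trade-off, here's a rough sketch of the usable-space math (single parity group, placeholder disk counts, no hot spares or metadata overhead, and nothing Compellent-specific):
code:
# Rough usable-capacity comparison for the RAID levels under discussion.
# Assumes one parity group and ignores spares, metadata, and vendor rounding.
def usable_tb(disks: int, disk_tb: float, level: str) -> float:
    if level == "RAID10":
        return disks / 2 * disk_tb    # half the disks hold mirror copies
    if level == "RAID5":
        return (disks - 1) * disk_tb  # one disk's worth of parity
    if level == "RAID6":
        return (disks - 2) * disk_tb  # two disks' worth of parity
    raise ValueError(level)

DISKS, SIZE_TB = 12, 2.0  # hypothetical 12 x 2TB 7.2k drives
for lvl in ("RAID10", "RAID5", "RAID6"):
    print(f"{lvl:7}: {usable_tb(DISKS, SIZE_TB, lvl):.1f} TB usable")
# RAID10: 12.0 TB, RAID5: 22.0 TB, RAID6: 20.0 TB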
|
# ? Aug 17, 2012 20:56 |
|
NippleFloss posted:iSCSI is much easier for most general purpose IT people to grasp. Fabrics and zoning aren't too tricky but there is a learning curve there. Additionally when you get into FC you're also getting into the business of ensuring that you've got solid HBA firmware, that you understand the vendor specific MPIO suite you're using, that you understand the OS specific tools provided to manage those HBAs. And it's still much less likely that you have anyone on staff who knows enough about FC at the protocol layer to troubleshoot difficult issues, while it's quite easy to find IP expertise. Yes + iSCSI is a lot cheaper, even in 10GbE flavor: show me a 24-port line-rate FC16 switch for $5-6k... ...did I mention that for IB you can get a 36-port FDR switch for ~$8k?
|
# ? Aug 17, 2012 23:34 |
|
KS posted:Compellent arrays do all writes to 10 and rewrite to RAID-5 in the background. They should be using RAID10/RAID-5 for 15k disks and RAID10-DM/RAID-6 for the bigger 7.2k disks per Compellent's best practices. You can't even specify just RAID-10 without turning on advanced mode, I believe. This is odd then, as they told me last week that they had to go in and specifically force it to use RAID 5. I'll have a longer talk with them next week, as this has my curiosity piqued now.
|
# ? Aug 18, 2012 17:16 |
|
madsushi posted:NetApp's various host utilities (SnapDrive, VSC) will set these timeout values for you automatically. Yeah, it's possible that EQL's HIT sets it at install...
|
# ? Aug 20, 2012 17:05 |
|
KS posted:Compellent arrays do all writes to 10 and rewrite to RAID-5 in the background. Last December I was told it's RAID10 and RAID6...
|
# ? Aug 20, 2012 17:06 |
|
How is NFS with Windows these days? Going to be Server 2008 R2 writing to some random unix based NAS. I should mention about 20,000 directories with 30,000 files with NTFS permissions. My first thought was to put a gun in my mouth when asked to look into this, which means someone has convinced my CTO this is viable.
|
# ? Aug 21, 2012 04:16 |
|
ghostinmyshell posted:How is NFS with Windows these days? Going to be Server 2008 R2 writing to some random unix based NAS. I should mention about 20,000 directories with 30,000 files with NTFS permissions. Step 1) install VMware on server hardware Step 2) configure NFS datastore with Linux guest Step 3) share NFS storage out as CIFS (or even iSCSI) Step 4) create Windows 2008 R2 guest and map storage. This will be better than using NFS on Windows.
|
# ? Aug 21, 2012 04:56 |
|
adorai posted:Step 1) install VMware on server hardware And that's the NFS support in Windows if you already have SFU configured in Active Directory and all your SFU schema attributes populated (UID, GID, etc.).
|
# ? Aug 21, 2012 05:04 |
|
Misogynist posted:And that's the NFS support in Windows if you already have SFU configured in Active Directory and all your SFU schema attributes populated (UID, GID, etc.).
|
# ? Aug 21, 2012 05:15 |
|
I've got a question for those of you who have messed around with HP Lefthand gear. I have to upgrade the switches I have this kit plugged in to. What will happen if I just start unplugging from the old switches and then replug into the new? Am I going to end up with some sort of split brain cluster? Or will it reconverge on its own?
|
# ? Aug 21, 2012 22:03 |
|
Syano posted:I've got a question for those of you who have messed around with HP Lefthand gear. I have to upgrade the switches I have this kit plugged in to. What will happen if I just start unplugging from the old switches and then replug into the new? Am I going to end up with some sort of split brain cluster? Or will it reconverge on its own? Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long.
|
# ? Aug 22, 2012 00:36 |
|
Nomex posted:Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long. That's the easy part. What I am wondering is if I am going to need to shut down the array, or if it can survive the cluster being down for about 5 minutes.
|
# ? Aug 22, 2012 01:28 |
|
Are the old and new switches connected either directly or through another switch? If so, just move one over completely, wait until the warnings clear and then move the other one. If you're switching subnets then you'll have a bit more work, but it's still doable so long as you have routing enabled between those subnets.
|
# ? Aug 22, 2012 04:12 |
|
This will be the same subnet, but what we are doing is upgrading to 10gig. These are top-of-rack switches so I am actually doing a one-to-one swap. Just about everything in these racks has storage on this array, so I know I am going to need to shut down the servers. The more I think about it, the more I think I am going to have to power down the array as well to avoid any potential problems. I was just trying to avoid that. I am not sure why; it just struck me as scary.
|
# ? Aug 22, 2012 12:25 |
|
You don't need to power it down. Just move the network interfaces over one node at a time. So long as all your volumes have network raid 1 or better enabled you won't even notice a thing. Just make sure that your cluster has quorum before you move anything. Also, make sure your failover manager isn't stored on the LeftHand units.
|
# ? Aug 22, 2012 16:13 |
|
Number19 posted:Also, make sure your failover manager isn't stored on the LeftHand units.
Zinger! FOM has been running on the array itself since we put it in. Now's as good a time as any to switch her up!
|
# ? Aug 22, 2012 16:26 |
|
Syano posted:Zinger! FOM has been running on the array itself since we put it in. Nows a good a time as any to switch her up! Yeah, don't do this. It needs to do some quick disk access to handle failover, and since losing a node halts all disk access briefly until the failover occurs, you take the whole array offline. Just put it on an OpenFiler or Nexenta or whatever you can make with cheap old parts, so it can be on shared storage and vMotion around easily. It doesn't need to be backed up or anything. If you lose it you can just make another one, usually quicker than restoring it.
|
# ? Aug 22, 2012 17:38 |
|
I was recently told to build a wish list of changes for my office and one of them (in a long list) is to upgrade the vCenter cluster we run. Right now it runs two nodes (both active) and stores VMs on iSCSI LUNs held on a QNAP 459U. The first thing I am going to do is get rid of the lovely Trendnet 8-port GigE switch the cluster is using in favor of something else (potentially two switches in HA). The other thing I want to do is HA the storage. Problem being, QNAP doesn't support iSCSI replication to another QNAP unit without taking the LUN offline. DRBD isn't supported on the units yet, nor does there appear to be plans to put in any kind of HA features in the immediate future. Is there a cheap(ish) rackmount NAS device that can replicate iSCSI LUNs (in realtime as changes are made or daily at a scheduled time) between two identical units? Ideally, they'd have two (or more) GigE ports. The QNAP unit cost me ~$2000 for 4TB in RAID5 with a hot spare, for an example of the price point I'm looking at. I'm going to post this in the VM thread as well. Thanks.
|
# ? Aug 22, 2012 19:39 |
|
Local replication is not often used for this kind of scenario. The common method used to solve the problem is to buy enterprise-grade storage that is highly available internally -- SAS drives with two paths, dual controllers with redundant power supplies, dual switches, etc. You will not get that in your budget, but that is what you should probably aim for if you want to improve reliability. Replication involves a manual failover process and generally some data loss until you get up into the arrays that can do synchronous replication, which is what you'd want for the local replication situation you're proposing. It is not really the solution I'd recommend for a single cluster.
|
# ? Aug 23, 2012 04:15 |