|
What do folks use for backup to disk storage? Currently we have a Buffalo Terastation at each site. Other than not supporting SNMP monitoring (seriously Buffalo?) they've met our needs. Is this a bad idea or acceptable? I will need to increase our B2D capacity and will probably just get a second NAS, unless that's a bad idea for some reason.
|
# ? Jan 27, 2010 20:41 |
|
I work at a small company and have basically gravitated towards being the IT admin. It was decided we needed to replace our completely non-redundant old server with something new. High availability was discussed, and this was passed to Dell. Dell sold us 2x servers and an MD3000i. Both servers run Server 2008 Enterprise edition, configured as a failover cluster, and both connect to the MD3000i via iSCSI. I've now been given the task of moving all of our services to this new cluster. It was quickly established that Microsoft highly recommends that a failover cluster not be an Active Directory domain controller - so we've just brought in another two new servers to be domain controllers (and WSUS).

As I understand it, iSCSI works at the block level - so only one host can be connected to a given virtual disk at a time. The file server role is perfect for this: when one host is shut down, the other's iSCSI initiator picks up the virtual disk and it's seamless, though I think clients with open handles may be interrupted. The idea is that if power is yanked from one server (as an example; it's all hooked to a beefy UPS) or something goes catastrophically wrong, the other will instantly pick up the slack and nobody has to get out of bed at 3am.

This cluster also needs to be our mail server and Sophos Antivirus master server. Since long before I joined, and for the foreseeable future, our mail server is Merak IceWarp. This is not 'cluster aware'. Applications running on each host of the cluster obviously need their configuration and data on shared storage (i.e. the SAN), but this is only accessible to one host at a time. In the case of failover, though, they need to be ready and waiting. We also have some applications that act as servers for various scientific instruments and will likely never be cluster aware - is there any way to hack these into working in a failover mode? Perhaps a script that detects failover has occurred and fires up the various applications once the host has picked up the virtual disk from the SAN?

Also, I think the top brass were under the impression that shared storage meant literally that - that both hosts could use a single virtual disk simultaneously, and that both hosts would be doing work, except in failure when one would take over all duties. It seems we're going to have one very expensive hot standby. Personally I'd have got VMWare high availability and moved virtual machines around instead of this clustering lark, but I think it's a bit late for that now. Does anyone have any advice? Have Dell screwed us over and sold us something not fit for purpose? I'm pretty up on IT and knew how a lot of this worked beforehand, but this is the first time I've ever physically had my hands on such kit.
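On the "script that detects failover" idea: Windows failover clustering already ships Generic Application and Generic Service resource types that start an arbitrary binary on whichever node currently owns the disk group, so try those before rolling your own. If a custom watchdog were still needed, its shape would be roughly this sketch - the drive letter, executable path, and poll interval are all invented for illustration:

```python
import os
import subprocess
import time

# Hypothetical paths -- substitute the drive letter the clustered
# virtual disk appears under, and the real service binaries.
SHARED_DISK = "S:\\"
APPS = ["S:\\Merak\\mail.exe"]

def node_owns_disk(path=SHARED_DISK):
    """True once this node has taken ownership of the clustered disk."""
    return os.path.exists(path)

def start_apps(apps=APPS, launch=subprocess.Popen):
    """Launch each non-cluster-aware application from shared storage."""
    return [launch([app]) for app in apps]

def watchdog(poll_seconds=10, owns=node_owns_disk, start=start_apps):
    """Poll until failover hands this node the disk, then start the apps once."""
    while not owns():
        time.sleep(poll_seconds)
    return start()
```

The `owns`/`start` hooks are injectable so the logic can be exercised without a cluster; in production you would also want to guard against double-starting if the disk flaps.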
|
# ? Feb 3, 2010 12:38 |
|
Cowboy Mark posted:Personally I'd have got VMWare high availability and moved virtual machines around instead of this clustering lark, but I think it's a bit late for that now. Does anyone have any advice? Have Dell screwed us over and sold us something not fit for purpose?
|
# ? Feb 3, 2010 13:53 |
|
Cowboy Mark posted:I work at a small company and have basically gravitated towards being the IT admin. It was decided we needed to replace our completely non-redundant old server with something new. High availability was discussed, and this was passed to Dell. Nothing in iSCSI precludes two systems from working on the storage array. Windows clustering, however, means that one node will 'own' a drive, so partition your MD logically and create groups of services and their associated disks in the cluster manager. Then distribute ownership of those service groups amongst cluster nodes as needed. Most basic Windows services can be clustered without being specially cluster aware; you'll need to look at it on an app-by-app basis though. Worst case, as adorai says, stick them in a Hyper-V session and cluster that.
|
# ? Feb 3, 2010 16:00 |
|
Excellent! Thank you guys. I forgot about Hyper-V. Dell shipped the servers with a 32bit OS, so I'm digging some discs out now.
|
# ? Feb 3, 2010 16:15 |
|
How much experience do people here have with Sun's x4500/x4550 (thumper/thor)? I've got one at work and I'm going to be having a lot of stupid little questions, and I'm wondering if I can get them answered here or if I need to go register on the OpenSolaris forums or something.
|
# ? Feb 5, 2010 06:00 |
|
I'm looking at the Sun gear too. Does anyone know if it's possible to connect FC LUNs to the OpenStorage appliances? Or if we could get one of their storage servers with the slick GUI? Support is needed for this, as people get fired if there is data loss. I'm looking to make a poor man's vFiler. I like the OpenStorage GUI but don't have any kidnapped orphans left to sell to NetApp.
|
# ? Feb 5, 2010 09:02 |
|
FISHMANPET posted:How much experience do people here have with Sun's x4500/x4550 (thumper/thor)? I've got one at work and I'm going to be having a lot of stupid little questions, and I'm wondering if I can get them answered here or if I need to go register on the OpenSolaris forums or something.
|
# ? Feb 5, 2010 19:49 |
|
Sweet. Now this is probably a stupid question, but I only ask it because the purchase went by at least a few people who should know better... Is it possible to use an iSCSI card to share a target? We don't have a lot of experience with iSCSI here, only having a StorageTek array that acts as an iSCSI target and an iSCSI card in the server acting as the initiator. Now we've got our thumper, and I know ZFS can export volumes as iSCSI (I actually know a fair amount about ZFS, so I'm not screwed in that regard), but I'm assuming it has to do that over the system network interfaces. The more I think about this, the stupider it sounds that we got an iSCSI card to share file systems over iSCSI. That's the same as doing something like buying a network card for your hard drive, right? It doesn't make any sense?

Also, since you guys manage a bunch of thumpers, what should I tell my boss as to why I shouldn't make 2 20+2 RAIDZ2 pools? I had a hard enough time convincing him to let me use *both* system drives for my root partitions (1 TB? But it has Compact Flash!) and now I'm trying to use one of the recommended layouts from the 'ZFS Best Practices Guide'.
|
# ? Feb 5, 2010 20:01 |
|
You don't need any card for iSCSI - just use the four built-in GigE ports. If you're feeling really spendy, get a PCI-X 10GbE card. As far as the disk layout goes, it can be pretty flexible, but try to build vdevs that have one disk from each of the controllers in them. I went with 7 RAIDZ2 vdevs and four hot spares. At first I thought that many hot spares was a waste, but then we started swapping out disks for larger ones, and we do that by failing half of a vdev to the spares, replacing them, then doing it once more (which means a full vdev upgrade only means cracking the chassis twice).
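That controller-spanning layout can be sketched in code. This assumes the x4500's usual geometry of 48 disks on 6 controllers (c0..c5, targets t0..t7); the device names are illustrative, not the real Solaris cXtYdZ assignments:

```python
# 48 disks across 6 controllers, 8 targets each (names are illustrative).
controllers = [f"c{c}" for c in range(6)]
disks = {c: [f"{c}t{t}d0" for t in range(8)] for c in controllers}

# Set aside a mirrored pair for the root pool.
root_mirror = [disks["c0"].pop(0), disks["c1"].pop(0)]

# Seven 6-disk RAIDZ2 vdevs, each taking one disk from every controller,
# so a dead controller costs each vdev only a single disk.
vdevs = [[disks[c].pop(0) for c in controllers] for _ in range(7)]

# The four disks left over become hot spares.
spares = [d for c in controllers for d in disks[c]]
```

Seven 6-disk vdevs plus the root mirror consume 44 disks, leaving exactly four spares, which matches the layout described above.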
|
# ? Feb 5, 2010 20:06 |
|
I have a CentOS 5.3 host with a "phantom" SCSI device because the LUN it used to point to on the SAN got unassigned from this host. Every time I run multipath it tries to create an mpath3 device-mapper name for it and complains that it's failing. How do you get rid of /dev/sde if it's not really there? edit: as usual I figure something out the moment I post about it. I ran echo "1" > /sys/block/sde/device/delete and it worked. Anyone care to tell me I just made a huge mistake? StabbinHobo fucked around with this message at 20:45 on Feb 5, 2010 |
# ? Feb 5, 2010 20:41 |
|
I have a new EMC AX4 iSCSI array in place and it seems quite a bit slower than I think it should be. Is there a reliable way to benchmark its performance and any statistics for similar devices that I can compare it to? I've tried googling around but I can't find any "here is what speed you should expect with this iSCSI array" information.
|
# ? Feb 5, 2010 21:07 |
|
Erwin posted:I have a new EMC AX4 iSCSI array in place and it seems quite a bit slower than I think it should be. Is there a reliable way to benchmark its performance and any statistics for similar devices that I can compare it to? I've tried googling around but I can't find any "here is what speed you should expect with this iSCSI array" information. You really need to provide some more info. What's slow exactly (i.e. what's the throughput you are getting)? Which disks do you have in there? How is it connected (how many ports, port speeds, what switch, switch config, jumbo frames, etc.)? What is it connected to (what is the server hardware, OS, application)?
|
# ? Feb 5, 2010 23:40 |
|
oblomov posted:You really need to provide some more info. What's slow exactly (i.e. whats the throughput you are getting)? Which disks do you have in there? How is it connected (how many ports, port speeds, what switch, switch config, jumbo frames, etc...)? What is it connected to (what is the server hardware, OS, application)? Sorry, I didn't provide information because I wanted to run a benchmarking test to see if the speeds are really slower than they should be before I ask for more help. I was getting 20-30MB/s read and write when copying files in either direction. I found CrystalDiskMark and here's what I get: sequential throughput about 30MB/s write and read; random 512k blocks 0.8-1MB/s read, 28-30MB/s write; random 4k blocks 7MB/s read, 2MB/s write.

It's an iSCSI AX4. The array I'm dealing with is a 7-disk RAID 5 of 1TB SATA drives (12 disks total: 4 SAS drives for the FLARE software, 7 disks in the RAID group, one hot spare). The AX4 has dual controllers, so it has a total of 4 gigabit iSCSI ports. iSCSI is on its own switch, a ProCurve 1810-24G gigabit managed switch. Jumbo frames are currently off. I tested from two servers, one 2008 R2, one 2003 R2. Both use the Microsoft initiator over one regular gigabit ethernet adapter (not an HBA). EMC PowerPath is installed on both servers.

I realize there are a few things keeping me from optimal speed: SATA drives, no jumbo frames, and no HBAs. I still feel like the speeds are lower than they should be, even considering those factors. Maybe my expectations are too high?
|
# ? Feb 6, 2010 03:14 |
|
lilbean posted:I've used one for a year now on Solaris 10, beat the poo poo out of it and I love it. H10hawk follows the thread too and manages like a dozen of them, so this is as good a place to ask questions as any. I actually quit that job a few months ago. And it was 30+ thumpers, I lost count. :X I also only ever used Solaris 10, to great success, and my replacement was hell-bent on OpenSolaris. Last I heard it kept ending in tears. Stay clear of that hippie bullshit and you should be fine.

FISHMANPET posted:Is it possible to use an iSCSI card to share a target? I've never used iSCSI, but from what I've read about it an "iSCSI card" is nothing more than a glorified networking card with an iSCSI stack inside of it. ZFS handles this internally and you wasted money. I would keep this fact around if they try and lord over you other things they don't understand, as this one is them spending money they shouldn't have. What follows is justification for "wasting" that space.

quote:Also, since you guys manage a bunch of thumpers, what should I tell my boss as to why I shouldn't make 2 20+2 RAIDZ2 pools? I had a hard enough time convincing him to let me use *both* system drives for my root partitions (1 TB? But it has Compact Flash!) and now I'm trying to use one of these from the 'ZFS Best Practices Guide'

In all fairness, it does have a compact flash port. Use it. Hell, use it as a concession to them. Did they buy the support contract with your thumper, even Bronze? Call them suckers up and ask for a best practices configuration on your very first thumper (Of Many!), and if they balk at it call your sales rep and ask them to get it for you. Get it in an email from Sun. Tell them your honest reliability concerns. Now, think long and hard about how many parity disks you need and how many hot spares you want. Your target with snapshots is 50% of raw space as usable. I tended to get 11T per thumper.

In all honesty it isn't going to matter, because management is going to be Right(tm) and you are going to be Wrong(tm). I would set up 6 raid groups and "waste" those last 4 drives or whatever on hot spares, or just use RAIDZ instead of RAIDZ2 and reclaim a few terabytes. You have 4 hot spares, but you will need to monitor very diligently for failure, as it takes forever to rebuild a raid group. Caveats: update to the latest version of Solaris 10 and upgrade your zpool. When resilvering a raid group, do not take snapshots or perform similar operations - unless they've fixed it, doing anything like that restarts the resilvering process. Edit: Oh, and stop swearing at Solaris. It can hear you, and it will punish you. Instead, embrace it, and hold a smug sense of superiority over others for knowing how things were done Back In The Day. Back when they did things the Right Way(tm).
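The space trade-off being argued about can be roughed out. This assumes 46 disks available for data pools (48 minus a mirrored root pair) at a nominal 1 TB each; the layouts and the 50% snapshot rule of thumb come from the post above, the arithmetic is just made explicit:

```python
def usable_tb(vdev_width, parity, n_vdevs, disk_tb=1.0):
    """Usable capacity of n_vdevs RAIDZ vdevs, ignoring ZFS metadata overhead."""
    return n_vdevs * (vdev_width - parity) * disk_tb

# Management's plan: two giant 20+2 RAIDZ2 vdevs (44 disks, 2 spares left over)
big_vdevs = usable_tb(22, 2, 2)       # 40.0 TB before snapshots

# Seven 4+2 RAIDZ2 vdevs with 4 hot spares (the safer layout)
small_vdevs = usable_tb(6, 2, 7)      # 28.0 TB before snapshots

# "Your target with snapshots is 50% of raw space as usable"
snapshot_budget = 46 * 1.0 * 0.5      # 23.0 TB
```

So the giant vdevs only "win" about 12 TB on paper, and once the 50% snapshot budget is applied the practical difference shrinks further - while the rebuild exposure of a 22-wide vdev grows enormously.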
|
# ? Feb 6, 2010 03:44 |
|
H110Hawk posted:I've never used iSCSI, but from what I've read about it an "iSCSI card" is nothing more than a glorified networking card with an iSCSI stack inside of it. ZFS handles this internally and you wasted money. I would keep this fact around if they try and lord over you other things they don't understand, as this one is them spending money they shouldn't have. Haha, suckers.

quote:In all fairness, it does have a compact flash port. Use it. Hell, use it as a concession to them. Did they buy the support contract with your thumper, even Bronze? Call them suckers up and ask for a best practices configuration on your very first thumper (Of Many!), and if they balk at it call your sales rep and ask them to get it for you. Get it in an email from Sun. Tell them your honest reliability concerns.

3rd of a few. I work for a university in a perpetually poor department. The only time we get thumpers is when big phat grants come in, or when we invent new departments (my current situation). The first thumper has a 1-disk UFS root, 6 disks in a RAIDZ for one of my boss's tests, and a 23-disk RAIDZ2. No hot spares for either of those pools (not sure where the other 18 disks are). The second thumper has a single UFS root disk, two 23-disk RAIDZs, and a single hot spare shared between them.

My 'boss' doesn't really have much power here, and can in fact be easily overruled by other people who know better. Except the Solaris admin is gone this week, so it's up to me to be a bastion of sanity. Ironically, the Solaris admin doesn't know much about ZFS, so he defers to my knowledge. So it will go something like this: 'boss' asks Solaris guy, Solaris guy asks me, I tell Solaris guy, Solaris guy tells 'boss', 'boss' tells me to do what I told him to do all along.
|
# ? Feb 6, 2010 05:15 |
|
On the topic of ZFS, we had a flaky drive the other day in our J4400 storage array, and we decided to offline the drive and assign a hot spare to it. It took 50 hours to resilver a 1TB volume. Granted, this is a giant raidz2 pool with 24 disks and two hot spares, so I kind of expected a long rebuild time. I'm thinking about going a step further and doing raidz3, because to me 50 hours is a pretty big window for Murphy's law to kick in and gently caress poo poo up.
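For scale: 50 hours for a 1 TB resilver is a very low effective rate, and the exposure window it creates is easy to estimate. The 5% annual failure rate below is an assumed figure for illustration, not a measured one:

```python
def resilver_rate_mb_s(disk_gb, hours):
    """Average resilver throughput in decimal MB/s."""
    return disk_gb * 1000 / (hours * 3600)

rate = resilver_rate_mb_s(1000, 50)        # ~5.6 MB/s average

# Chance that at least one of the 23 surviving disks also dies inside
# the 50-hour rebuild window, assuming an (invented) 5% AFR per disk.
afr = 0.05
window_years = 50 / (24 * 365)
p_one_disk = afr * window_years
p_second_failure = 1 - (1 - p_one_disk) ** 23   # roughly 0.65% per rebuild
```

Under raidz2 a second failure during rebuild still leaves no redundancy, so a fraction of a percent per rebuild event, multiplied over years of disk swaps, is exactly the kind of odds raidz3 is meant to absorb.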
|
# ? Feb 6, 2010 17:01 |
|
Erwin posted:Sorry, I didn't provide information because I wanted to run a benchmarking test to see if the speeds are really slower than they should be before I ask for more help. I was getting 20-30MB/s read and write when copying files in either direction. I found CrystalDiskMark and here's what I get: Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN.
|
# ? Feb 8, 2010 15:43 |
|
Erwin posted:Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN. Yea, those are pretty terrible. In my shittiest SAN, an esxi 4 VM running against a dell md3000i I get, @ 5/100mb: Seq: 108.3 read / 69 write; 512k: 101.1 read / 69 write; 4k: 8.8 read / 4.9 write. No fancy HBAs, no jumbo frames. It is using VMware round-robin across two NICs, however. Not knowing anything about AX4s or EMC in general, I would guess your cache setup is messed up, or maybe something like a LUN owned by ctrl-0 is being accessed via ctrl-1.
|
# ? Feb 8, 2010 16:37 |
|
Nukelear v.2 posted:Yea, those are pretty terrible. In my shittiest SAN, an esxi 4 VM running against a dell md3000i I get, That's good to know. I've opened a ticket with EMC.
|
# ? Feb 8, 2010 16:52 |
|
Erwin posted:Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN. Is there any particular reason jumbo frames are off? We pulled a cx3-10 array off a Cisco and put it on a Dell and the performance was absolutely abysmal until we turned jumbo frames on. I didnt realize how much of a difference the two switches would make until I saw it with my own two eyes. Not sure if the procurve is your culprit but its worth a shot if you can turn jumbo frames on.
|
# ? Feb 8, 2010 17:22 |
|
Syano posted:Is there any particular reason jumbo frames are off? We pulled a cx3-10 array off a Cisco and put it on a Dell and the performance was absolutely abysmal until we turned jumbo frames on. I didnt realize how much of a difference the two switches would make until I saw it with my own two eyes. Not sure if the procurve is your culprit but its worth a shot if you can turn jumbo frames on. The contractor who set up the SAN didn't enable them, and I haven't been able to schedule downtime to enable them (I'm under the impression that the AX4 will reset connections when changing MTU size). It's certainly something that should be done, but I don't know if it's the entire cause of the poor performance. I'll see what EMC says.
|
# ? Feb 8, 2010 17:47 |
|
Erwin posted:The contractor who set up the SAN didn't enable them, and I haven't been able to schedule downtime to enable them (I'm under the impression that the AX4 will reset connections when changing MTU size). It's certainly something that should be done, but I don't know if it's the entire cause of the poor performance. I'll see what EMC says. If this is a platform that you need to schedule downtime on, and if I read your original post correctly, I would suggest adding a second switch and (at least) a second NIC to your hosts and doing MPIO. Once you get your main performance issue resolved, this will give you even more performance and more importantly availability. In the example numbers above, that was off a pair of Dell PC6224s, which are cheap as dirt.
|
# ? Feb 8, 2010 19:46 |
|
I'm helping some engineers set up a scalability lab to test some of our software products, and we're looking at a SAN to use for an Oracle 11g DB. At the max we'll have 5 DL380s connected to it via FC or 10GigE. Right now we're considering 5.4TB raw on either an EMC CX4 FC or a LeftHand P4xxx 10Gig. I'm also willing to look at an EqualLogic solution, or a NetApp 2050, but the NetApp will probably be cost prohibitive. Anyone have any pros or cons for these units? Budget is up to 50K, maybe 60; I get aggressive pricing from my VARs. The CX4 is borderline our max price range and an AX4 might be a better fit, but it doesn't offer 10Gig. Drives need to be 15K SAS or FC.
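A back-of-the-envelope check on that 5.4 TB raw spec with 450 GB 15K drives - the 180 IOPS/drive figure and the 70/30 read/write mix below are assumptions for illustration, not vendor numbers:

```python
import math

def drives_needed(raw_gb, drive_gb=450):
    """Spindles required to hit a raw capacity target."""
    return math.ceil(raw_gb / drive_gb)

def host_iops(n_drives, per_drive=180, write_penalty=2, write_frac=0.3):
    """Rough host-visible random IOPS for RAID 10 (write penalty of 2)."""
    return n_drives * per_drive / (1 - write_frac + write_frac * write_penalty)

n = drives_needed(5400)   # 12 drives for 5.4 TB raw
iops = host_iops(n)       # ~1660 host IOPS at a 70/30 mix
```

Worth running your expected Oracle workload through numbers like these before picking an array family: if the database needs more random IOPS than a dozen 15K spindles can deliver, the capacity spec alone will under-buy regardless of vendor.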
|
# ? Feb 10, 2010 21:24 |
|
Quick question about the MD3000i: the status LED keeps flashing between blue and intermittent amber. This seems to be because the RAID module owner is constantly alternating, and it's warning that the preferred path is not being used. Am I correct in thinking that this is the node doing round-robin MPIO? Is this how it's expected to be used?
|
# ? Feb 16, 2010 09:51 |
|
I just got my new storage array set up, and I have a server available for benchmark testing. It's a dual dual-core with 32GB of RAM and a dual-port HBA; each port goes through a separate FC switch, one hop to the array, all links 4Gbps. Very basic setup. I've created one 1.1TB RAID-10 LUN and one 1.7TB RAID-50 LUN, both on their own but identical underlying spindles. Running CentOS 5.4, with the EPEL repo set up and sysbench, iozone, and bonnie++. I'm pretty familiar with sysbench and have a batch of commands to compare to a run on different equipment earlier in the thread. But not so much with bonnie and iozone. I'd be particularly interested in anyone with an md3000 to compare with.
|
# ? Feb 17, 2010 03:21 |
|
Never mind. :/
EoRaptor fucked around with this message at 16:51 on Feb 17, 2010 |
# ? Feb 17, 2010 16:47 |
|
Why are the numbers for sdb so different from the underlying dm-2, and where the hell does dm-5 come from?
|
# ? Feb 17, 2010 19:06 |
|
StabbinHobo posted:I'm pretty familiar with sysbench and have a batch of commands to compare to a run on different equipment earlier in the thread. But not so much with bonnie and iozone. I'd be particularly interested in anyone with an md3000 to compare with. I just so happen to have an MD3000 sitting idle. It's connected to Windows hosts though, so you'll have to live with iozone. But I can run some numbers against an 8-disk RAID 10 using 15K RPM SAS drives. I'll get something generated tomorrow when I get back to work.
|
# ? Feb 18, 2010 00:08 |
|
So, calling around trying to find a replacement for our old Lefthand boxes. I need something in the 2-4 TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (files, exchange, sql, sharepoint etc). I spoke with someone from EMC last week and just got quoted for a dual-blade NX4 with 15 x 450GB SAS drives and CIFS/NFS/iSCSI for $40+K. What's the difference between the NX4 and the AX4? I was surprised when they started going on about the NX4 after reading all the AX4 talk in this thread. Is he just trying to sell me what seems to be the more expensive unit, or is the AX4 going away? We really don't have any out-of-the-ordinary requirements. I've been playing phone tag with NetApp for a couple of days and look forward to seeing what they have, but was a little thrown by the above EMC quote.
|
# ? Mar 2, 2010 23:04 |
|
Insane Clown Pussy posted:What's the difference between the NX4 and the AX4? The NX4 is a Celerra family product, which is EMC's NAS product line and has CIFS/NFS interfaces out of the box. If you need, or WILL need, a NAS (or NAS capabilities), you'd want to go the NX4 route. The AX4 is a Clariion family product, which is NOT a NAS product line by design. If you don't need a NAS, an AX4 will work fine (or whatever model they're at). You can add NAS capabilities later but, if I remember my EMC rep correctly, you have to put a device in front of a Clariion to achieve NAS functionality. I'm sure there's a host of other nit-picky differences (e.g. redundant storage processors, FC and/or iSCSI options), and maybe you can tweak one's configuration to look like the other, but by default that's the base difference.
|
# ? Mar 2, 2010 23:47 |
|
Thanks, that makes sense. I think the sales guy took my offhand comment about NFS and ran with it. $40+K is way more than I want to spend on this.
|
# ? Mar 3, 2010 00:04 |
|
Insane Clown Pussy posted:$40+K is way more than I want to spend on this. EMC is known to be rather expensive. If you plan to leverage some of their array-level features (LUN cloning/snapshotting, array replication for DR, caching), they implement them quite well and efficiently from what I've seen. Other vendors usually have these abilities also, but EMC tends to stand out when you start talking high-end or high-utilization arrays. If all you need is basic SAN storage (even with a NAS front-end), you can probably get what you need elsewhere for a lot less money.
|
# ? Mar 3, 2010 00:18 |
|
If you don't need HA, you might want to take a look at the lower end 7000 series devices from sun. They are the only vendor that won't nickel and dime you on every little feature.
|
# ? Mar 3, 2010 00:20 |
|
Does anyone have any experience, horror stories, success stories, etc. regarding NexentaStor? I've been looking into it as a cheaper alternative to a Sun 7000 series. I've been running 2.2 through its paces as well as checking out the 3.0 alpha. I've been using a Dell R710, which is what we'd use if we built this out for production. The primary application is NFS for vSphere. I've been pretty impressed with it so far. The PERC 6i/6e isn't ideal for it - they force hardware RAID, so I've set up some ghetto one-disk RAID 0s. I know I'd need some SAS HBAs if I wanted to do it properly. I'll probably use a pair of e1000 4-port PCIe cards for LACP, or invest in some 10G gear. I've had the latest nightly build of the 3.0 alpha up and running on the same hardware, and ZFS dedupe is working and integrated into the management GUI. Anyways, I'm looking for any experiences anyone may have, particularly regarding their support, CDP, off-the-shelf SSDs for ZIL/L2ARC, poo poo exploding and losing all your data, etc.
|
# ? Mar 3, 2010 00:23 |
|
Insane Clown Pussy posted:So, calling around trying to find a replacement for our old Lefthand boxes. I need something in the 2-4 TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (files, exchange, sql, sharepoint etc) Just get another LeftHand node; that way you can use your existing LeftHand stuff for secondary storage / backups / etc. It's a good platform, and if your infrastructure is right it will perform more than adequately for the kind of environment you are talking about.
|
# ? Mar 3, 2010 07:27 |
|
Reposting a question from January, if that's ok - we have some other options now. We're planning a secondary site for a disaster recovery solution. We have around 7TB of VMware data now, and another 7TB of databases on physical blades. Does anyone have an opinion of this beast (literally) as a complete storage solution for this scenario? http://www.nexsan.com/satabeast.php We're thinking of getting at least 2 ESX hosts at the 2nd site and transferring backups from the primary site daily to the 2nd site's storage as the source of recovery, then going from there (for our non-VM servers, either get replacement physical servers or recover backups into new VMs). We're having a meeting with HP next week where they'll probably recommend a LeftHand solution; I just want to get some alternatives.
|
# ? Mar 3, 2010 14:14 |
|
adorai posted:If you don't need HA, you might want to take a look at the lower end 7000 series devices from sun. They are the only vendor that won't nickel and dime you on every little feature.
|
# ? Mar 3, 2010 16:03 |
|
Misogynist posted:Interestingly, the Fishworks stuff also has better analytics than most of the solutions I've seen.
|
# ? Mar 3, 2010 16:28 |
|
zapateria posted:secondary site for a disaster recovery My concern is that you would move from a higher-performance, and maybe Fibre Channel connected (?), EVA4400 array to a SATA-based bulk storage product (SATABeast or LeftHand). If your DR site is expected to work like your primary site, then the SATABeast/LeftHand might not be able to handle the IOPS. Shoving 7TB of databases onto the same array as 7TB of VMware data, when they are currently separated, has good potential to drastically change your array IOPS requirements.

Someone previously suggested direct attach, which may still be your best option for a cold site, as it could be less costly than many SAN solutions. You might even be able to upgrade to higher-speed (SAS) direct-attached disk to boost your potential IOPS and still come in under the total cost of implementing a SATABeast/LeftHand array (don't forget the supporting infrastructure!).

I think the same person also suggested looking into an array which would support array-to-array replication with your existing EVA4400. This is also a very good idea for down the road, especially if management wants to speed up recovery times. You may not have to buy another EVA4400, but perhaps a smaller-scale array that is compatible with your EVA4400 for array-to-array replication - probably over iSCSI. You may get stuck having to implement an FC infrastructure at your DR site for host connectivity, however, so you may want to save this idea for when you re-evaluate your storage environment altogether.
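The SATA-vs-SAS IOPS concern can be put in rough numbers. The per-drive figures below are common rules of thumb, and the 4,000 IOPS workload is invented - substitute whatever the primary site actually peaks at:

```python
import math

# Rough rule-of-thumb random IOPS per spindle.
PER_DRIVE_IOPS = {"7.2k SATA": 80, "15k SAS": 180}

def spindles_for(required_iops, drive_type):
    """Spindles needed to serve a random-IOPS workload, ignoring RAID penalty."""
    return math.ceil(required_iops / PER_DRIVE_IOPS[drive_type])

required = 4000   # hypothetical primary-site peak workload
sata = spindles_for(required, "7.2k SATA")   # 50 spindles
sas = spindles_for(required, "15k SAS")      # 23 spindles
```

Roughly twice the spindle count for the same random workload is why a SATA box that looks cheap per terabyte can still come up short per IOPS once databases and VMs land on it together.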
|
# ? Mar 3, 2010 17:34 |