|
Speaking of lefthand, for those with a p4300/4500, how do you set up the NICs on your equipment? Do you bond the nics on each shelf/unit?
|
# ? Jul 14, 2011 14:16 |
|
This seems like the logical place to leave this question: Someone gave me an HP StorageWorks 1000 and I'm trying to see if it's even feasible to use this as a massive home storage unit. I'm not trying to use SAS drives, though I do have a few for it. So what I'm getting at here is this: do you think a SAS to SATA converter would work to get the unit to recognize 10 hard drives?
|
# ? Jul 15, 2011 05:42 |
|
Posts Only Secrets posted:This seems like the logical place to leave this question: The link you gave is for a fibre channel device? You would need an FC HBA card and an FC cable. It has the potential to be a fast but LOUD device. Massive it will never be, however, unless it's physical mass you are referring to.
|
# ? Jul 15, 2011 10:30 |
|
Posts Only Secrets posted:This seems like the logical place to leave this question: We have one of those in our test server room leftover from a long time ago. It is holy poo poo loud.
|
# ? Jul 15, 2011 13:36 |
|
InferiorWang posted:Speaking of lefthand, for those with a p4300/4500, how do you set up the NICs on your equipment? Do you bond the nics on each shelf/unit? Yes, we bonded the NICs, and we went further and had each NIC in the bond plug into a separate switch stack member.
|
# ? Jul 15, 2011 14:06 |
|
So you did the adaptive load balancing then. I'm debating whether to do that or just go with LACP bonding to one switch. Switch one is a cisco 4507 while switch two is a 2960G. I wonder if there is any performance difference between the two options.
|
# ? Jul 15, 2011 14:52 |
|
InferiorWang posted:So you did the adaptive load balancing then. I'm debating whether to do that or just go with LACP bonding to one switch. Switch one is a cisco 4507 while switch two is a 2960G. I wonder if there is any performance difference between the two options.
|
# ? Jul 15, 2011 14:58 |
|
Ok, BACKUPS. How can we do this better? We back up about a dozen servers and 50 desktop computers. *One* of our servers has a full, 4 TB storage unit (RAID 5 w/ a hotspare). It gets a full backup once a month, and then incrementals every day. There are millions of individual files, and our backup program is now choking on this. Since no one knew what most of the files were for, we were told by the person that does most of the work on the server to go ahead and delete the stuff that was old. This was like 90% of the data. Of course, while in the middle of deleting the data, someone else was like "no wait, we need this still. All of it." I said we can't hold all of it. It's not organized, it filled up all storage, and it uses up most of our backup resources. Seriously, four terabytes of "mystery files". Their solution? They want another 4 TB RAID unit, and then we should back that up as well. That is overkill in multiple ways. The user probably doesn't have the money for that (I'm estimating $3,000 for the storage), and we would need another backup drive (estimated $5,000). How do people with dozens/hundreds of terabytes handle backups? Multiple backup drives? HUGE backup drives? What about bandwidth? We back up over the network. Do you have backup drives attached locally to servers? Does each server get its own backup unit?
|
# ? Jul 15, 2011 17:32 |
|
Xenomorph posted:How do people with dozens/hundreds of terabytes handle backups? Multiple backup drives? HUGE backup drives? What about bandwidth? We back up over the network. Do you have backup drives attached locally to servers? Does each server get its own backup unit? Quite a few people just rely on snapshots (many manufacturers allow snaps to be on separate disk shelves than the filer, so if you lose a shelf, you still have your snaps and can restore from that). Others rely on tape (though LTO4+ would be a necessity at that level). Others rely on disk-based VTLs like a Quantum DXi.
|
# ? Jul 15, 2011 17:54 |
|
We do scheduled Snapshots + SAN Replication.
|
# ? Jul 15, 2011 18:39 |
|
three posted:We do scheduled Snapshots + SAN Replication. We do this + NDMP dumps to tape, although the tape dump is primarily for contractual reasons.
|
# ? Jul 15, 2011 18:44 |
|
optikalus posted:Quite a few people just rely on snapshots (many manufacturers allow snaps to be on separate disk shelves than the filer, so if you lose a shelf, you still have your snaps and can restore from that). NetApps allow you to use either SnapMirror (complete replication) or SnapVault (snapshot archiving) to put all of their data on a secondary filer, usually filled up with all SATA disks.
|
# ? Jul 15, 2011 19:03 |
|
I've always been very dubious of the 'treat snapshots as a backup' line. The safest method is backing up using a different medium (tape, backup disk, backup appliance). Replicating it to another array isn't going to help you if a disgruntled employee wants to cause some damage or a hacker gains access to your arrays, a la Distribute IT http://www.theregister.co.uk/2011/06/21/hacks_wipe_aus_web_and_data/ Get tape in there somewhere and get it going to tape regularly!! Vanilla fucked around with this message at 00:44 on Jul 16, 2011 |
# ? Jul 16, 2011 00:36 |
|
Xenomorph posted:Ok, BACKUPS. So when it comes to file backups I see a number of strategies. 1.) Archive. Archive anything that has not been modified or accessed in 30 days. Stick the archive on the other array and replicate it. Don't back it up (anything archived hasn't been modified so it will already exist in the backups). This means you only back up the active data and not the stale data. This ends up being a fraction of the backup, and it also means people can still access the older files because it's an online archive. Check out Enterprise Vault from Symantec. 2.) File-deduplication based backup technologies such as Avamar, which only send changed data to the backup. These are not only for the big servers; they can also be deployed on laptops and desktops for local backups. This may be out of your budget, but there is a software-only edition for smaller environments.
|
# ? Jul 16, 2011 01:04 |
|
conntrack posted:The link you gave is for a fibre channel device? You would need a FC HBA card and a FC cable. It has the potential to be a fast but LOUD device. Massive it will however never be, unless it's physical mass you are refering to. I have 2 fibre cards in the expansion slot already, along with a fibre switch. That's why I'm wondering about using the adapter; I have practically everything needed to get this running. Edit: I'm running out for the night, but I'll post pics of the hardware when I get in. Posts Only Secrets fucked around with this message at 01:32 on Jul 16, 2011 |
# ? Jul 16, 2011 01:30 |
|
Vanilla posted:I've always been very dubious of the 'treat snapshots as a backup' line. The safest method is backing up using a different medium (tape, backup disk, backup appliance). I'm sorry, but snapshots are not backups. They're useful tools, but if the poo poo hits the fan you can't rely on them. To trust a backup, you need 3 copies of the data, on 2 different types of media, one of them offsite.
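The 3-2-1 rule is mechanical enough to sanity-check in a few lines. A hypothetical sketch (the function name and the `(media_type, offsite)` tuples are illustrative, not from any backup product):

```python
def satisfies_3_2_1(copies):
    """Check a backup plan against the 3-2-1 rule: at least 3 copies,
    on at least 2 different media types, with at least 1 copy offsite.

    `copies` is a list of (media_type, offsite) tuples, one per copy
    of the data. Illustrative only; a real audit would also verify
    the copies are restorable.
    """
    media_types = {media for media, _ in copies}
    has_offsite = any(offsite for _, offsite in copies)
    return len(copies) >= 3 and len(media_types) >= 2 and has_offsite
```

So SAN + replicated SAN + offsite tape passes, while SAN + snapshots on the same array does not, which is the point being argued above.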
|
# ? Jul 17, 2011 11:40 |
|
Alright, here are the pics of the hardware I have: StorageWorks 1000, SAN Switch 2/8-EL, the dual fibre ports on the StorageWorks 1000, and a ProLiant DL380. So using the hardware above, I'm looking to make a storage server. I wanted to know, if I used a SAS to SATA adapter, whether the SW1000 would still recognize the drives.
|
# ? Jul 17, 2011 17:45 |
|
Posts Only Secrets posted:I have 2 fibre cards in the expansion slot already, along with a fibre switch. The adapter wouldn't work. The drives that MSA uses have hot plug SCSI connections right on the drive system board, so you wouldn't be able to slide the drive cages in with an adapter installed. The only way you could really use it and be massive would be to get some 300 or 450 GB hot plug SCSI drives. Even then you won't be breaking 4TB, and 1 modern SSD on a SATA-3 controller will eclipse the maximum performance of it.
|
# ? Jul 18, 2011 02:50 |
|
Nomex posted:The adapter wouldn't work. The drives that MSA uses have hot plug SCSI connections right on the drive system board, so you wouldn't be able to slide the drive cages in with an adapter installed. The only way you could really use it and be massive would be to get some 300 or 450 GB hot plug SCSI drives. Even then you won't be breaking 4TB, and 1 modern SSD on a SATA-3 controller will eclipse the maximum performance of it. Would the only issue be that the cage won't fit with the adapter? We thought of that and were trying to think of a bay extender that would sit in front of the unit. That or a lot of the adapters we saw had 3-4 foot cables. We could custom fab a small rig to hold the drives in front of the unit.
|
# ? Jul 18, 2011 04:08 |
|
It sounds like a lot of work for something that isn't going to offer tremendous performance.
|
# ? Jul 18, 2011 05:30 |
|
Hok posted:I'm sorry, but snapshots are not backups, they're useful tools, but if the poo poo hits the fan you can't rely on them. That is correct, sir. And given how cheap tape storage is now with Dell's and Quantum/ADIC's lines of low-end units (think the SuperLoader 3 or the bottom-tier PowerVaults), there's really no excuse. And this is especially true if you're a new financial, political, or marketing operation that has to have data stored for more than the length of time it takes for a drive or RAID to crater.
|
# ? Jul 18, 2011 06:35 |
|
Honky_Jesus posted:That is correct, sir. And given how cheap tape storage is now with Dell and Quantum Adic's line of low-end units (think Superloader 3 or the bottom tier PowerVaults), there's really no excuse. Tape is awesome; however, don't actually buy Dell autoloaders. Both PowerVaults we've owned have died in only a couple of years, and when they were working they changed tapes slowly and made horrible grinding noises. We've replaced ours with HP G2 loaders and I have zero complaints about them. Still quite cheap as well. Edit: Small sample, I know, but we opened them up and the mechanical build quality is on par with consumer ink jets. I can't imagine them actually surviving anywhere. Nukelear v.2 fucked around with this message at 14:57 on Jul 18, 2011 |
# ? Jul 18, 2011 14:54 |
|
Nukelear v.2 posted:Tape is awesome, however don't actually buy Dell autoloaders. Both powervaults we've owned have died in only a couple years, and when they were working they changed slowly and made horrible grinding noises. We've replaced ours with HP G2 loaders and I have zero complaints about them. Still quite cheap as well. We've had good luck with Dell's tape drives. They're just rebranded IBM drives, I believe. vv We bought a Quantum tape drive and it made the most horrible sounds ever, as if there were rocks being ground up.
|
# ? Jul 18, 2011 15:36 |
|
Nukelear v.2 posted:Tape is awesome, however don't actually buy Dell autoloaders. Both powervaults we've owned have died in only a couple years, and when they were working they changed slowly and made horrible grinding noises. We've replaced ours with HP G2 loaders and I have zero complaints about them. Still quite cheap as well. It really doesn't matter who you're buying your autoloaders from. Chances are they're either manufactured by Overland, Quantum or Storagetek. Dell, IBM, HP and a ton of other major brands just rebadge. Also, tape sucks. Disk to disk backup is where it's at.
|
# ? Jul 18, 2011 16:04 |
|
Nomex posted:Also, tape sucks. Disk to disk backup is where it's at. I really don't understand how disk to disk is more popular than tape with actual usage scenarios taken into consideration. With our LTO4 tapes, we can store 1TB of data per tape, offsite. How can disk to disk compare to that? We have about 4 TB on an older MD3000 we could use for disk to disk, but it's slower than LTO4 and would fill up within a week. Sure, de-duplication would solve that issue, but it still means we'd need to spend big $$ to get disks faster than our tape, and we'd still have no offsite backup solution.
|
# ? Jul 18, 2011 17:01 |
|
Nomex posted:It really doesn't matter who you're buying your autoloaders from. Chances are they're either manufactured by Overland, Quantum or Storagetek. Dell, IBM, HP and a ton of other major brands just rebadge. Nomex posted:Also, tape sucks. Disk to disk backup is where it's at. It's safest from intruders if it's in a safe instead of on the network. Vulture Culture fucked around with this message at 17:09 on Jul 18, 2011 |
# ? Jul 18, 2011 17:07 |
|
Nomex posted:Also, tape sucks. Disk to disk backup is where it's at. We never could make disk to disk as our only form of backups make sense for us as a company. We dump initial backups to disk, yes, but we archive off weekly and monthly sets to LTO4. We haven't found any solution yet for storing the entire set of company data that beats a turtle of LTO5 in an offsite safe to protect against mega disaster, or insane admin, or both.
|
# ? Jul 18, 2011 17:30 |
|
Nomex posted:It really doesn't matter who you're buying your autoloaders from. Chances are they're either manufactured by Overland, Quantum or Storagetek. Dell, IBM, HP and a ton of other major brands just rebadge. Well, the newer STK is Oracle now, and they're hawking the StreamLine series something fierce. The newer models are pretty spiffy; I touch myself thinking about the SL8500. But the SL500 sucked pretty hard; the robot was a hassle to replace, there's no front console, and most end users never installed the SL software (which is pretty typical for Sun products; think SADE and the 6320). Overland...Bull Systems...BullOS...yuck. Quantum...they keep releasing froggy firmware and beta patches. I like the Scalar i-line, but stable they are not, and almost all the issues are firmware or media related. And sure. You show me a drive that will last 25 or 50 years, and I will agree disks rock. Doesn't happen? Failure rate of hard drives is around 1-in-8 a year? RAID 10 fails because drives in the same mirror broke? Bad data written across all the drives in a RAID and now your OS is hosed? Having to constantly pester SAs that they can't have 3 drives in a system, with 1 drive in a critical failure state and 1 drive degraded, and still expect to be totally okay? I like tape.
|
# ? Jul 18, 2011 18:23 |
|
Misogynist posted:IBM rebrands certain pieces in their low-end line, but they do manufacture most of their libraries themselves. The 3584 I used to administer was an amazing piece of gear. The 3584 is a solid library. And very easy to maintain. Not as easy as the L180 or L700, but still a fine piece of work. But yeah, a lot of the manufacturers cross-build. STK's L700 was made by STK, Sun, HP, Dell, and IBM under the same name or slightly different. Nice thing about that is, aside from the controller board, the parts were interchangeable. Robot was a robot, regardless of who made it, although the different firmwares could make it a little wonky. But the Powderhorn...Yeeeesh...what a beast. DoT in Maryland still uses like 10 of those.
|
# ? Jul 18, 2011 18:28 |
|
three posted:We've had good luck with Dell's tape drives. They're just rebranded IBM drives, I believe. vv The ML6000s are Quantum's, the TL2000/4000s are IBM's, and the smaller Dells are mostly Quantums. Tape libraries are pretty reliable these days, which is more than can be said of a certain brand of LTO5 tapes, which are trashing drives left, right and center.
|
# ? Jul 19, 2011 10:28 |
|
Let's talk arrays for a minute. I have a pretty small setup- 4TB HP P2000, 2 VM boxes with 15-20 VM's total, for an office of 25-50 people. Is there any merit to creating my vdisks on multiple raid arrays, one for storage purposes, one for VM host purposes, with the intention of improving performance? I feel like just making one large Raid-50 array over all of the disks is a reasonable decision, but others in my organization feel like there is some reason why that wouldn't be a "best practice". Does anyone have any input?
|
# ? Jul 19, 2011 17:30 |
|
Dreadite posted:Let's talk arrays for a minute. I have a pretty small setup- 4TB HP P2000, 2 VM boxes with 15-20 VM's total, for an office of 25-50 people. The arguments for doing it differently can hold merit, but it really depends on your actual workloads. We run all of our file shares on SATA disk but mix all of our other workloads.
|
# ? Jul 19, 2011 17:43 |
|
adorai posted:The arguments for doing it differently can hold merit, but it really depends on your actual workloads. We run all of our file shares on SATA disk but mix all of our other workloads. Well, that's the thing. The actual file share workload is going to be pretty low, and it's 12 of the same drives: 300GB 15K SAS. I just want to prevent bottlenecks at the vdisk/RAID level, but I don't know if that's a reasonable concern, or enough reason to chop up my already small storage space into a RAID-5 and two RAID-10s or something.
|
# ? Jul 19, 2011 17:48 |
|
I run about twice that on 2 EqualLogic SANs in RAID-50. Total of 28 SATA disks. It's pretty bad, but usable in most cases. It really depends on your IOPS and your read/write ratios. If you are doing a lot of writes then I would consider the RAID-10. If not, you could probably do okay with RAID-5, although I would probably do RAID-6 if you can. Checking to see if you can do RAID-50 would be the lazy way out. And before anyone comments on my current setup, yes it causes me to drink heavily, and yes I am trying to explain to everyone involved that running our entire business on a bunch of SATA disks is retarded. But at least I'm not the RAID-0 guy.
|
# ? Jul 19, 2011 18:15 |
|
I think I'm about to roll out two RAID-50 arrays, six disks each. That gives me 1.2TB usable on each array, which is actually 300GB more than if I used RAID 10 + RAID 5.
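For anyone checking the arithmetic, the usable-capacity formulas behind these layouts are simple. A sketch, assuming equal-size disks and ignoring array metadata and formatting overhead (the function names are illustrative):

```python
def raid5_usable(disks, size_gb):
    # RAID 5: one disk's worth of parity per group
    return (disks - 1) * size_gb

def raid50_usable(disks, size_gb, groups=2):
    # RAID 50: a stripe across `groups` equal RAID 5 sets
    per_group = disks // groups
    return groups * raid5_usable(per_group, size_gb)

def raid10_usable(disks, size_gb):
    # RAID 10: mirrored pairs, so half the raw capacity
    return (disks // 2) * size_gb
```

With six 300GB disks per array, RAID 50 (two 3-disk RAID 5 groups) gives 1,200GB, matching the 1.2TB figure above, versus 900GB for RAID 10 on the same six disks.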
|
# ? Jul 19, 2011 18:35 |
|
Why split them in 2? We're not a big enough shop to have a storage admin, so I do that in addition to anything else that has flashy lights, but I would think that would give you more IOPS because you'll have more spindles.
|
# ? Jul 19, 2011 18:39 |
|
I'm a team leader on the storage team for a fairly large UK company, and we're currently in the final stage of discussions with three of the larger vendors in the market (block based) to refresh our existing EMC estate (DMX / Clariion / VMAX). I was wondering if anyone has had any practical experience with the IBM Storwize V7000 midrange disk system? Is there anything to be aware of, or does it do exactly as advertised? Also, to add to the tape vs disk debate, we're in the final throes of removing our Powderhorns and replacing them with Data Domain. We're finding our backups (around 500TB a night) and restores are at least 30% faster. Coupled with the not-insignificant amount of data centre floorspace we're reclaiming, going to a disk based solution is a bit of a no brainer.
|
# ? Jul 20, 2011 00:11 |
|
We use almost everything mentioned here in our backup scheme. The NetApp SAN runs snapshots, nightly backups are pushed to a Data Domain, and that's backed up weekly to LTO4, which is then shipped offsite to Iron Mountain. Monthly backups stay offsite for a long, long time, and weeklies get rotated every X weeks. I'm personally a big advocate of offsite storage: if an unhappy admin went in and wiped out the SAN (which runs all our VMs and storage), and then nuked the DD, and we didn't have tapes, the company would be irreparably harmed. It wouldn't be able to recover. Lots of you guys work for smaller companies, so this kind of hardware is outside of your budgets. It's nice as hell to have though.
|
# ? Jul 20, 2011 00:39 |
|
skipdogg posted:I'm personally a big advocate of offsite storage, if an unhappy admin went in an wiped out the SAN (which runs all our VM's, and storage), and then nuked the DD, and we didn't have tapes the company would be irreparably harmed. It wouldn't be able to recover. This is a topic that has been brought up recently due to what happened to the Aussie company Distribute IT. As far as I'm aware, Data Domain have the ability to dial into the box and recover any backups; it's something initiated by engineering. Even if someone deleted all the backups, they could still get them back. Feel free to ask them.
|
# ? Jul 20, 2011 00:46 |
grobbendonk posted:I'm a team leader for the storage team for a fairly large UK company, and we're currently in the final stage of discussions with three of the larger vendors in the market (block based) to refresh our existing EMC estate (DMX / Clariion / VMAX). The V7000 is alright. I've no practical experience, only what I've seen and heard. IBM are offering some rock bottom prices to get footprint, and I think they updated the platform a few weeks ago. - It doesn't have any kind of compression or deduplication like EMC or NetApp can offer. - It doesn't offer any kind of built-in file capability like EMC or NetApp (even though you say it's for your block environment, the file capability often comes in handy somewhere over the next 3-5 years!). - I recall it comes with SVC to allow you to virtualize any old arrays. This requires some effort, but I've not heard of anyone actually bothering to do so. - As SVC sits over the top, expect any VMware plugins and features to be available for use quite some time after the actual VMware release (12 months+). - It doesn't have the platform maturity of the other vendors' offerings and lacks many things which the others have. It might be a contender for the Clariion footprint, but certainly not your DMX/VMAX. The new EMC VNX range is very good (I'm biased), so consider using the IBM proposition to beat down the price.
|
# ? Jul 20, 2011 01:03 |