|
bmoyles posted:Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days? If you want file presentation instead of block presentation from your unit, the new HP Extreme Data Storage 9100 is pretty badass. Really low cost per GB, very fast NFS and CIFS, managed through one console. It's also extremely dense thanks to their super fancy new disk shelves that hold 82 LFF disks in a 5U shelf. DreamWorks just purchased a few petabytes of it.
|
# ? Jun 15, 2009 23:53 |
|
You could do a Coraid EtherDrive setup... 5x 24TB shelves would work, and come out to way less than $100K, and it supposedly has unlimited expandability.
|
# ? Jun 16, 2009 00:05 |
|
bmoyles posted:Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days? http://www.sun.com/storage/disk_systems/unified_storage/ You can probably save some money if you go with their disk arrays instead. At my work I recently put together a J4400 array, which is 24TB raw (24x1TB); after setting up ZFS with RAIDZ2 + 2 hot spares, it is about 19TB usable. You can daisy-chain up to eight J4400s together (192 disks), so you can expand to roughly ~150TB. One J4400 was about $20k after getting two host cards, dual SAS HBAs, and gold support, plus you will need a server to hook it up to. I would find a decent box and load it up with a boatload of memory for the ZFS ARC cache. http://www.sun.com/storage/disk_systems/expansion/4400/
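For reference, the capacity math above works out like this (the commented `zpool` line is only a sketch — real J4400 device paths will differ from these made-up `cXtYdZ` names):

```shell
# 24 bays, 2 held back as hot spares, and RAIDZ2 burns 2 disks of parity:
BAYS=24 SPARES=2 PARITY=2
DATA_TB=$(( BAYS - SPARES - PARITY ))
echo "${DATA_TB}TB raw data capacity; ~19TB usable after ZFS metadata and TB/TiB overhead"

# The pool itself would be created roughly along these lines (disk names are examples):
# zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t21d0 spare c1t22d0 c1t23d0
```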
|
# ? Jun 16, 2009 01:56 |
|
Hmm, interesting stuff, thanks for the comments. This was more hypothetical. Another group is on the cusp of picking up some EqualLogic gear (their big 48-drive nodes) for this purpose, but it just seemed like overkill given the task. Figured I'd lend a hand and see if there were alternatives for them and save the company some cash, at least assuming there were alternatives beyond build-it-yourself solutions. I'll shop these around. What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.
|
# ? Jun 16, 2009 03:55 |
|
bmoyles posted:What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.
|
# ? Jun 16, 2009 04:46 |
|
Yeah, I haven't been able to find any good data on them at all. The price is pretty awesome, and the concept makes sense, but I'm not going to take the plunge if they won't let me play with the boxes first...
|
# ? Jun 16, 2009 13:01 |
|
Bluecobra posted:Sun's Amber Road system looks pretty nifty: Vulture Culture fucked around with this message at 16:16 on Jun 16, 2009 |
# ? Jun 16, 2009 16:13 |
|
bmoyles posted:What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed. I have four Coraid SR1521s populated with 15 x 500GB SATA each. I was excited when I got them a couple years ago: basically 15 hot-swap drive bays, an N+2 power supply configuration, and 2 x 1GbE network connectivity, all for $5k each (not including the drives, which were another $2k). I started off doing some Xen virtualization since AoE wasn't (and still isn't) supported by VMware. I enabled jumbo frames and configured the drives as a single RAID-10 with one hot spare. I was never able to achieve anywhere near the published numbers. With a moderate amount of disk I/O the shelf would really start to lag badly. The fact that it has zero cache really hurts the performance. If they would slap in a 2GB cache and decent management and alerting tools it would be killer. I ended up buying an EqualLogic shelf and consolidated the VMs from all four SR1521s onto it, and still have IOPS to spare. Keep in mind though that the EQ box has 16 x 300GB 15k SCSI. I will sell the SR1521s to anyone who wants them for a song. Coraid sells an HBA that allows you to use them in VMware now. brent78 fucked around with this message at 18:32 on Jun 16, 2009 |
# ? Jun 16, 2009 18:29 |
|
Misogynist posted:You can also use an x4600 as an interface to a bunch of Thumpers (up to 6 at 48x1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native ZFS kernel CIFS sharing. You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.
|
# ? Jun 16, 2009 18:54 |
|
complex posted:You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.
|
# ? Jun 16, 2009 20:22 |
|
Misogynist posted:You can also use an x4600 as an interface to a bunch of Thumpers (up to 6 at 48x1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native ZFS kernel CIFS sharing. Though I don't see why you would need something like an X4600 when you can get an X4440 instead. With the J4500 array, you daisy-chain each expansion tray together instead of having a dedicated external SAS port in the server for each tray. One cool thing is that the SAS HBAs support MPxIO, so you can be connected to both host cards in the tray.
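For anyone wondering what the MPxIO side of that looks like on the Solaris host, it's roughly this (reference only — `stmsboot -e` reconfigures device paths and wants a reboot, so don't paste it on a whim):

```shell
# Enable MPxIO for supported HBAs; after the reboot, the per-path device
# nodes collapse into a single scsi_vhci node per LUN:
stmsboot -e

# Then each J4500 disk should show two paths, one through each host card:
mpathadm list lu
```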
|
# ? Jun 16, 2009 20:39 |
|
bmoyles posted:What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed. They suck balls, for lack of a more elegant way of phrasing it. Their "device" is just Plan 9 with an AoE stack on it. They're slow, latent, and bug-prone. We bought 400-500TB worth of them a few years back, using 750GB disks, and regretted it every second of the way.
|
# ? Jun 16, 2009 21:48 |
|
bmoyles posted:Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days? HDS (the piece that used to be Archivas) makes a widget called HCAP that's designed for long-term data archiving. http://www.hds.com/products/storage-systems/content-archive-platform/index.html If you have any really specific questions about what it can do, I can get them answered.
|
# ? Jun 18, 2009 23:23 |
|
bmoyles posted:Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days? EMC Centera http://www.emc.com/products/detail/hardware/centera.htm Works with hundreds of apps including Symantec Enterprise Vault, DiskXtender, etc. Also replicates to a second Centera so you don't have to back it up. If you go with the Parity model you're looking at 97TB usable in a full rack.
|
# ? Jun 19, 2009 11:12 |
|
Are there any knowledgeable goons here that would care to comment on an HDS AMS 2500 vs an IBM DS5300? We're looking at around 100TB with a mix of FC/SAS and SATA drives, but will need to expand that to 200TB+ over the next couple of years. HDS are claiming their new internal SAS architecture is all the rage; IBM are basically saying HDS are full of poo poo and their stuff is way cooler. The IBM kit is about 10-15% more expensive but supposedly faster, so they tell me. On the other hand, we've been using HDS for many years and have had no major problems; it's all worked very well and their support has been excellent. While we have plenty of IBM servers we have never used or bought anything from their storage range. For this project the "fast" drives (FC or SAS) would be used for VMware for Exchange, AD, etc. while the SATA disks would be used for archiving of medical records. Can anyone give any advice or reasons why it might be worth (or not) spending the extra $s on the IBM?
|
# ? Jun 19, 2009 19:26 |
|
EscapeHere posted:Are there any knowledgeable goons here that would care to comment on an HDS AMS 2500 vs an IBM DS5300? We're looking at around 100TB with a mix of FC/SAS and SATA drives, but will need to expand that to 200TB+ over the next couple of years. HDS are claiming their new internal SAS architecture is all the rage; IBM are basically saying HDS are full of poo poo and their stuff is way cooler. The IBM kit is about 10-15% more expensive but supposedly faster, so they tell me. On the other hand, we've been previously using HDS for many years and had no major problems, it's all worked very well and their support has been excellent. While we have plenty of IBM servers we have never used or bought anything from their storage range. I've never been terribly happy with the IBM storage I've used in the past (we have an older DS4000 series that I'm in the process of retiring), although I haven't used the DS5300. Performance was OK, and the hardware itself was fairly reliable (with the exception of cache batteries dying every 5 or 6 months, which requires you to pull the controller and disassemble it with a screwdriver), but support and management of the hardware itself was a huge pain in the rear end. By and large though, that's the same general complaint I have about all IBM equipment. Stupid poo poo like 17 different serial numbers being on a piece of equipment and IBM having no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that need to be manually tracked and aren't updated regularly vs. regularly updated packages, etc. I have no idea how IBM stays in business; dealing with them is terrible. Maneki Neko fucked around with this message at 20:46 on Jun 19, 2009 |
# ? Jun 19, 2009 20:41 |
|
EscapeHere posted:For this project the "fast" drives (FC or SAS) would be used for VMware for Exchange, AD, etc while the SATA disks would be used for archiving of medical records. Can anyone give any advice or reasons why it might be worth (or not) spending the extra $s on the IBM? What you get in performance for that 10-15% bump is going to be erased by the utter pain in the rear end that managing IBM kit is. IBM gear is great if your company is already an IBM dynasty.
|
# ? Jun 19, 2009 22:32 |
|
Maneki Neko posted:By and large though, that's the same general complaint have about all IBM equipment. Stupid poo poo like 17 different serial numbers being on a piece of equipment and IBM has no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that need to manually tracked and aren't updated regularly vs. regularly updated packages, etc. I have no idea how IBM stays in business, dealing with them is terrible.
|
# ? Jun 20, 2009 22:23 |
|
A good rack-mountable NAS with 4 to 8 TB and under 2 grand?
|
# ? Jun 22, 2009 17:34 |
|
BonoMan posted:A good rack-mountable NAS with 4 to 8 TB and under 2 grand? If you do not need any performance at all you could look at the Netgear RNR4410-100EUS, which should be between $1700 and $2500. It is rack-mountable and has four 1TB disks. The drawbacks are the absolutely terrible speed, the annoying web interface, and the crappy build quality. If you want something with a bit more performance you could look at the HP X1400 NAS (AP787A); the 4TB model costs around 6 grand. This is still not an absolute speed monster, but it is multiple times faster than the Netgear. If you can support it yourself you could build something from Newegg and use FreeNAS, but I have no idea what components to choose.
|
# ? Jun 22, 2009 18:15 |
|
KoeK posted:If you do not need any performance at all you could look at the Netgear RNR4410-100EUS, should be between $1700 - $2500. It is rack mountable and has 4 1Tb disks. The drawbacks are the absolute terrible speed, the annoying web interface and the crappy build quality. I have one of these I think, from when they were made by Infrant and mine only has 4x 500GB drives. It is absolutely terrible and I wouldn't recommend it to anyone.
|
# ? Jun 22, 2009 18:34 |
|
Mierdaan posted:I have one of these I think, from when they were made by Infrant and mine only has 4x 500GB drives. It is absolutely terrible and I wouldn't recommend it to anyone. I have a client who didn't want to listen to my advice and took the 2TB ReadyNAS. And yes it sucks, but what do you expect for 2 grand.
|
# ? Jun 22, 2009 18:52 |
|
Sweet thanks for the recommends. Doesn't have to be ultra fast as it will only be used to pull graphic stills and not video.
|
# ? Jun 22, 2009 18:52 |
|
What about Qnap? I haven't heard crappy things and we're looking at this: http://www.newegg.com/Product/Product.aspx?Item=N82E16822107023 Which seems decent enough and has 8 bays which is nice. Any thoughts?
|
# ? Jun 24, 2009 16:31 |
|
BonoMan posted:What about Qnap? I haven't heard crappy things and we're looking at this: Looks like a cheaper version of the Adaptec SnapAppliance, but the SnapAppliance has a decent OS and is proven reliable (ours has a few years of uptime). I wouldn't use it for anything but archive.
|
# ? Jun 24, 2009 19:47 |
|
optikalus posted:Looks like a cheaper version of the Adaptec SnapAppliance, but the SnapAppliance has a decent OS and is proven reliable (ours has a few years of uptime). I wouldn't use it for anything but archive. Huh. Well, $2500 is our budget and I can't find good pricing info on SnapAppliance anywhere. We're looking at 6TB of storage.
|
# ? Jun 24, 2009 20:34 |
|
BonoMan posted:Huh. Well $2500 is our budget and I can't find good pricing info on SnapAppliance anywhere. We're looking at 6TB of storage. Well, $2100 + tax and shipping doesn't leave you much room for drives, and I'd heavily recommend RAID6 for SATA, so eight 1TB drives.
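The eight-drive suggestion isn't arbitrary — RAID6 spends two drives' worth of capacity on parity, so the math lands exactly on the 6TB target:

```shell
# RAID6 keeps two disks' worth of parity, so with 1TB SATA drives:
DRIVES=8 PARITY=2
USABLE_TB=$(( DRIVES - PARITY ))
echo "${USABLE_TB}TB usable, and the array survives any two simultaneous disk failures"
```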
|
# ? Jun 24, 2009 20:48 |
|
optikalus posted:Well, $2100 + tax and shipping doesn't leave you much room for drives, and I'd heavily recommend RAID6 for SATAs, so 8 1TB drives. Yeah there might be a little bit of flux we'll have to see. What's pricing like on SnapAppliances? I realize that's kind of a vague question, but any ideas? Thanks for the advice!
|
# ? Jun 24, 2009 21:15 |
|
BonoMan posted:Yeah there might be a little bit of flux we'll have to see. What's pricing like on SnapAppliances? I realize that's kind of a vague question, but any ideas? Looks like Adaptec sold it to Overland, and I can't find any current pricing. I remember them running about $5k for an 8TB box. At that price, you might as well look at Hitachi as well.
|
# ? Jun 24, 2009 21:50 |
|
optikalus posted:Looks like Adaptec sold it to Overland, and I can't find any current pricing. I remember them running about $5k for an 8TB box. At that price, you might as well look at Hitachi as well. At that price I'm just gonna stab myself in the eye. So we have $1500 budgeted for a firewall. Only we need a very simple firewall (it's a simple simple simple network). So we're thinking maybe we can get a simple firewall for $400-700 and use the rest for drives? Any firewall ideas?
|
# ? Jun 24, 2009 21:54 |
|
BonoMan posted:At that price I'm just gonna stab myself in the eye. Linksys WRT54G family? Basic PC running a linux firewall distribution? (IPCop, Smoothwall, etc) -- A former employer of mine ran 4 or 5 sites using IPCop and cable modem connections, the largest being maybe 100 office workers.
|
# ? Jun 25, 2009 07:03 |
|
BonoMan posted:At that price I'm just gonna stab myself in the eye. Depending on what kind of requirements you need, a Juniper SSG-5 (Should be between $500 - $600) or a Cisco ASA 5505 are both simple firewalls with comparable features. I'd go for the Juniper, but that is personal preference
|
# ? Jun 25, 2009 16:52 |
|
BonoMan posted:A good rack-mountable NAS with 4 to 8 TB and under 2 grand?
|
# ? Jun 25, 2009 20:16 |
|
Bluecobra posted:You can do this if you roll your own with a 3U Supermicro case, 1.5TB drives, a decent Intel motherboard/processor, and OpenSolaris so you can use ZFS. Once you get OpenSolaris installed, it is pretty trivial to make a ZFS pool, and you can do something like a RAIDZ2, which is similar to RAID 6 in redundancy. You can then share out the ZFS pool you just created to Windows hosts with a CIFS share.
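The roll-your-own route quoted above boils down to a handful of commands. A minimal sketch (the pool name, filesystem name, and disk names are all made up, and the `smbadm` line assumes workgroup mode rather than an AD join):

```shell
# Build a RAIDZ2 pool out of six example disks (two disks' worth goes to parity):
zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0

# Carve out a filesystem and share it over the in-kernel CIFS server:
zfs create tank/archive
zfs set sharesmb=on tank/archive

# Join a workgroup so Windows hosts can browse to it
# (an AD join would instead be 'smbadm join -u <admin> <domain>'):
smbadm join -w WORKGROUP
```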
|
# ? Jun 25, 2009 23:49 |
|
Misogynist posted:Just note that if you take this route and expect AD integration, you had better be very familiar with LDAP and Kerberos (or at least know enough to troubleshoot when the tutorial you're following misses a step), because it's not much more straightforward than it is in Samba. OpenSolaris is an amazing OS for Unix/Linuxy people, but Sun bet the storage farm on the 7000 series' secret sauce, not the OpenSolaris CLI.
|
# ? Jun 26, 2009 02:05 |
|
What's the general process for replacing drives in a NAS with newer ones? Say you have an 8 bay NAS and they're all taken up and then 3 years down the line you want to replace the drives? How does that happen without just copying everything over to a duplicate NAS or whatever?
|
# ? Jun 26, 2009 16:19 |
|
BonoMan posted:What's the general process for replacing drives in a NAS with newer ones? Say you have an 8 bay NAS and they're all taken up and then 3 years down the line you want to replace the drives? How does that happen without just copying everything over to a duplicate NAS or whatever? Typically copy-and-replace is how it is done. With the way disk sizes grow, you will likely be able to do some sideline magic where you take half of your new disks, make a quick software array on your current computer, and copy the data. Then yank all of your disks from the NAS, make a software array, and copy the data. Finally, put all the new disks (including the original array) into the NAS, build, and copy the data a final time. Some controllers allow you to do one-at-a-time disk swaps/rebuilds. Once you have rebuilt onto the larger disks it will automatically (or with some button pressing) expand the raw device to the larger size. Once you've done that you have to expand the overlaid filesystem somehow. If it's a "black box" device you are at the mercy of the device; if it's Linux/Windows you are at the mercy of the filesystem grow commands. (NTFS can grow with tools à la Partition Magic. Other common filesystems have similar things: http://www.google.com/search?q=ext3+grow+filesystem ) Sometimes OSes don't take kindly to directly-attached raw block devices changing size while booted. I would suggest mounting your filesystem read-only for the rebuild where the FS can grow. (In reality you just restore from backups, right?)
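For the Linux software-array case, the one-at-a-time swap plus grow described above looks roughly like this (reference only — device names are examples, and mdadm will happily eat data if pointed at the wrong disk):

```shell
# Swap one member at a time, letting each rebuild finish before the next swap:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sdd1        # the larger replacement disk

# ...repeat for each remaining member, then grow the array onto the new capacity:
mdadm --grow /dev/md0 --size=max

# Finally grow the filesystem into the new space (ext3 supports online growth):
resize2fs /dev/md0
```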
|
# ? Jun 26, 2009 17:17 |
|
H110Hawk posted:Typically copy and replace is how it is done. With the way disk sizes grow you will likely be able to do some sideline magic where you take half of your new disks, make a quick software array on your current computer, copy data. Yank all of your disks from NAS, make software array, copy data. Put all new disks including original array into NAS and build, copy data final time. We do have an LTO2 system layin' around so I guess we could use that.
|
# ? Jun 26, 2009 17:26 |
|
Alright, I feel like a cheap bastard for asking this but here goes. We have an X4540 at work and it's awesome. It came loaded up with 250GB Seagate SATA drives, and I'd like to upgrade one of the vdevs (6 drives) and a hot spare with 1TB drives. My Sun vendor (unsurprisingly) wants $850 for the Sun-branded Seagate ES.2 1TB drives. I can get the same drives from CDW for about $250. These are Canadian prices, by the way. Now I also have Sun J4200 disk arrays here, running since January or so, and a few of them have had their 250GB drives replaced with the cheaper Seagate Barracuda 7200.11 drives (after upgrading their firmware, fortunately) and they're working fine. Only one has failed with block errors, and I haven't had any strange RAID dropouts, caching issues, or other strange problems that could be associated with non-enterprise firmware. So the real question is, can I cheap out and use the 7200.11/7200.12 drives for the X4540 without any issue? They're literally half the cost of the ES.2 disks. Also, I'm not worried about support, since we've confirmed that issues not caused by third-party disks are still supported.
|
# ? Jun 29, 2009 15:38 |
|
lilbean posted:So the real question is, can I cheap out and use the 7200.11/7200.12 drives for the X4540 without any issue? They're literally half the cost of the ES.2 disks. Also, I'm not worried about support since we've confirmed that issues not caused by third-party disks are still supported. You should be fine. The hardest part of the operation is breaking the loctite.
|
# ? Jun 29, 2009 16:43 |