|
El_Matarife posted:Dell has some amazingly cheap dedupe appliances, the DR4000 / DR4100, but I should warn you they're not very pleasant to use. I had a lot of issues with replication, cleanup, and storage space displayed vs. used. We're using it as basically a CIFS dump, not NDMP, because no one bought the dedupe option for Backup Exec 2012. (Which is itself a huge piece of crap. Baby's first backup for sure. If you've got more than the smallest of small business environments, it'll make you miserable. I wouldn't use it with more than 10 servers.) I think the Dell software is maturing though. But how does that solve my export-for-a-client problem? I like the idea of disk over tape, but one thing I don't get with disk is the ability to export, stick it in a box, and forget about it. Disk-to-disk requires network connections. My environment (one of them) doesn't have it. Burning 35TB to DVD or even 4TB USB hard disks is not an option (:hur: 10 disks @ 35MB/sec. Rofl). What if I didn't want dedupe or backup agents or basically any fancy bullshit? I want right click, send to tape. Why should I have to pay Commvault 200k for the privilege?
|
# ? Feb 10, 2014 00:08 |
|
|
I've been reading a bit into Scale-Out File Servers and it's almost impossible to read anything about Hyper-V that Microsoft have produced without also coming across it. What I'm really struggling to work out is what the point of it is? It seems to be pitched at the lower end of the market as a less expensive SAN alternative, but needs two servers to act as the redundant hosts, a dual-ported JBOD shelf, RAID controllers that can understand what's going on etc. I'm really failing to see how it's a better proposition than just buying a low-end SAN seeing as the people making hardware for it aren't particularly big and there's still an element of 'building it yourself' and all the support issues that go along with that. Is anyone using them / used them in the past and can explain why it exists?
|
# ? Feb 12, 2014 12:54 |
|
Caged posted:I've been reading a bit into Scale-Out File Servers and it's almost impossible to read anything about Hyper-V that Microsoft have produced without also coming across it. What I'm really struggling to work out is what the point of it is? It seems to be pitched at the lower end of the market as a less expensive SAN alternative, but needs two servers to act as the redundant hosts, a dual-ported JBOD shelf, RAID controllers that can understand what's going on etc. Almost every JBOD is dual-controller these days anyway. The RAID controllers don't have to know -- gfs2 and ocfs2 keep small quorum data. Windows has a quorum partition. In ye olden days when SANs were incredibly expensive, people ran clusters with SCSI JBOD shelves. These days, it's supported just because. SoFS is used so you can expand these setups, present data over FC to systems without HBAs, etc. More importantly, think of SoFS like a ZFS/gluster hybrid. You want plain HBAs so Windows can see the disks. You want JBODs because they're plain disks and the easiest way to expand storage on a storage chassis. You want to expand storage so Windows can pretend to be a SAN head and present it to other systems. It's more of a SAN replacement. As for the last, all the major vendors still make DAS.
|
# ? Feb 12, 2014 14:53 |
|
Caged posted:I've been reading a bit into Scale-Out File Servers and it's almost impossible to read anything about Hyper-V that Microsoft have produced without also coming across it. What I'm really struggling to work out is what the point of it is? It seems to be pitched at the lower end of the market as a less expensive SAN alternative, but needs two servers to act as the redundant hosts, a dual-ported JBOD shelf, RAID controllers that can understand what's going on etc. I had sort of the same mental break you did a couple years ago when reading about the tech. To be flat honest, a scale out solution doesn't make a whole lot of sense, to me at least, when you can beat it on price with a VNXe or an MD3200i and take up less rack space to boot.
|
# ? Feb 12, 2014 15:53 |
|
Did they buy that SoFS acronym from IBM, or what? SOFS was the name of their NAS product before they rebranded it to SONAS.
|
# ? Feb 12, 2014 16:30 |
|
Microsoft and picking product names that other people haven't already used don't really go hand in hand.
|
# ? Feb 12, 2014 16:32 |
|
Disregard.
Moey fucked around with this message at 18:56 on Feb 12, 2014 |
# ? Feb 12, 2014 18:16 |
|
Caged posted:I've been reading a bit into Scale-Out File Servers and it's almost impossible to read anything about Hyper-V that Microsoft have produced without also coming across it. What I'm really struggling to work out is what the point of it is? It seems to be pitched at the lower end of the market as a less expensive SAN alternative, but needs two servers to act as the redundant hosts, a dual-ported JBOD shelf, RAID controllers that can understand what's going on etc. I see it as particularly useful for large data capacity with flexibility over a longer term. As Syano mentioned you can generally beat it on price with a Dell MD3200i, but when that unit is out of warranty you're looking at a forklift upgrade of it and all its chained disk shelves regardless of their status. With a SOFS you'd be able to gracefully add and remove disks and shelves from your pools without any interruption of service. Plus you get additional SAN-type features that wouldn't normally be available on something like an MD3200i, like auto-tiering, SSD cache, and others. It does come at a cost of manual setup and maintenance though.
|
# ? Feb 12, 2014 18:18 |
|
Microsoft's SOFS is them dipping their toes in the software defined storage waters. SDS is getting a lot of hype recently. EMC is pushing ViPR, Atlantis just announced their SDS solution, different VSAN appliances are popping up... The appeal is obvious and mirrors the appeal of VMware. You take some relatively cheap, heterogeneous, commodity hardware and build the storage abstraction layer on top of that. Why buy an expensive SAN that locks you into one vendor when you can reuse servers you already have, or invest in commodity gear and not have to worry about hardware lock-in? For Microsoft specifically, the appeal is that they sell more Windows operating systems. Microsoft has been making a push lately to engineer SAN dependencies out of their applications. DAGs in Exchange 2010 and AlwaysOn Availability Groups in SQL 2012 allow you to run fully fault tolerant services without centralized storage and without even requiring traditional backups. Hyper-V making use of SMB3 coupled with SOFS means that you can run your virtual environment entirely on Windows without requiring hardware or software from another vendor. They're trying to push storage hardware vendors out of the picture entirely so they can monopolize the stack. VMware is doing a little bit of the same thing with VSAN, but they have to be careful not to shoot themselves in the foot since they are owned by EMC, and EMC sells a LOT of traditional storage. Which is part of why EMC is making a push for SDS. They see the traditional storage market dwindling over time and want to make sure they have their foot in the door for whatever comes next, be it AFA, SDS, storage appliances...
|
# ? Feb 12, 2014 21:52 |
|
Posted a new thread like a jackass, should have just found this thread. Anyone know anything about Astute Networks' VISX Appliance?
|
# ? Feb 13, 2014 23:58 |
|
NippleFloss posted:Microsoft's SOFS is them dipping their toes in the software defined storage waters. The feature set is impressive for the price, that's for sure.
|
# ? Feb 14, 2014 23:54 |
|
NippleFloss posted:Microsoft's SOFS is them dipping their toes in the software defined storage waters. SDS is getting a lot of hype recently. EMC is pushing ViPR, Atlantis just announced their SDS solution, different VSAN appliances are popping up... The appeal is obvious and mirrors the appeal of VMware. You take some relatively cheap, heterogeneous, commodity hardware and build the storage abstraction layer on top of that. Why buy an expensive SAN that locks you into one vendor when you can reuse servers you already have, or invest in commodity gear and not have to worry about hardware lock-in? On a lark I decided to price out one of these Supermicro SuperStorage servers with 4TB 7200rpm drives and 128GB enterprise SSDs (half HDD / half SSD); the total cost was something like $26k, for 3x the storage and speed of what I just paid for in storage (still a small fry). It would have provided hot/cold tiers and 2-way redundancy. If this SDS stack is as reliable as I hope it is, it's a game changer. Just look at what SDN is doing now. But... you're taking on more of the burden/risk at the hardware layer. Microsoft isn't going to 4hr you a replacement drive, so now you're going to have to keep a real inventory of drives, or shelves, or controller cards. Either you buy lots and lots of cheap storage hardware and let the software abstract those issues away, or you lean on your SAN e-mailing the vendor at 3am to dispatch a new drive to your colo, miles or states away. The idea of just pulling and slotting bigger drives and SSDs in the same footprint is really, really alluring. The unfortunate thing is I can see someone who doesn't know what they're doing implementing SDS on poor-quality hardware, or not understanding how to properly maintain the infrastructure, and making a mess of it.
|
# ? Feb 15, 2014 08:30 |
|
Toshiba steps up for 5TB disks, and Seagate says 6TB by year end: http://www.anandtech.com/show/7760/5tb-35-enterprise-hdd-from-toshiba-announced
|
# ? Feb 17, 2014 22:12 |
|
MrMoo posted:Toshiba steps up for 5TB disks, and Seagate says 6TB by year end: Going to be interesting to watch what these ballooning disk sizes do to RAID tech. Rebuilding after a disk failure will take ages, and there will be a much greater risk of additional failures during that extended rebuild. Will this finally push wide adoption of distributed storage architectures like Ceph?
|
# ? Feb 17, 2014 23:19 |
|
What's the current limitation on rebuild times? Is it something that could be solved by better silicon?
|
# ? Feb 17, 2014 23:22 |
|
Docjowles posted:Going to be interesting to watch what these ballooning disk sizes do to RAID tech. Rebuilding after a disk failure will take ages, and there will be a much greater risk of additional failures during that extended rebuild. Will this finally push wide adoption of distributed storage architectures like Ceph? People will just invent some RAID level that has a 3rd parity disk. I'd say RAID10 everywhere, but they're likely to be long-term archival storage with SSDs nearline anyway, so speed isn't all that critical. It'd be nice if it were Ceph/Swift/Gluster/whatever, but those filesystems are almost always on top of mdraid or something anyway, since configuring so you don't actually treat node1.disk1 as a possible redundancy for node1.disk2 is a PITA. Maybe they'll improve this on the software level first.
|
# ? Feb 17, 2014 23:24 |
|
I believe it's Mean time to unrecoverable error. Around 2TB it hit the point where if you read every bit on the drive you've exceeded the mean.
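Back-of-the-envelope, if you assume the usual consumer spec of one unrecoverable read error per 10^14 bits (that spec varies by vendor, so treat these as ballpark numbers):

```python
# Consumer drives are commonly spec'd at 1 unrecoverable read
# error (URE) per 1e14 bits read; enterprise drives at 1e15.
URE_SPEC_BITS = 1e14

def expected_ures(read_tb: float, spec_bits: float = URE_SPEC_BITS) -> float:
    """Expected URE count for reading `read_tb` terabytes once through."""
    bits_read = read_tb * 1e12 * 8  # decimal TB -> bits, as vendors count
    return bits_read / spec_bits

# One pass over a single 2TB drive: ~0.16 expected UREs.
# A RAID5 rebuild that reads 6 surviving 2TB drives (~12TB): ~0.96,
# i.e. you basically expect to hit one somewhere during the rebuild.
```

The array-level number is the one that bites: a rebuild reads every surviving drive end to end, not just one.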
|
# ? Feb 17, 2014 23:26 |
|
FISHMANPET posted:I believe it's Mean time to unrecoverable error. Around 2TB it hit the point where if you read every bit on the drive you've exceeded the mean. That's the limitation on "when do I need raid6". The limit on how fast you rebuild is parity calculations, basically. You can't just copy unless you replace before it fails, so you're stuck reading everything and calculating what parity data to write, which gets worse the larger the array is (and generally gets slower with more drives, not faster).
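Toy illustration of the point (block values made up): with single XOR parity, the dead drive's data only exists as the XOR of every surviving member, so each stripe of the rebuild has to read all the other drives.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (RAID5-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe on a 4-drive group: 3 data blocks + 1 parity block.
data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = xor_blocks(data)

# Drive holding data[1] dies: reconstructing its block requires
# reading the corresponding block from *every* surviving drive.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```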
|
# ? Feb 17, 2014 23:40 |
|
evol262 posted:People will just invent some RAID level that has a 3rd parity disk. I'd say RAID10 everywhere, but they're likely to be long-term archival storage with SSDs nearline anyway, so speed isn't all that critical. It already exists, ZFS has had a third parity disk for years. adorai fucked around with this message at 00:38 on Feb 18, 2014 |
# ? Feb 18, 2014 00:33 |
|
adorai posted:It already exists, ZFS has had a third parity disk for years. Either way, a 6+2 raidgroup is already statistically unlikely to lose data even at 10TB drives. Using a very pessimistic estimate of a 25% chance of drive failure annually, the manufacturers' consumer-grade bit read error rates, a 5MB/sec rebuild rate, and 6 data drives plus 2 parity drives, the chances of data loss over 5 years are 0.0002% (which is 1 in 500k for those that aren't good with numbers). If you drop the estimated annual drive failure rate to a more reasonable 10%, the chance of data loss is basically zero even with 20TB drives.
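For anyone who wants to play with that kind of estimate, here's a deliberately simplified sketch (exponential failure model, no URE term, parameter names are my own) — it won't reproduce the exact figures quoted above, but it shows the shape of the math:

```python
import math

def p_fail_within(days, afr):
    """Chance one drive fails within `days`, given an annual failure rate."""
    rate_per_day = -math.log(1.0 - afr) / 365.0
    return 1.0 - math.exp(-rate_per_day * days)

def p_group_loss(n_drives=8, n_parity=2, afr=0.10,
                 drive_tb=10.0, rebuild_mb_s=5.0, years=5.0):
    """Rough chance a raidgroup loses data over `years`: some drive fails,
    then at least `n_parity` more survivors fail before its rebuild ends."""
    rebuild_days = drive_tb * 1e6 / rebuild_mb_s / 86400.0  # MB / (MB/s) -> days
    p_one = p_fail_within(rebuild_days, afr)
    survivors = n_drives - 1
    # P(>= n_parity of the survivors also fail during the rebuild window)
    p_overlap = sum(math.comb(survivors, k)
                    * p_one**k * (1.0 - p_one)**(survivors - k)
                    for k in range(n_parity, survivors + 1))
    return n_drives * afr * years * p_overlap  # expected triggering failures
```

Crank the annual failure rate or drive size up and you can watch the overlap term, not the single-failure term, dominate.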
|
# ? Feb 18, 2014 03:52 |
|
evol262 posted:It'd be nice if it were Ceph/Swift/Gluster/whatever, but those filesystems are almost always on top of mdraid or something anyway, since configuring so you don't actually treat node1.disk1 as a possible redundancy for node1.disk2 is a PITA. Maybe they'll improve this on the software level first. I'm not even a Ceph noob but I thought the entire point of it was that as long as your config accurately describes disks/servers/rows/data centers, it will automatically spread your copies as wide as it can. Are the docs misleading?
|
# ? Feb 18, 2014 06:56 |
|
Zorak of Michigan posted:I'm not even a Ceph noob but I thought the entire point of it was that as long as your config accurately describes disks/servers/rows/data centers, it will automatically spread your copies as wide as it can. Are the docs misleading? Ceph recommends one osd per drive. It doesn't make it less of a pain in the rear end to configure.
|
# ? Feb 18, 2014 07:06 |
|
So I'm not a storage engineer by trade, but I am unfortunately being required to plan storage needs for our campus. I have inherited an older-model Dell Compellent which seems super great on paper, except when I went to ask for a quote on enough drives to max out my slots, it came out to roughly $1000 per 2TB drive. Furthermore, this August they are discontinuing sales on these drives, and because the Compellent can only use Dell drives, we're basically screwed if we lose one after our warranty expires. What are the main options for storage arrays that allow use of any SATA 6Gb/s drive? Are some manufacturers much better than others? I've been poking around Google and haven't found any real relevant information for my scenario, but came up with Zstax open storage. Ideally I'd like a 24-slot enclosure with a web interface for volume management, and a 10GbE interface.
|
# ? Feb 18, 2014 18:53 |
|
I think you might be the target market for the Windows SoFS thing that was talked about a few posts back, especially if you want to be able to grow the system without being tied to a particular vendor. I'm not qualified enough to say for certain that it will do what you need, there are people in the thread that are though. The Oracle storage appliances are quite popular as well.
|
# ? Feb 18, 2014 19:01 |
|
Spudalicious posted:So I'm not a storage engineer by trade, but I am unfortunately being required to plan storage needs for our campus. I have inherited an older model Dell Compellant which seems super great on paper, except when I went to ask for a quote on enough drives to max out my slots, it came out to roughly $1000/2TB Drive. This is what you should expect to pay across the board. Enterprise drives are subjected to much more rigorous QA than your typical $100 desktop drive, so you run a much smaller risk of drive failure. They're typically also dual-ported to deal with a full controller failure. Since RAID rebuild times on 2 TB drives can already creep into the range of days, and random read performance on a RAID array is beyond awful when it's mid-reconstruction, this is very important. Spudalicious posted:furthermore, this august they are discontinuing sales on these drives, and because the compellant can only use Dell drives, we're basically screwed if we lose one after our warranty expires. That said, you can obtain compatible parts for big-label products from third-party resellers for a very long time. Talk to your vendor/VAR and see if they have any recommendations for parts resellers once the warranty expires. Spudalicious posted:What are the main options for storage arrays that allow use of any SATA 6GB/s drive? Are some manufacturers much better than others? I've been poking around google and haven't found any real relevant information for my scenario, but came up with Zstax open storage. Ideally I'd like a 24 slot enclosure with web interface for volume management, and 10GB/e interface.
|
# ? Feb 18, 2014 19:31 |
|
Misogynist posted:This is what you should expect to pay across the board. Enterprise drives are subjected to much more rigorous QA than your typical $100 desktop drive, so you run a much smaller risk of drive failure. They're typically also dual-ported to deal with a full controller failure. Since RAID rebuild times on 2 TB drives can already creep into the range of days, and random read performance on a RAID array is beyond awful when it's mid-reconstruction, this is very important. Thanks for the information, this is helpful. I did read up on some SOFS stuff here which seemed pretty neat, if a little bit complicated for our use: http://www.petri.co.il/windows-server-2012-smb-3-scale-out-file-server.htm I think we'll stick with Compellent for a while. Our warranty is good for another three years, so maybe by then we'll have the money for a true storage infrastructure. Right now we're so limited that each individual project manager is responsible for data backups/archiving. Eventually I'd like a competitive, scalable infrastructure so that instead of "no" I can say "sure" when someone comes to me asking for 4-5TB of project data storage.
|
# ? Feb 18, 2014 19:44 |
|
Spudalicious posted:What are the main options for storage arrays that allow use of any SATA 6GB/s drive? Seconding that nobody does this because it's a stupid idea. Nexenta, a company that tried, now has an HCL that you have to adhere to if you want to buy support from them. They had too many problems with customers running consumer grade HDs. You do not want to be stuck self-supporting a storage solution for a campus. You will realize your mistake when it breaks and you have nobody to turn to. There are few faster ways to get fired. There's definitely a new breed of storage (Nimble, Tintri, etc) that is a bit cheaper than the price you'd pay for EMC/Netapp/Compellent/etc, but you don't want to go cheaper than that. If you're an enterprise, pay for enterprise storage. e: wow, 3 years to run on a system that was bought with 2TB drives? How old is that? Did they buy 5-year support for it or something? That's nuts. KS fucked around with this message at 19:52 on Feb 18, 2014 |
# ? Feb 18, 2014 19:47 |
|
KS posted:Seconding that nobody does this because it's a stupid idea. Nexenta, a company that tried, now has an HCL that you have to adhere to if you want to buy support from them. They had too many problems with customers running consumer grade HDs. Misspoke, we expire in May 2016, so 2 years! Still long enough to forget about for now. We received it in 2012 as a donation from Dell (we're a nonprofit). Not sure how the warranty was set up. It was bought with primarily 1TB tier 3 drives that I am expanding with 2TB drives, we have 10k 400GB tier 1 drives for our database i/o. I just started working here like 3 months ago so I'm still learning new things every day. Like how apparently my job title of System/Network Administrator also means Storage and database admin, IT Manager, IT Director, and other things that get randomly appointed to me. Spudalicious fucked around with this message at 20:02 on Feb 18, 2014 |
# ? Feb 18, 2014 20:00 |
|
The reason you can't pop in any SATA drive is that enterprise drives tend to have radically different firmware from consumer drives. Consumers don't realize this ("they're all 2TB drives!!!") but the firmware is really, really important in enterprise storage arrays.
|
# ? Feb 18, 2014 20:00 |
|
Spudalicious posted:Thanks for the information, this is helpful. I did read up on some SOFS stuff here which seemed pretty neat, if a little bit complicated for our use: http://www.petri.co.il/windows-server-2012-smb-3-scale-out-file-server.htm Maybe in that scenario they need to come to you with check in hand? My University runs this mammoth 2PB Isilon thing and it's all free to anybody and I don't really understand why/how.
|
# ? Feb 18, 2014 20:03 |
|
madsushi posted:The reason you can't pop in any SATA drive is that enterprise drives tend to have radically different firmware from consumer drives. Consumers don't realize this ("they're all 2TB drives!!!") but the firmware is really, really important in enterprise storage arrays. It's good to hear that I'm getting something for the money, as most of my spending requests are met by scientists who spent $500 on a 4-drive desktop enclosure, so why can't I just get like 20 of those and store everything!! It can be tiring to re-explain why enterprise grade hardware is so expensive. I get away with it on networking hardware because for some reason everyone here has a near-evangelical reverence for Cisco. Especially when they are just smart enough about IT to find articles like this: http://www.pcpro.co.uk/news/385792/consumer-hard-drives-as-reliable-as-enterprise-hardware Someday I will make them understand... someday. Spudalicious fucked around with this message at 20:08 on Feb 18, 2014 |
# ? Feb 18, 2014 20:05 |
|
Spudalicious posted:It's good to hear that I'm getting something for the money, as most of my spending requests are met by scientists who spent $500 on a 4-drive desktop enclosure so why can't I just get like 20 of those and store everything!! It can be tiring to re-explain why enterprise grade hardware is so expensive. Yeah you're paying a dude to show up at your door like 4 hours from a drive failure to install a new drive for you, and people on the phone to troubleshoot stuff. Time is money, as well as money. OldPueblo fucked around with this message at 20:10 on Feb 18, 2014 |
# ? Feb 18, 2014 20:08 |
|
On the subject of traditional raid and large disks, there are a number of methods for improving rebuild times and minimizing degraded performance that don't require a RAIN architecture, which has its own set of problems. There are things like disk pre-fail that detect disk errors early on and copy the data off to a spare and then fail it preemptively, saving the cost of parity calculations and reads across all other disks in the raid group. I think most vendors do this at this point and it's one of those custom firmware things that makes enterprise class drives so much more expensive than consumer class drives. There are also modifications like those in ZFS where the coupling of filesystem and volume manager means that you can store block/parity locations in filesystem metadata, rather than having them determined algorithmically by a dumb raid controller, so you can avoid rebuilding disk blocks that aren't active in the filesystem. This requires walking the filesystem metadata tree, which means CPU utilization goes up, but CPUs are rarely the bottleneck on systems with large capacity disks so that's not a big deal. And if you keep your metadata on SSD then those metadata reads are very quick and very efficient. Then there are distributed spare capacity solutions like E-Series Dynamic Disk Pools. Data, parity and spare capacity are all distributed across the pool and every volume does not necessarily have data on every disk, so when a drive fails only those volume segments that lived on that drive must be rebuilt, and because spare capacity is distributed as well you can rebuild on all remaining disks in the pool simultaneously, removing the bottleneck of write throughput on a single spare disk. Again, this type of design requires more CPU and memory resources to manage all of the metadata associated with handling data placement, but CPU and memory performance are growing much faster than spinning disk performance, so that's an easy trade to make.
Hardware raid that is divorced from the controlling OS will probably disappear entirely at some point in the relatively near future because doing it in software provides so much more flexibility and fast multi-core CPUs make it just as performant.
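The distributed-spare advantage is easy to see in a toy model: classic RAID funnels every rebuild write through one spare disk, while a declustered pool spreads the writes over all surviving members (ignoring parity math, host I/O contention, and everything else real arrays deal with):

```python
def rebuild_hours_hot_spare(drive_tb, write_mb_s=100.0):
    """Classic RAID: rebuild bottlenecked by the lone spare's write speed."""
    return drive_tb * 1e6 / write_mb_s / 3600.0

def rebuild_hours_declustered(drive_tb, pool_drives, write_mb_s=100.0):
    """Distributed spare capacity: survivors share the rebuild writes."""
    return drive_tb * 1e6 / (write_mb_s * (pool_drives - 1)) / 3600.0

# A 4TB drive at 100MB/s: ~11 hours to a single hot spare,
# versus well under an hour spread across a 24-drive pool.
```

Real-world gains are smaller since the pool is also serving host I/O, but the scaling with pool size is the whole pitch.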
|
# ? Feb 19, 2014 23:57 |
|
Yeah, IBM's storage/filesystem offerings are starting to move in that direction also. No more hardware RAID controllers, just have your NSDs (I/O servers) and filesystem layer handle everything. They use a "declustered" RAID to reduce rebuild times and lower the performance degradation during rebuilds. Here's a video if anyone's interested: https://www.youtube.com/watch?v=VvIgjVYPc_U
|
# ? Feb 20, 2014 00:19 |
|
Anyone have any experience with coraid? (http://www.coraid.com/) Considering them for a small project at work. The technology sounds great but of course it does coming from the vendor. Anyone know anyone using this in production?
|
# ? Feb 20, 2014 02:44 |
|
The_Groove posted:Yeah, IBM's storage/filesystem offerings are starting to move in that direction also. No more hardware RAID controllers, just have your NSDs (I/O servers) and filesystem layer handle everything. Is IBM still going to be selling engineered systems around GPFS? I was under the impression SONAS was going over to Lenovo.
|
# ? Feb 20, 2014 07:07 |
|
Spudalicious posted:It's good to hear that I'm getting something for the money, as most of my spending requests are met by scientists who spent $500 on a 4-drive desktop enclosure so why can't I just get like 20 of those and store everything!! It can be tiring to re-explain why enterprise grade hardware is so expensive. I get away with it on networking hardware because for some reason everyone here has a near-evangelical reverence for cisco. As a scientist who just bodged together two eSATA boxes and an old Dell workstation to provide some 30+TB of storage at the lowest price possible, I can sympathize. I'd never dare to run anything remotely critical on anything like this, but there is a bit of sticker shock when moving into the more serious end. But gently caress me - I'd love to pay three times what I did to get a nice solid setup with OOB NFS4/SMB + AD authentication and well-built hardware. I'll happily trade away a few nines of uptime and under-rebuild performance to get it down to that price, but I couldn't quite find anything like that from our usual vendors. (Heh, maybe we are the market for Synology's SMB products.) Computer viking fucked around with this message at 15:04 on Feb 20, 2014 |
# ? Feb 20, 2014 14:38 |
|
Misogynist posted:Is IBM still going to be selling engineered systems around GPFS? I was under the impression SONAS was going over to Lenovo.
|
# ? Feb 20, 2014 18:50 |
|
What is everyone using for PCIe SSD storage? We're going to be rolling out an ElasticSearch cluster and want to run it on SSD. Initially I was planning to just plug some 2.5" drives into hotswap bays, but SSD cards seem to have gotten much more reasonable, and then I won't have to try to use aftermarket/unsupported drives in a Dell. The two choices I've kind of narrowed down to at 800GB are the Intel 910 for $4k or the Micron P320h for $6500. Dell sells the ioDrive2 directly, but it's $8k and performance is mediocre.
|
# ? Feb 20, 2014 19:49 |
|
|
Nukelear v.2 posted:What is everyone using for PCIe SSD storage? We're going to be rolling out an ElasticSearch cluster and want to run it on SSD. Initially was planning to just plug some 2.5 drives into hotswap bays, but SSD cards seems to have gotten much more reasonable and then I won't have to try to use aftermarket/unsupported drives in a Dell.
|
# ? Feb 20, 2014 19:58 |