|
bmoyles posted:100k might not get you much SAN from Pillar...
|
# ? Feb 4, 2009 15:34 |
|
|
brent78 posted:I just saw an article about Pillar Data laying off 30% of their workforce... and here I am with 100k to spend on a SAN and can't even get them to return my phone call. Anyone using Lefthand VSA in production? It sounds very cool and scary at the same time. Oblomov has posted some pretty positive stuff about Lefthand in this thread. That said, make sure you look at a couple of manufacturers so you get exactly what you want. Just because you have 100 large to spend doesn't necessarily mean you should spend all of it.
|
# ? Feb 4, 2009 18:15 |
|
Catch 22 posted:What?!? Please give your definition on "Small SAN"?
|
# ? Feb 4, 2009 19:17 |
|
bmoyles posted:I've got a quote from Compellent on a clustered solution that pushed 170k for about 8TB raw.
|
# ? Feb 4, 2009 20:04 |
|
My company has about 200 TB of file storage with a custom homegrown solution and low I/O requirements. Been talking with EMC and they're steering me towards Atmos, which they only recently (November) announced, so it's still pretty new. Does anyone have any experience with these new clustered storage solutions? I've also briefly looked at Caringo, as well as Gluster on the open source side.
|
# ? Feb 4, 2009 21:21 |
|
Hey Storage Gurus, I have a question for you if you will permit: At work we have an EMC Clariion (CX3-80) replacing our old Intransa and a couple Isilon units (which have performed very poorly for our I/O needs). EMC promised us >=350 megabytes per second per datamover and we're not really seeing that, despite working with their engineers for months. It also appears there is no jumbo frame support on the 10GbE interfaces, which could be hurting our performance. In addition, getting CIFS and NFS working cooperatively on one file system proved to be a hassle. Any idea what's up with that? What other issues have you seen with EMC, performance or otherwise?
|
# ? Feb 4, 2009 21:45 |
|
mkosmo posted:Hey Storage Gurus, I have a question for you if you will permit: First, my two cents: the EMC engineer who sold and implemented the system for you should have designed it to meet your workload's performance requirements. Now on to the real question of why this is a difficult thing to achieve.

>350MB/s is quite a bit of throughput, but throughput alone is not the most important factor when it comes to disk performance; it can actually be extremely misleading. Disk performance is a careful balancing act of the right RAID level at the right block size, for an optimal IOPS and throughput level that best matches your I/O pattern. For example, 350MB/s @ 8 IOPS would be a pretty poorly performing disk system, whereas 80MB/s @ 5000 IOPS could be an extremely well performing one.

A few things that affect performance that you'll want to look into (and be sure to use IOMeter for your performance testing; anything else is basically full of lies, I'm looking at you HDTach):
1: I/O pattern. This means what % is read, what % is write, and what % is random vs sequential.
2: Block sizes. The block size of your stripe (I think EMC calls this element size?) and the block size of your filesystem partition. For the same number of disks in a disk group, you have to balance the block size between IOPS and throughput (MB/s). The smaller the block size, the more IOPS you get, but at a lower MB/s; conversely, the larger the block size, the more MB/s you get, but at a much lower IOPS. Additionally, you get better IOPS AND MB/s the more sequential your workload is, and more still if your workload is more read than write. 8kb block size on both the stripe and the filesystem is one of the better balanced configurations. With a 24x 450GB 15k disk group on an HP EVA4400 I've gotten in excess of 6000 IOPS (some of that figure would be cached performance) at 250-300 MB/s in certain access patterns.

Database servers, operating system drives, and anything else with a highly random and mildly write-heavy workload do best at the smaller block sizes. For fileservers you can increase your block size to 32kb or higher and get good throughput at a lower IOPS rate (which is generally fine for a file server). IOMeter is going to become your best friend for evaluating your disk/filesystem configurations to see if you get the performance you need to meet your workload. rage-saq fucked around with this message at 22:30 on Feb 4, 2009 |
# ? Feb 4, 2009 22:28 |
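The balancing act rage-saq describes above can be put in rough numbers. Here's a minimal sketch of why small blocks favor IOPS and large blocks favor MB/s; all the timing constants are illustrative assumptions for a single 15k spindle, not EMC or HP specs:

```python
# Toy model of the IOPS-vs-throughput tradeoff for one spindle.
# Each random I/O pays a fixed seek + rotational cost, plus a transfer
# cost that grows with block size. The assumed figures are ballpark
# numbers for a 15k drive, not measurements from any specific array.

SEEK_MS = 3.5        # average seek time (assumption)
ROTATE_MS = 2.0      # half a rotation at 15,000 RPM
TRANSFER_MB_S = 100  # sustained media transfer rate (assumption)

def per_disk(block_kb: float, random_fraction: float) -> tuple[float, float]:
    """Return (IOPS, MB/s) for one spindle at a given block size."""
    transfer_ms = (block_kb / 1024) / TRANSFER_MB_S * 1000
    service_ms = random_fraction * (SEEK_MS + ROTATE_MS) + transfer_ms
    iops = 1000 / service_ms
    return iops, iops * block_kb / 1024

for kb in (8, 32, 256):
    iops, mbs = per_disk(kb, random_fraction=1.0)
    print(f"{kb:>4} KB random: {iops:6.0f} IOPS, {mbs:6.1f} MB/s")
```

Running it shows exactly the tradeoff from the post: the 8 KB case wins on IOPS, the 256 KB case wins on MB/s, and a fully sequential workload (random_fraction near 0) improves both at once.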
|
Catch 22 posted:Still seems high with clustering (assuming you mean 2 mirrored SANs) and full SAS. Nope. Clustered heads. To be completely fair, this might be close to list, as I asked for ballpark pricing, so after discount it would've been somewhat cheaper, but still in that same ballpark. Adding a set of tier 3 storage (750G disks, same aggregate capacity) would've added another 40k onto the cost. Edit: this doesn't have Compellent, but it's interesting nonetheless: http://storagemojo.com/storagemojos-pricing-guide/ The Pillar pricing is in line with the quote we got from them as well. For example:
SLM 500-SAN: SAN SLAMMER, $39,840
BRX 500-144F15J: BRICK, 144GB FC 15000RPM DRIVES, JBOD CONFIGURATION, $27,325
Now, if I remember correctly, a brick is 12 drives, so $30k there gets you about 1.7TB. You're at 70k for a 1.7TB SAN, and this is before support and software. We added a bit more to our original quote and came up with just under 4TB of storage, a 2nd tier, and built-in NFS support from Pillar, and it listed at almost 200k. bmoyles fucked around with this message at 01:17 on Feb 5, 2009 |
# ? Feb 5, 2009 01:02 |
|
bmoyles posted:Nope. Clustered heads. To be completely fair, this might be close to list as I asked for ballpark pricing, so after discount it would've been somewhat cheaper, but still in that same ballpark. To add a set of tier 3 storage (750G disks, same aggregate capacity) would've added another 40k onto the cost. EDIT: looked again at the final: 25K, and I had them toss on the 3rd year of warranty for free Catch 22 fucked around with this message at 01:30 on Feb 5, 2009 |
# ? Feb 5, 2009 01:19 |
|
Yeap, the tier 3 was SATA, tier 1 was 15k FC. We ended up with a MD3ki as a stopgap solution for VMware and Isilon for NAS. Prolly gonna go with an EqualLogic box to replace that MD3ki later this year. The Compellent solution was really nice, and I'd recommend it to anyone who's got the cash.
|
# ? Feb 5, 2009 02:53 |
|
bmoyles posted:Yeap, the tier 3 was SATA, tier 1 was 15k FC. If you are going to look at Equallogic, look at Lefthand too.
|
# ? Feb 5, 2009 02:55 |
|
Catch 22 posted:What?!? Please give your definition on "Small SAN"? $100K is smallish. I think anything up to, say, $150K is on the small side. To give you an example, we just paid about half a mil for a NetApp 6080 SAN with a whole bunch of FAS storage, a few hundred TB. And that's really mid-size, not high-end, IMO, although it is heading toward the high end. On the LeftHand, I am still testing it in the lab and it's pretty good from everything I am seeing. Don't expect huge IO though: figure 1600-1800 IOPS per G2 node (SAS), so the max you can get is maybe 40K IOPS out of a cluster of 20-25 boxes.
|
# ? Feb 5, 2009 04:30 |
|
bmoyles posted:Yeap, the tier 3 was SATA, tier 1 was 15k FC. I've got a Compellent SAN in active/passive, and like it very much. The data progression stuff from Tier1 -> TierX is really nice too.
|
# ? Feb 6, 2009 00:09 |
|
Anyone here dealt with Compellent Storage Center equipment? I'm trying to find out what drawbacks they may have from people who've actually used the stuff.
|
# ? Feb 6, 2009 15:10 |
|
Has anyone played around with Sun's 7000 Storage line? Specifically the 7210? I can get a sweetheart of a deal, but even the best deal is no good if it's not ready yet.
|
# ? Feb 6, 2009 22:41 |
|
Mr. Fossey posted:Has anyone played around with Sun's 7000 Storage line? Specifically the 7210? It's a fantastic tier-3 NAS for the money, but don't try to use it as a SAN yet.
|
# ? Feb 6, 2009 23:37 |
|
Misogynist posted:It's a fantastic tier-3 NAS for the money but don't try to use it as a SAN yet. We are thinking of using it primarily for 5-6TB over CIFS, and possibly a handful of VMs over iSCSI. The most intense would be a 80 user exchange VM. Is the SAN piece something that will come into its own as the software matures, or are there hardware or architecture inadequacies?
|
# ? Feb 9, 2009 17:25 |
|
Mr. Fossey posted:We are thinking of using it primarily for 5-6TB over CIFS, and possibly a handful of VMs over iSCSI. The most intense would be a 80 user exchange VM. Is the SAN piece something that will come into its own as the software matures, or are there hardware or architecture inadequacies?
|
# ? Feb 9, 2009 17:30 |
|
Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. It's configured with 16 x 300GB 15k disks, active/backup controllers, and all 6 GigE ports connected to a pair of 3750s, using jumbo frames and receive flow control as well. I'm achieving 200 MB/s writes with ease and it barely sweats under mixed random reads/writes. This shelf as configured was 40k: not the cheapest thing out there, but on par with 15k SAS. The equivalent NetApp or EMC solution would have been double considering all their ridiculous licensing costs. Ohh you want iSCSI, caa-ching.
|
# ? Feb 11, 2009 02:40 |
|
brent78 posted:Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. EqualLogic is not bad at all performance-wise. Management is straightforward, support is good, and the hardware is pretty neat. However, I must say that I like LeftHand more, mainly for the flexibility of its software. Also, to be fair to NetApp (less so EMC), you will see pricing converge as you "fill up" on nodes. With NetApp and EMC (and Hitachi, HP EVA, etc.) you pay a lot more up front, but once you start scaling up, pricing is going to be much closer (if still more) than EqualLogic (and LeftHand). So once you compare, say, a NetApp 3160 with a whole bunch of shelves against a similarly large EqualLogic deployment, prices are much closer than you'd think at the start. There are other advantages to EqualLogic (LeftHand too) compared to traditional SANs, though.
|
# ? Feb 11, 2009 05:46 |
|
brent78 posted:Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. Chiming in as well that you should give LeftHand a try. We just purchased it and haven't regretted it yet.
|
# ? Feb 11, 2009 10:54 |
|
Is it possible to buy a NAS or even a SAN bare board? I have a dead Surestore tape drive and two spare SATA disks and would like to combine the two into an external storage box. If it was just one drive, that would be easy, but two drives is causing problems. SATA 'hubs' are silly money at the moment, so the only other option I can think of is to get two SATA > USB converters AND a small USB hub and stick the lot in the case: expensive and not very elegant. Any thoughts, people?
|
# ? Feb 11, 2009 12:54 |
|
spiny posted:Is it possible to buy a NAS or even a SAN bare board? I have a dead Surestore tape drive and two spare SATA disks and would like to combine the two into an external storage box. Check out SANmelody from DataCore. Not exactly what you want, but it might still fit what you need.
|
# ? Feb 11, 2009 14:37 |
|
Intrepid00 posted:Chiming in as well that you should give LeftHand a try. We just purchased it and haven't regretted it yet. Edit: What's a ballpark figure for a fully populated SAS LeftHand solution? brent78 fucked around with this message at 00:27 on Feb 12, 2009 |
# ? Feb 11, 2009 18:52 |
|
mkosmo posted:Hey Storage Gurus, I have a question for you if you will permit: Can you give us an idea of the environment being worked in? Give as much detail as possible if you can: NAS or SAN, etc.
|
# ? Feb 11, 2009 21:41 |
|
Rhymenoserous posted:Can you give us an idea of the environment being worked in? Give as much detail as possible if you can. Nas or San et al et al. also how many disks, what type, what raid config, etc.
|
# ? Feb 11, 2009 22:15 |
|
brent78 posted:Edit: What's a ballpark figure for a fully populated SAS LeftHand solution? Take whatever the hardware costs and add like another 10-20. This is very rough; the other guy who had a lab with clusters of them can probably give a much better figure. Who's trying to push you to the VSA, LeftHand or the reseller? They just literally came out with a G2 box for one of their NSMs.
|
# ? Feb 12, 2009 02:30 |
|
Kind of a corner case question... pretend for a moment you've been stuck with a pretty decent SAN. We're talking RAID10 across 40 15k spindles and 2GB of write cache (mirrored, 4GB raw), plenty of raw IOPS horsepower. But you need NAS: your application is specifically designed around a shared filesystem (GFS), and changing that would require lots of rewrite work. GFS, for various reasons, is not an option going forward. So it's NFS or something more exotic, and exotic makes me angry. What product do you shim in between the servers and the SAN to translate NFS into SCSI? Preferably under 30k with 4hr support; a failover pair would be nice too. Now, I know about the obvious "pair of RHEL boxes active/passive'ing a GFS volume", but I also want to evaluate my alternatives. Extra special bonus points if it can do snapshots and replication. Does NetApp make a "gateway" model this cheap? The ideal product would be two 1U boxes running some embedded NAS software on an SSD disk, with ethernet and fibre channel ports, all manageable through a web interface with *very* good performance analysis options. Can you tell I wish Sun would sell a 7001 gateway-only product real bad?
|
# ? Feb 14, 2009 23:39 |
|
You can in fact use a NetApp as a gateway in front of whoever. http://www.netapp.com/us/products/storage-systems/v3000/ http://www.netapp.com/us/products/storage-systems/v3100/ http://www.netapp.com/us/products/storage-systems/v6000/
|
# ? Feb 15, 2009 22:45 |
|
1000101 posted:You can in fact use a NetApp as a gateway in front of whoever. How well do these V-filers work? Haven't tried them yet and we were thinking of trying to front some EMC and Hitachi storage with it.
|
# ? Feb 16, 2009 16:47 |
|
oblomov posted:How well do these V-filers work? Haven't tried them yet and we were thinking of trying to front some EMC and Hitachi storage with it. Just FYI, this would be unsupported by EMC.
|
# ? Feb 16, 2009 18:35 |
|
oblomov posted:How well do these V-filers work? Haven't tried them yet and we were thinking of trying to front some EMC and Hitachi storage with it. I have a customer that's front-ending HDS USPs with it and he is pretty happy about it. That was actually my first and only experience with it: a series of AMS frontended by an HDS USP, which in turn has the NetApp in front of it. They're using iSCSI for their ESX project. Two of my colleagues at work seem to think pretty highly of it too.
|
# ? Feb 16, 2009 18:57 |
|
Yea, I stumbled across the V3020 the other day and it seemed perfect, until my SAN vendor said it'd be unsupported on both sides. Right now I'm looking at Exanet; anyone got any opinions?
|
# ? Feb 18, 2009 01:32 |
|
I think unless you put a Celerra in front of your EMC, EMC won't support you, period. Another option is OnStor, though.
|
# ? Feb 18, 2009 01:57 |
|
Performance question. I've got an ESXi server with a VMFS LUN on our NetApp FAS2020. I need to create a file server VM which needs to serve up two shares, 500G each. I can't cram all of this inside the VMFS LUN because the A-SIS engine on the FAS2020 won't run against a volume larger than 500G, so I'm stuck separating at least the shares out in some way. Will I see any performance benefit by creating these as VMDKs in additional VMFS LUNs, or by just hooking the Server 2008 VM directly (iSCSI) to the LUNs and letting it format them with NTFS? What's best practice here? Thanks storage goons.
|
# ? Mar 3, 2009 17:21 |
|
Mierdaan posted:Performance question. I have mine set up with the first VMDK as the boot and OS drive, then an RDM to the LUN from the host. I think this performs better than the iSCSI initiator pulling from the guest, but I don't have metrics to prove it.
|
# ? Mar 3, 2009 17:26 |
|
Thanks. I found this study on VMware's website that seems to indicate it doesn't make too much of a difference, and honestly this isn't a high IO file server. I'm probably worrying too much.
|
# ? Mar 3, 2009 18:20 |
|
I'm just gonna put this out there for anyone looking for cheap SAN stuff: You can get an HP Enterprise Virtual Array 4400 dual controller with 12 x 400GB 10k FC drives and 5TB of licensing for less than $12k. The part number is AJ813A. Need more space? Order a second one and use just the shelf, then keep a spare set of controllers. You can get ~38TB for less than $96k this way. The only things you need to add are SFPs and switches. Nomex fucked around with this message at 18:20 on Mar 5, 2009 |
# ? Mar 5, 2009 18:16 |
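Nomex's scale-out math above checks out. Here's a quick sketch using only the numbers from the post (capacity here is raw, before RAID overhead, hot spares, or formatting):

```python
# Sanity-check the AJ813A scale-out arithmetic from the post above.
# Prices and capacities come straight from the post; raw capacity only.

SHELF_PRICE = 12_000     # USD per AJ813A bundle, as quoted
DRIVES_PER_SHELF = 12
DRIVE_GB = 400

def scale_out(shelves: int) -> tuple[float, int]:
    """Return (raw TB, total USD) for the given number of bundles."""
    raw_tb = shelves * DRIVES_PER_SHELF * DRIVE_GB / 1000
    return raw_tb, shelves * SHELF_PRICE

tb, cost = scale_out(8)
print(f"8 bundles: {tb:.1f} TB raw for ${cost:,}")  # ~38.4 TB for $96,000
```

So the "~38TB for less than $96k" figure is 8 bundles at list, before SFPs, switches, and whatever usable capacity you lose to your RAID layout.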
|
What kind of performance hit will deduplication incur? Say I have 26 servers, A through Z. They all have 72G drives now, but they use ~10GB on each of them, and assume that ~3GB of that is exactly the same base OS image. Deduplication will obviously save us a lot of space. I've seen the NetApp demo videos and it sounds awesome. But people are now telling me that performance will suffer. Still others say that all your deduped blocks will probably be sitting in cache or on SSD anyway, so performance actually increases. I can see both sides of it: if I am just reading the same block all the time (say, a shared object in Linux or a DLL in Windows), then if that block is deduped I'll be winning. But let's say I modify that block; then the storage array has to pull that block out and start keeping a second copy of it, and managing that slows the array down. Thoughts?
|
# ? Mar 6, 2009 02:28 |
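The read-sharing versus write-splitting behavior the question above describes can be illustrated with a toy content-addressed block store. To be clear, this is purely a sketch of the general copy-on-write dedup idea, not how NetApp's A-SIS actually works:

```python
import hashlib

# Toy dedup store: identical blocks are kept once, keyed by content hash.
# A write to a shared block remaps the volume to a new block instead of
# mutating the shared copy (copy-on-write), so other volumes are unaffected.

class DedupStore:
    def __init__(self):
        self.blocks = {}   # content hash -> bytes (one physical copy each)
        self.volumes = {}  # volume name -> list of block hashes

    def _put(self, data: bytes) -> str:
        h = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(h, data)  # stored once, however many refs
        return h

    def write(self, vol: str, idx: int, data: bytes):
        hashes = self.volumes.setdefault(vol, [])
        while len(hashes) <= idx:               # zero-fill up to idx
            hashes.append(self._put(b"\0"))
        hashes[idx] = self._put(data)           # remap, never mutate shared

    def read(self, vol: str, idx: int) -> bytes:
        return self.blocks[self.volumes[vol][idx]]

store = DedupStore()
for vol in ("serverA", "serverB"):
    store.write(vol, 0, b"base OS image block")  # dedups to one physical copy
store.write("serverB", 0, b"patched block")      # splits serverB off
```

After the first two writes, both servers point at the same physical block (the 3GB-of-identical-OS-image case); the modifying write remaps serverB to a new block while serverA keeps reading the original, which is exactly the "pull the block out and keep a second copy" bookkeeping the post worries about.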
|
|
Update since we put LeftHand boxes in production... they are awesome. Users are starting to notice the increased performance as crap is moved off the DAS.
|
# ? Mar 6, 2009 07:40 |