|
rage-saq posted:Just get another LeftHand node, that way you can use your existing LeftHand stuff for secondary storage / backups / etc. It's a good platform and if your infrastructure is right it will perform more than adequately for the kind of environment you are talking about. What their marketing doesn't tell you is that you can't "just add another NSM to the management group" to increase capacity/IOPS when they discontinue the models you purchased a couple of months prior and you can't source another one anywhere. Of course you can purchase one of the new models, but due to the way their network RAID works it'll perform no better than the old modules and won't take advantage of any extra drive space, despite being almost twice the price. Their "thanks for your money, now gently caress off" sales approach didn't impress me too much. I'll no doubt get a quote for their new machines for the sake of completeness, but they're as far removed from my first choice as they can be. I did take a look at the Sun 7xxx machines and have a local guy calling me back this afternoon, so we'll see what happens there. Also, EMC's AX4 quote was half the price of the NX4 (NFS would be nice, but not $20K nice). NetApp still don't seem too bothered about taking our money; maybe their phones are broken.
|
# ? Mar 3, 2010 18:49 |
|
Insane Clown Pussy posted:So, calling around trying to find a replacement for our old Lefthand boxes. I need something in the 2-4 TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (files, exchange, sql, sharepoint etc) I spoke with someone from EMC last week and just got quoted for a dual blade NX4 with 15 x 450GB SAS drives and CIFS/NFS/iSCSI for $40+K. I have a setup similar to your description running on Dell MD3000i's. Dual controller, 15x450GB SAS 15k RPM drives. Was a bit over $13k. The Sun 7110's do look fairly nice; I'm not very familiar with their kit. Are they basically PCs, or do they have any sort of dual-controller redundancy?
|
# ? Mar 3, 2010 20:27 |
|
AFAIK all of the 7000 series use ZFS, which, if you're as big on ZFS as I am, is awesome.
|
# ? Mar 3, 2010 20:58 |
|
Nukelear v.2 posted:I have a setup similar to your description running on Dell MD3000i's. Dual controller, 15x450GB SAS 15k RPM drives. Was a bit over $13k.
|
# ? Mar 3, 2010 23:49 |
|
For "cheap" SATA FC storage we recently got two Hitachi AMS2100 systems. Besides the wonky HDS style of management software, I like the AMS2100 series fine. We will get a free HDP (thin provisioning) license next time we buy a new batch of drives, so performance should (might) be the same as a really wide stripe once we have the boxes filled with 2TB spindles. Sun quoted us a higher price for ONE 7000-series box with 24 spindles than what we paid for two AMS2100 systems with active-active controllers and 30 1TB spindles in each box. HDS is looking better and better these days. HP can die in a fire; buying anything from them is a hassle and the sales people are all retards.
|
# ? Mar 5, 2010 07:31 |
|
This might not be the right crowd, but any idea how something like a NetApp box has a pool of drives connected to both head units? They can also assign which drives go to which head in software. The only solutions I can come up with involve a single point of failure (e.g., using a controller and serving up each drive as its own LUN to the heads).
|
# ? Mar 6, 2010 09:18 |
|
You're asking how it works? Basically there are two FCAL loops, with a controller at the 'head' of each loop. In a clustered configuration each disk actually has two different addresses, so each head can access it if the other goes down. Picture a disk with two connectors on the back, connected to two different controllers. As long as each controller agrees on who is doing the work, everything is fine because they won't step on each other. Imagine a street of 6 houses. They are numbered 1, 2, 3, 4, 5, and 6. They are served by one mailman and everything is happy. Now, you could add a second 'label' to the houses. Label them A, B, C, D, E, and F. House 1 could also be called House A, House E could also be called House 5. To split the work the postal service adds a second mailperson; Alice serves houses 1, 2, and 3 while Bob serves houses D, E, and F. If either Alice or Bob were sick (i.e. a storage controller failed), the other would simply pick up the rest of the mail route. Check out http://www.docstoc.com/docs/23803079/Netapp-basic-concepts-quickstart-guide, particularly from page 40 on, for more details (and without ridiculous analogies).
|
# ? Mar 6, 2010 10:19 |
|
complex posted:You're asking how it works? Basically there are two FCAL loops, with a controller at the 'head' of each loop. In a clustered configuration each disk actually has two different addresses, so each head could access it if the other went down. Picture a disk that has two connectors on the back of it, connected to two different controllers. As long as each controller agrees on who is doing the work, everything is fine because they won't step on each other. That's essentially what I figured they were doing. Multipathing, but instead running the second path to the other controller. It seemed to me, though, that whatever they were using for that became a new single point of failure.
|
# ? Mar 6, 2010 19:26 |
|
Serfer posted:This might not be the right crowd, but any idea how something like a Netapp box has a pool of drives which are connected to the two head units? They can assign which drives go to which in software as well. The only solutions I can come up with involve a single point of failure (eg, using a controller and serving up the drives each as their own LUN to the heads). Via magic, faeries, pixie dust, and most importantly lots of money. This blog has a really great picture to illustrate the setup: http://netapp-blog.blogspot.com/2009/08/netapp-activeactive-vs-activepassive.html The closest thing to a single point of failure is the cluster interconnect for NVRAM mirroring. However, if the interconnect fails, your cluster continues to serve data from its current, non-fault-tolerant state, but will not transition to a new state. This means if filer A is currently active for A and B, it will continue to do so upon cluster link failure. If filer A and filer B are both serving their own data, they will never fail over to the other automatically. The filers maintain some state information in a few reserved blocks on the disks themselves, for filer A and filer B respectively, so they can make educated guesses about the other filer's state. There are VERY dire warnings about, and consequences to, acting upon a filer when it cannot sense its neighbor. Never disagree with what a RAID setup thinks about your array without very good reason. (This is just unsolicited advice. It's the most concise way I train people in using storage systems, as it is what every action boils down to on a fileserver.)
|
# ? Mar 6, 2010 20:25 |
|
H110Hawk posted:Via magic, faeries, pixie dust, and most importantly lots of money.
|
# ? Mar 6, 2010 22:30 |
|
Serfer posted:This might not be the right crowd, but any idea how something like a Netapp box has a pool of drives which are connected to the two head units? They can assign which drives go to which in software as well. The only solutions I can come up with involve a single point of failure (eg, using a controller and serving up the drives each as their own LUN to the heads). If you're asking what I think you're asking, it works like this: with ONTAP 7 and above, the disks contain metadata written at the RAID level which assigns them to their respective controllers. This is called software disk ownership. Generally, in NetApp clusters described as "active/active", each node owns some portion of the disks. During normal operation, I/O is written to NVRAM and shared with the other node through the cluster interconnect. Once a node's NVRAM is full, the data is written out to the disks it owns. In the event of a failover, the node which has "taken over" its partner's disks can continue serving and writing data to them, all thanks to the magic of software disk ownership. In the old days of hardware disk ownership, each node in the cluster owned the disks plugged into a specific HBA port/loop on the filer. With software disk ownership it doesn't matter what loop is plugged in where, as ownership depends only on the RAID metadata written to the disk. I've seen systems where disk ownership is scattered all over the stacks of disk trays.
|
# ? Mar 7, 2010 20:54 |
|
H110Hawk posted:Via magic, faeries, pixie dust, and most importantly lots of money. Just wanted to add that everything in NetApp's current product line except the 6000 series has its cluster interconnect on a circuit-board backplane.
|
# ? Mar 7, 2010 20:58 |
|
Cultural Imperial posted:If you're asking what I think you're asking, it works like this; with ONTAP 7 and above, the disks contain metadata written at the RAID level which assigns the disks to their respective controllers. This is called software disk ownership.
|
# ? Mar 7, 2010 21:31 |
|
Serfer posted:Well, I was trying to figure out how they had drives connected to both systems really. I wanted to build something similar and use something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own.
|
# ? Mar 7, 2010 21:34 |
|
Serfer posted:Well, I was trying to figure out how they had drives connected to both systems really. I wanted to build something similar and use something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own. Hmm sorry I can't help you there.
|
# ? Mar 7, 2010 21:53 |
|
lilbean posted:You may be better off looking at just building a couple of systems and using something like DRBD to mirror a slab of disks.
|
# ? Mar 7, 2010 21:57 |
|
Serfer posted:Yeah, that's pretty much where it's going. 100% overhead. But still cheaper that way than buying Netapp or EMC. Which reminds me, despite calling a half dozen times, and having a conference with Sun, they never sent me a quote despite promising that it would get there in x<7 days every time I called. I can get a quote from our Sun vendor in a few hours. You've gotta find yourself a local vendor. They're always cheaper than anything we can get directly from Sun or CDWG, although we save money being an EDU.
|
# ? Mar 7, 2010 21:59 |
|
FISHMANPET posted:I can get a quote from our Sun vendor in a few hours. You've gotta find yourself a local vendor. They're always cheaper than anything we can get directly from Sun or CDWG, although we save money being an EDU.
|
# ? Mar 7, 2010 22:09 |
|
Serfer posted:Yeah, that's pretty much where it's going. 100% overhead. But still cheaper that way than buying Netapp or EMC. Which reminds me, despite calling a half dozen times, and having a conference with Sun, they never sent me a quote despite promising that it would get there in x<7 days every time I called. I would imagine Sun staff are busy trying to find new jobs right now.
|
# ? Mar 7, 2010 22:29 |
|
rage-saq posted:Just get another LeftHand node, that way you can use your existing LeftHand stuff for secondary storage / backups / etc. It's a good platform and if your infrastructure is right it will perform more than adequately for the kind of environment you are talking about. This is what I would probably do. The new G2's just came out, and we got some really impressive pricing on them.
|
# ? Mar 8, 2010 03:45 |
|
Not sure if this is the thread, but I want to talk Linux md for a bit. I've worked with ZFS and VxVM and with high-end enterprise arrays (DMX, etc.), but not much with Linux md on cheap 1U/2Us with SATA disks. My understanding is that it's commonly recommended to disable the write-back cache on SATA disks to protect against corruption and/or data loss in the event of a power failure or crash, when using software RAID without a battery-backed RAID controller. I understand that this risk exists even on a single, non-RAID drive, but that it's multiplied in software RAID configurations, especially RAID5/RAID6. Here are some things I am not 100% clear on:

RAID1 / RAID10: Does RAID1 or RAID10 pose any increased risk of data corruption or loss due to power failure and pending writes (writes ACKed and cached by the disks, but uncommitted)? If so, how does this work?

Barriers: Does using ext3 or XFS barriers afford the same amount of protection as disabling the write cache entirely (again, say, for RAID10)? I also understand that barriers do not work with Linux md RAID5/6; what about RAID10?

Disabling the disks' write cache: I know how to do this using hdparm, but I also know that it is not a persistent change. If the machine reboots, the disks come back up with the write cache re-enabled. Worse, if there's a disk reset, they come back up with the write cache re-enabled (making the idea of doing it with a startup script inadequate). RHEL4 used to have /etc/sysconfig/harddisks, but this no longer exists in RHEL5. What is the current method of persistently disabling the write cache on SATA disks? Is there a kernel option?
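For what it's worth, the standard trick I've seen for making the hdparm change stick on RHEL5-era boxes is a udev rule, since udev fires again after a device reset as well as at boot. A sketch; the rule filename, the KERNEL match, and the hdparm path are assumptions to check against your own distro:

```shell
# Non-persistent, for a quick test (placeholder device name):
#   hdparm -W 0 /dev/sda
#
# Persistent version: a udev rule re-runs hdparm whenever the kernel
# (re)adds a whole SATA disk, covering both reboots and device resets.
# This normally lives in /etc/udev/rules.d/; written locally here for show.
RULES=60-disable-write-cache.rules
cat > "$RULES" <<'EOF'
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/sbin/hdparm -W 0 /dev/%k"
EOF
cat "$RULES"
```

After copying the file into /etc/udev/rules.d/ you can re-trigger udev instead of rebooting, then confirm with hdparm -W /dev/sda that the cache reads as off.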
|
# ? Mar 10, 2010 18:02 |
|
I would just mitigate most of the risk by using a UPS and - if you can afford it - systems with redundant power. I mean a motherboard can still blow and take down the whole system immediately, but most drives follow the flush and sync commands enough to not worry that much.
|
# ? Mar 10, 2010 19:02 |
|
another loving random filesystem-goes-read-only-because-you-touched-your-san waste of an afternoon. I have four identically configured servers, each with a dual-port HBA, each port going to a separate FC switch, each FC switch linking to a separate controller on the storage array. All four are configured identically, each with its own individual 100GB LUN, using dm-multipath on CentOS 5.3. I created two new LUNs and assigned them to two other hosts completely unrelated to the four mentioned above. *ONE* of the four blades immediately detects a path failure, then it recovers, then detects a path failure on the other link, then it recovers, then detects a failure on the first path again, and says it recovers, but somewhere in here ext3 flips its poo poo and remounts the filesystem read-only. Now, if I try to remount it, it says it can't because the block device is write-protected. However, multipath -ll says it's [rw].
|
# ? Mar 10, 2010 19:51 |
|
lilbean posted:I would just mitigate most of the risk by using a UPS and - if you can afford it - systems with redundant power. I mean a motherboard can still blow and take down the whole system immediately, but most drives follow the flush and sync commands enough to not worry that much. The colo these servers are in has issues often enough that that alone isn't enough for me. I also want to understand what's what, even if I had bulletproof systems and datacenters.
|
# ? Mar 10, 2010 22:21 |
|
Did a search and saw people knocking Fujitsu drives, but nothing on their storage hardware. Anybody have any experience with the Eternus DX60/DX80 lines? The spec sheet makes these units look pretty awesome for the range... reviews that don't look like PR campaigns are hard to come by, though.
|
# ? Mar 10, 2010 22:26 |
|
StabbinHobo posted:another loving random filesystem-goes-read-only-because-you-touched-your-san waste of an afternoon. I had this *exact* same problem when my colocation provider plugged both power supplies of one of my servers into the same (overloaded) PDU. When the PDU finally tripped, ext3 lost its mind and everything was read-only. However, it also incorrectly claimed that it was r/w, and could not be remounted rw. I had to reboot single-user, then run fsck against the partition (2TB!), and finally was able to mount it again.
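For anyone who lands here with the same symptom: once ext3 has flipped read-only, a plain remount won't clear the error state; you have to unmount (or go single-user), fsck, then mount again. The device and mount-point names below are placeholders, and the second half just rehearses the fsck step on a scratch image file, which needs no root:

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # fsck/mke2fs often live here

# On the real machine (placeholder names):
#   umount /data                       # or boot single-user for a system fs
#   fsck.ext3 -f -y /dev/mapper/mpath0
#   mount /dev/mapper/mpath0 /data

# Harmless rehearsal against an image file instead of a block device:
IMG=demo.ext2
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null
mke2fs -F -q "$IMG"
fsck.ext2 -f -y "$IMG" && echo CLEAN
```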
|
# ? Mar 10, 2010 23:11 |
|
My iSCSI / multipath notes for CentOS 5.4, using an EqualLogic PS5000XV and 6000XV. Hopefully they will be of some help to someone. Configure two NICs on the iSCSI network (for me that was eth2 and eth3), then run discovery:
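The code blocks in this post were lost from the archive; below is a reconstruction of the usual open-iscsi shape of those steps, not the original notes. The iface names and the group IP (10.0.5.10) are placeholders, and the commands are echoed through a dry-run wrapper rather than executed:

```shell
run() { echo "+ $*"; }   # dry-run wrapper; delete the echo to run for real

# Bind one open-iscsi interface to each NIC on the iSCSI network:
for n in 2 3; do
  run iscsiadm -m iface -I ieth$n --op=new
  run iscsiadm -m iface -I ieth$n --op=update -n iface.net_ifacename -v eth$n
done

# Discover targets on the EqualLogic group IP, then log in on both paths:
run iscsiadm -m discovery -t sendtargets -p 10.0.5.10:3260
run iscsiadm -m node --login
```

With both sessions logged in, dm-multipath should then show two paths per LUN in multipath -ll.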
brent78 fucked around with this message at 23:49 on Mar 10, 2010 |
# ? Mar 10, 2010 23:41 |
|
Okay, so take two of this: I really got my budget today for a new NAS/SAN. I have $30k CDN to spend on a device that needs to do the following (in no particular order):

- Minimum 2TB
- NFS, maybe iSCSI
- Data de-duplication, and support for backing up deduped data (NDMP?)
- Snapshots
- Multi-path I/O (nice to have, but not critical)
- Expandability, both additional I/O and additional disks
- AD/LDAP integration for user permissions

Planned usage for the device is to host, via NFS or iSCSI, several virtual machines running on ESXi and about 1.5TB of data via NFS for our design department (OS X support). Future plans include expansion to hold video-editing data, with dedicated connectivity to the video editors (additional network runs) and expanded storage to accommodate this. Additional virtual machines are also possible. At the current time there are no plans to put any major database on the SAN; a few virtual machines might have small databases (SQL Express and the like), but they aren't heavily loaded. The largest virtual machine is an Exchange server with about 50 users (SBS 2003). I'm completely supplier-neutral; however, the price will need to include any installation costs (running additional power, a new rack for it at least, I think) and at least one proper network router (core switch), because all I have right now are some Dell 2x24s. I'm in Toronto, so vendor/reseller recommendations are also accepted. If you think I missed anything in the feature set that is a must-have or would benefit me, please mention it. This will be my first SAN purchase, so I have lots to learn.
|
# ? Mar 19, 2010 15:56 |
|
EoRaptor posted:Okay, so take two of this: poo poo, I think you might be so close to a low end thumper with that budget, and not that I know anything about enterprise storage, I think it would do most of what you need. Bonus points for getting 24T for your budget. Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.
|
# ? Mar 19, 2010 18:43 |
|
FISHMANPET posted:poo poo, I think you might be so close to a low end thumper with that budget, and not that I know anything about enterprise storage, I think it would do most of what you need.
|
# ? Mar 19, 2010 19:00 |
|
FISHMANPET posted:Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.
|
# ? Mar 19, 2010 19:27 |
|
Misogynist posted:That's the retail price, but you could probably get Sun to cut at least 30% off of that on a quote if they like you. But honestly, a Thumper seems like mega overkill for a requirement of only 2 TB. I'd consider an X4275 instead. I have absolutely no problem with more space, as long as the feature set is met within budget, which the Sun boxes all seem to do. Which leads to two questions: I know ZFS does de-duplication, but can the de-duped data be backed up, or am I still working with the full set? And how is the management of the boxes? I'm okay with command-line stuff, but other people will need to pick up slack from me if I'm not around, so a management interface that's not horrible is a must.
|
# ? Mar 19, 2010 22:15 |
|
EoRaptor posted:I have absolutely no problem with more space, as long as the feature set is met within budget, which the sun boxes all seem to do. For management, you are stuck with the command line for everything. If you want a pretty web interface with good analytics, then check out the Sun Storage 7000 systems. I think I read somewhere that they did add de-duplication.
|
# ? Mar 19, 2010 23:14 |
|
Bluecobra posted:I would assume that a backup program like NetBackup would ignore the ZFS de-duplication and back up all the files. With ZFS you can create a snapshot, and then use the "zfs send" command to send that snapshot to another host (more info here). It looks like they added a dedup option to zfs send, though this is pretty new. In fact, I am pretty sure that you will need to be on the developer build of OpenSolaris to even get ZFS de-duplication, so if this is for your enterprise, I would err on the side of caution. The 7000 series is just ZFS + Fishworks, but Fishworks does add some cost. I would say give Sun a serious look. The X4275 is a 2U box that holds 12 3.5" drives. Get that, throw Solaris on it with some drives, and go nuts. There should be something of equivalent size in your budget in the 7000 series.
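The snapshot-and-send flow described above looks roughly like this. The pool, dataset, and host names are invented, and since it needs a real pool the commands are echoed through a dry-run wrapper; zfs send -D is the dedup option mentioned, present only in builds new enough to have it:

```shell
run() { echo "+ $*"; }   # dry run; drop the echo on a box with a real pool

# Take a point-in-time snapshot of the dataset:
run zfs snapshot tank/vmstore@nightly

# Replicate it to another host over ssh:
run 'zfs send tank/vmstore@nightly | ssh backuphost zfs receive backup/vmstore'

# Same, with the send stream deduplicated on the wire:
run 'zfs send -D tank/vmstore@nightly | ssh backuphost zfs receive backup/vmstore'
```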
|
# ? Mar 19, 2010 23:28 |
|
If HA is not a requirement, a 7100-series box from Sun with 2TB raw can be had for around $10k.
|
# ? Mar 19, 2010 23:49 |
|
EoRaptor posted:Minimum 2TB The EMC AX4-5i is down in your price range, but, AFAIK, (still) doesn't have thin provisioning or deduplication. As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK.
|
# ? Mar 20, 2010 00:00 |
|
EnergizerFellow posted:As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK. That is true. Solaris 10U8 (the most recent) has ZFS version 15 as the max. My OpenSolaris box running the latest devel build goes up to 22. Dedup is in 21. The latest release version of OpenSolaris comes out in a few days and will have dedup, so you can do that if you're comfortable, but otherwise you won't have dedup.
|
# ? Mar 20, 2010 00:08 |
|
EnergizerFellow posted:As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK.
|
# ? Mar 20, 2010 00:15 |
|
adorai posted:It will be, and it will be an in-place upgrade. The NetApp isn't going to cut it; NFS is a $5k add-on on a 2050, and I can't imagine it's much cheaper on a 2040. Yeah, I just upgraded my Thumper from v10 to v15 (somehow Jumpstart installs ZFS as v10 instead of the latest).
|
# ? Mar 20, 2010 00:18 |
|
|
Price of entry for clustering Sun storage hardware?
|
# ? Mar 20, 2010 02:54 |