|
echo465 posted:My google skills must be failing me, because most of the whining about the Promise VTrak line that I'm finding is from Mac users angry that Apple discontinued the Xserve RAID and told everyone to buy Promise instead. I'm interested in hearing about it if this is a widespread problem.
|
# ¿ Sep 8, 2008 16:59 |
|
|
|
H110Hawk posted:I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours; it just needs to work. You could also use Solaris 10 with Samba if OpenSolaris makes you uncomfortable, but OpenSolaris is just so much goddamn nicer, especially with the built-in CIFS server, and just as stable. If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of a rear end in a top hat. Vulture Culture fucked around with this message at 00:00 on Sep 18, 2008 |
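For what it's worth, the mechanics on OpenSolaris come down to a couple of commands once the hardware is racked. A minimal sketch, wrapped in Python just to keep it self-contained — the pool name, dataset name, and disk device names here are placeholders, not anything from an actual build:

```python
# Rough sketch: build a ZFS pool on OpenSolaris and share it over the
# in-kernel CIFS server. Pool, dataset, and disk names are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One six-disk raidz2 vdev; repeat the pattern for however many disks you have.
disks = ["c1t0d0", "c1t1d0", "c1t2d0", "c1t3d0", "c1t4d0", "c1t5d0"]
run(["zpool", "create", "tank", "raidz2"] + disks)

# Carve out a dataset and let the built-in CIFS server publish it.
run(["zfs", "create", "tank/bulk"])
run(["zfs", "set", "sharesmb=name=bulk", "tank/bulk"])

# Sanity checks.
run(["zpool", "status", "tank"])
run(["zfs", "get", "sharesmb", "tank/bulk"])
```

You also need the CIFS service itself running (svcadm enable -r smb/server) and your workgroup or AD membership sorted out, but that's essentially the whole thing.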
# ¿ Sep 17, 2008 23:54 |
|
M@ posted:Wanted to chime in on this thread and say that if anyone needs any used NAS/SAN hardware, I stock this stuff in my warehouse. Mostly NetApp, low to mid-high-end equipment (960s, 980s, 3020s, and I've got one 6080), plus HP and Sun, although we do get the occasional EMC system in. I stock used disk shelves and individual drives too, so if you're looking for a cheap spare or something, let me know.
|
# ¿ Sep 18, 2008 20:26 |
|
Mierdaan posted:(if low-end)
|
# ¿ Sep 18, 2008 21:13 |
|
Mierdaan posted:What do you mean by not spectacular? I mean, the most IOPS-intensive thing we're throwing onto it is an Exchange 2007 install for 250-ish users pushing about 600 IOPS. Are we going to be unhappy?
|
# ¿ Sep 19, 2008 21:19 |
|
tinabeatr posted:Does anyone have any experience with BlueArc? The way that some of their application-level stuff works makes me a bit wary. Their call-home application works by parsing logs sent via email, rather than dealing with SNMP traps or anything proper. This is conspicuous, and leads me to believe that there might not be great cooperation between some of their engineering groups. Still, it works without a problem, so I guess I shouldn't really care how they do things until stuff starts not working right. Short of that, though, they're a pretty decent company. The hardware is good and does what it's supposed to. I don't think we've had a single problem with the fibre or SATA hardware outside of the high-density SA-48 enclosures, which seem to experience perpetual controller failures. They've never screwed us, they're hard-working, and we haven't had any stability or performance issues with it in spite of hitting it seriously hard with 504 nodes at once over NFS. I have had major issues on critical systems with companies like IBM and Dell that BlueArc hasn't given me yet. For low-cost, high-performance storage for an HPC environment or similar scratch space, I'd recommend them. I don't know that I trust them enough for anything truly "enterprise" grade, though. Vulture Culture fucked around with this message at 01:52 on Jan 20, 2009 |
# ¿ Jan 19, 2009 23:47 |
|
Jadus posted:In general, what are people doing to back up these large multi-TB systems? There are a lot of different options available to you, and they all depend on what you need and what you're willing to pay. If you don't need file-level granularity, lots of SAN vendors have flash copy capability and the ability to plug into enterprise backup products like Tivoli Storage Manager, CommVault or NetBackup, which will perform differential block-level backups of the SAN volumes.

The advantage of block-level backups is that you don't waste any time analyzing files to see what changed, because the SAN keeps an inventory of dirty blocks on the volume. This is generally a good idea for volumes with a ton of small files (larger files back up more easily with a file-based backup product), and it improves both your RTO and RPO really substantially for full-on disaster recovery scenarios. The disadvantage is that you lose all of your granularity to restore specific data from the backup set.

So, really, when you plan this out, you need to look at your most likely failure scenarios, what they'll cost you, and what you need to restore first. If your system management tools are good enough, the ability to do a bare-metal restore really doesn't matter, because you can restore configurations just as fast from your tools while your backup product works on important data. This is the case with my Linux systems managed by Puppet. Keep in mind that even if you have a huge volume of files on a mission-critical storage server, people don't need 100% of the files restored immediately. When it comes to file storage, users need what they're working on, which you can restore first, and then you can restore everything else later. Contrary to popular belief, RTO does not always apply to full volumes. If you can get the users satisfied with what's restored first within a quick enough timeframe, they're not going to care much about the rest of the restore time (within reason).
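To make the dirty-block point above concrete, here's a toy model — not any vendor's actual implementation, just an illustration of why block-level differentials skip the file-scanning step entirely:

```python
# Toy model of block-level differential backup: the array marks blocks dirty
# as they're written, so a backup pass only reads flagged blocks instead of
# walking millions of files to figure out what changed.
class Volume:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00"] * num_blocks
        self.dirty = set()          # block numbers written since last backup

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)

    def differential_backup(self):
        # Cost is proportional to changed data, not to file count.
        changed = {n: self.blocks[n] for n in self.dirty}
        self.dirty.clear()
        return changed

vol = Volume(num_blocks=1_000_000)
vol.write(12, b"mail database page")
vol.write(980_004, b"file server extent")
backup = vol.differential_backup()
print(f"backed up {len(backup)} of {len(vol.blocks):,} blocks")
```

The flip side, as above: all you get back is blocks, so restoring one user's spreadsheet means restoring (or mounting) the whole volume somewhere first.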
|
# ¿ Jan 21, 2009 23:41 |
|
H110Hawk posted:From what I hear, and contrary to what their sales guys insisted, BlueArc appears to be doing demo units now, or is it still their "if we think you like it you have to buy it" try-and-buy program?
|
# ¿ Jan 28, 2009 19:56 |
|
skullone posted:You guys are scaring me... I already have a drive with predictive failure on my Sun box. Haven't reported it to Sun yet... but now I'm thinking "this RAID-Z set with hot spares isn't looking as good as RAID-Z2 anymore"
|
# ¿ Jan 29, 2009 05:02 |
|
Mr. Fossey posted:Has anyone played around with Sun's 7000 Storage line? Specifically the 7210?
|
# ¿ Feb 6, 2009 23:37 |
|
Mr. Fossey posted:We are thinking of using it primarily for 5-6TB over CIFS, and possibly a handful of VMs over iSCSI. The most intense would be an 80-user Exchange VM. Is the SAN piece something that will come into its own as the software matures, or are there hardware or architecture inadequacies?
|
# ¿ Feb 9, 2009 17:30 |
|
complex posted:Hmm. We could not get our x4500 anywhere near wirespeed using iSCSI. Wondering now if the x4540 is that much better...
|
# ¿ Mar 9, 2009 16:07 |
|
complex posted:Windows Server, not sure what version, 2003? I'm not a Windows guy.
|
# ¿ Mar 9, 2009 16:19 |
|
Syano posted:Next, I am having a difficult time wrapping my head around the idea of a snapshot. Is it really as awesome as I am thinking it is? Because I am thinking if I were able to move all my storage into the array then I would be able to use snapshots and eventually replication to replace my current backup solution. Are snapshots really that awesome or do I have an incorrect vision of what they do? It's still RAID. I remember a story here about some guy with a big-rear end BlueArc NAS that was replicating to another head. The firmware hit a bug and imploded the filesystem, including snapshots. It then replicated those writes to the other head, which imploded the filesystem on the replica. This is probably less of a concern when your snapshots happen at the volume level instead of the filesystem level, but there are still plenty of disaster scenarios to consider without even getting into the possibilities of malicious administrators/intruders or natural disasters. You really need to keep offline, offsite backups. Vulture Culture fucked around with this message at 16:17 on Jun 13, 2009 |
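If you want one rung up from snapshots-plus-live-replication, the cheapest step is shipping periodic snapshots to a box the production heads can't write to directly. I'm using ZFS syntax here purely as an illustration (the BlueArc/NetApp equivalents differ), and it's still no substitute for genuinely offline copies of the critical stuff:

```python
# Sketch: ship an incremental snapshot to a separate machine so corruption
# that propagates through live replication doesn't eat your only copy.
# Pool names, hostnames, and the snapshot naming scheme are all placeholders.
import datetime
import subprocess

today = datetime.date.today().isoformat()
prev = "tank/data@2009-06-12"            # yesterday's snapshot (example)
snap = f"tank/data@{today}"

subprocess.run(["zfs", "snapshot", snap], check=True)

# Send only the delta since the previous snapshot; receive it on a box that
# keeps its own history of old snapshots as restore points.
send = subprocess.Popen(["zfs", "send", "-i", prev, snap],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", "backup-host", "zfs", "recv", "backup/data"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```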
# ¿ Jun 13, 2009 15:22 |
|
adorai posted:Can someone tell me how practical it would be to use a Sun 7210 w/ 46 7200rpm disks as the backend for approximately 40 VMware ESX guests? On the one hand, I am very afraid of using 7200rpm disks here, but on the other hand there are 46 of them.
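A quick back-of-the-envelope on that spindle count (rule-of-thumb numbers, not measurements from a 7210):

```python
# Back-of-envelope IOPS budget for a 46-spindle 7200rpm pool.
# ~75 random IOPS per 7200rpm spindle is a common rule of thumb; real numbers
# depend on the workload, the RAID layout, and how much the write cache helps.
spindles = 46
iops_per_spindle = 75

raw_iops = spindles * iops_per_spindle
guests = 40
print(f"pool ceiling: ~{raw_iops} random IOPS")       # ~3450
print(f"per-guest:    ~{raw_iops // guests} IOPS")    # ~86
```

Mirrored vdevs cost you two disk writes per guest write and RAID-Z has its own full-stripe behavior, so treat those figures as a ceiling rather than a promise; whether ~85 IOPS per guest is enough depends entirely on what the guests do.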
|
# ¿ Jun 13, 2009 16:12 |
|
Bluecobra posted:Sun's Amber Road system looks pretty nifty: Vulture Culture fucked around with this message at 16:16 on Jun 16, 2009 |
# ¿ Jun 16, 2009 16:13 |
|
complex posted:You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.
|
# ¿ Jun 16, 2009 20:22 |
|
Maneki Neko posted:By and large though, that's the same general complaint I have about all IBM equipment. Stupid poo poo like 17 different serial numbers being on a piece of equipment and IBM has no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that need to be manually tracked and aren't updated regularly vs. regularly updated packages, etc. I have no idea how IBM stays in business; dealing with them is terrible.
|
# ¿ Jun 20, 2009 22:23 |
|
Bluecobra posted:You can do this if you roll your own with a 3U Supermicro case, 1.5TB drives, a decent Intel motherboard/processor, and OpenSolaris so you can use ZFS. Once you get OpenSolaris installed, it is pretty trivial to make a ZFS pool, and you can do something like a RAIDZ2, which is similar to RAID 6 in redundancy. You can then share out the ZFS pool you just created to Windows hosts with a CIFS share.
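If you're pricing out a build like that, the RAIDZ2 capacity math is worth sanity-checking up front — a rough sketch with made-up drive counts, ignoring ZFS metadata overhead:

```python
# Usable-capacity estimate for a roll-your-own RAIDZ2 box.
# Drive count and size are examples; adjust for the chassis you actually buy.
drives = 16                 # e.g. one vdev filling a 3U chassis
drive_tb = 1.5              # "1.5TB" as the vendor counts it (10**12 bytes)
parity = 2                  # RAIDZ2 survives two failed drives per vdev

usable_tb = (drives - parity) * drive_tb
usable_tib = usable_tb * 1e12 / 2**40       # roughly what the OS reports

print(f"raw:    {drives * drive_tb:.1f} TB")
print(f"usable: {usable_tb:.1f} TB (~{usable_tib:.1f} TiB before overhead)")
```

In practice you'd probably split that many disks into two smaller raidz2 vdevs for better rebuild times and random I/O, which costs you two more parity drives.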
|
# ¿ Jun 25, 2009 23:49 |
|
DDUP made some neat products, but I don't get what all the fuss was about. From the benchmarks I've seen, FalconStor etc. have products that perform an entire order of magnitude better. A corporate strategy that never seemed sure how it felt about post-process deduplication made me think the place was run by Scott McNealy.
|
# ¿ Jul 26, 2009 15:17 |
|
I just moved part of my ESXi development environments off of local storage and onto a 48TB Sun x4500 I had lying around, shared via ZFS+NFS on OpenSolaris 2009.06 over a 10GbE link. I was worried about performance because it's SATA disk, but holy poo poo this thing screams with all those disks. I have never seen a Linux distro install so fast ever in my life. The bottleneck seems to be the 10GbE interface, which apparently maxes out around 6 gig. If I can find some sane way to replicate this to another Thumper, I will be a very, very happy man. Vulture Culture fucked around with this message at 18:16 on Aug 6, 2009 |
# ¿ Aug 6, 2009 18:14 |
|
Bluecobra posted:Are you using these 10GbE cards? They have been working extremely well for us but on some servers we added another card and turned on link aggregation. There is also this new feature in Solaris 10 Update 7 that may or may not be in OpenSolaris: I also don't think our non-HPCC switching could handle it well. We're running some impressive Force10 gear on the HPCC side, but I think we're bottlenecked down to 10gig to any given rack group for our regular network. H110Hawk posted:What zpool configuration are you using? Any specific kernel tweaks?
|
# ¿ Aug 6, 2009 21:52 |
|
Weird Uncle Dave posted:This is probably an invitation to all sorts of weird PMs, but do any of you do SAN consulting and/or sales? Profiling Windows and the various MS apps is pretty easy -- your best bet is to download the 180-day trial of System Center Operations Manager, load it onto a VM, point it at your AD/Exchange/SQL/whatever servers and let it go to town. Within a couple of days you should have most of the relevant performance information you need. Microsoft also has a lot of best practices guides for how to obtain relevant performance information out of applications like Exchange; just Google for the whitepapers. Linux is a lot trickier, especially if you're running on RHEL/CentOS or something else that doesn't have iotop and the other nice stuff that has made it into the kernel in the last couple of years. You'll have to babysit iostat for a while.
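If you do end up babysitting iostat, it's worth at least dumping the samples somewhere you can graph them afterwards. A quick-and-dirty collector — this assumes sysstat's `iostat -dxk` output, and the column layout shifts between versions, so treat the parsing as a starting point rather than gospel:

```python
# Crude iostat collector: sample extended device stats every few seconds and
# append device name + %util to a CSV for graphing later.
import csv
import subprocess
import time

INTERVAL = 5
OUTFILE = "iostat-samples.csv"

with open(OUTFILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        # Each call blocks for INTERVAL seconds while iostat takes its second sample.
        out = subprocess.run(["iostat", "-dxk", str(INTERVAL), "2"],
                             capture_output=True, text=True).stdout
        # First report is the average since boot; the last one covers the interval.
        report = out.strip().split("\n\n")[-1]
        ts = time.strftime("%Y-%m-%d %H:%M:%S")
        for line in report.splitlines():
            fields = line.split()
            if not fields or fields[0].startswith("Device"):
                continue
            # Device name is the first column; %util is the last on most versions.
            writer.writerow([ts, fields[0], fields[-1]])
        f.flush()
```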
|
# ¿ Aug 24, 2009 16:10 |
|
Haha, oh man, I pity you remaining IMail admins.
|
# ¿ Aug 25, 2009 16:12 |
|
H110Hawk posted:Sometimes it's hard to get 200v power in datacenters.
|
# ¿ Sep 8, 2009 06:04 |
|
Echidna posted:Can't mix SAS and SATA in the same enclosure. The controllers do support SATA as well as SAS, although SATA drives don't show up as options in the Dell pricing configuration thingy. Our account manager advised us that although technically you can mix SAS and SATA in the same enclosure, they'd experienced a higher-than-average number of disk failures in that configuration, due to the vibration patterns created by disks spinning at different rates (15K SAS and 7.2K SATA). If you need to mix the two types, your only real option is to attach an MD1000 array to the back (you can add up to two of these) and have each chassis filled with just one type of drive. Pretty much every SAN vendor I've ever seen has mixed and matched storage types in entry-level to mid-end SANs for enclosure-level redundancy, and last I heard EMC and IBM are still in business.
|
# ¿ Sep 14, 2009 19:20 |
|
oblomov posted:Speaking of storage, anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive type of storage? If so, what do you guys use? NetApp wasn't really a great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have 0 experience with that revision. What about EMC, say CLARiiON 960, anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp with Equallogic thrown in for good measure (over the last year or so). I haven't gotten to play with ours yet; it's sitting in boxes in the corner of the datacenter. TrueWhore posted:Can someone tell me if my basic idea here is feasible: On top of this, I don't think Windows will even mount a SCSI LUN that's in read-only mode. I don't have any idea about Mac or various Unixes. Why can't you just use a network filesystem? Vulture Culture fucked around with this message at 05:01 on Sep 17, 2009 |
# ¿ Sep 17, 2009 04:54 |
|
rage-saq posted:Skip it. IMO the only clustered (as opposed to traditional aggregate arrays like HP EVA, EMC CX, NetApp, etc.) systems worth getting into are LeftHand or some kind of Solaris ZFS system.
|
# ¿ Nov 13, 2009 20:15 |
|
adorai posted:If you don't need HA, you might want to take a look at the lower end 7000 series devices from sun. They are the only vendor that won't nickel and dime you on every little feature.
|
# ¿ Mar 3, 2010 16:03 |
|
FISHMANPET posted:Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.
|
# ¿ Mar 19, 2010 19:27 |
|
TobyObi posted:However, what I am trying to figure out is: am I limited to using it as a NAS device, i.e. NFS only, or will the optional FC card allow me to use it as an FC target in some way? There is no support for this whatsoever if you want to use plain Solaris 10.
|
# ¿ Apr 2, 2010 16:33 |
|
TobyObi posted:I figured that would be the answer. bmoyles posted:Speaking as someone who has nuked 1TB of production porn (Playboy) because a drive without a partition table looked just like the new drive I was going to format for a quick BACKUP of said data, it can be helpful. StabbinHobo posted:I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb? Real question: My role has apparently been hugely expanded regarding management of our SAN. I've got most of the basics down, but can anyone recommend any really good books to start with that don't assume I'm a non-technical manager or some kind of moron? Something that pragmatically covers LAN-free backups, best practices for remote mirroring and that kind of stuff is a big plus for me. Vulture Culture fucked around with this message at 21:10 on Apr 9, 2010 |
# ¿ Apr 9, 2010 20:56 |
|
TobyObi posted:To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
|
# ¿ Apr 10, 2010 05:28 |
|
Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.
|
# ¿ Apr 12, 2010 06:11 |
|
Sorry to bump again, but is anyone managing an IBM SAN using IBM Systems Director? I installed the SANtricity SMI-S provider on a host and connected it up to the SAN and can see all the relevant details if I look at the instance view in the included WBEM browser. However, when I try to connect to it using IBM Director, it can't discover it, even when given the server's IP address directly. Anyone have any ideas?
|
# ¿ Apr 21, 2010 22:18 |
|
oblomov posted:There is nothing on Amazon besides their "IBM Press" stuff? I usually just google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic and some EMC. I've never had to get a book for anything there; usually between vendor docs and Google, it's been good enough. Of course, I might be accepting a new job tomorrow, in which case I'd be learning the EMC side of things (particularly as it relates to Oracle). Having more experience never hurts.
|
# ¿ Apr 22, 2010 02:11 |
|
Cyberdud posted:Does Netgear make good switches?
|
# ¿ Apr 22, 2010 21:22 |
|
Can anyone unfortunate enough to be managing an IBM SAN tell me if there's a way to get performance counters on a physical array, or am I limited to trying to aggregate my LUN statistics together using some kind of LUN-array mapping and a cobbled-together SMcli script?
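In case anyone else is stuck doing the same thing, the cobbled-together route I'm resigning myself to is roughly this: dump per-LUN counters to CSV however you can (SMcli, the performance monitor export, whatever), keep a hand-maintained LUN-to-array mapping, and roll the numbers up yourself. Everything below — file layout, field names, LUN names — is invented for illustration:

```python
# Roll per-LUN performance counters up into per-array totals using a
# hand-maintained LUN -> disk-array mapping. The CSV columns are made up;
# adapt them to whatever your export actually produces.
import csv
from collections import defaultdict

lun_to_array = {
    "lun_exchange_01": "array_fc_15k",
    "lun_exchange_02": "array_fc_15k",
    "lun_fileserv_01": "array_sata_7k",
}

totals = defaultdict(lambda: {"iops": 0.0, "mb_s": 0.0})

with open("lun_stats.csv") as f:            # columns: lun,iops,mb_s
    for row in csv.DictReader(f):
        array = lun_to_array.get(row["lun"], "unmapped")
        totals[array]["iops"] += float(row["iops"])
        totals[array]["mb_s"] += float(row["mb_s"])

for array, t in sorted(totals.items()):
    print(f"{array:>15}: {t['iops']:8.0f} IOPS  {t['mb_s']:7.1f} MB/s")
```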
|
# ¿ Apr 30, 2010 16:50 |
|
paperchaseguy posted:I just started working at IBM (XIV really). PM me or post your specific question and hardware and I'll look it up on Monday. I'm going to be setting up a Vulture Culture fucked around with this message at 21:59 on May 13, 2010 |
# ¿ May 3, 2010 02:28 |
|
|
|
This is a long shot, but what the hell, lemme run it by you guys. I've got a pair of Brocade 300 switches (rebranded as IBM SAN24B-4), and I'm trying to connect up each switch's Ethernet management port to a separate Cisco Nexus fabric. Each one is run with a 5020, and the switches link up to a 2148T FEX. Problem is, whenever I do this, there is no link. I can hook up the 300 to a laptop and get a link. I can hook up the 300s to each other and get a link on both. I can hook it up to some old-rear end Cisco 100-megabit 16-port switch and get a link. I can hook up other devices, like an IBM SAN and a Raritan KVM, and get links. But for some reason, the goddamn things just will not show a link when I hook them up to the gigabit ports on the 2148T. Any ideas? The only thing I can think of is that the Nexus has issues with stuff below 1 gigabit, but if that's the case, that's some of the most braindead poo poo I've ever heard. Vulture Culture fucked around with this message at 21:12 on May 20, 2010 |
# ¿ May 20, 2010 21:08 |