Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

echo465 posted:

My Google skills must be failing me, because most of the whining about the Promise VTrak line that I'm finding is from Mac users upset that Apple discontinued the Xserve RAID and told everyone to buy Promise instead. I'm interested in hearing about it if this is a widespread problem.
People in the enterprise world tend to have better things to do than bitch about their hardware on the Internet.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

H110Hawk posted:

I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work.

Our requirements:

Hardware raid, dual parity preferred (RAID6), BBU
Cheap!
Runs or attachable to Debian Etch, 2.6 kernel.
Power-dense per gb.
Cheap!

To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon RSYNC to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O.

We are looking for the cheapest/densest cost/gig. If that is buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing.

The ability to buy the unit without any disks is a big plus, since we are frequently able to purchase disks well below retail.

I need about 50-100TB usable in the coming month or two; then it will scale back. I am putting these in 60A/110V four-post 19" racks.

Edit: Oh and I will murder anyone who suggests coraid, and not just because it is neither power dense nor hardware raid.
If OpenSolaris is an acceptable alternative to Debian Etch, you might try a SunFire X4500, whose price drops by a really substantial amount if you get them to discount you over the phone. 48TB raw on ZFS nets you about 38TB usable with all the redundancy you could ever need. OpenSolaris's built-in CIFS server is awfully good at UID/GID mapping for mixed Unix/Windows environments, and outperforms the poo poo out of Samba. I'm running three of them in production with two more in testing and two more on the way. They're rock solid and, in spite of using 7200 RPM SATA disks, they benchmark really close to the fibre channel arrays on my [vendor name removed] tiered NAS, which does NFS and CIFS in hardware.
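
To give a rough sense of how 48 drives turn into that usable number, here's a hypothetical layout (device names and group widths are illustrative, not necessarily my exact config): two disks stay out for a mirrored boot pool, and the rest get carved into raidz2 groups plus a couple of hot spares.

code:
# Hypothetical 48-disk Thumper layout: 2 boot disks (not shown), 44 data disks
# in four 11-wide raidz2 groups, plus 2 hot spares. That's 36 disks of usable
# space, which lands in the same ballpark as the figure above depending on the
# widths you pick and filesystem overhead.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
  raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
  raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0 \
  spare c5t4d0 c5t5d0
In practice you'd also spread each group across controllers rather than filling them sequentially like this, but you get the idea.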

You could also use Solaris 10 with Samba if OpenSolaris makes you uncomfortable, but OpenSolaris is just so much goddamn nicer, especially with the built-in CIFS server, and just as stable.

If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an rear end in a top hat.

Vulture Culture fucked around with this message at 00:00 on Sep 18, 2008

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

M@ posted:

Wanted to chime in this thread and say that if anyone needs any used NAS/SAN hardware, I stock this stuff in my warehouse. Mostly Netapp, low to mid-high end equipment (960s, 980s, 3020s, and I've got one 6080 :monocle: ) and HP, and Sun, although we do get the occasional EMC system in. I stock used disk shelves and individual drives too, so if you're looking for a cheap spare or something, let me know.
I need a handful of IBM GNS300C3 drives; in spite of my having 4-hour replacement on these, IBM cannot point me to a single one anywhere in the country.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mierdaan posted:

(if low-end)
And you ain't kidding; the performance on the MD3000 is really not spectacular.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mierdaan posted:

What do you mean by not spectacular? I mean, the most IOPS-intensive thing we're throwing onto it is an Exchange 2007 install for 250-ish users pushing about 600 IOPS. Are we going to be unhappy? :(
At 600 IOPS you're probably going to be fine, just be aware that if you're using an NX1950, you have a 2.1 TB LUN limit. This probably isn't a huge dealbreaker for Exchange since you can just break up the message stores pretty transparently, but it's something to be aware of if you're looking for generic storage.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

tinabeatr posted:

Does anyone have any experience with BlueArc?

We have a Titan 2500 set up at my office, and the performance is incredible. I was curious if anyone else here is using anything similar to this, what their experience has been, and what they're using it for.

I'm in the litigation support business and my company handles e-Discovery projects using our own in-house software. Due to the nature of how e-Discovery production works, we needed a storage solution that could handle very fast throughput of massive amounts of small files. So far, the BlueArc solution has worked really well. I'm a software developer, so I don't know many of the real-world performance metrics we're getting, but if anyone is interested I can find out.
Their support really tries to go the extra mile when they can, but they seem extremely disorganized. We've frequently had them say they're showing up on a certain day to support something, then not answer emails about what time to expect them, after which they may or may not show up. Whatever; these have always been non-critical issues, and support staff get diverted to more critical problems from other people, so it's not a big deal. Sometimes they show up and can't complete the work they were supposed to do. (They installed their call-home application awhile ago, which they insisted required a distribution we don't support; who the gently caress writes software for Linux and doesn't support Red Hat? Thank God that Puppet makes one-offs like this easy enough to maintain.) We had a try-and-buy unit that sat boxed up for a few months (we have a different unit in production), and it took them a couple of months to get it out of our storage room.

The stuff they redistribute from third parties like Xyratex is sometimes off-quality and sometimes utter garbage; I can't say enough bad things about the disk controllers in the SA-48s. The switch they installed for the NAS's private management network is some piece-of-poo poo $250 Netgear managed switch. If we're going to be paying six figures for storage gear, I at least expect some real vendors on the backend. Still, it hasn't broken on us yet, and I guess that's what's important.

The way they do some of their application-level stuff makes me a bit wary. Their call-home application works by parsing logs sent via email rather than dealing with SNMP traps or anything proper. This is conspicuous, and it leads me to believe that there might not be great cooperation between some of their engineering groups. Still, it works without a problem, so I guess I shouldn't really care how they do things until stuff starts not working right.

Short of that, though, they're a pretty decent company. The hardware is good and does what it's supposed to. I don't think we've had a single problem with the fibre or SATA hardware outside of the high-density SA-48 enclosures, which seem to experience perpetual controller failures. They've never screwed us, they're hard-working, and we haven't had any stability or performance issues in spite of hitting the thing seriously hard with 504 nodes at once over NFS. Companies like IBM and Dell have given me major issues on critical systems that BlueArc hasn't. For low-cost, high-performance storage for an HPC environment or similar scratch space, I'd recommend them. I don't know that I trust them enough for anything truly "enterprise" grade, though.

Vulture Culture fucked around with this message at 01:52 on Jan 20, 2009

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Jadus posted:

In general, what are people doing to back up these large multi-TB systems?

Our company is currently looking at scanning most of the paper trail from the last 20 years and putting it on disk. We've already got a direct-attached MD3000 from Dell so we're not worried about storage space. However, backing up that data doesn't seem to be as easy.

If it's to tape, does LTO4 provide enough speed to complete a backup within a reasonable window? If it's backup to disk, what are you doing for offsite backups, and how can you push so much data within the same window?

I think I may be missing something obvious here, and if so proceed to call me all sorts of names, but I don't see an ideal solution.
Yes. All enterprise backup software on the market can stream data from a single server to multiple drives at once, though generally for smaller servers you're going to go disk-to-disk and then have the backup server stream it off to tape in one fast pass. This is because if you don't stream data fast enough, the tape constantly starts and stops ("shoe-shining"), and in addition to killing your speeds and latency, it's not good for the drive or the media. We're running IBM 3584 (now TS3500) cabinets with 10 drives apiece, running on a SAN and managed by a single server running Tivoli Storage Manager. We're going to scale it out to 2 servers very soon.

There are a lot of different options available to you, and they all depend on what you need and what you're willing to pay. If you don't need file-level granularity, lots of SAN vendors have flash-copy capability and the ability to plug into enterprise backup products like Tivoli Storage Manager, CommVault, or NetBackup, which will perform differential block-level backups of the SAN volumes.

The advantage of block-level backups is that you don't waste any time analyzing files to see what changed, because the SAN keeps an inventory of dirty blocks on the volume. This is generally a good idea for volumes with a ton of small files (larger files back up easily enough with a file-based product), and it improves both your RTO and RPO really substantially for full-on disaster recovery scenarios. The disadvantage is that you lose all of your granularity to restore specific data from the backup set. So, really, when you plan this out, you need to look at your most likely failure scenarios, what they'll cost you, and what you need to restore first. If your system management tools are good enough, the ability to do a bare-metal restore really doesn't matter, because you can restore configurations just as fast from your tools while your backup product works on the important data. This is the case with my Linux systems managed by Puppet.

Keep in mind that even if you have a huge volume of files on a mission-critical storage server, people don't need 100% of the files restored immediately. When it comes to file storage, users need what they're working on, which you can restore first, and then restore everything else later. Contrary to popular belief, RTO does not always apply to full volumes. If you can get the users satisfied with what's restored first within a quick enough timeframe, they're not going to care much about the rest of the restore time (within reason).

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

H110Hawk posted:

From what I hear, and contrary to what their sales guys insisted, BlueArc appears to be doing demo units now. Or is it still their "if we think you like it you have to buy it" try-and-buy program?
Regardless of circumstances, the only vendors who get away with this are the ones dealing with IT people who have no balls (or the female equivalent).

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

skullone posted:

You guys are scaring me... I already have a drive with predictive failure on my Sun box. Haven't reported it to Sun yet... but now I'm thinking "this RAID-Z set with hot spares isn't looking as good as RAID-Z2 anymore."

I pretty much fired our Sun reseller last night, and I'm trying to find a new rep at Sun to get us the stuff we need.
Siiiigh.
Give Joe Morgan at MTM Technologies (jmorgan@fakesubdomain.seriouslyremovethis.mtm.com) a shout; he does good things for us.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr. Fossey posted:

Has anyone played around with Sun's 7000 Storage line? Specifically the 7210?

I can get a sweetheart of a deal, but even the best deal is no good if it's not ready yet.
It's a fantastic tier-3 NAS for the money but don't try to use it as a SAN yet.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr. Fossey posted:

We are thinking of using it primarily for 5-6TB over CIFS, and possibly a handful of VMs over iSCSI. The most intense would be an 80-user Exchange VM. Is the SAN piece something that will come into its own as the software matures, or are there hardware or architecture inadequacies?
It's mostly the software end. For example, the iSCSI target server, as far as I'm aware, doesn't support SCSI-3 persistent reservations. That means you can't run Windows clustering off of it, which is kind of a big deal for a lot of people. There are a few other shortcomings, but they're mostly related to feature set rather than performance and stability; we're quite happy with the performance. There's support for persistent reservations in OpenSolaris nightlies now, though, so I'm sure it won't be much longer before it finds its way into some Amber Road updates.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

complex posted:

Hmm. We could not get our x4500 anywhere near wire speed using iSCSI. Wondering now if the x4540 is that much better...

Anyone see my question on the last page about deduplication?
What iSCSI target stack were you using, out of curiosity?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

complex posted:

Windows Server, not sure what version, 2003? I'm not a Windows guy.
I mean the target server running on the x4500 (unless you mean you were actually using Windows on the x4500 and I'm misunderstanding). The new COMSTAR stack in OpenSolaris has seen some pretty substantial improvements over the last few releases.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Syano posted:

Next, I am having a difficult time wrapping my head around the idea of a snapshot. Is it really as awesome as I am thinking it is? Because I am thinking if I were able to move all my storage into the array then I would be able to use snapshots and eventually replication to replace my current backup solution. Are snapshots really that awesome or do I have an incorrect vision of what they do?
You know the old adage of how RAID isn't backup?

It's still RAID. I remember a story here about some guy with a big-rear end BlueArc NAS that was replicating to another head. The firmware hit a bug and imploded the filesystem, including snapshots. It then replicated the write to the other head, which imploded the filesystem on the replica.

This is probably less of a concern when your snapshots happen at the volume level instead of the filesystem level, but there's still plenty of disaster scenarios to consider without even getting into the possibilities of malicious administrators/intruders or natural disasters. You really need to keep offline, offsite backups.

Vulture Culture fucked around with this message at 16:17 on Jun 13, 2009

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

Can someone tell me what the practicality of using a Sun 7210 w/ 46 7200 rpm disks as the backend for approximately 40 VMware ESX guests is? On the one hand, I am very afraid of using 7200rpm disks here, but on the other hand there are 46 of them.

I realize that without me pulling real IOPS numbers this is relatively stupid, but I need to start somewhere and this seems like a good place.
You're also not telling us what kind of workload you're trying to use here. I've got close to 40 VMs running off of 6 local 15K SAS disks in an IBM x3650, but mostly-idle development VMs have very different workloads than real production app servers.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bluecobra posted:

Sun's Amber Road system looks pretty nifty:

http://www.sun.com/storage/disk_systems/unified_storage/

You can probably save some money if you go with their disk arrays instead. At my work I recently put together a J4400 array, which is 24TB raw (24x1TB); after setting up ZFS with RAIDZ2 + 2 hot spares, it comes to about 19TB usable. You can daisy-chain up to 8 J4400s (192 disks) together, so you can expand to roughly 150TB. One J4400 was about $20k after getting two host cards, dual SAS HBAs, and gold support, plus you will need a server to hook it up to. I would find a decent box and load it up with a boatload of memory for the ZFS ARC cache.

http://www.sun.com/storage/disk_systems/expansion/4400/
You can also use an x4600 as an interface to a bunch of Thumpers (up to 6 at 48x1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native ZFS kernel CIFS sharing.
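
For reference, the in-kernel sharing is about as simple as it gets; a quick sketch with made-up dataset and share names:

code:
# OpenSolaris/SXCE in-kernel CIFS: share a dataset straight from ZFS,
# no smb.conf to babysit (dataset/share names are hypothetical)
zfs create tank/projects
zfs set sharesmb=name=projects tank/projects
sharemgr show -vp           # confirm the share is actually published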

Vulture Culture fucked around with this message at 16:16 on Jun 16, 2009

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

complex posted:

You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.
I don't have anything handy, but it's something a Sun VAR recommended to us for low-cost cluster storage. I don't actually have any idea how it works.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Maneki Neko posted:

By and large, though, that's the same general complaint I have about all IBM equipment. Stupid poo poo like 17 different serial numbers on a piece of equipment and IBM having no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that need to be manually tracked and aren't updated regularly (versus regularly updated packages), etc. I have no idea how IBM stays in business; dealing with them is terrible.
I'm not that familiar with their SAN gear (we're just now installing our DS4800) but doesn't IBM Director usually do a pretty bang-up job of handling all the firmware bullshit?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bluecobra posted:

You can do this if you roll your own with a 3U Supermicro case, 1.5TB drives, a decent Intel motherboard/processor, and OpenSolaris so you can use ZFS. Once you get OpenSolaris installed, it is pretty trivial to make a ZFS pool, and you can do something like a RAIDZ2, which is similar to RAID 6 in redundancy. You can then share out the ZFS pool you just created to Windows hosts with a CIFS share.
Just note that if you take this route and expect AD integration, you had better be very familiar with LDAP and Kerberos (or at least know enough to troubleshoot when the tutorial you're following misses a step), because it's not much more straightforward than it is in Samba. OpenSolaris is an amazing OS for Unix/Linuxy people, but Sun bet the storage farm on the 7000 series' secret sauce, not the OpenSolaris CLI.
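
If it helps anyone going down that road, the moving parts look roughly like this; the domain names are placeholders, and the prerequisites (krb5.conf, DNS) are from memory, so check them against your build:

code:
# Hypothetical AD join for the OpenSolaris kernel CIFS server.
# Prereqs: /etc/krb5/krb5.conf pointing at your AD realm, and DNS that can
# resolve the domain controllers, or the join falls over in confusing ways.
svcadm enable -r smb/server                  # bring up the kernel CIFS service
smbadm join -u Administrator example.com     # join the AD domain
# map Windows identities onto Unix UIDs/GIDs
idmap add 'winuser:*@example.com' 'unixuser:*'
idmap list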

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
DDUP made some neat products, but I don't get what all the fuss was about. From the benchmarks I've seen, FalconStor and others have products that perform an entire order of magnitude better. And a corporate strategy that never quite knew how it felt about post-process deduplication made me think the place was run by Scott McNealy.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I just moved part of my ESXi development environments off of local storage and onto a 48TB Sun x4500 I had lying around, shared via ZFS+NFS on OpenSolaris 2009.06 over a 10GbE link.
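
The share setup itself is nothing fancy; roughly this, with the dataset name and management network made up:

code:
# Dataset for the ESXi NFS datastore -- names and the 192.168.10.0/24 network
# are placeholders. ESX wants root access on the export, hence the root= list.
zfs create tank/esx-dev
zfs set sharenfs='rw=@192.168.10.0/24,root=@192.168.10.0/24' tank/esx-dev
zfs set atime=off tank/esx-dev    # skip access-time updates on guest I/O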

I was worried about performance because it's SATA disk, but holy poo poo this thing screams with all those disks. I have never seen a Linux distro install so fast ever in my life. The bottleneck seems to be the 10GbE interface, which apparently maxes out around 6 gig.

If I can find some sane way to replicate this to another Thumper, I will be a very, very happy man.

Vulture Culture fucked around with this message at 18:16 on Aug 6, 2009

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bluecobra posted:

Are you using these 10GbE cards? They have been working extremely well for us but on some servers we added another card and turned on link aggregation. There is also this new feature in Solaris 10 Update 7 that may or may not be in OpenSolaris:
The Thumpers are PCI-X, so we're using an Intel 82597EX. I haven't even considered link aggregation on this thing, since it's first-gen and probably out of warranty soon. We're a lot more likely to go x4540 if we put these in production.

I also don't think our non-HPCC switching could handle it well. We're running some impressive Force10 gear on the HPCC side, but I think we're bottlenecked down to 10gig to any given rack group for our regular network.

H110Hawk posted:

What zpool configuration are you using? Any specific kernel tweaks?
RAID-Z1, 6 disks per vdev, one gigantic zpool, ZFS version 16. No real performance tweaks yet, though I've looked at some stuff with the I/O schedulers in OpenSolaris. I'm still playing with it, because I'm experimenting with iSCSI vs. NFS, and iSCSI's performance is kind of poo poo with the default configuration on both the VMware and Solaris sides.
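
Spelled out, the pool is just a stack of 6-disk raidz1 vdevs; something like this, with illustrative device names and only a few of the groups written out:

code:
# The shape of it: 6-wide raidz1 groups striped into one pool (abbreviated;
# the real pool has more groups, and these device names are examples).
zpool create vmpool \
  raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zpool status vmpool    # sanity-check the vdev layout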

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Weird Uncle Dave posted:

This is probably an invitation to all sorts of weird PMs, but do any of you do SAN consulting and/or sales?

I'm pretty sure I'm in over my head with my boss's request to virtualize several of our bigger physical servers. The virtualizing part is easy enough, but I don't really know enough about how to measure my hardware requirements, or how to shop intelligently for a low-end SAN that will meet those requirements, and I don't want to clutter the thread with all my newbie-level questions.
It depends a lot on the type of platform you're running. What are the operating systems and major applications you're trying to profile?

Profiling Windows and the various MS apps is pretty easy -- your best bet is to download the 180-day trial of System Center Operations Manager, load it onto a VM, point it at your AD/Exchange/SQL/whatever servers, and let it go to town. Within a couple of days you should have most of the relevant performance information you need. Microsoft also has a lot of best-practices guides on how to obtain relevant performance information out of applications like Exchange; just Google for the whitepapers.

Linux is a lot trickier, especially if you're running RHEL/CentOS or something else that doesn't have iotop and the other nice stuff that has made it into the system in the last couple of years. You'll have to babysit iostat for a while.
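
By "babysit iostat" I mean something like this, left running through the busy part of the day (the log filename is obviously just an example):

code:
# Extended per-device stats every 5 seconds, in kB, with timestamps. The
# columns that matter for sizing are r/s and w/s (IOPS), rkB/s and wkB/s
# (throughput), and avgqu-sz/await (queue depth and latency).
iostat -dxkt 5
# or capture a full business day for later analysis:
iostat -dxkt 60 > /tmp/iostat-$(hostname)-$(date +%F).log &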

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Haha, oh man, I pity you remaining IMail admins. :(

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

H110Hawk posted:

Sometimes it's hard to get 200v power in datacenters. :(
What? No it isn't.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Echidna posted:

Can't mix SAS and SATA in the same enclosure. The controllers do support SATA as well as SAS, although SATA drives don't show up as options in the Dell pricing configuration thingy. Our account manager advised us that although technically you can mix SAS and SATA in the same enclosure, they'd experienced a higher-than-average number of disk failures in that configuration, due to the vibration patterns created by disks spinning at different rates (15K SAS and 7.2K SATA). If you need to mix the two types, your only real option is to attach an MD1000 array to the back (you can add up to two of these) and have each chassis filled with just one type of drive.
Hahahahaha you bought it

Pretty much every SAN vendor I've ever seen mixes and matches storage types in entry-level to mid-range SANs for enclosure-level redundancy, and last I heard EMC and IBM are still in business.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

oblomov posted:

Speaking of storage, does anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive-type storage? If so, what do you use? NetApp wasn't really a great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have zero experience with that revision. What about EMC, say the Clariion 960; has anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp, with Equallogic thrown in for good measure (over the last year or so).
Isilon is really big in this space. They mostly deal with data warehousing and near-line storage for multimedia companies and high-performance computing. They're very competitive on price for orders of this size, but I can't speak yet for the reliability of their product.

I haven't gotten to play with ours yet; it's sitting in boxes in the corner of the datacenter.

TrueWhore posted:

Can someone tell me if my basic idea here is feasible:

I'd like to set up a zfs volume, and share it out as a iscsi target (rw)

I'd also like to take that same volume and share it out as several read-only iSCSI targets. I think this should be possible using ZFS clones? But then the clones wouldn't update when the master rw volume does, correct? Is there some other way to get it set up the way I want it to work, i.e. one iSCSI initiator can write to a volume and several others can read it at the same time, and see updates as they happen?

Basically if you haven't guessed I am trying to get a semi SAN setup, as a stopgap measure until we can get a real SAN. I have 4 video editing stations that need access to archived material, and I am willing to have just one writer and several readers. If worst comes to worst I will go with the clones method, and just unmount, destroy clone, create new clone, remount, whenever I need a reader client to see the updated master volume.
Tell us a little more (read: a lot more) about what you're trying to do with this shared LUN (especially in terms of the operating systems involved), because unless you're using a cluster filesystem, I don't think this is going to work the way you think it will. Operating systems maintain extensive caches to speed up disk I/O, and unless those caches stay coherent (meaning something forces them to update when something changes on the shared volume), the readers are going to see garbage all over the drive.

On top of this, I don't think Windows will even mount a SCSI LUN that's in read-only mode. I don't have any idea about Mac or various Unixes.

Why can't you just use a network filesystem?

Vulture Culture fucked around with this message at 05:01 on Sep 17, 2009

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

rage-saq posted:

Skip it. IMO the only clustered (as opposed to traditional aggregate arrays like HP EVA, EMC CX, netapp etc) systems worth getting into are LeftHand or some kind of Solaris ZFS system.
Edit: Beaten to the punch on Isilon

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

If you don't need HA, you might want to take a look at the lower end 7000 series devices from sun. They are the only vendor that won't nickel and dime you on every little feature.
Interestingly, the Fishworks stuff also has better analytics than most of the solutions I've seen.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.
That's the retail price, but you could probably get Sun to cut at least 30% off of that on a quote if they like you. But honestly, a Thumper seems like mega overkill for a requirement of only 2 TB. I'd consider an X4275 instead.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

TobyObi posted:

However, what I am trying to figure out is: am I limited to using it as a NAS device (i.e. NFS only), or will the optional FC card allow me to use it as an FC target in some way?
It's not straightforward or particularly well documented, but the COMSTAR stack in OpenSolaris will let you run it as an FC target on top of ZFS. The process is almost exactly the same as setting up an iSCSI target, except you're presenting it to WWNs instead of IQNs. I haven't used it personally and can't speak to its performance or reliability, but my iSCSI experience with COMSTAR has been extremely positive.

There is no support for this whatsoever if you want to use plain Solaris 10.
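
For reference, the iSCSI flavor I have used looks roughly like this (the zvol name, size, and GUID are made up); as I understand it, the FC case keeps the same LU and view steps and only swaps the port provider, which is the part I haven't touched:

code:
# COMSTAR iSCSI target backed by a zvol -- names, sizes, and GUID are examples.
zfs create -V 500g tank/lun0
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/tank/lun0            # prints the LU GUID
stmfadm add-view 600144f0000000000000deadbeef0001    # GUID from sbdadm output
itadm create-target                                  # default IQN and portal group
# For FC you skip itadm, flip the HBA port into target mode (qlt driver), and
# mask the same LU to initiator WWNs with stmfadm host/target groups instead.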

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

TobyObi posted:

I figured that would be the answer.

I've already got an interesting device utilising COMSTAR and FC (and it has been rock solid), but for this, I think NFS over 10Gb ethernet is going to be easier, considering raw device access isn't a necessity, and the whole Oracle having OpenSolaris up in the air bit.
I'm using both NFS and iSCSI extensively in my VMware test lab, and I don't really have any complaints about the way either one is implemented in OpenSolaris. I don't think there's necessarily any benefit to FC unless you're connecting up with an existing fabric.

bmoyles posted:

Speaking as someone who has nuked 1TB of production porn (Playboy) because a drive without a partition table looked just like the new drive I was going to format for a quick BACKUP of said data, it can be helpful :)
I've used raw disks for LVM before (I stopped), but I think this general sentiment is a strong one -- do something, anything, to label your partitions so you know what they are at a glance without any guesswork bullshit. I don't know about other filesystems, but I know ext2/3/4 and XFS support filesystem labels.
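
For the record, labeling is a one-liner per filesystem; device names and label strings here are made up:

code:
e2label /dev/sdb1 pgdata             # ext2/3/4
xfs_admin -L scratch01 /dev/sdc1     # XFS (unmount it first)
blkid                                # see what everything is at a glance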

StabbinHobo posted:

I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb ?
Well, I can only think of one good reason to ever not partition your disk -- you don't have to worry about alignment issues if the filesystem always starts at sector 0. And that's nice, but you potentially lose transparency as to what the LUN is for. This might not be a big deal in a shop with really tight integration between systems and storage administration, but you can still run into huge disasters if, for example, you inadvertently zone the wrong LUN to a server. If you always partition your disks, you know exactly what your unused LUNs look like: unpartitioned disks.
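
And the alignment tradeoff is mostly solvable these days anyway; with a reasonably recent parted, something like this (hypothetical disk) gets you a partition table and a sanely aligned first partition:

code:
# GPT label plus one partition starting at 1MiB, which keeps it aligned for
# just about any stripe width or sector geometry (device name is an example).
parted -s /dev/sdb mklabel gpt
parted -s -a optimal /dev/sdb mkpart primary 1MiB 100%
parted /dev/sdb align-check optimal 1    # verify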

Real question:

My role has apparently been hugely expanded regarding management of our SAN. I've got most of the basics down, but can anyone recommend any really good books to start with that don't assume I'm a non-technical manager or some kind of moron? Something that pragmatically covers LAN-free backups, best practices for remote mirroring and that kind of stuff is a big plus for me.

Vulture Culture fucked around with this message at 21:10 on Apr 9, 2010

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

TobyObi posted:

To do either NFS or iSCSI, it's time to fork out for 10Gb Ethernet infrastructure; otherwise it's just pissing into the wind.
This might have been true in the 3.x days, but 4.0 has iSCSI MPIO that's worked very well in our testing. (We still mostly use NFS on the development side because it's easy as hell to provision new VMs. Also, we can't afford Storage VMotion.)
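
For anyone setting that up on 4.0, the software iSCSI round-robin bit is only a few commands per host; the vmk/vmhba names and the naa. ID below are examples from memory, so substitute your own:

code:
# Bind two VMkernel ports to the software iSCSI adapter, then round-robin the
# paths to a given device. All names/IDs here are placeholders.
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli nmp device setpolicy --device naa.600144f0000000000000deadbeef0001 --psp VMW_PSP_RR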

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Sorry to bump again, but is anyone managing an IBM SAN using IBM Systems Director? I installed the SANtricity SMI-S provider on a host and connected it up to the SAN and can see all the relevant details if I look at the instance view in the included WBEM browser. However, when I try to connect to it using IBM Director, it can't discover it, even when given the server's IP address directly. Anyone have any ideas?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

oblomov posted:

There is nothing on Amazon besides their "IBM Press" stuff? I usually just google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic, and some EMC. I've never had to get a book for anything there; usually between vendor docs and Google, it's been good enough.
I was just looking for generic SAN stuff, not something necessarily vendor-specific, but IBM did have a couple of free redbooks that helped me out quite a bit.

Of course, I might be accepting a new job tomorrow, in which case I'd be learning the EMC side of things (particularly as it relates to Oracle). Having more experience never hurts :)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cyberdud posted:

Does netgear make good switches ?
Netgear anything is universally, without exception, poo poo.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Can anyone unfortunate enough to be managing an IBM SAN tell me if there's a way to get performance counters on a physical array, or am I limited to trying to aggregate my LUN statistics together using some kind of LUN-array mapping and a cobbled-together SMcli script?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

paperchaseguy posted:

I just started working at IBM (XIV really). PM me or post your specific question and hardware and I'll look it up on Monday.
I ended up just modifying my Nagios perfdata collector plugin to map the LUNs to their physical arrays, then aggregate the LUN-level statistics into some pretty, easily-graphable numbers. Today I woke up early to upgrade our SAN firmware, fix some physical cabling issues, and fix the totally broken-rear end multipathing on our fabric, so I seem to be getting the hang of this pretty quickly. :)
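
The aggregation step itself is nothing clever; boiled down, it's basically this, with made-up input formats (one file mapping LUN to array, one file of per-LUN counters scraped out of SMcli, which I'm not reproducing here):

code:
# lun-array.map:  "<lun> <array>"   (maintained by hand)
# lun-iops.txt:   "<lun> <iops>"    (scraped from the SMcli performance output)
awk 'NR==FNR { array[$1] = $2; next }        # first file: lun -> array map
     { iops[array[$1]] += $2 }               # second file: per-LUN counters
     END { for (a in iops) printf "%s=%d ", a, iops[a]; print "" }' \
    lun-array.map lun-iops.txt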

I'm going to be setting up a DS5100 at our DR site this week, though, so I'll let you know if I have any specific questions about it.

Vulture Culture fucked around with this message at 21:59 on May 13, 2010

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
This is a long shot, but what the hell, lemme run it by you guys.

I've got a pair of Brocade 300 switches (rebranded as IBM SAN24B-4), and I'm trying to connect each switch's Ethernet management port to a separate Cisco Nexus fabric. Each fabric is run with a 5020, and the switches link up to a 2148T FEX. Problem is, whenever I do this, there is no link. I can hook up the 300 to a laptop and get a link. I can hook up the 300s to each other and get a link on both. I can hook it up to some old-rear end Cisco 100-megabit 16-port switch and get a link. I can hook up other devices, like an IBM SAN and a Raritan KVM, and get links. But for some reason, the goddamn things just will not show a link when I hook them up to the gigabit ports on the 2148T.

Any ideas? The only thing I can think of is that the Nexus has issues with anything below 1 gigabit, but if that's the case, that's some of the most braindead poo poo I've ever heard.

Vulture Culture fucked around with this message at 21:12 on May 20, 2010
