Cidrick
Jun 10, 2001

Praise the siamese
I was caretaker for four VTrak M500i units at a contract gig once, filled with 500GB Seagate SATA drives. One of the units had its controller take a poo poo, twice, about six months apart. After a very stressful morning on the phone with Promise, their support had me force each drive back online in the web management tool, which somehow caused the controller to magically fix itself, and everything was hunky-dory. No data loss or anything. The exact same procedure fixed everything the second time around, too.

I think I was just very very lucky :shobon:

Cidrick
Jun 10, 2001

Praise the siamese

No... no! Not again! I won't go back!

Cidrick
Jun 10, 2001

Praise the siamese

1000101 posted:

If you can, consider adding some aliases for your devices to make those screens a little readable.

I highly recommend this as well. Nothing's more annoying than troubleshooting FC issues while staring at a bunch of WWPNs with zero idea which host each one actually belongs to, short of logging onto every single attached box and comparing hex addresses.

Here's the Brocade FabricOS cheat sheet I use, since the CLI is much, much faster for bulk changes if you're feeling up to it. Remember you have to do this on each fabric, so change it to "path1" for your second switch, or however you choose to name it.

code:
# Show a list of attached ports and WWNs
switchshow
# Find a port attached to the switch
# Accepts a full WWPN, or a partial one works too
nodefind 50:01:0c
# Create a new alias
alicreate "foohost_path0", "WWPN1"
# Add another WWPN to an existing alias
aliadd "foohost_path0", "WWPN2"
# Add alias to a new zone
zonecreate "foohost_path0_zone", "foohost_path0"
# Add alias to existing zone
zoneadd "foohost_path0_zone", "foohost_path0"
# Add the new zone to the config
cfgadd "Name_Of_My_SAN_Config", "foohost_path0_zone"
# Save the config
cfgsave
# Enable this new config, applying changes and loading it into the running config
cfgenable "Name_Of_My_SAN_Config"
As 1000101 said, in a nutshell: create an alias for every WWPN, create a zone and add your host's alias along with the alias for your SAN's ports, then save and enable your changes.
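And if you want to sanity-check the zoning before or after the cfgenable, these should cover it (from memory, so double-check against your FOS docs), and same deal, run them on each fabric:

code:
# List all defined aliases
alishow
# List all defined zones and their members
zoneshow
# Show the defined and effective (currently enforced) zoning configurations
cfgshow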

Cidrick
Jun 10, 2001

Praise the siamese

cheese-cube posted:

The only thing I miss about my previous job was working with Brocades and FC fabrics. At my current job it's all NFS/iSCSI :sigh:

I'm kind of jealous, to be honest. FC is a great, robust protocol, but with 10Gb Ethernet being so cheap and simple to work with, and with a lot of shops moving away from big expensive SANs to more appliance- or commodity-based storage platforms, I've been touching it less and less these days. My next dream is to set up a distributed Ceph or Gluster storage platform across several rows of 2U servers packed with drives, with something like OpenDedup running on top of it.
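The Gluster flavor of that dream is barely more than this to stand up - hostnames and brick paths below are made up, and I've never actually run it at that scale, so treat it as a sketch:

code:
# Assume each 2U box has its drives pooled into a filesystem mounted at
# /bricks/b1 (hypothetical path), and the hosts are storage01-03
gluster peer probe storage02
gluster peer probe storage03
# Build a 3-way replicated volume across the boxes
gluster volume create vmstore replica 3 \
    storage01:/bricks/b1 storage02:/bricks/b1 storage03:/bricks/b1
gluster volume start vmstore
# Mount it on whatever box would run OpenDedup on top
mount -t glusterfs storage01:/vmstore /mnt/vmstore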

It'll probably never happen, but I can dream.

Cidrick
Jun 10, 2001

Praise the siamese
My team is playing with the idea of going tapeless when we refresh our NetBackup environment. However, we'd like to do it without throwing hundreds of thousands of dollars at a particular storage vendor (like Data Domain or something similar) if we can help it.

Does anyone have experience setting up a high-density, cheap, non-performant storage array for backups attached to a NetBackup media server? Preferably something with dedupe and compression? We've also thought of just rolling some dense HP servers stuffed with 6TB SATA drives and running something like OpenDedup on top, but I'm not sure whether that would be more trouble than it's worth, between maintaining all of it, setting up our own alerting, scheduling drive replacements, and whatnot.
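For what it's worth, the roll-your-own half of that idea isn't much more than this to get going - the volume name and capacity are placeholders, and I can't vouch for how SDFS holds up under real NetBackup load:

code:
# Create a deduplicated SDFS (OpenDedup) volume on top of the big SATA filesystem
mkfs.sdfs --volume-name=nbu-pool --volume-capacity=200TB
# Mount it and point a NetBackup disk storage unit at the mountpoint
mount.sdfs nbu-pool /backups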

Cidrick
Jun 10, 2001

Praise the siamese

There's an outside chance I'll need to set up some manner of scalable storage backend for CloudStack, and Ceph seems to be a popular option for backing VMs. Do you have any recommendations for reference architectures I can look at for some light research, from a hardware and network-equipment standpoint? One of my concerns is having enough of a pipe for all the storage cross-chatter: most of the network designs I'm familiar with keep costs down by going from top-of-rack Nexus 2Ks to middle-of-row 5Ks, and those uplinks would likely get saturated real fast at scale.
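For a rough sense of why the uplinks worry me: with a standard 3x-replicated RBD pool, every client write gets shipped to two more OSDs over the cluster network, so write bandwidth roughly triples on the wire. Pool name and PG count below are placeholders:

code:
# Create a replicated pool for VM images (PG count is a placeholder)
ceph osd pool create vms 2048
ceph osd pool set vms size 3
# With size 3, each 1 Gbit/s of client writes turns into roughly 2 Gbit/s of
# extra east-west replication traffic, and that's the cross-chatter that hits
# the 2K-to-5K uplinks whenever replicas land in other racks.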

Cidrick
Jun 10, 2001

Praise the siamese
I'm spitballing a new OpenStack design in the coming weeks for a new data center space that will use rack-local, iSCSI-based storage as the backing store for all VMs in a given rack. I have some experience with SolidFire but little else, and while I like SolidFire's OpenStack integration and how their HA model works, it'd be foolish to only look at what I've already used when there's other stuff out there like ScaleIO, Pure Storage, and Nimble.

I'm not comfortable enough going the Ceph or GlusterFS route right now, and my company is fine paying for a flash-based storage appliance so we don't have the extra headache of managing the storage ourselves. I have no hard I/O requirements since I'm merely spitballing at this point, but the array will be hosting QCOW2 images on KVM hypervisors for 1,000-1,200 general-use VMs. I'm not planning on backing databases with this yet, but if said storage appliance has a bigger-and-beefier version that can scale up to meet higher I/O, bonus.

Can anyone point me towards some vendors - and specifically which models in their product lines - I should be looking at?

My needs are:

- <=10TB of raw storage (most dedupe is handled via qcow)
- <=6U of rackspace
- 10Gb iSCSI connectivity
- Native Cinder driver support

My nice-to-haves are:

- All flash (but hybrid is fine too)
- As simple as possible to manage
- Centralized management so we can manage an entire pod (or greater) of storage appliances from one pane of glass
- HA between arrays, if feasible, so losing an appliance doesn't take down the VMs it backs

Budget is about 300k, but this number is flexible.
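For reference, the "native Cinder driver" line item mostly comes down to whether the array has a supported backend stanza like this in cinder.conf - SolidFire shown since it's the one I've used, and the IP and credentials are obviously made up:

code:
[DEFAULT]
enabled_backends = solidfire

[solidfire]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
# Placeholder management VIP and credentials
san_ip = 10.0.0.10
san_login = cinderadmin
san_password = changeme
volume_backend_name = solidfire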
