dum2007
Jun 13, 2001
I may be the victim of indigestion, but she is the product of it.
I spent last year working as an AIX / pSeries consultant (woah! I'm certified!?), so I thought I'd chime in with a really cool device I've personally worked with.

Say you have a typical Fibre Channel SAN.

code:
Storage Controller A ----== Fibre Channel Switch ==== Hosts
Storage Controller B ---/
...etc
You can log in to the management interface of one of your SAN boxen and allocate some disk to your host.

When you RAID 0 two disk drives you get a performance boost, because you're reading from both drives in parallel, right? The same goes for two FC storage controllers: stripe across them and you get their aggregate performance.

Wanna do this the cheap way? Use software RAID on the host itself. If your controllers are both very resilient and you're not worried about the risk of one of them going down altogether, you could stripe across them. I wouldn't always recommend this for a business-critical high-availability setup, but you see where the performance would come from.
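To see where that parallel throughput comes from, here's a toy sketch of RAID 0 striping, with logical blocks mapped round-robin across two backend LUNs. All the names and the one-block-per-stripe granularity are made up for illustration; real software RAID (md, LVM) stripes in larger chunks.

```python
# Toy RAID 0: logical blocks alternate between two LUNs, so a
# sequential read pulls from both controllers at once.
STRIPE_WIDTH = 2  # two FC controllers / LUNs (illustrative)

def stripe_target(logical_block):
    """Return (lun_index, block_on_lun) for a logical block number."""
    return logical_block % STRIPE_WIDTH, logical_block // STRIPE_WIDTH

# Sequential blocks 0..3 land on LUN 0, LUN 1, LUN 0, LUN 1:
print([stripe_target(b) for b in range(4)])
# [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Half the blocks come off each controller, which is the whole performance argument in four lines.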

Now, say you get two brand new controllers with faster disks and you want to migrate the data over. Or maybe you need to shrink or grow a volume? Or maybe your controller sucks and only lets you allocate whole disks to a host, not portions of them? All of this is time-consuming, and maintenance windows are exactly when hours count.

Say hello to the IBM SAN Volume Controller:

code:
Storage Controller A ----== Fibre Channel Switch ==--[SVC Node]--== FC Switch ==== Hosts
Storage Controller B ---/                           \-[SVC Node]-/

The SVC basically sits between your managed Disks (mDisks) and presents virtual Disks (vDisks) to your hosts. vDisks are very flexible - now you can:

- Migrate a vDisk to some other group of physical mDisks. This is crucial when you can't take your production database down and you need to migrate it to newer, faster disk.

- Create a vDisk which spans eight controllers full of disks for extreme speed via their aggregate throughput. Your host will still see this vDisk as one LUN.

- Use mirroring capabilities to keep a synchronous or asynchronous mirror of a vDisk at another location (disaster recovery).
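The synchronous-versus-asynchronous distinction in that last point boils down to when the write gets acknowledged. Here's a toy model of just that trade-off (illustration only; real Metro/Global Mirror is far more involved):

```python
# Sync: the write is acked only once both sites have the data.
# Async: the local write is acked immediately; the remote copy
# catches up later, so it can lag behind.
local, remote, pending = {}, {}, []

def write_sync(block, data):
    local[block] = data
    remote[block] = data           # remote completes before the ack
    return "ack"

def write_async(block, data):
    local[block] = data
    pending.append((block, data))  # shipped to the remote site later
    return "ack"

def drain():
    """Background replication catching up."""
    while pending:
        block, data = pending.pop(0)
        remote[block] = data

write_sync(1, "A")                 # remote has block 1 immediately
write_async(2, "B")                # remote lags until drain() runs
print(2 in remote)                 # False
drain()
print(remote[2])                   # B
```

That lag window is why async mirroring tolerates long, slow links but can lose the most recent writes in a disaster, while sync mirroring can't.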


Physically, the SVC is a couple of 1U Intel boxes running a proprietary OS configuration that keeps a translation table of blocks. [vDisk "ORACLEDB"@Block1234 -> mDisk "DS4700A"@Block8093]. This translation is extremely fast, and even a 2-node SVC can handle fairly large installations. I think they support up to 8 nodes (4 pairs) and the system holds a world record for disk throughput as long as you back it with appropriate ($$$) storage controllers.
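That translation table can be sketched as a plain mapping from (vDisk, block) to (mDisk, block). Everything here is invented for illustration (the SVC actually works on extents and does this in its own OS, not Python), but it shows why migration is transparent to the host: only the table changes, never the vDisk address.

```python
# Toy SVC-style virtualization table: vDisk blocks map to backend
# mDisk blocks. Names and block numbers are made up.
translation = {
    ("ORACLEDB", 1234): ("DS4700A", 8093),
    ("ORACLEDB", 1235): ("DS4700B", 112),   # a vDisk can span controllers
}

def resolve(vdisk, block):
    """Look up where a vDisk block physically lives."""
    return translation[(vdisk, block)]

def migrate(vdisk, block, new_mdisk, new_block):
    """Move one block to different backend disk; the host keeps
    addressing the same vDisk block and never notices."""
    translation[(vdisk, block)] = (new_mdisk, new_block)

migrate("ORACLEDB", 1234, "FASTER_ARRAY", 500)
print(resolve("ORACLEDB", 1234))   # ('FASTER_ARRAY', 500)
```

The host asked for the same vDisk block before and after the migrate; only the right-hand side of the table moved.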

Oh, did I mention the boxes do caching, too? Just an added bonus - but that's why you must install each node with its own UPS.

Example real world application: I worked at a place with a 2 gigabit pipe across a city so their production SAP databases would be live mirrored to their disaster recovery site. I'm told this is one of the few clients that actually got such a Global Mirror working. They used a pair of Cisco FCOE units, two SVC installations and whatever backend disk they had on hand.

Oh, and an appropriate SVC config has no single point of failure: there are two nodes, and each should be connected to two Fibre Channel switches. If one node dies, disk operations continue uninterrupted.
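The no-single-point-of-failure idea can be sketched as simple path failover: the host has a route through each node, and I/O just takes the first live one. Purely illustrative; real multipathing (MPIO) is driver-level and much smarter.

```python
# Toy failover: try each SVC node in turn, use the first one that's up.
nodes_up = {"node1": True, "node2": True}

def do_io(block):
    """Route an I/O through any surviving node."""
    for node, up in nodes_up.items():
        if up:
            return f"I/O for block {block} via {node}"
    raise RuntimeError("all paths down")

nodes_up["node1"] = False   # node1 dies mid-operation
print(do_io(42))            # I/O for block 42 via node2
```

From the host's point of view nothing happened; the I/O just came back through the other node.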

The coolest feature - I thought - was recovery mode. If an SVC node's hard disk dies or it can't boot, the node will talk to its partner over Fibre Channel, mount an operating-system disk from the good node, and boot from that. You get some freaky chevrons on the server's LCD panel while this is happening.

Anyway, this is all off the top of my head and I haven't worked with an SVC for a year - although I have a pretty cool certificate from a training course I did. It's definitely my favourite piece of SAN hardware.

...if only it weren't licensed by the TB and ludicrously expensive: Something like $40,000 for a base config + $7000 per terabyte, last I heard.

