wolrah
May 8, 2006
what?

Catch 22 posted:

GET some metrics on your IOPS and MB read/write. THIS is a must for any environment looking to buy ANY SAN.

What is the best way to gather this information on a Linux system? I'm usually pretty good at the Google, but all I'm finding is how to use Iometer to test maximums rather than how to monitor for a week or two and see how much I actually need. My few Windows systems seem to be the easy part; Performance Monitor can gather all the information I'll need, as far as I've found. The Linux boxes in question are running Debian and Ubuntu, FWIW.
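
(For reference, this is roughly the kind of data I'm after. A minimal sketch, with a placeholder device name and sampling interval, that reads /proc/diskstats on a timer and logs per-device IOPS and read/write MB/s so a week or two of samples can be summarized later:

#!/usr/bin/env python3
# Rough sketch: sample /proc/diskstats periodically and log IOPS and MB/s
# deltas for one device. DEVICE and INTERVAL are placeholders.
import time

DEVICE = "sda"       # device to watch (placeholder)
INTERVAL = 60        # seconds between samples
SECTOR_BYTES = 512   # /proc/diskstats counts 512-byte sectors

def read_counters(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # reads completed, writes completed, sectors read, sectors written
                return (int(fields[3]), int(fields[7]),
                        int(fields[5]), int(fields[9]))
    raise ValueError("device %s not found" % dev)

prev = read_counters(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = read_counters(DEVICE)
    iops = ((cur[0] - prev[0]) + (cur[1] - prev[1])) / INTERVAL
    read_mbs = (cur[2] - prev[2]) * SECTOR_BYTES / 1e6 / INTERVAL
    write_mbs = (cur[3] - prev[3]) * SECTOR_BYTES / 1e6 / INTERVAL
    print("%s iops=%.1f read_MB/s=%.2f write_MB/s=%.2f"
          % (time.strftime("%Y-%m-%d %H:%M:%S"), iops, read_mbs, write_mbs),
          flush=True)
    prev = cur

I assume iostat or sar from the sysstat package would give the same numbers with less effort if someone can point me at the right invocation; the sketch above is just to show what I mean by "monitor for a week or two".)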

Migrating all of our storage to a SAN will be the first step in virtualizing and consolidating, hopefully down to a pair of identical servers rather than the clusterfuck of small boxes we have right now. My testing of ESXi is showing it to be everything I had hoped for and more, so with a recent notable inflow of cash I'm hoping to clean up my infrastructure.


wolrah
May 8, 2006
what?
Does anyone have experience with the Intel Modular Server's storage system, and particularly its "Shared LUN" feature? I'm under the impression that this would be a better choice for providing shared storage to the Hyper-V blades housed within, compared to running a *nix storage distro of some sort on one of the blades and serving iSCSI from there.

The previous administrators of this machine built a whole bunch of small LUNs, one per VHD, and attached them to the individual blades, so changing anything is a real pain in the rear end right now and any kind of failover or migration between blades is just a dream.

I'm usually the one advocating random *nix appliances over licensed features, but since we already have the hardware and it's only a few hundred bucks to license the feature, it seems like the obvious solution unless it has some fatal flaw.
