MasterOSkillio
Aug 27, 2003
Hi everyone,

I currently have a server I set up about 5-6 years ago: an Intel server board in a 4U case. It's really mostly a glorified media server that I use as a testbed for VMs from time to time, as well as for backups of desktops. Currently the hypervisor is ESXi, with a Windows Server VM, and I spin up a few other VMs to test things here and there. Soon I am going to blow away the current config and go with a Xen server hypervisor because it fits what I want to do better than ESXi.

That said, I don't really like my current RAID controller and I am looking to upgrade. Over the time I have had the server, a few HDDs have failed, and the process is always painful. Currently the VM OSes reside on a RAID 1 array, there is a data storage pool that is RAID 6, and the server boots to the hypervisor from a SATA DOM. The issue with the RAID card is that when a drive fails, I have to reboot the server, go into the RAID config menu, figure out which disk is broken, and then replace/rebuild, which can take up to 24 hours. The problem is that I cannot use the server at all while it is rebuilding the array; during this time the server is stuck in that RAID BIOS menu, pre-hypervisor, and I can't get to any of the RAID functions while the server is running. Since I often run other VMs that aren't even using the array being rebuilt, not being able to do anything with those VMs while the rebuild is happening is an annoying inconvenience that seems unnecessary these days.

In production environments I have used Dell and HP RAID cards that let you access the RAID configuration locally (or even off-site, but I don't need that functionality) to configure or rebuild an array without rebooting the server or waiting in the RAID BIOS menu while the array rebuilds. Without having to junk the rest of my setup, can someone recommend a RAID card that supports:

1. Roughly the same number of drives as the controller I have now with the expander (28+8).
2. More advanced SSD TRIM control, or at least something modern enough that arrays mixing SSDs and spinning disks won't be a problem (obviously I would not be mixing SSDs and spinning disks within the same grouping).
3. Most importantly, some way to control the RAID card from the running system so that I can make configuration changes on the fly without having to take the server and the VMs down to fix a drive failure.
4. Support for RAID levels 1/5/6. 10/50/60 would be nice but I would probably never use them.
5. A supercapacitor would be preferred over a battery.

I am open to other suggestions of things I should be looking at.

Relevant Hardware:

Intel Integrated RAID Module RMS25PB080
Intel Storage Expander RES3FV288
Intel Server Board S2600CW2SR
Xeon E5-2630Lv4 (x2)


nielsm
Jun 1, 2009



Counterpoint: Don't use hardware RAID in 2022.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Hardware RAID is dead and buried. There's no reason to keep using it on a single server as long as your OS can support software RAID like mdadm or ZFS. Software RAID is the new hotness and offers a lot more flexibility than you'd typically get even with the most high-end hardware RAID controllers. CPUs are fast enough now that there are very few good reasons to need to offload RAID tasks like parity calculation and scrubbing.
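For what it's worth, the failed-drive dance that has you rebooting into a BIOS menu is just a few commands on a live system with mdadm. A rough sketch (array and device names here are made up; check yours with `cat /proc/mdstat` first):

```shell
# Hypothetical setup: /dev/md0 is a RAID 1 mirror, /dev/sdb1 is the dying
# member, /dev/sdd1 is its partitioned replacement.
mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the member failed (if the kernel hasn't already)
mdadm --manage /dev/md0 --remove /dev/sdb1   # pull it out of the array
mdadm --manage /dev/md0 --add /dev/sdd1      # add the new disk; the rebuild starts immediately
cat /proc/mdstat                             # watch resync progress while everything stays online
```

The whole time, the array stays mounted and your VMs keep running (degraded and slower during the resync, but up).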

You already mentioned you're in the market for a new hypervisor so I'll recommend Proxmox due to its built-in ZFS support. You can do just about any flavor of striped, mirrored, parity raid and even tiered storage. ZFS doesn't care about your specific device layout - you can spread your drives out however you like on as many different controllers and buses as you please and ZFS will be happy as long as it can find the disk somewhere.
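To make that concrete, here's roughly what the OP's failure scenario looks like under ZFS. The pool name and disk names are invented for the example, but every step happens on the live system:

```shell
# Hypothetical 6-disk raidz2 pool (double parity, roughly RAID 6 territory).
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# When a drive dies, swap it out online -- the resilver runs while VMs stay up.
zpool status tank            # shows which device is FAULTED
zpool replace tank sdc sdg   # resilver onto the new disk starts immediately
zpool status tank            # reports resilver progress; no reboot at any point
```

Bonus: a resilver only copies allocated blocks, so on a half-empty pool it usually finishes far faster than a hardware controller's block-for-block rebuild.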

stray
Jun 28, 2005

"It's a jet pack, Michael. What could possibly go wrong?"
I've been using an LSI SAS9220-8i (also known as the IBM M1015), flashed into "IT" (JBOD) mode, for almost ten years. It works great, and I let ZFS handle the redundancy for the disks plugged into it.

redeyes
Sep 14, 2002

by Fluffdaddy
A RAID card? Hell no!

forbidden dialectics
Jul 26, 2005





stray posted:

I've been using an LSI SAS9220-8i (also known as the IBM M1015), flashed into "IT" (JBOD) mode, for almost ten years. It works great, and I let ZFS handle the redundancy for the disks plugged into it.

I do the same, with the same card! It really does work perfectly!

It does require some airflow, however; otherwise, it gets extremely hot. That said, I ran it like that for like 4 years before I noticed how hot it was and it still works perfectly, so :shrug:

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
ZFS with a controller in JBOD/IT mode is the way to go. RAID is just not gonna cut it anymore.

Nystral
Feb 6, 2002

Every man likes a pretty girl with him at a skeleton dance.

CommieGIR posted:

ZFS with a controller in JBOD/IT mode is the way to go. RAID is just not gonna cut it anymore.

What about for your vrtx?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Nystral posted:

What about for your vrtx?

I can do RAID 0 passthrough of single drives to pseudo JBOD it


Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

stray posted:

I've been using an LSI SAS9220-8i (also known as the IBM M1015), flashed into "IT" (JBOD) mode, for almost ten years. It works great, and I let ZFS handle the redundancy for the disks plugged into it.

I have four of these in my home lab, two each in TrueNAS boxes, and they are bombproof. eBay has plenty of them, there are a bazillion articles about how to flash them and how to set them up in your storage setup, and yeah - stay away from hardware RAID.

Source: I have two HP SmartArray P410 RAID cards with 1 GB of battery-backed write cache that work really well until they don't. They will tolerate drive failures and keep your exotic RAIDs (RAID 5+0, RAID 6, etc.) limping along after a drive failure, but auto-rebuild from a hot spare is flaky at best, and the intense I/O of a rebuild can kill a second drive.
