|
Hi everyone, I currently have a server I set up about 5-6 years ago: an Intel server board in a 4U case. It's really mostly a glorified media server that I use as a testbed for VMs from time to time, as well as for backups of desktops. Currently the hypervisor is ESXi, with a Windows Server VM, and I spin up a few other VMs to test things here and there. Soon I'm going to blow away the current config and go with a Xen server hypervisor because it fits what I want to do better than ESXi.

That said, I don't really like my current RAID controller and I'm looking to upgrade. Over the time I've had the server I've had a few HDDs fail, and the process is always painful. Currently the VM OSes reside on a RAID 1 array, there's a data storage pool that is RAID 6, and the server boots to the hypervisor from a SATA DOM. The issue with the RAID card is that when a drive fails, I have to reboot the server, go into the RAID config menu, figure out which disk is broken, and then replace/rebuild, which can take up to 24 hours. The problem is that I cannot use the server at all while it's rebuilding the array. During that time the server is stuck in the RAID BIOS menu, pre-hypervisor; I can't get to any of the RAID functions while the server is running. Since I often run other VMs that at times aren't even using the array being rebuilt, it means I can't do anything with those VMs while the rebuild is happening. This is an annoying inconvenience that seems unnecessary these days. In production environments I've used Dell and HP RAID cards that let you access the RAID management locally (or even off-site, though I don't need that functionality) to configure or rebuild an array without rebooting the server or waiting in the RAID BIOS menu while the array rebuilds.

Without having to junk the rest of my setup, can someone recommend a RAID card that supports:
1. Roughly the same number of drives as the controller I have now with the expander (28+8).
2. More advanced SSD TRIM support, or at least something modern enough that it won't be a problem if my arrays are a mixture of SSDs and spinning disks (obviously I would not be mixing SSDs and spinning disks within the same grouping).
3. Most importantly, some way to manage the RAID card from the running OS, so that I can make configuration changes on the fly without having to take the server and the VMs down to fix a drive failure.
4. RAID levels 1/5/6. 10/50/60 would be nice but I'd probably never use them.
5. A supercapacitor would be preferred over a battery.

I am open to other suggestions of things I should be looking at.

Relevant hardware:
Intel Integrated RAID Module RMS25PB080
Intel Storage Expander RES3FV288
Intel Server Board S2600CW2SR
Xeon E5-2630L v4 (x2)
|
# ? Oct 4, 2022 18:17 |
|
Counterpoint: Don't use hardware RAID in 2022.
|
|
# ? Oct 26, 2022 14:12 |
|
Hardware RAID is dead and buried. There's no reason to keep using it on a single server as long as your OS supports software RAID like mdadm or ZFS. Software RAID is the new hotness and offers a lot more flexibility than you'd typically get even with the most high-end hardware RAID controllers. CPUs are fast enough now that there are very few good reasons to offload RAID tasks like parity calculation and scrubbing. You already mentioned you're in the market for a new hypervisor, so I'll recommend Proxmox due to its built-in ZFS support. You can do just about any flavor of striped, mirrored, or parity RAID, and even tiered storage. ZFS doesn't care about your specific device layout - you can spread your drives out however you like across as many different controllers and buses as you please, and ZFS will be happy as long as it can find the disk somewhere.
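To make that concrete, here's roughly what the OP's current layout (RAID 1 for VM OSes, RAID 6 for bulk storage) looks like in ZFS terms. Device names below are placeholders - in practice you'd use stable /dev/disk/by-id paths, not sdX names:

```shell
# Mirrored pool for VM OS disks, standing in for the old RAID 1 array
zpool create vmpool mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

# raidz2 pool (double parity, the ZFS analogue of RAID 6) for bulk storage
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Check pool health any time, from the running OS - no reboot, no BIOS menu
zpool status tank
```

The key difference from the OP's card: all of this happens while the system is up, so the other VMs keep running during any array maintenance.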
|
# ? Oct 26, 2022 19:38 |
|
I've been using an LSI SAS9220-8i (also known as the IBM M1015), flashed into "IT" (JBOD) mode, for almost ten years. It works great, and I let ZFS handle the redundancy for the disks plugged into it.
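For anyone crossflashing one of these, you can confirm the card actually took the IT firmware with LSI's sas2flash utility (controller index 0 assumed here; adjust for your system):

```shell
# List all attached SAS2 controllers and their firmware versions;
# the Firmware Product ID should report IT rather than IR after flashing
sas2flash -listall

# Detailed info for a single controller
sas2flash -list -c 0
```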
|
# ? Nov 2, 2022 22:02 |
|
A RAID card? Hell no!
|
# ? Nov 2, 2022 22:04 |
|
stray posted:I've been using an LSI SAS9220-8i (also known as the IBM M1015), flashed into "IT" (JBOD) mode, for almost ten years. It works great, and I let ZFS handle the redundancy for the disks plugged into it. I do the same, with the same card! It really does work perfectly! It does require some airflow, however; otherwise, it gets extremely hot. That said, I ran it like that for like 4 years before I noticed how hot it was and it still works perfectly, so
|
# ? Nov 3, 2022 01:18 |
|
ZFS with a controller in JBOD/IT mode is the way to go. RAID is just not gonna cut it anymore.
|
# ? Nov 3, 2022 19:34 |
|
CommieGIR posted:ZFS with a controller in JBOD/IT mode is the way to go. RAID is just not gonna cut it anymore. What about for your VRTX?
|
# ? Nov 4, 2022 03:18 |
|
Nystral posted:What about for your VRTX? I can do RAID 0 passthrough of single drives to pseudo-JBOD it
|
# ? Nov 4, 2022 03:25 |
|
|
stray posted:I've been using an LSI SAS9220-8i (also known as the IBM M1015), flashed into "IT" (JBOD) mode, for almost ten years. It works great, and I let ZFS handle the redundancy for the disks plugged into it. I have four of these in my home lab, two each in two TrueNAS boxes, and they are bombproof. eBay has plenty of them, there are a bazillion articles about how to flash them and how to set them up in your storage setup, and yeah - stay away from hardware RAID. Source: I have two HP SmartArray P410 RAID cards with 1GB battery-backed write cache that work really well until they don't. They will tolerate drive failures and keep your exotic RAID levels (RAID 5+0, RAID 6, etc.) limping along after a drive failure, but automatic rebuild using a hot spare is flaky at best, and the intense I/O of a rebuild can kill a second drive.
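And that rebuild-kills-a-second-drive scenario is exactly where ZFS handles things more gracefully than the OP's BIOS-menu workflow: replacing a failed disk is an online operation. A sketch, with placeholder pool/disk names:

```shell
# Swap the dead disk for a new one; the pool stays imported and
# VMs keep running while the raidz2 vdev resilvers in the background
zpool replace tank ata-OLD_DISK ata-NEW_DISK

# Watch resilver progress and estimated time remaining
zpool status -v tank
```

Resilvering is also smarter than a hardware rebuild: ZFS only copies allocated blocks, not every sector on the disk, so a half-full pool rebuilds in roughly half the time.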
|
# ? Dec 25, 2022 23:07 |