wolrah
May 8, 2006
what?

SpartanIvy posted:

e: Also the power supply is proprietary. DO NOT try to use a normal ATX power supply because it will fry your mobo.
WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout?

Using a standard connector so wrongly that things will break if connected to the standard version is one of those things that in a just world would have everyone responsible for designing, approving, and implementing it blacklisted from the industry as a whole.

SpartanIvy
May 18, 2007
Hair Elf

wolrah posted:

WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout?

Using a standard connector so wrongly that things will break if connected to the standard version is one of those things that in a just world would have everyone responsible for designing, approving, and implementing it blacklisted from the industry as a whole.

The connector looks different to my eye but I found someone online saying they hooked up a standard PSU to it somehow so :shrug:

Figured I'd mention it here in case anyone was thinking of a PSU swap.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

wolrah posted:

WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout?

Using a standard connector so wrongly that things will break if connected to the standard version is one of those things that in a just world would have everyone responsible for designing, approving, and implementing it blacklisted from the industry as a whole.

Yeah, this site has some details. There are more recent Dells which also use a nonstandard pinout, like the PowerEdge T20, but at least it's an 8-pin, which is obviously not compatible with standard ATX and wouldn't let you fry something by accident.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

SpartanIvy posted:

The drive shows as only having used like 700 megs so I really think it's a corrupt file somewhere.

Is it possible that it's a counterfeit flash drive that is smaller than it was advertised as?

SpartanIvy
May 18, 2007
Hair Elf

fletcher posted:

Is it possible that it's a counterfeit flash drive that is smaller than it was advertised as?

I don't think so, but it's worth testing when I get home. It had a unique GUID and I would be surprised if a counterfeit would.

Less Fat Luke
May 23, 2003

Exciting Lemon

SpartanIvy posted:

I don't think so, but it's worth testing when I get home. It had a unique GUID and I would be surprised if a counterfeit would.
It's improbable but you could be out of inodes instead; can you paste the output here of `df -h` and `df -i`?

IOwnCalculus
Apr 2, 2003





wolrah posted:

WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout?

Using a standard connector so wrongly that things will break if connected to the standard version is one of those things that in a just world would have everyone responsible for designing, approving, and implementing it blacklisted from the industry as a whole.

Once you get to "servers sold to businesses", interoperability standards with things like power supplies go right back out the window. HPE doesn't care that you can't swap that PSU with a generic one, their entire concern for that server is that you either buy Official Spare HPE parts to repair it, or replace it with a More Better HPE Server when things do start breaking.

Yaoi Gagarin
Feb 20, 2014

IOwnCalculus posted:

Once you get to "servers sold to businesses", interoperability standards with things like power supplies go right back out the window. HPE doesn't care that you can't swap that PSU with a generic one, their entire concern for that server is that you either buy Official Spare HPE parts to repair it, or replace it with a More Better HPE Server when things do start breaking.

I think system vendors would love to go back to the days where everybody had their own ISA and their own flavor of Unix, but that business model is not viable, so instead we get almost-but-not-quite interchangeable commodity hardware.

IOwnCalculus
Apr 2, 2003





I also suspect it's bleed-over from their rackmount server divisions, where even if you do use a standard ATX power supply connector... what's the point? I've got a Supermicro 2U box sitting here that has what I believe to be a standard ATX power supply connector on the motherboard, so sure, I could probably power it from a regular PSU. But the board form factor is Supermicro's own WIO spec, so it only fits in Supermicro WIO cases, none of which accept any standard power supply.

SpartanIvy
May 18, 2007
Hair Elf

Less Fat Luke posted:

It's improbable but you could be out of inodes instead; can you paste the output here of `df -h` and `df -i`?
code:
root@Domain:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          1.8G  1.8G  1.5M 100% /
tmpfs            32M  232K   32M   1% /run
/dev/sda1        15G  667M   15G   5% /boot
overlay         1.8G  1.8G  1.5M 100% /lib/firmware
overlay         1.8G  1.8G  1.5M 100% /lib/modules
devtmpfs        8.0M     0  8.0M   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  188K  128M   1% /var/log
/dev/md1        932G   68G  864G   8% /mnt/disk1
shfs            932G   68G  864G   8% /mnt/user0
shfs            932G   68G  864G   8% /mnt/user
/dev/loop2       20G  657M   19G   4% /var/lib/docker
code:
root@Domain:~# df -i
Filesystem        Inodes IUsed     IFree IUse% Mounted on
rootfs            471641 10467    461174    3% /
tmpfs             492342   358    491984    1% /run
/dev/sda1              0     0         0     - /boot
overlay           471641 10467    461174    3% /lib/firmware
overlay           471641 10467    461174    3% /lib/modules
devtmpfs          471644   371    471273    1% /dev
tmpfs             492342     1    492341    1% /dev/shm
cgroup_root       492342    15    492327    1% /sys/fs/cgroup
tmpfs             492342    58    492284    1% /var/log
/dev/md1       488381248   533 488380715    1% /mnt/disk1
shfs           488381248   533 488380715    1% /mnt/user0
shfs           488381248   533 488380715    1% /mnt/user
/dev/loop2             0     0         0     - /var/lib/docker
I'm not exactly sure what this means, but I suspect the 100% usage on the overlay mounts is the concerning part.

I also watched through the commands as it started Unraid, and it first gets the "no space available" error when it's trying to load the NVIDIA plugin.

Less Fat Luke
May 23, 2003

Exciting Lemon
The first line is saying your root filesystem (/) is full at just under 2 gigabytes, which is why everything is failing. Can you run `fdisk -l` to show the partitions? I've never used Unraid though, so I have no idea if it maybe created a partition too small.

Edit: Maybe the nvidia driver should have been downloaded somewhere other than that tiny root? I don't know.
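
If you want to see what's actually eating that tiny root before blowing it away, something like this would narrow it down - just a generic sketch using standard coreutils, nothing Unraid-specific:
code:
# stay on the root filesystem (-x) and list the biggest directories
du -xh -d 2 / 2>/dev/null | sort -h | tail -n 20
# then the largest individual files on /
find / -xdev -type f -size +50M -exec ls -lh {} + 2>/dev/null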

Less Fat Luke fucked around with this message at 01:05 on Jan 20, 2023

SpartanIvy
May 18, 2007
Hair Elf

Less Fat Luke posted:

The first line is saying your root filesystem (/) is full at just under 2 gigabytes, which is why everything is failing. Can you run `fdisk -l` to show the partitions? I've never used Unraid though, so I have no idea if it maybe created a partition too small.

Looks like it's all one partition if I'm reading this right.

code:
root@Domain:~# fdisk -l
Disk /dev/loop0: 117.91 MiB, 123633664 bytes, 241472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 19.43 MiB, 20373504 bytes, 39792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 14.91 GiB, 16005464064 bytes, 31260672 sectors
Disk model: Cruzer Fit      
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start      End  Sectors  Size Id Type
/dev/sda1  *     2048 31260671 31258624 14.9G  c W95 FAT32 (LBA)


Disk /dev/sdb: 512 MiB, 536870912 bytes, 1048576 sectors
Disk model: LUN 00 Media 0  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000046

Device     Boot Start    End Sectors  Size Id Type
/dev/sdb1          63 514079  514017  251M  c W95 FAT32 (LBA)


Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 970 EVO Plus 2TB            
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: MB1000GDUNU     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdc1          64 1953525167 1953525104 931.5G 83 Linux
I appreciate the troubleshooting help, but I'll just blow it away and start over again. I'll chalk it up to part of the learning process.

Less Fat Luke
May 23, 2003

Exciting Lemon
Yeah that's what I'd do, and when it comes to the nvidia driver install make sure you're downloading it to a location with enough space just in case.

SpartanIvy
May 18, 2007
Hair Elf
It's definitely something to do with the NVIDIA driver plugin. I get this error when I try to install it on a fresh copy of Unraid.

quote:

plugin: installing: nvidia-driver.plg
Executing hook script: pre_plugin_checks
plugin: downloading: nvidia-driver.plg ... done

plugin: downloading: nvidia-driver-2022.10.05.txz ... done


+==============================================================================
| Installing new package /boot/config/plugins/nvidia-driver/nvidia-driver-2022.10.05.txz
+==============================================================================

Verifying package nvidia-driver-2022.10.05.txz.
Installing package nvidia-driver-2022.10.05.txz:
PACKAGE DESCRIPTION:
Package nvidia-driver-2022.10.05.txz installed.

+==============================================================================
| WARNING - WARNING - WARNING - WARNING - WARNING - WARNING - WARNING - WARNING
|
| Don't close this window with the red 'X' in the top right corner until the 'DONE' button is displayed!
|
| WARNING - WARNING - WARNING - WARNING - WARNING - WARNING - WARNING - WARNING
+==============================================================================

-----------------Downloading Nvidia Driver Package v525.85.05------------------
----------This could take some time, please don't close this window!------------

----Successfully downloaded Nvidia Driver Package v525.85.05, please wait!----

-----------------Installing Nvidia Driver Package v525.85.05-------------------

Warning: file_put_contents(): Only -1 of 274 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/notify on line 218

Warning: file_put_contents(): Only -1 of 304 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/notify on line 219


------------Installation of Nvidia driver v525.85.05 successful----------------

Please make sure to disable and enable Docker if you installed the Nvidia driver for the first time! Settings -> Docker -> Enable Docker 'No' -> Apply -> Enable Docker 'Yes' -> Apply
plugin: run failed: /bin/bash
Executing hook script: post_plugin_checks

After some googling it looks like the issue is that the machine doesn't have enough RAM. That could be the answer because I only have 4 GB right now, which is pretty small by modern standards. I was planning to buy more anyway, so I'll do that now.

e:

a skeleton posted:

Sounds good, I grabbed these two sticks to try independently, since i was under budget thanks to your suggestion.

DDR4 ECC UDIMM

DDR4 ECC RDIMM

Hopefully one will work.

Did you ever get these in and test them? I am in the market :v:

SpartanIvy fucked around with this message at 02:48 on Jan 20, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
interested to find out if the RDIMMs work in the ML30; I'm still stuck away from home and haven't gotten to play with it at all yet. :argh:

SpartanIvy
May 18, 2007
Hair Elf
I bought the linked RDIMM so I'll be sure to post an update when it gets here if a skeleton doesn't beat me to it.

Hughlander
May 11, 2005

SpartanIvy posted:

code:
root@Domain:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          1.8G  1.8G  1.5M 100% /
tmpfs            32M  232K   32M   1% /run
/dev/sda1        15G  667M   15G   5% /boot
overlay         1.8G  1.8G  1.5M 100% /lib/firmware
overlay         1.8G  1.8G  1.5M 100% /lib/modules
devtmpfs        8.0M     0  8.0M   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  188K  128M   1% /var/log
/dev/md1        932G   68G  864G   8% /mnt/disk1
shfs            932G   68G  864G   8% /mnt/user0
shfs            932G   68G  864G   8% /mnt/user
/dev/loop2       20G  657M   19G   4% /var/lib/docker
code:
root@Domain:~# df -i
Filesystem        Inodes IUsed     IFree IUse% Mounted on
rootfs            471641 10467    461174    3% /
tmpfs             492342   358    491984    1% /run
/dev/sda1              0     0         0     - /boot
overlay           471641 10467    461174    3% /lib/firmware
overlay           471641 10467    461174    3% /lib/modules
devtmpfs          471644   371    471273    1% /dev
tmpfs             492342     1    492341    1% /dev/shm
cgroup_root       492342    15    492327    1% /sys/fs/cgroup
tmpfs             492342    58    492284    1% /var/log
/dev/md1       488381248   533 488380715    1% /mnt/disk1
shfs           488381248   533 488380715    1% /mnt/user0
shfs           488381248   533 488380715    1% /mnt/user
/dev/loop2             0     0         0     - /var/lib/docker
I'm not exactly sure what this means, but I suspect the 100% usage on the overlay mounts is the concerning part.

I also watched through the commands as it started Unraid, and it first gets the "no space available" error when it's trying to load the NVIDIA plugin.

rootfs is a RAM disk: https://wiki.debian.org/rootfs
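
So the ~1.8G root you're seeing lives in RAM, not on the flash drive, which is presumably why the nvidia download fills it on a 4 GB machine. A quick way to confirm (generic commands; the exact sizing of Unraid's rootfs is an assumption on my part):
code:
# show that / is RAM-backed (rootfs/tmpfs), not a disk partition
mount | grep ' on / type '
# compare total RAM against the size of /
free -h
df -h /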

Tatsujin
Apr 26, 2004

:golgo:
EVERYONE EXCEPT THE HOT WOMEN
:golgo:
I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and a 3.5" 14 TB partial backup drive in addition to the NAS storage running off an LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times.

Possible upgrades:

    * Replace the existing drives with larger ones one at a time (sucks as while they are hot swappable there's no backplane/drive trays).
    * Surplus 2U rackmount server with at least 12x 3.5" bays
    * Surplus 2U rackmount DAS with at least 12x 3.5" bays that connect to existing NAS via USB 3.0 or an external SATA/SAS RAID controller
    * Some SMB-level offering from QNAP/Synology with at least 12x 3.5" bays

IOwnCalculus
Apr 2, 2003





SpartanIvy posted:

I bought the linked RDIMM so I'll be sure to post an update when it gets here if a skeleton doesn't beat me to it.

Doesn't look like it:

quote:

General memory population rules and guidelines
The HPE ProLiant ML30 Gen9 Server has four memory slots.

There are two channels per server with two DIMM slots per channel.

Memory channel 1 consists of the two (2) DIMMs that are closest to the processor.

Memory channel 2 consists of the two (2) DIMMs that are furthest from the processor.

Support for single/dual-rank 2133 MT/s ECC UDIMM (unbuffered DIMMS).

The server supports up to 64GB (4 x 16-GB) for Unbuffered DIMMs.

No support for LRDIMMs; RDIMMs; Non-ECC UDIMMs.

Do not install DIMMs if the processor is not installed.

Populate DIMMs from heaviest load (double-rank) to lightest load (single-rank).

Non-ECC DIMMs are not supported.

Always use HPE qualified DIMMs.

I don't think this is just HPE being HPE either; I don't think any of the Xeon E3 line supports RDIMMs.

Less Fat Luke
May 23, 2003

Exciting Lemon

Tatsujin posted:

I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and a 3.5" 14 TB partial backup drive in addition to the NAS storage running off an LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times.

Possible upgrades:

    * Replace the existing drives with larger ones one at a time (sucks as while they are hot swappable there's no backplane/drive trays).
    * Surplus 2U rackmount server with at least 12x 3.5" bays
    * Surplus 2U rackmount DAS with at least 12x 3.5" bays that connect to existing NAS via USB 3.0 or an external SATA/SAS RAID controller
    * Some SMB-level offering from QNAP/Synology with at least 12x 3.5" bays
I've been rebuilding my NAS and going from 8 to 16 drives in a Fractal Meshify 2 XL. You can fit like 18 3.5" drives in there, it's incredibly spacious. I suspect if you bought cheap cages instead of using their brackets you could squeeze even more out of it (or maybe even 3d print some mounting cages).

SpartanIvy
May 18, 2007
Hair Elf

IOwnCalculus posted:

Doesn't look like it:

I don't think this is just HPE being HPE either; I don't think any of the Xeon E3 line supports RDIMMs.

These have Pentium G4400 CPUs, so maybe there's a chance?

Tatsujin
Apr 26, 2004

:golgo:
EVERYONE EXCEPT THE HOT WOMEN
:golgo:

Less Fat Luke posted:

I've been rebuilding my NAS and going from 8 to 16 drives in a Fractal Meshify 2 XL. You can fit like 18 3.5" drives in there, it's incredibly spacious. I suspect if you bought cheap cages instead of using their brackets you could squeeze even more out of it (or maybe even 3d print some mounting cages).

Thanks. What would you recommend for an internal RAID controller and desktop power supply that can connect to that many drives? I get that I'd probably be getting some E-ATX board for that much storage.

Wibla
Feb 16, 2011

Tatsujin posted:

I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and a 3.5" 14 TB partial backup drive in addition to the NAS storage running off an LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times.

Possible upgrades:

    * Replace the existing drives with larger ones one at a time (sucks as while they are hot swappable there's no backplane/drive trays).
    * Surplus 2U rackmount server with at least 12x 3.5" bays
    * Surplus 2U rackmount DAS with at least 12x 3.5" bays that connect to existing NAS via USB 3.0 or an external SATA/SAS RAID controller
    * Some SMB-level offering from QNAP/Synology with at least 12x 3.5" bays

What OS are you running? I assume software raid since you're running a SAS HBA?

I'd get 8x14-16TB, whatever is cheaper per TB, along with another 9211 from ebay, then migrate the data over from your old array. 6TB drives are probably old enough at this point that it's time to retire them anyway. Your current PSU will more than likely be able to power 16 drives (as long as it's >500W) and most m-ATX boards will have enough slots for two raid controllers.
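
If you go the in-system copy route, here's a hedged sketch of what I'd run - the mount points are made up, adjust to wherever the old and new arrays actually live:
code:
# archive copy preserving hard links, ACLs and xattrs, with overall progress
rsync -aHAX --info=progress2 /mnt/oldarray/ /mnt/newarray/
# rough sanity check afterwards: file counts should match
find /mnt/oldarray -type f | wc -l
find /mnt/newarray -type f | wc -l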

Here's a pic from when I migrated servers, though I used 10GbE between machines instead of doing in-system copying:

There's a fan behind the drives :v:

Enos Cabell
Nov 3, 2004


Do you guys start replacing drives when they hit a certain age, or wait until they start showing errors? My 8tb drives are creeping up on 5 years old now, and I don't have a plan in place.

Wibla
Feb 16, 2011

I usually try to retire drives after 5-6 years, or at least make sure they're not holding anything I care about.

That said I generally fill an array in 2-3 years, so I get 2-3 years of backup duty out of a set of drives after I've phased them out of the "prod" array.

Right now I have two (three) fileservers, 11x4TB (entire box being retired, it's an old dual X5675 setup, most drives have 5-6 years of runtime), 9x8TB (not re-assembled after my main fileserver got upgraded, have all the parts though), and an 8x14TB box that lives in my apartment.
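
If you'd rather go by hard numbers than purchase dates, SMART power-on hours gives a decent proxy for age - smartctl is from smartmontools, and the attribute name varies a bit between vendors, so treat this as a sketch:
code:
# power-on hours per drive (run as root); ~43,800 hours is five years of 24/7
for d in /dev/sd?; do
  echo "$d: $(smartctl -A "$d" | grep -i power_on_hours)"
done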

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

SpartanIvy posted:

Seems like a weird design choice for HPE, but whatever. I googled the weird 6-pin connector a lot today and discovered that it is indeed HPE proprietary. There are some people out there who have made Arduinos and circuit boards that can convert the 6-pin interface to a standard 4-pin fan connector, but the easier and cheaper solution is to just use one of the SATA power plugs available and power a normal fan with a power adapter.

The 6-pin fan connector largely makes sense since they used to be two tiny fans strapped together.

Klyith
Aug 3, 2007

GBS Pledge Week

Enos Cabell posted:

Do you guys start replacing drives when they hit a certain age, or wait until they start showing errors? My 8tb drives are creeping up on 5 years old now, and I don't have a plan in place.

IMO a lot would depend on whether the current drives give you enough space. If I wanted to have more storage, I'd start looking for sales on bigger drives ahead of any failures at that 5-6 year mark.

Otherwise, the reason you have redundancy is to tolerate failures. Replace drives as they fail -- even at 6 years you can expect more than half of your drives to be ok. The main question is how critical your NAS is for day-to-day stuff. If a drive died, would it be very annoying to turn the NAS off for 3-4 days while you waited for a replacement drive to arrive? If so maybe buy a spare ahead of time to minimize downtime.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
i'm having a problem with my computer and idk where to post but here

I made a btrfs raid10 array out of 14 external hard drives that I'm too lazy to shuck. I haven't used btrfs before. Before this I was using two big LVM logical volumes, with a postgresql replica on the second one for "redundancy". I only use this array for postgresql. When the machine boots, dmesg shows a bunch of errors for each drive like this:
pre:
[    3.596932] usb-storage 2-2.1.3:1.0: USB Mass Storage device detected
[    3.597099] scsi host14: usb-storage 2-2.1.3:1.0
[    3.613177] scsi 10:0:0:0: Direct-Access     WD       easystore 264D   3012 PQ: 0 ANSI: 6
[    3.613552] scsi 10:0:0:1: Enclosure         WD       SES Device       3012 PQ: 0 ANSI: 6
[    3.620282] sd 10:0:0:0: Attached scsi generic sg7 type 0
[    3.620432] scsi 10:0:0:1: Attached scsi generic sg8 type 13
[    3.620490] sd 10:0:0:0: [sdg] Very big device. Trying to use READ CAPACITY(16).
[    3.620611] sd 10:0:0:0: [sdg] 15628052480 512-byte logical blocks: (8.00 TB/7.28 TiB)
[    3.620614] sd 10:0:0:0: [sdg] 4096-byte physical blocks
[    3.621882] sd 10:0:0:0: [sdg] Write Protect is off
[    3.621886] sd 10:0:0:0: [sdg] Mode Sense: 47 00 10 08
[    3.623103] sd 10:0:0:0: [sdg] No Caching mode page found
[    3.623111] sd 10:0:0:0: [sdg] Assuming drive cache: write through
[    3.629101] sd 10:0:0:0: [sdg] Attached SCSI disk
[    3.632653] scsi 9:0:0:1: Wrong diagnostic page; asked for 1 got 8
[    3.632664] scsi 9:0:0:1: Failed to get diagnostic page 0x1
[    3.632669] scsi 9:0:0:1: Failed to bind enclosure -19
[    3.634420] scsi 10:0:0:1: Wrong diagnostic page; asked for 1 got 8
[    3.634426] scsi 10:0:0:1: Failed to get diagnostic page 0x1
[    3.634429] scsi 10:0:0:1: Failed to bind enclosure -19
This prevents it from being mounted at startup by this fstab entry
pre:
PARTUUID=293b3a6d-a7ac-4bff-86a9-0cba3d88f8b9 /mnt/array btrfs defaults 0 1
I tried changing "pass" to 0 and that didn't help.

`btrfs check` says it's ok, and I was using these drives before without any issues, so I don't think it's the drives themselves. It also mounts fine if I do it manually. I'm pretty sure it has something to do with starting 14 spinny drives over usb at once. I think they have plenty of power though, as they all use the included power adapter. I've got plans to use a single power supply or enclosure for all of them but it's cold and microcenter is far away.

I think the solution is to put them into an actual enclosure, but like I said I'm lazy and I don't have one right now so idk if there's a way to make it work like this.

e: Notably, I haven't had this issue with LVM/ext4.
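
One thing I might try before buying an enclosure is just telling systemd to wait for (or at least not hang on) the slow USB enumeration - untested sketch, the timeout value is a guess:
pre:
# wait up to two minutes for the array device, and don't fail the boot if it
# never shows up; last field 0 skips fsck, which btrfs doesn't use anyway
PARTUUID=293b3a6d-a7ac-4bff-86a9-0cba3d88f8b9 /mnt/array btrfs defaults,nofail,x-systemd.device-timeout=120s 0 0
# if that's still racy, the fallback is a script/unit that runs "btrfs device scan"
# once all the USB disks are up and only then mounts the array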

dougdrums fucked around with this message at 21:30 on Jan 20, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
raid10 array of 14 external hard drives :psyduck:

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
yeah i know

e: oh hah it's actually 16 too, i forgot i added two

dougdrums fucked around with this message at 22:12 on Jan 20, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
:siren: ML30 gang :siren:

there's a bunch of used 4x8gb UDIMMS on ebay right now for $50

Less Fat Luke
May 23, 2003

Exciting Lemon

Tatsujin posted:

Thanks. What would you recommend for an internal RAID controller and desktop power supply that can connect to that many drives? I get that I'd probably be getting some E-ATX board for that much storage.

So much room for activities!

You'd want internal PCIe LSI HBA cards - 9211, 9240 and so on. I usually go on eBay and just search for "LSI IT mode", which turns up cards already flashed to IT (initiator target) mode, where the card won't do any hardware RAID. There are lots of clones so make sure the seller has good ratings.

If you need to expand, the gold standard is the Intel RES2SV240 expander - it can be powered by the PCIe slot or Molex directly and has 6 ports (1 used for upstream).
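
If you want to double-check that a card really shipped in IT mode, Broadcom/LSI's sas2flash utility should report the firmware type - assuming you grab their binary, since it isn't in most distro repos:
code:
# confirm the HBA shows up on the PCIe bus (SAS2008 = 9211-8i family)
lspci | grep -i -e lsi -e sas2008
# list the adapter details; a Firmware Product ID ending in (IT) means
# initiator-target firmware, i.e. no hardware RAID
./sas2flash -list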

Edit: also for PSU honestly drives don't use that much but I went overkill and use an EVGA 1000W G3, mostly for the absolute plethora of SATA power cable connections it has.

Less Fat Luke fucked around with this message at 22:18 on Jan 20, 2023

SpartanIvy
May 18, 2007
Hair Elf

e.pilot posted:

:siren: ML30 gang :siren:

there's a bunch of used 4x8gb UDIMMS on ebay right now for $50

Are they the non ECC ones from gwzllc2008? Would those even work since they're not ECC?
https://www.ebay.com/itm/275604969575?hash=item402b561867:g:nWMAAOSwIqRjtYY~


e: returns offered by seller so I bought it to try

SpartanIvy fucked around with this message at 22:41 on Jan 20, 2023

Wibla
Feb 16, 2011

Less Fat Luke posted:


So much room for activities!

You'd want internal PCIe LSI HBA cards - 9211, 9240 and so on. I usually go on eBay and just search for "LSI IT mode", which turns up cards already flashed to IT (initiator target) mode, where the card won't do any hardware RAID. There are lots of clones so make sure the seller has good ratings.

If you need to expand, the gold standard is the Intel RES2SV240 expander - it can be powered by the PCIe slot or Molex directly and has 6 ports (1 used for upstream).

Edit: also for PSU honestly drives don't use that much but I went overkill and use an EVGA 1000W G3, mostly for the absolute plethora of SATA power cable connections it has.

Post a pic with everything cabled up :sun:

Less Fat Luke
May 23, 2003

Exciting Lemon

Wibla posted:

Post a pic with everything cabled up :sun:
LOL never. It's not the worst cabling I've done and the SAS breakouts make things easier but I will take pictures of the internal cabling to my grave.

Enos Cabell
Nov 3, 2004


Klyith posted:

IMO a lot would depend on whether the current drives give you enough space. If I wanted to have more storage, I'd start looking for sales on bigger drives ahead of any failures at that 5-6 year mark.

Otherwise, the reason you have redundancy is to tolerate failures. Replace drives as they fail -- even at 6 years you can expect more than half of your drives to be ok. The main question is how critical your NAS is for day-to-day stuff. If a drive died, would it be very annoying to turn the NAS off for 3-4 days while you waited for a replacement drive to arrive? If so maybe buy a spare ahead of time to minimize downtime.

I'm getting close to needing to expand for storage reasons, so I think the best bet will be to pick up a few externals as they go on sale and start swapping 8s for 14s over the next year or so. Fortunately with Unraid I can do that one drive at a time and not need to build a whole new pool.

Less Fat Luke posted:


So much room for activities!

Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing.

Wibla
Feb 16, 2011

Enos Cabell posted:

Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing.


Don't have to print the whole serial number either :v:

I bought some SATA power cables that have the plugs in a string, but they "feed from the top", so it just became a mess. sigh.

Zorak of Michigan
Jun 10, 2006


Enos Cabell posted:

Do you guys start replacing drives when they hit a certain age, or wait until they start showing errors? My 8tb drives are creeping up on 5 years old now, and I don't have a plan in place.

I'm in the same boat. I'm gradually replacing them with 16TB drives. I was originally minded to wait and do the replacements over the course of a couple weeks, but after a couple of 8TB drives started throwing errors, I went to this approach instead.

IOwnCalculus
Apr 2, 2003





Enos Cabell posted:

Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing.

I just keep a spreadsheet in Google Docs and use a grid arranged like the drive bays in my server / DAS, with a drive model number and serial number in each cell.
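
If it helps populate that kind of grid without pulling drives, lsblk will dump the model and serial for every disk in one go (column names assume a reasonably recent util-linux):
code:
# one row per physical disk: device name, model, serial number, size
lsblk -d -o NAME,MODEL,SERIAL,SIZE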

Less Fat Luke
May 23, 2003

Exciting Lemon

IOwnCalculus posted:

I just keep a spreadsheet in Google Docs and use a grid arranged like the drive bays in my server / DAS, with a drive model number and serial number in each cell.
I thought about this, but I also have to somehow use the label maker(s) my mother bought me for Christmas twice in a row.
