|
SpartanIvy posted:e: Also the power supply is proprietary. DO NOT try to use a normal ATX power supply because it will fry your mobo. Using a standard connector so wrongly that things will break if connected to the standard version is one of those things that, in a just world, would have everyone responsible for designing, approving, and implementing it blacklisted from the industry as a whole.
|
# ? Jan 19, 2023 16:34 |
|
wolrah posted:WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout? The connector looks different to my eye, but I found someone online saying they hooked up a standard PSU to it somehow, so I figured I'd mention it here in case anyone was thinking of a PSU swap.
|
# ? Jan 19, 2023 17:34 |
|
wolrah posted:WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout? Yeah, this site has some details. There are more recent Dells which also use a nonstandard pinout like the PowerEdge T20, but at least it's an 8-pin which is obviously not compatible with standard ATX and wouldn't let you fry something by accident.
|
# ? Jan 19, 2023 19:55 |
SpartanIvy posted:The drive shows as only having used like 700 megs so I really think it's a corrupt file somewhere. Is it possible that it's a counterfeit flash drive that is smaller than it was advertised as?
|
|
# ? Jan 19, 2023 20:04 |
|
fletcher posted:Is it possible that it's a counterfeit flash drive that is smaller than it was advertised as? I don't think so, but it's worth testing when I get home. It had a unique GUID and I would be surprised if a counterfeit would.
|
# ? Jan 19, 2023 20:08 |
|
SpartanIvy posted:I don't think so, but it's worth testing when I get home. It had a unique GUID and I would be surprised if a counterfeit would.
|
# ? Jan 19, 2023 20:11 |
|
wolrah posted:WTF someone brought back this horrible idea? Weren't there a bunch of Dells in the early '00s that were the same, looking like ATX but with a wonky pinout? Once you get to "servers sold to businesses", interoperability standards with things like power supplies go right back out the window. HPE doesn't care that you can't swap that PSU with a generic one, their entire concern for that server is that you either buy Official Spare HPE parts to repair it, or replace it with a More Better HPE Server when things do start breaking.
|
# ? Jan 19, 2023 20:27 |
|
IOwnCalculus posted:Once you get to "servers sold to businesses", interoperability standards with things like power supplies go right back out the window. HPE doesn't care that you can't swap that PSU with a generic one, their entire concern for that server is that you either buy Official Spare HPE parts to repair it, or replace it with a More Better HPE Server when things do start breaking. I think system vendors would love to go back to the days where everybody had their own ISA and their own flavor of Unix, but that business model is not viable, so instead we get almost-but-not-quite interchangeable commodity hardware.
|
# ? Jan 19, 2023 20:55 |
|
I also suspect it's bleed-over from their rackmount server divisions, where even if you do use a standard ATX power supply connector... what's the point? I've got a Supermicro 2U box sitting here that has what I believe to be a standard ATX power supply connector on the motherboard, so sure, I could probably power it from a regular PSU. But the board form factor is Supermicro's own WIO spec, so it only fits in Supermicro WIO cases, none of which accept any standard power supply.
|
# ? Jan 19, 2023 21:13 |
|
Less Fat Luke posted:It's improbable but you could be out of inodes instead; can you paste the output here of `df -h` and `df -i`? code:
I also watched the console output as Unraid started, and it first gets the "no space available" error when it's trying to load the NVIDIA plugin.
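The inode check being suggested here, sketched for generic Linux (nothing Unraid-specific):

```shell
# "No space left on device" can mean the filesystem is out of inodes even
# when `df -h` shows free blocks, so compare both views.
df -h /     # block usage for the root filesystem
df -i /     # inode usage; IUse% at 100% means no new files can be created

# Flag any mounted filesystem that has exhausted its inodes
df -iP | awk 'NR > 1 && $5 == "100%" { print $6, "is out of inodes" }'
```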
|
# ? Jan 20, 2023 00:31 |
|
First line is saying your root filesystem (/) is full at just under 2 gigabytes, which is why everything is failing. Can you run `fdisk -l` to show the partitions? I've never used Unraid though and have no idea if it maybe created a partition too small. Edit: Maybe the nvidia driver should have been downloaded somewhere other than that tiny root? I don't know. Less Fat Luke fucked around with this message at 01:05 on Jan 20, 2023 |
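A hedged sketch of hunting down what filled a small root filesystem; `TARGET` is a placeholder path and the commands are generic Linux, not Unraid-specific:

```shell
# Largest directories under one filesystem, biggest first
TARGET=/
du -xh "$TARGET" 2>/dev/null | sort -rh | head -n 20   # -x: don't cross mount points

# And confirm the partition layout itself (needs root on most systems):
# fdisk -l
```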
# ? Jan 20, 2023 00:57 |
|
Less Fat Luke posted:First line is saying your root filesystem (/) is full at just under 2 gigabytes which is why everything is failing. Can you run `fdisk -l` to show the partitions? I've never used Unraid though and have no idea if it maybe created a partition too small. Looks like it's all one partition if I'm reading this right. code:
|
# ? Jan 20, 2023 01:10 |
|
Yeah that's what I'd do, and when it comes to the nvidia driver install make sure you're downloading it to a location with enough space just in case.
|
# ? Jan 20, 2023 01:16 |
|
It's definitely something to do with the NVIDIA driver plugin. I get this error when I try to install it on a fresh copy of Unraid. quote:plugin: installing: nvidia-driver.plg After some googling it looks like the issue is that the machine doesn't have enough RAM. That could be the answer because I only have 4 GB right now, which is pretty small by modern standards. I was planning to buy more anyway, so I'll do that now. e: a skeleton posted:Sounds good, I grabbed these two sticks to try independently, since I was under budget thanks to your suggestion. Did you ever get these in and test them? I am in the market. SpartanIvy fucked around with this message at 02:48 on Jan 20, 2023 |
# ? Jan 20, 2023 02:44 |
|
Interested to find out if the RDIMMs work in the ML30, I'm still stuck away from home and haven't gotten to play with it at all yet.
|
# ? Jan 20, 2023 04:42 |
|
I bought the linked RDIMM so I'll be sure to post an update when it gets here if a skeleton doesn't beat me to it.
|
# ? Jan 20, 2023 04:46 |
|
SpartanIvy posted:
https://wiki.debian.org/rootfs is a ram disk.
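To make that concrete: on a system whose root is RAM-backed the way Unraid's is, "/" is carved out of memory, so a full root is really a RAM budget problem. A quick check, using standard util-linux and /proc (nothing Unraid-specific):

```shell
findmnt -no FSTYPE /                              # tmpfs/rootfs on a RAM-backed root
awk '/MemTotal/ { print $2, $3 }' /proc/meminfo   # total RAM is the real ceiling for such a root
```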
|
# ? Jan 20, 2023 05:30 |
|
I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and one more 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and a 3.5" 14 TB partial backup drive in addition to the NAS storage running off an LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times. Possible upgrades:
* Surplus 2U rackmount server with at least 12x 3.5" bays
* Surplus 2U rackmount DAS with at least 12x 3.5" bays that connects to the existing NAS via USB 3.0 or an external SATA/SAS RAID controller
* Some SMB-level offering from QNAP/Synology with at least 12x 3.5" bays
|
# ? Jan 20, 2023 05:56 |
|
SpartanIvy posted:I bought the linked RDIMM so I'll be sure to post an update when it gets here if a skeleton doesn't beat me to it. Doesn't look like it: quote:General memory population rules and guidelines I don't think this is just HPE being HPE either; I don't think any of the Xeon E3 line supports RDIMMs.
|
# ? Jan 20, 2023 06:14 |
|
Tatsujin posted:I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unforutately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and 3.5" 14 TB partial backup drive in addition to the NAS storage running off a LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times.
|
# ? Jan 20, 2023 14:31 |
|
IOwnCalculus posted:Doesn't look like it: These have Pentium G4400 CPUs, so maybe there's a chance?
|
# ? Jan 20, 2023 14:55 |
|
Less Fat Luke posted:I've been rebuilding my NAS and going from 8 to 16 drives in a Fractal Meshify 2 XL. You can fit like 18 3.5" drives in there, it's incredibly spacious. I suspect if you bought cheap cages instead of using their brackets you could squeeze even more out of it (or maybe even 3d print some mounting cages). Thanks. What would you recommend for an internal RAID controller and desktop power supply that can connect to that many drives? I get that I'd probably be getting some E-ATX board for that much storage.
|
# ? Jan 20, 2023 15:58 |
|
Tatsujin posted:I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and 3.5" 14 TB partial backup drive in addition to the NAS storage running off an LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times. What OS are you running? I assume software RAID since you're running a SAS HBA? I'd get 8x14-16TB, whatever is cheaper per TB, along with another 9211 from eBay, then migrate the data over from your old array. 6TB drives are probably old enough at this point that it's time to retire them anyway. Your current PSU will more than likely be able to power 16 drives (as long as it's >500W) and most m-ATX boards will have enough slots for two RAID controllers. Here's a shot from when I migrated servers, though I used 10GbE ethernet between machines instead of doing in-system copying: There's a fan behind the drives
|
# ? Jan 20, 2023 16:05 |
|
Do you guys start replacing drives when they hit a certain age, or wait until they start showing errors? My 8tb drives are creeping up on 5 years old now, and I don't have a plan in place.
|
# ? Jan 20, 2023 18:12 |
|
I usually try to retire drives after 5-6 years, or at least make sure it's not holding anything I care about. That said I generally fill an array in 2-3 years, so I get 2-3 years of backup duty out of a set of drives after I've phased them out of the "prod" array. Right now I have two (three) fileservers, 11x4TB (entire box being retired, it's an old dual X5675 setup, most drives have 5-6 years of runtime), 9x8TB (not re-assembled after my main fileserver got upgraded, have all the parts though), and an 8x14TB box that lives in my apartment.
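A rough way to put a number on drive age is SMART power-on hours rather than purchase date. A minimal sketch, assuming smartmontools is installed and using /dev/sda as a placeholder (the smartctl line is left commented since it needs a real disk and root):

```shell
# Convert SMART Power_On_Hours to approximate years (~8766 hours per year)
years_from_hours() { awk -v h="$1" 'BEGIN { printf "%.1f\n", h / 8766 }'; }

# On a real system (placeholder device; needs root and smartmontools):
#   hours=$(smartctl -A /dev/sda | awk '/Power_On_Hours/ { print $10 }')

years_from_hours 43830   # prints 5.0 (about five years of continuous spin)
```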
|
# ? Jan 20, 2023 18:35 |
|
SpartanIvy posted:Seems like a weird design choice for HPE, but whatever. I googled the weird 6-pin connector a lot today and discovered that it is indeed HPE proprietary. There are some people out there who have made Arduinos and circuit boards that can convert the 6-pin interface to a standard 4-pin fan connector, but the easier and cheaper solution is to just use one of the SATA power plugs available and power a normal fan with a power adapter. The 6-pin fan connector largely makes sense since they used to be two tiny fans strapped together.
|
# ? Jan 20, 2023 19:49 |
|
Enos Cabell posted:Do you guys start replacing drives when they hit a certain age, or wait until they start showing errors? My 8tb drives are creeping up on 5 years old now, and I don't have a plan in place. IMO a lot would depend on whether the current drives are enough space. If I wanted to have more storage, I'd start looking for sales on bigger drives ahead of any failures at that 5-6 year mark. Otherwise, the reason you have redundancy is to tolerate failures. Replace drives as they fail -- even at 6 years you can expect more than half of your drives to be ok. The main question is how critical your NAS is for day-to-day stuff. If a drive died, would it be very annoying to turn the NAS off for 3-4 days while you waited for a replacement drive to arrive? If so maybe buy a spare ahead of time to minimize downtime.
|
# ? Jan 20, 2023 20:42 |
|
i'm having a problem with my computer and idk where to post but here I made a btrfs raid10 array out of 14 external hard drives that I'm too lazy to shuck. I haven't used btrfs before. Before this I was using two big LVM logical volumes with a postgresql replica on the second one for "redundancy". I only use this array for postgresql. When the machine boots, dmesg shows a bunch of errors for each drive like this:
pre:
[    3.596932] usb-storage 2-2.1.3:1.0: USB Mass Storage device detected
[    3.597099] scsi host14: usb-storage 2-2.1.3:1.0
[    3.613177] scsi 10:0:0:0: Direct-Access WD easystore 264D 3012 PQ: 0 ANSI: 6
[    3.613552] scsi 10:0:0:1: Enclosure WD SES Device 3012 PQ: 0 ANSI: 6
[    3.620282] sd 10:0:0:0: Attached scsi generic sg7 type 0
[    3.620432] scsi 10:0:0:1: Attached scsi generic sg8 type 13
[    3.620490] sd 10:0:0:0: [sdg] Very big device. Trying to use READ CAPACITY(16).
[    3.620611] sd 10:0:0:0: [sdg] 15628052480 512-byte logical blocks: (8.00 TB/7.28 TiB)
[    3.620614] sd 10:0:0:0: [sdg] 4096-byte physical blocks
[    3.621882] sd 10:0:0:0: [sdg] Write Protect is off
[    3.621886] sd 10:0:0:0: [sdg] Mode Sense: 47 00 10 08
[    3.623103] sd 10:0:0:0: [sdg] No Caching mode page found
[    3.623111] sd 10:0:0:0: [sdg] Assuming drive cache: write through
[    3.629101] sd 10:0:0:0: [sdg] Attached SCSI disk
[    3.632653] scsi 9:0:0:1: Wrong diagnostic page; asked for 1 got 8
[    3.632664] scsi 9:0:0:1: Failed to get diagnostic page 0x1
[    3.632669] scsi 9:0:0:1: Failed to bind enclosure -19
[    3.634420] scsi 10:0:0:1: Wrong diagnostic page; asked for 1 got 8
[    3.634426] scsi 10:0:0:1: Failed to get diagnostic page 0x1
[    3.634429] scsi 10:0:0:1: Failed to bind enclosure -19
The fstab entry is:
pre:PARTUUID=293b3a6d-a7ac-4bff-86a9-0cba3d88f8b9 /mnt/array btrfs defaults 0 1
`btrfs check` says it's ok, and I was using these drives before without any issues, so I don't think it's the drives themselves. It also mounts fine if I do it manually.
I'm pretty sure it has something to do with starting 14 spinny drives over usb at once. I think they have plenty of power though, as they all use the included power adapter. I've got plans to use a single power supply or enclosure for all of them but it's cold and microcenter is far away. I think the solution is to put them into an actual enclosure, but like I said I'm lazy and I don't have one right now so idk if there's a way to make it work like this. e: Notably, I haven't had this issue with LVM/ext4. dougdrums fucked around with this message at 21:30 on Jan 20, 2023 |
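One possible stopgap while the array stays on USB, sketched with standard systemd fstab options rather than anything Unraid- or btrfs-specific (the 120s timeout is an example value, untested on this setup):

```
PARTUUID=293b3a6d-a7ac-4bff-86a9-0cba3d88f8b9 /mnt/array btrfs defaults,nofail,x-systemd.device-timeout=120s 0 0
```

`nofail` lets boot continue if the mount isn't ready, and the device timeout gives all 14 drives time to enumerate; btrfs also refuses to mount a raid10 until every member device has been registered (normally udev runs `btrfs device scan` for this). The last field is 0 because fsck ordering doesn't apply to btrfs.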
# ? Jan 20, 2023 21:27 |
|
raid10 array of 14 external hard drives
|
# ? Jan 20, 2023 22:04 |
|
yeah i know e: oh hah it's actually 16 too, i forgot i added two dougdrums fucked around with this message at 22:12 on Jan 20, 2023 |
# ? Jan 20, 2023 22:06 |
|
ML30 gang there's a bunch of used 4x8gb UDIMMS on ebay right now for $50
|
# ? Jan 20, 2023 22:08 |
|
Tatsujin posted:Thanks. What would you recommend for an internal RAID controller and desktop power supply that can connect to that many drives? I get that I'd probably be getting some E-ATX board for that much storage. So much room for activities! You'd want internal PCIe LSI HBA cards - 9211, 9240 and so on. I usually go on eBay and just search for "LSI IT mode" which are cards flashed already to initiator mode (where the card won't do any hardware RAID). There are lots of clones so make sure the seller has good ratings. If you need to expand, the gold standard are the Intel RES2SV240 cards - they can be powered by the PCIe slot or molex directly and have 6 ports (1 used for upstream). Edit: also for PSU honestly drives don't use that much but I went overkill and use an EVGA 1000W G3, mostly for the absolute plethora of SATA power cable connections it has. Less Fat Luke fucked around with this message at 22:18 on Jan 20, 2023 |
# ? Jan 20, 2023 22:16 |
|
e.pilot posted:ML30 gang Are they the non ECC ones from gwzllc2008? Would those even work since they're not ECC? https://www.ebay.com/itm/275604969575?hash=item402b561867:g:nWMAAOSwIqRjtYY~ e: returns offered by seller so I bought it to try SpartanIvy fucked around with this message at 22:41 on Jan 20, 2023 |
# ? Jan 20, 2023 22:25 |
|
Less Fat Luke posted:
Post a pic with everything cabled up
|
# ? Jan 20, 2023 22:28 |
|
Wibla posted:Post a pic with everything cabled up
|
# ? Jan 20, 2023 22:37 |
|
Klyith posted:IMO a lot would depend on whether the current drives are enough space. If I wanted to have more storage, I'd start looking for sales on bigger drives ahead of any failures at that 5-6 year mark. I'm getting close to needing to expand for storage reasons, so I think best bet will be to pick up a few externals as they go on sale and start swapping 8s for 14s over the next year or so. Fortunately with Unraid I can do that 1 drive at a time and not need to build a whole new pool. Less Fat Luke posted:
Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing.
|
# ? Jan 20, 2023 22:50 |
|
Enos Cabell posted:Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing. Don't have to print the whole serial number either I bought some SATA power cables that have the plugs in a string, but they "feed from the top", so it just became a mess. sigh.
|
# ? Jan 20, 2023 23:02 |
|
Enos Cabell posted:Do you guys start replacing drives when they hit a certain age, or wait until they start showing errors? My 8tb drives are creeping up on 5 years old now, and I don't have a plan in place. I'm in the same boat. I'm gradually replacing them with 16TB drives. I was originally minded to wait and do the replacements over the course of a couple weeks, but after a couple of 8TB drives started throwing errors, I went to this approach instead.
|
# ? Jan 20, 2023 23:41 |
|
Enos Cabell posted:Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing. I just keep a spreadsheet in Google Docs and use a grid arranged like the drive bays in my server / DAS, with a drive model number and serial number in each cell.
|
# ? Jan 21, 2023 00:55 |
|
IOwnCalculus posted:I just keep a spreadsheet in Google Docs and use a grid arranged like the drive bays in my server / DAS, with a drive model number and serial number in each cell.
|
# ? Jan 21, 2023 00:59 |