BlankSystemDaemon
Mar 13, 2009



Speaking of storage hardware, I got a bit of money back in taxes.
Which promptly got applied to a couple of used LSI 9207-8e's, a couple of SFF-8088 cables, an SFF-8088-to-8087 cable (apparently a spare, which I got for free), and another of those EMC KTN-STL3 chassis that I mentioned.

Hughlander
May 11, 2005

Tangentially NAS related...

Since I did some work in December, my NAS randomly reboots when under fairly high IO load. (Duplicacy finalizing a backup is the #1 trigger for it, but just copying a few TB around has also done it.) I strongly suspect the LSI adapter but am not 100% sure. There's never anything in the logs that I can see, and the first I notice is usually hearing the two console beeps of a reboot hitting the BIOS, by which time it's too late to IPMI in and look at what's going on. Any idea what I can do to narrow it down when it's super intermittent? (Twice in the last 2 days, had 25 days of uptime before that.)

I think I'm just going to source a new LSI card (since who knows how long that'll take to get here), flash it, and see if that fixes it.

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

Hughlander posted:

Tangentially NAS related...

Since I did some work in December, my NAS randomly reboots when under fairly high IO load. (Duplicacy finalizing a backup is the #1 trigger for it, but just copying a few TB around has also done it.) I strongly suspect the LSI adapter but am not 100% sure. There's never anything in the logs that I can see, and the first I notice is usually hearing the two console beeps of a reboot hitting the BIOS, by which time it's too late to IPMI in and look at what's going on. Any idea what I can do to narrow it down when it's super intermittent? (Twice in the last 2 days, had 25 days of uptime before that.)

I think I'm just going to source a new LSI card (since who knows how long that'll take to get here), flash it, and see if that fixes it.

About a year ago I had some similar symptoms on my Unraid box - spontaneous reboots, generally with some decent I/O load, clean logs.

While attempting to reduce variables during troubleshooting, I found that I couldn't reproduce the issue if I spun down 2 (unused) hard disks and ejected their trays. It didn't matter what position/adapter the 2 were attached to, so that eliminated a lot of variables for me. My root cause was ultimately a power supply that had been fine for about 2 years but appeared to be in the process of flaking out - replaced that and all has been well.

That whole thing was a giant pain in the rear end to figure out, so best of luck...

Hughlander
May 11, 2005

Fancy_Lad posted:

About a year ago I had some similar symptoms on my Unraid box - spontaneous reboots, generally with some decent I/O load, clean logs.

While attempting to reduce variables during troubleshooting, I found that I couldn't reproduce the issue if I spun down 2 (unused) hard disks and ejected their trays. It didn't matter what position/adapter the 2 were attached to, so that eliminated a lot of variables for me. My root cause was ultimately a power supply that had been fine for about 2 years but appeared to be in the process of flaking out - replaced that and all has been well.

That whole thing was a giant pain in the rear end to figure out, so best of luck...

Hmm, I think IPMI would tell me if there were any power irregularities. I have an external array as well as internal drives. If I remember right, 100% of one array is internal and 80% of the other is external. I can see if it always happens when there's only one in use...

H110Hawk
Dec 28, 2006

Hughlander posted:

Hmm, I think IPMI would tell me if there were any power irregularities. I have an external array as well as internal drives. If I remember right, 100% of one array is internal and 80% of the other is external. I can see if it always happens when there's only one in use...

Check if there is a SEL (System Event Log) or if it exports voltages. You might want to start graphing those. Either way, power issues can be hard to diagnose because they often happen in a way that your server cannot log.
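If you want a quick-and-dirty way to do that without standing up a whole monitoring stack, something like this works - just a rough sketch, assuming ipmitool is installed and can talk to the local BMC, with the log path and interval as placeholders:
code:
#!/usr/bin/env python3
# Minimal IPMI logger: dumps the SEL once, then appends voltage readings
# to a CSV so they can be graphed later. Assumes ipmitool is installed
# and can reach the local BMC (run as root or via sudo).
import csv
import subprocess
import time

LOGFILE = "/var/log/ipmi_voltages.csv"  # placeholder path
INTERVAL = 10                           # seconds between samples

# One-off: print the System Event Log to check for logged power events.
print(subprocess.run(["ipmitool", "sel", "list"],
                     capture_output=True, text=True).stdout)

with open(LOGFILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        out = subprocess.run(["ipmitool", "sensor"],
                             capture_output=True, text=True).stdout
        now = time.strftime("%Y-%m-%d %H:%M:%S")
        for line in out.splitlines():
            # ipmitool sensor rows look like: name | value | units | status | ...
            fields = [p.strip() for p in line.split("|")]
            if len(fields) > 2 and fields[2] == "Volts":
                writer.writerow([now, fields[0], fields[1]])
        f.flush()
        time.sleep(INTERVAL)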

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Hughlander posted:

Hmm, I think IPMI would tell me if there were any power irregularities. I have an external array as well as internal drives. If I remember right, 100% of one array is internal and 80% of the other is external. I can see if it always happens when there's only one in use...

Power supply issues are notoriously hard to diagnose via logs because their very nature often results in the system being unable to write to the log before it dies.

That said, "random hard reboots, no logs, can't reproduce reliably" are pretty much the hallmarks of a failing PSU, though of course there's no guarantee it's that. How old is yours, though?

KennyG
Oct 22, 2002
Here to blow my own horn.
It's been a minute since I've built a home server. I have just exhausted my 30TB of local storage and was looking to take some of that Mitch cash and sink it into a new server. Is Unraid the new way to go? I was looking to put it in a big 24-drive box which I had access to from work. Sorry, the OP was last updated 8 years ago, but I browsed the last few pages briefly.

I'm totally willing to roll my own ZFS box if it's better, but frankly $130 to outsource it for a GUI and something a bit more reliable seems totally worth it to me - if it doesn't suck.

Enos Cabell
Nov 3, 2004


Unraid kicks all sorts of rear end if you don't mind putting the hardware together yourself.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
FreeNAS is another option. I've got 2 systems running it.
1. At home, Lenovo TS440, 24GB, E3-1225v3, 8x 6TB
2. At work, Lenovo TS430, 32GB, E3-1225v2, 8x10TB

I haven't had issues other than boot drive corruption once; now I boot from an SSD instead of a USB drive and have had no more problems.

Hughlander
May 11, 2005

DrDork posted:

Power supply issues are notoriously hard to diagnose via logs because their very nature often results in the system being unable to write to the log before it dies.

That said, "random hard reboots, no logs, can't reproduce reliably" are pretty much the hallmarks of a failing PSU, though of course there's no guarantee it's that. How old is yours, though?

Heh, I was going to say 4 years, but the answer is almost 6. 10/2014: CORSAIR RM Series RM750 750W ATX12V v2.31 and EPS 2.92 80 PLUS GOLD Certified Full Modular Active PFC Power Supply

Trip report though: I fixed up a problem I had with Prometheus and set it to have an exporter for the IPMI of both the Supermicro that's dying (a different machine than the one Prometheus is running on) *and* the IPMI of the ASRocks. Of course the collection is only like every 2 minutes, so I doubt that will do anything. I also added some logging to the backup to figure out which zpool it was on when the reset happened.

I'll monitor it for a while still and see what happens.
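If the 2-minute scrape ends up being too coarse, a dumber but tighter-grained option is a little watchdog that appends a zpool activity snapshot every couple of seconds, so the tail of the file after a surprise reboot shows which pool was busy. A rough sketch - the log path is a placeholder, and it should live on a disk that isn't behind the suspect controller:
code:
#!/usr/bin/env python3
# Append a timestamped zpool iostat snapshot every few seconds. After a
# surprise reboot, the last lines show which pool was under load.
import os
import subprocess
import time

LOGFILE = "/root/zpool_activity.log"  # placeholder; keep it off the suspect HBA
INTERVAL = 2                          # seconds between snapshots

while True:
    # -H: tab-separated, no headers; "1 2" = two samples 1s apart, so the
    # second sample reflects current activity rather than since-boot averages.
    out = subprocess.run(["zpool", "iostat", "-H", "1", "2"],
                         capture_output=True, text=True).stdout
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(LOGFILE, "a") as f:
        for line in out.strip().splitlines():
            f.write(f"{stamp}\t{line}\n")
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hits disk before a hard reset
    time.sleep(INTERVAL)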

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

KennyG posted:

It's been a minute since I've built a home server. I have just exhausted my 30TB of local storage and was looking to take some of that Mitch cash and sink it into a new server. Is Unraid the new way to go? I was looking to put it in a big 24-drive box which I had access to from work. Sorry, the OP was last updated 8 years ago, but I browsed the last few pages briefly.

I'm totally willing to roll my own ZFS box if it's better, but frankly $130 to outsource it for a GUI and something a bit more reliable seems totally worth it to me - if it doesn't suck.

Go Unraid if you want to be able to add arbitrary single drives later. Go ZFS if you would be very upset if your data died and you have little/no intention of expanding it by single drives in the future.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Hughlander posted:

Heh, I was going to say 4 years, but the answer is almost 6. 10/2014: CORSAIR RM Series RM750 750W ATX12V v2.31 and EPS 2.92 80 PLUS GOLD Certified Full Modular Active PFC Power Supply

Warranty is 5 years. If you bought it with a credit card, check and see if they offer any sort of extended warranty. If they do, might as well go ahead and use it.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

DrDork posted:

expanding it by single drives in the future.

This is the most irritating thing about ZFS for me.

A few weeks ago I posted about my plan to revamp the stupid raidz1x10 disk pool I had. I ended up splitting it into four raidz1 3-drive pools (by adding another disk) mainly so it's easier to expand storage later. In an ideal world where I was made of money I'd prefer 5-drive raidz2 pools, but it's less of a hit on the wallet to be able to expand by three drives at a time.
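For anyone weighing the same tradeoff, the back-of-the-envelope math looks like this (a rough sketch that ignores ZFS metadata/slop overhead, assumes equal-size disks, and uses a made-up 10TB disk size; the numbers work out the same whether the 3-drive groups are separate pools or vdevs in one pool):
code:
#!/usr/bin/env python3
# Rough usable-capacity comparison for the layouts discussed above.
# Ignores ZFS metadata/slop overhead; DISK_TB is just an example size.
DISK_TB = 10

def raidz_usable(disks_per_vdev, parity, vdevs, disk_tb=DISK_TB):
    """Usable TB for `vdevs` raidz groups of `disks_per_vdev` disks each."""
    return vdevs * (disks_per_vdev - parity) * disk_tb

layouts = {
    "4x 3-disk raidz1 (12 disks, expand 3 at a time)": raidz_usable(3, 1, 4),
    "2x 5-disk raidz2 (10 disks, expand 5 at a time)": raidz_usable(5, 2, 2),
    "1x 10-disk raidz1 (10 disks, all-or-nothing)":    raidz_usable(10, 1, 1),
}
for name, tb in layouts.items():
    print(f"{name}: ~{tb} TB usable before overhead")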

Hughlander
May 11, 2005

K, just happened again while finalizing a backup on the internal array and also watching a TV show on Plex. Didn't get to the IPMI console till it finished rebooting; I think I'll try the video tomorrow. Did see a small bump in temperature just before the reboot, but everything was in normal ranges.

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
For weird reboots without something like crashes or corresponding temp spikes I usually start thinking power.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

H2SO4 posted:

For weird reboots without something like crashes or corresponding temp spikes I usually start thinking power.

N'thing this. It's highly likely it's power. Could be a loose connector, but the supply is much more likely. I'd check the main 24-pin connector as well as the aux connectors, and ditto for any cards with their own power connections and all the drives. Do any drives have the 3.3v tape mod?

KennyG
Oct 22, 2002
Here to blow my own horn.

DrDork posted:

Go Unraid if you want to be able to add arbitrary single drives later. Go ZFS if you would be very upset if your data died and you have little/no intention of expanding it by single drives in the future.

So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable?

I'm not doing this for work or I'd call my Isilon rep. This is definitely a homebrew situation so it's not ZOMG my databases :ohdear: but 20 years of high-res photos and lots of other :filez: take a long time to restore from backup wherever they may be (and may not be possible in some scenarios).

So FreeNAS then?

Corb3t
Jun 7, 2003



I wish there was a plug-in in Unraid that let me auto-balance media files whenever I add a new drive.

Unbalance works well enough, I guess. I probably should have picked up a second 12 TB drive last week.

Corb3t fucked around with this message at 16:35 on Apr 10, 2020

IOwnCalculus
Apr 2, 2003





KennyG posted:

So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable?

I'm not doing this for work or I'd call my Isilon rep. This is definitely a homebrew situation so it's not ZOMG my databases :ohdear: but 20 years of high-res photos and lots of other :filez: take a long time to restore from backup wherever they may be (and may not be possible in some scenarios).

So FreeNAS then?

I prefer rolling my own ZFS on Ubuntu, but I've had ZFS save me from losing the entire array to multiple drive failures twice now. Unless you have two drives fail completely offline (versus just certain unreadable blocks) ZFS will still do its best to keep the array online and will flag the files it knows to be corrupted.

You'll still want to think about how you're laying out the vdevs to reduce the future expense of any drive replacements, since it does support replacing all drives of a single vdev and expanding that way.
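For reference, the replace-and-grow dance is basically one zpool replace per disk with a resilver in between, then letting autoexpand pick up the new space. A rough sketch of how that could be scripted - the pool and device names are made up, and in practice you'd eyeball zpool status between steps rather than trusting a dumb loop:
code:
#!/usr/bin/env python3
# Sketch of growing a raidz vdev by swapping every disk for a bigger one.
# Pool and device paths below are placeholders; adapt before trusting it.
import subprocess
import time

POOL = "tank"  # hypothetical pool name
SWAPS = [      # (old device, new bigger device) - placeholders
    ("/dev/disk/by-id/ata-OLD1", "/dev/disk/by-id/ata-NEW1"),
    ("/dev/disk/by-id/ata-OLD2", "/dev/disk/by-id/ata-NEW2"),
    ("/dev/disk/by-id/ata-OLD3", "/dev/disk/by-id/ata-NEW3"),
]

def resilver_done():
    out = subprocess.run(["zpool", "status", POOL],
                         capture_output=True, text=True).stdout
    return "resilver in progress" not in out

# Let the pool grow on its own once every disk in the vdev is bigger.
subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

for old, new in SWAPS:
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    while not resilver_done():   # one disk at a time; wait out each resilver
        time.sleep(60)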

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

KennyG posted:

So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable?

I'm not doing this for work or I'd call my Isilon rep. This is definitely a homebrew situation so it's not ZOMG my databases :ohdear: but 20 years of high-res photos and lots of other :filez: take a long time to restore from backup wherever they may be (and may not be possible in some scenarios).

So FreeNAS then?

ZFS has the highest level of durability that I'm aware of, between its internal design, scrubbing, and ability to identify specific failed files during a rebuild.

FreeNAS is a good option, especially if all you want is a basic fileserver that doesn't do much else. If you really want to get deep with plug-ins, dockers, VMs, or whatever also running on there, it can be a bit of a pain. But for files + Plex + torrents it works well (though the Plex plugin is usually behind by a few versions, if you care much about that).

If you do want to do more than what's available via plug-ins, doing something like Ubuntu with ZOL might be a better option for your sanity. ESXi or a similar bare-metal hypervisor also works well with FreeNAS, as long as you have an add-in LSI card or similar that you can pass through to FreeNAS in its entirety. That's how I have my current setup running: ESXi with an LSI card passed through to FreeNAS, and then a pair of SSDs held by ESXi as scratch space and storage for my other VMs that I don't much care about if they die via SSD failure. Getting the permissions figured out was interesting, but it all works now, and performance and sanity are better than trying to deal with VMs/non-plug-in jails layered on top of FreeNAS itself.

DrDork fucked around with this message at 16:45 on Apr 10, 2020

KS
Jun 10, 2003
Outrageous Lumpwad


Happy 7th (!) birthday to my NAS which started life as a scrapped DL380 G6. It runs ESXi and a half dozen VMs for plex/sonarr/transmission, including a xigmanas VM with hw passthrough for ZFS. It's silent, lives in a closet, and has IPMI, which is great. 7 years of trouble free operation with the exception that the first drives I put in it were ST3000DM001s. 5 died in the first year. The 6th is still running.

Antec P280 with Seasonic X650
Supermicro X8DTL-iF-O
Dual Xeon E5540 with Noctua NH-U12DX
48GB ECC
M1015 with 6x 3TB drives
some other LSI 9208 board with 2x 512GB SSDs, RAID-1, for VM storage.

My storage growth rate has increased recently. The array's nearly full and I'm wondering what to do next. I could just swap the drives to 12TB in the same pool, but it'd be cool to move to an 8-drive array. The processor's also old enough that it's off VMware's HCL and can't run ESXi 6.7, and it's not very power efficient to be running dual CPUs by modern standards.

Feels like ESXi is getting a bit old fashioned and I'd welcome the chance to do less CJing of VMs in favor of containers, but ZFS has saved me a few times and I don't want to give that up. Any recommendations for the next 7 years?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

KS posted:



Happy 7th (!) birthday to my NAS which started life as a scrapped DL380 G6. It runs ESXi and a half dozen VMs for plex/sonarr/transmission, including a xigmanas VM with hw passthrough for ZFS. It's silent, lives in a closet, and has IPMI, which is great. 7 years of trouble free operation with the exception that the first drives I put in it were ST3000DM001s. 5 died in the first year. The 6th is still running.

Antec P280 with Seasonic X650
Supermicro X8DTL-iF-O
Dual Xeon E5540 with Noctua NH-U12DX
48GB ECC
M1015 with 6x 3TB drives
some other LSI 9208 board with 2x 512GB SSDs, RAID-1, for VM storage.

My storage growth rate has increased recently. The array's nearly full and I'm wondering what to do next. I could just swap the drives to 12TB in the same pool, but it'd be cool to move to an 8-drive array. The processor's also old enough that it's off VMware's HCL and can't run ESXi 6.7, and it's not very power efficient to be running dual CPUs by modern standards.

Feels like ESXi is getting a bit old fashioned and I'd welcome the chance to do less CJing of VMs in favor of containers, but ZFS has saved me a few times and I don't want to give that up. Any recommendations for the next 7 years?

Spin up a RancherOS instance and start dockerizing all those VMs.

CopperHound
Feb 14, 2012

I could use some insight as to why I am getting terrible SMB speeds between my laptop and Unraid system. Right now transfers of large files over SMB are going at about 10Mbit, but if I transfer the same file with FTP it saturates the wireless network at 300Mbit.

I don't think I have any system scan shenanigans going on, because CPU usage stays low on both ends.

Hughlander
May 11, 2005

KennyG posted:

So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable?

I'm not doing this for work or I'd call my Isilon rep. This is definitely a homebrew situation so it's not ZOMG my databases :ohdear: but 20 years of high-res photos and lots of other :filez: take a long time to restore from backup wherever they may be (and may not be possible in some scenarios).

So FreeNAS then?

I'll throw my hat in and shill for Proxmox. It's Debian + ZOL + GUIs, designed for small-to-midsized companies doing in-house datacenters and VMs, so it can also do clustering. I run two nodes here: my main NAS (the one that's been failing up-thread) with 2 zpools of 20TB and 96TB, and the other with just a single mirrored 1TB zpool to handle VM/docker hosting. What I like the most about it is that it's mostly turnkey and has been able to grow with me over time.

My previous solution was the first node with only the 20TB pool, on ESXi, passing through the onboard LSI to FreeNAS along with 16GB of RAM, and then using the rest of the RAM for Ubuntu+Docker. After a few years of that, when I added the other zpool, I installed Proxmox, imported the pool from FreeNAS, and ran Docker in an LXC. This Christmas I added the 2nd node with 128GB of memory, running Docker straight on Proxmox. The whole thing is really smooth.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Proxmox is neat.

I just got Terraform working with it and a cloud-init template of CentOS, so with a few lines of code and one command I can deploy machines in less than 20 seconds.

Hughlander
May 11, 2005

Matt Zerella posted:

Proxmox is neat.

I just got Terraform working with it and a cloud-init template of CentOS, so with a few lines of code and one command I can deploy machines in less than 20 seconds.

Nice, I played a bit with setting up Ansible for it last week since I have bad memories of Terraform from 5 years ago. I should go back to it.

Trip report: Amazon and Newegg were out of power supplies, but the local Best Buy had the one I wanted in stock. Ordered from the website at 8:30AM, delivered before 11:30AM, put into the system by 12:30, torture testing it now. Its peak temperature is 10C lower than it was on the other PSU, but I'm not watching a Plex movie, so :iiam:
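In case anyone wants to reproduce the heavy-IO trigger on demand instead of waiting for a real backup run: fio is the proper tool, but here's a zero-dependency rough sketch that just hammers a pool with parallel writes and read-backs (the target path and sizes are placeholders):
code:
#!/usr/bin/env python3
# Crude parallel write/read-back load generator for provoking IO-triggered
# crashes. TARGET should sit on the pool under suspicion.
import os
import threading

TARGET = "/tank/stress"   # placeholder directory on the suspect pool
FILE_MB = 1024            # size of each worker's file, in MiB
WORKERS = 8
CHUNK = 1024 * 1024       # 1 MiB per write

def worker(idx):
    path = os.path.join(TARGET, f"stress_{idx}.bin")
    buf = os.urandom(CHUNK)
    while True:
        with open(path, "wb") as f:          # sequential write pass
            for _ in range(FILE_MB):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        with open(path, "rb") as f:          # read it all back
            while f.read(CHUNK):
                pass

os.makedirs(TARGET, exist_ok=True)
threads = [threading.Thread(target=worker, args=(i,), daemon=True)
           for i in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:   # runs until Ctrl-C
    t.join()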

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Hughlander posted:

Nice, I played a bit with setting up Ansible for it last week since I have bad memories of Terraform from 5 years ago. I should go back to it.

Trip report: Amazon and Newegg were out of power supplies, but the local Best Buy had the one I wanted in stock. Ordered from the website at 8:30AM, delivered before 11:30AM, put into the system by 12:30, torture testing it now. Its peak temperature is 10C lower than it was on the other PSU, but I'm not watching a Plex movie, so :iiam:

Heads up, Jeff Geerling's Ansible books are currently free on Leanpub. I bought them and don't regret it, but if you're looking for some good books to read since we're all living in a coronavirus stay-at-home world, it's a pretty awesome move on his part to do this.

The Kubernetes book is only half done.

Neslepaks
Sep 3, 2003

I just set up some playbooks for creating and removing containers on my proxmox. Works well.

Atomizer
Jun 24, 2007



DrDork posted:

FWIW I've been shucking drives for DIY NASes for years (including rebuilding one a few months ago) and have never had to gently caress with a 3.3v pin.

This is only an issue with a newer drive and an older PSU, i.e. the PSU is still supplying 3.3V on the pin that newer drives repurpose as a power-disable line.

Charles posted:

Why are USB drives cheaper? I still don't get it.

It's part warranty and part market segmentation. The bare drives (equivalent to the ones generally discussed in this thread) are intended for non-consumer use (NAS, enterprise, etc.), and the manufacturers charge more for them because they can (kind of like how products intended for certain industries, e.g. healthcare/medical, carry a price premium). It just so happens that they also tend to throw the same basic drives into USB enclosures because they make a ton of them anyway, but then they charge less for those because the warranty is shorter and because they need average consumers to be able to buy them.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Got my 12TB WD drive yesterday from the $180 deal. Can't wait for the three damned days of pre-clear to work through so I can actually use it. Then I get to wait to transfer the parity over to it, which I'm sure I'll gently caress up and then have to wait another day to rebuild parity.

Thankfully I still have ~1.2TB free.



I always have a faint worry that running the drive for that long is actually bad for it, despite it being a health check, since it's running non-stop for so long. I'm sure that's an unfounded concern.

H110Hawk
Dec 28, 2006

TraderStav posted:

I always have a faint worry that running the drive for that long is actually bad for it, despite it being a health check, since it's running non-stop for so long. I'm sure that's an unfounded concern.

It's fine. If the drive can't take being read front to back right out of the box it's a dud. You aren't taking any life off it. Your random reads later are what is going to actually consume useful life.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

H110Hawk posted:

It's fine. If the drive can't take being read front to back right out of the box it's a dud. You aren't taking any life off it. Your random reads later are what is going to actually consume useful life.

And power cycles. Drives are meant to be used.

Former Human
Oct 15, 2001

KS posted:

Antec P280

I could just swap the drives to 12TB in the same pool, but it'd be cool to move to an 8-drive array.

I have the same case - be advised that the hard drive trays are not designed for newer large-capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently, and I have only two of the four holes screwed into the tray. I haven't been able to find a replacement tray that both works in the case's drive cage and has the right hole placement.

There are two options: zip tie part of the drive tightly to the tray to minimize vibration, or get a 3.5" to 5.25" adapter and install the drive in the optical bay.

If anyone in the thread knows of updated/universal trays that fit the Antec P280 I'd love the help.

H110Hawk
Dec 28, 2006

Former Human posted:

I have the same case - be advised that the hard drive trays are not designed for newer large-capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently, and I have only two of the four holes screwed into the tray. I haven't been able to find a replacement tray that both works in the case's drive cage and has the right hole placement.

There are two options: zip tie part of the drive tightly to the tray to minimize vibration, or get a 3.5" to 5.25" adapter and install the drive in the optical bay.

If anyone in the thread knows of updated/universal trays that fit the Antec P280 I'd love the help.

Velcro straps. 2 holes is plenty.

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
Are the screw holes in different locations, or is the drive thicker, or something else?

Former Human
Oct 15, 2001

The screw holes on the bottom of 8TB and larger drives are spaced further apart.

Henrik Zetterberg
Dec 7, 2007

sharkytm posted:

And power cycles. Drives are meant to be used.

I always have my Unraid set to spin the drives down after 30 mins of inactivity or whatever to save power. I've only had 1-2 drives fail in 10y or so. Posts like this make me consider leaving them spun up at all times, even though our usage (movies and stuff) is very cyclical and predictable.

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
I have a Synology running DSM 6.2.2; is there a way to back up the NAS OS itself? I have set up a few docker containers that I would like to be able to restore as they are if something were to happen to my NAS or if I migrated to a new device.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

Incessant Excess posted:

I have a Synology running DSM 6.2.2; is there a way to back up the NAS OS itself? I have set up a few docker containers that I would like to be able to restore as they are if something were to happen to my NAS or if I migrated to a new device.

As far as I understand it, on a Synology the OS partition is mirrored across each and every disk in the array regardless of size, pool, volume, what have you - you'd have to have a 100% disk failure for it to not actually boot.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Former Human posted:

I have the same case - be advised that the hard drive trays are not designed for newer large-capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently, and I have only two of the four holes screwed into the tray. I haven't been able to find a replacement tray that both works in the case's drive cage and has the right hole placement.

There are two options: zip tie part of the drive tightly to the tray to minimize vibration, or get a 3.5" to 5.25" adapter and install the drive in the optical bay.

If anyone in the thread knows of updated/universal trays that fit the Antec P280 I'd love the help.

Could you drill new holes in the correct spots?
