Speaking of storage hardware, I got a bit of money back in taxes, which promptly got applied to a couple of used LSI 9207-8e's, a couple of SFF-8088 cables, an SFF-8088-to-8087 cable (which was apparently a spare, and I got for free), and another of those EMC KTN-STL3 chassis that I mentioned.
|
# ? Apr 8, 2020 18:42 |
Tangentially NAS related... Since I did some work in December, my NAS randomly reboots under fairly high I/O load. (Duplicacy finalizing a backup is the #1 trigger for it, but just copying a few TB around has also done it.) I strongly suspect the LSI adapter but am not 100% sure. There's never anything in the logs that I can see, and my first notice is usually hearing the two console beeps of a reboot hitting the BIOS, by which time it's too late to IPMI in and look at what's going on. Any idea how I can narrow it down when it's super intermittent? (Twice in the last 2 days, after 25 days of uptime before that.) I think I'm just going to source a new LSI card, since who knows how long that'll take to get here, flash it, and see if that fixes it.
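Not a fix, but one cheap diagnostic on a systemd-based distro is making the journal persistent, so after each reset you can check the tail of the previous boot with `journalctl -b -1 -e` and at least confirm whether anything was written in the seconds before it died. A minimal sketch, assuming a systemd distro:

```ini
# /etc/systemd/journald.conf -- keep logs across reboots
# (also create /var/log/journal if it doesn't already exist,
#  then restart systemd-journald)
[Journal]
Storage=persistent
```

Even an empty previous-boot log is information: it points away from a kernel panic and toward power or hardware cutting out.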
|
# ? Apr 9, 2020 18:15 |
|
Hughlander posted:Tangentially NAS related... About a year ago I had similar symptoms on my Unraid box: spontaneous reboots, generally with some decent I/O load, clean logs. While attempting to reduce variables during troubleshooting, I found that I couldn't reproduce the issue if I spun down 2 (unused) hard disks and ejected their trays. It didn't matter what position/adapter the 2 were attached to, so that eliminated a lot of variables for me. My root cause was ultimately a power supply that had been fine for about 2 years but was in the process of flaking out; replaced that and all has been well. The whole thing was a giant pain in the rear end to figure out, so best of luck...
|
# ? Apr 9, 2020 19:28 |
|
Fancy_Lad posted:About a year ago I had some similar symptoms on my Unraid box - spontaneous reboots, generally with some decent I/O load, clean logs. Hmm, I think IPMI would tell me if there were any power irregularities. I have an external array as well as internal drives. If I remember right, 100% of one array is internal and 80% of the other is external. I can see if it always happens when there's only one in use...
|
# ? Apr 9, 2020 19:38 |
|
Hughlander posted:Hmm I think IPMI would tell me if there was any power irregularities. I have an external array as well as internal drives. Check if there is an SEL (System Event Log) or if it exports voltages; you might want to start graphing those. Either way, power issues can be hard to diagnose because they often happen in a way that your server cannot log.
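For the SEL/voltage-graphing suggestion, a rough `ipmitool` sketch (assuming ipmitool is installed and the BMC is reachable; the awk field positions match typical pipe-separated `ipmitool sensor` output, which can vary by BMC):

```shell
#!/bin/sh
# Sketch: dump the BMC event log and scrape voltage rails for graphing.

# Turn `ipmitool sensor` output (pipe-separated columns) into "name value"
# pairs for the rows whose unit column says Volts.
volts() {
    awk -F'|' '$3 ~ /Volts/ { gsub(/ /, "", $1); gsub(/ /, "", $2); print $1, $2 }'
}

if command -v ipmitool >/dev/null 2>&1; then
    ipmitool sel elist       # System Event Log: power faults often land here
    ipmitool sensor | volts  # current voltage readings, ready to graph
fi
```

Feeding the `volts` output into whatever collector you already run (node_exporter textfile, cron + RRD, etc.) gives you a voltage history to line up against the reset times.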
|
# ? Apr 9, 2020 21:00 |
|
Hughlander posted:Hmm I think IPMI would tell me if there was any power irregularities. I have an external array as well as internal drives. Power supply issues are notoriously hard to diagnose via logs because their very nature often results in the system being unable to write to the log before it dies. That said, "random hard reboots, no logs, can't reproduce reliably" is pretty much the hallmark of a failing PSU, though of course there's no guarantee it's that. How old is yours?
|
# ? Apr 9, 2020 21:17 |
|
It's been a minute since I've built a home server. I have just exhausted my 30TB of local storage and was looking to take some of that Mitch cash and sink it into a new server. Is Unraid the new way to go? I was looking to put it in a big 24-drive box I have access to from work. Sorry, the OP was last updated 8 years ago, but I browsed the last few pages briefly. I'm totally willing to roll my own ZFS box if it's better, but frankly $130 to outsource it for a GUI and something a bit more reliable seems totally worth it to me - if it doesn't suck.
|
# ? Apr 9, 2020 22:24 |
|
Unraid kicks all sorts of rear end if you don't mind putting the hardware together yourself.
|
# ? Apr 9, 2020 22:32 |
|
FreeNAS is another option. I've got 2 systems running it: (1) at home, a Lenovo TS440, 24GB, E3-1225v3, 8x 6TB; (2) at work, a Lenovo TS430, 32GB, E3-1225v2, 8x 10TB. I haven't had issues other than boot-drive corruption once; now I boot from an SSD instead of a USB drive, no more problems.
|
# ? Apr 9, 2020 23:25 |
|
DrDork posted:Power supply issues are notoriously hard to diagnose via logs because their very nature often results in the system being unable to write to log before it dies. Heh, I was going to say 4 years, but the answer is almost 6. 10/2014: CORSAIR RM Series RM750 750W ATX12V v2.31 and EPS 2.92 80 PLUS GOLD Certified Full Modular Active PFC Power Supply. Trip report, though: I fixed up a problem I had with Prometheus and set it to have an exporter for the IPMI of both the Supermicro that's dying (different machine than where Prometheus is running) *and* the IPMI of the ASRocks. Of course the collection is only every 2 minutes or so, so I doubt that will do anything. I also added some logging to the backup to figure out which zpool it was on when the reset happened. I'll monitor it for a while and see what happens.
|
# ? Apr 10, 2020 01:22 |
|
KennyG posted:It's been a minute since I've built a home server. I have just exhausted my 30T of local storage and was looking to take some of that Mitch cash and sink it into a new server. Is unraid the new way to go? I was looking to put it in a big 24 drive box which I had access to from work. Sorry, the OP was last updated 8 years ago but I browsed the last few pages briefly. Go Unraid if you want to be able to add arbitrary single drives later. Go ZFS if you would be very upset if your data died and you have little/no intention of expanding it by single drives in the future.
|
# ? Apr 10, 2020 01:36 |
|
Hughlander posted:Heh, I was going to say 4 years, but the answer is almost 6. 10/2014: CORSAIR RM Series RM750 750W ATX12V v2.31 and EPS 2.92 80 PLUS GOLD Certified Full Modular Active PFC Power Supply. The warranty is 5 years. If you bought it with a credit card, check and see if the card offers any sort of extended warranty. If it does, might as well go ahead and use it.
|
# ? Apr 10, 2020 01:38 |
|
DrDork posted:expanding it by single drives in the future. This is the most irritating thing about ZFS for me. A few weeks ago I posted about my plan to revamp the stupid 10-disk raidz1 pool I had. I ended up splitting it into four 3-drive raidz1 pools (by adding another disk), mainly so it's easier to expand storage later. In an ideal world where I was made of money I'd prefer 5-drive raidz2 pools, but it's less of a hit on the wallet to be able to expand by three drives at a time.
|
# ? Apr 10, 2020 03:08 |
|
K, just happened again while finalizing a backup on the internal array and also watching a TV show on Plex. Didn't get to the IPMI console till it finished rebooting; I think I'll try the video tomorrow. Did see a small bump in temperature just before the reboot, but everything was in normal ranges.
|
# ? Apr 10, 2020 05:44 |
|
For weird reboots without something like crashes or corresponding temp spikes I usually start thinking power.
|
# ? Apr 10, 2020 15:50 |
|
H2SO4 posted:For weird reboots without something like crashes or corresponding temp spikes I usually start thinking power. N'thing this. It's highly likely it's power. Could be a loose connector, but the supply is much more likely. I'd check the main 24-pin connector as well as the aux connectors; ditto if you have any cards with their own power connections, and all the drives. Do any drives have the 3.3v tape mod?
|
# ? Apr 10, 2020 15:59 |
|
DrDork posted:Go Unraid if you want to be able to add arbitrary single drives later. Go ZFS if you would be very upset if your data died and you have little/no intention of expanding it by single drives in the future. So if I'm filling a case on day 1 (no real ability to add drives), ZFS/FreeNAS is more reliable? I'm not doing this for work or I'd call my Isilon rep. This is definitely a homebrew situation, so it's not ZOMG-my-databases, but 20 years of high-res photos and lots of other stuff that would take a long time to restore from backup, wherever that may be (and may not be possible in some scenarios). So FreeNAS, then?
|
# ? Apr 10, 2020 16:28 |
|
I wish there was a plug-in in Unraid that let me auto-balance media files whenever I add a new drive. Unbalance works well enough, I guess. I probably should have picked up a second 12 TB drive last week. Corb3t fucked around with this message at 16:35 on Apr 10, 2020 |
# ? Apr 10, 2020 16:32 |
|
KennyG posted:So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable? I prefer rolling my own ZFS on Ubuntu, but I've had ZFS save me from losing the entire array to multiple drive failures twice now. Unless you have two drives fail completely offline (versus just certain unreadable blocks) ZFS will still do its best to keep the array online and will flag the files it knows to be corrupted. You'll still want to think about how you're laying out the vdevs to reduce the future expense of any drive replacements, since it does support replacing all drives of a single vdev and expanding that way.
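That replace-and-expand path can be scripted. A dry-run sketch, where the pool name "tank" and the sdX device names are made up, and the function only *prints* the zpool commands so nothing here touches a real pool:

```shell
#!/bin/sh
# Dry-run sketch of growing a pool by swapping every disk in one vdev
# for a bigger one. Drop the echos to run it for real.
grow_vdev() {
    pool=$1
    shift
    # autoexpand lets the vdev grow once the last small disk is replaced
    echo zpool set autoexpand=on "$pool"
    for disk in "$@"; do
        # -w waits for the resilver (newer OpenZFS); on older releases,
        # poll `zpool status` between replacements instead
        echo zpool replace -w "$pool" "$disk" "${disk}-new"
    done
}

grow_vdev tank sda sdb sdc
```

The one-disk-at-a-time loop matters: the vdev is running degraded-redundancy during each resilver, so you wait for each replacement to finish before starting the next.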
|
# ? Apr 10, 2020 16:38 |
|
KennyG posted:So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable? ZFS has the highest level of durability that I'm aware of, between its internal design, scrubbing, and ability to identify specific failed files during a rebuild. FreeNAS is a good option, especially if all you want is a basic fileserver that doesn't do much else. If you really want to get deep with plug-ins, dockers, VMs, whatever also running on there, it can be a bit of a pain. But for files + Plex + torrents it works well (though the Plex plugin is usually behind by a few versions, if you care much about that). If you do want to do more than what's available via plug-ins, doing something like Ubuntu with ZOL might be a better option for your sanity. ESXi or a similar bare-metal hypervisor also works well with FreeNAS as long as you have an add-in LSI card or similar that you can pass through to FreeNAS in its entirety. That's how my current setup runs: ESXi with an LSI card passed through to FreeNAS, and then a pair of SSDs held by ESXi as scratch space and storage for my other VMs that I don't much care about if they die via SSD failure. Getting the permissions figured out was interesting, but it all works now, and performance and sanity are better than trying to deal with VMs/non-plug-in jails layered on top of FreeNAS itself. DrDork fucked around with this message at 16:45 on Apr 10, 2020 |
# ? Apr 10, 2020 16:40 |
|
Happy 7th (!) birthday to my NAS, which started life as a scrapped DL380 G6. It runs ESXi and a half dozen VMs for plex/sonarr/transmission, including a xigmanas VM with HW passthrough for ZFS. It's silent, lives in a closet, and has IPMI, which is great. 7 years of trouble-free operation, with the exception that the first drives I put in it were ST3000DM001s: 5 died in the first year; the 6th is still running.
Antec P280 with Seasonic X650
Supermicro X8DTL-iF-O
Dual Xeon E5540 with Noctua NH-U12DX
48GB ECC
M1015 with 6x 3TB drives
some other LSI 9208 board with 2x 512GB SSDs, RAID-1, for VM storage
My storage growth rate has increased recently. The array's nearly full and I'm wondering what to do next. I could just swap the drives to 12TB in the same pool, but it'd be cool to move to an 8-drive array. The processor's also old enough that it's off VMware's HCL and can't run ESXi 6.7, and it's not very power efficient to be running dual CPUs by modern standards. Feels like ESXi is getting a bit old-fashioned and I'd welcome the chance to do less CJing of VMs in favor of containers, but ZFS has saved me a few times and I don't want to give that up. Any recommendations for the next 7 years?
|
# ? Apr 10, 2020 17:12 |
|
KS posted:
Spin up a RancherOS instance and start dockerizing all those VMs.
|
# ? Apr 10, 2020 17:16 |
|
I could use some insight as to why I am getting terrible SMB speeds between my laptop and Unraid system. Right now, transfers of large files over SMB are going at about 10Mbit, but if I transfer the same file with FTP it saturates the wireless network at 300Mbit. I don't think I have any system-scan shenanigans going on, because CPU usage remains low on both ends.
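A few smb.conf settings commonly suggested for large sequential transfers; these are stock Samba options (Unraid exposes an extra-config box for the [global] section under SMB settings), but treat them as things to A/B test rather than guaranteed fixes, since defaults vary by Samba version:

```ini
# Hypothetical additions to the [global] section of smb.conf
[global]
   # refuse the ancient SMB1/NT1 dialect, which is very slow
   server min protocol = SMB2
   # requests larger than this many bytes are handled asynchronously;
   # 1 effectively means "always async"
   aio read size = 1
   aio write size = 1
   # let smbd use sendfile() for less copying on reads
   use sendfile = yes
```

It's also worth checking which dialect the laptop actually negotiated (Get-SmbConnection on Windows, `smbutil statshares -a` on macOS), since a connection that fell back to SMB1 behaves exactly like this.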
|
# ? Apr 10, 2020 18:26 |
|
KennyG posted:So if I'm filling a case on day 1 (no real ability to add drives) ZFS/FreeNAS is more reliable? I'll throw my hat in and shill for Proxmox. It's Debian + ZOL + GUIs, designed for small-to-midsized companies doing in-house datacenters and VMs, so it can also do clustering. I run two nodes here: my main NAS (with the failing hardware from upthread) with two zpools of 20TB and 96TB, and the other with just a single mirrored 1TB zpool for VM/docker hosting. What I like most about it is that it's mostly turnkey and has been able to grow with me over time. My previous solution was the first node with only the 20TB pool, on ESXi passing through the onboard LSI to FreeNAS along with 16GB RAM, and then using the rest of the RAM for Ubuntu+Docker. After a few years of that, when I added the other zpool, I installed Proxmox, it imported the pool from FreeNAS, and I ran docker in an LXC. This Christmas I added the 2nd node with 128GB memory, running docker straight on Proxmox. The whole thing is really smooth.
|
# ? Apr 10, 2020 19:42 |
|
Proxmox is neat. I just got Terraform working with it and a cloud-init template of CentOS, so with a few lines of code and one command I can deploy machines in less than 20 seconds.
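A sketch of what that looks like with the community Telmate/proxmox Terraform provider; the resource and argument names below are from that provider (and shift between its versions), and the node/template names are placeholders:

```hcl
# Clone a VM from a cloud-init template on a Proxmox node.
resource "proxmox_vm_qemu" "test" {
  name        = "test-01"
  target_node = "pve1"              # placeholder node name
  clone       = "centos-cloudinit"  # the cloud-init template to clone from
  cores       = 2
  memory      = 2048

  # cloud-init settings injected on first boot
  ipconfig0 = "ip=dhcp"
  sshkeys   = file("~/.ssh/id_rsa.pub")
}
```

After `terraform apply`, adding another machine really is just another resource block and one more apply.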
|
# ? Apr 10, 2020 20:40 |
|
Matt Zerella posted:Proxmox is neat. Nice, I played a bit with setting up Ansible for it last week, since I have bad memories of Terraform from 5 years ago. I should go back to it. Trip report: Amazon and Newegg were out of power supplies, but the local Best Buy had the one I wanted in stock. Ordered from the website at 8:30AM, delivered before 11:30AM, in the system by 12:30, torture testing it now. Its peak temperature is 10C lower than on the other PSU, but I'm not watching a Plex movie, so
|
# ? Apr 11, 2020 00:53 |
|
Hughlander posted:Nice, I played a bit with setting up Ansible for it last week since I have bad memories of Terraform from 5 years ago. I should go back to it. Heads up, Jeff Geerling's Ansible books are currently free on Leanpub. I bought them and don't regret it, but if you're looking for some good books to read since we are all living in a coronavirus stay-at-home world, it's a pretty awesome move on his part to do this. The Kubernetes book is only half done.
|
# ? Apr 11, 2020 01:57 |
|
I just set up some playbooks for creating and removing containers on my Proxmox node. Works well.
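Something along these lines, presumably - a sketch using Ansible's `proxmox` module (shipped in community.general on newer Ansible); the hostnames, template, and credential variables are all placeholders:

```yaml
# Create (or later remove, with state: absent) an LXC container via the API.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a Debian container on node pve1
      community.general.proxmox:
        api_host: pve1.example.lan
        api_user: root@pam
        api_password: "{{ pve_password }}"
        node: pve1
        hostname: test-ct
        ostemplate: "local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz"
        storage: local-lvm
        password: "{{ ct_root_password }}"
        state: present
```

A matching teardown playbook is the same task with `state: absent`, which is what makes this nice for throwaway test containers.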
|
# ? Apr 11, 2020 08:42 |
|
DrDork posted:FWIW I've been shucking drives for DIY NASes for years (including rebuilding one a few months ago) and have never had to gently caress with a 3.3v pin. This is only an issue with a new drive and an older PSU, i.e. the PSU is still supplying 3.3V over the repurposed power-disable pin. Charles posted:Why are usb drives cheaper? I still dont get it. It's part warranty and part market segmentation. The bare drives (equivalent to the ones generally discussed in this thread) are intended for non-consumer use (NAS, enterprise, etc.), and the manufacturers charge more for them because they can (kind of like how products intended for certain industries, e.g. healthcare/medical, carry a price premium). It just so happens that they also throw the same basic drives in USB enclosures because they make a ton of them anyway, but they charge less for those because of the shorter warranty and because average consumers need to be able to afford them.
|
# ? Apr 11, 2020 09:38 |
|
Got my 12TB WD drive yesterday from the $180 deal. Can’t wait for the three damned days of pre-clear to work through so I can actually use it. Then I get to wait to transfer the parity over to it, which I’m sure I’ll gently caress up and then have to wait another day to rebuild parity. Thankfully I still have ~1.2TB free. I always have a faint worry that running the drive for that long is actually bad for it, despite it being a health check, since it’s running non-stop for so long. I’m sure that’s an unfounded concern.
|
# ? Apr 11, 2020 15:12 |
|
TraderStav posted:I always have a faint worry that running the drive or that long is actually bad for it, despite it being a health check since it’s running non-stop for so long. I’m sure that’s an unfounded concern. It's fine. If the drive can't take being read front to back right out of the box it's a dud. You aren't taking any life off it. Your random reads later are what is going to actually consume useful life.
|
# ? Apr 11, 2020 16:50 |
|
H110Hawk posted:It's fine. If the drive can't take being read front to back right out of the box it's a dud. You aren't taking any life off it. Your random reads later are what is going to actually consume useful life. And power cycles. Drives are meant to be used.
|
# ? Apr 11, 2020 22:54 |
|
KS posted:Antec P280 I have the same case; be advised that the hard drive trays are not designed for new large-capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently, and I have only two of the four holes screwed into the tray. I haven't been able to find a replacement tray that both works in the case's drive cage and has the right hole placement. There are two options: zip-tie part of the drive tightly to the tray to minimize vibration, or get a 3.5"-to-5.25" adapter and install the drive in the optical bay. If anyone in the thread knows of updated/universal trays that fit the Antec P280, I'd love the help.
|
# ? Apr 12, 2020 02:27 |
|
Former Human posted:I have the same case and be advised that the hard drive trays are not designed for new large capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently and I have only two of the four holes screwed into the tray. I haven't been able to find a replacement tray that both works in the case's drive cage and has the right hole placement. Velcro straps. 2 holes is plenty.
|
# ? Apr 12, 2020 02:32 |
|
Are the screw holes in different locations, or is the drive thicker, or something else?
|
# ? Apr 12, 2020 03:31 |
|
The screw holes on the bottom of 8TB and larger drives are spaced further apart.
|
# ? Apr 12, 2020 03:36 |
|
sharkytm posted:And power cycles. Drives are meant to be used. I always have my Unraid set to spin down drives after 30 mins of inactivity or whatever to save power. I’ve only had 1-2 drives fail in 10 years or so. Posts like this make me consider leaving them spun up at all times, even though our usage (movies and stuff) is very cyclical and predictable.
|
# ? Apr 12, 2020 05:07 |
|
I have a Synology running DSM 6.2.2; is there a way to back up the NAS OS itself? I have set up a few docker containers that I would like to be able to restore as they are if something were to happen to my NAS or if I migrated to a new device.
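The docker side of this is the part you can realistically back up yourself: if the containers' configs live under bind mounts (typically /volume1/docker on a Synology), archiving that directory plus your run commands/compose files gets you most of the way to a restore on new hardware. A rough sketch, with the paths as assumptions about your layout:

```shell
#!/bin/sh
# Sketch: archive the bind-mount directory that holds container configs so
# it can be restored on a new box. /volume1/docker is a common Synology
# location for docker bind mounts; adjust both paths for your setup.
backup_docker_conf() {
    src=$1
    dest=$2
    tar -czf "$dest/docker-conf-$(date +%Y%m%d).tar.gz" -C "$src" .
}

# Example invocation on the NAS (commented out here):
# backup_docker_conf /volume1/docker /volume1/backups
```

Point the output at a share that Hyper Backup (or whatever offsite job you run) already covers, and the container state rides along with your normal backups.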
|
# ? Apr 12, 2020 08:49 |
|
Incessant Excess posted:I have a Synology running DSM 6.2.2, is there a way to backup the NAS OS itself? I have set up a few docker containers that I would like to be able to restore as they are if something were to happen to my NAS or if I migrated to a new device. As far as I understand it, on a Synology the OS partition is mirrored across each and every disk in the array, regardless of size, pool, volume, what have you, so you'd have to have a 100% disk failure for it to not boot.
|
# ? Apr 12, 2020 09:01 |
Former Human posted:I have the same case and be advised that the hard drive trays are not designed for new large capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently and I have only two of the four holes screwed into the tray. Could you drill new holes in the correct spots?
|
# ? Apr 12, 2020 11:19 |