|
DrDork posted: If the enclosure was operating in a JBOD mode and simply passing the drives straight on to the controller card (which it should do, especially if it's a cheaper enclosure), you should be just fine so long as the controller card itself is still working.

Yeah, the card was still functioning; just the enclosure for the drives was dead. Just in case anyone was curious: after an initial moment of terror when the card didn't want to recognize the RAID array (just the individual drives), swapping them to a new enclosure worked perfectly fine. I had the key stuff backed up in multiple places, but losing my 16TB RAID array still would have suuuuuuucked.
|
# ? Mar 19, 2018 09:35 |
|
I’ve been looking to add a NAS to my home setup, mostly for storing my .flacs and movies for streaming locally. I was looking at the Synology DS918+ which I was going to run in RAID5 - I think that ought to comfortably meet all my needs. Is this a fairly solid choice or are there any others I ought to look at?
|
# ? Mar 20, 2018 14:09 |
|
I'm running the 918+ in SHR rather than RAID5 but I'm pretty happy with it. Software is pretty easy to use and responsive.
|
# ? Mar 20, 2018 14:38 |
|
IOwnCalculus posted: Decided to order four of the GoHardDrive refurbished HGST HE8s from Newegg. No sales tax means they work out to about the same price as the Easystores, I don't have to shuck anything, and I have a three-year warranty with GHD instead of a "is it void / is it not" two-year warranty with WD. Going to burn them in on my server at home and then start swapping out the 3TB drives in my media server. In searching for this post I just realized I've been posting in this thread alone for ten years.

Trip report so far: Looks like UPS played football with the box. 2/4 DOA - one never even shows up, the other very infrequently gets recognized but offlines as soon as you try to do anything with it. The other two are going through four passes on nwipe. Going to find out how good GHD's customer support is.
|
# ? Mar 21, 2018 04:28 |
|
IOwnCalculus posted: In searching for this post I just realized I've been posting in this thread alone for ten years

Please keep us updated, I hadn't heard of GoHardDrive until your previous post but I might be interested in some drives for non-essential stuff if they're good with their warranties.
|
# ? Mar 21, 2018 05:23 |
|
There's nothing but praise over at r/datahoarder so I figured it was worth a shot. Most of the negative press they got in years past came from the fact that they used to completely obfuscate that the drives were refurbished. These were very obviously labeled as such, and all four drives I received had June 2015 manufacture dates and some form of Dell-tagged firmware. They don't seem to have fully zeroed out the SMART data, or if they did, they subjected at least one of these drives to considerable extra testing. I've only had them plugged in a few hours.
code:
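(The SMART dump itself didn't survive the archive. For anyone who wants to run the same check on their own refurbs, a minimal sketch - smartctl ships with smartmontools, and /dev/sdX is a placeholder for the drive in question:)
code:
# Dump identity info and the SMART attribute table; Power_On_Hours
# and the reallocated/pending sector counters are the quickest way
# to judge how hard a refurb drive has been used.
smartctl -a /dev/sdX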
|
# ? Mar 21, 2018 05:55 |
|
Single email to GHD via Newegg's interface, they sent me a prepaid return/RMA label via email last night. So far so good.
|
# ? Mar 21, 2018 17:48 |
|
for a friend.

quote: I've been running FreeNAS flawlessly for a long time. The only problem I had was initially with a board that would cause checksum problems (the HD controller, probably), but since replacing it I've had no problems for years. I'm on a Supermicro running 11.0u4 with 7 disks in a raidz3, and one of my disks just yesterday came up faulted with 2 write and 174 read errors. Here are the steps I then took:
|
# ? Mar 21, 2018 17:53 |
|
rsync's default method for identifying changed files is the timestamp, so no, they'll need to add the -c parameter to identify changed files based on checksum. (It does verify checksums when transferring, to ensure that data was written correctly, regardless of -c, however.) Not sure what you've got going on with the drives themselves, but it's actually not uncommon for drives to fail together. If you buy a couple of drives at once, they've almost certainly been manufactured in the same batch, handled the same way in warehousing and shipment, and operated under the same conditions for the same length of time. Some people go as far as sourcing different types of drives from different stores over a period of time to try and mitigate this. (Not saying it isn't the controller either, though.)

Paul MaudDib fucked around with this message at 18:45 on Mar 21, 2018 |
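(A quick illustration of the difference, with placeholder paths:)
code:
# Default quick check: rsync compares file size + modification time
# only, so a changed file with an unchanged mtime can be missed.
rsync -av /source/ /backup/

# -c: checksum every file on both ends before deciding what to send.
# Much slower, since it reads all data, but it catches those cases.
rsync -avc /source/ /backup/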
# ? Mar 21, 2018 18:25 |
|
thanks Paul
|
# ? Mar 21, 2018 18:34 |
|
ZFS does its own block checksums. If the scrub comes up clear, your data is intact. If there were checksums that didn't match and couldn't be corrected via RAIDZ3, the scrub output would tell you which ones. I haven't seen this in FreeNAS/openzfs, but I've seen it with Solaris ZFS in the office, and it's a very straightforward error message. One possibility in the scenario-as-described is that Derk's buddy tried to hot swap a drive in a system not built for hot swap, and plugging in the new drive confused it. I don't know if that's possible at the electrical engineering level, nor do I know what sort of hardware the guy is using, but having those other disks error out right as the new drive went in, but test clean afterward, feels like more than coincidence.
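(For anyone following along, the commands are the same across ZFS implementations; 'tank' is a placeholder pool name:)
code:
# Read every allocated block and verify it against its checksum.
zpool scrub tank

# -v lists any files with uncorrectable checksum errors; a clean
# pool reports "No known data errors".
zpool status -v tank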
|
# ? Mar 21, 2018 19:17 |
|
Incessant Excess posted: I'm running the 918+ in SHR rather than RAID5 but I'm pretty happy with it. Software is pretty easy to use and responsive.

Thanks. I see from your post history that you can run Plex on it as well, which was my only real concern about it. Think I'll go take the plunge on that and a set of WD Reds.
|
# ? Mar 21, 2018 19:39 |
|
DoctorTristan posted: Thanks. I see from your post history that you can run Plex on it as well, which was my only real concern about it.

Yeah, Plex runs fine on it; there's an app in the Synology app center you can get for it, or you can choose to run it as a Docker container - both work. I have a few nagging issues with my Plex installation, but I'm pretty sure those are caused by my networking hardware rather than the NAS itself.
|
# ? Mar 21, 2018 21:28 |
|
Zorak of Michigan posted: One possibility in the scenario-as-described is that Derk's buddy tried to hot swap a drive in a system not built for hot swap, and plugging in the new drive confused it. I don't know if that's possible at the electrical engineering level, nor do I know what sort of hardware the guy is using, but having those other disks error out right as the new drive went in, but test clean afterward, feels like more than coincidence.

That can absolutely happen on controllers that are not hot-swap aware or enabled. You plug in the new drive, the controller freaks out trying to figure out what the gently caress, and it falls back on a bus or controller reset to fix itself. That reset isn't handled well in software, and ZFS just sees the entire controller's worth of disks drop off completely.
|
# ? Mar 21, 2018 22:16 |
|
Update on the GHD HE8s: The two that were not DOA have passed through three full random write passes on nwipe and are going through a blanking pass now, with no SMART errors of any kind. I'm going to probably cycle one of them into my array tomorrow and see how long it takes to resilver. For reference, this is the array in question today, and these will be replacing the 3TB Reds:
pre:
  scan: scrub repaired 0 in 22h22m with 0 errors on Mon Mar 19 00:22:59 2018
config:

        NAME                          STATE     READ WRITE CKSUM
        tank                          ONLINE       0     0     0
          raidz1-0                    ONLINE       0     0     0
            ata-TOSHIBA_HDWE150       ONLINE       0     0     0
            ata-TOSHIBA_HDWE150       ONLINE       0     0     0
            ata-TOSHIBA_HDWE150       ONLINE       0     0     0
            ata-TOSHIBA_HDWE150       ONLINE       0     0     0
          raidz1-1                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0  ONLINE       0     0     0
|
# ? Mar 22, 2018 21:51 |
|
IOwnCalculus posted: Update on the GHD HE8s: The two that were not DOA have passed through three full random write passes on nwipe and are going through a blanking pass now, with no SMART errors of any kind. I'm going to probably cycle one of them into my array tomorrow and see how long it takes to resilver.

On the one hand it's worrying that 2/4 disks were bad out of the box, but on the other hand, if they make good with the replacements, I guess it could just be a small-sample-size issue. My N40L has five 2TB disks and I'm considering going to 8TB because I'm lazy and don't want to curate the data on my NAS. I've got a couple of the 8TB WD MyBooks from the Best Buy deal, but I kind of like to mix up my drives in age/manufacture just in case, which is why I'm interested in GoHardDrive - those HE8s seem like a stellar deal if they're well backed by the GHD warranty.
|
# ? Mar 23, 2018 00:23 |
|
I'm going with "UPS treated the box like poo poo." The drives were packed in two layers, and the two dead drives were right next to each other, so I'm guessing they took a very hard hit during shipping. Still better than my 125% DOA rate on Ironwolves!
|
# ? Mar 23, 2018 06:24 |
|
This is so far out of my wheelhouse that I don't even know where to ask apart from this tangentially related thread. It's apropos of nothing. I copy-pasted this script together for my Synology NAS from related stuff on the internet:
code:
Put into Task Scheduler to run daily, it works fine for only sending me a mail with my external IP address when it has changed (I have the task set to only mail me if the script aborts unexpectedly, hence the exit thing). But, aesthetically, it bothers me that the amazonaws thing is polled twice. I'm wondering if I can assign the output of line one to the ip variable and send it to a file in one go somehow?
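(The script itself got eaten by the archive. Based on the description - poll twice, write one result to a file, exit non-zero on change so the scheduler's abort notification fires - it was presumably something along these lines; the paths here are invented:)
code:
#!/bin/sh
# Hypothetical reconstruction, not the original script.
# Line one: fetch the external IP straight into a file.
curl -s https://checkip.amazonaws.com > /tmp/currentip
# Second poll of the same service, this time into a variable.
ip=$(curl -s https://checkip.amazonaws.com)
# If the address changed, remember it and exit non-zero so the
# Task Scheduler's "mail on abnormal termination" option fires.
if [ "$ip" != "$(cat /volume1/scripts/lastip 2>/dev/null)" ]; then
    echo "$ip" > /volume1/scripts/lastip
    exit 1
fi
exit 0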
|
# ? Mar 23, 2018 17:43 |
|
Why not just get a dynamic DNS entry somewhere and update that?
|
# ? Mar 23, 2018 17:51 |
|
IOwnCalculus posted: Why not just get a dynamic DNS entry somewhere and update that?

I understand it would be the more proper solution.
|
# ? Mar 23, 2018 18:10 |
|
Flipperwaldt posted: This is so far out of my wheelhouse that I don't even know where to ask apart from this tangentially related thread. It's apropos of nothing.

This should be a minimally changed version:
code:
code:
Zorak of Michigan fucked around with this message at 00:05 on Mar 24, 2018 |
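(Zorak's code blocks are likewise lost. A sketch of the single-poll idea being discussed, using tee so one request feeds both the file and the variable; the paths are again invented:)
code:
#!/bin/sh
# tee writes curl's output to the file and simultaneously passes it
# through to the command substitution, so the service is hit once.
ip=$(curl -s https://checkip.amazonaws.com | tee /tmp/currentip)
if [ "x$ip" != "x$(cat /volume1/scripts/lastip 2>/dev/null)" ]; then
    echo "$ip" > /volume1/scripts/lastip
    exit 1
fi
exit 0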
# ? Mar 23, 2018 19:09 |
|
Zorak of Michigan posted:
Not really knowing the significance of the quotation marks or the x'es before the variable names in the comparison, I removed them (like in the edited code above) when it didn't work at first, and it sure does work now. So thanks, I'm a happy man.
|
# ? Mar 23, 2018 20:13 |
|
Flipperwaldt posted: I'm under the impression that costs money, idk. I wouldn't even have the nas accessible from the internet (haven't had the need for that for the last couple of years), but I'm staying with family for a couple of weeks and every so often I need a document stored on it or something. I don't need anything more structurally sound than this, as far as I can see.

Nope, there are some free options out there. I use these guys: http://freeddns.noip.com/?d=ddns.net&u=ZGRucy5uZXQv The only downside is that for the free account you have to manually renew it every month, but they email you saying it's due and you just click a link.
|
# ? Mar 23, 2018 20:15 |
|
Use DuckDNS
|
# ? Mar 23, 2018 20:16 |
|
I do appreciate the suggestions.
|
# ? Mar 23, 2018 20:38 |
|
Flipperwaldt posted: I liked the second idea. After some moments when I realized the forums code injects those url tags...

The backquotes (`) are a shell convention for putting the output of a command into a command line. The x'es are an old UNIX nerd thing. It would run fine without them, but I got in the habit of using them. If your variable isn't already in quotes and it somehow ends up empty, then a simple
code:
code:
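(Those two snippets didn't survive the archive either; an illustration with made-up values:)
code:
ip=""
# Unquoted and empty, $ip expands to nothing and the shell sees
# "[ = 1.2.3.4 ]" - a syntax error at runtime.
if [ $ip = 1.2.3.4 ]; then echo match; fi

# With the x prefix the same expansion yields "[ x = x1.2.3.4 ]",
# which is merely false rather than an error.
if [ x$ip = x1.2.3.4 ]; then echo match; fi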
|
# ? Mar 23, 2018 21:00 |
|
Zorak of Michigan posted: The backquotes (`) are a shell convention for putting the output of a command into a command line.

Thanks, and thanks all.
|
# ? Mar 23, 2018 21:46 |
|
IOwnCalculus posted: In searching for this post I just realized I've been posting in this thread alone for ten years

I posted in this thread 4 hours and 15 minutes after you did. And I'm mentioned in the OP. We're old.
|
# ? Mar 24, 2018 14:10 |
|
sharkytm posted: We're old.

So someone tell me if I should do this, or if I'm overthinking things. When I first built the current iteration of my ZFS array, I had to do it with one RAIDZ of 4x5TB, and later added the 4x3TB after migrating a bunch of data onto it. So the 4x5 is mostly full. I was going to just do a straight swap of the 8TB drives in place of the 3TB, but now I'm thinking I could use them to replace the 5TB, and then use the 5TB to replace the 3TB. This would give a bit more balanced free space across both vdevs, at the expense of running twice as many resilvers. I'm not hugely concerned about write performance - I dump all new downloads to a scratch SATA drive first and then move them over, and even with most writes going to just four drives, it's never been an issue.
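(The shuffle being described is just a chain of zpool replaces, one disk and one resilver at a time; the device names below are placeholders:)
code:
# Step 1: swap an 8TB in for one of the 5TB drives in raidz1-0,
# then wait for the resilver to finish before touching anything else.
zpool replace tank ata-OLD_5TB_DISK ata-NEW_8TB_DISK
zpool status tank

# Step 2: reuse the freed 5TB drive to replace a 3TB in raidz1-1.
zpool replace tank ata-OLD_3TB_DISK ata-FREED_5TB_DISK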
|
# ? Mar 24, 2018 17:57 |
|
For what it's worth, the resilver was glacially slow until I did this:
code:
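(The block got eaten, but going by the follow-up post it was presumably the ZFS-on-Linux async write tunable, something like:)
code:
# ZoL exposes its tunables under /sys/module/zfs/parameters.
# Raising the minimum async write queue depth per vdev lets the
# resilver push more concurrent writes to the new disk.
echo 3 > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active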
|
# ? Mar 25, 2018 16:22 |
|
I'm assuming you're on Linux, not FreeBSD, but additional tuning parameters you could use are:

vfs.zfs.prefetch_disable=1
vfs.zfs.resilver_delay=0
vfs.zfs.resilver_min_time=5000
vfs.zfs.scrub_delay=0
vfs.zfs.top_maxinflight=512

At least on FreeBSD, those are phenomenal, and I'd assume the Linux implementation has similar tunables. They're pretty aggressive and will kill your ability to use the NAS reasonably during a resilver, unless you've got some serious caching on the remote device, but the performance boost they give to resilvers and scrubs is huge. My 8x8TB array at ~40% capacity in use was taking over 36 hours to do a scrub; tuning that stuff got it down to 12 hours. I had previously used these when I grew the array from 8x4TB, and that made it ~20 hours a drive instead of ~50. Again, though, this is super aggressive. I couldn't play 1080p videos in realtime with these set during a resilver or scrub until I increased the cache on the remote box and let it buffer enough to get bursts that carried it through the whole file.
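(Applying them on FreeBSD is one sysctl per tunable; anything the kernel refuses to change live can go in /boot/loader.conf instead. Exact names vary a little between releases, so treat this as a sketch:)
code:
sysctl vfs.zfs.resilver_delay=0
sysctl vfs.zfs.resilver_min_time=5000
sysctl vfs.zfs.scrub_delay=0
sysctl vfs.zfs.top_maxinflight=512
sysctl vfs.zfs.prefetch_disable=1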
|
# ? Mar 25, 2018 16:50 |
|
Yeah, Ubuntu 16.04. I'd actually tried the equivalents of the rest of those, except prefetch_disable, with no change whatsoever - it was writing to the new disk at about 10 MB/sec according to netdata. Apparently vdev_async_write_min_active, the last one I changed, behaves differently on Linux versus BSD/Solaris, because it's the only one that improved things. I can't find the article I dug this up in last night, but now that the first resilver (of many) is done, I'm going to set it to 3, since apparently setting it higher increases latency without improving throughput. I will say that even during this resilver the server had no problem continuing to stream out movies on Plex.
|
# ? Mar 25, 2018 18:34 |
|
Flipperwaldt posted: I'm under the impression that costs money, idk. I wouldn't even have the nas accessible from the internet (haven't had the need for that for the last couple of years), but I'm staying with family for a couple of weeks and every so often I need a document stored on it or something. I don't need anything more structurally sound than this, as far as I can see.

Synology has a built-in free dynamic DNS service as part of their DSM software.
|
# ? Mar 25, 2018 19:09 |
|
Alright, I'm going to get a Synology sooner rather than later. Am I going to kick myself for getting a single bay rather than a dual bay? I'm looking at the DS118 or 218 ranges. Not 100% sure I need RAID, as I'll back it up online as well. There are just two of us who'll be using it to store photos, music, and some videos, plus run a few torrents occasionally. I'm not sure I'll benefit from any super advanced features, so I've been looking at the 218play or the 118. Any comments on either of these choices? Right now I'm leaning towards the 118, purely for price reasons.
|
# ? Mar 25, 2018 20:15 |
|
Pantsmaster Bill posted: Alright, I'm going to get a Synology sooner rather than later. Am I going to kick myself for getting a single bay rather than a dual bay? I'm looking at the DS118 or 218 ranges. Not 100% sure I need RAID, as I'll back it up online as well.

Get a two-bay or higher so you can at least survive a single disk failure.
|
# ? Mar 25, 2018 20:29 |
At least 2 bays would be nice so it's somewhat upgradable too; a one-bay NAS to me is no different from a single external hard drive.
|
|
# ? Mar 25, 2018 21:10 |
|
I'm sorry in advance for being lovely with research, but I'm having a hard time sorting through very out-of-date info. I'm rebuilding my 2009-era home NAS after getting lovely after a very bad 2x simultaneous device failure in my raid5 array. I'm looking for alternatives to RAID, but a lot of the info I'm reading is from the first half of this decade, or I'll find conflicting posts on reddit or other tech sites. My requirements are:

- Linux based
- Allow me to pool an arbitrary number of disks together
- Simple to add new disks to the pool
- Better fault tolerance than 1x disk failure
- I don't care much about performance - I'll be streaming media from the disks but only to a single device at a time

From what I was reading, SnapRAID was the leading contender, but its weird scheduled parity building throws me, and my understanding is that the number of failures it can withstand is == the number of parity drives. I'm also seeing UnRAID (same limitation on parity) and FlexRAID (apparently a lovely dev?). Are there other contenders?
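(If it helps with evaluating SnapRAID: a two-parity setup is only a few lines of config, and two parity files means surviving any two simultaneous disk failures. The paths here are invented for illustration:)
code:
# /etc/snapraid.conf - sketch only
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
content  /var/snapraid/content
content  /mnt/disk1/snapraid.content
data d1  /mnt/disk1/
data d2  /mnt/disk2/
# Parity is then rebuilt on whatever schedule you run:
#   snapraid sync    # update parity to match current data
#   snapraid scrub   # verify existing data against parity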
|
# ? Mar 26, 2018 01:13 |
|
OK, so I've got a bunch of R710s that I need to put in a rack cabinet, so I got one of these, and purchased a KVM cable: VGA to VGA+USB. The display works fine, no problem. It's just that when I plug in the USB, strange poo poo happens. When I plug it into any of the servers, it hangs at BIOS. On a whim, I plugged the USB into my laptop to see the dmesg output.
code:
|
# ? Mar 26, 2018 04:29 |
|
Yeah, that looks like something is hosed with the KVM. Do those servers not have DRAC?
|
# ? Mar 26, 2018 05:10 |
|
IOwnCalculus posted: Yeah, that looks like something is hosed with the KVM.

Just the basic modules, unfortunately. That's limited to SNMP, IPMI & firmware updates. Can't even tell the server(s) to shut down.
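(Hedged aside: if the basic module really does expose standard IPMI over LAN, ipmitool may be able to manage power even without a full DRAC license; the address and credentials below are placeholders:)
code:
# Query and control chassis power through the BMC via IPMI 2.0.
ipmitool -I lanplus -H 192.168.1.50 -U root -P changeme chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U root -P changeme chassis power soft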
|
# ? Mar 26, 2018 05:39 |