forbidden dialectics
Jul 26, 2005





I've had a large 6-drive RAID-Z2 array running on OpenSolaris for over a year now. Is there any maintenance that needs to be done, software wise? I haven't noticed any decline in speed and there are no hardware or software errors on the drives. SMART readout looks perfect.


IOwnCalculus
Apr 2, 2003





Scrub the pool once in a while, I do it weekly via crontab. Not much else to do.
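For reference, a weekly entry in root's crontab would look something like this (the pool name 'tank' and the zpool path are just placeholders; adjust for your system):
code:
# scrub the pool 'tank' every Sunday at 03:00
0 3 * * 0 /sbin/zpool scrub tank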

AlternateAccount
Apr 25, 2005
FYGM
poo poo, it looks like I might have a bad drive or two. I guess I will order a couple more 2TB NAS drives this week and see if it sorts out.

SamDabbers
May 26, 2003



IOwnCalculus posted:

Scrub the pool once in a while, I do it weekly via crontab. Not much else to do.

On FreeBSD 10 there's a knob to do a scrub on a defined interval as part of the periodic(8) system. Just add a few lines to /etc/periodic.conf (you may have to create it) to have it scrub every 7 days, and output some zpool status info in the daily output emails:
code:
daily_status_zfs_enable="YES"                   # enable zpool status output
daily_scrub_zfs_enable="YES"                    # scrub zpools
daily_scrub_zfs_pools=""                        # empty string selects all pools
daily_scrub_zfs_default_threshold="7"           # days between scrubs
#daily_scrub_zfs_${poolname}_threshold="35"     # pool specific threshold
The output looks like this:
code:
Checking status of zfs pools:
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  10.9T  5.30T  5.58T    48%  1.00x  ONLINE  -

all pools are healthy

...

Scrubbing of zfs pools:
   skipping scrubbing of pool 'zroot':
      last scrubbing is 5 days ago, threshold is set to 7 days

IOwnCalculus
Apr 2, 2003





I'm rolling on NAS4Free, which is still on FreeBSD 9 - so if that's a new-for-10 option, I can't use it yet. Neat, though!

SamDabbers
May 26, 2003



IOwnCalculus posted:

I'm rolling on NAS4Free, which is still on FreeBSD 9 - so if that's a new-for-10 option, I can't use it yet. Neat, though!

I'm not sure if it's new in 10 or if it was in 9 also. You can check what options are available in /etc/defaults/periodic.conf
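For example, something like this should show whether the scrub knobs exist on a given release (just a quick check, nothing more):
code:
grep -i scrub_zfs /etc/defaults/periodic.conf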

forbidden dialectics
Jul 26, 2005





IOwnCalculus posted:

Scrub the pool once in a while, I do it weekly via crontab. Not much else to do.

Set a job to scrub once a month. Thanks!

IOwnCalculus
Apr 2, 2003





SamDabbers posted:

I'm not sure if it's new in 10 or if it was in 9 also. You can check what options are available in /etc/defaults/periodic.conf

Unless they stripped it from N4F, looks like it's in 10 only:

code:
~ # ls /etc/defaults/
devfs.rules rc.conf     xmlrpc.inc  xmlrpcs.inc

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

Unless they stripped it from N4F, looks like it's in 10 only:

code:
~ # ls /etc/defaults/
devfs.rules rc.conf     xmlrpc.inc  xmlrpcs.inc
Nah, it was available as far back as FreeBSD 4.1. The file may not exist by default, though, if the system had nothing to bother putting in it.

eddiewalker
Apr 28, 2004

Arrrr ye landlubber
I'm having a hard time finding a straight answer: if I get a Synology and install the iTunes server package, a Synology share will appear under the shared libraries in iTunes on my laptop. Can I make a playlist of music from that shared library then sync it to an iPod?

If that's the case I'll be Synology shopping today. I use a 128gb MB Air as my primary computer, but it's obviously cramped with a local iTunes library.

Thanks Ants
May 21, 2004

#essereFerrari


If you don't actually want to share your library then you can just relocate your library onto the NAS. At which point it works exactly like iTunes on your local storage would, just a little bit slower.

MrMoo
Sep 14, 2000

iTunes server is a bit of an afterthought, and you don't get any of the nice album cover navigation that you'd have with a local folder. More practical options would be Apple's or Google's :yaycloud: services for music.

AlternateAccount
Apr 25, 2005
FYGM

MrMoo posted:

iTunes server is a bit of an afterthought, and you don't get any of the nice album cover navigation that you'd have with a local folder. More practical options would be Apple's or Google's :yaycloud: services for music.

Streaming from iTunes Match can be kind of slow and janky, though :\

eddiewalker
Apr 28, 2004

Arrrr ye landlubber

Caged posted:

If you don't actually want to share your library then you can just relocate your library onto the NAS. At which point it works exactly like iTunes on your local storage would, just a little bit slower.

That's what I'm doing now. I have the iTunes database files stored locally and the music on an NFS share from the terribly slow drive plugged into my router. It's functional, but so so slow. I guess I can keep doing that and get a better experience using any generic NAS device that's faster than a router USB port.

What would be killer is having one "home cloud iTunes instance" that I can share between iTunes on my laptop or my wife's without worrying about local database files. That's what got me interested in paying a premium for Synology or doing an xpenology thing.

What would be even cooler is being able to fire up Remote.app on an iOS device, connect it to a Synology iTunes server and beam music to AirTunes speakers around the house without keeping a laptop open like I do now, but I can't imagine that's possible.

I guess one option is running xpenology and Windows+iTunes side by side in VMs, but that sounds like something Rube Goldberg would applaud.

AlternateAccount
Apr 25, 2005
FYGM

eddiewalker posted:

What would be killer is having one "home cloud iTunes instance" that I can share between iTunes on my laptop or my wife's without worrying about local database files. That's what got me interested in paying a premium for Synology or doing an xpenology thing.


I feeeeeel like this might be an iTunes Match situation.

AlternateAccount
Apr 25, 2005
FYGM
My XPENology box REFUSES to repair, it's kind of making me insane. It started out with a 2TB and 2x1TB(after I ditched that 250GB) then I added another 2TB. The total space has never been right and it refuses to expand to the 3.8TB or whatever that should be available. Hooked up a monitor and saw drive failures on SDB and SDC when it tried the expansion and nearly poo poo my pants.
That was the two 1TB drives, so I replaced the first one and repaired fine and then replaced the second one and it fails to repair AGAIN.
At this point I am just going to copy everything off, kill the volume and start over with all 4 drives present, I think.

BlankSystemDaemon
Mar 13, 2009



A few tips:
Here's a cool little analysis of raidz3.
And I ran into a cool little way to check which drive is the one that's causing your pool to be degraded, and how to get its serial number:
code:
for i in a b c d e f g; do echo -n "/dev/sd$i: "; smartctl -i /dev/sd$i | awk '/Serial Number/ {print $3}'; done
The above will give you a list of devices and their serial numbers (provided the drive reports them via S.M.A.R.T., which it should). Just look for the device which reports "No such file or directory", and by process of elimination against the labels that you have, of course, printed on the front-facing edge of your disks, you can easily identify which drive to replace.
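As a complement to that, zpool status itself will usually flag the unhappy member directly; a rough way to pull out just the problem lines (sketch only):
code:
zpool status -v | grep -E 'FAULTED|UNAVAIL|DEGRADED|REMOVED'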

...and I'm pretty much back to square one with my server build because I can't for the life of me actually find proper hardware to put into the rig I want. It's always an issue of what's manufactured vs. what's actually available in Denmark given my requirements (ECC UDIMM (not SO-DIMM) memory, OOB BMC, dual NIC, at least one PCIe x8 slot for an IBM ServeRAID M1015 unless it's the Supermicro X10SL7-F motherboard).

EDIT: ↓ Just be aware that you should strive to have the same drives in all vdevs in your zpool to avoid putting unnecessary wear on any one drive, and that if you slowly add vdevs to your pool over time, the oldest drives will already have accumulated wear by the time you add the newest ones.

BlankSystemDaemon fucked around with this message at 15:44 on Feb 19, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.
Heh, I was actually just thinking about RAID-Z3. I finally ordered the MB/CPU/RAM for my new file server as Newegg had/has the SM X10SL7-F and E3 1220v3 as a combo deal (saving about $40), and I'm trying to plan out my build and future expansion to determine what case to get. The board comes with 6 SATA ports off the Intel controller and 8 off the LSI 2308, and I have an M1115 for another 8, giving me a total of 22 SATA ports. As I don't think I'll be adding any more HBAs, that comes out to be nearly perfect for a NORCO RPC-4020/4220 with 20 hot-swap bays (and the other 2 ports can be used for ZIL + L2ARC or an OS drive). My current file server has 6 x 2TB drives in two 3-disk RAID-Z vdevs, and while I'm definitely going to move the data off and rebuild it as a higher-redundancy vdev, I don't want to add any additional drives of that size, so it's going to be a 6-disk RAID-Z2. That leaves me with 14 other ports, which means I can do either two 7-disk RAID-Z3 vdevs, or two RAID-Z2s, one with 8 disks and the other with 6. And I haven't quite decided which yet.
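For anyone following along, the 8-disk plus 6-disk RAID-Z2 layout described above would be built roughly like this (device names are placeholders, purely a sketch):
code:
# example only: one pool made of an 8-disk and a 6-disk RAID-Z2 vdev
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13
# a second vdev can also be added to an existing pool later with: zpool add tank raidz2 ...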

GokieKS fucked around with this message at 15:41 on Feb 19, 2014

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

AlternateAccount posted:

My XPENology box REFUSES to repair, it's kind of making me insane. It started out with a 2TB and 2x1TB(after I ditched that 250GB) then I added another 2TB. The total space has never been right and it refuses to expand to the 3.8TB or whatever that should be available. Hooked up a monitor and saw drive failures on SDB and SDC when it tried the expansion and nearly poo poo my pants.
That was the two 1TB drives, so I replaced the first one and repaired fine and then replaced the second one and it fails to repair AGAIN.
At this point I am just going to copy everything off, kill the volume and start over with all 4 drives present, I think.

Assuming you have tested your drives, maybe your controller is failing, or even bad cables.

GokieKS
Dec 15, 2012

Mostly Harmless.

D. Ebdrup posted:

EDIT: ↓ Just be aware that you should strive to have the same drives in all vdevs in your zpool to avoid putting unnecessary wear on any one drive, and that if you slowly add vdevs to your pool over time, the oldest drives will already have accumulated wear by the time you add the newest ones.

Yeah... the two 3-disk RAID-Z1 vdevs I have are pretty close in age, which is why I was going to rebuild it as a single RAID-Z2 and didn't want to add any more drives. This new file server will be the rebuilt vdev and either a 7-disk RAID-Z3 or 6-disk RAID-Z2 vdev of 3TB drives, with the last 7/8 drives coming in the future.

evol262
Nov 30, 2010
#!/usr/bin/perl

D. Ebdrup posted:

A few tips:
Here's a cool little analysis of raidz3.
And I ran into a cool little way to check which drive is the one that's causing your pool to be degraded, and how to get its serial number:
code:
for i in a b c d e f g; do echo -n "/dev/sd$i: "; smartctl -i /dev/sd$i | awk '/Serial Number/ {print $3}'; done
The above will give you a list of devices and their serial numbers (provided the drive reports them via S.M.A.R.T., which it should). Just look for the device which reports "No such file or directory", and by process of elimination against the labels that you have, of course, printed on the front-facing edge of your disks, you can easily identify which drive to replace.

...and I'm pretty much back to square one with my server build because I can't for the life of me actually find proper hardware to put into the rig I want. It's always an issue of what's manufactured vs. what's actually available in Denmark given my requirements (ECC UDIMM (not SO-DIMM) memory, OOB BMC, dual NIC, at least one PCIe x8 slot for an IBM ServeRAID M1015 unless it's the Supermicro X10SL7-F motherboard).

EDIT: ↓ Just be aware that you should strive to have the same drives in all vdevs in your zpool to avoid putting unnecessary wear on any one drive, and that if you slowly add vdevs to your pool over time, the oldest drives will already have accumulated wear by the time you add the newest ones.

code:
ls /dev/sd* | sed -e 's/.*sd\(.\).*/\1/' | uniq | while read i; do echo -n "/dev/sd$i: "; smartctl -i /dev/sd$i | awk '/Serial Number/ {print $3}'; done

AlternateAccount
Apr 25, 2005
FYGM

Don Lapre posted:

Assuming you have tested your drives, maybe your controller is failing, or even bad cables.

Well, one of the 1TBs, the first one I switched, was definitely failing. Threw it in an enclosure and tried to just format/write zeroes. After 10 minutes, the time estimated to finish was something like 14 hours.
It's an N54L, so no cables, who knows about the controller. Dammit, the thing worked just fine a week ago.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

AlternateAccount posted:

Well, one of the 1TBs, the first one I switched, was definitely failing. Threw it in an enclosure and tried to just format/write zeroes. After 10 minutes, the time estimated to finish was something like 14 hours.
It's an N54L, so no cables, who knows about the controller. Dammit, the thing worked just fine a week ago.

Take the drives, hook them up to a PC, and run something like SeaTools on them to test them.
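smartctl's built-in self-tests are another option if a vendor tool won't cooperate; roughly (sdX is a placeholder):
code:
smartctl -t long /dev/sdX    # start an extended offline self-test
smartctl -a /dev/sdX         # check the self-test log and attributes once it finishes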

phosdex
Dec 16, 2005

GokieKS posted:

Heh, I was actually just thinking about RAID-Z3. I finally ordered the MB/CPU/RAM for my new file server as Newegg had/has the SM X10SL7-F and E3 1220v3 as a combo deal (saving about $40), and I'm trying to plan out my build and future expansion to determine what case to get. The board comes with 6 SATA ports off the Intel controller and 8 off the LSI 2308, and I have an M1115 for another 8, giving me a total of 22 SATA ports. As I don't think I'll be adding any more HBAs, that comes out to be nearly perfect for a NORCO RPC-4020/4220 with 20 hot-swap bays (and the other 2 ports can be used for ZIL + L2ARC or an OS drive). My current file server has 6 x 2TB drives in two 3-disk RAID-Z vdevs, and while I'm definitely going to move the data off and rebuild it as a higher-redundancy vdev, I don't want to add any additional drives of that size, so it's going to be a 6-disk RAID-Z2. That leaves me with 14 other ports, which means I can do either two 7-disk RAID-Z3 vdevs, or two RAID-Z2s, one with 8 disks and the other with 6. And I haven't quite decided which yet.

I just got an X10SL7-F a few days ago. Had to wait until yesterday for the processor. Updated all the firmwares last night, then went to run memtest86+. Just before I ran it, I happened to look at the Event Log and caught either a bad omen or a really crazy coincidence.



That error didn't occur during the memtest86 run; it happened about 3 minutes before. Now I'm a little worried, but also, hey, the board does track and log that stuff correctly, which is cool. Going to let memtest finish 3 passes, I think?

GokieKS
Dec 15, 2012

Mostly Harmless.
A random correctable ECC error once in a while is basically normal. If you start getting them much more frequently, though, and almost always from the same stick, then I'd probably look at replacing it.
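If the board's BMC is reachable from the OS, something like ipmitool can dump the same event log, which makes it easy to watch for repeats (sketch; assumes ipmitool is installed):
code:
ipmitool sel list | grep -i ecc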

AlternateAccount
Apr 25, 2005
FYGM

Don Lapre posted:

Take the drives, hook them up to a PC, and run something like SeaTools on them to test them.

Yeah, I knew I was going to end up here, but I didn't want to spend a shitload of time on it. Booted up UBCD and I'm running one of the tools on there doing a surface scan. It's a Seagate drive, but for some reason SeaTools didn't see it. Could be the versions on this disc are old or stupid.
21% done, no errors. Gonna get some "known good" drives piled up and then just rebuild the drat thing properly.

AlternateAccount fucked around with this message at 21:42 on Feb 19, 2014

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.
What does SMART say?

AlternateAccount
Apr 25, 2005
FYGM

eightysixed posted:

What does SMART say?

Dunno, it's all torn down now for scanning. Nothing bad enough to cause Synology to say anything but NORMAL for drive status, but I'm not sure what its threshold is.

This is what I was getting during the expansion; it's just bizarre.

Pulling one of those two drives out, replacing it, and then trying to repair would fail after cranking to 100% over 8-ish hours.

Siroc
Oct 10, 2004

Ray, when someone asks you if you're a god, you say "YES"!
I picked up a 4TB WD MyCloud a few days ago from Best Buy because it seemed like a great deal even if just for a 4TB drive, but I've not opened it. Does anyone use one of these and can recommend it? Is there a way to have Dropbox sync to the NAS too?

wang souffle
Apr 26, 2002
I posted this in the SSD thread, but I'm not sure if that's the best place for ZFS on SSD questions, so here I go again:

I'm running SmartOS (Illumos-based kernel) which does not support TRIM (yet). Am I realistically going to notice a performance or reliability hit without it, assuming I keep plenty of free space? I know I can get a Sandforce drive, but the EVO is still 10-20% cheaper compared to something like the Intel 530.

Also related: even though SSDs are more reliable (in theory) than spinners, I still want to get the benefit of data checksumming from ZFS. Ignoring complete failure of the SSD, would copies=2 be equivalent to a mirror in regards to recovering from a bad checksum? Or should I just spring for two identical SSDs?

SamDabbers
May 26, 2003



wang souffle posted:

I'm running SmartOS (Illumos-based kernel) which does not support TRIM (yet). Am I realistically going to notice a performance or reliability hit without it, assuming I keep plenty of free space? I know I can get a Sandforce drive, but the EVO is still 10-20% cheaper compared to something like the Intel 530.

You'll probably notice a performance hit, and Sandforce-based drives reportedly perform better than other SSDs in non-TRIM environments as long as you leave some unpartitioned space. FreeBSD has had ZFS TRIM support since 9.2-RELEASE, and it's also in 10.0-RELEASE. It appears that people are working on porting that feature to Illumos and ZFS on Linux, but it's not in an official release of either implementation yet. Use FreeBSD if you need this feature now, I suppose?
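On FreeBSD 10 a couple of sysctls should confirm it's active (names as I recall them, so double-check on your box):
code:
sysctl vfs.zfs.trim.enabled     # 1 means ZFS TRIM is on
sysctl kstat.zfs.misc.zio_trim  # counters showing TRIM activity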

wang souffle posted:

Also related: even though SSDs are more reliable (in theory) than spinners, I still want to get the benefit of data checksumming from ZFS. Ignoring complete failure of the SSD, would copies=2 be equivalent to a mirror in regards to recovering from a bad checksum? Or should I just spring for two identical SSDs?

Here's a blog entry examining the ZFS copies feature. The author spends about a third of the article talking about the single-drive case, and the conclusion seems to be that using copies=n can prevent data loss in the case where the drive has an unrecoverable read error but hasn't completely failed, but that you should still make backups, of course:

Richard Elling posted:

Both real and anecdotal evidence suggests that unrecoverable errors can occur while the device is still largely operational. ZFS has the ability to survive such errors without data loss. Very cool. Murphy's Law will ultimately catch up with you, though. In the case where ZFS cannot recover the data, ZFS will tell you which file is corrupted. You can then decide whether or not you should recover it from backups or source media.

Also important to note (emphasis mine):

Richard Elling posted:

The copies property works for all new writes, so I recommend that you set that policy when you create the file system or immediately after you create a zpool.
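In practice that just means passing the property at creation time, roughly like so (pool and dataset names are examples):
code:
zfs create -o copies=2 tank/important   # applies to everything ever written to this dataset
zfs set copies=2 tank                   # or set it pool-wide right after zpool create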

SamDabbers fucked around with this message at 05:05 on Feb 20, 2014

BlankSystemDaemon
Mar 13, 2009



If your OS of choice does not have TRIM support for ZFS on SSDs, it's recommended to do a regular secure erase of the SSD if your host is busy (you'll see ever-decreasing performance from the SSD if you don't). Also note that while both ZFS on Linux and Illumos are working on it, there's no telling when it'll be implemented.

Think of copies=n as optional protection against some types of data loss, in addition to the features already present in ZFS, and not a replacement for backups (because you should still back up): it stores multiple transparent copies of each block making up individual files within a vdev. In case of a dead sector, as determined by checksumming, ZFS will automatically use a copy of the block(s) that has the correct checksum.
Additionally, a vdev with parity is always better than just having single-device copies=n, as parity protects you against catastrophic drive failure (with the usual caveat that it can only tolerate as many failed drives as it has parity).

I'm not really sure how good a feature it is. In my experience, drives tend to fail spectacularly when they fail, and in addition, whatever dataset the property is applied to takes up double or triple the amount of disk space.

BlankSystemDaemon fucked around with this message at 11:09 on Feb 20, 2014

AlternateAccount
Apr 25, 2005
FYGM

D. Ebdrup posted:

If your OS of choice does not have TRIM support for ZFS on SSDs, it's recommended to do a regular secure erase of the SSD if your host is busy (you'll see ever-decreasing performance from the SSD if you don't). Also note that while both ZFS on Linux and Illumos are working on it, there's no telling when it'll be implemented.


Won't writing a disk full of zeroes do the opposite and cause it to behave in a "full" state? Could you use the manufacturer's utility to reset it back to factory instead?

Manos
Mar 1, 2004

AlternateAccount posted:

Won't writing a disk full of zeroes do the opposite and cause it to behave in a "full" state? Could you use the manufacturer's utility to reset it back to factory instead?

Secure erase is not the same thing as wiping a drive with a pass of zeros or the like.

AlternateAccount
Apr 25, 2005
FYGM

Manos posted:

Secure erase is not the same thing as wiping a drive with a pass of zeros or the like.

a random website says posted:

Definition: Secure Erase is the name given to a set of commands available from the firmware on PATA and SATA based hard drives. The Secure Erase commands are used as a data sanitization method to completely overwrite all of the data on a hard drive.

The Secure Erase data sanitization method is implemented in the following way:

Pass 1: Writes a binary one or zero

What does the process you're describing do instead?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

AlternateAccount posted:

Won't writing a disk full of zeroes do the opposite and cause it to behave in a "full" state? Could you use the manufacturer's utility to reset it back to factory instead?
Not necessarily the same thing as "full", but related. The way most SSDs today work with aggressive compression, the controller will notice a long sequence of 0s, map it to a single "null" cell, and basically compress it away while still marking it as written. Then there are factors like wear leveling which mean the block you zeroed in one write may not be the physical block read back later.

The TRIM command is what actually marks a block as unused, which is the correct way to have it treated as freed. A secure erase goes beyond that, wiping the data as well as dropping the blocks from the used-block list.
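For what it's worth, the usual way to issue an ATA secure erase from Linux is hdparm; the rough sequence is below (a sketch only; it wipes the whole drive, and sdX is a placeholder):
code:
hdparm -I /dev/sdX | grep -i frozen                    # must report "not frozen"
hdparm --user-master u --security-set-pass p /dev/sdX  # set a temporary password
hdparm --user-master u --security-erase p /dev/sdX     # issue the erase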

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Toshimo posted:

So, troubleshooting some unrelated items last week, I reset my CMOS, which caused my onboard Intel RAID controller to take a giant steaming poo poo all over my array's metadata. I tried a few things, but was unable to resuscitate it. Now I've given myself over to recreating, reformatting, and refilling my array. One thing I can't seem to find anything on is whether there is a way to actually back up the metadata so I don't ever have to go through the seven stages of loss for my 9TB array with 15 years of data on it again. Any help?

I've had to deal with blown arrays over the absolute stupidest poo poo - a client shut down and moved their setup (rackmount server, 2 rackmount drive arrays, SCSI), and swapped the connections to the controller when they reconnected it. Lost everything. I mean, I guess I could have understood it complaining until they swapped them back, but no, it just decided to resync the array with the disks in the completely wrong order, which of course meant it trashed everything recalculating "parity" with the wrong bits. Thanks, LSI.

I just trust software RAID (Linux md, BSD, or ZFS) more. At least I know exactly what it's doing, and a controller failure just means I replace the hardware, not restore from backup.
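That portability is the big win: with Linux md, for example, the metadata lives on the members themselves, so after a controller swap reassembly is roughly just this (sketch):
code:
mdadm --assemble --scan   # reassemble arrays from the superblocks on the disks
cat /proc/mdstat          # confirm the array came back
# the ZFS equivalent after moving disks to new hardware is simply: zpool import <poolname>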

AlternateAccount
Apr 25, 2005
FYGM

necrobobsledder posted:

Not necessarily the same thing as "full", but related. The way most SSDs today work with aggressive compression, the controller will notice a long sequence of 0s, map it to a single "null" cell, and basically compress it away while still marking it as written. Then there are factors like wear leveling which mean the block you zeroed in one write may not be the physical block read back later.

The TRIM command is what actually marks a block as unused, which is the correct way to have it treated as freed. A secure erase goes beyond that, wiping the data as well as dropping the blocks from the used-block list.

Interesting, thanks.

eddiewalker
Apr 28, 2004

Arrrr ye landlubber
I just got two drives from Newegg to throw in my new xpenology box. The packaging looked OK, but one of the drives just clicks loudly and the system refuses to boot until I pull it back out.

Newegg says I have to pay $12 return shipping to RMA, even though it was dead on arrival. I guess that's the last time I order from Newegg.


Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
Use their live chat and talk to them until they give you a shipping label.
