|
I've had a large 6-drive RAID-Z2 array running on OpenSolaris for over a year now. Is there any maintenance that needs to be done, software wise? I haven't noticed any decline in speed and there are no hardware or software errors on the drives. SMART readout looks perfect.
|
# ? Feb 17, 2014 05:11 |
|
|
Scrub the pool once in a while, I do it weekly via crontab. Not much else to do.
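For reference, a weekly scrub via cron is a one-liner (the pool name "tank" and the schedule here are placeholders, not from the post):

```shell
# Scrub the pool "tank" every Sunday at 03:00; adjust pool name and
# schedule to taste. Scrubs run in the background, so this just kicks
# one off.
0 3 * * 0 /sbin/zpool scrub tank
```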
|
# ? Feb 17, 2014 05:55 |
|
poo poo, it looks like I might have a bad drive or two. I guess I will order a couple more 2TB NAS drives this week and see if it sorts out.
|
# ? Feb 17, 2014 05:57 |
|
IOwnCalculus posted:Scrub the pool once in a while, I do it weekly via crontab. Not much else to do. On FreeBSD 10 there's a knob to do a scrub on a defined interval as part of the periodic(8) system. Just add a few lines to /etc/periodic.conf (you may have to create it) to have it scrub every 7 days, and output some zpool status info in the daily output emails.
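The knobs in question look roughly like this (names as shipped in FreeBSD 10's /etc/defaults/periodic.conf; the 7-day threshold matches the post):

```shell
# /etc/periodic.conf -- scrub every pool once 7 days have passed
# since its last scrub, and include zpool status in the daily
# periodic mail
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="7"
daily_status_zfs_enable="YES"
```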
|
# ? Feb 17, 2014 06:12 |
|
I'm rolling on NAS4Free, which is still on FreeBSD 9 - so if that's a new-for-10 option, I can't use it yet. Neat, though!
|
# ? Feb 17, 2014 06:16 |
|
IOwnCalculus posted:I'm rolling on NAS4Free, which is still on FreeBSD 9 - so if that's a new-for-10 option, I can't use it yet. Neat, though! I'm not sure if it's new in 10 or if it was in 9 also. You can check what options are available in /etc/defaults/periodic.conf
|
# ? Feb 17, 2014 06:20 |
|
IOwnCalculus posted:Scrub the pool once in a while, I do it weekly via crontab. Not much else to do. Set a job to scrub once a month. Thanks!
|
# ? Feb 17, 2014 06:28 |
|
SamDabbers posted:I'm not sure if it's new in 10 or if it was in 9 also. You can check what options are available in /etc/defaults/periodic.conf Unless they stripped it from N4F, looks like it's in 10 only.
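A quick way to check on any given box, assuming a stock install:

```shell
# List the ZFS-related knobs the OS ships defaults for; if nothing
# mentions scrub, the periodic scrub feature isn't in this release
grep -i zfs /etc/defaults/periodic.conf
```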
|
# ? Feb 17, 2014 08:15 |
|
IOwnCalculus posted:Unless they stripped it from N4F, looks like it's in 10 only:
|
# ? Feb 17, 2014 10:43 |
|
I'm having a hard time finding a straight answer: if I get a Synology and install the iTunes server package, a Synology share will appear under the shared libraries in iTunes on my laptop. Can I make a playlist of music from that shared library then sync it to an iPod? If that's the case I'll be Synology shopping today. I use a 128gb MB Air as my primary computer, but it's obviously cramped with a local iTunes library.
|
# ? Feb 17, 2014 17:30 |
|
If you don't actually want to share your library then you can just relocate your library onto the NAS. At which point it works exactly like iTunes on your local storage would, just a little bit slower.
|
# ? Feb 17, 2014 17:47 |
|
iTunes server is a bit of an afterthought and you don't get any of the nice album cover navigation for a local folder. More practical options would be Apple's or Google's services for music.
|
# ? Feb 17, 2014 17:52 |
|
MrMoo posted:iTunes server is a bit of an afterthought and you don't get any of the nice album cover navigation for a local folder. More practical options would be Apple's or Google's services for music. Streaming from iTunes Match can be kind of slow and janky, though :\
|
# ? Feb 17, 2014 18:03 |
|
Caged posted:If you don't actually want to share your library then you can just relocate your library onto the NAS. At which point it works exactly like iTunes on your local storage would, just a little bit slower. That's what I'm doing now. I have the iTunes database files stored locally and the music on an NFS share from the terribly slow drive plugged into my router. It's functional, but so so slow. I guess I can keep doing that and get a better experience using any generic NAS device that's faster than a router USB port. What would be killer is having one "home cloud iTunes instance" that I can share between iTunes on my laptop or my wife's without worrying about local database files. That's what got me interested in paying a premium for Synology or doing an xpenology thing. What would be even cooler is being able to fire up Remote.app on an iOS device, connect it to a Synology iTunes server and beam music to AirTunes speakers around the house without keeping a laptop open like I do now, but I can't imagine that's possible. I guess one option is running xpenology and Windows+iTunes side by side in VMs, but that sounds like something Rube Goldberg would applaud.
|
# ? Feb 17, 2014 18:04 |
|
eddiewalker posted:What would be killer is having one "home cloud iTunes instance" that I can share between iTunes on my laptop or my wife's without worrying about local database files. That's what got me interested in paying a premium for Synology or doing an xpenology thing. I feeeeeel like this might be an iTunes Match situation.
|
# ? Feb 17, 2014 21:09 |
|
My XPENology box REFUSES to repair, it's kind of making me insane. It started out with a 2TB and 2x 1TB (after I ditched that 250GB), then I added another 2TB. The total space has never been right and it refuses to expand to the 3.8TB or whatever should be available. Hooked up a monitor and saw drive failures on SDB and SDC when it tried the expansion and nearly poo poo my pants. Those were the two 1TB drives, so I replaced the first one and it repaired fine, and then replaced the second one and it fails to repair AGAIN. At this point I am just going to copy everything off, kill the volume and start over with all 4 drives present, I think.
|
# ? Feb 19, 2014 15:14 |
A few tips: Here's a cool little analysis of raidz3. And I ran into a cool little way to check which drive is the one that's causing your pool to be degraded, and how to get its serial number: code:
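(A sketch of what that kind of check can look like; the device name is just an example, FreeBSD-style device nodes are assumed, and smartmontools needs to be installed:)

```shell
# Show only pools that aren't healthy, along with which device
# is at fault
zpool status -x

# Grab the serial number of the suspect drive so you can match it
# to a physical disk when you pull it
smartctl -i /dev/ada2 | grep -i 'serial'
```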
...and I'm pretty much back to square one with my server build because I can't for the life of me actually find proper hardware to put into the rig I want. It's always an issue of what's manufactured vs. what's actually available in Denmark given my requirements (ECC UDIMM (not SO-DIMM) memory, OOB BMC, dual NIC, at least one PCIe x8 slot for an IBM ServeRAID M1015 unless it's the Supermicro X10SL7-F motherboard). EDIT: ↓ Just be aware that you should strive to have the same drives in all vdevs in your zpool to avoid causing unnecessary wear on one drive, and that if you'll be slowly adding vdevs to your pool over time, the oldest drives will already have some wear on them by the time you add the newest ones. BlankSystemDaemon fucked around with this message at 15:44 on Feb 19, 2014 |
|
# ? Feb 19, 2014 15:17 |
|
Heh, I was actually just thinking about RAID-Z3. I finally ordered the MB/CPU/RAM for my new file server as Newegg had/has the SM X10SL7-F and E3 1220v3 as a combo deal (saving about $40), and I'm trying to plan out my build and future expansion to determine what case to get. The board comes with 6 SATA ports off the Intel controller and 8 off the LSI 2308, and I have an M1115 for another 8, giving me a total of 22 SATA ports. As I don't think I'll be adding any more HBAs, that comes out to be nearly perfect for a NORCO RPC-4020/4220 with 20 hot swap bays (and the other 2 ports can be used for ZIL + L2ARC or OS drive). My current file server has 6 x 2TB drives in two 3-disk RAID-Z vdevs, and while I'm definitely going to move the data off and rebuild it as a higher-redundancy vdev, I don't want to add any additional drives of that size, so it's going to be 6-disk RAID-Z2. That leaves me with 14 other ports, which means I can do either two 7-disk RAID-Z3 vdevs, or two RAID-Z2s, one with 8 disks and the other with 6. And I haven't quite decided on which yet.
GokieKS fucked around with this message at 15:41 on Feb 19, 2014 |
# ? Feb 19, 2014 15:38 |
|
AlternateAccount posted:My XPENology box REFUSES to repair, it's kind of making me insane. It started out with a 2TB and 2x1TB(after I ditched that 250GB) then I added another 2TB. The total space has never been right and it refuses to expand to the 3.8TB or whatever that should be available. Hooked up a monitor and saw drive failures on SDB and SDC when it tried the expansion and nearly poo poo my pants. Assuming you have tested your drives, maybe your controller is failing, or even bad cables.
|
# ? Feb 19, 2014 15:46 |
|
D. Ebdrup posted:EDIT: ↓ Just be aware that you should strive to have the same drives in all vdevs in your zpool to avoid causing unnecessary wear on one drive, and that if you'll be slowly adding vdevs to your pool over time, you'll start with some wear on the oldest drives when you'll be adding the newest drives. Yeah... the two 3-disk RAID-Z1 vdevs I have are somewhat close in age, which is why I was going to rebuild them as a single RAID-Z2 and didn't want to add any more drives. This new file server will be the rebuilt vdev and either a 7-disk RAID-Z3 or 6-disk RAID-Z2 vdev of 3TB drives, with the last 7/8 drives coming in the future.
|
# ? Feb 19, 2014 16:01 |
|
D. Ebdrup posted:A few tips: code:
|
# ? Feb 19, 2014 16:17 |
|
Don Lapre posted:Assuming you have tested your drives, maybe your controller is failing, or even bad cables. Well, one of the 1TBs, the first one I switched, was definitely failing. Threw it in an enclosure and tried to just format/write zeroes. After 10 minutes, the time estimated to finish was something like 14 hours. It's an N54L, so no cables, who knows about the controller. Dammit, the thing worked just fine a week ago.
|
# ? Feb 19, 2014 16:43 |
|
AlternateAccount posted:Well, one of the 1TBs, the first one I switched, was definitely failing. Threw it in an enclosure and tried to just format/write zeroes. After 10 minutes, the time estimated to finish was something like 14 hours. Take the drives and hook them up to a pc and run something like seatools on them to test them.
|
# ? Feb 19, 2014 16:43 |
|
GokieKS posted:Heh, I was actually just thinking about RAID-Z3. I finally ordered the MB/CPU/RAM for my new file server as Newegg had/has the SM X10SL7-F and E3 1220v3 as a combo deal (saving about $40), and I'm trying to plan out my build and future expansion to determine what case to get. The board comes with 6 SATA ports off the Intel controller and 8 off the LSI 2308, and I have an M1115 for another 8, giving me a total of 22 SATA ports. As I don't think I'll be adding any more HBAs, that comes out to be nearly perfect for a NORCO RPC-4020/4220 with 20 hot swap bays (and the other 2 ports can be used for ZIL + L2ARC or OS drive). My current file server has 6 x 2TB drives in two 3-disk RAID-Z vdevs, and while I'm definitely going to move the data off and rebuild it as a higher-redundancy vdev, I don't want to add any additional drives of that size, so it's going to be 6-disk RAID-Z2. That leaves me with 14 other ports, which means I can do either two 7-disk RAID-Z3 vdevs, or two RAID-Z2s, one with 8 disks and the other with 6. And I haven't quite decided on which yet. I just got an X10SL7-F a few days ago. Had to wait until yesterday for the processor. Updated all the firmware last night, then went to run memtest86+. Just before I ran it, I happened to look at the Event Log and caught either a bad omen or a really crazy coincidence. That error didn't occur during the memtest86 run, it happened about 3 minutes before. Now I'm a little worried, but also hey, the board does track and log that stuff correctly, cool. Going to let memtest finish 3 passes I think?
|
# ? Feb 19, 2014 19:34 |
|
A random correctable ECC error once in a while is basically normal. If you discover that you start to get them much more frequently though and almost always from the same stick, then I'd probably be looking to replace it.
|
# ? Feb 19, 2014 20:06 |
|
Don Lapre posted:Take the drives and hook them up to a pc and run something like seatools on them to test them. Yeah, I knew I was going to end up here but I didn't want to spend a shitload of time. Booted up UBCD and I am running one of the tools on there doing a surface scan. It's a Seagate drive, but for some reason SeaTools didn't see it. Could be the versions on this disc are old or stupid. 21% done, no errors. Gonna get some "known good" drives piled up and then just rebuild the drat thing properly. AlternateAccount fucked around with this message at 21:42 on Feb 19, 2014 |
# ? Feb 19, 2014 21:36 |
|
What does SMART say?
|
# ? Feb 19, 2014 21:56 |
|
eightysixed posted:What does SMART say? Dunno, it's all torn down now for scanning. Nothing bad enough to cause Synology to say anything but NORMAL for drive status, but I am not sure what its threshold is. This is what I was getting during the expansion, it's just bizarre. Pulling one of those two drives out and replacing it and then trying to repair would fail after cranking to 100% over 8ish hours.
|
# ? Feb 19, 2014 22:14 |
|
I picked up a 4tb WD MyCloud a few days ago from BestBuy because it seemed like a great deal even if just for a 4tb drive, but I've not opened it. Does anyone use one of these and can recommend it? Is there a way to have Dropbox sync to the NAS too?
|
# ? Feb 20, 2014 01:05 |
|
I posted this in the SSD thread, but I'm not sure if that's the best place for ZFS on SSD questions, so here I go again: I'm running SmartOS (Illumos-based kernel) which does not support TRIM (yet). Am I realistically going to notice a performance or reliability hit without it, assuming I keep plenty of free space? I know I can get a Sandforce drive, but the EVO is still 10-20% cheaper compared to something like the Intel 530. Also related: even though SSDs are more reliable (in theory) than spinners, I still want to get the benefit of data checksumming from ZFS. Ignoring complete failure of the SSD, would copies=2 be equivalent to a mirror in regards to recovering from a bad checksum? Or should I just spring for two identical SSDs?
|
# ? Feb 20, 2014 04:17 |
|
wang souffle posted:I'm running SmartOS (Illumos-based kernel) which does not support TRIM (yet). Am I realistically going to notice a performance or reliability hit without it, assuming I keep plenty of free space? I know I can get a Sandforce drive, but the EVO is still 10-20% cheaper compared to something like the Intel 530. You'll probably notice a performance hit, and Sandforce-based drives reportedly perform better than other SSDs in non-TRIM environments as long as you leave some unpartitioned space. FreeBSD has ZFS TRIM since 9.2-RELEASE, and it's also in 10.0-RELEASE. It appears that people are working on porting that feature to Illumos and ZFS-on-Linux, but it's not in an official release yet for either implementation. Use FreeBSD if you need this feature now, I suppose? wang souffle posted:Also related: even though SSDs are more reliable (in theory) than spinners, I still want to get the benefit of data checksumming from ZFS. Ignoring complete failure of the SSD, would copies=2 be equivalent to a mirror in regards to recovering from a bad checksum? Or should I just spring for two identical SSDs? Here's a blog entry examining the ZFS copies feature. The author spends about a third of the article talking about the single drive case, and the conclusion seems to be that using copies=n can prevent dataloss in the case that the drive has an unrecoverable read error but hasn't completely failed, but that you should still make backups, of course: Richard Elling posted:Both real and anecdotal evidence suggests that unrecoverable errors can occur while the device is still largely operational. ZFS has the ability to survive such errors without data loss. Very cool. Murphy's Law will ultimately catch up with you, though. In the case where ZFS cannot recover the data, ZFS will tell you which file is corrupted. You can then decide whether or not you should recover it from backups or source media.
Also important to note that: (emphasis mine) Richard Elling posted:The copies property works for all new writes, so I recommend that you set that policy when you create the file system or immediately after you create a zpool. SamDabbers fucked around with this message at 05:05 on Feb 20, 2014 |
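If you do go the FreeBSD route, you can confirm ZFS-level TRIM is actually active (sysctl names as of FreeBSD 10; treat this as a sketch):

```shell
# 1 means ZFS-level TRIM is enabled (the default on 10.0)
sysctl vfs.zfs.trim.enabled

# Cumulative TRIM counters, to verify TRIMs are actually being issued
sysctl kstat.zfs.misc.zio_trim
```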
# ? Feb 20, 2014 04:50 |
If your OS of choice does not have TRIM support for ZFS on SSDs, it is recommended to do a regular secure erase of the SSD if your host is busy (you'll see ever-decreasing performance gains from the SSD if you don't). Also note that while both ZFS on Linux and Illumos are working on it, there's no telling when it'll be implemented. Think of copies=n as optional protection against some types of data loss, in addition to the features already present in ZFS, that isn't backup (because you should still back up): it stores multiple transparent copies of each block making up individual files on every device in a vdev. In case of a dead sector, as determined by checksumming, ZFS will automatically use a copy of the block(s) which have the correct checksum. Additionally, a vdev with parity is always better than just having single-device copies=n, for example - as the vdev with parity protects you against catastrophic drive failure (with the usual caveat that it can only protect against it with however much parity it has). I'm not really sure how good of a feature it is. In my experience, drives tend to fail spectacularly when they fail - and in addition, it means whatever dataset the property is applied to takes up double or triple the amount of disk space. BlankSystemDaemon fucked around with this message at 11:09 on Feb 20, 2014 |
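Setting the property is a one-liner either way (pool/dataset names below are placeholders); as noted above, only blocks written after it's set get the extra copies:

```shell
# Create a new dataset with copies=2 from the start (safest)
zfs create -o copies=2 tank/important

# Or set it on an existing dataset -- already-written blocks keep
# a single copy until they're rewritten
zfs set copies=2 tank/important
zfs get copies tank/important
```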
|
# ? Feb 20, 2014 10:57 |
|
D. Ebdrup posted:If your OS of choice does not have TRIM support for ZFS on SSDs, it is recommended to do a regular secure erase of the SSD if your host is busy (you'll see ever-decreasing performance gains from the SSD if you don't). Also note that while both ZFS on Linux and Illumos are working on it, there's no telling when it'll be implemented. Won't writing a disk full of zeroes do the opposite and cause it to behave in a "full" state? Could you use the manufacturer's utility to reset it back to factory instead?
|
# ? Feb 20, 2014 14:43 |
|
AlternateAccount posted:Won't writing a disk full of zeroes do the opposite and cause it to behave in a "full" state? Could you use the manufacturer's utility to reset it back to factory instead? Secure erase is not the same thing as wiping a drive with a pass of zeros or the like.
|
# ? Feb 20, 2014 20:00 |
|
Manos posted:Secure erase is not the same thing as wiping a drive with a pass of zeros or the like. a random website says posted:Definition: Secure Erase is the name given to a set of commands available from the firmware on PATA and SATA based hard drives. The Secure Erase commands are used as a data sanitization method to completely overwrite all of the data on a hard drive. What does the process you're describing do instead?
|
# ? Feb 20, 2014 20:35 |
|
AlternateAccount posted:Won't writing a disk full of zeroes do the opposite and cause it to behave in a "full" state? Could you use the manufacturer's utility to reset it back to factory instead? The TRIM command is what marks a block as unused, which is how the controller gets the correct "freed" behavior. A secure erase goes beyond that and writes random data as well as dropping the blocks from the used block chain.
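For what it's worth, on Linux an ATA secure erase is usually issued with hdparm along these lines (destructive; the device name is an example, and the drive must not be in the "frozen" security state):

```shell
# Confirm the drive supports the ATA security feature set and
# isn't frozen (a suspend/resume cycle sometimes unfreezes it)
hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary password, then issue the erase -- this wipes the
# entire drive and returns all cells to their erased state
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```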
|
# ? Feb 20, 2014 20:52 |
|
Toshimo posted:So, troubleshooting some unrelated items last week, I reset my CMOS, which caused my onboard Intel RAID controller to take a giant steaming poo poo all over my array's metadata. I tried a few things, but was unable to resuscitate it. Now, I've given myself over to recreating, reformatting, and refilling my array. One thing I can't seem to find anything on is if there is a way to actually back up the metadata so I don't ever have to go through the seven stages of loss for my 9TB array with 15 years of data on it again. Any help? I've had to deal with blown arrays over the absolute stupidest poo poo - a client shut down and moved their setup (rackmount server, 2 rackmount drive arrays, SCSI). They swapped the connections to the controller when they reconnected it and lost everything. I mean, I guess I could have understood it complaining until you swapped them back, but no, it just decided to resync the array with the disks in the completely wrong order, which of course meant it trashed everything recalculating "parity" with the wrong bits. Thanks, LSI. I just trust (Linux, BSD or ZFS) software RAID more. At least I know exactly what it's doing, and a controller failure just means I replace the hardware, not restore from backup.
|
# ? Feb 20, 2014 22:21 |
|
necrobobsledder posted:Not necessarily the same thing as "full" but related. The way most SSDs today work with aggressive compression, the controller will notice that there's a long sequence of 0s, map that to a single "null" cell, and the SSD should compress it basically in the end while still marked as written. Then there's factors like wear level balancing that fudge with the problem that your zero'ed block in one write may not be the one read back later. Interesting, thanks.
|
# ? Feb 21, 2014 00:33 |
|
I just got two drives from Newegg to throw in my new xpenology box. The packaging looked OK, but one of the drives just clicks loudly and the system refuses to boot until I pull it back out. Newegg says I have to pay $12 return shipping to RMA, even though it was dead on arrival. I guess that's the last time I order from Newegg.
|
# ? Feb 21, 2014 01:23 |
|
|
Use their live chat and talk to them until they give you a shipping label.
|
# ? Feb 21, 2014 01:27 |