evol262 posted:Or use /dev/disk/by-id
|
|
# ? Apr 20, 2017 21:12 |
|
Hughlander posted:What is the focus on privacy? CrashPlan encrypts the backups by default. Are the SpiderOak clients open source? I don't think the CrashPlan clients are, so you're just placing your faith in them that they've implemented encryption like they say they have. Note that I'm not saying this is something worth worrying about, only that it's a plausible and not insane thing to worry about.
|
# ? Apr 20, 2017 21:15 |
|
Thermopyle posted:Are the SpiderOak clients open source? I don't think the CrashPlan clients are, so you're just placing your faith in them that they've implemented encryption like they say they have. Looks like the SpiderOak desktop client isn't open source. And CrashPlan is Java... I'm sure you could crack the jar open and see that they're calling the JDK's AES APIs just like you're supposed to. I'm equally sure there's someone out there who has.
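For the curious, checking a claim like that doesn't even need a full decompile — a jar is just a zip, and standard JCE usage leaves algorithm names as string constants in the class files. A rough sketch (the jar name and paths below are made up; a dummy "class" file stands in so the example is self-contained):

```shell
# On a real client you'd start with something like:
#   unzip -q CrashPlan.jar -d extracted     (jar name is a guess)
# JCE algorithm identifiers survive compilation in the constant pool,
# so a plain grep over the unpacked classes finds them. Simulated here:
mkdir -p extracted/com/example
printf 'javax/crypto/Cipher;AES/CBC/PKCS5Padding' > extracted/com/example/Crypto.class
grep -rl 'AES/CBC/PKCS5Padding' extracted
```

A hit only proves the string is present, not that it's used correctly — for that you'd still want to read the decompiled code with something like CFR or Procyon.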
|
# ? Apr 20, 2017 22:27 |
|
D. Ebdrup posted:UUIDs are meant for machines, not humans. Best of all is when you label both the bay and the disk with a sticker that has the drive's serial number and, on FreeBSD at least, use GPT labels with the format 'devid-bay#-serial#' so 'zpool status' contains all salient information for identifying which disk to pull when a disk needs to be replaced. On Linux, you'd probably just want to use the format 'bay#-serial#', because of the above-mentioned lack of persistent device IDs. That's kind of not the point, though. /disk/by-path and /disk/by-id can both be used to uniquely identify disks. Or devicemapper. Yes, uuids are meant for machines. Creating a zpool as a one-time operation with uuids is not the end of the world if consistent naming is not important enough to you and you don't trust device initialization/enumeration.
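A sketch of the 'bay#-serial#' variant of that naming scheme — bay number and serial here are placeholder values; on a real box you'd read the serial with `camcontrol identify` (FreeBSD) or `lsblk -o NAME,SERIAL` (Linux):

```shell
# Compose the label from the values on the bay/disk stickers.
# Example values, not real hardware:
bay=3
serial="WD-WCC4N1234567"
label="bay${bay}-${serial}"
echo "$label"
# On FreeBSD you'd then apply it with something along the lines of:
#   gpart modify -i 1 -l "$label" ada0
# after which 'zpool status' shows gpt/bay3-WD-WCC4N1234567
# instead of an anonymous device node.
```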
|
# ? Apr 20, 2017 22:29 |
Thermopyle posted:Are the spider oak clients open source? I don't think CrashPlan clients are, so you're just placing your faith in them that they've implemented encryption like they say they have. evol262 posted:Yes, uuids are meant for machines. Creating a zpool as a one-time operation with uuids is not the end of the world if consistent naming is not important enough to you and you don't trust device initialization/enumeration. BlankSystemDaemon fucked around with this message at 08:47 on Apr 21, 2017 |
|
# ? Apr 21, 2017 08:43 |
|
FISHMANPET posted:OK, another problem with my ZFS pool. I have 2 RaidZ vdevs, one was all 3TB drives, the other was a mix of 1.5TB and 3TB drives so effectively all 1.5TB drives. I've replaced all the 1.5TB drives with 3TB drives. Even though the drives have been fully resilvered zpool status still says the drives are replacing, and every time I reboot it tries to resilver them all again. I also can't get the pool to actually expand. My resilver finished, but the two replacing faux devices remained. I tried to offline or detach the unneeded disks and it says no replicas available. I exported and imported the pool, and that worked, but it's resilvering AGAIN. The pool still hasn't expanded. code:
Also, sdg is the 1.5TB disk I'm replacing, sde is the 3TB I'm replacing it with. Despite that disk being resilvered 2 or 3 times, when I check zpool iostat storage -v the pool is reading from all disks and writing as fast as it can to sde. It's like it's completely unaware that it's resilvered that device already. FISHMANPET fucked around with this message at 14:49 on Apr 21, 2017 |
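For anyone hitting the same wall, the usual sequence for finishing off a completed replace looks roughly like this — pool and device names match this post, but treat it as a sketch, not gospel:

```shell
# Not runnable without the actual pool; this is the shape of the fix:
#   zpool status -v storage         # confirm the resilver really finished
#   zpool detach storage sdg        # drop the old half of the 'replacing' pair
#   zpool online -e storage sde     # expand into the new disk's extra capacity
#   zpool set autoexpand=on storage # or let future disk swaps grow the pool
```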
# ? Apr 21, 2017 14:44 |
|
OK, it turns out ZFS doesn't like when there are damaged files. I removed the damaged files, and I watched the resilver finish and the replacing devices disappear, and the pool expanded itself. So the moral of the story, if you've got damaged files, deal with those first.
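That moral, in command form — a hedged sketch with an example pool name, since `zpool status -v` is what actually names the damaged files:

```shell
# Recipe only, not runnable here:
#   zpool status -v storage            # -v prints full paths of files with errors
#   rm /storage/path/to/damaged.file   # (example path) remove or restore each one
#   zpool scrub storage                # re-check the pool
#   zpool status -v storage            # expect "No known data errors"
```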
|
# ? Apr 22, 2017 05:03 |
|
How were they damaged? Seems like it defeats the point of ZFS.
|
# ? Apr 22, 2017 05:30 |
|
I had a dead drive in my Raidz array for 2 years so no redundancy during that time. I think it totaled up to 12 damaged files, but I'd already replaced/removed 9 of them, leaving just 3.
|
# ? Apr 22, 2017 05:46 |
|
FISHMANPET posted:I had a dead drive in my Raidz array for 2 years so no redundancy during that time. I think it totaled up to 12 damaged files, but I'd already replaced/removed 9 of them, leaving just 3. 2 years?! Well, glad you got it sorted now.
|
# ? Apr 22, 2017 06:10 |
|
8-bit Miniboss posted:How were they damaged? Seems like it defeats the point of ZFS. Hardly. A huge part of ZFS is that it can tell you exactly what files are corrupted. A failed drive during a resilver, even if you only had single-parity raidz1, does not instantly kill the whole array.
|
# ? Apr 22, 2017 06:18 |
|
IOwnCalculus posted:Hardly. A huge part of ZFS is that it can tell you exactly what files are corrupted. A failed drive during a resilver, even if you only had single-parity raidz1, does not instantly kill the whole array. Well, I was more thinking along the lines that those files should have been repaired through a scrub, but if they came from an array with no redundancy at the time, welp.
|
# ? Apr 22, 2017 06:23 |
I think it's a testament to how well ZFS is designed that, in two years, it didn't lose every single file in that array.
|
|
# ? Apr 22, 2017 11:33 |
|
D. Ebdrup posted:I think it's a testament to how well ZFS is designed that, in two years, it didn't lose every single file in that array. A single spinning rust disk running fat32 wouldn't lose many files either. "This filesystem has no redundancy but it didn't totally poo poo itself" isn't a great yardstick. The difference is that zfs knows which files were bad.
|
# ? Apr 22, 2017 14:07 |
evol262 posted:A single spinning rust disk running fat32 wouldn't lose many files either. "This filesystem has no redundancy but it didn't totally poo poo itself" isn't a great yardstick. The difference is that zfs knows which files were bad.
|
|
# ? Apr 22, 2017 15:09 |
|
D. Ebdrup posted:But ZFS isn't comparable to a single spinning disk, it's comparable to hardware RAID HBAs which can often end up completely destroying an array if it loses the parity disk(s). It is comparable to mdraid/btrfs redundancy, though, which won't completely destroy an array. I like ZFS. I use ZFS. But "I lost a parity disk, didn't lose a 2nd and fail the array/have to rebuild and suffer a failure once replaced" isn't miraculous. It's not even surprising from LSI. It's how raid5/raidz is supposed to work. What is impressive is that the bad files were identified instead of silently corrupted.
|
# ? Apr 22, 2017 16:02 |
|
I am looking into what large storage/backup options exist for backing up a macbook pro + external HD(s). The backup option, unlike the laptop, can be non-portable and can be hardwired into the home network but would ideally be able to stay synced via wifi (but I can buy an adapter and sync via hard wiring if needed). I have the ability to build computers and follow instructions so I'm looking to be pointed in the right direction. I don't necessarily need an out of the box solution but would appreciate answers for both home built and OOB solutions. Thanks!
|
# ? Apr 22, 2017 19:01 |
|
How much data? Easiest option is gonna be a 2-4 bay Synology or QNAP.
|
# ? Apr 22, 2017 19:04 |
|
Moey posted:How much data? Easiest option is gonna be a 2-4 bay Synology or QNAP About 3TB right now and honestly probably won't grow much beyond since it's video and image files that only need to be kept for a short while before deletion. Say ~9-10TB to give plenty of room? edit: current router is a Linksys E2500, if I also should get a different router to maximize data transfer with whatever NAS I go with please let me know that also tangy yet delightful fucked around with this message at 19:26 on Apr 22, 2017 |
# ? Apr 22, 2017 19:18 |
|
Yeah the E2500 looks to only have 10/100 ethernet ports. A newer router with gigabit ports will help.
|
# ? Apr 22, 2017 19:31 |
|
D. Ebdrup posted:Unlike previous editions of Windows 10, even Windows 10 Home now supports doing backup (file history, as it confusingly calls it) to UNC paths (ie network destinations) - so you simply need to share a folder on your NAS with SMB, then mount that folder in Windows, and tell Windows 10 to use that. I gave this a shot, but didn't like the way Windows would append time and date stamps to filenames it was copying over. I understand why it did, but that got to me and I wasn't thrilled with the lack of visual feedback. Ended up going with Free File Sync and a script to scan for changes every so often. Seems to be doing what I needed it to. Still appreciate the suggestion, though!
|
# ? Apr 22, 2017 19:34 |
|
Kinda what I thought. I'll hit up the networking thread for that. Should I get this Synology 2-Bay 16 TB/ 20 TB Network Attached Storage (DS216play) or would something else be recommended? And then a pair of these HDs? Probably debate between the 4TB and 6TB.
|
# ? Apr 22, 2017 19:38 |
|
Is Samba broken on unRaid 6.3.3? I just attempted to use OpenMediaVault which seemed promising for my use, but their version 3 is ((Beta)) that everyone assures me is essentially bug-free. So I install it and I can't get Samba to fire up no matter what. I downgrade to their version 2 and Samba works fine. But everything else is glitchy and buggy as gently caress. So screw OMV; I decided to switch to Unraid since everyone seems to recommend it and now I'm hitting the SAME loving PROBLEM WITH SAMBA NOT EXPORTING.
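Whatever the distro, a share that won't export usually comes down to the share section never making it into smb.conf. A minimal self-contained sanity check (share name and path are examples, not anything unRaid/OMV-specific):

```shell
# Write a throwaway config with one [global] section and one share:
cat > /tmp/smb-test.conf <<'EOF'
[global]
   workgroup = WORKGROUP
   map to guest = Bad User

[media]
   path = /srv/media
   guest ok = yes
   read only = no
EOF
# On a box with Samba installed, `testparm -s /tmp/smb-test.conf` would
# validate this and print the shares it will actually export; if your
# share name doesn't appear there, the GUI never wrote it out.
grep -c '^\[' /tmp/smb-test.conf
```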
|
# ? Apr 22, 2017 22:48 |
|
SMB works fine for me on 6.3.3
|
# ? Apr 22, 2017 22:52 |
|
Ok so it works with my Hackintosh perfectly fine but not my Windows 10 box
|
# ? Apr 22, 2017 23:02 |
|
YouTuber posted:Ok so it works with my Hackintosh perfectly fine but not my Windows 10 box Windows 10 box is probably angry about passwords or something.
|
# ? Apr 22, 2017 23:04 |
|
havenwaters posted:Windows 10 box is probably angry about passwords or something. I went away for 15min and the problem solved itself
|
# ? Apr 22, 2017 23:10 |
|
YouTuber posted:Is Samba broken on unRaid 6.3.3? I just attempted to use OpenMediaVault which seemed promising for my use but their version 3 is ((Beta)) but everyone assures is essentially bug free. So I install it and I can't get Samba to fire up no matter what. I downgrade to their version 2 and Samba works fine. But everything else is glitchy and buggy as gently caress. So screw OMV; I decided to switch to Unraid since everyone seems to recommend it and now I'm hitting the SAME loving PROBLEM WITH SAMBA NOT EXPORTING. Little late since you're going back to unRaid, but I'm on OMV3 and I can use SMB just fine. It needs configuration in 2 places, one is creating the share and the other is applying that share in the SMB/CIFS settings.
|
# ? Apr 23, 2017 00:43 |
|
Burning in my new-old X8DT6-F setup. Final configuration is two X5647 CPUs and 48GB RAM. Got it burning in on memtest just to be safe, but I will probably swap the whole thing into my existing chassis / drive array tonight. It has an onboard LSI2008 just like the X8SI6 I'm removing, but I could've sworn it was a bigger pain in the rear end last time to actually reflash it to IT mode. The whole process was:

*Create a FreeDOS USB disk
*Copy sas2flsh.exe, 2118it.bin, and mpt2sas.rom onto said USB disk. Just grabbed the P20 versions of all of these from Broadcom.
*sas2flsh.exe -o -e 6
*sas2flsh.exe -o -f 2118it.bin -b mpt2sas.rom

Done. No dicking around with reboots or other intermediary firmwares or even reflashing the SAS address. The controller still identifies itself as SMC2008-IR, but it clearly shows the IT firmware. I probably could've skipped the BIOS ROM, but Supermicro boards let me disable the option ROMs on each slot / device whenever I want to.

The only caveat with my build so far is the heatsinks. Supermicro's LGA1366 boards have a backplate already installed that's threaded for M3 screws, and it doesn't seem like it'd be easy to remove for other solutions. Some Googling turned up the Intel E97381 as a popular and reasonably inexpensive tower heatsink that should clear a 4U chassis, and it already has M3-threaded fasteners so it can just use the Supermicro base plate. It installed perfectly on CPU1, but on CPU2 there are a few voltage regulators that are extremely close to the mounting holes. Easy enough to fix with a file - one arm needed a 90 degree notch in it to clear one regulator, while another arm just needed to be rounded down to ensure it didn't touch a lead coming off of another regulator.

They also seem reasonably quiet, which is nice. My server lives in the garage so I don't ask for silent, but I don't want "must wear hearing protection" either.

If anyone wants an X8SI6 (non-F, no IPMI) with a X3450 and 8GB ECC, let me know
|
# ? Apr 23, 2017 01:24 |
|
8-bit Miniboss posted:Little late since you're going back to unRaid, but I'm on OMV3 and I can use SMB just fine. It needs configuration in 2 places, one is creating the share and the other is applying that share in the SMB/CIFS settings. Yeah I did that, I actually went and watched Youtube videos on how to do it just to make sure I wasn't loving it up. It was just hosed. That was just one problem with OMV, every time I'd apply configurations to be written it would pop up error messages. I like the concept but both version 3 and 2 had glaring problems I didn't feel like googling and correcting.
|
# ? Apr 23, 2017 01:48 |
|
IOwnCalculus posted:Burning in my new-old X8DT6-F setup. Final configuration is two X5647 CPUs and 48GB RAM. Got it burning in on memtest just to be safe but I will probably swap the whole thing into my existing chassis / drive array tonight. Trip report:

*Had to get as much as possible out of the way to fit the board, since this fucker is huge
*I wish I had some form of super-short / flexible SFF8087 to SATA cables - the two connectors on the motherboard end up very close to the actual SATA headers they plug into. There's thankfully just enough room to make it work in the Norco case.
*The on-board USB headers are too close to each other to fit both of the USB drives I'm using, so one is sticking out the back of the drat server.
*Booted right off of those USB drives and needed no reconfiguration whatsoever to go right back to operating as the 'same' server it had been

I'm considering a new power supply for it since the Corsair TX650M I have is probably getting up there in age, and I want to go with something fully modular. With the Norco case I actually use none of the SATA connectors from the power supply - it just uses a boatload of 4-pin molex. The board also wants a second EPS connector.
|
# ? Apr 23, 2017 09:28 |
|
I've looked through the last couple of pages but all the chat seems to be about high-end options. So: what's the current best recommendation for a 4-bay home NAS? N54L or a Synology? Something else? This will just sit next to my cable box and be used for general backups / movie storage, I've got a separate HTPC for the front end. And what OS should I be using? Thanks!
|
# ? Apr 23, 2017 10:41 |
|
evol262 posted:I like ZFS. I use ZFS. But "I lost a parity disk, didn't lose a 2nd and fail the array/have to rebuild and suffer a failure once replaced" isn't miraculous. It's not even surprising from LSI. It's how raid5/raidz is supposed to work. What is impressive is that the bad files were identified instead of silently corrupted. At work we have a 5 Petabyte HDFS cluster that is dying and there's a lot of bad blocks it had that we didn't discover until we went to go read files. If we had ZFS, we'd have saved ourselves a whole heck of a lot of effort trying to figure out which files were corrupted. Sometimes I wonder how my home file server can be far more cost-effective and manageable than something people use to actually make serious money off of. I don't think they even have an excuse of "we did this n years ago" because ZFS was totally viable back then. I built my home ZFS server back in 2010, in fact. The maximum throughput from that massive HDFS cluster is terrible and won't get better because we're bad at upgrading anything here, and my home ZFS box on machines from the same era pushes out more throughput somehow. Granted, ZFS is terrible in a full-blown scale-out situation, but I think it'd be easy enough for us to have worked something out using sharding and partitioning. I_Socom posted:I've looked through the last couple of pages but all the chat seems to be about high-end options. So: what's the current best recommendation for a 4-bay home NAS? N54L or a Synology? Something else? necrobobsledder fucked around with this message at 14:41 on Apr 23, 2017 |
# ? Apr 23, 2017 14:38 |
|
necrobobsledder posted:At work we have a 5 Petabyte HDFS cluster that is dying and there's a lot of bad blocks it had that we didn't discover until we went to go read files. If we had ZFS, we'd have saved ourselves a whole heck of a lot of effort trying to figure out which files were corrupted. I want to know more, if you can, because most hardware RAID conducts block level checks and will alert you. What happened?
|
# ? Apr 23, 2017 16:16 |
|
CommieGIR posted:I want to know more, if you can, because most hardware RAID conducts block level checks and will alert you. What happened? "I lost a disk and spent two years with no redundancy" vv
|
# ? Apr 24, 2017 15:03 |
|
I was gonna post in working in IT but- I'm looking for a good tough flash drive that's easily attachable to my keys. Preferably with either quick detach or retractable reel. If it doesn't exist all in one, one of you go get on Kickstarter and make that poo poo. Obviously don't want any USB 2.0 poo poo
|
# ? Apr 24, 2017 15:25 |
|
How strong is the suggestion that you shouldn't use ext4 on volumes larger than 16TB? I can't tell if it's just a minor performance thing, or if I should not even consider ext4 on large volumes and look at xfs instead.
|
# ? Apr 24, 2017 15:26 |
|
Twerk from Home posted:How strong is the suggestion that you shouldn't use ext4 on volumes larger than 16TB? I can't tell if it's just a minor performance thing, or if I should not even consider ext4 on large volumes and look at xfs instead. https://www.unix-ninja.com/p/Formatting_Ext4_volumes_beyond_the_16TB_limit It can be done with the latest tools. It looks like it's a 32bit limitation of the tools that come with most distros and using a 64bit version of the tools seems to take care of the issue. Personally I just use XFS rather than loving with it.
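A quick way to confirm your e2fsprogs can actually do it, without touching a real disk — the scratch image below stands in for the block device:

```shell
# Make a small scratch image and format it with the 64bit feature enabled.
# On a real >16TiB volume you'd point mkfs.ext4 at the device instead.
truncate -s 64M /tmp/ext4-64bit.img
mkfs.ext4 -q -F -O 64bit /tmp/ext4-64bit.img
# dumpe2fs lists "64bit" among the filesystem features when it took:
dumpe2fs -h /tmp/ext4-64bit.img 2>/dev/null | grep -ow '64bit' | head -n1
```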
|
# ? Apr 24, 2017 15:40 |
|
DrDork posted:"I lost a disk and spent two years with no redundancy" vv Mine is set up to send an email alert if a disk fails, so I'm not used to not having redundancy.
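On ZFS, the piece that does this is the event daemon; the knobs live in /etc/zfs/zed.rc. An excerpt-style sketch (the address is an example):

```shell
# /etc/zfs/zed.rc is sourced by zed as shell. With these set, faulted
# vdevs and finished resilvers/scrubs generate mail instead of sitting
# unnoticed for two years:
ZED_EMAIL_ADDR="admin@example.com"
ZED_EMAIL_PROG="mail"
ZED_NOTIFY_INTERVAL_SECS=3600
# then make sure the daemon is running, e.g. (distro-dependent):
#   systemctl enable --now zfs-zed
```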
|
# ? Apr 24, 2017 16:32 |
|
|
One of my 3TB drives died and my friend who helped me set this all up said I can put a 4TB in and just use 3TB of it until I potentially have all 4TB drives in there, "unlocking" the last TB in each one. Looks like the 4TBs are currently what normally go on sale so I guess it makes sense. Thinking about getting this Seagate: https://www.amazon.com/gp/product/B01LNJBA50/
|
# ? Apr 24, 2017 20:01 |