|
HGST and Toshiba have the lowest failure rates, at the slight cost of running a couple of degrees warmer than other drives in their class
|
# ? Nov 11, 2019 20:32 |
|
MREBoy posted:So what's the thread opinion of HGST brand drives ? I need as many TBs as I can get for $200-250ish, up to 4 physical drives. I found brand new 3TB HUS724030ALA640 (0F14689) @ $50 each on newegg, would get 4 of them. HGST drives are generally really good, as has already been mentioned.
|
# ? Nov 11, 2019 21:01 |
|
MREBoy posted:So what's the thread opinion of HGST brand drives ? I need as many TBs as I can get for $200-250ish, up to 4 physical drives. I found brand new 3TB HUS724030ALA640 (0F14689) @ $50 each on newegg, would get 4 of them. You'll get a better price per TB shucking external drives. Keep an eye out for sales, the 12TB WD Easystore was $180 this weekend, and the 8TBs regularly go on sale for $120-130 so you could get 2 of those instead of 4 3TB and have 4TB more storage.
|
# ? Nov 11, 2019 21:16 |
|
IOwnCalculus posted:If you're looking at https://www.newegg.com/p/N82E16822145894 - those aren't new drives. goHardDrive wipes the SMART data and provides their own warranty. To be fair, they're decent, I've bought some 8TB drives from them and the exchange on a few DOA drives was painless. Yeah those are the exact drives I was talking about. As to why 4 of these drives, cost is the hard limiter for me in this situation + the device they are going in has 4 bays. The cost of a pair of 8 or 10TBs exceeds budget, as does the cost of a single 12. e2a: I hadn't really thought about shucking externals, I'll keep that in mind, I'm not in any hurry and Black Friday isn't that far off.
|
# ? Nov 11, 2019 21:18 |
|
I was fiddling around trying to get https://github.com/haugene/docker-transmission-openvpn working in Docker (it can't find peers), and managed to do this. I want to revert everything back to the way it was before; do I just delete the cluster ("delete the host from the cluster")?
|
# ? Nov 12, 2019 03:35 |
|
MREBoy posted:So what's the thread opinion of HGST brand drives ? I need as many TBs as I can get for $200-250ish, up to 4 physical drives. I found brand new 3TB HUS724030ALA640 (0F14689) @ $50 each on newegg, would get 4 of them. You can get 2x 8TB WD external drives on Amazon right now for ~$105 each. We posted about them over the last page or two; Amazon's running 15% back if you use a Prime card.
|
# ? Nov 13, 2019 00:11 |
|
BB has the 8TB Easystore for $120 on eBay, their site, and Google Shopping: https://slickdeals.net/f/13553686-8tb-wd-easystore-external-usb-3-0-hard-drive-120-or-less-free-shipping?src=frontpage https://www.ebay.com/itm/WD-Easystore-8TB-External-USB-3-0-Hard-Drive-Black/192784859465 https://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?ref=8575135 https://www.google.com/shopping/pro...HYegCCsQ1sEDCGM Makes me wonder if they'll go even lower over Black Friday weekend.
|
# ? Nov 13, 2019 09:32 |
|
nerox posted:I put Ombi on my server tonight and the app on my wife's phone and now she can search poo poo herself without asking me to do it. It is pretty fantastic. Thread Update: This was a terrible idea, my sabnzbd queue is at like 4tb right now.
|
# ? Nov 13, 2019 14:38 |
|
nerox posted:Thread Update: This was a terrible idea, my sabnzbd queue is at like 4tb right now. Reasons to have your NZB downloader target an SSD.
|
# ? Nov 13, 2019 15:22 |
|
I use unraid, so everything happens on a 500gig SSD that gets pushed into the array every morning at 3am or if it gets full.
|
# ? Nov 13, 2019 15:31 |
|
RAM disk for downloading/repairing/unpacking is ungodly fast. Just need... lots of RAM!
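For anyone wanting to try this without committing hardware: a tmpfs mount is RAM-backed storage that looks like a normal directory, and on most Linux boxes /dev/shm is already one. A minimal Python sketch of the idea (the filename is made up; a real setup would point the downloader's incomplete-files folder at a dedicated tmpfs mount instead):

```python
import os
import tempfile

# /dev/shm is a tmpfs (RAM-backed) mount on most Linux systems;
# fall back to the normal temp dir elsewhere.
scratch = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

path = os.path.join(scratch, "nzb-scratch.bin")  # hypothetical scratch file
with open(path, "wb") as f:
    f.write(os.urandom(1024 * 1024))             # 1 MiB of junk data

print(os.path.getsize(path))                     # 1048576
os.remove(path)                                  # RAM is freed immediately
```

Everything written there lives only in page cache, which is why unpack/repair on it is so fast, and also why it vanishes on reboot.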
|
# ? Nov 13, 2019 17:02 |
|
nerox posted:Thread Update: This was a terrible idea, my sabnzbd queue is at like 4tb right now.
|
# ? Nov 13, 2019 17:33 |
|
nerox posted:I use unraid, so everything happens on a 500gig SSD that gets pushed into the array every morning at 3am or if it gets full. this is the way to go.
|
# ? Nov 13, 2019 17:54 |
|
pzy posted:RAM disk for downloading/repairing/unpacking is ungodly fast. Just need... lots of RAM! Love ZFS for this. I just poked around to verify my ARC is being used properly, and yep: I use a scratch NVMe disk, but the data basically stays in RAM the entire time until I push it to my multimedia pool.
|
# ? Nov 13, 2019 19:30 |
|
nerox posted:Thread Update: This was a terrible idea, my sabnzbd queue is at like 4tb right now. This is why I manually approve my Ombi requests.
|
# ? Nov 13, 2019 19:36 |
|
Gay Retard posted:This is why I manually approve my Ombi requests. What's the benefit of that though? I'm not going to tell my wife that I am not going to allow the server to download something. I have settings so that sabnzbd doesn't use most of our bandwidth during primetime; it's only wide open from like 2am to 6am and 9am to 4pm on weekdays, and new stuff gets added at higher priority than old stuff, so new stuff gets downloaded first. We don't have bandwidth caps. It really has no effect on us whatsoever. Manual approval is just extra work for me, which is what I wanted to avoid. If other people had access to my server (my upload is poo poo), I would not let them have auto-approval.
|
# ? Nov 15, 2019 16:51 |
|
Then what's the big deal?
|
# ? Nov 15, 2019 17:22 |
|
I was tempted to try out Ombi, but my only viable option would be paying for something like Newshosting, and I already pay for Netflix et al. If the streaming services really start to fracture more and more and keep jacking up the costs, it looks like a good alternative though!
|
# ? Nov 15, 2019 19:03 |
|
Added more drives to my server and hit a personal milestone: first time I've ever wrapped around past /dev/sdz.
|
# ? Nov 16, 2019 00:40 |
|
I am running FreeNAS 11.2 right now. I am using it solely for PLEX at the moment. I have a RAID-Z2 set (6x 4TB) with a single pool and single dataset that I have mostly filled up. I bought some external disk shelves and more disks, and I'm trying to figure out the best way to move forward.

Internal:
4TB drives (x6) - RAID-Z2 vdev
400GB SSD (x1) - cache

External:
8TB drives (x6) - not part of a vdev
4TB drives (x18) - not part of a vdev

What's my best path forward? Turn the 8TB drives into a RAID-Z2 and add that vdev to the existing pool? Make a bunch of mirror vdevs with the new drives, make a new pool, migrate the data over (what's the best way?), and then add the existing drives into that pool? Any suggestions or tips appreciated!
|
# ? Nov 16, 2019 07:04 |
madsushi posted:I am running FreeNAS 11.2 right now. Put the 24 drives in 3x 8-disk RAIDz2 vdevs, and each time you buy a new hard drive, replace a 4TB one. Once all the 4TB disks in a vdev have been replaced, your pool will grow automatically. EDIT: And if at any point you'd like to stripe data across all disks, simply rename the dataset, create a new dataset with the old name, and mv the data to the new dataset. BlankSystemDaemon fucked around with this message at 11:57 on Nov 16, 2019 |
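A back-of-the-envelope sketch (Python, sizes in TB, ignoring TiB conversion and ZFS metadata overhead) of why the replace-as-you-go approach only grows the pool at the end: a RAIDz2 vdev's usable space is (disks - 2) times the size of its *smallest* member, so with autoexpand on, nothing changes until the last small disk is swapped out. The 12TB replacement size here is just an example.

```python
def raidz2_capacity_tb(disks_tb):
    """Usable space of one RAIDz2 vdev: two disks' worth of parity,
    and every member is limited to the smallest disk in the vdev."""
    return (len(disks_tb) - 2) * min(disks_tb)

vdev = [4] * 8                       # 8x 4TB RAIDz2 -> 24TB usable
print(raidz2_capacity_tb(vdev))      # 24

vdev[:7] = [12] * 7                  # seven of eight disks replaced
print(raidz2_capacity_tb(vdev))      # still 24: smallest member is 4TB

vdev[7] = 12                         # last 4TB disk replaced
print(raidz2_capacity_tb(vdev))      # 72: the vdev finally grows
```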
|
# ? Nov 16, 2019 11:54 |
|
IOwnCalculus posted:Added more drives to my server and hit a personal milestone: edit for content: bought rails for my md3000 and I'm part way through racking my 2u, 8x 3.5" drive box that has the host. Did I mention how loving much I hate racking poo poo? I think my biggest flaw in life is a seeming inability to get things square. Crunchy Black fucked around with this message at 11:59 on Nov 16, 2019 |
# ? Nov 16, 2019 11:56 |
|
IOwnCalculus posted:First time I've ever wrapped around past /dev/sdz. I've only made it up to w. Sometime next year I'll get an external enclosure and shoot past z...and continue to wonder if it's all worth it.
|
# ? Nov 16, 2019 19:22 |
|
D. Ebdrup posted:Put the 24 drives in 3x 8-disk RAIDz2 vdevs, and each time you buy a new harddrive you replace a 4TB one. Thanks for the post! Are you saying I should make a new pool out of these 3 new 8-disk RAIDz2 vdevs, or should I be able to add these vdevs to my existing pool? I had read mixed information about mixing 6-disk and 8-disk vdevs in the same pool.
|
# ? Nov 16, 2019 19:53 |
madsushi posted:Thanks for the post! Are you saying I should make a new pool out of these 3 new 8-disk RAIDz2 vdevs, or should I be able to add these vdevs to my existing pool? I had read mixed information about mixing 6-disk and 8-disk vdevs in the same pool. The way I've seen Jeff Bonwick, one of the fathers of ZFS, explain it is something like this: if you think about memory at all, you know that when you add more, it just sort of appears as if by magic (although it's actually the MMU that's responsible). ZFS is designed to act like that, so that when you add storage (in the form of one drive, or more drives if you want some form of redundancy or availability), you can "just" add it. So what you can end up with is a pool consisting of any number of vdevs (there is a limit, but it's probably never going to be hit by anyone, not even the companies with the biggest storage arrays) with any mix of drives and in any configuration (striping, though it's not recommended, mirroring, or distributed parity). About the only thing you can't do with ZFS yet is expand a RAIDz vdev by adding one more drive at a time, but that feature is being developed by Matt Ahrens, the other father of ZFS. EDIT: In theory, if you have a pool with one or more 8-disk vdevs and you add a vdev with only 6 disks, it'll slightly decrease your bandwidth - but unless you're doing 10G+ networking for your NAS, that won't impact you in any meaningful way.
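On the mixed-width question specifically, vdevs don't have to match: a pool's usable space is just the sum over its vdevs, each computed independently. A rough sketch in Python (sizes in TB, same caveats about TiB and overhead; the drive counts mirror the setup discussed above):

```python
def raidz2_vdev_tb(disks_tb):
    # Each RAIDz2 vdev loses two disks to parity, and its smallest
    # member sets the per-disk usable size.
    return (len(disks_tb) - 2) * min(disks_tb)

def pool_tb(vdevs):
    # The pool stripes across its vdevs; widths may differ freely.
    return sum(raidz2_vdev_tb(v) for v in vdevs)

# Existing 6x 4TB vdev plus three new 8-disk vdevs
# (one of 8TB drives, two of 4TB drives):
print(pool_tb([[4] * 6, [8] * 8, [4] * 8, [4] * 8]))  # 16+48+24+24 = 112
```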
|
|
# ? Nov 16, 2019 21:36 |
|
Welp, found out for sure Thursday morning at 4:10 exactly *which* of the drives in my NAS was failing. All previous deep scans had come up fine, but one seemed to be making some noises.
|
# ? Nov 17, 2019 00:02 |
|
Gentlemen, I am the proud owner of a new Unraid system! I am running dual Xeon E5-2690v3s with 128GB of RAM and currently doing my first parity sync. Right now I have only 4x 3TB WD Reds in there, along with a couple SSDs for cache. What I would like to do is migrate over some other 3TB Reds from a 5-bay Synology and put in some shucked 8TBs to replace those drives. The Synology with the 3TB Reds is actually a secondary/backup to my "main" NAS which will not be touched. So my question is whether this order of operations is safe:

1) Pull the 5x 3TB Reds from the secondary Synology NAS, marking their order, and set them aside.
2) Insert the 5x 8TB drives and re-set up the Synology NAS.
3) Re-backup data from the primary Synology NAS to the secondary with the new 8TB drives via HyperBackup.
4) Once HyperBackup success is confirmed, put the remaining 5x 3TB Reds into the Unraid system.
5) Maintain other backups, including off-site HDD and cloud, and the primary NAS of course.
|
# ? Nov 17, 2019 22:14 |
Maybe I'm dumb, but what on earth is a parity sync? Distributed parity, whether done via XOR or Reed-Solomon encoding, calculates the parity as soon as the data is written to the array, so how can a sync be needed?
|
|
# ? Nov 17, 2019 22:27 |
|
D. Ebdrup posted:Maybe I'm dumb, but what on earth is a parity sync? Distributed parity, whether done via XOR or Reed-Solomon encoding, calculates the parity as soon as the data is written to the array, so how can a sync be needed? Not sure, that's just what it said when I assigned two drives to the pool and two as parity: Parity-Sync/Data-Rebuild in progress.
|
# ? Nov 17, 2019 22:29 |
|
D. Ebdrup posted:Maybe I'm dumb, but what on earth is a parity sync? Distributed parity, whether done via XOR or Reed-Solomon encoding, calculates the parity as soon as the data is written to the array, so how can a sync be needed? UnRAID has a weird RAID 4 thing. I don't know anything else about it besides that it Just Works.
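For what it's worth, the dedicated-parity-disk idea (what RAID 4 does, and roughly what Unraid's parity drive is) is easy to sketch: the parity disk holds the XOR of the data disks, a "parity sync" is computing that XOR across data that already exists on the array, and any one missing disk is the XOR of everything that's left. Toy example in Python (Unraid's real on-disk details will differ):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # three data "disks"
parity = xor_blocks(data)                       # the dedicated parity disk

# Disk 1 dies; rebuild its contents from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Since the drives in an Unraid array start out with no parity drive covering them, that initial XOR pass over every sector is the "Parity-Sync" the UI reports.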
|
# ? Nov 17, 2019 22:34 |
|
Smashing Link posted:What I would like to do is migrate over some other 3TB Reds from a 5-bay Synology and put in some shucked 8TBs to replace those drives. The Synology with the 3TB Reds is actually a secondary/backup to my "main" NAS which will not be touched. My only other thought would be to insert two of the three 8TB drives into the empty bays of the 5-bay Synology, somehow shift the 12TB of data across to the newly available 16TB, then remove the 3TB drives?
|
# ? Nov 17, 2019 22:39 |
Matt Zerella posted:UnRAID has a weird Raid4 thing. I don't know anything else about this besides it Just Works Stupid chemobrain.
|
|
# ? Nov 17, 2019 23:17 |
|
It looks like Unraid uses an LVM version of RAID 4, as it is able to use disks of different sizes. Source: https://www.makeuseof.com/tag/unraid-ultimate-home-nas/ Looks like it was designed for performance and for keeping only two disks in the array spinning instead of all of them, as with RAID 5 and RAID 6. As for a Synology RAID resync, it is basically rebuilding the parity table across all drives and data scrubbing, or using the parity table to rebuild the data on one drive. The thing you might want to try, if you are seriously concerned about data loss, is rebuilding the RAID one disk at a time. It takes hours per disk, but it will keep the existing RAID working. So the process would go like this: remove one 3TB drive and replace it with an 8TB. Repair the storage pool. Once it has rebuilt the disk using the parity table, go on to the next disk. Repeat until done. https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/StorageManager/storage_pool_expand_replace_disk The process you described will just require you to restore a backup configuration and set everything up again.
|
# ? Nov 18, 2019 02:09 |
|
Axe-man posted:It looks like UnRaid uses a LVM version of RAID 4, as it is able to use disks of different sizes. Source: https://www.makeuseof.com/tag/unraid-ultimate-home-nas/ That was the second option I was considering, but I wasn't sure if re-striping the RAID five times, once per disk, would end up taking more time.
|
# ? Nov 18, 2019 02:48 |
Axe-man posted:It looks like UnRaid uses a LVM version of RAID 4, as it is able to use disks of different sizes. Source: https://www.makeuseof.com/tag/unraid-ultimate-home-nas/
|
|
# ? Nov 18, 2019 10:29 |
|
Synology uses either a highly modded Btrfs or ext4 for the file system, but more importantly, it adds a layer of abstraction between the file system and the RAID: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/What_was_the_RAID_implementation_for_Btrfs_File_System_on_SynologyNAS I'm pretty sure that's the real reason why it takes so long to do what it's trying to do.
|
# ? Nov 18, 2019 13:20 |
I don't understand why people trust Btrfs when its developers pulled the stunt of claiming it was production ready without testing it, then discovering that they had hosed up the XOR for RAID5 after people had started using it in production. You don't get the other benefits of Btrfs without using it directly, as far as I'm aware.
|
|
# ? Nov 18, 2019 14:18 |
|
Synology likes to reinvent the wheel, so they would only really want to use ZFS if they could modify it to do what they want or attach a module of theirs to it. This is a good and a bad thing.
|
# ? Nov 18, 2019 18:21 |
|
I think I posted about this before, but I've solidified my thoughts a bit. I have a 4-year-old NAS with a quad-core Xeon and 32 gigs running Proxmox, hosting ~100TB of ZFS RAID-Z1 over two arrays. From profiling the system, I'm heavily memory bound and would like to go from 32 to 128 gigs. I'm thinking of switching from Intel to AMD for the performance/$ ratio. Originally I was thinking of upgrading to a 3900, but I haven't found a motherboard that ticked enough boxes: IPMI, 10Gb Ethernet, 128GB max memory, etc. So now I'm thinking of keeping the Xeon and just adding a dual-port 10Gb NIC to it, then building a dedicated Docker/VM host. I think this is getting out of the realm of NAS questions and more into homelab territory, but I'm looking for advice on good hardware choices among the Newegg Black Friday deals. I don't know what a good case would be anymore that doesn't have room for 10+ drives! Or should I be thinking about keeping it all in one box for power/space considerations?
|
# ? Nov 19, 2019 00:17 |
|
I'm trying to make an online storage solution. I just want a central hard drive I can access from anywhere that will sync with the Windows file structure, similar to Dropbox or Google Drive, but not on those platforms. I don't want to be paying subscription fees to those people in perpetuity just to have a ~1TB hard drive to access.

The file sizes would be what you might see on the average joe's cell phone: photos, minute-or-two-long videos, meme folder type poo poo. Won't be using endless terabytes. Basically I just want to virtually carry an external HD, preferably accessible from Windows/Android/iOS. I don't need huge speeds either; I'm sure whatever connection I have will be fine, since it's good enough to remotely stream Plex to myself.

As far as hardware, I'd probably be running this on the same thing that runs my Plex server, which is an AMD 2600X / 16GB / Windows 10. This computer is hooked up to a TV and sometimes does gaming etc, so I'd prefer if my solution didn't require switching the OS to Linux, although if it runs in a lightweight enough environment I can probably find a way to make it work. If it could be set up to run in a VM alongside everything else I'm already doing, that's fine too; I have enough experience stumbling through that stuff to probably make that work.

I'm not really sure what else to provide for info or what angles to look at to ask for help better. I'm looking at Seafile and I've done some googling, but I was hoping to get some input based on my actual use case and hardware. I'm sure somebody here has seen this and has an idea, or my situation may even be cookie cutter, idk. Thank you
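Something like Seafile or Nextcloud self-hosted in Docker fits this use case: both have Windows sync clients plus Android/iOS apps, and Docker Desktop runs on Windows 10 without switching OS. A minimal sketch using the official Nextcloud image as one option (the port mapping and volume path are the image's documented defaults; the volume name is arbitrary, and Seafile's Docker setup is similar if you prefer it):

```yaml
# docker-compose.yml -- minimal single-user sketch, no TLS or
# reverse proxy; put one in front before exposing this to the internet.
services:
  nextcloud:
    image: nextcloud
    ports:
      - "8080:80"              # web UI and sync endpoint on host port 8080
    volumes:
      - nextcloud_data:/var/www/html
    restart: unless-stopped

volumes:
  nextcloud_data:
```

Start it with `docker compose up -d` and point the desktop/mobile clients at the host on port 8080. With no database configured, the image falls back to SQLite, which is workable for a single user; it also supports MariaDB/Postgres for heavier use.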
|
# ? Nov 19, 2019 13:48 |