|
Mr. Crow posted:I'm new to the NAS world and working on building my first system (technically still deciding what I want to do). Plan on setting up a server with ESXi and running multiple VMs, including a NAS. Been looking a lot at ZFS as this thread and most NAS blogs seem to have a hard-on for it; but it kind of seems like overkill for a home media server. I don't like the inflexibility and general requirements it has, at least for a home use scenario. So I thought I'd try out the mergerfs and snapraid setup in a VM on my main server. Obviously this isn't going to perform as well as it would on bare metal, but it does give a pretty good idea as to how it would function. I built a Debian 8.7.1 setup on a 20GB drive because I didn't really plan to put much on the main system. Afterward I created six 100GB SCSI drives (KVM, Virtio) and added them to the VM. I partitioned them and formatted them XFS. I mounted them as follows: code:
Next step was to install mergerfs and snapraid. Neither of these are in the Debian repos, so I had to grab the mergerfs deb package for Jessie 64-bit for my VM and download the snapraid 11.0 source code. I installed make and gcc via apt-get so I could compile and install snapraid, and used dpkg -i to install mergerfs. As for the mergerfs mount line, I placed it in /etc/rc.local because for whatever reason (probably my misunderstanding of something) I could not get it to work in /etc/fstab, so I used this command. code:
code:
Next was to install Samba, so apt-get install samba. I'm not going to go really in depth here, as the provided configuration file contains plenty of examples for setting up a share. You're going to want to point your share at '/mnt/storage/folder', and make sure the user connecting has the right permissions on that folder; setting them from the /mnt/storage directory will apply them across all of the disks/folders. Next is to load some data. I found that copying data from a Windows 10 Pro machine to this server could peak at 100MB/s or higher. One thing I noticed that felt extremely strange: when I overwrote a file with the same data, the rate dropped to 3-5MB/s. Overall, the speed was quite acceptable for a virtual machine. I copied 100GB of data to the virtual server via rsync/scp and it hovered around 60MB/s between the host server and the VM. The source of the data and the target VM disk were on the same RAID6, so I think the overall speed there was acceptable. Next step was to run snapraid sync to write out the parity data. Probably due to the source/target disks all being on one array, things ran a little slow. Also, this VM only has 2 cores and 2GB of RAM allocated to it, so it isn't a powerhouse. I think it took about 30 minutes to sync up 100GB of data. Here's what it looks like now: code:
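The code blocks didn't survive quoting, but a mergerfs + snapraid layout like the one described usually boils down to something like this. Everything here is illustrative: device names, mount points and option choices are assumptions, not the poster's actual config.

```
# /etc/fstab -- six XFS data disks (devices/mount points hypothetical)
# /dev/vdb1  /mnt/disk1  xfs  defaults  0 2
# ...repeated through /dev/vdg1 -> /mnt/disk6

# mergerfs mount as run from /etc/rc.local; fstab ordering can race the
# underlying disk mounts, which may explain why the fstab entry failed
mergerfs -o defaults,allow_other,use_ino,category.create=epmfs \
    /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5 /mnt/storage

# /etc/snapraid.conf -- one disk holds parity, the rest hold data
# parity  /mnt/disk6/snapraid.parity
# content /mnt/disk1/snapraid.content
# data d1 /mnt/disk1
# data d2 /mnt/disk2
# ...and so on, then `snapraid sync` builds the parity
```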
|
# ? Feb 18, 2017 23:32 |
|
Looks like Ryzen will support ECC RAM, according to leaks and rumors.
|
# ? Feb 20, 2017 17:59 |
|
Grog posted:Uhhhhhhhhh Try hitting the "Turbo" button on your modem.
|
# ? Feb 20, 2017 21:10 |
PerrineClostermann posted:Looks like Ryzen will support ECC RAM, according to leaks and rumors. Tangentially related to packratting, if you guys use nextcloud or similar, you should switch to nginx, php70-fastcgi and postgresql. I switched from apache24, mod_php56 and mariadb, and gained almost 20% in response times, and resource allocation needed for serving requests is down by almost 40% on the exact same hardware I was running on before (on FreeBSD11-STABLE). EDIT:↓ Apparently I'm not the only one who remembers it, so I didn't just dream it despite not being able to locate info on it when I searched for it. BlankSystemDaemon fucked around with this message at 16:14 on Feb 21, 2017 |
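For anyone tempted by the same switch, the core of the nginx side is a single fastcgi location block handing PHP off to php-fpm over a unix socket. This fragment is illustrative only (socket path and layout are assumed, FreeBSD-flavoured, not the poster's actual config):

```
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # php-fpm's listen socket
    fastcgi_index index.php;
}
```

The resource savings likely come from php-fpm's small worker pool replacing mod_php's one-interpreter-embedded-in-every-Apache-process model.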
|
# ? Feb 21, 2017 09:35 |
|
PerrineClostermann posted:Looks like Ryzen will support ECC RAM, according to leaks and rumors. It would make sense; it would actually be a break from tradition if they did not. The majority of AMD's desktop CPUs have supported it, including the current FX line. The operative question is usually whether the motherboard manufacturers will opt to include support for it, though--many do, but some do not.
|
# ? Feb 21, 2017 15:04 |
|
I've got enough windows 10 clients now that I'm going to migrate my old WHSv1 to something newer, likely Windows Storage Server Essentials (Thecus w2810pro), because it's closest to WHS which has been great for me. I store pictures and media, use the remote access a bit, have image based machine backups with dedupe, and run crashplan on it. What brand hard drives should I get? I only have 2 600gig drives in my current server, which is fine, space wise, so I'd rather have reliability over extreme sizes. Assuming I'm ok with the slight extra cost to get the windows machine, are there other reasons to try to migrate to something non-windows?
|
# ? Feb 21, 2017 16:31 |
|
havelock posted:I've got enough windows 10 clients now that I'm going to migrate my old WHSv1 to something newer, likely Windows Storage Server Essentials (Thecus w2810pro), because it's closest to WHS which has been great for me. I store pictures and media, use the remote access a bit, have image based machine backups with dedupe, and run crashplan on it. I would have done this but loving DOMAIN requirement. I assume you are going to take that plunge.
|
# ? Feb 21, 2017 17:11 |
|
redeyes posted:I would have done this but loving DOMAIN requirement. I assume you are going to take that plunge. From what I understand it's possible to use a command line thing on the clients to install the client without requiring them to join the domain. If the server just sits there as a DC with no one joined that's fine with me. Right now my win10 machines are both Home, so they can't join a domain anyway, but when I upgrade my desktop shortly it'll be Pro.
|
# ? Feb 21, 2017 17:40 |
|
I'd like to buy two 6 TB drives to expand storage on a Windows Server 2012 R2 system that I'm using as a file/dev/whatever-I-feel-like server. I'll be putting these drives in a RAID 1 array, and I'm just using the Intel RAID controller on my consumer motherboard - I think it's a Gigabyte. I keep this server on 24/7, so I'd like to buy drives that are best suited to that environment. Would WD Blue drives be okay for this setup? The OP recommends WD Red drives which are designed for NAS, but my setup isn't exactly a specialized NAS box. Also, the 6 TB blues are a bit cheaper than the reds for whatever reason, and they're available now on Amazon.
|
# ? Feb 22, 2017 16:45 |
|
Blue (or Green) drives aren't exactly going to eat your lunch if you use them, but the reasons Reds are suggested for NAS applications are more than simply "well you're using it in a NAS." Reds support TLER, which is useful if you're connecting drives up to a RAID controller, as it will reduce the chance of a drive being erroneously marked as failed (which can be quite a hassle). Reds also have a better warranty, and are explicitly intended for 24/7 operation, whereas Blues are more intended for occasional use with warranties to match. Blues are the most budget of budget WD drives, and should be treated as such. Reds, incidentally, are not much more expensive, and in fact, once you take tax into consideration, are basically the same price since you can get them for $233 from B&H Video right now. I'd recommend going with the higher quality drives literally intended for the application you're going to use them for.
|
# ? Feb 22, 2017 17:08 |
|
You can also get a pair of WD Reds in the MyBook Duos that they sell - so 2x6TB for $400, 2x8TB for $500. The warranty on those drives is 2 years instead of the 3 you get if you buy the drives individually, though.
|
# ? Feb 22, 2017 21:11 |
|
DrDork posted:I'd recommend going with the higher quality drives literally intended for the application you're going to use them for. Thanks for the advice!
|
# ? Feb 23, 2017 16:37 |
|
havelock posted:From what I understand it's possible to use a command line thing on the clients to install the client without requiring them to join the domain. If the server just sits there as a DC with no one joined that's fine with me. Right now my win10 machines are both Home, so they can't join a domain anyway, but when I upgrade my desktop shortly it'll be Pro. Maybe post when you give this a try. I'd like to know specifically if it works easily for backups minus the domain. I wish MS had some kind of solution for backups on home and small networks. Domains are just such overkill for these situations in general. WHS filled that gap back in the day. redeyes fucked around with this message at 16:46 on Feb 23, 2017 |
# ? Feb 23, 2017 16:44 |
|
I'm moving to a small form factor build for my next computer, and as a result I won't have any space in the case for hard drives. I'm thinking of re-purposing the old computer as a NAS, using FreeNAS to hold the hard drives. I don't need RAID or automated backup functionality, as I pay for a cloud backup already. The only functionality I need is to be able to access the hard drives under "My Computer" as a networked hard drive. Will this work or do I need to run Windows?
|
# ? Feb 24, 2017 17:26 |
|
Lolcano Eruption posted:I'm moving to a small form factor build for my next computer, and as a result I won't have any space in the case for hard drives. I'm thinking of re-purposing the old computer as a NAS, using FreeNAS to hold the hard drives. I don't need RAID or automated backup functionality, as I pay for a cloud backup already. The only functionality I need is to be able to access the hard drives under "My Computer" as a networked hard drive. Will this work or do I need to run Windows? Any operating system that can run Samba server can do this (basically anything).
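A share like that is only a few lines of smb.conf; this fragment is illustrative (share name, path and user are made up):

```
[storage]
    path = /mnt/storage
    browseable = yes
    read only = no
    valid users = yourname
```

On the Windows side, map \\servername\storage as a network drive and it shows up under My Computer like any other disk.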
|
# ? Feb 24, 2017 17:30 |
|
So why aren't bitflips during ZFS scrubs an issue? And do you want ECC on a ZFS NAS for anything other than the usual reasons why you want ECC in general?
|
# ? Feb 24, 2017 20:53 |
PerrineClostermann posted:So why aren't bitflips during ZFS scrubs an issue? And do you want ECC on a ZFS NAS for anything other than the usual reasons why you want ECC in general? Aside from ALL OS kernels being stored in memory also benefiting from ECC, all OS' that cache files in memory will benefit from ECC (and that's pretty much every modern OS, to some extent), it's just that ZFS employs a much larger amount of memory caching than any other filesystem. ZFS' ARC (Adaptive Replacement Cache, which isn't a read-ahead cache but a cache of the most-used blocks) will use as much memory as you give it since there's no default maximum defined, so unless you define a maximum, it ends up using all the memory if there are no other uses for it - but that having been said, it's also written in such a way that it'll back off its ARC size if there are more pressing uses for memory, rather than force other applications to swap to disk. Scrubs happen on-disk, not in-memory.
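If you'd rather cap the ARC than let it take all free memory, it's a one-line tunable on either platform (the 8GB figure here is an arbitrary example, not a recommendation):

```
# FreeBSD: /boot/loader.conf
vfs.zfs.arc_max="8G"

# Linux (ZFS on Linux): /etc/modprobe.d/zfs.conf, value in bytes
options zfs zfs_arc_max=8589934592
```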
|
|
# ? Feb 24, 2017 21:01 |
|
I have a FreeNAS question. I'm running 9.10 on a Microserver, which originally had 2x 2TB disks configured as mirrors. I managed to pick up some 6TB WD Reds from work free of charge the other day, so I popped these in and added them to my existing volume as a mirrored pair - so my volume is now made up of two mirrors. What's actually happening with my data now, is it being moved around across these two mirrors, or does the 6TB mirror start getting used as the 2TB one fills up? Is there any way to migrate data onto the 6TB mirror to enable the 2TB disks to be swapped out?
|
# ? Feb 25, 2017 16:38 |
Thanks Ants posted:I have a FreeNAS question. I'm running 9.10 on a Microserver, which originally had 2x 2TB disks configured as mirrors. I managed to pick up some 6TB WD Reds from work free of charge the other day, so I popped these in and added them to my existing volume as a mirrored pair - so my volume is now made up of two mirrors. What's actually happening with my data now, is it being moved around across these two mirrors, or does the 6TB mirror start getting used as the 2TB one fills up? Is there any way to migrate data onto the 6TB mirror to enable the 2TB disks to be swapped out? If you wanted to migrate the smaller disks to the bigger ones, you should have done the following: zpool replace <pool> <olddisk> <newdisk>, then wait for that process to finish, shut the server down and remove the disk, then do the same with the other <olddisk> and <newdisk>.
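In command form, that swap-in-place procedure looks roughly like this (pool and device names are invented for the sketch):

```
zpool replace tank ada1 ada3   # resilver the old disk's half of the mirror onto the new disk
zpool status tank              # wait for the resilver to finish
# shut down, pull ada1, boot, then do the other half of the mirror:
zpool replace tank ada2 ada4
# with both small disks gone, let the vdev grow into the new capacity:
zpool set autoexpand=on tank
zpool online -e tank ada3 ada4
```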
|
|
# ? Feb 25, 2017 16:48 |
|
Thanks. I don't have any plans to need to expand this again before the hardware is in need of replacement so will accept that any move from this point involves copying the data to a new box.
|
# ? Feb 25, 2017 17:15 |
It doesn't, though. You can use 'zpool offline <pool> <disk>', then shut the machine down, replace it with a new disk, boot again and then use the zpool replace command described before. It just means that while it's resilvering, you won't have any redundancy - but since it's a mirror, it's a very fast process (compared to raidz, at least - compared to traditional raid, it's upwards of an order of magnitude faster)
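As a command sketch of the offline-and-swap route (pool and device names invented):

```
zpool offline tank ada1    # take one half of the mirror out of service
# power down, swap the 2TB disk for the 6TB one in the same bay, boot, then:
zpool replace tank ada1    # resilver onto the new disk that took its slot
# after doing the same for the second disk, let the vdev grow:
zpool online -e tank ada1 ada2
```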
BlankSystemDaemon fucked around with this message at 22:09 on Feb 25, 2017 |
|
# ? Feb 25, 2017 22:02 |
|
Oh I see what you mean - use that method to swap out the 2TB disks when the time comes. Thanks.
|
# ? Feb 25, 2017 22:39 |
|
Resurgence of the argument about bit flips in the AMD Ryzen thread has got me thinking about my home server setup. I'm running an Asrock J3710 motherboard (Braswell) with 8GB non-ECC RAM (a pair of Kingston HyperX SODIMMS), 2x WD Red drives with Ubuntu Server 16.04 installed. Both drives are encrypted with luks and I've got a few Bitcoin keys amongst all my personal stuff. The problem is, I keep a backup of my friend's keys too, and while I'm not that fussed about my 0.5 BTC he has more than me at stake. I have an hourly rsync job that syncs about 4GB of critical stuff to Amazon cloud but if the bits get flipped then a 'flipped' version of my stuff gets synced to Amazon, right? And then if I don't realise until 6 months later those files could be corrupt for good. It's probably a chance in a billion that, out of 4GB of data, a bit flip would occur exactly where I don't want it to. Isn't it? Should I build a different small server with ECC? I don't want to change from a headless Linux box because I like the configurability I can achieve with cron, rsync and stuff. If I built something new then the emphasis would be on low idle power, sitting under 20W preferably. If I went ahead with an upgrade I'd like to spend less than £200 on a motherboard/CPU and RAM. e: Any idea what sort of idle watts I would get if I used a Pentium G4400? apropos man fucked around with this message at 11:40 on Feb 26, 2017
# ? Feb 26, 2017 10:59 |
|
To be honest the chances are so low that I'm not sure its worth building a new system just for ECC support. In case you do want to upgrade and at the risk of sounding like a broken record, the Dell T20 is still a great deal if you're looking for a new home server with ECC. It costs ~$250 when it's on sale and those sales seem to pop up every few weeks. I got mine for 200€ after Dell's cashback. You could then sell your old parts on Ebay. It's going to be very hard to build something similar from parts because the CPU alone has a MSRP of $215. Idles as low as 15W without drives and comes with fairly decent hardware. Xeon e3-1225v3 (4x3.2 Ghz Haswell), 4GB ECC-RAM, decent power supply, AMT support for remote management (power, bios, remote desktop, booting from disk images — all over IP so you won't need a monitor/keyboard after initial setup). Mine has a GTX1060 for remote game streaming inside a VM and idles at only 28W. eames fucked around with this message at 12:32 on Feb 26, 2017 |
# ? Feb 26, 2017 12:29 |
If Amazon Cloud supports file revisioning of some sort, it doesn't really matter if a bit flips when you have backup - the problem is detecting those bit flips, as they're entirely silent both when they happen and when they're written to disk. It's also important which kind of file happens to get the bitflip, because a single flipped bit in the middle of a home movie you recorded with your own camera isn't even going to be noticed, whereas a bitflip in a bitcoin wallet can have potentially disastrous effect if it has no built-in checksumming (I don't know, but I would assume it does). The Dell T20 that eames recommended is an excellent choice for that price point, but a fair chunk of that 15W idle is actually from the BMC, PCH and memory nowadays as all modern CPUs idle at very low power. It'll be interesting to see what sort of price point the Denverton SoCs land on, once they start coming to market in mid-2017, because with up to 16 SATA ports, they seem like an excellent fit for an 8-bay NAS with an 8-bay SATA expansion unit.
|
|
# ? Feb 26, 2017 13:20 |
|
As a home user, most of my really important things are backed up via a local HD, external HD, and 'the magical cloud'. Otherwise, I'm not going to freak out if disk1 in the array flatlines and I have X TB of movies and tv gone. Business/Enterprise, different ballpark.
|
# ? Feb 26, 2017 13:56 |
|
I'm using borgbackup for my important stuff. I have it set up to refresh my main archive (~4GB) every hour and then sync that with the cloud version, which is on Amazon S3. https://borgbackup.readthedocs.io/en/stable/ It automatically deduplicates and encrypts everything into numbered chunks and, to be honest, I haven't a clue how such an archiver would be affected by bit flips. A bit flip might corrupt the entire archive or borg might checksum everything. I don't know. If borg detects a changed file due to a bit flip I would imagine that it thinks the file has been deliberately changed and inserts the modified version into the archive. That's what any rational piece of software would do if there was no way of knowing whether the change was deliberate. My main archive is composed of the usual stuff: digital receipts, scanned proof of ID, work documents, a poo poo load of holiday photos and only a tiny portion of that is a few Bitcoin wallets, maybe 1MB. I am interested in going ECC the more I think about it, actually. I don't think I need the power of one of those Dells although they seem like a bargain. I remember that someone recommended the AMD Kabini 5350 for small server ECC but I think it's getting to an age now where there is a better performance/power ratio available, like a recent Pentium. Hmm. Conclusion: I feel that ECC is worth it for my particular uses. I've been using the current system a few months without obvious problem so I'll take time to read up and shop around. When I transition to a new ECC build I will manually checksum stuff that cannot afford to be corrupted, like the Bitcoin wallets.
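For what it's worth, borg does checksum its chunks, and `borg check` verifies the repository's on-disk data, so corruption within the archive itself is detectable rather than silent. A minimal sketch of the workflow (repo path and archive name are invented):

```
borg init --encryption=repokey /backup/borg-repo
borg create --stats /backup/borg-repo::docs-{now} ~/Documents
borg check /backup/borg-repo     # verify repository integrity chunk by chunk
```

The catch is that a bit flip in the source file before the backup runs still gets archived faithfully as "changed data", which is exactly the window ECC guards against.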
|
# ? Feb 26, 2017 15:14 |
|
The Dell recommendation isn't solely about power, it's about the price-point. For the ~$250 it costs, you'd be hard pressed to put together a functional ECC-supporting box that wasn't made with cut-rate or otherwise compromising hardware. Figure a low-price motherboard is $100, that AMD 5350 is another $75, and you haven't even started talking about PSU, case, etc. They're just very nice little server kits all the way around, and simply happen to have the added benefit of a CPU that can actually do more than just serve files.
|
# ? Feb 27, 2017 04:33 |
|
Can I flash a H310 to support 6TB drives in IT mode? My boss told me to take what I wanted from a failed sub company and WELP there were 4 6TB Reds in a NAS that they don't want anymore. I'm using unRAID, which I like a lot and paid for, and currently it's loaded up with 4TB Greens attached to the motherboard. It's in a Node case but it's full and I'd like to use all 8 drives so I'm going to upgrade the enclosure. Looking at all the links online it gets confusing. E: looks like the M1015 is still the way to go? Matt Zerella fucked around with this message at 22:09 on Mar 4, 2017
# ? Mar 4, 2017 21:08 |
|
Looking for some input on upgrade paths. I currently have the following file server at home:

Windows Server 2008 R2 (thx Dreamspark)
Core i3-550
some consumer-grade motherboard
12GB non-ECC RAM
1 - 2TB drive (system, misc storage)
1 - 5TB drive (storage)
8x2TB RAID-6 hanging off an Areca 1230 hardware RAID controller (max capacity of 12 SATA ports on the Areca card, plus a few on the motherboard)

I currently have just over 14.6TB of data sitting on the system right now. The fault-tolerance of the RAID array is nice, but I have backups so honestly the RAID was more appealing to me in having most of my poo poo in One Big Volume. That said, the thought of restoring multiple terabytes from cloud backup is unappealing, so fault-tolerance would still be nice. I'm thinking I'd like to go to 8TB drives for greater density. Right now I'm torn between the following options:

- Keep the base system hardware and OS in place, build a new 4x8TB RAID-6 on the Areca, migrate data, pull the old 8x2 array, then expand the new array to provide capacity growth. Simplest for now, I've got until 2020 to replace 2008 R2.
- Storage upgrade as above, but also upgrade base system and OS while I'm loving around with it.
- Upgrade base system and OS, migrate storage to ZFS or other non-hardware storage array. Mostly out of concern about the aging Areca card being the bottleneck in any future array rebuild operation. But I've also heard not-great things about ZFS resilvering; when is OpenZFS going to get sequential resilvering implemented?
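If the ZFS option wins out, the new pool itself is a one-liner and the copy can run over the network while the old array stays live. Pool name, disk names and dataset layout below are invented for the sketch:

```
# 4x8TB double-parity pool
zpool create tank raidz2 da0 da1 da2 da3
zfs create -o compression=lz4 tank/storage
# then copy the data across from the old Windows server,
# e.g. with rsync over SSH or robocopy to an SMB share on the new box
```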
|
# ? Mar 5, 2017 09:04 |
|
So I've been faffing around with a bash script this morning to mitigate the cost of a rare bitflip until I upgrade my system to ECC. I decided to automate checksumming in a Documents directory where I would like to keep critical files. Here's the result: code:
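The script itself got eaten by the quote, but the idea described reduces to a short shell function. This is a hedged sketch of the same approach, not the poster's actual script (paths and manifest name are assumptions):

```shell
#!/bin/sh
# Keep a SHA-256 manifest for a directory of critical files and warn if
# any file's on-disk checksum no longer matches the recorded one.
checksum_dir() {
    dir="$1"
    (
        cd "$dir" || exit 1
        if [ -f .sha256sums ]; then
            # --quiet prints only the files that fail verification
            sha256sum --check --quiet .sha256sums 2>/dev/null \
                || echo "WARNING: checksum mismatch in $dir"
        fi
        # regenerate the manifest, skipping the manifest files themselves
        find . -type f ! -name '.sha256sums*' -exec sha256sum {} + \
            > .sha256sums.tmp
        mv .sha256sums.tmp .sha256sums
    )
}
```

Run `checksum_dir ~/Documents` from cron; any output means something on disk changed since the last run, deliberately or otherwise.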
|
# ? Mar 5, 2017 12:17 |
|
make sure that your data has been rotated out of memory before this runs, because pretty much every OS will just use a bunch of memory to cache whatever you recently read or wrote
|
# ? Mar 5, 2017 12:23 |
|
Ah. poo poo. I thought there'd be a catch but I thought it would have been that my script was not properly thought out. I'll look into ways to forcefully read the files from disk when running the sha256sum command. e: I've rescheduled the checksum script to run daily at 0235. I've added the following root crontab script to run daily at 0230: code:
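The crontab script didn't survive quoting either, but the standard way to force the next read to come from disk is the drop_caches knob; a root crontab entry matching the times in the post might look like this (the exact command is an assumption, not the poster's script):

```
# root's crontab: flush dirty pages, then drop the page cache at 02:30
# so the 02:35 checksum run reads from disk instead of RAM
30 2 * * * /bin/sync && /sbin/sysctl -w vm.drop_caches=3
```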
apropos man fucked around with this message at 15:35 on Mar 5, 2017 |
# ? Mar 5, 2017 14:58 |
|
Matt Zerella posted:Can I flash a H310 to support 6TB drives in IT mode? My boss told me to take what I wanted from a failed sub company and WELP there were 4 6TB reds in a NAS that they don't want anymore. In the last month, a buddy and I have flashed 3 H200 cards to IT mode using the BIOS instructions here with no problems (instructions for the H310 included as well): https://techmattr.wordpress.com/2016/04/11/updated-sas-hba-crossflashing-or-flashing-to-it-mode-dell-perc-h200-and-h310/ If you haven't bought the card yet, it seems that the H200 can be had pretty readily on ebay for ~30-35ish shipped in the US right now with best offers on the buy it now auctions with multiple cards listing for around 40.
|
# ? Mar 5, 2017 15:31 |
|
Fancy_Lad posted:In the last month, a buddy and I have flashed 3 H200 cards to IT mode using the BIOS instructions here with no problems (instructions for the H310 included as well): Thanks... but I'm an impatient rear end in a top hat and I bought a m1015 for ~67 on eBay.
|
# ? Mar 5, 2017 15:49 |
|
I think one of my hard drives in my current NAS is about to poo poo the bed: whirring, juddering and taking time to read and write on occasion. It's a Netgear Stora and has served me well (see what I did there?) for quite some time. I would like to replace it with something a bit newer, two or four bay (I have a couple of 1TB Samsungs I would like to use), and I will add a couple of 3 or 4TB Reds. Must have not lovely software. I use it mainly for streaming films and storing downloads to save filling up my SSD on my main PC. I am in the U.K. if that makes any difference.
|
# ? Mar 5, 2017 17:23 |
|
The Synology NAS have a fairly good rep. The software must be quite good, especially from an ease of use perspective, because it's been available in hacked form for home built NASes for some time. They are a bit pricey though. Sorry I couldn't recommend a particular model.
|
# ? Mar 5, 2017 21:16 |
|
Matt Zerella posted:Thanks... but I'm an impatient rear end in a top hat and I bought a m1015 for ~67 on eBay. That's a good price. Few years ago when I was looking they had drifted into the 150 range. I ended up buying an m5015 because it was much cheaper.
|
# ? Mar 5, 2017 21:19 |
|
I have a bunch of 2TB drives in a raid-z2 config that have been running well for the past 3-4 years, but I'm afraid of the bathtub effect kicking in, plus I'm at 73% capacity and growing. I'm debating just resilvering onto 3TB drives for an instant 50% gain in capacity, or rebuilding with a smaller array of bigger drives (I currently have 10 drives). I'm trying to balance number of drives with rebuild times if I do have a failed drive. With that in mind, I wanted to take a quick survey. What brands, models and sizes are folks using these days? I'm currently using 8 WD Reds (purchased in groups of 2) and 2 HGSTs and wanted to get a feel for what folks feel are reliable. I looked at the Backblaze report but getting hold of those reliable HGSTs seems expensive or just plain hard.
|
# ? Mar 6, 2017 06:21 |
|
spoon daddy posted:I have a bunch of 2TB in a raid-z2 config that have been running well for the past 3-4 year but I'm afraid of the bath tub effect kicking in plus I'm at 73% capacity and growing. I'm debating just resilvering with 3TB for an instant 50% gain in capacity or just rebuilding with a smaller array of bigger drives(I currently have 10 drives). I'm trying to balance # of drives with rebuild times if I do have a failed drive. I just buy whatever is cheapest except for the bad outliers on those backblaze reports.
|
# ? Mar 6, 2017 15:47 |