QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
After a few seconds of googling, I found a decent discussion about ECC vs. non-ECC for ZFS on the FreeNAS forums. More interestingly, here's a research paper on how non-ECC RAM can screw up your ZFS. Which is all to say that a few bad bits aren't going to kill you, but a RAID controller seems like a safer bet than ZFS if you have unreliable RAM.

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
Well, Monday is the day the rest of my toys get here and I go full-out ZFS. Been testing for a bit and FreeNAS seems to be the way to go. The Drobo dropped a drive yesterday... I'm pretty sure it knows its days are numbered.

uhhhhahhhhohahhh
Oct 9, 2012
Anyone running XPEnology know if it's worth installing it on ESXi? I have it running natively at the moment but need to update to 5.2, and I was wondering if it's worth the extra hassle of putting it in a VM just to get the fan quieter. Isn't there going to be a performance hit too? Any other cons? I don't know if I'll be able to migrate without losing my data, either. Running on an HP Gen8 with only 2GB of RAM.

thebigcow
Jan 3, 2001

Bully!

uhhhhahhhhohahhh posted:

Anyone running XPEnology know if it's worth installing it on ESXi? I have it running natively at the moment but need to update to 5.2, and I was wondering if it's worth the extra hassle of putting it in a VM just to get the fan quieter. Isn't there going to be a performance hit too? Any other cons? I don't know if I'll be able to migrate without losing my data, either. Running on an HP Gen8 with only 2GB of RAM.

No, not on your hardware.

codo27
Apr 21, 2008

I have a Buffalo LinkStation. Web access is enabled on the NAS, but I can't hit it from outside my home network. Under the settings there is a port listed, and I've allowed that in the port forwarding rules in my router settings, but I'm at a loss as to what to try otherwise.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe

codo27 posted:

I have a Buffalo LinkStation. Web access is enabled on the NAS, but I can't hit it from outside my home network. Under the settings there is a port listed, and I've allowed that in the port forwarding rules in my router settings, but I'm at a loss as to what to try otherwise.

Check to make sure the gateway/router address in the network settings of your NAS is configured.

IOwnCalculus
Apr 2, 2003





What port is it trying to use? If it's defaulting to port 80, for example, some ISPs will block that no matter what you do on your end.

codo27
Apr 21, 2008

It was set up with DHCP and UPnP so I wouldn't think I'd have to do all that much. Ports it shows are:
External Port:
36027
NAS Internal Port:
9000

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Due to how terrible UPnP is for security, I wouldn't be surprised if some ISPs tried to detect UPnP happening behind a NATed IP address like your home network. Do you have SSL enabled on that 9000 port? What kind of SSL does that server support? Some ISPs will detect that you have an insecure version of a web server and refuse to deliver it back to a client. It's not just ISPs that do it; proxies and content distribution networks also like to filter obviously malicious traffic out. But if you can hit port 9000 and get a response on your LAN, you have a forwarding rule mapping 36027 to 9000 on that IP, and you can connect to that external port from outside somehow, that's really about it. Granted, you can have an issue like conflicting routes/subnets, but I don't think that applies here.
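For what it's worth, the quickest way to see which side is broken is a plain TCP connect test from something outside your LAN (a phone on cellular data, a cheap VPS, whatever). A minimal sketch in Python - the hostname and port here are placeholders for your actual WAN address and that 36027 external port:
code:
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from OUTSIDE the LAN; substitute your real WAN IP or dynamic-DNS name.
print(port_open("your.wan.address.example", 36027))
If it connects against 9000 on the NAS from inside the LAN but fails against 36027 from outside, the forwarding rule or the ISP is the problem; if it fails from both sides, look at the NAS itself.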

BlankSystemDaemon
Mar 13, 2009



With how frequently UPnP comes enabled on CPE, I would be surprised if that was true - despite the fact that you are absolutely correct in saying that UPnP is terrible for security. It also bears mentioning that it's not nearly as universal as the name would have you believe.
The Internet of Things is going to be a lot of fun, given how bad we know companies that make low-profit network devices are at updating them - they simply cannot afford to hire the programmers to keep the software up to date.

EDIT: Just had a thought - have you checked that destination NAT is also set up? Both that and port forwarding need to be set up for it to work properly.

BlankSystemDaemon fucked around with this message at 21:59 on Jul 28, 2015

codo27
Apr 21, 2008

Sorry, networking really is my weakness. Given the info above, and since my router GUI looks like this, what do I plug in to each box?



It's the ISP's router, if that isn't obvious. UPnP and NAT came enabled, I'm pretty sure - or they're enabled now anyway, but I don't think I enabled them.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
Well, the TS440 arrived at 11AM, and by 4PM it was loaded with 20TB of Toshiba drives and a 16GB USB stick with FreeNAS. I'm currently dumping 6TB of data onto it from my desktop at 100MB/s, which is amazing. That's effectively 800Mb/s on a gigabit network; 20% overhead is less than I was expecting, especially because the data has to travel down a single Ethernet cable to the basement switch/router. It took a lot of reading, but Plex is up and running, with correct storage and user allocation, and I've got a CIFS share for my desktop backup program. I ran into some issues early on with users/permissions/storage locations, but a couple of botched installs of the Plex plugin later, we're up and running. Once I've learned how the VirtualBox interface works on this thing, I'll be pretty happy.

It turns out that 20GB of ECC RAM still isn't really that much for FreeNAS, so I'll probably be upgrading to 32GB.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

sharkytm posted:

It turns out that 20GB of ECC RAM still isn't really that much for FreeNAS, so I'll probably be upgrading to 32GB.

I assume you're using some form of ZFS, in which case, yeah, it'll gobble up all the RAM you can toss at it. That said, unless you are serving multiple concurrent users in high-performance settings, you don't really need it: 32GB to serve a 20Mbps video stream from Plex is complete overkill and won't result in any noticeable performance gain.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

sharkytm posted:

It turns out that 20GB of ECC RAM still isn't really that much for FreeNAS, so I'll probably be upgrading to 32GB.

Are you using dedup? That's a big ol memory hog with ZFS.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Moey posted:

Are you using dedup? That's a big ol memory hog with ZFS.

And also completely pointless for a home setup.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

Moey posted:

Are you using dedup? That's a big ol memory hog with ZFS.

Nope, no dedup. RAIDZ2 on ZFS. I'm slamming the RAM with huge copies, 2 VMs, Transmission, Plex, and some other stuff. ZFS is just gobbling whatever it can. Amazingly, very little slows this thing down. I'm used to my old ReadyNAS, Promise SmartStor, and DNS-323; this is a whole other league of performance.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

sharkytm posted:

Nope, no dedup. RAIDZ2 on ZFS. I'm slamming the RAM with huge copies, 2 VMs, Transmission, Plex, and some other stuff. ZFS is just gobbling whatever it can. Amazingly, very little slows this thing down. I'm used to my old ReadyNAS, Promise SmartStor, and DNS-323; this is a whole other league of performance.

Yeah, the other services you are running are eating that memory, not the file system itself.

My home box is maxed out at 32GB and I'm starting to get the itch to upgrade it, but I'm running a much different setup: ESXi booted off a thumb drive, with a storage controller passed through to a FreeNAS VM, and then a bunch of other VMs as well.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

Moey posted:

Yeah, the other services you are running are eating that memory, not the file system itself.

My home box is maxed out at 32GB and I'm starting to get the itch to upgrade it, but I'm running a much different setup: ESXi booted off a thumb drive, with a storage controller passed through to a FreeNAS VM, and then a bunch of other VMs as well.

Living dangerously, I see (FreeNAS in a VM). With all the plugins/jails off, just FreeNAS is using 16GB of memory. The rule of thumb that I've read is 1GB/TB with ZFS, and I'm running 20TB, so that seems just about right.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm using 12GB with 34TB. (95% used...ugh)

It's mass storage for serving HTPCs, and there are zero issues with it. Copying stuff to/from the server almost always breaks 100MB/s.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

Thermopyle posted:

I'm using 12GB with 34TB. (95% used...ugh)

It's mass storage for serving HTPCs, and there are zero issues with it. Copying stuff to/from the server almost always breaks 100MB/s.

Not to derail, but 17x2TB+parity? Can't come up with any other math that would make 34TB work.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

G-Prime posted:

Not to derail, but 17x2TB+parity? Can't come up with any other math that would make 34TB work.

Sorry, I was just rounding and I've actually got 3 pools because ZFS sucks at expanding and I miss WHS.

Also, I transposed numbers...it's actually ~43TB.

code:
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank1  27.2T  26.1T  1.15T    95%  1.00x  ONLINE  -
tank2  7.25T  6.81T   446G    93%  1.00x  ONLINE  -
tank3  9.06T  8.10T   989G    89%  1.00x  ONLINE  -
It's pretty stupid and I'm thinking about just deleting a shitton of stuff so I can have the space to migrate to a better setup without having to spend 1 bajillion dollars on new hard drives to copy everything to so I can redo my pools.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Makes total sense now, no worries.

Things like your current situation make me really think hard about how I'm going to deal with that problem. Odds are pretty good that I'm just going to resize by buying replacement drives over time, staggered, and let the pool resilver once they're fully replaced. Then lather, rinse, repeat on the next size up, until the end of time or storage tech improving dramatically.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

sharkytm posted:

The rule of thumb that I've read is 1GB/TB with ZFS, and I'm running 20TB, so that seems just about right.

Again, while ZFS will happily scarf down all the RAM you can throw at it, for home and small-office use you absolutely do not need to keep to that high a ratio. VMs and whatnot will of course gobble up their own chunks of RAM, but FreeNAS will work quite well on a fairly light amount of RAM if all you're asking of it is to serve a single user and you aren't slamming it with hilarious IOPS (which may or may not be the case depending on what you're doing with the VMs).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
What's more important to note is that throwing more RAM at ZFS, or anything else that cares about caching, only helps if you're going to access the same files again, which is not really the case for most large media file servers unless several users tend to watch the same 4K video at the same time or something. Otherwise, you're just going to have to hit the drives for the data no matter what. Basically, the ZFS users that need gobs of RAM know they need it, probably because they're serious enough about performance to think about these sorts of things. I get kind of crappy I/O performance when using ZFS for iSCSI targets backing VMware ESXi without sufficient RAM and an L2ARC set up.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

G-Prime posted:

Makes total sense now, no worries.

Things like your current situation make me really think hard about how I'm going to deal with that problem. Odds are pretty good that I'm just going to resize by buying replacement drives over time, staggered, and let the pool resilver once they're fully replaced. Then lather, rinse, repeat on the next size up, until the end of time or storage tech improving dramatically.

That's how I've been managing my pool. The only regret I have is that it's two vdevs of 4x2TB and 4x4TB drives, both raidz1. It does make expanding easier, since I'll probably swap the 2TB drives for 4/6TB sometime next year, but I worry about losing two disks from the same vdev when resilvering, since they're all disks of the same age.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

PitViper posted:

That's how I've been managing my pool. The only regret I have is that it's two vdevs of 4x2TB and 4x4TB drives, both raidz1. It does make expanding easier, since I'll probably swap the 2TB drives for 4/6TB sometime next year, but I worry about losing two disks from the same vdev when resilvering, since they're all disks of the same age.

That's why I want to do it staggered. Every time you replace a disk (as I understand it), it'll resilver, and use the same amount of space as the old disk. Once you replace them all, it should expand to fill that space neatly. Yeah, resilvers are stressful on the array, but because ZFS knows what data is missing, it should only read what's needed to fill the new drive, rather than the complete contents of the array. Also, I've reached the conclusion that there's never a reason to do z1. I know that it's too late for you to change your mind on that, obviously, but I'm going to be looking toward the future, and may just go z3 for as much of a safety net as possible.
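For anyone following along, that staggered cycle boils down to one zpool replace per disk with autoexpand turned on, waiting out each resilver before touching the next drive. A rough sketch of a single iteration - the pool and device names are placeholders, and this assumes the standard zpool CLI:
code:
import subprocess
import time

POOL = "tank"              # placeholder pool name
OLD_DISK = "gptid/oldX"    # placeholder label of the disk being retired
NEW_DISK = "gptid/newX"    # placeholder label of its larger replacement

# Let the pool grow on its own once the last member disk has been swapped.
subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

# Start the replacement; ZFS resilvers onto the new disk in the background.
subprocess.run(["zpool", "replace", POOL, OLD_DISK, NEW_DISK], check=True)

# Don't stage the next swap until this resilver has finished.
while True:
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True, check=True).stdout
    if "resilver in progress" not in status:
        break
    time.sleep(600)  # re-check every 10 minutes

print("Resilver done; safe to swap the next disk.")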

Yossarko
Jan 22, 2004

Since buying an MBP and an iPad, I hardly use my big desktop anymore; it's left on 24/7 for torrenting and as a media server, but other than that I don't touch it.

So I'm thinking of selling it and just getting a NAS that can do both of those things. After reading up on CPUs / transcoding and such, I'm looking at the Synology DS214play and a 3TB WD Red drive. Any thoughts?

Even though it's a 2-bay unit with RAID support and such, I don't think I'm going to put two disks in (yet, anyway). That's possible, correct? I'll be using this as a 30% backup solution and 70% media center / streaming (transcoding) server for my Chromecast, iPhone, and iPad.

For actual backup redundancy, I was thinking of maybe buying Amazon cloud storage and syncing a backup folder to that? It seems cheaper than buying a second disk for a RAID setup.

Thanks for any thoughts / pointers / etc.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

G-Prime posted:

That's why I want to do it staggered. Every time you replace a disk (as I understand it), it'll resilver, and use the same amount of space as the old disk. Once you replace them all, it should expand to fill that space neatly. Yeah, resilvers are stressful on the array, but because ZFS knows what data is missing, it should only read what's needed to fill the new drive, rather than the complete contents of the array. Also, I've reached the conclusion that there's never a reason to do z1. I know that it's too late for you to change your mind on that, obviously, but I'm going to be looking toward the future, and may just go z3 for as much of a safety net as possible.

Yeah, my oldest disks in the pool are now at around 3 years of power-on time. All WD Reds, sitting on a dedicated HBA. I don't know that I'd do z2 on a 4-disk vdev, and I only did 4 disks initially because it fit tidily on my 4-port controller and I only needed 4x2TB of space at the time. Ideally I'd jump to a single 8-disk vdev and use z2 or z3, but I'd need to use at least 4TB disks just to hold my existing data, to say nothing of expanding capacity.

This way, I can refresh half the disks every two years or thereabouts, disk capacity is outpacing my storage growth needs, and even with some disasters relating to bad cables and a dead controller, I haven't lost any data since I built the pool. Next hardware refresh I'll probably even use ECC RAM!

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
My HBA and Rackable JBOD enclosure came in, and it's almost 2/3 full of spindles. I'm down to 1TB of data left to migrate from the Drobo; once that's done, there are three more going in.

I may have a problem.

H2SO4 fucked around with this message at 19:34 on Jul 31, 2015

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
This issue with rearranging the drives is the biggest reason I still use a system similar to Synology Hybrid RAID. I've partitioned my hard drives into 80 GB partitions, created a bunch of mdadm RAID6/1 devices from those, and then created LVM volumes on top of them. Thanks to pvmove I can move data off any of the RAID devices, delete the RAID, and recreate it with a different number of drives or a different geometry. Since pvmove uses mirrored LVM volumes during the data transfer, my storage should never be in a degraded state.

With this setup you could start with 3x1TB RAID5, upgrade it to 6x1TB RAID6, and later change to 4x4TB RAID6. This system has proven itself: I'm still using the same volume group I originally created with 80GB IDE drives or 250GB IDE/SATA drives - I can't remember for sure anymore, it was about 10 years ago - and the LVM has never experienced a major failure. I'm waiting for btrfs to mature and to get more experience with it so I can have a filesystem with checksumming; for now I just have copious amounts of md5sum files all over the place.

A minor problem with this system is that after a bunch of pvmoves your data may be scattered all over the place. I recently upgraded from 4 drives to 6 and wanted to rearrange the volumes more optimally and logically. I had to draw a graph in PowerPoint to figure out where all my extents were lying. Not all of it made sense.
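For reference, the evacuate-and-rebuild cycle described above is a handful of commands: pvmove the extents off the md device, drop it from the volume group, recreate the array with the new geometry, and hand it back to LVM. A rough sketch, with the volume group, md device, and partition names all placeholders:
code:
import subprocess

VG = "storage"                 # placeholder volume group name
MD = "/dev/md2"                # placeholder md device being rebuilt
MEMBERS = ["/dev/sda3", "/dev/sdb3", "/dev/sdc3", "/dev/sdd3"]  # placeholder partitions

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvmove", MD])            # migrate all extents off; assumes the rest of the VG has room
run(["vgreduce", VG, MD])      # remove the emptied PV from the volume group
run(["pvremove", MD])          # wipe the LVM label
run(["mdadm", "--stop", MD])   # tear down the old array
# Recreate with the new geometry (here a 4-member RAID6); mdadm may prompt for
# confirmation if it finds old metadata on the member partitions.
run(["mdadm", "--create", MD, "--level=6", "--raid-devices=4"] + MEMBERS)
run(["pvcreate", MD])
run(["vgextend", VG, MD])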

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

G-Prime posted:

That's why I want to do it staggered. Every time you replace a disk (as I understand it), it'll resilver, and use the same amount of space as the old disk. Once you replace them all, it should expand to fill that space neatly. Yeah, resilvers are stressful on the array, but because ZFS knows what data is missing, it should only read what's needed to fill the new drive, rather than the complete contents of the array. Also, I've reached the conclusion that there's never a reason to do z1. I know that it's too late for you to change your mind on that, obviously, but I'm going to be looking toward the future, and may just go z3 for as much of a safety net as possible.

The problem is you're also forcing 4 separate rebuilds, which can hammer old HDDs to the point where they start to actively fail on you. It happened to me on an old RAID I had made from 400gb drives.

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY :filez:
Man oh man, I've waited too long to upgrade. Going from a Netgear ReadyNAS Ultra 4 to a TS440 with FreeNAS is such a huge performance jump. I can do all these things now. :swoon:

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
Man, it's real easy to upgrade the storage in a Drobo, and real loving tedious to whittle it back down. It's been in basically a constant state of rebuilding as I'm moving data off of it and onto the FreeNAS box.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

Methylethylaldehyde posted:

The problem is you're also forcing 4 separate rebuilds, which can hammer old HDDs to the point where they start to actively fail on you. It happened to me on an old RAID I had made from 400gb drives.

Very valid point, but that's more or less what I'm trying to avoid.

Just for example, let's say I do a 10x3TB drive RAID-Z2 build in January 2016. My plan would be to replace a drive with a larger one every 2 months (arbitrary timeframe). So the first drive goes in March, then May, and so on. By September 2017 I've replaced every drive in the array with a 6TB. There have been 10 resilvers over that time, but no drive was even 2 years old, so it should be relatively safe. Assuming the array is half full (possibly fair depending on the person), you're reading ~1.5TB from each disk each time you do it. 12.5TB of reads is the URE threshold for a WD Red. So, yes, you're putting yourself at risk there, obviously, if you're using the drives at all. That said, you could do it as a Z3 and (I think?) do two disks at a time, and reduce that risk considerably, because you'd take it down to 5 resilvers rather than 10. Somebody please correct that last part if I'm wrong.

Overall, this looks like a less risky way to do it than replacing drives as they fail, just by virtue of making sure that there are as few potential points of failure as possible. Sure, I could wait until my array is 18 months old and replace a drive when it dies. But that means ALL the disks are 18 months old and equally at risk of failure. Or I could minimize that risk by staggering the age of all drives in the array and continually replacing the oldest. Yes, you have to seed the pool with drives initially, but starting the replacement cycle early is still a risk reduction as I see it. Either way, when disks fail, you're going to have to do that resilver and possibly trash the array. May as well do it on your own terms, in a planned fashion, rather than an emergency.
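For what it's worth, the URE math above pencils out to roughly one expected error per resilver pass at the spec'd rate, which is the whole argument for Z2/Z3 plus a second copy. A rough back-of-the-envelope, assuming the half-full 10x3TB numbers and the 1-in-10^14 WD Red spec from the post (decimal terabytes):
code:
# Back-of-the-envelope URE estimate for one resilver of a half-full 10x3TB RAIDZ2.
URE_PER_BIT = 1e-14        # spec'd unrecoverable read error rate (WD Red class)
TB_READ_PER_DISK = 1.5     # ~half of each 3TB disk is read during a resilver
SURVIVING_DISKS = 9        # 10-wide vdev minus the disk being replaced

bits_read = SURVIVING_DISKS * TB_READ_PER_DISK * 1e12 * 8
expected_ures = bits_read * URE_PER_BIT
print(f"Expected UREs per resilver pass: {expected_ures:.2f}")      # ~1.1
print(f"Across 10 staggered resilvers:   {expected_ures * 10:.1f}")  # ~11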

Walked
Apr 14, 2003

Just got my 1515+ set up; running 5x3TB drives in SHR with two-drive parity.

100% maxing out my GigE on backups. I have a few computers on the network using it for various purposes (Plex, backup, Time Machine, etc.), so I'm considering doing NIC aggregation on the Synology side - probably don't really need it, but it's fun to employ poo poo like that at home.

Also: Veeam Endpoint is awesome for home backup to NAS. Very very pleased; been eyeballing it for a while and finally decided to give it a whirl here.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

G-Prime posted:

Overall, this looks like a less risky way to do it than replacing drives as they fail, just by virtue of making sure that there are as few potential points of failure as possible. Sure, I could wait until my array is 18 months old and replace a drive when it dies. But that means ALL the disks are 18 months old and equally at risk of failure. Or I could minimize that risk by staggering the age of all drives in the array and continually replacing the oldest. Yes, you have to seed the pool with drives initially, but starting the replacement cycle early is still a risk reduction as I see it. Either way, when disks fail, you're going to have to do that resilver and possibly trash the array. May as well do it on your own terms, in a planned fashion, rather than an emergency.

First Rule: RAIDZ is not enough. RAIDZ2 is about the minimum you want these days.

Second Rule: Make a second copy of your data. It can be to tapes (I do that), or to a 6 or 8TB external drive. Just something so that if your machine catches fire or the entire pool decides gently caress da police as it corrupts itself, you still have another copy someplace. It doesn't have to be fault tolerant, just there.


It's not the resilver per se that causes issues. It's doing it 8+ times in fairly rapid succession, on fairly old drives. If one or two of the drives have developed that flaky golden crust, you can be in for a sad time if it's a RAID-Z array. The disks are mostly fine at light load, but under heavy multi-week 100% sustained read/write they can develop issues, and issues with your only copy of the data are bad.

Honestly, it's easier to make sure you have space to expand and just get 8 or 10 disks at a time. If you can't justify or afford to drop $1500 on a stack of disks all at once, just save up. Set aside the single disk/month payment, and once you're ready to buy, pick up a 10-pack from Amazon or Newegg. Drives only get cheaper the longer you wait, and it's nice to have all of one model instead of a bunch of different revisions or similar models. I did something similar when my 8 old as gently caress 1.5TB WD Greens (I know, but they were cheap, WD IDLE worked on them, and Reds didn't exist) started having intermittent errors that hung the controller and hardlocked the system. I got 10 spiffy new 3TB Reds, set them up in a big RAIDZ2 pool, and did a cp * /tank /tank2. It chewed on it for a few days, and when it was done I gave the old Greens a .308 decommissioning.

calandryll
Apr 25, 2003

Ask me where I do my best drinking!



Pillbug
Any suggestions on a script for cleaning up old snapshots? I have daily/weekly/monthly snapshots dating back to the beginning of April, when I first set up my NAS.

thebigcow
Jan 3, 2001

Bully!

calandryll posted:

Any suggestions on a script for cleaning up old snapshots? I have daily/weekly/monthly snapshots dating back to the beginning of April, when I first set up my NAS.

I don't know what platform you are on, but see if any of this helps: http://serverfault.com/questions/340837/how-to-delete-all-but-last-n-zfs-snapshots
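If a standalone script is easier than adapting those answers, here's a rough keep-the-last-N sketch against the standard zfs CLI - the dataset name and retention count are placeholders, and it defaults to a dry run so you can eyeball the list first:
code:
import subprocess

DATASET = "tank/data"   # placeholder dataset whose snapshots get pruned
KEEP = 30               # how many of the newest snapshots to keep
DRY_RUN = True          # flip to False once the printed list looks right

# Snapshots of this dataset only (-d 1), sorted oldest-first by creation time.
snapshots = subprocess.run(
    ["zfs", "list", "-H", "-d", "1", "-t", "snapshot",
     "-o", "name", "-s", "creation", DATASET],
    capture_output=True, text=True, check=True,
).stdout.split()

for snap in snapshots[:-KEEP]:      # everything except the newest KEEP
    if DRY_RUN:
        print("would destroy", snap)
    else:
        subprocess.run(["zfs", "destroy", snap], check=True)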

calandryll
Apr 25, 2003

Ask me where I do my best drinking!



Pillbug
Forgot to mention it's on Ubuntu. Thanks, I'll take a look, play around with it, and see how well it works.

Henrik Zetterberg
Dec 7, 2007

Any recommendations on a mini-ITX case that has 6+ internal 3.5" bays? Looks like there are only 3 hits on Newegg.

I'm running unRAID on an older Supermicro Atom mini-ITX board and currently just have my extra drives laid out on the floor, since my current case only has 4 drive slots. I'm moving and am looking at mounting everything properly.
