IT Guy posted:Those of you with a DS411j, what are your xfer speeds? I'm starting to think mine is hosed.
BlankSystemDaemon fucked around with this message at 20:39 on Apr 6, 2012 |
|
# ¿ Apr 6, 2012 20:27 |
IT Guy posted:I just ran some tests on mine. I'm getting 10MB/s read and 4MB/s writes over CIFS. 60MB/s read 40MB/s write over FTP.
Those drives are fine - I have the exact same model in my N36L, and there I can fully saturate 2x LAGG'd gigabit NICs on read and almost on write (dd: 400MBps read and 170MBps write, from memory).
|
|
# ¿ Apr 6, 2012 20:39 |
Just using Windows file transfer with default values (no enlarged buffers) and 9k jumbo frames, I get ~100MBps read and 60MBps write.
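For anyone who wants to reproduce numbers like the dd figures quoted above, a minimal sketch looks something like this (the file path and size are placeholders, not from the post - point TESTFILE at the pool mountpoint to test the array):

```shell
# Quick-and-dirty sequential throughput check with dd.
# TESTFILE is a placeholder; use something like /mnt/tank/ddtest.bin to hit the pool.
TESTFILE="${TESTFILE:-/tmp/ddtest.bin}"
# Write test; bs=1M keeps syscall overhead low. Note /dev/zero compresses to
# nothing, so disable compression on the dataset (or use /dev/urandom) for
# meaningful numbers on a compressed dataset.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64
# Read test; this may be served from cache unless caches are flushed first.
dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```

dd prints the throughput to stderr when it finishes each pass.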
|
|
# ¿ Apr 6, 2012 21:01 |
UndyingShadow posted:My FreeNAS server is getting kernel panics every time I do large file transfers. It stays up for about an hour, and then crashes with an error in the console. It's reproducible every time.
As a general rule, any issue (with the exception of the UI) you can run into with FreeNAS will be replicable (and has already been experienced by someone else) on FreeBSD - and if the issue is related to zfs, it's even more likely that someone else has encountered it.
BlankSystemDaemon fucked around with this message at 18:17 on Apr 8, 2012 |
|
# ¿ Apr 8, 2012 18:14 |
IT Guy posted:FreeNAS has ZFS and is very loving simple to setup.
However, with the current release you can't expand the array by adding disks, due to a limitation of ZFS version 15 (I think future versions will allow it). You can, however, replace drives with bigger drives, which in v15 just requires an export and an import (after the replacement and resilvering have been done for each disk).
BlankSystemDaemon fucked around with this message at 19:17 on Apr 9, 2012 |
|
# ¿ Apr 9, 2012 19:12 |
I assume you've tried the method of using 7-Zip and Win32DiskImager on Windows that's described in the documentation? If so, and you have n*x running in a virtual machine, OSX running anywhere, or some old hardware with n*x, that very same page describes how to do it there.
BlankSystemDaemon fucked around with this message at 07:10 on Apr 11, 2012 |
|
# ¿ Apr 11, 2012 07:04 |
Also, in case you have your FreeNAS accessible from the internet: Samba 3.0.x - 3.6.3 (inclusive) has a pre-authentication remote code execution vulnerability that runs as "root" (CVE-2012-1182), which is a good reason to use hosts.allow if you absolutely have to have it exposed to the internet. Better yet, don't expose it at all, since it requires so much locking down that it's not worth it.
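A minimal hosts.allow sketch for that, assuming a 192.168.1.0/24 LAN (the subnet is an assumption, adjust to your network):

```
# /etc/hosts.allow - only let the local subnet talk to Samba
smbd : 192.168.1.0/255.255.255.0 : allow
smbd : ALL : deny
```

Order matters: the first matching rule wins, so the allow line has to come before the catch-all deny.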
|
|
# ¿ Apr 11, 2012 08:23 |
evil_bunnY posted:If you open SMB/CIFS to a routed network you deserve everything you'll get.
|
|
# ¿ Apr 11, 2012 13:02 |
wang souffle posted:Can anyone convince me what to do with 8x 2TB drives in a ZFS setup? I currently have 5 of the drives in a RAIDZ1 setup and need a migration plan for existing data once I get 3 more.
|
|
# ¿ Apr 15, 2012 21:17 |
Thermopyle posted:How many people are storing multi-terabytes (say like more than 4 TB) of data that isn't movies/tv shows/video of some sort?
Does anyone have any recommendations for off-site internet storage backup (I need about 5TB with no transfer limit, preferably compatible with FreeNAS in some way)? I kinda want to add an additional backup.
|
|
# ¿ Apr 17, 2012 19:01 |
Bonobos posted:Any danger in mixing Samsung F4's with the Hitachi 5k3000 drives?
nyoron posted:Would the data go through Workstation A, or would it go straight from B to C?
FXP will do it over the FTP protocol, and it's not that hard to set up. Might be worth looking into.
BlankSystemDaemon fucked around with this message at 09:35 on Apr 20, 2012 |
|
# ¿ Apr 20, 2012 09:24 |
How regularly do people in this thread run zpool scrub <pool>?
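For reference, scheduling a scrub is just a cron job; a sketch, with the pool name and times as placeholders:

```
# root crontab - scrub the pool "tank" at 02:00 on the 1st and 15th of every month
0 2 1,15 * * /sbin/zpool scrub tank
```

Scrubs run in the background, so the cron job itself returns immediately; check progress afterwards with zpool status.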
|
|
# ¿ Apr 22, 2012 15:45 |
Alright, I split the difference and set it up to run every 3 weeks (it takes about 16 hours to finish a scrub). Weirdly, the best practices guide for zfs suggests weekly for consumer-grade drives and monthly for datacenter-grade drives. Is there really that big of a difference, or is it just being conservative?
|
|
# ¿ Apr 23, 2012 06:27 |
titaniumone posted:Earlier in the thread someone brought up aggregating network ports. I ended up buying an Intel Quad Port PCI-E 4x card for my server. A little bit of configuration in FreeBSD and on my Cisco switch, and now:
Also, you're welcome - I believe it was me who mentioned LAGG. I'm running it with a dual-port NIC, since that's what fits in my low-profile pci-express slot.
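For anyone curious what that looks like on the FreeBSD side, a minimal rc.conf sketch (interface names and addresses are assumptions, and the switch ports need a matching LACP channel-group):

```
# /etc/rc.conf - LACP aggregation of two Intel ports
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10 netmask 255.255.255.0"
```

One caveat: LACP hashes per-flow, so a single TCP connection still tops out at one link's speed; the aggregate only shows up with multiple clients/streams.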
|
|
# ¿ May 4, 2012 09:46 |
FreeNAS 8.2.0-BETA3 is currently running on my N36L along with transmission and flexget in a jail, and I'm looking into setting up minidlna or serviio - I must say I'm very impressed with the way the plugin system works.
EDIT: Learn from my mistake: don't ever enable autotune or the serial console, by intent or by accident (respectively, in my case). I'm stuck at the FreeBSD boot0 stage of the boot process and can't get out until someone more clever than me helps me.
BlankSystemDaemon fucked around with this message at 08:03 on May 29, 2012 |
|
# ¿ May 28, 2012 20:21 |
badjohny posted:This might not be the thread for this question, but is apple doing anything to combat bit rot?
I know MS is going to ReFS, Linux looks to be going to Btrfs, and ZFS is currently the go-to filesystem for preventing bit rot and bit flips.
|
|
# ¿ May 29, 2012 15:26 |
evil_bunnY posted:With Zevo?
|
|
# ¿ May 29, 2012 17:46 |
Jigoku San posted:So my old Acer EasyStore WHS is getting weird - it dropped a drive out of the pool for no reason and refused to put it back (the drive had nothing wrong with it), and it's not streaming files well anymore. They start to slow down and then lock up after ~10min of playback, seeking takes forever and can lock up MPC. I've got most of it backed up, so I'm looking for other options to use my current hardware and 6 mismatched drives (3 1tb and 3 1.5tb).
I would recommend making two vdevs in one pool though, one with the 3x1TB and one with the 3x1.5TB - if you make a single raidz2 out of all 6 drives, each drive only contributes the minimum capacity (1TB), so you'd get 4x1TB of usable space.
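A sketch of that two-vdev layout (device names are placeholders; on an appliance like FreeNAS you'd normally do this through the volume manager rather than by hand):

```
# One pool built from two raidz1 vdevs:
# first vdev: the three 1TB drives; second vdev: the three 1.5TB drives.
zpool create tank raidz1 ada0 ada1 ada2 raidz1 ada3 ada4 ada5
zpool status tank
```

ZFS stripes writes across both vdevs, so each vdev keeps its own parity and the mismatched sizes don't waste space the way a single mixed raidz would.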
|
|
# ¿ May 30, 2012 12:35 |
DrDork posted:And, because parity is dealt with at the vdev level, if you create several smaller vdevs because you don't feel like buying a dozen drives all at the same time, you end up losing more space to parity than if you just had one big vdev.
Another option for pool expansion is to replace drives one at a time over a period of time (which makes this method useful if you can put a bit of money away each month to buy new drives regularly) until all drives in the pool have been replaced with bigger ones, then let the pool auto-grow (a feature of zfs versions newer than v15; in v15 you have to export, replace, then import and resilver for each drive).
BlankSystemDaemon fucked around with this message at 20:48 on Jun 9, 2012 |
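The replace-and-grow cycle looks roughly like this (pool and device names are placeholders):

```
# Allow the pool to grow once every device has been upsized (newer zfs versions)
zpool set autoexpand=on tank
# For each old disk, one at a time - wait for the resilver to finish before
# starting the next replacement:
zpool replace tank ada0 ada6
zpool status tank        # watch resilver progress here
# After the last disk is replaced, the pool grows automatically with
# autoexpand=on; on v15 you export and re-import instead:
zpool export tank && zpool import tank
```

Replacing one disk at a time keeps the vdev's redundancy intact throughout; the pool only ever sees one device resilvering.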
|
# ¿ Jun 9, 2012 20:41 |
FISHMANPET posted:That's exactly what the autoexpand feature does:
|
|
# ¿ Jun 9, 2012 22:21 |
DarkLotus posted:Thank you for a very informative reply. How does ZFS handle using an SSD for cache? Is there anything special I need to do with the configuration to ensure redundancy?
Read about log (ZIL) and cache devices (write and read caching, respectively) in the ZFS Best Practices Guide if you really do have a special setup/need, and consider what's in the ZFS Evil Tuning Guide before doing anything else: zfs is a well-tested system, and if there were better values than the defaults, they would be the defaults. Other than straightening out bottlenecks, there's not a whole lot you can do performance-wise (but why would you need to? Even on consumer hardware - my low-power N36L, to be exact - I've seen SMB max out LAGG'ed dual-gigabit NICs).
BlankSystemDaemon fucked around with this message at 09:52 on Jun 25, 2012 |
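For reference, attaching SSDs as log and cache devices is one command each; a sketch with placeholder device names:

```
# Read cache (L2ARC): needs no redundancy - if it dies, reads just fall back
# to the pool, since everything on it is also on the main vdevs.
zpool add tank cache ada6
# Write log (SLOG for the ZIL): mirror it, because losing an unmirrored log
# device can cost you in-flight synchronous writes.
zpool add tank log mirror ada7 ada8
```

That answers the redundancy question: cache devices don't need it, log devices should have it.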
|
# ¿ Jun 25, 2012 09:43 |
I've posted about that very issue at least 5 separate times in this thread, I think. Yes, the bge driver on FreeBSD has a bug with this particular NIC. Just buy a cheap low-profile single- or dual-port pci-express x1-x4 NIC that's listed on the FreeBSD em driver manpage - all of those are Intel NICs fully supported by the em driver, so it's just a question of finding one that fits the above-listed specs. Alternatively, HP sells a NIC (part number 503746-B21) as an accessory to both the N36L and N40L.
BlankSystemDaemon fucked around with this message at 14:04 on Jun 29, 2012 |
|
# ¿ Jun 29, 2012 14:00 |
Colonel Sanders posted:I did not know that I could share the same folder with multiple protocols. Regardless, I can't justify setting up AFP for my Macbook because I rarely ever use the Macbook, and I'm sure it will connect to either SMB or NFS just fine.
If I had a Mac-only network I'd do AFP, and if it was n*x-only I'd do NFS - but in a mixed environment, SMB is the way to go.
|
|
# ¿ Jun 30, 2012 19:47 |
Stein Rockon posted:regarding N40L with FreeNAS rather than a ReadyNAS.
That being said, the CrashPlan client isn't going to be easy to get running on FreeNAS - it's not officially compatible, and although people have done it on FreeBSD, that's not quite the same as FreeNAS, even with the PBI plugin system in the 8.2 release. As for disks, out of the ones you have listed I'd recommend 4x Seagate Barracuda Green ST2000DL003 64MB 2TB. You also need a USB flash drive to install FreeNAS on, of course. In short: just go with the ReadyNAS solution, unless you insist on DIY. The primary advantage of DIY solutions like the N40L is that they're more easily expanded later on compared to a ReadyNAS.
BlankSystemDaemon fucked around with this message at 09:22 on Jul 15, 2012 |
|
# ¿ Jul 15, 2012 09:17 |
Synology is a good choice, but QNAP is also worth looking at. Every time I talk with people about NAS (not DIY solutions, mind you) it comes down to Synology or QNAP. Speaking of Synology, does anyone remember the package repository that was linked in this thread at some point? Apparently I didn't bookmark it, and I can't find it again (tried searching with threadid: and various keywords, but with no luck).
|
|
# ¿ Jul 19, 2012 05:35 |
frumpsnake posted:Are you referring to this one?
|
|
# ¿ Jul 19, 2012 07:13 |
Longinus00 posted:There is no reason to recommend so much ram for home usage.
luigionlsd posted:Honestly it's just 3x2TB drives now, one more down the line. *snip* How is it for expandability, for if I have 3 drives now, and 4 in the future?
As for later expansion of the pool, you can add more vdevs as needed (with any type of parity you want), or you can replace and resilver (rebuild the parity/data integrity of the entire vdev) each drive in a vdev, expanding the array once all drives have been replaced by bigger ones (autoexpand was added in v16, I believe, so if you're running v15 you need to export and re-import the pool manually). Look, I know it sounds scary - however it does provide robust data integrity compared to the way WHS does it (also note that WHS is abandoned as a project: there won't ever be another version, and support + software updates will stop too in the not too distant future). If you really want to stick to Windows, buy Windows 8, as it includes ReFS in all versions. That way you get zfs-style data integrity, but on the Windows platform. Or hope that Microsoft releases a new WHS with ReFS, as some of us do, no matter how unlikely it seems.
BlankSystemDaemon fucked around with this message at 12:27 on Jul 27, 2012 |
|
# ¿ Jul 27, 2012 12:09 |
Zorak of Michigan posted:ReFS is only on Windows Server releases, not the consumer Windows releases.
Longinus00 posted:No it's not wrong, go ahead and benchmark the effects of less than 1GB/TB of memory yourself. Besides, the person asking is going to install it in an old core 2 duo system so probably won't be able to use cheap rear end DDR3 memory.
|
|
# ¿ Jul 27, 2012 18:05 |
error1 posted:(or working backups)
|
|
# ¿ Sep 9, 2012 14:19 |
Misogynist posted:ZFS loves RAM. You can get away with any low-end CPU as long as you're not doing inline dedupe/compression, but you need to max that out at 8 GB of RAM if you want any kind of decent performance.
But yeah, you really don't want to do compression or dedup on the 1.5GHz dual-core CPU. For everything else, though, it's plenty powerful (even smbd, which only uses one core, has never been CPU-limited in any case I've seen; it always comes down to either disk i/o or faulty network configuration/driver software).
BlankSystemDaemon fucked around with this message at 20:40 on Oct 4, 2012 |
|
# ¿ Oct 4, 2012 20:31 |
spaceship posted:Joined the Synology club yesterday.
I wish zfs would add one feature that it's missing, namely adding parity disks to already-existing pools (it's a feature that's listed on some wish-list, and it's supposedly coming - eventually).
MasterColin posted:Just picked this up for a start of an UnRaid server. Planning on running UnRaid, Sab/SB/PS3MediaServer. I'm also going to upgrade my router to a commercial grade router to make sure I can get gigabit speeds.
BlankSystemDaemon fucked around with this message at 08:38 on Oct 22, 2012 |
|
# ¿ Oct 22, 2012 08:33 |
Steakandchips posted:Can take up to 8GB ram.
Speaking of the N40L, I recommend getting the out-of-band BMC (a vKVM solution so you can run the server headless; it was made for the N36L but should work on the N40L) along with a one- or two-port Intel NIC, as a zpool with 4+ disks in raidz1 can easily do 200+MBps transfer speeds if the drives are fast enough (mine run at 230/180MBps with LAGG'd NICs). If I remember correctly, the N40L doesn't have the same onboard NIC (it uses the Broadcom NC107i whereas the N36L uses the Broadcom NetXtreme BCM5723), so you may not run into the bge driver issue in FreeBSD that I've mentioned several times in this thread. Also note that if you want to use the eSATA and ODD SATA ports for a 5th and 6th drive, you need to hack (or use an already-hacked) BIOS, as the ODD port only runs at SATA150 speeds and the eSATA port doesn't support AHCI out-of-the-box (with the hacked BIOS, both support AHCI and SATA300). Some people do some pretty crazy HP Microserver mods. Apparently I completely missed this when it was news, but HP intends to take the microserver market seriously, which means we're hopefully looking at a refresh within
BlankSystemDaemon fucked around with this message at 13:54 on Oct 28, 2012 |
|
# ¿ Oct 28, 2012 13:12 |
WeaselWeaz posted:How low power is one of these? I've debated buying one or building something similar to run FreeNAS (or even do double duty as a second HTPC) but it seems like it still uses more power than a Synology.
With a HDHomeRun, FreeBSD (for zfs-based filesharing over SMB) and MythTV you can actually have a complete HTPC/fileserver. You'll also need a graphics card; there's an EVGA card that fits and has a fan. Be aware that not all cards fit, due to fan/heatsink configuration.
BlankSystemDaemon fucked around with this message at 14:44 on Oct 28, 2012 |
|
# ¿ Oct 28, 2012 14:32 |
It can do it, but it's not recommended. The thing about spindown and head-parking when idle (and spin-up whenever anything at all happens, like scheduled scrubs or S.M.A.R.T checks) is that it causes more wear and tear, which directly impacts the drives' lifetime. Additionally, you'll find that if you do get a NAS, you start using it more and more.
|
|
# ¿ Oct 28, 2012 20:46 |
And wdidle doesn't work on newer Green drives.
|
|
# ¿ Oct 29, 2012 08:56 |
I finally verified the production date of my Samsung HD204UI F4EG drives (some of which are affected by a firmware issue if they were made before December 2010), and have set up S.M.A.R.T tests like this: a short test on the 1st, 8th, 15th and 22nd, and a long test on the 28th, every month of the year, regardless of the day of the week. Is this too much? I did not know disks listed their production date on them.
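On a plain FreeBSD smartd setup, that schedule would look something like this (the device name is a placeholder, and the 02:00 start hour is my assumption; the regex fields are test-type/month/day-of-month/day-of-week/hour):

```
# /etc/smartd.conf - short self-test on the 1st, 8th, 15th and 22nd,
# long self-test on the 28th, all at 02:00
/dev/ada0 -a -s (S/../(01|08|15|22)/./02|L/../28/./02)
```

The -a enables all the usual health monitoring alongside the scheduled self-tests.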
|
|
# ¿ Nov 3, 2012 18:18 |
fletcher posted:I didn't even need to mess with the firmware flash for my 5th drive that sits in the optical bay.
Run some S.M.A.R.T tests on your drives; that's about the only good recommendation I can come up with if you want to figure out what happened. Also be aware that replacing the drives with 3TB ones won't grow the pool until you've replaced all five drives.
BlankSystemDaemon fucked around with this message at 08:31 on Nov 5, 2012 |
|
# ¿ Nov 5, 2012 08:26 |
ZFS Evil Tuning Guide posted:Tuning is often evil and should rarely be done.
More often than not, lousy performance is due to misconfiguration. If you suspect you have one, don't be afraid to save your config and completely wipe your FreeNAS installation (i.e. leave the pool alone/export it before making any changes, then just reinstall FreeNAS from scratch). It's an appliance system; it's kinda meant to be run that way.
BlankSystemDaemon fucked around with this message at 20:29 on Nov 12, 2012 |
|
# ¿ Nov 12, 2012 20:21 |
Yes, it accepts 3TB disks just fine. Incidentally, here is a list of known-good drives.
|
|
# ¿ Nov 14, 2012 16:40 |
Ninja Rope posted:Has anyone had luck getting > 8 gigs of ram into an N40L?
Here are some that are verified known-good:
ECC: Super Talent DDR3-1333 8GB/512Mx8 ECC Samsung Chip Server Memory - W1333EB8GS
Non-ECC: Corsair 8GB DDR3 1333MHz XMS3 CMX8GX3M1A1333C9, GSkill Ares F3-1333C9D-16GAO, Patriot Gamer2 PGD316G1333ELK
|
|
# ¿ Nov 15, 2012 10:36 |