|
IT Guy posted:Are hard drives fine to be stored for LONG periods of time without power/use?
|
# ? Apr 25, 2012 21:44 |
|
|
necrobobsledder posted:The specific API for decoding video would be DXVA2, which is all that was supported on Linux for a long time. I run Linux on my Microserver and it also does mdraid so CPU is at a premium on this setup. So if I get a GT520 for my N40L, can I run XBMC under linux with HDMI support/audio over HDMI, or do most people stick to Windows?
|
# ? Apr 25, 2012 22:43 |
|
IT Guy posted:Are hard drives fine to be stored for LONG periods of time without power/use? This is called a cold spare. A hot spare is powered on and connected to the controller but not attached to the array, and the array is configured to rebuild onto that hot spare in case of a drive failure.
|
# ? Apr 26, 2012 03:48 |
|
Residency Evil posted:So if I get a GT520 for my N40L, can I run XBMC under linux with HDMI support/audio over HDMI, or do most people stick to Windows?
|
# ? Apr 26, 2012 13:34 |
|
Does anyone see any problems with the N40L handling about 10 servers rsyncing to it at the same time? It's going to be very little data transferred to it (1GB total), I'm mainly considering the CPU. Will the CPU be able to handle rsyncing all 10 servers at once? I'm going to be using the rsync daemon, not rsync over ssh so it won't have to do any encrypting/decrypting.
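For illustration, a minimal daemon-mode setup for this kind of many-clients-one-target arrangement might look like the following. The module name, paths, and subnet are hypothetical, not from any actual config in this thread:

```shell
# Hypothetical rsyncd.conf for the backup target; names and paths are
# invented for illustration. No ssh means no encryption overhead, so
# the per-client CPU cost is mostly checksumming and disk I/O.
cat > /tmp/rsyncd.conf <<'EOF'
[backups]
    path = /srv/backups
    read only = false
    uid = backup
    gid = backup
    hosts allow = 10.0.0.0/24
EOF

# Each remote server would then push roughly like this:
#   rsync -a --password-file=/etc/rsync.pass /data/ rsync://backuphost/backups/server01/
grep -c '=' /tmp/rsyncd.conf
```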
|
# ? Apr 27, 2012 17:02 |
|
IT Guy posted:Does anyone see any problems with the N40L handling about 10 servers rsyncing to it at the same time? It's going to be very little data transferred to it (1GB total), I'm mainly considering the CPU. Will the CPU be able to handle rsyncing all 10 servers at once? I'm going to be using the rsync daemon, not rsync over ssh so it won't have to do any encrypting/decrypting. I doubt you'll have any issues related to the CPU as long as you've got 2-4GB of RAM. Your bottleneck will be your network throughput.
|
# ? Apr 27, 2012 19:27 |
|
DarkLotus posted:I doubt you'll have any issues related to the CPU as long as you've got 2-4GB of RAM. Your bottleneck will be your network throughput. Right on. It has 8GB RAM so that won't be an issue. The rsync is from 10 remote servers rsyncing unimportant data over the WAN/VPN over 6mbit/800kbit DSL connections. I definitely won't be hitting any network throughput limits.
|
# ? Apr 27, 2012 19:32 |
|
I decided I needed to drag my archaic backup system into something more modern as one of my drives is dying, so I ordered a N40L before the cashback offer finished. Previously I had a matching-size drive for each of my 'live' drives which I'd periodically plug in and do a manual diff (not as bad as it sounds!) and copy, then stash under my bed again. I'm reasonably techy but not all that experienced with linux, so it's a whole new world of things to learn and I was hoping for some sanity checking of what I've come up with. I'm basically planning:
It's ZFS (the most critical part) that I have the main worries about due to lack of experience. I tried it in a VM and it seemed far too easy - to the point I assumed I was missing something. It basically came down to: code:
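A sketch of the kind of commands involved (pool and device names invented; not necessarily the exact setup described above):

```shell
# Illustrative only: a raidz pool from three whole disks, one command.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
zfs create tank/backups        # filesystems within the pool are cheap
zpool status tank              # check vdev layout and health
zpool scrub tank               # periodic integrity check, worth putting in cron
```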
I'll also be installing Apache and planning to write a mobile-friendly web interface for browsing the server/viewing stats/etc. I've seen Webmin, are there any other web interfaces people use?
|
# ? Apr 29, 2012 22:04 |
|
Sounds fine, but I'd push you towards FreeNAS on a 2-4gb USB stick with 8gb of RAM rather than ZFS on linux on the 250gb drive.
|
# ? Apr 29, 2012 22:46 |
|
DNova posted:Sounds fine, but I'd push you towards FreeNAS on a 2-4gb USB stick with 8gb of RAM rather than ZFS on linux on the 250gb drive. Thanks for the backup. The only reason I'd gone this route was that I'd heard it was hard to extend FreeNAS to cover other use cases that might come up, whereas with a standard linux install I can just SSH in and install whatever. I was thinking of trying out Ubuntu from a flash drive rather than the supplied disk, but then saw it mentioned that the spare space on this drive can be used for in-progress downloads, allowing the main array to be spun down except when in use. I really need to just start playing around with it, but it's going to take a while of shuffling data around until I can free up all the disks I intend to use, and I want to get the ZFS side straight before I do this so the data I copy back to it is then safe; I don't want it sitting around on the 'temporary' disks for too long without backup.
|
# ? Apr 30, 2012 00:38 |
|
Froist posted:Thanks for the backup. The only reason I'd gone this route was that I'd heard it was hard to extend FreeNAS to cover other use cases that might come up, whereas with a standard linux install I can just SSH in and install whatever. I was thinking of trying out Ubuntu from a flash drive rather than the supplied disk, but then saw mentioned the idea that the spare space on this drive can be used for in-progress downloads, allowing the main array to be spun down except when in use. The ZFS for linux implementation is still a little weak compared to the FreeBSD (and obviously Solaris) versions. Going with a straight FreeBSD install instead of the FreeNAS distribution will get you the best of both worlds. Multipurpose OS, solid ZFS implementation. I don't know if anyone else here is running ZFS on linux as their production setup.
|
# ? Apr 30, 2012 01:08 |
|
My server has 2 roles: file server for 1-2 users, and torrent client. I've been using win7 / utorrent to do that, but the boot drive finally gave out (an ancient 30GB IDE drive) and I want to try linux. Got Ubuntu 12.04 installed, installed gnome3 as the shell, enabled and am using remote access, but I'm not 100% sure which torrent client to use. The way I had utorrent configured, it would read through several rss feeds and download the latest episodes once they aired. The feeds cover more than just those shows, however, so it had to filter the results. Then it would assign labels for each show and when complete would move them into a label-specific folder. At which point a folder sync program would one-way sync to my laptop for viewing. Very convenient: the latest episodes would magically appear on my laptop and could be deleted once viewed while remaining on the server for seeding and archive purposes. So those are the features I want from the linux client. utorrent has a linux version, but it's webUI-only, and last I tried the webUI it couldn't do everything I wanted. Has it improved or is there a better option out there? Soft-RAID (5x3TB R6) and BtrFS good options? ilkhan fucked around with this message at 02:59 on Apr 30, 2012 |
# ? Apr 30, 2012 02:55 |
ilkhan posted:So those are the features I want from the linux client. utorrent has a linux version, but its webUI only, and last I'd tried webUI it couldn't do everything I wanted. Has it improved or is there a better option out there? Check out rTorrent, you can do all kinds of things with it. I use a webui called rtgui, there are several others to choose from.
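A sketch of the relevant ~/.rtorrent.rc pieces (paths invented; rTorrent has no built-in RSS support, so a separate feed-watcher such as flexget would drop .torrent files into the watch directory):

```
directory = /data/torrents/incomplete
session = /data/torrents/session
# pick up anything a feed-watcher script drops here:
schedule = watch_directory,5,5,load_start=/data/torrents/watch/*.torrent
# move finished downloads out; per-show sorting would need per-show
# watch directories or a smarter event handler
system.method.set_key = event.download.finished,move_done,"execute=mv,-u,$d.get_base_path=,/data/torrents/complete/"
```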
|
|
# ? Apr 30, 2012 03:02 |
|
Froist posted:I decided I needed to drag my archaic backup system into something more modern as one of my drives is dying, so I ordered a N40L before the cashback offer finished. Previously I had a matching-size drive for each of my 'live' drives which I'd periodically plug in and do a manual diff (not as bad as it sounds!) and copy, then stash under my bed again. How much ram you need depends on what you'll actually be running. A pure NAS could get away with very little ram depending on usage, but the more services you run the more ram you'll need. Nothing you've said indicates you need more than 2GB currently. The nice thing about ram is that upgrading is very painless, so you can just wait until you're actually running out of ram and hitting swap to upgrade. Userland filesystems are really slow. You'll have to ask someone else about how good the native implementation is, but it'll likely be more involved and buggier than installing something from the main repository. As far as system monitoring goes, there are already solutions available like munin or cacti which you can just write plugins for instead of implementing everything from scratch.
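Munin plugins in particular are just scripts speaking a tiny stdout protocol, so a sketch like this (graph and field names arbitrary) is the entire extension surface:

```shell
#!/bin/sh
# Minimal munin-style plugin sketch: munin runs the script once with
# "config" to learn the graph layout, then with no argument each poll
# interval to fetch the current values.
case "$1" in
config)
    echo "graph_title Root filesystem usage"
    echo "graph_vlabel percent"
    echo "used.label used"
    ;;
*)
    # df's 5th column is capacity like "42%"; strip the % sign
    echo "used.value $(df -P / | awk 'NR==2 {sub(/%/,"",$5); print $5}')"
    ;;
esac
```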
|
# ? Apr 30, 2012 03:56 |
|
Spotted in the FreeNAS console:code:
code:
RMA ahoy.
|
# ? Apr 30, 2012 21:03 |
|
AnandTech reviewed the SilverStone GD07 media server case, a full-ATX case that has a ton of ventilation, stays nice and quiet, and has a lot of storage options. It's got 5 3.5" drive bays, 2 2.5" bays, and two sets of two 5.25" external bays (into which you could put 3-in-2 hot-swap backplanes if you want). Neat.
|
# ? Apr 30, 2012 23:34 |
|
Factory Factory posted:AnandTech reviewed the SilverStone GD07 media server case, a full-ATX case that has a ton of ventilation, stays nice and quiet, and has a lot of storage options. Hm, so what niche is this exactly? A HTPC that's also its own media server in an attractive enough case/quiet enough to leave in your living room? Sexy case, just curious who the userbase would be.
|
# ? Apr 30, 2012 23:44 |
|
Pretty much, yeah.
|
# ? May 1, 2012 00:06 |
|
movax posted:Hm, so what niche is this exactly? A HTPC that's also its own media server in an attractive enough case/quiet enough to leave in your living room? Or it would be if I didn't have an MD3000i.
|
# ? May 1, 2012 00:31 |
|
Froist posted:
2GB of ram is not enough for a functional ZFS system. Your performance will be bad, and if you're unlucky, you may experience kernel panics.
|
# ? May 1, 2012 19:54 |
|
Okay, so Amazon finally has the Hitachi 2tb retail packages on sale for $129.99 each (not ideal I know, but I picked a horrible time to build my server). Now these are the 5 platter design 7k2000 models, and I will be putting them in the HP Microserver. I ended up buying 4 of these suckers, as I learned the Samsung 2tb drives I ordered previously had their warranties cut from 3 years to 1 year, so I returned them (it appears the new Samsung drives are manufactured by Seagate in China, so no telling how reliable the new drives will be vs the old ones). The Hitachis are a retail package and have a 3-year warranty.

So how dumb was I to buy these over the WD Greens for ZFS use in FreeNAS? I understand these drives are supposed to be okay for raid use, but given the platter design, did I make the right choice for its intended purpose (media server for a house with 5 users at any given time)?

Seagate also sells shockingly cheap 2tb and 3tb drives (compared to WD / Hitachi), but the general online consensus right now seems to be to stay away from Seagate at all costs, given the now-lovely warranties on the drives and the recently spotty reliability record. I understand Hitachi makes a 5k3000 model (5400 rpm), but these are for whatever reason more expensive than the 7200 rpm models. If it makes a difference, I will be mixing the 4x Hitachi 7k2000 drives with a Samsung F4 and another Hitachi 5k3000 for the ZFS raid array.
|
# ? May 2, 2012 02:13 |
|
Bonobos posted:Okay, so Amazon finally has the Hitachi 2tb retail packages on sale for $129.99 each (not ideal I know, but I picked a horrible time to build my server). Now these are the 5 platter design 7k2000 models, and I will be putting them in the HP Microserver. I ended up buying 4 of these suckers, as I learned the Samsung 2tb drives I ordered previously had their warranties cut from 3 years to 1 year, so I returned them (it appears the new Samsung drives are manufactured by Seagate in China, so no telling how reliable the new drives will be vs the old ones). The Hitachis are a retail package and have a 3-year warranty. You made a fine choice. I have 4 2TB 7K3000 drives in my N40L RAID-Z1. They're awesome drives. You shouldn't have buyer's remorse.
|
# ? May 2, 2012 02:19 |
|
titaniumone posted:2GB of ram is not enough for a functional ZFS system. Your performance will be bad, and if you're unlucky, you may experience kernel panics. Isn't this only true if you turn on the RAM eating features of ZFS (e.g. dedup and compression)?
|
# ? May 2, 2012 02:38 |
|
Fangs404 posted:You made a fine choice. I have 4 2TB 7K3000 drives in my N40L RAID-Z1. They're awesome drives. You shouldn't have buyer's remorse. Thanks for that, the last thing I need is picking out lovely drives. Is there a big difference between the 7k2000 and 7k3000 drives, other than the 'SATA III!!!' sticker and the 7k3000s running slightly faster?
|
# ? May 2, 2012 02:52 |
|
Oh, man. You don't have to use fuse! Check this out. Especially since you use Ubuntu, it even has its own little PPA setup. Performance was a lot better for me through this than fuse.
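If memory serves, the native port installed along these lines at the time (PPA name from memory; verify before trusting it):

```shell
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs    # builds the kernel modules via dkms
sudo modprobe zfs && zpool status  # confirm the module loaded
```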
|
# ? May 2, 2012 03:17 |
|
Longinus00 posted:Isn't this only true if you turn on the RAM eating features of ZFS (e.g. dedup and compression)? I believe the rule of thumb is 1GB of RAM for every 1TB of storage. If you have 4x2TB, you'll want 8GB of RAM. The whole point is kinda moot anyway when you look at how insanely low RAM prices are nowadays. Just max out your motherboard and call it a day. Bonobos posted:Thanks for that, the last thing I need is picking out lovely drives. The 7K3000 has twice the buffer (64mb vs 32mb). Other than that, SATA III, and the extra speed, I think they're almost identical.
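Applying that rule of thumb is trivial arithmetic, e.g. for a hypothetical 4x2TB pool:

```shell
# 1GB of RAM per 1TB of raw storage, with 8GB as a sensible ZFS floor.
drives=4
tb_per_drive=2
total_tb=$((drives * tb_per_drive))
ram_gb=$(( total_tb > 8 ? total_tb : 8 ))
echo "${total_tb}TB raw -> suggest ${ram_gb}GB RAM"
```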
|
# ? May 2, 2012 03:17 |
|
Fangs404 posted:I believe the rule of thumb is 1GB of RAM for every 1TB of storage. If you have 4x2TB, you'll want 8GB of RAM. The whole point is kinda moot anyway when you look at how insanely low RAM prices are nowadays. Just max out your motherboard and call it a day. There is no way that is a true trend. Sun ZFS storage appliances (which have every single memory chewing option enabled so they can get the best benchmark numbers possible) sure as hell don't follow this trend and you'd think they'd know how to set up their own filesystem.
|
# ? May 2, 2012 03:25 |
|
If you have access to the arc stats and whatnot, that can help a ton too. ARC will eat some motherfuckin' rams son.
|
# ? May 2, 2012 03:27 |
|
Longinus00 posted:There is no way that is a true trend. Sun ZFS storage appliances (which have every single memory chewing option enabled so they can get the best benchmark numbers possible) sure as hell don't follow this trend and you'd think they'd know how to set up their own filesystem. http://doc.freenas.org/index.php/Hardware_Requirements quote:The best way to get the most out of your FreeNAS™ system is to install as much RAM as possible. If your RAM is limited, consider using UFS until you can afford better hardware. ZFS typically requires a minimum of 6 GB of RAM in order to provide good performance; in practical terms (what you can actually install), this means that the minimum is really 8 GB. The more RAM, the better the performance, and the Forums provide anecdotal evidence from users on how much performance is gained by adding more RAM. For systems with large disk capacity (greater than 6 TB), a general rule of thumb is 1GB of RAM for every 1TB of storage.
|
# ? May 2, 2012 03:30 |
|
Fangs404 posted:http://doc.freenas.org/index.php/Hardware_Requirements I guess they better tell Oracle/Sun they're deploying their own technology incorrectly.
|
# ? May 2, 2012 03:46 |
|
Longinus00 posted:I guess they better tell Oracle/Sun they're deploying their own technology incorrectly. Considering the general consensus among almost everyone I work with / communicate with, this wouldn't be a terrible surprise. edit: With regard to Oracle.
|
# ? May 2, 2012 05:27 |
|
Longinus00 posted:I guess they better tell Oracle/Sun they're deploying their own technology incorrectly. I have no idea how Oracle deploys its technology, but it's certainly not unheard of for enterprise servers to have 128gb+ RAM. A server with 128TB hard drive space and 128GB of RAM isn't far-fetched at all. This guy supports up to 768GB of RAM, so you can see that it's certainly possible to get drat near 1PB of storage on a single server using this 1TB storage:1GB memory ratio. Regarding this thread (consumer NAS solutions), my 4x2TB N40L has 8gb of RAM and runs FreeNAS (RAID-Z1). I only have about 500mb of free RAM at any given time. ZFS really does eat up a shitload of RAM. Again, RAM is really so cheap that there's no reason not to max out. Fangs404 fucked around with this message at 05:43 on May 2, 2012 |
# ? May 2, 2012 05:37 |
|
Fangs404 posted:I have no idea how Oracle deploys its technology, but it's certainly not unheard of for enterprise servers to have 128gb+ RAM. A server with 128TB hard drive space and 128GB of RAM isn't far-fetched at all. This guy supports up to 768GB of RAM, so you can see that it's certainly possible to get drat near 1PB of storage on a single server using this 1TB storage:1GB memory ratio. Yeah, you can get insane amounts of RAM, it just costs insane amounts of money. Yields are so low on the high-density ICs/modules, but if for some insane reason you have to have that much RAM in one machine, you can get it by throwing money at Dell/HP/etc.
|
# ? May 2, 2012 05:52 |
|
Fangs404 posted:I have no idea how Oracle deploys its technology, but it's certainly not unheard of for enterprise servers to have 128gb+ RAM. A server with 128TB hard drive space and 128GB of RAM isn't far-fetched at all. This guy supports up to 768GB of RAM, so you can see that it's certainly possible to get drat near 1PB of storage on a single server using this 1TB storage:1GB memory ratio. Possible vs. advisable are different things; I also made no comment on how far-fetched running close to a terabyte of memory in a computer was. How much of that memory usage is tied up by the ARC? ZFS handles its page cache reporting differently from other filesystems. I could argue that all of my computers have very little if any free memory if I always counted the page cache. Longinus00 fucked around with this message at 06:22 on May 2, 2012 |
# ? May 2, 2012 05:52 |
|
Longinus00 posted:Possible vs. advisable are different things, I also made no comment on how far fetched running close to a terrabyte of memory in a computer was. How much of that memory usage is tied up by the ARC, ZFS handles its page cache reporting differently from other file systems? I could argue that all of my computers have very little if any free memory if I always counted the page cache. Zero swap usage. This is all physical memory usage.
|
# ? May 2, 2012 06:21 |
|
Did you miss my comment about ARC and page caches? I could post screenshots that show near 100% memory "usage" on non ZFS systems if you like.
|
# ? May 2, 2012 06:26 |
|
MrMoo posted:Oof, low-end NAS vendors now moving onto 10 GigE, ~US$4,300. I ask from time to time about this. I'm looking at picking an 859 or 879 as an iSCSI device to store backups of VMs, but I haven't actually talked to anyone who has used one with ESXi. My plan is to run the actual machines on DAS on the servers (SAS drives on Dell R610s) and figure out a good backup system so I can dump a nightly copy of the critical servers to the NAS and that way be able to recover pretty quickly if need be. I don't really know how much better the x79 is than the x59 in terms of features and performance, but that low end virtualization thing is a pain to navigate.
|
# ? May 2, 2012 06:35 |
|
Longinus00 posted:Did you miss my comment about ARC and page caches? I could post screenshots that show near 100% memory "usage" on non ZFS systems if you like. You've got to remember that most FreeNAS systems (mine included) boot off of a thumb drive where there is no hard drive swap partition. All caching must be done in memory, so it makes perfect sense why the official recommendation is to load up the servers with tons of RAM; you've got to compensate for the lack of swap. That said, I'm sure enterprise-grade ZFS machines run off of disk-installed FreeBSD or Solaris, which will of course have a disk that can be used for paging. Still, though, even RAIDed SSDs are orders of magnitude slower than memory, and enterprise-grade systems generally require much greater speeds. So, even in that case, maxing out the memory to reduce paging will be advantageous. I still don't think the 1TB:1GB storage to memory ratio recommendation is far-fetched at all.
|
# ? May 2, 2012 07:03 |
|
Fangs404 posted:You've got to remember that most FreeNAS systems (mine included) boot off of a thumb drive where there is no hard drive swap partition. All caching must be done in memory, so it makes perfect sense why the official recommendation is to load up the servers with tons of RAM; you've got to compensate for the lack of swap. When's the last time you've seen a page cache that caches to disk (ignoring swap)? You can have tiered ARC caches on SSDs but that's different because it's explicit. If you installed freenas to disk you shouldn't see any difference in memory usage. What I'm trying to get through to you is that the ARC is going to slurp up all available memory just like any other page cache except the kernel is going to report it as being "used" even though ZFS will still readily release it when under memory pressure. You're misinterpreting the zfs ARC memory usage in the same way people naively think linux/osx/windows is "wasting" all their memory when they see 98% used. This is why I've been asking what your ARC size is to get a real feel for how much memory ZFS is actually using.
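Checking the ARC directly, rather than trusting the "used" memory figure, looks roughly like this (paths per the usual ZFS ports; verify on your own platform):

```shell
# FreeBSD / FreeNAS:
sysctl kstat.zfs.misc.arcstats.size
# ZFS on Linux exposes the same counters under /proc:
awk '/^size/ {printf "%d MB\n", $3/1048576}' /proc/spl/kstat/zfs/arcstats
# Solaris-derived systems ship arcstat.pl for rolling hit/miss stats.
```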
|
# ? May 2, 2012 07:23 |
|
|
Longinus00 posted:When's the last time you've seen a page cache that caches to disk (ignoring swap)? You can have tiered ARC caches on SSDs but that's different because it's explicit. If you installed freenas to disk you shouldn't see any difference in memory usage. Oh, I totally misunderstood what you're saying. arcstat.py is showing my ARC usage around 6gb, so I see what you mean. So while you definitely could run FreeNAS with less RAM, I imagine it's not recommended simply because the memory available for ARC will be much smaller, and performance will suffer as a result. The FreeNAS recommendation is probably to ensure that there is a "reasonable" amount of RAM available for ARC to improve performance. Furthermore, I'd imagine that in your example referring to Oracle servers, there's really a limit to the size ARC needs to be. I imagine that beyond a certain point, the performance improvements become negligible (diminishing returns). On smaller consumer-grade systems, 8gb/8tb probably makes perfect sense, but enterprise systems running 128tb of storage probably need far less ram for ARC; perhaps something like 24gb or 48gb would suffice. Fangs404 fucked around with this message at 07:35 on May 2, 2012 |
# ? May 2, 2012 07:30 |