thideras
Oct 27, 2010

Fuck you, I'm a tree.
Fun Shoe

IT Guy posted:

Are hard drives fine to be stored for LONG periods of time without power/use?

As in, if I purchase 5 drives but only want to use 4 in a RAID 5 config and keep the 5th drive as a "hot spare" that just sits on the shelf. Basically having a spare drive ready in case the RAID goes into degraded mode without actually having it in the device set up as a hot spare. Say the array lasted for 2 years but then all of a sudden had a drive failure. Would that drive that's been sitting on the shelf unused for 2 years be just as good as when you bought it?
I have no idea if one could fail after a certain amount of time without use; the only issue I can think of is lubrication for the arm/platters. I would do a full platter scan every 6 months to make sure the drive is still functional. It would suck to have a drive drop out of the array only to find your spare drive is also dead. That being said, I've fired up drives that were many years old without issue.
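If you want to script that check, something like this should do it (the device name is just an example for wherever the spare shows up):
code:
# start a long (full-surface) SMART self-test; takes hours, non-destructive
smartctl -t long /dev/sdX
# once it finishes, read back the result
smartctl -l selftest /dev/sdX
A long self-test reads every sector, so it should catch exactly the kind of shelf-rot you're worried about.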

Residency Evil
Jul 28, 2003

4/5 godo... Schumi

necrobobsledder posted:

The specific API for decoding video would be DXVA2, which is all that was supported on Linux for a long time. I run Linux on my Microserver and it also does mdraid so CPU is at a premium on this setup.

I grabbed an nVidia GT520 that came with a half-height bracket for lower idle power use and because there's hardware support for TrueHD audio and so forth. I figure it'll do swell as a back-up video card down the line too if anything better than the Microserver comes out (I kinda doubt I could fit 5 drives + discrete GPU + PSU + non-embedded CPU in that size / power envelope for years, so I suspect this will stand up quite well). For others that don't care or just want a cheaper card, there's the AMD 5450 or anything else lower-end that'll be supported under XBMC on any OS that provides DXVA to it.
It's a combination of everything in the stack - your GPU needs to support certain levels of the hardware decode API, your OS needs to have drivers that enable the API, and your video playback software must be able to leverage the API.

HTPCs are kind of fun to spec out in a sick way because you're always trying to miniaturize everything (bottom-level hardware specs) while maximizing the power to play as much as possible for your needs (top-down requirements).

So if I get a GT520 for my N40L, can I run XBMC under linux with HDMI support/audio over HDMI, or do most people stick to Windows?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

IT Guy posted:

Are hard drives fine to be stored for LONG periods of time without power/use?

As in, if I purchase 5 drives but only want to use 4 in a RAID 5 config and keep the 5th drive as a "hot spare" that just sits on the shelf. Basically having a spare drive ready in case the RAID goes into degraded mode without actually having it in the device set up as a hot spare. Say the array lasted for 2 years but then all of a sudden had a drive failure. Would that drive that's been sitting on the shelf unused for 2 years be just as good as when you bought it?

This is actually called a cold spare. A hot spare is powered on and connected to the controller but not attached to the array, and the array is configured to rebuild onto that hot spare in case of a drive failure.
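With mdadm the difference is just whether the disk is already attached to the array as a spare; roughly like this, with made-up device names:
code:
# 4-disk RAID5 plus a hot spare that mdadm will rebuild onto automatically
mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# a cold spare is that same disk sitting on a shelf; after a failure you add it:
mdadm /dev/md0 --add /dev/sdf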

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Residency Evil posted:

So if I get a GT520 for my N40L, can I run XBMC under linux with HDMI support/audio over HDMI, or do most people stick to Windows?
That's precisely what I'm doing now - Linux, HDMI out to my receiver, everything's gravy. XBMC Eden's Live distribution works out of the box with the setup on the N36L (materially the same as the N40L).

IT Guy
Jan 12, 2010

You people drink like you don't want to live!
Does anyone see any problems with the N40L handling about 10 servers rsyncing to it at the same time? It's going to be very little data transferred to it (1GB total); I'm mainly concerned about the CPU. Will the CPU be able to handle rsyncing all 10 servers at once? I'm going to be using the rsync daemon, not rsync over ssh, so it won't have to do any encrypting/decrypting.
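For reference, the daemon setup I have in mind is something like this (module name and paths are placeholders):
code:
# /etc/rsyncd.conf on the N40L
uid = nobody
gid = nogroup
use chroot = yes
max connections = 10

[backups]
    path = /storage/backups
    read only = false
Then start it with rsync --daemon and each server pushes with rsync -av /data/ n40l::backups/servername/.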

DarkLotus
Sep 30, 2001

Lithium Hosting
Personal, Reseller & VPS Hosting
30-day no risk Free Trial &
90-days Money Back Guarantee!

IT Guy posted:

Does anyone see any problems with the N40L handling about 10 servers rsyncing to it at the same time? It's going to be very little data transferred to it (1GB total); I'm mainly concerned about the CPU. Will the CPU be able to handle rsyncing all 10 servers at once? I'm going to be using the rsync daemon, not rsync over ssh, so it won't have to do any encrypting/decrypting.

I doubt you'll have any issues related to the CPU as long as you've got 2-4GB of RAM. Your bottleneck will be your network throughput.

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

DarkLotus posted:

I doubt you'll have any issues related to the CPU as long as you've got 2-4GB of RAM. Your bottleneck will be your network throughput.

Right on. It has 8GB RAM so that won't be an issue. The rsync is from 10 remote servers pushing unimportant data over the WAN/VPN on 6mbit/800kbit DSL connections, so I definitely won't be hitting any network throughput limits.

Froist
Jun 6, 2004

I decided I needed to drag my archaic backup system into something more modern as one of my drives is dying, so I ordered a N40L before the cashback offer finished. Previously I had a matching-size drive for each of my 'live' drives which I'd periodically plug in and do a manual diff (not as bad as it sounds!) and copy, then stash under my bed again.

I'm reasonably techy but not all that experienced with linux, so it's a whole new world of things to learn and I was hoping for some sanity checking of what I've come up with. I'm basically planning:
  • Keep the 250gb drive for the OS but move it up to the top bay
  • Ubuntu headless server/SSH admin
  • ZFS over 4x2tb in RAIDZ, should give me 6tb useable space and protection should a drive fail
  • Samba sharing to Macs + PCs around the house
  • Possibly Time machine backups - though it seems I would need AFP for this
  • DLNA server, maybe with CouchPotato etc down the line
  • For the moment just sticking with 2GB ram - will this actually become a limiting factor for a pure-server use case?

It's ZFS (the most critical part) that I have the main worries about due to lack of experience. I tried it in a VM and it seemed far too easy - to the point I assumed I was missing something. It basically came down to:
code:
sudo apt-get install zfs-fuse
sudo zpool create -m /storage media raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
I understand I need to set up a schedule to scrub it, but is there anything else I may be missing? Also, for the real thing I'll probably try to use the kernel version of ZFS rather than FUSE; it sounds like the performance difference is worth the extra effort to set it up.
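For the scrub I was thinking a cron entry along these lines (schedule and binary path are guesses on my part, adjust to taste):
code:
# /etc/cron.d/zpool-scrub - scrub the pool at 3am on the 1st of each month
0 3 1 * * root /sbin/zpool scrub media
Then check the result afterwards with zpool status media.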

I'll also be installing Apache and planning to write a mobile-friendly web interface for browsing the server/viewing stats/etc. I've seen Webmin; are there any other web interfaces people use?

sleepy gary
Jan 11, 2006

Sounds fine, but I'd push you towards FreeNAS on a 2-4gb USB stick with 8gb of RAM rather than ZFS on linux on the 250gb drive.
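The install side is trivial, for what it's worth; you just write the image straight onto the stick (image filename and device are examples, point it at your actual USB device):
code:
dd if=FreeNAS-8.x-RELEASE-x64.img of=/dev/sdX bs=64k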

Froist
Jun 6, 2004

DNova posted:

Sounds fine, but I'd push you towards FreeNAS on a 2-4gb USB stick with 8gb of RAM rather than ZFS on linux on the 250gb drive.

Thanks for the backup. The only reason I'd gone this route was that I'd heard it was hard to extend FreeNAS to cover other use cases that might come up, whereas with a standard linux install I can just SSH in and install whatever. I was thinking of trying out Ubuntu from a flash drive rather than the supplied disk, but then saw the idea mentioned that the spare space on this drive can be used for in-progress downloads, allowing the main array to be spun down except when in use.

I really need to just start playing around with it but it's going to take a while of shuffling data around until I can free up all the disks I intend to use, and I want to get the ZFS side straight before I do this so the data I copy back to it is then safe - I don't want it sat around on the 'temporary' disks for too long without backup.

NickPancakes
Oct 27, 2004

Damnit, somebody get me a tissue.

Froist posted:

Thanks for the backup. The only reason I'd gone this route was that I'd heard it was hard to extend FreeNAS to cover other use cases that might come up, whereas with a standard linux install I can just SSH in and install whatever. I was thinking of trying out Ubuntu from a flash drive rather than the supplied disk, but then saw the idea mentioned that the spare space on this drive can be used for in-progress downloads, allowing the main array to be spun down except when in use.

I really need to just start playing around with it but it's going to take a while of shuffling data around until I can free up all the disks I intend to use, and I want to get the ZFS side straight before I do this so the data I copy back to it is then safe - I don't want it sat around on the 'temporary' disks for too long without backup.

The ZFS on Linux implementation is still a little weak compared to the FreeBSD (and obviously Solaris) versions. Going with a straight FreeBSD install instead of the FreeNAS distribution will get you the best of both worlds: multipurpose OS, solid ZFS implementation. I don't know if anyone else here is running ZFS on Linux as their production setup.

ilkhan
Oct 7, 2004

I LOVE Musk and his pro-first-amendment ways. X is the future.
My server has 2 roles. 1-2 user file server and torrent client.
I've been using win7 / utorrent to do that, but the boot drive finally gave out (an ancient 30GB IDE drive) and want to try linux.
Got Ubuntu 12.04 installed, installed gnome3 as the shell, enabled and am using remote access, not 100% sure on which torrent client to use.

The way I had uTorrent configured, it would read through several RSS feeds and download the latest episodes once they aired. The feeds carry more than just that series, however, so it had to filter the results. Then it would assign labels for each show and, when complete, move them into a label-specific folder. At which point a folder sync program would one-way sync to my laptop for viewing. Very convenient: the latest episodes would magically appear on my laptop and could be deleted once viewed while remaining on the server for seeding and archive purposes.

So those are the features I want from the linux client. uTorrent has a linux version, but it's webUI-only, and last I tried the webUI it couldn't do everything I wanted. Has it improved, or is there a better option out there?
Are soft-RAID (5x3TB RAID 6) and Btrfs good options?

ilkhan fucked around with this message at 02:59 on Apr 30, 2012

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

ilkhan posted:

So those are the features I want from the linux client. uTorrent has a linux version, but it's webUI-only, and last I tried the webUI it couldn't do everything I wanted. Has it improved, or is there a better option out there?

Check out rTorrent, you can do all kinds of things with it. I use a webUI called rtgui; there are several others to choose from.
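For the automation you described, rTorrent's config can handle the watch-folder and move-on-completion parts; something like this in .rtorrent.rc (paths are examples, and this is the old-style syntax, so check it against your version):
code:
# pick up .torrent files dropped into a watch directory every 5 seconds
schedule = watch_directory,5,5,load_start=/home/user/watch/*.torrent
# when a download finishes, move the data and update its path
system.method.set_key = event.download.finished,move_complete,"execute=mv,-u,$d.get_base_path=,/home/user/done/;d.set_directory=/home/user/done/"
rTorrent doesn't fetch RSS itself, though, so you'd pair it with something like FlexGet to filter the feeds and drop .torrent files into the watch directory.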

Longinus00
Dec 29, 2005
Ur-Quan

Froist posted:

I decided I needed to drag my archaic backup system into something more modern as one of my drives is dying, so I ordered a N40L before the cashback offer finished. Previously I had a matching-size drive for each of my 'live' drives which I'd periodically plug in and do a manual diff (not as bad as it sounds!) and copy, then stash under my bed again.

I'm reasonably techy but not all that experienced with linux, so it's a whole new world of things to learn and I was hoping for some sanity checking of what I've come up with. I'm basically planning:
  • Keep the 250gb drive for the OS but move it up to the top bay
  • Ubuntu headless server/SSH admin
  • ZFS over 4x2tb in RAIDZ, should give me 6tb useable space and protection should a drive fail
  • Samba sharing to Macs + PCs around the house
  • Possibly Time machine backups - though it seems I would need AFP for this
  • DLNA server, maybe with CouchPotato etc down the line
  • For the moment just sticking with 2GB ram - will this actually become a limiting factor for a pure-server use case?

It's ZFS (the most critical part) that I have the main worries about due to lack of experience. I tried it in a VM and it seemed far too easy - to the point I assumed I was missing something. It basically came down to:
code:
sudo apt-get install zfs-fuse
sudo zpool create -m /storage media raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
I understand I need to set up a schedule to scrub it, but is there anything else I may be missing? Also, for the real thing I'll probably try to use the kernel version of ZFS rather than FUSE; it sounds like the performance difference is worth the extra effort to set it up.

I'll also be installing Apache and planning to write a mobile-friendly web interface for browsing the server/viewing stats/etc. I've seen Webmin; are there any other web interfaces people use?

How much RAM you need depends on what you'll actually be running. A pure NAS could get away with very little RAM depending on usage, but the more services you run the more RAM you'll need. Nothing you've said indicates you need more than 2GB currently. The nice thing about RAM is that upgrading is painless, so you can just wait until you're actually running out and hitting swap before you upgrade.

Userland filesystems are really slow. You'll have to ask someone else about how good the native implementation is but it'll likely be more involved and buggier than installing something from the main repository.

As far as system monitoring goes, there are already solutions available like Munin or Cacti which you can just write plugins for instead of implementing everything from scratch.
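Munin plugins in particular are just scripts that print values, so a pool-capacity graph is about this much work (untested sketch; assumes your pool is named media):
code:
#!/bin/sh
# /etc/munin/plugins/zpool_capacity - graph how full the pool is
case "$1" in
config)
    echo 'graph_title ZFS pool capacity'
    echo 'graph_vlabel % used'
    echo 'used.label media'
    exit 0
    ;;
esac
# zpool list -H gives script-friendly output; capacity prints as e.g. "42%"
echo "used.value $(zpool list -H -o capacity media | tr -d '%')"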

sleepy gary
Jan 11, 2006

Spotted in the FreeNAS console:
code:
Apr 30 15:01:35 fileserver smartd[1590]: Device: /dev/ada0, 48 Currently unreadable (pending) sectors
Apr 30 15:01:35 fileserver smartd[1590]: Device: /dev/ada0, 48 Offline uncorrectable sectors
server#: smartctl -a /dev/ada0
code:
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       715
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       13

197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       48
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       48


RMA ahoy.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
AnandTech reviewed the SilverStone GD07 media server case, a full-ATX case that has a ton of ventilation, stays nice and quiet, and has a lot of storage options.

It's got 5 3.5" drive bays, 2 2.5" bays, and two sets of two 5.25" external bays (into which you could put 3-in-2 hot swap backplanes if you want).

Neat.

movax
Aug 30, 2008

Factory Factory posted:

AnandTech reviewed the SilverStone GD07 media server case, a full-ATX case that has a ton of ventilation, stays nice and quiet, and has a lot of storage options.

It's got 5 3.5" drive bays, 2 2.5" bays, and two sets of two 5.25" external bays (into which you could put 3-in-2 hot swap backplanes if you want).

Neat.

Hm, so what niche is this exactly? An HTPC that's also its own media server, in a case attractive/quiet enough to leave in your living room? Sexy case, just curious who the userbase would be.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Pretty much, yeah.

evil_bunnY
Apr 2, 2003

movax posted:

Hm, so what niche is this exactly? An HTPC that's also its own media server, in a case attractive/quiet enough to leave in your living room?
The niche is me.

Or it would be if I didn't have an MD3000i.

titaniumone
Jun 10, 2001

Froist posted:

  • ZFS over 4x2tb in RAIDZ, should give me 6tb useable space and protection should a drive fail
  • For the moment just sticking with 2GB ram - will this actually become a limiting factor for a pure-server use case?

It's ZFS (the most critical part) that I have the main worries about due to lack of experience.

2GB of RAM is not enough for a functional ZFS system. Your performance will be bad, and if you're unlucky, you may experience kernel panics.
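If you're stuck at 2GB for a while, you can at least cap the ARC so ZFS doesn't starve everything else; on FreeBSD/FreeNAS that's a couple of loader tunables (values here are just examples for a 2GB box, tune to taste):
code:
# /boot/loader.conf
vfs.zfs.arc_max="512M"
vfs.zfs.prefetch_disable="1"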

Bonobos
Jan 26, 2004
Okay, so Amazon finally has the Hitachi 2tb retail packages on sale for $129.99 each (not ideal I know, but I picked a horrible time to build my server). Now these are the 5 platter design 7k2000 models, and I will be putting them in the HP Microserver. I ended up buying 4 of these suckers, as I learned the Samsung 2tb drives I ordered previously had their warranties cut from 3 years to 1 year, so I returned them (it appears the new Samsung drives are manufactured by Seagate in China, so no telling how reliable the new drives will be vs the old ones). The Hitachis are a retail package and have a 3-year warranty.

So how dumb was I to buy these over the WD Greens for ZFS use in FreeNAS? I understand these drives are supposed to be okay for raid use, but given the platter design, did I make the right choice for its intended purpose (media server for a house with 5 users at any given time).

Seagate also sells shockingly cheap 2tb and 3tb drives (compared to WD / Hitachi), but the general online consensus right now seems to be to stay away from Seagate at all costs, given the now-lovely warranties on the drives and the recently spotty reliability record.

I understand Hitachi makes a 5k3000 model (5400 rpm), but these are for whatever reason more expensive than the 7200 rpm models. If it makes a difference, I will be mixing the 4x Hitachi 7k2000 drives with a Samsung F4 and another Hitachi 5k3000 for the ZFS raid array.

Fangs404
Dec 20, 2004

I time bomb.

Bonobos posted:

Okay, so Amazon finally has the Hitachi 2tb retail packages on sale for $129.99 each (not ideal I know, but I picked a horrible time to build my server). Now these are the 5 platter design 7k2000 models, and I will be putting them in the HP Microserver. I ended up buying 4 of these suckers, as I learned the Samsung 2tb drives I ordered previously had their warranties cut from 3 years to 1 year, so I returned them (it appears the new Samsung drives are manufactured by Seagate in China, so no telling how reliable the new drives will be vs the old ones). The Hitachis are a retail package and have a 3-year warranty.

So how dumb was I to buy these over the WD Greens for ZFS use in FreeNAS? I understand these drives are supposed to be okay for raid use, but given the platter design, did I make the right choice for its intended purpose (media server for a house with 5 users at any given time).

Seagate also sells shockingly cheap 2tb and 3tb drives (compared to WD / Hitachi), but the general online consensus right now seems to be to stay away from Seagate at all costs, given the now-lovely warranties on the drives and the recently spotty reliability record.

I understand Hitachi makes a 5k3000 model (5400 rpm), but these are for whatever reason more expensive than the 7200 rpm models. If it makes a difference, I will be mixing the 4x Hitachi 7k2000 drives with a Samsung F4 and another Hitachi 5k3000 for the ZFS raid array.

You made a fine choice. I have 4 2TB 7K3000 drives in my N40L RAID-Z1. They're awesome drives. You shouldn't have buyer's remorse.

Longinus00
Dec 29, 2005
Ur-Quan

titaniumone posted:

2GB of RAM is not enough for a functional ZFS system. Your performance will be bad, and if you're unlucky, you may experience kernel panics.

Isn't this only true if you turn on the RAM eating features of ZFS (e.g. dedup and compression)?

Bonobos
Jan 26, 2004

Fangs404 posted:

You made a fine choice. I have 4 2TB 7K3000 drives in my N40L RAID-Z1. They're awesome drives. You shouldn't have buyer's remorse.

Thanks for that; the last thing I need is picking out lovely drives.

Is there a big difference between the 7k2000 and 7k3000 drives, other than the "SATA III!!!" and the 7k3000 drives running slightly faster?

Wheelchair Stunts
Dec 17, 2005
Oh, man. You don't have to use FUSE! Check this out. Especially since you use Ubuntu, it even has its own little PPA setup. Performance was a lot better for me through this than FUSE.
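Going from memory, the setup is something like this (double-check the PPA name on the project page):
code:
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs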

Fangs404
Dec 20, 2004

I time bomb.

Longinus00 posted:

Isn't this only true if you turn on the RAM eating features of ZFS (e.g. dedup and compression)?

I believe the rule of thumb is 1GB of RAM for every 1TB of storage. If you have 4x2TB, you'll want 8GB of RAM. The whole point is kinda moot anyway when you look at how insanely low RAM prices are nowadays. Just max out your motherboard and call it a day.

Bonobos posted:

Thanks for that; the last thing I need is picking out lovely drives.

Is there a big difference between the 7k2000 and 7k3000 drives, other than the "SATA III!!!" and the 7k3000 drives running slightly faster?

The 7K3000 has twice the buffer (64MB vs 32MB). Other than that, SATA III, and the extra speed, I think they're almost identical.

Longinus00
Dec 29, 2005
Ur-Quan

Fangs404 posted:

I believe the rule of thumb is 1GB of RAM for every 1TB of storage. If you have 4x2TB, you'll want 8GB of RAM. The whole point is kinda moot anyway when you look at how insanely low RAM prices are nowadays. Just max out your motherboard and call it a day.

There is no way that is a true trend. Sun ZFS storage appliances (which have every single memory chewing option enabled so they can get the best benchmark numbers possible) sure as hell don't follow this trend and you'd think they'd know how to set up their own filesystem.

Wheelchair Stunts
Dec 17, 2005
If you have access to the arc stats and whatnot, that can help a ton too. ARC will eat some motherfuckin' rams son.

Fangs404
Dec 20, 2004

I time bomb.

Longinus00 posted:

There is no way that is a true trend. Sun ZFS storage appliances (which have every single memory chewing option enabled so they can get the best benchmark numbers possible) sure as hell don't follow this trend and you'd think they'd know how to set up their own filesystem.

http://doc.freenas.org/index.php/Hardware_Requirements

quote:

The best way to get the most out of your FreeNAS™ system is to install as much RAM as possible. If your RAM is limited, consider using UFS until you can afford better hardware. ZFS typically requires a minimum of 6 GB of RAM in order to provide good performance; in practical terms (what you can actually install), this means that the minimum is really 8 GB. The more RAM, the better the performance, and the Forums provide anecdotal evidence from users on how much performance is gained by adding more RAM. For systems with large disk capacity (greater than 6 TB), a general rule of thumb is 1GB of RAM for every 1TB of storage.

:parrot:

Longinus00
Dec 29, 2005
Ur-Quan

I guess they better tell Oracle/Sun they're deploying their own technology incorrectly.

Wheelchair Stunts
Dec 17, 2005

Longinus00 posted:

I guess they better tell Oracle/Sun they're deploying their own technology incorrectly.

Considering the general consensus among almost everyone I work with / communicate with, this wouldn't be a terrible surprise.

edit: With regard to Oracle.

Fangs404
Dec 20, 2004

I time bomb.

Longinus00 posted:

I guess they better tell Oracle/Sun they're deploying their own technology incorrectly.

I have no idea how Oracle deploys its technology, but it's certainly not unheard of for enterprise servers to have 128GB+ RAM. A server with 128TB of hard drive space and 128GB of RAM isn't far-fetched at all. This guy supports up to 768GB of RAM, so you can see that it's certainly possible to get drat near 1PB of storage on a single server using this 1TB storage:1GB memory ratio.

Regarding this thread (consumer NAS solutions), my 4x2TB N40L has 8GB of RAM and runs FreeNAS (RAID-Z1). I only have about 500MB of free RAM at any given time. ZFS really does eat up a shitload of RAM. Again, RAM is really so cheap that there's no reason not to max out.

Fangs404 fucked around with this message at 05:43 on May 2, 2012

movax
Aug 30, 2008

Fangs404 posted:

I have no idea how Oracle deploys its technology, but it's certainly not unheard of for enterprise servers to have 128GB+ RAM. A server with 128TB of hard drive space and 128GB of RAM isn't far-fetched at all. This guy supports up to 768GB of RAM, so you can see that it's certainly possible to get drat near 1PB of storage on a single server using this 1TB storage:1GB memory ratio.

Regarding this thread (consumer NAS solutions), my 4x2TB N40L has 8GB of RAM and runs FreeNAS (RAID-Z1). I only have about 500MB of free RAM at any given time. ZFS really does eat up a shitload of RAM. Again, RAM is really so cheap that there's no reason not to max out.

Yeah, you can get insane amounts of RAM, it just costs insane amounts of money. Yields are low on the high-density ICs/modules, but if for some insane reason you have to have that much RAM in one machine, you can get it by throwing money at Dell/HP/etc.

Longinus00
Dec 29, 2005
Ur-Quan

Fangs404 posted:

I have no idea how Oracle deploys its technology, but it's certainly not unheard of for enterprise servers to have 128GB+ RAM. A server with 128TB of hard drive space and 128GB of RAM isn't far-fetched at all. This guy supports up to 768GB of RAM, so you can see that it's certainly possible to get drat near 1PB of storage on a single server using this 1TB storage:1GB memory ratio.

Regarding this thread (consumer NAS solutions), my 4x2TB N40L has 8GB of RAM and runs FreeNAS (RAID-Z1). I only have about 500MB of free RAM at any given time. ZFS really does eat up a shitload of RAM. Again, RAM is really so cheap that there's no reason not to max out.

Possible vs. advisable are different things, and I made no comment on how far-fetched running close to a terabyte of memory in a computer was. How much of that memory usage is tied up by the ARC? ZFS handles its page cache reporting differently from other filesystems. I could argue that all of my computers have very little if any free memory if I always counted the page cache.

Longinus00 fucked around with this message at 06:22 on May 2, 2012

Fangs404
Dec 20, 2004

I time bomb.

Longinus00 posted:

Possible vs. advisable are different things, and I made no comment on how far-fetched running close to a terabyte of memory in a computer was. How much of that memory usage is tied up by the ARC? ZFS handles its page cache reporting differently from other filesystems. I could argue that all of my computers have very little if any free memory if I always counted the page cache.



Zero swap usage. This is all physical memory usage.

Longinus00
Dec 29, 2005
Ur-Quan

Fangs404 posted:



Zero swap usage. This is all physical memory usage.

Did you miss my comment about ARC and page caches? I could post screenshots that show near 100% memory "usage" on non ZFS systems if you like.

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:

MrMoo posted:

Oof, low-end NAS vendors now moving onto 10 GigE, ~US$4,300.



http://www.qnap.com/static/landing/10gbe_en.html

I ask from time to time about this. I'm looking at picking up an 859 or 879 as an iSCSI device to store backups of VMs on, but I haven't actually talked to anyone who has used one with ESXi. My plan is to run the actual machines on DAS on the servers (SAS drives on Dell R610s) but figure out a good backup system so I can dump a nightly copy of the critical servers to the NAS and that way be able to recover pretty quickly if need be.

I don't really know how much better the x79 is than the x59 in terms of features and performance, but that low-end virtualization stuff is a pain to navigate.

Fangs404
Dec 20, 2004

I time bomb.

Longinus00 posted:

Did you miss my comment about ARC and page caches? I could post screenshots that show near 100% memory "usage" on non ZFS systems if you like.

You've got to remember that most FreeNAS systems (mine included) boot off of a thumb drive where there is no hard drive swap partition. All caching must be done in memory, so it makes perfect sense why the official recommendation is to load up the servers with tons of RAM; you've got to compensate for the lack of swap.

That said, I'm sure enterprise-grade ZFS machines run off of disk-installed FreeBSD or Solaris, which will of course have a disk that can be used for paging. Still, though, even RAIDed SSDs are orders of magnitude slower than memory, and enterprise-grade systems generally require much greater speeds. So, even in that case, maxing out the memory to reduce paging will be advantageous.

I still don't think the 1TB:1GB storage to memory ratio recommendation is far-fetched at all.

Longinus00
Dec 29, 2005
Ur-Quan

Fangs404 posted:

You've got to remember that most FreeNAS systems (mine included) boot off of a thumb drive where there is no hard drive swap partition. All caching must be done in memory, so it makes perfect sense why the official recommendation is to load up the servers with tons of RAM; you've got to compensate for the lack of swap.

That said, I'm sure enterprise-grade ZFS machines run off of disk-installed FreeBSD or Solaris, which will of course have a disk that can be used for paging. Still, though, even RAIDed SSDs are orders of magnitude slower than memory, and enterprise-grade systems generally require much greater speeds. So, even in that case, maxing out the memory to reduce paging will be advantageous.

I still don't think the 1TB:1GB storage to memory ratio recommendation is far-fetched at all.

When's the last time you've seen a page cache that caches to disk (ignoring swap)? You can have tiered ARC caches on SSDs, but that's different because it's explicit. If you installed FreeNAS to disk you shouldn't see any difference in memory usage.

What I'm trying to get through to you is that the ARC is going to slurp up all available memory just like any other page cache, except the kernel reports it as "used" even though ZFS will readily release it under memory pressure. You're misinterpreting the ZFS ARC memory usage the same way people naively think linux/osx/windows is "wasting" all their memory when they see 98% used. This is why I've been asking what your ARC size is, to get a real feel for how much memory ZFS is actually using.
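On FreeBSD/FreeNAS you can pull the number straight out of sysctl:
code:
# current ARC size, in bytes
sysctl kstat.zfs.misc.arcstats.size
# the configured ceiling
sysctl vfs.zfs.arc_max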

Fangs404
Dec 20, 2004

I time bomb.

Longinus00 posted:

When's the last time you've seen a page cache that caches to disk (ignoring swap)? You can have tiered ARC caches on SSDs, but that's different because it's explicit. If you installed FreeNAS to disk you shouldn't see any difference in memory usage.

What I'm trying to get through to you is that the ARC is going to slurp up all available memory just like any other page cache, except the kernel reports it as "used" even though ZFS will readily release it under memory pressure. You're misinterpreting the ZFS ARC memory usage the same way people naively think linux/osx/windows is "wasting" all their memory when they see 98% used. This is why I've been asking what your ARC size is, to get a real feel for how much memory ZFS is actually using.

Oh, I totally misunderstood what you're saying. arcstat.py is showing my ARC usage around 6GB, so I see what you mean. So while you definitely could run FreeNAS with less RAM, I imagine it's not recommended simply because the memory available for the ARC would be much smaller, and performance would suffer as a result. The FreeNAS recommendation is probably there to ensure a "reasonable" amount of RAM is available for the ARC to improve performance.

Furthermore, I'd imagine that in your example of Oracle servers, there's really a limit to how big the ARC needs to be. Beyond a certain point the performance improvements become negligible (diminishing returns). On smaller consumer-grade systems 8GB for 8TB probably makes perfect sense, but enterprise systems running 128TB of storage probably need proportionally far less RAM for the ARC; perhaps something like 24GB or 48GB would suffice.

Fangs404 fucked around with this message at 07:35 on May 2, 2012
