necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

IOwnCalculus posted:

Going through all of that again, because I'm not going to need quite that many spindles in the foreseeable future... any reason not to use the on-board SATA ports exclusively, instead of starting off with the LSI card?
The primary reason to use the LSI or IBM cards is hardware compatibility with OpenSolaris and *BSD software RAID at minimal cost, since SATA port multipliers tend to blow chunks or are Windows-only. Then there's the whole "omg, I have 10+ TB, I'm gonna need some reliability in my hardware" train of thought, where the $5 onboard SATA controllers might not be worth the "risk." This sort of need comes up mostly when we're building 6+ disk arrays, because most onboard SATA controllers stop at 6 ports and you need a boot disk (if you're not using USB or IDE boot, that is).

If you don't need something now or for the foreseeable future, there's no point spending the money. Just a warning though - I've found that, similar to when I got broadband, once I had 10TB+ of space I started filling it up faster (an old VM here, another one there... crap, 200GB); the only difference from before is that I'd just be delaying when I'd need to clean things up.
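As a quick sanity check on FreeBSD that the HBA and every attached disk actually show up (a sketch - the grep pattern and device names will vary by card):
code:
# List PCI devices and look for the HBA's vendor/device strings
pciconf -lv | grep -i -B3 sas

# List every disk the kernel sees, controller by controller
camcontrol devlist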

BlankSystemDaemon
Mar 13, 2009



Scuttle_SE posted:

Hmm... the CPU is an Atom D525, and according to Intel's website it is capable of 4GB, or am I completely misreading stuff?

Yes, but the motherboard is also a factor in how much memory is supported.

I still don't understand - what do you need the memory for?

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

The primary reason to use the LSI or IBM cards is hardware compatibility with OpenSolaris and *BSD software RAID at minimal cost, since SATA port multipliers tend to blow chunks or are Windows-only. Then there's the whole "omg, I have 10+ TB, I'm gonna need some reliability in my hardware" train of thought, where the $5 onboard SATA controllers might not be worth the "risk." This sort of need comes up mostly when we're building 6+ disk arrays, because most onboard SATA controllers stop at 6 ports and you need a boot disk (if you're not using USB or IDE boot, that is).

If you don't need something now or for the foreseeable future, there's no point spending the money. Just a warning though - I've found that, similar to when I got broadband, once I had 10TB+ of space I started filling it up faster (an old VM here, another one there... crap, 200GB); the only difference from before is that I'd just be delaying when I'd need to clean things up.

Oh, I'm not coming from zero storage - I've been running an Ubuntu box with a six-disk 1.5TB mdraid RAID5 array for quite some time :v: I want to migrate to some newer hardware and ESX/ZFS so I can consolidate a couple of boxes, plus get better error protection and dedup (which would, according to fdupes, get me 250GB back without even looking deeper than file level). My disk usage has been pretty steady for quite some time. I'm thinking of going to a RAIDZ2 with either 6x 2TB drives or 5x 3TB drives, booting ESX off the onboard USB header, and maaaaaybe adding in a cheap two-port card, assuming a compatible one exists, and hanging an SSD off it for ZIL.

Google says ESX should pick up the individual disks on the onboard controller just fine, so I guess that will work. Now, to just find some money :v:
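Incidentally, if you want a rough idea of what dedup would actually buy you before committing RAM to it, ZFS can simulate the dedup table against an existing pool (a sketch - 'tank' is a placeholder pool name, and it walks all of the pool's metadata, so it takes a while):
code:
# Simulate dedup on an existing pool and print the estimated dedup ratio
# plus a histogram of what the dedup table (DDT) would look like
zdb -S tank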

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

Please remember that dedupe on ZFS is going to require INSANE amounts of memory.

You've been warned.

IOwnCalculus
Apr 2, 2003





RAM is dirt loving cheap these days; the main reason I haven't done this yet is the spike in drive prices.

Obviously Erratic
Oct 17, 2008

Give me beauty or give me death!

D. Ebdrup posted:

Those numbers are just fine - are you still getting slow NIC file transfer speeds between the server and your desktop? If so (and you want to stay with Ubuntu), try upgrading to the latest firmware and drivers for your NIC.

My network speeds over GigE seem to sit at around 80-90MB/sec - the highest I saw it go was about 117MB/sec, but I was able to sustain multiple simultaneous copies at around 80MB/sec, which is WAAAAAY better than I was getting before!

I'm quite happy with Ubuntu and the ZFS performance so far; I think I do need to tweak a little more, though. Any tips on upgrading the firmware & drivers for the N40L's NIC?

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

feld posted:

Please remember that dedupe on ZFS is going to require INSANE amounts of memory.

You've been warned.

Out of morbid curiosity, can you put some figures to that?

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic posted:

My network speeds over GigE seem to sit at around 80-90MB/sec - the highest I saw it go was about 117MB/sec, but I was able to sustain multiple simultaneous copies at around 80MB/sec, which is WAAAAAY better than I was getting before!

I'm quite happy with Ubuntu and the ZFS performance so far; I think I do need to tweak a little more, though. Any tips on upgrading the firmware & drivers for the N40L's NIC?

Now try using copyhandler or a similar program where you can adjust the buffer size for network transfers.

With regard to drivers - it doesn't look like there are any available on the drivers page, unless you're meant to use the N36L drivers, which I can't really believe on account of them using separate chipsets, CPUs, and possibly a different motherboard.
Contact HP through email? I'm sure they can answer a simple query as to whether drivers are available and where.

Scuttle_SE
Jun 2, 2005
I like mammaries
Pillbug

D. Ebdrup posted:

Yes, but the motherboard is also a factor in how much memory is supported.

I still don't understand - what do you need the memory for?

Not that I need it right now, but memory is dirt cheap, and I might as well max the thing out now, in case it is needed later down the line.

Tornhelm
Jul 26, 2008

necrobobsledder posted:

Fat lot of good a hard drive does in a file server if there's no power or signal to it. Look closely: I'll say that a MicroServer can fit four 3.5" disks in the hot-swap bays and a 3.5" plus a 2.5" in the upper 5.25" bay, comfortably and without modding the case.

It fits two 3.5" in the ODD bay if you have something like the Nexus Double Twin bracket, and a 2.5" in the space underneath that. Replace the ODD blanking plate with a mesh one and it'll keep them just as cool as the bottom four drives. That can be a 6x3TB setup with very little fuss or bother (and running the OS off of either a USB or the 2.5" depending on what OS you end up with of course).

Edit: For the chap who was looking for info on the N40L bios, your best bet would be to check out the avforums owners thread - the overclockers.com.au megathread is more geared to the N36L afaik.

Tornhelm fucked around with this message at 11:12 on Dec 30, 2011

Obviously Erratic
Oct 17, 2008

Give me beauty or give me death!
Has anyone had any experience with, or done any benchmarking of, an SSD as a ZFS cache drive in the MicroServer?
Considering getting one of the Nexus Double Twin brackets to slot another 2TB drive in and then adding a smallish SSD, but I'm not sure whether I'd be best off using it for cache or for the OS (I don't have any issues running from USB on the mobo yet).

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic posted:

Has anyone had any experience with, or done any benchmarking of, an SSD as a ZFS cache drive in the MicroServer?
Considering getting one of the Nexus Double Twin brackets to slot another 2TB drive in and then adding a smallish SSD, but I'm not sure whether I'd be best off using it for cache or for the OS (I don't have any issues running from USB on the mobo yet).
Well, let's distinguish between cache and ZIL first (because ZFS does, so should you). Cache is for reading, while the ZFS intent log (referred to as the ZIL or log) is for writing. Cache can be run on one device and typically doesn't need to be much bigger than 4-8GB, depending on how much memory you have. Log devices, on the other hand, need to be a bit bigger, but should also never be run without a second device functioning as a mirror (since any writes stored on the log device will be lost if the system loses power, potentially corrupting data). For both, though, go for SLC SSDs as they tend to be much faster in disk I/O.
Moreover, the HP N36L's CPU, for example, is a tad slow (running at only 1.3GHz, though the HP N40L isn't much faster at 1.4GHz) for tasks which aren't threaded (like SMB/CIFS sharing, and some other tasks).
Really, it'd be easier to help you if I knew what specifically you wanted to improve on your server (provided you've already bought a separate NIC and you're running load-balanced link aggregation; if not, look into that first so your ethernet connection doesn't become the next bottleneck - if it isn't already, say if you're using FreeBSD with the standard bge(4) driver).
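For reference, attaching those devices to an existing pool is a one-liner each (a sketch - 'tank' and the da* device names are placeholders):
code:
# Add a single SSD as an L2ARC (read cache) device
zpool add tank cache da2

# Add a mirrored pair of SSDs as a separate log (ZIL) device
zpool add tank log mirror da3 da4

# Cache and log devices can be removed again later if needed
zpool remove tank da2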

Now for a few links for anyone messing about with ZFS:
Here is a page discussing best practices for using ZFS. More specific to improving your ZFS performance, here is a tuning guide for ZFS (keep in mind the "Tuning is evil" section).

As to your question, I have done some ZFS benchmarking, and the best speeds I've seen were on 5 Hitachi 15k SAS-connected 450GB drives running in raidz2 with one SSD for cache and two SSDs for ZIL, over a 10Gb fiber connection (the server was an iSCSI-attached datastore for a VMware ESXi host) - but it varies a lot based on sector size, what data you're moving (and the buffers on those file transfers - for example, on Windows the default is 512, but if you're running jumbo frames you want to raise it), whether your network is set up properly (with 9k jumbo frames and load-balanced link aggregation), and whether there's a break in one of the pairs of the cat5e/cat6 cable you're using (I only add this because it was the reason my TV was always slow to connect) - just to mention a few things.

BlankSystemDaemon fucked around with this message at 14:37 on Dec 31, 2011

Obviously Erratic
Oct 17, 2008

Give me beauty or give me death!

D. Ebdrup posted:

Well, let's distinguish between cache and ZIL first (because ZFS does, so should you). Cache is for reading, while the ZFS intent log (referred to as the ZIL or log) is for writing. Cache can be run on one device and typically doesn't need to be much bigger than 4-8GB, depending on how much memory you have. Log devices, on the other hand, need to be a bit bigger, but should also never be run without a second device functioning as a mirror (since any writes stored on the log device will be lost if the system loses power, potentially corrupting data). For both, though, go for SLC SSDs as they tend to be much faster in disk I/O.
Moreover, the HP N36L's CPU, for example, is a tad slow (running at only 1.3GHz, though the HP N40L isn't much faster at 1.4GHz) for tasks which aren't threaded (like SMB/CIFS sharing, and some other tasks).
Really, it'd be easier to help you if I knew what specifically you wanted to improve on your server (provided you've already bought a separate NIC and you're running load-balanced link aggregation; if not, look into that first so your ethernet connection doesn't become the next bottleneck - if it isn't already, say if you're using FreeBSD with the standard bge(4) driver).

Now for a few links for anyone messing about with ZFS:
Here is a page discussing best practices for using ZFS. More specific to improving your ZFS performance, here is a tuning guide for ZFS (keep in mind the "Tuning is evil" section).

As to your question, I have done some ZFS benchmarking, and the best speeds I've seen were on 5 Hitachi 15k SAS-connected 450GB drives running in raidz2 with one SSD for cache and two SSDs for ZIL, over a 10Gb fiber connection (the server was an iSCSI-attached datastore for a VMware ESXi host) - but it varies a lot based on sector size, what data you're moving (and the buffers on those file transfers - for example, on Windows the default is 512, but if you're running jumbo frames you want to raise it), whether your network is set up properly (with 9k jumbo frames and load-balanced link aggregation), and whether there's a break in one of the pairs of the cat5e/cat6 cable you're using (I only add this because it was the reason my TV was always slow to connect) - just to mention a few things.

Cool, thanks for the links too. I don't think I'll really bother with an SSD for cache or ZIL as I don't think I'll ever have the network throughput to make it matter.

Thanks for the patience and help so far, this whole ZFS journey has been awesome so far.
My only other question: I found another 2TB drive lying around (wish that would happen more often) and figure I might rebuild my zpool now to add it in before I get too far along. However, it's not a 4K-sector drive, whilst all the others in the pool are. When I rebuild, should I still go ashift=12 to accommodate all the other drives? Or am I better off forgetting this drive and not adding it at all?

My understanding is that I can't simply grow the zpool and I'll need to destroy and rebuild it - I'm pretty sure that's correct?

EDIT: Seems I'll be fine to keep ashift=12 for the 4k zpool:
code:
A 4k aligned pool will work perfectly on a 512b aligned disk, it's just
the other way that's bad. I guess ZFS could start defaulting to 4k, but
ideally it should do the right thing depending on content (although
that's hard for disks that are lying).
DOUBLE EDIT: drat, looks like a 6-drive RAIDZ pool will be bad for performance due to the striping of ZFS. Maybe this is not a good idea.
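On the ashift question, you can check what an existing pool was built with and force 4K alignment when you rebuild (a sketch - 'tank' and the da* names are placeholders; newer OpenZFS accepts ashift at creation time, while FreeBSD of this era usually relied on the gnop 4K workaround instead):
code:
# Check the ashift of an existing pool (9 = 512B sectors, 12 = 4K)
zdb -C tank | grep ashift

# On OpenZFS builds that support it, force 4K alignment at pool creation
zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5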

Obviously Erratic fucked around with this message at 04:50 on Jan 1, 2012

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
It's possible to expand a RAIDZ by replacing each drive and resilvering onto each replacement; then you can grow the array out to the total available capacity. This is way too much risk and effort for most, so people just decommission the array and load-balance out to other arrays. People preferred Windows Home Server for a reason, but it turns out it's kinda hard to get that kind of redundancy at the file / folder level on NTFS. Unraid is another option, based on Linux, that can emulate WHS-like folder redundancy, where you'd only lose what's on a drive if it failed.
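For the record, the replace-and-resilver dance looks roughly like this, one disk at a time (a sketch - 'tank' and the device names are placeholders):
code:
# Let the pool grow automatically once every member has been upsized
zpool set autoexpand=on tank

# Swap one member for the new, larger disk and wait for the resilver
zpool replace tank da1 da6
zpool status tank    # only move to the next disk once this shows resilvered

# If autoexpand was off, trigger expansion per device afterwards
zpool online -e tank da6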

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Obviously Erratic posted:

DOUBLE EDIT: drat, looks like a 6-drive RAIDZ pool will be bad for performance due to the striping of ZFS. Maybe this is not a good idea.

Why do you say this? You won't have the small boost that normal RAID 5 would offer, but it should run at the speed of the pool's slowest block device.

zero0ne
Jul 20, 2007
Zero to the O N E

D. Ebdrup posted:

... Stuff about ZFS ZIL and L2ARC ...

Just some nitpicks as I have started researching this a bit (and I only post as I continue to look for counterpoints and more info).

1) The ZIL drive is what can be small, as per the following:

ZFS Best Practice Guide posted:

- The maximum size of a log device should be approximately 1/2 the size of physical memory because that is the maximum amount of potential in-play data that can be stored. For example, if a system has 16 GB of physical memory, consider a maximum log device size of 8 GB.
- For a target throughput of X MB/sec and given that ZFS pushes transaction groups every 5 seconds (and have 2 outstanding), we also expect the ZIL to not grow beyond X MB/sec * 10 sec. So to service 100MB/sec of synchronous writes, 1 GB of log device should be sufficient.
https://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#General_Storage_Pool_Performance_Considerations

2) I would say you can still get away without mirroring the ZIL as long as you get an SSD that has a supercap (Intel 320 series, among others). It will protect against power loss, but of course not against a drive failure. As long as you have a newer version of ZFS, it will gracefully handle a ZIL drive failure and switch over to using blocks from the storage pool.

3) a Cache drive is L2ARC, where normal ARC is in-memory and will use all but 1GB of current memory. If you have limited RAM, getting a nice SSD for L2ARC is going to help a lot. Hell, even if you have 16 or 32GB RAM, it may still help in some cases. Nice thing with L2ARC is that if the drive fails, nothing is lost.

4) SLC isn't (IMO) the best for both. If you want max performance, you will need to keep in mind that SLC is ideal for the ZIL, and MLC is better for the cache.

Check this thread out: http://forums.freebsd.org/archive/index.php/t-20873.html

Instead of an SLC SSD, look into a PCI / PCI Express RAM drive. I think ASUS sold the 4GB RAM cards, and there are a few others out there. Not sure how "GOOD" they are, but at least they don't drop in performance after continued use.
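If you're weighing ARC against L2ARC, it helps to watch what the box is actually doing; on FreeBSD the counters are exposed as sysctls (a sketch - these are the standard kstat arcstats names, values in bytes):
code:
# Current ARC size and its configured ceiling
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

# How much has spilled into the L2ARC device, and its hit/miss counters
sysctl kstat.zfs.misc.arcstats.l2_size
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses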


My big question is how much uummmph having both a ZIL and a cache drive is going to get you. I.e., say I have 15 2TB SATA drives. I can do either:
5x 3drive RAIDZ1 in a pool
3x 5drive RAIDZ2 in a pool

The first one gives me 20TB space, and the second gives me 18TB space.

The question is: does the extra protection of the Z2 warrant the loss of 2 vdevs in the ZFS pool (when it comes to IOPS)? Does a cache and ZIL drive offset that enough to warrant the extra protection?

Let's say the drives can do something like 100 IOPS each; we're talking about 300 IOPS vs 500 IOPS.


This is going to be for a FreeNAS setup primarily for ESXi (not production, but it is my home setup, so redundancy is important to me).

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

zero0ne posted:

The question is: does the extra protection of the Z2 warrant the loss of 2 vdevs in the ZFS pool (when it comes to IOPS)? Does a cache and ZIL drive offset that enough to warrant the extra protection?
For my peace of mind, there is no doubt that I would go 3x 3+2 raidz2.

Flying_Crab
Apr 12, 2002



mattdev posted:

Thanks, guys.

I suppose that the only concern I still have is whether or not it will play all of my media files on the PS3. I use PS3 Media Server right now and it plays basically everything. Will the Synology still do this or do I need to still run PS3 Media Center for certain types of :filez:?

I have a DS211j; if your streamer doesn't natively support a format, the Synology is not going to be fast enough to transcode it. It has a built-in DLNA server for devices that need that.

My WDTV Live picks up everything on my NAS and plays just fine with few exceptions.

quote:

I just (~3 hours ago) placed an order for the DS411. The j version seems to skimp on RAM and the processor is a bit slower.

For what it's worth, I have a DS211j, which is much slower and lower-specced, and it has never hiccuped doing lots of BT while taking Time Machine backups from my MacBook and streaming MKV files to the same MacBook or my WDTV Live. I don't think I've ever gotten it above 50-60% CPU usage under heavy load, nor has it come close to running low on RAM. Although if you're going to hammer the poo poo out of it, you might want a higher-specced model.

Flying_Crab fucked around with this message at 17:39 on Jan 2, 2012

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

adorai posted:

For my peace of mind, there is no doubt that I would go 3x 3+2 raidz2.
Specifically because he's looking at VMs, I'd start wondering what sort of storage performance requirements will be necessary, including average latency and IOPS per VM and such. Just about all home networking users are fine with stacking some green drives and hooking up to wired networking, but if you're doing VMs at home, things get a lot more interesting. I've got a number of green drives that host some VMs and do bulk data storage in a RAIDZ1, and it's kinda clear that I just don't have the IOPS nor the appropriate scheduling to do it with that architecture. I don't think a ZIL or L2ARC would help appreciably for the number of VMs that could be stacked on, say, 15TB of green drives. The reality is that you have to do some vertical scaling to get decent VM performance even for a home setup, and that probably means 5400RPM green drives are out, which explodes costs for those at home trying for those 20-disk arrays. I say that, just like in professional IT storage, tiering should be used at home for the optimal cost-performance tradeoff involving VMs.

zero0ne posted:

If you want max performance, you will need to keep in mind that SLC is ideal for the ZIL, and MLC is better for the cache.
If you're getting this serious, the only SSDs (if you're looking at them) you should be looking at are Intel SSDs, and the only SLC SSDs they've got on the market now are not the supercap variety. This means you're again going to have to trade off a bit of redundancy/reliability for speed optimization just on the basis of part availability. The X25-E is probably so old now that the newer Cougar Point-generation MLC-based SSDs would beat it across the board. SLC's primary advantage is purely longer write endurance compared to MLC, though.

The amusing part is that as a home user you'd be better off with a good MLC for the L2ARC and what amounts to a ramdisk for your ZIL. Frankly, I think it'd be better for the guys writing ZFS (whoever's left at Oracle, see...) to make ZFS scale more efficiently with more RAM, not by tempting people to use a separate device for a ZIL. L2ARC is a different matter though.

zero0ne
Jul 20, 2007
Zero to the O N E
Good points necrobobsledder.

One thing I forgot to mention is that the entire storage array isn't going to be for VMs, so I will be going for a tiered approach. Storage is the primary use of this, but I do want to make sure it can support my ESXi lab.

I'll be getting some entry-level GigE switch, simply so I CAN use VLANs, and will eventually upgrade to a Cisco. I need to start learning more about networking, and a small business switch with a GUI should at least start me off; I can then upgrade later when I'm more comfortable.
The actual ESXi boxes are currently some simple desktops, either 2x with 8GB, 2x with 16GB. Either dual core or quad core Intel CPUs.

Now for the actual storage:

For the ZIL, I just have this problem of using either 1 or 2 SSDs for something that really doesn't need to be more than 4-10GB. Why spend $300+ on an SSD that is going to be wasted as a ZIL (be it 2x Intel SLCs or a single supercap MLC)?

Something like this: http://www.anandtech.com/show/1742
(they run around $150 for the max 4GB)

Using an SSD for the cache seems like it should help with my workload though (be it streaming or the ESXi load).

Main storage will end up being the top 3 cheapest 2TB drives (have to use 2TB due to the SAS cards being based on an older LSI chip that only supports up to 2TB).

What I was thinking (after your comments) was to purchase one of the 4x 2.5" hot-swap bays and either load it up with SSDs or some quick SAS drives. I am thinking the SSDs would be more cost-effective, simply because my ESXi load isn't going to be anything crazy, and they shouldn't wear out too quickly.

Will post a list of items that will get purchased when I start building it on newegg, but for starters, this is probably the case I am going to use:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811146051

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

zero0ne posted:

For the ZIL, I just have this problem of using either 1 or 2 SSDs for something that really doesn't need to be more than 4-10GB.
For your ZIL I would just go with one http://www.newegg.com/Product/Product.aspx?Item=N82E16820167062 since newer ZFS versions do not have a risk of complete data loss from a ZIL failure. I think a home lab environment can afford a transaction loss in the unlikely event that the ZIL fails.

Rukus
Mar 13, 2007

Hmph.
Quick question regarding a RAID card:

Say if I get this 2-port 8-channel RAID card, and two SATA breakout cables, I could run up to 8 green drives no problem, right? The drives will just be hosting documents/music/movies that are dished out over the network, so they won't be seeing any real heavy usage beyond that.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Rukus posted:

Quick question regarding a RAID card:

Say if I get this 2-port 8-channel RAID card, and two SATA breakout cables, I could run up to 8 green drives no problem, right? The drives will just be hosting documents/music/movies that are dished out over the network, so they won't be seeing any real heavy usage beyond that.

Yep, though maybe shop around for the cables. I think I got mine for $10 + $5 shipping (should have gotten a second at the time, dummy), but not from Newegg. They're $25 apiece with shipping, at best, on Newegg.

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
Not sure what length you need, but here's a primo price.
http://www.monoprice.com/products/subdepartment.asp?c_id=102&cp_id=10254&cs_id=1025406

Rukus
Mar 13, 2007

Hmph.
Thanks, yeah, I just took a gander at Monoprice while waiting for replies and saw those. What's even better is that shipping to Canada will only cost $3.50 since they charge by weight.

Thanks again for the confirmation. :)

chizad
Jul 9, 2001

'Cus we find ourselves in the same old mess
Singin' drunken lullabies
For anyone running FreeNAS 8, looks like 8.0.3 was just released. Among other changes, it looks like there's some fixes for the email system, so maybe the daily report and smartctl and such emails will be more reliable.

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

Nam Taf posted:

Out of morbid curiosity, can you put some figures to that?

Last figures I saw were ~3GB per 1TB. That's *just* for the dedupe. That's not including room for regular ZFS ARC buffering, etc.

Note, HAMMER on DragonFly lets you get by with around 256MB per TB.
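As a rough back-of-the-envelope check on that figure (a sketch - it assumes the commonly cited ~320 bytes per dedup-table entry and a 128K average block size; smaller blocks need proportionally more):
code:
# 1 TiB of unique data at 128 KiB blocks:
#   1 TiB / 128 KiB        = 8,388,608 blocks
#   8,388,608 * 320 bytes  ~ 2.5 GiB of dedup table (DDT)
# VM images and databases with smaller average block sizes push this
# well past 3GB per TB, before any normal ARC usage.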

zero0ne
Jul 20, 2007
Zero to the O N E

Rukus posted:

Quick question regarding a RAID card:

Say if I get this 2-port 8-channel RAID card, and two SATA breakout cables, I could run up to 8 green drives no problem, right? The drives will just be hosting documents/music/movies that are dished out over the network, so they won't be seeing any real heavy usage beyond that.

BE CAREFUL with this card! If you see an 8-port SAS card under ~400 bucks and it is based on an LSI chip (the 1068E, I think), it will only support up to 2TB per channel. This is a limitation of the "outdated" LSI chip most of them are based on.

For example, this Intel 8 channel SAS card: http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157

EDIT: looks like that card supports more than 2TB with the updated drivers, according to a comment (at least for Windows; didn't check BSD or Linux support).

zero0ne fucked around with this message at 02:45 on Jan 6, 2012

zero0ne
Jul 20, 2007
Zero to the O N E
Is anyone running FreeNAS here with a custom, updated version of ZFS?

FreeNAS currently ships with ZFS v15, so I was thinking of hacking it up to ZFS v28. Some of the features won't be available via the GUI, but I can bust out the command prompt if I want to mess around with the new features.

Curious about stability.

KennyG
Oct 22, 2002
Here to blow my own horn.

necrobobsledder posted:

:words: for those at home trying for those 20-disk arrays.

Can RAIDZ even handle a 20-some-disk array? Also, if these were, say, 3TB drives and you had a fully loaded 24-drive RAIDZ3 @ ~57.3TB, would a scrub of a full array take as long as SIZE/READ_SPEED - which is to say 60,114,862MB / (200MB/s) = ~300k seconds, or about 83 hours?

If you're using 'consumer' drives and scrubbing every week, half of your time will be spent scrubbing. This is nuts!


I've outgrown my 10TB of JBOD and need to get something serious going. I was resigned to hardware RAID6 on a ~$$$$ SAS controller, but RAIDZ and a (couple of) HBA(s) sound mighty enticing.

roadhead
Dec 25, 2001

It says in the docs not to do more than 9 physical devices in a single RaidZ (you can have multiple RaidZ groups in a single storage pool) - from the FreeBSD handbook:

FreeBSD Handbook posted:

Note: Sun recommends that the amount of devices used in a RAID-Z configuration is between three and nine. If your needs call for a single pool to consist of 10 disks or more, consider breaking it up into smaller RAID-Z groups. If you only have two disks and still require redundancy, consider using a ZFS mirror instead. See the zpool(8) manual page for more details.

You can do it, but I noticed strange behavior when I had a 10-disk RaidZ and RaidZ2.

EDIT: Just got 8.2-RELEASE-p6 - is this to prep for updating to 9.0-RELEASE?

Also, the scrub only takes as long as the array is full: I have a RaidZ with 6 devices (1.5TB WD Greens) that is nearly full and takes ~8 hours to scrub. My other RaidZ2 with similar drives isn't nearly as full, has one drive screaming in agony, and scrubs in 2.5 hours.
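For anyone following along, kicking off and watching a scrub is just this (a sketch - 'tank' is a placeholder pool name):
code:
# Start a scrub and then check its progress / estimated completion
zpool scrub tank
zpool status -v tank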

ad14 posted:

Jan 6 07:34:16 hydra smartd[4586]: Device: /dev/ad14, 4 Offline uncorrectable sectors
Jan 6 07:34:16 hydra smartd[4586]: Device: /dev/ad14, Failed SMART usage Attribute: 1 Raw_Read_Error_Rate.
Jan 6 08:04:15 hydra smartd[4586]: Device: /dev/ad14, FAILED SMART self-check. BACK UP DATA NOW!
Jan 6 08:04:17 hydra smartd[4586]: Device: /dev/ad14, 1368 Currently unreadable (pending) sectors
Jan 6 08:04:17 hydra smartd[4586]: Device: /dev/ad14, 4 Offline uncorrectable sectors
Jan 6 08:04:17 hydra smartd[4586]: Device: /dev/ad14, Failed SMART usage Attribute: 1 Raw_Read_Error_Rate.
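When smartd starts yelling like that, the usual next step is to pull the full SMART report and queue a long self-test on the drive it named (a sketch against the /dev/ad14 from the log above):
code:
# Full SMART attributes, error log and self-test history for the suspect disk
smartctl -a /dev/ad14

# Queue an extended (long) self-test; check the results with -a afterwards
smartctl -t long /dev/ad14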

roadhead fucked around with this message at 15:07 on Jan 6, 2012

BnT
Mar 10, 2006

zero0ne posted:

If you see an 8-port SAS card under ~400 bucks and it is based on an LSI chip (the 1068E, I think), it will only support up to 2TB per channel. This is a limitation of the "outdated" LSI chip most of them are based on.

The exception to this is the IBM m1015 (based on the LSI SAS2008 chip), which supports larger drives. You can grab these for about $70 used/"new pull" or $170 new. You'll have to buy some cables too and flash them, but after all that hassle they're excellent JBOD cards for ZFS installs and reasonably priced too.

edit: added a link to the flashing guide I used
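Once it's flashed, you can sanity-check that the crossflash took and that the OS is driving it as a plain HBA (a sketch - sas2flash is LSI's own flashing utility, and mps(4) is the FreeBSD driver for the SAS2008; treat the exact output as firmware-dependent):
code:
# From the LSI utility: the controller should now report IT firmware
sas2flash -listall

# On FreeBSD the card should attach via the plain mps(4) HBA driver,
# with every disk visible individually
dmesg | grep -i mps
camcontrol devlist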

BnT fucked around with this message at 16:23 on Jan 7, 2012

KennyG
Oct 22, 2002
Here to blow my own horn.
If I was going to test OpenIndiana in a VM, what would I tell the VM wizard that the OS was? Solaris?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
^^ Yep.

BnT posted:

The exception to this is the IBM m1015 (based on the LSI SAS2008 chip), which supports larger drives. You can grab these for about $70 used/"new pull" or $170 new. You'll have to buy some cables too and flash them, but after all that hassle they're excellent JBOD cards for ZFS installs and reasonably priced too.

QFT. I have one and it's awesome (I flashed it to IT mode) and it's stable as all hell. It's SATA3, but aside from SSDs you're not really going to see the difference if you're using slower-RPM drives.

I have 4 Hitachi 5k3000 drives hanging off of mine in FreeBSD 9.0 on ZFSv28 and it's been rock solid/heart touching for the past 4 months.

zero0ne
Jul 20, 2007
Zero to the O N E

BnT posted:

The exception to this is the IBM m1015 (based on the LSI SAS2008 chip), which supports larger drives. You can grab these for about $70 used/"new pull" or $170 new. You'll have to buy some cables too and flash them, but after all that hassle they're excellent JBOD cards for ZFS installs and reasonably priced too.

Good to know! Now the search begins :)

EDIT: (don't feel like posting 3 times in a row!)

For a RAIDZ or RAIDZ2, what should I be doing drive wise?

Assuming I get the IBM m1015, I will have 8 ports to use. Based on the ZFS rules of thumb, I should be using a power of two for the number of data drives (not counting the parity drives).

So, the smart thing seems to be doing a Z2 with a total of 6 drives including parity. This way I now have 2 free ports on this controller.

I could then do another Z2, on the second, identical controller, and have another 2 free ports.

With each card being able to handle 6Gbps, I could then use the extra 4 ports for SSDs.

If I set up 4 SSDs as my VM volume, does it make any sense to add a cache / ZIL disk?

(maybe not on the SSD pool, but a cache / ZIL for the HDD pool?)
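Laid out as zpool commands, that plan would look something like this (a sketch - 'tank', 'fast' and the da* device names are placeholders):
code:
# Bulk pool: two 6-disk raidz2 vdevs, one per controller
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# Separate SSD pool for the VM datastore on the four spare ports...
zpool create fast mirror da12 da13 mirror da14 da15

# ...or hold a couple of the SSDs back as cache/log for the bulk pool instead
# zpool add tank cache da12
# zpool add tank log mirror da13 da14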

zero0ne fucked around with this message at 05:14 on Jan 8, 2012

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

KennyG posted:

If you're using 'consumer' drives and scrubbing every week, half of your time will be spent scrubbing. This is nuts!
Not so bad if you're scrubbing multiple zpools at the same time, which is what I was implying a bit with a 20-disk "array." But the more correct terminology should have been pool, admittedly.

With larger setups you'll end up with multiple RAIDZ vdevs, but you don't necessarily have to put them all into a single monolithic pool. Storage tiering principles say that's probably suboptimal, and if you need access to that much data, it's highly unlikely the access requirements are (temporally) uniform. That is, you don't need access to 100% of your data 100% of the time. If your home access is anything like mine, you really only care about 30% of your data at any given moment. I don't care about Christmas movies and music any time of year other than October through January.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

chizad posted:

For anyone running FreeNAS 8, looks like 8.0.3 was just released. Among other changes, it looks like there's some fixes for the email system, so maybe the daily report and smartctl and such emails will be more reliable.

I have 8.0 running off a thumb drive on my NAS. If I upgrade to 8.0.3, will it wipe my ZFS raid? I heard that you can't simply upgrade 8.0->8.0.x, so I am worried.

BlankSystemDaemon
Mar 13, 2009



Just export your config, do a clean install and import your config. Worked just fine for me.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
drat, the N40L is on sale for $200 at MacMall. I may have to grab one for a file server now, then get drives once they come back down to normal.

http://www.macmall.com/p/HP-Desktop-Servers/product~dpno~8887013~pdp.gjbdfih

zero0ne
Jul 20, 2007
Zero to the O N E

Moey posted:

drat, the N40L is on sale for $200 at MacMall. I may have to grab one for a file server now, then get drives once they come back down to normal.

http://www.macmall.com/p/HP-Desktop-Servers/product~dpno~8887013~pdp.gjbdfih

I don't see drives going back down to what they used to be for a WHILE. Today they're going for 7-10 cents/GB; less than a year ago they were going for as little as 2-4 cents/GB.

Maybe in a year - but by then, mechanical HDDs should have been as low as 1 cent per GB based on the way their price/GB was falling.
