teamdest
Jul 1, 2007

LamoTheKid posted:

So I'm rebuilding my home machine. One thing, I'm switching from Debian6/mdadm RAID5 to OpenIndiana/napp-it/zfs and I need to get all my poo poo over to it.

I have an external drive to back everything off to. Can I just install ZFS-Fuse under debian and set that external as ZFS, copy stuff over (yeah, I know it'll be slow), export it and then import it on the new hardware/OpenIndiana? I'm pretty sure ZFS-FUSE is a few revisions behind what OI has, but it's backwards compatible right?

EDIT: I'm an idiot, I just tested this out with 2 VMs and it worked great. Looks like this is the plan. Sorry if I poo poo up the thread.

I don't see why you couldn't format the external as plain old ext4 or whatever and get a bit better speed. Are you planning to keep the external in the ZFS pool after conversion?
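If you did go that route, the whole thing is basically a format, a mount, and an rsync; something like this, with made-up device and path names (check fdisk -l for the real ones):

code:
sudo mkfs.ext4 /dev/sdX1               # the external drive's partition -- placeholder name
sudo mkdir -p /mnt/external
sudo mount /dev/sdX1 /mnt/external
sudo rsync -a --progress /srv/storage/ /mnt/external/   # -a keeps permissions and timestamps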

teamdest
Jul 1, 2007
Have you dabbled in Linux before? I can't help much with windows server stuff, I'm a *nix guy, but if you're willing to go that route I can show you my current setup which works pretty drat well (and is being reinstalled right now actually because I was annoyed at some decisions I've made).

Actually on that note lemme just run down what I'm doing at the moment:

Hardware is 4x1TB, 2x500GB, 2x750GB (inactive) on a Micro-ATX/Athlon type of setup. Video, audio, etc. are all integrated; it's a really nice low-power setup. I can dig up the specs if anyone actually cares.

Software is an Ubuntu/btrfs/dm-crypt stack.

I'm in the process of overwriting the 1TB drives right now in preparation for bringing them back online as a 3TB RAID5 array, and the two 500s are a RAID1 for the OS and everything else. The 750s are inactive because I ran out of ports and I'm too cheap to buy a throwaway SATA controller when I know I should just buy a good SAS one at some point.

The whole system will be encrypted, with btrfs filesystems sitting on dm-crypt'd partitions. So far btrfs seems to be working nicely; it was easy to set up even from within the Ubuntu installer (as of 11.04 Server, anyway). I don't have the mirror online yet, and I'm not 100% sure what the process for that even is, but I think I can just add the completely blank, unformatted second drive to the btrfs array alongside the current one, and in theory it should mirror everything over, including the "random" data, though that might take a while.
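For the record, what I think that mirror step looks like (untested on my end, and the convert filters need a reasonably recent kernel and btrfs-progs, so treat this as a sketch with made-up paths):

code:
sudo btrfs device add /dev/sdb /srv/pool                             # add the blank second drive
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/pool   # rewrite data and metadata as raid1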

The system will also be running Subsonic and Deluge web servers, and possibly some other minor stuff, but really this is an experiment to see if btrfs is functional as a ZFS-alternative and mdadm/lvm/XFS replacement (which is what the system was doing previously).

teamdest
Jul 1, 2007

LamoTheKid posted:

From what I've read, OpenIndiana does not support ext4. Am I wrong here?

I honestly didn't even think of that. I'm so used to operating in Deb/Ubuntu/CentOS that I forget how narrow some stuff is in terms of options and tools. Sorry, yeah, I guess ZFS is the way to go then. Is XFS or JFS an option? I don't know OpenSolaris/OpenIndiana etc. very well.

teamdest
Jul 1, 2007
For anyone who is interested, I've been using ZFS on Linux (http://zfsonlinux.org/) for a little while now; it seems pretty drat stable, and there doesn't seem to be any performance hit compared to the XFS/mdadm setup I used to run.

It's obviously still beta software, but so far it seems to work fine; zpool/zfs commands behave as expected. I'm a little annoyed that setting up an encrypted array is a choice between (both sketched below):

cryptsetup each individual drive, enter five passphrases to mount them all, and have ZFS rebuild the pool every boot

-OR-

zpool over the bare drives, make a zvol and encrypt that, which then requires manual work in order to increase the size.
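For reference, the two options look roughly like this; device names, pool names, and sizes are all made up:

code:
# Option 1: LUKS each disk, then build the pool on the mapped devices
cryptsetup luksFormat /dev/sda
cryptsetup luksOpen /dev/sda crypt-a        # repeat per disk, one passphrase prompt each
zpool create tank raidz /dev/mapper/crypt-a /dev/mapper/crypt-b /dev/mapper/crypt-c

# Option 2: pool on the bare disks, then a fixed-size zvol with LUKS on top
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc
zfs create -V 2T tank/secure
cryptsetup luksFormat /dev/zvol/tank/secure
cryptsetup luksOpen /dev/zvol/tank/secure secure
mkfs.ext4 /dev/mapper/secure                # growing this later means resizing the zvol, LUKS, and the fs by hand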

I had initially planned to try out btrfs, but it currently doesn't support Raid5/6 style arrays.

Also, what are the current recommendations for SATA/SAS controller cards? My Perc-5i's worked out well for a while, but there were some finicky issues that eventually made me stop using them, and a LOT of cheap RAID cards use JMicron (rather than LSI) chips, which are not well supported under Linux.

teamdest fucked around with this message at 04:40 on Aug 3, 2011

teamdest
Jul 1, 2007

what is this posted:

You should probably use RAID6.

At which point (a 4-drive RAID6) you might as well use a striped pair of mirrors to avoid the write hole and probably pick up a bit of performance.

teamdest
Jul 1, 2007

gregday posted:

Well I can't get ZFS native on Linux to work. I built and installed the 0.6.0rc5 of spl and zfs, and the kernel modules all appear to load, but when I run any zpool commands, I get an error that the modules aren't in the kernel. I'm using Linux 2.6.32 so I don't know what the hell.

Do the kernel modules "appear" to load as in they don't throw an error message, or does `lsmod` actually say they are loaded?
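If it's the former, these should make it obvious one way or the other:

code:
lsmod | grep -E 'zfs|spl'     # should list zfs, spl, and friends if they really loaded
dmesg | grep -iE 'zfs|spl'    # any load-time errors show up here
ls -l /dev/zfs                # the zpool/zfs userland tools talk to the kernel through this node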

teamdest
Jul 1, 2007
You should be able to import your existing pools, contingent on what someone mentioned a bit ago re: partitions vs. whole disks on FUSE. I don't know whether /dev/zfs is supposed to be created dynamically, since I do my work on Ubuntu, which had a PPA repo available.

Just for fun, does modprobe zfs tell you what you expect?

I don't think they have a wiki, and their documentation right now is mostly just the official Solaris stuff, so you might have to hit up the mailing list and see if someone can help.

teamdest
Jul 1, 2007
Sorry, yeah, I'm not that great with Linux, though I manage to get by most days.

code:
xxx@yyy:/usr/local/crashplan/conf$ sudo ls -l /dev/zfs
crw------- 1 root root 10, 56 2011-07-30 01:59 /dev/zfs

teamdest
Jul 1, 2007

heeen posted:

What performance is to be expected from four WD20EARS (2TB) drives in a raidz configuration? I am getting ~60Mb/s through dd and this strikes me as a bit low. The hardware is a HP proliant microserver.

The random I/O performance of any RAID-Z/Z2/Z3 vdev is roughly that of its slowest single drive, because each block is spread across every disk in the vdev. I think someone linked this earlier in the thread, but it goes into detail:

http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance

In short: use more, smaller vdevs, and use mirrors instead of raidz if you need performance. 60MB/s is not that hot for a SATA II drive overall, but the Greens are low-power, low-RPM drives, so that might be a factor.
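As a concrete example of the difference (device names are Solaris-style and made up; these are alternatives, not two pools on the same disks):

code:
# one big raidz vdev: best capacity, but random I/O of roughly a single disk
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

# two mirror vdevs striped together: half the capacity, roughly double the IOPS
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0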

Additionally, bandwidth (MB/s) is not the only measure of hard drive performance; IOPS can be a major factor too, and is often a more noticeable bottleneck than raw throughput. There aren't many household uses of a file server I can think of that need a sustained 60+ MB/s anyway, and that's getting on towards saturating a gigabit link all on its own.

teamdest
Jul 1, 2007

heeen posted:

Right. It just struck me as odd when I started to fill the pool with large media files that even after some tweaking I couldn't get over 60Mb/s. Anyways, the 60Mb/s number coincidences with various speed benchmarks I found, e.g:


If zfs chose to begin writing at the end of the drive, that is.

Well, can you run some benchmarks and see if this is consistent? dd isn't the most exacting test of a disk's capabilities, after all.

Edit: also, how full is your array? Copy-on-write filesystems have issues when they get fragmented and mostly full, since finding a contiguous chunk large enough for new data becomes harder.

teamdest fucked around with this message at 13:43 on Aug 5, 2011

teamdest
Jul 1, 2007
Hard drive benchmarking is actually kind of complicated, and there are a lot of factors involved. Sending a file from your laptop to your file server is a pretty poor way to go about it, since anything from the network stack at either end to the files themselves can cause huge variance in the results.

Edit: if you don't know the specifics of the system you're testing with, the test is kind of worthless; there's no way to tell the difference between setting the test up wrong, running it wrong, or outright running the wrong test, versus an actual problem with the setup you're describing.

Edit 2: first test, get on the system directly at a console and measure the raw write speed with something like dd if=/dev/zero of=/filename.blah. Take CIFS, gigabit, and OS X out of the picture and just see whether the vdev and filesystem themselves are up to speed.
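Something along these lines, run at the server's console (path and size are made up, and conv=fdatasync assumes GNU dd, so on Solaris you'd have to force the flush some other way):

code:
# write ~4GB into the pool's filesystem and force it to disk before dd reports a speed
dd if=/dev/zero of=/tank/ddtest.bin bs=1M count=4096 conv=fdatasync
rm /tank/ddtest.bin
# caveat: with compression on, /dev/zero compresses to almost nothing and the number means very little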

teamdest fucked around with this message at 10:37 on Aug 7, 2011

teamdest
Jul 1, 2007

McRib Sandwich posted:

A bit more info. Ran this on the ProLiant again, on the Nexenta command line against a 4x500GB drive zpool, configured RAID 0 equivalent. Compression on, synchronous requests written to stable storage was enabled. Results:

code:
$ dd if=/dev/zero of=/volumes/tank1/file.out count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537MB) copied, 15.8415 seconds, 33.9MB/s
compared against a rough test of raw CPU throughput:

code:
$ dd if=/dev/zero of=/dev/null count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537MB) copied, 2.50262 seconds, 215MB/s
34MB/s on the native filesystem still seems really slow to me. These are old drives; I can't imagine they need to be realigned like the 4K-sector drives do. Any thoughts?

edit: The 34MB/s speeds are also in line with my timed tests in copying over large files to a iSCSI-mounted zvol that I created on top of the same pool.

Excellent, so it seems that the issue is somewhere in ZFS or in the drives themselves. Are they already built with data on them, or could you break the array apart to test the drives individually?

teamdest
Jul 1, 2007

McRib Sandwich posted:

Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool comprised of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration.

Anyway, I have free reign over these drives and can break them out as needed. What other tests should I run against them?

Well, I would expect your write and read speed to pick up on a striped array, so that's kind of strange. A mirror, not so much, since it has to write the same data twice. Can you try it locally on a plain striped array instead of a mirrored stripe? Just trying to eliminate variables. And could I see the settings of the pools you're making? Something like `zfs get all <poolname>` should dump them, though I don't know if that's the exact syntax.
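To be a little more concrete about that last part (hedging on exact syntax since I don't have a Nexenta prompt in front of me), what I'm after is roughly:

code:
zpool status tank                    # confirms the actual vdev layout (stripe vs. mirror pairs)
zfs get compression,recordsize tank  # the properties most likely to skew a dd test
zfs get all tank | less              # full property dump if nothing obvious jumps out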

teamdest
Jul 1, 2007

frogbs posted:

I'm building an array for video editing/file footage storage. We'd like to not lose everything, so I think the best option then is to buy the biggest array we can (12tb) and then use it in raid 10. Having an array that can potentially fail kind of defeats the purpose. We'll have it on a UPS, so I doubt we'd run into the write hole problem, but better safe than sorry!

If you want to "Not lose everything", you need to make backups. RAID is not a backup. Make Backups.

teamdest
Jul 1, 2007
For backups, the thing most people don't realize is that they need to be persistent over at least a fairly long period. A second array adds redundancy, but it doesn't cover 99% of the situations that actually require data to be restored.

Tape, DVDs, hard drives: the medium doesn't matter, but the data needs to be persistent. Once it's written, it is (within reason) never erased, only versioned as needed. CrashPlan/Mozy/Backblaze are backup solutions. Folders labelled by date with complete copies of your working directory are backups (though time-intensive and annoying ones). A git repository on a remote computer is a backup (I do this with documents). A second file server, unless it is large enough to hold the initial dataset plus all subsequent changes (deltas), is not a backup; it's redundancy in case your primary server dies.
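The git setup, for anyone curious, is nothing fancy: a bare repo on the remote box and a push whenever something changes (hostname and paths made up):

code:
ssh backupbox 'git init --bare /srv/backups/documents.git'   # one-time setup on the remote machine
git init                                                     # in the local working directory
git remote add backup ssh://backupbox/srv/backups/documents.git
git add -A && git commit -m "snapshot"
git push backup master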

teamdest
Jul 1, 2007

Methylethylaldehyde posted:

The logistics of having 5+ TB of poo poo on crashplan makes anyone without a FIOS connection cry. It would take me something like 17 months to upload all of my media to the crashplan servers.

And yeah, the idea is pretty much a non-starter, but a man can dream.

Either CrashPlan or Backblaze (can't remember which) offers the option of mailing them a drive with your data on it to speed up the initial backup. I always thought that was a cool idea.

teamdest
Jul 1, 2007

Thermopyle posted:

Seconding CrashPlan.

And a third from me.

teamdest
Jul 1, 2007
Does anyone have a recommendation for an internal SAS or SATA expander? Ideally one that takes SFF-8087 or regular SATA ports in and fans them out to SFF-8087 or regular SATA ports.

teamdest
Jul 1, 2007
An update on the ongoing saga of modernizing my file server:

Swapped some hardware around and picked up a few new things. Currently running:

AMD Athlon II X2 240
8GB DDR3
5x 1TB Drives
4x 500GB Drives
1x 30GB SSD
1x 16GB USB key
1x 120GB Drive

I've been planning to do some consolidation for a long time now, and it's finally happened: my file server box became a VMware ESXi 5 box. Now I can throw around VMs like nobody's business, and having to reboot the file server doesn't mean a 20-minute pause in all my work. To facilitate this changeover, I've wound up using the following:

Supermicro USAS-L8i controller
HP SAS Expander (Not actually installed yet)
Some generic 3-in-2 5.25" to 3.5" bay adapter thing

And for software:

VMware vSphere 5
Debian
ZFS-on-Linux

The installation of vSphere was completely painless, the client software is excellent (it's a lot like Workstation if you've used that), and the management seems sound. I've spun up a file server, LDAP server, applications server, and a couple of other things I'm mostly playing around with. Right now one 500GB drive holds the VMs and hypervisor storage, but that will eventually move to a RAID10 of the 500GB drives; I'm just waiting on the last cable I need. The 120GB drive will be shared scratch storage for the VMs so they don't have to hit the file server's array for internal work, and the 30GB SSD will host any VMs with high disk I/O requirements.

The other VMs I'll omit, but the file server is a Debian 6 box running Samba and ZFS on Linux, with possibly some iSCSI poo poo to come later. It hosts a 3TB RAID-Z with hot spare, which is being destroyed as we speak (copying the data away, roughly as sketched below) in favor of a 3TB Z2 array. I'll probably add a pair of 1TB disks at some point to expand this to a 4TB RAID-Z2 with hot spare, but that's off in the future.
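The copy-away step itself is nothing exotic, just snapshot plus send/receive onto a scratch pool (pool and dataset names made up):

code:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -d scratch
# destroy and recreate tank as raidz2, then run the same send/recv in the other direction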

The transition from physical to virtual was relatively smooth, minus two big caveats:

1) The card I chose is one of Supermicro's new "UIO" cards: the components are on the top of the board instead of the bottom, and the bracket is reversed. At the moment it's just sitting in the case (yay zip ties), but long term I'm going to need to put an extension on the bracket, looks to be about a centimeter or so, to line it up with the PCI hole above it.

2) ZFS effectively requires physical access to the drives it uses; it works much better on whole devices than on partitions (in terms of speed and some unusual errors I'd seen). However, in order to use RDM (Raw Device Mapping), vSphere's version of direct drive access, the SATA/SAS controller needs to, and I quote, "export the device's serial number", which apparently my motherboard controller did not. The Supermicro card does, so all is well, but bear in mind that if you're using the integrated controller there's a high chance of no RDM ability at all (the vmkfstools step is sketched at the end of this post).

3) Yes, I said two, shut up. Right now this isn't an issue, but eventually running 5-10 VMs over a single physical network port might start to bog down, especially since the file server gets hit pretty hard in the evenings and I've seen that alone just about max out a gigabit connection. A cheap one- or two-port PCIe x1 network card will fix that once it becomes a problem, but it's still something to think about.
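Back on the RDM point: for anyone trying this, the pointer files get made from the ESXi console with vmkfstools, roughly like so (the device identifier and datastore path are examples; -z is the physical-compatibility flavor):

code:
ls /vmfs/devices/disks/                 # find the disk's naa./t10. identifier
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/fileserver/disk1-rdm.vmdk
# then add disk1-rdm.vmdk to the VM as an existing disk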

teamdest
Jul 1, 2007
Running an RC of FreeBSD is like running a stable version of most other operating systems. That said, I run ZFS on Linux under Debian and have seen no performance degradation compared to the same drives in an mdadm/XFS array or a FreeBSD ZFS array. Speed and stability seem to be the same, though I think the FreeBSD 9 RC has a newer zpool version than the Linux port.

teamdest
Jul 1, 2007

Wheelchair Stunts posted:

You are awesome. :) As an aside, are there anyways to export the zpool on an Ubuntu zfs (via SPL) to import on a (Open/)Solaris stack? I swear I've been tracking the thread and did a quick browse.

You should be able to just do `zpool export`; is that not working? That's how I went from Debian 6 to OpenSolaris.
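The whole round trip is just this (pool name assumed):

code:
# on the Ubuntu box
zpool export tank
# on the Solaris/OpenIndiana side
zpool import          # with no argument, scans attached disks and lists importable pools
zpool import tank     # add -f only if it complains the pool wasn't cleanly exported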

teamdest
Jul 1, 2007

DNova posted:

I just ordered an N40l with 8tb of disk, 8gb of ram, and some extras for $840.40 shipped. Not bad. Do I really need a NIC for it?

considering the other specs and purchase price, you should get 8 nics.

teamdest
Jul 1, 2007
From experience, I seriously doubt you'll get a Minecraft server of any reasonable performance out of an N40L. That server (the vanilla one, though Bukkit is no better in terms of resources) is a huge resource hog; it will easily eat 2GB of RAM and as much processing power as you can throw at it.

teamdest
Jul 1, 2007

Telex posted:

Okay, I have a FreeNAS 7 machine up. It has a ZFS pool with two vdevs, one are normal format drives (4, raidz) and the other are WD advanced format (4, raidz) for a single pool.

What's my degree of difficulty here in upgrading the whole thing to OpenIndiana and upgrading the existing pool to the latest zfs version without having to erase things and copy over from my backup raid. Which is a scary JBOD raid attached to a windows machine because I can't get things to work in FreeNas with the port multiplier that the external raid works with so I really don't want to try a full 10TB restore from backup JBOD through windows and cry when it all breaks.

Export/import and upgrade are non-destructive operations. I don't know what WD Advanced Format means for this, though.

teamdest
Jul 1, 2007
I can't speak to FreeNAS specifically, but I ran OpenIndiana and napp-it off a USB key for a year or so, and while the web interface was suck-slow, the arrays were unaffected.

teamdest
Jul 1, 2007

Prefect Six posted:

Can't back up a mapped drive.

So from what Fishmanpet said it looks like I can't install the linux version on FreeNAS. Does anyone know any offsite backup solution that will pull directly from my NAS or that I can manually upload to? I'm thinking maybe Amazon S3 until I need more that 5 gigs of space.

This is going to sound a little on the crazy side, but if you have the capability to spin up a Linux VM, you can mount your shares over NFS and run CrashPlan in the VM to back them up that way.
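In concrete terms the VM side is just an NFS mount (hostname and share path are placeholders):

code:
sudo apt-get install nfs-common
sudo mkdir -p /mnt/nas
sudo mount -t nfs freenas-box:/mnt/tank/share /mnt/nas
# point the CrashPlan client at /mnt/nas and it treats the mount like local data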

teamdest
Jul 1, 2007
The IBM M1015 is my current recommendation for a cheap controller. It needs to be reflashed to the standard LSI SAS2008 firmware/boot ROM, which can be tricky since some motherboards won't do it, but once it's flashed it's been perfect. I've got an extra one at the moment because I thought I'd ruined it by screwing up the flash, but it turns out I'm just dumb sometimes.

With the IT-mode firmware, the M1015 supports 3TB drives and does straight passthrough without requiring any drivers, at least under OpenIndiana and Solaris 11.
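For anyone about to do the reflash, the procedure that gets passed around is roughly the following, run from a DOS boot disk with the files out of LSI's 9211-8i firmware package; treat it as a sketch and double-check a current guide before zapping anything:

code:
megarec -writesbr 0 sbrempty.bin           # wipe the IBM SBR
megarec -cleanflash 0                      # erase the existing flash, then reboot
sas2flsh -o -f 2118it.bin -b mptsas2.rom   # write the LSI IT firmware and boot ROM
sas2flsh -o -sasadd 500605bxxxxxxxxx       # re-enter the SAS address from the sticker on the card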

teamdest
Jul 1, 2007
So what's everyone's opinion on server OS/software nowadays? I'm having some aggravating problems with Solaris 11 that I honestly don't feel like taking the time to debug without some kind of support, so I'm looking to swap to something easier.

I'm pretty set on ZFS at this point, there's just nothing that comes close, so I see my choices as:

1) Solaris 11 (zpool v31) - having weird, not-consistently-reproducible date/time software issues; napp-it is kind of a pain but I can put up with it, I suppose.

2) OpenIndiana 151a5 (the zfs feature-flags/tags system instead of a plain version number) - haven't heard much about this or how well it's working out, and I'd still have to put up with napp-it.

3) NAS4Free - probably still has the FreeBSD-related CIFS speed issues that I've seen before.

4) FreeNAS - not seeing an advantage here over NAS4Free; older zpool version, etc.

Am I missing anything? Is a life of OpenIndiana hell my only option? At this point I might as well skip the whole napp-it thing and just do OpenIndiana + iSCSI and make my file server a CentOS VM like I used to.

teamdest
Jul 1, 2007
Are the major FreeBSD + Samba speed problems resolved at this point? I honestly haven't been following it and I should be. A major portion of the file server's usage is Windows shares and NFS to some VMs, with the third major piece being an iSCSI (COMSTAR) target for the VM server; since I'm running over 1-gig copper I doubt I'd take a hit moving that over to NFS, I'm just lazy and set in my ways.

Is NAS4Free worth checking out yet? I tried it, but it had some major problems with the 3TB drives I was using, and it didn't seem to be a tractable problem from what their (nonexistent) support and forums could tell me.

FreeNAS seems like a nonstarter; zpool v15 would lock me out of dedup, RAID-Z3, and a bunch of speed improvements.

I suppose I should just buckle down, fill in the holes in my knowledge where Linux and BSD differ, and get this thing running properly.

teamdest
Jul 1, 2007

Misogynist posted:

We're actually using it for scientific computing (analyzing extremely high-resolution microscope images of mouse brains) and performance is really rather good. I'm not sure which speed problems you're referring to.

The last time I worked with it (which was FreeBSD 7, admittedly), the Samba setup left a LOT to be desired: speeds to and from a high-performance workstation were in the ~10MB/s range over gigabit copper, whereas running Linux/XFS + Samba I was consistently getting 70-100MB/s (near total utilization once you account for overhead). If that's a solved problem, then I'll start investigating a changeover.

teamdest
Jul 1, 2007

DrDork posted:

I find it's hit or miss. Running the latest version of NAS4Free, about half the time I get ~90MB/s over CIFS on a GigE connection between two Intel Pro cards. The other half of the time it's more like 15MB/s, and I have yet to discover why.

See, that's kind of what I'm worried about. I do all my *nix-to-*nix stuff over NFS or iSCSI, so that's a non-issue, but the main use case here is as a file dump for Windows boxes, and that means CIFS pretty much exclusively.

teamdest
Jul 1, 2007

DrDork posted:

If you get a good one, RAID cards can offer you better performance, better reliability, and a bunch of extra ports. But you're going to pay for it--that $40 Rosewill RAID card is a step down from whatever's built into your motherboard, and should be avoided like the plague. These days you don't really NEED RAID cards like you did in the past, especially if you're going to go with something like ZFS/RAIDZ where you wouldn't be using the RAID card as anything more than a port multiplier, anyhow.

With the caveat that if you need more ports than your motherboard has, most onboard RAID controllers won't talk to a SAS expander.

teamdest
Jul 1, 2007
8GB is more than enough for ZFS; just don't enable dedup or compression and you should be fine.
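To put the dedup caveat in numbers, the usual rule of thumb is a few hundred bytes of dedup table per block, so (ballpark, not gospel):

code:
# ~320 bytes of DDT per block is the figure that gets thrown around
# 8TB of unique data at a 128KB average block size:
#   8TB / 128KB  = roughly 60-65 million blocks
#   ~62M * 320B  = ~20GB of dedup table you'd want to keep in RAM or L2ARC
# which is why "just don't turn dedup on" is the easy answer on an 8GB box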

teamdest
Jul 1, 2007

fletcher posted:

The OP doesn't even mention the N40L!

Actually going through it right now; that is a glaring omission. My fault, will be corrected shortly.

teamdest
Jul 1, 2007

movax posted:


Excellent :pervert:

Finished for the moment FYI. Any recommendations?

teamdest
Jul 1, 2007

IOwnCalculus posted:

I think the main reason previously was ZFS versions being more current in OI than in FreeBSD, but I think that's no longer the case?

I was using Solaris because of bad CIFS performance on a previous FreeBSD installation. The CIFS server on FreeBSD left a lot to be desired, whereas I found the Solaris 11 implementation fantastic (and COMSTAR iSCSI was awesome too); however, using FreeBSD 9 now, I don't seem to be hitting the same performance issues I had before.

teamdest
Jul 1, 2007

hayden. posted:

I didn't really see a better place to ask this. What's the recommended software for securely wiping only a single partition (the one that currently has the OS on it, so I'd need bootable option)? I'm returning a laptop I bought that has one-button restore, so I just want to wipe the OS partition and restore it before I return it to the store.

You could do this pretty easily by booting any Linux live CD and running `dd if=/dev/zero of=/dev/sdXY`, where sdXY is the partition you want to overwrite, but this relies on you having some underlying Linux knowledge (finding the right partition in the first place, etc.).
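Spelled out a bit more, since getting the wrong device letter here is unrecoverable (sdXY is a placeholder you have to fill in yourself):

code:
sudo fdisk -l                            # identify the OS partition before touching anything
sudo dd if=/dev/zero of=/dev/sdXY bs=1M
# dd exits with "No space left on device" when it hits the end of the partition; that's expected
sync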

teamdest
Jul 1, 2007

Footsteps Falco posted:

Can I start in a RAID 5 and change to RAID 6 without data loss?

Depends on the implementation. With hardware RAID you have to check the firmware. I don't think ZFS supports it. Dunno about mdadm.

teamdest
Jul 1, 2007

Civil posted:

And who knows what the hell is up with Newegg's WD Red drives. We've all just attributed it to shoddy handling.

This really has been a persistent thing for YEARS now. I remember ordering hard drives back in... 2005 or 2006 to build a computer, and 3 or 4 out of 5 were DOA. It's mostly down to their lovely packing: UPS is rough on packages by nature, and Newegg just doesn't account for that in their packaging. Ordering from Amazon has been a 100% success rate for me.

teamdest
Jul 1, 2007

kiwid posted:

drat, and I bought 5, all 5 came in single boxes. Guess I got lucky. Still waiting on my 6th from another vendor so I'm not sure if they work just yet.

There we go, this is how they used to come, more or less. I got a couple of sets from them back in ~2009-11 and roughly half had to go back. I thought the individual box was a bit better, but it seems like it's still a crapshoot. Maybe a different SKU or something?

Edit: Responded to the wrong post, sort of.
