Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

WickedMetalHead posted:

Yeah, I've decided to go with software RAID; I have since discovered that decent hardware RAID cards cost a loving arm and a leg.

Hardware RAID is pretty expensive, but Jesus gently caress is it nice to have once you're done. I can get sustained reads/writes on my 6 disk array in excess of 300 MB/sec. Random read/write performance is also pretty loving awesome for doing high volume file transfers. Sure the card was like $500, but it was so worth it in the long run.


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
I've been looking at making a NAS box ever since I ended up packing 14 HDDs into my CM Stacker case.

After poking around, ZFS seems like the best thing going, but I'm a bit confused as to how you can expand your storage pool without backing it up, tearing everything down, and rebuilding it from scratch.

I have an Areca 1220 RAID5/6 card that I'm using under Windows, and it can expand the raidset just by plugging in a new drive and telling it to go. The downside is if you gently caress something up, your data goes poof because it's impossible to recreate an array with the same initial configuration, which led to me losing all my poo poo.

This same card is apparently natively supported by the current revision of OpenSolaris, which means I'll probably use it as a JBOD host for the disks. My problem is I'm a broke motherfucker and I can't possibly populate all 8 ports with 1TB+ drives without taking out a loan of some kind.

My question basically boils down to this: is there a set of 'best practices' for raidz1/2 and ZFS that avoids the backup/break pool/rebuild issue? If not, is there a way to minimize the amount of data I have to back up to a different system before breaking and rebuilding? From what I read here, you're basically hosed if you want to increase the number of disks in your raidz. I hope there is a solution to this that doesn't end up with me needing 5TB of unrelated storage space in order to increase the capacity of my raidz.

Also, any ZFS/raidz-related guides would be really helpful; that blog post was just informative enough for me to quickly get way over my head and shoot myself in the foot.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Profane Obituary! posted:

They recommend no more than 8 disks per raidz. You cannot live-expand a raidz, but you can add a new raidz and add it to a zpool. A raidz (or mirror, or single disks) is the underlying storage, while a zpool is what collects all those storage devices together into a place where you can make a filesystem.

You typically want to use the same size disks for your raidzs because it will only use the smallest drive's worth of data (if you have 500GB, 1TB, 1TB, it will be like you have 3 500s).

Also I've recently started playing with Nexenta. It is an OpenSolaris distro that ports the Debian userland to live on top of the Solaris kernel. Not every package is ported, but they are actively working on porting more. It makes upgrading easy, and they have rtorrent working as a package, so if you use or like rtorrent then you might want to look into it.

Yeah, looks like I'd end up with a wonky RAID 50 when all was said and done. Nice to see that I can get a packaged version of rtorrent; looking at the fuckery required under OpenSolaris was making me sad. Between rtorrent and SABnzbd, my main system will finally be able to turn off at night.


My one other question is whether it's possible to use full-disk encryption with ZFS. While not critically important, it would be nice to secure all my accounting files and documents. I suppose I could just make a TrueCrypt virtual drive and map it or something.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

The_Last_Boyscout posted:

Does anyone know of good mirroring software? Like if I move files from one directory to another on the source, the backup would do the same move instead of deleting and copying back over.

Look at rsync, it's designed for poo poo like that.
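For reference, a minimal rsync mirror sketch (the paths are placeholders). One caveat: plain rsync treats a moved file as a delete plus a fresh copy rather than a true move, so it only roughly matches what was asked for:

    # mirror /data to /backup, deleting anything that no longer exists on the source
    rsync -avh --delete /data/ /backup/
    # run with -n (dry run) first to see what it would actually do
    rsync -avhn --delete /data/ /backup/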

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

H110Hawk posted:

Having done our storage system where it has to scale past a handful of servers, the cheaper way almost always adds up to the better way. Disks don't fail that often under backup use. One guy, $15/hour, full health benefits. 30 minutes per server per failed disk. You have to beat $15 for "increased reliability". You probably get at most 1.2-1.5 failed disks per rack per day. I imagine any increase in cost would be at a minimum $100 per server. 10 servers in a rack, $1000/rack, $10,000 total. That is 666 drive swaps. You're hoping to bring failure from 1.5 to 1.2 reliably? 1.5 to 0.5? Or just reduce the time spent per disk from 30 minutes to 20 or 15?

What do you propose the next cheapest solution to be that would beat their cost and margin?

For the cost, they can replicate data across 2 or 3 different racks and not give a poo poo if an entire rack burns down. And yeah, unless you're going to be paying some poor Chinese boy to change tapes in the 500 LTO-4 drives you'd be using for nightly backups, it makes way more sense to just say gently caress it and throw together a new rack and let the data replicate over for added parity.

I kinda want to see the block diagram for the higher level de-dupe and replication. I shudder to think what the block/chunk database looks like for the de-dupe function.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

bob arctor posted:

Did I miss something, or do they have only the single motherboard NIC per unit? Replicating 67TB over that might take a while.

Say you replicate your data twice in a RAID1 setup, so now you have 3 copies of the data across 3 different bricks. What are the chances that 2 bricks will fail completely in the time it takes for it to replicate across that gigabit link? It takes about 10 days to completely replicate the data over a single GigE link. The chances of 3 RAID6s failing within 10 days of each other are so retardedly low as to approach zero outside of acts of god.

Hell, I'm betting like 5% of their RAID6s are running either partially or completely degraded. And they don't give a poo poo.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

roadhead posted:

Ok, cheaper UPS.

Yes it has to be able to transcode 1080P x264 and deliver it to my PS3 over the gigabit network. That will probably be the most taxing thing the box does. I would also like to use it for Asterisk, and other fun random projects that I dream up. Also I am a programmer by trade so you never know what this box might end up doing.


The server will be ripping/encoding DVDs to back them up and make getting at them easier.

I plan on building something similar, using the Norco 4220 SAS box. I'm planning on using VMware ESXi and a shitload of loopback voodoo to use ZFS as a large storage pool accessed via Samba/iSCSI/NFS by the various computers in the house, with VMs run as appropriate for whatever programs people want to use. We'll see how well ESXi plays with the hardware I'm getting, and how much of a prick all the setup will be.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

roadhead posted:

So FreeBSD or Solaris for hosting the RAID-Z ?

ESXi as the base OS, with the OpenSolaris VM given direct access to all the disks. That way I get the best performance out of the dozen or so VMs I'll be using, while still getting to use ZFS for all the data storage.

I've got no idea if I can get the OpenSolaris VM to boot from its ZFS volume, but if it can't, I have a ton of 250GB disks I can use as a boot disk for the VM.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

adorai posted:

Probably the unixy-ness of it. Make no mistake, it is far more powerful, but less easy to maintain than, say, Openfiler.

If Sun were to open source Fishworks, you can bet thousands of geeks would flock to it at once.

If it had a GUI for configuration, I'm sure a lot more people would use it. Trying out esoteric command line statements on your only copy of your precious data with only a --help to guide you would be a bit nerve wracking.

Hell, the only difference between Openfiler and Solaris is the nifty web UI.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

roadhead posted:

Or you can run FreeBSD (either as a guest of ESXi or on the metal) but I'm not aware of any Linux distros that have STABILIZED their port of ZFS. Probably only a matter of time though :)

Pretty much, but given I know poo poo all about FreeBSD, Linux, and Solaris in general, I might as well learn the one system with native support for it. That, and you can get Nexenta, which gives you the OpenSolaris kernel with the Ubuntu userland, so you can use all those fancy programs you don't find on a regular distro.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

PrettyhateM posted:

Would you guys suggest Nexenta? I have been running an opensolaris box with raidz1 for the last six months and am getting sick of the process to update apps and such.

I haven't really played with it outside of occasional fuckery in a VM, but using apt-get for packages was pretty loving nice once I edited in the repositories I wanted to use. It comes as a LiveCD, so I suppose you could play with it and see if you like it.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Weinertron posted:

^^^^^
Yes ZFS is for you. ZFS is loving amazing. This box that I talk about below was built out of random hardware lying around, and is using both onboard nic and an add-in card.

My friend brought over his Opensolaris box to let me dump some data I had backed up for him to it, and I'm seeing transfer speed slow down as time passes. Furthermore, I seem to have overloaded it by copying too many things at once and I lost the network share for a second. I ssh'ed into it, and everything looks fine, but transfer speed keeps dropping from the initial 60MB/s writes I was seeing all the way down to 20MB/s. Is everything OK as long as zpool status returns 0 errors?

I don't know much about ZFS; how full should the volume be allowed to run? It's on 4x1TB drives, so it has about 2.67TB logical space. Of this, about 800GB is available right now.

The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60MB/sec pretty easily if it's large files on the outer sectors, but as you fragment the files, move smaller files, or move toward the inner tracks, the drives will slow down quite a bit. I know that internal disk-to-disk transfers of files on my computer will go anywhere from 150MB/sec (SSD to RAID array) all the way down to ~15MB/sec (slow assed disk to RAID array).

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

roadhead posted:



Just got these in my hot little mitts within the last hour. Had to rush back to work before USPS got there though; I need more bits and pieces from Monoprice to get power to the fans+drives.

So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks.

Given the failure stats for RAID5 posted earlier in the thread, you want to use RAID6/raidz2. The other problem with 8+ drives is that you end up with a huge chance of a non-recoverable read error during a rebuild, even with 2 parity drives.

Honestly, I'd take the space hit and make it 2 RAIDZ2 arrays. 9TB of useful space, and it's about as fault tolerant as you're ever going to get without taping it and hiding it in Iron Mountain. The lost capacity won't really cause many problems when you can just add in another vdev made of 2 TB drives 6 months from now when they're $100 each.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

adorai posted:

How much money is basically no money?

"I billed you fuckwits an extra 3 hours of overtime and paid for the damned thing myself" would probably be the limit of the budget.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

adorai posted:

8x http://www.newegg.com/Product/Product.aspx?Item=N82E16822148395&cm_re=500gb-_-22-148-395-_-Product
2x http://www.newegg.com/Product/Product.aspx?Item=N82E16816132029&Tpk=rosewill%20esata
1x any PC
1x copy of opensolaris

It's ghetto, but way less ghetto than a single 2tb drive and comes in under $1000. The real advantage is you can then build another one that you store off site, and do incremental snapshot backups of any cifs or nfs volume easily.

Nthing OpenSolaris for drat near anything data-related. Easy to use, and you can snapshot the data a hojillion times without really taking any hits to the storage size. Plus you can do all kinds of goofy poo poo with it once you learn about some of the enterprise-level features.
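A rough sketch of what those incremental snapshot backups to the off-site box look like (the pool, dataset, and host names here are made up):

    # take a snapshot and seed the offsite box with a full send
    zfs snapshot tank/data@2009-11-20
    zfs send tank/data@2009-11-20 | ssh backupbox zfs receive backup/data
    # later snapshots only ship the blocks that changed since the last one
    zfs snapshot tank/data@2009-11-27
    zfs send -i tank/data@2009-11-20 tank/data@2009-11-27 | ssh backupbox zfs receive backup/data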

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

Anyone here running (Open)Solaris in a virtual machine and use it as ZFS fileserver? If so, how's the performance and longterm stability of the VM?

I've had my run with OpenSolaris up to today and am switching back to Windows, but am still looking for a way to keep using ZFS for my data. I'm not so fond of a dedicated storage box, since my main machine keeps running 24/7, too.

I run it through ESXi: one old 320GB drive as the boot drive with the OpenSolaris image, with the other virtual machines on their own ZFS filesystem. I have virtual machines for PS3 Media Server and uTorrent and poo poo. Stability has been fine so far, and if you have some old SSDs to throw at it, the ZFS performance just gets silly.

Methylethylaldehyde fucked around with this message at 15:07 on Nov 21, 2009

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
Welp, aside from the fact that OpenSolaris has some kind of bizarre bug with Intel multiprocessor computers using the regular ata driver, my new media/noise box is doing very well.


I decided to use the Norco 4220 gigantic case of doom because, frankly, between the three computers in the house we probably have 15 drives.
I also learned that they make reverse breakout cables, which let you plug your SAS backplanes into a regular old SATA port. That saved me quite a bit of money, because I have an old Areca 1220 RAID card I can use.

Currently my raidz1 pool is 4x 1.5TB WD Green drives, but once I migrate data over, I'll be adding another 4x 1TB raidz1 vdev to the pool, and possibly a bunch of random-sized drives acting as random scratch/download disks. I should have enough controller ports available to drive all the disks, so we'll see.

The one thing people never mentioned in the reviews of this case is how god damned LOUD it is. The thing has 4 80mm Delta screamers in it. I can hear it through a closed door, and can't hear other people talking in the next room if I'm near it.


On the other hand, once I figured out how the gently caress OpenSolaris worked, I was able to set up xVM, 3 Windows images for use as various media servers/download boxes, and got VNC working for all the images and misc poo poo set up. Hell, the hardest part of that was figuring out how the gently caress to make Samba work and allow Windows clients to log into it. Currently migrating the contents of my old RAID5 onto the larger raidz1, then breaking it and moving the hardware over.


I'll have pictures of the build sometime later.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

necrobobsledder posted:

The thickness of the metal (merely "adequate") along with the flimsiness of the drive trays was mentioned in a lot of reviews I've read, along with the backplanes having the molex connectors hot-glued on there poorly. I know I shouldn't expect much for a $340 4u case, but I want to feel like my data is physically sound at least. The poster I quoted said that he could hear the fans in the next room over, and given how sounds bounce in a garage, it'd be awful.

Once the top is on, it's not quite so bad, and I'll probably undervolt the fans at some point to try and quiet them down. On the other hand, I'm putting it in the crawlspace under my house, and 8 inches of insulation should be plenty to keep it from driving me nuts.


My case, I guess, was updated, because the molex connectors had these soldered-on clamps that held them in place. I was able to shove them around quite a bit and none of them fell off. No hot glue to be seen.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

NeuralSpark posted:

I virtualized my file server using VMware ESXi and raw device mapped my 5 RAID drives to the Linux VM where I use XFS / mdadm to publish the stack out via NFS.

It's one of the better decisions I've made in a while, as it's allowed me to use the machine that's already running to host test VMs for work crap.


Virtualization is the greatest thing ever. I have about a dozen VMs with various versions of Windows on my OpenSolaris box, all tied into a big assed RAIDZ2 array. Since I suck at any *nix-based system unless it has a precompiled pkg to install, I use it for all my torrenting, media serving, and porn collection needs. It's also real nice that VirtualBox supports RDP natively, so I can just dial into the VM and use it almost like a thin client on my regular system.

I even fixed the fuckoff loud fans in my Norco 4220. Turns out you can replace the fan bracket with 3 120mm fans ziptied together, then ziptie the fan brick to the case through the holes the original bracket mounted to. Result: A much much quieter system with about the same airflow. No longer does this thing sound like someone has a 727 spooling up for takeoff in my bedroom.

Methylethylaldehyde fucked around with this message at 15:41 on Apr 25, 2010

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

IT Guy posted:

I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did it anyway.

Don't do it.

I RAID-5'd 4 of them and had no problems. The random IOPS is kinda bad, but I basically just used it for bulk storage and serving sequential media files to hosts, and it worked out well enough.

Now that I've crammed them into a ZFS raidz, its performance has gone up by about 15% for the usage patterns I use.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

I would start to worry when checksum and I/O errors start appearing in zpool status.

Use smartmontools; it works nicely. Getting it to loving run right requires a bit of fuckery, but now I have scripts that check for drive temps, head parking, and other poo poo. Heads are currently at about 3500 parks, with it going up 2-3 every hour or so.
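Something like this is all those scripts boil down to; treat it as a sketch, since the device names and the -d type flag vary by controller:

    # dump everything SMART knows about a drive
    smartctl -a -d sat,12 /dev/rdsk/c7t0d0
    # just the attributes worth watching: temperature and head parking
    smartctl -A -d sat,12 /dev/rdsk/c7t0d0 | egrep 'Temperature_Celsius|Load_Cycle_Count'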

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
I have those drives, and depending on the revision, might have those exact drives, and my OpenSolaris raidz works fine.

You can't expand them in the traditional sense like you can with RAID 5, but you can add more vdevs to a ZFS pool to expand it.

Example: a 4-disk raidz gives you 4.5TB of useful space. If you want to expand that, you can make another 3-disk raidz and add it to the original pool. You can't add a single disk to an existing raidz, but that capability might be coming in a future update; a lot of blogs are talking about how people are clamoring for it.
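In command form, that expansion looks roughly like this (the disk names are just examples):

    # original pool: one 4-disk raidz1 vdev
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    # later: grow the pool by adding a second raidz1 vdev (note: adding a vdev can't be undone)
    zpool add tank raidz c1t4d0 c1t5d0 c1t6d0
    zpool status tank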


Also, if you wait another few weeks, we should have a stable build 134-based release candidate available, which should cut down on the headaches getting everything set up. It also gives you some cool features, like dedupe. Most TV series will show a 5-15% savings, because the intro and credits share almost entirely the same data.
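Turning dedupe on is a one-liner once you're on a build that supports it; it only affects data written after you flip it, and it wants plenty of RAM for the dedup table. The dataset name here is just an example:

    # enable dedup on the dataset holding the TV shows
    zfs set dedup=on tank/tv
    # the DEDUP column shows the achieved ratio as new data lands
    zpool list tank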

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

That a verifiable fact? Because I'm sure if the episodes have been encoded in two-pass VBR, bit allocations for said sections will be different.

Yeah, that's the savings I saw from the half dozen TV shows I have on my media box now. You might end up with some special snowflake x264 encodes that are somehow different for each and every block, but I'm guessing you'll still see some savings.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

IT Guy posted:

I have a (probably stupid) question about growing an mdadm RAID 5 array.

Does it redistribute the current data evenly on the current array to the new drives after adding them or is it just newly written data? The latter doesn't seem logical.

It recalculates and redistributes all the data, and it takes god damned forever.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

dietcokefiend posted:

What is the general consensus on using two drives of identical size but different brands to further reduce drive failure in a RAID setup? I have two drives of equal size on hand and was planning on doing software RAID1 on my ubuntu box. Speed wise they are pretty darn close.

Make sure your array will truncate (is that the right term?) off the last 1GB or so, because if the drives have slightly different sizes, you can run into a case where the array eats itself alive when it can't address those last few sectors, or they just refuse to play nice.
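One way to do that with mdadm is to cap the member size a bit below the smaller drive, so a replacement that comes up a few hundred MB short still fits. A rough sketch for ~1TB drives; the size value is illustrative and is given in KiB per device:

    # build the RAID1 using slightly less than the full capacity of each member
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --size=975000000 /dev/sdb1 /dev/sdc1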

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

The_Frag_Man posted:

Hey guys,

I bit the bullet and bought a Norco 4220. Now I need help deciding on what parts to put in it.

Current parts decided on:

AMD PhenomII X6 1055T - because it's 250 bux for 6 cores.
120mm fan bracket from cavediver

Now.. what about the rest? Could someone recommend a good motherboard for this CPU at least? Hopefully with good-enough on board video that wont suck a lot of juice.

I thought about asking in the main hardware thread, but I figured that I would get better answers here. Thanks a lot.

Edit: I'm thinking the Gigabyte GA-890GPA-UD3H board looks really good for the price.

Just make sure it has 3+ PCI-E 8x or better slots on it. The Intel SASUC8I HBAs for like $160 were a pretty damned good deal. They also work like a dream on OpenSolaris.

You can actually zip tie 3 120mm fans together and sorta wedge them in place and save yourself the cost of the 120mm fan bracket.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

EnergizerFellow posted:

Heads up that the Intel SASUC8I is a card only. No low-profile bracket or cables, which may be a pro or con, depending on your needs. Get the LSI SAS3081E-R if you need a low-profile bracket.

Once you get the Intel SASUC8I card, make sure to re-flash it with the normal LSI firmware (the Intel firmware is EFI-only).

Uhh, mine worked fine for a regular BIOS start, and do you have a link to the LSI firmware for it?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
Oh yeah, I installed smartmontools on OpenSolaris, and it's able to read the SMART data off the drives without any sort of issues.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

The_Frag_Man posted:

Those expander cases sound VERY promising.

Yeah, they should be short enough that you can fit them back to back in a double sized rack.


As long as you're going to be around to support it, I really can't recommend ZFS on an OpenSolaris box enough. Native kernel-mode iSCSI, NFS and CIFS drivers, and a file system that, as far as I can tell, refuses to die on you unless you gently caress up and delete it.

I have that same Norco 4220 case; you can replace the fan bracket with 3 120mm fans zip tied together if you're feeling half assed, or you can order an actual fan bracket from a few places on the internet. The drive sleds are a bit flimsy, but then again, they aren't exactly load bearing structures, so who gives a poo poo?

I have my current box set up as a media server, 8x 1.5 TB Western Digital Green drives in a RAIDZ2 (raid6) array. The filebench media benchmark showed ~220MB/sec reads and ~180MB/sec writes. You could get better performance by making two 4 disk RAID 5s and striping them together, but I went with increased reliability over increased speed.

I have a 2 port Intel server gigabit card running in teamed mode to a managed gigabit switch, and I can saturate any single gigabit connection, and run two separate clients at about 90MB/sec via CIFS.

Once you get used to the few things Solaris does differently compared to Linux, it's very easy to manage.

As far as reliability is concerned, between the block checksumming, the ability to create instant snapshots at arbitrary times, and the relative indestructibility of the actual pool, your data is in safe hands. The zpool is completely portable: if something manages to completely gently caress your boot drive or sets fire to your motherboard, you can reimage a spare drive, zpool import, and be back in production in about 20 minutes. I actually did that myself; the drive I was using for my boot disk took a poo poo, so I reinstalled Solaris, ran the command, and all my data was back in about 3 seconds.
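The recovery itself is about this exciting (pool name is an example):

    # fresh OS install on a new boot drive; see what pools the attached disks advertise
    zpool import
    # pull the data pool back in (-f because the old install never exported it cleanly)
    zpool import -f tank
    zpool status tank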

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Melp posted:

Is there enough room in that 4220 for 3x140mm fans? Delta make some insane 350+ CFM 140mm fans that require a 24V power supply and I really want to check them out.

I remember being all excited about using RAIDZ2 on a future machine but I got talked out of it, and I can't remember how. I'm probably going to end up moving my current (gaming/general computing) setup over to the 4224 and add a massive RAID array to it. All of this will be on Win 7 and 16 of the drives will be on one overpriced RAID card or another, probably in 2 8-disk RAID6 arrays. Has anyone used LDM? I'd like to get more info on it before I consider using it to combine the two RAID6 volumes. The Wikipedia article says there are a lot of problems with it, but I can't figure out if/how they would impact me.

No, 3x120mm fans barely fit, so trying to shoehorn 3x140mm fans won't work at all. I suppose you could use two of them, but I'm not sure how the airflow would work without a bracket of some kind to prevent airflow around the fans. Oh god, 24v 100w Delta screamers, why god why?

I had a hardware RAID card for a while, and while it was nice, dealing with gaming-related bluescreens on a card that REALLY doesn't like hard shutdowns left me with issues a few times. I eventually decided that it was easier to have a separate box with all the drives in it, and a nice little Cooler Master Centurion case for my gaming stuff. I love it. Enough local storage to keep all my crap on, enough network storage to archive every Blu-ray disc I own several times over, and it's in a rack out in the garage, where I never have to listen to it again!

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Melp posted:

I have an unhealthy obsession with really powerful fans, but I guess for the Norco, I'm going to have to stick to the measly 250 CFM 120mm fans...

Did you have bluescreens/hard shutdowns as a result of the RAID card? The only hard shutdowns I've had in several months are because of power, and I'd be getting a UPS for the machine. I would be moving to two separate machines in time, but I came up with this idea of a combined super-setup to offset some of the cost.

The bluescreens were because of lovely drivers and games that were at the bleeding edge of what my system could handle. The RAID card on more than one occasion told me to gently caress right off if I thought I was going to keep doing that.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Farmer Crack-rear end posted:

Have you turned on jumbo frames on your network cards? (And/or, does your network switch support them?)

Jumbo frames make all the difference. The CIFS protocol has a lot of overhead per standard packet; with a 9k jumbo frame, that overhead goes way down.
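A sketch of flipping it on, assuming an e1000g interface on the OpenSolaris side and a Linux client; the interface names are examples, and the switch has to allow jumbo frames end to end:

    # OpenSolaris side (the link may need to be unplumbed before the MTU will change)
    dladm set-linkprop -p mtu=9000 e1000g0
    dladm show-linkprop -p mtu e1000g0
    # Linux client side
    ip link set dev eth0 mtu 9000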

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

necrobobsledder posted:

Hitachis are on par with Samsung, although they seem to be worse (by like 1W) in terms of power consumption and heat. To their credit, they're the ones that came out with THE first 1TB drives, so it's not like they have no R&D muscle. It's just that being a technological innovator doesn't actually mean you make better products.

I decided to stop giving a crap about the drives and to just use ZFS. I'm still migrating over the drives I put into my Thecus NAS that I bought as a literal omg, I need it now emergency (my company laptop broke and I needed to use my fileserver as my workstation).

Yeah, ZFS is nice like that because it's so stupidly robust when it comes to the bizarre kinds of errors that crop up sometimes with consumer level stuff.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

KennyG posted:

It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of Raid-0, 1, 5, 1+0, 6 etc. It's a (clever) file system that is software raid aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of the cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z of RAID-6 arrays.

Saying RAID-Z > RAID 5 is an apples and oranges comparison. RAID 5 on proper hardware can be just as fault tolerant and significantly faster. RAID-Z, on the other hand, can cope with consumer-level drives better because of its higher level of abstraction.

It's completely situational. RAID-Z leaves you with fewer OS options at the benefit of having a wider (and less expensive) selection of hardware, RAID-5/6 will generally perform better and gives you more software options at the cost of hardware selection (and cost).

Actually, speed is a non-issue these days. You're offloading into hardware what a single core and half a gig of RAM can handle without any issue on a modern system. Given most modern file servers have 4 cores and 8-16 gigs, it really doesn't make a difference. It all comes down to interoperability, fault tolerance, and feature set at that point.


xlevus posted:

After reading back 10 pages but I've probably missed something and I'm a little confused with all the TLER, IDLE, EARS, EADS, mdadm, zfs stuff. Could somebody please clarify?

* You want to turn off IDLE/TLER when using WD Green disks in RAID (why?)
* The new 2TB EARS drives can't turn on/off TLER/IDLE
* Neither TLER/IDLE matter if you're using software RAID. i.e. mdadm/zfs
* RAID-Z > RAID-6 > RAID-5


I've currently got a ~3 year old 5x1TB RAID-5 mdadm array on WD Green disks but want to bump it up to 8TB, preferably with all new disks considering their age. What's the suggested path to choose? WD or Samsung disks? FreeBSD+RAID-Z, Ubuntu+RAID-6?


I use RAIDZ2 on an OpenSolaris installation. All 8 of the disks are new 4k-format WD1500EARS drives, plus Intel SAS HBAs and an AMD 4-core processor. It also runs a bunch of VMs through VirtualBox, which makes life a shitload easier when it comes to using the server box for things that Linux/Solaris blow rear end at, like multimedia streaming, uTorrent, and poo poo that only comes on Windows-flavored boxes.

Once you get used to the wonky way Solaris wants you to do things, it's really not that bad at all. Plus it's stable like a tank; I have yet to have any major issue that wasn't caused by a dying boot HDD or a dumbass command.

The best part is if your system catches fire, as long as you can rescue 6 out of 8 of the drives, you can plug them into ANY computer, run the LiveCD, 'zpool import -f', and your data is right back where you left it. No flakey RAID cards, no dropping $500-600 on hardware RAID; you get to spend that on a Norco 4220 instead!

Methylethylaldehyde fucked around with this message at 19:04 on Jun 15, 2010

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

FISHMANPET posted:

Purely anecdotal evidence, but virtualization networking has been a huge pain in my rear end on build 134. First I tried using Xen, and the network would occasionally just freeze. Now I'm using Virtual Box, and having the same problem.

Also, wondering what you mean by "SMB sucks balls"

SMB is kinda annoying sometimes with OpenSolaris. The kernel-mode CIFS driver I've been using hasn't had any real issues, but I'm also not exactly hammering the hell out of it either.

The networking really doesn't like unusual NICs in your system. I used the e1000-based Intel Ethernet card, and it works fine in the fancy bridging mode that gives each VM its own un-NATed IP.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

necrobobsledder posted:

Also, if you have transient errors getting written to disk, you can always use ZFS's awesome snapshot system to get back the original, uncorrupted data. It's just far more resilient than what I can do with mdadm + LVM + hardware RAID on my consumer hardware.

Snapshots are awesome stuff, especially on static datasets. Once I finished organizing my poo poo, I snapshotted it, and now, barring something catching fire, even if a stick of memory goes tits up and starts writing junk, I'll still have a known good copy!
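For the static stuff it's literally one command per snapshot (the dataset and snapshot names here are made up):

    # freeze the known good copy
    zfs snapshot tank/media@organized
    zfs list -t snapshot
    # if something later scribbles over the files, step back to it
    zfs rollback tank/media@organized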

Goon Matchmaker posted:

Using what parts?

Any decent case + decent power supply: ~$150
4x Samsung 2TB drives: ~$120 each, ~$480
Intel motherboard w/ integrated video: ~$120
Core 2 Duo: ~$150
2x2GB DDR2 RAM: ~$175

Total: ~$1075

These are ballpark numbers I pulled off Newegg and out of my rear end. It's entirely possible to get this set up for under $1000. It only starts getting silly when you want hot-swappable stuff, SAS controllers, and rackmountable cases.

Methylethylaldehyde fucked around with this message at 01:26 on Jun 16, 2010

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

illamint posted:

OK, so, I'm planning on building an OpenSolaris-based file server. Originally, I thought that it'd be prudent to build it around a new Xeon or faster Intel chip, but I realized I have a dual-quad-core (8x1.8GHz) Opteron box that I'm not doing anything with. I figured that the cores wouldn't be fast enough to do a nice raidz2 setup, am I wrong? Is the ZFS/raidz stuff sufficiently multithreaded or not CPU-dependent that I could pull this off? I have 8GB RAM, too, and can bump it up to 16GB for not too much money.

It's multithreaded out the rear end, and unless you're trying to run a 50 disk RAIDZ2, it'll be overkill the likes of which you have never seen. I would also toss VirtualBox on it and use it as a VM host.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

ATLbeer posted:

So I was thinking about the same thing a few days ago but, never got around to testing it

If I have a parent zfs pool tank which is composed of a single raidz pool, I can just continue to expand the tank pool by adding other raidz pools under it. Essentially creating a growing drive while maintaining raidz protection of my data?

Basically trying to get a large pool of protected storage from jbods (in the literal sense of a pile of unused drives)

Yeah. You start with a single zpool composed of a raidz vdev. You can add additional vdevs to the pool, provided each vdev is of the same type. So you could have a 4+1 raidz vdev, a 6+1 raidz vdev, and a 2+1 raidz vdev all in one big pool, and ZFS gives no shits. It might have funny performance as it hits each vdev in turn hunting for the files it needs, but it'll work just as reliably as a homogeneous pool of 5+1 vdevs.
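Concretely, a pool like that gets built up something like this; the device names are examples, and zpool add may grumble about the mismatched vdev widths, which -f waves off:

    # 4+1 raidz1 vdev to start the pool
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # bolt on a 2+1 and a 6+1 later
    zpool add -f tank raidz c2t0d0 c2t1d0 c2t2d0
    zpool add -f tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
    zpool status tank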

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Zhentar posted:

Initial NexentaStor impressions: CIFS performance is much better than FreeNAS. Interface is not nearly as intuitive, and lacks adequate protection against doing bad things. I accidentally wiped my data trying to reset my CIFS setup.


Does anyone know of any ZFS file recovery tools?

I don't think there ARE any. If you manage to do something stupid, the ZFS guides all tell you that this is what those backups you made are for. With the complex geometry of the disks and the copy-on-write architecture of the ZFS filesystem, I'm pretty sure you're screwed.


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

FISHMANPET posted:

I've got OpenSolaris running an Ubuntu VM with VirtualBox, and I'm desperately seeking a new solution. For some reason, bridging the one physical NIC into two virtual ones is causing havoc with my networking. It just drops the connection frequently, and it gets really bad until I reboot it once a week. I had this same problem with Xen and OpenSolaris as Dom0. So I'm not sure if I want to go the ESXi route or an Ubuntu host with something else hosting the OpenSolaris VM.

Get an Intel card. My e1000-based Intel NIC handles the multiple VMs fine. Apparently the injection method they use makes some drivers poo poo themselves.
