DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Bonobos posted:

First question - for the VM server, I don't know if I should roll with Solaris, FreeBSD, FreeNAS or OpenIndiana. I have used Windows and Mac PCs, but my linux experience has never been positive. Can anyone recommend a particular distro?

Second question - is ZFS RAID doable for drives of various sizes, or am I just going to regret trying? I have some WD Green drives that are consumer quality - I don't believe those support RAID, so I assume I can't roll those.

I'll assume that I will need to upgrade the ram from 2gb to 8gb to see acceptable performance. Can anyone offer any suggestions?
Having recently done this, I went with FreeNAS, on the grounds that it's pretty simple and works well. I'll start by saying, though, that if you want to use different-sized drives, you're going to have to look into something like FlexRAID or BeyondRAID; normal RAID simply won't work without losing you a ton of space. I personally found it easier to just buy a few more 2TB drives. WD Green drives work fine; I don't know what would make you think they wouldn't. The only catch is the ones with "EARS" in their model number, as those are kinda wonky--they still work, you just have to tick the box for "Force 4k sectors" during setup. I have one in my system and still get 50+MB/s sustained read/write. Also, for performance you probably want more than 2GB of RAM, but 8GB isn't really needed unless you want to be real fancy. A single 4GB stick brings you to 6GB and costs about $25.
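
If you want to sanity-check what a drive is claiming (EARS drives report 512-byte sectors even though they're 4K internally, which is exactly why that checkbox exists), you can do it from the FreeNAS shell. A quick sketch--the ada1 device name is an assumption, substitute your own:
code:

# what the drive claims -- an EARS will still say 512 here
diskinfo -v ada1 | grep sectorsize

# after the pool exists, confirm ZFS actually aligned to 4K:
# ashift=12 means 2^12 = 4096-byte sectors, ashift=9 means 512
zdb | grep ashift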

Anyhow, installing FreeNAS itself is easy. I followed this guide to install it to a USB stick. Takes about 10 minutes and a few clicks, that's it. Stick the USB stick into the HP's internal USB port, connect it to a router, and power it on. Either attach a spare monitor, or go into your router's config page to see what IP address the HP ended up with, and slap that into your browser of choice. Now you're into the FreeNAS config pages, and from there it's quite easy. Set up the drives however you like, set up some shares (don't forget to enable Guest access if that's what you want, and enable CIFS), and you're basically done. If you know what you're doing, the whole process probably takes an hour. If not (like me), then a few hours. Required Linux experience is literally limited to "Can you type in an IP address?", as the whole thing is run from a WebGUI.
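
The guide's USB-stick step is basically just writing the image out with dd; a sketch of the usual approach--the exact image filename and the /dev/da0 device node are assumptions, so check both (dd will cheerfully overwrite the wrong disk):
code:

# decompress the FreeNAS 8 image and write it to the USB stick
xzcat FreeNAS-8.0.3-RELEASE-x64.img.xz | dd of=/dev/da0 bs=64k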


BlankSystemDaemon
Mar 13, 2009



Another option for HDDs are the Samsung F4EG 2TB drives. I got 4 of those running in 4K-sector raidz1, and I'm getting line-speed over SMB on both read and write (~110MB/s). Not that I expect this to affect you, but there's an old firmware version that was on drives sold before January 2011 which can experience data loss if you enable SMART tests on the drives. The silly thing is that the newer firmware doesn't reflect a version change, so you can't easily tell which firmware your drives have. So it's not a bad idea to flash them as soon as you get them - info is here - luckily on the microserver you don't need to move the drives around, you can set them in the BIOS with the same result.

For raidz1, there's a golden rule about memory: 1GB of RAM for every 1TB of disk space you have in the array (including parity space). So four 2TB drives in raidz1 is 8TB of raw disk, meaning you'd want 8GB.

Also, while guest sharing is fine, if you're not the only one on the network or plan on bringing the server anywhere, it might be worth setting up two shares: one with local user authentication and full read-write-execute permissions, and one with anonymous access that's read-only.
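
FreeNAS drives all of this from the WebGUI, but underneath it's ordinary Samba, so the two-share setup amounts to something like this in smb.conf terms (paths and share names are made up for illustration):
code:

[private]
   path = /mnt/tank/private
   valid users = yourname
   read only = no

[public]
   path = /mnt/tank/public
   guest ok = yes
   read only = yes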

What DrDork said about linux experience is true (although as a long-time BSD user, I have to point out that linux is not unix), but you'll be better off if you go read the FreeBSD handbook. At least familiarize yourself with man and the manpages that are available on freebsd.org.


Additionally, while I remember it, buy another NIC. The bge driver that handles the HP ProLiant NC7760/NC7781 embedded NIC in FreeBSD has problems which will cause drops and bad performance (along the lines of the speeds DrDork mentioned, so he might want to pay attention too). Anyhow, go to the manpage for the driver and check the HARDWARE section for any NIC you can easily find and buy, and use that. Personally I went with the HP NC112T (503746-B21), but anything based off any of the chipsets mentioned on that manpage will work fully (just ensure it has 9k jumbo frames support, as you'll want that if the rest of your network supports it).
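
For reference, once a supported card is in, 9k jumbo frames are just an MTU setting; a one-shot sketch from the shell (em0 is an assumption--use whatever device your card attaches as, and only bother if the whole network path handles jumbo frames):
code:

# the HARDWARE section of the driver manpage lists supported chipsets
man 4 em

# bump the card to 9000-byte frames for this boot
ifconfig em0 mtu 9000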

Just noticed that you asked about memory; here are some sticks that work.

EDIT: Fixed links, added more info. drat it, I seem to go over some of these things every time someone buys a microserver.

BlankSystemDaemon fucked around with this message at 10:04 on Feb 5, 2012

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
It's not a free option past 3 disks, but unRAID does what you're looking for. It's similar to FlexRAID and BeyondRAID above.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

D. Ebdrup posted:

Additionally, while I remember it, buy another NIC. The bge driver that handles the HP ProLiant NC7760/NC7781 embedded NIC in FreeBSD has problems which will cause drops and bad performance (along the lines of the speeds DrDork mentioned, so he might want to pay attention too). Anyhow, go to the manpage for the driver and check the HARDWARE section for any NIC you can easily find and buy, and use that. Personally I went with the HP NC112T (503746-B21), but anything based off any of the chipsets mentioned on that manpage will work fully (just ensure it has 9k jumbo frames support, as you'll want that if the rest of your network supports it).
This is true. Personally, 50MB/s was fast enough that I didn't feel like spending yet another $50 on a NIC. If you do decide to stay with the HP embedded NIC, do some trial runs moving files around in a manner that simulates how you'll actually use it, and see what happens. Stock, mine would give me ~100MB/s for the first few seconds, and then drop hard to ~30 with a lot of stuttering--which would be fine for transferring small files, but I move big ones around a lot. After some tuning I got the drop to settle at ~50, with no stuttering. There are a lot of guides for how to tune ZFS, and other than it being kind of a trial-and-error process that'll eat up an afternoon, it's not hard, or even strictly necessary. Do remember that (contrary to a lot of the guides) there is no reason to ever use vi unless you actually want to--use nano instead (built-in with FreeNAS) and save yourself the headache.
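
For the curious, the tuning in question mostly means editing /boot/loader.conf and rebooting. A sketch of the sort of knobs the guides have you fiddle with--the values here are illustrative assumptions, not recommendations; size them to your own RAM:
code:

# /boot/loader.conf
vm.kmem_size="4G"
vm.kmem_size_max="4G"
vfs.zfs.arc_max="2560M"
vfs.zfs.prefetch_disable="1"

Edit with nano, reboot, re-test, and if things get worse, just delete the lines to fall back to the defaults.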

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

This is true. Personally, 50MB/s was fast enough that I didn't feel like spending yet another $50 on a NIC. If you do decide to stay with the HP embedded NIC, do some trial runs moving files around in a manner that simulates how you'll actually use it, and see what happens. Stock, mine would give me ~100MB/s for the first few seconds, and then drop hard to ~30 with a lot of stuttering--which would be fine for transferring small files, but I move big ones around a lot. After some tuning I got the drop to settle at ~50, with no stuttering. There are a lot of guides for how to tune ZFS, and other than it being kind of a trial-and-error process that'll eat up an afternoon, it's not hard, or even strictly necessary. Do remember that (contrary to a lot of the guides) there is no reason to ever use vi unless you actually want to--use nano instead (built-in with FreeNAS) and save yourself the headache.
As a general rule regarding ZFS tuning, there's always this to remember:

"ZFS Evil Tuning Guide [solarisinternals.com/wiki posted:

"]
Tuning is often evil and should rarely be done.

First, consider that the default values are set by the people who know the most about the effects of the tuning on the software that they supply. If a better value exists, it should be the default. While alternative values might help a given workload, it could quite possibly degrade some other aspects of performance. Occasionally, catastrophically so.

Over time, tuning recommendations might become stale at best or might lead to performance degradations. Customers are leery of changing a tuning that is in place and the net effect is a worse product than what it could be. Moreover, tuning enabled on a given system might spread to other systems, where it might not be warranted at all.

Yes, you've been able to tune the zpool so that it runs stable at less than half the performance you can easily expect with the hardware you have - but is 50bux really that much? Also, do note that while the NIC I recommended works, it's FAR from the only one that does. You could easily check the manpages/HCL and locate a gigabit PCIe x1 NIC for perhaps as low as 10bux, certainly not over 20bux.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
While it's true that you shouldn't blindly accept someone else's tuning numbers and think that they will universally apply, the defaults are just that: defaults. They are there to provide solid performance and reliability across the widest spectrum of hardware setups possible. This frequently means that they will not give the best performance for any specific hardware/software setup. In my case a little bit of tuning took me from stuttering at 30MB/s to not stuttering at 50MB/s. Your mileage may vary, but it's at least something worth looking into if you've got a free afternoon, and it's easy enough to roll back to the defaults if you mess something up. And hey, if initial testing on your particular setup yields performance levels you're happy with, by all means, leave everything as default and skip the hassle.

And no, $50 isn't that much but it's $50 I don't need to spend as the current performance is perfectly sufficient for my needs. The cheapest Intel gigabit PCIe NIC I could find was $30, and that's assuming the 82574L chipset is supported (which I imagine it is, as the 82574 is supported). I'd rather spend the cash on something else.

Bonobos
Jan 26, 2004
Thank you both DrDork & D.Ebdrup, your advice is precisely what I was looking for.

It sounds like FreeNAS is the way to go, I will try and see how it goes.

Sounds like I need a new NIC as well... I see various Intel gigabit NICs on newegg; will they all work the same? Basically I would want the fastest throughput I can get on my gigabit network, while obviously spending the least amount of money. Any specific suggestions there?

IOwnCalculus
Apr 2, 2003





For what it's worth, I've always had the best luck finding good prices on Intel gigabit NICs on eBay, even for new or indistinguishable-from-new items. I bought two 82574L PCIe x1 NICs for my ESXi all-in-one box and the one from Amazon was about $30 (needed it overnight) and the one from eBay (could wait a bit) was $15. Both came in the exact same condition and packaging.

Go figure.

Also, annoyingly, dual-port Intel PCIe NICs still command a bit of a premium, unlike say the dual-port PCI-X NICs.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Bonobos posted:

Sounds like I need a new NIC as well... I see various Intel gigabit NICs on newegg; will they all work the same?
As D.Ebdrup mentions, there's a list in the man page for it.

tl;dr You want one with one of the following Intel chipsets: 82540, 82541ER, 82541PI, 82542, 82543, 82544, 82545, 82546, 82546EB, 82546GB, 82547, 82571, 82572, 82573, or 82574 (assumed 82574L as well, but I'm not 100% on that).
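
If you've already got a card lying around and aren't sure what's on it, the FreeNAS/FreeBSD shell can identify the silicon; a quick sketch:
code:

# show vendor/device strings for everything on the PCI bus;
# the lines just above each ethernet entry name the chipset
pciconf -lv | grep -B4 -i ethernet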

BlankSystemDaemon
Mar 13, 2009



There's also the Ethernet section of the hardware notes for 8.2-RELEASE which is what FreeNAS 8.0x is based on. 8.1/2 will be based on FreeBSD-9.0-RELEASE, but anything that's supported in 8.2-RELEASE is also supported in 9.0-RELEASE.

If you find a specific chipset that you think might work but it's not on the list, simply do a google search on "<chipset> freebsd" and you'll very likely find someone else has already been wondering the same.
Unfortunately Google's BSD search no longer exists, but just about anything you can wonder has probably already been asked in one form or another.

Odette
Mar 19, 2011

DrDork posted:

(assumed 82574L as well, but I'm not 100% on that).

My googlefu found this re: 82574L chipset. Is this a valid ... driver thingy?

-e-

I'm in the process of assembling my new NAS.

I am going to have six 2TB WD Green drives running on it, but I am not sure if I want raidz1 or raidz2.

Odette fucked around with this message at 09:39 on Feb 6, 2012

Maniaman
Mar 3, 2006
Our fileserver is on the fritz and I'm pretty sure its data drive is getting ready to die and take everything with it.

Seeing as it's just a linux box with a 1.5tb dying drive that does nothing but store files, I'm wanting to replace the entire box with a NAS.

I'm looking at the Synology DS1511+. My question is, what is the best way to set it up so I can get the most amount of storage, while still being able to recover from drive failures without having to worry too much about the "write-hole" or whatever it's called when you get into larger disks?

Also, what drives are recommended right now? I'd like to do 2TB disks if possible.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Maniaman posted:

Our fileserver is on the fritz and I'm pretty sure its data drive is getting ready to die and take everything with it.

Seeing as it's just a linux box with a 1.5tb dying drive that does nothing but store files, I'm wanting to replace the entire box with a NAS.

I'm looking at the Synology DS1511+. My question is, what is the best way to set it up so I can get the most amount of storage, while still being able to recover from drive failures without having to worry too much about the "write-hole" or whatever it's called when you get into larger disks?

Also, what drives are recommended right now? I'd like to do 2TB disks if possible.

You can avoid the write hole by not using RAID5 (use RAID 10 or 6 instead), or by using a UPS (you won't avoid it completely, but it significantly reduces the chances of it happening).

I recommend Hitachi 5k3000 2TB HDs. They've been running solid for me for 6 months now. They don't use 4k sectors (if you care about that kind of thing). But I also hear good things about the Samsung F4 drives. To be honest, I haven't really paid attention since I built my box and then forgot about it.

Hope you get a backup of that server soon.

evol262
Nov 30, 2010
#!/usr/bin/perl
Since the Norco 4220/m1015/Solaris combo seems to be pretty popular here, I'm wondering if anybody's got a clue how to clear the red error light on the bays? MegaCli's not getting me very far (it claims to successfully clear, but nothing happens -- the locate option doesn't work either), and MSM doesn't look like anything's wrong with it at all. Is the red light some I2C thing handled by the backplane instead of the controller?

Zorak of Michigan
Jun 10, 2006


LmaoTheKid posted:

You can avoid the write hole by not using RAID5 (use RAID 10 or 6 instead),

How does RAID 6 close the write hole? Unless I'm misinformed, it's usually just RAID 5 with duplicate parity data. An incomplete write to RAID 6 could still leave you with inaccurate parity data, you'd just have twice as much of it.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Zorak of Michigan posted:

How does RAID 6 close the write hole? Unless I'm misinformed, it's usually just RAID 5 with duplicate parity data. An incomplete write to RAID 6 could still leave you with inaccurate parity data, you'd just have twice as much of it.

You're probably right. I was just under the impression that RAID5 was the only one with a write hole possibility.

movax
Aug 30, 2008

It doesn't; the chances just go down a bit. A write hole in RAID5 would be when one of the member disks doesn't match the others and therefore you can't tell which of the disks is bad. With 6, the write hole would be when two disks don't match the others simultaneously.
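
A toy illustration of why a torn RAID5 write is ambiguous, using shell arithmetic for the XOR parity (values are made up, purely illustrative):
code:

# three data blocks and the parity RAID5 would store on the 4th disk
d1=0xA5; d2=0x3C; d3=0x0F
p=$(( d1 ^ d2 ^ d3 ))

# power dies after d2 is rewritten but before the parity is updated
d2=0xFF
printf 'recomputed: %#x  on disk: %#x\n' $(( d1 ^ d2 ^ d3 )) "$p"

# they disagree, but nothing says whether d2 or the parity block
# is the stale one -- that ambiguity is the write hole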

Your server is full of expensive disks; you should be able to budget in a UPS, be it a used APC w/ refurb batteries, or a new CyberPower from Amazon via Prime.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
ZFS Supremacy :smug:

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

FISHMANPET posted:

ZFS Supremacy :smug:

Agreed. Haven't touched my ZFS box in months, except when I hooked up the UPS my boss gave me.

Maniaman
Mar 3, 2006
Don't worry, it will be on a UPS.

Perhaps write-hole isn't what I was looking for. I was thinking of the deal where if a large disk fails out of the array, with larger disks there's an extremely high chance of another disk failing during the rebuild.

movax
Aug 30, 2008

LmaoTheKid posted:

Agreed. Haven't touched my ZFS box in months, except when I hooked up the UPS my boss gave me.

code:
~ [ uptime                                                        ] 12:29:07 PM
 12:29pm  up 226 day(s), 22:40,  1 user,  load average: 0.01, 0.01, 0.01
Last time I touched it, I was moving around furniture I think. Super hands off, I occasionally SSH in and dmesg just to see if any issues have popped up.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Maniaman posted:

Don't worry, it will be on a UPS.

Perhaps write-hole isn't what I was looking for. I was thinking of the deal where if a large disk fails out of the array, with larger disks there's an extremely high chance of another disk failing during the rebuild.
Other than using high-quality drives to begin with, there's not a whole lot you can do about that--rebuilding the array is going to be a stressful process no matter how you have it set up, and the only thing that would mitigate it would simply be not filling the array very much (less data on a drive means less work), but that kinda defeats the purpose of having a storage array in the first place. If you're really concerned about that possibility, RAID6 gives you the ability to lose two drives simultaneously and still be ok, the tradeoff compared to RAID5 being that (in a 5-bay 2TB setup like the Synology) you drop from 8TB usable space down to 6TB.

Maniaman
Mar 3, 2006
I'd love to go with 2TB disks if possible, but there's no way I can afford $250/drive x 5 right now. Samsung EcoGreen F4 drives are on Newegg right now for $160 + shipping. Is there anything wrong with using green drives? Is that drive any good for a NAS? I seem to remember reading something about RAID controllers freaking out over idle times in green drives, or was that just a WD thing?

Odette
Mar 19, 2011

I'm going to be buying 5x WD20EARX next week or so; how can I differentiate between batches? I'd like to buy from different batches.

Star War Sex Parrot
Oct 2, 2003

Odette posted:

I'm going to be buying 5x WD20EARX next week or so; how can I differentiate between batches? I'd like to buy from different batches.
The drives have manufacturing dates on them, but you won't know until you actually have the drive in-hand.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

FISHMANPET posted:

ZFS Supremacy :smug:
Copy-on-write software RAID supremacy would do it; it's just that ZFS is the only widely available one that has seen any real production deployment. Just wish that it wasn't so memory hungry to get performance just barely matching other software RAID solutions (e.g. Windows RAID, mdraid). But hey, I kinda like having a CIFS server that's pretty responsive compared to the Linux one, not to mention an NFS server that kicks rear end on Solaris.

Maniaman posted:

Perhaps write-hole isn't what I was looking for. I was thinking the deal where if a large disk fails out of the array, with larger disks there's a extremely high chance of another disk failing during the rebuild.
That's one case of a RAID write hole. Another is writing when there's loss of power. I try to avoid the term "write hole" to begin with and just call it what it is - a critical section. Critical sections don't matter if there's only reads happening, by definition, and there are various ways to mitigate them, but the critical section for a hard disk writing a RAID stripe of 16KB+ per disk out of a multi-GB file is much, much larger than for a CPU running at several GHz on instructions of at most 64 bytes (variable-width).

I don't bother putting my ZFS server on a UPS because 99.99% of the time I just read from it and I've never had a problem with disk corruption. Only problems I've ever had were a disk dying and my boot partition getting corrupted.

Bonobos
Jan 26, 2004
Question on ZFS: assuming I have 2x WD Green drives (EARS, 2TB drives) and 2x Samsung F4s, with an extra Hitachi 5k3000 drive, all 2TB, would I be better off going with 2 mirrored vdevs (RAID1), or can I risk running all 5 different drives in RAIDZ (i.e., RAID5)?

I've never run a RAID array before, and I understand it's most efficient to run the same size drives, but I have no clue how this fares for different types / speeds of drives. I just want some type of redundancy with automatic checksumming.

Ideally I should just get 5 of the same type of drives, but with prices what they are at, I cannot afford to blow almost $1000 on HDDs.

ashgromnies
Jun 19, 2004
Is there any RAID implementation that works similarly to Drobo's BeyondRAID? I already have 7TB of mixed disks in a Drobo and was considering selling it for $200 and upgrading to a Drobo FS for $600 so I could access it over the network but maybe there's something else that would work better?

Blazing fast speed isn't important to me. I mostly do reads, it's for media storage and backup.

ashgromnies fucked around with this message at 05:10 on Feb 7, 2012

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Maniaman posted:

I'd love to go with 2TB disks if possible, but there's no way I can afford $250/drive x 5 right now. Samsung EcoGreen F4 drives are on Newegg right now for $160 + shipping. Is there anything wrong with using green drives? Is that drive any good for a NAS? I seem to remember reading something about RAID controllers freaking out over idle times in green drives, or was that just a WD thing?
If you're more concerned about price, you may want to consider cracking open external drives. They'll have whatever the manufacturer's 5400RPM or "Green" drive is in there, and (for whatever reason) you can usually find them $20+ cheaper than their normal internal-drive equivalents. Obviously you take a chance with warranty and whatnot, but $20 saved per drive x 5 drives pretty much buys you a spare anyhow.

I haven't heard anything about the EcoGreen F4 specifically, but the two WD Greens I have in my HP FreeNAS box right now are behaving reasonably well (outside the 4k thing, but that's another issue).

Bonobos posted:

Question on ZFS: assuming I have 2x WD Green drives (EARS, 2TB drives) and 2x Samsung F4s, with an extra Hitachi 5k3000 drive, all 2TB, would I be better off going with 2 mirrored vdevs (RAID1), or can I risk running all 5 different drives in RAIDZ (i.e., RAID5)?
That depends entirely on how much redundancy you want. Going RAID1 obviously gives you good redundancy, but the least space. RAIDZ1/RAID5 is probably the default and most sensible option for home or small-business use, as with 5 drives you'd be able to suffer the loss of any one drive, and only "lose" one drive worth of space. These days you don't really have to worry too much about matching drives unless you really feel the need for maximum performance. Unless one of the drives is substantially slower than the rest, you'll probably never notice the difference. Do note that EARS drives are bastard red-headed step-children, and will normally require you to force 4k sectors because they're lying bastards.
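
In zpool terms, the five-drive option is a single raidz vdev; a sketch with made-up device names, using the old gnop trick for forcing 4K alignment (FreeNAS's "Force 4k sectors" checkbox does the equivalent for you):
code:

# fake a 4K sector size on one member so ZFS picks ashift=12
gnop create -S 4096 ada0
zpool create tank raidz ada0.nop ada1 ada2 ada3 ada4

# export, drop the gnop shim, re-import; the pool keeps ashift=12
zpool export tank
gnop destroy ada0.nop
zpool import tank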

ashgromnies posted:

Is there any RAID implementation that works similarly to Drobo's BeyondRAID?
FlexRAID is a newer "Smart" RAID that would let you use disks of varying sizes. It's not particularly fast, or well-proven, but it might be something for you to consider.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

DrDork posted:

If you're more concerned about price, you may want to consider cracking open external drives. They'll have whatever the manufacturer's 5400RPM or "Green" drive is in there, and (for whatever reason) you can usually find them $20+ cheaper than their normal internal-drive equivalents. Obviously you take a chance with warranty and whatnot, but $20 saved per drive x 5 drives pretty much buys you a spare anyhow.

I haven't heard anything about the EcoGreen F4 specifically, but the two WD Greens I have in my HP FreeNAS box right now are behaving reasonably well (outside the 4k thing, but that's another issue).

That depends entirely on how much redundancy you want. Going RAID1 obviously gives you good redundancy, but the least space. RAIDZ1/RAID5 is probably the default and most sensible option for home or small-business use, as with 5 drives you'd be able to suffer the loss of any one drive, and only "lose" one drive worth of space. These days you don't really have to worry too much about matching drives unless you really feel the need for maximum performance. Unless one of the drives is substantially slower than the rest, you'll probably never notice the difference. Do note that EARS drives are bastard red-headed step-children, and will normally require you to force 4k sectors because they're lying bastards.

FlexRAID is a newer "Smart" RAID that would let you use disks of varying sizes. It's not particularly fast, or well-proven, but it might be something for you to consider.

Regarding FlexRAID, I've been using it for about 2 months now. It's decent, but also pretty loving buggy; I often have to stop and restart the service if I've tinkered with the settings too much. That said, when it works, it works.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
My general rule of thumb for ZFS is that if you have over 4 drives, RAIDZ2 is the way to go, especially if you're using large drives. I worry about large rebuilds in an array. I'm probably just being paranoid (I've never had a drive fail during rebuild).

Then again, RAID isn't backup, so you should have a large external or offsite backup solution on top of your NAS.

ashgromnies
Jun 19, 2004

PopeOnARope posted:

Regarding FlexRAID, I've been using it for about 2 months now. It's decent, but also pretty loving buggy; I often have to stop and restart the service if I've tinkered with the settings too much. That said, when it works, it works.
So for something reliable that will take any drive size and allows hotswapping, my only good option right now is Drobo?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

ashgromnies posted:

Is there any RAID implementation that works similarly to Drobo's BeyondRAID? I already have 7TB of mixed disks in a Drobo and was considering selling it for $200 and upgrading to a Drobo FS for $600 so I could access it over the network but maybe there's something else that would work better?

Blazing fast speed isn't important to me. I mostly do reads, it's for media storage and backup.

When I moved to a Linux file server from Windows Home Server, I had quite an array of different sized disks. What I did was come up with a scheme to partition them all so that I had at least 3 of every size partition.

I then used mdadm to RAID each set of 3+ partitions together and then used LVM to pool them all together.
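
For anyone wanting to copy the scheme, the plumbing looks roughly like this (device names and the RAID level are hypothetical; the post doesn't say which level was used per set):
code:

# RAID each set of three equal-sized partitions together...
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdd1

# ...then pool the arrays into one big logical volume with LVM
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media storage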

Then I hosed myself in the rear end by using ext4 on LVM and when my storage expanded to 16TB I was stuck because 16TB is the biggest you can go with ext4. Now I need an extra 16TB of space to back all that data up to and come up with a better filesystem...

Longinus00
Dec 29, 2005
Ur-Quan

Thermopyle posted:

When I moved to a Linux file server from Windows Home Server, I had quite an array of different sized disks. What I did was come up with a scheme to partition them all so that I had at least 3 of every size partition.

I then used mdadm to RAID each set of 3+ partitions together and then used LVM to pool them all together.

Then I hosed myself in the rear end by using ext4 on LVM and when my storage expanded to 16TB I was stuck because 16TB is the biggest you can go with ext4. Now I need an extra 16TB of space to back all that data up to and come up with a better filesystem...

If you want a BIG FS with BIG files then XFS is probably the way to go right now. This has actually been true for a while, but recently XFS has been improved so it's better with lots of small files (metadata-heavy workloads).

http://lwn.net/Articles/476263/
http://www.youtube.com/watch?v=FegjLbCnoBw

I'm also pretty sure >16TB ext4 requires a mkfs option so you can't just convert over.
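
If you do go XFS, creating it on an LVM volume like the one sketched above is about as minimal as filesystems get (volume and mount point names carried over from that hypothetical example):
code:

# mkfs.xfs needs no size-related options; XFS allocates inodes
# dynamically instead of preallocating tables the way ext4 does
mkfs.xfs /dev/storage/media
mkdir -p /mnt/media
mount /dev/storage/media /mnt/media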

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Oddhair posted:

What would the symptoms be if you had the wrong cable? I have the forward kind of setup (and I just double checked, and the cables I ordered are forward), but I've been having problems. Does the connection simply not recognize the drives?

BTW: http://www.monoprice.com/products/subdepartment.asp?c_id=102&cp_id=10254

The .5M 30 gauge feels really flimsy in those SATA wires, I might try the 1M 28 gauge just for grins.

I just got this today, and holy poo poo were you right. The cable looks like it's made of foil or something. Oh well, it's plugged in and working.
code:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
        spares
          c8t4d0    AVAIL

Oddhair
Mar 21, 2004

I used that cable first with an Intel drive cage just placed in my WHS, and the RAID controller (which has an audible alarm, yay!) was throwing PD errors. Since one of the SATA ports on the cage was loose from the board, I then got some 3-in-2 trays with fans and placed the 6 drives in it, and currently it's seeing 5 of the 6. There are so many unverified variables that I don't have any way to blame any one portion. I've already installed a bigger PSU in case that was the issue; I just need this storage to last long enough, reliably, to RMA a pair of HDs totaling 2.5TB to Seagate. Also, since it's WHS v1 it will have to be <2TB/volume, so I was going to do a RAID6 with the 6x500GB drives.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

FISHMANPET posted:

I just got this today, and holy poo poo were you right. The cable looks like it's made of foil or something. Oh well, it's plugged in and working.
code:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
        spares
          c8t4d0    AVAIL

Yeah, I've got the same one, works fine but scares the poo poo out of me.

DigitalMocking
Jun 8, 2010

Wine is constant proof that God loves us and loves to see us happy.
Benjamin Franklin
Finished building the system I posted about a few pages back: 8 2TB SATA drives, Intel controller, Supermicro board, etc. Went ahead and put Windows Storage Server on it (we're an exclusively Windows shop) and for some reason, none of the Hyper-V hosts can see any of the 5 targets on the box. The port is open, they can telnet to the port so I know connectivity is good, but nothing shows up on refresh in the iSCSI initiator.

Microsoft's iSCSI target software seems almost retard-proof, not sure what I missed. If anyone has any ideas, I'd appreciate hearing them.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

ashgromnies posted:

So for something reliable that will take any drive size and allows hotswapping, my only good option right now is Drobo?

Once the array is up and running, it's usually fine. It's just when you need to change things that you have to gently caress with it.


FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So I got my second 5-in-3 enclosure installed and all wired up.

FISHMANPET fucked around with this message at 00:42 on Feb 10, 2012
