CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:

nikomo posted:

I wonder what kind of HDDs you guys buy if you don't want to do RAID0 because the chance of malfunction is doubled. I have not had an HDD die on my hands during my (short) lifetime.

It only takes one to gently caress up your day.


zapateria
Feb 16, 2003

soj89 posted:

I didn't want to post a new thread to ask this and I figure this is the most relevant thread to ask in.

Is there a free software program you guys use to monitor the disk space on your file server? The system acting as the file server right now is running Windows 7 Ultimate. I know you can get the drive space to show up when you map the network shares, but I don't want to mount all of the shares. I've googled for Windows sidebar gadgets (came up with nothing) and for standalone programs, but they're all shareware or seem a bit shady. What do you guys use?

If you just want to see the free disk space on a server, you can use the built-in tool "perfmon": connect to the server and add a counter for free space on the logical disks.

Example: [perfmon screenshot omitted]
If you want to get more advanced, get something to collect the WMI data and graph it.

We use Splunk to collect the data and build reports that alert if servers are low on disk space. Splunk is amazing btw, but alerts require an enterprise license ($5000+)
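
If you'd rather script the check than click through perfmon, here's a minimal sketch in Python (assuming Python 3 on a Windows box; the paths are placeholders, not anything from this thread) that prints free space for a couple of volumes or UNC shares and flags anything under a threshold:

code:

# Minimal free-space check -- a sketch, not a monitoring system.
# The paths below are hypothetical; point them at your own volumes/shares.
import shutil

PATHS = ["C:\\", r"\\fileserver\data"]   # placeholder local drive and UNC share
WARN_BELOW_GB = 50

for path in PATHS:
    total, used, free = shutil.disk_usage(path)
    free_gb = free / 1024**3
    flag = "  <-- LOW" if free_gb < WARN_BELOW_GB else ""
    print(f"{path}: {free_gb:.1f} GB free of {total / 1024**3:.1f} GB{flag}")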

IOwnCalculus
Apr 2, 2003





nikomo posted:

I wonder what kind of HDDs you guys buy if you don't want to do RAID0 because the chance of malfunction is doubled. I have not had an HDD die on my hands during my (short) lifetime.

If you've never had a HD fail, you simply haven't used enough of them for a long enough time.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

soj89 posted:

I didn't want to post a new thread to ask this and I figure this is the most relevant thread to ask in.

Is there a free software program you guys use to monitor the disk space on your file server? The system acting as the file server right now is running Windows 7 Ultimate. I know you can get the drive space to show up when you map the network shares, but I don't want to mount all of the shares. I've googled for Windows sidebar gadgets (came up with nothing) and for standalone programs, but they're all shareware or seem a bit shady. What do you guys use?

For all of my computers I really like TreeSize. The free version won't let you do network shares; I RDP into my server fairly frequently anyway, so I just run TreeSize locally. But if you want to pay money, they have a version which will do network shares.

devmd01
Mar 7, 2006

Elektronik
Supersonik

IOwnCalculus posted:

If you've never had a HD fail, you simply haven't used enough of them for a long enough time.

Every single disk volume except for two and my backup NAS is RAIDed. For the two that aren't, I have a backup set, and really, if my backup NAS goes down, who cares.


I just put together a new NAS/VM box from spare parts, should be pretty slick.

Antec 300 Case
Antec Truepower 550W
Intel DQ35JO Motherboard
Core2Duo 2.66
4x1GB DDR2-800
10x1000GB Seagate ST31000340NS (SN06!)
1x16GB Sandisk SSD boot for ESXi 4
Adaptec 5805Z SAS/SATA RAID with SAS SFF-8087-> 4xSATA cables
Intel PRO/1000 PT PCI-e x1

8-Drive RAID-6 gets me a good 5.34TB for the data volume. The two critical VMs (Untangle, FreeNAS) will be stored on the remaining space on the 16GB SSD. The other VMs will be very lightweight on the disk I/O, so those go on a single 1TB, backed up to an eSATA 1TB with ghettovcb2.sh.
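
For anyone checking the math on that: RAID-6 spends two drives' worth of space on parity, so 8 x 1 TB drives give (8 - 2) x 10^12 bytes, roughly 5.46 TiB before filesystem overhead, which lands in the neighborhood of the quoted 5.34 TB. A quick sketch:

code:

# Back-of-envelope RAID-6 usable capacity (ignores controller metadata and
# filesystem overhead, so the real number comes out a bit lower).
drives = 8
drive_bytes = 1000**4            # a marketing "1 TB" = 10^12 bytes
usable_bytes = (drives - 2) * drive_bytes
print(f"~{usable_bytes / 1024**4:.2f} TiB usable")   # ~5.46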

IOwnCalculus
Apr 2, 2003





Nice. I can't say I'm that paranoid, but at the same time I simply don't store any data I care about on anything less than a RAID1 or RAID5 volume. Primary storage for me is a 4x 1.5TB RAID5 at home, and the stuff I really do care about gets backed up to a 2x750GB RAID1 stashed away at my mom's house. The stuff I really really care about will eventually also get backed up to a single 1.5TB drive, sitting in my server in a datacenter.

Data I don't give a flying gently caress about goes on either a RAID0 scratch drive in the same box as the RAID5, or on a single drive.

yatagan
Aug 31, 2009

by Ozma
I'm looking for a 4+ bay directly attached enclosure for personal file server use. Any recommendations? I can only find like 3 on Newegg, 2 without reviews and 1 with a loud fan (deal breaker).

MrMoo
Sep 14, 2000

Like these: http://eshop.macsales.com/shop/hard-drives/RAID/Desktop/

or the more insane 8-way boxes listed here: http://www.directron.com/externalhd.html

(SANS Digital)


Drobo have big rear end 8 and 9 bay boxes too: http://store.apple.com/us/product/TW754LL/A?fnode=MTY1NDA0Nw&mco=MTcwNzc3ODE&s=topSellers

MrMoo fucked around with this message at 10:22 on Mar 10, 2010

yatagan
Aug 31, 2009

by Ozma

Thank you for the links, but I guess I'm confused at the pricing on those sites. A decent 2 bay enclosure goes for $50-80, but the cheapest 4 bay goes for around $200. Why would adding two slots drive the price up so much?

I don't really need RAID or anything, I just want a block that holds drives and lets me separate my data from my computer tower. It seems a bit silly, but the best option looks to be multiple 2 bay enclosures?

MrMoo
Sep 14, 2000

Look for the JBOD only ones,

4-bay for US$139 http://www.directron.com/tr4u.html
8-bay for US$299 http://www.directron.com/tr8u.html

Also don't forget supply & demand, 2-bays are going to be a lot more popular.

yatagan
Aug 31, 2009

by Ozma

MrMoo posted:

Look for the JBOD only ones,

4-bay for US$139 http://www.directron.com/tr4u.html
8-bay for US$299 http://www.directron.com/tr8u.html

Also don't forget supply & demand, 2-bays are going to be a lot more popular.

Thank you, that's pretty much exactly what I'm looking for. Now I just need to do some searching and find out which ones make the least noise.

movax
Aug 30, 2008

FISHMANPET posted:

For anybody interested in this card, particularly movax, who already has one, I found a bracket online that will work for it. SuperMicro was stupid enough to use standard hole spacing, so all you need is a standard PCI bracket with tabs. Keystone Electronics makes one. You can get it from digikey for about 5 dollars shipped:
http://search.digikey.com/scripts/DkSearch/dksus.dll?site=us&lang=en&mpart=9203

There are other places that have it cheaper per unit, but there are order minimums and they ship via UPS, so it costs way more for 1 of them. Digikey can ship via the postal service. But if you're doing something with a bunch of these, you can search all of Keystone's vendors here:
http://www.keyelco.com/order.asp
The part number to search for is 9203. If you dig around the Keystone site you can find mechanical drawings of the bracket. I compared their measurements to my card and it looks like it should work. I'll post a trip report when I get it.

That bracket is sweet, I may jump on those just to make my server innards a little bit neater.

As for fans and the Norco 4020/4220, I replaced all the 80mms with Yate Loon 80mms, taped off any other holes in the fan bracket (wind tunnel please), and my drives do pretty good, 40C at load, and it's pretty quiet to boot. Only 8 drives though...

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

movax posted:

That bracket is sweet, I may jump on those just to make my server innards a little bit neater.

As for fans and the Norco 4020/4220, I replaced all the 80mms with Yate Loon 80mms, taped off any other holes in the fan bracket (wind tunnel please), and my drives do pretty good, 40C at load, and it's pretty quiet to boot. Only 8 drives though...

Yeah, gently caress that bracket, unless you feel like rethreading it. The threads are pointing the wrong way. I've found another one from brackets.com, but I'm not sure how to buy it.

I love the conundrum when one entity says to another "I want to give you money in exchange for a product that you have" and the answer is "no"

boingthump
Oct 27, 2005

and i descend from grace

I've glanced over the basics of the thread but a decent SATA card didn't jump out at me. Basically I'm planning on turning an old Compaq SR1650NX into a FreeNAS box.

Since FreeNAS is all software RAID and speed isn't too much of an issue, I was looking for a decent SATA card to add into the case. As is, the mobo only supports 2 SATA drives and will not support the 2TB drives I plan on putting in.

Can anyone point me to a good card for this use?

The_Frag_Man
Mar 26, 2005

I found a site that seems to be developing a case that I want:
http://www.boksdesign.com/

It's a small cube to hold a mini-itx board, 5 disks, and a 120mm fan. However it seems to be dead.

I don't get it, it seems that a case like this would be super popular, but nobody makes one. Do any of you guys know where it might be possible to get a case like this?

japtor
Oct 28, 2005
SilverStone SG05 seems somewhat close size wise...but doesn't have the size for 5 disks (just a slim optical, 2.5 and 3.5):
http://www.silverstonetek.com/products/p_contents.php?pno=sg05
Or I guess any old Shuttle type would be similar. I'm wondering how the drives were mounted in that thing. You might be able to get whatever cube-ish case and rig up something yourself with an internal drive cage/chassis, but it could get pretty tight. If you don't need the cube form there's stuff like that fancy Chenbro case. A bit bigger but relatively compact.

The_Frag_Man
Mar 26, 2005

That Chenbro case isn't available in Australia, and it would cost me 300 dollars total to get one. For that price just for the case, I would be better off looking at a 4 bay NAS.

The SG05 is about the right size, but I don't know if I could modify it to hold 5 disks. It is about 130 dollars though, less than half of the Chenbro.

japtor
Oct 28, 2005
Are Shuttles available there? I mentioned them cause the Boks looked a lot like a shallower version of the old Shuttle cases...that still leaves the question of how the drives were mounted. I couldn't find any pics of the Boks with the drives in it.

The_Frag_Man
Mar 26, 2005

This is a picture of how they were mounted in an earlier prototype:
http://farm1.static.flickr.com/169/391218381_1f93b978e0.jpg

I found it here, but it seems to be down now:
http://www.wizdforums.co.uk/archive/index.php/t-7065.html

Shuttle is available here I think, but I don't know of a shuttle server that can take 5 drives.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
With a little bit of work, this case can be made to hold 6 drives.
http://www.newegg.com/Product/Product.aspx?Item=N82E16811144140
Can you get that in Australia? It's the X-QPACK2. By default it's got 2 5.25 bays, an external 3.5 bay, and an internal 3.5 bay. Mount your system drive in the internal 3.5 bay, grind out the spot where the external 3.5 bay goes, and put in a 5 in 3 bay enclosure. Then get a small 1U server power supply because the regular PSU won't fit with the drive enclosure in there.

Right now mine is buried under a table, but I can dig it out and get some pics. I'm pretty loving proud of it, even though I'm retiring it soon.

The_Frag_Man
Mar 26, 2005

I'd love to see pictures of your solution.
I will have to look around for that case, I haven't seen it before.

MrMoo
Sep 14, 2000

The_Frag_Man posted:

Shuttle is available here I think, but I don't know of a shuttle server that can take 5 drives.

I think the biggest can take 4, http://au.shuttle.com/product_detail.jsp?PLLI=14&PI=217



Synology have a 5-bay DS1010+ which can pair up with the DX510 for 10 bays,


http://www.synology.com/us/products/DS1010+/index.php

QNAP have a few variations of 8-bay monsters, SS-839, TS-859, TS-809 Pro




http://www.qnap.com/pro_detail_feature.asp?p_id=124
http://www.qnap.com/pro_detail_feature.asp?p_id=146
http://www.qnap.com/pro_detail_feature.asp?p_id=109

The SS-839 uses 2.5" disks for a smaller form factor.

MrMoo fucked around with this message at 06:45 on Mar 13, 2010

japtor
Oct 28, 2005

The_Frag_Man posted:

This is a picture of how they were mounted in an earlier prototype:
http://farm1.static.flickr.com/169/391218381_1f93b978e0.jpg

I found it here, but it seems to be down now:
http://www.wizdforums.co.uk/archive/index.php/t-7065.html

Shuttle is available here I think, but I don't know of a shuttle server that can take 5 drives.
Yeah you'd have to hack up your own rail to do it, but the case dimensions look similar to the Boks.

FISHMANPET posted:

With a little bit of work, this case can be made to hold 6 drives.
http://www.newegg.com/Product/Product.aspx?Item=N82E16811144140
Just a note that that one is pretty chunky in comparison.
14.70" x 11.20" x 9.00" vs (iirc) approx 9"x8"x7" (I think Shuttles are 10-11" deep)
If you don't mind something that size there's probably a lot of other options.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

japtor posted:

Just a note that that one is pretty chunky in comparison.
14.70" x 11.20" x 9.00" vs (iirc) approx 9"x8"x7" (I think Shuttles are 10-11" deep)
If you don't mind something that size there's probably a lot of other options.

Yeah, it's pretty deep, though it was the best I could find at the time. It's designed as a gaming case, so it's deeper than it really needs to be to hold a big graphics card.

Xenomorph
Jun 13, 2001

nikomo posted:

I wonder what kind of HDDs you guys buy if you don't want to do RAID0 because the chance of malfunction is doubled. I have not had an HDD die on my hands during my (short) lifetime.

Is this some form of comedy? How old are you? Just a few weeks old? How do you type?

Magnetic hard drives are sloppy, nasty, fragile pieces of poo poo technology that people can't wait to get rid of.
Think about all the excitement for SSD, think about the massive Backup market, the constant warnings telling people to backup their poo poo, and the billion-dollar Online backup industry.

Hard drives are the least reliable component in any system, and with all that unreliability and unpredictability, putting them in a RAID0 is just increasing chance of data loss by an assload.

Hard drives can fail at some random time that could be 1 week after purchase, or after 10 years of heavy use. That is all brands.

Do you think there is some magic brand that doesn't fail?
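
For what it's worth, the arithmetic behind that: RAID0 dies if any member dies, so with independent failures the array's risk grows with every drive you add. A quick illustration (the failure probability below is a made-up number, not measured AFR data):

code:

# Chance a RAID0 array loses data over some period, assuming each drive
# fails independently with probability p. The value of p is illustrative only.
p = 0.03                                   # hypothetical per-drive failure probability
for n in (1, 2, 4, 8):
    p_array = 1 - (1 - p) ** n             # array fails if ANY member fails
    print(f"{n} drive(s): {p_array:.1%} chance of losing the whole stripe")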

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Xenomorph posted:

Is this some form of comedy? How old are you? Just a few weeks old? How do you type?

Magnetic hard drives are sloppy, nasty, fragile pieces of poo poo technology that people can't wait to get rid of.
Think about all the excitement for SSD, think about the massive Backup market, the constant warnings telling people to backup their poo poo, and the billion-dollar Online backup industry.

Hard drives are the least reliable component in any system, and with all that unreliability and unpredictability, putting them in a RAID0 is just increasing chance of data loss by an assload.

Hard drives can fail at some random time that could be 1 week after purchase, or after 10 years of heavy use. That is all brands.

Do you think there is some magic brand that doesn't fail?

I've never had a Samsung drive fail, and I've been using 8 of them for nearly 2 days :smug:
On the other hand I've used various Seagates of various sizes for 4.5 years, and I've had one entire drive fail!

Seagates are poo poo, Samsung superiority! :smug:
:spergin:

movax
Aug 30, 2008

FISHMANPET posted:

I've never had a Samsung drive fail, and I've been using 8 of them for nearly 2 days :smug:
On the other hand I've used various Seagates of various sizes for 4.5 years, and I've had one entire drive fail!

Seagates are poo poo, Samsung superiority! :smug:
:spergin:

I've never had WD drives fail, clearly WD is the superior species of drive :smugbert:

Seriously though, I need to get 8 more 1.5TB drives, and am debating getting 7200rpm drives to match the current Seagates, or dropping to 5400rpm drives to save power. Limiting factor is GigE I/O.
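
The gigabit ceiling is easy to sanity-check: 1000 Mbit/s is 125 MB/s before protocol overhead, so even a conservative per-drive sequential figure puts an 8-drive array far past what the NIC can move. Rough numbers below (the 80 MB/s per drive is an assumption, not a benchmark):

code:

# Why GigE, not spindle speed, is the bottleneck for bulk transfers.
link_mb_s = 1000 / 8        # gigabit Ethernet line rate in MB/s, pre-overhead
drive_mb_s = 80             # assumed sequential rate for a 5400 rpm drive
drives = 8
print(f"link ceiling ~{link_mb_s:.0f} MB/s vs array sequential ~{drives * drive_mb_s} MB/s")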

Xenomorph
Jun 13, 2001
I just ordered my fourth WD 640 Gig drive. Those things are amazing. Over 100 MB/sec sustained copying files to each other. These are spread out over three systems.

I'm getting close to ordering a bunch more to finally build a NAS box.

I'm thinking a little hardware RAID 5 box with four WD Black 640 gig drives.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Has anyone used the LaCie 5big Network - NAS server - 10TB ( http://www.cdw.com/shop/products/default.aspx?edc=1762600&cm_sp=homepage-_-MainFeature1-_-LaCie+5big+Network&programidentifier=1 ) ?

We're looking at purchasing one for work for miscellaneous storage.

japtor
Oct 28, 2005
The worry I have with LaCie stuff is this:

quote:

Note: In the event that an individual hard disk fails in the LaCie 5big network, please contact your LaCie reseller or LaCie Customer Support. Please replace a defective hard drive only with a new drive provided by LaCie.
I'm sure it'll work fine if you stick whatever drive in, it's just more of a worry if the NAS itself dies and they try to blame your own drive to deny service or something. The spares on CDW are $280 vs $150-180 for a bare drive off Amazon.

NOTinuyasha
Oct 17, 2006

 
The Great Twist
My dad has had this Buffalo Linkstation which he stores family photos on along with his own personal backups. Redundancy (the LS was single-drive) was to manually mirror the most important of that data to an even more ancient Netgear SC101 with two IDE 300GB drives in RAID-1. He spends most of his time away from home and sort of neglected this setup. Until he had problems I had nothing to do with any of this.

Anyway the buffalo LS starts beeping, of course I'm handed the thing to diagnose it. The type of flashing and beeping means firmware failure, surprise. I couldn't get the device to register with the DHCP or respond to pings.

First things first I need to get to the backups, so I fire up the netgear software and... can't find it. So I trudge into the basement and find the netgear blinking away with a red light - this device CAN register with the DHCP but is inaccessible. Turns out all Netgear's SC101s were terrible poo poo with all sorts of problems, the worst of which being catastrophic hardware failure. This device is a highly proprietary SAN server, so the data is completely inaccessible unless I buy a new enclosure, which might not get the data back.

So the Buffalo is my only hope, but I don't want to try wiping the firmware with the drive inside, so I take it out. ~TurNs OuT~ Buffalo hosed with their EXT3 implementation and I can't recover this drive with Knoppix alone.

Would it be terribly nerdy to youtube a bonfire of NASes, or would it be justified?

Artificial Nebulae
Apr 3, 2009
Does anyone know if Time-Limited Error Recovery on Western Digital drives interferes with Linux-based software RAID? I've been selecting parts for a simple NAS/Webserver to replace what I have now. I've been relatively pleased with the single WD drive I bought for my desktop I built a year ago, but when I checked the reviews for the various drives I picked out, I found out about the whole TLER situation.

I've talked to someone that said that Linux software RAID probably won't have a problem with drives that have had TLER disabled. I found a linux-raid newsgroup discussion that really wasn't all that definitive - some people said that the setup wouldn't care about TLER, others said that TLER would benefit / detriment the setup. I also searched through this thread and couldn't find anyone talking about TLER problems in conjunction with Linux RAID, only problems with Enterprise-level RAID controllers.

In the meantime, I've heard that Western Digital has permanently disabled TLER on consumer-grade hard drives in a way that you can't use the wdtler.exe utility to change it, which means that if the lack of TLER on the drives I want to purchase will interfere with the setup, I'll have to go and find a different set of drives, which is a shame because all the highest rated 1TB drives on Newegg are Western Digital-branded. :sigh:

Puck42
Oct 7, 2005

I'm using 4 WD15EARS in my server under software raid and haven't noticed any issues. But TLER was supposedly disabled in these. The only real problem was making sure I partitioned it correctly to compensate for the 4k sectors since WD has the drives report a 512 byte sector so that XP will work with them.
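
A quick way to reason about that alignment point: with a 4K-physical drive that emulates 512-byte sectors, a partition is aligned when its starting LBA (counted in 512-byte sectors) is a multiple of 8. A tiny sketch:

code:

# 4K-alignment check for a partition start given in 512-byte LBA sectors.
def aligned_4k(start_lba: int) -> bool:
    return start_lba % 8 == 0         # 8 x 512 B = 4096 B

# 63 is the old XP-era default (misaligned); 2048 is the modern default.
for start in (63, 64, 2048):
    print(start, "aligned" if aligned_4k(start) else "misaligned")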

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I did a lot of research on this for my ZFS server, and I suspect the same information would apply for mdadm.

Basically, it doesn't matter.

The issue TLER addresses is this: if a hardware RAID controller doesn't get a response after about 8 seconds, it marks the drive as failed and drops it from the array. That's a problem with consumer drives because they're set up to grind away looking for minutes, since the drive assumes this is your only copy of the data and damnit, it's gonna get that data.

TLER-enabled RAID drives just return "sorry, couldn't do it" after 7 seconds, mark the sector as bad, and let the RAID controller rebuild that block.

With software RAID there's no 8-second cutoff; the drive just keeps trying as long as the system will let it, which is usually 10-20 minutes. The way it works in OpenSolaris specifically, the read times out after 5 minutes and it's tried twice, for a 10-minute delay. After those 10 minutes ZFS will mark the sector as bad and rewrite it.

So it would be better if the drive just timed out after a few seconds, but software RAID won't drop the drive outright the way a hardware RAID would.
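
On the Linux md side specifically (a different stack than the OpenSolaris case above, so treat this as a hedged aside), the comparable knob is the kernel's per-device command timeout in sysfs: the kernel aborts a command that runs past it, which bounds how long a non-TLER drive can stall the array. A minimal sketch, assuming root and placeholder device names:

code:

# Read (and optionally change) the Linux SCSI command timeout for a disk.
# Device names are placeholders; writing requires root.
from pathlib import Path

def get_timeout(dev: str) -> int:
    return int(Path(f"/sys/block/{dev}/device/timeout").read_text().strip())

def set_timeout(dev: str, seconds: int) -> None:
    Path(f"/sys/block/{dev}/device/timeout").write_text(str(seconds))

for dev in ("sda", "sdb"):                 # hypothetical md member disks
    print(dev, get_timeout(dev), "seconds")
# set_timeout("sda", 120)                  # e.g. allow 2 minutes before the kernel gives up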

devilmouse
Mar 26, 2004

It's just like real life.
After procrastinating for more than a year, I'm finally putting together my storage machine. I'm either going to run OpenSolaris if I can stomach using Solaris again or I'll just go back to the comforting land of FreeBSD. The machine itself will be our home fileserver, doing the usual lifting: streaming music/movies to a handful of computers around the house, serving as a backup store for the various macs we have lying around, and occasionally serving torrents.

The parts list is, for the most part, pretty standard as far as I can tell from wandering around the AVSforums.

Case: NORCO RPC-4220
CPU: Intel Xeon E3110 3.0GHz LGA 775 65W Dual-Core Processor
Controllers: 2x SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X 133MHz
System HD: Kingston SSDNow V Series SNV125-S2/30GB 2.5" Internal Solid State Drive (SSD)
RAM: Kingston 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) ECC Unbuffered Desktop Memory Model KVR800D2E6K2/4G
MB: SUPERMICRO MBD-X7SBE LGA 775 Intel 3210 ATX
PSU: CORSAIR CMPSU-750TX
DVD: Sony Optiarc Slim Combo Black SATA Model CRX890S-10
DVD cable: BYTECC 18" Sata and Slim Sata Power 7+6pin Cable, for Sata Slim OD
Fans: 3x Noctua NF-S12B FLX 120mm, 2x Noctua NF-R8-1800 80mm
Sata power cables: 5x custom cables from frozencpu.com
Custom 120mm fanboard from some dude named cavediver

Questions in no particular order:
* Overall, any issues with the parts list?
* Unbuffered or buffered RAM? While I know that I want ECC RAM on the off-chance of a freak occurrence, I'm less sure on the question of buffered.
* Is there a better motherboard that I haven't been able to find? The boards with 2 PCIX slots are rare at best and this was the only one I managed to find on newegg.
* ZFS configuration... I'm not sure how to make the best use of the 20 bays in terms of vdevs. I'm not going to be buying all 20 drives at once, so possible options when everything is said and done:
2x 9-disk RAIDZ2 + 2-disk RAIDZ
-or-
2x 8-disk RAIDZ2 + 4-disk RAIDZ
-or-
2x 7-disk RAIDZ2 + 6-disk RAIDZ2

While it's tempting to go to just 2x 10-disk RAIDZ2 vdevs, I don't like the idea of going past the suggested 9-disk limit in ZFS. I'm leaning most strongly towards the last option for the redundancy / expandability, even if it results in the least usable space.
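
For a rough usable-space comparison of those three layouts (assuming 1 TB drives, counting only parity overhead, and ignoring ZFS metadata and slop, so purely illustrative):

code:

# Rough usable capacity per layout: each vdev loses its parity disks.
def usable_tb(vdevs):
    # vdevs is a list of (disks_in_vdev, parity_disks) tuples
    return sum(disks - parity for disks, parity in vdevs)

layouts = {
    "2x 9-disk RAIDZ2 + 2-disk RAIDZ":  [(9, 2), (9, 2), (2, 1)],
    "2x 8-disk RAIDZ2 + 4-disk RAIDZ":  [(8, 2), (8, 2), (4, 1)],
    "2x 7-disk RAIDZ2 + 6-disk RAIDZ2": [(7, 2), (7, 2), (6, 2)],
}
for name, vdevs in layouts.items():
    print(f"{name}: ~{usable_tb(vdevs)} TB usable (raw, 1 TB drives)")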

Any other thoughts?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

devilmouse posted:

After procrastinating for more than a year, I'm finally putting together my storage machine. I'm either going to run OpenSolaris if I can stomach using Solaris again or I'll just go back to the comforting land of FreeBSD. The machine itself will be our home fileserver, doing the usual lifting: streaming music/movies to a handful of computers around the house, serving as a backup store for the various macs we have lying around, and occasionally serving torrents.

The parts list is, for the most part, pretty standard as far as I can tell from wandering around the AVSforums.

Case: NORCO RPC-4220
CPU: Intel Xeon E3110 3.0GHz LGA 775 65W Dual-Core Processor
Controllers: 2x SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X 133MHz
System HD: Kingston SSDNow V Series SNV125-S2/30GB 2.5" Internal Solid State Drive (SSD)
RAM: Kingston 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) ECC Unbuffered Desktop Memory Model KVR800D2E6K2/4G
MB: SUPERMICRO MBD-X7SBE LGA 775 Intel 3210 ATX
PSU: CORSAIR CMPSU-750TX
DVD: Sony Optiarc Slim Combo Black SATA Model CRX890S-10
DVD cable: BYTECC 18" Sata and Slim Sata Power 7+6pin Cable, for Sata Slim OD
Fans: 3x Noctua NF-S12B FLX 120mm, 2x Noctua NF-R8-1800 80mm
Sata power cables: 5x custom cables from frozencpu.com
Custom 120mm fanboard from some dude named cavediver

Questions in no particular order:
* Overall, any issues with the parts list?
* Unbuffered or buffered RAM? While I know that I want ECC RAM on the off-chance of a freak occurrence, I'm less sure on the question of buffered.
* Is there a better motherboard that I haven't been able to find? The boards with 2 PCIX slots are rare at best and this was the only one I managed to find on newegg.
* ZFS configuration... I'm not sure how to make the best use of the 20 bays in terms of vdevs. I'm not going to be buying all 20 drives at once, so possible options when everything is said and done:
2x 9-disk RAIDZ2 + 2-disk RAIDZ
-or-
2x 8-disk RAIDZ2 + 4-disk RAIDZ
-or-
2x 7-disk RAIDZ2 + 6-disk RAIDZ2

While it's tempting to go to just 2x 10-disk RAIDZ2 vdevs, I don't like the idea of going past the suggested 9-disk limit in ZFS. I'm leaning most strongly towards the last option for the redundancy / expandability, even if it results in the least usable space.

Any other thoughts?

You can do better. Drop the PCI-X card for a pair of AOC-USAS-L8i's. They're PCI Express x8 with 2 SAS ports (each SAS port can be broken out into 4 SATA ports, but I believe the 4220 is a SAS case anyway, so you save some cable). They also work great in OpenSolaris (I have one right now). I think that case might also use molex ports, so no need for the SATA power cables.

Also, I would ditch your lame CPU for an AMD Phenom II X2 or X3 or X4. You can get chips that are cheaper and have more cores when you go AMD.

Good choice on the fan board though.

devilmouse
Mar 26, 2004

It's just like real life.

FISHMANPET posted:

You can do better. Drop the PCI-X card for a pair of AOC-USAS-L8i's. They're PCI Express x8 with 2 SAS ports (each SAS port can be broken out into 4 SATA ports, but I believe the 4220 is a SAS case anyway, so you save some cable). They also work great in OpenSolaris (I have one right now). I think that case might also use molex ports, so no need for the SATA power cables.

Also, I would ditch your lame CPU for an AMD Phenom II X2 or X3 or X4. You can get chips that are cheaper and have more cores when you go AMD.

Doh, good catch on the cable stuff. Turns out that it does still use molex power for the drive plane, as well as SAS connectors.

Did you have any problems making that UIO bracket fit in your case (assuming you're not using a SuperMicro case)? It's a good-looking card otherwise; wonder why Newegg doesn't sell it? Going PCIe saves me a bunch of headaches on finding an older MB that supports it.

Any suggestions on a motherboard with onboard video and at least 2x PCIe 8x slots, for the Phenoms?

Thanks for the input!

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

devilmouse posted:

Doh, good catch on the cable stuff. Turns out that it does still use molex power for the drive plane, as well as SAS connectors.

Did you have any problems making that UIO bracket fit in your case (assuming you're not using a SuperMicro case)? It's a good-looking card otherwise; wonder why Newegg doesn't sell it? Going PCIe saves me a bunch of headaches on finding an older MB that supports it.

Any suggestions on a motherboard with onboard video and at least 2x PCIe 8x slots, for the Phenoms?

Thanks for the input!

I happened to have a bracket from a wireless card that I could transplant onto the card. Since you're using that rackmount case, you would probably be fine letting the card fly free without any bracket. A few pages back somebody posted pics with that card in a rackmount case.

As for the motherboard, I'd recommend what I got, except I was a dumb rear end and forgot to get a board with onboard video. Pretty much any AMD chipset will work, but you'll need to buy an Intel NIC.

Also, I forgot to say it, but ZFS likes smaller pools. I'll post the article when I get some more time, but basically more pools means faster writes.

Fake edit: There's another supermicro card, something like the AOC-USAS-MV8. It's a few bucks cheaper than the L8i. DON'T GET IT. It works well in Linux and BSD, but not Solaris.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You don't need a powerful CPU for most media server use cases. Unless you're doing transcoding like with PS3MediaServer, you're best off getting a moderately powerful 64-bit CPU with low TDP for an OpenSolaris file server. ZFS benefits from fast CPUs, sure, but it's probably not worth it for even most small office uses of a file server. I'd say no more than an E5200 (or the Xeon equivalent) would ever be necessary for a ZFS file server unless it's running at quite high loads.

My E5200 setup only uses about 50W idle and can churn through video if I need it to, while I doubt most of the Xeons can get total system wattage that low. Power consumption may matter more or less in your region or household.

devilmouse posted:

Unbuffered or buffered RAM?
Buffered/registered RAM is not as necessary if you're using ZFS specifically for its data integrity, but it can help smooth out the irregularities most people see with consumer non-ECC, non-registered RAM if you get a weird batch. For servers this is normally not even a question; you go ECC + registered because of the importance of each server. The only reason I'm going to use it later is that if I'm going to trust 24TB of data to a server, it better be stinkin' reliable as gently caress.

FISHMANPET posted:

Also, I would ditch your lame CPU for an AMD Phenom II X2 or X3 or X4. You can get chips that are cheaper and have more cores when you go AMD.
Have you seen the server boards available for AMD CPUs? Terrible selection of NICs, and most assume you're going to be running multiple physical CPUs. If you go with consumer-level motherboards, you'll almost certainly need to buy an extra couple of NICs. So for the sake of convenience in the motherboard layout, so I don't need to use two extra PCI slots, I'd go with a cheaper Intel CPU on a server-class motherboard. This is really personal preference as far as I'm concerned, not really about cost (maybe about a $100 cost difference, which I would hope doesn't matter if you're building such a large array in the first place).

Also, part of why you should get a server motherboard with multiple PCIe x8/x16 slots is that most consumer-class motherboards split the lanes into a single x8 on the back end (because even SLI setups don't max out 8 lanes) and incur some multiplexing penalty (the cost of the extra lanes exceeds that of the multiplex logic). This means that if you use two PCIe SAS/SATA cards with 8+ drives each, you don't get the maximum bandwidth possible. This doesn't matter if you only use one card, but it can be a problem if you use two or more cards that run at high load.


devilmouse
Mar 26, 2004

It's just like real life.
Thanks for the feedback, guys... I've bumped up the RAM to 8 gigs after seeing how much ZFS appreciated the extra memory. I'm glad you pointed out the AOC-USAS-L8i, too, since using PCI-X was making me sad inside.

Now I just have to find a motherboard that supports all this stuff. Oh, Newegg, why must your motherboard filter be so bad?
