shifty
Jan 12, 2004

I dont know what you're talking about

Longinus00 posted:

You're still missing the most important information: how much bandwidth those video streams require.
I don't know what bandwidth the video streams require. My tuner is an HDHomeRun Prime, which they claim records at 4-8GB per hour. It has 3 tuners, but I'll be adding one more eventually.

Longinus00 posted:

Considering you're only allocating 1.5TB for it, I'm guessing you're saving compressed files. I assume you're making one big z2 pool with all your disks and then making a subvolume for the iSCSI target?
Video should be recorded as H.264. I'm not sure if it's compressed or not, but 1TB is plenty of space for general viewing. I can move videos I want to keep to the larger CIFS share. Yes, I'm using one big z2 pool and creating a subvolume for the iSCSI target. Depending on performance, I might switch to a mirror for the iSCSI volume instead. Only recorded TV and backups will be on that subvolume; my other video files will be shared via CIFS, since those can be streamed from a network share. The only reason I'm using iSCSI is that TV can't be recorded to a network share.

Unfortunately, it doesn't seem to be stable with the 10Gb NIC. I switched to 1Gb, and it seems to be fine, but a bit slower. Now Windows is reporting 15-20MB/s (it was 60). I'm not really sure what I can do about that. Hopefully it will be faster when I'm connecting to raw drives.

b0lt
Apr 29, 2005

shifty posted:

Unfortunately, it doesn't seem to be stable with the 10Gb NIC. I switched to 1Gb, and it seems to be fine, but a bit slower. Now Windows is reporting 15-20MB/s (it was 60). I'm not really sure what I can do about that. Hopefully it will be faster when I'm connecting to raw drives.

Virtualized IO performance is pretty lovely, even with raw drives (it can even be slower in some cases). If you want to virtualize your NAS, you should use VT-d (or whatever AMD calls their version) passthrough on an entire controller.

r u ready to WALK
Sep 29, 2001

b0lt posted:

you should use VT-d (or whatever AMD calls their version) passthrough on an entire controller.

Seconding this: you MUST use VT-d or you will have to reboot the entire ESXi host once hard drives start to fail. At least it was that way when I tested some failure scenarios on ESXi 4.1. If you pull a drive and replace it with another, ESXi is unable to reset the controller channel and detect the new drive until you reboot the whole machine. It turns out ESXi really hates total device loss; it can sort of cope when it happens with iSCSI and FC targets, but not with local drives.

That's why I wound up with Ubuntu, zfsonlinux, and KVM instead. I wish I had bought a motherboard that supported VT-d, oops :(

I'd love to hear if ESXi 5.0 deals with it any better, though. Try pulling a raw mapped local disk, replace it with another on the same channel and then rescan for storage. On ESXi 4.1 the channel would be blocked until the next reboot.

If you're not going to hotswap perhaps it's not a big deal, but my server has drive trays and the whole point of running an array for me is to be able to switch out the disks without powering everything off. Having to power cycle your array every time a disk fails is a great way to get a double or triple disk error while rebuilding, or to pull the wrong disk by accident.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Do use VT-d to pass through controllers for ZFS. Do not bother running ZFS on virtual disks (virtual compatibility mode RDMs can work, though performance will be worse than with VT-d) unless you're messing with drive parameter calculations or something else that's not related to reliability. For example, if a drive dies and is an RDM to your VM, VMware may not even detect the failure readily, and the commands the VM sees will not be the same as if you had removed the disk from the VM manually (that acts as a hypervisor-side unmount of sorts, rather than yanking a disk out or dropping it).


ESXi 5.0 has had a number of improvements in storage management rather than storage resilience (Storage DRS is cool and all, but it's not quite as relevant as compute DRS). I haven't seen anything in the release notes that would lead me to believe the behavior around DAS losses is handled any better. Almost everything VMware invests in for ESX(i) is aimed at enterprise installs, which almost never use DAS except for niche, transitional use cases on the way to FC or iSCSI SANs - pick your protocol and move on.

ESX(i) by design must block at that point to rescan storage in general (scanning for HBAs, for example) because there's what amounts to a big internal lock across all storage. Storage configuration changes (including loss of a drive) are not something that happens repeatedly in a (sane) ESX(i) environment. It's about 20% of the reason why, at one job I was at, it took at least an hour to create a single VM - we did an HBA scan for every VM created, because discovering a new drive and discovering a new controller are treated the same by ESX(i) hosts in terms of impact on the storage configuration.

shifty
Jan 12, 2004

I dont know what you're talking about
I didn't realize that. I was just planning on using VT-d to pass through the drives themselves. Looks like I'm buying a controller then. Any recommendations? I only have 3 PCIe slots and need to save at least one of them for NICs (otherwise I'm buying a new motherboard too), so I'll need 4+ port cards.

Edit: The drives I'll be using are 3TB.

shifty fucked around with this message at 15:23 on Aug 2, 2012

b0lt
Apr 29, 2005

shifty posted:

I didn't realize that. I was just planning on using VT-d to pass through the drives themselves. Looks like I'm buying a controller then. Any recommendations? I only have 3 PCIe slots and need to save at least one of them for NICs (otherwise I'm buying a new motherboard too), so I'll need 4+ port cards.

Edit: The drives I'll be using are 3TB.

You'll probably want something based on the LSI 2008 chipset; you can sometimes find pretty cheap ones on eBay that were stripped from old IBM servers.

Longinus00
Dec 29, 2005
Ur-Quan

shifty posted:

I don't know what bandwidth the video streams require. My tuner is an HDHomeRun Prime, which they claim records at 4-8GB per hour. It has 3 tuners, but I'll be adding one more eventually.

The bandwidth is staring you in the face: 8GB/hr is approximately 20ish Mbps. That means a 1-minute buffer on 5 streams will require at least 750MB. In that case a 2GB ARC should be enough to keep everything running smoothly, so something like 4GB of total RAM for the VM.
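
Spelled out, the arithmetic looks roughly like this (taking the tuner's 8GB/hr worst case at face value, binary units assumed):

code:
  # per-stream bitrate from the 8GB/hr worst case
  echo "8 * 8 * 1024 / 3600" | bc -l     # ~18.2 Mbps, call it 20
  # one minute of buffer across 5 concurrent streams, in MB
  echo "18.2 / 8 * 60 * 5" | bc -l       # ~683 MB, so 750MB is a safe floor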

shifty posted:

Video should be recorded as H.264. I'm not sure if it's compressed or not, but 1TB is plenty of space for general viewing. I can move videos I want to keep to the larger CIFS share. Yes, I'm using one big z2 pool and creating a subvolume for the iSCSI target. Depending on performance, I might switch to a mirror for the iSCSI volume instead. Only recorded TV and backups will be on that subvolume; my other video files will be shared via CIFS, since those can be streamed from a network share. The only reason I'm using iSCSI is that TV can't be recorded to a network share.

H.264 is a video compression standard, so yes, these are compressed videos, in the same way MP3s are compressed audio.

shifty posted:

Unfortunately, it doesn't seem to be stable with the 10Gb NIC. I switched to 1Gb, and it seems to be fine, but a bit slower. Now Windows is reporting 15-20MB/s (it was 60). I'm not really sure what I can do about that. Hopefully it will be faster when I'm connecting to raw drives.

shifty
Jan 12, 2004

I dont know what you're talking about

b0lt posted:

You'll probably want something based on the LSI 2008 chipset; you can sometimes find pretty cheap ones on eBay that were stripped from old IBM servers.

Thanks. I ended up going with an LSI SAS9211-8i. LSI says it should support 3TB drives, so here's hoping. I'll find out tonight.

Longinus00 posted:

The bandwidth is staring you in the face: 8GB/hr is approximately 20ish Mbps. That means a 1-minute buffer on 5 streams will require at least 750MB. In that case a 2GB ARC should be enough to keep everything running smoothly, so something like 4GB of total RAM for the VM.
Thanks so much for your help. I'll start there and monitor. I really hope the 20MB/s is because of the virtualized drives and not the NIC. I won't find out until I buy another hard drive.

Longinus00 posted:

H.264 is a video compression standard, so yes, these are compressed videos, in the same way MP3s are compressed audio.
For some reason I thought it was a container and not necessarily compressed.

Edit: After a very long night, I got the card flashed, installed, and passed through. Just testing on a single drive, it copied data at around 140MB/s. Definitely better than before. I was unable to test the 3TB drive; I'll try tomorrow.

shifty fucked around with this message at 06:15 on Aug 4, 2012

r u ready to WALK
Sep 29, 2001

I just had this silly idea - Linux has an iSCSI target, and Windows 7 comes with an iSCSI initiator built in.

I've already created a 2TB zvol and exported it over iSCSI, and it seems like Windows will let me convert my internal drives to dynamic and RAID1-mirror them onto the iSCSI zvols! :downs:

I think I'm going to try breaking the mirror and running my whole Steam installation from an iSCSI drive tomorrow. CrystalDiskMark says my internal WD Green does 50MB/s while the iSCSI drive does 90MB/s (and is backed by tons of cache, so random read/write is WAY faster).

If it sucks really badly, I can just swap the drive letters around or do a sync back to the local drive to get back to normal.
But it would be very fun to stick all my storage in the basement and run my desktop with everything iSCSI- and SMB-mounted except for the boot drive!
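
For reference, the Linux side was only a few commands with the tgt userspace target - a rough sketch from memory, where the pool name "tank", the dataset name, and the IQN are all made up:

code:
  # create the zvol and export it over iSCSI with tgt (names are placeholders)
  zfs create -V 2T tank/steam
  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-08.home:steam
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/zvol/tank/steam
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL    # no auth - LAN only!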

teamdest
Jul 1, 2007
The IBM M1015 is my current recommendation for a cheap controller. It needs to be reflashed to the standard LSI 2008 firmware/boot ROM, which can be tricky since some motherboards won't do it, but once it's flashed it's been perfect. I've got an extra one at the moment because I thought I ruined it by screwing up the flash, but it turns out I'm just dumb sometimes.

The M1015 with the IT-mode firmware supports 3TB drives and does passive passthrough without requiring any drivers at least under OpenIndiana and Solaris 11.

shifty
Jan 12, 2004

I dont know what you're talking about

teamdest posted:

The IBM M1015 is my current recommendation for a cheap controller. It needs to be reflashed to the standard LSI 2008 firmware/boot ROM, which can be tricky since some motherboards won't do it, but once it's flashed it's been perfect. I've got an extra one at the moment because I thought I ruined it by screwing up the flash, but it turns out I'm just dumb sometimes.

The M1015 with the IT-mode firmware supports 3TB drives and does passive passthrough without requiring any drivers at least under OpenIndiana and Solaris 11.

You're not joking. I had to flash my card using the EFI shell, which did everything except the BIOS. No clue why, but I crossed my fingers, rebooted, and re-flashed from Windows 7, which did the BIOS.

Bonobos
Jan 26, 2004
Just a heads up, HP N40Ls are on sale for $250 on the egg right now (Shell Shocker or whatever). Just picked up a spare. Not the $200 or whatever they were going for at the beginning of the year, but these haven't been on sale in forever.

Delta-Wye
Sep 29, 2005
I've been sort of dinking around with my new drives in my spare time and still can't get them working. It's frustrating because neither drive shows errors separately, but together they are very, very unhappy. At this point I actually suspect a power issue; the new drives draw a little more current than the old ones according to the labels, and all four of the new drives together might be a bit too much.

I need to replace the case anyway, so I've begun hunting around for a replacement for a small NAS. As expected, all of the nice cases are pretty expensive, so I figured if I'm going to go all out and rebuild from scratch, I might as well get two more drives and switch to raidz2. So I'm looking at a couple of mini-ITX cases that support that many drives (but none with sweet drive trays like my old Chenbro) along with a mini-ITX board that can handle that many drives. This has turned out to be pretty difficult!

Cases seem limited to Lian Li PC-Q08B, PC-Q25B and the Fractal Design Array R2.

The motherboard is even more of a PITA. Very few have 7ish SATA connections (right now I have one with 4 for the storage drives and a PATA connection for the system drive), and I don't know if the cases will have room for a large card like the M1015.

Are there any common builds for such a computer? It doesn't seem as popular as I would have guessed.

b0lt
Apr 29, 2005

Delta-Wye posted:

I need to replace the case anyway, so I've begun hunting around for a replacement for a small NAS. As expected, all of the nice cases are pretty expensive, so I figured if I'm going to go all out and rebuild from scratch, I might as well get two more drives and switch to raidz2. So I'm looking at a couple of mini-ITX cases that support that many drives (but none with sweet drive trays like my old Chenbro) along with a mini-ITX board that can handle that many drives. This has turned out to be pretty difficult!

Cases seem limited to Lian Li PC-Q08B, PC-Q25B and the Fractal Design Array R2.

The motherboard is even more of a PITA. Very few have 7ish SATA connections (right now I have one with 4 for the storage drives and a PATA connection for the system drive), and I don't know if the cases will have room for a large card like the M1015.

Are there any common builds for such a computer? It doesn't seem as popular as I would have guessed.

Hello, me from a month ago! I went with the PC-Q25B, some mini-ITX Intel board, and the SuperMicro equivalent of the M1015 (don't do this - the bracket is reversed, so you can't actually use it with the case properly unless you substitute your own bracket), and it works great. You should probably buy a small, fully modular PSU though, or you'll end up with a clusterfuck of cables.

Gism0
Mar 20, 2003

huuuh?
The bracket problem is a common one, though the proper brackets are pretty cheap on eBay.

Gism0
Mar 20, 2003

huuuh?
My new toy: Core i5 3470 3.2GHz, P8Z77-I DELUXE, 16GB RAM, and a 128GB Samsung 830 Series SSD, in a BitFenix Prodigy mini-ITX case.

I tried ESXi 5 but had endless problems. The first problem was getting the Ethernet driver working, but I found a custom driver for that. After that I made an OpenIndiana VM, which worked fine, and VMDirectPath worked great as well. Then I made the mistake of passing through the USB controller I was using to boot ESXi, so I had to reinstall. Then I couldn't boot Windows 7 with VMDirectPath'd graphics and more than 2GB RAM - it's a known problem apparently, but none of the workarounds would let me boot with more than 2GB without a BSOD. Finally I gave up when I couldn't get OpenELEC to output to the TV, though looking back that was probably the same problem (below) I had with Ubuntu.

Finally I installed Ubuntu 12.04, and even that didn't work right away: I had to apply a bug fix for the Intel xorg drivers posted only 3 days ago or it'd kernel panic on boot! I was hoping my days of tinkering with xorg configuration were over.

Now everything is sweet. ZFS is working great (even if it's just a single USB 3 disk, heh) and XBMC is lightning fast. Now I just need some money for 5 x 3TB drives...

Rukus
Mar 13, 2007

Hmph.

Delta-Wye posted:

...

Cases seem limited to Lian Li PC-Q08B, PC-Q25B and the Fractal Design Array R2.

The motherboard is even more of a PITA. Very few have 7ish SATA connections (right now I have one with 4 for the storage drives and a PATA connection for the system drive), and I don't know if the cases will have room for a large card like the M1015.

Are there any common builds for such a computer? It doesn't seem as popular as I would have guessed.

I "upgraded" from a PC-Q08B to a PC-Q25B and I really like it. I'm running my drives in a JBOD pool, so the card I use is a SuperMicro SAS to SATA: http://www.supermicro.com/products/accessories/addon/aoc-sas2lp-mv8.cfm. It works great, and supports 3TB drives (and probably 4TB as well).

Like b0lt mentioned, definitely look for a proper-fitting PSU. Modular is preferred, but the recommended manufacturers' (Seasonic/Corsair/XFX) modular PSUs are too long. I ended up using a Corsair 400W (the older builder series, when they were still Seasonic rebadges) and just placed the extra cables under the PSU.

Another option is Silverstone's Strider PSU: http://www.newegg.com/Product/Product.aspx?Item=N82E16817256065. You can also pick up shorter cables for it: http://www.newegg.com/Product/Product.aspx?Item=N82E16812162010. Though I've heard it uses cheaper capacitors and the fan is a bit more audible - that's the tradeoff for being fully modular at that size of PSU.

Delta-Wye
Sep 29, 2005

Rukus posted:

I "upgraded" from a PC-Q08B to a PC-Q25B and I really like it. I'm running my drives in a JBOD pool, so the card I use is a SuperMicro SAS to SATA: http://www.supermicro.com/products/accessories/addon/aoc-sas2lp-mv8.cfm. It works great, and supports 3TB drives (and probably 4TB as well).

Like b0lt mentioned, definitely look for a proper-fitting PSU. Modular is preferred, but the recommended manufacturers' (Seasonic/Corsair/XFX) modular PSUs are too long. I ended up using a Corsair 400W (the older builder series, when they were still Seasonic rebadges) and just placed the extra cables under the PSU.

Another option is Silverstone's Strider PSU: http://www.newegg.com/Product/Product.aspx?Item=N82E16817256065. You can also pick up shorter cables for it: http://www.newegg.com/Product/Product.aspx?Item=N82E16812162010. Though I've heard it uses cheaper capacitors and the fan is a bit more audible - that's the tradeoff for being fully modular at that size of PSU.

Thanks for the info, guys. Do you use a FreeBSD variant with that SATA card? Driver support is another issue of concern :(

Also, my fellow packrats... Newegg has WD20EARX 2TB drives for $100 again: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136891

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Delta-Wye posted:

Also, my fellow packrats... Newegg has WD20EARX 2TB drives for $100 again: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136891

The 3TB Seagate has been holding steady at $50/TB: http://edwardbetts.com/price_per_tb/internal_hdd/

There were a few open box deals on 2TB drives earlier this week for $30/TB

b0lt
Apr 29, 2005

Delta-Wye posted:

Thanks for the info, guys. Do you use a FreeBSD variant with that SATA card? Driver support is another issue of concern :(

Also, my fellow packrats... Newegg has WD20EARX 2TB drives for $100 again: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136891

It has good driver support for FreeBSD, Solaris, and Linux. It's basically the same card as the IBM M1015, so the same caveats with regards to flashing the firmware apply.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

b0lt posted:

It has good driver support for FreeBSD, Solaris, and Linux. It's basically the same card as the IBM M1015, so the same caveats with regards to flashing the firmware apply.

I'm pretty sure the AOC-SASLP-MV8 doesn't have Solaris drivers.

movax
Aug 30, 2008

FISHMANPET posted:

I'm pretty sure the AOC-SASLP-MV8 doesn't have Solaris drivers.

Yeah, that one doesn't; I think it's the similarly named model, an old PCI-X card, that works under Solaris. The AOC-SAT2-MV8 or something?

Delta-Wye
Sep 29, 2005
No Solaris support doesn't mean no *BSD support, right?

Just checking...

evil_bunnY
Apr 2, 2003

Different kernel; BSD ran on a wider range of hardware last time I checked.

Delta-Wye
Sep 29, 2005

evil_bunnY posted:

Different kernel, BSD ran on a wider range of hardware last time I checked.

I do know the difference between the two, and I am aware of BSD's pretty decent hardware support; I'm asking about this particular piece of hardware. If it works, I may be ordering it in the next day or two. If this particular item doesn't (and it's usually the cheaper low-end equipment that has the spottiest support), then I'll keep digging for a SATA solution.

b0lt
Apr 29, 2005

FISHMANPET posted:

I'm pretty sure the AOC-SASLP-MV8 doesn't have Solaris drivers.

Oops, yeah. Here's a pretty good listing of SAS cards and driver support. The supermicro LSI 2008 cards are the AOC-USAS2-L8i and L8e.

Hiyoshi
Jun 27, 2003

The jig is up!
A few weeks ago, about twenty 2TB consumer-grade Seagate drives were given away at work. They work perfectly fine on their own (and in smaller RAIDs), but our IT coordinator said that for whatever reason they just would not build larger RAIDs properly. He told me they would build and work in RAIDs of up to 5 drives, but any larger than that and the RAIDs just wouldn't build. According to him, it's not that uncommon for consumer-level drives to fail to build larger RAIDs reliably. What could be the cause of this?

I grabbed a few of the drives and the one I put in my PC has been excellent except for mysteriously reporting 4 TB of space on the initial format (I tried to put more than 2 TB of data on it but it threw errors at the 2 TB mark :().

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So question about moving from a controller that doesn't support larger than 2 TB drives to one that does.

Right now I've got 5 750GB drives, and one of them has gone tits up in a way that SeaTools doesn't detect, so I think I'll just buy a new one.

I might as well get a 3TB drive, since I'm just about full and I need to replace the 750GB drives with larger ones anyway. So my plan is to get a 3TB drive and only use 750GB of it, and keep doing that until I replace the controller; then I can upgrade the rest to 3TB. I'll probably have to zero out the drives I used before the new controller, right?

decryption
Jun 23, 2005

I'm looking to build a 6-disk FreeNAS box and wondering what the lowest-power-consumption CPU & PSU combo is these days. It'll be on 24/7, so I'm looking to cut power usage as much as possible to save money on running costs (electricity is 22c/kWh here in Australia!).

I'm thinking that an Intel Atom D2700-based box will be enough grunt, together with a 4-port SATA HBA - but the motherboards it's bundled with only have PCI slots, not PCIe, which I don't think is enough for a 4-port SATA HBA?

The next option is an AMD E-350-based CPU & board, but according to some benchmarks I've seen, it doesn't really use less power than the next option, an Intel i3-2100 CPU, which costs a little more but is way faster and lets me use an ATX board with more SATA ports.

So is the i3-2100 really the lowest-power CPU out there these days that's suitable for a 6-disk NAS?

And as for a PSU, I've seen the PicoPSU, but it probably doesn't have enough power for 6 drives, does it?

evil_bunnY
Apr 2, 2003

What's your projected software stack?

r u ready to WALK
Sep 29, 2001

decryption posted:

So is the i3-2100 really the lowest-power CPU out there these days that's suitable for a 6-disk NAS?

And as for a PSU, I've seen the PicoPSU, but it probably doesn't have enough power for 6 drives, does it?

Don't worry too much about the motherboard and power supply; those parts generally don't waste much power. Just look for a PSU with a high efficiency grade - even if it's a 400W+ model, that doesn't mean it's wasting loads of power when the components need less.

My Linux file server, with a 500W power supply, 8x 3TB WD Reds, a bunch of PCIe cards, and an i3-2120T CPU, is using 90-95W at the wall. Even if your power is very expensive, that doesn't add up to very many kWh - it'll cost you roughly 50 cents a day to run 24/7.

Longinus00
Dec 29, 2005
Ur-Quan

error1 posted:

Don't worry too much about the motherboard and power supply; those parts generally don't waste much power. Just look for a PSU with a high efficiency grade - even if it's a 400W+ model, that doesn't mean it's wasting loads of power when the components need less.

My Linux file server, with a 500W power supply, 8x 3TB WD Reds, a bunch of PCIe cards, and an i3-2120T CPU, is using 90-95W at the wall. Even if your power is very expensive, that doesn't add up to very many kWh - it'll cost you roughly 50 cents a day to run 24/7.

Power supply efficiency tends to drop to around 70% (or less) quickly below 20% load, which is where the 80 Plus certification measurements stop. If the vast majority of your time is spent under 20% load, you might consider a more appropriately sized power supply if possible. In your example you're probably drawing somewhere in the range of 70-75W from the PSU, so you're looking at savings of maybe 5-10W from a supply that would fit that 70W into the 82% Bronze band (350-400W). The only thing to watch out for is making sure the PSU is strong enough to handle the power-on current draw, or enabling staggered spin-up on the controller.

r u ready to WALK
Sep 29, 2001

Thanks! I didn't know that, but http://en.wikipedia.org/wiki/80_PLUS#Efficiency_level_certifications has the details for each of the certifications.

The problem is that my local computer shops only sell big-rear end power supplies with good ratings; the ones with sub-500W ratings are really cheap ones with 80 Plus Bronze at most.

But I see you can get 300W supplies with Gold ratings if you dig around, like this one: http://www.anandtech.com/Show/Index/4069?cPage=3&all=False&sort=0&page=1&slug=huntkey-300w-80plus-gold

I'll think about switching! It would have to last a couple years before blowing up to justify the cost though.

Longinus00
Dec 29, 2005
Ur-Quan

error1 posted:

Thanks! I didn't know that, but http://en.wikipedia.org/wiki/80_PLUS#Efficiency_level_certifications has the details for each of the certifications.

The problem is that my local computer shops only sell big-rear end power supplies with good ratings; the ones with sub-500W ratings are really cheap ones with 80 Plus Bronze at most.

But I see you can get 300W supplies with Gold ratings if you dig around, like this one: http://www.anandtech.com/Show/Index/4069?cPage=3&all=False&sort=0&page=1&slug=huntkey-300w-80plus-gold

I'll think about switching! It would have to last a couple years before blowing up to justify the cost though.

The important bits to take away from that Wikipedia article are these two points.

wikipedia posted:

To qualify for 80 PLUS, a power supply must achieve at least 80% efficiency at three specified loads (20%, 50% and 100% of maximum rated power). However, 80 PLUS supplies may still be less than 80% efficient at lower loads. For instance, an 80 PLUS, 520 watt supply could still be 70% or less efficient at 60 watts (a typical idle power for a desktop computer).[7] Thus it is still important to select a supply with capacity appropriate to the device being powered.

It is easier to achieve the higher efficiency levels for higher wattage supplies, so gold and platinum supplies may be less available in consumer level supplies of reasonable capacity for typical desktop machines.

A 5-10W difference is probably going to cost you less than $10 a year, so it's not really cost-efficient to buy a new PSU for an existing system. The price difference between a Gold and a Bronze rating at these power draws is also too slim to offset the price increase; a 3W (~5%) difference would take six and a half years to pay off at the current average price of electricity in the US. On the other hand, 80 Plus Bronze 300W power supplies are as cheap as any other 80 Plus power supply of any wattage, so spending more money on a high-wattage PSU for a NAS doesn't make much sense.
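
For anyone checking the math, the payoff estimate works out like this (assuming roughly $0.12/kWh and a ~$20 price premium for the Gold unit - both ballpark figures):

code:
  # 3W saved, running 24/7, at an assumed ~$0.12/kWh
  echo "3 * 24 * 365 / 1000 * 0.12" | bc -l   # ~3.15 dollars saved per year
  echo "20 / 3.15" | bc -l                    # ~6.3 years to recoup a ~$20 premium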

Bonobos
Jan 26, 2004
So anyone using the new WD Red drives? I put in an order with Amazon for 4x 3TB drives, but they are not in stock and I have no clue when I will receive them.

I'd like to use them in a ZFS system, and was debating between them and Hitachi's 5400 or 7200 RPM drives, which are the same price but available now. Any advantages to rolling with the Reds? Just want to assuage my apprehension about basically being an early adopter. I just need them to be quiet and stable drives. They like JUST came out, but reviews are mixed on the egg (despite a good review from Tom's).

r u ready to WALK
Sep 29, 2001

Bonobos posted:

So anyone using the new WD Red drives?
I've had 8 of them for a week now.
Sequential read/write in an 8-drive raidz2 hovers around 300MB/s; copying from one ZFS filesystem to another inside the same pool runs at around 100-120MB/s.
These drives certainly aren't built for IOPS, but the upside is that even with 8 drives the system is very quiet.

One important caveat with using 3TB AF drives though: https://github.com/zfsonlinux/zfs/issues/548

I created my pool with ashift=12 instead of the default ashift=9, and I honestly think I'd recommend against it. I'll have a hard time emptying and recreating my pool, but with the 4k shift I'm losing tons of space to metadata, since each metadata block that would have fit in a 512-byte sector gets ballooned out to the 4k alignment. It's especially bad when creating a zvol with the default options - you'll end up using twice the pool space! But creating the zvol with "zfs create -b 128k -V 2T redz/vols/games" makes it reasonably space-efficient again.
The 128k zvol blocks kill my random read performance on that volume, though. :(

code:
  pool: redz
 state: ONLINE
 scan: scrub repaired 0 in 10h3m with 0 errors on Sat Aug  4 01:04:46 2012
config:

        NAME                                           STATE     READ WRITE CKSUM
        redz                                           ONLINE       0     0     0
          raidz2-0                                     ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01376xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01159xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01164xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01373xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01376xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01180xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01192xx  ONLINE       0     0     0
            scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T01362xx  ONLINE       0     0     0
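
If you're setting up a similar pool, the relevant knobs look something like this - a sketch, not a recipe, with the disk path abbreviated:

code:
  # ashift is fixed per vdev at creation time, so set it explicitly up front
  zpool create -o ashift=12 redz raidz2 /dev/disk/by-id/scsi-SATA_WDC_WD30EFRX-...
  # then compare a zvol's space use against its logical size to spot ballooning
  zfs create -b 128k -V 2T redz/vols/games
  zfs list -o name,volsize,used,refer redz/vols/games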

complex
Sep 16, 2003

Remind me what the downside of leaving ashift=9 is again? A performance hit, right? ZFS writes 512-byte sectors and the drive performs an extra layer of translation?

shifty
Jan 12, 2004

I dont know what you're talking about

shifty posted:

After a very long night, I got the card flashed, installed, and passed through. Just testing on a single drive, it copied data at around 140MB/s. Definitely better than before. I was unable to test the 3TB drive; I'll try tomorrow.
I finally got everything transitioned over, and things are running OK. Until I get the rest of my drives, it's set up as one 500GB mirror (iSCSI) and one 2TB mirror (half CIFS + half iSCSI subvolume). The iSCSI performance is fine; writes are 60-100MB/s. CIFS performance is a little lower than I expected (20MB/s) - is that normal?

I'm reconsidering my plan to keep the CIFS share and the iSCSI subvolume on the same pool, though. When both are used at the same time, write performance on both drops to around 10MB/s, and I get a lot of stuttering when streaming video. Will I see the same results when I transition to RAID-Z2? If so, now that I have an 8-port controller dedicated to FreeNAS, I'll probably just do a mirror for CIFS and a RAID-Z2 for iSCSI.

What's the best backup strategy? Snapshots + rsync to an external drive? I only need about 2TB for backup.
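
Something like this is what I had in mind, if it helps frame the question (pool and mount names are made up):

code:
  # snapshot everything, then replicate to a pool on the external drive
  zfs snapshot -r tank@backup-20120805
  zfs send -R tank@backup-20120805 | zfs receive -F backup/tank
  # or, if the external drive isn't ZFS, rsync from the snapshot directory
  rsync -a /mnt/tank/.zfs/snapshot/backup-20120805/ /mnt/external/backup/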

r u ready to WALK
Sep 29, 2001

complex posted:

Remind me what the downside of leaving ashift=9 is again? A performance hit, right? ZFS writes 512-byte sectors and the drive performs an extra layer of translation?

Wikipedia article: http://en.wikipedia.org/wiki/Advanced_Format#Advanced_Format_512e
The idea is that to write a single emulated 512-byte sector, the drive has to read a whole physical 4k sector, modify the data in its internal cache to calculate the new ECC, and then rewrite the modified 4k sector.

So that means it has to read, wait for the platters to spin around to the same place then write.

I'm sure it's more complicated than that, and for sequential writes I suspect the drive holds the 512-byte writes in cache until it has enough of them to write a whole 4k sector efficiently. I guess it boils down to how busy your server will be and whether you're going to use zvols and snapshots a lot. I didn't really notice anything weird going on until I tried copying 2TB of data to my 8k-blocksize zvol and it grew to almost twice the size of the data it contained. I don't really need blazing performance to store my media library, so I wish I'd stuck with ashift=9 and kept the extra pool space. I'd love to see some IOPS comparison benchmarks, though.
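
Worth noting: you can check what a drive actually reports before picking an ashift (quick check on Linux; keep in mind some early AF drives misreport their physical sector size as 512):

code:
  # 512e AF drives should show logical 512 / physical 4096
  cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size
  hdparm -I /dev/sda | grep -i 'sector size'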

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?
I had posted a few pages ago about what controller card to look at if I wanted SMART information in Windows and was recommended an IBM M1015. Someone finally posted a batch of them for ~$70 on eBay, and I got it in the other day.

It looks like SMART data is kind of a pain to pull off the card (at least with the stock and IT firmware). I can get it to pull using smartmontools with the "-d sat" option and, by extension, GSmartControl, so it is available. I just don't see myself manually pulling the info very often. Really, what I was thinking of was some sort of "set and forget" solution that will just email me if it detects an issue, like Acronis Drive Monitor.

I can't seem to find any software that can pull SMART data and email me if it detects a potential issue. Any suggestions?
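
The closest I've gotten is smartd, which ships with smartmontools; something like this in smartd.conf might do it (device and address are placeholders, and on Windows the -m mail directive apparently needs -M exec plus a command-line mailer), but I was hoping for something more polished:

code:
  # monitor everything on the drive via the SAT layer, mail on trouble
  /dev/sda -d sat -a -m you@example.com -M test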

Random: Boy, that M1015 is very picky about what boards you can use to flash it. Nothing in my house (all AMD stuff) would do more than freeze attempting the erase. I took it into work and dug up an old OptiPlex that did it on the first go.
