|
Moey posted:Yea that's what was lingering in the back of my mind...I just couldn't bring myself to say it. 150GB, not TB? Depending on your budget, you could SSD that poo poo. If not, it would be criminal to do anything less than RAID10 with that small of a dataset (or hell just do like a 3-way mirror using ZFS or something if you don't want to buy hardware. You can even use those super-speedy 1TB disks for room to grow)!
|
# ? Nov 5, 2010 03:21 |
|
|
Moey posted:Any thoughts/opinions? Rather avoid buying an expensive RAID card... Get a copy of NexentaStor, the free trial one that's good to 12TB used. Plop two 100GB SSDs in the fucker as a L2ARC and watch the numbers fly. With modern hardware and an Intel Gigabit NIC, you can probably push 20k IOPS and ~90MB/sec through it easily. movax posted:Yeah, I don't know what the gently caress, they are the first-gen Seagate 1.5TB drives w/ firmware patch. I did upgrade/stop under-volting the CPU, which helped boost write performance. Intel NIC + a PowerConnect I figure is halfway decent network infrastructure, so... Yeah, you can just export/import the zpool after moving it across machines. Also, add is NOT the same as attach, so be very very loving sure which one you use before you gently caress with it. Run iostat -xen 5 and see what your drives are doing as you pull stuff over CIFS/NFS and to /dev/null locally. Each disk should be able to do about 35-50MB/sec, and over the network, you should be able to get ~80-100MB/sec. I just checked mine and it'll do ~85ish over the network using Windows 7 CIFS and a 3com Managed Gigabit switch. No jumbo packets yet. Methylethylaldehyde fucked around with this message at 09:03 on Nov 5, 2010 |
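A quick way to get the local read number described above without touching the network (a sketch: it makes its own scratch file so it runs anywhere, but on a real pool you'd point dd at a big existing file while watching iostat in another terminal):

```shell
# Local sequential-read sanity check to pair with `iostat -xen 5` elsewhere.
SCRATCH=$(mktemp)
dd if=/dev/zero of="$SCRATCH" bs=1M count=64 2>/dev/null  # make a 64MB scratch file
sync
dd if="$SCRATCH" of=/dev/null bs=1M 2>&1 | tail -n1       # read it back; summary line shows throughput
rm -f "$SCRATCH"
```

Caveat: reading a file you just wrote mostly measures the page cache, so for a real disk number use a file much larger than RAM.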
# ? Nov 5, 2010 08:54 |
|
necrobobsledder posted:they're within about 20% of each other and writes easily saturate a gigabit ethernet connection at 40MBps, in fact. Not sure why you think 40MBps is easily saturating a gigabit connection. Saturating should be around 80-100MBps. I have a $30 D-Link switch and onboard Realtek NICs and can push 100MBps without jumbo frames easily.
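For reference, the ceiling being argued about works out like this (simple arithmetic, assuming a standard 1500-byte MTU):

```shell
# Gigabit ceiling: 1 Gb/s = 125 MB/s raw, minus Ethernet/IP/TCP framing.
# At 1500 MTU, each 1538 bytes on the wire carries 1460 bytes of TCP payload.
RAW=$(( 1000000000 / 8 / 1000000 ))   # 125 MB/s on the wire
GOODPUT=$(( RAW * 1460 / 1538 ))      # ~118 MB/s best-case payload rate
echo "raw=${RAW}MB/s goodput=${GOODPUT}MB/s"
```

So ~118MB/s is the practical best case, which is why ~100MBps real-world transfers count as "saturated" and 40MBps doesn't.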
|
# ? Nov 5, 2010 15:04 |
|
I want to ditch my tower and go for a mac mini, but the big thing holding me back is that I need a decent storage solution that will work with my mac mini and my htpc, which runs Windows 7. Currently I just have the public folder on my hackintosh mounted on the htpc, which seems to work fine. But the mini lacks the expandability the current tower has. So I have been looking at NAS units and thought "christ, drobo is expensive!" I am looking at Drobos that actually attach to the network, not USB. I want a true NAS independent of any system in my setup. Reluctant to spend $800 on a diskless setup, I came up with the idea of using unRAID w/ an Atom. Does anyone know what kind of throughput you can expect with this? Is it decent enough that I could be writing to the drive from the mac mini while streaming to the htpc at the same time, or can an Atom just not keep up? I came across this case yesterday: http://www.silentpcreview.com/fractal-array and if you coupled it with this Super Micro Atom board: http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPE-H-D525.cfm you could essentially have an extremely low-power 6x2TB unRAID setup. Total cost would be $500ish instead of $800 sans disks, and it would still not look completely hideous. So anyone have experience with unRAID on an Atom? Is it worth pursuing? Or ZFS? Whatever works; I just don't want to spend $800 on a box that holds hard drives. flyboi fucked around with this message at 15:29 on Nov 5, 2010 |
# ? Nov 5, 2010 15:08 |
|
Methylethylaldehyde posted:Run iostat -xen 5 and see what your drives are doing as you pull stuff over CIFS/NFS and to /dev/null locally. Each disk should be able to do about 35-50MB/sec, and over the network, you should be able to get ~80-100MB/sec. I just checked mine and it'll do ~85ish over the network using Windows 7 CIFS and a 3com Managed Gigabit switch. No jumbo packets yet. Roger, will do when I get back home. I almost forgot to ask: what kind of performance penalty am I looking at for running 16 drives off 2 1068Es, and the last 4 off the mobo SATA controller (though 2 of those 4 will be hot spares)? And I should have 0 penalties for creating a pool w/ 2 vdevs to start and then adding a 3rd identical vdev in a few months, correct?
|
# ? Nov 5, 2010 15:21 |
|
Methylethylaldehyde posted:Get a copy of NexentaStor, the free trial one that's good to 12TB used. Plop two 100GB SSDs in the fucker as a L2ARC and watch the numbers fly. With modern hardware and an Intel Gigabit NIC, you can probably push 20k IOPS and ~90MB/sec through it easily. Never heard of NexentaStor before, but it looks interesting. Will do some investigating. Thanks
|
# ? Nov 5, 2010 15:33 |
|
Any opinions on the Seagate Barracuda LP drives for low power when working with ZFS and primarily large files? Are they known to cause problems like the WD Green drives? http://www.newegg.com/Product/Product.aspx?Item=N82E16822148413
|
# ? Nov 5, 2010 16:30 |
|
wang souffle posted:Any opinions on the Seagate Barracuda LP drives for low power when working with ZFS and primarily large files? They known to cause problems like the WD Green drives? I think if it isn't a 4k-sector drive, and it's at a fixed 5900rpm without any head-parking/other green crap, it's probably suitable for ZFS use. e: the datasheet reports 512b sectors, so I think you may be good to go... e2: stop buying Hitachis you assholes, they keep going out of stock! movax fucked around with this message at 17:07 on Nov 5, 2010 |
# ? Nov 5, 2010 16:56 |
|
movax posted:Roger, will do when I get back home. I almost forgot to ask, what kind of performance penalty am I looking at for running 16 drives off 2 1068Es, and the last 4 off the mobo SATA controller (though 2 of those 4 will be hotspares). A single 1068E is overkill bandwidth-wise, so no, two of them will not cause any problems. You can start with two vdevs and move to three later with no performance penalty. All it'll do is add the third vdev to the pool of writable area and start striping writes across it. Also, you can mix and match vdev sizes, but it's not a good idea to mix vdev types. Wang Souffle: As long as the drive is either native 4k and not lying about it, or a 512 without the head-parking fuckery the WD drives pull, then they're going to be fine with ZFS. As long as your SAS/SATA card is able to see the damned thing, ZFS is going to use it. Moey: NexentaStor has some snazzy GUIs that make dealing with it a little easier. Great to throw on a flash drive and see if it's what you're looking for. I bet for $2000 you can set up a ZFS box that'll support all those users with capacity to spare. Heaven help you if it takes a poo poo and you can't replace it same day from Fry's or MicroCenter though.
|
# ? Nov 5, 2010 18:51 |
|
Methylethylaldehyde posted:As long as the drive is either native 4k and not lying about it, or a 512 without the head parking fuckery the WD drives pull, then they're going to be fine with ZFS. As long as your SAS/SATA card is able to see the damned thing, ZFS is going to use it. That's the thing. I've been researching these drives for a couple days and can't find definitive word on whether they're 4k liars or not. And no idea how to find out about the head parking. Specs on the websites are very sparse for each manufacturer. Edit: With all major drive makers moving to 4k sectors, you'd figure OpenIndiana would handle this smoothly by now. Or does it, and the misreporting is causing all the issues?
|
# ? Nov 5, 2010 19:42 |
|
wang souffle posted:That's the thing. I've been researching these drives for a couple days and can't find definitive word if they're 4k liars or not. And no idea how to find out about the head parking. Specs on the websites are very sparse for each manufacturer. I looked at the datasheet for your Barracuda LP drives; they are 512-byte sector drives.
|
# ? Nov 5, 2010 20:48 |
|
movax posted:I looked at the datasheet for your Barracuda LP drives; they are 512-byte sector drives. Strange, this link has a mention of "advanced format" in the bottom right. Way to make it confusing, Seagate.
|
# ? Nov 5, 2010 21:13 |
|
wang souffle posted:Strange, this link has a mention of "advanced format" in the bottom right. Way to make it confusing, Samsung. Ah, I looked at this: http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_lp.pdf But it's possible that the 512 listed there is after emulation...probably the only way to be sure is to e-mail Seagate and ask them. Then post the answer here so that we may all know!
|
# ? Nov 5, 2010 21:56 |
|
I know this isn't a coupons/deals thread, but I figured this is relevant to the subject. Newegg has 2tb Samsung spinpoint 5400rpm drives on sale for $60 each. http://www.newegg.com/Product/Product.aspx?nm_mc=AFC-SlickDeals&cm_mmc=AFC-SlickDeals-_-NA-_-NA-_-NA&Item=N82E16822152245 coupon code: EMCZYNW48
|
# ? Nov 6, 2010 11:12 |
|
Triikan posted:I know this isn't a coupons/deals thread, but I figured this is relevant to the subject. Newegg has 2tb Samsung spinpoint 5400rpm drives on sale for $60 each. They use the 512b sector emulation fuckery the Western Digitals do. They're poo poo for ZFS. Real cheap though.
|
# ? Nov 6, 2010 13:32 |
|
Triikan posted:I know this isn't a coupons/deals thread, but I figured this is relevant to the subject. Newegg has 2tb Samsung spinpoint 5400rpm drives on sale for $60 each. Would getting two of those be good for a newbie jumping into this? I'm looking at getting the D-Link DNS-323 for media storage and playback.
|
# ? Nov 6, 2010 14:48 |
|
Methylethylaldehyde posted:They use the 512b sector emulation fuckery the Western Digitals do. They're poo poo for ZFS. Real cheap though. They might be poo poo but are they at least usable? I was planning on using them for media storage and I'll never need to pull more than 50MB/s through them over GigE. Performance I can deal with but stability and reliability are two things I really can't sacrifice.
|
# ? Nov 6, 2010 17:38 |
|
md10md posted:They might be poo poo but are they at least usable? I was planning on using them for media storage and I'll never need to pull more than 50MB/s through them over GigE. Performance I can deal with but stability and reliability are two things I really can't sacrifice. As long as you can deal with the array occasionally hardlocking for a few minutes, they work great. Not sure if the Samsung ones do the same poo poo as the WD ones do, but we'll see.
|
# ? Nov 6, 2010 22:57 |
|
I'm thinking about moving on from WHS. My main requirements are:
* Handle many (16 right now) disks of varying capacities (500GB to 2TB) with a total of over 17TB of raw disk space.
* One of my favorite features of WHS is not having to worry about multiple partitions...it's just one big pool of storage.
* Some sort of parity to protect from at least 1 (the more the better) drive failure.
* The main purpose of this storage is to store and stream HD video in the home. Streaming isn't too big of a bandwidth hog with hard drives on a gigabit network, but I do copy multi-gigabyte files to/from the array quite often, so the closer it comes to saturating gigabit, the better.
Is this raid/lvm/mdadm linux thing still a cool thing to do? Is this guide from the OP still accurate/up-to-date/the best? I was thinking that a linux thing would be best for me since I do lots of python development, and run several server apps written in python on my server... The main reservation I have right now is that, while I won't have any problems figuring out how to set this up, I'm not terribly interested in futzing with it every day, and that's one thing WHS has provided me...I set it up and never have to think about it. Also, I will be running this on a fairly powerful machine (P55 mobo/Core 2 Quad/4GB RAM)...does this have any implications for which distro I should use? I'm most familiar with Ubuntu.
|
# ? Nov 7, 2010 04:40 |
|
Thermopyle posted:I'm thinking about moving on from WHS. You should be fine with almost all your assumptions except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will require you to choose the smallest size drive within the array as the size to use for each of the devices that array is built from. This means if you have one 500GB drive and 15 2TB drives, you'll waste 1.5TB on each of the 15. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size. Assuming this isn't too much of a burden, you can proceed with the rest of your plan, and if you use lvm you'll be able to combine all your mdadm arrays into a single big pool. Your computer should be plenty powerful enough to handle this and will probably get fairly close to saturating your gig network during reads, especially with that many spindles. Ubuntu is a fine choice for an OS. For reference, one of my servers is running Ubuntu Server edition with a 4 disk mdadm array of 7200rpm 1TB Seagates and I can get about 80-85MB/s transfer, with maybe a 20% cpu hit on the Core2Duo 2.16GHz (I think that's the cpu if I remember right). Once you set it up and configure samba or whatever you're going to use to access the data, you can pretty much ignore it and it will just work. Make sure to set up mdadm notifications though; if you lose a disk you'll want to get an email or something right away so you can replace it.
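Putting numbers on the size-truncation point (the drive counts are from Thermopyle's setup; pure arithmetic):

```shell
# mdadm truncates every member of an array to the smallest device in it.
SMALL=500   # GB, the one small drive
BIG=2000    # GB, the other fifteen
NBIG=15
NDEV=$(( NBIG + 1 ))
WASTE=$(( (BIG - SMALL) * NBIG ))   # capacity thrown away across the 2TB drives
USABLE=$(( SMALL * (NDEV - 1) ))    # one RAID5 over 16 x 500GB slices: n-1 usable
echo "wasted=${WASTE}GB usable=${USABLE}GB"   # wasted=22500GB usable=7500GB
```

That's 22.5TB of a ~34.5TB raw pool gone if everything lands in a single array, which is why the per-size-class arrays (or dropping the smallest drives) matter so much here.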
|
# ? Nov 7, 2010 07:15 |
|
Welp, in the spirit of complicating my life, I have done the following:
1) Set up a Server 2008 instance via VirtualBox on my openindiana system
2) While moderately to amazingly drunk, proceeded to figure out how the gently caress to get Active Directory and the various DNS bits working so I could join my solaris box to the AD authentication system
3) Learned how to edit poo poo in Vi.
4) Joined the openindiana box to a domain hosted within the openindiana instance. This bizarre mixture of ouroboros and Matryoshka doll will certainly one day bite me in the rear end hard.
5) Spent 5 hours arguing with idmap, chown and chgrp, ACLs and a case of beer in order to get my newly hosed with AD users to map properly to the solaris users and get permissions set up so I could actually gently caress with stuff properly.
At long last I have fixed the file/folder permissions to the point where I can actually have the rest of my house have their own private folders as well as access the public ones.
|
# ? Nov 7, 2010 14:29 |
|
DLCinferno posted:The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.
|
# ? Nov 7, 2010 15:01 |
|
Methylethylaldehyde posted:5) Spent 5 hours arguing with idmap, chown and chgrp, ACLs and a case of beer in order to get my newly hosed with AD users to map properly to the solaris users and get permissions set up so I could actually gently caress with stuff properly. zfs set sharesmb=on poolnamehere/zfsnamehere, then navigate to the parent (like \\openindiana), right-click the folder, and set the required AD permissions.
|
# ? Nov 7, 2010 16:08 |
|
adorai posted:/usr/bin/chmod A=everyone@:rwxpdDaARWcCos:fd:allow zfsnamehere That was the easy part. The oval office part was trying to trick Vbox into running each separate XP instance as a separate user, so the files it creates keep the user/group permissions required to actually use/delete files it downloads.
|
# ? Nov 7, 2010 17:10 |
|
Saukkis posted:Another way to accomplish the same is to split all the drives into suitably sized partitions, create arrays from the partitions and then combine them with LVM. I'm using an extreme version of this scenario with all my drives split to 10+ partitions. I did it for flexibility when changing and adding drives before RAID expansion was a practical option in Linux. True, but I didn't recommend that because you need to be very clever about how you're choosing your RAID levels on the partition arrays and which ones are going into the same array, otherwise a single drive failing could end up wiping out the entire array. In a simple example, assume two 500GB drives and one 1TB drive. Partition the TB in half and create a RAID5 array across the four partitions. Unfortunately, if that TB drive goes down, it will effectively kill two devices in the array and render it useless. I'd be curious to see what your partition/array map looks like - it must have taken a while to set up properly if you have over ten partitions on some disks?
|
# ? Nov 7, 2010 20:18 |
|
DLCinferno posted:You should be fine with almost all your assumptions except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will require you to choose the smallest size drive within the array as the size to use for each of the devices that array is built from. This means you have one 500GB drive and 15 2TB drives, you'll waste 1.5TB on all 15 of them. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size. Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having 2x1TB, 2x750GB, 1x500GB, and 1x400GB. Hrmph.
|
# ? Nov 8, 2010 03:05 |
|
Thermopyle posted:Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only have 2-1TB, 2-750GB, 1-500GB, and 1-400GB. In that case, you do actually have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build four RAID5 arrays from those partitions. Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to setup, but it would work.
|
# ? Nov 8, 2010 03:22 |
|
Methylethylaldehyde posted:Run iostat -xen 5 and see what your drives are doing as you pull stuff over CIFS/NFS and to /dev/null locally. Each disk should be able to do about 35-50MB/sec, and over the network, you should be able to get ~80-100MB/sec. I just checked mine and it'll do ~85ish over the network using Windows 7 CIFS and a 3com Managed Gigabit switch. No jumbo packets yet. Results: code:
e: SMB from a Mac: code:
movax fucked around with this message at 03:43 on Nov 8, 2010 |
# ? Nov 8, 2010 03:26 |
|
DLCinferno posted:In that case, you do actually have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build four RAID5 arrays from those partitions. Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to setup, but it would work. Oh yeah, that would work. Thanks! Now, I just have to work out some sort of plan for moving 12 TB of data from WHS to Ubuntu. My first thought is to use an older P4 PC as a temporary server, install Ubuntu, move as many hard drives as possible from WHS into it...up to my free space, copy data over the network to the Ubuntu server to fill those up, remove more from WHS, rinse, repeat. The problem with that plan, is that Ubuntu actually needs to end up on my current WHS machine. Are the arrays I create on one machine easily transferable to another machine with different hardware?
|
# ? Nov 8, 2010 03:31 |
|
Thermopyle posted:Are the arrays I create on one machine easily transferable to another machine with different hardware? Sure are. Literally, unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running. That's one of the main reasons I like ZFS/mdadm at home - no need to buy pricey hardware controllers, but you get most of the same benefits.
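The move described above is only a couple of commands once the drives are in the new box (a sketch; the md device and member names are placeholders for whatever the new machine enumerates):

```shell
# mdadm stores the array identity in each member's superblock, not in any
# controller, so the new host can reconstruct arrays from the disks alone.
mdadm --assemble --scan      # auto-detect and start all arrays found on attached disks
# or explicitly, if auto-scan doesn't pick one up:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat             # confirm the array came up and is clean
```

That superblock-on-disk design is exactly why software RAID survives a motherboard or controller swap that would strand a hardware-RAID array.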
|
# ? Nov 8, 2010 04:35 |
|
I may have asked this before and forgotten the answer but I bitched out and went with a Windows Home Server instead of doing the right thing and sucking it up for an OS with a real ZFS implementation... I'm looking at OpenIndiana now which looks neat and has a GUI installer, which was my major stopping point of the FreeBSD install that made me give up and go back to my comfortable Windows. On the ZFS implementation of OpenIndiana, if I have a group of 5 2TB hard drives right now with the plan to upgrade the remaining space in my case with more 2TB hard drives as funds and time allow (grand total of I think 9 or 10 drives), will I be able to add those drives to a ZFS pool without any data loss, or do I have to treat it like a raid 5 and add everything at the same time as I build the raid? I guess I'm not totally against having a couple of different mounts for this array of drives but I would like to keep things as simple as possible. Also with regards to OpenIndiana, is it a fairly easy process to create Windows-compatible shares for hooking up my XBMC box and two desktops without requiring a lot of hassle on the user ends? Never done it, totally paranoid. I'm installing on a VM right now to mess around and make sure I understand what I'm doing before I do it for real, but I have a feeling making Windows shares is going to fail spectacularly since I don't have a windows machine at work to test with anyway. (OSX whee) eta: Also (also) ZFS and those WD Green WD20EARS 2TB 64MB drives, good/bad? I realize now that they could be problematic. I have 5 of them so far. Telex fucked around with this message at 19:45 on Nov 8, 2010 |
# ? Nov 8, 2010 19:39 |
|
Telex posted:I may have asked this before and forgotten the answer but I bitched out and went with a Windows Home Server instead of doing the right thing and sucking it up for an OS with a real ZFS implementation... Your best bet would be to create a pool with one vdev, a RAIDZ of your 5 disks. Then when you get five more disks, create another RAIDZ vdev and add that to your pool. Sound confusing? It's really not. I've got the same current setup (5 drives) and will add 5 more when I need to. Here's how I did what I've got: code:
code:
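A minimal sketch of the create-then-grow sequence described above (disk names are placeholders, not FISHMANPET's actual devices):

```shell
# Pool with one 5-disk RAIDZ vdev now:
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# ...later, when five more disks show up, grow the pool with a second vdev.
# Note this is `add` (new vdev), NOT `attach` (mirror an existing device):
zpool add tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool status tank   # should show two raidz1 vdevs; writes stripe across both
```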
|
# ? Nov 8, 2010 20:42 |
|
For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same): http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX
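The big practical upshot for these drives is partition alignment. A quick way to check a partition's start sector (as reported in 512-byte units by fdisk -lu or parted):

```shell
# A 4KB-sector drive wants partition starts divisible by 8 (8 x 512 = 4096);
# otherwise every 4KB filesystem write straddles two physical sectors.
check_alignment() {
  if [ $(( $1 % 8 )) -eq 0 ]; then
    echo "sector $1: aligned"
  else
    echo "sector $1: MISALIGNED"
  fi
}
check_alignment 63     # classic DOS/fdisk default start
check_alignment 2048   # modern 1MiB-aligned default
```

Sector 63, the old default, is exactly the misaligned case the article warns about; modern partitioners start at 2048 for this reason.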
|
# ? Nov 8, 2010 23:09 |
|
I'm checking out 4-6 drive NASes in the <$2500 range for a 20 person office with a couple of servers. I'm looking for something pre-built. Thecus, NetGear, QNAP, or Synology are the brands that seem to come up. Right now the front runner seems to be the Synology 1010+ given the transfer speeds look very good. The primary use for the NAS is going to be as a Backup Exec backup destination for the servers, but I'd also like to be able to back up the virtualized servers so I have runnable copies of ESXi VMs on the NAS for disaster recovery purposes. I know this thread is primarily aimed at the build-it-yourselfers, but if anyone has any experience with any of the commercial brands I'd love to get some advice.
|
# ? Nov 9, 2010 00:56 |
|
DLCinferno posted:For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same):
|
# ? Nov 9, 2010 01:35 |
|
FISHMANPET posted:Your best bet would be to create a pool with one vdev, a RAIDZ of your 5 disks. Then when you get five more disks, create another RAIDZ vdev and add that to your pool. Sound confusing? It's really not. I've got the same current setup (5 drives) and will add 5 more when I need to. Here's how I did what I've got: Doesn't OpenIndiana still have the kernel-level CIFS implementation that kicks all kinds of rear end? Shouldn't he use that over Samba?
|
# ? Nov 9, 2010 18:18 |
|
Goon Matchmaker posted:Doesn't OpenIndiana still have the kernel level CIFS implementation that kicks all kind of rear end? Shouldn't he use that over samba? Do you know how to configure it? Because I certainly don't. As far as I could ever tell, about all you could ever do with it was set it to "on" or "off" and not do nearly as much configuration as you could with a Samba install.
|
# ? Nov 9, 2010 18:25 |
|
if only there were some kind of documentation. Edit for my own question: Has anyone here ever successfully used zpool import -d with files larger than 4GB?
|
# ? Nov 9, 2010 18:47 |
|
Zhentar posted:if only there were some kind of documentation. To each his own. It's such a different paradigm that I'd rather just pull my smb.conf from the Linux box I had before than decipher all that poo poo, only to realize it doesn't support any options I've been using, and give up on it and go back to Samba.
|
# ? Nov 9, 2010 18:49 |
|
|
Out of the box openindiana is retarded simple to set up.
zpool create tank c0t0d0s0
zfs create tank/cifs
zfs set sharesmb=on tank/cifs
passwd god
Log in via windows xp/7 with fishmanpet/god and go nuts. The rest of the issues are file/folder permissions and ACLs.
|
# ? Nov 9, 2010 20:19 |