|
md10md posted:Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load cycle count. One drive has 1.3 million cycles. For my new drives I've found a way around it if WDIDLE3.EXE doesn't work: just make a shell script that touches the disk (I do date > .poke) every 5 seconds so the heads never park. It works great. Hopefully THAT won't decrease the lifespan of the drives, though. Mind sharing that script and how you implement it? That's the kind of solution I want to try on my RAID5 WD Greens....
|
# ? May 14, 2010 19:14 |
|
Samsung drives are the ones that don't carry over those settings across reboots, while Seagates write them through to EEPROM, I believe. Western Digital is the biggest downer, though, basically disabling most of these options outright on their Green drives from what I can tell. Too many people were building home megaraids (and often failing), so they're trying to upsell everyone on Black drives. I'm willing to deal with the shortcomings of Green drives for now, until I actually need good IOPS and bandwidth. For low-power RAID setups on most OSes with a strong cost consideration, I'd recommend Samsung drives, since the other manufacturers seem to have firmware oddities (reliability is about par across all of them, unless you count old Maxtor drives and maybe some LaCie refurbished crap). If cost isn't a primary factor, then you're looking at Black or RAID edition drives from Western Digital, regardless of whether you're using hardware or software RAID.
|
# ? May 14, 2010 19:22 |
|
FISHMANPET posted:Solaris has pretty much said gently caress you to SMART. I've got a 4+1 RaidZ with 1.5TB Samsungs. But no way to see how they're doing...
|
# ? May 14, 2010 19:52 |
|
Combat Pretzel posted:I would start to worry when checksum and I/O errors start appearing in zpool status. Use smartmontools; it works nicely. Getting it to loving run right requires a bit of fuckery, but now I have scripts that check for drive temps, head parking, and other poo poo. Heads are currently at about 3,500 parks, going up 2-3 every hour or so.
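For anyone wanting the same checks: the park count is SMART attribute 193 (Load_Cycle_Count), which smartmontools exposes via smartctl -A. A minimal sketch of pulling it out; the SAMPLE line is canned output so the snippet stands alone, and in real use you'd feed it smartctl -A /dev/sda (as root) instead:

```shell
#!/bin/sh
# Pull the head-park count out of smartctl's attribute table.
# SAMPLE stands in for a real line of `smartctl -A /dev/sda` output.
SAMPLE='193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       3500'
parks=$(echo "$SAMPLE" | awk '$2 == "Load_Cycle_Count" { print $NF }')
echo "head parks so far: $parks"
# WD quotes the Greens at around 300K rated load cycles, so warn well before that
if [ "$parks" -gt 250000 ]; then
    echo "WARNING: load cycle count is getting up there"
fi
```

Drop it in cron alongside a temperature check (attribute 194) and you get roughly the setup described above.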
|
# ? May 14, 2010 20:51 |
|
PrettyhateM posted:Mind sharing that script and how you implement it? That's the kind of solution I want to try on my RAID5 WD Greens.... code:
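Per the description a few posts down, the script is just a loop that rewrites a dotfile every few seconds. A minimal sketch (bounded and pointed at /tmp here so the demo terminates; the real one loops forever against a file on the array):

```shell
#!/bin/sh
# keep_alive -- touch the array every few seconds so WD Green heads never park.
# The real thing is effectively:
#   while :; do date > /blackhole/galaxy1/scripts/.poke; sleep 5; done
# The demo below is bounded and writes under /tmp so it exits cleanly.
POKE_FILE=/tmp/.poke
for i in 1 2 3; do
    date > "$POKE_FILE"   # overwrite each time, so the file never grows
    sleep 1               # the post uses 5 seconds
done
```

Launch it with nohup or from an init script so it survives logout; since it overwrites rather than appends, .poke stays one line long forever.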
|
# ? May 14, 2010 21:24 |
|
md10md posted:
Awesome, thanks for posting this! Question about the line date > /blackhole/galaxy1/scripts/.poke: is .poke another script? And is /blackhole/galaxy1/scripts where you have it located?
|
# ? May 14, 2010 21:58 |
|
PrettyhateM posted:Awesome thanks for posting this! Question under date > /blackhole/galaxy1/scripts/.poke /blackhole/galaxy1/scripts/ is where the script "keep_alive" and the file ".poke" are kept. The script basically just writes the date to that file ".poke", overwriting the old date each time. It's a dot file, so it doesn't show up when I browse the share. You can name the script whatever you want; just make sure to "chmod +x script_name" in that directory so it's executable.
|
# ? May 14, 2010 22:11 |
|
md10md posted:/blackhole/galaxy1/scripts/ is where the script "keep_alive" and the file ".poke" are kept. The script basically just writes the date to that file ".poke", overwriting the old date each time. It's a dot file so it doesn't show up when I browse the share. You can name the script whatever you want just make sure to "chmod +x script_name" in the directory so it's executable. Awesome, got it working! I was confused and didn't realize that line wasn't included in the script. Thanks!
|
# ? May 14, 2010 23:28 |
|
This is really odd. The insane parking that one hard drive I mentioned earlier did in a stock Windows environment doesn't happen at all in Linux. I didn't even have to tweak anything, and right off the bat it's perfect in Ubuntu.
|
# ? May 15, 2010 19:23 |
|
Samba documentation seems to suggest that setting send and receive buffers can make quite a difference, but I've read that while that might have been true a decade ago, the current FreeBSD/Linux autotuning will handle things better than manually set values. Can anyone confirm this one way or the other, or do I really need to go run some benchmarks? On a similar note, for a typical home server, is there any resource that suggests reasonable values for the network sysctls for a typical streaming music/video server? I assume this will depend on whether the disks are standalone or in some kind of RAID setup, but some baseline values would be useful.
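For reference, the decade-old tuning that still gets copied around is the socket options line below. Treat it as a historical artifact rather than a recommendation: with modern FreeBSD/Linux TCP autotuning, fixed buffer sizes like these can actually cap throughput on gigabit, and leaving them unset is usually the right call.

```ini
# smb.conf -- the classic (and probably obsolete) Samba socket tuning
[global]
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
```

If benchmarks show autotuning winning, keep TCP_NODELAY at most and drop the fixed buffers; the OS-side knobs (e.g. net.ipv4.tcp_rmem/tcp_wmem on Linux) are autotuned ranges by default.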
|
# ? May 15, 2010 19:24 |
|
I'm getting tons and tons of these errors in my log on my new machine. Any ideas what could cause this? Faulty controller? Since it happens on all of the drives (sdc-sdj), I'm guessing it's not the drives. Only two of them are new; all the others worked fine before. Ubuntu 10.04, LSI 3081E-R updated with the newest initiator-target firmware and BIOS (11/09), WD20EADS drives. Already tried new cables; didn't fix it. code:
|
# ? May 15, 2010 20:23 |
|
So, I've been lurking in this thread for a while and, the semester having ended, was thinking it might be time to start building that server I've been saying I would for a long time. I am a rank amateur at this, and though I've been trying to follow the weft of this thread, I'm out of my depth and was hoping you guys would be kind enough to advise. Please excuse my noobishness. I've got a pile of WD My Book externals that I've been collecting over the years, and it's gotten really inconvenient to swap out the USB and power cables when I'm searching for a particular file. So I've decided to amalgamate them into a single fileserver, throw them in a pool with redundancy, and then EOL them when the time comes. I have an old AMD64 machine lying around that I intend to put to this purpose, and was wondering what you guys thought. Given that all the drives are different sizes, I figured I'd go the ZFS route. Is ZFS still cool? Which operating system should I run as a beginner? Thanks.
|
# ? May 16, 2010 01:56 |
|
eames posted:I'm getting tons and tons of these errors in my log on my new machine. Any ideas what could cause this? Faulty controller? Since it happens to all of the drives (sdc-sdj) I'm guessing its not the drives. Only two of them are new, all others worked fine before. Either the controller or the power supply, I would guess. How much overhead does the PSU have versus its actual load?
|
# ? May 16, 2010 02:02 |
|
Just got done testing my new Ubuntu 10.04 server NAS. God I love gigabit.
|
# ? May 16, 2010 02:57 |
|
AbstractBadger posted:So, I've been lurking in this thread for a while You seem to be in nearly the exact situation I was in at the end of last fall's semester, and I went with opensolaris based raidz, with an old s939 amd64 box. 4x 1tb 7200.12s later I had myself a nice 2.7tb logical raidz, and haven't looked back since, love it.
|
# ? May 16, 2010 03:45 |
|
Wanderer89 posted:You seem to be in nearly the exact situation I was in at the end of last fall's semester, and I went with opensolaris based raidz, with an old s939 amd64 box. 4x 1tb 7200.12s later I had myself a nice 2.7tb logical raidz, and haven't looked back since, love it. Excellent, that's a vote for opensolaris then, I take it. Unfortunately, I went ahead and bought 2x 1.5 tb WD Greens, which would seem ill-advised given all the posts in this thread, but I fear it's too late (they've been sitting in a box in my room unattended to for a long time). Have you any experience expanding the zpools beyond their original configuration? What resources did you use to get you underway?
|
# ? May 16, 2010 04:12 |
|
I have those drives, and depending on the revision, might have those exact drives, and my opensolaris raidz works fine. You can't expand them in the traditional sense like you can with RAID 5, but you can add more vdevs to a zfs pool to expand it. Example: a 4 disk raidz of those 1.5 TB drives gives you 4.5 TB of usable space. If you want to expand that, you can make another 3 disk raidz and add it to the original pool. You can't add a single disk to a raidz vdev, but that capability might be coming in a future update; a lot of blogs are talking about how people are clamoring for it. Also, if you wait another few weeks, we should have a stable version 134 based release candidate available, which should cut down on the headaches getting everything set up. It also gives you some cool features, like dedupe. Most TV series will show a 5-15% savings, because the intro and credits share almost entirely the same data.
|
# ? May 16, 2010 04:39 |
|
Methylethylaldehyde posted:I had read that about expanding zraid, but didn't put together exactly what it meant. So you mean that to expand the zraid, I need enough drives to implement another full zraid, at which point I can refer to it as one logical raid? That is, supposing it takes 4 drives to implement, I can only expand the raid with multiples of four (or more) drives each time?
|
# ? May 16, 2010 05:00 |
|
AbstractBadger posted:I had read that about expanding zraid, but didn't put together exactly what it meant. So you mean that to expand the zraid, I need enough drives to implement another full zraid, at which point I can refer to it as one logical raid? My understanding for Raid-Z is that as you drop in bigger drives in place of smaller ones, letting it rebuild each time, it'll rebuild the array until all drives have been replaced, at which point it'll magically become bigger.
|
# ? May 16, 2010 05:07 |
|
Nam Taf posted:My understanding for Raid-Z is that as you drop in bigger drives in place of smaller ones, letting it rebuild each time, it'll rebuild the array until all drives have been replaced at which point i'll magically become bigger. Actually, you have to export and import the pool once all the disks are in for it to grow. But for serious: say you make a 4+1 RaidZ pool (5 disks total, single-disk parity). There's no way to turn that into a 5+1 pool without destroying the pool and recreating it. Technically speaking, you've got a pool made up of one vdev. A vdev is a single disk, a mirror of disks, or a RAIDZ* of disks. So you could buy 5 more drives and make a (4+1) + (4+1) pool; you're basically running the two vdevs in RAID0. You could also get 4 more disks instead and run (4+1) + (3+1). You could buy two disks and attach a mirror vdev, or you could be really stupid and buy a single disk and attach that. Each vdev you attach goes in the same pool, so you can just keep dumping data onto the pool and ZFS will split the data up between the vdevs. You can't take a vdev out of a pool and keep the data on it. Now, with mdadm (Linux RAID) you can turn a 4+1 into a 5+1, but I'm pretty sure it's about the only RAID solution that can do that.
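What that looks like in practice, with made-up device names; note that zpool add is one-way (no removing a vdev later), so read this as a transcript rather than something to paste:

```shell
# a 4+1 raidz pool (one vdev)
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
# grow it later by striping in a second raidz vdev: (4+1) + (4+1)
zpool add tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# or stripe in a mirror pair instead
zpool add tank mirror c2t0d0 c2t1d0
zpool status tank    # shows every vdev sitting under the same pool
```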
|
# ? May 16, 2010 07:28 |
|
/\ You can also do this on Intel's motherboards. I went from a 2+2 (RAID10) to a 4+1 (RAID5) with no issue. I was toying with the idea of a 5+1 when I migrate my boot drive to an external controller, but I REALLY don't want to eventually sit through a 4.5TB RAID5 rebuild. I just got through the bulk of phase 1 migration tonight. I got the 4 WDs and one Seagate into a RAID 5, affording me 4TB (3.76 usable). And then I found out about MBR having a 2TB limit, so I had to use a utility to convert the megadrive to GPT. That said, jesus gently caress, this thing is a cacophony of errors. I'm looking forward to the day when I get another Seagate and can just jam it all into an external 6TB RAID6 box. Of course, that'd mean I'd need 2-3 more 1.5s to populate the internal arrays. Ugh. Honestly, it seems like the easiest way to go about this all is to drop the cash, invest once, and do it right. No cheap cheap drives with firmware fuckery, no on-card softraid; just mdadm or ZFS and an asston of storage. If only I could afford that.
|
# ? May 16, 2010 07:35 |
|
So what is the preferred drive these days for cheap/reliable high-capacity RAID? Toying around with three 7200.10 320GBs right now on my Linux box to see what kind of performance I get from them. I wouldn't use them in an actual server, though, because most of my modern drives individually have higher capacity than all of these in RAID0. Not dead set on what platform this system will be or anything yet, but I am pretty sure I will need about 8-10TB of usable, mildly safe storage.
|
# ? May 16, 2010 08:54 |
|
FISHMANPET posted:So what I'm getting from this is that I can only "expand" by adding the minimum number of drives necessary to build a new pool, or I could replace the drives in the current pool with larger drives. So, given my starting stats, let's say I have a 500GB, 2x750GB, 1x1TB, 1x1.5TB, and 1x2TB MyBooks, I could throw all of them and the one or two 1.5TB WD Greens I mentioned earlier into my initial array, and swap in larger drives as I EOL the older ones. When the day comes that I need to increase storage, I need 3 or more new drives to form another pool, at which point my original group of drives will constitute one vdev, and the new drives another. (Are these forever destined to be two separate virtual drives?). In terms of waiting for the update to ZFS in a couple of months, what is my upgrade path in terms of the OS/software? Can the arrays be updated to use the latest ZFS release, or are they condemned to the version they were started as?
|
# ? May 16, 2010 16:28 |
|
AbstractBadger posted:So what I'm getting from this is that I can only "expand" by adding the minimum number of drives necessary to build a new pool

1) Scrub your current pool.
2) Purchase 3x 2TB drives.
3) Insert 2x 2TB drives into the box.
4) Format the two 2TB drives as UFS, and create one 750GB sparse file on each one.
5) Mount the sparse files with lofiadm.
6) Replace two of your existing ZFS disks with the sparse files.
7) Remove one of the 750GB disks, and replace it with the third 2TB disk.
8) Perform steps 4 through 6 again (with just one disk this time).

You now have 3 2TB disks with 1.25TB free each, 2 750GB disks completely free, and 1 750GB disk that is in use.

9) Create one 700GB sparse file on each 2TB disk (two of them on one of the disks).
10) Mount the remaining sparse files, and create a zpool from the 4 sparse files and the 2 empty 750GB disks.
11) Do a zfs send from the old zpool to the new one.
12) Do a zfs scrub on the new pool.
13) Destroy the old zpool, and delete the sparse files used for it.
14) Replace one of the two sparse files on the 2TB drive that holds two with the extra 750GB disk.

You now have a 6x750GB raidz2, built from 3 750GB disks and 3x 700GB sparse files on the 2TB drives.

15) One by one, unmount the sparse files and replace them with the raw disks.

You can now replace the 750GB drives one by one as time/money allows, and you have successfully expanded your array without any data loss and without the need for a full backup and restore. It's too bad no one has scripted this; it takes a lot of babysitting and will probably take over a week to rebuild all of the data multiple times.
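The sparse-file steps in the middle look roughly like this on OpenSolaris; the paths and device names here are invented, and this is a sketch of the idea rather than a tested recipe:

```shell
mkfile -n 750g /bigdisk/sparse1         # -n = sparse, so it takes no space up front
lofiadm -a /bigdisk/sparse1             # prints a block device, e.g. /dev/lofi/1
zpool replace tank c0t3d0 /dev/lofi/1   # swap a real member for the sparse file
zpool status tank                       # let the resilver finish before the next swap
```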
|
# ? May 16, 2010 17:07 |
|
adorai posted:Hmm... I think I get it. I do sort of wish there was a way I could make a dry run of this, though.
|
# ? May 17, 2010 06:25 |
|
AbstractBadger posted:Hmm... I think I get it. I do sort of wish there was a way I could make a dry run of this, though. VirtualBox, VMware, Xen, QEMU, ....others
|
# ? May 17, 2010 12:40 |
|
I have a chance to get a few older Dell PowerEdge rack servers (2850, 2650) for free. I was going to jump on the chance, but they don't have any hard drives in them, and it appears that even older SCSI drives, like Ultra3, are very expensive compared to their SATA counterparts. I can get a new 1TB SATA drive for $80, or 4 75GB SCSI drives for the same price. I haven't ever really looked at SCSI drives before, so is it just standard that they're that much more expensive due to the speed? I'd only be using this system for media storage, so it seems like even though I can get the rack servers for free, it'd be a waste to pay more for those drives.
|
# ? May 17, 2010 14:22 |
|
Strict 9 posted:I have a chance to get a few older Dell Poweredge rack servers (2850, 2650) for free. I was going to jump on the chance, but they don't have any hard drives in them and it appears that even older SCSI drivers, like Ultra3, are very expensive compared to the SATA counterparts. Like I can get a new 1TB SATA drive for $80 for 4 75GB SCSI drives for the same price. The drives are going to be more reliable. Back when it was SCSI vs IDE it was pretty easy to put an IDE interface on lovely consumer quality stuff, and save the best for SCSI. Now that SATA's in town we have stuff like enterprise class SATA. You'll be paying somewhat for the speed, but also for the reliability.
|
# ? May 17, 2010 14:26 |
|
Methylethylaldehyde posted:Most TV series will show a 5-15% savings, because of the intro and credits sharing almost entirely the same data. That a verifiable fact? Because I'm sure if the episodes have been encoded in two-pass VBR, bit allocations for said sections will be different.
|
# ? May 17, 2010 20:11 |
|
Combat Pretzel posted:That a verifiable fact? Because I'm sure if the episodes have been encoded in two-pass VBR, bit allocations for said sections will be different. I'm going to try it on my machine sometime this week, I'll post a trip report.
|
# ? May 17, 2010 20:22 |
|
Strict 9 posted:I have a chance to get a few older Dell Poweredge rack servers (2850, 2650) for free. In addition to the drives, I hope you have some solar panels on your roof. The 2850 draws between ~250-400W (http://www.dell.com/downloads/global/corporate/environ/PE_2850.pdf). I thought about doing the same thing with some old company machines, but more efficient consumer components would probably pay for themselves in a year or so.
|
# ? May 17, 2010 21:04 |
|
When a disk is added to a ZFS pool, is it expected to be empty?
|
# ? May 18, 2010 22:51 |
|
Combat Pretzel posted:That a verifiable fact? Because I'm sure if the episodes have been encoded in two-pass VBR, bit allocations for said sections will be different. Yeah, that's the savings I saw from a half dozen TV shows I have on my media box now. You might end up with some special snowflake x264 encodes that are somehow different for each and every block, but I'm guessing you'll still see some savings.
|
# ? May 18, 2010 23:14 |
|
My collection of external HDs for media storage was getting unmanageable and expensive, so I started looking for a simple box to jam a bunch of drives into. If you're looking for something larger than a 2-drive enclosure, your choices are pretty limited. I finally found and bought one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16817576001 It's a simple box that can accommodate up to four drives with capacities of up to 2TB each. The base model has USB and eSATA (caveat: if you want to use eSATA, your controller must support port multipliers). They also make fancier versions with JBOD, RAID and FireWire: http://mediasonicinc.com/store/index.php?cPath=26_51 I was a little hesitant to buy this thing cause it's so cheap, but the build quality is actually decent. The housing is metal and feels solid, and the buttons and mechanical interfaces are of a good quality. If anything is gonna go on this thing, I'm guessing it's the power supply brick. Vista installed drivers for this box automatically, and the two WD Green 2TB disks that I jammed into it showed up in the disk manager and were configured lickety-split. So far, this goon approves.
|
# ? May 19, 2010 04:57 |
|
AbstractBadger posted:When a disk is added to a ZFS pool, is it expected to be empty? I don't think it matters; ZFS will use it (and destroy whatever FAT was on it previously) anyway. Unless maybe it's just a spare, in which case it might not even really look at it until it resilvers onto it. Someone please speak up if you know more precisely.
|
# ? May 19, 2010 15:14 |
|
I have a (probably stupid) question about growing an mdadm RAID 5 array. Does it redistribute the current data evenly across the array, including the new drives, after adding them, or is it just newly written data? The latter doesn't seem logical.
|
# ? May 19, 2010 15:24 |
|
IT Guy posted:I have a (probably stupid) question about growing an mdadm RAID 5 array. It recalculates and redistributes all the data, takes god damned forever.
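For the record, the grow itself is only a couple of commands (device names hypothetical); mdadm runs the reshape in the background and you can watch it crawl along in /proc/mdstat:

```shell
mdadm --add /dev/md0 /dev/sdf1            # new disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=5    # reshape the 4-disk RAID5 onto 5 disks
cat /proc/mdstat                          # reshape progress (and its ETA)
resize2fs /dev/md0                        # afterwards, grow the filesystem too
```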
|
# ? May 19, 2010 16:02 |
|
Methylethylaldehyde posted:Yeah, that's the savings I saw from a half dozen TV shows I have on my media box now. You might end up with some special snowflake x264 encodes that are somehow different for each and every block, but I'm guessing you'll still see some savings. Any intro past the exact beginning of the file will not see any savings, because even if the output bitstream is bit-exact for the intro of each episode, it is completely unlikely that it lines up on block boundaries, since any preceding content is of arbitrary size/length. For blocks to be deduped, they need to be exactly the same. I guess I have to try this myself to believe it. AbstractBadger posted:When a disk is added to a ZFS pool, is it expected to be empty? Spare disks are also initialized, since ZFS needs to be able to recognize them as belonging to the pool. Combat Pretzel fucked around with this message at 17:58 on May 19, 2010 |
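The alignment argument is easy to demonstrate in miniature with ordinary shell tools. Here 4-byte "blocks" stand in for ZFS's 128K records: the same run of bytes appears in three throwaway files under /tmp, block-aligned in two of them and shifted by two bytes in the third, and only the aligned pair shares any blocks:

```shell
#!/bin/sh
# Dedup matches whole fixed-size blocks, so identical data only dedups
# when it lands on identical block boundaries. BS=4 models the record size.
BS=4
printf 'xxxxSHAREDDATAxxxx' > /tmp/ep1   # shared run starts at offset 4 (aligned)
printf 'zzzzSHAREDDATAzzzz' > /tmp/ep3   # same run, same offset (aligned)
printf 'yySHAREDDATAyyyyyy' > /tmp/ep2   # same run at offset 2 (misaligned)

chunk() { fold -w "$BS" "$1" | sort; }   # one line per "block", sorted for comm
chunk /tmp/ep1 > /tmp/c1; chunk /tmp/ep2 > /tmp/c2; chunk /tmp/ep3 > /tmp/c3

aligned=$(comm -12 /tmp/c1 /tmp/c3 | wc -l)
misaligned=$(comm -12 /tmp/c1 /tmp/c2 | wc -l)
echo "blocks shared when aligned:    $aligned"      # the SHAR and EDDA blocks match
echo "blocks shared when misaligned: $misaligned"   # nothing lines up
```

Scale BS up to 128K and the files up to real episodes and the same logic applies, which is why a shifted intro dedups to nothing even when its bitstream is bit-identical.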
# ? May 19, 2010 17:55 |
|
Combat Pretzel posted:Hmmm. The only way I see this working is if the intro is at the beginning of the show, as frame- and pixel-accurate copies across the episodes. Any minor variance in pixels, or the episode starting at a different frame, will generate different bitstreams and influence bitrate allocation.
|
# ? May 19, 2010 18:08 |
|
Methylethylaldehyde posted:It recalculates and redistributes all the data, takes god damned forever. I just did this Monday; it only took 6 hours for a 4TB array.
|
# ? May 19, 2010 18:57 |