necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There are a number of people recommending more RAM if you enable dedupe on ZFS file systems. I'm not quite sure how ZFS implements snapshots, but they're just a form of delta disks in the end, and if there's not a whole lot changing between snapshots and the on-disk format is efficient enough (read: not VMware's implementation) it could be worth enabling for some help.

Delta-Wye
Sep 29, 2005

Zorak of Michigan posted:

Dedup would be a lot more CPU and memory intensive than snapshots, so I'd think you'd want to explore snapshots a little more. When you talk about pulling snapshots out, what exactly are you looking to do? I ask because if you just need to copy files from a snapshot to the web host, that's easy and totally non-disruptive, as Fishmanpet says. You just go to the snapshot directory, find what you want, and copy it over. In your particular use case I don't think you'd ever want to actually restore the whole snapshot, so a lot of the situations that I have trouble wrapping my head around would never come up anyway.

I have two situations I'm concerned about:
1) No offense DarkLotus (everything has been great so far!), but my hosting takes a dump and all my data is gone. In this case I want a newish backup I can restore immediately. For this, I just need to grab a copy every week or whatever, and have the newest one available. In this situation I don't even need to keep historical copies.

2) Security breach of some kind that I don't notice for a while. In this case, I need to troll through backups and find the latest copy without the problem, restore and update to fix the initial breach. In this case, I may need to mix-and-match files - the web and database content may be before the breach, but I could need the newest SVN to get my development work back. Having dated backups I can easily poke through and grab files from is good here.

Honestly, I need to do a bunch of upgrades to the NAS including larger disks. If I upgrade the disks, I will probably have a couple TB of empty space and can easily just keep duplicate copies of the website, storage space be damned. However, I've got this fancy filesystem, so if I was going to roll a script to download the contents, doing a snapshot of the directory wouldn't be a huge issue. Upgrading so I have dedup as an option would be cool too; it would be transparent in the sense that I would have a pool full of directories with timestamp names, and they would all look like full backups but the FS layer would not duplicate blocks (which would be a majority of the data).

FYI, I think the box does have a relatively small amount of RAM (2GB if memory serves) and I don't know if the microATX board supports more. If the filesystem isn't mounted, I assume there is no penalty for having the dedup option set? It would make sense that ZFS wouldn't keep information about an unmounted filesystem in memory, but I'm not sure how it's organized under the hood.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So I'm envisioning that every night (or however often) you mirror the live data to your NAS (with rsync or whatever), and once the sync is done, you take a snapshot. If you need to restore the entire directory, you just look in /pool/.zfs/snapshot/`date --date="yesterday"` and pull everything out. If you need a specific file you just go into /pool/.zfs/snapshot/$dateNeeded and pull the files you need.
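Sketched as a cron-able script. Everything here is made up for illustration (pool name `pool`, dataset `pool/www`, the rsync source), and the commands are echoed rather than executed, since this is just a sketch of the sequence:

```shell
#!/bin/sh
# Nightly: mirror the live site, then snapshot the dataset.
# Dry-run sketch: commands are printed, not executed.
run() { echo "+ $*"; }

today=$(date +%Y-%m-%d)

run rsync -a --delete webhost:/var/www/ /pool/www/
run zfs snapshot "pool/www@$today"

# To restore yesterday's copy of a file, read it straight out of the
# hidden snapshot directory (GNU date; fall back to today if --date
# isn't supported):
yesterday=$(date --date=yesterday +%Y-%m-%d 2>/dev/null || date +%Y-%m-%d)
run cp "/pool/www/.zfs/snapshot/$yesterday/index.html" /tmp/
```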

Zorak of Michigan
Jun 10, 2006

Dedup requires a lot of memory because it works via block-level checksum. If the system can't keep a table of checksums in memory, it's going to have to load it from disk as it processes writes, which is a prescription for some pretty awful write performance. Snapshots, by contrast, take advantage of the fact that ZFS is a copy-on-write filesystem, so they're basically free except for the disk space required to store older versions of files. I'd use snapshots for this purpose, without any doubt or hesitation.

Edit: Just make sure you use rsync or some other process that leaves identical files alone. If you just download a new copy every night, you'll be creating brand new files each time, and your nightly delta will be 100% of the size of the web site. Snapshots won't be so economical then.
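One concrete way to honor that (a sketch, paths hypothetical): `rsync -a` compares size and mtime and skips identical files entirely, and `--inplace` additionally rewrites changed files in place rather than replacing them, which is friendlier to copy-on-write snapshots:

```shell
#!/bin/sh
# Dry-run sketch: print the transfer command instead of running it.
# -a        preserves times/perms, so unchanged files are skipped entirely
# --inplace updates changed files in place (friendlier to CoW snapshots)
cmd="rsync -a --inplace --delete webhost:/var/www/ /pool/www/"
echo "+ $cmd"
```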

Shane-O-Mac
May 24, 2006

Hypnopompic bees are extra scary. They turn into guns.

Shane-O-Mac posted:

I'm trying to set up FreeNAS 0.7 with Sabnzbd, Sickbeard, etc. I keep reading that I can add packages such as Sabnzbd through the web GUI. Apparently you go to System > Packages and do it through there. This button doesn't exist on my FreeNAS. I'm pretty confused.

You know, maybe I shouldn't even be using FreeNAS.

I want to turn an old PC into a home media server that runs the programs I listed. The computer is about 7 years old, so this is very basic stuff. I want to maximize performance with my old hardware while keeping energy costs low. I just assumed I should use FreeNAS, but is there a better way?

movax
Aug 30, 2008

Zorak of Michigan posted:

Dedup requires a lot of memory because it works via block-level checksum. If the system can't keep a table of checksums in memory, it's going to have to load it from disk as it processes writes, which is a prescription for some pretty awful write performance. Snapshots, by contrast, take advantage of the fact that ZFS is a copy-on-write filesystem, so they're basically free except for the disk space required to store older versions of files. I'd use snapshots for this purpose, without any doubt or hesitation.

Edit: Just make sure you use rsync or some other process that leaves identical files alone. If you just download a new copy every night, you'll be creating brand new files each time, and your nightly delta will be 100% of the size of the web site. Snapshots won't be so economical then.

I think for dedup, it was something like (rule-of-thumb from zfs-discuss maybe) 2GB of RAM per terabyte of data?

Longinus00
Dec 29, 2005
Ur-Quan

movax posted:

I think for dedup, it was something like (rule-of-thumb from zfs-discuss maybe) 2GB of RAM per terabyte of data?

ZFS Dedup FAQ posted:

3. Make sure your system has enough memory to support dedup. Determine the memory requirements for deduplicating your data as follows:

A. Use the zdb -S output to determine the in-core dedup table requirements:

Each in-core dedup table entry is approximately 320 bytes
Multiply the number of allocated blocks times 320. For example:
in-core DDT size = 3.75M x 320 = 1200M

B. Additional memory considerations from Roch's excellent blog:

20 TB of unique data stored in 128K records or more than 1TB of unique data in 8K records would require about 32 GB of physical memory. If you need to store more unique data than what these ratios provide, strongly consider allocating some large read optimized SSD to hold the deduplication table (DDT). The DDT lookups are small random I/Os that are well handled by current generation SSDs.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
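Part A of that procedure is easy to script. A small sketch (the 320-bytes-per-entry figure comes from the FAQ quoted above; in practice the block count would come from `zdb -S poolname`):

```shell
#!/bin/sh
# Estimate in-core dedup table (DDT) size from an allocated-block count.
# 320 bytes per entry is the approximation given in the ZFS Dedup FAQ.
ddt_bytes() {
    # $1 = number of allocated blocks (e.g. from `zdb -S poolname`)
    awk -v blocks="$1" 'BEGIN { printf "%.0f\n", blocks * 320 }'
}

# The FAQ's own example: 3.75M blocks -> 1200M bytes.
blocks=3750000
bytes=$(ddt_bytes "$blocks")
echo "DDT for $blocks blocks: $bytes bytes"
```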

Wheelchair Stunts
Dec 17, 2005
Why not just use version control? It handles diffs/versioning and duplication.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
So I am tired of FreeNAS 8 being a piece of poo poo, and if I am going to go to the hassle of copying/rebuilding my NAS, I think I am just going to pop a video card in it and have it output to video for watching movies. My NAS is only ever on for when I stream stuff off of it anyhow.

Therefore, I think I am going to put Windows on an SSD, for quick boot. And hell, if I am putting Windows on it, I think I am just going to make a VM server to run the NAS disks on through Windows, in case I need to move my NAS to new hardware without having to rebuild the software raid.

Great idea or horrible idea?

evil_bunnY
Apr 2, 2003

Plenty of people pass their storage controllers to a VM. It makes managing the OS so much simpler (update hosed everything? Revert to snap).

devmd01
Mar 7, 2006

Elektronik
Supersonik

jeeves posted:

Great idea or horrible idea?

Horrible idea. VMware Server 2 is horribly slow and lacks the features you would need to do what I think you think you want to do. If you go the ESXi/vSphere route, you won't get to do any video output. Adding layers of complexity like virtualization is a really bad idea unless you know what you're doing and why. Do you really want to gently caress something up messing with ESXi and delete all of your movies?

KISS always, especially for a home setup. Do you really want to have to gently caress with it when something breaks? My file server for video/music is a fully separate box from my vm server for this reason. If my file server fucks up for any reason, I don't run the risk of my network being completely down since the router is a pfsense vm.

Just use windows software raid for whatever volumes you want if you can't afford a hardware controller, they'll transfer to new hardware just fine.

Edit: don't use intel matrix raid, it's garbage (unless it's improved significantly since the ICH10R)

kri kri
Jul 18, 2007

jeeves posted:

So I am tired of FreeNAS 8 being a piece of poo poo, and if I am going to go to the hassle of copying/rebuilding my NAS, I think I am just going to pop a video card in it and have it output to video for watching movies. My NAS is only ever on for when I stream stuff off of it anyhow.

Therefore, I think I am going to put Windows on an SSD, for quick boot. And hell, if I am putting Windows on it, I think I am just going to make a VM server to run the NAS disks on through Windows, in case I need to move my NAS to new hardware without having to rebuild the software raid.

Great idea or horrible idea?

Sounds like it would be much easier to just put Windows Home Server on it, which is like $50.

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
I ran into an interesting issue today, and I'd love for someone smarter than I to explain why the fix I'm implementing works.

The issue: we have a QNAP NAS. We set up a share for one department to use to store files for each SKU in a folder. SKUs are 6 digits long, and we have about 30,000 distinct folders. To keep it from being too hard to browse, we split that into 10 folders.

\\nas\share\SKUs\0\012346\<files go here>

The 0 is the first digit of the SKU. This means we have ~3000 folders in each of 10 folders on the share. Still a lot, but it's manageable. The problem is that on Mac OSX when a user connects to the share, it takes about a minute to open up each of the "first digit" folders. It doesn't matter if we use SMB or AFP, both are slow as molasses. On Windows it's almost instantaneous and we have no problems when trying to browse the share.

I've always assumed it was some combination of the sheer number of files in each of those folders, and the fact that they use color coded labels that Mac supports, so each time they open a folder OSX just can't handle that many files in a timely fashion. Like I've said, it just works when using Windows to browse.

In a blind attempt to fix the weird speed issues, I decided to try something crazy. I added an iSCSI server share on the QNAP, mounted it on our main active directory file share server, then shared THAT out. The same folder that takes 1-2 minutes to open, now takes 4-5 seconds.

Is SMB really just that awful? Is Apple's SMB/AFP client that awful? I can't fathom why adding a middle man to the file share in the form of iSCSI actually makes that much of a difference.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Frozen-Solid posted:

I ran into an interesting issue today, and I'd love for someone smarter than I to explain why the fix I'm implementing works.

The issue: we have a QNAP NAS. We set up a share for one department to use to store files for each SKU in a folder. SKUs are 6 digits long, and we have about 30,000 distinct folders. To keep it from being too hard to browse, we split that into 10 folders.

\\nas\share\SKUs\0\012346\<files go here>

The 0 is the first digit of the SKU. This means we have ~3000 folders in each of 10 folders on the share. Still a lot, but it's manageable. The problem is that on Mac OSX when a user connects to the share, it takes about a minute to open up each of the "first digit" folders. It doesn't matter if we use SMB or AFP, both are slow as molasses. On Windows it's almost instantaneous and we have no problems when trying to browse the share.

I've always assumed it was some combination of the sheer number of files in each of those folders, and the fact that they use color coded labels that Mac supports, so each time they open a folder OSX just can't handle that many files in a timely fashion. Like I've said, it just works when using Windows to browse.

In a blind attempt to fix the weird speed issues, I decided to try something crazy. I added an iSCSI server share on the QNAP, mounted it on our main active directory file share server, then shared THAT out. The same folder that takes 1-2 minutes to open, now takes 4-5 seconds.

Is SMB really just that awful? Is Apple's SMB/AFP client that awful? I can't fathom why adding a middle man to the file share in the form of iSCSI actually makes that much of a difference.

I find Snow Leopard's SMB to be downright awful. Our website assistant curses at it regularly when accessing our data server.

evil_bunnY
Apr 2, 2003

Apple SMB implementations have historically been comically bad.

GMontag
Dec 20, 2011

Frozen-Solid posted:

I ran into an interesting issue today, and I'd love for someone smarter than I to explain why the fix I'm implementing works.

The issue: we have a QNAP NAS. We set up a share for one department to use to store files for each SKU in a folder. SKUs are 6 digits long, and we have about 30,000 distinct folders. To keep it from being too hard to browse, we split that into 10 folders.

\\nas\share\SKUs\0\012346\<files go here>

The 0 is the first digit of the SKU. This means we have ~3000 folders in each of 10 folders on the share. Still a lot, but it's manageable. The problem is that on Mac OSX when a user connects to the share, it takes about a minute to open up each of the "first digit" folders. It doesn't matter if we use SMB or AFP, both are slow as molasses. On Windows it's almost instantaneous and we have no problems when trying to browse the share.

I've always assumed it was some combination of the sheer number of files in each of those folders, and the fact that they use color coded labels that Mac supports, so each time they open a folder OSX just can't handle that many files in a timely fashion. Like I've said, it just works when using Windows to browse.

In a blind attempt to fix the weird speed issues, I decided to try something crazy. I added an iSCSI server share on the QNAP, mounted it on our main active directory file share server, then shared THAT out. The same folder that takes 1-2 minutes to open, now takes 4-5 seconds.

Is SMB really just that awful? Is Apple's SMB/AFP client that awful? I can't fathom why adding a middle man to the file share in the form of iSCSI actually makes that much of a difference.

What version of OS X? Apple rewrote their SMB code from the ground up for Lion. Before that, they were stuck with an ancient version of code from the open source Samba project from before they switched their license from GPLv2 to GPLv3.

Wheelchair Stunts
Dec 17, 2005

devmd01 posted:

Horrible idea. VMware Server 2 is horribly slow and lacks the features you would need to do what I think you think you want to do. If you go the ESXi/vSphere route, you won't get to do any video output. Adding layers of complexity like virtualization is a really bad idea unless you know what you're doing and why. Do you really want to gently caress something up messing with ESXi and delete all of your movies?

KISS always, especially for a home setup. Do you really want to have to gently caress with it when something breaks? My file server for video/music is a fully separate box from my vm server for this reason. If my file server fucks up for any reason, I don't run the risk of my network being completely down since the router is a pfsense vm.

Just use windows software raid for whatever volumes you want if you can't afford a hardware controller, they'll transfer to new hardware just fine.

Edit: don't use intel matrix raid, it's garbage (unless it's improved significantly since the ICH10R)

I'd go further to say don't trust anything you don't understand technically and/or intimately with data you give a poo poo about.

Piglips
Oct 9, 2003

Can mixing (similar size) drives made by different manufacturers cause any significant complications in software-RAID?

I have some Seagates in a RAID1 that's coughing up SMART errors, but I can only get my hands on a WD at the moment... so just wondering if something like the drives' sector sizes may come into play here?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Piglips posted:

Can mixing (similar size) drives made by different manufacturers cause any significant complications in software-RAID?

I have some Seagates in a RAID1 that's coughing up SMART errors, but I can only get my hands on a WD at the moment... so just wondering if something like the drives' sector sizes may come into play here?
As a general rule, you should be fine. You'll be limited to the speed of the slowest drive (more or less), so just make sure you don't get one that's substantially slower than the rest of the array. As for sectors, unless the rest of your array is built using new 4K-sector drives and your replacement is an older 512-byte-only drive, you'll be fine.
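If you want to check whether the 4K-vs-512-byte question even applies before swapping the drive in, the sector sizes are visible from the OS. A dry-run sketch (device name hypothetical; on Linux, `blockdev` reports both values):

```shell
#!/bin/sh
# Dry-run sketch: show how to read logical/physical sector sizes.
# Commands are printed, not executed.
run() { echo "+ $*"; }

run blockdev --getss /dev/sdb    # logical sector size (bytes)
run blockdev --getpbsz /dev/sdb  # physical sector size (bytes)
# A 4K-sector ("Advanced Format") drive typically reports 512 logical /
# 4096 physical; an older drive reports 512 / 512.
```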

Frozen Peach
Aug 25, 2004

garbage man from a garbage can

GMontag posted:

What version of OS X? Apple rewrote their SMB code from the ground up for Lion. Before that, they were stuck with an ancient version of code from the open source Samba project from before they switched their license from GPLv2 to GPLv3.

It was bad on Leopard. We recently upgraded to Lion and it got worse.

Sir Sidney Poitier
Aug 14, 2006

My favourite actor


I'm getting sick of WHS loving up on me. Googling suggests that ZFS and FreeNAS are no good for using different-sized discs - what is my alternative to Drive Extender? I want to use the discs that I've got and not spend any more money... Or am I stuck with WHS?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There's unRAID if you don't mind something on the Linux side of the fence. The problems I had with different-sized disks were just juggling around and managing which files and directories are actually protected, while never being that comfortable that everything would be alright in case of a real failure. I've been encountering some peculiar problems with my OpenSolaris (yeah, it's obsolete, whatever) server that have required some hard resets, and these all happened during longer periods of heavy writes. Thanks to ZFS being copy on write, I had zero problems with the consistency of my files on the file system. The same abusive rebooting situation for me with mdraid, XFS, and LVM in the past has resulted in some seriously bad corruption where I've lost a lot of irreplaceable data.

But hey, you keep backups, right?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Anjow posted:

I'm getting sick of WHS loving up on me. Googling suggests that ZFS and FreeNAS are no good for using different-sized discs - what is my alternative to Drive Extender? I want to use the discs that I've got and not spend any more money... Or am I stuck with WHS?

I use Linux + mdadm + LVM to RAID and pool 15 or so disks of varying sizes. I moved to this setup from WHS and haven't looked back.
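For anyone curious what that stack roughly looks like, a dry-run sketch (device names, levels, and sizes are all hypothetical; the real layout depends entirely on your disks): group equal-sized partitions into md arrays, then pool the arrays with LVM so differently-sized groups still present one filesystem:

```shell
#!/bin/sh
# Dry-run sketch of mdadm + LVM pooling: commands printed, not executed.
run() { echo "+ $*"; }

# RAID the disks in equal-sized groups...
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
run mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1

# ...then pool the arrays into one big logical volume.
run pvcreate /dev/md0 /dev/md1
run vgcreate storage /dev/md0 /dev/md1
run lvcreate -l 100%FREE -n pool storage
run mkfs.ext4 /dev/storage/pool
```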

Butt Soup Barnes
Nov 25, 2008

I have some questions for you experts here.

I'm a digital media pack rat. Storage was so cheap a year ago that I bought 4 2TB hard drives when they were on sale for like $75 a piece.

A month back one of the hard drives failed that was full of movies and it was a pain in the rear end getting everything back.

Is there any way to do some sort of RAID setup that would not require wiping my drives? Right now two of my drives are full, one has a few hundred gigs worth of files, and the fourth is going to be RMAd for a replacement.

Since I think the answer to my first question is a no, what is the best way to monitor my hard drives so I can get a replacement and backup the data before a failure?

MOLLUSC
Nov 30, 2005

Butt Soup Barnes posted:

I have some questions for you experts here.

I'm a digital media pack rat. Storage was so cheap a year ago that I bought 4 2TB hard drives when they were on sale for like $75 a piece.

A month back one of the hard drives failed that was full of movies and it was a pain in the rear end getting everything back.

Is there any way to do some sort of RAID setup that would not require wiping my drives? Right now two of my drives are full, one has a few hundred gigs worth of files, and the fourth is going to be RMAd for a replacement.

Since I think the answer to my first question is a no, what is the best way to monitor my hard drives so I can get a replacement and backup the data before a failure?

You'll need to transfer the data somewhere else temporarily while you build the array. I'd recommend using FreeNAS and setting the 4 drives up as a RAID-Z1; it's pretty simple to get up and running and you can configure it to email you if one of the drives fails.
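The FreeNAS GUI drives this for you, but the underlying operation is basically a one-liner. A dry-run sketch (pool and device names hypothetical); note a RAID-Z vdev is built on empty drives, which is why the data has to be parked elsewhere first:

```shell
#!/bin/sh
# Dry-run sketch: create a 4-drive RAID-Z1 pool (survives one drive
# failure). Commands are printed, not executed.
run() { echo "+ $*"; }

run zpool create tank raidz1 da0 da1 da2 da3
run zpool status tank
```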

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You could always back up all your hard drives to a service like Crashplan and restore everything once your array is ready.

Butt Soup Barnes
Nov 25, 2008

MOLLUSC posted:

You'll need to transfer the data somewhere else temporarily while you build the array. I'd recommend using FreeNAS and set the 4 drives up as a RAID-Z1, it's pretty simple to get it up and running and you can configure it to email you if one of the drives fails.

Thanks, I figured I wouldn't be able to do it without backing up my drives first.

Looks like I'll have to buy a couple more 2TB HDDs, guess I'll wait till the price drops a bit.

Ninja edit: necro, unfortunately with 4+ TB of data it would take me weeks just to upload the data, and that's assuming their "unlimited" truly is unlimited.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Butt Soup Barnes posted:

Thanks, I figured I wouldn't be able to do it without backing up my drives first.

Looks like I'll have to buy a couple more 2TB HDDs, guess I'll wait till the price drops a bit.

Ninja edit: necro, unfortunately with 4+ TB of data it would take me weeks just to upload the data, and that's assuming their "unlimited" truly is unlimited.

It is truly unlimited. I've got many terabytes up there (because I didn't pay close enough attention to my folders getting backed up). It takes a while to upload, but once you've got it up there it's nice to have it backed up.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Butt Soup Barnes posted:

Is there any way to do some sort of RAID setup that would not require wiping my drives? Right now two of my drives are full, one has a few hundred gigs worth of files, and the fourth is going to be RMAd for a replacement.
You have 4 disks:

disk1: RMA and empty
disk2: a little full
disk3: full
disk4: full

1) Partition disk1 into 4 partitions and create a RAID 5 array across them.
2) Copy all data from disk2 to the new array.
3) Repartition disk2 into two partitions; replace disk1 partition 2 and disk1 partition 4 with these partitions, then expand disk1 partition 1 and disk1 partition 3 to grow the array.
4) Copy all data from disk3 to the array; replace disk2 partition 1 with this disk.
5) Copy all data from disk4 to the array; replace disk2 partition 2 with this disk.
6) Replace disk1 partition 1 with disk2.
7) Now for the scary part: degrade the array and rebuild disk1.

Voila, data juggling at its finest.
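The scary bits of that juggle, sketched as dry-run mdadm commands (device names hypothetical): the array starts life as four partitions of the one empty disk (so it has zero real redundancy at first), and members later get swapped out one at a time:

```shell
#!/bin/sh
# Dry-run sketch of the data-juggling approach: printed, not executed.
run() { echo "+ $*"; }

# Step 1: four partitions on the empty disk become a RAID5. Until real
# disks are swapped in, losing disk1 at this point loses everything.
run mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4

# Later steps replace those partitions with whole disks, one at a time,
# letting the array rebuild between each swap:
run mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
run mdadm /dev/md0 --add /dev/sdb1
```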

Moey
Oct 22, 2010

I LIKE TO MOVE IT

adorai posted:

Voila, data juggling at its finest.

How do you sleep at night? :v:

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

adorai posted:

You have 4 disks:

disk1: RMA and empty
disk2: a little full
disk3: full
disk4: full

1) Partition disk1 into 4 partitions and create a RAID 5 array across them.
2) Copy all data from disk2 to the new array.
3) Repartition disk2 into two partitions; replace disk1 partition 2 and disk1 partition 4 with these partitions, then expand disk1 partition 1 and disk1 partition 3 to grow the array.
4) Copy all data from disk3 to the array; replace disk2 partition 1 with this disk.
5) Copy all data from disk4 to the array; replace disk2 partition 2 with this disk.
6) Replace disk1 partition 1 with disk2.
7) Now for the scary part: degrade the array and rebuild disk1.

Voila, data juggling at its finest.

I didn't see nearly enough building degraded arrays :colbert:

berzerker
Aug 18, 2004
"If I could not go to heaven but with a party, I would not go there at all."

Thermopyle posted:

It is truly unlimited. I've got many terabytes up there (because I didn't pay close enough attention to my folders getting backed up). It takes awhile to upload, but once you've got it up there it's nice to have it backed up.

That's fine and well if you also have truly unlimited bandwidth, which few do anymore. Be very sure of that factor too before you get your internet account suspended.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Moey posted:

How do you sleep at night? :v:
Alcohol. Seriously though, it's not like I am advocating doing this for a production dataset, it's probably warez and porn.

FISHMANPET posted:

I didn't see nearly enough building degraded arrays :colbert:
There's only one degraded rebuild at the end.

Butt Soup Barnes
Nov 25, 2008

adorai posted:

You have 4 disks:

disk1: RMA and empty
disk2: a little full
disk3: full
disk4: full

1) Partition disk1 into 4 partitions and create a RAID 5 array across them.
2) Copy all data from disk2 to the new array.
3) Repartition disk2 into two partitions; replace disk1 partition 2 and disk1 partition 4 with these partitions, then expand disk1 partition 1 and disk1 partition 3 to grow the array.
4) Copy all data from disk3 to the array; replace disk2 partition 1 with this disk.
5) Copy all data from disk4 to the array; replace disk2 partition 2 with this disk.
6) Replace disk1 partition 1 with disk2.
7) Now for the scary part: degrade the array and rebuild disk1.

Voila, data juggling at its finest.

You are a god drat genius.

I have a few questions since I have never setup a RAID array. Excuse my ignorance.

-When I create the partitions, is there a specific filesystem that would be best suited? And I assume I should split the partitions into equal sizes?

-My motherboard has RAID5 support. Is this hardware RAID? If so, is it configured through BIOS, or within Windows, or via some other method?

-When I am ready to degrade the array, how do I go about doing that?

Thanks for the help, I almost didn't even bother to ask since I was almost positive I wouldn't be able to.

And yeah, it's just a shitload of movies/tv shows so if for some reason it doesn't work it's not the end of the world.

Oddhair
Mar 21, 2004

I've recently been lucky enough to escape a failing HD, a Seagate ST31000333AS 1TB 7200RPM. I noticed reallocated sectors on the drive, and started the replacement process, but of course the drive was full in my WHS machine and HD prices have been jacked for 8 months now.

So I got the drive replaced with a WD20EARS, and ran Seagate's tools under DOS. Now their tool is claiming the drive is "passed after repair" when before it had failed the long test. I'm fully zeroing out the drive, also from SeaTools for DOS, but what can I really do with it? It's out of warranty, 25,000+ power-on hours, and had some reallocated sectors/bad blocks which are now allegedly fixed. I'll never trust anything important to it anymore, assuming I even use it, and I don't want to foist it off on some unsuspecting other person, so I can't sell it in good conscience, either.

Do I just accept that I'm trashing a mostly-functional drive and move on?

Edit: I should mention, I have a raft of dead drives I want to send to the guy who makes the metal roses, but I'll be disassembling them anyway. It seems like this one should be used for something, but I don't know from where that "seeming" comes.

Oddhair fucked around with this message at 19:38 on May 22, 2012

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

Oddhair posted:

I've recently been lucky enough to escape a failing HD, a Seagate ST31000333AS 1TB 7200RPM. I noticed reallocated sectors on the drive, and started the replacement process, but of course the drive was full in my WHS machine and HD prices have been jacked for 8 months now.

So I got the drive replaced with a WD20EARS, and ran Seagate's tools under DOS. Now their tool is claiming the drive is "passed after repair" when before it had failed the long test. I'm fully zeroing out the drive, also from SeaTools for DOS, but what can I really do with it? It's out of warranty, 25,000+ power-on hours, and had some reallocated sectors/bad blocks which are now allegedly fixed. I'll never trust anything important to it anymore, assuming I even use it, and I don't want to foist it off on some unsuspecting other person, so I can't sell it in good conscience, either.

Do I just accept that I'm trashing a mostly-functional drive and move on?

Edit: I should mention, I have a raft of dead drives I want to send to the guy who makes the metal roses, but I'll be disassembling them anyway. It seems like this one should be used for something, but I don't know from where that "seeming" comes.

Think about it, you'll never be able to trust that drive again. It's not worth the hassle.

Ceros_X
Aug 6, 2006

U.S. Marine
Turn it into an external that you use to loan out to friends. People never take as good of care of your poo poo as you do, so when it comes up lost/broken/stolen you can shrug it off.

Odette
Mar 19, 2011

Ceros_X posted:

Turn it into an external that you use to loan out to friends. People never take as good of care of your poo poo as you do, so when it comes up lost/broken/stolen you can shrug it off.

I actually do this with a WD 1TB that has about 250 reallocated sectors. It's worked well, so far.

Tiger.Bomb
Jan 22, 2012
OK Just a simple question, hoping for recommendations.

Right now my 'NAS' is a headless Windows XP PC with a 1 TB drive using Windows file sharing. My 'backup' is to use SyncToy monthly to copy over any changes.

PC specs: E6600 2.4 GHz, 2GB RAM, 600 W PSU, 500GB drive for OS, and 1000GB storage drive.

I am about to run out of space on my 1TB drive. Is there a simple way to get two physical drives to APPEAR as one in windows? Right now I use the storage mostly for media to be viewed on XBMC, and it would be easiest to keep all my video media on one share so my XBMC setup isn't complicated.

Second question: with my 600W power supply, two drives, and a 8600gts (that is rarely used), do I have the power for another 1TB drive? The 1TB I copy to right now is an external with its own supply.

Thanks

Galler
Jan 28, 2008


You probably have 200-300 spare watts (if not more), so that's not even close to an issue.
