|
There are a number of people recommending more RAM if you enable dedupe on ZFS file systems. I'm not quite sure how ZFS implements snapshots, but it's just a form of delta disks in the end, and if there's not a whole lot changing between snapshots and the on-disk format is efficient enough (read: not VMware's implementation), it could be worth enabling for some savings.
|
# ? May 15, 2012 17:04 |
|
|
Zorak of Michigan posted:Dedup would be a lot more CPU and memory intensive than snapshots, so I'd think you'd want to explore snapshots a little more. When you talk about pulling snapshots out, what exactly are you looking to do? I ask because if you just need to copy files from a snapshot to the web host, that's easy and totally non-disruptive, as Fishmanpet says. You just go to the snapshot directory, find what you want, and copy it over. In your particular use case I don't think you'd ever want to actually restore the whole snapshot, so a lot of the situations that I have trouble wrapping my head around would never come up anyway.

I have two situations I'm concerned about:

1) No offense DarkLotus (everything has been great so far!), but my hosting takes a dump and all my data is gone. In this case I want a newish backup I can restore immediately. For this, I just need to grab a copy every week or whatever, and have the newest one available. In this situation I don't even need to keep historical copies.

2) Security breach of some kind that I don't notice for a while. In this case, I need to troll through backups and find the latest copy without the problem, then restore and update to fix the initial breach. Here I may need to mix-and-match files - the web and database content may be from before the breach, but I could need the newest SVN to get my development work back. Having dated backups I can easily poke through and grab files from is good here.

Honestly, I need to do a bunch of upgrades to the NAS, including larger disks. If I upgrade the disks, I will probably have a couple TB of empty space and can easily just keep duplicate copies of the website, storage space be damned. However, I've got this fancy filesystem, so if I was going to roll a script to download the contents, doing a snapshot of the directory wouldn't be a huge issue.

Upgrading so I have dedup as an option would be cool too; it would be transparent in the sense that I would have a pool full of directories with timestamp names, and they would all look like full backups, but the FS layer would not duplicate blocks (which would be a majority of the data). FYI, I think the box has a relatively small amount of RAM (2GB if memory serves) and I don't know if the microATX board supports more. If the filesystem isn't mounted, I assume there is no penalty for having the dedup option set? It would make sense that ZFS wouldn't keep information about an unmounted filesystem in memory, but I'm not sure how it's organized under the hood.
|
|
# ? May 15, 2012 17:50 |
|
So I'm envisioning that every night (or however often) you mirror the live data to your NAS (with rsync or whatever), and once the sync is done, you take a snapshot. If you need to restore the entire directory, you just look in /pool/.zfs/snapshot/`date --date="yesterday"` and pull everything out. If you need a specific file you just go into /pool/.zfs/snapshot/$dateNeeded and pull the files you need.
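The whole mirror-then-snapshot loop fits in a few lines of cron-able script. This is just a sketch; the remote path, dataset name, and mountpoint here are placeholders, not anything from this thread:

```shell
#!/bin/sh
# Hypothetical nightly job: mirror the live site into the pool,
# then freeze the result as a dated snapshot.
SRC="user@webhost:/var/www/"   # placeholder remote path
DEST="/pool/website/"          # placeholder dataset mountpoint

# -a preserves metadata, --delete keeps the mirror exact; identical
# files are left untouched, so each snapshot's delta stays small.
rsync -a --delete "$SRC" "$DEST" &&
    zfs snapshot "pool/website@$(date +%Y-%m-%d)"
```

Restoring is then exactly the browse-and-copy described above: everything lives under /pool/.zfs/snapshot/2012-05-15/ (or whichever date you need).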
|
# ? May 15, 2012 19:06 |
|
Dedup requires a lot of memory because it works via block-level checksum. If the system can't keep a table of checksums in memory, it's going to have to load it from disk as it processes writes, which is a prescription for some pretty awful write performance. Snapshots, by contrast, take advantage of the fact that ZFS is a copy-on-write filesystem, so they're basically free except for the disk space required to store older versions of files. I'd use snapshots for this purpose, without any doubt or hesitation. Edit: Just make sure you use rsync or some other process that leaves identical files alone. If you just download a new copy every night, you'll be creating brand new files each time, and your nightly delta will be 100% of the size of the web site. Snapshots won't be so economical then.
|
# ? May 15, 2012 20:14 |
Shane-O-Mac posted:I'm trying to set up FreeNAS 0.7 with Sabnzbd, Sickbeard, etc. I keep reading that I can add packages such as Sabnzbd through the web GUI. Apparently you go to System > Packages and do it through there. This button doesn't exist on my FreeNAS. I'm pretty confused. You know, maybe I shouldn't even be using FreeNAS. I want to turn an old PC into a home media server that runs the programs I listed. The computer is about 7 years old, so this is very basic stuff. I want to maximize performance with my old hardware while keeping energy costs low. I just assumed I should use FreeNAS, but is there a better way?
|
|
# ? May 15, 2012 21:26 |
|
Zorak of Michigan posted:Dedup requires a lot of memory because it works via block-level checksum. If the system can't keep a table of checksums in memory, it's going to have to load it from disk as it processes writes, which is a prescription for some pretty awful write performance. Snapshots, by contrast, take advantage of the fact that ZFS is a copy-on-write filesystem, so they're basically free except for the disk space required to store older versions of files. I'd use snapshots for this purpose, without any doubt or hesitation. I think for dedup, it was something like (rule-of-thumb from zfs-discuss maybe) 2GB of RAM per terabyte of data?
|
# ? May 15, 2012 21:34 |
|
movax posted:I think for dedup, it was something like (rule-of-thumb from zfs-discuss maybe) 2GB of RAM per terabyte of data? ZFS Dedup FAQ posted:3. Make sure your system has enough memory to support dedup. Determine the memory requirements for deduplicating your data as follows: http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
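The FAQ's arithmetic is easy to reproduce as a back-of-envelope check. The ~320 bytes per unique block figure below is the commonly quoted community estimate, not something from the FAQ text above, and the result swings wildly with your average block size:

```shell
# Rough DDT sizing: assume ~320 bytes of in-core dedup-table entry
# per unique block. The average block size dominates the answer.
data=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TiB of pool data
bs=$((128 * 1024))                        # ZFS default 128 KiB recordsize
blocks=$((data / bs))
echo "$((blocks * 320 / 1024 / 1024)) MiB of DDT"   # prints "5120 MiB of DDT"
```

That works out to roughly 2.5GB of RAM per TiB at the default recordsize, so the 2GB-per-TB rule of thumb is in the right ballpark, and it makes a 2GB box a non-starter for dedup on a multi-terabyte pool.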
|
# ? May 15, 2012 22:01 |
|
Why not just use version control? It handles diffs/versioning and duplication.
|
# ? May 17, 2012 03:04 |
|
So I am tired of FreeNAS 8 being a piece of poo poo, and if I am going to go to the hassle of copying/rebuilding my NAS, I think I am just going to pop a video card in it and have it output video for watching movies. My NAS is only ever on when I stream stuff off of it anyhow. Therefore, I think I am going to put Windows on an SSD, for quick boot. And hell, if I am putting Windows on it, I think I am just going to make a VM server to run the NAS disks through Windows, in case I need to move my NAS to new hardware without having to rebuild the software RAID. Great idea or horrible idea?
|
# ? May 17, 2012 06:12 |
|
Plenty of people pass their storage controllers to a VM. It makes managing the OS so much simpler (update hosed everything? Revert to snap).
|
# ? May 17, 2012 10:13 |
|
jeeves posted:Great idea or horrible idea? Horrible idea. Vmware server 2 is horribly slow and lacks the features you would need to do what I think you think you want to do. If you go the esxi/vsphere route, you won't get to do any video output. Adding layers of complexity like virtualization is a really bad idea unless you know what you're doing and why. Do you really want to gently caress something up messing with esxi and delete all of your movies? KISS always, especially for a home setup. Do you really want to have to gently caress with it when something breaks? My file server for video/music is a fully separate box from my vm server for this reason. If my file server fucks up for any reason, I don't run the risk of my network being completely down since the router is a pfsense vm. Just use windows software raid for whatever volumes you want if you can't afford a hardware controller, they'll transfer to new hardware just fine. Edit: don't use intel matrix raid, it's garbage (unless it's improved significantly since the ICH10R)
|
# ? May 17, 2012 13:20 |
|
jeeves posted:So I am tired of FreeNAS 8 being a piece of poo poo, and if I am going to go to the hassle of copying/rebuilding my NAS, I think I am just going to pop a video card in it and have it output to video for watching movies. My NAS is only ever on for when I stream stuff off of it anyhow. Sounds like it would be much easier to just put windows home server on it, which is like $50
|
# ? May 17, 2012 17:30 |
|
I ran into an interesting issue today, and I'd love for someone smarter than I to explain why the fix I'm implementing works.

The issue: we have a QNAP NAS. We set up a share for one department to use to store files for each SKU in a folder. SKUs are 6 digits long, and we have about 30,000 distinct folders. To keep it from being too hard to browse, we split that into 10 folders: \\nas\share\SKUs\0\012346\<files go here> (the 0 is the first digit of the SKU). This means we have ~3000 folders in each of 10 folders on the share. Still a lot, but it's manageable.

The problem is that on Mac OS X, when a user connects to the share, it takes about a minute to open each of the "first digit" folders. It doesn't matter if we use SMB or AFP, both are slow as molasses. On Windows it's almost instantaneous and we have no problems browsing the share. I've always assumed it was some combination of the sheer number of files in each of those folders and the fact that they use the color-coded labels Mac supports, so each time they open a folder OS X just can't handle that many files in a timely fashion. Like I've said, it just works when using Windows to browse.

In a blind attempt to fix the weird speed issues, I decided to try something crazy. I added an iSCSI target on the QNAP, mounted it on our main Active Directory file share server, then shared THAT out. The same folder that takes 1-2 minutes to open now takes 4-5 seconds. Is SMB really just that awful? Is Apple's SMB/AFP client that awful? I can't fathom why adding a middle man to the file share in the form of iSCSI actually makes that much of a difference.
|
# ? May 18, 2012 20:59 |
|
Frozen-Solid posted:I ran into an interesting issue today, and I'd love for someone smarter than I to explain why the fix I'm implementing works. I find Snow Leopard's SMB to be downright awful. Our website assistant curses at it regularly when accessing our data server.
|
# ? May 18, 2012 21:03 |
|
Apple SMB implementations have historically been comically bad.
|
# ? May 18, 2012 21:10 |
|
Frozen-Solid posted:I ran into an interesting issue today, and I'd love for someone smarter than I to explain why the fix I'm implementing works. What version of OS X? Apple rewrote their SMB code from the ground up for Lion. Before that, they were stuck with an ancient version of code from the open source Samba project from before they switched their license from GPLv2 to GPLv3.
|
# ? May 19, 2012 00:07 |
|
devmd01 posted:Horrible idea. Vmware server 2 is horribly slow and lacks the features you would need to do what I think you think you want to do. If you go the esxi/vsphere route, you won't get to do any video output. Adding layers of complexity like virtualization is a really bad idea unless you know what you're doing and why. Do you really want to gently caress something up messing with esxi and delete all of your movies? I'd go further to say don't trust anything you don't understand technically and/or intimately with data you give a poo poo about.
|
# ? May 20, 2012 10:02 |
|
Can mixing (similar size) drives made by different manufacturers cause any significant complications in software-RAID? I have some Seagates in a RAID1 that's coughing up SMART errors, but I can only get my hands on a WD at the moment... so just wondering if something like the drives' sector sizes may come into play here?
|
# ? May 20, 2012 16:20 |
|
Piglips posted:Can mixing (similar size) drives made by different manufacturers cause any significant complications in software-RAID?
|
# ? May 20, 2012 18:24 |
|
GMontag posted:What version of OS X? Apple rewrote their SMB code from the ground up for Lion. Before that, they were stuck with an ancient version of code from the open source Samba project from before they switched their license from GPLv2 to GPLv3. It was bad on Leopard. We recently upgraded to Lion and it got worse.
|
# ? May 20, 2012 20:33 |
|
I'm getting sick of WHS loving up on me. Googling suggests that ZFS and FreeNAS are no good for using different-sized discs - what is my alternative to Drive Extender? I want to use the discs that I've got and not spend any more money... Or am I stuck with WHS?
|
# ? May 20, 2012 22:11 |
|
There's unRAID if you don't mind something on the Linux side of the fence. The problems I had with different sized disks were just the juggling around and managing which files and directories are actually protected, while never being that comfortable that everything would be alright in case of a real failure. I've been encountering some peculiar problems with my OpenSolaris (yeah, it's obsolete, whatever) server that have required some hard resets, and these all happened during longer periods of heavy writes. Thanks to ZFS being copy-on-write, I had zero problems with the consistency of my files on the filesystem. The same abusive rebooting situation with mdraid, XFS, and LVM in the past has resulted in some seriously bad corruption where I've lost a lot of irreplaceable data. But hey, you keep backups, right?
|
# ? May 20, 2012 22:21 |
|
Anjow posted:I'm getting sick of WHS loving up on me. Googling suggests that ZFS and FreeNAS are no good for using different-sized discs - what is my alternative to Drive Extender? I want to use the discs that I've got any not spend any more money... Or am I stuck with WHS? I use Linux + mdadm + LVM to RAID and pool 15 or so disks of varying sizes. I moved to this setup from WHS and haven't looked back.
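For anyone curious what that kind of pooling looks like, here's a minimal sketch of the mdadm-plus-LVM layering with two mismatched mirror pairs. All device names are hypothetical, and this is illustrative only (these commands destroy whatever is on the disks they're pointed at):

```shell
# Two RAID1 pairs of different sizes, each its own md array
# (placeholder device names; real setups should use partitions).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb  # 2TB pair
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd  # 1TB pair

# LVM then glues the arrays into one pool you can grow later.
pvcreate /dev/md0 /dev/md1
vgcreate pool /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media pool
mkfs.ext4 /dev/pool/media
```

The nice part versus WHS is that adding another pair later is just another `mdadm --create`, `pvcreate`, `vgextend`, and a filesystem grow.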
|
# ? May 20, 2012 22:31 |
|
I have some questions for you experts here. I'm a digital media pack rat. Storage was so cheap a year ago that I bought 4 2TB hard drives when they were on sale for like $75 apiece. A month back, one of the drives (full of movies) failed, and it was a pain in the rear end getting everything back. Is there any way to do some sort of RAID setup that would not require wiping my drives? Right now two of my drives are full, one has a few hundred gigs worth of files, and the fourth is going to be RMA'd for a replacement. Since I think the answer to my first question is no, what is the best way to monitor my hard drives so I can get a replacement and back up the data before a failure?
|
# ? May 21, 2012 04:30 |
|
Butt Soup Barnes posted:I have some questions for you experts here. You'll need to transfer the data somewhere else temporarily while you build the array. I'd recommend using FreeNAS and set the 4 drives up as a RAID-Z1, it's pretty simple to get it up and running and you can configure it to email you if one of the drives fails.
|
# ? May 21, 2012 12:24 |
|
You could always back up all your hard drives to a service like Crashplan and restore it all once your array is ready.
|
# ? May 21, 2012 22:26 |
|
MOLLUSC posted:You'll need to transfer the data somewhere else temporarily while you build the array. I'd recommend using FreeNAS and set the 4 drives up as a RAID-Z1, it's pretty simple to get it up and running and you can configure it to email you if one of the drives fails. Thanks, I figured I wouldn't be able to do it without backing up my drives first. Looks like I'll have to buy a couple more 2TB HDDs, guess I'll wait till the price drops a bit. Ninja edit: necro, unfortunately with 4+ TB of data it would take me weeks just to upload the data, and that's assuming their "unlimited" truly is unlimited.
|
# ? May 21, 2012 22:32 |
|
Butt Soup Barnes posted:Thanks, I figured I wouldn't be able to do it without backing up my drives first. It is truly unlimited. I've got many terabytes up there (because I didn't pay close enough attention to my folders getting backed up). It takes awhile to upload, but once you've got it up there it's nice to have it backed up.
|
# ? May 22, 2012 00:58 |
|
Butt Soup Barnes posted:Is there any way to do some sort of RAID setup that would not require wiping my drives? Right now two of my drives are full, one has a few hundred gigs worth of files, and the fourth is going to be RMAd for a replacement.

You have 4 disks:
disk1: RMA'd and empty
disk2: a little full
disk3: full
disk4: full

1. Partition disk1 into 4 partitions and create the RAID5 array across them.
2. Copy all data from disk2 to the new array.
3. Repartition disk2 into two partitions, replace disk1 partition 2 and disk1 partition 4 with these partitions, then expand disk1 partition 1 and disk1 partition 3 to grow the array.
4. Copy all data from disk3 to the array, then replace disk2 partition 1 with this disk.
5. Copy all data from disk4 to the array, then replace disk2 partition 2 with this disk.
6. Replace disk1 partition 1 with disk2.
7. Now for the scary part: degrade the array and rebuild onto disk1.

Voila, data juggling at its finest.
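Each "replace X with Y" swap in that juggling act is the same three mdadm moves. A sketch of a single swap, with entirely hypothetical device names (one typo here eats data, so treat this as dry-run reading only):

```shell
# Swap one of disk1's stand-in partitions (sda2) for a partition on
# the freshly emptied disk2 (sdb1). Names are placeholders.
mdadm /dev/md0 --fail /dev/sda2     # mark the stand-in as failed
mdadm /dev/md0 --remove /dev/sda2   # pull it out of the array
mdadm /dev/md0 --add /dev/sdb1      # rebuild onto the real partition

cat /proc/mdstat                    # watch the resync finish first
```

The array is running degraded for the entire resync of every swap, so a single real disk failure mid-juggle loses everything. Hence the replies below.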
|
# ? May 22, 2012 02:12 |
|
adorai posted:voila, data juggling at it's finest. How do you sleep at night?
|
# ? May 22, 2012 02:16 |
|
adorai posted:You have 4 disks: I didn't see nearly enough building degraded arrays
|
# ? May 22, 2012 02:22 |
|
Thermopyle posted:It is truly unlimited. I've got many terabytes up there (because I didn't pay close enough attention to my folders getting backed up). It takes awhile to upload, but once you've got it up there it's nice to have it backed up. That's fine and well if you also have truly unlimited bandwidth, which few do anymore. Be very sure of that factor too before you get your internet account suspended.
|
# ? May 22, 2012 02:35 |
|
Moey posted:How do you sleep at night? FISHMANPET posted:I didn't see nearly enough building degraded arrays
|
# ? May 22, 2012 02:54 |
|
adorai posted:You have 4 disks: You are a god drat genius. I have a few questions since I have never setup a RAID array. Excuse my ignorance. -When I create the partitions, is there a specific filesystem that would be best suited? And I assume split up the partitions in equal sizes? -My motherboard has RAID5 support. Is this hardware RAID? If so, is it configured through BIOS, or within Windows, or via some other method? -When I am ready to degrade the array, how do I go about doing that? Thanks for the help, I almost didn't even bother to ask since I was almost positive I wouldn't be able to. And yeah, it's just a shitload of movies/tv shows so if for some reason it doesn't work it's not the end of the world.
|
# ? May 22, 2012 14:46 |
|
I've recently been lucky enough to escape a failing HD, a Seagate ST31000333AS 1TB 7200RPM. I noticed reallocated sectors on the drive, and started the replacement process, but of course the drive was full in my WHS machine and HD prices have been jacked for 8 months now. So I got the drive replaced with a WD20EARS, and ran Seagate's tools under DOS. Now their tool is claiming the drive is "passed after repair" when before it had failed the long test. I'm fully zeroing out the drive, also from SeaTools for DOS, but what can I really do with it? It's out of warranty, 25,000+ power-on hours, and had some reallocated sectors/bad blocks which are now allegedly fixed. I'll never trust anything important to it anymore, assuming I even use it, and I don't want to foist it off on some unsuspecting other person, so I can't sell it in good conscience, either. Do I just accept that I'm trashing a mostly-functional drive and move on? Edit: I should mention, I have a raft of dead drives I want to send to the guy who makes the metal roses, but I'll be disassembling them anyway. It seems like this one should be used for something, but I don't know from where that "seeming" comes. Oddhair fucked around with this message at 19:38 on May 22, 2012 |
# ? May 22, 2012 19:16 |
|
Oddhair posted:I've recently been lucky enough to escape a failing HD, a Seagate ST31000333AS 1TB 7200RPM. I noticed reallocated sectors on the drive, and started the replacement process, but of course the drive was full in my WHS machine and HD prices have been jacked for 8 months now. Think about it, you'll never be able to trust that drive again. It's not worth the hassle.
|
# ? May 23, 2012 06:33 |
|
Turn it into an external that you use to loan out to friends. People never take as good of care of your poo poo as you do, so when it comes up lost/broken/stolen you can shrug it off.
|
# ? May 23, 2012 11:29 |
|
Ceros_X posted:Turn it into an external that you use to loan out to friends. People never take as good of care of your poo poo as you do, so when it comes up lost/broken/stolen you can shrug it off. I actually do this with a WD 1TB that has about 250 reallocated sectors. It's worked well, so far.
|
# ? May 23, 2012 12:36 |
|
OK Just a simple question, hoping for recommendations. Right now my 'NAS' is a headless Windows XP PC with a 1TB drive using Windows file sharing. My 'backup' is to use SyncToy monthly to copy over any changes. PC specs: E6600 2.4GHz, 2GB RAM, 600W PSU, 500GB drive for OS and 1TB storage drive. I am about to run out of space on my 1TB drive. Is there a simple way to get two physical drives to APPEAR as one in Windows? Right now I use the storage mostly for media to be viewed on XBMC, and it would be easiest to keep all my video media on one share so my XBMC setup isn't complicated. Second question: with my 600W power supply, two drives, and an 8600GTS (that is rarely used), do I have the power for another 1TB drive? The 1TB I copy to right now is an external with its own supply. Thanks
|
# ? May 24, 2012 23:15 |
|
|
|
You probably have 200-300 spare watts (if not more), so that's not even close to an issue.
|
# ? May 24, 2012 23:51 |