|
I think I'm gonna go with FlexRAID, but I don't understand how it creates parity through its Snapshot RAID. Is this at all reliable? Am I creating a RAID and then using FlexRAID on top of that? I have three 1TB drives currently, two in a RAID-0, with my OS on the third. If I were to buy two 2TB drives, can I pool all five under FlexRAID and actually get parity?
|
# ? Jun 27, 2011 22:02 |
|
Random question - running a software RAID5 using md on Ubuntu 10.10, ext3 filesystem. Realistically, what options do I have for either file-level or block-level deduplication?
|
# ? Jun 27, 2011 23:09 |
|
Something like this: http://www.opendedup.org/
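In the meantime, if file-level is enough, you can at least survey duplicate files with standard tools before bothering with opendedup. A rough sketch (DIR is a placeholder, point it at your mount point; awk's $2 will choke on paths with spaces, fine for a quick look):

```shell
# Hedged sketch: list file-level dedup candidates by checksum.
# DIR is a placeholder; set it to your array's mount point.
DIR="${DIR:-/srv/media}"
find "$DIR" -type f -print0 \
  | xargs -0 md5sum \
  | sort \
  | awk 'seen[$1]++ { print "duplicate:", $2 }'
```

Each printed line is a file whose checksum already appeared once; what you do with them (hardlink, delete) is up to you.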
|
# ? Jun 27, 2011 23:16 |
|
Longinus00 posted:https://ext4.wiki.kernel.org/index.php/Ext4_Howto#Bigger_File_System_and_File_Sizes Yeah, I saw that, but somewhere else I read that the version of the e2fsprogs included with Ubuntu 10.10 had the necessary code updates...
|
# ? Jun 28, 2011 03:56 |
|
Forbidden Kiss posted:I think I'm gonna go with FlexRAID, but I don't understand how it creates parity through the Snapshot RAID, is this at all reliable? Am I creating a RAID and then using Flex Raid on top of that? You're putting a layer on top of the filesystem. You shouldn't need to create a RAID then use FlexRAID on top (though you could). My understanding (and bear in mind, I'm mostly a UNIX guy, but this looked like a reasonable solution for Windows) is that FlexRAID will take differencing snapshots of your filesystem at intervals, and compare what files changed. It then keeps track of these. If there's a segment of the filesystem that's changing rapidly, it does proper mirroring. If not, it simply mirrors the snapshots across volumes so you can restore. Yes, you should be able to pool all 5 and actually get parity, based on the way it sounds.
|
# ? Jun 28, 2011 15:01 |
|
Nice new bigger boxes from Synology. They seem to have come full circle like ReadyNAS: originally targeting SOHO Windows-only users, now giving high priority to NFS and iSCSI based storage for hosting VMs. They're starting to look weak on the software side though, nary a word about NFSv4. http://www.synology.com/products/product.php?product_name=DS3611xs&lang=enu I'm a bit stumped about advertising an FPU: MrMoo fucked around with this message at 18:44 on Jun 28, 2011 |
# ? Jun 28, 2011 18:37 |
|
Diskstation DS3611xs DX baybee. I know it's been a bunch of anecdotes, but would anyone in this thread willingly go with Synology for a new install, all else equal?
|
# ? Jun 28, 2011 22:22 |
|
I still use them and have not had an issue yet. I think there was either a bad batch of controllers or a cooling problem on the part of the people who reported the problems.
|
# ? Jun 28, 2011 23:13 |
|
My brand new Proliant Microserver apparently has no sleep function built into its BIOS, meaning it can't be used as a FreeNAS device that goes to sleep to save power when not in use. That was my main concern in trying to make it just a pure NAS device, as I already have a low power server for home server (non-NAS) needs. I guess I'll just make my Proliant run WHS2011 and keep it on 24/7 instead of having two machines on all of the time.
|
# ? Jun 28, 2011 23:18 |
|
evol262 posted:You're putting a layer on top of the filesystem. You shouldn't need to create a RAID then use FlexRAID on top (though you could). Can anyone confirm that this is how FlexRAID works? Also, with this type of RAID, what gets dedicated to parity? Will I get all 7TB of the drives I pool together?
|
# ? Jun 29, 2011 03:06 |
|
Okay, so I'm trying to discuss with a friend what we need to build a NAS box, never done one before. I think we've settled on FreeBSD and RAID-Z with at least 5 drives to start. I'm not sure what the process of setting everything up is, though. And what card do we get? Like, is RAID-Z an option when you use FreeBSD? Is it even the best file system option? vvvvvvvvvvv If I'm setting up RAID-Z with FreeBSD, then what do I need the hardware controller for? Like, if I want a hardware RAID solution and have it as a NAS box, I would just get the card and install FreeNAS? Not sure what my options would be there. Do people generally go with RAID-Z if they have the choice? Dudebro fucked around with this message at 05:17 on Jun 29, 2011 |
# ? Jun 29, 2011 04:15 |
|
Dudebro posted:Okay, so I'm trying to discuss with a friend what we need to build a NAS box, never done one before. Raid Z is certainly an option with FreeBSD. Make sure you are using 8.x.
|
# ? Jun 29, 2011 04:41 |
|
Dudebro posted:Okay, so I'm trying to discuss with a friend what we need to build a NAS box, never done one before. Raidz has nothing to do with raid cards and is only kind of related to real raid. http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID-Z It's an option when setting up a multi device zfs pool. It'll give you protection not unlike what a raid5 would give you but with some extra pros and cons that come with its implementation in zfs.
|
# ? Jun 29, 2011 08:00 |
|
RAID-Z Pros: Awesome, flexible, gives you parity without pickiness, can have better performance than raid5 sometimes. Cons: Eats ram for breakfast, slower than vanilla raid, not as simple to understand, no hardware controllers. Why is it slower and also faster? Compare RAID5 using a fast software implementation or hardware controller to ZFS, and ZFS is going to use more resources and potentially be slower (if you are pegging the CPU or running out of memory). If you have a beefy system, ZFS can sometimes be faster (if the limiting factor would be regarding your disk speed and cache drives and so on). Most of the downsides of RAID-Z are really more ZFS issues than anything else. If you go ZFS, use RAID-Z.
|
# ? Jun 29, 2011 08:16 |
|
Forbidden Kiss posted:Can anyone confirm that this is how FlexRAID works? Also, with this type of RAID what gets dedicated to parity, will I get all 7TB's of the drives I pool together? You could try reading their wiki instead of asking here.
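But the short version, as I understand it (hedged, from their docs rather than first-hand use): snapshot parity works unRAID-style, one drive at least as large as your biggest data drive gets dedicated to parity, so no, you don't get all 7TB. The arithmetic for the pool you described:

```shell
# Sketch, assuming unRAID-style single parity where the parity drive
# must be at least as large as the largest data drive.
drives_tb="1 1 1 2 2"   # the 3x1TB + 2x2TB pool from the earlier post
total=0; largest=0
for d in $drives_tb; do
  total=$((total + d))
  if [ "$d" -gt "$largest" ]; then largest=$d; fi
done
echo "raw: ${total}TB, usable with single parity: $((total - largest))TB"
```

So roughly 5TB usable out of 7TB raw, if that assumption holds.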
|
# ? Jun 29, 2011 15:51 |
|
Hmm, well if I wanted to just use hardware RAID, which card should I get and which file system? I'm thinking I need to run just FreeNAS instead of FreeBSD if I just want a NAS box, right?
|
# ? Jun 29, 2011 15:53 |
|
I am doing some testing right now with FreeNAS 8. If all goes well, I'm planning on using that behind my NAS. Currently I'm just testing with a VM and virtual disks. I created a 1gb disk for the OS install, and then 6 2gb disks to play around with currently. I am hitting a problem when creating a volume, it doesn't seem to complete. First volume I try to make is a 3 disk volume, ZFS, Raid-Z. Once I create that, I go to the volume to actually view it, this is what I get. Any idea on what I am doing wrong?
|
# ? Jun 29, 2011 16:52 |
|
Dudebro posted:Hmm, well if I wanted to just use hardware RAID, which card should I get and which file system? I'm thinking I need to run just FreeNAS instead of FreeBSD if I just want a NAS box, right? You could build it up from scratch with FreeBSD, but FreeNAS would probably be easier. LSI and 3ware are relatively good. If you're getting a battery-backed hardware RAID card, it doesn't matter much. I'm fond of ZFS, but it really wants raw disks. If you're going to make a hardware RAID5, and you have battery backup, XFS is a good solution. Try the console. Check on the zvol. I'm unsure of FreeNAS's implementation (other than being considered 'embedded', so / is ro), but ZFS generally wants to mount at /Volume1 for a zpool with that name. Unless it's using mountpoint=legacy. Your pool is probably fine, just need to check on it.
|
# ? Jun 29, 2011 17:19 |
|
What happens if you build it from the command line, e.g. zpool create Volume1 da1 da2 da3?
|
# ? Jun 29, 2011 17:27 |
|
Factory Factory posted:What happens if you build it from the command line, e.g. zpool create Volume1 da1 da2 da3? Created it without errors, not seeing it in the GUI at all though. Edit: Just found this setting, it was set to 2gb (which could be a problem since I only made my virtual disks 2gb), so I dropped the value to 1. Still no luck. Edit 2: So from the CLI I am now doing "zpool status -v" and it comes back with no pools available. So it seems that it actually didn't make that pool, so I try to create a different one using the same disks, and it's telling me that da1 is part of potentially active pool "testraidz". Hrm, I need to really figure out what I am doing with this before I roll it onto real hardware. Moey fucked around with this message at 17:47 on Jun 29, 2011 |
# ? Jun 29, 2011 17:36 |
|
zpool import -F testraidz
zfs set mountpoint=/mnt/testraidz testraidz
zpool status testraidz
df -k
|
# ? Jun 29, 2011 18:25 |
|
I just finished building a Debian machine to be used primarily as a NAS (in addition to some other random tasks). I have 9 x 2TB disks that I need to combine into an array. ZFS or software RAID5? I realize the difference in size if I use single parity for the RAIDZ vs RAID5, but I was just curious what the current opinion on this was.
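The single-parity arithmetic, for reference (marketing TB, so real formatted space will be a bit less; both RAID5 and raidz1 give you n-1 data disks):

```shell
# Usable space for 9x2TB under single parity (RAID5 or raidz1 alike):
# n-1 disks hold data, one disk's worth goes to parity.
disks=9; size_tb=2
echo "usable: $(( (disks - 1) * size_tb ))TB out of $(( disks * size_tb ))TB raw"
```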
|
# ? Jun 29, 2011 18:26 |
|
I was testing FreeNAS in VMware the same way you were and ran into the same problem. Make your virtual disks bigger and it should work.
|
# ? Jun 29, 2011 18:43 |
|
Yea, I'm not sure what ended up happening but now the CLI is not even recognizing the zfs command. Rebuilding the VM now, will bump up the size of the virtual disks, and I am going to take a base snapshot so if (once) I break it again, I won't have to actually re-install. Update: Re-installed FreeNAS, bumped all my disks to 8gb, created a volume with 3 disks, worked great. Thanks for the help everyone, now time to monkey around with this. Moey fucked around with this message at 19:36 on Jun 29, 2011 |
# ? Jun 29, 2011 18:44 |
|
RAID1E or RAID5? I've got three disks (well, the third is somewhere in Texas right now) I'm going to be attaching to the Adaptec 5405 card I posted earlier. For a few reasons I am locked to three disks, which only leaves me at using RAID 1E or 5. RAID1E seems to avoid the "write hole" that 5 has since it's an implementation of rotating mirror/stripe (and thus doesn't deal with a parity bit), but I could be wrong. Which is better to use? It's going to be used in my home ESXi box, running 4-5 not-storage-intensive VMs at a time. I'm not sure what metric to use to benchmark the different setups either. And on top of that, what should I be using for a stripe size?
|
# ? Jun 29, 2011 18:58 |
|
Ok, apparently ext4 just doesn't work well with 15+ TB filesystems. Which filesystem should I use for a large logical volume? edit: I just talked to Theodore Ts'o, and he confirmed that not only does e2fsprogs not currently support > 16TB, the upcoming versions of e2fsprogs that will support >16TB won't be able to do an in-place conversion...filesystems will have to be reformatted. So, don't use ext4 filesystems on your storage if it's ever possible you'll hit 16TB. Are there any filesystems available that support > 16TB and can do an in-place conversion of ext4? There's no way I can back up the 15TB of data I already have... edit: GB != TB and also I cleared up my meaning Thermopyle fucked around with this message at 23:19 on Jun 29, 2011 |
# ? Jun 29, 2011 22:11 |
|
So I just did a:code:
Now, if I pull one of these drives out and put it in a different computer, will the other computer be able to read the contents? Or does mdadm do some magic that makes it readable only as a RAID member? It's not a boot drive, only data.
|
# ? Jun 29, 2011 23:05 |
|
Thermopyle posted:Are there any filesystems available that support > 16TB and can do an in-place conversion of 16TB? There's no way I can backup the 15TB of data I already have... Btrfs has allegedly 16EB support? I think you meant 16TB on the first line. MrMoo fucked around with this message at 23:14 on Jun 29, 2011 |
# ? Jun 29, 2011 23:10 |
|
Hmm. As mentioned on Ubuntu forums there's not really a good large storage solution on Linux right now that I can find.
- EXT4 is limited to 16TB due to e2fsprogs.
- JFS has a problem that does not allow expansion over 32TB.
- XFS has a bug in its fsck that causes the machine to use all available memory and crash when running the fsck on a disk that has a large amount of data (~20TB).
- BTRFS is a great idea, but with no fsck it's currently experimental.
Is any of this wrong?
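For what it's worth, the EXT4 number falls out of 32-bit block addressing in the tools: 2^32 blocks at the default 4KiB block size. The arithmetic:

```shell
# Why the 16TiB figure: e2fsprogs still uses 32-bit block numbers,
# so with the default 4KiB block size the cap is 2^32 * 4KiB.
blocks=$(( 1 << 32 ))
block_size=4096
tib=$(( blocks * block_size / 1024 / 1024 / 1024 / 1024 ))
echo "max ext4 size with 32-bit e2fsprogs: ${tib}TiB"
```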
|
# ? Jun 30, 2011 00:09 |
|
Thermopyle posted:- BTRFS is a great idea, but with no fsck, it's currently experimental Hitting Fedora by default is the only way it's going to get more testing and drop experimental status. That's scheduled for Fedora 16 in October. https://fedoraproject.org/wiki/Releases/16
|
# ? Jun 30, 2011 02:21 |
|
Thermopyle posted:crash when running the fsck on a disk that has a large amount of data (~20TB). Can you give me a link to this? I need to send this to my manager, who said xfs on a 28tb array was a good idea (I also get to walk around with a face for the next couple of weeks)
|
# ? Jun 30, 2011 05:00 |
|
dj_pain posted:can you give me a link to this ? I need to send this to my manager who said xfs on a 28tb array was a good idea (I also get to walk around with a face for the next couple of weeks) I can only find a mention of it here and since it is definitely not authoritative, I wouldn't use it as a "I told you so" resource. :p Ok, so seeing as how my LVM2 is stuck at 16TB, does anyone have any suggestions about where to go from here? I guess I can just start another filesystem with new disks, but I really wanted to keep everything together...thus the reason for LVM to begin with. What would you do?
|
# ? Jun 30, 2011 06:05 |
|
Thermopyle posted:Hmm. As mentioned on Ubuntu forums there's not really a good large storage solution on Linux right now that I can find. BTRFS doesn't have an fsck (it's under development right now, going by the mailing list postings), but something nobody seems to remember is that neither does ZFS (and likely never will, just like it probably won't ever support shrinking). The problem of course is that BTRFS code has way more churn right now since it's under heavy development and new features are getting added all the time (aka potential regressions). Thermopyle posted:Ok, so seeing as how my LVM2 is stuck at 16TB, does anyone have any suggestions about where to go from here? I guess I can just start another filesystem with new disks, but I really wanted to keep everything together...thus the reason for LVM to begin with. I would just start up a second volume with an FS able to hold 16+TB and then migrate it all over down the road. If you really can't wait, BTRFS can do in-place ext migrations.
|
# ? Jun 30, 2011 06:58 |
|
Here is a pretty generic question for you NAS buffs: spinning hard drives down, good or bad? I'm trying to put together a new microserver running Win2008 or WHS2011, as I need to remote into it at home from work for proxy/etc, so it can't be a pure NAS device. I bought a nice low power Proliant Microserver, but the system itself will be on 24/7 since it is a home server. Should I set the non-system hard drives to spin down to save power, or is it not worth it? I am used to having all of my huge hard drives in my own personal computer that is off whenever I am not using it, so I don't know how much wear and tear having 5x drives on 24/7 will cause. However, I've read that having drives spin up and down causes more wear than just having them run all of the time? My non-system disks on this server will not be in use like 95% of the time, so that is why I am curious. Do RAIDs allow for drives to spin down? Would it be better to just not RAID them, to let the non-system drives spin down when not in use?
|
# ? Jun 30, 2011 20:58 |
|
You can spin down with RAID 1 but not RAID 5. It's a good idea for home use to combat power outages, heat, and humidity. The question is whether you have any ports open on the Internet. I found that with script kiddies scanning ports all the time, the machine is constantly spinning up only to serve the login page.
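On the Linux side (not much help for a WHS box, but for anyone else reading), hdparm -S is the usual knob; it takes an encoded timeout where values 1-240 mean N*5 seconds. A sketch of working out the value, with /dev/sdX as a placeholder:

```shell
# hdparm's -S flag takes an encoded idle timeout: values 1-240 mean
# N*5 seconds. Compute the value for a 20-minute spindown.
minutes=20
value=$(( minutes * 60 / 5 ))
echo "hdparm -S $value /dev/sdX   # /dev/sdX is a placeholder data disk"
```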
|
# ? Jun 30, 2011 21:37 |
|
MrMoo posted:You can spin down with RAID 1 but not RAID 5. It's a good idea for home use to combat power outages, heat, and humidity. The question is whether you have any ports open on the Internet. I found that with script kiddies scanning ports all the time the machine is constantly spinning up to only serve the login page. My system disk won't be raided, and it is what I will be remoting into to check my email and use proxy, which I do a lot. The secondary disks will store my NAS/home media data, and those are the ones I am considering spinning down. I take it head-parking on various hard drives is different from disks spinning down? I notice that it takes 2-5 seconds for the big secondary data disk on my main computer to spin up when not in use, and I am curious if my microserver will have the same thing. I am guessing that if I NAS/DE those secondary disks, the drives will never be idle long enough to spin down/head park?
|
# ? Jun 30, 2011 21:41 |
|
I've been reading through the thread, but the OP is fairly old and I was hoping to get a recommendation based on the current lineup of pre-built NASes out there. Is the Netgear ReadyNAS stuff still decent? Basically I need media storage, somewhere around 4TB. Currently I just use a Boxee Box with a usb drive plugged into it. Problem is when I transfer an 8gb movie, it transfers from my laptop to the drive at a max of 1.6MB a second. So I was hoping someone could recommend a NAS that will allow for quick network transfers. I was thinking of upgrading the entire network to 1000mbit/Wireless N. Will that offer a speed upgrade versus my 100mbit/Wireless G setup now?
|
# ? Jun 30, 2011 22:12 |
|
Wireless is always going to be way slower than wired. And yeah, the ReadyNAS series seems good, if you don't mind them being loud. That was a problem for me due to a small apartment with my sound-sensitive girlfriend, so I had to buy a HP Microserver. I wish I had just been able to buy the easier to set up ReadyNAS, but oh well.
|
# ? Jul 1, 2011 00:17 |
|
jonathan posted:I've been reading through the thread, but the OP is fairly old and I was hoping to get a recommendation based on the current lineup of pre-built Nas's out there. The OP is ancient, I'm aware. I'm going to update it! Promises! (No Promises)
|
# ? Jul 1, 2011 08:55 |
|
teamdest posted:The OP is ancient, I'm aware. I'm going to update it! Promises! (No Promises)
|
# ? Jul 1, 2011 09:52 |