Crackbone
May 23, 2003

Vlaada is my co-pilot.

Bob Morales posted:

You could do ESXi booting from iSCSI using FreeNAS or something

Yeah, that would require two boxes though, which was what I was hoping to avoid.


Moey
Oct 22, 2010

I LIKE TO MOVE IT
This is pretty common.

Get an IBM M1015 and connect your disks to it. Boot ESXi from a thumbdrive. Pass that M1015 to a FreeNAS/NAS4Free/WhateverZFS VM. Profit.

I have a similar setup but am waiting to load up on 3TB disks. My only downfall was going Mini-ITX, so I am capped at 16GB memory on my motherboard. Great box for "home production" stuff along with spinning up VMs for labs and study poo poo.

Crackbone
May 23, 2003

Vlaada is my co-pilot.

M1015 passthrough to the ZFS VM (so VT-d is required), then present that VM's storage through iSCSI to the other VMs?

That might work, but yeah, my immediate concern would be RAM usage, as I was hoping to do an ITX build myself. As I understand it, the RAM requirements for any of those solutions are pretty heavy.

evol262
Nov 30, 2010
#!/usr/bin/perl

Crackbone posted:

Looking for input on something I'm trying to work out.

I want to build an all-in-one box that would serve double duty as a fileserver/transcoder and run on ESXi/Hyper-V for testing/learning purposes. I'd be shooting for ~6TB of storage (so 4x2TB RAID 5), presenting that to the VM host, and running a Windows VM for media serving/file storage.

However, I get the impression that this probably isn't going to be feasible, or that it's just not a very good idea. In order to make this work I'd need a hardware RAID card that's ESX/HV compatible, and even then the horror stories of trying to rebuild a failure on disks that size are worrying me. Also, I'm hearing about lots of poor I/O problems on controllers without write cache, so that's pushing me into a card that's silly expensive.

Should I just drop the idea of a combined unit? My gut is saying yes but I thought I'd doublecheck here.
The "usual" way to do this is to install ESXi as normal on a motherboard SATA port and pass through a controller with VT-d. That way you don't need to worry about compatibility (Hyper-V's compatibility is essentially as good as Windows'; ESXi is trickier, but "proper" RAID cards are almost always supported). The M1015 is popular, and it's also supported by ESXi directly if you want to go that route.

What are you wanting to run on the guest? ZFS wants to see whole devices, so I'd pass through a controller with the disks attached in that case. Windows doesn't really care, and VMDKs are fine. If I were you, I'd run a Solaris- or FreeBSD-based VM with a passthrough controller, with ZFS on the disks, and present the storage to another VM for whatever transcoding you want to do, but it's your call.

Don't worry about lack of write cache on a home NAS for this use case.
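A minimal sketch of that layout from inside the storage guest, assuming the controller is already passed through. The pool name, device names, and dataset are all made up for illustration, not taken from this thread:

```shell
# On the FreeBSD/Solaris-based storage VM; da0-da3 are hypothetical
# device names for the four 2TB disks on the passed-through controller.
zpool create tank raidz da0 da1 da2 da3

# Carve out a dataset and share it to the media/transcoding VM.
zfs create tank/media
zfs set sharenfs=on tank/media
```

You could share over iSCSI instead of NFS, but NFS is the simpler starting point.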

Bob Morales posted:

You could do ESXi booting from iSCSI using FreeNAS or something
You want your NAS to double as an ESXi box, so you should set up an external NAS just to SAN boot your NAS? This should have a "Yo dawg" in front of it.

Crackbone posted:

M1015 passthrough to the ZFS VM (so VT-d is required), then present that VM's storage through iSCSI to the other VMs?

That might work, but yeah, my immediate concern would be RAM usage, as I was hoping to do an ITX build myself. As I understand it, the RAM requirements for any of those solutions are pretty heavy.
16GB is ~$120 with 8GB DIMMs. ZFS wants ~1GB/TB for optimal performance, but does fine with less unless you enable dedupe. You can easily get 16GB into an ITX build. Do it.
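If you do end up giving the storage VM less than the rule of thumb, you can also cap the ARC so ZFS leaves headroom for the guest OS. On a FreeBSD/FreeNAS guest that's a loader tunable; the 6G value here is an illustrative guess, not a recommendation from this thread:

```shell
# /boot/loader.conf on the FreeBSD-based storage guest
vfs.zfs.arc_max="6G"
```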

evol262 fucked around with this message at 16:57 on Jun 11, 2013

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Crackbone posted:

M1015 passthrough to the ZFS VM (so VT-d is required), then present that VM's storage through iSCSI to the other VMs?

That might work, but yeah, my immediate concern would be RAM usage, as I was hoping to do an ITX build myself. As I understand it, the RAM requirements for any of those solutions are pretty heavy.

If I could go back in time, I would go with Micro-ATX. Currently running an i7-3770 in a Lian-Li PC-Q08. I'd change out the case for something similar. The size would be slightly bigger, but then I could use 32GB of memory and have two PCI-e slots (extra NICs and an M1015).

Crackbone
May 23, 2003

Vlaada is my co-pilot.

evol262 posted:

What are you wanting to run on the guest? ZFS wants to see whole devices, so I'd pass through a controller with the disks attached in that case. Windows doesn't really care, and VMDKs are fine. If I were you, I'd run a Solaris- or FreeBSD-based VM with a passthrough controller, with ZFS on the disks, and present the storage to another VM for whatever transcoding you want to do, but it's your call.

Don't worry about lack of write cache on a home NAS for this use case.


16GB is ~$120 with 8GB DIMMs. ZFS wants ~1GB/TB for optimal performance, but does fine with less unless you enable dedupe. You can easily get 16GB into an ITX build. Do it.

I don't really care about the OS on the VM that's managing the disks, as long as it's reliable and isn't hogging a huge amount of resources to do so. What's a realistic amount of RAM to assign to ZFS, assuming no dedupe? And why would you want the drive-managing OS to be Windows - only if you couldn't do VT-d on the controller?

My concern on write cache was related more to trying to handle the disks before ESXi gets involved - apparently any controller without a write cache does about 20MB/s transfer max through ESXi.

Crackbone fucked around with this message at 17:24 on Jun 11, 2013

joe944
Jan 31, 2004

What does not destroy me makes me stronger.
I'd be curious to see how well FreeNAS runs on ESXi along with all your other VMs. My home setup consists of two servers, one ESXi box and one FreeNAS box. I like the idea of being able to work on them separately, and wouldn't want any additional risk that could affect my storage.

In case anyone was interested, I've got my FreeNAS box running as the NUT (Network UPS Tools) master for both itself and the ESXi box. I have the NUT client installed on all my VMs via Puppet, and found a module for ESXi itself.

http://rene.margar.fr/2012/05/client-nut-pour-esxi-5-0/

I tested it and it works.
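For reference, the client side of that NUT setup is just a MONITOR line per machine. A minimal sketch; the hostname, username, and password here are placeholders, not from the post:

```shell
# /etc/nut/upsmon.conf on each VM, pointing at the FreeNAS NUT master
MONITOR ups@freenas.local 1 upsmon secretpass slave
SHUTDOWNCMD "/sbin/shutdown -h +0"
```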

Moey
Oct 22, 2010

I LIKE TO MOVE IT

joe944 posted:

I'd be curious to see how well FreeNAS runs on ESXi along with all your other VMs. My home setup consists of two servers, one ESXi box and one FreeNAS box. I like the idea of being able to work on them separately, and wouldn't want any additional risk that could affect my storage.

My whole thing is consolidation. The real question is what kind of performance you need from your storage, and what permanent (not lab) workloads you are running that need resources. I currently have a 256GB SSD installed locally for my VMs to run from, then will have the pass-through array for media storage.

joe944
Jan 31, 2004

What does not destroy me makes me stronger.
If I were running a more powerful server I probably wouldn't have any issues with consolidating them, but as it stands I'm working with two fairly low-end systems. The 6-core AMD CPU handles the ESXi workload very well, although I need to upgrade the single SSD that is running all my VMs to something more robust. Can't complain though, running 15 VMs off one SSD is pretty impressive.

I use my ESXi server for my website among other services that I consider permanent/crucial and need to be up 24/7, and tend to be a fan of slight overkill in most situations. Like I said, I'd be curious to find out how well they run. Another thing to note is I'm constantly saturating the gigabit NIC on my FreeNAS box, and I'm considering link aggregation since I'm using an HP ProCurve that supports it.

IOwnCalculus
Apr 2, 2003





Moey posted:

This is pretty common.

Get an IBM M1015 and connect your disks to it. Boot ESXi from a thumbdrive. Pass that M1015 to a FreeNAS/NAS4Free/WhateverZFS VM. Profit.

Yep, do this. Technically speaking you don't need the passed-through controller to be compatible with ESXi - it will pass anything through with VT-d - but it's a nice perk if you ever want to change how you're configured.

I've taken it a bit...far. My NAS4Free VM has my motherboard's onboard SATA controller and two LSI 1064 controllers (they don't support >2TB disks, but they're cheap as hell) passed to it via VT-d. I boot ESXi off of a thumb drive, and I have a cheap lovely PCI SATA controller that ESXi actually supports for the VM storage itself.

Maybe I don't do enough transcoding but I've found CPU load to be actually quite minimal on my box. I'm using an i5 2400 with 24GB of RAM, 16GB of which is dedicated to the NAS4Free VM.

One caveat with VT-d: You have to reserve 100% of the VM's RAM, so that 16GB can't be used by any other VM. Also, if you go Intel, you can't use a K-series chip since Intel doesn't want people overclocking them into cheap Xeons.

This reminds me, I promised kill your idols that I'd do some CIFS benchmarking, and I should probably try to do that now that my desktop is actually on :v:

BlankSystemDaemon
Mar 13, 2009




On a related note, for anyone who wants ESXi on an HP MicroServer, which doesn't have VT-d: you might want to look at how to enable raw SATA access in ESXi. I've got it running with a dual-port NIC connected to two switches, plus the onboard NIC for the WAN link, with pfSense for router+firewall and FreeNAS for file sharing as guest OSes, all in one computer.

EDIT: Word of warning, though. Don't do this unless you have a (spare) local machine which you can do zfs send to, in order to have a physical backup of your data that you can easily restore from. Restoring from the cloud, while an option, is not something you want to do if you know in advance that you'll potentially (read: definitely) be putting your data at risk.

EDIT2½: Never mind about PVSCSI if you're using a FreeBSD-based NAS - FreeBSD doesn't have a driver for it.

BlankSystemDaemon fucked around with this message at 20:16 on Jun 11, 2013

IOwnCalculus
Apr 2, 2003





So, benchmarks. My config:

Intel DQ67SWB3 + Intel i5 2400
24GB RAM (2x 8GB, 2x 4GB)
Broadcom 5706 dual-port gigabit ethernet adapter (one port assigned to WAN vSwitch, one assigned to LAN vSwitch)
VT-d passthrough of two LSI 1064 controllers and the on-board SATA controller to NAS4Free VM
ESXi 5.0

NAS4Free VM:
1 vCPU
16GB RAM (all reserved, due to passthrough)
vmxnet3 adapter with jumbo frames enabled on it and on vswitch
NAS4Free 9.1.0.1

Desktop PC:
Biostar T5 XE
12GB RAM
Realtek 8168 of some sort
Windows 7

Windows 7 VM:
4 vCPU
6GB RAM (no reservation but very little contention for RAM versus my pfsense and Ubuntu VMs)
vmxnet3 adapter
Windows 7

ZFS config:
code:
config:

	NAME        STATE     READ WRITE CKSUM
	aggregate   ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    ada4    ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	  raidz1-1  ONLINE       0     0     0
	    ada1    ONLINE       0     0     0
	    ada2    ONLINE       0     0     0
	    ada3    ONLINE       0     0     0
	  raidz1-2  ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	    da8     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	cache
	  ada0      ONLINE       0     0     0
ada1, ada2, and ada3 are 2TB Reds. The da* disks are all 1.5TB Samsung HD154UIs, most with a shitload of hours on them. ada4 is a 1.5TB 7200RPM Seagate. ada0 is a 60GB Mushkin SSD. All ada* disks are on the motherboard's controller; all of the da* disks are on the LSIs. Compression is enabled on all datasets, dedup is disabled on all datasets.

Using CrystalDiskMark against a mounted Samba share from the PC, with jumbo frames disabled on the PC:


Enabling jumbo frames made it all go to hell and stop responding to my PC. My switch is an ancient Netgear gigabit switch that's supposed to support it. I think those writes need some help, though, and I don't think that SSD is actually doing jack poo poo. Let's find out by running 'zpool remove aggregate ada0' and trying again:



Well, it does seem to do something, but I wonder if the difference will be more obvious when running from another VM on the same hardware, rather than to a physical machine.

Same test from the Win7 VM to eliminate external networking, with the cache drive enabled, (so just vmxnet3->vmxnet3), still jumbo frames disabled:


Cache enabled still, now with jumbo frames:


Cache drive disabled, still with jumbo frames:

IOwnCalculus fucked around with this message at 07:45 on Jun 13, 2013

Xythar
Dec 22, 2004

echoes of a contemporary nation
I picked up a QNAP TS412 a few days ago with 4x 3TB WD Reds for storing my media collection and basically anything else I feel like holding onto for the foreseeable future. I initially set it up as RAID 5, but I've read a bunch of stuff since that says it's basically suicide to run RAID 5 with an array of that size, since mathematically the chance of an unrecoverable read error during resilver is pretty close to 1. Is this needless paranoia where Red drives are concerned, or should I bother copying everything off and switching to RAID 10 or something? I don't really need that extra 3TB (yet, anyway), but it would take forever and be a pain, and I haven't really read much beyond the theoretical.

shifty
Jan 12, 2004

I dont know what you're talking about
I have an ESXi box consolidated with my FreeNAS, attempting to do what it appears Crackbone is trying. However, even using the RAID card it ended up sucking pretty badly for me. I started with a 4x3TB Raidz1 with nothing but an iSCSI volume presented to a Server 2012 Essentials VM. I tried to share my videos out to a Media Center Extender (XBox 360) through a shared folder on that iSCSI disk. Anything that was HD stuttered like crazy. I added 2 more drives as a ZFS mirror, and used CIFS to share out the same movies. It worked great. While watching a movie, I created an iSCSI volume on the same mirror. As soon as the iSCSI was created (I hadn't even configured the initiator yet), that movie I was watching stuttered until I deleted the iSCSI volume. So, until I come up with some way of fixing that, I have CIFS setup on my ZFS mirror, and my files that don't require good performance stay on the Raidz1 iSCSI volume, which works well enough for now.

I haven't been able to try presenting the iSCSI directly to the host, because when I tried passing through a NIC, I got tons of IRQ errors, so FreeNAS only has virtual NICs. I'm not sure if doing it that way would fix things or not. I eventually want to try putting FreeNAS on its own machine to see if that fixes anything, but it's not in the budget today.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Xythar posted:

I picked up a QNAP TS412 a few days ago with 4x 3TB WD Reds for storing my media collection and basically anything else I feel like holding onto for the foreseeable future. I initially set it up as RAID 5, but I've read a bunch of stuff since that says it's basically suicide to run RAID 5 with an array of that size, since mathematically the chance of an unrecoverable read error during resilver is pretty close to 1. Is this needless paranoia where Red drives are concerned, or should I bother copying everything off and switching to RAID 10 or something? I don't really need that extra 3TB (yet, anyway), but it would take forever and be a pain, and I haven't really read much beyond the theoretical.

http://www.snia.org/sites/default/education/tutorials/2007/fall/storage/WillisWhittington_Deltas_by_Design.pdf
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

The definitive answer to your concerns lies between these two articles. The skinny is this: a UER rate of 1 in 10^14 bits means that once you read ~12TB, the chances are you'll hit an error on disk during a rebuild. Does this guarantee it? No, but it's a risk most of us (me at least) won't take. Reds, being NAS drives, shouldn't be UER 10^14, but I just checked and they are. Info found here: http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771442.pdf

Anyway, RAID 10 is an option, as is RAID 6 (though I don't think that model supports it), a larger RAID 1, etc. Tempting fate with a RAID 5? Sure. It might make it on the second pass of a rebuild, who knows! It's a risk, that's all I mean to get at.
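To put a number on it for the 4x3TB case: a rebuild reads the three surviving disks, roughly 9TB, and at a spec'd rate of one error per 10^14 bits the chance of hitting at least one URE works out to about a coin flip. This is a back-of-the-envelope sketch using the usual independent-errors assumption, not a measured figure:

```shell
# P(at least one URE) = 1 - exp(-bits_read * error_rate), assuming
# independent errors. ~9TB read, 1e-14 errors/bit per the spec sheet.
awk 'BEGIN {
    rate = 1e-14          # unrecoverable errors per bit read
    bits = 9e12 * 8       # ~9TB of reads during a 4x3TB RAID5 rebuild
    p = 1 - exp(-bits * rate)
    printf "P(>=1 URE during rebuild) = %.3f\n", p
}'
```

So "pretty close to 1" overstates it a bit at the spec'd rate, but a roughly 50% chance per rebuild is still not something you want to bet an array on.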

wang souffle
Apr 26, 2002
I'm currently running OpenIndiana on an i3 board with an M1015 and 8x 2TB drives in a raidz2 setup. There are a couple VMs running on top using KVM, but that's been a little unwieldy.

I'd like to move to a setup with VT-d, ECC, and ESXi. I'd pass through the M1015 to an OpenIndiana guest (or other ZFS-friendly OS) to host the array. That would still require the ZFS OS to be running if it was hosting the VM datastore, however. Does anyone have a better suggestion on how to make VMs easier to manage?

What's the preferred chipset for VT-d these days? I read somewhere that AMD includes it on most chipsets but Intel is all over the place.

Is ECC overkill?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Don't host the datastore on a machine running as a VM. Get an SSD and use that for the datastore.

You don't want to be like Mysoginist when he accidentally vMotioned his NAS VM onto the datastore hosted off his NAS VM.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

FISHMANPET posted:

Don't host the datastore on a machine running as a VM. Get an SSD and use that for the datastore.

You don't want to be like Mysoginist when he accidentally vMotioned his NAS VM onto the datastore hosted off his NAS VM.

This. I have a 256GB SSD for running all my VM OSes.

Edit: I went with an i7-3770 for my CPU since it supports VT-d. Avoid the 'K' (unlocked) Intel chips.

wang souffle
Apr 26, 2002

FISHMANPET posted:

Don't host the datastore on a machine running as a VM. Get an SSD and use that for the datastore.
So simple. I keep forgetting that SSDs are cheap these days.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

FISHMANPET posted:

You don't want to be like Mysoginist when he accidentally vMotioned his NAS VM onto the datastore hosted off his NAS VM.
For a professional VMware admin to make that mistake is pretty drat amusing, especially because he'd have had to storage vMotion his VM off of there, which is not done automatically by default.

I'd be very careful looking at AMD machines with lots of requirements because AMD just got rid of ECC on more recent consumer CPUs, so there's even less of a segment between meh consumer CPUs and full-blown server class CPUs. I think the Xeon E3-1235 or so class CPUs will be the cheapest that support ECC, VT-x (or the AMD-V extension), and VT-d features. I might suggest looking at older CPU generations than the most recent ones honestly because neither CPU vendor is exactly giving more and more to underpowered home server CPUs down the road. I think the whole cloud trend is dampening the home server market that'd be the sweet spot for such CPUs.

BnT
Mar 10, 2006

wang souffle posted:

I'm currently running OpenIndiana on an i3 board with an M1015 and 8x 2TB drives in a raidz2 setup. There are a couple VMs running on top using KVM, but that's been a little unwieldy.

I'd like to move to a setup with VT-d, ECC, and ESXi. I'd pass through the M1015 to an OpenIndiana guest (or other ZFS-friendly OS) to host the array. That would still require the ZFS OS to be running if it was hosting the VM datastore, however. Does anyone have a better suggestion on how to make VMs easier to manage?

What's the preferred chipset for VT-d these days? I read somewhere that AMD includes it on most chipsets but Intel is all over the place.

Is ECC overkill?

Have you considered giving Linux/KVM/ZFSonLinux a shot first? I understand that this doesn't directly address your issues with KVM, although Linux implementations might have some better KVM management options.

I recently moved from an ESXi/OpenIndiana/M1015/raidz2 to CentOS/ZFSonLinux. I'm very pleased with performance, have fully redundant hypervisor storage, and have more SATA ports available to ZFS than I did before. zpool import was a breeze, and VMDK files were converted without too many headaches with qemu-img. The built-in KVM management in CentOS (virt-manager via X11) is fine for my minimal uses and I don't have any Windows VMs for vSphere management anymore. I'd probably recommend some other distro to anyone else, but I've been using RHEL forever for work. I'm using a Supermicro C202 chipset with an Intel E3-1230, had no issues with VT-d, and found the IPMI to be very useful for headless operation.
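For anyone following the same path, that migration boils down to two commands. The pool and file names here are placeholders, not from the post:

```shell
# On the new CentOS/ZFSonLinux host: pick up the existing pool...
zpool import tank

# ...then convert each ESXi disk image for use under KVM
qemu-img convert -p -f vmdk -O qcow2 guest.vmdk guest.qcow2
```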

In my view if ZFS isn't overkill, neither is ECC.

no go on Quiznos
May 16, 2007



Pork Pro

kill your idols posted:

Synology DS212j or DS112j and WD Reds. Should come in under $400 shipped from Amazon for the 1-bay unit. I've had the DS112j, DS212j, and DS713+; all have been great pieces of hardware. The GUI is straightforward, and it's set-and-forget.

HOWTO: http://www.synology.com/us/solutions/backup/time_machine/time_machine.php

Thanks, I pulled the trigger and got a DS212j and a 3TB Red. I'm liking it, the GUI is pretty easy to use and I can set Time Machine and forget about it. Although the initial setup took a while (8 hours for a disk check and 26 hours :suicide: for the initial TM backup).

Moxie Omen
Mar 15, 2008

So after a bad drive, a stick of RAM with a bad upper couple of MB that only manifested itself in kernel panics and weirdness under heavy ZFS load, and RMA'ing both of those, FreeNAS still couldn't seem to keep itself from kernel panicking all over itself (this time over trying to access swap!) with a medium amount of load. All the disks check out fine with SMART, so I finally said gently caress it, put Ubuntu 12.04 LTS on it, and rebuilt my entire array from scratch. Finally finished copying my stuff over again and no problems so far. Just typical stupid Linux bullshit, like it wouldn't automatically mount my ZFS filesystems on boot without some jiggering. I can live without the stupid web interface if it means my array doesn't start randomly corrupting or crashing all over itself.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

Moxie Omen posted:

So after a bad drive, a stick of RAM with a bad upper couple of MB that only manifested itself in kernel panics and weirdness under heavy ZFS load, and RMA'ing both of those, FreeNAS still couldn't seem to keep itself from kernel panicking all over itself (this time over trying to access swap!) with a medium amount of load. All the disks check out fine with SMART, so I finally said gently caress it, put Ubuntu 12.04 LTS on it, and rebuilt my entire array from scratch. Finally finished copying my stuff over again and no problems so far. Just typical stupid Linux bullshit, like it wouldn't automatically mount my ZFS filesystems on boot without some jiggering. I can live without the stupid web interface if it means my array doesn't start randomly corrupting or crashing all over itself.

Mine did that too until I set the ram values.

Moxie Omen
Mar 15, 2008

NNNngggg!!!! And one drive dropped itself and the array degraded overnight.

A little background:

I'm running nine 2TB WD Reds. I have both a 5-drive and a 4-drive enclosure in this case, with one LSI 9211-8i for 8 of the disks; the lone 9th disk is on the mainboard SATA controller. I'm now thinking that's the problem, since with both FreeNAS and now Linux, if I have a lot of I/O going on, the drive that's on the mainboard SATA will start flipping out. I've swapped the drives around multiple times at this point and it's always that port, regardless of drive. So I just bought another LSI 9211-8i out of anger. For a single drive. gently caress this thing.

On the plus side if I ever REALLY flip out and buy another seven drives and more enclosures I guess I'll have the room to expand!

Moxie Omen fucked around with this message at 00:43 on Jun 20, 2013

Unexpected
Jan 5, 2010

You're gonna need
a bigger boat.
Hi,

I want to add 2TB of redundant storage to my desktop. Can I just buy two hard drives (e.g. Western Digital Reds) and create a RAID array out of them inside my case? Or should I spend an extra $200 and get something like a Synology DiskStation NAS?

I will only be backing up multimedia files from my desktop and nothing from my network.

Not sure if you need this info but here's my motherboard: Asus P8P67 Deluxe.

Also, if the answer to the above question is "RAID", could you please suggest software to run backups?

Thanks.


Edit:
One more thing. I initially wanted to get two 3TB drives like this one, but their reviews are much worse compared to the 2TB ones. Is this because many more fail quickly? Or is performance worse?

Unexpected fucked around with this message at 18:44 on Jun 24, 2013

IOwnCalculus
Apr 2, 2003





If you don't care about having those files network accessible (or, rather, having your PC be on to access them) then there's not much reason to have those drives in a separate box.

Assuming you're on Windows, I would just set them up as a dynamic disk RAID1 (or mirror or whatever DD calls it, it's been forever since I've used it) - that way you can easily transport the disks into any other Windows system if your motherboard shits itself.

For backup - are you backing up just from your primary drive to this array, or external backup to other locations?

As far as the drive failures, I would doubt that the 3TB Reds are any less reliable than the 2TB ones - they're just getting purchased more, and Newegg is known to pack their drives in a manner that doesn't necessarily give them the protection they should have. If you're worried about that, you're better off ordering them from Amazon or any other vendor that doesn't just wrap the drives in bubblewrap and toss them into a box of peanuts.

Unexpected
Jan 5, 2010

You're gonna need
a bigger boat.

IOwnCalculus posted:

If you don't care about having those files network accessible (or, rather, having your PC be on to access them) then there's not much reason to have those drives in a separate box.

Assuming you're on Windows, I would just set them up as a dynamic disk RAID1 (or mirror or whatever DD calls it, it's been forever since I've used it) - that way you can easily transport the disks into any other Windows system if your motherboard shits itself.

For backup - are you backing up just from your primary drive to this array, or external backup to other locations?

As far as the drive failures, I would doubt that the 3TB Reds are any less reliable than the 2TB ones - they're just getting purchased more, and Newegg is known to pack their drives in a manner that doesn't necessarily give them the protection they should have. If you're worried about that, you're better off ordering them from Amazon or any other vendor that doesn't just wrap the drives in bubblewrap and toss them into a box of peanuts.

Thank you for the fast response. I'm on 64-bit Windows 7 and I'd like to back up from one of my internal drives to the RAID array.

IOwnCalculus
Apr 2, 2003





I think for that the built-in Windows backup probably works fine, but I've never used it so I'll let someone else chime in there. I'm a big fan of Crashplan which I guess can do local backups as well, but I only use it for computer-to-computer and computer-to-cloud backups.

Unexpected
Jan 5, 2010

You're gonna need
a bigger boat.
I've never heard of Crashplan before. In terms of cloud backup, are they like Mozy or Carbonite?

TheGreySpectre
Sep 18, 2012

You let the wolves in. Why would you do that?

porkface posted:

Expansion is really only practical for those on an extremely tight budget or anyone building a system with far more room for expansion. In the time it will take to fill a starter volume, it will be nearly impossible to find drives that match the size of today's drives.

I think this is highly dependent on what you are storing and what you consider a tight budget. I have 14TB of usable space right now and have done two expansions. Spending the extra $600 up front would have done nothing for me, and I don't consider that to be acting on a "tight budget".

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Unexpected posted:

I've never heard of Crashplan before. In terms of cloud backup, are they like Mozy or Carbonite?

Yeah, if you subscribe to them, you can back up files to them. It runs as a local service and can automatically back up your system to their servers as files are created.


Be aware that although they claim not to throttle, your upload speeds might not be as fast as your internet connection would otherwise permit. I've been stuck averaging about 3 megabit to them over the last month.

Mr Shiny Pants
Nov 12, 2012

Unexpected posted:

Thank you for the fast response. I'm on 64-bit Windows 7 and I'd like to back up from one of my internal drives to the RAID array.

Get a MicroServer, seriously. They are cheap as hell right now, and you can run something like napp-it on it to get ZFS goodness.

All up to you of course :)

BlankSystemDaemon
Mar 13, 2009




It should also be noted that RAID is not a backup solution; it's a way of spreading data over multiple redundant (as in redundant engineering) drives. A backup, by contrast, is capable of retaining multiple versions of the data in such a way that it is not susceptible to user error, malicious intent, or catastrophic hardware failure - think anything from a fire in your house (covered by having data stored with family/friends) up to and including nuclear attack (covered by having your data in a cloud-based solution on multiple continents). Historical examples include optical media, tape, or a hard disk on a shelf; nowadays off-site/cloud-based solutions are used more because of increased bandwidth.

BlankSystemDaemon fucked around with this message at 21:22 on Jun 24, 2013

Thoom
Jan 12, 2004

LUIGI SMASH!

Farmer Crack-rear end posted:

Be aware that although they claim not to throttle, your upload speeds might not be as fast as your internet connection would otherwise permit. I've been stuck averaging about 3 megabit to them over the last month.

I get similar speeds on my gigabit connection. My local LAN backups run at 20-50Mbps, so it's definitely something on their end and not a software issue. I think it might be possible to squeeze out some extra performance by tuning various settings (like send/receive buffers).

IOwnCalculus
Apr 2, 2003





Farmer Crack-rear end posted:

Yeah, if you subscribe to them, you can back up files to them. It runs as a local service and can automatically back up your system to their servers as files are created.


Be aware that although they claim not to throttle, your upload speeds might not be as fast as your internet connection would otherwise permit. I've been stuck averaging about 3 megabit to them over the last month.

And if you don't subscribe, you can back up for free between your own computers that have Crashplan, your friends' computers that have Crashplan and have given you their friend code, or even locally on your machine. It does handle versioning and file integrity as well.

If you have a big library they will certainly take their sweet time on that initial backup, but they do really mean unlimited when they say it.

McGlockenshire
Dec 16, 2005

GOLLOCKS!

IOwnCalculus posted:

I think for that the built-in Windows backup probably works fine, but I've never used it so I'll let someone else chime in there.

It works, but there is no way to tell it not to use all of the available space on the target disk, and no way to prune old backups without traversing the filesystem yourself and nuking archive directories by hand. It will therefore use all available disk space and then whine very loudly about how stupid it is.

Unexpected
Jan 5, 2010

You're gonna need
a bigger boat.
Thank you all!

thebigcow
Jan 3, 2001

Bully!

McGlockenshire posted:

It works, but there is no way to tell it not to use all of the available space on the target disk, and no way to prune old backups without traversing the filesystem yourself and nuking archive directories by hand. It will therefore use all available disk space and then whine very loudly about how stupid it is.

Backup and Restore -> Manage Space -> View backups

At least it works under Windows 7. Still a pain in the rear end.


Rooted Vegetable
Jun 1, 2002
Been reading through the thread, but I am not sure what people's feelings really are on drive pooling which isn't RAID. I'm planning on an HP MicroServer (either an N54L or a Gen8) + Ubuntu (likely) to replace my current setup... which is an old netbook running Windows Starter, a bunch of external drives, and importantly Drive Bender.

I've got no massive need or desire for RAID, so a year or two ago drive pooling with Drive Bender was perfect for me. Especially important were the folder-level redundancy/duplication and also the readability of individual drives if I had to use one on its own for any reason... and I can reuse that pile of USB drives of different sizes I had.

OK... Thing is, I can't find the recommended Linux alternative to Drive Bender that really meets the three needs above (selective duplication, single-drive readability, and heterogeneous drive combination)... Greyhole seems to be closest, but a search isn't revealing thoughts on it.
