necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Depends upon what virtualization software you're using, but every solution is internal to the virtualization platform. VirtualBox required you to edit some files last I saw, and VMware Workstation supports raw device mappings to let a VM (more or less) directly access raw disks you present. Xen / xVM Server has something like Workstation's feature, but I never used it when I last had a copy around.
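
For the VirtualBox case, what I remember amounts to creating a raw-disk VMDK stub with VBoxManage and then attaching that stub to the VM like any other disk. A rough sketch (the disk paths here are just examples, point it at whatever disk you actually want to pass through):

code:
# Linux host: make a stub VMDK that points at a whole physical disk
VBoxManage internalcommands createrawvmdk \
    -filename ~/VirtualBox/nas-disk1.vmdk -rawdisk /dev/sdb

# Windows host equivalent (elevated prompt; get the disk number from Disk Management)
VBoxManage internalcommands createrawvmdk ^
    -filename C:\VMs\nas-disk1.vmdk -rawdisk \\.\PhysicalDrive1
Then you just add the stub .vmdk to the VM's storage controller and the guest sees the raw disk.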

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

necrobobsledder posted:

Depends upon what virtualization software you're using, but every solution is internal to the virtualization platform. VirtualBox required you to edit some files last I saw, and VMware Workstation supports raw device mappings to let a VM (more or less) directly access raw disks you present. Xen / xVM Server has something like Workstation's feature, but I never used it when I last had a copy around.

I've been trying VMWare's derivative for a bit but I'm hitting a snag - specifically, it won't let me mount the disks as anything but IDE, AND FreeNAS can't see them.

dj_pain
Mar 28, 2005

PopeOnARope posted:

I've been trying VMWare's derivative for a bit but I'm hitting a snag - specifically, it won't let me mount the disks as anything but IDE, AND FreeNAS can't see them.

http://www.vm-help.com/esx40i/SATA_RDMs.php

that's what I did
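
The gist of that page, if I'm remembering it right, is just making an RDM pointer file on a datastore with vmkfstools and then adding it to the VM as an existing disk. Something like this from the ESXi console (the device ID and paths are placeholders):

code:
# find the local disk's identifier
ls -l /vmfs/devices/disks/

# create a virtual-compatibility RDM pointer on an existing datastore
vmkfstools -r /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/rdms/disk1-rdm.vmdk

# use -z instead of -r if you want physical/passthrough compatibility mode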

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

what is this posted:

Drobo's been the main company selling devices that do this. Unless you count Windows Home Server, which had a bunch of data-corruption issues early on, and has now dropped the feature from the upcoming release because of those corruption issues and horrible slowdowns in heavy/enterprise usage (admittedly it was fine in small consumer setups).

Hard drives are extremely cheap and in two years you can buy a new rack of drives. You may even want a new NAS. You can expand a RAID set with existing drive sizes without issue and without using the unevenly sized drives faux-raid feature.

The only reason to want different sized drives is because you have a bunch of junky old hard drives lying around, maybe a 250GB drive here, a 500GB drive there, a 1.5TB drive there, and hey just throw out the old small drives and buy a few 1TB, 1.5TB, or 2TB drives and be done with it. Hard drives are really, really cheap.

Giving up speed and reliability just because you have a four year old 250GB drive sitting in a USB enclosure that you think you can save some money on to store your precious animes is a hilarious joke. Buy a drive 10 times the size for $100.

So the answer is still "no, unless you're talking about Drobo".

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

dj_pain posted:

http://www.vm-help.com/esx40i/SATA_RDMs.php

that's what I did

Except I'm running Windows 7 as my base layer, not ESXi; this is an area where my inexperience really shows through. I'm having difficulty proceeding overall :(.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'm used to ESX terminology; it might be marked as passthrough devices in Workstation.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I am having no luck installing any flavor of Solaris (Solaris Express 11, OpenIndiana, NexentaStor Community Edition). Could I ask you folks with experience to check out my thread in Haus of Tech Support?

frogbs
May 5, 2004
Well well well
So I'm quickly running out of space on my iMac (I take a ton of photos and do a lot of video work). I've been thinking about getting a 4-bay FireWire 800 RAID enclosure and filling it up with 2TB drives in a RAID 5 or 10. So far I think I'm leaning towards the OWC Mercury Pro Qx2 filled with 4 Hitachi 2TB drives. Can anyone recommend any similar enclosures/solutions as an alternative to the OWC model? I'm not necessarily married to the idea of a FW800 device; I'd go gigabit if someone could provide me a compelling solution. Any suggestions/thoughts?

dj_pain
Mar 28, 2005

PopeOnARope posted:

Except I'm running Windows 7 as my base layer, not ESXi; this is an area where my inexperience really shows through. I'm having difficulty proceeding overall :(.

Ohh, I'm sorry to hear that

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel campaigns to drive demand and increase brand engagement across web, mobile, and social touchpoints, bitch!
:yaycloud::smithcloud:
So I've been playing around with the DS1010+

It arrived and I installed 5 1TB WD Caviar Black drives, and I also put in an extra 2 GB of RAM for whatever minor performance boost that might yield. I'd wanted to team the NICs together 802.3ad style, but unfortunately my switches are all Dell PowerConnect 2724 units, which only support static LAGs rather than 802.3ad LACP, and the Synology unit does not support static LAGs. (If anyone could suggest a cheap 16-port switch that supports LACP, which I could use for the NAS and the servers, please speak up - especially if you know a source of used ones.)

Right now my main goal is to use the Synology as a backup target datastore for our main SBS server and a secondary 2008 server, which run on ESXi 4.1 hosts. But I'm also going to use it to share files over SMB, so I formatted it as one big volume and am using file-level iSCSI for VMware. I just got everything talking today and have only roughly tested the iSCSI read and write speeds, but they are certainly very inconsistent.

Right now I'm just copying files to and from internal storage (RAID 5, 15K SAS), which is what the VM runs on. I've tested both a Windows initiator connecting directly to the Synology, and a disk created out of an iSCSI ESX datastore on the Synology and presented to the host. The direct connection to Windows only seems to make a slight difference, though the highest transfer rate I saw (100 MB/s) was using the disk on the ESX datastore, whatever that means.
When copying a 3 GB file I get read and write speeds that average 40-60 MB/s (though I have seen continuous writes of 90 MB/s); however, when copying a folder with several gigs of photos it seems to top out at about 18 MB/s. Now, it's very clear that this is not even slightly scientific testing, but I thought I'd post about it here anyway. In between real work I'm going to do some more scientific benchmarking of the iSCSI performance as well as the SMB performance and see if I can come up with any useful information.
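
For the more scientific pass, one simple option is timed sequential runs with dd from a Linux box or VM that can see the iSCSI disk and the SMB share, since that at least gives repeatable numbers (sizes and mount points below are only an example):

code:
# sequential write: push 4 GB to the target with a flush at the end
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 conv=fdatasync

# sequential read it back, dropping the page cache first so RAM doesn't flatter the result (needs root)
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/bigfile of=/dev/null bs=1M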

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

dj_pain posted:

Ohh, I'm sorry to hear that

I played with things a bit and managed to get it up and working - but then I hit an unexpected consequence - the absolute best speeds I was able to see were about 20MB/s. That's pretty pants. Conditions were as follows:

FreeNAS
512MB Ram on VM
Dual Core Machine on VM
Bridged Network
6 x 1TB Drives RAW mapped, 4 on SB710, 2 on SIL3132
Configuration - RAID 50

So I've just given up on that notion and opted to use the onboard RAID. You know, not like that was recommended much earlier or anything.

So I did some fuckery, and came to find out something very interesting - performance of a RAID 5 array on a PCI SIL3114 chipset is crap. As in, 15MB/s crap.

PopeOnARope fucked around with this message at 20:44 on Dec 22, 2010

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
Heh, so I actually am running UNRAID right now. While I liked it for my ~*~Babby's First NAS~*~, I think I am ready to jump into something more powerful. I like Unraid's super simplicity, but the data transfer rates (~30 MB/s to 1TB WD Black drives) plus slow updates to the software (they're still working out what to do with 4K drives, so it's rolling the dice whether new drives will work well with it right now) are lame.

So I want to upgrade to some 2TB drives, and I'm looking for the best platform for them, I guess? I'd like to get some dual parity action going, so I was looking at RAID-Z2, which it appears FreeNAS can do. Is FreeNAS pretty simple to set up? UNRAID is pretty much format and set up a USB thumb drive, put in HDs, and then config as needed, sometimes in a GO script. I've read a lot of posts about setups with Solaris and Linux and ahh holy poo poo, I just want something slightly more robust than a consumer grade NAS since I already own the hardware.

TL;DR: Is FreeNAS's RAIDZ2 solid and reliable, and is it easy to set up?

movax
Aug 30, 2008

Ugh, I got files coming out my rear end now. Still desperately waiting for Hitachi 7K2000s to dive below $100. Trying to stay away from the temptation to use a modified ZFS binary to force ashift to use 4K-sector drives. I doubt Oracle is going to move their asses on an official method to support 'em.

devilmouse
Mar 26, 2004

It's just like real life.

movax posted:

Trying to stay away from the temptation to use a modified ZFS binary to force ashift to use 4K-sector drives.

I've been using 6 Samsung 2TBs in a raidz2 and the recompiled zpool that supports ashift=12 for a few weeks now and everything seems right as rain. Granted, I'm still not moving over to them fully for another few weeks, just to be on the safe side, but no complaints (or errors) so far.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Oracle doesn't care about home users (read: poor people), and unless enterprise-class drives move up to 4K sectors I don't expect them to budge, really. Given that a lot of enterprise folks aren't exactly going to run to 4K sector drives due to legacy reasons, and that SAN vendors would have to potentially rewrite a substantial part of their code (not to mention performance test it all over again), I don't think 4K sector support is anywhere near a big priority for folks like Oracle. Sure, Microsoft supports it alright now, but their interest is heavily in the home user market, and their drive vendors gave them all the potential advantages of going with 4K sector drives.

I'm not quite sure how everyone else is having problems with ZFS on 4KB sector drives, but I've got 2 RAIDZ arrays, one using 512B sector drives and the other using 4KB sector drives, and their performance is pretty similar. Maybe it's because the 2TB drives are jumpered for compatibility, but I've been swell for several months now. I didn't do any fancy tuning or anything, just pulled them from the box, added them to a RAIDZ vdev pool, done.
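
If anyone wants to double-check what their pool actually ended up with, zdb will show the ashift each vdev was created with (9 means the pool was laid out for 512-byte sectors, 12 means 4K):

code:
zdb | grep ashift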

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I am going insane. I just accidentally nuked my Ubuntu install and all I have left is a single external drive with all my stuff on it. I am trying to figure out how well ZFS handles 4K sector drives, if at all, especially when they report 512 byte sectors both logical and physical. I am tearing my hair out here.

zfs-fuse is pretty bad. Solaris-based operating systems with native ZFS won't install on my system because of an Intel H55 chipset USB compatibility bug that won't be fixed.

Here's what I want:
1) Snapshotting
2) Writes faster than 15 MB/s in real-world
3) Offline access to folders with proper syncing between multiple computers (no concurrent write access except through aware programs like OneNote)
4) Drive-failure redundancy
5) Access to USB 3.0 for reasonable copy speed to an external hard drive

And I don't want to have to babysit the dang thing.

Here's what hasn't worked:
- Solaris Express, OpenSolaris, NexentaStor won't install natively and I'm not sure they support USB 3.0
- Ubuntu Server 10.10 with zfs-fuse has murderously slow writes and is prone to problems with boot script mount and dismount. Granted, the pools are running on partitions rather than whole drives, but still

Here are options I can think of:
- VMware ESXi on another drive, virtualize NexentaStor or OpenSolaris. Cons: ESXi doesn't have USB 3.0, ESXi might not be compatible with new consumer-grade mainboard
- Keep on plugging with Ubuntu, swap zfs-fuse for mdadm and a separate snapshotting tool (what tool? I dunno)
- Fuuuuuuuck nuke it all and use Windows Server 2008 R2 since I can get an educational license through Dreamspark. Volume Shadow Copy and Offline Folders and all that poo poo 'cause I'm working with Windows clients anyway.

Ethereal
Mar 8, 2003

Factory Factory posted:

I am going insane. I just accidentally nuked my Ubuntu install and all I have left is a single external drive with all my stuff on it. I am trying to figure out how well ZFS handles 4K sector drives, if at all, especially when they report 512 byte sectors both logical and physical. I am tearing my hair out here.

zfs-fuse is pretty bad. Solaris-based operating systems with native ZFS won't install on my system because of an Intel H55 chipset USB compatibility bug that won't be fixed.

Here's what I want:
1) Snapshotting
2) Writes faster than 15 MB/s in real-world
3) Offline access to folders with proper syncing between multiple computers (no concurrent write access except through aware programs like OneNote)
4) Drive-failure redundancy
5) Access to USB 3.0 for reasonable copy speed to an external hard drive

And I don't want to have to babysit the dang thing.

Here's what hasn't worked:
- Solaris Express, OpenSolaris, NexentaStor won't install natively and I'm not sure they support USB 3.0
- Ubuntu Server 10.10 with zfs-fuse has murderously slow writes and is prone to problems with boot script mount and dismount. Granted, the pools are running on partitions rather than whole drives, but still

Here are options I can think of:
- VMware ESXi on another drive, virtualize NexentaStor or OpenSolaris. Cons: ESXi doesn't have USB 3.0, ESXi might not be compatible with new consumer-grade mainboard
- Keep on plugging with Ubuntu, swap zfs-fuse for mdadm and a separate snapshotting tool (what tool? I dunno)
- Fuuuuuuuck nuke it all and use Windows Server 2008 R2 since I can get an educational license through Dreamspark. Volume Shadow Copy and Offline Folders and all that poo poo 'cause I'm working with Windows clients anyway.

Have you tried FreeBSD? It has ZFS support natively (though it's ZFS v. 14 or so, not bleeding edge)

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I have not. I may. But I'd be giving up deduplication and that could end up bloating me like crazy unless I find a different way to store system images. :bang:

what is this
Sep 11, 2001

it is a lemur

bob arctor posted:

So I've been playing around with the DS1010+

It arrived and I installed 5 1TB WD Caviar Black drives, and I also put in an extra 2 GB of RAM for whatever minor performance boost that might yield. I'd wanted to team the NICs together 802.3ad style, but unfortunately my switches are all Dell PowerConnect 2724 units, which only support static LAGs rather than 802.3ad LACP, and the Synology unit does not support static LAGs. (If anyone could suggest a cheap 16-port switch that supports LACP, which I could use for the NAS and the servers, please speak up - especially if you know a source of used ones.)

The best cheap switch money can buy, in my opinion, is the netgear gs108tv2. It's under $100. Full duplex gigabit, 802.3ad LACP, jumbo frame support, QoS, VLANs, PoE, everything you'd want from a managed switch. Only downside is it's only 8 ports.

http://powershift.netgear.com/upload/product/gs108t-200/gs108tv2_ds_10dec09.pdf


quote:

Right now my main goal is to use the Synology as a backup target datastore for our main SBS server and a secondary 2008 server, which run on ESXi 4.1 hosts. But I'm also going to use it to share files over SMB, so I formatted it as one big volume and am using file-level iSCSI for VMware. I just got everything talking today and have only roughly tested the iSCSI read and write speeds, but they are certainly very inconsistent.

I'm not sure putting everything in one big volume is the best way to achieve what you're going for. You'll get much better performance from block level iSCSI.

Format two volumes, one for CIFS and the other for iSCSI presentation.

quote:

Right now I'm just copying files to and from internal storage (RAID 5, 15K SAS), which is what the VM runs on. I've tested both a Windows initiator connecting directly to the Synology, and a disk created out of an iSCSI ESX datastore on the Synology and presented to the host. The direct connection to Windows only seems to make a slight difference, though the highest transfer rate I saw (100 MB/s) was using the disk on the ESX datastore, whatever that means.
When copying a 3 GB file I get read and write speeds that average 40-60 MB/s (though I have seen continuous writes of 90 MB/s); however, when copying a folder with several gigs of photos it seems to top out at about 18 MB/s. Now, it's very clear that this is not even slightly scientific testing, but I thought I'd post about it here anyway. In between real work I'm going to do some more scientific benchmarking of the iSCSI performance as well as the SMB performance and see if I can come up with any useful information.

Copying many medium sized photos over CIFS can incur a large cost in overhead.

mpeg4v3
Apr 8, 2004
that lurker in the corner

Factory Factory posted:

I have not. I may. But I'd be giving up deduplication and that could end up bloating me like crazy unless I find a different way to store system images. :bang:

I've been following a thread on the hardforums by a guy that's dedicated to making a FreeBSD liveCD specifically for ZFS with a web GUI. He just put out an experimental release running the latest version of FreeBSD 9 with ZFS v28 compiled in, and supporting ashift=12. The thread is:
http://hardforum.com/showthread.php?t=1521803

I haven't used it, as I'm just running a built-from-scratch 8.1 install and I don't need dedup, but lots of people have had good luck with it.
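
For what it's worth, on a stock FreeBSD install you can also get ashift=12 without any patched binaries by fronting one member disk with a 4K gnop device when the pool is created; ZFS then sizes the whole vdev for 4K sectors. Roughly (device names are whatever your disks come up as):

code:
# temporary 4K-sector passthrough for one disk
gnop create -S 4096 /dev/ada1

# create the pool against the .nop device; the vdev gets ashift=12
zpool create tank raidz2 /dev/ada1.nop /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6

# the shim isn't needed after creation
zpool export tank
gnop destroy ada1.nop
zpool import tank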

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
That looks perfect, except that, according to the thread, it has major performance issues with 4-5 disk RAIDZ/Z2 pools. :bang:

I need to get out of the Windows "if it's released it's supported" mindset.

e: What about virtualizing OpenIndiana or NexentaStor under Ubuntu? Would virtualized native ZFS work better than zfs-fuse (i.e. a userspace filesystem)?

Factory Factory fucked around with this message at 18:56 on Dec 25, 2010

devilmouse
Mar 26, 2004

It's just like real life.

Factory Factory posted:

That looks perfect, except that, according to the thread, it has major performance issues with 4-5 disk RAIDZ/Z2 pools. :bang:

A 5 disk raidz vdev/pool will be plenty fast. Not sure where you're seeing that it would have perf problems. The author of that "distro" generally recommends a 5disk raidz or 6 disk raidz2 as the best perf/redundancy option.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

devilmouse posted:

A 5 disk raidz vdev/pool will be plenty fast. Not sure where you're seeing that it would have perf problems. The author of that "distro" generally recommends a 5disk raidz or 6 disk raidz2 as the best perf/redundancy option.

He posted a big thing on the next-to-last page responding to issues, and the 4/5-disk thing was regarding RAIDZ/Z2 and 4k sector disks especially. Basically, RAIDZ is not optimized for smaller pools, and it is not optimized for 4k sector disks, and the combination is pretty bad. It's based on the zpool version, apparently.

Considering that zfs-fuse (zpool v23) was giving me 15 MB/s writes on a 4-disk RAIDZ2 pool, when each disk benches ~110 MB/s by itself, I believe him that there are issues.

I may just start figuring out a ZFS alternative, like ext4 with a snapshotting tool or NTFS with VSS. It's not like I'm going to be changing the number of disks in the pool or adding write cache SSDs or anything.
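
If I do go the ext4 route, the snapshotting would presumably come from LVM rather than the filesystem itself, something like this on top of an mdadm array (the volume names are made up):

code:
# turn the md array into an LVM volume group, leaving headroom for snapshots
pvcreate /dev/md0
vgcreate vg_storage /dev/md0
lvcreate -L 3T -n data vg_storage
mkfs.ext4 /dev/vg_storage/data

# later: a point-in-time snapshot backed by 20 GB of copy-on-write space
lvcreate -s -L 20G -n data_snap /dev/vg_storage/data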

devilmouse
Mar 26, 2004

It's just like real life.
Odd. I just saw this on the second to last page after the post you're talking about :

sub.mesa posted:

For 4K disks the ideal vdev configuration is:
- mirrors (no issue with 4K sectors)
- 5-disk RAID-Z (or 9-disk)
- 6-disk RAID-Z2 (or 10-disk)

15MB/s writes is horrid. Oof. That's worse than virtualized performance when I briefly tried that. Right now, I get anywhere between 350-500MB/s writes on 6x 2TB 4K disks in raidz2 running on OpenIndiana, depending on the type of benchmark (dd, filebench, bonnie++) and test.

If FreeBSD supports your hardware, even with 512 sector emulation, you should hit way higher than 15MB/s.

A 4-disk raidz2 seems like a bizarre setup though. If you wanted a 4-disk setup with 2-disk redundancy, why not set up a pair of 2-disk mirrored vdevs like this:

pool: tank
vdev1: mirror (disk1, disk2)
vdev2: mirror (disk3, disk4)
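
(In zpool terms that layout is just the following, with your real device names substituted:)

code:
zpool create tank mirror disk1 disk2 mirror disk3 disk4
zpool status tank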

mpeg4v3
Apr 8, 2004
that lurker in the corner
I've got three vdevs in my pool, 6x 1TB in RAIDZ, 3x 1.5TB in RAIDZ, and 3x 4k 2TB in RAIDZ. Not ideal, but eh. Just from the completely non-scientific test of "copy poo poo to a samba share on the server", I get about 50MB/sec. I am using a v14 pool version though, not the newer one.

WilWheaton
Oct 11, 2006

It'd be hard to get bored on this ship!
Has anyone ever come across a PCI (not PCI-E) card in their travels that supports eSATA with port multipliers under OpenSolaris? My googling has turned up nothing, unfortunately.

Ethereal
Mar 8, 2003

Factory Factory posted:

I have not. I may. But I'd be giving up deduplication and that could end up bloating me like crazy unless I find a different way to store system images. :bang:

How long do you think you could live without dedup? From the looks of it, ZFS v28 has been patched in and should probably make an appearance in the coming months in the STABLE branch.

http://ivoras.sharanet.org/blog/tree/2010-12-13.zfs-v28-imminent.html

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

WilWheaton posted:

Has anyone ever come across a PCI (not PCI-E) card in their travels that supports eSATA with port multipliers under OpenSolaris? My googling has turned up nothing, unfortunately.

Solaris doesn't support port multipliers. Maybe OpenIndiana/Illumos will in time, but there's basically no motivation to support such cheap consumer poo poo in an enterprise-class OS.

Stanley Pain
Jun 16, 2001

by Fluffdaddy
I'm running a 10-disk RAIDZ2 through a virtualized install of OpenIndiana. The only thing that actually worked was VMware Workstation with physical disk access. Server, Hyper-V, and VirtualBox didn't work at all or were really, really slow.

noapparentfunction
Apr 27, 2006

spin that 45 funk.
I recently bought a 1TB Western Digital MyBook World Edition. It's got a tiny, dinosaur-sized brain running some version of Linux that manages the software required to act as a local uPnP / DLNA server so you can access whatever you throw on it on a computer or your XBox/PS3/etc.

If you want to go a little further, you can SSH into it and set up something like Ampache, where you can download a client for your phone and stream all your music from the server itself. It also supports Transmission, PHP, mySQL, and a few other useful services that you can access remotely. Forward the correct ports and password-protect everything, and it's a really great little box for just $140.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Are any of you running OpenSolaris/OpenIndiana in a VM as a file server? What sort of performance are you getting out of it using CIFS?

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
So if I want to switch to a raidz2 from unraid, is FreeNAS decent and simple, or should I read up on being able to roll my own setup? I wouldn't do anything else with it except maybe run SABnzbd and Sickbeard if that's possible.

Ideally, if I could just install the OS to a USB stick, put it in, and configure it from a web interface, that would be loving cool, but I guess I can do other wigglework if needed.

FooGoo
Oct 21, 2008
I didn't see this question posed in the FAQ or the forum, so here it goes:

Is there any advantage to buying an external drive versus an internal drive and an enclosure provided it will only be used occasionally and won't be thrown across the room?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM.

I've got a mix of drive sizes in multiple arrays...

code:
md5 : active raid5 sdk2[0] sdo1[4] sdm2[3] sdl2[1]
      2927197440 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md4 : active raid5 sdk1[0] sdn1[4] sdm1[3] sdl1[1]
      2927197440 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid5 sdb3[0] sdc3[1] sdd3[3]
      2867772928 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/3] [UUU]

md3 : active raid5 sdh1[1] sdj1[3] sdi1[4] sde1[0]
      5860535424 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md6 : active raid5 sda1[0] sdf1[1] sdg1[3]
      488391680 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid0 sdd1[2] sdb1[0] sdc1[1]
      5855040 blocks 64k chunks

md1 : active raid5 sdb2[0] sdc2[1] sdd2[2]
      58593152 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
I copy to/from the box over a gigabit network at 100-120MB/s (WHS on the same hardware did 60-70 MB/s) and I've got a nice linux machine for dicking around with. My total usable storage is somewhere around 15TB now...

It took frickin' forever copying data off the NTFS drives to the existing arrays and then expanding the arrays with each freed-up drive (I probably ended up with 150+ hours of copying/RAID growing), but it's done!
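
For anyone curious, the grow step for each emptied drive boils down to something like this; the device and volume names here are only examples, and mdadm may ask for a --backup-file during the reshape:

code:
# add the freshly-emptied disk to an existing array, then reshape onto it
mdadm --add /dev/md4 /dev/sdp1
mdadm --grow /dev/md4 --raid-devices=5

# watch the reshape progress
cat /proc/mdstat

# once it finishes, grow the LVM and the filesystem on top (assuming ext3/ext4)
pvresize /dev/md4
lvextend -l +100%FREE /dev/vg_media/lv_data
resize2fs /dev/vg_media/lv_data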

Thanks for the advice, guys.

mexecan
Jul 10, 2006
I'm looking for some feedback on a storage solution for backing up my photos from my MacBook Pro. Currently, my data is backed up on my 500GB Apple Time Capsule. In hindsight, I regret taking this route, as these early Time Capsule models have proven to be somewhat flaky, and I now find myself looking to supplement my backups.

I'm looking for 1TB of storage. Should I be looking at an 'internal' 3.5" HD in an enclosure or just go with a dedicated external drive?

what is this
Sep 11, 2001

it is a lemur
Why don't you buy a Synology DS211J, put in two 2TB hard drives in RAID1, put it on your network, and continue using Time Machine?

This has several advantages:

(1) Your backup will be in RAID1. Currently you have one drive, if it fails you lose your backup. In the solution you're proposing you will also have one drive. RAID will give you redundancy so your backups are a bit more safe.

(2) You can keep using Time Machine. Apple's built in backup software is undeniably excellent, and most importantly incredibly convenient. The best backup is one that happens. Switching to something you'll have to manually remember to back up is a bad idea.

(3) You can continue backing up over the network or wifi. Again, this is very convenient. Convenience is the most important thing for home user backups.

(4) Synology's software does all kinds of other stuff - you could put your iTunes library on the NAS, you could use it to share things with a DLNA TV if you have one, you can access files on a shared drive from iOS or Android phone, or your computer anywhere on the internet - it's just a good solution.

Expect to pay around $230 for a DS211J, and around $100 for each 2TB hard drive.

This puts your total price at $430. This is probably the least amount of money you can spend to do this the "right way" in terms of a network backup.

If you buy a cheap external USB 1TB drive, you'll forget to plug it into your laptop, and your backups won't happen all the time. A large number of enclosures also have no active cooling (fans), so there's some potential for the drive to overheat. Furthermore, it could easily be tugged off the table by the USB or power cord and fall on the ground. Finally, you have no redundancy, so if the drive dies, so do your backups.

Buying a dedicated NAS for backup is better because you don't have to remember to turn it on or plug it in, it has active cooling fans, there's dedicated redundancy, and you can stick the NAS in a closet where it's not going to be kicked over or accidentally knocked off a desk.

Drevoak
Jan 30, 2007

what is this posted:

Expect to pay around $230 for a DS211J, and around $100 for each 2TB hard drive.

This puts your total price at $430. This is probably the least amount of money you can spend to do this the "right way" in terms of a network backup.

The DS211J is $208 directly from Amazon; they sell out of 'em frequently, unfortunately. Western Digital has a MIR on their 2TB drive - you get a $20 Visa rewards card. Getting the DS211J and 2 drives comes out to about $370ish.

Drevoak fucked around with this message at 20:06 on Dec 28, 2010

Jonny 290
May 5, 2005



[ASK] me about OS/2 Warp
Also, it's hacky, but be aware that you can back up to a USB disk from (AFAIK) any Synology device, so if you really wanted to shoestring it and had less than 500GB of data at the present time, you could buy a 211 with one drive, plug the TM into that, and back it up to the TM till you got a second drive. Just be aware of the dangers and the situation if you pull this.

gregday
May 23, 2003

Uh, so I have a fairly stupid question about ZFS. I've been using zfs-fuse on my Linux box for a while, but every time I've reconfigured, I've just offloaded all the data to a large disk and rebuilt the array. Well, soon I'm going to be replacing some disks and that won't be an option.

So does ZFS actually care which disk is sdb, sdc, sdd, and so on? Or does it look at the disks themselves for some sort of token?
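
From what I can tell, ZFS stamps its own labels onto every member disk, so it shouldn't care about sdb/sdc ordering; an export/import cycle (pool name "tank" here is just an example) re-finds the disks by those on-disk labels rather than by device path:

code:
zpool export tank
# shuffle cables/controllers, reboot, whatever
zpool import tank

# on Linux you can also import against stable IDs instead of sdX names
zpool import -d /dev/disk/by-id tank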

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

FooGoo posted:

I didn't see this question posed in the FAQ or the forum, so here it goes:

Is there any advantage to buying an external drive versus an internal drive and an enclosure provided it will only be used occasionally and won't be thrown across the room?
Whenever I've looked, externals were available for slightly cheaper than an enclosure and a drive. Externals often also have nicer or more refined looks and may have some extra features.

Otherwise I would usually recommend an enclosure and drive. You can upgrade the drive to a bigger one later, and you can choose what drive goes inside, instead of most likely the cheapest drive the manufacturer could find. If there's some kind of failure with the enclosure, you can take the drive out and have better chances of recovery without voiding the warranty, which would probably happen with an external.
