|
Depends on what virtualization software you're using, but every solution is internal to the virtualization platform. VirtualBox required you to edit some config files last I checked, and VMware Workstation supports raw device mappings to let a VM (more or less) directly access raw disks you present. Xen / xVM Server has something similar to Workstation, but I never used it back when I had a copy around.
|
# ? Dec 21, 2010 05:50 |
|
|
# ? May 12, 2024 21:08 |
|
necrobobsledder posted:Depends on what virtualization software you're using, but every solution is internal to the virtualization platform. VirtualBox required you to edit some config files last I checked, and VMware Workstation supports raw device mappings to let a VM (more or less) directly access raw disks you present. Xen / xVM Server has something similar to Workstation, but I never used it back when I had a copy around.

I've been trying VMware's derivative for a bit, but I'm hitting a snag - specifically, it won't let me mount the disks as anything but IDE, AND FreeNAS can't see them.
|
# ? Dec 21, 2010 06:25 |
|
PopeOnARope posted:I've been trying VMware's derivative for a bit, but I'm hitting a snag - specifically, it won't let me mount the disks as anything but IDE, AND FreeNAS can't see them.

http://www.vm-help.com/esx40i/SATA_RDMs.php That's what I did.
|
# ? Dec 21, 2010 06:31 |
|
what is this posted:Drobo's been the main company selling devices that do this. Unless you count Windows Home Server, which had a bunch of issues early on with data corruption, and has now dropped the feature from the upcoming release because of data corruption and horrible slowdowns under heavy/enterprise usage (admittedly it was fine in small consumer setups). So the answer is still "no, unless you're talking about Drobo".
|
# ? Dec 21, 2010 06:46 |
|
dj_pain posted:http://www.vm-help.com/esx40i/SATA_RDMs.php

Except I'm running Windows 7 as my base layer, not ESXi; this is an area where my inexperience really shows through. I'm having difficulty proceeding overall.
|
# ? Dec 21, 2010 07:19 |
|
I'm used to ESX terminology; it might be marked as a passthrough device in Workstation.
|
# ? Dec 21, 2010 09:51 |
|
I am having no luck installing any flavor of Solaris (Solaris Express 11, OpenIndiana, NexentaStor Community Edition). Could I ask you folks with experience to check out my thread in Haus of Tech Support?
|
# ? Dec 21, 2010 12:25 |
|
So I'm quickly running out of space on my iMac (I take a ton of photos and do a lot of video work). I've been thinking about getting a 4-bay FireWire 800 RAID enclosure and filling it up with 2TB drives in a RAID 5 or 10. So far I'm leaning towards the OWC Mercury Pro Qx2 filled with 4 Hitachi 2TB drives. Can anyone recommend any similar enclosures/solutions as an alternative to the OWC model? I'm not necessarily married to the idea of a FW800 device; I'd go gigabit if someone could provide a compelling solution. Any suggestions/thoughts?
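For reference, the rough capacity math I've been doing on the 4-bay options (back-of-envelope, ignoring filesystem overhead):

```shell
#!/bin/sh
# Rough usable-capacity math for a 4-bay enclosure with 2TB drives.
# RAID 5 gives up one drive to parity; RAID 10 gives up half to mirroring.
n=4       # number of drives
s=2       # size of each drive, TB

echo "RAID 5:  $(( (n - 1) * s )) TB usable, survives any 1 drive failure"
echo "RAID 10: $(( n / 2 * s )) TB usable, survives 1 failure per mirror pair"
```

So RAID 10 costs me 2 TB versus RAID 5, in exchange for simpler rebuilds and (generally) better random write performance.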
|
# ? Dec 22, 2010 06:05 |
|
PopeOnARope posted:Except I'm running Windows 7 as my base layer, not ESXi; this is an area where my inexperience really shows through. I'm having difficulty proceeding overall.

Ohh, I'm sorry to hear that.
|
# ? Dec 22, 2010 06:27 |
|
So I've been playing around with the DS1010+. It arrived and I installed 5 1TB WD Caviar Black drives, and I also put an extra 2 GB of RAM in for whatever minor performance boost that might yield. I'd wanted to team the NICs together 802.3ad-style, but unfortunately my switches are all Dell PowerConnect 2724 units, which only support static LAGs for 802.3ad, and the Synology unit does not support static LAGs. (If anyone could suggest a cheap 16-port switch that supports LACP that I could use for the NAS and the servers, please speak up. Especially if there's a source of used ones.)

Right now my main goal is to use the Synology as a backup target datastore for our main SBS server and a secondary 2008 server, which run on ESXi 4.1 hosts. But I'm also going to use it to share files over SMB, so I formatted it as one big volume and am using the file-level iSCSI for VMware. I just got everything talking today and have only roughly tested the iSCSI read and write speeds, but they are certainly very inconsistent.

Right now I'm just copying files to and from internal storage (RAID 5, 15K SAS), which is what the VM runs on. I've tested both a Windows initiator connecting directly to the Synology, and a disk created from an iSCSI ESX datastore on the Synology and presented to the VM. The direct connection to Windows only seems to make a slight difference, though the highest transfer rate I saw (100 MB/s) was using the disk on the ESX datastore, for whatever that means. When copying a 3-gig file I get read and write speeds averaging 40-60 MB/s (though I have seen continuous writes of 90 MB/s); however, when copying a folder with several gigs of photos, it seems to top out at about 18 MB/s. It's very clear that this is not even slightly scientific testing, but I thought I'd post about it here anyway.
In between real work I'm going to do some more scientific benchmarking of the iSCSI performance as well as the SMB performance and see if I can come up with any useful information.
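Probably something simple like a dd pass as a starting point - this is just a sketch, with TARGET standing in for wherever the iSCSI or SMB volume ends up mounted:

```shell
#!/bin/sh
# Crude sequential write/read test with dd. TARGET is a placeholder;
# point it at the mounted iSCSI or SMB volume you want to measure.
TARGET=${TARGET:-/tmp}
MB=64

# Write: conv=fsync forces the data out before dd reports, so the
# number reflects the storage rather than the local page cache.
dd if=/dev/zero of="$TARGET/bench.tmp" bs=1M count=$MB conv=fsync 2>&1 | tail -n 1

# Read the file back (this may be served from cache, so treat the
# read number as an upper bound).
dd if="$TARGET/bench.tmp" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TARGET/bench.tmp"
```

Running the same pair against the Windows-initiator disk and the ESX-datastore disk should make the comparison fairer than folder copies, which mix in per-file overhead.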
|
# ? Dec 22, 2010 07:23 |
|
dj_pain posted:Ohh, I'm sorry to hear that.

I played with things a bit and managed to get it up and working - but then I hit an unexpected consequence: the absolute best speeds I was able to see were about 20 MB/s. That's pretty pants. Conditions were as follows:

- FreeNAS
- 512 MB RAM on the VM
- dual-core machine on the VM
- bridged network
- 6 x 1TB drives raw-mapped, 4 on the SB710, 2 on a SIL3132
- configuration: RAID 50

So I've just given up on that notion and opted to use the onboard RAID. You know, not like that was recommended much earlier or anything. So I did some fuckery and came to find out something very interesting - the performance of a RAID 5 array on a PCI SIL3114 chipset is crap. As in, 15 MB/s crap.

PopeOnARope fucked around with this message at 20:44 on Dec 22, 2010 |
# ? Dec 22, 2010 15:20 |
|
Heh, so I actually am running unRAID right now. While I liked it for my ~*~Babby's First NAS~*~, I think I'm ready to jump into something more powerful. I like unRAID's super simplicity, but the data transfer rates (~30 MB/s to 1TB WD Black drives) plus slow updates to the software (they're still working out what to do with 4K drives, so it's rolling the dice whether new drives work well with it right now) are lame. So I want to upgrade to some 2TB drives, and I'm looking for the best platform for them, I guess? I'd like to get some dual-parity action going, so I was looking at RAID-Z2, which it appears FreeNAS can do. Is FreeNAS pretty simple to set up? unRAID is pretty much: format and set up a USB thumb drive, put in HDs, then config as needed, sometimes in a GO script. I've read a lot of posts about setups with Solaris and Linux and ahh holy poo poo, I just want something slightly more robust than a consumer-grade NAS, since I already own the hardware. TL;DR: Is FreeNAS's RAID-Z2 solid and reliable, and is it easy to set up?
|
# ? Dec 24, 2010 06:48 |
|
Ugh, I got files coming out my rear end now. Still desperately waiting for Hitachi 7K2000s to dive below $100. Trying to stay away from the temptation to use a modified ZFS binary to force ashift to use 4K-sector drives. I doubt Oracle is going to move their asses on an official method to support 'em.
|
# ? Dec 25, 2010 03:00 |
|
movax posted:Trying to stay away from the temptation to use a modified ZFS binary to force ashift to use 4K-sector drives.

I've been using 6 Samsung 2TBs in a raidz2 with the recompiled zpool that supports ashift=12 for a few weeks now, and everything seems right as rain. Granted, I'm still not moving to them fully for another few weeks of use, just to be on the safe side, but no complaints (or errors) so far.
|
# ? Dec 25, 2010 04:18 |
|
Oracle doesn't care. I'm not quite sure how everyone else is having problems with ZFS on 4KB-sector drives, but I've got 2 RAIDZ arrays with similar performance, one using 512B-sector drives and the other using 4KB-sector drives. Maybe it's because the 2TB drives are jumpered for compatibility, but I've been swell for several months now. I didn't do any fancy tuning or anything - just pulled them from the box, added them to a RAIDZ vdev pool, done.
|
# ? Dec 25, 2010 05:01 |
|
I am going insane. I just accidentally nuked my Ubuntu install, and all I have left is a single external drive with all my stuff on it. I am trying to figure out how well ZFS handles 4K-sector drives, if at all, especially when they report 512-byte sectors both logical and physical. I am tearing my hair out here. zfs-fuse is pretty bad. Solaris-based operating systems with native ZFS won't install on my system because of an Intel H55 chipset USB compatibility bug that won't be fixed.

Here's what I want:
1) Snapshotting
2) Writes faster than 15 MB/s in the real world
3) Offline access to folders with proper syncing between multiple computers (no concurrent write access except through aware programs like OneNote)
4) Drive-failure redundancy
5) Access to USB 3.0 for reasonable copy speed to an external hard drive
And I don't want to have to babysit the dang thing.

Here's what hasn't worked:
- Solaris Express, OpenSolaris, NexentaStor: won't install natively, and I'm not sure they support USB 3.0
- Ubuntu Server 10.10 with zfs-fuse: murderously slow writes, and prone to problems with boot-script mount and dismount. Granted, the pools are running on partitions rather than whole drives, but still

Here are the options I can think of:
- VMware ESXi on another drive, virtualize NexentaStor or OpenSolaris. Cons: ESXi doesn't have USB 3.0, and ESXi might not be compatible with a new consumer-grade mainboard
- Keep on plugging with Ubuntu, swap zfs-fuse for mdadm and a separate snapshotting tool (what tool? I dunno)
- Fuuuuuuuck nuke it all and use Windows Server 2008 R2, since I can get an educational license through Dreamspark. Volume Shadow Copy and Offline Folders and all that poo poo, 'cause I'm working with Windows clients anyway.
|
# ? Dec 25, 2010 12:19 |
|
Factory Factory posted:I am going insane. I just accidentally nuked my Ubuntu install, and all I have left is a single external drive with all my stuff on it. I am trying to figure out how well ZFS handles 4K-sector drives, if at all, especially when they report 512-byte sectors both logical and physical. I am tearing my hair out here.

Have you tried FreeBSD? It has ZFS support natively (though it's ZFS v14 or so, not bleeding edge).
|
# ? Dec 25, 2010 13:31 |
|
I have not. I may. But I'd be giving up deduplication and that could end up bloating me like crazy unless I find a different way to store system images.
|
# ? Dec 25, 2010 14:14 |
|
bob arctor posted:So I've been playing around with the DS1010+

The best cheap switch money can buy, in my opinion, is the Netgear GS108Tv2. It's under $100. Full-duplex gigabit, 802.3ad LACP, jumbo frame support, QoS, VLANs, PoE - everything you'd want from a managed switch. Only downside is it's only 8 ports. http://powershift.netgear.com/upload/product/gs108t-200/gs108tv2_ds_10dec09.pdf

quote:Right now my main goal is to use the Synology as a backup target datastore for our main SBS server and a secondary 2008 server, which run on ESXi 4.1 hosts. But I'm also going to use it to share files over SMB, so I formatted it as one big volume and am using the file-level iSCSI for VMware. I just got everything talking today and have only roughly tested the iSCSI read and write speeds, but they are certainly very inconsistent.

I'm not sure putting everything in one big volume is the best way to achieve what you're going for. You'll get much better performance from block-level iSCSI. Format two volumes, one for CIFS and the other for iSCSI presentation.

quote:Right now I'm just copying files to and from internal storage (RAID 5, 15K SAS), which is what the VM runs on. I've tested both a Windows initiator connecting directly to the Synology, and a disk created from an iSCSI ESX datastore on the Synology and presented to the VM. The direct connection to Windows only seems to make a slight difference, though the highest transfer rate I saw (100 MB/s) was using the disk on the ESX datastore, for whatever that means.

Copying many medium-sized photos over CIFS can incur a large cost in overhead.
|
# ? Dec 25, 2010 14:35 |
|
Factory Factory posted:I have not. I may. But I'd be giving up deduplication and that could end up bloating me like crazy unless I find a different way to store system images.

I've been following a thread on the hardforums by a guy who's dedicated to making a FreeBSD live CD specifically for ZFS, with a web GUI. He just put out an experimental release running the latest FreeBSD 9 with ZFS v28 compiled in and supporting ashift=12. The thread is: http://hardforum.com/showthread.php?t=1521803 I haven't used it, as I'm just running a built-from-scratch 8.1 install and I don't need dedup, but lots of people have had good luck with it.
|
# ? Dec 25, 2010 18:36 |
|
That looks perfect, except that, according to the thread, it has major performance issues with 4-5 disk RAIDZ/Z2 pools. I need to get out of the Windows "if it's released it's supported" mindset. e: What about virtualizing OpenIndiana or NexentaStor under Ubuntu? Would virtualized native ZFS work better than zfs-fuse (i.e. a userspace filesystem)? Factory Factory fucked around with this message at 18:56 on Dec 25, 2010 |
# ? Dec 25, 2010 18:52 |
|
Factory Factory posted:That looks perfect, except that, according to the thread, it has major performance issues with 4-5 disk RAIDZ/Z2 pools. A 5 disk raidz vdev/pool will be plenty fast. Not sure where you're seeing that it would have perf problems. The author of that "distro" generally recommends a 5disk raidz or 6 disk raidz2 as the best perf/redundancy option.
|
# ? Dec 25, 2010 19:21 |
|
devilmouse posted:A 5 disk raidz vdev/pool will be plenty fast. Not sure where you're seeing that it would have perf problems. The author of that "distro" generally recommends a 5disk raidz or 6 disk raidz2 as the best perf/redundancy option.

He posted a big thing on the next-to-last page responding to issues, and the 4/5-disk thing was regarding RAIDZ/Z2 on 4K-sector disks especially. Basically, RAIDZ is not optimized for smaller pools, and it is not optimized for 4K-sector disks, and the combination is pretty bad. It's based on the zpool version, apparently. Considering that zfs-fuse (zpool v23) was giving me 15 MB/s writes on a 4-disk RAIDZ2 pool, when each disk benches ~110 MB/s by itself, I believe him that there are issues. I may just start figuring out a ZFS alternative, like ext4 with a snapshotting tool, or NTFS with VSS. It's not like I'm going to be changing the number of disks in the pool or adding write-cache SSDs or anything.
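For what it's worth, here's the arithmetic as I understand it: each 128 KiB record is striped over the data disks, and on 4K-sector drives each disk's share gets rounded up to a whole sector, so widths that don't divide the record evenly write padding. (This is the commonly cited explanation, not something I've verified against the ZFS code, and the disk counts are just examples.)

```shell
#!/bin/sh
# Rough sketch of why some raidz widths hurt on 4K-sector drives:
# a 128 KiB record is split across the data disks, and each disk's
# share is rounded up to a whole 4 KiB sector.
record=131072   # default ZFS recordsize, bytes
sector=4096     # physical sector size on a 4K drive

for d in 2 3 4 5; do  # d = number of DATA disks (excluding parity)
  chunk=$(( (record / d + sector - 1) / sector * sector ))
  written=$(( chunk * d ))
  echo "$d data disks: writes $(( written * 100 / record ))% of the ideal bytes"
done
```

By that math a 4-disk RAIDZ2 (2 data disks) divides evenly, which suggests my 15 MB/s had more to do with zfs-fuse overhead than with geometry.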
|
# ? Dec 25, 2010 19:33 |
|
Odd. I just saw this on the second-to-last page, after the post you're talking about:

sub.mesa posted:For 4K disks the ideal vdev configuration is:

15 MB/s writes is horrid. Oof. That's worse than virtualized performance when I briefly tried that. Right now I get anywhere between 350-500 MB/s writes on 6x 2TB 4K disks in raidz2 running on OpenIndiana, depending on the type of benchmark (dd, filebench, bonnie++) and test. If FreeBSD supports your hardware, even with 512-byte sector emulation, you should hit way higher than 15 MB/s. A 4-disk raidz2 seems like a bizarre setup, though. If you wanted a 4-disk setup with 2-disk redundancy, why not set up a pair of two-disk mirrored vdevs like this:

pool: tank
  vdev1: mirror (disk1, disk2)
  vdev2: mirror (disk3, disk4)
|
# ? Dec 25, 2010 19:46 |
|
I've got three vdevs in my pool, 6x 1TB in RAIDZ, 3x 1.5TB in RAIDZ, and 3x 4k 2TB in RAIDZ. Not ideal, but eh. Just from the completely non-scientific test of "copy poo poo to a samba share on the server", I get about 50MB/sec. I am using a v14 pool version though, not the newer one.
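As a sanity check, the capacity math for that layout - pool space is just the sum over vdevs, with each RAIDZ vdev contributing (disks - 1) x disk size (raw numbers, before ZFS overhead):

```shell
#!/bin/sh
# Usable space of a pool is the sum over its vdevs; each single-parity
# raidz vdev contributes (disks - 1) * disk_size. Layout from above:
# 6x 1TB raidz + 3x 1.5TB raidz + 3x 2TB raidz.
total=$(awk 'BEGIN { print (6-1)*1 + (3-1)*1.5 + (3-1)*2 }')
echo "approx usable: $total TB (raw, before ZFS metadata overhead)"
```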
|
# ? Dec 25, 2010 20:08 |
|
Has anyone ever come across a PCI (not PCIe) card in their travels that supports eSATA with port-multiplier support under OpenSolaris? My googling has turned up nothing, unfortunately.
|
# ? Dec 25, 2010 22:16 |
|
Factory Factory posted:I have not. I may. But I'd be giving up deduplication and that could end up bloating me like crazy unless I find a different way to store system images. How long do you think you could live without dedup? From the looks of it, ZFS v28 has been patched in and should probably make an appearance in the coming months in the STABLE branch. http://ivoras.sharanet.org/blog/tree/2010-12-13.zfs-v28-imminent.html
|
# ? Dec 26, 2010 03:44 |
|
WilWheaton posted:Has anyone ever come across a PCI (not PCIe) card in their travels that supports eSATA with port-multiplier support under OpenSolaris? My googling has turned up nothing, unfortunately.

Solaris doesn't support port multipliers. Maybe OpenIndiana/Illumos will in time, but there's basically no motivation to support such cheap consumer poo poo in an enterprise-class OS.
|
# ? Dec 26, 2010 04:24 |
|
I'm running a 10-disk RAIDZ2 through a virtualized install of OpenIndiana. The only thing that actually worked was VMware Workstation with physical disk access. Server, Hyper-V, and VirtualBox either didn't work at all or were really, really slow.
|
# ? Dec 26, 2010 04:28 |
|
I recently bought a 1TB Western Digital MyBook World Edition. It's got a tiny dinosaur of a brain running some version of Linux that manages the software required to act as a local UPnP / DLNA server, so you can access whatever you throw on it from a computer or your Xbox/PS3/etc. If you want to go a little further, you can SSH into it and set up something like Ampache, where you can download a client for your phone and stream all your music from the server itself. It also supports Transmission, PHP, MySQL, and a few other useful services that you can access remotely. Forward the correct ports and password-protect everything, and it's a really great little box for just $140.
|
# ? Dec 26, 2010 06:23 |
|
Anyone of you running OpenSolaris/OpenIndiana in a VM as a file server? What sort of performance are you getting out of it using CIFS?
|
# ? Dec 26, 2010 16:35 |
|
So if I want to switch to a raidz2 from unRAID, is FreeNAS decent and simple, or should I read up on being able to roll my own setup? I wouldn't do anything else with it except maybe run SABnzbd and Sick Beard, if that's possible. Ideally, if I could just install the OS to a USB stick, put it in, and configure it from a web interface, that would be loving cool, but I guess I can do other wigglework if needed.
|
# ? Dec 26, 2010 20:26 |
|
I didn't see this question posed in the FAQ or the forum, so here it goes: is there any advantage to buying an external drive versus an internal drive and an enclosure, provided it will only be used occasionally and won't be thrown across the room?
|
# ? Dec 28, 2010 03:40 |
|
Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM. I've got a mix of drive sizes in multiple arrays... code:
It took frickin' forever copying data off the NTFS drives to the existing arrays and then expanding the arrays with each freed drive (I probably ended up with 150+ hours of copying/RAID growing), but it's done! Thanks for the advice, guys.
|
# ? Dec 28, 2010 04:42 |
|
I'm looking for some feedback on a storage solution for backing up the photos from my MacBook Pro. Currently, my data is backed up to my 500GB Apple Time Capsule. In hindsight, I regret taking this route, as these early Time Capsule models have proven to be somewhat flaky, and I now find myself looking to supplement my backups. I'm looking for 1TB of storage. Should I be looking at an 'internal' 3.5" HD in an enclosure, or just go with a dedicated external drive?
|
# ? Dec 28, 2010 05:58 |
|
Why don't you buy a Synology DS211J, put two 2TB hard drives in it in RAID 1, put it on your network, and continue using Time Machine? This has several advantages:

(1) Your backup will be in RAID 1. Currently you have one drive; if it fails, you lose your backup. In the solution you're proposing, you would also have one drive. RAID gives you redundancy, so your backups are a bit safer.
(2) You can keep using Time Machine. Apple's built-in backup software is undeniably excellent and, most importantly, incredibly convenient. The best backup is one that happens. Switching to something you'll have to manually remember to run is a bad idea.
(3) You can continue backing up over the network or wifi. Again, this is very convenient, and convenience is the most important thing for home-user backups.
(4) Synology's software does all kinds of other stuff - you could put your iTunes library on the NAS, use it to share things with a DLNA TV if you have one, or access files on a shared drive from an iOS or Android phone, or from your computer anywhere on the internet. It's just a good solution.

Expect to pay around $230 for a DS211J and around $100 for each 2TB hard drive. This puts your total price at $430, which is probably the least amount of money you can spend to do this the "right way" in terms of a network backup.

If you buy a cheap external USB 1TB drive, you'll forget to plug it into your laptop, and your backups won't happen all the time. A large number of enclosures also have no active cooling (fans), so there's some potential for the drive to overheat. Furthermore, it could easily be tugged off the table by the USB or power cord and fall on the ground. Finally, you have no redundancy, so if the drive dies, so do your backups.
Buying a dedicated NAS for backup is better because you don't have to remember to turn it on or plug it in, it has active cooling fans, there's dedicated redundancy, and you can stick the NAS in a closet where it's not going to be kicked over or accidentally knocked off a desk.
|
# ? Dec 28, 2010 19:21 |
|
what is this posted:Expect to pay around $230 for a DS211J and around $100 for each 2TB hard drive.

The DS211J is $208 directly from Amazon; they sell out of them frequently, unfortunately. Western Digital has a MIR on their 2TB drive - you get a $20 Visa rewards card. Getting the DS211J and 2 drives comes out to about $370-ish. Drevoak fucked around with this message at 20:06 on Dec 28, 2010 |
# ? Dec 28, 2010 20:04 |
|
Also, it's hacky, but be aware that you can back up to a USB disk from (AFAIK) any Synology device. So if you really wanted to shoestring it and had less than 500GB of data at the present time, you could buy a 211 with one drive, plug the Time Machine disk into it, and back the NAS up to that till you got a second drive. Just be aware of the dangers of the situation if you pull this.
|
# ? Dec 28, 2010 20:56 |
|
Uh, so I have a fairly stupid question about ZFS. I've been using zfs-fuse on my Linux box for a while, but every time I've reconfigured, I've just offloaded all the data to a large disk and rebuilt the array. Well, soon I'm going to be replacing some disks, and that won't be an option. So does ZFS actually care which disk is sdb, sdc, sdd, and so on? Or does it look at the disks themselves for some sort of token?
|
# ? Dec 28, 2010 22:53 |
|
|
FooGoo posted:I didn't see this question posed in the FAQ or the forum, so here it goes:

Otherwise, I would usually recommend an enclosure and a drive. You can upgrade the drive to a bigger one later, and you can choose which drive goes inside, instead of getting whatever was most likely the cheapest drive the manufacturer could find. And if there's some kind of failure with the enclosure, you can take the drive out and have a better chance of recovery without voiding the warranty, which would probably happen with an external.
|
# ? Dec 28, 2010 23:33 |