what is this
Sep 11, 2001

it is a lemur
Enclosure + drive is always better for 3.5" drives


Manufacturers who sell external hard drives these days almost never include active cooling.

Buy this enclosure: http://www.newegg.com/Product/Product.aspx?Item=N82E16817173043

which has a fantastic gently caress-off sized fan to keep your hard drive nice and cool, and plenty of space for air circulation inside.

The only downside is it's a USB and eSATA enclosure only, no firewire. $35.





If you want FW800 you're going to have to buy something like this enclosure from Other World Computing which costs $79 and doesn't have a nice big fan and airflow space.

what is this fucked around with this message at 00:10 on Dec 29, 2010


frogbs
May 5, 2004
Well well well

what is this posted:

Why don't you buy a Synology DS211j, put two 2TB hard drives in it in RAID1, put it on your network, and continue using Time Machine?

I should say that while this is a great idea, I haven't had much luck getting our Synology DS209 to talk to our iMacs. Transfers would stop intermittently, and we were getting a ton of 'broken pipe' errors in the logs. Support was excellent in trying to troubleshoot it with us, but we just couldn't figure out the problem, so we ended up returning it. I've heard other reports of Synology gear not working well with Apple hardware. I should note that SMB from Windows worked great; it's just unfortunate that we're an almost entirely Mac-based shop.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

gregday posted:

So does ZFS actually care which disk is sdb, sdc, sdd, and so on? Or does it look at the disks themselves for some sort of token?
ZFS writes its own labels on each disk and as well replicates pool information to each one.

what is this
Sep 11, 2001

it is a lemur

frogbs posted:

I should say that while this is a great idea, I haven't had much luck getting our Synology DS209 to talk to our iMacs. Transfers would stop intermittently, and we were getting a ton of 'broken pipe' errors in the logs. Support was excellent in trying to troubleshoot it with us, but we just couldn't figure out the problem, so we ended up returning it. I've heard other reports of Synology gear not working well with Apple hardware. I should note that SMB from Windows worked great; it's just unfortunate that we're an almost entirely Mac-based shop.

I've had no problems with synologys and macs.


I don't know enough to troubleshoot your issue, but it's not something I've experienced with lots of macs, AFP, iSCSI, CIFS, and a couple different models of synology storage units.


Maybe it's some kind of jumbo frames/MTU network switching incompatibility?

frogbs
May 5, 2004
Well well well

what is this posted:

I've had no problems with synologys and macs.


I don't know enough to troubleshoot your issue, but it's not something I've experienced with lots of macs, AFP, iSCSI, CIFS, and a couple different models of synology storage units.


Maybe it's some kind of jumbo frames/MTU network switching incompatibility?

Their support department was completely flummoxed as well; I think it was a bad power supply. I guess if there's any information I can pass on, it's that Synology's support department is extremely patient and thoughtful. I was incredibly impressed. Despite our troubles, we're considering trying another Synology product, the DS211.

I thought it might be a jumbo frames issue, but I disabled them on every device and it made no difference.

kill your idols
Sep 11, 2003

by T. Finninho
Just got done putting together my OpenIndiana build, but I can't run an internal benchmark. I've used the command before, but now I get some kind of error. What the gently caress did I do wrong?

code:
nick@openindiana:/vault# zpool status

  pool: vault
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vault       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c4d1    ONLINE       0     0     0

errors: No known data errors
nick@openindiana:/vault# dd if=/dev/zero of=/vault/zerofile.000 bs=4m count=8000
dd: bad numeric argument: "4m"
:f5:

devilmouse
Mar 26, 2004

It's just like real life.
Capitalize your M?

edit: vvv Foiled! I'll check the syntax when I get home.

devilmouse fucked around with this message at 02:26 on Dec 29, 2010

kill your idols
Sep 11, 2003

by T. Finninho

devilmouse posted:

Capitalize your M?


code:
nick@openindiana:/vault# dd if=/dev/zero of=zerofile.000 bs=4M count=10000
dd: bad numeric argument: "4M"
nick@openindiana:/vault# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   7.45G  20.6G    47K  /rpool
rpool/ROOT              3.58G  20.6G    31K  legacy
rpool/ROOT/openindiana  3.58G  20.6G  3.56G  /
rpool/dump              1.87G  20.6G  1.87G  -
rpool/export            96.5K  20.6G    32K  /export
rpool/export/home       64.5K  20.6G    32K  /export/home
rpool/export/home/nick  32.5K  20.6G  32.5K  /export/home/nick
rpool/swap              1.99G  22.5G   126M  -
vault                   3.00G  3.56T  3.00G  /vault

kill your idols fucked around with this message at 02:40 on Dec 29, 2010

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

kill your idols posted:

code:
nick@openindiana:/vault# dd if=/dev/zero of=zerofile.000 bs=4M count=10000
dd: bad numeric argument: "4M"
nick@openindiana:/vault# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   7.45G  20.6G    47K  /rpool
rpool/ROOT              3.58G  20.6G    31K  legacy
rpool/ROOT/openindiana  3.58G  20.6G  3.56G  /
rpool/dump              1.87G  20.6G  1.87G  -
rpool/export            96.5K  20.6G    32K  /export
rpool/export/home       64.5K  20.6G    32K  /export/home
rpool/export/home/nick  32.5K  20.6G  32.5K  /export/home/nick
rpool/swap              1.99G  22.5G   126M  -
vault                   3.00G  3.56T  3.00G  /vault

I don't have access to openindiana manpages right now, but I'd guess that their version of dd doesn't expand modifiers on the block size. Just do bs=4194304
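If you'd rather not hard-code the magic number, shell arithmetic can expand the suffix for you. A quick sanity check of that value (plain POSIX shell, nothing OpenIndiana-specific; the 39 GiB total is just what count=10000 works out to):

```shell
# 4 MiB spelled out in bytes, for dd builds that reject the "4M" suffix
bs=$((4 * 1024 * 1024))
echo "bs=${bs}"                                            # 4194304
# total data written by count=10000 blocks of that size
echo "total: $(( bs * 10000 / 1024 / 1024 / 1024 )) GiB"   # about 39 GiB
```

You can also inline it directly: dd if=/dev/zero of=zerofile.000 bs=$((4*1024*1024)) count=10000.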

kill your idols
Sep 11, 2003

by T. Finninho
code:
nick@openindiana:/vault# dd if=/dev/zero of=/vault/test/zerofile.000 bs=4194304 count=10000
10000+0 records in
10000+0 records out

Seemed like it worked, but no write speed was shown unless I brought up zpool iostat. I thought it would display the speed after the test.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
dd isn't a speed test. It's just a raw data output utility. What you're doing there is taking input from /dev/zero and outputting it to a file called /vault/test/zerofile.000, with a block size of 4 megs, and doing that 10000 times. It doesn't care how fast it is. You'd need something separate to monitor that.
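If you do want a rough number out of a dd build that doesn't print one, you can time the run yourself. A minimal sketch (the file path and sizes here are just examples):

```shell
# time a dd run and work out the throughput by hand, for dd builds
# that don't print an average data rate when they finish
start=$(date +%s)
dd if=/dev/zero of=/tmp/ddtest.bin bs=1048576 count=256 2>/dev/null
end=$(date +%s)
elapsed=$(( end - start ))
[ "$elapsed" -eq 0 ] && elapsed=1    # avoid divide-by-zero on fast runs
echo "wrote 256 MB in ${elapsed}s, roughly $(( 256 / elapsed )) MB/s"
rm -f /tmp/ddtest.bin
```

Spelling bs out in bytes dodges the suffix issue entirely. Keep in mind the page cache will inflate small runs; for a meaningful result you'd write a file larger than RAM, as the zerofile test above does.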

Charles Martel
Mar 7, 2007

"The Hero of the Age..."

The hero of all ages
I'm new to this whole DAS/NAS server thing, and I'm still confused.

What I want to do is create a digital "archive" of computer software, made up mostly of CD images. It's going to require 8TB of hard drive space, and more in the future. I want to store them in a way that's cost-effective yet reliable, so the data lasts as long as possible.
According to the OP, a NAS with RAID-5 or RAID-Z would be ideal, but I'm still waffling between buying a pre-built unit and rolling my own. Now the questions:

1) What are the major advantages between pre-built NAS arrays and building your own?

2) Can I set up multiple pre-built NAS arrays into one RAID-5 array? (i.e. putting two 4TB MyBook World NASes together in a RAID-5 configuration)?

3) Is Raid-Z only a solution for DIY NAS systems?

Charles Martel fucked around with this message at 07:54 on Dec 29, 2010

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

kill your idols posted:

Just got done putting together my OpenIndiana build, but I can't run an internal benchmark? I've used the command before, but I get some kind of error. What the gently caress did I do wrong?

:f5:
They're replacing a lot of their encumbered stuff with FreeBSD code, so if there are any issues with parameters, it might be worthwhile to scope out the FreeBSD man pages.

NeuralSpark
Apr 16, 2004

G-Prime posted:

dd isn't a speed test. It's just a raw data output utility. What you're doing there is taking input from /dev/zero and outputting it to a file called /vault/test/zerofile.000, with a block size of 4 megs, and doing that 10000 times. It doesn't care how fast it is. You'd need something separate to monitor that.

Using dd in this fashion is a pretty basic sequential throughput test. He's probably expecting dd to print the average data rate for the transfer like it does on OS X, Linux, and FreeBSD. The version of dd they ship must be really old.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
I'll agree with you on that. It's just not the intended use of dd. There's very little reason to care how fast the write is done for the traditional usage of the command.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Well, running a NAS off of Windows Server 2008 R2 Standard sure is... excessive. But it works. Volume Shadow Copy Service is good, Dynamic Disks RAID 5 is functional enough, and it supports USB 3.0 for my external drive (I'll take double the speed for config/data backups any day). I lost dual-parity and hot-swap, since Dynamic Disks support neither and my mainboard doesn't even have fakeRAID, but aside from the WD Green series I've only had two drives die in ~15 years of enthusiast computing, so I'll take the one-in-a-billion chance that I lose two array drives and an external simultaneously.

Windows Server takes up a huge amount of disk space compared to Ubuntu, ~24 GB compared to ~2.5, but that's likely in part because it's chock-full of enterprise role installers. I haven't yet fooled around with Hyper-V, but that's neither here nor there.

If anyone wants to fool around with it, I got my license through DreamSpark. I signed up with my alum's .edu e-mail, downloaded an ISO and clicked to generate a license key, and I was done.

Henrik Zetterberg
Dec 7, 2007

I've got a Supermicro X7SPA-HF-O (6 on-board SATA) with an Atom D510 out for delivery right now. Can't wait to drop this baby into my Unraid system and replace my old AMD 2800+ with only PCI and enjoy my lower electric bill.

Bardlebee
Feb 24, 2009

Im Blind.
Hey guys, I am looking for a backup solution for my small business. Currently we use tapes, which we have to switch out every day. I find this inefficient, as someone can't be at each location every day. I am looking for software that can back up an image of an entire server and then upload that image automatically to a specified network location.

I have heard Acronis is good, but I don't want to lay down 500+ bucks without knowing what I am purchasing. Oh yeah, cost is a concern for this business, by the way; my hands are tied on that.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Are there any web interfaces for seeing the status of my RAID arrays and LVM volumes?

kill your idols
Sep 11, 2003

by T. Finninho

NeuralSpark posted:

:eng101:

Using Bonnie, I got the following benchmarks from a 5-disk raidz1-0 of Hitachi HDS721010CLA332 drives with the included hardware:

code:

System release: SunOS openindiana 5.11 oi_148
Memory size: 3840 Megabytes
System Configuration: MSI MS-7623
BIOS Configuration: American Megatrends Inc. V11.5 08/10/2010
Processor: AMD Athlon(tm) II X2 250 Processor CPU1

------Sequential Output------ --Sequential Input-      --Random-
    --Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size  K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
7672M  71814 94 221845 59 111435 31 54921  97 424945 51 408.8   2

And a Crystal Mark test result over SMB/CIFS from W7 running in Hyper-V over a cheap "green" gigabit switch:




I'll have to research it some more to see if this is any good before I move any of my data over, or go with another solution (FreeBSD). ZFS is interesting, though. Maybe a beefier chip and some more RAM would boost results.

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:

Bardlebee posted:

Hey guys I am looking for a backup solution for my small business. Currently we use tapes, which we have to switch out every day. I find this inefficient as someone cannot be at each location every day. I am looking for a software that can basically backup an image of an entire server, and then upload that image automatically to a specified network location.

I have heard Acronis was good, but I don't want to lay down 500+ bucks without knowing what I am purchasing. Oh, yeah cost is an option for this business by the way, my hands are tied on that.

How many servers, and how many locations? How much data do the servers hold? How much data is added or changed on a daily basis?

Without that kind of info it's pretty hard to make a recommendation of any kind. One thing I can say is that taking a full image of a server and uploading it somewhere on a daily basis will probably prove impractical unless you have very small servers.
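To put numbers on that, a back-of-the-envelope check is enough; even a modest image over a typical small-office uplink takes the better part of two days. (The 100 GB image and 5 Mbit/s uplink are assumed figures for illustration, not anything from the post.)

```shell
# rough upload-time estimate: image size in GB, uplink in Mbit/s (both assumed)
image_gb=100
uplink_mbit=5
seconds=$(( image_gb * 8 * 1024 / uplink_mbit ))   # GB -> Gbit -> Mbit -> seconds
echo "upload time: about $(( seconds / 3600 )) hours"
```

Which is why incremental or differential schemes usually win over raw image uploads.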

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
So what's the consensus on btrfs? It doesn't have fsck or RAID capabilities yet, does it?

Bardlebee
Feb 24, 2009

Im Blind.

bob arctor posted:

How many servers, and how many locations? How much data do the servers hold? How much data is added or changed on a daily basis?

Without that kind of info it's pretty hard to make a recommendation of any kind. One thing I can say is that taking a full image of a server and uploading it somewhere on a daily basis will probably prove impractical unless you have very small servers.

Sorry, I meant a weekly or even bi-weekly basis. Just two locations, three servers: two servers at one location, one at the other. None of them has more than 100 GB of data on it. Not much changes on a daily basis besides the contents of a few Excel files and a SQL database.

What do you think?

a cyberpunk goose
May 21, 2007

Hey Storage thread! I have a project popping up at work and I wanted to run some ideas past this thread.

We are a non-profit community access TV station, and members can come in and use one of our five iMacs to sit down with Final Cut Pro and edit their poo poo. With the current system, we have tons of ~200GB FireWire 800 hard drives that our members request from the back, and we go fetch them and plug them in. These people are storing hundreds of hours of precious work and unedited footage on portable drives. This violates a pretty basic backup rule, which is simply that portable storage is not a good backup.

Naturally, as a computer guy, I think "Oh well, we should just set up 8TB of RAID-1 or something on a $400 computer that is just a motherboard, a 10/100/1000 network card, and a bunch of high-volume hard drives, running Ubuntu and Samba."

Is this a blatantly bad idea for any reason? Has anyone ever set up a NAS/SAN type thing for Final Cut Pro before? Are there things I'm not accounting for, like would it lag horribly in Final Cut and not be worth the effort? Ideally they'd edit their projects straight off the share to simplify things (try explaining to a bunch of 70 year olds how to push their project onto the server when they are done).


VVVVV: Nice! Found some examples of people saying that a proper FreeNAS setup makes FCP editing a breeze, this is promising.

a cyberpunk goose fucked around with this message at 19:37 on Dec 30, 2010

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Mido posted:

Naturally, as a computer guy, I think "Oh well we should just set up a 8TB RAID-1 or something on a $400 computer that is just a motherboard, 10/100/1000 network card and a bunch of high volume hard drives, running ubuntu & samba"

Is this a blatantly bad idea for any reason? Has anyone ever set up a NAS/SAN type thing for Final Cut Pro before? Are there things I'm not accounting for, like would it lag horribly in Final Cut and not be worth the effort? Ideally they'd edit their projects straight off the share to simplify things (try explaining to a bunch of 70 year olds how to push their project onto the server when they are done).

I would ask in the Openfiler or Freenas forums.

http://sourceforge.net/apps/phpbb/freenas/index.php

http://www.openfiler.com/community/forums

what is this
Sep 11, 2001

it is a lemur
Keep in mind that if you want more than one person working on files on the iSCSI LUN at a time, you need to use Apple's Xsan or some other filesystem that expects multiple users.

quadratic
May 2, 2002
f(x) = ax^2 + bx + c
Is the Samsung Spinpoint F3 HD103SJ appropriate for use in a hardware RAID situation? The OP talks about enterprise vs. consumer drives, but the post is a few years old, so I don't know if it reflects the current state of things.

SopWATh
Jun 1, 2000
Are there any good PCI SATA RAID controllers I should look for? I've got an old PIII Tualatin (Asus TUSL2-C) motherboard with a 1.0GHz P3 and 512MB of RAM. It's old, but relatively low-power and it never breaks.

Basically I want to see if I can get some configuration working for a media server. I'd like some data protection, hence the RAID, but it doesn't need to be high performance since there'll only be one or two users max.

Newegg lists a bunch of Syba brand controllers, are they any good?

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

SopWATh posted:

Are there any good PCI SATA RAID controllers I should look for? I've got an old PIII Tualatin (Asus TUSL2-C) motherboard with a 1.0GHz P3 and 512MB of ram. It's old, but relatively low power and never breaks.

Tom's hardware gave this one a positive review:

http://cgi.ebay.com/HighPoint-RocketRAID-1640-SATA-RAID-PCI-card-4-channels-/150540835783?pt=LH_DefaultDomain_0&hash=item230ceedbc7

http://www.tomshardware.com/reviews/cheap-reliable-raid-5-storage-compared,832-5.html

ruro
Apr 30, 2003

quadratic posted:

Is the Samsung Spinpoint F3 HD103SJ appropriate for use in a hardware RAID situation? The OP talks about enterprise vs. consumer drives, but the post is a few years old, so I don't know if it reflects the current state of things.

I'm using these drives on a 3ware 9650SE and have been for the past year or so without any issues at all. I have a battery backup module, though, which helps (I had a couple of delayed-write errors while it was running its six-monthly BBU test).

ruro fucked around with this message at 00:49 on Dec 31, 2010

Scuttle_SE
Jun 2, 2005
I like mammaries
Pillbug
So, status update on Greyhole here.

I've been running my fileserver under Win2008 Server, no raid or anything, just a bunch of disks, and it has been working fine, except that I had 24 disks and a shitton of shares. I was planning on moving to WHS when the new version came out, but since MS decided to be retarded and remove the Drive Extender from it, I no longer saw a reason for it.

I then found Greyhole, which is basically Drive Extender for Linux, and it works basically the same way. You copy your files to a "landing zone", Greyhole moves each file to a disk in your pool and puts a symlink in its place. To do this, Greyhole has a daemon that checks the Samba log for new activity every 10 seconds.
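The landing-zone trick is easy to picture in plain shell. This is just a toy illustration of the move-then-symlink idea, not Greyhole's actual code (all paths made up):

```shell
# a file "written to the share" gets moved onto a pool disk
# and replaced by a symlink pointing at its new home
mkdir -p /tmp/gh-demo/share /tmp/gh-demo/disk-01
echo "some data" > /tmp/gh-demo/share/file.txt
mv /tmp/gh-demo/share/file.txt /tmp/gh-demo/disk-01/file.txt
ln -s /tmp/gh-demo/disk-01/file.txt /tmp/gh-demo/share/file.txt
cat /tmp/gh-demo/share/file.txt    # still reads fine, through the symlink
rm -rf /tmp/gh-demo
```

Clients on the Samba share keep seeing the same path; only the storage underneath moves.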



It also has redundancy the same way WHS has/had. You can choose to have the files in a share duplicated over two, or more, physical disks



To be honest, it was a bit tricky to set everything up at first. I am quite familiar with Linux; the problems arose mostly from some errors in the documentation and some misunderstandings on my part. But I've had it up and running for some time now, and it has been working flawlessly. I am running it under an install of Ubuntu Server 10.10 64-bit, but it should work with any version of Linux if I understand it correctly...

The configuration is pretty straightforward: you define the Samba shares you want "greyholed"...
code:
num_copies[Music] = 1
num_copies[Video] = 1
num_copies[Pictures] = 1
num_copies[Software] = 1
num_copies[TVShows] = 1
...and what disks your pool will use

code:
storage_pool_directory = /greyhole/pool/disk-01/gh-pool, min_free: 1000gb
storage_pool_directory = /greyhole/pool/disk-02/gh-pool, min_free: 25gb
storage_pool_directory = /greyhole/pool/disk-03/gh-pool, min_free: 25gb
storage_pool_directory = /greyhole/pool/disk-04/gh-pool, min_free: 25gb
That's basically it.

Removing a disk, in case it starts to go bad, is easy too: with a command you tell Greyhole to move everything off the affected disk. It'll spread it out across your remaining disks, and when it's done you can safely remove your failing disk.

Oh, and it reports free space correctly to Samba too.



Right now my setup looks like this:

code:
Greyhole Statistics
===================

Storage Pool
                                  Total -  Used =  Free + Attic = Possible
  /greyhole/pool/disk-01/gh-pool: 1834G - 1066G =  675G +    0G =  675G
  /greyhole/pool/disk-02/gh-pool: 1834G - 1390G =  350G +    0G =  350G
  /greyhole/pool/disk-03/gh-pool: 1834G - 1392G =  349G +    0G =  349G
  /greyhole/pool/disk-04/gh-pool: 1834G - 1392G =  349G +    0G =  349G
  /greyhole/pool/disk-05/gh-pool: 1834G - 1391G =  350G +    0G =  350G
  /greyhole/pool/disk-06/gh-pool: 1375G -  956G =  349G +    0G =  349G
  /greyhole/pool/disk-07/gh-pool: 1834G - 1390G =  350G +    0G =  350G
  /greyhole/pool/disk-08/gh-pool: 1834G - 1391G =  350G +    0G =  350G
  /greyhole/pool/disk-09/gh-pool: 1375G -  955G =  350G +    0G =  350G
  /greyhole/pool/disk-10/gh-pool: 1375G -  955G =  350G +    0G =  350G
  /greyhole/pool/disk-11/gh-pool: 1375G -  956G =  350G +    0G =  350G
  /greyhole/pool/disk-12/gh-pool: 1375G -  956G =  350G +    0G =  350G
  /greyhole/pool/disk-13/gh-pool: 1375G -  956G =  350G +    0G =  350G
  /greyhole/pool/disk-14/gh-pool: 1375G -  956G =  350G +    0G =  350G
  /greyhole/pool/disk-15/gh-pool: 1375G -  213G = 1092G +    0G = 1092G
  /greyhole/pool/disk-16/gh-pool: 1375G -  956G =  349G +    0G =  349G
  /greyhole/pool/disk-17/gh-pool: 1834G -  412G = 1329G +    0G = 1329G
  /greyhole/pool/disk-18/gh-pool: 1834G -  413G = 1328G +    0G = 1328G
The "Attic" is a version of the Recycle Bin you can turn on if you want added safety; it basically leaves deleted files on the disk until the "min_free" parameter is hit.

The one downside I can see right now is that it looks like a one-man project; who knows if his interest runs out in three months...

Other than that, it's a great little piece of software if you don't want to commit to a ZFS pool or something like that, and need a competent replacement for WHS.

vanjalolz
Oct 31, 2006

Ha Ha Ha HaHa Ha
Does anyone have any tips for troubleshooting a Nexenta/Solaris system? Everything had been running great for the last couple of years, but yesterday everything started going stupidly slow.

I think the system is slow because disk reads are slow: starting a new process (such as an ssh session, 'top', etc.) is slow as hell. When I managed to run top, I didn't see any weird processes burning through CPU. I ran zpool status and it said my storage pool was fine; my syspool had one corrupt file (because I had to hard-reboot the first time I ran into the issue). The corrupt file was an mrtg config I never use.

It all started yesterday while I was streaming a video file. Everything was running great, I paused the file for 10 minutes and it wouldn't resume. I quickly found out that samba and ssh weren't responding so I rebooted but it didn't help.

tl;dr: Nexenta 2.0 system became slow overnight for no reason. I think it's I/O related. How can I confirm/diagnose?

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM.

I copy to/from the box over a gigabit network at 100-120MB/s (WHS on the same hardware did 60-70 MB/s) and I've got a nice linux machine for dicking around with. My total usable storage is somewhere around 15TB now...

It took frickin forever copying data off the NTFS drives to existing arrays and then expanding the arrays with that drive (I probably ended up with 150+ hours of copy/RAID growing), but it's done!

Thanks for the advice, guys.

Really good to hear it was successful. Cheers.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

vanjalolz posted:

tl;dr: Nexenta 2.0 system became slow overnight for no reason. I think its I/O related. How can I confirm/diagnose?
Run pfexec fmadm faulty and see whether there's a device loving up.

devilmouse
Mar 26, 2004

It's just like real life.

Combat Pretzel posted:

Anyone of you running OpenSolaris/OpenIndiana in a VM as a file server? What sort of performance are you getting out of it using CIFS?

Anyone?

I might try it today or tomorrow. Right now I'm deciding between running Solaris as the host OS with Virtualbox (I haven't played with 4.0 yet...) providing the VM to other OSs or running ESXi as the host with Solaris for ZFS inside that.

In other news - why would Win Vista have such poo poo read performance from a CIFS share (provided by aforementioned Solaris)? It writes to it over gigabit at 60-80MB/s but only reads at 10MB/s? The internal disk benchmarks are well above saturating gigabit speeds but the poor windows box can't read it for poo poo.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Here's some data from my fiddling:

- CPU is an Intel Core 2 Quad Q9450, RAM is 8GB DDR-800 with ECC (i.e. a little slower than non-ECC).
- Host system is Windows 7 x64, hypervisor is VirtualBox 4.
- Guest system is Solaris Express 11 with VBox guest additions.
- The guest uses the e1000g virtual device with jumbo frames enabled, and is bridged to the host adapter, which is a real e1000g device (an Intel PRO/1000 PT). Jumbo frames are enabled in /kernel/drv/e1000g.conf by setting the MaxFrameSize parameter to 3 (16K frames).
- Apparently both the virtual and physical adapters do need to support jumbo frames for this to work.
- Don't expect host-only networking to give any performance boost, just because it uses virtual adapters. It doesn't do jumbo frames.
- The VirtualBox bridge filter driver in Windows doesn't actually touch the network or host hardware when you're addressing the host adapter's IP address. So bridged mode is just fine.

Without jumbo frames, I get 25-30MB/s. With jumbo frames, I get around 65-70MB/s. Reading, that is; I haven't tried writing.

That is from a single disk zpool in the guest over CIFS to a single NTFS disk on the host.

VirtualBox is a pain in the rear end though, since out of the box you have to run it manually or put it in Autostart. And since you need to run it as administrator to use raw disk access, you get the drat UAC prompt each boot (not an issue here, the box runs 24/7 anyway). Apparently there are open-source tools that let you set up VBoxHeadless as a service under a privileged account.

I was considering running Solaris as the host, seeing as I did so for over three years until June 2010. But I became somewhat dependent on various Windows applications, and they behave relatively rear end in VirtualBox with a Windows guest. No idea why that is. Maybe the EPT and VPID I get with the upcoming Sandy Bridge might fix that.

I shortly considered ESXi, until I found out that it's a Linux based hypervisor and apparently doesn't allow me to use my actual hardware. The solution I was looking for is to get a fully usable Windows box and a ZFS datastore in one box. Right now I have 8GB of RAM, the test VM has 1.5GB assigned currently. The upgrade of the coming days will also come with 16GB of RAM, I'll be running the VM at 4GB then, for mighty ZFS cachin'.

--edit: I also tried VMware. Jumbo frames don't work with its e1000g virtual device; they all get dropped. Then there's the vmxnet3 device, theoretically only available on ESXi, but editing the .vmx file lets you enable it. The VMware Tools for Solaris actually ship the vmxnet3 driver, which however doesn't do jumbo frames on Solaris; you can't set the MTU, and VMware's documentation even says so. I ended up with 25-30MB/s that way, too. I guess that disqualifies ESXi as well.

Combat Pretzel fucked around with this message at 01:53 on Jan 2, 2011

frogbs
May 5, 2004
Well well well

frogbs posted:

So i'm quickly running out of space on my iMac (i take a ton of photos and do a lot of video work). I've been thinking about getting a 4 bay Firewire800 raid enclosure and filling it up with 2tb drives in a raid 5 or 10. So far I think i'm leaning towards the OWC Mercury Pro Qx2 filled with 4 Hitachi 2tb drives. Can anyone recommend any similar enclosures/solutions as an alternative to the OWC model? I'm not necessarily married to the idea of a FW800 device, i'd go gigabit if someone could provide me a compelling solution. Any suggestions/thoughts?

So I'm getting closer to pulling the trigger on the OWC enclosure; can anyone offer any thoughts on it, or on any other OWC products?

tboneDX
Jan 27, 2009
I just recently (today) got my openindiana server back up and running after a hard drive failure, and I'm trying to resolve a few permanent data errors. The one in question is within a snapshot, so I can't just delete the file normally, and there is a clone of that snapshot (opensolaris-1), so I can't just destroy the snapshots...

Here are my rpool snapshots and the error I described:

code:
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool                                         14.8G   168G    83K  /rpool
rpool@01_01_2011                                  0      -    83K  -
rpool/ROOT                                    12.8G   168G    21K  legacy
rpool/ROOT/opensolaris                        36.9M   168G  5.18G  /
rpool/ROOT/opensolaris-1                      32.3M   168G  6.49G  /
rpool/ROOT/opensolaris-2                      12.8G   168G  4.91G  /
rpool/ROOT/opensolaris-2@install               204M      -  3.56G  -
rpool/ROOT/opensolaris-2@2010-11-20-21:56:54   416M      -  5.18G  -
rpool/ROOT/opensolaris-2@2010-11-20-23:40:33  1.94G      -  6.49G  -
rpool/ROOT/opensolaris-2@2010-11-23-12:52:06   117M      -  6.74G  -

errors: Permanent errors have been detected in the following files:

        rpool/ROOT/opensolaris-2@2010-11-20-23:40:33:/var/pkg/download/b0/b0f72b1432f45b7fdb0ab2a2860af4b33780a74c
I thought I would be able to mount the snapshot and remove the file manually, but that doesn't seem to be possible. This is more of an annoyance than anything, but I'd appreciate some help.

devilmouse
Mar 26, 2004

It's just like real life.

Combat Pretzel posted:

Here's some data from my fiddling:

Interesting stuff. I ended up spending most of yesterday screwing around with it, and my results weren't that far off from yours. ZFS ran at about 1/2 to 2/3 of the speed when I had Solaris installed as a guest under ESXi. Rather than deal with the hassle, I decided to just slap Solaris Express 11 on it and virtualize from VirtualBox. I don't have to run any Windows stuff, thankfully, just a few Linux/BSD instances to test stuff for work.

I'm running it on a Xeon x3440 with 8G of ECC, on a Supermicro X8SI6-F and getting 80 MB/s reads and 100MB/s writes to the CIFS share which is plenty enough. Internal disk benchmarks on a 6x Samsung F4 raidz2 are between 350 MB/s and 450 MB/s.

Everything seems to be up and running now and I'm just letting it run random break-in tests to make sure everything's fine. The only weird thing that's happened so far is a 3 beep tone in the middle of the night that woke me up. But there was nothing in the system or motherboard logs, so I'm half wondering if I didn't just dream it.


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

tboneDX posted:

I thought I would be able to mount the snapshot and remove the file manually, but that doesn't seem to be possible. This is more of an annoyance than anything, but I'd appreciate some help.
Have you run a scrub yet?
