|
Enclosure + drive is always better for 3.5" drives. Manufacturers who sell external hard drives these days almost never include active cooling. Buy this enclosure: http://www.newegg.com/Product/Product.aspx?Item=N82E16817173043 which has a fantastic gently caress-off sized fan to keep your hard drive nice and cool, and plenty of space for air circulation inside. The only downside is it's a USB and eSATA enclosure only, no FireWire. $35. If you want FW800 you're going to have to buy something like this enclosure from Other World Computing, which costs $79 and doesn't have a nice big fan or airflow space. what is this fucked around with this message at 00:10 on Dec 29, 2010 |
# ? Dec 29, 2010 00:03 |
|
|
|
what is this posted:Why don't you buy a Synology DS211J, put in two 2TB hard drives in RAID1, put it on your network, and continue using Time Machine? I should say that while this is a great idea, I haven't had much luck getting our Synology DS209 to talk to our iMacs. Transfers would just stop intermittently, and we were getting a ton of 'broken pipe' errors in the logs. Support was excellent in trying to troubleshoot it with us, but we just couldn't figure out the problem, so we ended up returning it. I've heard other reports of Synology stuff not working well with Apple hardware. I should note that SMB through Windows worked great; it's just unfortunate that we are an almost entirely Mac-based shop.
|
# ? Dec 29, 2010 00:41 |
|
gregday posted:So does ZFS actually care which disk is sdb, sdc, sdd, and so on? Or does it look at the disks themselves for some sort of token?
|
# ? Dec 29, 2010 00:45 |
|
frogbs posted:I should say that while this is a great idea, I haven't had much luck getting our Synology DS209 to talk to our iMacs. Transfers would just stop intermittently, and we were getting a ton of 'broken pipe' errors in the logs, support was excellent in trying to troubleshoot it with us, but we just couldn't figure out the problem, so we ended up returning it. I've heard other reports of Synology stuff not working well with Apple hardware. I should note that SMB through windows worked great, its just unfortunate that we are an almost entirely Mac based shop; I've had no problems with Synologys and Macs. I don't know enough to troubleshoot your issue, but it's not something I've experienced across lots of Macs, AFP, iSCSI, CIFS, and a couple of different models of Synology storage units. Maybe it's some kind of jumbo frames/MTU network switching incompatibility?
|
# ? Dec 29, 2010 01:09 |
|
what is this posted:I've had no problems with synologys and macs. Their support department was completely flummoxed as well; I think it was a bad power supply. I guess if there's any information I can pass on, it's that Synology's support department is extremely patient and thoughtful; I was incredibly impressed. Despite our troubles, we're considering trying another Synology product, the DS211. I thought it might be a jumbo frames issue, but I disabled them on every device and it made no difference.
|
# ? Dec 29, 2010 01:29 |
|
Just got done putting together my OpenIndiana build, but I can't run an internal benchmark. I've used the command before, but I get some kind of error. What the gently caress did I do wrong? code:
|
# ? Dec 29, 2010 02:10 |
|
Capitalize your M? edit: vvv Foiled! I'll check the syntax when I get home. devilmouse fucked around with this message at 02:26 on Dec 29, 2010 |
# ? Dec 29, 2010 02:11 |
|
devilmouse posted:Capitalize your M? code:
kill your idols fucked around with this message at 02:40 on Dec 29, 2010 |
# ? Dec 29, 2010 02:25 |
|
kill your idols posted:
I don't have access to OpenIndiana man pages right now, but I'd guess that their version of dd doesn't expand size suffixes on the block size. Just use bs=4194304.
|
# ? Dec 29, 2010 03:01 |
|
nick@openindiana:/vault# dd if=/dev/zero of=/vault/test/zerofile.000 bs=4194304 count=10000 10000+0 records in 10000+0 records out Seemed like it worked, but no write speed was shown unless I bring up zpool iostat. I thought it would display the speed after the test.
|
# ? Dec 29, 2010 03:14 |
|
dd isn't a speed test; it's just a raw data-copying utility. What you're doing there is taking input from /dev/zero and writing it to a file called /vault/test/zerofile.000 with a block size of 4 MiB, 10000 times over. It doesn't care how fast it goes. You'd need something separate to monitor that.
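When a dd build doesn't print a rate, a rough sequential-write sketch is to time the run yourself. Paths and counts here are illustrative, and this uses GNU date's %N nanosecond field, so on Solaris you'd need a different timer:

```shell
# Time a sequential write by hand when dd reports no rate.
# bs is spelled out in bytes since some dd builds don't expand "4M".
start=$(date +%s%N)
dd if=/dev/zero of=/tmp/zerofile.000 bs=4194304 count=25 2>/dev/null
end=$(date +%s%N)
bytes=$((4194304 * 25))                      # 100 MiB written
elapsed_ms=$(( (end - start) / 1000000 ))
[ "$elapsed_ms" -gt 0 ] || elapsed_ms=1      # guard against 0 ms on very fast disks
echo "wrote $(( bytes / 1048576 )) MiB in ${elapsed_ms} ms"
rm -f /tmp/zerofile.000
```

Dividing MiB by seconds gives the throughput figure the newer dd builds print for you.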
|
# ? Dec 29, 2010 03:17 |
|
I'm new to this whole DAS/NAS server thing, and I'm still confused. What I want to do is create a digital "archive" of computer software made up mostly of CD images. It's going to require 8TB of hard drive space, and more in the future. I want to store them in a way that's cost-effective, yet reliable, so the data lasts as long as possible. According to the OP, a NAS with RAID-5 or RAID-Z would be ideal, but I'm still waffling between buying a pre-built one or rolling my own. Now the questions: 1) What are the major advantages of pre-built NAS arrays versus building your own? 2) Can I set up multiple pre-built NAS arrays as one RAID-5 array (i.e. putting two 4TB MyBook World NASes together in a RAID-5 configuration)? 3) Is RAID-Z only a solution for DIY NAS systems? Charles Martel fucked around with this message at 07:54 on Dec 29, 2010 |
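For sizing either option, the single-parity math is the same for RAID-5 and single-parity RAID-Z: one disk's worth of capacity goes to parity. A sketch with hypothetical drive counts and sizes:

```shell
disks=5        # hypothetical drive count
size_tb=2      # hypothetical per-drive capacity in TB
# RAID-5 and single-parity RAID-Z both spend one disk on parity
usable_tb=$(( (disks - 1) * size_tb ))
echo "${disks}x${size_tb}TB in RAID-5/RAID-Z gives ${usable_tb}TB usable"
```

So hitting the 8TB target takes five 2TB drives under single parity, not four.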
# ? Dec 29, 2010 03:41 |
|
kill your idols posted:Just got done putting together my OpenIndiana build, but I can't run an internal benchmark? I've used the command before, but I get some kind of error. What the gently caress did I do wrong?
|
# ? Dec 29, 2010 13:12 |
|
G-Prime posted:dd isn't a speed test. It's just a raw data output utility. What you're doing there is taking input from /dev/zero and outputting it to a file called /vault/test/zerofile.000, with a block size of 4 megs, and doing that 10000 times. It doesn't care how fast it is. You'd need something separate to monitor that. Using dd in this fashion is a pretty basic sequential-throughput test. He's probably expecting dd to output the average data rate of the transfer like it does on OS X, Linux, and FreeBSD. The version of dd they ship must be really old.
|
# ? Dec 29, 2010 14:51 |
|
I'll agree with you on that. It's just not the intended use of dd. There's very little reason to care how fast the write is done for the traditional usage of the command.
|
# ? Dec 29, 2010 16:07 |
|
Well, running a NAS off of Windows Server 2008 R2 Standard sure is... excessive. But it works. Volume Shadow Copy Service is good, Dynamic Disks RAID 5 is functional enough, and it supports USB 3.0 for my external drive (I'll take double the speed for config/data backups any day). I lost having either dual parity or hot swap, as Dynamic Disks support neither and my mainboard doesn't even have fakeRAID, but I figure that besides the WD Green series, I've only had two drives die in ~15 years of enthusiast computing, so I'll take the one-in-a-billion chance that I lose two array drives and an external simultaneously. Windows Server takes up a huge amount of disk space compared to Ubuntu, ~24 GB versus ~2.5 GB, but that's likely in part because it's chock-full of enterprise role installers. I haven't yet fooled around with Hyper-V, but that's neither here nor there. If anyone wants to fool around with it, I got my license through DreamSpark. I signed up with my alumni .edu e-mail, downloaded an ISO, clicked to generate a license key, and I was done.
|
# ? Dec 29, 2010 16:29 |
|
I've got a Supermicro X7SPA-HF-O (6 onboard SATA ports) with an Atom D510 out for delivery right now. Can't wait to drop this baby into my unRAID system, replace my old AMD 2800+ that only has PCI, and enjoy my lower electric bill.
|
# ? Dec 29, 2010 20:14 |
|
Hey guys, I am looking for a backup solution for my small business. Currently we use tapes, which we have to switch out every day. I find this inefficient, as someone cannot be at each location every day. I am looking for software that can back up an image of an entire server and then upload that image automatically to a specified network location. I have heard Acronis is good, but I don't want to lay down 500+ bucks without knowing what I am purchasing. Oh yeah, cost is a concern for this business, by the way; my hands are tied on that.
|
# ? Dec 29, 2010 23:16 |
|
Are there any web interfaces for seeing the status of my RAID arrays and LVM volumes?
|
# ? Dec 29, 2010 23:24 |
|
NeuralSpark posted:Using Bonnie I got the following benchmarks of a 5-disk Hitachi HDS721010CLA332 raidz1-0 with the included hardware: code:
I'll have to research it some more to see if this is any good before I move any of my data over, or go with another solution (FreeBSD). ZFS is interesting, though. Maybe a beefier chip and some more RAM will boost results.
|
# ? Dec 30, 2010 06:16 |
|
Bardlebee posted:Hey guys I am looking for a backup solution for my small business. Currently we use tapes, which we have to switch out every day. I find this inefficient as someone cannot be at each location every day. I am looking for a software that can basically backup an image of an entire server, and then upload that image automatically to a specified network location. How many servers, how many locations? How much data do the servers hold? How much data is added or changed on a daily basis? Without that kind of info it's pretty hard to make a recommendation of any kind. One thing I can say is that taking a full image of a server and uploading it somewhere on a daily basis will probably prove impractical unless you have very small servers.
|
# ? Dec 30, 2010 07:02 |
|
So what's the consensus on btrfs? It has no fsck or finished RAID capabilities yet, does it?
|
# ? Dec 30, 2010 17:08 |
|
bob arctor posted:How many servers, how many locations? how much data do the servers hold? how much data is added or changes on a daily basis? Sorry, I meant on a weekly or even bi-weekly basis. Just two locations, three servers: two servers at one location, one at another. None of them has more than 100 gigs of data. Not much changes on a daily basis besides the contents of a few Excel files and a SQL database. What do you think?
|
# ? Dec 30, 2010 18:16 |
|
Hey Storage thread! I have a project popping up at work and I wanted to run some ideas past this thread. We are a non-profit community access TV station, and members can come in and use one of our five iMacs to sit down with Final Cut Pro and edit their poo poo. With the current system, we have tons of ~200GB FireWire 800 hard drives our members request from the back, and we go fetch them and plug them in. These people are storing hundreds of hours of precious work and unedited footage on portable drives. This violates a pretty basic backup rule: portable storage is not a good backup. Naturally, as a computer guy, I think "Oh, we should just set up an 8TB RAID-1 or something on a $400 computer that is just a motherboard, a 10/100/1000 network card, and a bunch of high-volume hard drives, running Ubuntu and Samba." Is this a blatantly bad idea for any reason? Has anyone ever set up a NAS/SAN-type thing for Final Cut Pro before? Are there things I'm not accounting for, like would it lag horribly in Final Cut and not be worth the effort? Ideally they'd edit their projects straight off the share to simplify things (try explaining to a bunch of 70-year-olds how to push their project onto the server when they are done). VVVVV: Nice! Found some examples of people saying that a proper FreeNAS setup makes FCP editing a breeze; this is promising. a cyberpunk goose fucked around with this message at 19:37 on Dec 30, 2010 |
# ? Dec 30, 2010 18:50 |
|
Mido posted:Naturally, as a computer guy, I think "Oh well we should just set up a 8TB RAID-1 or something on a $400 computer that is just a motherboard, 10/100/1000 network card and a bunch of high volume hard drives, running ubuntu & samba" I would ask in the Openfiler or Freenas forums. http://sourceforge.net/apps/phpbb/freenas/index.php http://www.openfiler.com/community/forums
|
# ? Dec 30, 2010 19:09 |
|
Keep in mind that if you want more than one person working on files on the iSCSI LUN at a time, you need to use Apple's Xsan or some other filesystem that expects multiple users.
|
# ? Dec 30, 2010 19:48 |
|
Is the Samsung Spinpoint F3 HD103SJ appropriate for use in a hardware RAID situation? The OP talks about enterprise vs. consumer drives, but the post is a few years old, so I don't know if it reflects the current state of things.
|
# ? Dec 30, 2010 20:31 |
|
Are there any good PCI SATA RAID controllers I should look for? I've got an old PIII Tualatin (Asus TUSL2-C) motherboard with a 1.0GHz P3 and 512MB of RAM. It's old, but relatively low-power and never breaks. Basically I want to see if I can get some configuration working for a media server. I'd like some data protection, hence the RAID, but it doesn't need to be high-performance since there'll only be one or two users max. Newegg lists a bunch of Syba-brand controllers; are they any good?
|
# ? Dec 30, 2010 21:49 |
|
SopWATh posted:Are there any good PCI SATA RAID controllers I should look for? I've got an old PIII Tualatin (Asus TUSL2-C) motherboard with a 1.0GHz P3 and 512MB of ram. It's old, but relatively low power and never breaks. Tom's Hardware gave this one a positive review: http://cgi.ebay.com/HighPoint-RocketRAID-1640-SATA-RAID-PCI-card-4-channels-/150540835783?pt=LH_DefaultDomain_0&hash=item230ceedbc7 http://www.tomshardware.com/reviews/cheap-reliable-raid-5-storage-compared,832-5.html
|
# ? Dec 30, 2010 22:26 |
|
quadratic posted:Is the Samsung Spinpoint F3 HD103SJ appropriate for use in a hardware RAID situation? The OP talks about enterprise vs. consumer drives, but the post is a few years old, so I don't know if it reflects the current state of things. I'm using these drives on a 3ware 9650SE and have been for the past year or so without any issues at all. I have a battery backup module though, which helps (I had a couple of delayed-write errors while it ran its six-monthly BBU test). ruro fucked around with this message at 00:49 on Dec 31, 2010 |
# ? Dec 31, 2010 00:45 |
|
So, status update on Greyhole here. I've been running my fileserver under Win2008 Server, no RAID or anything, just a bunch of disks, and it has been working fine, except that I had 24 disks and a shitton of shares. I was planning on moving to WHS when the new version came out, but since MS decided to be retarded and remove Drive Extender from it, I no longer saw a reason to. I then found Greyhole, which is basically Drive Extender for Linux; it works basically the same way. You copy your files to a "landing zone", then Greyhole moves each file to a disk in your pool and puts a symlink in its place. To do this, Greyhole has a daemon that looks for new activity in the Samba log every 10 seconds. It also has redundancy the same way WHS has/had: you can choose to have the files in a share duplicated over two, or more, physical disks. To be honest, it was a bit tricky to set everything up at first. I am quite familiar with Linux; the problems arose mostly from some errors in the documentation and some misunderstandings on my part, but I've had it up and running for some time now, and it has been working flawlessly. I am running it under an install of Ubuntu Server 10.10 64-bit, but it should work with any version of Linux if I understand it correctly... The configuration is pretty straightforward: you define the Samba shares you want "greyholed" code:
code:
Removing a disk, in case it starts to go bad, is easy too: with a command you tell Greyhole to move everything off the affected disk. It'll spread the data out to your remaining disks, and when it's done you can safely remove the failing disk. Oh, and it reports free space correctly to Samba too. Right now my setup looks like this: code:
The one downside I can see right now is that it looks like a one-man project; who knows if his interest runs out in three months... Other than that, it's a great little piece of software if you don't want to commit to a ZFS pool or something like that, and a competent replacement for WHS if you need one.
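For a flavor of the setup described above, here's a minimal two-file sketch. The directive names are recalled from contemporary Greyhole versions and should be treated as assumptions; verify them against the project's own documentation before use:

```ini
# /etc/samba/smb.conf -- a share handed over to Greyhole ("landing zone")
[storage]
    path = /mnt/samba/storage
    vfs objects = greyhole
    dfree command = /usr/bin/greyhole-dfree

# /etc/greyhole.conf -- pool members and per-share redundancy
storage_pool_directory = /mnt/hdd1/gh, min_free: 10gb
storage_pool_directory = /mnt/hdd2/gh, min_free: 10gb
num_copies[storage] = 2
```

With num_copies set to 2, Greyhole keeps each file on two different pool disks, mirroring the WHS-style duplication the post describes.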
|
# ? Dec 31, 2010 00:57 |
|
Does anyone have any tips for troubleshooting a Nexenta/Solaris system? Everything has been running great for the last couple of years, but yesterday everything started going stupid slow. I think the system is slow because disk reads are slow: starting a new process (such as an ssh session or 'top', etc.) is slow as hell. When I managed to run top, I didn't see any weird processes burning through CPU. I ran zpool status and it said my storage pool was fine; my syspool had one corrupt file (because I had to hard-reboot the first time I ran into the issue). The corrupt file was an mrtg config which I never use. It all started yesterday while I was streaming a video file. Everything was running great, I paused the file for 10 minutes, and it wouldn't resume. I quickly found out that Samba and ssh weren't responding, so I rebooted, but it didn't help. tl;dr: Nexenta 2.0 system became slow overnight for no reason. I think it's I/O related. How can I confirm/diagnose?
|
# ? Dec 31, 2010 04:40 |
|
Thermopyle posted:Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM. Really good to hear it was successful. Cheers.
|
# ? Dec 31, 2010 06:00 |
|
vanjalolz posted:tl;dr: Nexenta 2.0 system became slow overnight for no reason. I think its I/O related. How can I confirm/diagnose?
|
# ? Dec 31, 2010 12:12 |
|
Combat Pretzel posted:Anyone of you running OpenSolaris/OpenIndiana in a VM as a file server? What sort of performance are you getting out of it using CIFS? Anyone? I might try it today or tomorrow. Right now I'm deciding between running Solaris as the host OS with VirtualBox (I haven't played with 4.0 yet...) providing the VMs to other OSes, or running ESXi as the host with Solaris for ZFS inside that. In other news: why would Win Vista have such poo poo read performance from a CIFS share (provided by the aforementioned Solaris box)? It writes to it over gigabit at 60-80MB/s but only reads at 10MB/s. The internal disk benchmarks are well above saturating gigabit speeds, but the poor Windows box can't read it for poo poo.
|
# ? Jan 1, 2011 14:28 |
|
Here's some data from my fiddling: - CPU is an Intel Core 2 Quad Q9450, RAM is 8GB DDR-800 with ECC (i.e. a little slower than non-ECC). - Host system is Windows 7 x64, hypervisor is VirtualBox 4. - Guest system is Solaris Express 11 with VBox guest additions. - The guest uses the e1000g virtual device with jumbo frames enabled, and is bridged to the host adapter, which is a real e1000g device (an Intel PRO/1000 PT). Jumbo frames are enabled in /kernel/drv/e1000g.conf, parameter MaxFrameSize set to 3 (16K frames). - Apparently both the virtual and physical adapters need to support jumbo frames for this to work. - Don't expect host-only networking to give any performance boost just because it uses virtual adapters. It doesn't do jumbo frames. - The VirtualBox bridge filter driver in Windows doesn't actually touch the network or host hardware when you're addressing the host adapter's IP address. So bridged mode is just fine. Without jumbo frames, I get 25-30MB/s. With jumbo frames, I get around 65-70MB/s. Reading, that is; haven't tried writing. That is from a single-disk zpool in the guest over CIFS to a single NTFS disk on the host. VirtualBox is a pain in the rear end though, since out of the box you have to manually run it or put it in Autostart. Since you need to run it as administrator to use raw disk access, you get the drat UAC prompt each boot (not an issue here, the box runs 24/7 anyway). Apparently there are open source tools that let you set up VBoxHeadless as a service under a privileged account. I was considering running Solaris as host, seeing as I did so for over three years until June 2010. But I became somewhat dependent on various Windows applications, and they behave relatively rear end in VirtualBox with a Windows guest. No idea why that is. Maybe the EPT and VPID I get with the upcoming Sandy Bridge might fix that. I shortly considered ESXi, until I found out that it's a Linux-based hypervisor and apparently doesn't allow me to use my actual hardware.
The solution I was looking for is a fully usable Windows box and a ZFS datastore in one machine. Right now I have 8GB of RAM; the test VM has 1.5GB assigned currently. The upgrade in the coming days will also bring 16GB of RAM, and I'll run the VM at 4GB then, for mighty ZFS cachin'. --edit: I also tried VMware. Jumbo frames don't work with its e1000g virtual device; they all get dropped. Then there's the vmxnet3 device, theoretically only available on ESXi, but editing the vmx file allows you to enable it. The VMware tools for Solaris actually ship the vmxnet3 driver, which however doesn't do jumbo frames on Solaris. You can't set the MTU, and VMware's documentation even says so. I ended up with 25-30MB/s that way, too. I guess that disqualifies ESXi, too. Combat Pretzel fucked around with this message at 01:53 on Jan 2, 2011 |
# ? Jan 2, 2011 01:35 |
|
frogbs posted:So I'm quickly running out of space on my iMac (I take a ton of photos and do a lot of video work). I've been thinking about getting a 4-bay FireWire 800 RAID enclosure and filling it up with 2TB drives in a RAID 5 or 10. So far I think I'm leaning towards the OWC Mercury Pro Qx2 filled with 4 Hitachi 2TB drives. Can anyone recommend any similar enclosures/solutions as an alternative to the OWC model? I'm not necessarily married to the idea of a FW800 device; I'd go gigabit if someone could provide a compelling solution. Any suggestions/thoughts? So I'm getting closer to pulling the trigger on the OWC enclosure; can anyone offer any thoughts on it or on any other OWC products?
|
# ? Jan 2, 2011 05:49 |
|
I just recently (today) got my OpenIndiana server back up and running after a hard drive failure, and I'm trying to resolve a few permanent data errors. The one in question is within a snapshot, so I can't just delete the file normally, and there is a clone of that snapshot (opensolaris-1), so I can't just destroy the snapshots... Here are my rpool snapshots and the error I described: code:
|
# ? Jan 2, 2011 08:13 |
|
Combat Pretzel posted:Here's some data from my fiddling: Interesting stuff. I ended up spending most of yesterday screwing around with it, and my results weren't far off from yours. ZFS ran at about half to two-thirds of the speed with Solaris installed as a guest under ESXi. Rather than deal with the hassle, I decided to just slap Solaris Express 11 on the metal and virtualize from VirtualBox. I don't have to run any Windows stuff, thankfully, just a few Linux/BSD instances to test stuff for work. I'm running it on a Xeon X3440 with 8GB of ECC on a Supermicro X8SI6-F, getting 80MB/s reads and 100MB/s writes to the CIFS share, which is plenty. Internal disk benchmarks on a 6x Samsung F4 raidz2 are between 350MB/s and 450MB/s. Everything seems to be up and running now and I'm just letting it run random break-in tests to make sure everything's fine. The only weird thing that's happened so far is a 3-beep tone in the middle of the night that woke me up. But there was nothing in the system or motherboard logs, so I'm half wondering if I dreamed it.
|
# ? Jan 2, 2011 13:45 |
|
|
|
tboneDX posted:I thought I would be able to mount the snapshot and remove the file manually, but that doesn't seem to be possible. This is more of an annoyance than anything, but I'd appreciate some help.
|
# ? Jan 2, 2011 15:50 |