|
Drevoak posted:The DS211J is $208 directly from Amazon; they sell out of them frequently, unfortunately. Western Digital has a mail-in rebate for their 2TB drive: get a $20 Visa rewards card. Getting the DS211J and two drives comes out to about $370.
|
# ? Jan 9, 2011 10:12 |
|
|
7K2000s are $10 off at the 'egg until tomorrow...ugh why won't they get any cheaper
|
# ? Jan 9, 2011 19:05 |
|
Methylethylaldehyde posted:The error rate on the disks is almost as high as the actual number of bits, so you can end up with unrecoverable errors on otherwise perfect disks during a rebuild. ...so I should use 1TB disks instead? Or use RAID-6 instead? I'm backing up images of computer software that I DON'T want to lose, along with me and my fiance's digital photo collection, documents, music, etc.
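Methylethylaldehyde's error-rate point can be put into rough numbers. A minimal sketch, assuming the common consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read; real-world rates vary:

```python
import math

# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a degraded RAID-5 array, i.e. reading every surviving
# disk end to end. The 1e-14 URE-per-bit figure is the common
# consumer-drive spec sheet number, treated here as a Poisson rate.
def rebuild_ure_probability(surviving_disks, disk_tb, ure_per_bit=1e-14):
    bits_read = surviving_disks * disk_tb * 1e12 * 8  # decimal TB -> bits
    return 1 - math.exp(-bits_read * ure_per_bit)

# 4-drive RAID-5: rebuild reads the 3 surviving disks in full
p_2tb = rebuild_ure_probability(3, 2)   # ~0.38
p_1tb = rebuild_ure_probability(3, 1)   # ~0.21
```

Under that assumption, a RAID-5 rebuild of 2TB disks hits a URE better than one time in three, which is the usual argument for RAID-6: its second parity stripe can repair a URE encountered mid-rebuild.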
|
# ? Jan 9, 2011 19:44 |
|
Charles Martel posted:...so I should use 1TB disks instead? Or use RAID-6 instead? I'm backing up images of computer software that I DON'T want to lose, along with me and my fiance's digital photo collection, documents, music, etc. Burn the photos off to DVD once a month. A RAID is not a backup, it's just data protection.
|
# ? Jan 9, 2011 19:58 |
|
Also, not just ANY DVDs: use DVD+Rs, ideally Imation or Taiyo Yuden media. And try to keep them away from water and high humidity. It may be worthwhile to add error-recovery files (PAR files, for example) to the backups.
|
# ? Jan 9, 2011 20:36 |
|
Charles Martel posted:I DON'T want to lose If you don't want to lose it, it needs to be backed up offsite. And not using DVDs.
|
# ? Jan 9, 2011 21:05 |
|
Charles Martel posted:...so I should use 1TB disks instead? Or use RAID-6 instead? I'm backing up images of computer software that I DON'T want to lose, along with me and my fiance's digital photo collection, documents, music, etc. Use a RAID locally, and pay for Carbonite/JungleDisk/whatever offsite/online system. It's pretty much the best us average home users can do.
|
# ? Jan 9, 2011 21:55 |
|
movax posted:Use a RAID locally, and pay for Carbonite/JungleDisk/whatever offsite/online system. It's pretty much the best us average home users can do. Is there an offsite/online system that does 12TB for an affordable price?
|
# ? Jan 10, 2011 01:10 |
|
Telex posted:Is there an offsite/online system that does 12TB for an affordable price? 12TB, shee-it. I know one of those providers has an "all the space you want" plan that's pretty cheap but caps speeds when downloading/uploading data to them; that's probably your best bet. Is, uh, all of that 12TB critical enough to require off-site backup?
|
# ? Jan 10, 2011 02:03 |
|
12TB is quite expensive to back up reliably. Market segmentation kicks in at that size: you're into managed-services territory, where providers charge a couple grand a month for that kind of storage as an "entry level" tier. If you look at what Amazon S3 charges for that much storage (not even counting bandwidth), you'll see it may be cheaper to write everything to a pile of hard drives annually and store them in a bank vault.
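Putting numbers on that comparison, as a back-of-envelope sketch; the $0.14/GB-month S3 rate and $100-per-2TB-drive price are assumed 2011-era ballparks, and bandwidth charges are ignored:

```python
# S3 storage fees vs. buying bare drives once a year, for 12 TB.
# Both prices are rough assumptions, not quoted rates.
S3_PER_GB_MONTH = 0.14
TB = 12

s3_yearly = TB * 1000 * S3_PER_GB_MONTH * 12   # ~$20,160 per year
drives_yearly = (TB // 2) * 100                # 6 x 2TB drives: ~$600
```

Roughly a 30x gap, which is why "drives in a bank vault" wins at this scale.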
|
# ? Jan 10, 2011 02:09 |
|
Sanity check: is this data worth lots of money, or is it simply personally important? People are reacting very strongly to the formatting you used.
|
# ? Jan 10, 2011 03:50 |
|
Power bill came in and reminded me that I hadn't messed with power configuration for my fileserver yet (I know). So, it's OpenSolaris: SunOS megatron 5.11 snv_134 i86pc i386 i86pc Solaris (I'm going to move to OpenIndiana once my Sandy Bridge stuff gets here and my old hardware shifts over to the server). I did the following in my /etc/power.conf: code:
Nothing accesses files during the day (at work), and the VMs I run are on a separate 2-disk mirror. I've got Windows boxes with shares on the 8-drive tank mapped, but for all practical purposes, there should be 0 I/O going on.
|
# ? Jan 10, 2011 05:23 |
|
movax posted:Power-bill came in and reminded me that I hadn't messed with power configuration for my fileserver yet... I'd be curious to see what your power usage is like. I just Kill-A-Watted my OS server (default power configuration) the other day, and it was about 90-100W under normal load. I only have a 4-drive main pool and a 2-drive system mirror, but I'm curious to see the difference. Also, my main processor is a Celeron, so that may affect things a bit.
|
# ? Jan 10, 2011 07:44 |
|
movax posted:I did the following in my /etc/power.conf:
|
# ? Jan 10, 2011 13:07 |
|
Combat Pretzel posted:Unless you're running a Nehalem or upwards, make sure it says cpupm enable poll-mode, or else it's going to generate a huge shitload of cross-calls and wake-ups of the CPU. gently caress, I knew it; I have a stupidly high number of genunix calls in powertop and assumed that was because I had VirtualBox running or similar. CPU is an AMD 240e, so at least it's new enough (Family 16) to do C- and P-state power scaling. I will make that change now, thanks. Any tips on checking whether drives are spinning down, other than waiting 30 minutes and then listening for drives spinning up at the server? powertop: code:
Can't wait to move all the hardware over to Intel in a few days (old E6600 + P5Q Pro going in, P45 chipset). tboneDX posted:I'd be curious to see what your power usage is like. I just kill-a-watted my OS server (default power configuration) the other day, and it was about 90-100W during normal load. I only have a 4-drive main pool, and a 2-drive system mirror, but I'm curious to see the difference. Also, my main processor is a celeron, so that may affect things a bit. It's so stupidly high. I built this back in late 2007/early 2008, and it was easily in excess of 200W idling. I think it's down to 130W or so now, with the CPU undervolted. It's attached through a UPS, so I can toss on a Kill-A-Watt and subtract the constant 20-30W draw of the UPS to figure out what it is now. Drives: 30GB SSD OS, 60GB SSD ZIL/L2ARC, 8x1.5TB 7200rpm, 2x250GB 7200rpm. movax fucked around with this message at 15:47 on Jan 10, 2011 |
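Translating those wattages into money, as a quick sketch with an assumed $0.12/kWh residential rate (actual rates vary by region):

```python
# Yearly electricity cost of a box that runs 24/7 at a constant draw.
# The $0.12/kWh figure is an assumed ballpark, not a quoted rate.
def yearly_cost(watts, dollars_per_kwh=0.12):
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * dollars_per_kwh

cost_95w  = yearly_cost(95)    # ~$100/yr for the ~95 W server above
cost_130w = yearly_cost(130)   # ~$137/yr for the undervolted 130 W box
```

So each 10 W shaved off a 24/7 server is worth roughly $10 a year at that rate.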
# ? Jan 10, 2011 15:44 |
|
I bought a Mac mini with a Drobo after losing 8TB of data to a silly mdadm/LVM-related fuckup after a drive failure. As expected, even the 2nd-gen Drobo S is still extremely slow compared to all the other solutions, but I just don’t have the time, energy or enthusiasm to mess with Linux/mdadm/LVM anymore. Snow Leopard Server is quite nice so far and has saved me a lot of time. 5x WD20EADS, dual-disk redundancy, FW800: slow, slow, slow and overpriced, but I had the whole setup done within 30 minutes, and power consumption is down to 24W idle (Mac + Drobo).
|
# ? Jan 15, 2011 13:17 |
|
I think I have a couple of morse code keys around here that might transfer your data a bit faster, if you're interested. Any ideas on rebuild time?
|
# ? Jan 15, 2011 17:31 |
|
movax posted:gently caress I knew it, I have a stupidly high number of genunix calls in powertop, assumed that was because I had VirtualBox running or similar. CPU is a AMD 240e, so at least it's new enough (Family 16) to be able to do C and P state power scaling. quote:I will make that change now then, thanks. Any tips on checking if drives are spinning down other than waiting 30 minutes, then listening for drives spinning up at the server? quote:
My C2Q with Solaris on bare metal got around 200 wake-ups during idle when I still ran it as my main box, using poll-mode naturally. The first implementations of event-mode made it shoot up to 2500 wake-ups, but they noticed those hyperfast reactions and throttled it a little. With pre-Nehalems it was still over the top, though. quote:Can't wait to move all the hardware over to Intel in a few days (old E6600 + P5Q Pro going in, P45 chipset). Also, if you're using SATA, you can speed things up a little by forcing the AHCI driver to use MSI: add set ahci:ahci_msi_enabled = 1 to /etc/system. I got assurances two years ago that it's fine and stable, and it also ran well here on my ICH9 system. Yet the project page still lists it as in development, and it isn't enabled by default. To see if it worked, run pfexec mdb -k and then enter ::interrupts; it should say MSI somewhere on the ahci_intr line. Combat Pretzel fucked around with this message at 19:27 on Jan 15, 2011 |
# ? Jan 15, 2011 19:21 |
|
eames posted:I bought a Mac mini with a Drobo after losing 8TB of data to silly mdadm/LVM-related fuckup after a drive failure.
|
# ? Jan 15, 2011 19:48 |
|
Star War Sex Parrot posted:I really, really want to do this setup. I already have the Macs and ~40TB in spare WD enterprise drives. I just can't get over the cost of the Drobo.
|
# ? Jan 15, 2011 21:42 |
|
Combat Pretzel posted:If the server has idled a while, you just need to run pfexec format. It'll touch all disks and you'll hear them spin up, if they were powered down. Solaris can power down disks, it worked here in the past. quote:E6600 is also pre-Nehalem, you'd still need to set poll-mode. You'll also not get C2 and deeper power states with pre-Nehalem CPUs. This is not going to change either, based on what the various Intel people working on it said on the OpenSolaris forums, before Oracle moved the project back into secrecy. There's no intention to make it work on older platforms. Argh. I am using SATA through LSI 1068E controllers, so I will try that out. My dmesg is currently littered with ioapic/pcplusmp messages every minute (literally, my message logs have grown to gigs in size); I think it might be from some APIC issue or similar. Might try out the Intel motherboard to see what happens, and I guess I can toss the CPU power management over to poll-mode. Going to BSD+ZFS is getting more and more tempting...
|
# ? Jan 16, 2011 05:30 |
|
Jonny 290 posted:I think I have a couple of morse code keys around here that might transfer your data a bit faster, if you're interested. Reviews mention 6 hours, but apparently it depends on how much actual data is stored on the volume. I just hope I’ll never find out. quote:Why not look at one of the synology, thecus, netgear, or qnap models? Yeah, if performance is a factor, stay away from the Drobos still. The larger "enterprise drobos" (haha) look really nice, but the prices are just plain outlandish ($3k+). I just need my NAS to saturate 802.11n and play 1080p simultaneously, so the Drobo’s performance didn’t bother me too much.
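That workload requirement pencils out to fairly modest throughput; a rough sketch, where the link rate and stream bitrate are assumed ballpark figures rather than measurements:

```python
# Throughput budget for "saturate 802.11n and play 1080p at once".
wifi_real_mbps    = 120   # decent real-world 802.11n TCP throughput
stream_1080p_mbps = 40    # worst-case 1080p Blu-ray-remux bitrate

needed_MBps = (wifi_real_mbps + stream_1080p_mbps) / 8   # 20 MB/s
```

Even a slow array comfortably clears 20 MB/s on sequential reads, which is why the Drobo's poor benchmark numbers don't matter much for this use case.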
|
# ? Jan 16, 2011 10:28 |
|
movax posted:It doesn't seem to be working sadly, SMB browsing is still instantaneous...Windows mounting SMB shares as network drives shouldn't keep generating I/O reqs, should it? Once you manage to get it working, you should consider disabling last-access times, too. If the data resides in the ARC due to earlier access, ZFS would still need to spin up the disks just to update the access times. Disabling them breaks things that depend on access times, though, so if you intend to run make to build software, make sure the filesystem you're doing it on keeps last-access times enabled. movax posted:Argh. I am using SATA, through LSI 1068E controllers, I will try that out. My dmesg is currently littered with ioapic/pcplusmp messages every minute (literally my message logs have grown to gigs in size). I think it might be from some APIC issues or similar? Might try out the Intel motherboard to see what happens, guess I can toss the cpu powerm over to poll-mode.
|
# ? Jan 16, 2011 13:28 |
|
Combat Pretzel posted:I don't think so. I have a virtual machine with Solaris Express 11 running as virtual file server and the shares are mounted on the host 24/7. The disks don't seem to get touched, if I'm not perusing them, because the host shuts them down after a while (they're used in the VM as raw disks, but no direct hardware access yet, since VBox doesn't do that (yet)). I think I will try to attack this problem again once I get OpenIndiana installed, system migrated over to the Intel HW, add the SSD I bought months ago for L2ARC/ZIL. I was chatting with some fellow storage geeks over the weekend though, and all of us still suffer minor, niggling problems no matter what (one guy uses hardware RAID, the other does soft via mdadm, another is ZFS all the way). Thinking about brewing our own storage controller for people that just want a ton of SATA ports to throw consumer disks at, based on Silicon Image or Marvell PCIe SATA controllers/multipliers. Motherboard SATA ports from the chipset/decent controllers don't seem to cause problems, so why not just use more of those same chips?
|
# ? Jan 16, 2011 23:26 |
|
movax posted:Thinking about brewing our own storage controller for people that just want a ton of SATA ports to throw consumer disks at, based on Silicon Image or Marvell PCIe SATA controllers/multipliers. Motherboard SATA ports from the chipset/decent controllers don't seem to cause problems, so why not just use more of those same chips? The first six ports don't use SI or Marvell or anything; they're built into the motherboard chipset. Motherboards with more than six ports have an additional controller, usually Marvell, that isn't supported very well in anything. It's all about the drivers: if somebody wants to sit down, make a chipset, and write great drivers for everything, more power to them. That LSI 1068E is great in Solaris, but not so great in Linux or BSD. There's an 8-port Marvell card that's great in Linux and BSD. And of course everything works with Windows. I'm not sure why Intel doesn't jump in and start making ICH cards; perhaps it's technically not possible, but I don't understand the architecture well enough to say.
|
# ? Jan 16, 2011 23:53 |
|
FISHMANPET posted:The first six ports don't use SI or Marvell or anything, they're built into the motherboard chipset. Motherboards that have more than 6 ports have an additional controller, usually Marvell, that isn't supported very well in anything. It's all about the drivers, if somebody wants to sit down and make a chipset and write great drivers for everything, more power to them. That LSI 1068E is great in Solaris, but not so great in Linux or BSD. There's an 8 port Marvell card that's great in Linux and BSD. And of course everything works with Windows. Yes, I know the first <x> ports are from the ICH/PCH, and as of right now it's not possible to use them standalone, which kinda sucks, because they are awesomely well supported. I'm going to start exploring support for the SiI and Marvell chips; I know Backblaze used the SiI chips to great effect. I'm also interested in seeing the effect of FIS-based-switching port multipliers on performance when attached to mechanical consumer drives. My 1068E is terribly useless in Solaris when it comes to working SMART support, which is unfortunate. It was working in an older version; hopefully support returns in OpenIndiana, since I have another one sitting in a box awaiting new drives.
|
# ? Jan 17, 2011 00:00 |
|
movax posted:My 1068E is being terribly useless in Solaris when it comes to working SMART support, which is unfortunate. It was working in an older version, hopefully it returns in OpenIndiana. Hopefully it does, I have another one sitting in a box awaiting new drives. Isn't the SMART thing just because Solaris doesn't support SMART very well? Which seems really stupid to not support SMART in an OS that is otherwise so perfect for storage.
|
# ? Jan 17, 2011 00:05 |
|
FISHMANPET posted:Isn't the SMART thing just because Solaris doesn't support SMART very well? Which seems really stupid to not support SMART in an OS that is otherwise so perfect for storage. Didn't Google prove that SMART wasn't that great? Fake Edit: http://storagemojo.com/2007/02/19/googles-disk-failure-experience/ Kinda.
|
# ? Jan 17, 2011 00:48 |
|
Forgot what the state of SMART is in Solaris, but I remember it being a clusterfuck. SATA devices are treated like SAS devices further up the storage stack, which works pretty well for regular usage. But nothing beyond general disk operations gets translated; SMART-specific commands don't, because translation between the SAS and SATA equivalents isn't implemented.
|
# ? Jan 17, 2011 23:40 |
|
I just set up my first Synology DS211J for a customer and have a DS410 on order for myself. WOOOO! What a slick little unit that DS211J has been. I transferred some data over to it as a test and it seemed nice and snappy through its built-in file management page. I'm hoping we can keep the customer using that rather than dicking around with Windows file sharing. It's nice and braindead, and anything that reduces customer calls is a win for us. Build time for a 1TB array (2 drives mirrored) was about 5 hours using the Synology Hybrid RAID setting. I'm sure the bigger models that have more RAM will make this a faster process, but it just sat on a desk and purred away without asking for any effort on my part. So bye bye Buffalo Trashstations, Hellooooooo Synology. Please don't tell me terrible things are about to happen; let me dream for just a little while that this NAS isn't going to end in tears like the other ones we've tried.
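For what it's worth, that build time implies a sync rate right around a single drive's sequential write speed (a quick sketch from the figures above):

```python
# Implied sync rate for the reported ~5 hour, 1 TB mirror build.
tb, hours = 1, 5
rate_MBps = tb * 1e6 / (hours * 3600)   # ~56 MB/s
```

That suggests the build is mostly disk-bound rather than RAM- or CPU-bound, so the bigger models may not speed it up as much as hoped.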
|
# ? Jan 21, 2011 01:35 |
|
CuddleChunks, have fun when it sets all your volumes read-only without any recovery options.
|
# ? Jan 21, 2011 01:40 |
|
Synology just announced a 3.1 beta, cleaning up the MSIE-only poo poo from Surveillance Station seems the biggest improvement:quote:DSM 3.1 evaluates the efficiency of various operations. Synology DiskStation is the very first to support the sharing of print, fax and scan on a multifunction printer. Multiple administrators can now log on the same Synology DiskStation. In addition, File Browser supports preview of photos, videos, PDF, and Office documents, along with various searching criteria, making searches more precise and quickly. Getting the results has become timelier by performing database index to file names. http://www.synology.com/enu/support/beta/Synology_DSM3.1_2011.php
|
# ? Jan 21, 2011 04:37 |
|
Speaking of Synology, does anyone have any experience with the DS1010+ or DS1511+? Thinking of getting one to replace my thrown-together Linux mdadm home file server. From what I can tell the 1511+ just has a faster processor, but I'm curious if there is anything else different or anything else I should know before getting one.
|
# ? Jan 21, 2011 22:25 |
|
I have the 1511. The reason I picked the 1511+ instead of the 1010+ is that the 1511+ was the only one available at the time; the only difference is the processor, from what I can tell. I love it because I use it as both a NAS and a SAN with VMware and Hyper-V. The only negative thing I've come across is the inability to rename the admin account. Other than that, I get 105 MB/s transfer rates, give or take, on average with RAID 5.
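A transfer rate around 105 MB/s is essentially a saturated gigabit link; a quick sketch (the 10% protocol-overhead figure is a rough assumption):

```python
# Gigabit Ethernet ceiling vs. the reported ~105 MB/s transfer rate.
raw_MBps = 1000 / 8              # 125 MB/s theoretical line rate
practical_MBps = raw_MBps * 0.9  # minus TCP/IP + SMB/iSCSI overhead, roughly
```

In other words, at that speed the wire is the bottleneck, not the RAID 5 array.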
|
# ? Jan 22, 2011 01:06 |
|
welp, not knowing what I'm doing when moving drives around has scared the poo poo out of me about losing all my dataz. Is there any particular brand of external eSATA JBOD enclosure out there that is better than the rest, or are they all pretty much the same? I think I want a 5-bay setup so I can JBOD five 2TB disks as an offline backup of my stuff and be able to re-work the main ZFS raid I'm running now.
|
# ? Jan 22, 2011 18:58 |
|
Synology time! I have pretty good experience with the Synology units, having worked with the DS410j and the new DS411+. The question now: my father really likes the slick interface and wants one for himself. In the past, the only thing really holding the DS410j back was the limited RAM. Start up a torrent, load up the audio server... really anything besides normal SMB traffic and it lagged with maxed-out RAM. The DS411+, on the other hand, floats at 150-200MB of RAM usage and toots along just fine. He wants a NAS, but doesn't want to explode the budget either. Units in mind are the DS211 priced at $289 (1.6GHz, 256MB) or the DS411j (1.2GHz, 128MB). It's either a 4-bay that is slightly more expensive and could hold lots of storage in RAID 5, or a 2-bay that is limited to RAID 1 but gets good transfer rates with background activity. Does the 400MHz clock lead over the old DS410j help the DS411j even with the limited RAM? Besides being one backup method for family photos (the other being a hard drive at the bank), he will eventually use it to host media to stream to his iPhone or a WD TV Live sort of box. It would be great to have both models on hand to see if they can handle what I'm asking, but I don't. Could anyone with one of those models chime in on what sorts of limits crop up?
|
# ? Jan 23, 2011 07:00 |
|
CuddleChunks posted:
Don't use Hybrid RAID on two drives instead of RAID 1. In fact, just don't use it at all. Buy drives that are the same size and use a normal RAID setting.
|
# ? Jan 26, 2011 04:36 |
|
Anyone given this attempt at ZFS + Linux a try yet? e: nvm, you still can't easily access ZFS volumes (yet); I think I'll re-roll the server with OpenIndiana. Maybe power management will work, who knows! (E6600/Conroes need poll-mode on Solaris, I think a poster here said earlier.) movax fucked around with this message at 05:32 on Jan 26, 2011 |
# ? Jan 26, 2011 05:29 |
|
movax posted:Anyone given this attempt at ZFS + Linux a try yet? I looked at it and skipped over it because it doesn't have a full implementation of the ZFS Posix Layer, which I gather means you can't just mount a pool. I've seen people hack together things with that plus LVM, but it looked way too finicky for something you would want to rely on.
|
# ? Jan 26, 2011 05:33 |
|
|
movax posted:Anyone given this attempt at ZFS + Linux a try yet? er... 1.3 How do I mount the file system? You can’t… at least not today. While we have ported the majority of the ZFS code to the Linux kernel that does not yet include the ZFS posix layer. The only interface currently available from user space is the ZVOL virtual block device. Yeah, not quite yet. Nice thought though.
|
# ? Jan 26, 2011 05:37 |