Bryne
Feb 13, 2008

The Treachery of Forums

Drevoak posted:

The DS211J is $208 directly from Amazon; they sell out of them frequently, unfortunately. Western Digital has a MIR on their 2TB drive: you get a $20 Visa rewards card. Getting the DS211J and 2 drives comes out to about $370ish.
Is it OK to put Greens in a setup like this? I'm looking at doing basically the exact same thing, but people bitch about the power-saving features on those drives.


movax
Aug 30, 2008

7K2000s are $10 off at the 'egg until tomorrow... ugh, why won't they get any cheaper :(

Charles Martel
Mar 7, 2007

"The Hero of the Age..."

The hero of all ages

Methylethylaldehyde posted:

The unrecoverable read error rate on consumer disks (spec'd at around 1 in 10^14 bits) is on the same order as the number of bits in a big array, so you can end up with unrecoverable errors on otherwise perfect disks during a rebuild.

Also: a big pile of older drives, one goes bad, you start a rebuild, and the thrashing they go through during the rebuild manages to cause another drive to die. Bye bye, data!

...so I should use 1TB disks instead? Or RAID-6? I'm backing up images of computer software that I DON'T want to lose, along with my fiance's and my digital photo collection, documents, music, etc.

NeuralSpark
Apr 16, 2004

Charles Martel posted:

...so I should use 1TB disks instead? Or RAID-6? I'm backing up images of computer software that I DON'T want to lose, along with my fiance's and my digital photo collection, documents, music, etc.

Burn the photos off to DVD once a month. RAID is not a backup; it's just data protection.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Also, not just ANY DVDs: DVD+Rs, ideally Imation / Taiyo Yuden media. And try to keep them away from water and high humidity. It may be worthwhile to add error-recovery files (PAR files, for example) to the backups.
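
For instance, with the par2 command-line tool (a sketch; the directory and file names here are made up):
code:
$ cd photos
$ par2 create -r10 photos.par2 *.jpg   # generate ~10% redundancy recovery data
$ par2 verify photos.par2              # later: check the set for damage
$ par2 repair photos.par2              # reconstruct damaged files if verify fails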

Zhentar
Sep 28, 2003

Brilliant Master Genius

Charles Martel posted:

I DON'T want to lose

If you don't want to lose it, it needs to be backed up offsite.

And not using DVDs.

movax
Aug 30, 2008

Charles Martel posted:

...so I should use 1TB disks instead? Or RAID-6? I'm backing up images of computer software that I DON'T want to lose, along with my fiance's and my digital photo collection, documents, music, etc.

Use a RAID locally, and pay for Carbonite/JungleDisk/whatever offsite/online system. It's pretty much the best we average home users can do.

Telex
Feb 11, 2003

movax posted:

Use a RAID locally, and pay for Carbonite/JungleDisk/whatever offsite/online system. It's pretty much the best we average home users can do.

Is there an offsite/online system that does 12TB for an affordable price?

movax
Aug 30, 2008

Telex posted:

Is there an offsite/online system that does 12TB for an affordable price?

12TB, shee-it. I know one of those guys has an "all the space you want" plan that's pretty cheap but puts speed caps on downloading/uploading data; that's probably your best bet. Is, uh, all of that 12TB critical enough to require off-site backup?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
12TB is quite expensive to back up reliably. Market segmentation bites you here: you quickly end up better off going to a managed services provider, where they'll charge you a couple grand a month for that kind of storage as an "entry level" tier. Look at what Amazon S3 charges for that much storage, not even counting bandwidth: at its current list prices of roughly $0.125-0.14/GB-month, 12TB comes to something like $1,500 a month, so it may well be cheaper to write everything to a bunch of hard drives annually and have them stored in a bank vault.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Sanity check: is this data worth lotsa :20bux: or is it simply personally important? People are reacting very strongly to the formatting you used.

movax
Aug 30, 2008

The power bill came in and reminded me that I hadn't messed with power configuration on my fileserver yet... (:wtc: I know). So, it's OpenSolaris:
SunOS megatron 5.11 snv_134 i86pc i386 i86pc Solaris

(I'm going to move to OpenIndiana once I migrate my old hardware into the server, when the Sandy Bridge stuff gets here.)

I did the following in my /etc/power.conf:
code:
movax@megatron:~$ cat /etc/power.conf
#
# Copyright 1996-2002 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   "@(#)power.conf 2.1     02/03/04 SMI"
#
# Power Management Configuration File
#

device-dependency-property removable-media /dev/fb
autopm                  enable
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@0,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@1,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@2,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@3,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@4,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@5,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@6,0 30m
device-thresholds /pci@0,0/pci1022,9603@2/pci15d9,a580@0/sd@7,0 30m

autoS3                  default
cpu-threshold           1s
# Auto-Shutdown         Idle(min)       Start/Finish(hh:mm)     Behavior
autoshutdown            30              9:00 9:00               noshutdown
cpupm  enable
Got the device paths from the 'format' command; hopefully they persist across boots? Main question: how can I confirm this is working? Plug in a Kill-A-Watt and check it every now and then?

Nothing accesses files during the day (I'm at work), and the VMs I run are on a separate 2-disk mirror. I've got Windows boxes with shares on the 8-drive tank mapped, but for all practical purposes there should be zero I/O going on.

tboneDX
Jan 27, 2009

movax posted:

The power bill came in and reminded me that I hadn't messed with power configuration on my fileserver yet...

I'd be curious to see what your power usage is like. I just Kill-A-Watted my OpenSolaris server (default power configuration) the other day, and it was about 90-100W under normal load. I only have a 4-drive main pool and a 2-drive system mirror, but I'm curious to see the difference. Also, my main processor is a Celeron, so that may affect things a bit.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

I did the following in my /etc/power.conf:
Unless you're running Nehalem or newer, make sure it says cpupm enable poll-mode, or else it's going to generate a huge shitload of cross-calls and CPU wake-ups.
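
In other words, the last line of the power.conf you pasted becomes:
code:
cpupm  enable poll-mode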

movax
Aug 30, 2008

Combat Pretzel posted:

Unless you're running Nehalem or newer, make sure it says cpupm enable poll-mode, or else it's going to generate a huge shitload of cross-calls and CPU wake-ups.

Fuck, I knew it; I have a stupidly high number of genunix calls in powertop and assumed that was because I had VirtualBox running or similar. The CPU is an AMD 240e, so at least it's new enough (Family 16) to do C- and P-state power scaling. I'll make that change now, thanks. Any tips on checking whether the drives are spinning down, other than waiting 30 minutes and then listening for drives spinning up at the server?

powertop:
code:
                        OpenSolaris PowerTOP version 1.2

C-states (idle power)   Avg     Residency       P-states (frequencies)
C0 (cpu running)                (12.5%)          800 Mhz        91.3%
C1                      3.2ms   (87.5%)         1600 Mhz        0.0%
                                                2100 Mhz        0.0%
                                                2400 Mhz        8.7%

Wakeups-from-idle per second: 704.4     interval: 5.0s
no ACPI power usage estimate available

Top causes for wakeups:
23.7% (167.0)               <kernel> :  ohci`ohci_handle_root_hub_status_change
18.1% (127.4)               <kernel> :  genunix`cv_wakeup
15.2% (107.0)                  sched :  <xcalls> unix`dtrace_xcall_func
14.2% (100.2)               <kernel> :  genunix`realitexpire
14.2% (100.2)               <kernel> :  genunix`clock
Seems like my CPU's already speedstep'd properly?

Can't wait to move all the hardware over to Intel in a few days (old E6600 + P5Q Pro going in, P45 chipset).

tboneDX posted:

I'd be curious to see what your power usage is like. I just Kill-A-Watted my OpenSolaris server (default power configuration) the other day, and it was about 90-100W under normal load. I only have a 4-drive main pool and a 2-drive system mirror, but I'm curious to see the difference. Also, my main processor is a Celeron, so that may affect things a bit.

It's so stupidly high. I built this back in late 2007/early 2008, and it was easily in excess of 200W idling. I think it's down to 130W or so now, with the CPU undervolted. It's attached through a UPS, so I can toss a Kill-A-Watt on it and subtract the UPS's constant 20-30W draw to figure out what it is now.

Drives: 30GB SSD OS, 60GB SSD ZIL/L2ARC, 8x1.5TB 7200rpm, 2x250GB 7200rpm.

movax fucked around with this message at 15:47 on Jan 10, 2011

eames
May 9, 2009

I bought a Mac mini with a Drobo after losing 8TB of data to a silly mdadm/LVM-related fuckup after a drive failure.

As expected, even the 2nd-gen Drobo S is still extremely slow compared to all the other solutions, but I just don’t have the time, energy or enthusiasm to mess with Linux/mdadm/LVM anymore. Snow Leopard Server has been quite nice so far and has saved me a lot of time.

5x WD20EADS, dual-disk redundancy, FW800:



Slow, slow, slow and overpriced, but I had the whole setup done within 30 minutes, and power consumption is down to 24W idle (Mac + Drobo).

Jonny 290
May 5, 2005



[ASK] me about OS/2 Warp
I think I have a couple of morse code keys around here that might transfer your data a bit faster, if you're interested.

Any ideas on rebuild time?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

Fuck, I knew it; I have a stupidly high number of genunix calls in powertop and assumed that was because I had VirtualBox running or similar. The CPU is an AMD 240e, so at least it's new enough (Family 16) to do C- and P-state power scaling.
I'm not sure what the state of AMD power saving is, but judging by your PowerTOP output, it's doing its job.

quote:

I'll make that change now, thanks. Any tips on checking whether the drives are spinning down, other than waiting 30 minutes and then listening for drives spinning up at the server?
If the server has idled for a while, just run pfexec format. It touches all the disks, and you'll hear them spin up if they were powered down. Solaris can power down disks; it worked here in the past.


quote:

code:
18.1% (127.4)               <kernel> :  genunix`cv_wakeup
15.2% (107.0)                  sched :  <xcalls> unix`dtrace_xcall_func
14.2% (100.2)               <kernel> :  genunix`realitexpire
You can't prevent genunix`clock, because the Solaris kernel still isn't tickless. But everything above it is a result of event-mode (PowerTOP itself uses DTrace but filters out its own tracing, so the dtrace xcalls above should be the event-mode machinery). You can set event-mode on pre-Nehalem and AMD CPUs, but it fucks with the system. Poll-mode currently works off the clock tick, AFAIK, and doesn't cause additional calls and wake-ups.

My C2Q with Solaris on bare metal got like 200 wake-ups at idle back when I still ran it as my main machine, using poll-mode, naturally. The first implementations of event-mode made that shoot up to 2,500 wake-ups; they noticed those hyperfast reactions and throttled it a little, but on pre-Nehalems it was still over the top.

quote:

Can't wait to move all the hardware over to Intel in a few days (old E6600 + P5Q Pro going in, P45 chipset).
The E6600 is also pre-Nehalem, so you'd still need to set poll-mode. You also won't get C2 and deeper power states with pre-Nehalem CPUs. This isn't going to change either, based on what the various Intel people working on it said on the OpenSolaris forums before Oracle moved the project back into secrecy: there's no intention to make it work on older platforms.

Also, if you're using SATA, you can speed things up a little by forcing the AHCI driver to use MSI. Add set ahci:ahci_msi_enabled = 1 to /etc/system. I got assurances two years ago that it's fine and stable, and it ran well here on my ICH9 system; yet the project page still lists it as in development, and it isn't enabled by default. To see if it worked, run pfexec mdb -k and enter ::interrupts; it should say MSI somewhere on the ahci_intr line.
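
A sketch of the whole sequence in shell form (exit mdb with $q):
code:
$ echo 'set ahci:ahci_msi_enabled = 1' | pfexec tee -a /etc/system
$ # reboot, then check which interrupt type the AHCI driver got:
$ pfexec mdb -k
> ::interrupts
  (look for MSI on the ahci_intr line)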

Combat Pretzel fucked around with this message at 19:27 on Jan 15, 2011

Star War Sex Parrot
Oct 2, 2003

eames posted:

I bought a Mac mini with a Drobo after losing 8TB of data to a silly mdadm/LVM-related fuckup after a drive failure.

As expected, even the 2nd-gen Drobo S is still extremely slow compared to all the other solutions, but I just don’t have the time, energy or enthusiasm to mess with Linux/mdadm/LVM anymore. Snow Leopard Server has been quite nice so far and has saved me a lot of time.

5x WD20EADS, dual-disk redundancy, FW800:



Slow, slow, slow and overpriced, but I had the whole setup done within 30 minutes, and power consumption is down to 24W idle (Mac + Drobo).
I really, really want to do this setup. I already have the Macs and ~40TB in spare WD enterprise drives. I just can't get over the cost of the Drobo. :(

what is this
Sep 11, 2001

it is a lemur

Star War Sex Parrot posted:

I really, really want to do this setup. I already have the Macs and ~40TB in spare WD enterprise drives. I just can't get over the cost of the Drobo. :(
I'd really recommend against getting a Drobo. Why not look at one of the Synology, Thecus, Netgear, or QNAP models?

movax
Aug 30, 2008

Combat Pretzel posted:

If the server has idled for a while, just run pfexec format. It touches all the disks, and you'll hear them spin up if they were powered down. Solaris can power down disks; it worked here in the past.
Sadly, it doesn't seem to be working; SMB browsing is still instantaneous... Windows mounting SMB shares as network drives shouldn't keep generating I/O requests, should it?

quote:

The E6600 is also pre-Nehalem, so you'd still need to set poll-mode. You also won't get C2 and deeper power states with pre-Nehalem CPUs. This isn't going to change either, based on what the various Intel people working on it said on the OpenSolaris forums before Oracle moved the project back into secrecy: there's no intention to make it work on older platforms.

Also, if you're using SATA, you can speed things up a little by forcing the AHCI driver to use MSI. Add set ahci:ahci_msi_enabled = 1 to /etc/system. I got assurances two years ago that it's fine and stable, and it ran well here on my ICH9 system; yet the project page still lists it as in development, and it isn't enabled by default. To see if it worked, run pfexec mdb -k and enter ::interrupts; it should say MSI somewhere on the ahci_intr line.

Argh. I am using SATA, through LSI 1068E controllers; I'll try that out. My dmesg is currently littered with ioapic/pcplusmp messages every minute (literally, my message logs have grown to gigs in size). I think it might be from some APIC issue or similar? Might try the Intel motherboard to see what happens; guess I can toss cpupm over to poll-mode.

Going to BSD+ZFS is getting more and more tempting...

eames
May 9, 2009

Jonny 290 posted:

I think I have a couple of morse code keys around here that might transfer your data a bit faster, if you're interested.

Any ideas on rebuild time?

Reviews mention 6 hours, but apparently it depends on how much actual data is stored on the volume. I just hope I’ll never find out.

quote:

Why not look at one of the Synology, Thecus, Netgear, or QNAP models?

Yeah, if performance is a factor, still stay away from the Drobos. The larger "enterprise Drobos" (haha) look really nice, but the prices are just plain outlandish ($3k+).

I just need my NAS to saturate 802.11n and play 1080p simultaneously, so the Drobo’s performance didn’t bother me too much. (Real-world 802.11n throughput is maybe 10-20 MB/s, and a 1080p stream is only a few MB/s, so even the Drobo has headroom there.)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

Sadly, it doesn't seem to be working; SMB browsing is still instantaneous... Windows mounting SMB shares as network drives shouldn't keep generating I/O requests, should it?
I don't think so. I have a virtual machine with Solaris Express 11 running as a virtual file server, and the shares are mounted on the host 24/7. The disks don't seem to get touched unless I'm perusing them, because the host shuts them down after a while (they're used in the VM as raw disks, though without direct hardware access, since VBox doesn't do that yet).

Once you manage to get it working, you should consider disabling last-access times, too. If the data resides in the ARC due to earlier access, ZFS would otherwise still need to spin up the disks just to update the access times. It does break things that depend on access times, though, so if you intend to run make to build software, make sure the filesystem you're doing it on keeps access times enabled.
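
On ZFS that's a single property per filesystem (a sketch; 'tank/share' stands in for your actual dataset):
code:
$ pfexec zfs set atime=off tank/share
$ zfs get atime tank/share     # confirm the property took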

movax posted:

Argh. I am using SATA, through LSI 1068E controllers; I'll try that out. My dmesg is currently littered with ioapic/pcplusmp messages every minute (literally, my message logs have grown to gigs in size). I think it might be from some APIC issue or similar? Might try the Intel motherboard to see what happens; guess I can toss cpupm over to poll-mode.

Going to BSD+ZFS is getting more and more tempting...
I'd wait until you've tried the Intel board before making that decision. Sun had people from Intel itself working on the code, so Intel hardware should be much better supported. As I said, I had the fewest issues with my C2Q and ICH9-based motherboard, contrary to what I initially expected. Hardware support was sure as hell better than what FreeBSD had to offer, back then at least.

movax
Aug 30, 2008

Combat Pretzel posted:

I don't think so. I have a virtual machine with Solaris Express 11 running as a virtual file server, and the shares are mounted on the host 24/7. The disks don't seem to get touched unless I'm perusing them, because the host shuts them down after a while (they're used in the VM as raw disks, though without direct hardware access, since VBox doesn't do that yet).

Once you manage to get it working, you should consider disabling last-access times, too. If the data resides in the ARC due to earlier access, ZFS would otherwise still need to spin up the disks just to update the access times. It does break things that depend on access times, though, so if you intend to run make to build software, make sure the filesystem you're doing it on keeps access times enabled.

I think I'll attack this problem again once I get OpenIndiana installed, migrate the system over to the Intel hardware, and add the SSD I bought months ago for L2ARC/ZIL.

I was chatting with some fellow storage geeks over the weekend, though, and all of us still suffer minor niggling problems no matter what (one guy uses hardware RAID, another does software RAID via mdadm, another is ZFS all the way). Thinking about brewing our own storage controller for people who just want a ton of SATA ports to throw consumer disks at, based on Silicon Image or Marvell PCIe SATA controllers/multipliers. Motherboard SATA ports from the chipset/decent controllers don't seem to cause problems, so why not just use more of those same chips?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

movax posted:

Thinking about brewing our own storage controller for people who just want a ton of SATA ports to throw consumer disks at, based on Silicon Image or Marvell PCIe SATA controllers/multipliers. Motherboard SATA ports from the chipset/decent controllers don't seem to cause problems, so why not just use more of those same chips?

The first six ports don't use SiI or Marvell or anything; they're built into the motherboard chipset. Motherboards with more than six ports have an additional controller, usually Marvell, that isn't supported very well by anything. It's all about the drivers: if somebody wants to sit down, make a chipset, and write great drivers for everything, more power to them. That LSI 1068E is great in Solaris, but not so great in Linux or BSD. There's an 8-port Marvell card that's great in Linux and BSD. And of course everything works with Windows.

I'm not sure why Intel doesn't jump in and start making standalone ICH cards; perhaps it's technically not possible. I don't really understand the architecture well enough.

movax
Aug 30, 2008

FISHMANPET posted:

The first six ports don't use SiI or Marvell or anything; they're built into the motherboard chipset. Motherboards with more than six ports have an additional controller, usually Marvell, that isn't supported very well by anything. It's all about the drivers: if somebody wants to sit down, make a chipset, and write great drivers for everything, more power to them. That LSI 1068E is great in Solaris, but not so great in Linux or BSD. There's an 8-port Marvell card that's great in Linux and BSD. And of course everything works with Windows.

I'm not sure why Intel doesn't jump in and start making standalone ICH cards; perhaps it's technically not possible. I don't really understand the architecture well enough.

Yes, I know the first <x> ports come from the ICH/PCH, and as of right now it's not possible to use them standalone, which kinda sucks, because they are awesomely well supported. I'm going to start exploring support for the SiI and Marvell chips; I know Backblaze used the SiI chips to great effect. Also interested in seeing how FIS-based switching port multipliers affect performance when attached to mechanical consumer drives.

My 1068E is terribly useless in Solaris when it comes to working SMART support, which is unfortunate. It was working in an older release; hopefully it comes back in OpenIndiana. :( I have another one sitting in a box awaiting new drives.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

movax posted:

My 1068E is terribly useless in Solaris when it comes to working SMART support, which is unfortunate. It was working in an older release; hopefully it comes back in OpenIndiana. :( I have another one sitting in a box awaiting new drives.

Isn't the SMART thing just because Solaris doesn't support SMART very well? It seems really stupid not to support SMART in an OS that is otherwise so perfect for storage.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

FISHMANPET posted:

Isn't the SMART thing just because Solaris doesn't support SMART very well? It seems really stupid not to support SMART in an OS that is otherwise so perfect for storage.

Didn't Google prove that SMART wasn't that great?


Fake Edit: http://storagemojo.com/2007/02/19/googles-disk-failure-experience/

Kinda.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Forgot what the state of SMART is in Solaris, but I remember it being a clusterfuck. SATA devices are treated like SAS devices further up the storage stack, which works pretty well for regular usage. But there isn't anything beyond translation of general disk operations; the SMART-specific stuff is missing, because translation between the SAS and SATA equivalents isn't implemented.

CuddleChunks
Sep 18, 2004

I just set up my first Synology DS211J for a customer and have a DS410 on order for myself. WOOOO! What a slick little unit the DS211J has been. Transferred some data over to it as a test and it seemed nice and snappy through its built-in file management page. I'm hoping we can keep the customer using that rather than dicking around with Windows file sharing. It's nice and braindead, and anything that reduces customer calls is a win for us.

Build time for a 1TB array (2 drives, mirrored) was about 5 hours using the Synology Hybrid RAID setting. I'm sure the bigger models with more RAM make this faster, but it just sat on a desk and purred away without asking for any effort on my part. So bye bye Buffalo Trashstations, hellooooooo Synology.

Please don't tell me terrible things are about to happen; let me dream for just a little while that this NAS isn't going to end in tears like the other ones we've tried.

paradigmm
May 28, 2002

by Y Kant Ozma Post
CuddleChunks, have fun when it sets all your volumes read-only without any recovery options.

MrMoo
Sep 14, 2000

Synology just announced a DSM 3.1 beta; cleaning up the MSIE-only shit from Surveillance Station seems like the biggest improvement:

quote:

DSM 3.1 elevates the efficiency of various operations. Synology DiskStation is the very first to support sharing of print, fax and scan functions on a multifunction printer. Multiple administrators can now log on to the same Synology DiskStation. In addition, File Browser supports previews of photos, videos, PDFs, and Office documents, along with various search criteria, making searches more precise and faster. Results arrive more quickly thanks to a database index of file names.

Enterprises call for data availability and reliability. DSM 3.1 provides a workaround by allowing users to schedule synchronization of specific shared folders from one Synology DiskStation to another, providing an immediate recovery option and allowing documents to be shared seamlessly. In addition, Synology Hybrid RAID technology now supports two-disk fault tolerance. Furthermore, users can choose multiple destinations in network backup for stacked layers of protection, while Windows ACL support allows data to be restored along with its complex access-permission settings.

Synology is gearing up to meet the ever-increasing demand for mobile device support. The industry’s first AirPrint support allows users to wirelessly print documents from iPhone and iPad to any AirPrint-enabled printer connected to a DiskStation. Additionally, the iPad-native DS photo+ and DS audio applications are now available; their tailor-made UIs bring an intuitive hands-on experience, and gently shaking the iPad skips to the next song. Furthermore, the new DS file application allows iPhone users to download documents from, and upload them to, a Synology NAS server.

DSM 3.1 extends users’ multimedia experience with several overhauled applications. Download Station automates every process, from searching for downloadable files on the Internet to aggregating the latest torrents through RSS feeds. Audio Station 3 features an equalizer, a mini player for multitasking, personal playlists and editable ID3 tags. The personalized Photo Station allows every account user to host his or her own photo album and blog, and users can easily share the link to a photo album on Facebook, Twitter, and Plurk™.

Surveillance Station 5 now runs on Internet Explorer®, Firefox®, and Chrome®, and can detect motion, missing or foreign objects, camera occlusion and focus loss, and adjust image quality for detailed viewing. Mail Station 2 supports fetching from multiple email domains and multiple SMTP servers, offering centralized management of all email while protecting users’ privacy. Additionally, selected Synology DiskStations are VMware® vSphere™ and Citrix® XenServer™ certified, and compatible with Microsoft Hyper-V™, providing a seamless storage solution for virtualization servers.

http://www.synology.com/enu/support/beta/Synology_DSM3.1_2011.php

hannibal
Jul 27, 2001

Speaking of Synology, does anyone have experience with the DS1010+ or DS1511+? I'm thinking of getting one to replace my thrown-together Linux mdadm home file server. From what I can tell the 1511+ just has a faster processor, but I'm curious whether there's anything else different, or anything else I should know, before getting one.

ElvisG
Aug 18, 2004
I have the 1511+. I picked the 1511+ over the 1010+ because it was the only one available at the time; from what I can tell, the processor is the only difference. I love it because I use it as both a NAS and a SAN with VMware and Hyper-V.

The only negative thing I've come across is the inability to rename the admin account. Other than that, I get 105 MB/s, give or take, on average with RAID 5, which is close to saturating gigabit Ethernet.

Telex
Feb 11, 2003

Welp, not knowing what I'm doing when moving drives around has scared the shit out of me about losing all my dataz.

Is there any particular brand of external eSATA JBOD enclosure that is better than the rest, or are they all pretty much the same? I think I want a 5-bay setup so I can JBOD five 2TB disks into an offline backup of my stuff and be able to re-work the main ZFS RAID I'm running now.

dietcokefiend
Apr 28, 2004
HEY ILL HAV 2 TXT U L8TR I JUST DROVE IN 2 A DAYCARE AND SCRATCHED MY RAZR
Synology time!

I have pretty good experience with the Synology units, having worked with the DS410j and the new DS411+. The question now is that my father really likes the slick interface and wants one for himself. In the past, the only thing really holding the DS410j back was its limited RAM: start up a torrent, load up the audio server, really anything besides normal SMB stuff, and it lagged with the RAM maxed out. The DS411+, on the other hand, floats at 150-200MB of RAM usage and toots along just fine.

He wants a NAS but doesn't want to explode the budget either. Units in mind are the DS211, priced at $289 (1.6GHz, 256MB), or the DS411j (1.2GHz, 128MB). It's either a slightly more expensive 4-bay that could hold lots of storage in RAID 5, or a 2-bay that is limited to RAID 1 but gets good transfer rates even with background activity.

Does the DS411j's 400MHz lead over the old DS410j help it out even with the limited RAM? Besides being one backup method for the family photos (the other being a hard drive at the bank), he will eventually use it to host media to stream to his iPhone or a WD TV Live sort of box.

It would be great to have both models on hand to see if they can handle what I'm asking, but I don't. Could anyone with one of those models chime in on what sorts of limits crop up when either is tasked like this?

what is this
Sep 11, 2001

it is a lemur

CuddleChunks posted:


Build time for a 1TB array (2 drives, mirrored) was about 5 hours using the Synology Hybrid RAID setting. I'm sure the bigger models with more RAM make this faster, but it just sat on a desk and purred away without asking for any effort on my part. So bye bye Buffalo Trashstations, hellooooooo Synology.

Please don't tell me terrible things are about to happen; let me dream for just a little while that this NAS isn't going to end in tears like the other ones we've tried.

Don't use Hybrid RAID on two drives instead of RAID 1. In fact, just don't use it at all.

Buy drives that are the same size and use a normal RAID setting.

movax
Aug 30, 2008

Anyone given this attempt at ZFS + Linux a try yet?

e: nvm, you still can't easily access ZFS volumes (yet); I think I'll re-roll the server with OpenIndiana. Maybe power management will work, who knows! (E6600/Conroes need poll-mode on Solaris, I think a poster here said earlier.)

movax fucked around with this message at 05:32 on Jan 26, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

movax posted:

Anyone given this attempt at ZFS + Linux a try yet?

I looked at it and skipped over it because it doesn't have a full implementation of the ZFS POSIX Layer, which I gather means you can't just mount a pool. I've seen people hack things together with that plus LVM, but it looked way too finicky for something you'd want to rely on.
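
(For reference, the hack amounts to exposing a ZVOL and layering a regular filesystem or LVM on top. A rough sketch, with made-up pool/volume names and the device path ZFS-on-Linux's udev rules create; a workaround, not a recommendation:)
code:
$ sudo zfs create -V 100G tank/vol0          # carve a virtual block device out of the pool
$ sudo mkfs.ext4 /dev/zvol/tank/vol0         # put an ordinary FS on top of the zvol
$ sudo mount /dev/zvol/tank/vol0 /mnt/data   # ZFS handles the storage underneath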


Telex
Feb 11, 2003

movax posted:

Anyone given this attempt at ZFS + Linux a try yet?

e: nvm, you still can't easily access ZFS volumes (yet); I think I'll re-roll the server with OpenIndiana. Maybe power management will work, who knows! (E6600/Conroes need poll-mode on Solaris, I think a poster here said earlier.)

er...

1.3 How do I mount the file system?

You can’t… at least not today. While we have ported the majority of the ZFS code to the Linux kernel, that does not yet include the ZFS Posix Layer. The only interface currently available from user space is the ZVOL virtual block device.

Yeah, not quite yet. Nice thought though.
