Steakandchips posted:Can take up to 8GB ram. Speaking of the N40L, I recommend getting the out-of-band BMC (a vKVM solution so you can run the server headless; it was made for the N36L but should work on the N40L) along with a one- or two-port Intel NIC, as a zpool with 4+ disks and raidz1 can easily do 200+MBps transfer speeds if the drives are fast enough (mine run at 230/180MBps with LAGG'd NICs). If I remember correctly, the N40L doesn't have the same NIC on the motherboard (it uses a Broadcom NC107i whereas the N36L uses a Broadcom NetXtreme BCM5723), so you may not run into the bge driver issue in FreeBSD that I've mentioned several times in this thread. Also note that if you want to use the eSATA and ODD SATA ports for a 5th and 6th drive, you need to hack or use an already-hacked BIOS, as the ODD port only runs at SATA150 speeds and the eSATA doesn't support AHCI out-of-the-box (with the hacked BIOS, both support AHCI and SATA300). Some people do some pretty crazy HP Microserver mods. Apparently I completely missed this when it was news, but HP intends to take the microserver market seriously, which means we're hopefully looking at a refresh. BlankSystemDaemon fucked around with this message at 13:54 on Oct 28, 2012 |
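(For reference, those transfer numbers line up with the link math; a quick sketch assuming two 1GbE ports in the LAGG and ignoring Ethernet/TCP overhead:)

```python
# Theoretical ceiling of an N-port gigabit LAGG, ignoring protocol
# overhead (real-world throughput lands somewhat below this).
GBE_BITS = 1_000_000_000  # 1 gigabit per second per port

def lagg_ceiling_mbps(ports):
    """Aggregate ceiling in MB/s (megabytes) for a given number of 1GbE ports."""
    return ports * GBE_BITS / 8 / 1_000_000

print(lagg_ceiling_mbps(1))  # 125.0 -> a single port caps well below 200MBps
print(lagg_ceiling_mbps(2))  # 250.0 -> headroom for the quoted 230MBps reads
```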
|
# ? Oct 28, 2012 13:12 |
|
Steakandchips posted:You, sir, need a HP Proliant Microserver. How low power is one of these? I've debated buying one or building something similar to run FreeNAS (or even do double duty as a second HTPC) but it seems like it still uses more power than a Synology.
|
# ? Oct 28, 2012 14:21 |
WeaselWeaz posted:How low power is one of these? I've debated buying one or building something similar to run FreeNAS (or even do double duty as a second HTPC) but it seems like it still uses more power than a Synology. With an HDHomeRun, FreeBSD (for ZFS-based filesharing over SMB) and MythTV you can actually have a complete HTPC/fileserver. You'll also need a graphics card, and there's an EVGA card that fits and has a fan. Be aware that not all cards fit due to fan/heatsink configuration. BlankSystemDaemon fucked around with this message at 14:44 on Oct 28, 2012 |
|
# ? Oct 28, 2012 14:32 |
|
WeaselWeaz posted:How low power is one of these? I've debated buying one or building something similar to run FreeNAS (or even do double duty as a second HTPC) but it seems like it still uses more power than a Synology. There's a 200W PSU in it, so if you slap a small GPU in one of the PCIe ports, it can handle it. Actual power consumption is what D. Ebdrup states (34-40W). FYI, you only need a GPU in there if you want to actually watch 1080p off it. I just run it headless and RDP into it, or on occasion, if I want to fiddle with the BIOS, just connect it via its mobo's VGA out to a monitor.
|
# ? Oct 28, 2012 17:13 |
|
WeaselWeaz posted:How low power is one of these? I've debated buying one or building something similar to run FreeNAS (or even do double duty as a second HTPC) but it seems like it still uses more power than a Synology. I bring this up from time to time because sometimes people don't realize this: My server is based on an Intel Q6600 quad-core CPU with 4GB of RAM, and 15 (maybe more, I forget) hard drives. According to my Kill-A-Watt it costs between 4 and 5 dollars a month to run at my local electricity rate of 8 cents/kWh. That may or may not be significant to you, but there you go.
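(That arithmetic checks out; a back-of-the-envelope sketch, assuming 24/7 uptime, a 30-day month, and the quoted $0.08/kWh rate:)

```python
# Back-of-the-envelope check of the Kill-A-Watt numbers above.
# Assumptions: 24/7 uptime, a 30-day month, $0.08/kWh as quoted.
RATE_PER_KWH = 0.08
HOURS_PER_MONTH = 24 * 30

def monthly_cost(avg_watts):
    """Dollars per month for a given average draw in watts."""
    kwh = avg_watts * HOURS_PER_MONTH / 1000
    return kwh * RATE_PER_KWH

# Working backwards, $4-5/month implies roughly a 70-87W average draw,
# which is plausible for a Q6600 plus a pile of mostly-idle drives.
for dollars in (4, 5):
    watts = dollars / (RATE_PER_KWH * HOURS_PER_MONTH) * 1000
    print(f"${dollars}/mo is about {watts:.0f}W average")
```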
|
# ? Oct 28, 2012 17:38 |
|
WeaselWeaz posted:How low power is one of these? I've debated buying one or building something similar to run FreeNAS (or even do double duty as a second HTPC) but it seems like it still uses more power than a Synology. So, basically, gently caress power use.
|
# ? Oct 28, 2012 18:09 |
|
So every few days my ZFS file server "drops" some disks and claims data loss, but a reboot and a resilver finds all the data just fine. My first instinct is controller failure, but looking at what's failed, it's both on the onboard ports and my controller card, so that's probably out. All the drives are in one physical enclosure, so maybe that's bad? But this is the second enclosure I've put in, because the first one may or may not have crapped out and taken some drives with it. So I'm not sure what else to look at. Power supply? How could I test that?
|
# ? Oct 28, 2012 18:51 |
|
I'm looking at the Synology DS 1010 to replace my Drobo FS. I like the Drobo but it's slow: the read/write speeds are terrible (that should be better with the Drobo 5D with its SSD, but ehhh) and the processor is pokey; running SABnzbd and SickBeard made it crash on me. Any experiences with the Synology DS1010? I picked it because it has the Synology SHR RAID poo poo, so I can just use my hodgepodge of drives without worrying about matching, as well as the app support Synology has. It seems more advanced than Drobo's app situation, which they seem to be phasing out, as none of the currently available Drobos will run Drobo Apps.
|
# ? Oct 28, 2012 19:03 |
|
IOwnCalculus posted:Photographs, but swap the file for any other (video, text, whatever) and the user error for any other (accidental deletion, accidental overwrite, bad edit, whatever) and you still have an error that RAID of any type can't directly resolve. Packrats Unit - Reminder, RAID is not backup! When I gave some presentations on building your own NAS, I used the example of no jury on the planet refusing to convict your SO because you thought RAID was backup for baby pictures. DrDork posted:There are server-class motherboards and enclosures (usually rackmount style) which support multiple/redundant PSUs to ensure redundancy on that front. Otherwise there's no real way to do it that I could possibly recommend for reliability (there are some ways to daisy-chain PSUs to get extra power, but I wouldn't be comfortable recommending them to you). Getting a UPS for it would prevent external power failure (or at least give it long enough to gracefully shut down) and hopefully prevent power surge issues. Mind you, redundant power supplies kick you into the enterprise level of stuff, replete with stupid price penalties--even a "cheap" one will run you several hundred dollars. What do you guys think about some kind of kit that would let people turn two consumer PSUs into a redundant setup safely? I don't know how many cases actually leave room for two ATX PSUs though. Also, an update on the weird name resolution issues: OSX can see the Solaris server in the "Network" window, but I can't ping megatron by name in the shell. I'm using the default 2wire router for internet, so maybe I need to get some better DNS going. Windows still can't resolve megatron via NetBIOS, and I don't seem to have nmblookup or nmbd or anything on my Solaris box. movax fucked around with this message at 19:16 on Oct 28, 2012 |
# ? Oct 28, 2012 19:10 |
|
DrDork posted:So, basically, gently caress power use. This man knows what's up.
|
# ? Oct 28, 2012 19:20 |
|
Steakandchips posted:You, sir, need a HP Proliant Microserver. It might not be a bad idea to load one up with 6x3 TB WD Red drives though. Are there any special mounting considerations to put two additional hard drives in the optical slot?
|
# ? Oct 28, 2012 19:22 |
|
movax posted:What do you guys think about some kind of kit that would let people turn two consumer PSUs into a redundant setup safely? I don't know how many cases actually leave room for two ATX PSUs though. ashgromnies posted:That sounds awesome for me with the one exception that it uses traditional RAID and expanding your size isn't easy down the line. There's nothing really special to consider when using drives in the optical bay of the N40L other than, as has been mentioned before, needing to get special firmware to enable better SATA speeds and AHCI on those ports. You would probably want to get a little mounting unit if you're planning on putting a platter drive up there (only one 3.5" platter drive will fit--you can get 2 in there if you're using 2.5" drives), but 5.25->3.5" adapters are cheap. WD Reds are pretty fantastic, but unfortunately expensive right now. If the extra space is worth it to you, though, then by all means go for them. Personally I'd probably grab the 2TB ones and then chill until I actually ran out of space. Then I'd buy the 3TBs at a good bit less than what they cost today (or maybe 4TBs if it takes a while to run out of space), move the entire array over, and either toss the 2TB ones into an expansion vdev, or just sell them off. DrDork fucked around with this message at 20:07 on Oct 28, 2012 |
# ? Oct 28, 2012 20:04 |
|
Will the MicroServer/FreeNAS spin down drives when not in use? Will it go to sleep? Apparently the Synology stuff will hibernate at ~4 watts when not in use (and wake up automatically?) so that's sweet. I don't think I'll use this more than 1-2 times a day, so I'd like everything to spin down when not in use. Before you ask why I'm getting something I'll use that infrequently, it's because all my computers have tiny SSDs and it's a pain sharing files between them, plus I have a bunch of raw images I'd like to archive over a year (and probably back up to Amazon Glacier?).
|
# ? Oct 28, 2012 20:34 |
It can do it, but it's not recommended. The thing about spindown and head parking when idle (and spin-up whenever anything happens--scheduled scrubs, S.M.A.R.T. checks, or any access at all) is that it causes more wear and tear on the drives, which directly impacts their lifetime. Additionally, you'll find that if you do get a NAS, you start using it more and more.
|
|
# ? Oct 28, 2012 20:46 |
|
Ninja Rope posted:Will the MicroServer/FreeNAS spin down drives when not in use? Will it go to sleep? Apparently the Synology stuff will hibernate at ~4 watts when not in use (and wake up automatically?) so that's sweet. I don't think I'll use this more than 1-2 times a day, so I'd like everything to spin down when not in use. FreeNAS/NAS4Free will also (if you let it) run a power daemon that'll underclock the CPU substantially; it'll knock it down from 1500MHz to 500MHz, so you get some power savings there, too. Also remember that the downside of deep-sleep power-saving options like the Synology hibernate and drive spin-down is that you're going to have to wait a little while for them to kick back on every time you need them. Personally I'd pay the $1/mo and enjoy the extended average lifetimes of my drives.
|
# ? Oct 28, 2012 21:36 |
|
ashgromnies posted:I'm looking at the Synology DS 1010 to replace my Drobo FS. I like the Drobo but it's slow: the read/write speeds are terrible(that should be better with the Drobo 5D with the SSD but ehhh) and the processor power is pokey, running SABnzbd and SickBeard made it crash on me. Isn't the DS-1010 a bit old? The current 5-bay model is the DS-1512+. That being said, I picked up a Synology DS-412+ (pretty much the same hardware as the DS-1512+ but with 4 drives instead of 5) largely based on the positive reviews that Synology units get here in this thread and elsewhere. If you don't want to roll your own NAS for the various reasons you've mentioned, I don't think you can do much better than the Synology devices.
|
# ? Oct 29, 2012 00:41 |
|
Is that still true with "Green" drives? I assume they were rated for more start/stop cycles since the firmware is super power conservative? I was thinking green drives + the TLER fix.
|
# ? Oct 29, 2012 01:14 |
|
Ninja Rope posted:Is that still true with "Green" drives? I assume they were rated for more start/stop cycles since the firmware is super power conservative? I was thinking green drives + the TLER fix. Also be aware that the TLER fix does not work on an increasingly large number of current production WD drives. If that's the path you're planning on taking, certainly check and see if the specific model number you're going to buy still allows the edit. Some of the ones which don't allow TLER do still allow edits to the idle setting (which is what will combat the aggressive head parking--TLER is more about how long the system will wait for the drive to try to recover from an error before deciding it's failed--normal desktop drives get a long time, since they're the only error recovery shot you've got, while RAID drives get a short time, since kicking most errors up to the RAID controller to deal with is usually the better option).
|
# ? Oct 29, 2012 01:38 |
|
Ninja Rope posted:Is that still true with "Green" drives? I assume they were rated for more start/stop cycles since the firmware is super power conservative? I was thinking green drives + the TLER fix. You can disable the head parking with a program from WD. If you're using software raid you don't really need to futz with TLER, that's more for hardware raid where the firmware is programmed to work with enterprise drives.
|
# ? Oct 29, 2012 03:38 |
|
If anyone is interested, I'm selling a like-new Synology DS 212+ on SA Mart. I upgraded to the DS 412+ this week and am loving the new hardware. I am thinking about upgrading the RAM from 1GB to 2GB. Can I buy any stick of 2GB DDR3 PC-10600 RAM or am I restricted to certain manufacturers if I want it to run on the 412+? Thanks!
|
# ? Oct 29, 2012 03:44 |
|
If anyone is looking for cheaper Synology products and doesn't need this year's model, they have a refurbished store. The products on there are usually all they have, but occasionally the odd other device shows up for a few days. http://store.synologyamerica.com/Refurbished-Products-C10.aspx
|
# ? Oct 29, 2012 06:18 |
|
Has anyone experienced random infinite reboot cycling with any Synology products, such as the DS1511+?
|
# ? Oct 29, 2012 06:21 |
|
DrDork posted:Also be aware that the TLER fix does not work on an increasingly large number of current production WD drives. If that's the path you're planning on taking, certainly check and see if the specific model number you're going to buy still allows the edit. Some of the ones which don't allow TLER do still allow edits to the idle setting (which is what will combat the aggressive head parking--TLER is more about how long the system will wait for the drive to try to recover from an error before deciding it's failed--normal desktop drives get a long time, since they're the only error recovery shot you've got, while RAID drives get a short time, since kicking most errors up to the RAID controller to deal with is usually the better option). Longinus00 posted:You can disable the head parking with a program from WD.
|
# ? Oct 29, 2012 06:24 |
And wdidle doesn't work on newer Green drives.
|
|
# ? Oct 29, 2012 08:56 |
|
ashgromnies posted:It might not be a bad idea to load one up with 6x3 TB WD Red drives though. Are there any special mounting considerations to put two additional hard drives in the optical slot? It depends on the mounting bracket. I know a few people who've reported that something like the Nexus DoubleTwin can fit two drives in there without modification, and another few who cut out some of the plate at the bottom to provide easier access to the gap underneath (they mounted a 2.5" HDD there to run the OS on it too) and give it a bit better ventilation for those drives. Then again, for 6 (or 7 with the OS or swap drive if you run the OS off a USB) drives, most people end up putting a cheap 8-port RAID controller in it as well to take advantage of hardware RAID.
|
# ? Oct 29, 2012 09:46 |
|
Didn't see this posted here but FreeNAS 8.3 was released on Friday. Changelog and download links here: http://sourceforge.net/projects/freenas/files/FreeNAS-8.3.0/RELEASE/ Doesn't seem that much different than 8.2, but 8.3 has support for zfs v28 just like NAS4Free now.
|
# ? Oct 29, 2012 20:45 |
|
Ok, so I'm going to give ZFS a try. I've come across a few places online saying that you should have a max of 9 drives in a raidz2 pool. I have 10 drives. Is this something I shouldn't do? What are the consequences of using 10 instead of 9?
|
# ? Oct 30, 2012 00:15 |
|
In theory, too large a risk of having >2 drive failures concurrently and losing all data on the array. How hard up are you on space? Unless you need every last gig I think I'd be more comfortable with a 10-drive Z3 than a 10-drive Z2 just to gain 2-3TB on an array that's already 14TB+ in size.
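(To put numbers on that trade-off; a sketch assuming 2TB drives and ignoring ZFS metadata/slop overhead:)

```python
# Usable capacity of an N-drive raidz vdev is roughly (N - parity) * drive size.
# Assumption: 2TB drives; ZFS metadata/padding overhead ignored.
def raidz_usable_tb(drives, parity, drive_tb=2):
    return (drives - parity) * drive_tb

z2 = raidz_usable_tb(10, 2)  # 16 TB usable
z3 = raidz_usable_tb(10, 3)  # 14 TB usable
print(f"10-drive raidz2: {z2}TB, raidz3: {z3}TB, cost of 3rd parity: {z2 - z3}TB")
```

So the third parity drive costs one drive's worth of space in exchange for surviving a third concurrent failure.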
|
# ? Oct 30, 2012 00:29 |
|
Wouldn't a 10-drive RAID-Z3 cause problems on 4KB drives? I've read that for good performance, with 4KB advanced format drives at least, you want (128KB / (num drives-parity drives)) to be a multiple of 4KB, meaning recommended configs of 3/5/9 disks for RAID-Z, 4/6/10 for RAID-Z2, and 5/7/11 for RAID-Z3.
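(That divisibility rule can be checked directly; a sketch of the (128KB / data drives) test described above, assuming the default 128KiB recordsize and 4KiB sectors:)

```python
# Does a 128KiB ZFS record split evenly into 4KiB sectors across the
# data drives of a raidz vdev? (The rule of thumb described above.)
RECORD = 128 * 1024  # default ZFS recordsize, in bytes
SECTOR = 4096        # advanced-format sector size, in bytes

def aligned(drives, parity):
    per_drive = RECORD / (drives - parity)
    return per_drive % SECTOR == 0

for parity, label in ((1, "raidz"), (2, "raidz2"), (3, "raidz3")):
    good = [n for n in range(parity + 2, 12) if aligned(n, parity)]
    print(f"{label}: {good}")  # reproduces the 3/5/9, 4/6/10, 5/7/11 configs
```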
frumpsnake fucked around with this message at 00:44 on Oct 30, 2012 |
# ? Oct 30, 2012 00:40 |
|
Hmm, well I came across this: quote:The way raidz works, you get the IOps (I/O operations per second) of a single drive for each raidz vdev. Also, when resilvering (rebuilding the array after replacing a drive) ZFS has to touch every drive in the raidz vdev. If there are more than 8 or 9, this process will thrash the drives and take several days to complete (if it ever does).
|
# ? Oct 30, 2012 00:54 |
|
Thermopyle posted:Ok, so I'm going to give ZFS a try. I've come across a few places online saying that you should have a max of 9 drives in a raidz2 pool. I have 10 drives. Thermopyle posted:Hmm, well I came across this: evil_bunnY fucked around with this message at 01:00 on Oct 30, 2012 |
# ? Oct 30, 2012 00:57 |
|
I'm currently running a Windows 7 NAS/media center running Plex. I have years of personal photos and other documents stored in there that I would be devastated to lose. No backup other than some DVDs burned of key files, however I don't trust them, and I've also had some photographs stored to HD which became corrupted, where half the image is random pixelation (different drives, a long time ago). I've tried CrashPlan briefly for the past couple weeks, and it seems perfect for what I want for backup. Unlimited storage, offsite, and version retention to boot. Still doesn't solve my bitrot paranoia, as I would potentially not notice corrupted files getting backed up to CrashPlan until several/many years later, at which point the versioning might not go back far enough. So I want to switch my NAS to FreeNAS and run RAID-Z. I've never used RAID before, and it is confusing... even after skimming/following this thread. Am I right to assume that I would have my 3x RAID-Z drives, and then another separate drive for FreeNAS? And that the array's parity drive can't be smaller than the other two without limiting overall capacity? I'm fine with tinkering around with configurations and babysitting for a few weeks to start, but from there on out I'd like to be essentially hands-off. I'm not interested in off-the-shelf NAS solutions, as I don't want to have to replace the entire thing at once when capacitors go bad, etc. I currently have 4GB of RAM (mobo supports 16GB), and 1x 2TB and 2x 1TB hard disks available for the array. I have just over 2TB of data including media, maybe only 250GB of which is the super valuable stuff that needs to be in the array. Do I *need* more than 4GB of RAM? (I'm not entertaining thoughts of deduplication) It looks like I can get 3TB Seagate drives for under $150, which is the top end of what I want to spend right now. I don't understand resilvering without having done it before... 
Am I able to swap in larger-capacity drives in the future one at a time, resilvering at each step in order to make the upgrade? I also guess I need to give up my Plex server... but I could always throw another lesser machine in the corner for that, I suppose. Otherwise, are there any parity solutions I could simply schedule to run on my Windows 7 box? I wouldn't have a problem adding drives to store parity files and running a scan once a week. That would also give me the bonus of sticking with stupid-easy Windows, and my drives can be read independently. Does anyone do this?
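(For what it's worth on the parity question: RAID-Z has no dedicated parity drive--parity is striped across all members, and each member only contributes as much space as the smallest drive in the vdev. A sketch with the drives listed above, 2TB + 1TB + 1TB, ignoring ZFS metadata overhead:)

```python
# Usable space of a raidz vdev with mixed drive sizes: every member is
# truncated to the smallest drive, then `parity` drives' worth goes to parity.
# Assumption: metadata/padding overhead ignored.
def usable_tb(sizes_tb, parity=1):
    smallest = min(sizes_tb)
    return (len(sizes_tb) - parity) * smallest

# 2TB + 1TB + 1TB in raidz1: the 2TB drive only contributes 1TB.
print(usable_tb([2, 1, 1]))  # 2TB usable; 1TB of the big drive sits idle
```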
|
# ? Oct 30, 2012 03:08 |
|
BTW, Amazon and Newegg are selling the 3TB Seagate Barracuda for $120 down from $150. Ordered two over the weekend to fill the last 2 of my 4 bay NAS.
|
# ? Oct 30, 2012 03:19 |
|
The ST3000DM001s have worked well for me, just make sure you apply the CC4H firmware update or they'll overenthusiastically park their heads.
frumpsnake fucked around with this message at 03:29 on Oct 30, 2012 |
# ? Oct 30, 2012 03:27 |
|
Let's see it, Thermopyle!
|
# ? Oct 30, 2012 03:54 |
|
Star War Sex Parrot posted:Let's see it, Thermopyle! As I mentioned earlier in the thread, I'm upgrading all the drives in my server from a mix of different drives to 3TB drives. I was going to buy some USB-SATA cables to transfer data from my old drives to new ones since I didn't have enough SATA ports in my server to do it. Well, I realized I had enough ports in my main desktop, so I hooked up my new drives to it. Once I get ZFS figured out, I'll copy my 15+ TB of data over the LAN to these drives, then remove old drives from server, and put these new drives in their place.
|
# ? Oct 30, 2012 04:33 |
|
Thermopyle posted:As I mentioned earlier in the thread, I'm upgrading all the drives in my server from a mix of different drives to 3TB drives. I was going to buy some USB-SATA cables to transfer data from my old drives to new ones since I didn't have enough SATA ports in my server to do it. Jesus lord. Does anyone have any good recommendations for a clean external drive bay so this poo poo doesn't happen to me? I'm going to have somewhere between 8-10 drives.
|
# ? Oct 30, 2012 04:42 |
|
Megaman posted:Jesus lord. Does anyone have any good recommendations for a clean external drive bay so this poo poo doesn't happen to me? I'm going to have somewhere between 8-10 drives. I think that's temporary dude.
|
# ? Oct 30, 2012 06:15 |
|
Megaman posted:Jesus lord. Does anyone have any good recommendations for a clean external drive bay so this poo poo doesn't happen to me? I'm going to have somewhere between 8-10 drives. In any event, don't worry, there are ways to work around it--you're far more likely to have issues finding SATA ports for them all than worrying about where to physically store them.
|
# ? Oct 30, 2012 06:41 |
|
movax posted:I think that's temporary dude. If it was permanent they would be GPUs and this would be in the Bitcoin thread. It does look like that copy will take a while. I'm trying not to accumulate that much data, but I suspect I will build an Amahi box next year.
|
# ? Oct 30, 2012 09:15 |