Moey
Oct 22, 2010

I LIKE TO MOVE IT

Cavepimp posted:

Moey, if you're still around I'd be interested in comparing some more notes on these Qnap 809U's. I did go ahead and buy them, because I figured if most people were getting results like yours their forums and the internet would have exploded with rage by now.

Still around, sorry I didn't get back with more numbers as planned (the specific NAS I was testing on was thrown into production hosting scans). Once I get into the office tomorrow, I will figure out if I still have one to test on, or if I have to relocate that scan directory. Since you are getting good results, it makes me want to get mine performing as well as yours. Thanks for the update, it gives me hope with these.


KuruMonkey
Jul 23, 2004
Longinus00: I hadn't considered Raid0 rather than JBOD because...I just hadn't. Is there any practical difference between the 2 if read/write performance isn't an issue (this here NAS will be accessed over standard ethernet, not gigabit, so I'm not really sweating performance...)

PopeOnARope: Considering "recapturing" consists of "open the cupboard, take out the drives with the backup on them", I don't think I'm underestimating. (this is not data with a high churn rate - it is LITERALLY ripped Dr Who DVDs etc - if all else fails, I have the DVDs...)

Any chance of some input on my original questions regarding the Synology system itself?

I'll reiterate, since they've been thoroughly lost in the JBOD==TheLivingEnd dogpile:

(with a Synology, probably DS411j)

If I start with 2 drives, when I want to add a third, can I expand the volume, or must I wipe and start again? Is there a way to image what's on the drives and then reload? - extending this question: does using Raid0 rather than the dreaded JBOD of DOOM! change the answer?

Can I set the NAS up to create 2 shares, with different credentials? (public and private, basically) How good is the synology software for doing this? (specifically; when accessing and managing from a Mac?) Can I set the share sizes? (better; can I leave them to grow dynamically as used?)

Has anyone used the iOS apps synology put out? Do they work well? (they are, it must be said, the main reason I'm leaning to going with a synology rather than just a home-brew linux based thing - so if they suck...)

Lastly; how picky do I have to be about drives for this thing? In the UK, using the drive compatibility list, I find that of the ~20 models of HDD I'm offered on the vendor's site, only 2-3 will actually match anything on the compatibility list at all. (this seems to be a common issue with US-derived HW compatibility lists versus UK supply of devices, though)

what is this
Sep 11, 2001

it is a lemur
You can expand RAID volumes. It's not as straightforward as you might think, though.


Don't do RAID0. Don't do JBOD. Buy 3TB drives or whatever. You're being an idiot if you store stuff in RAID0 unless you have very specific needs for the speed it can give you, and with a NAS, in your situation where you're using it to store dr who videos or the wonder years, you don't. Don't be an idiot.

If there's any difference RAID0 is worse than JBOD. The fact that you even have to ask this means you have no idea what you're talking about, which means you shouldn't use RAID0 or JBOD. It's stupid. Don't do stupid things unless you understand them really well and have good justifications. You don't.
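The "RAID0 is worse than JBOD" claim comes down to failure domains, which a quick sketch makes concrete (illustrative Python, not from the thread; the fractions are the rough best case for JBOD, since files can straddle member disks):

```python
# Rough model of what one dead drive costs you under each layout.
# RAID0 stripes every file across all drives, so any single failure
# corrupts everything; JBOD concatenation loses (roughly) only the
# data that lived on the dead drive.

def data_lost_on_one_failure(num_drives: int, layout: str) -> float:
    """Approximate fraction of stored data lost when one drive dies."""
    if layout == "raid0":
        return 1.0                    # a piece of everything is on each drive
    if layout == "jbod":
        return 1.0 / num_drives       # only the dead drive's share, at best
    raise ValueError(layout)

print(data_lost_on_one_failure(2, "raid0"))  # 1.0 - total loss
print(data_lost_on_one_failure(2, "jbod"))   # 0.5 - half, roughly
```

Either way the expected loss is unacceptable for data you care about, which is the point being made.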

what is this
Sep 11, 2001

it is a lemur
Let me put it this way. If you think you have enough money to store 4TB of data, but you can only afford that with two drives in RAID0, you don't have enough money to store that much data.

RAID0 isn't a loophole you can use because you're too poor to build your storage properly. It isn't a shortcut. It's a way to destroy all your data.

Look at how much money you have. Find combinations of two or three or more drives. If you can buy only two drives, guess what, you can only store one drive worth of data redundantly. If you can buy three or four drives, you can store 2/3 or maybe 3/4 of that data.

If you can only afford 2x2TB drives, you can afford to properly store 2TB of data. Not 4TB. JBOD isn't a magic wand. You can't afford to store 4TB of data if that's the money you have. Wait and save up.
If you can afford 4 x 3TB drives, you can afford to store 9 TB or so in RAID5.
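The capacity arithmetic above can be sketched in a few lines (a hypothetical helper for illustration, not a real tool):

```python
# Usable space for n equal-size drives under the layouts discussed above.
# mirror (RAID1): half the raw space; RAID5: one drive's worth of parity;
# RAID0/JBOD: all raw space, no redundancy at all.

def usable_tb(n: int, drive_tb: float, layout: str) -> float:
    if layout == "mirror":
        return n * drive_tb / 2
    if layout == "raid5":
        return (n - 1) * drive_tb
    if layout in ("raid0", "jbod"):
        return n * drive_tb
    raise ValueError(layout)

print(usable_tb(2, 2, "mirror"))  # 2.0 -> 2x2TB properly stores 2TB
print(usable_tb(4, 3, "raid5"))   # 9.0 -> 4x3TB stores ~9TB in RAID5
```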

KennyG
Oct 22, 2002
Here to blow my own horn.

lazydog posted:

23GB a dollar
or
25GB a dollar for their cheapest 2TB at $79
edit: 28GB/$1 with a 2TB if you count a $10 mail in rebate

Wikipedia posted:

Converting from mpg or to L/100 km (or vice versa) involves the use of the reciprocal function, which is not distributive. Therefore, the average of two fuel economy numbers gives different values if those units are used, because one of the functions is reciprocal, thus not linear. If two people calculate the fuel economy average of two groups of cars with different units, the group with better fuel economy may be one or the other. However, from the point of energy used as a shared method of measure, the result shall be the same in both the cases.

I just typed up this huge rant on what a bad idea it is to do GB/$ instead of $/TB or $/GB. I am willing to back off from the ledge, but am I the only one that sees this as the same issue as MPG vs L/100km?

Few items are priced in X per $ (unless they are currencies in which case they are usually priced such that X is > 1)
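The MPG analogy checks out numerically: because GB/$ and $/GB are reciprocals, averaging in one unit does not give the reciprocal of the average in the other. A quick sketch (the $79 2TB price is from the post above; the second drive's price is made up for illustration):

```python
# Averaging value-for-money in reciprocal units gives different answers,
# same as averaging fuel economy in MPG vs L/100km.

drives = [(2000, 79.0), (1500, 65.0)]  # (capacity GB, price $) - 2nd is made up

avg_gb_per_dollar = sum(gb / usd for gb, usd in drives) / len(drives)
avg_dollar_per_gb = sum(usd / gb for gb, usd in drives) / len(drives)

# If the two averages measured the same thing, one would be the exact
# reciprocal of the other - but it isn't:
print(avg_gb_per_dollar)      # ~24.20 GB/$
print(1 / avg_dollar_per_gb)  # ~24.14 GB/$ (a different number)
```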

Cavepimp
Nov 10, 2006

Moey posted:

Still around, sorry I didn't get back with more numbers as planned (the specific NAS I was testing on was thrown into production hosting scans). Once I get into the office tomorrow, I will figure out if I still have one to test on, or if I have to relocate that scan directory. Since you are getting good results, it makes me want to get mine performing as well as yours. Thanks for the update, it gives me hope with these.

No problem. I already threw my primary one into half-production (there were things we weren't capturing with our current backups, so it makes me feel a little better) but I still have my secondary unit available to tweak around with for a few weeks if we want to try some different configs and compare. I purposely went simple with the config on that first one and it seems to be good so far, but I want to play around with teaming and jumbo frames to see what happens too. I'm not really expecting it to make much difference for my purposes, maybe another 5-10 MB/s, but it could squeeze a little more performance out of it or kill the performance if something isn't right. I read a lot of the posts on their forums from people having problems and it was almost always a config problem or someone trying to make it more complicated than it really is.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

KuruMonkey posted:

PopeOnARope: Considering "recapturing" consists of "open the cupboard, take out the drives with the backup on them", I don't think I'm underestimating. (this is not data with a high churn rate - it is LITERALLY ripped Dr Who DVDs etc - if all else fails, I have the DVDs...)

Are your backup drives pre-existing, or are you purchasing them new for this? If it's the first, then that's handy. If it's the latter, then you're wasting money.

And it's still going to take you a few hours to re-rip it all again. I guess if your time is worth less than a $60 drive, that's cool too.

dj_pain
Mar 28, 2005

You know what I love? SAS controllers :D and how, even running as a VM in ESXi, I can get awesome rebuild speeds
code:
md126 : active raid5 sdc1[6](S) sdh1[7] sdb1[0] sdf1[5] sdd1[4] sde1[2] sdg1[1]
      4883799680 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUU_UU]
      [=======>.............]  recovery = 37.7% (369175680/976759936) finish=141.5min speed=71547K/sec
      bitmap: 7/8 pages [28KB], 65536KB chunk
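That /proc/mdstat recovery line is regular enough to scrape if you want to watch a rebuild programmatically; here's a quick sketch (a hypothetical parser, not a standard tool) run against the exact line quoted above:

```python
import re

# Pull rebuild progress out of the mdstat recovery line from the post above.
mdstat_line = ("      [=======>.............]  recovery = 37.7% "
               "(369175680/976759936) finish=141.5min speed=71547K/sec")

m = re.search(
    r"recovery\s*=\s*([\d.]+)%.*finish=([\d.]+)min.*speed=(\d+)K/sec",
    mdstat_line,
)
assert m is not None
pct, finish_min, speed_k = float(m.group(1)), float(m.group(2)), int(m.group(3))
print(pct, finish_min, speed_k)  # 37.7 141.5 71547
```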

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Did this thread ever come to any kind of conclusion as far as using ZFS with ESXi and RDM?

I'm starting to get pissed at not having a Linux, but not as pissed as I'd be at losing ZFS. Or should I just wait out that ZFS on Linux project?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Cavepimp, my testing will have to wait a little bit since they are all in production currently, but I will free one up shortly. I would love to be getting performance like yours.


Question for the thread:

Is there a default goto case for building a NAS? Ideally I would like something tiny, 4-6 drive bays, hot swappable would be a big plus (not really needed though), and not insanely expensive.

I found this little Chenbro guy, but it is pretty pricy, and the proprietary power supply worries me.

http://www.newegg.com/Product/Product.aspx?Item=N82E16811123128

Any suggestions oh wise packrats?

Edit:

This guy looks pretty nice too (Lian Li PC-Q08)

http://www.newegg.com/Product/Product.aspx?Item=N82E16811112265

Moey fucked around with this message at 22:06 on May 26, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Moey posted:

I found this little Chenbro guy, but it is pretty pricy, and the proprietary power supply worries me.

http://www.newegg.com/Product/Product.aspx?Item=N82E16811123128

Any suggestions oh wise packrats?

Edit:

This guy looks pretty nice too (Lian Li PC-Q08)

http://www.newegg.com/Product/Product.aspx?Item=N82E16811112265

I wrote the pissy review about the proprietary PSU, and I was pissy ITT, too. It actually wasn't my system's problem (overheating was, a motherboard design flaw), but for business-critical production I still wouldn't recommend it just because of that issue. Otherwise, I haven't seen a case that matches its features.

I replaced it with that very Lian Li, and while all the screws are a pain in the rear end, it's a solid case. Ventilation is much better, but not good enough to outweigh the motherboard's lovely chipset heatsink when the CPU cooler is in the way of the airflow stream. So if you go with it, stay away from the Gigabyte GA-H55N-USB3.

Downsides: blue light fan is annoying (I swapped it out), optical drive cover is very flimsy (mine broke off), and it's a little tough to get all the cables out from under the motherboard when installing it.

Now I have a Dell Poweredge T110 II in production.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Haha, I didn't even notice that was you who wrote that review. That Lian Li seems to be on the top of my list currently. I wish it had front hot swap bays like the Chenbro, but I guess I can't have it all :(

What are you leaning towards software-wise? At first I was looking at OpenIndiana, but now am leaning towards FreeNAS. I'm looking to set up either a RAID 5 or maybe ZFS. I'm mostly going to be storing audio/video and random file backups. Performance won't be a huge issue, but I would like to not have to worry about issues while trying to stream multiple 1080p videos. I currently have 1 WD 2TB EARS, so I would ideally like to keep expanding on those, but am very open to suggestions! (Same with suggestions for CPU!)

Moey fucked around with this message at 00:26 on May 27, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I was constrained by my motherboard choice, since Solaris and variants don't boot on H55 (or didn't, in December). At the time, FreeBSD didn't have the most up-to-date zpool version, either, so I moved on to Ubuntu Server. After hosing the install entirely because I didn't understand fsck (and after trying both zfs-fuse and mdadm after zfs-fuse sucked awfully), I just said "screw it," got an SSD boot drive, and went with Windows Server 2008 R2 on an educational license, using Dynamic Disks RAID and Volume Shadow Service in place of ZFS.

I've previously used FreeNAS 7RC1, but it was dead-dog slow on the Atom SFF box I stuck it on, like 125 KB/s over GbE slow using software RAID 1 (not even ZFS mirror). I haven't kept up with it, but at the time, it didn't seem like a blistering performer even among those who've had better luck. The Atom's raw speed wasn't even the limiting factor; it almost never broke 15% CPU utilization.

I'm not a ZFS expert, but if you're doing more than two 1080p streams at once, I'd go with a non-ZFS solution. Hard benchmarks are hard to find, but at least a few folks think/show that ZFS slows down under concurrent read access faster than Linux softRAID and ext3 do. I have no personal experience, since I've never gotten ZFS to work well for me, period.

In terms of expanding from your single drive, only a few upgrade paths exist that don't involve a separate copy of the data and building a new array. Off the top of my head, you've got Intel Rapid Storage RAID on a compatible motherboard. Specifically, single drive migrated to RAID 1 for redundancy, then split/reset to single and migrated to RAID 5. If you're fine with deleting and recreating arrays, then forget this; it limits you to Windows anyway.

If you're sticking with the WD Green drive, find a copy of WDIDLE and turn down their super-aggressive head parking, otherwise any RAID array will desync faster than it can resync again.

Fileserving isn't too CPU intensive. Any interruption of HTPC streams will have a lot more to do with each stream's bitrate, network bandwidth/latency, and storage bandwidth than CPU speed. Sapphire makes some nice mini-ITX boards based on the AMD E350 APU. It would let you use the machine as an HTPC point, too, if you were so inclined. Intel Atom works similarly, though you'd need an Ion2 board to get HTPC out of it. Looking at higher-end stuff for transcoding, light gaming, or virtualizing services etc., Athlon II X2 chips are nice and cheap, and Sandy Bridge Core i3s are not only speedy and serviceable HTPC chips but they can handle fileserving loads without leaving low-power mode.

BlackMK4
Aug 23, 2006

wat.
Megamarm
Any way to easily convert ext3 to ZFS? Thinking about running FreeBSD instead of Linux on my home server this time around.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

BlackMK4 posted:

Any way to easily convert ext3 to ZFS? Thinking about running FreeBSD instead of Linux on my home server this time around.

Yeah, back up everything and rebuild your array from scratch, then copy it back over. There's no conversion or anything like that available.

BlackMK4
Aug 23, 2006

wat.
Megamarm
:( I see my storage doubling then.

devmd01
Mar 7, 2006

Elektronik
Supersonik

BlackMK4 posted:

:( I see my storage doubling then.

And that's a bad thing because...?

crm
Oct 24, 2004

so I've got two separate 2TB disks that are mostly full, and I'm a little concerned one will off itself.

So I'm thinking I'd like something in the 8TB-10TB in some sort of RAID5 (or maybe even 6) configuration.

Preferably in a standalone box - for those parameters, what would you guys recommend?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Factory Factory posted:

Advice

Currently I just have one machine that is being used as an HTPC and also file storage. What I would like to have is a standalone NAS and a standalone HTPC, then that machine I am currently using will be setup with ESXi to host all the random VMs that I dream about.


As for migrating the data, I would be fine with buying 3 new drives, building an array, transferring the data from the existing drive, then expanding the array with the standalone drive (risky?). I just need to figure out what the hell to use for the OS and arrays. Would I be better off grabbing a Dell PERC5/i card off eBay? From my reading, I would have to do some modifications so the thing doesn't overheat, but if it is going to get me better performance on the array, I figure it would be worth it.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Moey posted:

As for migrating the data, I would be fine with buying 3 new drives, building an array, transferring the data from the existing drive, then expanding the array with the standalone drive (risky?). I just need to figure out what the hell to use for the OS and arrays. Would I be better off grabbing a Dell PERC5/i card off eBay? From my reading, I would have to do some modifications so the thing doesn't overheat, but if it is going to get me better performance on the array, I figure it would be worth it.

The type of migration I was talking about before wasn't RAID expansion, it was just using the ability of Intel Rapid Storage Tech FakeRAID to migrate data from one disk onto a new array. To go from a 3-drive RAID array to a 4-drive array, for example, requires backing up and nuking the entire disk set.

Live expansion in a homebrew NAS is pretty much limited to mdadm on Linux. The process is slightly risky, but much less so than it used to be, from what I've Googled in a cursory manner. But with 2 TB Green drives, you'd be taking your life into your hands letting it rebuild with no redundancy for that long.

A PERC5/i would be a decent card (here's a nice guide about modding it for general use). It has very regular RAID 5 performance vs. Intel RST FakeRAID, but it has some downsides: it's hot, and it only does RAID expansion via Dell software on a PowerEdge.

Now, with ZFS or using LVM (logical volume manager) in Linux, you get some extra options for expansion: using multiple disks and/or RAID arrays as one logical filesystem. Say you start with a 3-drive RAID 5, and you wanted to add four more drives hanging off a PERC5/i later. Using LVM, you would have previously had your 3-drive mdadm array as the sole member of a logical volume. If you add the PERC's RAID array as a second member of the LVM volume, LVM will present to the OS the sum of each array's usable space in a single volume. It will also balance used space between each volume behind the scenes.

You could take the hardware RAID array from the PERC and add it as an entire vdev (virtual device) to your ZFS pool, the same way each of the original 3 drives were added to a RAIDZ vdev and then to a pool. This will give you a little redundancy in parity, but since at 7 drives ZFS best practices suggest a 2-drive-parity RAIDZ2 pool anyway, it's actually for the best. Even if you just use the PERC as a JBOD controller, you can do roughly the same thing: create a new RAIDZ vdev, and add that to an existing storage pool, and ZFS will balance data in the pool on each vdev.

Practically speaking, the PERC5 is a lot of work for what is, admittedly, a really nice controller, but it's not a necessary controller. Its biggest convenience is the two SFF SAS connectors so that you can hang all your drives off one card. But mdadm and ZFS are very tolerant of where a drive is attached, especially ZFS. If you want all the goodies of the PERC5/i without the labor, Newegg has 4x- and 8xSAS cards that you can flash with a JBOD firmware, skipping the "hideous mutant heatsink" step.
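The vdev arithmetic in that post is easy to sanity-check: a pool's usable space is the sum of its vdevs' usable space, and a raidz vdev gives up one drive (two for raidz2) to parity. A quick sketch (an assumed capacity model for illustration, not ZFS itself):

```python
# Model of pool growth by adding vdevs, per the expansion discussion above.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 1) -> float:
    """Usable TB of one raidz vdev: total minus parity drives."""
    return (drives - parity) * drive_tb

pool = [raidz_usable_tb(3, 2.0)]      # original 3-drive raidz1: 4 TB usable
pool.append(raidz_usable_tb(4, 2.0))  # add a 4-drive raidz1 vdev: +6 TB
print(sum(pool))  # 10.0 - ZFS balances new writes across both vdevs
```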

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
If you want to do some software RAID, definitely skip the PERC and get that bigger Intel card. I've got the SuperMicro version and it rocks in my ZFS box, the lowest-hassle anything in that box.

I can't speak for ZFS read ability, because my HTPCs can't handle 1080p :(

Moey
Oct 22, 2010

I LIKE TO MOVE IT
That x8 Intel card looks fancy. After reading this review, I assume speed wise I would be fine? All this poo poo is still confusing to me.

quote:

I have this in a simple JBOD configuration in a PCI-E x8 slot. I'm using it with FreeBSD and I have 6 1.5 TB drives in a RAID-Z configuration via ZFS. Simply put.... ZFS rocks and this card can write a stream at 335MB/sec to the drive volume. It can read a stream at over 500 MB/sec to the drive volume. This is exactly what I wanted. This is a rebadged LSI card and I found that it used the FreeBSD LSI module with no issue. This is a bare card, expect nothing else.


Edit:

If I am just using that card for JBOD and my OS will be handling the array, is there a need for the card, or can I just use the onboard sata chipset? So confused....

Moey fucked around with this message at 17:04 on May 27, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
You can use onboard SATA. The card may still be useful, as you have 7 drive bays and the most SATA connectors you're likely to find on a mini-ITX board is 5 plus an eSATA port, so not enough ports for all the bays. And that's just assuming you aren't cramming multiple drives per bay - you could stuff 16 2.5" disks in that Lian Li case, plus dangle an SSD or four without blocking airflow, if you got the right adapters and some tape.

And yeah, that Intel card is fancy. Not as fancy as the PERC, but still fancy.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Moey posted:

That x8 Intel card looks fancy. After reading this review, I assume speed wise I would be fine? All this poo poo is still confusing to me.



Edit:

If I am just using that card for JBOD and my OS will be handling the array, is there a need for the card, or can I just use the onboard sata chipset? So confused....

The card is needed if you're going to use more ports than the motherboard has. My server right now uses 9 ports (5 disk RaidZ + hot spare, mirrored boot drives, optical drive) and will be using 5 more when I run out of space (another 5 drive RAIDZ). With 6 onboard, the 8 on my card gives me just enough.

japtor
Oct 28, 2005

crm posted:

so I've got two separate 2TB disks that are mostly full, and I'm a little concerned one will off itself.

So I'm thinking I'd like something in the 8TB-10TB in some sort of RAID5 (or maybe even 6) configuration.

Preferably in a standalone box - for those parameters, what would you guys recommend?
The Synology 5 disk box might work (8TB RAID5 w/2TB disks)...but it's pretty expensive, around $1000. Alternatively you could go with a 4 disk setup with 3TB drives for 9TB (if they're ok for HW RAID use) which would help a bunch price wise, down to the $500 range or so, and it seems like there's a lot more options out there for 4 disks.

If you don't need a NAS there's cheaper USB3/eSATA/FW boxes out there, like $200-300 I think. There's ones with HW RAID or you can just set them up as separate disks to run your own SW RAID.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
So I'll have to figure out how many disks I am going to go with. I figure 4 disks would be plenty of storage for me. I was reading this article and this guy was testing expanding arrays with virtual disks, I think I'll do some testing with that before I go live.

http://rskjetlein.blogspot.com/2009/08/expanding-zfs-pool.html

Also what other hardware should I be looking at, any CPU/amount of memory recommendations? Do I want/need a disk for cache?

BlackMK4
Aug 23, 2006

wat.
Megamarm

devmd01 posted:

And that's a bad thing because...?

I will need a second raid card and have to ditch one of my Intel nics.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Moey posted:

So I'll have to figure out how many disks I am going to go with. I figure 4 disks would be plenty of storage for me. I was reading this article and this guy was testing expanding arrays with virtual disks, I think I'll do some testing with that before I go live.

http://rskjetlein.blogspot.com/2009/08/expanding-zfs-pool.html

Also what other hardware should I be looking at, any CPU/amount of memory recommendations? Do I want/need a disk for cache?

I just wish ZFS could do something like vacate a vdev, so you could remove it from the pool.

crm
Oct 24, 2004

japtor posted:

The Synology 5 disk box might work (8TB RAID5 w/2TB disks)...but it's pretty expensive, around $1000. Alternatively you could go with a 4 disk setup with 3TB drives for 9TB (if they're ok for HW RAID use) which would help a bunch price wise, down to the $500 range or so, and it seems like there's a lot more options out there for 4 disks.

If you don't need a NAS there's cheaper USB3/eSATA/FW boxes out there, like $200-300 I think. There's ones with HW RAID or you can just set them up as separate disks to run your own SW RAID.

I've got a FreeNAS box - what's the recommended eSATA box to go along with that?

Star War Sex Parrot
Oct 2, 2003

After looking at NAS options for the last year, I just decided that there's no point when I already have my iMac on all the time, and just went with a My Book Studio Edition II via FW800. I swapped the stock 1TB drives with 3TB drives, natch. :)

japtor
Oct 28, 2005

crm posted:

I've got a FreeNAS box - what's the recommended eSATA box to go along with that?
...I wish I could tell you :shobon:. I'd just go on Amazon and Newegg, search "# bay drive enclosure", and do research on different stuff you like from there.

KuruMonkey
Jul 23, 2004

PopeOnARope posted:

Are your backup drives pre-existing, or are you purchasing them new for this? If it's the first, then that's handy. If it's the latter, then you're wasting money.

And it's still going to take you a few hours to re-rip it all again. I guess if your time is worth less than a $60 drive, that's cool too.

As I said in my original post, way back; I already have 2x 2TB drives with the data already on them, AND already have 4x 1TB drives with a complete, up-to-date, maintained backup, sat in the cupboard already.

So; as soon as a new NAS comes along, I already have 8TB of backup drives to use as a backup of the NAS, and it starts out up to date; the 1TB drives can simply be ready to back up the inevitable growth. So I don't care AT ALL about the lack of redundancy IN the NAS itself. Do not care, will not care.

Hence I'm ignoring rants about RAID and "you can't afford to redundantly store the data you're already storing redundantly". Because, frankly, they're just noise from morons who never actually bothered to read my questions or description of the system as-is before knee-jerking to regurgitate their nuggets of received wisdom as fast as possible.

I don't really hold out any hope of meaningful input on my actual questions at this point, so go back to repeating "don't use JBOD" endlessly, I guess... (it seems to make some posters feel very clever)

kill your idols
Sep 11, 2003

by T. Finninho

KuruMonkey posted:

:jerkbag:

Coming to a thread geared towards backups and redundancy and you get negative response to not using redundancy? Might as well walk into a Synagogue dressed as Hitler.

I went back and re-read every one of your posts, and I have no idea why. Why even bother to ask the goons their opinion if you are already set in your ways about how you want YOUR poo poo collected?

Don't ask for help if you can't handle the criticism. :emo:


Anyway, back to some ZFS goodness. Anyone not use the reservation of storage on their zpools? Seems I'm losing about 560GB of space on a 4x2TB RaidZ.

EDIT: I'm really finding the HP ProLiant MicroServer tempting, with 5 drives and the USB header for the FreeNAS 8 install.

kill your idols fucked around with this message at 01:08 on May 29, 2011
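That ~560GB "loss" on a 4x2TB raidz is probably not the zpool reservation at all: one disk goes to parity, and then the decimal terabytes on the drive label get reported in binary TiB by most tools. A quick sketch of the unit arithmetic (illustrative, assuming raidz1 and marketing-TB drives):

```python
# Where the "missing" space on a 4x2TB raidz1 likely goes:
# 1 of 4 disks to parity, then decimal TB reported as binary TiB.

TB = 10**12   # what the drive label means by "2TB"
TiB = 2**40   # what df/zpool-style binary units mean

usable_bytes = (4 - 1) * 2 * TB   # raidz1: 3 data disks x 2 TB
usable_tib = usable_bytes / TiB
print(round(usable_tib, 2))               # ~5.46 shown for "6 TB" of data space
print(round((6 - usable_tib) * 1024))     # ~556 GiB apparent shortfall
```

That ~556 GiB is close to the ~560GB being reported, before any filesystem overhead.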

nosml
Jan 14, 2003

kill your idols posted:

I'm really finding the HP ProLiant MicroServer tempting, with 5 drives and the USB header for the FreeNAS 8 install.

x2. I've debated between buying a Drobo, Netgear ReadyNas, Synology or rolling my own solution, but for various reasons cannot justify the purchase over my current JBOD method. I think the HP unit finally fits my needs perfectly at a price I like.

Now, the debate between going with 2TB or 3TB drives. Any thoughts? Currently, 2TB seems to be the best price point.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
The reason the Proliant Microserver dropped off my radar was a web chat with HP talking about their warranty. They told me that if you don't buy the drives with mounting sleds from HP (at, for example, $530 for a 2 TB drive), it voids the warranty on the entire machine.

Dell at least has the courtesy to say "We'll cover the rest of the machine as long as you remove the aftermarket drives before we work on it."

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
I've decided on just keeping my little extremely low power ASUS EEE Box as a headless server I remote into from work for email/proxy, as I didn't want to hassle with making a NAS be Windows Server based just so I could remote into it as a replacement for the EEE Box.

So what seems to be the favorite blend of Linux/Unix NAS OS these days? FreeNAS?

reborn
Feb 21, 2007

kill your idols posted:

Coming to a thread geared towards backups and redundancy and you get negative response to not using redundancy? Might as well walk into a Synagogue dressed as Hitler.

I went back and re-read every one of your posts, and I have no idea why. Why even bother to ask the goons their opinion if you are already set in your ways about how you want YOUR poo poo collected?

Don't ask for help if you can't handle the criticism. :emo:


Anyway, back to some ZFS goodness. Anyone not use the reservation of storage on their zpools? Seems I'm losing about 560GB of space on a 4x2TB RaidZ.

EDIT: I'm really finding the HP ProLiant MicroServer tempting, with 5 drives and the USB header for the FreeNAS 8 install.

I haven't noticed any such issues with my pools.

I recently rolled my own FreeNAS box since 8.0 went stable and I have nothing but good things to say. It's certainly not as quick as other solutions but it fits my needs perfectly. I've got ZFS served via iSCSI to my ESXi box, which I use for everything from serving media to backups of my laptops/PC.

So far I haven't noticed any issues and the performance is good enough for me. I don't have any hard benchmarks at the moment.

kill your idols
Sep 11, 2003

by T. Finninho

nosml posted:

x2. I've debated between buying a Drobo, Netgear ReadyNas, Synology or rolling my own solution, but for various reasons cannot justify the purchase over my current JBOD method. I think the HP unit finally fits my needs perfectly at a price I like.

Now, the debate between going with 2TB or 3TB drives. Any thoughts? Currently, 2TB seems to be the best price point.

2TB drives I think are the sweet spot right now. With the move to 3TB drives on the way, I can see a lot of deals going on, plus people looking to expand their storage and selling their current 2TB drives. With the 3-year warranty on most hardware from the big names, taking a chance on some used stuff doesn't seem that bad. Worst case is you need to RMA it and take some downtime; which is always the case buying "used."

I've been searching around everywhere for more info on the HP unit, and so far I can't see a single thing keeping me away but the fact of spending $320ish on it. If I can find it on some crazy promo, or even used, I might have no choice.

EDIT: Newegg has it for $272.84 shipped, with instant discount and promo code. oshit.

kill your idols fucked around with this message at 01:20 on May 30, 2011

wanderlost
Dec 3, 2010
I've had two Drobos for years now and I love them to death, but I need something else. The Drobos are great for my large 5+TB media library, but for my personal photos and documents, I don't need that much space. My storage needs will always easily be covered by a single drive, so I'm looking for a 2-drive NAS that's as easy as the Drobos, and dead reliable. Synology is the other name that gets thrown around?


caberham
Mar 18, 2009

by Smythe
Grimey Drawer
Can someone please point me to any tools or resources to check whether dynamic DNS is working or not?

I registered for a free DNS hosting service and then tried running the Synology DDNS service. Everything seemed to be working and my friends could connect to my server, but after a while there's an error and no one can connect from the outside.

Even the port forwarding via EZ-Internet is not working, and if I try manually opening the ports through the router's port forwarding rules, I get a "fail" when I test the connection through the Synology service.
