Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
RAID 0 is pretty much dead. SSDs give enough performance for all but the most obscene workloads without doubling up on failure rates. Compared to two independent drives, RAID 0 doubles the data lost when a single drive fails, gives no benefit in total space, and still underperforms any decent SSD.

Whether you're going for capacity or speed, it doesn't bring anything to the table; at best it's a poor middle ground.
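
To put rough numbers on the "doubles the data loss" point, here's a back-of-the-envelope sketch (the annual failure rate is an illustrative assumption, not a measured figure):

```python
# Back-of-the-envelope comparison of RAID 0 vs. two independent drives,
# assuming independent failures with the same annual failure rate (AFR).
# The AFR below is made up for illustration.

def p_any_failure(afr: float, drives: int) -> float:
    """Probability that at least one of `drives` fails within a year."""
    return 1 - (1 - afr) ** drives

afr = 0.03  # hypothetical 3% annual failure rate per drive

# RAID 0: losing any one drive loses the whole array.
p_raid0_total_loss = p_any_failure(afr, 2)

# Two independent drives: a single failure loses only that drive's data.
p_one_independent_drive = afr

print(f"P(lose everything, 2-drive RAID 0):          {p_raid0_total_loss:.4f}")
print(f"P(lose one drive's data, independent drives): {p_one_independent_drive:.4f}")
```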

phosdex
Dec 16, 2005

I wouldn't say RAID 0 is dead, but its usefulness is definitely down due to SSDs. If you have something that wants high IOPS for random reads and writes, needs large capacity, and doesn't care too much about data loss, you might still use it.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
It is stupidly useless in a NAS though, as any ONE drive can easily saturate a gigabit ethernet line. The speed benefit is wasted, and in return you get a huge increase in your chances to lose data.
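
The arithmetic behind that saturation point, using ballpark figures rather than measurements:

```python
# Rough arithmetic behind "one drive can saturate gigabit".
# All numbers are ballpark estimates, not benchmarks.

gigabit_mb_s = 1000 / 8            # ~125 MB/s raw line rate
usable_mb_s = gigabit_mb_s * 0.94  # rough allowance for protocol overhead

hdd_sequential_mb_s = 180          # typical modern 7200 RPM drive, sequential

print(f"Gigabit usable throughput: ~{usable_mb_s:.0f} MB/s")
print(f"Single HDD, sequential:    ~{hdd_sequential_mb_s} MB/s")
print("One drive already exceeds the link for sequential transfers.")
```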

phosdex
Dec 16, 2005

But you don't normally saturate gigabit with heavy random reads and writes on mechanical drives.

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Skandranon posted:

Depending how you set your mirror up, no, you don't have to clear it first. Windows will allow you to simply add a mirror to any (dynamic, I think) drive. Can't speak for the rest. The only failure mode for RAID 1 is both drives failing at the same time, and that is only plausible if you get 2 disks from the exact same production run and that run in particular has a high failure rate. Otherwise, it should be random as to when your drives die. Also, there is only so much you can do to protect your data, and in your case, RAID 1 is the best you can do. Still might get hit by a meteor tomorrow, but today, RAID 1 is your best bet for redundancy. If you only have 2TB though, something like CrashPlan or Backblaze might be a good thing to look into as well.

Cool, thanks, that being the only way for raid 1 to fail completely is sufficiently 'lightning strikes' for me.

Here's what's probably a really dumb question. I've got an old PC lying around that I want to use to mess around with a bare-metal hypervisor on (ESXi I guess is the free one?). Would setting up FreeNAS or something like that as a VM on that old hardware be a seriously bad idea? (I know a VM is another layer between the OS and the disks, but I have no idea if that's a big deal or not as far as storage goes)

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Ciaphas posted:

Cool, thanks, that being the only way for raid 1 to fail completely is sufficiently 'lightning strikes' for me.

Here's what's probably a really dumb question. I've got an old PC lying around that I want to use to mess around with a bare-metal hypervisor on (ESXi I guess is the free one?). Would setting up FreeNAS or something like that as a VM on that old hardware be a seriously bad idea? (I know a VM is another layer between the OS and the disks, but I have no idea if that's a big deal or not as far as storage goes)

It's not a bad idea at all; a number of people here have done exactly that. However, one thing to keep in mind is that ESXi is extremely picky about drivers.

phosdex
Dec 16, 2005

Ciaphas posted:

Cool, thanks, that being the only way for raid 1 to fail completely is sufficiently 'lightning strikes' for me.

Here's what's probably a really dumb question. I've got an old PC lying around that I want to use to mess around with a bare-metal hypervisor on (ESXi I guess is the free one?). Would setting up FreeNAS or something like that as a VM on that old hardware be a seriously bad idea? (I know a VM is another layer between the OS and the disks, but I have no idea if that's a big deal or not as far as storage goes)

FreeNAS will want hardware control of the disk controller. In ESXi this is known as hardware/PCI passthrough and requires your CPU and motherboard to support Intel VT-d/AMD IOMMU. You typically cannot pass through your motherboard's onboard SATA controller, so you'll need a separate card (or a nice server board with a secondary controller). The FreeNAS forums have a sticky post that basically says the world will end if you virtualize it, but many people do so without problems.
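
If you want to sanity-check a box for IOMMU support before buying a passthrough-capable setup, a rough sketch like this (run from a Linux live environment, not from ESXi itself; the sysfs paths are the standard Linux ones) shows whether the firmware even exposes VT-d/AMD-Vi:

```python
# Rough Linux-side check for IOMMU (Intel VT-d / AMD-Vi) support, e.g. from a
# live USB before committing to an ESXi passthrough build. This only shows
# what the firmware exposes; ESXi decides passthrough eligibility separately.
import os

def iommu_hints() -> None:
    # ACPI tables the firmware publishes when the IOMMU is enabled:
    # DMAR on Intel (VT-d), IVRS on AMD (AMD-Vi).
    for table in ("DMAR", "IVRS"):
        path = f"/sys/firmware/acpi/tables/{table}"
        print(f"{table} table present: {os.path.exists(path)}")

    # Populated once the kernel actually brings an IOMMU up (may require
    # intel_iommu=on / amd_iommu=on on the kernel command line).
    iommu_dir = "/sys/class/iommu"
    active = os.listdir(iommu_dir) if os.path.isdir(iommu_dir) else []
    print(f"Active IOMMUs: {active or 'none'}")

if __name__ == "__main__":
    iommu_hints()
```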

IOwnCalculus
Apr 2, 2003





phosdex posted:

FreeNAS will want hardware control of the disk controller. In ESXi this is known as hardware/PCI passthrough and requires your CPU and motherboard to support Intel VT-d/AMD IOMMU. You typically cannot pass through your motherboard's onboard SATA controller, so you'll need a separate card (or a nice server board with a secondary controller). The FreeNAS forums have a sticky post that basically says the world will end if you virtualize it, but many people do so without problems.

I've passed through the onboard SATA just fine, but you still need an add-on controller. Whichever controller you pass through won't be available to ESXi to use as a datastore, and the FreeNAS VM needs to live in a datastore that doesn't require FreeNAS to already be running.

phosdex
Dec 16, 2005

Huh, I've got an X99 board with dual Intel controllers and couldn't get the second one to appear in the passthrough window when I wasn't using it. I'm not sure how Supermicro did that dual-controller thing; maybe it really is just one.

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Skandranon posted:

Depending how you set your mirror up, no, you don't have to clear it first. Windows will allow you to simply add a mirror to any (dynamic, I think) drive. Can't speak for the rest. The only failure mode for RAID 1 is both drives failing at the same time, and that is only plausible if you get 2 disks from the exact same production run and that run in particular has a high failure rate. Otherwise, it should be random as to when your drives die. Also, there is only so much you can do to protect your data, and in your case, RAID 1 is the best you can do. Still might get hit by a meteor tomorrow, but today, RAID 1 is your best bet for redundancy. If you only have 2TB though, something like CrashPlan or Backblaze might be a good thing to look into as well.

Coming back to the cloud backup thing, it looks like neither of those really supports backing up from a NAS at all, at least according to Google. Is this actually so, and why?

(edit: never mind, looks like CrashPlan does support it with a bit of finagling. That's what I get for googling at 1am :()

Ciaphas fucked around with this message at 09:20 on Aug 15, 2015

JacksAngryBiome
Oct 23, 2014
FreeNAS has a CrashPlan plugin. It does take some finagling to set up.

EpicCodeMonkey
Feb 19, 2011
Holy christ, Synology have themselves a mighty fine NAS product with a great web UI, but they don't know the first thing about package management. I managed to screw mine up royally installing GitLab (just to mess around with it) via their package manager - the initial install went fine, but overnight it imploded. Not to worry, I went into the management console to restart it, but nothing happened.

Plan B was to uninstall and reinstall GitLab, which apparently auto-deletes all its databases. No option to retain them in case you reinstall later; uninstalling just nukes them all. In this case, though, it told me I had to uninstall the MariaDB and Docker dependencies first, before I could remove GitLab. Except, you guessed it, trying to uninstall either of those two tells me I have to remove GitLab first.

Ended up having to SSH into the thing, nuke enough of the package directories to make it think they were removed, re-install them to fix everything up, then uninstall them to rid myself of the madness. Never again - now I'm using their bare-bones Git server package that has no dependencies.

Why the crap can't they use a real package manager that can handle this sort of thing, instead of making their own half-baked home-grown system?

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


JacksAngryBiome posted:

FreeNAS has a CrashPlan plugin. It does take some finagling to set up.

I didn't even consider that, I was thinking in terms of backing up via my desktop. :downs:

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Ciaphas posted:

I didn't even consider that, I was thinking in terms of backing up via my desktop. :downs:

I do believe CrashPlan will allow you to back up mapped network drives, though Backblaze won't, as they are fuckers.

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


phosdex posted:

FreeNAS will want hardware control of the disk controller. In ESXi this is known as hardware/PCI passthrough and requires your CPU and motherboard to support Intel VT-d/AMD IOMMU. You typically cannot pass through your motherboard's onboard SATA controller, so you'll need a separate card (or a nice server board with a secondary controller). The FreeNAS forums have a sticky post that basically says the world will end if you virtualize it, but many people do so without problems.

Thanks for warning me about this. Looks like my old computer--a Gigabyte GA-P55M-UD2 with an i5 750 (and 16GB DDR3)--doesn't support VT-d at all, so I won't be trying that.

Guess I can still get the disks together and install FreeNAS on it easily enough, though I'm still not sure whether I want to go to all that fuss and bother or just buy a Synology. (Seems like kind of a beefy machine just to be running a NAS, but I'm not really sure what else to do with it.)

Aside question, anyone know if the disk in a WD My Book Live can be salvaged as is or will they have some fuckery going on there?

(edit) While I'm here, it looks like I can't get anything Synology at Fry's--just Netgear (which I see I should avoid) and WD. Anyone know if anything on this page can be considered good to buy?

Ciaphas fucked around with this message at 20:13 on Aug 15, 2015

BlankSystemDaemon
Mar 13, 2009



phosdex posted:

FreeNAS will want hardware control of the disk controller. In ESXi this is known as hardware/PCI passthrough and requires your CPU and motherboard to support Intel VT-d/AMD IOMMU. You typically cannot pass through your motherboard's onboard SATA controller, so you'll need a separate card (or a nice server board with a secondary controller). The FreeNAS forums have a sticky post that basically says the world will end if you virtualize it, but many people do so without problems.

I imagine the reason for the world ending if FreeNAS is virtualized is more that they simply cannot guarantee nothing will break as a result of virtualizing it, since they don't test for that - but that's just them covering themselves, as FreeBSD runs fine as a guest OS (and a hypervisor*) and there is no reason to believe the things they put on top will break anything.

*: I've been playing with iohyve on my FreeBSD workstation, and once I get around to building a new server I'm seriously considering using FreeBSD+ZFS+bhyve+iohyve+jails+iocage rather than ESXi, since FreeBSD with those tools can do everything ESXi can.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
From what I understand, a lot of the concern comes from what happens when a disk fails in a virtualized environment. Without direct access to the drive(s), FreeNAS/ZFS may freak out completely when you try to replace it. Or it might work fine. :iiam: There's also some concern about write holes, since without direct access FreeNAS may think it wrote something the host OS hasn't actually committed yet. That's a bit of a fringe case, though, and shouldn't be an issue at all if you're on a UPS.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Regarding RAID 0, if it's all about performance, you're better off with RAID 1, which has a higher theoretical read throughput and gets you redundancy as well.

IOwnCalculus
Apr 2, 2003





DrDork posted:

From what I understand, a lot of the concern comes from what happens when a disk fails in a virtualized environment. Without direct access to the drive(s), FreeNAS/ZFS may freak out completely when you try to replace it. Or it might work fine. :iiam: There's also some concern about write holes, since without direct access FreeNAS may think it wrote something the host OS hasn't actually committed yet. That's a bit of a fringe case, though, and shouldn't be an issue at all if you're on a UPS.

Those issues go away if you pass the controller through to the storage VM. The only virtual disk it would deal with would be the one it boots from.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

Those issues go away if you pass the controller through to the storage VM. The only virtual disk it would deal with would be the one it boots from.

Yeah, absolutely. Hence the need for VT-d, at which point you're probably OK on all fronts. I was just talking about virtualizing the whole thing with no passthrough.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Only 541GB left free in my 4-drive SHR2 array (2-drive redundancy). I'm using four 3TB Reds right now. Anyone used the 5TB Seagates pulled from externals? Right now my case maxes out at 6 drives (5 hot-swap), and I'm considering getting a 5TB for my fifth drive, then moving the others over to 5TB drives in the future.

edit: apparently it's garbage lol

Just gonna get another 3TB Red and then pick up a few 16TB SSDs in a couple of years.

Don Lapre fucked around with this message at 17:49 on Aug 16, 2015

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I dunno how well Moore's Law works out for prices, but given that a 16TB SSD is probably going to be something like $200k for enterprises, I'd say it'll still be something like $5k in another two years (density is supposed to double every 18 months, but price is another question, yada yada).

I'm just not giving a damn about the money and dumping eight 4TB drives into a mid-tower case (those Toshiba drives run a little warm, but they're pretty cheap and reliable, which makes them tops on the price-performance-reliability curve if you ask me). After trying for years and years to get the right server rack and/or small server setup that really works well for my needs, I'm going back to the tried and true server-in-a-tower + beefy hardware + custom setup + lots of drives that I had in freakin' college. If it hasn't happened by now, it just ain't gonna happen :(
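
For anyone who wants to play with that kind of extrapolation, here's a quick sketch of "price halves every 18 months" compounding; the starting price and horizon are illustrative assumptions, not quotes:

```python
# Toy "halving every N months" price extrapolation. All inputs are made up
# for illustration; actual SSD pricing doesn't follow a clean curve.

def extrapolated_price(start_price: float, months: int, halving_months: float = 18) -> float:
    """Price after `months`, assuming it halves every `halving_months`."""
    return start_price * 0.5 ** (months / halving_months)

start = 200_000  # hypothetical enterprise price for a 16TB SSD today, USD
for months in (18, 36, 54, 72):
    print(f"after {months:2d} months: ${extrapolated_price(start, months):,.0f}")
```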

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Don Lapre posted:

Only 541GB left free in my 4-drive SHR2 array (2-drive redundancy). I'm using four 3TB Reds right now. Anyone used the 5TB Seagates pulled from externals? Right now my case maxes out at 6 drives (5 hot-swap), and I'm considering getting a 5TB for my fifth drive, then moving the others over to 5TB drives in the future.

edit: apparently it's garbage lol

Just gonna get another 3TB Red and then pick up a few 16TB SSDs in a couple of years.

How fast are you eating up space? I'd probably get 2 more 3TB drives to max out the case, and then re-evaluate the hard drive market when you next need to upgrade.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Skandranon posted:

How fast are you eating up space? I'd probably get 2 more 3TB drives to max out the case, and then re-evaluate the hard drive market when you next need to upgrade.

Not fast enough lol. Yeah, I'll just add 3TB drives when I need them.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Don Lapre posted:

Not fast enough lol. Yeah, I'll just add 3TB drives when I need them.

The reason I say get 2 now is that adding drives and re-configuring an array takes a non-trivial amount of time (depending upon case & array config). You'll mostly save that time if you do 2 drives at once.

Prescription Combs
Apr 20, 2005
   6
I currently have a pieced-together 'server' acting as a NAS/Plex transcoder. While the proc (an i3-4xxx something) is adequate for transcoding, the write speed of the RAID 5 array is abysmal.

When I originally set it up I didn't research much and did a software (I know, I know) RAID 5 with the Intel chipset. Would moving to a hardware PCIe RAID controller help with the atrocious write rates? I don't notice any high CPU usage while writes are going to disk, but I don't really know where the parity calculations happen because I honestly don't know much about RAID.

It currently has 3x 3TB drives in the array. One did fail and was replaced. The rebuild took about a day and a half :gonk:

Any suggestions?
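
For context on where the parity work happens: RAID 5 parity is an XOR across the data chunks of each stripe, so with chipset/software RAID that XOR (plus the read-modify-write for partial-stripe updates) runs on the host CPU, while a hardware controller typically offloads it and adds its own write cache. A minimal sketch of the parity math, just to illustrate:

```python
# Minimal illustration of RAID 5 parity: the parity chunk is the XOR of the
# data chunks in a stripe, so any single missing chunk can be rebuilt by
# XOR-ing the survivors. With software RAID, this work runs on the host CPU.
from functools import reduce

def xor_chunks(chunks: list[bytes]) -> bytes:
    """XOR equal-length chunks byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# A 3-disk stripe: two data chunks plus one parity chunk.
d0 = b"hello world, chunk 0"
d1 = b"and this is chunk 1!"
parity = xor_chunks([d0, d1])

# Simulate losing disk 0 and rebuilding its chunk from the survivors.
rebuilt_d0 = xor_chunks([d1, parity])
assert rebuilt_d0 == d0
print("rebuilt chunk 0:", rebuilt_d0)
```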

Cooter Brown
Sep 24, 2004

Would running a BTRFS based NAS in a VM environment be controversial as well? I realize BTRFS is less mature than ZFS, but it does have some decent looking NAS tooling. I'm not looking to make the jump right now, just curious about my future options.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Cooter Brown posted:

Would running a BTRFS based NAS in a VM environment be controversial as well? I realize BTRFS is less mature than ZFS, but it does have some decent looking NAS tooling. I'm not looking to make the jump right now, just curious about my future options.

https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F

Heh, honestly I feel like btrfs is the great white hope on Linux, but it keeps failing to reach maturity.

frunksock
Feb 21, 2002

What's a good 4+ bay external JBOD enclosure? What's best for the external connectivity? I have memories of eSATA being a pain to deal with. Is USB3 awesome? Thunderbolt? Something else?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

ILikeVoltron posted:

https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F

Heh, honestly I feel like btrfs is the great white hope on Linux, but it keeps failing to reach maturity.

I feel like btrfs is an amazing concept that will NEVER reach maturity or wide-scale acceptance, because of fundamental problems like not being able to accurately measure the free space you have. I realize that's an intentional design consideration, and required by the very nature of being able to define redundancy-state at a per-file level, but that's pretty much critical information.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I kind of don't mind too much as long as it's within a reasonable ballpark. Who cares if there are exactly n results from a Google search, unless the number is below a couple thousand or so, when it's a human that's supposed to interpret the results? If I know I've got less than, say, 10% disk space free, I'm already in a bit of a panic mode. Hell, at work I have monitoring set up to warn at 25%, because performance starts degrading for most file systems at about the 20% mark anyway, and because it gives a little more time if a runaway process is consuming disk.
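
That kind of threshold check is easy to script against whatever mount point matters; a toy version (the path and thresholds here are just examples, not anyone's actual setup):

```python
# Toy free-space check along the lines of the monitoring described above.
# Mount point and thresholds are illustrative assumptions.
import shutil

def check_free(path: str, warn_pct: float = 25.0, crit_pct: float = 10.0) -> str:
    usage = shutil.disk_usage(path)          # total, used, free (bytes)
    free_pct = usage.free / usage.total * 100
    if free_pct < crit_pct:
        return f"CRITICAL: {free_pct:.1f}% free on {path}"
    if free_pct < warn_pct:
        return f"WARNING: {free_pct:.1f}% free on {path}"
    return f"OK: {free_pct:.1f}% free on {path}"

print(check_free("/"))
```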

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

G-Prime posted:

I feel like btrfs is an amazing concept that will NEVER reach maturity or wide-scale acceptance, because of fundamental problems like not being able to accurately measure the free space you have. I realize that's an intentional design consideration, and required by the very nature of being able to define redundancy-state at a per-file level, but that's pretty much critical information.
Uhm, isn't a file system supposed to have a spacemap or something to quickly decide where to write new data? How in the hell can't it tell you how much free space there is?

Zorak of Michigan
Jun 10, 2006


Combat Pretzel posted:

Uhm, isn't a file system supposed to have a spacemap or something to quickly decide where to write new data? How in the hell can't it tell you how much free space there is?

They've made it so flexible that while you can see how much raw free space you have at this exact moment, you can't predict how it will be consumed if your configuration is fancy. Allowing varying redundancy levels on a per-file basis and then doing backend compression means that writing 1GB might use 1GB of storage, or 2GB, or 100MB, or anywhere in between.
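
A toy model of why a single free-space number is ambiguous under that scheme (the profiles and compression ratios are simplified illustrations, not btrfs's actual accounting):

```python
# Toy model of why "how much can I still write?" is ambiguous when redundancy
# is chosen per file and data may be compressed. Ratios are illustrative only.

def raw_bytes_consumed(logical_bytes: int, copies: int, compression_ratio: float) -> int:
    """Raw space consumed by a write, given its redundancy and compressibility."""
    return int(logical_bytes * copies * compression_ratio)

one_gib = 1 << 30
scenarios = {
    "single copy, incompressible":    (1, 1.0),
    "mirrored (2 copies)":            (2, 1.0),
    "single copy, 10:1 compressible": (1, 0.1),
}
for name, (copies, ratio) in scenarios.items():
    used = raw_bytes_consumed(one_gib, copies, ratio)
    print(f"{name}: 1 GiB written -> {used / one_gib:.1f} GiB of raw space")
```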

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
That's just a traditional issue with compression and dedupe in general; you can get very similar problems with ZFS or a multi-million-dollar SAN. Let me tell you how weird things can get when a $200k SAN keels over randomly because even it didn't realize it was about to run out of free space for LUNs on thin-provisioned storage with block-level deduplication - very.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

Combat Pretzel posted:

Uhm, isn't a file system supposed to have a spacemap or something to quickly decide where to write new data? How in the hell can't it tell you how much free space there is?

https://btrfs.wiki.kernel.org/index.php/FAQ#Why_is_free_space_so_complicated.3F

The short version is that they allow RAID-like redundancy definition all the way down to an individual file level, at runtime. Their description hurts my brain, but it makes the most sense of any explanation I can find. It's a really cool idea, but it seems painful at best to work with.

Red Dad Redemption
Sep 29, 2007

I'm thinking about either building a NAS and using FreeNAS as the OS, or buying one premade (QNAP or Synology). Either way, I'd use the NAS to store and stream (via Plex) the cabinet full of DVDs that we've accumulated over the years. One of the big differences would be OS-related, so I'm wondering whether anyone has experience with FreeNAS / ZFS, especially with Plex. Would it be significantly better / worse than the OSes that come with QNAP or Synology? Any general thoughts on whether building a NAS would be better or worse than going with a prebuilt system?

E: After looking in more depth at FreeNAS / ZFS, it looks like it would ultimately be a more expensive option than a TS-253. More expandability and power, but I don't necessarily need those. Building would be fun, but the rational choice looks like the QNAP.

E2: Finished tweaking my spec for a FreeNAS box - fun, but significantly more expensive than a QNAP. I really do feel like building it and taking a crack at managing FreeNAS, but even if I splurge on an overkill machine, the key issue remains whether someone new to servers and (to some degree) networking, namely me, will be able to dive in and digest it all without profoundly messing everything up. Still on the fence.

Red Dad Redemption fucked around with this message at 02:22 on Aug 19, 2015

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

Uhm, isn't a file system supposed to have a spacemap or something to quickly decide where to write new data? How in the hell can't it tell you how much free space there is?

I've watched a presentation about ZFS by one of the guys who worked on it, and apparently one of the reasons they can't do block pointer rewrites is the reporting of disk space usage.

It seems the free space accounting is relied on in so many places for accurate readings that they can't just change the underlying blocks without breaking it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
There seems to be something related to bp_rewrite going on in OpenZFS. There's apparently ongoing work to implement vdev eviction, which also requires shuffling around of blocks and then retiring the vdev. So it doesn't seem to be an impossible problem, just a terribly annoying one.

--edit:
This:

http://blog.delphix.com/alex/2015/01/15/openzfs-device-removal/

Combat Pretzel fucked around with this message at 06:10 on Aug 18, 2015

Zorak of Michigan
Jun 10, 2006


If they get that working, it will be amazing. Not only would users of OpenZFS enjoy the functionality, but I could wander around to all my Oracle Solaris reps and demand to know why their ZFS implementation is now behind OpenZFS in critical features.

phongn
Oct 21, 2006

One of the OpenZFS devs hinted at a block pointer rewrite implementation but we might not see it until the end of the year or longer.
