Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
So why aren't OpenSolaris and RAID-Z more popular? I had a friend set it up on an old box that he had, and it worked on all the hardware he had lying around. I'm considering doing this myself, but wondering how OpenSolaris would run on random cobbled-together hardware. I've heard lots of raving about how good ZFS is, and I want to see it myself.


Twerk from Home
Jan 17, 2009

^^^^^
Yes ZFS is for you. ZFS is loving amazing. This box that I talk about below was built out of random hardware lying around, and is using both the onboard NIC and an add-in card.

My friend brought over his Opensolaris box to let me dump some data I had backed up for him to it, and I'm seeing transfer speed slow down as time passes. Furthermore, I seem to have overloaded it by copying too many things at once and I lost the network share for a second. I ssh'ed into it, and everything looks fine, but transfer speed keeps dropping from the initial 60MB/s writes I was seeing all the way down to 20MB/s. Is everything OK as long as zpool status returns 0 errors?

I don't know much about ZFS; how full should the volume be allowed to run? It's on 4x1TB drives, so it has about 2.67TB logical space. Of this, about 800GB is available right now.
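For reference, a quick sanity check on those numbers, assuming the oft-repeated rule of thumb that ZFS pools should stay under roughly 80% full before performance starts to suffer (the exact threshold is debated):

```shell
# Fill level for the pool described above: 2.67 TB usable, 0.8 TB free.
# The common advice is to keep ZFS pools under ~80% full.
awk 'BEGIN {
  total = 2.67; free = 0.8
  used_pct = (total - free) / total * 100
  printf "pool is %.0f%% full\n", used_pct
  printf "%.2f TB left before the 80%% mark\n", total * 0.8 - (total - free)
}'
```

`zpool list` reports the same thing live in its CAP column, so you don't have to do the arithmetic by hand.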

Twerk from Home
Jan 17, 2009


Methylethylaldehyde posted:

The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60MB/sec pretty easily if it's large files on the outer sectors, but as you fragment the files, move smaller files, or move toward the inner tracks, the drives will slow down quite a bit. I know that internal disk-to-disk transfers of files on my computer will go anywhere from 150MB/sec (SSD to RAID array) all the way down to ~15MB/sec (slow assed disk to RAID array).

All of the disks involved were Seagate 7200.12 TB drives, including the drive that I was transferring from. I have two in my machine and the NAS has 4. They've been really fast except for this one time, so I'm hoping it was just an isolated problem. These drives have been fast as all hell for sustained file transfers; it's impressive how quick a 2-platter TB drive is.

Twerk from Home
Jan 17, 2009


G-Prime posted:

Maybe somebody here knows this offhand. I'm finding really lacking and almost misleading information all over the place regarding AMD processors and ECC RAM. Anybody know where there's a clear list of which ones support it? I'm considering buying one of the APUs (for stupid reasons, but reasons nonetheless), but wouldn't be opposed to one of the standalone CPUs and getting a GPU if I decide I need it down the road. Either way, I want to have ECC RAM for this build and can't find info on anything but AMD's embedded Kabini APUs having support for ECC, nothing else seems to list it anywhere.

If I recall correctly, all AM2 and newer AMD chips can support ECC RAM; it's entirely dependent on the motherboard supporting it.

What usage case is this? If extremely low budget is the issue, the Intel Pentium parts all support ECC RAM, as well as some i3s.

Twerk from Home
Jan 17, 2009


Factory Factory posted:

32 GB of RAM isn't enough for more than 32 TB of ZFS storage anyway, is it? You'd want to go with another softRAID.

What are people doing instead of RAID-Z anyway? I really like ZFS but want to be able to scale past 32TB as 4TB drives get cheaper.

Twerk from Home
Jan 17, 2009


Krailor posted:

In windows 8 you can set your OneDrive files to be Online Only. This will automatically delete the local copy once it's been uploaded to OneDrive so it's not taking up space on your local hard drive. I know for regular files it will download a local version only if you open it and then remove the local version again once you've finished interacting with it.

Like this?: http://windows.microsoft.com/en-us/windows-8/onedrive-online-available-offline

It looks like MS isn't too confident in their OneDrive app:

quote:

We don't recommend working with online-only files at a command prompt or using Windows PowerShell or other command-line tools. Some actions might result in errors, or even cause file contents to be deleted.

Twerk from Home
Jan 17, 2009


Thermopyle posted:

Versioned backups are the best loving thing.

Thanks, Crashplan!

Does Crashplan count a NAS box as "1 computer"? If so, wowza. What a deal.

Twerk from Home
Jan 17, 2009

How terrible an idea is single-drive parity with 5TB drives? I've read the theory that unrecoverable read error rates are high enough that you'll get one while reading a full 5TB drive, but isn't that just an argument that you can't use a single high-capacity drive anywhere to keep data?

I'm looking to scale up my home NAS from 2 mirrored 5TB drives to a 4 or 5 drive raid z1 array, and I'd really like to avoid losing 2 drives to parity or mirroring.

Twerk from Home
Jan 17, 2009

My thought was that backups are necessary anyway, and with how insanely fast home internet connections are getting restoring from backup wouldn't be as miserable as it used to be. Also, as you said, the checksumming filesystems should be able to recognize the corruption created by the URE and say "OK, that file is hosed, but everything else is fine".
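A sketch of how that plays out in practice with ZFS; the pool name `tank` here is a placeholder:

```shell
# Kick off a scrub, which reads back and checksums every allocated block.
zpool scrub tank

# When it finishes, -v lists per-device checksum error counts and, under
# "errors:", the exact files with unrecoverable damage. Everything not
# listed there verified clean.
zpool status -v tank
```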

Twerk from Home
Jan 17, 2009


Skandranon posted:

It may only be able to use a single core for parity calculations, and you are railing that core.

If that were true, wouldn't he be at 40%+ CPU usage? It's a dual core.

Twerk from Home
Jan 17, 2009


ufarn posted:

When a Synology is listed as nominally supporting 1080p transcoding, is it 1080p period, or only at some mediocre bitrate? I wouldn't be surprised if it turned out it couldn't do half my videos with decent results.

That must be with hardware acceleration, AKA Intel Quick Sync. Quick Sync can chew on 1080p60 video all day long without any trouble, but Plex doesn't support Quick Sync at all, so I hope you are using Emby and not Plex.

Twerk from Home
Jan 17, 2009


Don Lapre posted:

The hardware transcoding does not work with plex and poo poo as far as i know.

It works well with Emby, a Plex competitor that's pretty feature-equal.

Twerk from Home
Jan 17, 2009


necrobobsledder posted:

I'm a tad bummed out that it didn't come with 8x DIMM slots though because functionally it's not all that different from the mini ITX Xeon Ds out besides the addition of the SAS controller.

Isn't 4 DIMMs a limitation of the Xeon-D platform so that they don't cannibalize E5 Xeon sales too badly? The bigger Xeon-Ds look pretty drat favorable vs an E5-2630L v4 or similar. It looks like officially all the Xeon-Ds max out at 128GB RAM.

Twerk from Home
Jan 17, 2009


priznat posted:

Any good recommendations for a low power motherboard/cpu combo? I'd like something that is like the appliances from Qnap/Synology where they consume ~35W under load and normally barely anything.

Not much luck finding decent N3150/J1900 motherboards with more than 2 SATA3 ports.

Your drives are going to consume more power than your mobo / CPU, and I'd suggest that the simpler option might be to find a B150/H110/H170 motherboard you like and put a Pentium G4400 or similar on it.

Twerk from Home
Jan 17, 2009


Shumagorath posted:

Your "forever" archive should definitely be optical stored in a fire safe (not that Blurays won't just melt but that's why you have an offsite copy). Dual-layer Bluray is still a thing, right? That's 50GB per disc.

Why not tape drives? LTO-6 stuff is really widely available now, and for the quantity of backup he's having to do, the 6.25TB compressed capacity on a single tape would take the sting off.

Twerk from Home
Jan 17, 2009


Arsenic Lupin posted:

"Forever" means "readable in 50 years", which means you need a format that you are absolutely, positively sure you'll be able to find working readers for. Blu-Ray is a lot more likely than a tape drive.

Yeah, tape's a bad call for putting it in a vault. You're right. Will burned Blu-rays last 50 years though? I've had burned CDs and DVDs die of old age already!

Twerk from Home
Jan 17, 2009


Shachi posted:

I should preface that "forever" really should mean "for as long as it's my problem" ;)

But yeah, Blu-rays are likely where that stuff will go. Forevers are a really small percentage of total volume though.

I just need a large and expandable storage option. Something I regularly back up and store to and occasionally pull data from as it's requested. I need a recommendation for a "better than a drobo" drobo, that's a little more flexible than just a RAID array as drives die/need to be expanded.

Synology stuff is pretty good. Check out the DS2415+ if you want 12 bays, or the DS1815+ if you can get away with 8.

Twerk from Home
Jan 17, 2009


Moey posted:

I'm using two 8tb archive drives for media storage and have no complaints.

Using them in RAID 1, or JBOD?

Twerk from Home
Jan 17, 2009


Moey posted:

Separate drives with a robocopy script backing up primary to backup. That seems to be the best use case for me currently.

I'm very interested in the shingled drives because of the dirt-cheap cost per GB, but I've seen lots of nebulous warnings not to use them in arrays, even RAID 1s. I haven't read any concrete reasons why other than their abysmal random write performance, and it's got me wondering if using a modern CoW filesystem on them in a RAID-Z2 or something might actually be acceptable.

Twerk from Home
Jan 17, 2009


Gozinbulx posted:

Ubuntu + hardware raid or something else?

I'm going to guess ZFS on Ubuntu? It's pretty great on 16.04.
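For anyone following along, getting ZFS going on 16.04 is about this much work (the pool and device names below are made up):

```shell
# ZFS is in Ubuntu's main archive as of 16.04.
sudo apt install zfsutils-linux

# Create a RAID-Z1 pool from four disks, then check on it.
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank
```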

Twerk from Home
Jan 17, 2009


Gozinbulx posted:

So I'm looking at a Xeon E3-1231V3B build with 16GB of ECC RAM and 4x4TB Reds. Should I slap FreeNAS on there and wait for Docker and maybe play with those apps, or do the aforementioned Ubuntu + ZFS? I imagine then that Ubuntu runs Docker? I don't even know if I'll need Docker but it sounds pretty neat. FreeNAS sounds pretty good but I'm open to persuasion as to why I should roll out Ubuntu + ZFS.

FreeNAS is not a general purpose OS. If you need a NAS, it's likely your best option. If you need a general purpose server that you'll be running arbitrary software on, and you either don't mind spending a bit more time to maintain and harden it or are comfortable having it update itself on a schedule and trusting the Ubuntu package maintainers, then you want Ubuntu.

Run FreeNAS if it does what you need.

Twerk from Home
Jan 17, 2009


necrobobsledder posted:

I'd rather have docker run natively on a Linux host than inside a VM if I had an option. It's clunky enough for me dealing with Docker on OS X as a developer when I set it up when messing with the different options to get the docker client to work remotely through VirtualBox or VMware Fusion (although there's a neat xhyve based option some random guy wrote). Furthermore, from a production standpoint you already have enough security headaches with docker and adding a VM to the mix is more busywork.

My understanding is that Docker on FreeNAS is native, not inside a VM.

Twerk from Home
Jan 17, 2009

7200 RPM drives seem like a losing spot to be in right now. If you want speed, why the hell are you spinning a platter? And if you're after cheap, bulk, energy-efficient storage, 7200 RPM doesn't help on any of those three counts.

Twerk from Home
Jan 17, 2009


IOwnCalculus posted:

The biggest change I'd make to that SA Drivebox build is to find some flavor of used LSI HBA on eBay and use that instead of the Syba Asmedia controller. You might be out another $30 or $40 in total between the card and the SAS->SATA breakout cable, but you get a controller with much better support for things like ZFS.

Can you elaborate a bit on this? My understanding was that you're flashing those things into JBOD mode anyway, and that ZFS has the software do all the heavy lifting so that the hardware doesn't matter in general. Given that you're not using any hardware RAID or special driver features on either controller, why does the controller matter?

Is performance on those cheapo Asmedia controllers just awful or something?

Twerk from Home
Jan 17, 2009


apropos man posted:

I know it's a waste and kind of stupid. I feel like practising setting up RAID1. Today I went into CeX (a kind of junk shop for second hand phones, DVD's and computer parts here in the UK) and I was gonna throw £15 at a drive to pair with the WD Blue but they had nothing that looked decent, only a really scratched looking Samsung 250GB.

I have a spare OCZ Arc that I'm not really using and that was gonna end up in the pair. I might have a look in town tomorrow for a semi-decent mechanical drive.

If you want to play with RAID levels and assembling volumes, you could instead look at learning ZFS. Instead of using physical drives, you can create vdevs backed by ordinary files on disk, then assemble and experiment with RAID arrays where the whole array runs from separate files on the same disk.

You could also just use VMs, but a RAID 1 of a hard disk and an SSD sounds like a bad time.
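A sketch of the file-backed-vdev approach, with made-up pool and file names; nothing here touches a real disk beyond a few sparse files:

```shell
# Four 1 GB sparse files stand in for four drives.
truncate -s 1G /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3

# Build a RAID-Z1 "array" out of them and poke at it.
sudo zpool create playpen raidz /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
zpool status playpen

# Practise failure handling: yank a "drive", bring it back.
sudo zpool offline playpen /tmp/d2
sudo zpool online playpen /tmp/d2

# Tear it all down when you're done.
sudo zpool destroy playpen
rm /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
```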

Twerk from Home
Jan 17, 2009

What sort of chassis has 4+ 2.5" 15mm bays?

http://www.pcworld.com/article/3130234/storage-drives/seagate-drops-the-worlds-largest-tiny-hard-drive.html

PCWorld is reporting that 5TB 2.5" drives are going to be $85, which is suspect. Even if the pricing is a wash, I'd prefer 2.5" drives if there were a case that could make a 4-8 drive home NAS a good bit smaller.

Twerk from Home
Jan 17, 2009


Moey posted:

The DS416slim only takes 12.5mm drives. Rats.

Exactly! Who are these things for? Not laptops, not small enclosures, what's the usage case for a 15mm 2.5" drive?

Twerk from Home
Jan 17, 2009

I used XP Pro x64 Edition for a couple years, and if drivers were available for your hardware it was outright a better OS than XP.

Twerk from Home
Jan 17, 2009

Has anybody set up Ceph storage in their home lab as a proof of concept? It looks pretty appealing, but the minimum scale where it starts to make sense is far larger than home NAS scale, more like 300TB+ clusters.

Twerk from Home
Jan 17, 2009


D. Ebdrup posted:

I thought that was the entire point of installing Plex on a Synology play-line NAS, since Plex lists as a requirement for Synology NASes that they be Intel-based, and at least the 416play features an N3060, which supports Intel Quick Sync Video and on-die transcoding.

Plex doesn't use Quick Sync, as noted. Plex competitor Emby uses Quick Sync to transcode and can thus serve a bunch of transcoded video streams from an Atom.

Twerk from Home
Jan 17, 2009


emocrat posted:

I have, just sitting there, able to be used right now, the following: an i5 2500 CPU with corresponding H67m motherboard (6 SATA ports) and 8GB of RAM, a decent sized case and power supply. I have a 128GB SSD as a system disk, 2 3TB WD Reds, a 1TB HDD, and a few other smaller HDDs that probably don't matter because they are getting too old to consider.

So, things I am considering and my current understanding:

FreeNAS. Seems like a super solid platform that is probably the best I can get in terms of data integrity. I would have to buy basically 100% new hardware, due to the need for ECC. It would also require some planning on the HDDs, as they need to be matched sizes. I am a little worried about my ability to properly configure it, as it seems there are a ton of things to set up and I have no experience at all with FreeBSD or ZFS.
So, suggestions? Right this minute Unraid seems like a really good idea for me, but maybe it has drawbacks I am not aware of. Are there other platforms I should consider? I am willing to spend some cash, although the prospect of an entire new system for ECC plus a set of HDDs seems a bit much. Thanks.

Only being able to add drives in the form of a complete new vdev is the biggest downside of FreeNAS. Even though the community circlejerks over ECC, you sure don't need it. FreeNAS without ECC isn't a time-bomb like alarmist grognards will tell you; it's really just comparable to running any other OS without ECC.

If you've got the budget to add a whole batch of drives at once and don't need to grow over time, I'd recommend FreeNAS over the other options you're considering. Below is some info about FreeNAS on non-ECC RAM.

http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

Twerk from Home
Jan 17, 2009


Mr Shiny Pants posted:

If you go through the trouble of researching ZFS (which tells me you care about your data), why not go whole hog and also buy ECC RAM? A couple of dollars more.

I don't get these discussions.

If you're repurposing an old CPU/MB/RAM into a NAS like this guy is, then you're looking at having to replace all 3 of those components just to get ECC memory. This isn't a new build where it adds $20 to the BOM, but rather the difference between spending $400 and $0.

Twerk from Home
Jan 17, 2009


fnkels posted:

I'm wondering if anybody has any experience with using Windows 10 as a Plex Media Server.

I upgraded my main computer and I want to turn my old one into a dedicated Plex Media Server. Since it already has Windows 10 on it, it seems like a waste to wipe everything and stick OpenMediaVault on it or something. Are there any drawbacks I should be aware of? I'm imagining that I have pretty good flexibility working in the Windows 10 environment.

It works fine; all the normal downsides of using client Windows as a server OS apply (frequent updates causing restarts, needing a Pro license to remote desktop into it). My Plex server runs on a Win10 PC and it's OK.

Twerk from Home
Jan 17, 2009


Platystemon posted:

When I see “ReFS”, I think “MurderFS”.

You're thinking of ReiserFS, ReFS is something entirely different.

Twerk from Home
Jan 17, 2009

I've seen a bunch of love both here and elsewhere recently for ZFS, and fewer people seem to be going with good ol' mdadm / LVM / ext4. What's the big downside of mdadm that I'm missing? Being able to expand arrays one disk at a time seems pretty drat nice.
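The one-disk-at-a-time expansion with mdadm looks roughly like this (array and device names are placeholders for an existing 4-disk RAID5 with ext4 on it):

```shell
# Add the new disk to the array as a spare.
sudo mdadm --add /dev/md0 /dev/sde

# Reshape the array to actively use 5 devices; this triggers a long
# background restripe of all existing data.
sudo mdadm --grow /dev/md0 --raid-devices=5

# Watch the reshape progress.
cat /proc/mdstat

# Once the reshape completes, grow the filesystem into the new space.
sudo resize2fs /dev/md0
```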

Twerk from Home
Jan 17, 2009


Pryor on Fire posted:

I am confused as to why there is so much discussion around Seagate and WD drives in this thread. Why would you buy anything besides HGST right now? Seems like a pretty easy choice.

*edit is it just because the OP is from 2012?

Because they're more expensive, and frequently it's a better strategy to buy more, cheaper drives and accept that you're going to replace 1-3 drives in your RAID-Z2 over time. Keep spares!

Twerk from Home
Jan 17, 2009


DrDork posted:

Unless you have a fairly small NAS (at which point maybe you could get away with a simple mirroring arrangement on another computer you've got), or you really don't care whatsoever about prices, you're going to have to wait a good bit longer than two generations:

In 2013 you could get a 250GB SSD for ~$160, or $0.64/GB.
In 2017 you can get a 960GB SSD for ~$220, or $0.23/GB.

Assuming the same linear progress, in 2021 you should be able to get a 4TB SSD for $0.06/GB, or $230. In the meantime you could probably get a 10TB HDD in 2021 for $100.

Of course, you can always just say "gently caress you, I'm rich" and buy the 4TB SSD's already available...for a cool $1500.

I don't think we can expect the same linear progress anymore. Future HDD capacity improvements look expensive and complex enough that prices won't drop dramatically; it might be more like 40TB enterprise HDDs for $600 using HAMR.

In the consumer space, it looks like there's a crossover coming where SSDs are going to be cheaper per GB than HDDs.

Twerk from Home
Jan 17, 2009


Paul MaudDib posted:

Thanks, this is what I wanted to get at here. Glacier is good in theory but there are some hidden costs to be aware of if you actually need to pull it back. If so - don't do it fast or you will pay like 10-100x as much.

Google Coldline storage seems to be a lot more predictable and less punitive if you need to actually restore a backup. Glacier has so many gotchas that it keeps me away.

Twerk from Home
Jan 17, 2009

Also check out Unraid if you're looking for non-ZFS solutions and want more flexibility.


Twerk from Home
Jan 17, 2009

At the very least, you could probably get $150+ on craigslist for it.
