|
So why are OpenSolaris and RAID-Z not more popular? I had a friend set it up on an old box he had, and it worked on all the hardware he had lying around. I'm considering doing this myself, but I'm wondering how OpenSolaris would run on random cobbled-together hardware. I've heard lots of raving about how good ZFS is, and I want to see it for myself.
|
# ¿ Sep 24, 2009 20:53 |
|
|
^^^^^ Yes, ZFS is for you. ZFS is loving amazing. The box I talk about below was built out of random hardware lying around, and is using both the onboard NIC and an add-in card. My friend brought over his OpenSolaris box so I could dump some data I had backed up for him onto it, and I'm seeing transfer speed slow down as time passes. I also seem to have overloaded it by copying too many things at once, and I lost the network share for a second. I ssh'ed into it and everything looks fine, but transfer speed keeps dropping from the initial 60MB/s writes all the way down to 20MB/s. Is everything OK as long as zpool status returns 0 errors? I don't know much about ZFS. How full should the volume be allowed to run? It's on 4x1TB drives, so it has about 2.67TB of logical space, of which about 800GB is available right now.
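For the fullness question, a rough sketch of the numbers (the ~80% threshold is a commonly cited rule of thumb for when ZFS write performance starts to degrade, not an official limit):

```python
# Rough capacity math for a 4x1TB RAID-Z1 pool.
# Assumption: one drive's worth of space goes to parity.
drives = 4
drive_tb = 1.0                            # marketing terabytes (10**12 bytes)
usable_tb = (drives - 1) * drive_tb       # 3.0 TB before metadata overhead
usable_tib = usable_tb * 1e12 / 2**40     # ~2.73 TiB, close to the 2.67 reported

free_tb = 0.8                             # the ~800GB currently available
used_fraction = 1 - free_tb / usable_tb
print(f"usable: {usable_tib:.2f} TiB, pool is {used_fraction:.0%} full")
# Rule of thumb: keep ZFS pools under ~80% full to avoid write-performance
# falloff as free space fragments.
assert used_fraction < 0.80
```

So at roughly 73% full this pool is still under the rule-of-thumb line, but it doesn't have much headroom left.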
|
# ¿ Oct 2, 2009 04:29 |
|
Methylethylaldehyde posted:The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60MB/sec pretty easily if it's large files on the outer sectors, but as you fragment the files, move smaller files, or move toward the inner tracks, the drives will slow down quite a bit. I know that internal disk-to-disk transfers on my computer will go anywhere from 150MB/sec (SSD to RAID array) all the way down to ~15MB/sec (slow assed disk to RAID array). All of the disks involved were Seagate 7200.12 1TB drives, including the drive I was transferring from. I have two in my machine and the NAS has 4. They've been really fast except for this one time, so I'm hoping it was just an isolated problem. These drives have been fast as all hell for sustained file transfers; it's impressive how quick a 2-platter 1TB drive is.
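A toy model of why position on the platter matters: at constant RPM with zoned bit recording, sustained throughput scales roughly with track radius, so the innermost tracks move data at a fraction of the outer-track rate (the 2:1 radius ratio below is an assumed, illustrative figure, not a measured one):

```python
# Toy model: with zoned bit recording, linear bit density is roughly
# constant across the platter, so at a fixed RPM the sustained MB/s
# scales approximately with the radius of the track being read.
outer_mb_s = 60.0      # throughput seen at the start of the copy
radius_ratio = 0.5     # assume innermost tracks at ~half the outer radius
inner_mb_s = outer_mb_s * radius_ratio
print(f"expected inner-track speed: {inner_mb_s:.0f} MB/s")
```

Geometry alone can halve throughput; fragmentation and small files stack on top of that, which is consistent with a 60MB/s copy sagging toward 20MB/s.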
|
# ¿ Oct 7, 2009 23:31 |
|
G-Prime posted:Maybe somebody here knows this offhand. I'm finding really lacking and almost misleading information all over the place regarding AMD processors and ECC RAM. Anybody know where there's a clear list of which ones support it? I'm considering buying one of the APUs (for stupid reasons, but reasons nonetheless), but wouldn't be opposed to one of the standalone CPUs and getting a GPU if I decide I need it down the road. Either way, I want to have ECC RAM for this build and can't find info on anything but AMD's embedded Kabini APUs having support for ECC; nothing else seems to list it anywhere. If I recall correctly, all AM2 and newer AMD chips can support ECC RAM; it's entirely dependent on the motherboard supporting it. What's the use case here? If an extremely low budget is the issue, the Intel Pentium parts all support ECC RAM, as do some i3s.
|
# ¿ Aug 1, 2014 22:23 |
|
Factory Factory posted:32 GB of RAM isn't enough for more than 32 TB of ZFS storage anyway, is it? You'd want to go with another softRAID. What are people doing instead of RAID-Z anyway? I really like ZFS but want to be able to scale past 32TB as 4TB drives get cheaper.
|
# ¿ Oct 13, 2014 20:22 |
|
Krailor posted:In Windows 8 you can set your OneDrive files to be Online Only. This will automatically delete the local copy once it's been uploaded to OneDrive so it's not taking up space on your local hard drive. I know for regular files it will download a local version only if you open it, and then remove the local version again once you've finished interacting with it. Like this? http://windows.microsoft.com/en-us/windows-8/onedrive-online-available-offline It looks like MS isn't too confident in their OneDrive app: quote:We don't recommend working with online-only files at a command prompt or using Windows PowerShell or other command-line tools. Some actions might result in errors, or even cause file contents to be deleted.
|
# ¿ Oct 28, 2014 19:48 |
|
Thermopyle posted:Versioned backups are the best loving thing. Does Crashplan count a NAS box as "1 computer"? If so, wowza. What a deal.
|
# ¿ Dec 4, 2014 18:33 |
|
How terrible an idea is single-drive parity with 5TB drives? I've read the theory that unrecoverable read error rates are high enough that you'll likely hit one while reading a full 5TB drive, but isn't that just an argument that you can't trust a single high-capacity drive anywhere to hold data? I'm looking to scale up my home NAS from 2 mirrored 5TB drives to a 4- or 5-drive RAID-Z1 array, and I'd really like to avoid losing 2 drives to parity or mirroring.
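Taking the usual 1-per-1e14-bits spec at face value, the odds of a clean full-drive read work out like this (a sketch; real-world URE rates are widely argued to be better than the datasheet number):

```python
import math

# Chance of reading a whole 5TB drive without hitting an unrecoverable
# read error, assuming the common datasheet rate of 1 URE per 1e14 bits.
ure_per_bit = 1e-14
bits = 5e12 * 8                        # 5 TB expressed in bits
expected_errors = bits * ure_per_bit   # mean number of UREs per full read
p_clean = math.exp(-expected_errors)   # Poisson approximation
print(f"expected UREs: {expected_errors:.2f}, P(no URE): {p_clean:.0%}")
```

Even at spec, that's about a 67% chance of a clean full read, so "you will get one" overstates it, but a ~1-in-3 chance per rebuild is still why people push for two-disk redundancy or good backups.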
|
# ¿ Feb 23, 2016 17:07 |
|
My thought was that backups are necessary anyway, and with how insanely fast home internet connections are getting restoring from backup wouldn't be as miserable as it used to be. Also, as you said, the checksumming filesystems should be able to recognize the corruption created by the URE and say "OK, that file is hosed, but everything else is fine".
|
# ¿ Feb 23, 2016 17:57 |
|
Skandranon posted:It may only be able to use a single core for parity calculations, and you are railing that core. If that were true, wouldn't he be at 40%+ CPU usage? It's a dual core.
|
# ¿ Feb 24, 2016 17:47 |
|
ufarn posted:When a Synology is listed as nominally supporting 1080p transcoding, is it 1080p period, or only at some mediocre bitrate? I wouldn't be surprised if it turned out it couldn't do half my videos with decent results. That must be with hardware acceleration, a.k.a. Intel Quick Sync. Quick Sync can chew on 1080p60 video all day long without any trouble, but Plex doesn't support Quick Sync at all, so I hope you are using Emby and not Plex.
|
# ¿ Mar 24, 2016 22:01 |
|
Don Lapre posted:The hardware transcoding does not work with plex and poo poo as far as i know. It works well with Emby, a Plex competitor that's pretty feature-equal.
|
# ¿ Mar 24, 2016 22:36 |
|
necrobobsledder posted:I'm a tad bummed out that it didn't come with 8x DIMM slots though, because functionally it's not all that different from the mini-ITX Xeon Ds out there besides the addition of the SAS controller. Isn't 4 DIMMs a limitation of the Xeon-D platform, so that they don't cannibalize E5 Xeon sales too badly? The bigger Xeon-Ds look pretty drat favorable vs. an E5-2630L v4 or similar. Officially, it looks like all the Xeon-Ds max out at 128GB RAM.
|
# ¿ Apr 27, 2016 16:35 |
|
priznat posted:Any good recommendations for a low power motherboard/cpu combo? I'd like something that is like the appliances from Qnap/Synology where they consume ~35W under load and normally barely anything. Your drives are going to consume more power than your mobo / CPU, and I'd suggest that the simpler option might be to find a B150/H110/H170 motherboard you like and put a Pentium G4400 or similar on it.
|
# ¿ Jul 18, 2016 16:01 |
|
Shumagorath posted:Your "forever" archive should definitely be optical stored in a fire safe (not that Blurays won't just melt but that's why you have an offsite copy). Dual-layer Bluray is still a thing, right? That's 50GB per disc. Why not tape drives? LTO-6 stuff is really widely available now, and for the quantity of backup he's having to do the 6.25 TB compressed capacity on a single tape would take the sting off.
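At face value the arithmetic looks like this (2.5TB native and 6.25TB compressed are LTO-6's advertised figures; the 2.5:1 compression ratio rarely applies to already-compressed media like video):

```python
# How many dual-layer Blu-rays equal one LTO-6 tape, at face value.
bd_dl_gb = 50              # BD-R DL capacity per disc
lto6_native_gb = 2500      # LTO-6 native capacity
lto6_compressed_gb = 6250  # LTO-6 advertised "compressed" capacity
print(f"{lto6_native_gb // bd_dl_gb} discs per tape (native)")
print(f"{lto6_compressed_gb // bd_dl_gb} discs per tape (best case)")
```

One tape replacing 50 to 125 discs is the whole argument: fewer pieces of media to burn, label, and swap.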
|
# ¿ Aug 1, 2016 16:38 |
|
Arsenic Lupin posted:"Forever" means "readable in 50 years", which means you need a format that you are absolutely, positively sure you'll be able to find working readers for. Blu-Ray is a lot more likely than a tape drive. Yeah, tape's a bad call for putting it in a vault. You're right. Will burned blu-rays last 50 years though? I've had burned CDs and DVDs die of old age already!
|
# ¿ Aug 1, 2016 16:46 |
|
Shachi posted:I should preface that "forever" really should mean "for as long as it's my problem" Synology stuff is pretty good. Check out the DS2415+ if you want 12 bays, DS1815+ if you can get away with 8.
|
# ¿ Aug 1, 2016 18:38 |
|
Moey posted:I'm using two 8tb archive drives for media storage and have no complaints. Using them in RAID 1, or JBOD?
|
# ¿ Sep 7, 2016 15:11 |
|
Moey posted:Separate drives with a robocopy script backing up primary to backup. That seems to be the best use case for me currently. I'm very interested in the shingled drives because of the dirt-cheap cost per GB, but I've seen lots of nebulous warnings not to use them in arrays, even RAID 1s. I haven't read any concrete reasons why, other than their abysmal random write performance, and it's got me wondering if using a modern CoW filesystem on them in a RAID-Z2 or something might actually be acceptable.
|
# ¿ Sep 7, 2016 15:23 |
|
Gozinbulx posted:Ubuntu + hardware raid or something else? I'm going to guess ZFS on Ubuntu? It's pretty great on 16.04.
|
# ¿ Sep 19, 2016 21:07 |
|
Gozinbulx posted:So I'm looking at a Xeon E3-1231 v3 build with 16GB ECC RAM and 4x4TB Reds. Should I slap FreeNAS on there and wait for Docker and maybe play with those apps, or do the aforementioned Ubuntu + ZFS? I imagine then that Ubuntu runs Docker? I don't even know if I'll need Docker but it sounds pretty neat. FreeNAS sounds pretty good but I'm open to persuasion as to why I should roll out Ubuntu + ZFS. FreeNAS is not a general-purpose OS. If you need a NAS, it's likely your best option. If you need a general-purpose server that you'll be running arbitrary software on, and you don't mind spending a bit more time to maintain and harden it (or are comfortable having it update itself on a schedule and trusting the Ubuntu package maintainers), then you want Ubuntu. Run FreeNAS if it does what you need.
|
# ¿ Sep 20, 2016 16:50 |
|
necrobobsledder posted:I'd rather have Docker run natively on a Linux host than inside a VM if I had the option. It's clunky enough dealing with Docker on OS X as a developer, messing with the different options to get the Docker client to work remotely through VirtualBox or VMware Fusion (although there's a neat xhyve-based option some random guy wrote). Furthermore, from a production standpoint you already have enough security headaches with Docker, and adding a VM to the mix is more busywork. My understanding is that Docker on FreeNAS is native, not inside a VM.
|
# ¿ Sep 20, 2016 17:05 |
|
7200 RPM drives seem like a losing spot to be in right now. If you want speed, why the hell are you spinning a platter? And if you're after cheap, bulk, energy-efficient storage, 7200 RPM doesn't help on any of those three counts.
|
# ¿ Oct 11, 2016 23:00 |
|
IOwnCalculus posted:The biggest change I'd make to that SA Drivebox build is to find some flavor of used LSI HBA on eBay and use that instead of the Syba Asmedia controller. You might be out another $30 or $40 in total between the card and the SAS->SATA breakout cable, but you get a controller with much better support for things like ZFS. Can you elaborate a bit on this? My understanding was that you're flashing those things into JBOD mode anyway, and that ZFS does all the heavy lifting in software, so the hardware generally doesn't matter. Given that you're not using any hardware RAID or special driver features on either controller, why does the controller matter? Is performance on those cheapo ASMedia controllers just awful or something?
|
# ¿ Oct 13, 2016 20:23 |
|
apropos man posted:I know it's a waste and kind of stupid. I feel like practising setting up RAID1. Today I went into CeX (a kind of junk shop for second-hand phones, DVDs and computer parts here in the UK) and I was gonna throw £15 at a drive to pair with the WD Blue, but they had nothing that looked decent, only a really scratched-looking Samsung 250GB. If you want to play with RAID levels and assembling volumes, you could instead look at learning ZFS. Instead of using physical drives, you can create a vdev backed by a file on disk. You can then assemble and experiment with RAID arrays where the whole array runs from separate files on the same disk. You could also just use VMs, but a RAID 1 of a hard disk and an SSD sounds like a bad time.
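A minimal sketch of the file-backed approach (the sparse backing files are created for real; the zpool command is only printed, since actually creating a pool requires ZFS installed and root, and the paths and pool name here are made up for illustration):

```python
import os

# Create sparse 1 GiB backing files to stand in for physical disks.
os.makedirs("/tmp/zfs-lab", exist_ok=True)
backing = [f"/tmp/zfs-lab/disk{i}.img" for i in range(2)]
for path in backing:
    with open(path, "wb") as f:
        f.truncate(2**30)   # sparse: no disk space actually consumed yet

# The pool itself would be created like this (run as root, ZFS required):
print("zpool create labpool mirror " + " ".join(backing))
# ...and torn down afterwards with: zpool destroy labpool
```

Because the files are sparse, you can practice mirrors, RAID-Z, failing a "disk" (delete or corrupt one file), and resilvering without dedicating any real hardware to it.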
|
# ¿ Oct 14, 2016 19:23 |
|
What sort of chassis has 4+ 2.5" 15mm bays? http://www.pcworld.com/article/3130234/storage-drives/seagate-drops-the-worlds-largest-tiny-hard-drive.html PCWorld is reporting that 5TB 2.5" drives are going to be $85, which is suspect. Even if the pricing is a wash, I'd prefer 2.5" drives if there were a case that could make a 4-8 drive home NAS a good bit smaller.
|
# ¿ Oct 19, 2016 20:41 |
|
Moey posted:The DS416slim only takes 12.5mm drives. Rats. Exactly! Who are these things for? Not laptops, not small enclosures; what's the use case for a 15mm 2.5" drive?
|
# ¿ Oct 19, 2016 21:03 |
|
I used XP Pro x64 Edition for a couple years, and if drivers were available for your hardware it was outright a better OS than XP.
|
# ¿ Oct 21, 2016 15:14 |
|
Has anybody set up Ceph storage in their home lab as a proof of concept? It looks pretty appealing, but the minimum scale where it starts to make sense is far larger than home NAS scale, more like 300TB+ clusters.
|
# ¿ Nov 7, 2016 16:52 |
|
D. Ebdrup posted:I thought that was the entire point of installing Plex on a Synology play-line NAS, since Plex lists as a requirement for Synology NASes that they be Intel-based, and at least the 416play features an N3060, which supports Intel Quick Sync Video and on-die transcoding. Plex doesn't use Quick Sync, as noted. Plex competitor Emby uses Quick Sync to transcode and can thus serve a bunch of transcoded video streams from an Atom.
|
# ¿ Nov 11, 2016 17:44 |
|
emocrat posted:I have, just sitting there, able to be used right now, the following: i5 2500 CPU with corresponding H67m motherboard (6 SATA ports) and 8GB of RAM, a decent-sized case and power supply. I have a 128GB SSD as a system disk, 2 3TB WD Reds, a 1TB HDD, and a few other smaller HDDs that probably don't matter because they're getting too old to consider. Only being able to add drives in the form of a complete new vdev is the biggest downside of FreeNAS. Even though the community circlejerks over ECC, you sure don't need it. FreeNAS without ECC isn't the time-bomb alarmist grognards make it out to be; it's just comparable to running any other OS without ECC. If you've got the budget to add a whole batch of drives at once and don't need to grow over time, I'd recommend FreeNAS over the other options you're considering. Below is some info about FreeNAS on non-ECC RAM. http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
|
# ¿ Nov 14, 2016 17:15 |
|
Mr Shiny Pants posted:If you go trough the trouble of researching ZFS ( which tells me you care about your data ) why not go whole hog and also buy ECC RAM. Couple of dollars more. If you're repurposing an old CPU/MB/RAM into a NAS like this guy is, then you're looking at having to replace all 3 of those components just to get ECC memory. This isn't a new build where it adds $20 to the BOM, but rather the difference between spending $400 and $0.
|
# ¿ Nov 15, 2016 16:35 |
|
fnkels posted:I'm wondering if anybody has any experience with using Windows 10 as a Plex Media Server. It works fine; all the normal downsides of using client Windows as a server OS apply (frequent updates causing restarts, needing a Pro license to Remote Desktop into it). My Plex server runs on a Win10 PC and it's OK.
|
# ¿ Jan 3, 2017 16:56 |
|
Platystemon posted:When I see “ReFS”, I think “MurderFS”. You're thinking of ReiserFS, ReFS is something entirely different.
|
# ¿ Jan 5, 2017 18:32 |
|
I've seen a bunch of love both here and elsewhere recently for ZFS, and fewer people seem to be going with good ol' mdadm / LVM / ext4. What's the big downside of mdadm that I'm missing? Being able to expand arrays one disk at a time seems pretty drat nice.
|
# ¿ Jan 8, 2017 04:27 |
|
Pryor on Fire posted:I am confused as to why there is so much discussion around Seagate and WD drives in this thread. Why would you buy anything besides HGST right now? Seems like a pretty easy choice. Because they're more expensive, and frequently it's a better strategy to buy more cheaper drives and just realize that you're going to replace 1-3 drives in your RAID-Z2 over time. Keep spares!
|
# ¿ Jan 23, 2017 16:37 |
|
DrDork posted:Unless you have a fairly small NAS (at which point maybe you could get away with a simple mirroring arrangement on another computer you've got), or you really don't care whatsoever about prices, you're going to have to wait a good bit longer than two generations: I don't think we can expect the same linear progress anymore. Future HDD capacity improvements look expensive and complex enough that prices won't drop dramatically; it might be more like we see 40TB enterprise HDDs for $600 using HAMR. In the consumer space, it looks like there's a crossover coming where SSDs will be cheaper per GB than HDDs.
|
# ¿ Feb 7, 2017 16:27 |
|
Paul MaudDib posted:Thanks, this is what I wanted to get at here. Glacier is good in theory but there are some hidden costs to be aware of if you actually need to pull it back. If so - don't do it fast or you will pay like 10-100x as much. Google Coldline storage seems to be a lot more predictable and less punitive if you need to actually restore a backup. Glacier has so many gotchas that it keeps me away.
|
# ¿ Feb 8, 2017 17:13 |
|
Also check out Unraid if you're looking for non-ZFS solutions and want more flexibility.
|
# ¿ Feb 14, 2017 23:34 |
|
|
At the very least, you could probably get $150+ on craigslist for it.
|
# ¿ Mar 14, 2017 21:58 |