insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Another pretty happy unRAID experience:

I used an old Intel 845 board with a Pentium 4, 512MB of memory, and a built-in Intel network adapter. A couple of caveats about the install: the new beta of unRAID didn't detect my network at all, but I just needed to edit one very simple config file to give the box an IP address on the correct subnet.
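
For reference, the config file lives on the flash drive (I believe /boot/config/network.cfg); the keys below are from memory, so treat this as a sketch and double-check against your install:
code:
# /boot/config/network.cfg: static IP instead of DHCP
# (key names from memory; verify on your version)
USE_DHCP="no"
IPADDR="192.168.1.50"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"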

The other odd thing with my motherboard is that it would not boot from the USB flash drive using the recommended "syslinux x:", but it did fine with "syslinux -ma x:".

After you boot into it, you get a command-line login. You log in as root with no password :wth: the first time, and then access the machine's interface from another PC on the network. All the controls are through a web browser on a remote machine.

And spartan controls they are, though they're more than enough to get the job done. I was able to get my random junker hard drives assigned (the largest as parity, the others as part of the array) and the array building in about 3 minutes. It's really simple.

I almost didn't try unRAID, because I'd gotten hooked on pooled storage with an LVM/ZFS setup or RAID pooling, but the user shares accomplish the same thing. You can set the individual drives to not be shown as shares to the world, so only your predefined user shares show up on the network. You don't have much granular control over the pool, however; it was a case of Disks 1 and 3 belonging to the "Editing" share, and Disks 2 and 5 belonging to the "Imaging" share. Perhaps you can get a little more modular with partitioning.

It uses ReiserFS 3.6 (political/social issues aside) as the file system under Slackware, which I wouldn't be 100% comfortable with if this weren't a RAID system to start with. I do like the fact that the system can be pulled apart and resurrected on most other Linux systems if needed: no proprietary RAID setup that can't be read any other way, like those SOHO boxes with their own special brand of storage.

There is a user-level security interface for shares, but it's disabled in the "Basic" (freeware) version, which is a shame; then again, you don't really need a demo of the GID/UID controls to know how they'd work if you had them.

After I built my array with 5 old, semi-junker drives ranging from 20GB to 300GB, I damaged it in a few ways to see how it would cope. I yanked out the smallest one, rebuilt, yanked out the largest, rebuilt, yanked out two (couldn't start the array, but the OS tells you what the issue is), hard-powered it off a few times to simulate power outages, then reset things and checked the file system. Everything worked flawlessly, and as long as you only lose one drive at a time, you'll stay up and running.

Read/write speed is much higher than a little US Robotics 8700 NAS box I've set up at the office, but it tops out on my machine at about 40MB/sec, so it's a good deal slower than ZFS on the same hardware. Of course, I haven't wired my house for Ethernet yet, so most things end up flowing over wireless or short, loose runs of CAT-5, and speed isn't a big concern. The USR box would stutter when asked to write new data while reading something medium-sized (800MB); the unRAID box doesn't.

What I liked:

Much easier than ZFS/LVM management, at least for me
Simple interface, simple shares
Works as expected
Drives aren't locked into a proprietary file system or RAID setup
Disasters are easy to fix, and diagnostics are well displayed
User security and user shares (Plus/Pro versions only for security)
Very simple to add or remove parts of the array
The latest beta supports more network cards and hardware than before
Configurable caching and spin-down for all drives/shares in the system
Config files on the flash drive are really, really simple

What I didn't like:

Costs about $75 for the "Pro" version to support up to 16 drives and provide security, and the license is permanently locked to one particular USB flash drive. Choose wisely.
Middle of the road for speed: ZFS is faster, SOHO boxes are slower.
Instructions on the web site are sparse; you'll have to read through the forums to find some answers. The instruction manual is good, but incomplete.
One drive failure is all you can sustain before needing to replace hardware, regardless of the size of your array.

I'm keeping mine as part of my network. I like the setup, I like the simplicity, and it's serious speed overkill when attached to a non-GigE network. I bought the Pro version, and this is what I'm going to slap all of my smaller, older drives in as they get pulled out of current systems. We already have quite a bit of network storage in the house, but this fills a niche for me to recycle older, smaller drives that would otherwise never be used in a system or an array.

I have a feeling that I'll probably do a custom build for the box to make it as power-efficient as possible, though. 16 drives at an average of 10 watts each, plus power supply loss, motherboard, and CPU consumption ... I'm guessing this box would use about 300 watts if fully populated in its current incarnation. I probably won't fill it with really small drives, as I don't feel like paying every month in electricity for an extra 20GB when the same money would buy 500GB or a terabyte.


insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Hoping for a little advice.

I've got a need for a hot-backup, light-production, roaming-folder storage system to replace a QNAP "enterprise" NAS that just bit me with its shortcomings (lost one disk in an 8-disk RAID-10, lost everything with 7 good drives remaining; support was no help at all, and I had to restore from backups).

I have this 24-drive case sitting around, plus eight older 1TB Hitachi enterprise drives and eight new 3TB WD Se drives looking for a home. The Hitachis are three years into their life, so I don't want to use them for production data, just as another pool for onsite backups of the WDs' "important" content.

My goal is to create a ZFS or BTRFS system out of this stuff that replicates some of the functionality of the QNAP, particularly the AD integration: when I add a user to AD, they automatically get a home folder on the storage system with a quota, which I then map to their login as a personal drive that follows them around. Most of the space will be used for nightly/weekly backups of VMs, plus a few HA VMs.
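
On the ZFS side, I'm imagining one dataset per user with a quota, something like this (dataset/user names made up; the AD hook itself would be a script or the distro's own tooling):
code:
# hypothetical per-user home dataset with a quota
zfs create tank/home/jdoe
zfs set quota=10G tank/home/jdoe
# sharesmb works on illumos/ZoL; on FreeBSD you'd add the share in Samba instead
zfs set sharesmb=on tank/home/jdoe
chown jdoe /tank/home/jdoe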

I've fallen a little behind on ZFS/BTRFS. Which should I use at this point? NAS4Free, FreeNAS, or ZFS on Linux set up manually? Recommended motherboard? I've had great luck with SuperMicro over the years, particularly their IPMI implementation for home-grown stuff, so I'd like a good model from them. I really like the idea of ZFS hosting the users' home folders, since it ties directly into the Windows "Previous Versions" functionality (does BTRFS do this? I can't find an answer). Aside from the AD integration, it needs to support NFS thoroughly, and that's really it for requirements. Ideally, I'd like RAID-Z2 or RAID-10 equivalent storage/safety; I'll happily give up some storage space for safety and speed.

I'm planning on 16GB of ECC RAM for the build, and a Xeon E3v2. I can spend about $1000 beyond the drives and chassis on this little project.

Advice?

insularis fucked around with this message at 05:51 on Feb 24, 2014

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

KS posted:

I know this is the home thread, but you're talking about a work system, so my advice is Illumos. I had stability issues with Nas4Free and performance issues with FreeNAS before biting the bullet and going for something Solaris-based. It paid off -- the system is both fast and rock solid. I find the documentation is way better as well, because about 95% of it agrees with the Oracle docs still. Some things, like disabling cache flush on a per-device basis, don't even seem to be implemented on FreeBSD.

This is on a 32x 3TB array with a ZeusRAM and all the goodies. Nas4free runs perfectly fine at home on my little 6 disk setup. Your mileage may vary.

Thanks, that's the sort of advice I was looking for. My shop is too small for me to feel comfortable posting in the SAN megathread, where people have real budgets.

Does Illumos have any sort of front-end GUI to make it easier to manage? I'm pretty comfortable doing things through the command line, but if I moved on, it would be nice to leave something easy to use in the documentation for the next guy. Looks like Napp-It came back; last time I looked, it had become abandonware, and now it's kinda-sorta commercial.

insularis fucked around with this message at 15:00 on Feb 24, 2014

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

priznat posted:

Speaking of off-the-shelf NAS setups, how are QNAPs? For the longest while they seemed to have superior hardware specs to Synology, but their software wasn't as nice looking. Now they've just had a major software refresh and it looks as good as the Synology stuff. Are they pretty even, or is one better for one market than the other (home use vs. SOHO, etc.)?

I'm on a home-built ZFS FreeNAS box these days, but I bought a QNAP 8-bay SMB box about 5 years ago and filled it with Hitachi 1TB drives in a RAID 10 for a small business. It's been regularly updated by QNAP, been through many power outages, and never been powered off for more than 2 hours. It's always come back up just fine, without ever losing data (it's their on-site backup array).

Just last week, it sent me an alert that one of the drives had its first reported reallocated sector ... after 5 years. It gets about 500GB written to it every single weekday.

Just one data point from one dude, but I like that box.

insularis fucked around with this message at 06:35 on Dec 11, 2015

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Well, I had ZFS save my rear end for the first major time at home. I have a 17-month-old toddler, and I set the keyboard down for just a few seconds. The window that had focus was our media center machine's access to the ZFS media share. He deleted the entire TV folder (3TB, about 12,000 files). I didn't even get mildly irritated. I just logged into FreeNAS, reverted to the daily snapshot for that folder, and went on with my day. Glorious.
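
For the curious, the CLI equivalent of what the GUI did is roughly this (dataset and snapshot names made up):
code:
# find the most recent snapshot of the dataset, then roll back to it
zfs list -t snapshot -r tank/media/tv
zfs rollback tank/media/tv@daily-2015-12-10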

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
It's a Windows-exposed CIFS share, and it needs write access for several reasons. Mostly, I have a keyboard-lock macro that seals the keyboard after a combo keystroke or a timeout, but I just forgot this one time.

I have all the delete safeties turned off on Windows for the particular account I was using for a few minutes. The share is protected in several layers from utter destruction: snapshots, weekly ZFS send jobs to a separate pool, one offline copy, etc. And this is only media, nothing terribly irreplaceable.
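
The weekly send job is essentially a one-liner once the first full copy exists (pool and snapshot names made up here):
code:
# initial full copy to the backup pool
zfs snapshot -r tank/media@weekly-1
zfs send -R tank/media@weekly-1 | zfs recv -F backup/media
# each week after: send only the delta since the last snapshot
zfs snapshot -r tank/media@weekly-2
zfs send -R -i @weekly-1 tank/media@weekly-2 | zfs recv -F backup/media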

I do all this stuff so I don't have to worry.

Snapshots were just fantastic for an instantaneous multi-terabyte restore. So, so nice. That's what I was getting at. Restoring from backups usually suuuuuuuuucks. This is like cheating.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Megaman posted:

Since upgrading to FreeNAS 9.3 stable, I'm getting:

Firmware version 15 does not match driver version 16 for /dev/mps0

Should I be concerned about this, and how do I resolve it?

I've never updated mine and have had no issues on two separate machines (they're on firmware versions 16 and 17 respectively, getting warnings about driver version 19).

I've heard there are issues with version 19+ since Avago took over LSI, but that's unconfirmed. I'm just sticking with my current versions.
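
If you want to check where yours stands, the kernel log shows the driver side and LSI's own tool shows the card (sas2flash ships with the LSI utilities; assuming it's available on your platform):
code:
# driver version, from the boot messages
dmesg | grep -i mps
# firmware/BIOS versions on the controller itself
sas2flash -listall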

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

necrobobsledder posted:

Snapshots aren't exactly magic. If you take a snapshot, delete data, and then write a bunch more data, you'll run out of disk as if the data hadn't been deleted.

Still, taking a snapshot before an operation on a large volume of data is probably not a bad idea. It would have saved me from an rsync that decided it was totally cool to not transfer anything, count the files as successful, and delete all the source files.

Yeah, I know. I generally keep ahead on storage so there's quite a bit free at any given time (e.g., I aim for 30% free and add a vdev at that point).
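
A quick way to keep an eye on that, for what it's worth:
code:
# size, allocated, free, and percent-used per pool
zpool list -o name,size,alloc,free,cap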

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

necrobobsledder posted:

The ASRock LGA1150 C226 board I have definitely has IPMI - I've been using it for over two years now. Newer Supermicro boards requiring an extra software license to use IPMI is a huge annoyance, so I'd be careful with those boards. ASRock doesn't seem to have problems like that.

Are you sure that's SuperMicro and not some other vendor? I've deployed 6 new-model boards from them this year, from LGA1151 Xeon E3 to dual E5s, and IPMI has worked the same as ever.

The only licensing I'm aware of is for updating the BIOS directly through IPMI without a CPU installed.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

D. Ebdrup posted:

Interestingly enough, Windows has long had a feature that was previously called Shadow Copies and is now named Previous Versions, which can transparently interface with ZFS snapshots (and btrfs/hammer/refs snapshots? can someone confirm/deny?), allowing you to access old edits of a file. macOS has added something similar with APFS (and with HFS+ and Time Machine, if you go looking for the feature, but probably 99% of people don't know it's there). It could've been so much better if only Apple had actually based Time Machine on ZFS like some early reports seemed to indicate, instead of just doing HFS+ hardlinks and only now getting around to an inferior CoW filesystem with APFS, at least in terms of data integrity.

Yes, the CIFS shares in FreeNAS expose the ZFS snapshots to Windows machines (if you so choose). There's no setup required; it just works. If the Windows machine can see the ZFS share presented, and snapshots are on for that share, Windows will see them. They're read-only to the clients, and as many snapshots as you have are visible (into the thousands, even).
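
Under the hood it's Samba's shadow_copy2 VFS module pointed at the .zfs/snapshot directory. FreeNAS writes the config for you, but hand-rolled it would look roughly like this (the format string has to match your snapshot naming scheme, so this is illustrative only):
code:
[media]
    path = /mnt/tank/media
    vfs objects = shadow_copy2
    shadow: snapdir = .zfs/snapshot
    shadow: sort = desc
    shadow: format = auto-%Y%m%d.%H%M-2w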

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

redeyes posted:

This is actually pretty impressive. Surprising even.

It's incredibly handy. I let the users see their own home folder and department folder snapshots to self-service small restores of files/folders for themselves, and with the master "high-level" snapshots, I can just roll back the entire Windows share file system in case of a CryptoLocker event (and CryptoLocker has absolutely no write access to any level of ZFS snapshots, since they're read-only).

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Paul MaudDib posted:

This is why I'm looking seriously at segmenting my server needs. The NAS runs FreeBSD and is just a NAS, plus a separate beefier server on Ubuntu Server using a Ryzen or something. I get the friendly environment for most of my needs, but the serious-business OS backing it up. Or you could run a hypervisor and have both in the same machine, if directed IO/raw passthrough works properly.

So, um... how in the world is package management/versioning not a solved problem coming from the people who brought you jails? Why not build/run literally all applications with a minimal set of dependencies symlinked into a chroot/jail? At that point package management should basically look a lot like a Bundler file for Ruby.

Seriously, do this. I just migrated all my jail stuff from FreeNAS (by which I mean, started over) to a new power-sipping Xeon D-1541 box with a single NVMe drive, running ESXi with the SSD for the VMs, each of which uses FreeNAS for bulk storage if it needs it. Holy poo poo, it is night and day how much better life is. Stability, updating, configuration, management, performance ... all through the roof compared to jails.

I used to be all about the jails. Now I say gently caress jails. My storage server is now a single-purpose appliance: it just serves storage.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Hughlander posted:

How much did that run you? I'm considering a second box for a Docker host. I want to run Docker on bare metal with the maximum RAM and don't see a good way to do that with my existing FreeNAS setup.

More than I would have liked, but I'm also on 14c/kWh power and I have a heat limitation in my home server area under the stairs ... it has an A/V cabinet dual fan wired in to exhaust, but no active cooling. About $1200 for the Supermicro 1541 barebones server (mini tower, PSU, mobo, embedded 8-core/16-thread chip, dual 10GbE, IPMI), $300 for a 500GB Samsung 960 Pro NVMe, and $200 for 64GB of eBay ECC DDR4. It will take 128GB of ECC RDIMMs.

Yeah, it's a hit, but man, I'm running a Docker VM and 9 standalone VMs off one stupid flash drive, and it is the most responsive server I've ever owned. ESXi boots fully in about a minute, all VMs are up in 3 minutes, and processor usage (including a Plex VM used by family and some friends, a security DVR, and a Sonarr/Radarr/NZBGet server) sits at about 30% average load across a typical day. I wish I'd done it last year.

That chassis is ready for two 2.5" SSDs and 4 hot-swap 3.5" drives, so next year I'm going to fill the bays, add a passthrough HBA to a FreeNAS VM, and zfs send all the actually important pools over to it as a hot backup server.

Room to grow at a great monthly cost is my point. I had an r/homelab-style Dell R710 at one point, but that thing will eat you alive on power, noise, and cooling. I can't even hear the 1541 from 5 feet away.

Edit: a Kill A Watt device reads it at 59 watts at the wall in the current setup at 35% CPU load. That's a single incandescent light bulb powering 8 cores/16 threads and 64GB. What a time to be alive.

insularis fucked around with this message at 05:30 on May 26, 2017

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Volguus posted:

I once bought an IBM rack server from eBay for $50. It arrived, it worked, it was awesome (orders of magnitude faster than the old P4 I'd used till then). But it was a loving helicopter too. During the day you couldn't really hear it, but during the night I could hear it from my bedroom (bedroom on the second floor, server in the basement).
gently caress that poo poo. Those little fans have to spin so fast to cool anything worth a drat that they can indeed rival a helicopter.

Back in 2000, I had a homebuilt dual P3 on a Tyan Tiger mobo in a super-tower EATX kind of case (Coppermine generation, if I remember correctly). Some days it spent 24 hours transcoding DVDs to DivX, and while I did not need a space heater, I did need hearing protection. Ah, college.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Amazon is having a Prime Members only sale on HGST 6TB NAS drives for $190.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

D. Ebdrup posted:

...

UFS, Hammer, and plenty of other filesystems do snapshots - but few of them are atomic and work like ZFS's do, which includes not taking up additional disk space unless you change or delete something in the snapshot (which is, if you ask me, the most impressive part of ZFS snapshots).


Not to mention the automatic export to Windows as read-only Previous Versions, which is a godsend for users and self-service. Snapshots (and of course backups, IDS, etc.) make me go "meh" at ransomware. I can just kill the infected client and roll back 5 minutes on the dataset. My favorite part of ZFS, honestly.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Incessant Excess posted:

Yeesh, I didn't realize that could happen. Anything I can do to protect my NAS from that?

If you're on ZFS, snapshots are read-only to all clients, so you can just roll back to the closest one taken before the event.

In other news, I used MediaTidy for the first time on my NAS collection after many years of mediocre management. It installs on Linux (I ran it in a VM and mounted the shares) and uses ffmpeg and a host of other logical criteria to actually go through your music/movies/TV shows and look for structure issues and actual duplicates. I reclaimed over 400GB of storage by deleting a ton of this.avi vs. this-bettercopy.mkv pairs accumulated over the years (also, Sonarr going nuts grabbing propers or higher-res items). Great little tool.

insularis fucked around with this message at 18:45 on Jan 22, 2018

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Ziploc posted:

One quirk I don't quite understand at the moment.

I have two servers with the following hostnames:

primaryfreenas.local
backupfreenas.local

I have primary making a snapshot every night with a 4am to 5am start window.
I have primary doing a replication task to backup with a 3am to 6am start window.

I seem to be getting these errors periodically. This one came shortly after 3am.

"Replication PrimaryVolume -> backupfreenas.local:BackupVolume failed: Failed: ssh: Could not resolve hostname backupfreenas.local: hostname nor servname provided, or not known"

They're sitting on the same LAN. Everything goes back to normal like 10 minutes later. And when it comes time to do the replication, which typically happens just after the snapshot is done, it completes successfully.

I haven't found much about this while googling. Not sure if it's due to the way I have my start windows set up or what.

To avoid dragging you down into a rabbit hole of DNS, PTR records, and your router to get that fixed, just change the tasks to use each other's direct IP addresses.
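
Or, if you'd rather keep the names, pinning them in /etc/hosts on each box sidesteps name resolution entirely (IPs made up):
code:
# on primaryfreenas, in /etc/hosts
192.168.1.21   backupfreenas.local
# on backupfreenas, in /etc/hosts
192.168.1.20   primaryfreenas.local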

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Volguus posted:

But you should really, really go down the DNS, PTR, and everything-else rabbit hole. It makes everything so much easier when you can refer to things on your network by name. And when we move to IPv6 (you should be on it internally already), a working DNS is pretty much mandatory, as no average human being is going to be able to memorize those IPs.

He's right. There's a saying in the sysadmin sphere: "It's always DNS." Yeah, it very often is.

On the other hand, once you get it truly right, you rarely if ever need to think about it; your stuff just works.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Ezekial posted:

So I bought 8 4TB Barracudas with an LSI MegaRAID (college brokegoon, so no WD Reds). When drives die, can I just replace them with WD Reds even with different speeds? Will it cap the speeds of the other drives, or does the RAID card just handle it all on its own?

Speeds don't have to match for RAID to work, but the array will give you the performance of its slowest drives.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

salted hash browns posted:

Super helpful, thank you! I had no idea Synology offered their own backup service.

While C2 looks great, it appears to require that you keep port 443 open on your Synology device, which is a bummer since I override port 443 so I can use it to redirect to multiple Docker services I have running on the host. B2 looks slightly cheaper for my use case, but it appears the integration has had some stability issues in the past (since it's a third-party provider).

I would love to get off this system that requires me to override 443 on the synology host, but the only solution I can think of would require me to add another middleman into my network which I would rather not. Here is the current setup:
code:
[Synology]
- service1:5000
- service2:5001
- service3:5002
- nginx-letsencrypt:443

(https://service1.myhost.com resolves to the synology box. nginx just listens on 443 and proxies secure requests to service1, service2, and service3)
And the only other alternative I can think of is:
code:
[Synology]
- service1:5000
- service2:5001
- service3:5002

[Other Host (rpi?)]
- nginx-letsencrypt:443

(https://service1.myhost.com resolves to the other host. nginx forwards requests across the network to synology)
I'm worried that adding another hop to all my requests is too inefficient and will be too slow. Plus now I'll be sending non-secure requests over my network. Any suggestions?

Just run a VM and map your data shares into it, run Duplicacy, and push that to B2. No special requirements that way, and you get client-side encryption and multithreaded uploads.

My last bill for 3TB stored was $2.82/mo.
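
Setup is about this involved if you use the Duplicacy CLI (bucket and repository IDs made up; it prompts for your B2 account ID and application key on init):
code:
# run from the directory you want backed up
cd /mnt/shares
duplicacy init -e mynas b2://my-backup-bucket   # -e = client-side encryption
duplicacy backup -threads 4                     # multithreaded upload to B2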

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Takes No Damage posted:

So I've been sitting around with 3 HDs at 95% full for like 2 years, and we all know it's just a matter of time before one junks out and I lose a bunch of poo poo. I do have most of my favorite things backed up to a cloud service, but I'd like to have something more local that I can treat like a giant HD and just FTP all my big files to, and I'd like some redundancy so it could take at least a single drive failure.

Am I right in thinking my best bang-for-buck choice is to just build a tiny PC with a bunch of HD slots and run FreeNAS from a USB/small internal drive? Late last year there were some people throwing around system parts lists, are any of those still current?

In my ideal scenario I'd be able to start with a pair of drives and just casually add more as I need them. I'll also most likely be running my Plex server off of it but that's currently running from my 4yr old Lubuntu desktop so my streaming needs are pretty light as it is. Would there be any issue backing stuff up from both Windows and Linux machines? Recommended/ideal RAID flavor to use?

Does this still sound like a decent solution for me, and if so is there a babby's first NAS parts list anyone can point me to?


What's your budget? I mean, I'd recommend a Fractal R5 case, a Supermicro Xeon D-1541 board (a little older, but plenty for FreeNAS; CPU and mobo integrated), 16-32GB of ECC RAM, and some WD Reds for "cheap," power-efficient, and reliable. You might be able to come in around $1100. Throw in a used, small SSD for your boot drive ... USB boot has screwed me over too many times.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Hughlander posted:

Does anyone use Duplicati on a Linux NAS? How is its resource usage when backing up a large amount of storage (20TB or so), particularly compared to CrashPlan? I need to figure out whether I should switch to CrashPlan Biz for a year or to Google Enterprise with Duplicati. If Duplicati uses significantly fewer resources I'd probably swap to that; otherwise inertia will leave me with CrashPlan.

Not the same, but very similar: I use Duplicacy with its built-in encryption on my home FreeNAS array to back up about 10TB to Backblaze B2, and I couldn't be happier with it. Set it up, make the cron job, and forget about it. It just works. I really like B2 as well; very fair pricing, and my test restores have worked properly.
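
The cron part is nothing fancy; mine amounts to something like this (paths hypothetical):
code:
# nightly at 3am: incremental backup to B2
0 3 * * * cd /mnt/tank && /usr/local/bin/duplicacy backup -threads 4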

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Takes No Damage posted:

No hard budget; this is as much a science experiment and something geeky to do as it is working up effective data backup/archiving. I'm just looking for good value for the cost. Looks like I can get a WD Green SSD for around 40 bux, so that shouldn't be an issue.

Looking at the Supermicro specs, it only shows 6 SATA plugs; if I ever filled out the 8 HD slots in the case, how would I plug them all in? Speaking of which, my PC is already out of SATA power connectors, and I'm wary of those little Molex-to-SATA converters because one shorted out and let out the magic blue smoke on me a few years ago. Is there a recommendation for power supplies with that many built-in SATA plugs?

The usual way is a used IT-mode HBA like the LSI 1068 (I think that's the one) or any basic LSI HBA, plus a couple of SFF-8087 cables that break each SAS port out into 4 SATA data lines. eBay this part; you can get them for $40.

As for power connectors, get an Antec or Seasonic power supply with modular plugs (they provide a bunch in the package), and you can use as much SATA power as needed.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Anyone have any real-world experience with ZFS record sizes? I'm considering moving my media folders to a 1MB record size, but since it would take shuffling 20TB around for the change to take effect on existing files, I thought I'd ask whether it makes much difference before rsyncing for two days.
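
The change itself is trivial; the catch is that recordsize only applies to blocks written after you set it, hence the planned rsync shuffle:
code:
# new writes use 1MB records; existing files keep their old record size
zfs set recordsize=1M tank/media
zfs get recordsize tank/media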

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

IOwnCalculus posted:

Oh and because I just noticed the LSI 1068 mentioned - don't get that card. Those early-gen SAS cards can't handle >2TB drives.

poo poo, must have been that IBM 1088 I was thinking of.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Hed posted:

I like this build idea but I'm having trouble with the specific mobo -- are you thinking of something like this one? https://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-F.cfm

That's a great board; it's what I use for my home virtualization server. Absolutely sips power, and it has 8 cores/16 threads.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
A fun ZFS moment: NZBGet had a beta build with a bug that created monstrous, drive-consuming log files. My log was going to my array with lz4 compression turned on ... a 1.1TB log file was 28GB on disk when I found it and deleted it.
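
You can see the effect directly on the dataset if you're curious (dataset name made up):
code:
# compression algorithm and achieved ratio
zfs get compression,compressratio tank/apps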

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

sockpuppetclock posted:

This looks like the thread for personal backups... CrashPlan for Home is dying sooner rather than later, and I need an online backup that can back up multiple computers with unlimited versioning. I'm mostly just wondering if there's something I'm missing that'd make me not want to continue on to CrashPlan for Small Business?

Duplicacy and Backblaze B2 are what I use. Encrypted locally, versioning rules, unlimited clients to your B2 bucket. I have about 1TB of my most important stuff (no movies/TV) in B2, and my last monthly bill was under three bucks.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

apropos man posted:

I've had two WD reds in a ZFS mirror for quite a while now. I do a monthly scrub and a monthly SMART test of each drive via cron.

The drives have done 13,000 hours and 18,500 hours runtime.

Would it be a good idea to buy a spare, deactivate the pool, and rebuild it with the new drive in? I'd run the new drive for a few months so I know it's good, and then put it aside for when a failure occurs.

Or is this just adding extra stress to the current drives and a silly idea?

Why not save yourself the hassle and just add the new drive as a hot spare? When one of the existing drives fails, it takes over; you detach the dead one from your pool at your leisure, and that's that. If you go with larger drives as hot spares, then once a matching pair has been replaced over time, you can autoexpand the pool and gain space.
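
In ZFS terms that's about three commands over the life of the drives (device names made up):
code:
# add the new drive to the pool as a hot spare
zpool add tank spare ada3
# let vdevs grow automatically once every member has been replaced with a larger drive
zpool set autoexpand=on tank
# or expand onto the new capacity by hand after a replacement
zpool online -e tank ada3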

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe
Just a friendly tip for the FreeNAS users.

Script an mtime-based removal of your .recycle folder contents if you export those over SMB (I set mine to 7 days, as snapshots overlap the functionality). Yay for reclaiming 3TB of space.

My array has been running for over 2.5 years and I never looked into how those folders had grown ... I think I assumed they would automatically be cleaned after x days. Nope, it's up to you to set that up manually with a script.
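
The script itself is just a find one-liner on a cron schedule; something like this (share path made up, and test with -print before you trust -delete):
code:
# nightly at 4am: purge .recycle contents older than 7 days
0 4 * * * find /mnt/tank/share/.recycle -type f -mtime +7 -delete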

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

nerox posted:

I understood some of these words.

He rents a cheap server close to where he lives, connected to his house by VPN. The outside world talks to the rented server, which means it's also talking to his house, because of the VPN. This gets around some of his ISP's limitations on direct connectivity from the outside world to the house.

Also, his internet connection is very fast; your results may vary.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

redeyes posted:

Guitar picks work great.

Can't quote this hard enough. I bought a 100-pack of various-size guitar picks from Amazon for like $8, and they're great as thin, strong casing wedges.

If you cut them into isosceles triangles, they're also perfect for cleaning out charging ports and the like: something strong enough to do the job, but totally non-conductive and a bit flexible.

I think I've used up 20 out of that 100 pack in two years.


insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

D. Ebdrup posted:

df -h reports the wrong values, as all forks of OpenZFS default to lz4 compression.

Also, I've had pools hit 100% capacity with the kernel telling me "No, seriously, the disks are full, mate" or something to that effect - and when I got rid of the data that wasn't supposed to have been written, pool speed shot right back up again.

Just a quick note on this: there's a flag you can use on du to get the apparent (uncompressed) size of a directory. On Ubuntu, --apparent-size will show the raw data size. I usually run du -hs --apparent-size /someFolder followed by du -hs /someFolder for a quick comparison.
