power crystals
Jun 6, 2007

Who wants a belly rub??

Eletriarnation posted:

I probably wouldn't bother with an SSD NAS before having 10GbE end to end personally unless I had some really specific reason to.

SSDs are nice as a working area if you have apps that do I/O intensive stuff, but you're probably not installing any of those either on this thing with those specs.


mobby_6kl
Aug 9, 2009

by Fluffdaddy

IOwnCalculus posted:

They aren't designing jack and/or poo poo at a system level, that is literally using the same AliExpress motherboard linked further up the page.

I don't disagree with your sentiment on the Kickstarter being sketchy as gently caress, though, since at the prices they're offering you're buying the motherboard and getting everything else for free.


lol you're right it's exactly the same boards. Which would make the NAS a pretty good deal at that price, assuming it's not a scam and it works

freeasinbeer
Mar 26, 2015

by Fluffdaddy

mobby_6kl posted:



lol you're right it's exactly the same boards. Which would make the NAS a pretty good deal at that price, assuming it's not a scam and it works

They are adding the u.2 port and the pcie “switch”; the upgraded AMD stuff is probably based on:

https://www.aliexpress.us/item/3256...id=Ic3L6Ji5oZzS

Plenty of reviews on servethehome for the n5105/n6005 boards as routers, haven’t heard of any reliability issues.

And I have 3 of them from different vendors.

Edit; as an aside, the beta version of frigate now can use intel GPU’s for object detection acceleration and works really well on those intel 5105/6005 boards, don’t need coral anymore.
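For anyone curious, enabling that on one of these boards is just a detector entry in Frigate's config - this is a sketch from memory of the beta docs (detector name is arbitrary, and the exact model settings have moved around between versions, so check the Frigate documentation):

```yaml
detectors:
  ov:
    type: openvino   # uses the Intel iGPU via OpenVINO instead of a Coral
    device: GPU
```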

freeasinbeer fucked around with this message at 17:23 on Feb 8, 2023

wibble
May 20, 2001
Meep meep
Thanks folks, that's a number of red flags. I think I'll save my money and leave well alone for now. Maybe in 18 months, if they get something out and it reviews well.

Vaporware
May 22, 2004

Still not here yet.
I'm trying to spin up a RockPro64 as a NAS, but getting stuck on the 2-bay design. I have to assume the best way to approach this is a mirrored drive setup? Just get 2 of the largest 3.5" WD Red Plus drives I can afford and fire away?

I knew this going in, as the board only has 1 pcie card slot and 2 bays in the NAS enclosure, but my design idea wasn't a high power server, just a reliable appliance like synology except I could repurpose it later, and decide if it's a neat raspberry pi alternative.

Mostly I'm looking to move all my cloud photo services to home backup so I can cancel if I want without them holding years of family pics hostage. I'll figure that out once the hardware works. It'd be nice to have a local SMB too, but I figure that is the easiest part of this whole adventure.

Klyith
Aug 3, 2007

GBS Pledge Week

Vaporware posted:

I'm trying to spin up a RockPro64 as a NAS, but getting stuck on the 2-bay design. I have to assume the best way to approach this is a mirrored drive setup? Just get 2 of the largest 3.5" WD Red Plus drives I can afford and fire away?

I knew this going in, as the board only has 1 pcie card slot and 2 bays in the NAS enclosure, but my design idea wasn't a high power server, just a reliable appliance like synology except I could repurpose it later, and decide if it's a neat raspberry pi alternative.

Mostly I'm looking to move all my cloud photo services to home backup so I can cancel if I want without them holding years of family pics hostage. I'll figure that out once the hardware works. It'd be nice to have a local SMB too, but I figure that is the easiest part of this whole adventure.

So like, are you installing Debian or Ubuntu or something and doing this from the ground up? You can't run TrueNAS or Unraid on a Pi-clone. Googling for "RockPro64 NAS" it looks like you're buying a box to install hardware in. The rockpro people don't have a NAS OS, this is not a complete package.

This bit especially:

Vaporware posted:

just a reliable appliance like synology

Using a Pi as a NAS is generally not that. The RockPro might have some better options than relying on SD cards like a Pi, especially with a PCIe-SATA controller. But that's gonna depend on your knowledge.


The hardware is not the hard part of this project.

Vaporware
May 22, 2004

Still not here yet.
Yeah, planning to do Debian if it takes. I'm not quite as fond of Ubuntu, but if it works better on the hw I will do it. Problem is there seem to be some compatibility problems with Debian on the RockPro64, and it's not clear if the latest releases got better. Either it did or everyone gave up trying to fix it.

I know the software is the limitation but I'm using it to learn more Linux. I want to get better at managing my home network stuff, so reliable in my context means it will last 5+ years and reasonably easy to manage. Less that it's plug and play.

I'd love to learn all the wireguard/OpenVPN containers, set up pi-hole or other adblockers, and my edgerouter lite gateway is getting long in the tooth. it would be nice to take the DNS and DHCP roles away from it to have a better management UI.

Additionally, these boards are available so rather than wait for something easier (RPI with more than 2gb) I'll start here so I can start now.

I can toss a couple ancient 500gb sata drives in or a 128gb m2 drive and just start the SW install before spending any $$ if you're saying this isn't a good platform?

Klyith
Aug 3, 2007

GBS Pledge Week

Vaporware posted:

I can toss a couple ancient 500gb sata drives in or a 128gb m2 drive and just start the SW install before spending any $$ if you're saying this isn't a good platform?

It seems like an ok platform for the job you want -- the low-end synology boxes are just a similar ARM cpu with 2 or 4 gigs of memory. But it's the software that makes the commercial box an appliance, and this won't be an appliance.

A thing you'll have to figure out is whether the RockPro will boot directly from a drive attached to the PCIe-SATA controller, with no SD card. So yeah, I would definitely experiment with a pair of old drives before dropping the big money on large NAS drives. A dry run is always a great idea.

If it does boot from the HDDs, you are saved a world of pain and this will be only a medium-hard project. The normal problem with a Pi-NAS is the SD card: keeping your OS and more importantly any configuration on a SD card is terrible. If you can avoid them entirely that's great.

If not, you would want to use the SD card for /boot only and put the rest of the OS on the HDDs. Totally possible but that means jumping directly to advanced linux installation.
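That split looks roughly like this in fstab - a sketch only, since the actual device names on a RockPro64 depend on how the board enumerates the SD slot and the PCIe-SATA card:

```shell
# /etc/fstab - SD card holds only /boot, everything else lives on the SATA drives
# (device names below are assumptions; check lsblk on the real board)
/dev/mmcblk0p1  /boot  ext4  defaults          0 2
/dev/sda2       /      ext4  defaults,noatime  0 1
```

The upside is the SD card is only read at boot and almost never written, which sidesteps the wear problem entirely.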


As far as your storage drives, I would run them in a btrfs mirror. That gets you both redundancy and snapshots. ZFS has more overhead that I think would not be friendly to an SBC with only 2 or 4 GB of memory, and mdadm sucks.
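Setting that mirror up is only a couple of commands - device names and mountpoint here are placeholders, so treat this as a sketch, not a recipe:

```shell
# Two-disk btrfs mirror: both data (-d) and metadata (-m) as raid1
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt/pool     # either member device mounts the whole volume

# Snapshots come with subvolumes:
btrfs subvolume create /mnt/pool/data
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-$(date +%F)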

Also I saw this warning while googling about a RockPro NAS:

quote:

With the exception of HDDs/SSDs, everything you need for a complete build can be purchased from the PINE store:
A ROCKPro64 2GB or 4GB board
A 12V 5A power supply
A PCIe to dual SATA adapter: WARNING, the SATA adapter from PINE64 store is low-quality and will not function well with two SATA HDD.
hopefully you saw that too, lmao


Vaporware posted:

I know the software is the limitation but I'm using it to learn more Linux.

I went full-time linux about a year ago and I've definitely learned a lot! This cartoon is extremely accurate.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Apologies if this is something you've already seen but it looks like others have gotten ZFS working on the Rock Pro 64: https://jasoncarloscox.com/blog/zfs-on-rockpro64/

I think ZFS for a couple drives would probably be fine from a performance perspective - we're not talking about a Pi Zero here, it's probably faster than the Pi 4 for a lot of purposes and probably can at least compare to the 2010 Xeon that I'm currently using for two pools with eight drives total. From what I've heard, the place where you will run into issues with ZFS on ARM is trying to do it on the older and much slower 32-bit platforms that are still out there. If you're just trying things anyway at this stage, I would be inclined to set it up and see if it works OK.

Eletriarnation fucked around with this message at 17:45 on Feb 10, 2023

wolrah
May 8, 2006
what?
Someone looking for a "reliable appliance like synology" probably would want to avoid any options that require tinkering with DKMS internals to get working in the first place.

That said in this case that seems to be specific to some weird third party fork of Debian and doesn't apply to a more mainstream distro.

Klyith
Aug 3, 2007

GBS Pledge Week
Yeah btrfs is gonna be vastly easier for someone doing a "linux learning project", since it's a first-class citizen on linux. Depending on what distro and what install options, it may well be the out-of-the-box default.

And for a simple mirror setup does ZFS have super-compelling advantages?

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?
On the other hand btrfs doesn't have any advantages outside of being "easier" to get started with on Linux. Yeah, sure, you've got btrfs in the kernel (unless you're on a recent RHEL distro), but what else? On Debian, for instance, ZFS is in contrib and getting your system ready for it is as easy as running "apt install zfs-dkms zfsutils-linux".
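And once the module is built, a mirror is one command - pool and disk names below are made up, and in practice you'd want /dev/disk/by-id paths rather than sdX names:

```shell
# Debian: enable contrib in sources.list, then
apt install zfs-dkms zfsutils-linux

# Simple two-disk mirrored pool
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/data
zpool status tank
```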

Yaoi Gagarin
Feb 20, 2014

Klyith posted:

Yeah btrfs is gonna be vastly easier for someone doing a "linux learning project", since it's a first-class citizen on linux. Depending on what distro and what install options, it may well be the out-of-the-box default.

And for a simple mirror setup does ZFS have super-compelling advantages?

As compared to btrfs, probably not. Compared to something like md it does

Aware
Nov 18, 2003
I've got a few of those style boards/boxes as we use them for work. One thing I've noticed is that while they do pack a punch for the form factor, they get incredibly hot under sustained load and get very flaky if you don't cool them adequately. Otherwise the ones I have from Protectli seem to be the most stable of the lot. I've also tested units from Protectli, Lanner, Axiomtek and Dell.

My favourite out of all of them so far are the Lanner 1516 series and I've deployed hundreds of them. We go with the 16 core versions loaded with 64gb ram and a 1tb ssd running our in-house software loaded with various VMs and containers for our customer edge services.

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

Yeah btrfs is gonna be vastly easier for someone doing a "linux learning project", since it's a first-class citizen on linux. Depending on what distro and what install options, it may well be the out-of-the-box default.

And for a simple mirror setup does ZFS have super-compelling advantages?
Well, yes. ZFS' self-healing works consistently, instead of requiring you to jump through hoops (that've only been well-documented by Jim Salter and nobody else, apparently).
It involves (re-/un-and-)mounting it with -o degraded, then manually doing btrfs balance - and more importantly, doing this in the right order, since if you get it wrong or forget to remove the degraded option, you can get a corrupt filesystem (all of it, not just the affected file) the next time a URE happens.
And heaven help you if you're booting from BTRFS, because then you'll be trying to mount it as -o degraded from an environment without any documentation, so you need a second device (and gently caress you if you're somewhere without a signal, which isn't uncommon for datacenters).
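For the curious, the dance looks roughly like this - an untested sketch with placeholder device names and devid, and as noted above the order matters, so read the btrfs docs before trying it on real data:

```shell
mount -o degraded /dev/sda /mnt/pool       # mount with the dead member missing
btrfs replace start 2 /dev/sdc /mnt/pool   # replace devid 2 with the new disk
btrfs balance start /mnt/pool              # rebalance onto the new member
umount /mnt/pool
mount /dev/sda /mnt/pool                   # remount WITHOUT -o degraded
```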

Heck, even hardware RAID is more sane with regard to how it handles degraded but not destroyed volumes - and that's saying something coming from me, given how thoroughly I despise hardware RAID.

BlankSystemDaemon fucked around with this message at 21:35 on Feb 10, 2023

Vaporware
May 22, 2004

Still not here yet.
i'm going to look into the btrfs vs ZFS a bit more. I actually got the eMMC addon for booting since I know the SD card wear issue is a big deal for these kinds of board, especially with any sort of logging enabled. so easy is a big plus, but I really don't remember much about the filesystem stuff I learned a decade ago.

luckily I didn't trust their sata adapter from the get go (I thought I had one but was wrong, lol) so I ended up getting the startech one. idk if it will work but EVEN THEN I'm not putting in an order again with pine unless I'm getting another SBC.

Aware posted:

My favourite out of all of them so far are the Lanner 1516 series and I've deployed hundreds of them. We go with the 16 core versions loaded with 64gb ram and a 1tb ssd running our in-house software loaded with various VMs and containers for our customer edge services.

unf that's a nice box. I wonder if I could convince my office to use those for dev boxes? What's the price around? you can PM me a ballpark if you don't want to put it up in a post.

Klyith posted:

I went full-time linux about a year ago and I've definitely learned a lot! This cartoon is extremely accurate.

I knew I should have taken it seriously about 8 or so years ago, but now I have to learn it for work so... project time

Eletriarnation posted:

Apologies if this is something you've already seen but it looks like others have gotten ZFS working on the Rock Pro 64: https://jasoncarloscox.com/blog/zfs-on-rockpro64/

I've looked a little but I haven't found this one in particular. I was starting with the pine64 discord and forums but I keep getting distracted so thanks!
edit lol ubuntu just works on 20.04 which I'm running on a HP laptop so it really looks like debian might be just a painful exercise to keep my old college preferences alive

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
This ArsTechnica article was pretty scathing about btrfs
https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/

BlankSystemDaemon
Mar 13, 2009



Vaporware posted:

i'm going to look into the btrfs vs ZFS a bit more. I actually got the eMMC addon for booting since I know the SD card wear issue is a big deal for these kinds of board, especially with any sort of logging enabled. so easy is a big plus, but I really don't remember much about the filesystem stuff I learned a decade ago.
The most important thing to know about BTRFS is that the company that's paying for its development, Facebook, uses it for an architecturally transient form of load-balancing (ie. spinning up guests on a KVM/ESX/Xen cluster), by instantiating the data from their proprietary Tectonic filesystem (which replaced a Hadoop cluster).
If anything about these guests fails, they're quietly taken out back and shot in the back of the head - so they're not relying on data integrity or data availability of BTRFS to any degree.

Other companies that've based products off BTRFS had to resort to a considerable amount of hacking; as an example, Synology have BTRFS devices running on top of a single-device mdadm RAID and they then rely on a custom-written kernel module to be able to have BTRFS tell the mdadm RAID which block failed.
That's not in and of itself a bad thing, but the problem is that with all of this complexity comes the nightmare of trying to rescue data - if a Synology box fails, you can't just pop the disks out and insert them into a Linux box, and it's not like anyone who isn't a Synology user can replicate it, because the kernel module isn't available and is probably proprietary.

All of that being said, neither ZFS or BTRFS will keep you from needing backups - and if you don't follow the 3-2-1 rule (3 copies of the data, two types of storage, one of them offsite and offline), then the data doesn't exist. There's also a corollary to that rule, which is that if you don't test your backups regularly (ie. that you can restore them, preferably via some method that's programmatic or semi-automated) then you won't know if they work until you find out that they don't.
ZFS goes to quite a few lengths to make both the 3-2-1 rule and its corollary easier to put into practice, because zfs send|receive makes it wonderfully easy to do backups (and test backups), but ultimately it'll depend on the recovery point objective and recovery time objective of your data (ie. how much time can pass before the data loss tolerance is exceeded, and how long is the maximum time it can take to restore from backup).
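The send|receive workflow is about this simple - dataset, snapshot and host names here are placeholders, shown as a sketch of the pattern:

```shell
# Snapshot and ship it to a backup box
zfs snapshot tank/data@2023-02-10
zfs send tank/data@2023-02-10 | ssh backupbox zfs receive backup/data

# Later incrementals only ship the delta between snapshots
zfs send -i tank/data@2023-02-10 tank/data@2023-02-11 | \
    ssh backupbox zfs receive backup/data
```

Actually restoring (or at least mounting and reading) from the receiving side now and then is the part most people skip, and it's the part that makes the corollary hold.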

BlankSystemDaemon fucked around with this message at 23:18 on Feb 10, 2023

Aware
Nov 18, 2003

Vaporware posted:

i'm going to look into the btrfs vs ZFS a bit more. I actually got the eMMC addon for booting since I know the SD card wear issue is a big deal for these kinds of board, especially with any sort of logging enabled. so easy is a big plus, but I really don't remember much about the filesystem stuff I learned a decade ago.

luckily I didn't trust their sata adapter from the get go (I thought I had one but was wrong, lol) so I ended up getting the startech one. idk if it will work but EVEN THEN I'm not putting in an order again with pine unless I'm getting another SBC.

unf that's a nice box. I wonder if I could convince my office to use those for dev boxes? What's the price around? you can PM me a ballpark if you don't want to put it up in a post.

I knew I should have taken it seriously about 8 or so years ago, but now I have to learn it for work so... project time

I've looked a little but I haven't found this one in particular. I was starting with the pine64 discord and forums but I keep getting distracted so thanks!
edit lol ubuntu just works on 20.04 which I'm running on a HP laptop so it really looks like debian might be just a painful exercise to keep my old college preferences alive

They're all in the $1500-2500+ range depending on specs and volume, really, for the platform. I really like the Dell VEPs too, but the Lanners support dual PSU input, have a proper RJ45 console vs microUSB, AND they fit a standard 1RU form factor, so that puts them ahead if you don't just deploy them as desktop boxes.

Yaoi Gagarin
Feb 20, 2014


Lol that's worse than I thought. That's such terrible UX

Computer viking
May 30, 2011
Now with less breakage.

I've had to recover a btrfs mirror fairly recently, and it is exactly that weird. Coming from ZFS I kept asking if I was missing something obvious - this is a recommended, mainstream option with years and years of development hours; surely there must be a better way to do this now?

As far as I can tell the answer is just "no". I think I had to boot from a live image to patch together a simple mirror after one of the drives had some issues.

Klyith
Aug 3, 2007

GBS Pledge Week
My perspective on btrfs was that I got scared away from using it due to BSD's constant hate, and really wish I had set up my system using it because a whole lot of linux tools now do integrate with it. Now that I've partially migrated over to using it, I think he's a ninny with a serious case of fanboyism for his chosen platform.

The recovery process definitely doesn't seem fun, I haven't had to deal with that so I can't really tell how much pain it would really be. In my use of it so far I've carefully read docs rather than just tried whatever, and not had any problems. (At one point I did read a linux dev response to the thing about not mounting a degraded volume without override, which was basically "yes, that's by design". If your volume is degraded it should require an active choice to mount it because you're risking potential data loss.)


BlankSystemDaemon posted:

Other companies that've based products off BTRFS had to resort to a considerable amount of hacking; as an example, Synology have BTRFS devices running on top of a single-device mdadm RAID and they then rely on a custom-written kernel module to be able to have BTRFS tell the mdadm RAID which block failed.
That's not in and of itself a bad thing, but the problem is that with all of this complexity comes the nightmare of trying to rescue data - if a Synology box fails, you can't just pop the disks out and insert them into a Linux box, and it's not like anyone who isn't a Synology user can replicate it, because the kernel module isn't available and is probably proprietary.

What does that have to do with anything? If synology was using ZFS in some weird way that was incompatible with everything else, would that be a strike against ZFS?

Most linux distros are now using btrfs as the default root FS. This whole "btrfs is unreliable" needs some serious cite your sources. Particularly in the context of a basic 2-drive mirror set. Btrfs being unsuitable for datacenters or whatever else is neither here nor there.

BlankSystemDaemon posted:

And heaven help you if you're booting from BTRFS, because then you'll be trying to mount it as -o degraded from an environment without any documentation, so you need a second device (and gently caress you if you're somewhere without a signal, which isn't uncommon for datacenters).

Ok, but we weren't talking about a datacenter. We're talking about a box in a house where there are other computers with internet and things like USB sticks that can easily have liveboot iso loaded...

BlankSystemDaemon posted:

ZFS goes to quite a few lengths to make both the 3-2-1 rule and its corollary easier to put into practice, because zfs send|receive makes it wonderfully easy to do backups (and test backups)

btrfs stole that from ZFS, I do it on the regular
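Same shape as the ZFS version - paths are placeholders, and note btrfs send wants a read-only snapshot:

```shell
# Send a read-only snapshot to another btrfs filesystem
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data@today
btrfs send /mnt/pool/data@today | btrfs receive /mnt/backup/

# Incremental against the previous snapshot with -p
btrfs send -p /mnt/pool/data@today /mnt/pool/data@tomorrow | btrfs receive /mnt/backup/
```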

FAT32 SHAMER
Aug 16, 2012



Er wait I selected btrfs when I was setting up my synology… should I have selected ext4 instead?

SpartanIvy
May 18, 2007
Hair Elf
This might be a dumb question, but how can I force Plex to transcode to stress test my server's transcoding ability? I've started multiple streams and they're all Direct Stream or Direct Play.

Klyith
Aug 3, 2007

GBS Pledge Week

FAT32 SHAMER posted:

Er wait I selected btrfs when I was setting up my synology… should I have selected exts4 instead?

Regardless of any opinions about ZFS vs btrfs, picking btrfs over ext4 is the better choice for a synology NAS. It gives you snapshots and several other features.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

SpartanIvy posted:

This might be a dumb question, but how can I force Plex to transcode to stress test my servers transcoding ability? I've started multiple streams and they're all Direct Stream or Direct Plany.

On the device, select a quality that doesn't match the source quality. For instance if the video is 1080p, choose 720 or 480 or something.

FAT32 SHAMER
Aug 16, 2012



Klyith posted:

Regardless of any opinions about ZFS vs btrfs, picking btrfs over ext4 is the better choice for a synology NAS. It gives you snapshots and several other features.

Ok hallelujah

Thanks!

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

My perspective on btrfs was that I got scared away from using it due to BSD's constant hate, and really wish I had set up my system using it because a whole lot of linux tools now do integrate with it. Now that I've partially migrated over to using it, I think he's a ninny with a serious case of fanboyism for his chosen platform.
I have very firm opinions on things, sure - because it's hard to trust something again when it's already eaten your data once.
It's not like it's completely unfounded given that BTRFS has been known to claim that it had data integrity before anyone tested it, then quietly backed down on that claim when it was found that their RAID5 and RAID6 calculations weren't implemented properly (no, I don't consider editing a wiki to be an announcement, that stuff goes out on an announcement mailing list).

Also, that integration you speak of isn't finished, because there's still no support for boot environments, which is the best reason to use any atomically transactional copy-on-write filesystem as a root filesystem.
For ZFS, they've existed since 2010 on Solaris/Illumos, been an option (and recommendation) since 2012, and have been in base since 2018.

Klyith posted:

The recovery process definitely doesn't seem fun, I haven't had to deal with that so I can't really tell how much pain it would really be. In my use of it so far I've carefully read docs rather than just tried whatever, and not had any problems. (At one point I did read a linux dev response to the thing about not mounting a degraded volume without override, which was basically "yes, that's by design". If your volume is degraded it should require an active choice to mount it because you're risking potential data loss.)
One of the very first things I did when I started using ZFS in 2007 was using 64MB GEOM gates to put ZFS on top of, in order to test every single one of its data integrity claims, including dd'ing over parts of the files backing the GEOM gates to check its ability to self-heal and identify corruption when it gave up.
I did this, because half a decade as a storage admin taught me to not assume that I knew what to do when a datacenter was metaphorically (or in one case, demonstrably) burning - so I don't comprehend how you can say BTRFS is good, when you've not tried the worst parts of it.
Aren't you just making it an article of faith? I dunno about you, but Information Televangelism doesn't seem like a good idea.

Klyith posted:

What does that have to do with anything? If synology was using ZFS in some weird way that was incompatible with everything else, would that be a strike against ZFS?
My point was that Synology had to hack around some of the design decisions of BTRFS in order to make a product out of it, and that they wouldn't have had to if it was ZFS; case in point, QNAP, who integrate ZFS into some of their products, don't do any hacking to it.

Klyith posted:

Most linux distros are now using btrfs as the default root FS. This whole "btrfs is unreliable" needs some serious cite your sources. Particularly in the context of a basic 2-drive mirror set. Btrfs being unsuitable for datacenters or whatever else is neither here nor there.
See previous comments about the absolute nonsense that is trying to stand a system back up, especially given the context of the answer Computer Viking just gave above you vis-à-vis needing to boot a live image to fix things.

Klyith posted:

Ok, but we weren't talking about a datacenter. We're talking about a box in a house where there are other computers with internet and things like USB sticks that can easily have liveboot iso loaded...
We're never going to agree on this - if you need a second device for documentation on the intricacies of bringing up a system, there's something fundamental that's gone wrong.
I'm not superstitious, but that doesn't mean I can't try to outguess Murphy and reduce a complexity matrix.

Klyith posted:

btrfs stole that from ZFS, I do it on the regular
Best decision they ever made.

FAT32 SHAMER posted:

Ok hallelujah

Thanks!
Never don't back up your important data.

EDIT: Sorry, I clicked post, but I want to reply to this specifically:

Klyith posted:

(At one point I did read a linux dev response to the thing about not mounting a degraded volume without override, which was basically "yes, that's by design". If your volume is degraded it should require an active choice to mount it because you're risking potential data loss.)
Being unable to boot to your root filesystem because it can't gracefully handle a degraded volume - despite there still being sufficient data availability, as well as checksumming for data integrity, to ensure that the data availability is automatically brought up to the expected configuration - is utter loving nonsense.
Hardware RAID controllers have been doing it since the 90s, Linux mdadm RAID has been doing it for almost as long as it's existed (if not as long), FreeBSD gmirror/graid has been doing it for as long as it's existed. It's not just ZFS that behaves like that, it's every single RAID implementation except the one found in BTRFS.

BlankSystemDaemon fucked around with this message at 02:06 on Feb 11, 2023

FAT32 SHAMER
Aug 16, 2012



is there a good tutorial for using docker on synology? specifically for qbittorrent, but also for like sonarr or whatever else?

i have a vague idea of everything i'm doing, but i wasnt sure if something like this was overkill or underkill or just wrong

Computer viking
May 30, 2011
Now with less breakage.

I will stretch this far when it comes to btrfs: In a simple mirror, I don't think it's notably more likely to eat your data than ext4.

Do I think the degraded mirror recovery experience is stupid? Yes.
Do I think the user experience, from the command line tools to how you mount subvolumes, is needlessly obtuse? Yes.
Will it eat your data or degrade your experience if you're just using a system with btrfs? No.
Does it add useful features over mdadm raid and ext4? Yeah.


(Do I trust more complex layouts than mirrors? Nooooo)

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
I goofed up a couple of weeks ago and let my cache drive hit 100% with btrfs and that was enough to corrupt the file table, lol

that was enough to have to manually recover everything, then reformat the two mirrored drives


in short, btrfs lol lmao

spincube
Jan 31, 2006

I spent :10bux: so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.
Grimey Drawer

FAT32 SHAMER posted:

is there a good tutorial for using docker on synology? specifically for qbittorrent, but also for like sonarr or whatever else?

i have a vague idea of everything i'm doing, but i wasnt sure if something like this was overkill or underkill or just wrong

I would simply install qBittorrent/*arr/etc from one of the Linuxserver images (i.e. punch 'linuxserver' into the Registry search doodad in the docker application), so you can play around and verify that you've set everything up correctly. Then, if you want to access stuff from outside your home network, install Tailscale from the Synology appstore, and the relevant Tailscale apps on your laptop or phone or whatever; that'll be far, far, far easier than messing around with port forwarding and OpenVPN and SSH and yadda yadda.

Try shooting any questions to the self-hosting thread?
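Under the hood, what the Synology Docker UI builds for you amounts to something like this - the PUID/PGID values and /volume1 paths are examples, not gospel; match them to your own user and shared folders:

```shell
docker run -d --name qbittorrent \
  -e PUID=1026 -e PGID=100 -e TZ=America/Chicago \
  -e WEBUI_PORT=8080 \
  -p 8080:8080 -p 6881:6881 -p 6881:6881/udp \
  -v /volume1/docker/qbittorrent:/config \
  -v /volume1/downloads:/downloads \
  lscr.io/linuxserver/qbittorrent:latest
```

The *arr containers follow the same pattern, just with their own ports and a shared /downloads mapping so they can see what qbittorrent fetched.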

Computer viking
May 30, 2011
Now with less breakage.

e.pilot posted:

I goofed up a couple of weeks ago and let my cache drive hit 100% with btrfs and that was enough to corrupt the file table, lol

that was enough to have to manually recover everything, then reformat the two mirrored drives


in short, btrfs lol lmao

Cache drives count as "more complex than a simple mirror" :colbert:

CopperHound
Feb 14, 2012

Computer viking posted:

Cache drives count as "more complex than a simple mirror" :colbert:
I think they were referring to unRAID cache drives which are literally just btrfs mirrors as opposed to the unRAID proprietary parity array.

E: maybe the proprietary mergerfs type thing on top of all that did play a role tho.

CopperHound fucked around with this message at 17:09 on Feb 11, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Computer viking posted:

Cache drives count as "more complex than a simple mirror" :colbert:

not on unraid :colbert:

a simple OS for simple people

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
There should be a setting in the cache drive for "minimum space" so when it's hit, the mover starts plunking stuff on the array.

But this doesn't work if you've got a shitload of torrents seeding.

This is why I picked up a 1TB ssd for my data share and set up a second cache. Had that happen once and it took down all my docker containers when the drive filled up.

FAT32 SHAMER
Aug 16, 2012



spincube posted:

I would simply install qBittorrent/*arr/etc from one of the Linuxserver images (i.e. punch 'linuxserver' into the Registry search doodad in the docker application), so you can play around and verify that you've set everything up correctly. Then, if you want to access stuff from outside your home network, install Tailscale from the Synology appstore, and the relevant Tailscale apps on your laptop or phone or whatever; that'll be far, far, far easier than messing around with port forwarding and OpenVPN and SSH and yadda yadda.

Try shooting any questions to the self-hosting thread?

Awesome, thanks!!

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Matt Zerella posted:

There should be a setting in the cache drive for "minimum space" so when it's hit, the mover starts plunking stuff on the array.

But this doesn't work if you've got a shitload of torrents seeding.

This is why I picked up a 1TB ssd for my data share and set up a second cache. Had that happen once and it took down all my docker containers when the drive filled up.

You can set the mover to go off when it hits a certain size or %full. This might be part of the CA Mover Tuning plugin but I think it's in the default menu

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

FAT32 SHAMER posted:

Awesome, thanks!!

The guides here are pretty good and I followed most of them before branching out into other things to install. The guides have the usual *arr, Sab, torrents, etc plus maybe some things you didn't know existed


waffle enthusiast
Nov 16, 2007



Any reason not to go with WD Red Plus 5400 rpm drives if I'm on a gigabit Ethernet connection (plus WiFi 6 for things)? I'm looking at picking up a 923+ for photo and file storage, and light Plex duty (will run it off my PC if transcoding becomes a problem).
