hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

cruft posted:

I don't even understand what you're asking, heh. If you want to know if you can bind mount a volume in a container OS, the answer is yes. You can also NFS mount things, although database over NFS is a first class ticket to slowsville, as you discovered.

We even had an NFS server running flatcar. And HA MariaDB and Postgres database pools.

I wasn't 100% sure what I was asking either, but I think you pretty much covered it.

My only real concern is how to easily access Docker bind-mounted folders to change config. If I can run an NFS server (like I currently do on Ubuntu), that solves it for me.


NihilCredo
Jun 6, 2011

suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

cruft posted:

I feel like that was the main thing. Databases want to do weird tricks with files, and adding filesystem abstraction is problematic.

Yep; for example, Postgres even on NFS is a big no-no, and of course forget about S3-esque providers.

The other things containers are good at are horizontal scaling (databases need to use their own replication systems) and ephemerality (databases have long startups and lots of long-running operations, and upgrades are typically one-way).

That said, there's nothing wrong with running your database process in a container as long as you lock it to one instance, on one machine, with direct disk mounting, and never restart it. You don't get to use any container management tricks, but you still avoid having to deal with system dependencies directly, and more importantly you can integrate the database configuration into whatever tool you use to manage the rest of your stack. Makes it easier to deploy multiple environments too.
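That single-pinned pattern can be sketched with a plain `docker run` (the image, container name, and host path here are illustrative, not anything from this thread):

```shell
# One instance, one machine, direct disk mount, no orchestrator.
# The host path keeps the data on a real local filesystem, and the
# restart policy only ever revives this same container in place.
docker run -d \
  --name pg \
  --restart unless-stopped \
  -e POSTGRES_PASSWORD=change-me \
  -v /srv/pgdata:/var/lib/postgresql/data \
  postgres:16
```

The same stanza drops straight into a compose file, which is what makes the "manage it alongside the rest of your stack" part work.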

Mr Crucial
Oct 28, 2005
What's new pussycat?

NihilCredo posted:

The other things containers are good at are horizontal scaling (databases need to use their own replication systems) and ephemerality (databases have long startups and lots of long-running operations, and upgrades are typically one-way).

You can create database clusters with scaling, auto-scaling, managed version updates and all kinds of other fun stuff in containerised environments but you need to use something like a Kubernetes operator to handle everything in a vaguely sane way.

I’ve been trying to build HA MySQL, Postgres and Redis clusters to support some of my apps on K8S and I’ve immediately ended up spending 90% of my homelab time babysitting my database layer so it’s quickly becoming very boring. Documentation for this stuff is generally not great (looking at you, official MySQL operator by Oracle) and it feels like the community of people who are insane enough to need highly resilient DBs whilst at the same time being too cheap to pay for RDS or some other cloud service is pretty small, so good luck getting help from anyone.

Going back to single container deployments and tolerating a few minutes of downtime whenever I need to restart something looks more appealing with every issue that I hit.

NihilCredo
Jun 6, 2011

suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

Mr Crucial posted:

Going back to single container deployments and tolerating a few minutes of downtime whenever I need to restart something looks more appealing with every issue that I hit.

I don't know what services you're hosting exactly, but I think most peeps in this thread are totally fine with like 66% uptime lol (auto-turn off the services at night).

Mr Crucial
Oct 28, 2005
What's new pussycat?

NihilCredo posted:

I don't know what services you're hosting exactly, but I think most peeps in this thread are totally fine with like 66% uptime lol (auto-turn off the services at night).

I host a Wordpress website that is moderately successful in terms of traffic but spectacularly unsuccessful financially. My previous hosting provider collapsed so I started self-hosting as a temporary measure but never got round to finding a replacement provider. I really should.

Really though it's an environment for learning things that will apply to my professional life and my real career. A totally unrealistic and pointless goal I've set for myself is to achieve 100% uptime for a year, whilst simultaneously being able to automatically apply updates to operating systems, Kubernetes + supporting apps, and applications (Wordpress + all plugins, MySQL, Redis) within 7 days of release, all with no service interruption.

Last year I got to May before I broke my streak - a MySQL update that caused about 3 minutes of downtime when I had to spin down/up my single DB container to update it, so that's why I started looking into DB clustering. This year I only made it to January 5th before losing service, but it was nothing to do with the database - a new version of iscsid conflicted with a new version of selinux that blew up my storage provider and took down Wordpress.

Maybe I should take up fishing or something.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Mr Crucial posted:

Maybe I should take up fishing or something.

You know you'd be ssh'ing into your server to check on it while waiting for them to bite. Don't kid yourself.

cruft
Oct 25, 2007

NihilCredo posted:

I don't know what services you're hosting exactly, but I think most peeps in this thread are totally fine with like 66% uptime lol (auto-turn off the services at night).

I got an email from the RIPE NCC a few weeks ago thanking me for running an Atlas probe for 2 years. That spurred me to get it working again, lol.

But part of that was finding out that I had a 97% uptime on my homelab Raspberry Pi.

BlankSystemDaemon
Mar 13, 2009



I'm managing four nines per year since the last major overhaul - which I'm pretty proud of, considering how little effort it takes.
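For reference, four nines works out to roughly 52.6 minutes of allowed downtime a year. A quick back-of-the-envelope check (assuming a flat 365-day year):

```shell
# Allowed annual downtime for a given number of "nines".
nines_downtime() {
    awk -v n="$1" 'BEGIN {
        avail = 1 - 10^(-n)               # e.g. n=4 -> 0.9999
        mins  = 365 * 24 * 60 * (1 - avail)
        printf "%.1f minutes/year\n", mins
    }'
}

nines_downtime 4   # four nines -> 52.6 minutes/year
nines_downtime 5   # five nines -> 5.3 minutes/year
```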

El Mero Mero
Oct 13, 2001

My uptime is essentially equal to my utility's uptime. Unfortunately my utility is PG&E

El Mero Mero
Oct 13, 2001

actually scratch that. The 2-second outage this afternoon reminded me that since I put in my UPS it's actually substantially better than PG&E's

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Got a question about media management. A long time ago I set up a separate instance of sonarr and radarr to only grab 4K releases. The thinking was that due to the size and transcoding for external users, I'd primarily get everything I wanted in 1080p, then get the really good stuff in 4K. Problem was, for the longest time there was hardly ever anything available to download.

Turns out, I needed to change some settings for the indexer to include the right categories. I had Movies/TV - HD selected, with no other options available and that was too restrictive. I checked the box for all TV/Movies (separate for each program) and let my profiles restrict to 2160p+. Bingo bango, I'm now flooded with content in 4K.

The question I have is: am I way overthinking this? Is maintaining two separate libraries (my Plex has a Movies section and a 4K Movies section; external users don't get access to the 4K one) stupid, and should transcoding be no big deal? Should I download everything (let's pretend infinite hard drive storage for this question) at the highest quality and let Plex sort it out?

Or is it actually a big deal, and my system (dual Xeon E5-2643 v3 @ 3.40GHz, no GPU support) would probably choke on transcoding to 1080p or lower for external users? I'm on the hunt to upgrade my system, so the same question would apply to a more modern system, assuming I hit an appropriate minimum compute power (within reasonable cost).

Thanks in advance!

Azhais
Feb 5, 2007
Switchblade Switcharoo
You can tell Plex to auto-transcode everything and store it alongside the original download. That way both versions are available in the same library, which saves the concern about your system not being able to keep up with transcoding. Does have disk space implications tho

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Azhais posted:

You can tell Plex to auto-transcode everything and store it alongside the original download. That way both versions are available in the same library, which saves the concern about your system not being able to keep up with transcoding. Does have disk space implications tho

Ah, okay. So the transcoding takes place up front and then it just serves it up when necessary. Is there a good rule of thumb as to what level is appropriate to transcode to for serving content over American broadband? I'm rated for 35 Mbps total upload, if that helps.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
Two management options to consider if you wanted to keep your 2 sonarr/radarr instances setup.
There's a tool called Syncarr that lets you sync two instances, so if you added something to your 4K Radarr it would also add it to the normal Radarr.
You also could set up overseerr/jellyseerr, which supports a separate 4k instance of radarr/sonarr and will let you request and add media to any of them from its single interface.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

THF13 posted:

Two management options to consider if you wanted to keep your 2 sonarr/radarr instances setup.
There's a tool called Syncarr that lets you sync two instances, so if you added something to your 4K Radarr it would also add it to the normal Radarr.
You also could set up overseerr/jellyseerr, which supports a separate 4k instance of radarr/sonarr and will let you request and add media to any of them from its single interface.

Cool, I may check those out. I don't mind managing two sets of *arr apps though. Good to know there's some other tools out there to consider though!

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Tdarr will also auto-transcode your library into a more space-efficient codec. That could be pretty valuable if you're going to be keeping multiple versions of the same video.
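Under the hood that kind of conversion is an ffmpeg pass; a standalone equivalent looks roughly like this (the CRF value and preset are illustrative choices, not Tdarr defaults):

```shell
# Re-encode the video stream to HEVC/x265 for space savings while
# copying audio and subtitle streams untouched. CRF ~22-24 is a
# common quality/size tradeoff; lower means bigger and better.
ffmpeg -i input.mkv \
  -map 0 \
  -c:v libx265 -crf 23 -preset medium \
  -c:a copy -c:s copy \
  output.mkv
```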

cruft
Oct 25, 2007

TraderStav posted:

Got a question about media management... sonarr and radarr ... Plex

Does anyone else ever get the feeling it's the same three dozen goons in every thread they've bookmarked?

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

cruft posted:

Does anyone else ever get the feeling it's the same three dozen goons in every thread they've bookmarked?

What can I say, I'm prolific. No, I will not stop posting.

e: I looked at your posting history to see what other threads we could be bumping into eachother on, thanks to that I see that there is a Plex thread I was unaware of. So I just added one more for you to see my posts in. Thanks!

TraderStav fucked around with this message at 15:56 on Jan 12, 2024

BlankSystemDaemon
Mar 13, 2009



El Mero Mero posted:

My uptime is essentially equal to my utility's uptime. Unfortunately my utility is PG&E
I would've had five nines per year if it wasn't for the third Unifi Security Gateway in five years going tits-up on me, and going into a boot-loop that resulted in it audibly going click and the RJ45 console port link indicator briefly flashing.
Same exact problem with all three, which convinced me that Ubiquiti has a serious design flaw in one of their central products (a core router+firewall that doesn't even come with PoE-in, despite being eminently suited for it, as the PoE switches almost certainly have better PSUs than the AC adapters for the USG).

So now I have a TP-Link Omada setup on the same UPS.

cruft
Oct 25, 2007

TraderStav posted:

No, I will not stop posting.

:justpost:

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

TraderStav posted:

Ah, okay. So the transcoding takes place up front and then it just serves it up when necessary. Is there a good rule of thumb as to what level is appropriate to transcode to for serving content over American broadband? I'm rated for 35 Mbps total upload, if that helps.

I'd probably pick around ~7 Mbps. However, it really depends on how many concurrent streams you need to serve: if you need to serve 10 at a time, 7 Mbps each is going to blow out your upload and bring your Plex service down. Also, your dual Xeon E5-2643 v3 @ 3.40GHz is probably not enough to do H.265->H.264, 4K->1080p, and tone-map HDR->SDR; that's a lot of computational power.
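The upload budget is just division; a quick sketch, where the ~20% headroom factor is my own assumption rather than anything from the thread:

```shell
# Simultaneous remote streams that fit in an upload pipe,
# leaving ~20% headroom for overhead and other traffic.
streams_for_upload() {
    awk -v up="$1" -v per="$2" 'BEGIN {
        usable = up * 0.8
        printf "%d streams at %d Mbps each\n", int(usable / per), per
    }'
}

streams_for_upload 35 7   # 35 Mbps upload -> 4 streams at 7 Mbps each
```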

Hughlander
May 11, 2005

cruft posted:

Does anyone else ever get the feeling it's the same three dozen goons in every thread they've bookmarked?

I mean, I do think that naturally there's going to be huge overlap between:

I like watching media in Plex
that I store on my NAS
that I self-host the software
to get things from Usenet

And you're going to see the same people talking in those threads

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

gariig posted:

I'd probably pick around ~7 Mbps. However, it really depends on how many concurrent streams you need to serve: if you need to serve 10 at a time, 7 Mbps each is going to blow out your upload and bring your Plex service down. Also, your dual Xeon E5-2643 v3 @ 3.40GHz is probably not enough to do H.265->H.264, 4K->1080p, and tone-map HDR->SDR; that's a lot of computational power.

It's only a handful of people, so it'd be like 3-4 max I think at any given time.

Thanks for laying out some of the technical things to look for when building a new machine. I think I'll start speccing out a new Unraid box to handle this better. From what I'm hearing, with a new build I should be able to serve up on-the-fly 4K->1080p transcoding for these 3-4 people. In that event, I would only keep the highest-quality file in the library and let the transcoding take it from there.

Any recommended resources for hardware to look at to do this? I think my only requirement is that it needs to be able to take my LSI card so I can connect to my JBOD. Everything else should reasonably be taken care of by the demands of transcoding. I made a list the other day of things my machine is currently doing and would need in the future. Think the hurdle is the transcoding:
code:
- Storage: Currently using 38TB, have extra 26TB in space, dual 14TB parity
- NZB Downloading (sab, sonarr, radarr, etc.)
- Plex
	- Transcoding 3-4 max multiple streams, mostly external, direct internal
	- 4K playback locally
- Crashplan/cloud backup
- Duckdns
- Nextcloud
- piHole
- swag/reverse proxy
- Tailscale
- Time Machine target
- Lightweight Windows VM
- Home Assistant
- Capability for more VMs as needed (Linux, etc.)
- Backup targets for other machines in house

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Separate note: my exuberance got the best of me. On my 4K versions of Sonarr/Radarr, I went through and added all the stuff I'd like to have in 4K, marking them to search for missing episodes (everything). This sent ~7TB to my SABnzbd. As the 4K downloads seem to take a lot longer for post-processing/copying on my server (perhaps due to the aforementioned older processor?), I have about 50 downloads in 'waiting' mode while they individually unpack and then get moved from the Downloads share to the Media share. This I/O bottleneck has me micromanaging the queue, as my gigabit speed greatly outpaces the machine's ability to post-process.

E: We have a winter storm coming our way with gusty winds, please god do not let the power go out and completely bork this process.

Variable 5
Apr 17, 2007
We do these things not because they are easy, but because we thought they would be easy.
Grimey Drawer

TraderStav posted:

I have about 50 downloads in 'waiting' mode while they individually unpack and then get moved from the Downloads share to the Media share. This I/O bottleneck has me micromanaging the queue, as my gigabit speed greatly outpaces the machine's ability to post-process.

There's a setting in Sabnzbd to pause downloads while completed ones are being processed.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Variable 5 posted:

There's a setting in Sabnzbd to pause downloads while completed ones are being processed.

Cheers, thank you! Just enabled that

cruft
Oct 25, 2007

Maybe someone on here would like this thing I threw together to visualize CPU load without having to run a bunch of new services:



It's some JavaScript that pulls /proc/stat from a web server and renders it as a chart. Everything happens in the browser.

https://git.woozle.org/neale/portal/src/branch/main/web/stat.html and https://git.woozle.org/neale/portal/src/branch/main/web/stat.mjs ought to get you going; you just have to tell the web server to serve /proc/stat when the browser asks for /proc/stat.
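The same numbers the page charts can be pulled by hand; here's a minimal shell sketch that reads the aggregate `cpu` line twice and computes utilization (Linux only, and it ignores the iowait/irq fields for simplicity):

```shell
# Percent CPU busy over a one-second window, from the first
# ("cpu") line of /proc/stat: user + nice + system ticks vs idle.
cpu_usage() {
    read -r _ u1 n1 s1 i1 _ < /proc/stat
    sleep 1
    read -r _ u2 n2 s2 i2 _ < /proc/stat
    busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
    total=$(( busy + (i2 - i1) ))
    echo $(( 100 * busy / total ))
}

cpu_usage   # prints an integer percentage between 0 and 100
```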

Mr. Crow
May 22, 2008

Snap City mayor for life

Hughlander posted:

I mean I do think that naturally there's going to be huge over lap towards:

I like watching media in Plex
That I store on my NAS
that I selfhost the software
to get things from Usenet

And you're going to see the same people talking in those threads

i keep wanting to try usenet and see what the hubbub is about, then they want you to pay for access and i'm just like EHHHHHHHH, i'll stick to private sites

cruft
Oct 25, 2007

Mr. Crow posted:

i keep wanting to try usenet and see what the hubbub is about, then they want you to pay for access and i'm just like EHHHHHHHH, i'll stick to private sites

I gave :10bux: to the Frugal guys a couple months ago to see what was going on in Usenet these days.

Seems like it's the preferred place to spam dozens of copies of your latest paint-drinking insane screed about whatever cabal is out to get you personally. One after another, in sequential order.

Not much else going on in alt.*.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Mr. Crow posted:

i keep wanting to try usenet and see what the hubbub is about, then they want you to pay for access and i'm just like EHHHHHHHH, i'll stick to private sites

Never having to touch torrents again is sooo worth $20/year

Corb3t
Jun 7, 2003

Can't expect every dirty pirate to want to pay for anything, but yeah, I've been using usenet to automate my entire stack for drat near a decade and I couldn't imagine mucking around on whatever new torrent site is the flavor of the month. It saturates my 1 Gbps connection and isn't reliant on somebody else sharing.

I also have Frugal Usenet for $40 a year.

Corb3t fucked around with this message at 21:17 on Jan 12, 2024

Variable 5
Apr 17, 2007
We do these things not because they are easy, but because we thought they would be easy.
Grimey Drawer
Are people using torrents and not paying for VPNs?

NihilCredo
Jun 6, 2011

suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

Variable 5 posted:

Are people using torrents and not paying for VPNs?

Most of the world doesn't give a poo poo about piracy. ISP letters are only a thing in the US, Germany, and a handful of other countries.

Cenodoxus
Mar 29, 2012

while true ; do
    pour
done


Mullvad costs $5/mo and it's great. They shut down the port forwarding, but it still works fine without it.

I tried usenet via Frugal a few years ago and the experience sucked. Indexers are awful, I'm not putting my credit card into some sketchy Russian website and I'm sure as hell not buying loving crypto to pay for the privilege of searching for some obscure Linux distro.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
I've never given any of the indexers my payment information.

I did realize the other day that it's been like 5 years since I set up my indexer sites, and I have no idea if half are defunct. I'd hate to start from scratch if they've all gone tits up, but so far they're chugging along.

Corb3t
Jun 7, 2003

Cenodoxus posted:

Mullvad costs $5/mo and it's great. They shut down the port forwarding, but it still works fine without it.

I tried usenet via Frugal a few years ago and the experience sucked. Indexers are awful, I'm not putting my credit card into some sketchy Russian website and I'm sure as hell not buying loving crypto to pay for the privilege of searching for some obscure Linux distro.

Privacy.com
Paypal.com

TraderStav posted:

I've never given any of the indexers my payment information.

I did realize the other day that it's been like 5 years since I set up my indexer sites, and I have no idea if half are defunct. I'd hate to start from scratch if they've all gone tits up, but so far they're chugging along.

Paid for a Lifetime membership to NZBGeek back in 2015, no complaints, no scary Russian propaganda.
I've had an NZB.su (that SkEtChY Russian site with hammer and sickle favicon) account since 2011.
Drunkenslug has been around for a long time too.

Corb3t fucked around with this message at 22:20 on Jan 12, 2024

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Corb3t posted:

Privacy.com
Paypal.com

Paid for a Lifetime membership to NZBGeek back in 2015, no complaints, no scary Russian propaganda.
I've had an NZB.su (that SkEtChY Russian site with hammer and sickle favicon) account since 2011.
Drunkenslug has been around for a long time too.

Think I have those but not [REDACTED]. No idea how many are used regularly or not. Sounds like there's been some institutionalization occurring.

TraderStav fucked around with this message at 18:02 on Jan 13, 2024

Potato Salad
Oct 23, 2014

nobody cares


Corb3t posted:

Can't expect every dirty pirate to want to pay for anything, but yeah, I've been using usenet to automate my entire stack for drat near a decade and I couldn't imagine mucking around on whatever new torrent site is the flavor of the month. It saturates my 1 Gbps connection and isn't reliant on somebody else sharing.

I also have Frugal Usenet for $40 a year.

do you feel comfortable sharing your stack

I've been out of the game for almost a decade

Corb3t
Jun 7, 2003

Potato Salad posted:

do you feel comfortable sharing your stack

I've been out of the game for almost a decade

So I do all my media requests through mobile Overseerr/Jellyseerr connected to Plex/Jellyfin + Sonarr + Radarr + Bazarr + Prowlarr, connected to SABnzbd using Frugal Usenet, with NZBGeek and Drunkenslug as indexers, all on Unraid. Pretty hands-off: my requests download in minutes and Plex refreshes instantly. Throw in Tautulli if you want to monitor Plex streaming among friends.

We also do a lot of streaming account sharing with friends but hard to beat the convenience of Overseerr when it’s so hands off.

Corb3t fucked around with this message at 04:28 on Jan 13, 2024


Dicty Bojangles
Apr 14, 2001

I agree, Overseerr is very slick and I'm super glad I finally gave it a try over the holidays. Huge time saver when all the visiting family have requests.
