Hughlander
May 11, 2005

xgalaxy posted:

I'm far from knowledgeable about this. My understanding is that Docker is like a lightweight VM. You can create "containers" that describe what services you want and how they are configured, and then you can give other people that container and they can install it on their system without much hassle. It is 'guaranteed' to work, as they say, without having to mess around with configuration. Plug and play, as long as whoever set up the container initially did their job well. Since it is a very lightweight VM in essence, uninstalling the container is pretty simple.

https://www.docker.com/what-docker

Anyways, what I'm suggesting is for someone to create a Docker container that has all of nzbget, sonarr, mylar, couch potato, newznab, etc. configured in it. Ready to go. Then all you have to do is install the container on your Linux box and you're done. No loving around.

I'm using one from GitHub from a year ago, but each app is its own container. The repo name was just 'Dockerfiles', so maybe Google that with couch potato? Mine is Transmission with PIA, CouchPotato, SickRage, SABnzbd, Mylar, Headphones, and I think one other. They share a data container for completed content, and a link to the NAS.


Hughlander
May 11, 2005

YouTuber posted:

I'm encountering a rather weird problem at the moment. I just flattened my server and set it up once again, since the default mount point OVH had sucked. Sabnzbd downloads files successfully and unpacks them properly. Sonarr seems to be getting stuck in some kind of loop: it detects the file and begins moving it to the proper show folder, and I watch it tick upward until 99%, then it disappears, only for the process to repeat itself. It's mounted via NFS and I seem to have no problem rsyncing stuff back and forth.

code:
System.UnauthorizedAccessException: Access to the path is denied.
  at System.IO.File.Move (System.String sourceFileName, System.String destFileName) <0x410dbbe0 + 0x0030f> in <filename unknown>:0 
  at NzbDrone.Common.Disk.DiskProviderBase.MoveFile (System.String source, System.String destination, Boolean overwrite) <0x410db7e0 + 0x00243> in <filename unknown>:0 
  at NzbDrone.Common.Disk.DiskTransferService.TryMoveFileVerified (System.String sourcePath, System.String targetPath, Int64 originalSize) <0x410db690 + 0x00042> in <filename unknown>:0

That's the error it's kicking out.

Looks like the uid the sonarr process is running under doesn't have write permissions to delete the source file.
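
A rough way to check that (the paths and the mono process name here are just guesses at a typical Sonarr-on-Linux setup, adjust for yours):
code:
# What user is the Sonarr (mono) process actually running as?
ps -o user= -C mono

# Who owns the completed-download dir on the NFS mount, numerically?
ls -ln /mnt/downloads

# And check the export options on the NFS server side; root_squash /
# all_squash can remap uids even when local permissions look right.
cat /etc/exports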

Hughlander
May 11, 2005

Thanks to this thread I got two more backup servers for free! Thanks thread!

Hughlander
May 11, 2005

Heners_UK posted:

Two? Which one other than xsusenet?

From this page:

bigis posted:
I'm getting tired of lovely completion on Astraweb and am considering moving to usenet.farm. Is anyone else with farm? Are you happy with them?

slash posted:
I tried their free trial block as a backup and it worked fine. Sign up for the free trial and give it a shot, nothing to lose!

Hughlander
May 11, 2005

kri kri posted:

Me too, it's been working great in my docker setup.

Share a Dockerfile? It was on my list to write one tonight. I didn't like the two I saw for some reason.

Hughlander
May 11, 2005

prom candy posted:

Is it worth switching to Sonarr if I already have Sickbeard set up the way I like it?

I recently did this and can say "yes", though I didn't try out Medusa. I sperged a bit and redid almost my entire setup, which was already docker containers. Added Jackett and Hydra. Dropped CouchPotato, SickRage, and SABnzbd for Radarr, Sonarr, and NZBGet. Picked up LazyLibrarian again now that there's a fork with Calibre support. Even got 2 new 1TB block accounts. Things are so much smoother.

Hughlander
May 11, 2005

vivica posted:

Can you please link to that lazylibrarian fork?

https://github.com/DobyTang/LazyLibrarian. Lots of active development.

Hughlander
May 11, 2005

xgalaxy posted:

Getting sick of mylar being poo poo. Really tempted to fork Sonarr (much like the so far successful Radarr has done).
I think I'd call it Hodarr as a lovely Game of Thrones reference.

I was going to suggest that maybe LazyLibrarian is a better starting point, but no, you're right, it should be Sonarr.

Hughlander
May 11, 2005

Speaking of: anyone have a way for new movies to show up in Radarr that's sane? I'd like to just add an RSS feed of new releases, have it show up on the calendar, and then monitor what I want it to snatch. Instead I added the IMDb new releases RSS feed and occasionally have to sort all movies by date added to monitor poo poo.

Hughlander
May 11, 2005

TraderStav posted:

Is headphones still the equivalent of Sonarr and Couchpotato? I tried using it a few years ago but recall it not meeting my expectations for one reason or another. Has it improved at all? Does it do the job?

It's at the CouchPotato level: it kinda works, but not great, and you sometimes need to help it. Mostly because no one knows how the gently caress to name music.

Hughlander
May 11, 2005


Isn't that moot, though, if you just have it monitor nzbget/transmission? I thought they deprecated drone directories in favor of just talking to the APIs of the fetchers.

Hughlander
May 11, 2005

Boris Galerkin posted:

Is there something like nzbhydra but for providers? I don’t download enough stuff to justify subscriptions so I just have a few block accounts. Getting tired of nzbget just deciding to hang and do nothing (and me not noticing for days) when a block account runs out or whatever.

Speaking of nzbhydra, is it something worth using?

When I started reading this I was going to say NZBGet... Do you have the tiers set up? I've never had a problem with it failing to fall back to a different block account for whatever reason. Though I do have one subscription and then 4-5 blocks that last for years, so my situation may be a bit different.
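
For anyone who hasn't touched it, the tiers are just server levels in nzbget.conf: it downloads from level 0 and only falls back to level 1 and up when an article fails everywhere below. A sketch with made-up names and hosts:
code:
# Main subscription at level 0
Server1.Name=main-sub
Server1.Host=news.example-provider.com
Server1.Level=0

# Block account as the level-1 fallback
Server2.Name=block-a
Server2.Host=news.example-block.com
Server2.Level=1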

Hughlander
May 11, 2005

SymmetryrtemmyS posted:

I decided to check my Hydra for the first time in a while, and here are my stats:


NZBGeek and PFMonkey are the best, as I'd suspected, though NZBCat has a notable advantage on response time. DogNZB just sucks.

I wish there was some way to migrate Usenet Megasearch stats into NZBHydra (if megasearch even kept stats?). I used that for years before Hydra rolled around.

Look again: the pie chart is how many times an indexer actually served the NZB for you. PFMonkey is the biggest, but #2 is Dog, not Geek. I checked my stats and I'm 60 Dog, 30 Geek, 10 everything else. But that's just what my material is. I may look at PFMonkey though. Hell, I fully forgot I even had Dog; I thought I was mostly just Geek.

Hughlander
May 11, 2005

SymmetryrtemmyS posted:

Dog is fourth; PF, .org, and Geek are all above it.


They also have the lowest percentage of unique results.


OK, then I'm just completely blind / misled by circle parts. The Dog slice really looked larger :)

Hughlander
May 11, 2005

tonic posted:

Has anyone figured out a way to use lists in Radarr to download bunches of older movies by category? Say I wanted to get 100 action movies that were most popular in the 1990s... is this possible?

I figured out how to use Trakt lists to get recent films, but can't figure out a configuration that gets older movies.

Also, I finally set up NZBHydra, and it was totally worth it just for the indexer stats :D

I have Radarr signed up for some IMDb lists like “Every Marvel movie”, “AFI Top 100 movies”, etc. Just find one for the top 100 action movies of the 1990s or something.

Hughlander
May 11, 2005

One huge plus for Docker (to me) is that once you get your system set up with well-maintained containers (linuxserver.io, for instance), then any kind of update just becomes:
code:
docker-compose pull --parallel
docker-compose up -d
And every part of your system is on the latest.
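
If you want that fully hands-off, a cron entry does it; a sketch, assuming your compose file lives in /opt/compose and docker-compose is on cron's PATH:
code:
# Pull new images and recreate changed containers nightly at 4am
0 4 * * * cd /opt/compose && docker-compose pull --parallel && docker-compose up -d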

Hughlander
May 11, 2005

Thermopyle posted:

Just because it's easier to get out of a container than a VM does not mean running in a container is on an equal security footing as the same software running directly on your server.

That's why you run a container in an LXC on a VM! Errr, or something. (I run my containers in an LXC, so if Docker fucks up somewhere I can just lxc stop it.) But that's not for this thread.

Hughlander
May 11, 2005

EL BROMANCE posted:

When something's been missed and you go to the Search function for it, does it have a red "!" icon saying why it's ignored the release you want, and if so, what does the mouse hover usually say?

Was going to post this. Any time a release wasn’t there this told me why. Do the manual search.

Hughlander
May 11, 2005

Skarsnik posted:

A recent update of something or other broke nzbhydra for me on CentOS 7, so I figured I'd take the opportunity to install hydra2.

Hell of a lot easier than getting 1 up and running, which required an altinstall of Python 2.7.3; just a simple yum install of the Java JRE and it ran straight out of the box (well, out of the zip).

Loads quicker than 1, I'm impressed

Meh. docker run linuxserver/nzbhydra vs. docker run linuxserver/nzbhydra2: neither is any easier than the other.
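
For anyone who hasn't gone the docker route, the whole install is roughly one command either way. A sketch using the usual linuxserver conventions (the paths, ids, and TZ are examples to adjust; 5076 is hydra2's default port):
code:
docker run -d --name=nzbhydra2 \
  -e PUID=1000 -e PGID=1000 -e TZ=America/New_York \
  -p 5076:5076 \
  -v /opt/nzbhydra2/config:/config \
  --restart unless-stopped \
  linuxserver/nzbhydra2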

Hughlander
May 11, 2005

wolrah posted:

No, the downloader (SAB, nzbget, etc.) downloads to a general directory, then the library manager (Sonarr, Radarr, etc.) which is monitoring the downloader sees that the download is complete, grabs the file out of that generic directory, and sorts/renames it in to the final location.

In my setup for example my downloader (currently nzbget) saves to /mnt/media/Unsorted/<category>/<post title>/

Sonarr and Radarr then take the files from there and put them in /mnt/media/Videos/TV Shows/<name>/ or /mnt/media/Videos/Movies/<name>/ as appropriate.

Drone folder is a legacy feature from before the library manager had full integration with the downloaders, you'd point it at your "Unsorted" folder and it'd check the folder for new content every X minutes. It's only there to support upgrades from old installs and stubborn people who insist on using random-rear end downloaders. Don't use this anymore.

I really should figure out why I still have one set up. I'm using Deluge and NZBGet, so I shouldn't need one, but I have it set anyway.

Hughlander
May 11, 2005

Volguus posted:

I cannot understand what Sonarr's problem with that is (and whether there is a setting to tell it to just get over itself). Show X, season 10, episode 11 has been downloaded by the download manager. Hell, Sonarr told it to do that. Copy that junk (whatever it is; the entire folder if you can't figure out what's a media file and what isn't) into the destination dir, mark it done, and... be done. Move on with life.

But no: if the mkv name is mangled, if the episode name is not known, if the stars just don't align, basically any excuse and it just does nothing.

That's the behavior I'd want and expect. If someone included a bunch of extra poo poo, multiple mkvs, etc., I don't want those all copied into my Plex library. (Just imagining having to deal with 2 different randomly named mkvs in a season folder in Plex is enough to make me shudder.) I want it to ask for help. Copying the entire directory over, where that could include executables, is a really bad idea for automation.

Hughlander
May 11, 2005

EL BROMANCE posted:

Radarr people -

When you've got the cutoff quality of your movies, what is your next step? Leave it in Radarr for the length of time it's in your collection, or clear it out?

I found I was just leaving things in Radarr forever, but then going into it and using that to delete anything I was done with, if I remembered to even do that. But with Plex already keeping a library of my movies, I questioned what the actual point of doing this was (especially as I quite like using Plex to delete things these days). So I stripped a few hundred entries out now, which probably had monitored tags in place for a bunch I really didn't need new versions of. I think going forward I'll at the very least unmonitor stuff that's come down, then periodically delete the entries of those unmonitored ones. It's a shame there isn't a built-in feature to unmonitor anything that's hit cutoff (I guess it doesn't for PROPER reasons, but cutoff + 7 days or something would generally fix that), but there are scripts, it seems.

Leave it in. Are you going to watch it immediately and realize that it only has Mandarin dialog? Are you going to remember every movie you ever added if you get a spur-of-the-moment "hey, let me add X"? (Or are you going to check 2 spots in that case?) Are you going to decide that 720/1080/4K is the highest you'll ever want to look at a movie at and never go back and change the cutoff on some titles? But I also don't go and delete things from Plex either, so maybe my PoV is massively different than yours.

Hughlander
May 11, 2005

EL BROMANCE posted:

Upgraded Sonarr to the v3 beta, and already glad I did, just because you can now do an en masse Show/Season selection in the Manual Import. Lord, doing it individually on the old version was probably my biggest bugbear. I like the new UI as a whole, but it's gonna take a bit to get used to the show view, I think.

Is there a linuxserver.io container for it?

Hughlander
May 11, 2005

EL BROMANCE posted:

Yeah the funny thing is when I used to run sab I would see endless attempts to access it with generic/broken password attempts. I guess generic scripts don’t know what to do with a wide open radarr.

Scripts don't, but if you show up on Google you can expect someone to check out your API tokens to indexers.

Hughlander
May 11, 2005

I think I'd need to heavily redo my NAS filesystems to make that worthwhile. Too much of my content is just on a ZFS dataset called 'media' with a normal snapshotting policy, so swapping over would just be me holding both copies until the newest monthly snapshot ages out.
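
That is, the old blocks stay referenced until the last snapshot that saw them expires. You can watch how much space snapshots are pinning with something like (pool/dataset name made up):
code:
zfs list -t snapshot -o name,used,creation tank/media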

Hughlander
May 11, 2005

zer0spunk posted:

It still trips me out that at one point writers on The X-Files would go on the X-Files newsgroup and actually engage with the fanbase. IIRC there's one episode where they shout out a Usenet poster by username inside the ep.

If you go back and read some of it, it's like this fly-in-amber moment of the early internet, before everyone decided to be huge assholes.

JMS would do the same on rec.arts.sf.tv.babylon5, to the point that there's actually a Wikipedia page covering it. https://en.wikipedia.org/wiki/Babylon_5%27s_use_of_the_Internet

Hughlander
May 11, 2005

Thermopyle posted:

Is there a way to force Radarr to search for multiple movies right now? I changed the profile on a bunch of movies, but it's a huge pain in the rear end to go to each movie and click search.

There is, but I'm phone posting so I can't see the full UI. Try Movies > Update Library?

Hughlander
May 11, 2005

Jerk McJerkface posted:

Just a quick update. I ended up installing Traefik 2.1 with forward auth leveraging Google OAuth to protect all my containers. It's all working very well. I know maybe it's not the best to open up 443 to the world, but all the containers that matter are protected by OAuth. The ones that OAuth doesn't work with (like the calibre container that uses Guac) I just whitelist to local IPs only.

I think it's a reasonable enough solution. Maybe I'm susceptible to DDoS, but that'd happen if I only had Ombi open anyway.

What I do, and it works with Guac, is add HTTP basic auth over HTTPS at the nginx reverse proxy.
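
Roughly like this, as a sketch; the username, paths, and upstream port are all made up. Create the credentials file once:
code:
htpasswd -c /etc/nginx/.htpasswd someuser
Then reference it in the HTTPS server block, in front of the proxy:
code:
location / {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://127.0.0.1:8080;
}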

Hughlander
May 11, 2005

TraderStav posted:

I’ve never gotten LL to find anything I was ever looking for, has your experience been different?

It's not great, but it does work. It keeps the latest Sanderson there without me worrying about what's come out. And I do some fire-and-forget things that may show up months later.

Hughlander
May 11, 2005

Skarsnik posted:

Letsencrypt makes it so easy to add multiple domains to a cert that I've never seen much point in trying to get a wildcard to work.

I think mine has about 8 or 9 now tied to a single primary cert.

I got API rate-limited by Let's Encrypt from reconfiguring my system before I went to a wildcard. I think I use 60 or so hostnames now that it's fully configured.
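
For anyone going the wildcard route: it requires the DNS-01 challenge. With certbot, the manual version is roughly this, with an example domain:
code:
certbot certonly --manual --preferred-challenges dns \
  -d "example.com" -d "*.example.com"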

Hughlander
May 11, 2005

Dicty Bojangles posted:

Pretty much the only thing I can think of is making sure you're using v3 Radarr and Sonarr. The rest is pretty much the same - IMHO if Plex works for you there isn't much reason to switch to Emby/etc..

What tags are you using for v3? I just noticed I was using sonarr:preview but radarr:latest. Checking the tags on Docker Hub, I don't see any static tag for v3 on radarr other than 'nightly', which I assume I wouldn't want.

Hughlander
May 11, 2005

Craptacular! posted:

A container's internal storage is read-only. Some containers are configured to allow minor revisions to be stored in mounted storage but generally speaking the strength of a container is that the contents are immutable and the mounted volume is highly portable. The mounted volume ideally only contains stuff

It's not that it's read-only, it's that it's ephemeral. You can write to it and it'll be saved in a layer, and then cleaned up the next time the container starts.

Hughlander
May 11, 2005

Jesse Iceberg posted:

I had thought the answer to this would be that it doesn't, as I run a similar setup (actually got its starting point from their Docker Compose, I think), and in the past, if I wanted to use nzb360 with the services outside my LAN instead of their GUI, I've VPN'ed in to do it to bypass Traefik.

But from some searching, it looks like you can maybe just put the OAuth layer in front of the service GUIs through careful use of Traefik labels, and leave the service API endpoints bare for nzb360 to hit. So long as you don't mind leaving those endpoints bare, that is, using the API key for their protection. Have to see if I can get that to work.

I should write up a full post but here's what I've done:
.env file
code:
PRIVATE_IP=HeadersRegexp(`X-Real-Ip`, `(^127\.)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)`)
docker-compose.yml
code:
  sonarr:
    container_name: sonarr
    image: ghcr.io/linuxserver/sonarr:preview
    restart: always
    volumes:
      - /dev/rtc:/dev/rtc:ro
      - sonarr_config:/config
      - /datastore/Media:/media/public
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    labels:
      traefik.enable: true
      traefik.http.routers.sonarr.rule: "Host(`sonarr.${DOMAIN}`)"
      traefik.http.routers.sonarr.tls: true
      traefik.http.routers.sonarr.middlewares: "secured-admin"
      traefik.http.routers.sonarr.priority: 99
      traefik.http.routers.sonarr2.rule: "Host(`sonarr.${DOMAIN}`) && ${PRIVATE_IP}"
      traefik.http.routers.sonarr2.tls: true
      traefik.http.routers.sonarr2.middlewares: "secured-local"
      traefik.http.routers.sonarr2.priority: 100
... Traefik labels:

          # 0=Admin 1=Co-Admin 2=Super User 3=Power User 4=User
          traefik.http.middlewares.auth-user.forwardauth.address: "https://organizr.${DOMAIN}/api/v2/auth?group=4"
          traefik.http.middlewares.auth-admin.forwardauth.address: "https://organizr.${DOMAIN}/api/v2/auth?group=1"

          # Allow local without Auth:
          traefik.http.middlewares.secured-user.chain.middlewares: "frameoptions, auth-user"
          traefik.http.middlewares.secured.chain.middlewares: "frameoptions, auth-admin"
          traefik.http.middlewares.secured-admin.chain.middlewares: "frameoptions, auth-admin"
          traefik.http.middlewares.secured-local.chain.middlewares: "frameoptions"
If you come from a local IP, you just get connected directly; otherwise you get redirected to Organizr, where you need a user login or an admin login depending on whether it's secured-admin or secured-user. Organizr is then using Plex auth as its login.

Hughlander
May 11, 2005

hbag posted:

im using mountain duck because windows' built in webdav functionality sucks poo poo
and, again, i'm not going to leave my pc running 24/7

A) WebDAV sucks poo poo period, which is why people are saying to look at SMB/CIFS (one-liner below).
B) No one is saying 24/7/365, but rather get a simple system going, then expand it out when you know WTF you're doing.
C) What you are doing doesn't make any sense anyway unless you're on dialup. Sonarr is on your PC that isn't on 24/7, as you've said 237948 times. So you have a 6-hour window where things are going to be going to nzbget, where 8-11 minutes later it'll be done. The other 18 hours, the Pi isn't going to be looking for anything because it's not getting any instructions. So why are nzbget and Sonarr not on the same machine? If both were on Windows, you'd get the files. If both were on the Pi, you'd get the files. And in no case would you be loving around with WebDAV.
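
And if the Pi ends up holding the files, the SMB/CIFS mount on the Linux side is one line anyway (share name and credentials made up):
code:
sudo mount -t cifs //pi.local/downloads /mnt/downloads \
  -o username=me,password=secret,uid=1000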

Hughlander
May 11, 2005

History Comes Inside! posted:

Anyone got a recommendation for a good indexer to pair with Mylar?

Also, why is Mylar such an unwieldy piece of poo poo compared to the equivalents for virtually any other thing? Just lack of interest leading to a lack of developers who really care that much?

It isn't. It's a fork of a common code base that LazyLibrarian and a music one also came from. A half dozen years back, that was the state of the art. It's just that then Lidarr/Radarr/Sonarr came along.

Hughlander
May 11, 2005

Tea Bone posted:

Mono is causing a CPU usage spike, and I've pinned it down to my Sonarr docker container. Every time "rescan monitored episodes" runs, mono goes to 100% CPU usage, causing my fans to ramp up.

I have two Radarr containers running which aren't causing the same issue.

It started about a week ago after around 2.5 tests without a hitch. I've tried updating the container, which hasn't made a difference. I've also "un-monitored" a bunch of stuff, to no avail. Does anyone have any ideas?

In the configuration for Sonarr, set the max CPU to 25% (0.25). I did that for Lidarr when I used to run it.
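
Since it's a container, the cap can also go on the container itself; a sketch assuming the container is just named sonarr:
code:
# Cap the running container at a quarter of one CPU
docker update --cpus 0.25 sonarr

# or the equivalent in a docker-compose v2.x file:
#   sonarr:
#     cpus: 0.25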

Hughlander
May 11, 2005

Hughmoris posted:

In addition to Linux ISOs, can Usenet also be a resource for Linux Manuals/Books? Or is it mainly just ISOs?

Yes, and even geeks reading Linux manuals to you! Check out all the 'arrs: Sonarr, Radarr, Readarr, Lidarr (which had really, really bad performance last I looked at it).

Hughlander
May 11, 2005

more falafel please posted:

1. A decade ago there were a couple free indexers that had everything, so it didn't matter as much. These days, the good ones are paid. NZBGeek has been great for me.

2. I assume it's fine, I haven't used it personally in a few years, but 99% of my stuff goes through Sonarr/Radarr/Lidarr/etc. When I do manually download an NZB I just add it to SABnzbd manually.

I definitely recommend checking out the *arrs if you haven't yet. Downloading stuff is great, but having it just show up automagically is better.

Same, only I use Prowlarr to search, and I push the button to send it to the NZB/BitTorrent client depending on what it finds first.

Takes No Damage posted:

:yeah: I've only been truly impressed with technology a few times in my life, but getting everything set up and then just seeing stuff appear in your Plex library as it comes out without you even remembering it is :discourse:

It's next level when you have a Discord bot hooked up to a friends-and-family channel so they can add stuff to the queue as easily as:
!tv The Fall Guy 2023

And you start seeing things you didn't even know existed show up...

Hughlander
May 11, 2005

All of the above, but also consistency in time. If the ISO came out at 5:30 PM and you want to run it at 6:00 PM, I know I'll be getting 75Mbps from Usenet; I have no idea what transfer rate I'll get from BT, or when it'll be done. Like above, I think I give Usenet +500 pts for the *arrs.


Hughlander
May 11, 2005

Matt Zerella posted:

You can set a delay under the profiles. "Prefer Usenet" with a 120 minute delay to BT is what I use.

Interesting, I'll do that, but in addition to the point system, since if there's an 8-year-old ISO, 100% of the blocks are going to be on Usenet, while even with '5 seeders' or whatever, chances are it's never coming down from BT. This is also probably what gets me to switch over to that other thing that auto-writes quality rules for the *arrs, so you can make a change in one place and push it out.
