lordfrikk
Mar 11, 2010

I randomly tried dognzb 2 days ago and registrations were open (I think they were closed before?). My account shows a trial status and a number of days remaining. Maybe it's a stupid question, but do trial accounts get converted to some permanent but limited account, or are they just closed if you don't sign up before the trial period is over?


lordfrikk
Mar 11, 2010


Rexxed posted:

They give you 30 days after the trial ends during which you can't use the site but can still upgrade from trial to member. If you don't buy a membership, they delete the trial account 30 days after the expiration date. It's hard to find correct info about how their stuff works, but they do have a support forum:
https://dognzb.cr/board/index.php?/topic/4128-trial-to-member/

I've been on a lite account since they reneged on lifetime memberships but it's always best to diversify on nzb sites anyway. The four or five I'm using now seem to keep things moving.

Thanks for the answer! I was looking for details about the trial membership, read the FAQ, and searched the forum a bit, but didn't find anything.

I've been using DrunkenSlug and Usenet Crawler, but the latter is often down. I'm looking for some decent indexers, but they aren't that easy to come by IMO. dognzb has everything neatly sorted and labeled, at least.

lordfrikk
Mar 11, 2010

I'll keep that in mind, thanks! I signed up for dog (only for a year, in case they decide to shut down in the meantime or something) and a kind goon sent me some more tips in a PM, so I'm all set for now. Setting everything up was a bit of a pain in the rear end, but even on my relatively slow internet, Usenet has been a crazy QoL improvement for me, so it was worth it in the end.

e: typo

lordfrikk fucked around with this message at 06:18 on Sep 7, 2018

lordfrikk
Mar 11, 2010

Where do you even store all that stuff when you're downloading 1 TB per month?

lordfrikk
Mar 11, 2010


Colostomy Bag posted:

NAS. Usually what I do is copy stuff down, unzip/rar/par/whatever on my SSD, then send it up to my repurposed server that runs FreeNAS with Plex on a ZFS pool. Works for me. It's an old 2600K Sandy Bridge box.

I'm probably the edge case, but I like reading about folks who do the "crap load of requests" stuff when I'm just the "uh, kind of a collector" type.

But how big are your home NASes anyway? I've seen that one video of a guy with 70 TB of storage, complete with an enterprise-level server enclosure, but I'm thinking not many people go all out on storage like that, right?

EL BROMANCE posted:

Depends if it's being archived or not, really, but being able to download at high speed means you can get the very best version of something that exists. Maybe a UHD remux that's 100 GB, then deleted afterwards.

Yeah, I can understand if people download, watch, and delete. Storing all that stuff is pretty mind-boggling to me, though :v:

lordfrikk
Mar 11, 2010

I signed up for a monthly on NewsgroupDirect and while it works great most of the time, I'm noticing a lot of older stuff is missing parts. Do I understand correctly that this is not an indexer problem but a Usenet provider problem? If so, what other providers have great retention without having you pay through the nose? For reference, I snagged their $3.14/mo deal.

lordfrikk
Mar 11, 2010


Canuckistan posted:

NewsGroupDirect used to be a reseller of a larger network with 3500+ days of retention, but they split off and became an independent provider. As such, their retention went to poo poo: it purports to be 1100 days, but I find even that unreliable.

I find they're good for recent stuff, and they max out my 600 meg connection, but you'll eat up blocks like crazy if you try to get anything older than 1000 days.

That sucks, but luckily they offer a 3-month trial (or close to it, with the first 3 months being under half a dollar) so I might cancel it. A lot of the time I want to download the original BD disc, and those are mostly gone.

I posted this on the previous page, but Newshosting seems (on paper) to have high retention. Does it match reality?

Personally, I don't download enough to warrant several providers at once, so I'm looking for the best overall. Right now I only have 300 Mbps, so it doesn't need to support anything higher than that.

lordfrikk
Mar 11, 2010


Xaris posted:

Which one is that?

Watchtower. Go here: https://containrrr.dev/watchtower/

The website has complete docs that will help you set it up.

I have it configured to automatically update daily at midnight, restart containers, and delete the old images. I have these 3 environment variables set, but you can also use command-line parameters:

code:
TZ	                Whatever/Whatever
WATCHTOWER_CLEANUP	true
WATCHTOWER_SCHEDULE	0 0 0 * * *
Docs for these variables:
https://containrrr.dev/watchtower/arguments/#time_zone
https://containrrr.dev/watchtower/arguments/#cleanup
https://containrrr.dev/watchtower/arguments/#scheduling
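
For reference, wiring those same variables into a single docker run command might look something like this (just a sketch; the timezone and schedule are the example values from above):

code:
# Watchtower needs the Docker socket mounted so it can inspect and recreate containers
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e TZ=Whatever/Whatever \
  -e WATCHTOWER_CLEANUP=true \
  -e WATCHTOWER_SCHEDULE="0 0 0 * * *" \
  containrrr/watchtower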

You can also tell it to ignore certain containers, etc. One thing I will say is that automatic updating of images/containers makes the image tag you're using more important. Usually I use "latest" for everything, but using that for databases is a really bad idea unless you like a bunch of broken containers and having to roll back your databases (probably a very bad idea). I know because I set my PostgreSQL to latest and it updated automatically from 12 to 13, breaking my poo poo. Oops!
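
To make that mistake harder to repeat, the fix is just pinning the major version in the image tag so auto-updates only ever pull point releases. A hypothetical example with the official postgres image:

code:
# risky with auto-updates: "latest" can jump a major version (e.g. 12 -> 13) underneath you
docker pull postgres:latest
# safer: a pinned major tag only moves within 13.x point releases
docker pull postgres:13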


norp posted:

https://hub.docker.com/r/v2tec/watchtower
Looks like it's been abandoned, although maybe it's simple enough that it hasn't needed an update in 3 years.

Seems like the author now maintains the image themselves under a new namespace: https://github.com/containrrr/watchtower

lordfrikk
Mar 11, 2010


Craptacular! posted:

While I can understand being able to issue a command that makes containers automatically pull a new image and rebuild themselves, I can't get behind having that happen entirely hands-off the way Watchtower does. Docker has a somewhat busted security model that requires administrators to exercise caution, and Watchtower takes that out of their hands in a way that just seems too scary to me. You're essentially giving a whole bunch of different software authors the ability to run whatever software they please on your system, with actions processed through a daemon that runs as root, and if one of the containers ever becomes malicious you'll probably auto-update to it unawares.

Just use
docker-compose up --force-recreate --build -d
when you want to update. It doesn't even have to be a deliberate act by the person who put together your image; if just one of those layers is hosed, you're involuntarily onboarding it and running it as superuser. So at least control when and why you update containers that are working fine, instead of just taking a "higher version number = good" approach.

Good point, I can only agree with that! I accepted that poo poo would break when I started using Watchtower, and it has, but I don't mind that much because the time saved across all the containers I'm running was worth it. I've only had PostgreSQL and Nextcloud break, both on major version upgrades, which I'm now more mindful of. Containers from more established providers like LinuxServer should minimize all sorts of problems, but they won't ever prevent the user being dumb like me.
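
For anyone who'd rather stay hands-on the way the quote suggests, the deliberate version of an update is short anyway. A minimal sketch, assuming a docker-compose managed stack:

code:
docker-compose pull    # fetch newer images only when you decide to
docker-compose up -d   # recreate just the containers whose image changed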

lordfrikk
Mar 11, 2010


Knot My President! posted:

How does Dog compare to DrunkenSlug?

According to my NZBHydra2 indexer stats, it easily competes on unique hits with DrunkenSlug, even on the free tier. Of course, my paid subs to DrunkenSlug and NZBGeek process a lot more API hits than DogNZB's 100 free hits, but it still has a lot of stuff the other two don't.

lordfrikk
Mar 11, 2010

I had some time to mess around with my setup, tried replacing NZBHydra 2 with Prowlarr, and ended up keeping it. I prefer adding indexers to Radarr/Sonarr directly, just because I can see the name of the indexer in the history or during an interactive search. It's a small change, but :shrug:

I also found a small tool called unmanic (https://github.com/Unmanic/unmanic) that is a bit finicky but lets you easily optimize your library, for example by automatically removing all English language tracks from every file, etc.

lordfrikk
Mar 11, 2010

Are those secret indexer people going to enter the thread and say mean words to me if I mention them, or what? I'm just lolling IRL at these dumb fight club rules.

lordfrikk
Mar 11, 2010

I like Usenet a lot more than private trackers. Got Eweka as my only provider, and DrunkenSlug and NZBGeek as indexers. Works like a charm! For anime I prefer Nyaa + AnimeBytes, though.

lordfrikk
Mar 11, 2010



This is after switching to Prowlarr roughly 3 months ago. I have paid accounts on the first two; for DOGnzb I'm using my expired premium. DOGnzb is highly competitive with the other two for my use case, but the reason I'm currently not subscribed is their payment method, which is extremely dumb.

lordfrikk
Mar 11, 2010

Does anyone have any experience with mergerfs? I just want to pool 3 drives together, so I thought it might be the easiest option. However, keeping all the SABnzbd/Sonarr/Radarr settings as before, some weird permission issue is cropping up. SABnzbd thinks it cannot write to the drive and/or says the disk is full (it's not), yet small files like TV episodes get downloaded nonetheless, while bigger downloads just produce so many errors that SABnzbd pauses downloading.

Doing docker exec into the container shows that I can't create anything as the assigned user, while root works. However, when deleting a file as root, it says the file doesn't exist, even though it does, and it does get deleted.

lordfrikk
Mar 11, 2010


Keito posted:

I used mergerfs for years before moving on to ZFS, so this is from memory, but maybe I can help if you post some details; there's not enough to go on from this post.

Some relevant bits would be where you are using mergerfs to create the union mountpoint (host system or container), what user is running mergerfs, what mount options you are using, how the mergerfs mount is being mapped into the containers you're having trouble with, and so on.

You do have the allow_other mount option, right? Some 'ls -la'-type output from where you fail to write or root gets an error on delete would be nice too.

Thank you for your help!

I use OMV6 to manage everything, and from what I can tell it's running directly on the host system. I have 3 physical drives mounted at /srv/dev-disk-by-uuid-$UUID, and the mergerfs union mountpoint is /srv/mergerfs/storage. mergerfs is running as root. The mount options are allow_other, cache.files=off, use_ino, dev, and suid. In the containers, the volumes are mapped to /srv/mergerfs/storage/$FOLDER. The user and group of the folders in the containers are the same (UID 1000, GID 100).

The raw ps aux | grep merger output is:

code:
root        1133  0.1  0.1 315968 11840 ?        S<sl 09:52   0:01 mergerfs storage:0cb17d37-057f-4482-993a-d080190cc915 /srv/mergerfs/storage -o rw,branches=/srv/dev-disk-by-uuid-8395dcd4-48bf-4a01-a626-9a7f4cddb339:/srv/dev-disk-by-uuid-92c9f992-2c0d-44bf-a534-d26c3e9e0000:/srv/dev-disk-by-uuid-d434c000-eb91-40e4-8fe3-1cb9f1fc456a,category.create=epmfs,minfreespace=200G,fsname=storage:0cb17d37-057f-4482-993a-d080190cc915,allow_other,cache.files=off,use_ino,dev,suid
The issue with no space left after exec'ing into a container:

code:
root@49a9b4628399:/downloads# ls -la
total 16
drwxrwsr-x 4 abc  abc  4096 Jun  7 17:18 .
drwxr-xr-x 1 root root 4096 Jun  7 16:06 ..
drwxrwsr-x 2 abc  abc  4096 Jun  2 16:17 bin
drwxrwsr-x 3 abc  abc  4096 Jun  7 17:15 tv
root@49a9b4628399:/downloads# df -h .
Filesystem                                    Size  Used Avail Use% Mounted on
storage:0cb17d37-057f-4482-993a-d080190cc915   22T  7.1T   15T  33% /downloads
root@49a9b4628399:/downloads# touch test
touch: cannot touch 'test': No space left on device

lordfrikk
Mar 11, 2010

I actually figured it out... I set minfreespace to 200 GB, and the drive where I originally stored all the files now has less than that left. The create policy was set to epmfs (existing path, most free space), but the paths didn't exist on the other drives 🤦 Which policy is best for saving to one drive and then moving on to another when the space runs out, if I don't want to micromanage folders?

lordfrikk
Mar 11, 2010


Keito posted:

Thanks for the info dump, I was about to ask you about that minfreespace setting but you beat me to it.

I had a look at the fstab file from my old file server, and I was also using epmfs for create. The setting you likely want to enable here is moveonenospc=true, so that the write goes to another disk when the one with the existing path is "full".

https://github.com/trapexit/mergerf...space-available


Edit: Alternatively, you can switch to the mfs policy if you don't care about "grouping" stuff together on single disks and just want writes to go wherever has the most free space. This is much faster than moveonenospc when the error is returned mid-write, since it doesn't have to redo everything on another disk.

I've set it to "all" and it works! "mfs" wouldn't work for some reason...

mergerfs is a bit more involved than I imagined; however, I will say that I'm a huge idiot who didn't read anything about mergerfs before setting it up, so it's 100% on me. It's actually surprising nothing burned down because of my stupidity :v:
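
For anyone hitting the same wall, it all boils down to a few mount options. A sketch of an fstab line using the mfs policy Keito suggested (the branch paths here are hypothetical placeholders; the real ones would be the dev-disk-by-uuid paths):

code:
# pool three branches at /srv/mergerfs/storage; create on the most-free disk,
# and retry on another branch if a write hits "no space left on device"
/srv/disk1:/srv/disk2:/srv/disk3  /srv/mergerfs/storage  fuse.mergerfs  allow_other,cache.files=off,use_ino,category.create=mfs,moveonenospc=true,minfreespace=200G  0 0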

lordfrikk
Mar 11, 2010


Warbird posted:

Would anyone know of any Usenet discussion groups focused on foreign films? Specifically Chinese ones? The usual suspects mentioned here don't seem to do well in that area.

I personally don't know about Usenet indexers, but for torrents there's Avistaz; they have all kinds of Asian movies/TV.

lordfrikk
Mar 11, 2010

Kinda late, but one big advantage of Jellyfin is that it works completely offline because the login is local; there's no need to reach any other online service. The UI is not great, but combine it with Infuse on Apple TV/iOS and it's suddenly the best combo IMO.

lordfrikk
Mar 11, 2010

Steam can usually saturate a gigabit connection. Somebody in the Steam Deck thread posted this picture from the Baldur’s Gate 3 launch:

lordfrikk
Mar 11, 2010

Downloading hundreds of GBs of Steam games on gigabit can tax your SSD and CPU more than you might expect, and I'm sure they can be a bottleneck too. It's also possible you have a cheaper SSD that relies on a cache to achieve its higher speeds, so your download (and usually the whole system) will screech to a halt when you run out of it during bigger downloads (my previous cheap SATA SSD would hit that around 80 GB).

lordfrikk
Mar 11, 2010

Deluge was good because it had a thin client I could run on my PC and use to easily feed it torrents. I had stability problems with it, though, so I've switched to rTorrent, which is more stable for me, but I can't find a comparable way to connect to the instance on my NAS from my desktop.

lordfrikk
Mar 11, 2010

I've had dog for a while, but it stops working intermittently and it's not the fastest.

I do have a lot of successful grabs from it, though, so I'm not sure how much that matters in an automated environment.

lordfrikk
Mar 11, 2010

I haven't bought lifetime subs for anything because I'm always worried they'll go back on their promise, and I get to feel superior for supporting the devs more :downs:

lordfrikk
Mar 11, 2010

I've been using Sonarr v4 for months now and I can't remember hitting a single bug, so either they're nonexistent, or minuscule, or I'm entirely too dumb to notice.

lordfrikk
Mar 11, 2010

Jellyfin with Infuse on Apple TV is perfect. I used CoreELEC for 2 years before, and it felt like using something from the 90s. I appreciate all the people working on it, but drat.

lordfrikk
Mar 11, 2010

I've been using SABnzbd since I started self-hosting stuff ~4 years ago and haven't had any issues with it :shrug:

lordfrikk
Mar 11, 2010

LunaSea is available for iOS and Android, and it also does Sonarr/Radarr, etc., but I don't know how good NZBClient is.


lordfrikk
Mar 11, 2010


Jel Shaker posted:

can i ask what apps people are using for watching video on their iphone?

i'm using jellyfin so i assume swiftfin makes the most sense, but i work in an area with terrible wifi so i can't really stream remotely; downloading is essential but unfortunately not possible in the app

Infuse. It works on iPhone, iPad, Apple TV, and Mac, and connects flawlessly to Jellyfin. I regularly use it to download stuff to my Mac and watch it offline on a train/plane.
