Decairn
Dec 1, 2007

Skarsnik posted:

I recently switched to Sonarr but couldn't get the sickbeard indexer working, how do you do it exactly?

It's possible the indexer was down at the exact moment I was trying, though

Sonarr -> Settings -> Indexers -> + (Add) -> Usenet Newznab -> Custom. Name: Lolo Sickbeard; enable Search; enable RSS; URL: http://lolo.sickbeard.com; Categories: 5030,5040; everything else blank.
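
Under the hood that config just turns into Newznab-style API calls against the URL you entered; a rough sketch of the equivalent search in Python (the apikey line is a placeholder - only needed if the indexer requires a key):

pre:
# Rough sketch of the Newznab-style search Sonarr issues against a custom
# indexer like the one configured above. t/cat/q/apikey are standard
# Newznab API parameters.
import requests

BASE = "http://lolo.sickbeard.com/api"
params = {
    "t": "tvsearch",       # Newznab TV-search function
    "cat": "5030,5040",    # SD TV + HD TV, same categories as above
    "q": "some show",      # search term; omit for an RSS-style listing
    # "apikey": "xxxx",    # placeholder - add only if your indexer needs one
}
resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])     # results come back as an RSS/XML document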

Skarsnik
Oct 21, 2008

I...AM...RUUUDE!

Yep, exactly what I was doing yesterday with no luck

Working fine now, ah well

thanks!

Feenix
Mar 14, 2003
Sorry, guy.

Decairn posted:

Sonarr -> Series -> Add Series -> Import existing series from disk -> enter new HDD path
No idea on SABNZBD.

Ok, so I actually went to Series > Series Settings, found a way to toggle all of them, and then down at the bottom I could set the new path for my shows. No problem, it seems to have taken. Now when I go to search for anything (manual or automatic) I just get the little animated searching balls forever... weird... :\

Gwaihir
Dec 8, 2009
Hair Elf

Vykk.Draygo posted:

I've just about had it with DogNZB's dogshit worthless broken API. Are there any decent alternatives that aren't closed off, or am I pretty much stuck rolling my own newznab indexer? If so, how much work is involved in that if I really only care about HD shows, HD movies, and music albums?

I was seeing a decent number of timeouts to dog in my sickbeard logs so I checked on their forums, apparently they've got a new set of servers and load balancers for their API services and they're getting them installed this week.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Vykk.Draygo posted:

I'm using that in Sonarr just fine. My problem is for when I want to search for something through NZB 360 on my phone. Sure I can still use the DogNZB website through the mobile browser, but I'd prefer it to just work.

Usenet Crawler maybe?

Vykk.Draygo
Jan 17, 2004

I say salesmen and women of the world unite!

Gwaihir posted:

I was seeing a decent number of timeouts to dog in my sickbeard logs so I checked on their forums, apparently they've got a new set of servers and load balancers for their API services and they're getting them installed this week.

I saw that too. They've been "upgrading" for months, but maybe they really will get it done soon.

Thanks, this seems to work fine.

Vykk.Draygo fucked around with this message at 17:54 on Apr 7, 2015

lokk
Nov 18, 2005
i'm legit.
Is there any way to set up Sick Beard to download multiple versions of the same episode? I'm getting tons of fakes for a certain episode of a certain show that's quite popular, and I don't want to miss the ACTUAL episode before it gets DMCA'd.

Dicty Bojangles
Apr 14, 2001

sure just switch to Sonarr :v:

Skarsnik
Oct 21, 2008

I...AM...RUUUDE!

And have it download it over and over, failing each time

8 times here till I noticed..

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!

Skarsnik posted:

And have it download it over and over, failing each time

8 times here till I noticed..

I think mine only grabbed it twice: the first grab failed import and was blacklisted automatically; the second succeeded, and I had to blacklist it manually.

Skarsnik
Oct 21, 2008

I...AM...RUUUDE!

Despite having the -KILLERS release blacklisted, it's tried to grab it a few times, dunno why

Sonarr really needs a 'don't try and download till after broadcast' setting

Dicty Bojangles
Apr 14, 2001

You could try setting minimum age in the indexers tab to 30min or whatever but I suppose that might turn into a DMCA issue.

elwood
Mar 28, 2001

by Smythe
I changed from sickbeard to sonarr 2 weeks ago and it somehow just worked. Woke up this morning, turned on my laptop, sonarr started, grabbed what I think you are talking about (all available episodes), somehow ignored the fake and encrypted ones and everything looks to be alright.

SymmetryrtemmyS
Jul 13, 2013

I got super tired of seeing your avatar throwing those fuckin' glasses around in the astrology thread so I fixed it to a .jpg

elwood posted:

I changed from sickbeard to sonarr 2 weeks ago and it somehow just worked. Woke up this morning, turned on my laptop, sonarr started, grabbed what I think you are talking about (all available episodes), somehow ignored the fake and encrypted ones and everything looks to be alright.

Sonarr is really well done. I've been using it since early 1.0, and it was better than Sickbeard even then; a couple of years later, it's one of my favorite programs on my computer. I wish they would expand its purview to compete with CouchPotato.

mrmcd
Feb 22, 2003

Pictured: The only good cop (a fictional one).

My Sonarr somehow managed to download and import what I assume are fakes from like 4 weeks out. :haw:

This is gonna be real fun if this keeps up all season.

Vykk.Draygo
Jan 17, 2004

I say salesmen and women of the world unite!
You grandpas watching shows in SD need to get with the times :corsair:

edit: I switched to Sonarr last week and everything seems to be working great, except one show (a daily show) is extracting its files into my completed-shows folder, then moving just the episode over to where it should go, leaving behind a folder with the scene name containing an nfo file and a sample. Anybody know why just this one show would be doing this?

Vykk.Draygo fucked around with this message at 23:06 on Apr 12, 2015

Kid
Jun 18, 2004

Gotcha
Is there any way I can get Sonarr to just rename the video files and dump them all into 1 folder without making a subfolder for every series? I just move everything off of that harddrive as soon as it finishes and don't need any sorting after naming the files.

SymmetryrtemmyS
Jul 13, 2013

I got super tired of seeing your avatar throwing those fuckin' glasses around in the astrology thread so I fixed it to a .jpg

Kid posted:

Is there any way I can get Sonarr to just rename the video files and dump them all into 1 folder without making a subfolder for every series? I just move everything off of that harddrive as soon as it finishes and don't need any sorting after naming the files.

I'd suggest turning off Sonarr processing and instead just use a script with your download client. There are good, configurable organizing scripts for both NZBget and SabNZBD+.
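
For what Kid wants, the guts of such a script are tiny - a minimal sketch for SABnzbd (the destination folder is a placeholder, and a real organizing script would also rename against a naming template before moving):

pre:
#!/usr/bin/env python3
# Minimal sketch of a flat post-processing script for SABnzbd: pull the
# video files out of a finished job and dump them into one folder, with
# no per-series subfolders. DEST is a placeholder path.
import os
import shutil
import sys

VIDEO_EXTS = {".mkv", ".mp4", ".avi"}
DEST = "/mnt/flat"  # placeholder dump folder

def flatten(job_dir):
    for root, _dirs, files in os.walk(job_dir):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext in VIDEO_EXTS and "sample" not in name.lower():
                shutil.move(os.path.join(root, name),
                            os.path.join(DEST, name))

if __name__ == "__main__":
    flatten(sys.argv[1])  # SABnzbd passes the job's directory as argument 1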

Diviance
Feb 11, 2004

Television rules the nation.

SymmetryrtemmyS posted:

Sonarr is really well done. I've been using it since early 1.0, and it was better than Sickbeard even then; a couple of years later, it's one of my favorite programs on my computer. I wish they would expand its purview to compete with CouchPotato.

One of their eventual goals is movie support, but they want to be stable and very functional with TV first, because movie support is apparently a big undertaking and they'd have to concentrate less on TV while working on it.

Murodese
Mar 6, 2007

Think you've got what it takes?
We're looking for fine Men & Women to help Protect the Australian Way of Life.

Become part of the Legend. Defence Jobs.
I don't remember if I ever talked about this on here, so here you go (also sorry Thermopyle, I only just saw your post from April 2013 today, I do not read the thread regularly :():

quote:

It's been a long-rear end time since this project was first on here and it's changed fairly significantly since then, so I figured I'd re-mention it for anyone that's after a simple indexer.

Most of the information on the project is on the github readme, so I'll only put a bit here:

History

NZBMatrix died and I figured that it wouldn't be that long before other indexers started dropping off the perch, so I started using my own Newznab+ install. After realising I didn't need something that complex and prone to over-management to serve myself and my friends, I started writing an indexer in Python / MongoDB.

The aim was to write an indexer that pretty much only presented an API and used significantly fewer resources than existing solutions - I didn't need the webui or anything, since the vast majority of traffic was API hits. Hence, pynab-mongo was written. Mongo worked well for a while, but ran into some problems involving orphaned NZBs / releases that would require several-week-long queries to resolve. It was quicker for me to rewrite the indexer using postgres than to run those queries, which is what I did.

Pynab-postgres is the result. NZB creation performs on par with the mongo version (thanks to some postgres-specific optimisations) and searching is much more reliable. It's also significantly easier to develop, and something I've had a lot of fun with over the last year or two.

Should I use this?

Is the majority of access to your indexer via the API? Are you looking for an indexer that's relatively easy to set up and run, requires minimal ongoing maintenance, and has low hardware requirements? Do you not care about user limits or non-API-related metadata?

Has:

- Newznab-compatible API (anything that works with NN works with this)
- Group indexing, release creation / categorisation, some renaming of obfuscated releases
- Binary blacklisting, compressed-header support, SSL
- Post-processing (imdb/tvrage, password checking, nfos, sfvs) to assist API serving
- The world's most basic query webui to look up books and poo poo
- Fast processing, it's quite optimised and can run on pretty simple hardware

- Uses NN+'s regex collection (requires an NN+ ID - go donate to them, because they absolutely deserve it), or supply your own

Does not have:

- Graphical administration
- Really, any kind of detailed webui (you can give it queries and that's about it)
- API limits or any kind of advanced user administration (API keys are used for access and generated via CLI)
- Detailed metadata for releases (no need for amazon/rottentomatoes/whatever integration, because there's no webui to display that)
- ???

TL;DR

Small fast indexer for API use.

Use it if it suits you, feel free to request/code features. Happy to take ideas of stuff to implement, too. Feedback welcome!

https://github.com/Murodese/pynab

Double tl;dr: a really fast usenet indexer. It's pretty stable now, being in use by quite a few people for a couple of years, and most of the bugs are hammered out. If you're looking for something to index weird groups not covered by other indexers or want to just run your own indexer from home, this works. In the future I'll package it into a tiny portable thing that can just provide people with their own poo poo, without having to pay for indexer access or send traffic externally (for people that live in Australia, for instance).

Totally looking for people to build more pre-db integration into it, as well. We do a couple of groups, but there are some big ones missing that I don't have any kind of pre access to.

Mr Chips
Jun 27, 2007
Whose arse do I have to blow smoke up to get rid of this baby?

Murodese posted:

I don't remember if I ever talked about this on here, so here you go (also sorry Thermopyle, I only just saw your post from April 2013 today, I do not read the thread regularly :():


Double tl;dr: a really fast usenet indexer. It's pretty stable now, being in use by quite a few people for a couple of years, and most of the bugs are hammered out. If you're looking for something to index weird groups not covered by other indexers or want to just run your own indexer from home, this works. In the future I'll package it into a tiny portable thing that can just provide people with their own poo poo, without having to pay for indexer access or send traffic externally (for people that live in Australia, for instance).

Totally looking for people to build more pre-db integration into it, as well. We do a couple of groups, but there are some big ones missing that I don't have any kind of pre access to.

Request: add a database abstraction layer so I can point it at oracle instead of postgresql

Murodese
Mar 6, 2007

Think you've got what it takes?
We're looking for fine Men & Women to help Protect the Australian Way of Life.

Become part of the Legend. Defence Jobs.

Mr Chips posted:

Request: add a database abstraction layer so I can point it at oracle instead of postgresql

There is one, but parts are heavily optimized and I have to deal with certain bits in a DBMS-specific way. MySQL support exists but isn't fast, for instance.

e; actually, you can try it just by changing the db engine in your config to 'cx_oracle' and pip3 install cx_oracle. It'll be slow as poo poo saving segments to the db, but that's about the same speed as newznab works :v:
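
Roughly what that engine swap amounts to, assuming an SQLAlchemy-style layer underneath (the URLs and credentials below are placeholders):

pre:
# Sketch of a DB engine swap via an SQLAlchemy-style connection URL -
# the dialect prefix picks the driver, the rest of the code is unchanged.
from sqlalchemy import create_engine, text

# engine = create_engine("postgresql://pynab:secret@localhost/pynab")
# engine = create_engine("mysql://pynab:secret@localhost/pynab")
engine = create_engine("oracle+cx_oracle://pynab:secret@localhost/XE")

with engine.connect() as conn:
    # Oracle insists on FROM dual for constant selects
    print(conn.execute(text("SELECT 1 FROM dual")).scalar())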

Murodese fucked around with this message at 05:52 on Apr 13, 2015

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Hah, I forgot all about that. I ripped some ideas from it for my own distributed indexer running over multiple OS threads, green threads, processes, or machines.

I should clean that up and put it on github...

Murodese
Mar 6, 2007

Think you've got what it takes?
We're looking for fine Men & Women to help Protect the Australian Way of Life.

Become part of the Legend. Defence Jobs.
Yeah, chuck it to me when it's done, I'd love to see it. I was originally going to write this to use distributed collection/processing, but you can already do that to some extent by using the DB as a centralised event handler and just splitting the various responsibilities over different machines.
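
The trick that makes the DB work as the event handler is atomic job claiming - each worker, wherever it runs, grabs the next unclaimed unit of work in one statement. A rough sketch of the pattern with psycopg2 (the jobs table is hypothetical, not pynab's actual schema):

pre:
# Sketch of DB-as-event-handler work distribution: any worker on any
# machine claims the next unclaimed job atomically, so collection,
# backfill and post-processing can live on different boxes.
import psycopg2

conn = psycopg2.connect("dbname=pynab user=pynab")  # placeholder DSN

def claim_job(worker_id):
    with conn, conn.cursor() as cur:
        cur.execute("""
            UPDATE jobs SET claimed_by = %s
            WHERE id = (SELECT id FROM jobs
                        WHERE claimed_by IS NULL
                        ORDER BY id LIMIT 1
                        FOR UPDATE)
            RETURNING id, payload
        """, (worker_id,))
        return cur.fetchone()  # None when nothing is waiting

print(claim_job("backfill-worker-1"))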

mrmcd
Feb 22, 2003

Pictured: The only good cop (a fictional one).

What is the best torrent client to use with Sonarr on Linux, with all-CLI or remote access?

I have sonarr+sabnzbd running on my HTPC, which isn't totally headless, but I mostly keep XBMC on the main display for actually watching TV and then drive everything else over a web browser or ssh window from my other computer. There are a few things I'd like to fill in from private torrent sites that usenet and newznab indexers seem to just be missing, but I've not used torrents in years, since before utorrent turned to adware poo poo.

bollig
Apr 7, 2006

Never Forget.
Not really on the topic at hand, but has to do with usenet, just not liberating files.

I'm going through some old newsgroup discussion and I was wondering what it meant if a line was preceded by a # or | or some combination of the two? Real neckbeard stuff, thanks.

wolrah
May 8, 2006
what?

bollig posted:

Not really on the topic at hand, but has to do with usenet, just not liberating files.

I'm going through some old newsgroup discussion and I was wondering what it meant if a line was preceded by a # or | or some combination of the two? Real neckbeard stuff, thanks.

Without the actual post to get some context from, I'd guess it's an unusual way to indicate quoted content when doing an inline reply. Typically people use >, but who knows. Otherwise, # precedes a single-line comment in a lot of programming languages, so it's possibly something related to that.

bollig
Apr 7, 2006

Never Forget.

wolrah posted:

Without the actual post to get some context from, I'd guess it's an unusual way to indicate quoted content when doing an inline reply. Typically people use >, but who knows. Otherwise, # precedes a single-line comment in a lot of programming languages, so it's possibly something related to that.

So I'm looking at a passage that has something like this:

In article, so and so writes

>blah blah

#looks like a response to above, which refers to

>>this stuff

>now this looks like a response to the '>>' stuff above

#this looks like it's a response to above

And then there's some text without #'s or >'s.

edit: most of those are multiple lines, not just single lines.

bollig fucked around with this message at 15:03 on Apr 13, 2015

bollig
Apr 7, 2006

Never Forget.

bollig posted:

So I'm looking at a passage that has something like this:

In article, so and so writes

>blah blah

#looks like a response to above, which refers to

>>this stuff

>now this looks like a response to the '>>' stuff above

#this looks like it's a response to above

And then there's some text without #'s or >'s.

I guess: is there any reason you would stuff a # in front of something you were typing yourself in that instance?

gabensraum
Sep 16, 2003


LOAD "NICE!",8,1
I wish there was a way in Sickbeard to set an unaired episode to be ignored. There's just no checkbox until the time comes, but I'd like to ignore things like "best of" episodes in advance. Currently I pause it before those weeks if I remember, but usually I don't.

Does Sonarr do that kind of thing?

Dicty Bojangles
Apr 14, 2001

Yes - with Sonarr you can toggle ignore or download on individual episodes, seasons, or even entire shows if you want.

gabensraum
Sep 16, 2003


LOAD "NICE!",8,1
Thanks - Sickbeard can do all that as long as the episodes have aired; it's the unaired ones it won't touch. I'll have a look.

xgalaxy
Jan 27, 2004
i write code
So what are the storage and bandwidth implications if someone wanted to run their own indexer for their own personal use?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

xgalaxy posted:

So what are the storage and bandwidth implications if someone wanted to run their own indexer for their own personal use?

Bandwidth might be 100 MB a day if you index the top few groups (which should be good enough if you don't care about some esoteric stuff). I've been running an indexer for personal use for 1-2 years, indexing like 5 groups, and my database plus NZB storage is around maybe 75 GB.
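
(Rough arithmetic on that: 100 MB/day for about 18 months is roughly 55 GB of raw headers pulled down, so ~75 GB of database plus NZB storage after processing is the right order of magnitude.)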

Murodese
Mar 6, 2007

Think you've got what it takes?
We're looking for fine Men & Women to help Protect the Australian Way of Life.

Become part of the Legend. Defence Jobs.

xgalaxy posted:

So what are the storage and bandwidth implications if someone wanted to run their own indexer for their own personal use?

I index something like 25 popular groups (barring some really spammy ones), and have backfilled to roughly 700-900 days on stuff. End result is that I have around 520,000 releases. My DB looks like this (I've noted the tables that fluctuate really wildly as you're indexing, because the data is collected and then aggregated, compressed and deleted):

pre:
 nspname  |              relname               |    size    |       refrelname        |    relidxrefrelname     | relfilenode | relkind | reltuples | relpages 
----------+------------------------------------+------------+-------------------------+-------------------------+-------------+---------+-----------+----------
 pg_toast | pg_toast_4704468                   | 12 GB      | nzbs                    |                         |     4704472 | t       |   4866792 |  1579537
 public   | segments                           | 5790 MB    | pg_toast_4704738        |                         |     5192408 | r       |   4761148 |   741172 <-- fluctuates wildly
 public   | ix_segments_segment                | 2012 MB    |                         |                         |     5192415 | i       |   4761148 |   257486 <-- same with this one
 public   | ix_segments_part_id                | 1599 MB    |                         |                         |     5192416 | i       |   4761148 |   204694 <-- and this one
 public   | segments_pkey                      | 1288 MB    |                         |                         |     5192414 | i       |   4761148 |   164855 <-- ..and this one
 public   | nzbs                               | 287 MB     | pg_toast_4704468        |                         |     4704468 | r       |    494241 |    36707
 public   | pres                               | 234 MB     | pg_toast_4704419        |                         |     4704419 | r       |   1026911 |    29956
 public   | releases                           | 168 MB     | pg_toast_4704633        |                         |     5581407 | r       |    519001 |    21510
 pg_toast | pg_toast_4704468_index             | 158 MB     | pg_toast_4704468        | nzbs                    |     4704474 | i       |   5191460 |    20280
 public   | ix_pres_name                       | 116 MB     |                         |                         |     4704430 | i       |   1026911 |    14824
 public   | nfos                               | 105 MB     | pg_toast_4704457        |                         |     5581477 | r       |    138937 |    13496
 pg_toast | pg_toast_4704457                   | 85 MB      | nfos                    |                         |     5581480 | t       |     57122 |    10881
 public   | ix_pres_requestgroup               | 59 MB      |                         |                         |     4854331 | i       |   1026911 |     7580
 public   | pres_uniq                          | 58 MB      |                         |                         |     5223660 | i       |   1026911 |     7396
 public   | releases_uniq                      | 56 MB      |                         |                         |     5581431 | i       |    519001 |     7158
 public   | ix_parts_group_name                | 53 MB      |                         |                         |     5192401 | i       |     13987 |     6777
 public   | ix_releases_search_name            | 52 MB      |                         |                         |     5581428 | i       |    519001 |     6597
 public   | ix_parts_binary_id                 | 36 MB      |                         |                         |     5192403 | i       |     13987 |     4599
 public   | ix_parts_posted                    | 36 MB      |                         |                         |     5192404 | i       |     13987 |     4587
 public   | ix_pres_requestid                  | 34 MB      |                         |                         |     4854332 | i       |   1026911 |     4321
 public   | ix_parts_total_segments            | 34 MB      |                         |                         |     5192402 | i       |     13987 |     4294
 public   | parts_pkey                         | 33 MB      |                         |                         |     5192400 | i       |     13987 |     4287

Newznab uses a lot more space than pynab, though (something like 8-10x more?), and NZBs are stored on the disk rather than in the DB, so take that into account.
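
If you want to pull the same kind of listing from your own install, a pg_catalog query along these lines does it (simplified - the listing above has extra columns from additional joins, and the DSN is a placeholder):

pre:
# Sketch of reproducing a size listing like the one above from pg_catalog.
import psycopg2

conn = psycopg2.connect("dbname=pynab user=pynab")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT n.nspname, c.relname,
               pg_size_pretty(pg_relation_size(c.oid)) AS size,
               c.relkind, c.reltuples::bigint, c.relpages
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relpages > 0
        ORDER BY c.relpages DESC
        LIMIT 20
    """)
    for row in cur.fetchall():
        print(row)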

My network usage while backfilling, updating and post-processing looks like this:

pre:
Device em1 [10.1.1.200] (1/2):
===========================================================================================================================================================================
Incoming:
|                                ##|###          #                  |##|    #       ###   .##|   #           #|
#                                ######         ##                 |####|  .#.      ###   ####  .#. ##       ##|
#                               #######         ##                 ######. ###     .###  .####  ###.##.     .###.
#                               #######.       |###                #######.###     ####. ###### #######     #####
##         .                   .########       ####               ############.   .##### ##############     #####
##       ||#                   #########      #####               ##############|#######|##############    ######   Curr: 3.22 MBit/s
##    |.|###.                 .#########     ######  ..          .###################################### .#######   Avg: 2.35 MBit/s
## #.|#######                 ##########    |#############       #################################################  Min: 552.00 Bit/s
#############                .########### .|##############|    .##################################################  Max: 11.08 MBit/s
#############.              .##############################...####################################################  Ttl: 196.72 MByte

The top of those peaks is 10 Mbit/s. However, that's while backfilling a lot of groups at high concurrency (8 threads for updating groups, 8 threads for backfilling groups, 4 for post-processing), so if you're only doing a few groups you can expect something more like 2 Mbit/s while backfilling, and if you're not backfilling at all it's going to be more like 5-10 MB/hour depending on the groups.

Laserface
Dec 24, 2004

What are the go-to indexers these days?

I use NZBs.org, but I'm setting up usenet for a friend who obviously doesn't have the ability to get in on The Good poo poo.

SymmetryrtemmyS
Jul 13, 2013

I got super tired of seeing your avatar throwing those fuckin' glasses around in the astrology thread so I fixed it to a .jpg

Laserface posted:

What are the go-to indexers these days?

I use NZBs.org, but I'm setting up usenet for a friend who obviously doesn't have the ability to get in on The Good poo poo.

My enabled indexers in Sonarr are Nzb.su, Sickbeard, OZnzb, NZBplanet, and NZBgeek. I also have a DogNZB account, but their API is having problems right now.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

SymmetryrtemmyS posted:

My enabled indexers in Sonarr are Nzb.su, Sickbeard, OZnzb, NZBplanet, and NZBgeek. I also have a DogNZB account, but their API is having problems right now.

OZnzb's site posted:

We are the best NZB Indexer.

Sold!

Megaman
May 8, 2004
I didn't read the thread BUT...
I want to start using an RSS TV-gathering program; I hear Sonarr is the best, but doesn't NZBget have RSS capability of its own? What's the difference? Is it just worth sticking with NZBget since I already use it? Or is Sonarr actually going to give me a huge benefit that I don't see?

Decairn
Dec 1, 2007

Megaman posted:

I want to start using an RSS TV-gathering program; I hear Sonarr is the best, but doesn't NZBget have RSS capability of its own? What's the difference? Is it just worth sticking with NZBget since I already use it? Or is Sonarr actually going to give me a huge benefit that I don't see?

NZBGet is primarily devoted to downloading: it'll fetch whatever NZB it is given. I often use it for one-offs rather than series, using an indexer's 'add to cart' feature to manually search Usenet and flag things for download (with NZBGet checking every 15 minutes for new items).
Sonarr monitors named series only, and looks at multiple indexer sites before deciding where to pull from. It's the better automated option, with a bunch of filters and settings that can assist with quality selection and the like.
