|
Skarsnik posted:I recently switched to Sonarr but couldn't get the sickbeard indexer working, how do you do it exactly? Sonarr -> Settings -> Indexers -> Plus -> Usenet Newznab -> Custom. Name: Lolo Sickbeard; enable Search; enable RSS; URL http://lolo.sickbeard.com; Categories 5030,5040; all else blank.
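For the curious, a Newznab-compatible indexer like that is just an HTTP API, so you can check what Sonarr will actually send with those settings. A minimal sketch of building a Newznab "tvsearch" query; the parameter names follow the public Newznab API convention, and whether this particular indexer wants an apikey is an assumption (it's left optional here):

```python
from urllib.parse import urlencode

def tvsearch_url(base, query, categories, apikey=None):
    """Build a Newznab tvsearch URL like Sonarr would issue for the
    indexer configured above. apikey handling is an assumption."""
    params = {
        "t": "tvsearch",  # Newznab TV search mode
        "q": query,       # free-text series name
        "cat": ",".join(str(c) for c in categories),
        "o": "json",      # ask for JSON instead of the default RSS/XML
    }
    if apikey:
        params["apikey"] = apikey
    return base.rstrip("/") + "/api?" + urlencode(params)

url = tvsearch_url("http://lolo.sickbeard.com", "Game of Thrones", [5030, 5040])
print(url)
```

Categories 5030/5040 are the standard Newznab codes for TV SD and TV HD, which is why those two cover normal usage.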
|
# ? Apr 7, 2015 07:03 |
|
|
Yep, exactly what I was doing yesterday with no luck. Working fine now; ah well, thanks!
|
# ? Apr 7, 2015 07:12 |
|
Decairn posted:Sonarr -> Series -> Add Series -> Import existing series from disk -> enter new HDD path Ok so I actually went to Series > Series Settings and found a way to toggle all of them, then down at the bottom I could set the new path for my shows. No problem, it seems to have taken. Now when I go to search for anything (manual or automatic) I just get the little animated searching balls forever... weird... :\
|
# ? Apr 7, 2015 07:20 |
|
Vykk.Draygo posted:I've just about had it with DogNZB's dogshit worthless broken API. Are there any decent alternatives that aren't closed off, or am I pretty much stuck rolling my own newznab indexer? If so, how much work is involved in that if I really only care about HD shows, HD movies, and music albums? I was seeing a decent number of timeouts to dog in my sickbeard logs, so I checked on their forums; apparently they've got a new set of servers and load balancers for their API services and they're getting them installed this week.
|
# ? Apr 7, 2015 16:16 |
|
Vykk.Draygo posted:I'm using that in Sonarr just fine. My problem is for when I want to search for something through NZB 360 on my phone. Sure I can still use the DogNZB website through the mobile browser, but I'd prefer it to just work. Usenet Crawler maybe?
|
# ? Apr 7, 2015 16:52 |
|
Gwaihir posted:I was seeing a decent number of timeouts to dog in my sickbeard logs so I checked on their forums, apparently they've got a new set of servers and load balancers for their API services and they're getting them installed this week. I saw that too. They've been "upgrading" for months, but maybe they really will get it done soon. Thermopyle posted:Usenet Crawler maybe? Thanks, this seems to work fine. Vykk.Draygo fucked around with this message at 17:54 on Apr 7, 2015 |
# ? Apr 7, 2015 17:45 |
|
Is there any way to set up Sick Beard to download multiple versions of the same episode? I'm getting tons of fakes for a certain episode of a certain show that's quite popular, and I don't want to miss the ACTUAL episode before it gets DMCA'd.
|
# ? Apr 12, 2015 18:14 |
|
sure just switch to Sonarr
|
# ? Apr 12, 2015 19:09 |
|
And have it download it over and over, failing each time. 8 times here till I noticed...
|
# ? Apr 12, 2015 19:14 |
|
Skarsnik posted:And have it download it over and over, failing each time I think mine only grabbed it twice; the first time failed import and was blacklisted automatically. The second time succeeded, and I had to blacklist manually.
|
# ? Apr 12, 2015 19:23 |
|
Despite having the -KILLERS release blacklisted, it's tried to grab it a few times, dunno why. Sonarr really needs a 'don't try and download till after broadcast' setting.
|
# ? Apr 12, 2015 19:28 |
|
You could try setting minimum age in the indexers tab to 30min or whatever but I suppose that might turn into a DMCA issue.
|
# ? Apr 12, 2015 20:31 |
|
I changed from sickbeard to sonarr 2 weeks ago and it somehow just worked. Woke up this morning, turned on my laptop, sonarr started, grabbed what I think you are talking about (all available episodes), somehow ignored the fake and encrypted ones and everything looks to be alright.
|
# ? Apr 12, 2015 20:35 |
|
elwood posted:I changed from sickbeard to sonarr 2 weeks ago and it somehow just worked. Woke up this morning, turned on my laptop, sonarr started, grabbed what I think you are talking about (all available episodes), somehow ignored the fake and encrypted ones and everything looks to be alright. Sonarr is really well done. I've been using it since early 1.0, and it was better than Sickbeard then - but a couple of years later and it's one of my favorite programs on my computer. I wish they would expand its purview to compete with CouchPotato.
|
# ? Apr 12, 2015 21:34 |
|
My Sonarr somehow managed to download and import what I assume are fakes from like 4 weeks out. This is gonna be real fun if this keeps up all season.
|
# ? Apr 12, 2015 22:52 |
|
You grandpas watching shows in SD need to get with the times. edit: I switched to Sonarr last week and everything seems to be working great, except one show, a daily show, is extracting the files into my completed shows folder, then moving just the episode over to where it should go, leaving a folder with the scene name filled with an nfo file and a sample. Anybody know why just this one show would be doing this? Vykk.Draygo fucked around with this message at 23:06 on Apr 12, 2015 |
# ? Apr 12, 2015 23:03 |
|
Is there any way I can get Sonarr to just rename the video files and dump them all into 1 folder without making a subfolder for every series? I just move everything off of that harddrive as soon as it finishes and don't need any sorting after naming the files.
|
# ? Apr 12, 2015 23:06 |
|
Kid posted:Is there any way I can get Sonarr to just rename the video files and dump them all into 1 folder without making a subfolder for every series? I just move everything off of that harddrive as soon as it finishes and don't need any sorting after naming the files. I'd suggest turning off Sonarr processing and instead just use a script with your download client. There are good, configurable organizing scripts for both NZBget and SabNZBD+.
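The suggestion above, sketched out: a post-processing step run by the download client that renames/moves finished video files straight into one flat folder. This is a hypothetical standalone version, not one of the organizing scripts that ship with NZBGet or SABnzbd, and the folder paths would be placeholders for your own:

```python
import os
import shutil

VIDEO_EXTS = {".mkv", ".mp4", ".avi"}

def flatten(src_dir, dest_dir):
    """Move every video file found anywhere under src_dir directly into
    dest_dir, leaving behind the per-release subfolders full of nfo
    files and samples."""
    moved = []
    os.makedirs(dest_dir, exist_ok=True)
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            # skip non-video files and scene "sample" clips
            if ext not in VIDEO_EXTS or "sample" in name.lower():
                continue
            shutil.move(os.path.join(root, name),
                        os.path.join(dest_dir, name))
            moved.append(name)
    return moved
```

Both NZBGet and SABnzbd can call a script like this when a download finishes, so nothing ever needs a per-series subfolder.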
|
# ? Apr 12, 2015 23:10 |
|
SymmetryrtemmyS posted:Sonarr is really well done. I've been using it since early 1.0, and it was better than Sickbeard then - but a couple of years later and it's one of my favorite programs on my computer. I wish they would expand its purview to compete with CouchPotato. One of their eventual goals is movie support, but they want to be stable and very functional with TV first, because it is apparently a big undertaking and they will have to concentrate less on TV while working on movies.
|
# ? Apr 13, 2015 01:28 |
|
I don't remember if I ever talked about this on here, so here you go (also sorry Thermopyle, I only just saw your post from April 2013 today, I do not read the thread regularly ):quote:It's been a long-rear end time since this project was first on here and it's changed fairly significantly since then, so I figured I'd re-mention it for anyone that's after a simple indexer. Double tl;dr: a really fast usenet indexer. It's pretty stable now, being in use by quite a few people for a couple of years, and most of the bugs are hammered out. If you're looking for something to index weird groups not covered by other indexers or want to just run your own indexer from home, this works. In the future I'll package it into a tiny portable thing that can just provide people with their own poo poo, without having to pay for indexer access or send traffic externally (for people that live in Australia, for instance). Totally looking for people to build more pre-db integration into it, as well. We do a couple of groups, but there are some big ones missing that I don't have any kind of pre access to.
|
# ? Apr 13, 2015 04:41 |
|
Murodese posted:I don't remember if I ever talked about this on here, so here you go (also sorry Thermopyle, I only just saw your post from April 2013 today, I do not read the thread regularly ): Request: add a database abstraction layer so I can point it at oracle instead of postgresql
|
# ? Apr 13, 2015 05:15 |
|
Mr Chips posted:Request: add a database abstraction layer so I can point it at oracle instead of postgresql There is one, but parts are heavily optimized and I have to deal with certain bits in a DBMS specific way. Mysql support exists but isn't fast, for instance. e; actually, you can try it just by changing the db engine in your config to 'cx_oracle' and pip3 install cx_oracle. It'll be slow as poo poo saving segments to the db, but that's about the same speed as newznab works Murodese fucked around with this message at 05:52 on Apr 13, 2015 |
# ? Apr 13, 2015 05:18 |
|
Hah, I forgot all about that. I ripped some ideas from it for my own distributed indexer running over multiple OS threads, green threads, processes, or machines. I should clean that up and put it on github...
|
# ? Apr 13, 2015 07:15 |
|
Yeah, chuck it to me when it's done, I'd love to see it. I was originally going to write this to use distributed collection/processing, but you can already do that to some extent by using the DB as a centralised event handler and just split the various responsibilities over different machines.
|
# ? Apr 13, 2015 07:18 |
|
What is the best torrent client to use with Sonarr on Linux, with all-CLI or remote access? I have sonarr+sabnzbd running on my HTPC, which isn't totally headless, but I mostly keep XBMC on the main display for actually watching TV and then drive everything else over a web browser or ssh window from my other computer. There are a few things I'd like to fill in from private torrent sites that usenet and newznab seem to just be missing, but I've not used torrents in years, since before utorrent turned to adware poo poo.
|
# ? Apr 13, 2015 13:09 |
|
Not really on the topic at hand, but has to do with usenet, just not liberating files. I'm going through some old newsgroup discussion and I was wondering what it meant if a line was preceded by a # or | or some combination of the two? Real neckbeard stuff, thanks.
|
# ? Apr 13, 2015 14:44 |
|
bollig posted:Not really on the topic at hand, but has to do with usenet, just not liberating files. Without the actual post to get some context from, I'd guess an unusual way to indicate quoted content when doing an inline reply. Typically people use > but who knows. Otherwise # precedes a single-line comment in a lot of programming languages, so possibly something related to that.
|
# ? Apr 13, 2015 14:50 |
|
wolrah posted:Without the actual post to get some context from, I'd guess an unusual way to indicate quoted content when doing an inline reply. Typically people use > but who knows. Otherwise # precedes a single-line comment in a lot of programming languages, so possibly something related to that. So I'm looking at a passage that has something like this:

In article, so and so writes
>blah blah
#looks like a response to above, which refers to
>>this stuff
>now this looks like a response to the '>>' stuff above
#this looks like it's a response to above

And then there's some text without #'s or >'s. edit: most of those are multiple lines, not just single lines. bollig fucked around with this message at 15:03 on Apr 13, 2015 |
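For what it's worth, quote nesting in old newsgroup posts like that can be worked out mechanically by counting the prefix characters. A throwaway sketch; treating # as a quote marker alongside the conventional > is an assumption about that particular poster's style:

```python
def quote_depth(line):
    """Count leading > / # markers to estimate how deeply nested a
    line of an old newsgroup post is (0 = the poster's own new text)."""
    depth = 0
    for ch in line:
        if ch in ">#":
            depth += 1
        elif ch == " ":  # some posters put spaces between markers
            continue
        else:
            break
    return depth

lines = [">blah blah", ">>this stuff", "#a reply", "plain text"]
print([quote_depth(l) for l in lines])  # → [1, 2, 1, 0]
```

On that reading, a # line at depth 1 is replying to the > material above it, and >> material is two replies back, which matches how the passage reads.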
# ? Apr 13, 2015 14:57 |
|
bollig posted:So I'm looking at a passage that has something like this: I guess: is there any reason you would put a # in front of something you were typing yourself in that instance?
|
# ? Apr 13, 2015 15:01 |
|
I wish there was a way in Sickbeard to set an unaired episode to be ignored. There's just no checkbox until the time comes, but I'd like to ignore things like "best of" episodes in advance. Currently I pause it before those weeks if I remember, but usually I don't. Does Sonarr do that kind of thing?
|
# ? Apr 14, 2015 08:05 |
|
Yes - with Sonarr you can toggle ignore or download on individual episodes, seasons, or even entire shows if you want.
|
# ? Apr 14, 2015 13:21 |
|
Thanks - sickbeard can do all that as long as they've aired; it's the unaired ones it won't. I'll have a look.
|
# ? Apr 14, 2015 14:08 |
|
So what are the storage and bandwidth implications if someone wanted to run their own indexer for their own personal use?
|
# ? Apr 14, 2015 19:56 |
|
xgalaxy posted:So what are the storage and bandwidth implications if someone wanted to run their own indexer for their own personal use? Bandwidth might be 100 MB a day if you index the top few groups (which should be good enough if you don't care about some esoteric stuff). I've been running an indexer for personal use for 1-2 years and index like 5 groups and my database plus nzb storage is around maybe 75GB.
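Those two figures hang together arithmetically. A quick sanity check; the 100 MB/day and ~75 GB numbers are just the estimates quoted above, and treating storage growth as roughly tracking header download volume over the stated 1-2 years is an assumption:

```python
# ~100 MB/day of headers over roughly two years of personal indexing
mb_per_day = 100
days = 2 * 365

total_gb = mb_per_day * days / 1000  # decimal GB, close enough here
print(total_gb)  # → 73.0
```

73 GB of cumulative downloads against the quoted ~75 GB of database-plus-nzb storage is the same ballpark, which tracks, since most of what a personal indexer stores is derived from those headers.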
|
# ? Apr 14, 2015 21:13 |
|
xgalaxy posted:So what are the storage and bandwidth implications if someone wanted to run their own indexer for their own personal use? I index something like 25 popular groups (barring some really spammy ones), and have backfilled to around ~700-900 days on stuff. End result is that I have around 520,000 releases. My DB looks like this (I've noted the tables that fluctuate really wildly as you're indexing, because the data is collected and then aggregated, compressed and deleted):

pre:
 nspname  | relname                 |  size   |    refrelname    | relidxrefrelname | relfilenode | relkind | reltuples | relpages
----------+-------------------------+---------+------------------+------------------+-------------+---------+-----------+----------
 pg_toast | pg_toast_4704468        | 12 GB   | nzbs             |                  |     4704472 | t       |   4866792 |  1579537
 public   | segments                | 5790 MB | pg_toast_4704738 |                  |     5192408 | r       |   4761148 |   741172  <-- fluctuates wildly
 public   | ix_segments_segment     | 2012 MB |                  |                  |     5192415 | i       |   4761148 |   257486  <-- same with this one
 public   | ix_segments_part_id     | 1599 MB |                  |                  |     5192416 | i       |   4761148 |   204694  <-- and this one
 public   | segments_pkey           | 1288 MB |                  |                  |     5192414 | i       |   4761148 |   164855  <-- ..and this one
 public   | nzbs                    | 287 MB  | pg_toast_4704468 |                  |     4704468 | r       |    494241 |    36707
 public   | pres                    | 234 MB  | pg_toast_4704419 |                  |     4704419 | r       |   1026911 |    29956
 public   | releases                | 168 MB  | pg_toast_4704633 |                  |     5581407 | r       |    519001 |    21510
 pg_toast | pg_toast_4704468_index  | 158 MB  | pg_toast_4704468 | nzbs             |     4704474 | i       |   5191460 |    20280
 public   | ix_pres_name            | 116 MB  |                  |                  |     4704430 | i       |   1026911 |    14824
 public   | nfos                    | 105 MB  | pg_toast_4704457 |                  |     5581477 | r       |    138937 |    13496
 pg_toast | pg_toast_4704457        | 85 MB   | nfos             |                  |     5581480 | t       |     57122 |    10881
 public   | ix_pres_requestgroup    | 59 MB   |                  |                  |     4854331 | i       |   1026911 |     7580
 public   | pres_uniq               | 58 MB   |                  |                  |     5223660 | i       |   1026911 |     7396
 public   | releases_uniq           | 56 MB   |                  |                  |     5581431 | i       |    519001 |     7158
 public   | ix_parts_group_name     | 53 MB   |                  |                  |     5192401 | i       |     13987 |     6777
 public   | ix_releases_search_name | 52 MB   |                  |                  |     5581428 | i       |    519001 |     6597
 public   | ix_parts_binary_id      | 36 MB   |                  |                  |     5192403 | i       |     13987 |     4599
 public   | ix_parts_posted         | 36 MB   |                  |                  |     5192404 | i       |     13987 |     4587
 public   | ix_pres_requestid       | 34 MB   |                  |                  |     4854332 | i       |   1026911 |     4321
 public   | ix_parts_total_segments | 34 MB   |                  |                  |     5192402 | i       |     13987 |     4294
 public   | parts_pkey              | 33 MB   |                  |                  |     5192400 | i       |     13987 |     4287

My network usage while backfilling, updating and post-processing looks like this:

pre:
Device em1 [10.1.1.200] (1/2), Incoming [bandwidth graph not reproduced]
Curr: 3.22 MBit/s
Avg:  2.35 MBit/s
Min:  552.00 Bit/s
Max:  11.08 MBit/s
Ttl:  196.72 MByte
|
# ? Apr 15, 2015 04:21 |
|
What are the go-to indexers these days? I use NZBs.org but I'm setting up usenet for a friend who obviously doesn't have the ability to get in on The Good poo poo.
|
# ? Apr 23, 2015 08:13 |
|
Laserface posted:What are the go-to indexers these days? My enabled indexers in Sonarr are Nzb.su, Sickbeard, OZnzb, NZBplanet, and NZBgeek. I also have a DogNZB account, but their API is having problems right now.
|
# ? Apr 23, 2015 12:07 |
|
SymmetryrtemmyS posted:My enabled indexers in Sonarr are Nzb.su, Sickbeard, OZnzb, NZBplanet, and NZBgeek. I also have a DogNZB account, but their API is having problems right now. OZnzb's site posted:We are the best NZB Indexer. Sold!
|
# ? Apr 24, 2015 00:18 |
|
I want to start using an RSS TV gathering program, I hear Sonarr is the best, but doesn't NZBget have an RSS capability? What's the difference? Is it just worth sticking with NZBget since I already use it? Or is Sonarr actually going to give me a huge benefit that I don't see?
|
# ? Apr 27, 2015 18:27 |
|
|
Megaman posted:I want to start using an RSS TV gathering program, I hear Sonarr is the best, but doesn't NZBget have an RSS capability? What's the difference? Is it just worth sticking with NZBget since I already use it? Or is Sonarr actually going to give me a huge benefit that I don't see? NZBGet is primarily devoted to downloading. It'll download whatever NZB source it is given. I often use this for one-offs rather than series, and use the indexers' 'add to cart' feature to manually search Usenet and flag for download (with NZBGet checking every 15 minutes for new things). Sonarr monitors named series only, and looks at multiple index sites before deciding where to pull from. It's the better automated option, with a bunch of filters and settings that can assist with quality selection and the like.
|
# ? Apr 27, 2015 18:38 |