FAT32 SHAMER
Aug 16, 2012



I heard about Shinobi yesterday while looking for Blue Iris alternatives that can be dockerised. It looks like you can either run the native software or run it in Docker, but they strongly suggest running the native app on Ubuntu 20.04 if you're on an Nvidia GPU

https://shinobi.video/

The install video was really slick, too

https://youtu.be/Vk_5hlSQeV0?feature=shared

I feel like I have security concerns about running it native rather than dockerised, but maybe that's more of a knowledge gap on my part? Like I'd imagine running it native will be a lot more effort on the network security front than just messing around with docker compose search terms for an hour or two


c355n4
Jan 3, 2007

I keep looking at moving from Nextcloud to Immich. I only use Nextcloud for storing photos from two phones, and it is total overkill. I gather Immich recently joined FUTO (https://immich.app/blog/2024/immich-core-team-goes-fulltime/). No idea if this is a good sign or not. Has anyone made the jump, and how was it?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

c355n4 posted:

I keep looking at moving from Nextcloud to Immich. I only use Nextcloud for storing photos from two phones, and it is total overkill. I gather Immich recently joined FUTO (https://immich.app/blog/2024/immich-core-team-goes-fulltime/). No idea if this is a good sign or not. Has anyone made the jump, and how was it?

Run Immich alongside Nextcloud and try it out. As Immich states in their readme:

quote:

⚠️ The project is under very active development.
⚠️ Expect bugs and breaking changes.
⚠️ Do not use the app as the only way to store your photos and videos.

I have my doubts about the FUTO thing but I am cautiously optimistic...

Warbird
May 23, 2012

America's Favorite Dumbass

I decided to look around to see if there were any decent deals going on for hardware to upgrade my setup, and god almighty that was a mistake. I'm going to end up going Intel again, if for no other reason than that I kinda sorta understand their naming schema vs whatever AMD is up to. It's super bothersome that most home-server-focused content creators are either "I did unspeakable sins to this Raspberry Pi" or "I spent 5 figures and now have a pebibyte of storage," with little attention paid to anything in between.

I'll likely just upgrade my desktop machine and use those guts in the new server as per tradition.

Aware
Nov 18, 2003
I mean there's not really any wrong answer, though Intel continues to be a nice choice for using the iGPU for light transcoding work in Plex. I plan to migrate my unRAID box from its current 8700K to my 5900X in a couple of years, so I may pick up an Arc card then for a lovely AMD CPU / Intel GPU setup.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Anyone here running Whoogle? Is that worthwhile?

Motronic
Nov 6, 2009

Combat Pretzel posted:

Anyone here running Whoogle? Is that worthwhile?

Thank you. This led me to SearX, which led me to SearXNG, and filled my morning of sitting in meetings I don't care to be in and have no need to participate in. My search results have recently gone to total garbage, with a bunch of ads at the top on whatever Google privacy-filtering engine I was using. The results coming from this are quite good and split into more meaningful categories (files, social media), all with selectable search sources per type.

Now I just need to figure out how to add it as a default search engine for Brave (pretty sure they don't let you do this because money) or replace Brave with something else.
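
For anyone wanting to stand up their own instance, it's basically one command. Rough sketch only; the port and config path here are just the defaults from the SearXNG container docs as far as I know:
code:
docker run -d --name searxng \
  -p 8080:8080 \
  -v "$PWD/searxng:/etc/searxng" \
  -e BASE_URL=http://localhost:8080/ \
  searxng/searxng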

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Motronic posted:

Now I just need to figure out how to add it as a default search engine for Brave (pretty sure they don't let you do this because money) or replace Brave with something else.

Settings -> Manage search engines and site search -> Site Search -> Add, then fill out the new search engine and use this for the URL:
code:
https://urlforyoursearxnginstance/search?q=%s
Then three dots next to it and "make default"

You're good to go.

Motronic
Nov 6, 2009

Oh crap, I did that and missed that I could make it a default from there. I was expecting to have it added to the dropdown box. Thank you!

Korean Boomhauer
Sep 4, 2008
I’ve been meaning to set up SearXNG for a while. The results on my friend’s instance are so much better than Google’s.

Quixzlizx
Jan 7, 2007
Around a month ago, I succumbed to the culmination of a series of mistakes that started with me originally installing Kubuntu 23.04 instead of 22.04 LTS. I realized that I hadn't been seeing any updates recently, which is when I checked and realized I was out of support. I didn't want to leave my server that way, so I upgraded to 23.10, which blew everything up to the point where I couldn't even get the OS to recognize either network adapter. I did uninstall kubuntu-desktop at one point because I no longer needed KDE on there, so I don't know if that caused issues when upgrading. So I figured I'd just keep the server offline until 24.04 LTS was out and start over from there, since the important stuff is backed up.

1. Are there any showstopping issues with 24.04, or is it relatively safe to install it at this point? I'd install the server edition from the beginning this time.

2. My original build had everything either installed from packages or snaps, and I'm thinking I want to try and use Docker this time, particularly to replace the packages that require third-party repositories. I really only have background knowledge about how it works, and I don't really feel comfortable just following some guide and doing things I don't really understand. Are there any good resources out there that are somewhere between "Read the documentation and write all of your compose files from scratch" and "Copy/paste these x commands and load these y containers?" Are there recommended places to look for containers for popular stuff like the arrs that aren't busted?

2b. If I just want a nice web GUI to manage my various Docker containers, is Dockge my best bet? Is there an easy in-Docker way to back up containers, or is that more of a manual cron job?

3. Similar to 2., I want to set up Tailscale for services that only I would ever access externally, but reading the documentation doesn't exactly fill me with confidence that I understand everything. Any good guides to learning the ins and outs? Also, if I have my own domain, I'm assuming there isn't really a way to have a subdomain for the service while only being accessible through Tailscale? Or would I use my reverse proxy to route that subdomain to a closed port, so that all requests bounce off except the ones that are using Tailscale? Or do those two not really interact like that?

4. Anyone have suggestions for an epub database I can set up and read books from remotely on a phone (this would be through Tailscale)? I would preferably also be able to use desktop software like SumatraPDF to read files from my library when I'm at home, but if I had to, I'd manually open files with Sumatra and then manually update my read titles in the library afterward.

Thanks for any advice.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Quixzlizx posted:

2b. If I just want a nice web GUI to manage my various Docker containers, is Dockge my best bet? Is there an easy in-Docker way to back up containers, or is that more of a manual cron job?

You should only be worrying about backing up the mounted directories for a container. If there are config files or database files you need to persist across container restarts, you should create persistent volumes. You should not need to back up the whole container itself, since containers are intended to be ephemeral.

There's basically two ways to handle the data you want to retain from the containers. Either let podman/docker manage the volumes or mount a local (or network) directory.

If you use docker/podman managed volumes, you can back them all up by calling this script I wrote:

https://github.com/Nitrousoxide/appstore/blob/main/scripts/podman-volume-backup.sh

Obviously replace the "podman" references with "docker" if that's your container engine of choice. I believe all the actions called by this script work the same for both engines.

If you don't have your container engine manage your volumes and you instead mount a local path into the containers (I do this), then you can just back up those directories like you would any other directory on your system. Or you can mount the directories into a container like Duplicati and get a web interface and a lot of flexibility in how you do the backups (I do this too).
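
The general pattern is just a loop. Simplified sketch (/backups is whatever destination path you use; swap docker for podman if needed):
code:
#!/bin/sh
# tar up every engine-managed volume into /backups, one dated archive each
for vol in $(docker volume ls -q); do
  docker run --rm \
    -v "$vol":/source:ro \
    -v /backups:/backup \
    alpine tar czf "/backup/${vol}-$(date +%F).tar.gz" -C /source .
done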

Nitrousoxide fucked around with this message at 00:12 on May 22, 2024

Resdfru
Jun 4, 2004

I'm a freak on a leash.
2. Creating compose files isn't very hard. Find an example online and check the Docker documentation on Compose to see what fields are available and what they do. Read up on Docker volumes and networks. I think the official documentation is the best for a quick overview. You probably won't have to create your own images for a while, if ever. There's a sketch of the shape of a compose file after this list.

2b. Dunno about GUIs, but for backups I create directories on my local file system and mount those as volumes in the containers. So my backups are just backing up /containers. Dunno if that's the best/correct way to do it, but it works for me.

3. Not sure about Tailscale guides; Tailscale is just a VPN. I have a domain and I set a wildcard record that points to my server's local IP. My reverse proxy is Traefik: labels get added to all my containers, so when a container comes up, Traefik knows that a request for thing.domain should go to container x (example labels after this list). And so my stuff is only accessible locally or via Tailscale.

4. Not sure, but I'd also like an answer to this that isn't Calibre. Audiobookshelf has an ebook reader built in and an app. It seems pretty good, but the audiobook stuff is their priority.
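
For 2, the shape of a compose file for one of the arrs is about this much. Sketch only; the paths are examples, and the lscr.io image follows the usual linuxserver PUID/PGID convention:
code:
cat > compose.yaml <<'EOF'
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=1000       # uid/gid the app runs as inside the container
      - PGID=1000
    volumes:
      - ./sonarr-config:/config   # app config lives on the host
      - /mnt/media:/media
    ports:
      - "8989:8989"
    restart: unless-stopped
EOF
docker compose up -d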
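And for 3, the Traefik labels look roughly like this on a container (router name and domain are made up):
code:
docker run -d --name whoami \
  -l 'traefik.enable=true' \
  -l 'traefik.http.routers.whoami.rule=Host(`thing.mydomain.com`)' \
  traefik/whoami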

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Something I would highly recommend is installing Proxmox as your base OS for your server and then creating VMs for whatever distro you want to run your services or container engine on. Letting Proxmox sit between you and your hardware, and getting access to its backup features, makes migrating a server to new hardware or backing up a VM before a major upgrade so much easier. And you can perform daily/weekly/whatever backups of your VMs with a fraction of a second of downtime, thanks to the ZFS filesystem.

Believe me, being able to add a new server to the cluster, stop your current VM, move it to the new node, and start it back up all good to go is just so loving amazing.
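
If you ever want to drive it from the shell instead of the web UI, a one-off backup and restore look something like this (100/101 are example VMIDs, and the storage names depend on your setup):
code:
vzdump 100 --mode snapshot --storage local --compress zstd
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 101 --storage local-zfs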

FAT32 SHAMER
Aug 16, 2012



Edit: I already asked this question somewhere lol

FAT32 SHAMER fucked around with this message at 00:34 on May 22, 2024

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I run a hypervisor, which runs a distro, which runs the containers. The backup/migration features of Proxmox (and other hypervisors) are such a value add that the extra layer of complexity is so worth it.

Plus it gives you the flexibility to spin up a new vm to gently caress around in any time you want if you want to try the newest thing.

Oysters Autobio
Mar 13, 2017
New to this, but when you say to run Proxmox and containers, you mean that you're virtualizing your distro, say Ubuntu, and then in that Ubuntu VM you install and run docker/portainer/whatever engine of choice?

Always confused when Proxmox and LXC containers get added to the mix, but if I understand it correctly, the "layering" goes proxmox -> distro (LXC) -> docker/whatever, correct? Like, LXCs are just docker-like containers for Linux distros so they can be virtualised on a hypervisor like Proxmox, and for your applications or services you would use Docker or a similar container? Or are there actual applications/services that can be run as their own LXC containers rather than Docker?

Docker itself is still very new to me, so while I want to set up Proxmox, I'd prefer tackling the LXC-related stuff just for the distro and then using Docker for the apps/services.

Hughlander
May 11, 2005

Oysters Autobio posted:

New to this, but when you say to run Proxmox and containers, you mean that you're virtualizing your distro, say Ubuntu, and then in that Ubuntu VM you install and run docker/portainer/whatever engine of choice?

Always confused when Proxmox and LXC containers get added to the mix, but if I understand it correctly, the "layering" goes proxmox -> distro (LXC) -> docker/whatever, correct? Like, LXCs are just docker-like containers for Linux distros so they can be virtualised on a hypervisor like Proxmox, and for your applications or services you would use Docker or a similar container? Or are there actual applications/services that can be run as their own LXC containers rather than Docker?

Docker itself is still very new to me, so while I want to set up Proxmox, I'd prefer tackling the LXC-related stuff just for the distro and then using Docker for the apps/services.

Many people (myself included) run docker on the hypervisor. It’s not recommended, but for home use I don’t want to pay the memory commitment of a VM.

Mr Shiny Pants
Nov 12, 2012

Hughlander posted:

Many people (myself included) run docker on the hypervisor. It’s not recommended, but for home use I don’t want to pay the memory commitment of a VM.

Same. It makes way more sense. And having your compose files in Git makes redeploying them pretty easy.
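
Redeploying is then just a clone and an up (repo URL made up):
code:
git clone https://git.example.com/me/compose-files.git
cd compose-files/nextcloud && docker compose up -d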

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.
Another thing to take into consideration, if you have time: while setting up the server, set up an Ansible playbook (or equivalent) as well, so you can easily get the server back to how you want it if you ever need to set it up from scratch again.

I really should do it myself, but then I went too deep attempting to GitOps my server (with Packer, Terraform, and Ansible) and lost interest/didn't have enough time
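
For anyone curious, even a tiny playbook covers a lot. Sketch only; the module names are real but the host, package, and paths are just examples:
code:
cat > server.yml <<'EOF'
- hosts: homeserver
  become: true
  tasks:
    - name: install docker
      ansible.builtin.apt:
        name: docker.io
        state: present
    - name: push compose files
      ansible.builtin.copy:
        src: compose/
        dest: /opt/stacks/
EOF
ansible-playbook -i 'homeserver,' server.yml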

Mr Shiny Pants
Nov 12, 2012

hogofwar posted:

Another thing to take into consideration, if you have time: while setting up the server, set up an Ansible playbook (or equivalent) as well, so you can easily get the server back to how you want it if you ever need to set it up from scratch again.

I really should do it myself, but then I went too deep attempting to GitOps my server (with Packer, Terraform, and Ansible) and lost interest/didn't have enough time

This was a step too far for me, time and effort vs just installing it manually. :)

Someday.....

Animal
Apr 8, 2003

Hey guys, is this the thread for someone who went down the unRAID rabbit hole?

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Animal posted:

Hey guys, is this the thread for someone who went down the unRAID rabbit hole?

This or the NAS thread, probably. Depends what you're using unraid for.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Or the Homelab Thread

Embarrassment of riches in this space.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Motronic posted:

Thank you. This led me to SearX, which led me to SearXNG, and filled my morning of sitting in meetings I don't care to be in and have no need to participate in. My search results have recently gone to total garbage, with a bunch of ads at the top on whatever Google privacy-filtering engine I was using. The results coming from this are quite good and split into more meaningful categories (files, social media), all with selectable search sources per type.
Hmm, SearXNG looks nice.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
I'm in the process of learning g K8s for work. Helm, argocd, nginx ingress, etc.

The best way for me to learn this stuff is to have a project of my own to test everything out with.

I like the linuxserver.io containers on UnRAID, is there an equivalent to those for Helm? Does linuxserver.io create helm charts?

I'm doing this all one step at a time so right now it's just learning how helm and K8s works on a basic level (getting there and ohmyzsh autocomplete helps a lot).

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Matt Zerella posted:

I like the linuxserver.io containers on UnRAID, is there an equivalent to those for Helm? Does linuxserver.io create helm charts?

No, they only produce images. The readmes will have examples of how to run the images with Docker, but that’s as far as they go.

If you think of a Dockerfile as a template or a set of steps to install an application inside of a container image, then a Helm chart is just a template for how to create a set of resources in a Kubernetes cluster.

Bitnami is a VMware spinoff that produces a ton of images and charts for popular open-source apps. They have a lot of examples to dig through, but the downside is they’re very template-heavy so it might be harder to understand how they click together at first.
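
If you want to see the template-heaviness firsthand, scaffolding a chart takes a minute. The names here are made up, and the scaffolded defaults assume a web app on port 80, so expect to edit values.yaml before anything actually works:
code:
helm create jellyfin-chart                  # generates Chart.yaml, values.yaml, templates/
helm template demo ./jellyfin-chart | less  # render the manifests without installing anything
helm install demo ./jellyfin-chart \
  --set image.repository=lscr.io/linuxserver/jellyfin \
  --set image.tag=latest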

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Cenodoxus posted:

No, they only produce images. The readmes will have examples of how to run the images with Docker, but that’s as far as they go.

If you think of a Dockerfile as a template or a set of steps to install an application inside of a container image, then a Helm chart is just a template for how to create a set of resources in a Kubernetes cluster.

Bitnami is a VMware spinoff that produces a ton of images and charts for popular open-source apps. They have a lot of examples to dig through, but the downside is they’re very template-heavy so it might be harder to understand how they click together at first.

Thanks! I'll look into Bitnami once I'm a bit more up to speed. I've got a ways to go on a few Udemy courses to figure this out for some basic understanding.

Hughlander
May 11, 2005

Don’t the TrueNAS Helm charts use linuxserver?

Quixzlizx
Jan 7, 2007
I've been reading up and I think I now mostly understand how Docker containers work, but I'm not really sure on best practices for what users to run images as and where to mount the config volumes.

For a couple of more general questions, am I better off installing docker engine + compose plugin directly from Docker's repositories instead of Ubuntu 24.04's? It's just kind of funny because one of the main reasons why I wanted to switch to docker was to reduce my dependence on third-party repositories. And when I'm doing cronjob backups on config directories, I should be using the pause/unpause docker commands before and afterward?

1. On my previous server install, everything was installed from packages except for a couple of snaps. I had a "media" user group that had access to the SMB file share, and individual sabnzbd/sonarr/radarr etc. users for each service, so any config directories tied to a user would be stored in that user's home directory, or else in /etc, /var, or wherever, depending on the individual program's defaults. Which is fine if the programs create them as defaults when installed as packages, but I'd have to create them all manually first (by those users if they are user-specific, or mess with chown afterward) in order to map them as volumes, so that obviously sounds like a stupid waste of time.

Am I better off just running all of the containers as my main user (not root), having a /docker/config/<app> directory structure in /home, and just have everything located there for ease of creation/management/backup? Normally that doesn't sound correct for security purposes, but in this case docker is already restricting what each service can access. If this is still an issue, what is the best practice?

2. On another security note, I was thinking of switching my reverse proxy from caddy to nginx, because it seems like way more people use nginx, so there's way more online documentation/troubleshooting available, and also because caddy's log format isn't compatible with fail2ban, and I had to implement a hacky regex bridge between the two. However, the way my server enabled SSL before was:

a). Cloudflare provided the cert/key
b). These were stored in /etc/ssl/
c). The key file was chowned to root:ssl-cert
d). The caddy user was added to the ssl-cert group

If I end up going with "main user runs all the containers," is it a security issue to add that user to the ssl-cert group like I suspect? Am I better off just running that particular container as root?

3. I guess this isn't really a question, but Plex was one of the services I was running as a snap, and I'm thinking of just keeping it that way, because it's already containerized, it doesn't really write to anything other than its own data files, and I've spent so much time curating my Plex library that I'm definitely in a "if it ain't broke, don't fix it" mindset there. I'm assuming I'll be OK here as long as I sudo the rsync when copying my backups back to the snap folder.

Mr Shiny Pants
Nov 12, 2012
You can set the user the container will run as: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf

So you could just reuse the Caddy user.
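
e.g. this quick check (2000:2000 is arbitrary):
code:
# --user controls the uid/gid the container process runs as
docker run --rm --user 2000:2000 alpine id
# -> uid=2000 gid=2000
Anything that container writes to a bind mount lands on the host owned by uid/gid 2000.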

cruft
Oct 25, 2007

Mr Shiny Pants posted:

You can set the user the container will run as: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf

So you could just reuse the Caddy user.

The Linuxserver people have standardized on uid/gid 911, which is what I use.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So, on the Docker hub, there's portainer-ce tagged version 2.20.3. I've installed it and the web UI still says 2.19.5. So which version am I running?

Quixzlizx
Jan 7, 2007
Either I didn't explain myself correctly, or I'm fundamentally misunderstanding something.

The "caddy" user was on my old install, which is now dead. It doesn't exist, and it won't exist unless I manually create it, because if I use a Docker image of caddy (or whatever proxy software), it won't create a caddy user inside my kernel.

That was the whole point of my question, to ask what best practices are. Do I recreate all those users in order to map them one by one to containers (this doesn't sound right), or can I just pass the same user to all of my containers, considering they are already compartmentalized from each other?

What does it mean when Docker uses a default 911 id that doesn't actually exist in my system? Doesn't that mean I won't be able to interact with any outputted files in my volumes unless I'm root? Won't that cause an issue with my proxy example, since that container needs its user to have permission to read /etc/ssl?

cruft
Oct 25, 2007

Quixzlizx posted:

Either I didn't explain myself correctly, or I'm fundamentally misunderstanding something.

The "caddy" user was on my old install, which is now dead. It doesn't exist, and it won't exist unless I manually create it, because if I use a Docker image of caddy (or whatever proxy software), it won't create a caddy user inside my kernel.

That was the whole point of my question, to ask what best practices are. Do I recreate all those users in order to map them one by one to containers (this doesn't sound right), or can I just pass the same user to all of my containers, considering they are already compartmentalized from each other?

What does it mean when Docker uses a default 911 id that doesn't actually exist in my system? Doesn't that mean I won't be able to interact with any outputted files in my volumes unless I'm root? Won't that cause an issue with my proxy example, since that container needs its user to have permission to read /etc/ssl?

Aha, okay. Thanks for helping us understand where you are here.

Here are some things you need to know to understand what's going on:

  • Every file on your system has two numbers associated with it: one is the user ID (uid) and one is the group ID (gid). It's just two numbers.
  • It's not the kernel that maps user IDs to user names, it's the file /etc/passwd. There's another file, /etc/group, that maps group IDs to group names. You should go read them right now. /etc/passwd is "username:x:uid:gid:other stuff"
  • When you run a container, it provides a whole new root filesystem, so you get all new files for /etc/passwd and /etc/group.
  • When you bind-mount a directory into the container, all the files keep their uid and gid.

So you had a caddy user in one container, but all the files it "owned" just had a certain uid. Those files keep that same uid on the host and in any other container you mount them into, even if the host or the other containers don't have an entry for that uid in /etc/passwd.

So when I said the Linuxserver people had settled on uid 911, that just meant that inside their containers there's a "linuxserver" (or sometimes "abc") user, with uid 911. This means if you bind-mount from the host OS, all the files written inside the container will have uid 911. You can add a user in the host OS with uid 911 if you like your ls -l output to show a username and not 911. Then you can sudo to that user and mess around with files used by your containers, if you want. This is what I do, actually. I use 911 for most of my containers so it's 911 everywhere: 911 on my system means "the container user".

I don't know that there's a "best practice" but I feel like what I just outlined might be a pretty widespread and generally not despised practice...
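
Concretely, giving 911 a name on the host is just this (the username is arbitrary; only the number matters):
code:
sudo groupadd -g 911 containeruser
sudo useradd -u 911 -g 911 -M -s /usr/sbin/nologin containeruser
ls -l /path/to/your/bind/mounts   # now shows "containeruser" instead of 911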

Quixzlizx
Jan 7, 2007

cruft posted:

Aha, okay. Thanks for helping us understand where you are here.

Here are some things you need to know to understand what's going on:

  • Every file on your system has two numbers associated with it: one is the user ID (uid) and one is the group ID (gid). It's just two numbers.
  • It's not the kernel that maps user IDs to user names, it's the file /etc/passwd. There's another file, /etc/group, that maps group IDs to group names. You should go read them right now. /etc/passwd is "username:x:uid:gid:other stuff"
  • When you run a container, it provides a whole new root filesystem, so you get all new files for /etc/passwd and /etc/group.
  • When you bind-mount a directory into the container, all the files keep their uid and gid.

So you had a caddy user in one container, but all the files it "owned" just had a certain uid. Those files keep that same uid on the host and in any other container you mount them into, even if the host or the other containers don't have an entry for that uid in /etc/passwd.

So when I said the Linuxserver people had settled on uid 911, that just meant that inside their containers there's a "linuxserver" (or sometimes "abc") user, with uid 911. This means if you bind-mount from the host OS, all the files written inside the container will have uid 911. You can add a user in the host OS with uid 911 if you like your ls -l output to show a username and not 911. Then you can sudo to that user and mess around with files used by your containers, if you want. This is what I do, actually. I use 911 for most of my containers so it's 911 everywhere: 911 on my system means "the container user".

I don't know that there's a "best practice" but I feel like what I just outlined might be a pretty widespread and generally not despised practice...

Thank you. So, essentially the answer would be to have all of my containers use the same generic user, regardless of whether I create the user first and pass that along in the UID/GID arguments in the compose file, or create a user that matches the default 911 uid. Regarding my reverse proxy, I'm assuming I'd have to add this default user to the ssl-cert group in order for it to be able to access the Cloudflare-provided key in /etc/ssl/private (as well as making sure that the reverse proxy container has that folder mapped to a volume).

The second part of my question was the best practice for mapping all of the various config directories these services will produce to the host OS. If the same user is going to be used in all of the containers, I'd think it would make the most sense to have all the various config directories inside of that user's /home instead of spread all over the place.

cruft
Oct 25, 2007

Quixzlizx posted:

Thank you. So, essentially the answer would be to have all of my containers use the same generic user, regardless of whether I create the user first and pass that along in the UID/GID arguments in the compose file, or create a user that matches the default 911 uid. Regarding my reverse proxy, I'm assuming I'd have to add this default user to the ssl-cert group in order for it to be able to access the Cloudflare-provided key in /etc/ssl/private (as well as making sure that the reverse proxy container has that folder mapped to a volume).

The second part of my question was the best practice for mapping all of the various config directories these services will produce to the host OS. If the same user is going to be used in all of the containers, I'd think it would make the most sense to have all the various config directories inside of that user's /home instead of spread all over the place.

You may have a bit of trouble with the SSL keys: group ownership is this whole wonky thing depending on /etc/group and how a process was launched. If you're still getting read errors, try running
code:
setfacl -m u:911:r  <the key files>
setfacl -m u:911:rx <the directory>
on the files and the directory respectively: this uses POSIX ACLs instead.

Brutakas
Oct 10, 2012

Farewell, marble-dwellers!
I've got a Pixel phone (4a). From searching online, the phone cannot output video other than through casting. I'm aware of the current streaming devices, but my question is: are there still any simple (or stupid) devices that don't connect to streaming services, require accounts, or need internet access in general?

I've got a media server that I connect to through VPN. My primary goal is to play movies on my phone and watch them on a tv in hotel-like situations (can't download any software to the TV and don't want to put in any credentials on the TV).

Motronic
Nov 6, 2009

When I traveled regularly for work, the simplest solution to this was a Roku stick and a travel router.


cruft
Oct 25, 2007

Motronic posted:

When I traveled regularly for work, the simplest solution to this was a Roku stick and a travel router.

:hmmyes:

I travel regularly for work and the simplest solution to this is a Chromecast or Fire Stick and the Plex server I already run.

The fire stick handles captive portals a little better than the Chromecast.

  • Reply