Nitrousoxide
I've been using this repo as a starting point for a bunch of docker compose files.

https://github.com/abhilesh/self-hosted_docker_setups

My most used applications are probably:
Wirehole (a combo of Pi-hole, WireGuard, and Unbound)
https://github.com/IAmStoxe/wirehole

Transmission VPN - I set up my media-grabbing apps in a stack with this, which opens a VPN tunnel to my VPN provider of choice. It's also useful because I'm on Verizon and they blocked MangaDex, so I set up FoxyProxy to route certain sites through the Transmission proxy, which goes out over the VPN. A rough sketch of that kind of stack is below.
https://github.com/haugene/docker-transmission-openvpn
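For anyone curious what that kind of stack looks like, here's a minimal sketch. The haugene image and its OPENVPN_* variables are real, but the provider, credentials, and the Radarr service are just placeholders to show the network_mode trick; check the image docs before copying.

code:
version: '3.6'
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn:latest
    cap_add:
      - NET_ADMIN
    environment:
      - OPENVPN_PROVIDER=PIA            # placeholder; set to your VPN provider
      - OPENVPN_USERNAME=user           # placeholder credentials
      - OPENVPN_PASSWORD=pass
      - LOCAL_NETWORK=192.168.0.0/16    # so the web UIs stay reachable from the LAN
      - WEBPROXY_ENABLED=true           # bundled HTTP proxy (what FoxyProxy points at)
    ports:
      - '9091:9091'                     # Transmission web UI
      - '8888:8888'                     # proxy port; check the image docs for the default
      - '7878:7878'                     # Radarr's UI, published here because it shares this network
    volumes:
      - ./transmission:/data
    restart: always

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    network_mode: service:transmission-openvpn   # all of Radarr's traffic rides the VPN tunnel
    depends_on:
      - transmission-openvpn
    volumes:
      - ./radarr:/config
    restart: always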

I also set up Joplin and Quillnote as notes apps on my phone/computer and have them sync to Nextcloud, so they work like Google Keep and a general note-taking app that syncs in the cloud. No need for a separate Docker container provided you have a Nextcloud WebDAV endpoint you can point them to.


Nitrousoxide
Does PhotoPrism let you share a photo with people as a link? I see in their demo environment it can act as a WebDAV server you can connect to with other file browsers, but what if I just want to send a family member an album with a few photos in it?

Nitrousoxide
Scruff McGruff posted:

Overseerr also led me to LunaSea which is basically a mobile app version of HOMER/Muximux that supports the *arr apps, Tautulli, and NZB. Pretty nice.

Huh that's pretty cool.

Nitrousoxide
My server's Ethernet adapter seems to have died while I'm 1,000 miles away on vacation. I managed to log in to it through a VPN for the few minutes it was still accessible before it completely conked out; the logs showed a ton of resets and read/write failures for that module.

Impressive how it timed a hardware failure that makes it completely unusable for the first 24 hours of the only vacation I've had away from home since Covid started.

Nitrousoxide
Canine Blues Arooo posted:

I'm looking for guidance on how to better execute on my self-hosted setup. It might be in the OP, but I'm too dumb to piece it together.

What I have is a Windows box running a handful of services I want to expose to the Internet. Those services are things like a MySQL database, a Minecraft server, etc. Right now it's all running on bare metal, and the way I'm doing this is just port forwarding and connecting via IPv4, but without a static address, so things like my Minecraft server have an ever-changing address, which is highly suboptimal. For some things, I've solved this with ngrok, but it doesn't seem to be able to do all the things (e.g. the Minecraft server doesn't seem to play nice with ngrok for reasons I don't understand).

If I can actually figure this out, what I want is a more robust box running Windows VMs, accessible via DNS. Before I put money into a bigger box and set up slightly more real infrastructure, though, I really need to figure out this external addressing poo poo. I'm not sure what the Right Way™ to access my services reliably and consistently from the Internet actually is. Maybe it lies somewhere with ngrok and I just need to get gud. A static address is ideal, but is not in the cards.

So goons: How do I setup my box so I can talk to it from the Internet without static addressing?

You should absolutely run anything you're exposing to the internet through a reverse proxy. NGINX Proxy Manager is probably the easiest way for someone with less technical knowledge to implement one, since it's all done through a GUI.

You'll also want some dynamic DNS, which will update your domain provider with your IP if it changes.

I personally do it in Docker with two containers:

1: My domain provider is Cloudflare, so I use this container to keep Cloudflare updated with my current IP.
https://hub.docker.com/r/oznu/cloudflare-ddns/

2: Then I use NGINX Proxy Manager (https://hub.docker.com/r/jlesage/nginx-proxy-manager#!) to direct HTTPS traffic to the appropriate server on my network. Everything exposed to the internet comes in on port 443, and NGINX routes it to the right backend. A rough compose sketch of this pair of containers is below.
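Roughly, those two look like this in one compose file. This is a sketch, not a drop-in config: the API token and domain are placeholders, and it uses the upstream jc21/nginx-proxy-manager image whose port layout I'm sure of; the jlesage build linked above works similarly but uses different internal ports, so check its page.

code:
version: '3.6'
services:
  cloudflare-ddns:
    image: oznu/cloudflare-ddns:latest
    environment:
      - API_KEY=your-scoped-cloudflare-api-token   # placeholder
      - ZONE=example.com                           # placeholder domain
      - PROXIED=false
    restart: always

  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - '80:80'      # HTTP, used for Let's Encrypt challenges and redirects
      - '443:443'    # all external HTTPS traffic comes in here
      - '81:81'      # admin GUI; keep this LAN-only
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt
    restart: always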


Duck DNS is a free domain provider (they give you a subdomain under their website), but I don't think it works with NGINX Proxy Manager, so if you want to simplify the reverse proxy setup process you'll need to pay for a proper domain of your own and pick a provider like Cloudflare (or another) to handle the DNS updates. You don't need an expensive domain; you can get one for something like $2 a year.

Matt Zerella posted:

Dynamic DNS using duckdns. Then set up WireGuard or Tailscale and don't expose anything to the internet.

He has a Minecraft server, which I assume will be used by people outside his home, so I think he'll need a reverse proxy to accomplish his goals. But OP, if you're the only one ever using these services, then as Matt Zerella says, use WireGuard pointed at your DuckDNS subdomain, and run a DDNS container (like this: https://github.com/linuxserver/docker-duckdns) on your end to keep DuckDNS up to date with your current IP. This is also the more secure option, since anything exposed to the internet really needs fail2ban and other security layers to prevent brute-force attacks on your services. A rough compose sketch of the DuckDNS updater is below.
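The DuckDNS updater side is about as simple as a container gets. A sketch, with the subdomain and token as placeholders:

code:
  duckdns:
    image: lscr.io/linuxserver/duckdns:latest
    environment:
      - TZ=America/New_York
      - SUBDOMAINS=yoursubdomain     # placeholder; just the name, not the full duckdns.org URL
      - TOKEN=your-duckdns-token     # placeholder
    restart: unless-stopped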

Nitrousoxide fucked around with this message at 22:21 on Dec 2, 2021

Nitrousoxide
Matt Zerella posted:

Just a note, I think I've said it before, but do not expose anything if it doesn't support Single Sign On or 2FA.

Basic auth even over SSL is not enough.

I edited this into my post, but yes, you should absolutely implement fail2ban or other brute-force protections like 2FA or SSO if you're exposing anything to the internet at large. This does increase the complexity of your setup. Something like https://hub.docker.com/r/authelia/authelia#! would work for this, though I don't know whether it works with NGINX. It works with Traefik, which is another reverse proxy, though one I'm not familiar with. A minimal compose sketch for Authelia is below.
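For reference, getting the Authelia container itself running is only a few lines; the real work is writing its configuration.yml and wiring it into whatever reverse proxy you use, which this sketch doesn't cover:

code:
  authelia:
    image: authelia/authelia:latest
    ports:
      - '9091:9091'            # Authelia's default listener
    volumes:
      - ./authelia:/config     # configuration.yml and the user database live here
    environment:
      - TZ=America/New_York
    restart: unless-stopped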

Nitrousoxide
That Works posted:

I know it's a bit offtopic for the thread but we're talking about it anyways already.

I am still getting my head around most networking stuff and I'd like to be able to occasionally remotely access my Unraid NAS at home. So it seemed that setting up Wireguard (as mentioned above) on it would work but one of the 1st things for Wireguard to run is to enable UPnP.

https://forums.unraid.net/topic/84226-wireguard-quickstart/

How is that different from opening up a port on the router? Am I just not understanding some basic thing or?

From reading here and elsewhere I was all "Ok I don't need to set up a reverse proxy on the NAS and dont want to open up any ports so instead I should VPN to it, ok Wireguard... opens a port... :confused: "

Sorry if this is super naive.

You still need to open a port for WireGuard. But since it's the only service listening on that port, and it only responds to peers that have the right secret key (which your client will have), even if people try to use that port as an ingress point to your network, nothing will respond to them because they don't have the right key. Getting the container itself up is roughly the sketch below.
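A sketch using the linuxserver image; the SERVERURL would be your DDNS name, and the generated peer configs/QR codes land in the config volume:

code:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - SERVERURL=yoursubdomain.duckdns.org   # placeholder; the DDNS name clients dial
      - PEERS=2                               # how many client configs to generate
    volumes:
      - ./wireguard:/config
    ports:
      - '51820:51820/udp'                     # the one port you forward on the router
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped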

Nitrousoxide fucked around with this message at 17:56 on Dec 5, 2021

Nitrousoxide
One thing I'd recommend if you self-host email: do it off-site in a managed environment rather than at home. You don't want to miss emails because your home lost power or internet.

Nitrousoxide
Zapf Dingbat posted:

I've experimented with Rocket Chat and it seemed like a good Slack-like. Otherwise if you've got nextcloud then try Talk.

Speaking of which, Rocket.Chat is now being integrated into Nextcloud.

https://nextcloud.com/blog/rocket-chat-nextcloud-integration/

Nitrousoxide
Just a heads up for folks who use Nextcloud: MariaDB 10.6 seems to break it. After troubleshooting a broken install for a couple of hours after I updated, I ended up rolling back to 10.5 and restoring a backup of my database, and that fixed it. You may want to pin your database image to 10.5 for now; a one-line compose example is below.

Maybe a fresh install of the app will work fine, I don't know. But upgrading is borked for me at least.
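For anyone unsure what pinning looks like in a compose file, it's just using the explicit tag instead of :latest (hypothetical service name):

code:
  db:
    image: mariadb:10.5    # pinned; don't float to :latest / 10.6 until Nextcloud supports it
    restart: always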

Nitrousoxide
Finally got Fail2Ban set up for NGINX Proxy Manager (I had to use a separate Docker container since it's not included) after having run my home-lab setup for the better part of a year without it. It immediately banned about 100 IPs as it scanned over the logs. Heh, I guess that's not too bad for that amount of time, but it's a good thing I set it up. The container itself is roughly the sketch below; the jail and filter definitions matching NPM's log format are the fiddly part.
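A minimal sketch of the separate-container approach, assuming the crazymax/fail2ban image (a common choice for this). The NPM log path is a placeholder, and you still have to drop jail and filter definitions into the /data volume:

code:
  fail2ban:
    image: crazymax/fail2ban:latest
    network_mode: host            # bans are applied with iptables on the host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    environment:
      - TZ=America/New_York
    volumes:
      - ./fail2ban/data:/data                        # jail.d/, filter.d/ go here
      - /DockerAppData/npm/logs:/log/npm:ro          # placeholder path to NPM's logs, read-only
    restart: always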

Nitrousoxide
Zapf Dingbat posted:

Any good guides to follow? I've been running nginx as a reverse proxy and now I'm wondering what it'll catch.

https://youtu.be/Ha8NIAOsNvo

https://dbt3ch.com/books/fail2ban/page/how-to-install-and-configure-fail2ban-to-work-with-nginx-proxy-manager

Nitrousoxide
CopperHound posted:

You can mount a specific file instead of directory with docker compose. Could that be a solution for you?

You can also mount directories within the app to fixed points outside of the container.

Like:

volumes:
  - /DockerAppData/mycoolapp:/coolapp/config/

That would mount /coolapp/config/ inside your Docker container to /DockerAppData/mycoolapp on your host system, so all your config files, a database, or whatever would live in the host's /DockerAppData/mycoolapp directory rather than inside the container.

Then it's easy to just back up that directory on the host as you normally would. A small compose sketch showing both a directory mount and a single-file mount is below.
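Putting both ideas together (a whole-directory mount plus CopperHound's single-file mount) in one hypothetical service:

code:
services:
  mycoolapp:
    image: example/mycoolapp:latest    # hypothetical image
    volumes:
      - /DockerAppData/mycoolapp:/coolapp/config                      # whole config directory lives on the host
      - /DockerAppData/mycoolapp/settings.ini:/coolapp/settings.ini   # or bind just one specific file
    restart: unless-stopped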

Nitrousoxide
Set up a Tdarr instance, plus a node on a couple of other computers, to convert my entire media library to H265 and save a ton of space. I'm on track to save something like 1 TB in my NAS once this is all done. It's pretty neat how it does distributed conversions: you can throw it on your gaming PC and a couple of old laptops as nodes, and it will parcel out jobs for each to complete and move the results over to replace the original files when done, all without any input from the user. Then once it's finished it sits there watching for changes to your media, and if it sees new stuff it automatically grabs it, converts it to your wanted format (H265 by default), and replaces the original, keeping you space-efficient going forward.

Nitrousoxide fucked around with this message at 01:14 on Apr 20, 2022

Nitrousoxide
Zapf Dingbat posted:

Oh god, this looks like what I need. How have I not heard of this? One of the biggest problems we have is Plex refusing to play something on our tablet, or lagging due to conversion. I'm assuming you're saving space due to better encoding?

H265 can halve the size of an H264-encoded file with no visible loss in quality; it's that efficient. Though as mentioned by the other poster, not all devices support it (it's getting harder to find ones that don't these days). You can test a file that's already encoded in H265 on your various devices to make sure you won't make your entire library unwatchable on your favorite TV (but it'll probably work fine). There are other conversion options in Tdarr too; if for some reason you didn't want H265 and wanted H264 or something else, that works as well, though you wouldn't get the space savings, obviously.

Folks are generally settling on H265 as the standard going forward, so it's likely support for it will only improve. Everything I'd watch a Plex stream from works fine with it, including my smart TV.

Nitrousoxide
Aware posted:

I think AV1 will be better in general moving forward, but it's worse for compatibility at the moment. Also, it is necessarily a lossy process, so you will lose some fidelity, though this is subjective and varies depending on the source material. If you're primarily watching on a tablet and not a huge TV it's likely to be unnoticeable.

AV1 does have the advantage of being an open, royalty-free codec and is pretty similar in efficiency to H265, but like you said, almost nothing supports it right now, so I wouldn't recommend converting stuff over to it. Maybe in a few years it'll be in a better spot.

To give you an idea of how my file sizes change:

Nitrousoxide
fletcher posted:

I really wish there was a self hosted option for discord that was as fully featured. If we could even use the discord client (desktop & mobile app) with a self hosted server, that would be amazing, but I know it's impossible and would never happen. It's great that so many gamers use it, but it's annoying how expensive the premium features are to allow larger file sizes of attachments, and higher quality for streaming things. We don't use the streaming enough to justify boosting the discord server, it's more for the quick "hey check this out for a couple minutes" type of things when we're all on there already, or using some other file sharing thing for larger file sizes. Oh how I wish it was just some XMPP type thing where we had more flexibility with what client & server to use.

What about a Matrix server:
https://github.com/AVENTER-UG/docker-matrix
https://matrix.org/

and Element client:
https://element.io/

Nitrousoxide
BlankSystemDaemon posted:

Matrix somehow manages to be even worse, because it does an impossibly poor job of interoperating with IRC by completely making GBS threads all over the existing protocol, implementing threaded conversations by doing partial inline quoting which makes conversations harder to follow if you're using a regular client, and on top of all that if you so much as dare type one character above the max length of any message on IRC, Matrix unilaterally decides to parse the entire sentence through a httpd and instead put part of the message plus an URI into the IRC channel.
This is Microsoft Chat levels of bullshit, and they managed to get themselves banned from every network for behaving that way, so why the gently caress does Matrix developers think it's a good idea?

Is Matrix running over IRC? Looking at their docs, it looks like the standard setup just talks to other Matrix servers directly. Why would anyone care if it doesn't play nice with IRC?

Nitrousoxide
fletcher posted:

Not sure if this is the right thread for it but I've been thinking about finally ditching Google.

My plan would be:
  • Google Mail - Switch to ProtonMail. I already use a custom domain with Gmail. The mobile app & web interface seem pretty good. The plan would be to update the MX records, and then use the "Import via Easy Switch" feature in ProtonMail to import my emails, calendars, and contacts.
  • Google Calendar - Switch to Proton Calendar
  • Google Docs - I have a ton of docs, slides, and sheets in here. Collabora seems...ok. Maybe Onlyoffice? Does anything make it easy to open the existing gdoc files? Or do I need to do some sort of mass conversion to another format? I frequently edit these documents both on my desktop and on phone. I toyed with the idea of using MS Office, but I don't want to have to put everything on Onedrive to be able to open it on the web version.
  • Google Drive - Switch to using Syncthing both on my android phone & desktop computer to do backups to my NAS. I don't want to expose the NAS to the internet, so the phone sync would only be done over VPN to my place. The NAS is already being backed up to Backblaze.
  • Google Photos - Switch to PhotoPrism. Haven't decided where to host it yet, maybe my colo server (which I'm also thinking about moving back to my place now that I have fiber). Being able to search for "vaccine" and have it bring up my vaccine card, or "dogs", etc is probably the feature I like most about Google Photos, but it seems PhotoPrism is able to do this as well.

Anything else I'm not thinking of? Is there anything that I'm going to really miss after this switch?

Google Docs, Drive, and Photos can be replaced with Nextcloud.

You can either self-host a Collabora server separately (I do this) or use the built-in CODE server that modern Nextcloud installs can spin up (as I understand it this is technically less robust, but it will be fine for a few users and isn't as complicated, because you don't have to deal with SSL certs and pointing to the correct servers). Either way it's free. If you go the separate-server route, a rough compose sketch is below.
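A sketch of the separate Collabora container; the Nextcloud hostname is a placeholder, and the environment variable naming the allowed host has changed between image versions, so check the collabora/code docs:

code:
  collabora:
    image: collabora/code:latest
    ports:
      - '9980:9980'
    environment:
      # which Nextcloud host may use this server; older tags call this "domain"
      # (an escaped regex), newer ones "aliasgroup1" -- verify against the image docs
      - domain=nextcloud\\.example\\.com
    cap_add:
      - MKNOD
    restart: always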

This is what a document would look like being edited online.


Nextcloud can happily back up photos from your phone automatically and can be used to share them with other folks; I do this for my family when I share hiking photos. It's a bit clunkier than Google Photos though, and you're not going to get all that neat AI stuff for identifying people and things. If you want those features, some other self-hosted open source photo apps can do them, though they can be pretty processor-heavy because your server is doing all the AI identification itself.

Nitrousoxide fucked around with this message at 17:51 on May 11, 2022

Nitrousoxide
Rootless Podman is supposed to be more secure than Docker, though if you need to do any fancy networking with it I don't think that works unless you go rootful.

Unfortunately the documentation and how-tos for Podman are really lacking compared to Docker. Sometimes you can just follow the Docker instructions, but since those generally assume rootful Docker installs, it's not at all uncommon for them to just fail to work, and you have to untangle how to do things differently in Podman (or whether it's even possible, as with the aforementioned networking).

Nitrousoxide
BlankSystemDaemon posted:

Welp.

Have you had a look at your favorite search engine for "docker escape"?

I don't think Podman is susceptible to a docker escape in rootless mode, at least as far as I know.

Nitrousoxide
The easiest way to do it is probably to just install Docker on an old PC and then run the Pi-hole container; it'll serve any device on your network that way. A minimal compose sketch is below.
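A minimal compose sketch for the Pi-hole container (the password and timezone are placeholders, and the host web port is arbitrary):

code:
  pihole:
    image: pihole/pihole:latest
    ports:
      - '53:53/tcp'
      - '53:53/udp'
      - '8080:80/tcp'           # admin web UI; pick any free host port
    environment:
      - TZ=America/New_York
      - WEBPASSWORD=changeme    # admin UI password
    volumes:
      - ./pihole/etc-pihole:/etc/pihole
      - ./pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped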

Nitrousoxide
If you're using a VPN you don't need an SSL cert; the VPN tunnel is what protects you against man-in-the-middle attacks and packet sniffing.

Nitrousoxide
Keito posted:

Do you mean that you are serving your sites with a CF origin cert now? There are several ways to go about resolving your issue, I'll describe two.

The easiest might be to revert back to using Let's Encrypt-issued certificates in nginx, and then go with cloudflared for tunneling external traffic to nginx.

Alternatively, as it's possible to serve the same domain name with different ports and different certs, you could do one config for CF and one for LE certs per nginx "server" directive. This approach leads to either lots of duplication or heavy use of includes, though.

Both the above suggestions assume a split-horizon DNS setup, but I assume you have that considering you're getting an error in the first place.

Couldn't you use a wildcard cert internally so you just need the one cert for your internal DNS resolvers? No need for a ton of duplication.

Nitrousoxide
Deployed a GitLab CE (Community Edition, the fully open source version of GitLab) instance on my server and it's pretty neat. I really like being able to automate the builds for my packages with the CI/CD pipeline, like you can with GitHub, and honestly the whole CI/CD user interface is better than GitHub's.

Getting it to work with an already deployed reverse proxy was an ENORMOUS PAIN because gitlab includes its own reverse proxy baked in and you have to figure out which env flags to set to tell it to "shut this poo poo off" and it's not well documented.

If anyone ever wants to do it, I'll save you 8 hours of knob-turning with this compose file:

code:
version: '3.6'
services:
  web:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'url.for.gitlab.tld'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://url.for.gitlab.tld'
        nginx['listen_port'] = 80 #make this match the http port (container side) you opened below for the web traffic for gitlab (not for the container registry)
        nginx['listen_https'] = false
        nginx['redirect_http_to_https'] = false 
        registry_external_url 'https://url.for.container-registry.tld'
        gitlab_rails['registry_enabled'] = true
        gitlab_rails['registry_host'] = "url.for.container-registry.tld"
        gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
        registry_nginx['listen_port'] = 5005 #make this match the port you set to open below
        registry_nginx['listen_https'] = false
        registry_nginx['proxy_set_headers'] = { "X-Forwarded-Proto" => "https", "X-Forwarded-Ssl" => "on"}
    ports:
      - '8006:80' #External port can be whatever here, I picked one that works for me
      - '2224:22' #SSH port
      - '5005:5005' #This is the port for the image registry
    volumes:
      - '/DockerAppData/Gitlab/config:/etc/gitlab'
      - '/DockerAppData/Gitlab/logs:/var/log/gitlab'
      - '/DockerAppData/Gitlab/data:/var/opt/gitlab'
    shm_size: '256m'
And then for your reverse proxy make sure you don't set "force ssl" for the container registry since it uses its own encryption/validation rather than HTTPS.

Nitrousoxide fucked around with this message at 15:35 on Dec 29, 2022

Nitrousoxide
I use OpenMediaVault for my server, which, despite its name, works great as one even if you aren't using it as a NAS. (I have a separate NAS that I mount as a CIFS share.) It's really nice because it has a built-in web GUI, so it can run headless, and it makes a lot of stuff you'd otherwise need the terminal for easier.

Install omv-extras (https://wiki.omv-extras.org/) and it'll give you one-click install of Docker (it sets up the corresponding user groups and so on) and also allows one-click installs and updates of Portainer, which is what I use to manage my Docker containers.

Nitrousoxide
Well Played Mauer posted:

Cool, thanks. Been doing a bit of side research and it sounds like I could power most of what I need on a ~$400 NUC with Unraid, then connect a Synology NAS when I grow beyond external USB SSDs. That sounds like the expensive option, but it also seems like I could run OMV on the NUC in the meantime. Is that correct?

Yep. You could run OMV on either bare metal or through Proxmox as well, if you'd like to be able to experiment with multiple distros.

Nitrousoxide
Well Played Mauer posted:

Gotcha. Generally speaking, would something like this cover me for a NUC? Figure I could drop like 32 gigs RAM and a 2.5" SSD in there and be up and running.

I run my setup on this at $150:

https://www.amazon.com/dp/B07WLLR43R

Its RAM is already maxed out at 16 GB, but I'm still okay while running quite a few applications.



I'll admit I'm getting reasonably close to redlining the RAM, though. If you're not looking to run ~38 apps you'll be fine.

Nitrousoxide fucked around with this message at 00:42 on Jan 10, 2023

Nitrousoxide
Resdfru posted:

44 containers. What are you running?

38 running; the other 6 are leftovers from the CI/CD pipelines in my GitLab instance. They get cleaned up once a week on Saturday.

The ones running:

code:
gitlab-web-1
jncep
fail2ban_docker-pi
uptime-kuma
prowlarr
radarr
lidarr
overseerr
sonarr
homer
collabora_online-code-1
wireguard
pihole
homeassistant
calibre
calibre-web
duplicati
transmission
plex2
nextcloud-app-1
nextcloud-redis-1
homepage
gitlab-runner-runner-1
dockerproxy
audiobookshelf-audiobookshelf-1
unbound
homebridge-homebridge-1
syncthing
vpn_media_server-transmission-openvpn-1
portainer
nextcloud-db-1
watchtower-watchtower-1
nginx-app-1
nginx-db-1
tachidesk_server
tdarr
cloudflare_ddns-cloudflare-ddns-1
foundryvtt-foundry-1
Edit:

Well Played Mauer posted:

OK, yeah, this is a better option that I can more easily convince the wife of. Thank you!

Oh, by the way, I had to switch the boot mode on the OptiPlex to legacy BIOS rather than UEFI to get it to boot Linux. Maybe I'm a dummy and it can be done with UEFI, but I just wanted to save you some effort getting that to work. There's a flag for it in the boot menus.

Nitrousoxide fucked around with this message at 02:00 on Jan 10, 2023

Nitrousoxide
Well Played Mauer posted:

Good to know. I was thinking of either what you linked or maybe this one. I figure double the hard drive space for $4, but I also know Dell tends to be more driver-friendly.

Poke around on the internet and make sure people were able to install Linux okay on that; if so, I think it'd be a better pick than my suggestion. It's also upgradable to 32 GB of RAM later if you want.

https://support.hp.com/us-en/document/c05371240#AbT2

Nitrousoxide
Resdfru posted:

nice, which homepage do you use? I've had Dashy, Homepage, homer, homarr, heimdall, and I'm sure I'm forgetting one. I just can't decide which one I like. I usually use Heimdall cause its actually set up as its the first one I ever tried.

I keep two instances running: one of Homepage and another of Homer. The first is for my apps, and all of its URLs go through my internal reverse proxy. The second one, Homer, just links directly to each service's IP so I can still get to the sites if my reverse proxy is down. I could probably replace the latter with bookmarks in my browser; I rarely use it. I only connect directly to my server's IP if I'm updating it and the update includes the docker.io package, since that would take down the reverse proxy and I'd lose access to it mid-update.

Homepage is nice because I've exposed the Docker socket to it (read-only, so it can't actually mess with it) and it can see my container statuses and health. It also links up to a bunch of containers through their APIs so it can return stats on them.



Resdfru posted:

Do you use gitlab just for managing the homelab stuff, or is it doing other stuff? I thought about self hosting but in the end I just decided to use Github. I just have github actions that uses my self hosted runner which in turn has full access to docker to run compose up on all my containers. This is probably breaking 100 different security rules but none of this is accessible publicly so if anyone is accessing any of it I'm screwed anyway. also portainer could literally do the same thing out of the box but I wanted to do it this way for no reason

There are three applications where I made my own Dockerfiles because the actual dev didn't publish a Docker image. To keep those updated I wanted a CI/CD pipeline to automatically build them, check that they work, and push them to an image registry, which Watchtower can check against the currently deployed image to see if there's an update. GitLab is kind of a big chungus, chewing up 4 gigs of RAM, so I wouldn't recommend it unless you need the more advanced features like I do. If you just need a lightweight Git host, Gitea is significantly lighter on system resources. I also keep my docker-compose backups in a Git repo, and I mirror a few repos I've found on GitHub which I need for odd stuff in my home (example: https://github.com/andymor/keychron-k2-k4-function-keys-linux).
Just in case they ever go away I'll have my own version.

I guess I could fork it on github but that's easy heh.

Nitrousoxide
Keito posted:

That's not how sockets work. Bind mounting in a socket with the ro option only means that the container can't delete the socket itself, but you're still giving away full access to control dockerd (which is equivalent to giving away root access to the host system unless you're running dockerd in rootless mode).

That's actually why one of the containers I run is docker-socket-proxy:
https://github.com/Tecnativa/docker-socket-proxy

It relays the socket info to whatever service I want without handing over root access to the system itself. Of course, the proxy itself still has access to the real Docker socket, so it's a potential threat vector, but it limits the number of additional access points to that socket to one rather than an arbitrarily large number. A minimal sketch of it is below.
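A minimal sketch of the proxy container; which API sections you switch on depends on what the consumer (Homepage, in this case) actually needs:

code:
  dockerproxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      - CONTAINERS=1    # allow the read-only container listing/inspect endpoints
      - POST=0          # refuse anything that could change state
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: always
    # don't publish a host port; put consumers on the same compose network and
    # point them at tcp://dockerproxy:2375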

Nitrousoxide
Traefik or NGINX Proxy Manager are the two options I would recommend. Traefik is cool because it will automatically pick up and create the proxy routing for Docker (or Podman) containers on the same server if you give them the right labels (and give Traefik access to the docker.sock file so it gets a stream of data on what's being spun up or down). That means it spins up whatever proxy route you need as you need it, provided you get those labels right in the docker-compose file; an example is below. It gets somewhat more complicated if you're proxying anything outside of services hosted on the same machine as Traefik, though.
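As an example of the label-driven approach (a sketch; the hostname is a placeholder, traefik/whoami is just a tiny test service, and the entrypoint name has to match whatever your Traefik static config calls it), a container only needs something like this for Traefik to start routing to it:

code:
  whoami:
    image: traefik/whoami    # tiny test service, handy for checking that routing works
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.internal.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls=true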

NGINX Proxy Manager (what I use) is nice in that it has a graphical UI. It's also just as easy to proxy a local service as it is a service on another machine.

You can also do local DNS resolution with the reverse proxy with both of these services if you have something like a Pi-hole (what I do). All my services can be reached at (servicename).internal.(domain).(tld) and cannot be accessed from a non-local IP, by setting the access lists to only allow these IP ranges:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16

Those are the three RFC 1918 ranges, so they cover every possible private local IP address.

And then you can also set services that you want to be accessible to the world at large (like my Nextcloud instance for example so I can share items with people) to not use the local IP restriction.

Nitrousoxide fucked around with this message at 21:30 on Jan 13, 2023

Nitrousoxide
Well Played Mauer posted:

I think I'm gonna get a VM going with Portainer to set up this reverse proxy and other docker-oriented apps. I'll probably throw Debian 11 or whatever on it. I'm not sure how much hardware to provision it, though. The machine I have is a quad-core i5 with 16 gigs of RAM. Can I get away with giving it a couple cores and like 4-5 gigs of RAM? Is that too little, too much?

That's probably plenty. You should be able to increase the hardware allocation later if you find it's not performing well, too.

Nitrousoxide
Well Played Mauer posted:

I found a dude selling a DS220+ with the 4GB RAM upgrade for like $289 on eBay, then grabbed a couple 6TB WD Red Pluses off Amazon.

I figure it’s a good entry into the NAS world assuming it’s not a scam. For the stuff I wanna store - media for the most part, with some space reserved for future VM projects when I get tired of external drives connected with USB 3 - the 6TB RAID should cover me until I’m ready to spend money again.

Please god let me stop spending money.

Make sure to 3-2-1 back up your server image, or your configs at the very least. You want a backup solution that will let you recover from your house burning down, so you don't lose everything you've set up.

I personally don't back up my media folder; that gets way too expensive, so it only exists on my NAS. But I take a monthly disk image of my server with Clonezilla (though you can use the built-in backup in Proxmox since you're not on bare metal like me) and then daily backups of my Docker container config folders with duplicity.

Both the disk images and the docker config folders are shot over to my NAS for a local copy on another device, and I use backblaze b2 storage for an external backup.

I have had to use the server image backup once due to a hardware failure on a previous iteration of it when the old laptop it was on died, and I’ve used the docker folder config backups a handful of times when I REALLY hosed something up when fiddling with a container’s settings.

Nitrousoxide
Oh, lastly, make sure that you validate your backups work. Spin up a new vm and deploy a backup you made to it and make sure it all works right. There would be nothing worse than thinking you have a good backup solution only to discover it wasn't actually working when it came time to use it to recover from a critical failure.

Nitrousoxide
Well Played Mauer posted:

Got Portainer up and running on a VM. That was surprisingly easy. I installed Calibre-web on it just to see how it works, and aside from some database funkiness between it and Readarr, I got it up and running.

One thing I noticed is it takes the containers a good bit of time to fully initialize. I thought I broke something when getting Calibre-web to start up because the web GUI wasn't immediately available. Then I looked at the logs and realized it was still initializing 3-5 minutes after I ran the container. Is that normal, or is the external SSD I have everything on poo poo? (Maybe both?)

Other note: I really need to get a homepage/reverse proxy set up. poo poo's everywhere now and I'm forgetting which service is on which VM.

The first startup of a container might take a bit as it creates its config files, but after that it should be faster. I don't use ANY remote-mounted directories (except media directories) for my containers; the config folders all live locally on the server to maximize performance. If you're using an external hard drive for your persistent volumes, try the internal SSD instead.

You could also try upping your hardware allocation to that VM and see if you get better performance.

Nitrousoxide
I use the DNS challenge method with Cloudflare for my local wildcard certs, and the standard Let's Encrypt method in NPM for my external-facing services. It's worked well for me for nearly a year now.

Nitrousoxide
I use a wildcard cert for *.internal.(domain).(tld) so that I only have to set up the Cloudflare challenge once for all my internal services, and they can all use the same SSL cert.


Nitrousoxide
SamDabbers posted:

SANs are the correct way to do it, but then you have to reissue the proxy's certificate every time you want to add or remove a service in your homelab, where you presumably tinker and host relatively short-lived experiments.

Yeah I use SANs, one each, for all my external services. My internal stuff is all wildcard certs.

Well Played Mauer posted:

I’ve been adding more stuff to the stack since I got up and running and noticed some pretty big performance hits on other services when making changes or rebuilding docker containers on my proxmox machine. After looking at the CPU/memory consumption during the slowdowns and seeing nothing out of the ordinary I finally got off my butt and put a new external SSD onto the machine. Going from a five-year-old Toshiba drive I got as a photo backup to a crucial drive from this decade immediately resolved the issues.

I also post this because my god did proxmox make changing drives on the VM easy. Just go into the hardware allocation and tell it to switch the storage to the other drive. It cloned everything and then I just restarted each VM.

If I ever replace my current bare-metal OptiPlex build with something that has a few slots for internal storage drives, I'll probably switch to Proxmox. It does look pretty great.

  • Reply