I've been using this repo as a starting point for a bunch of docker compose files: https://github.com/abhilesh/self-hosted_docker_setups

My most used applications are probably:

Wirehole (a combo of Pi-hole, WireGuard, and Unbound): https://github.com/IAmStoxe/wirehole

Transmission VPN: I set up any of my media grabbing apps in a stack with this, which opens a VPN tunnel to my VPN of choice. It's also useful since I'm on Verizon and they blocked Mangadex, so I set up FoxyProxy to direct certain sites through the Transmission proxy that goes through the VPN. https://github.com/haugene/docker-transmission-openvpn

I also set up Joplin and Quillnote as notes apps on my phone/computer and had them sync to Nextcloud, so they work like Google Keep plus a general note-taking app that syncs in the cloud. No need for a separate docker instance provided you have a Nextcloud WebDAV endpoint you can point them to.
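The shape of that stack is roughly this (a sketch, not my exact file; the provider, credentials, and the sonarr service here are placeholder examples):

```yaml
services:
  transmission:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN                     # needed to create the tunnel
    environment:
      - OPENVPN_PROVIDER=PIA          # your VPN provider here
      - OPENVPN_USERNAME=user         # placeholder
      - OPENVPN_PASSWORD=pass         # placeholder
      - WEBPROXY_ENABLED=true         # the HTTP proxy FoxyProxy points at
    ports:
      - "9091:9091"                   # transmission web UI
      - "8888:8888"                   # the proxy port
      - "8989:8989"                   # example app's UI, published here

  sonarr:
    image: lscr.io/linuxserver/sonarr
    # share the VPN container's network stack, so all of this
    # container's traffic goes out through the tunnel; note its
    # ports have to be published on the transmission service above
    network_mode: "service:transmission"
    depends_on:
      - transmission
```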
|
|
# ¿ Nov 15, 2021 21:49 |
|
|
# ¿ May 14, 2024 20:59 |
Does photoprism let you share a photo with people as a link? I see in their demo environment it can act as a webdav server which you can connect to with other file browsers, but like what if I just want to send a family member an album with a few photos in it?
|
|
# ¿ Nov 16, 2021 19:13 |
Scruff McGruff posted:Overseerr also led me to LunaSea which is basically a mobile app version of HOMER/Muximux that supports the *arr apps, Tautulli, and NZB. Pretty nice. Huh, that's pretty cool.
|
|
# ¿ Nov 18, 2021 04:43 |
My ethernet adapter on my server seems to have died while I'm 1000 miles away on vacation. I managed to log in through a VPN for the few minutes it was still accessible before it completely conked out; the logs showed a ton of resets and read/write failures for that module. Impressive how it timed a hardware failure that made it completely unusable for the first 24 hours of the only vacation I've had away from home since Covid started.
|
|
# ¿ Nov 23, 2021 18:49 |
Canine Blues Arooo posted:I'm looking for guidance on how to better execute on my self-hosted setup. It might be in the OP, but I'm too dumb to piece it together.

You should absolutely run anything you're exposing to the internet through a reverse proxy. NGINX Proxy Manager is probably the easiest way for someone with less technical knowledge to implement one, since it's all done through a GUI. You'll also want some dynamic DNS to update your domain provider with your IP if it changes. I personally do it in Docker with two containers:

1: My domain provider is Cloudflare, so I use this container to keep Cloudflare updated on what my IP is: https://hub.docker.com/r/oznu/cloudflare-ddns/

2: Then I use NGINX Proxy Manager (https://hub.docker.com/r/jlesage/nginx-proxy-manager#!) to direct the HTTPS traffic to the appropriate server on my network. Everything exposed to the internet comes in on port 443, and NGINX handles directing the traffic to the right place.

Duck DNS is a free domain provider (they give you a subdomain at their website), but I don't think it works with NGINX Proxy Manager, so if you want to simplify the reverse proxy setup you'll need to pay for a proper domain and pick a provider like Cloudflare (or another) to handle the DNS updates. You don't need an expensive domain; you can get one that's like $2 for the year.

Matt Zerella posted:Dynamic DNS using duckdns. Then set up WireGuard or Tailscale and don't expose anything to the internet.

He has a Minecraft server, which I assume will be used by people outside of his home, so I think he will need a reverse proxy to accomplish his goals. But OP, if you are the only one ever using these services, then as Matt Zerella says you can just use WireGuard pointed at your DuckDNS subdomain, and run a DDNS container (like this: https://github.com/linuxserver/docker-duckdns) on your end to keep Duck DNS up to date on your current IP.
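The two-container setup looks roughly like this (a sketch; the Cloudflare API key, domain, and paths are placeholders you'd fill in):

```yaml
services:
  cloudflare-ddns:
    image: oznu/cloudflare-ddns
    environment:
      - API_KEY=your_cloudflare_api_token   # placeholder
      - ZONE=example.com                    # placeholder domain
    restart: unless-stopped

  nginx-proxy-manager:
    image: jlesage/nginx-proxy-manager
    ports:
      - "80:8080"     # http, for cert challenges / redirects
      - "443:4443"    # everything exposed to the internet comes in here
      - "8181:8181"   # the admin web UI (keep this LAN-only)
    volumes:
      - ./npm-config:/config
    restart: unless-stopped
```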
This would also be more secure, since anything exposed to the internet really needs fail2ban and other security layers on top of it to prevent brute force attacks on your services. Nitrousoxide fucked around with this message at 22:21 on Dec 2, 2021 |
|
# ¿ Dec 2, 2021 22:13 |
Matt Zerella posted:Just a note, I think I've said it before but do not expose anything if it doesn't support Single Sign On or 2FA.

I edited this into my post, but yes, you should absolutely implement fail2ban or other brute force protections like 2FA or SSO if you're exposing anything to the internet at large. This does increase the complexity of your implementation. Something like https://hub.docker.com/r/authelia/authelia#! would work for this, though I don't know if it works with NGINX. It works with Traefik, which is another reverse proxy implementation, though one I'm not familiar with.
|
|
# ¿ Dec 2, 2021 22:24 |
That Works posted:I know it's a bit offtopic for the thread but we're talking about it anyways already.

You still need to open a port for WireGuard. But since it's the only service listening on that port, and it will only respond to clients that have the right encryption key (which your client will have), even if people try to use that port as an ingress point to your network, nothing will respond to them since they don't have the right key. Nitrousoxide fucked around with this message at 17:56 on Dec 5, 2021 |
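The relevant bit of a server-side config, just to illustrate (keys, addresses, and port are all placeholders):

```ini
# /etc/wireguard/wg0.conf (server side) - sketch only
[Interface]
PrivateKey = <server-private-key>
Address = 10.8.0.1/24
ListenPort = 51820              # the one UDP port you forward

[Peer]
# only packets authenticated with this client's key get any
# response at all; everything else is silently dropped
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```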
|
# ¿ Dec 5, 2021 17:50 |
One thing I'd recommend if you self-host email is that you should do it off-site in a managed environment. You don't want to miss emails because your home lost power or internet.
|
|
# ¿ Jan 26, 2022 15:03 |
Zapf Dingbat posted:I've experimented with Rocket Chat and it seemed like a good Slack-like. Otherwise if you've got nextcloud then try Talk. Speaking of, Rocket.Chat is now being integrated into Nextcloud. https://nextcloud.com/blog/rocket-chat-nextcloud-integration/
|
|
# ¿ Mar 3, 2022 22:35 |
Just a heads up to folks who use Nextcloud: MariaDB 10.6 seems to break Nextcloud. After troubleshooting a broken install for a couple of hours after I updated, I ended up rolling back to 10.5 and restoring a backup of my database, and that fixed it. You may want to pin any database image you're using to 10.5 for now. Maybe a fresh install of the app will work fine, I don't know, but upgrading is borked for me at least.
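In compose terms, pinning just means an explicit tag instead of latest (sketch; the volume path is an example):

```yaml
services:
  db:
    # explicit tag so a routine image pull can't silently
    # jump the database to 10.6
    image: mariadb:10.5
    volumes:
      - ./db:/var/lib/mysql
```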
|
|
# ¿ Mar 5, 2022 15:47 |
Finally got Fail2ban set up for NGINX Proxy Manager (had to use a separate docker container since it's not included) after having run my home-lab for the better part of a year without it. It immediately banned about 100 IPs as it scanned over the logs. Heh, I guess that's not too bad for that amount of time, but it's a good thing I set it up, I suppose.
|
|
# ¿ Mar 19, 2022 14:51 |
Zapf Dingbat posted:Any good guides to follow? I've been running nginx as a reverse proxy and now I'm wondering what it'll catch. https://youtu.be/Ha8NIAOsNvo https://dbt3ch.com/books/fail2ban/page/how-to-install-and-configure-fail2ban-to-work-with-nginx-proxy-manager
|
|
# ¿ Mar 20, 2022 03:54 |
CopperHound posted:You can mount a specific file instead of directory with docker compose. Could that be a solution for you?

You can also mount directories within the app to fixed points outside of the container. For example, a volume entry like /DockerAppData/mycoolapp:/coolapp/config/ would mount "/coolapp/config/" inside your docker container to /DockerAppData/mycoolapp on your host system. All your config files, a database, or whatever would then live in the host's /DockerAppData/mycoolapp directory rather than inside the container, and it's an easy thing to just back up that directory on the host as you normally would.
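Fleshed out into a compose file it looks like this (the service name, image, and paths are made-up examples):

```yaml
services:
  mycoolapp:
    image: example/mycoolapp        # hypothetical image
    volumes:
      # host path : path inside the container
      - /DockerAppData/mycoolapp:/coolapp/config/
      # media can stay on a NAS mount, read-only here
      - /mnt/nas/media:/media:ro
```

Backing up /DockerAppData on the host then captures every app's state in one place.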
|
|
# ¿ Apr 6, 2022 22:24 |
Set up a Tdarr instance and a node on a couple of other computers to convert my entire media library over to H265 and save a ton of space. I expect to save something like 1 TB in my NAS once this is all over. Pretty neat how it does the distributed conversions: you can throw it on your gaming PC and a couple of old laptops as nodes, and it'll parcel out jobs for each to complete and move the results over to replace the original files when done, all without any input from the user. Then when it's done it'll sit there watching your media, and if it sees new stuff it'll automatically grab it, convert it to your wanted format (H265 by default), and replace the original, keeping you space-efficient going forward. Nitrousoxide fucked around with this message at 01:14 on Apr 20, 2022 |
|
# ¿ Apr 20, 2022 01:12 |
Zapf Dingbat posted:Oh god, this looks like what I need. How have I not heard of this? One of the biggest problems we have is Plex refusing to play something on our tablet, or lagging due to conversion. I'm assuming you're saving space due to better encoding?

H265 can halve the size of an H264-encoded file with no visible loss in quality; it's that efficient. Though as the other poster mentioned, not all devices support it (it's getting harder to find ones that don't these days). You can test a file that's already encoded in H265 on your various devices to make sure you won't make your entire library unwatchable on your favorite TV (but it'll probably work fine). There are other conversion options in Tdarr too; if for some reason you wanted to convert to H264 or something else instead, that works as well, though you wouldn't get the space savings, obviously. Folks are generally switching to H265 as the standard going forward, so support will likely only improve. Everything I'd watch a Plex stream from works fine with it, including my smart TV.
|
|
# ¿ Apr 20, 2022 03:57 |
Aware posted:I think AV1 will be better in general moving forward but it's worse for compatibility at the moment. Also it is a necessarily lossy process so you will lose some fidelity though this will be subjective and vary depending on the source material. If you're primarily watching on a tablet and not a huge TV it's likely to be unnoticeable.

AV1 does have the advantage of being royalty-free and open, and pretty similar in efficiency to H265, but like you said, almost nothing supports it right now, so I'd not recommend converting stuff over to it. Maybe in a few years it'll be in a better spot. To give you an idea of how my file sizes change:
|
|
# ¿ Apr 20, 2022 04:05 |
fletcher posted:I really wish there was a self hosted option for discord that was as fully featured. If we could even use the discord client (desktop & mobile app) with a self hosted server, that would be amazing, but I know it's impossible and would never happen. It's great that so many gamers use it, but it's annoying how expensive the premium features are to allow larger file sizes of attachments, and higher quality for streaming things. We don't use the streaming enough to justify boosting the discord server, it's more for the quick "hey check this out for a couple minutes" type of things when we're all on there already, or using some other file sharing thing for larger file sizes. Oh how I wish it was just some XMPP type thing where we had more flexibility with what client & server to use. What about a Matrix server: https://github.com/AVENTER-UG/docker-matrix https://matrix.org/ and Element client: https://element.io/
|
|
# ¿ Apr 26, 2022 21:58 |
BlankSystemDaemon posted:Matrix somehow manages to be even worse, because it does an impossibly poor job of interoperating with IRC by completely making GBS threads all over the existing protocol, implementing threaded conversations by doing partial inline quoting which makes conversations harder to follow if you're using a regular client, and on top of all that if you so much as dare type one character above the max length of any message on IRC, Matrix unilaterally decides to parse the entire sentence through a httpd and instead put part of the message plus an URI into the IRC channel. Is Matrix running over IRC? Looking at their docs, it looks like the standard setup just talks to other Matrix servers directly. Why would anyone care if it doesn't play nice with IRC?
|
|
# ¿ Apr 27, 2022 02:06 |
fletcher posted:Not sure if this is the right thread for it but I've been thinking about finally ditching Google.

Google Docs, Drive, and Photos can be replaced with Nextcloud. You can either self-host a Collabora server separately (I do this) or use the built-in CODE server that modern Nextcloud installs can spin up (as I understand it, this is technically less robust, but it will be fine for a few users and isn't as complicated, because you don't need to deal with the SSL certs and pointing to the correct servers). Either way it's free. This is what a document would look like being edited online. Nextcloud can happily back up photos from your phone automatically and can be used to share them with other folks; I do this for my family when I share hiking photos. It's a bit clunkier than Google Photos though, and you're not going to get all that neat AI stuff for identifying people and things. If you want those features, some other self-hosted open source photo apps can do them, though they can be pretty processor heavy because your server is doing all the AI identification itself. Nitrousoxide fucked around with this message at 17:51 on May 11, 2022 |
|
# ¿ May 11, 2022 15:54 |
Rootless Podman is supposed to be more secure than Docker, though if you need to do any fancy networking with it, I don't think that works unless you go rootful. Unfortunately the documentation and how-tos for Podman are really lacking compared to Docker. Sometimes you can just follow the Docker instructions, but since those generally assume rootful installs, it's not at all uncommon for them to just fail to work, and you have to untangle how to do things differently in Podman (or whether it's even possible, as with the aforementioned networking).
|
|
# ¿ May 13, 2022 19:30 |
BlankSystemDaemon posted:Welp. I don't think Podman is susceptible to a docker escape in rootless mode, at least as far as I know.
|
|
# ¿ May 13, 2022 20:34 |
Easiest way to do it is to probably just install docker on an old pc, and then install the docker container for pihole. It'll work on any device that way.
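The compose file for it is about this simple (timezone, password, and the remapped admin port are placeholders to change):

```yaml
services:
  pihole:
    image: pihole/pihole
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80"              # admin web UI, remapped off port 80
    environment:
      - TZ=America/New_York    # your timezone
      - WEBPASSWORD=changeme   # placeholder admin password
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

Then point your router's DHCP at that machine's IP as the DNS server.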
|
|
# ¿ May 17, 2022 16:31 |
If you're using a VPN you don't need an SSL cert; the VPN itself is the protection against man-in-the-middle attacks and packet sniffing.
|
|
# ¿ Jun 1, 2022 01:07 |
Keito posted:Do you mean that you are serving your sites with a CF origin cert now? There are several ways to go about resolving your issue, I'll describe two. Couldn't you use a wildcard cert internally so you just need the one cert for your internal DNS resolvers? No need for a ton of duplication.
|
|
# ¿ Jun 6, 2022 14:53 |
Deployed a GitLab CE (Community Edition, the fully open source version of GitLab) instance on my server and it's pretty neat. I really like being able to automate the builds for my packages with the CI/CD pipeline like you can on GitHub, and honestly the whole CI/CD user interface is better than GitHub's. Getting it to work with an already-deployed reverse proxy was an ENORMOUS PAIN, because GitLab includes its own reverse proxy baked in, and you have to figure out which env flags tell it to "shut this poo poo off", which is not well documented. If anyone ever wants to do it, I'll save you 8 hours of knob turning with this compose file: code:
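The gist of the knobs that matter is the GITLAB_OMNIBUS_CONFIG block (a trimmed sketch, your domain and ports will differ):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com            # placeholder domain
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.example.com'
        # keep the bundled nginx speaking plain http on a known
        # port, and let the external reverse proxy terminate TLS
        nginx['listen_port'] = 80
        nginx['listen_https'] = false
    ports:
      - "8929:80"      # point your reverse proxy at this
      - "2224:22"      # ssh for git pushes
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
```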
Nitrousoxide fucked around with this message at 15:35 on Dec 29, 2022 |
|
# ¿ Dec 29, 2022 15:32 |
I use OpenMediaVault for my server, which, despite its name, works great as one even if you aren't using it as a NAS (I have a separate NAS that I mount as a CIFS share). It's really nice because it has a built-in web GUI, so it can run headless, and it makes a lot of stuff you'd otherwise need a terminal for easier. Install omv-extras (https://wiki.omv-extras.org/) and it'll give you a one-click install of docker (setting up the corresponding user groups and so on) and one-click installs and updates of Portainer, which is also what I use to manage my docker containers.
|
|
# ¿ Jan 9, 2023 22:04 |
Well Played Mauer posted:Cool, thanks. Been doing a bit of side research and it sounds like I could power most of what I need on a ~$400 NUC with Unraid, then connect a Synology NAS when I grow beyond external USB SSDs. That sounds like the expensive option, but it also seems like I could run OMV on the NUC in the meantime. Is that correct? Yep. You could run OMV on either bare metal or through Proxmox as well, if you'd like to be able to experiment with multiple distros.
|
|
# ¿ Jan 9, 2023 23:03 |
Well Played Mauer posted:Gotcha. Generally speaking, would something like this cover me for a NUC? Figure I could drop like 32 gigs RAM and a 2.5" SSD in there and be up and running.

I run my setup on this at $150: https://www.amazon.com/dp/B07WLLR43R Its RAM is already maxed out at 16 gigs, but I'm still okay while running quite a few applications. I'll admit I'm getting reasonably close to redlining the RAM, though; if you're not looking to run ~38 apps you'll be fine. Nitrousoxide fucked around with this message at 00:42 on Jan 10, 2023 |
|
# ¿ Jan 10, 2023 00:32 |
Resdfru posted:44 containers. What are you running? 38 running. 6 of them are leftovers from the ci/cd pipelines running in my gitlab instance. They'll get cleaned up once a week on Saturday. The ones running: code:
Well Played Mauer posted:OK, yeah, this is a better option that I can more easily convince the wife of. Thank you!

Oh, by the way, I had to switch the boot mode on the OptiPlex to BIOS rather than UEFI to get it to boot Linux. Maybe I'm a dummy and it can be done with UEFI, but I just wanted to save you some effort getting that to work. There's a flag for it in the boot menus. Nitrousoxide fucked around with this message at 02:00 on Jan 10, 2023 |
|
# ¿ Jan 10, 2023 01:40 |
Well Played Mauer posted:Good to know. I was thinking of either what you linked or maybe this one. I figure double the hard drive space for $4, but I also know Dell tends to be more driver-friendly.

Poke around on the internet and make sure people were able to install Linux okay on that; if so, I think it'd be a better pick than my suggestion. It's also upgradable to 32 gigs of RAM later if you want. https://support.hp.com/us-en/document/c05371240#AbT2
|
|
# ¿ Jan 10, 2023 05:14 |
Resdfru posted:nice, which homepage do you use? I've had Dashy, Homepage, Homer, Homarr, Heimdall, and I'm sure I'm forgetting one. I just can't decide which one I like. I usually use Heimdall because it's actually set up, as it's the first one I ever tried.

I keep two instances running: one of Homepage and another of Homer. The first is for my apps, and all its URLs go through my internal reverse proxy. The second one, Homer, links directly to each service's IP so I can still get to the sites if my reverse proxy is down. I could probably replace the latter with bookmarks in my browser; I rarely use it. I only connect directly to my server's IP if I'm updating it and the update touches the docker.io package, since that would take down the reverse proxy and I'd lose access to it mid-update. Homepage is nice because I've exposed the docker socket to it (in read-only mode, so it can't actually mess with it) and it can see my container statuses and health. It also hooks into a bunch of containers' APIs so it can return stats on them.

Resdfru posted:Do you use gitlab just for managing the homelab stuff, or is it doing other stuff? I thought about self hosting but in the end I just decided to use Github. I just have github actions that uses my self hosted runner which in turn has full access to docker to run compose up on all my containers. This is probably breaking 100 different security rules but none of this is accessible publicly so if anyone is accessing any of it I'm screwed anyway. also portainer could literally do the same thing out of the box but I wanted to do it this way for no reason

There are three services where the actual dev didn't make a docker image, so I made my own dockerfiles for them. To keep those updated I wanted a CI/CD pipeline to automatically build them, check that they work, and deploy them to an image registry, which Watchtower can check against the currently deployed image to see if there's an update.
GitLab is kind of a big chungus, chewing up 4 gigs of RAM, so I'd not recommend it unless you need the more advanced features like I'm using. If you just need a lightweight git host, Gitea is significantly lighter on system resources. I also keep my docker-compose backups in a git repo, and mirror a few repos I've found on GitHub which I need for odd stuff in my home (example: https://github.com/andymor/keychron-k2-k4-function-keys-linux), just in case they ever go away. I guess I could fork them on GitHub instead, but mirroring was easy enough, heh.
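The pipelines are nothing fancy; the kind of .gitlab-ci.yml involved looks roughly like this (sketch; the smoke-test command is a placeholder, and the `CI_REGISTRY_*` variables are ones GitLab provides):

```yaml
build-and-push:
  image: docker:latest
  services:
    - docker:dind          # docker-in-docker for the build
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    # crude smoke test before it ever hits the registry
    - docker run --rm "$CI_REGISTRY_IMAGE:latest" --version
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

Watchtower then sees the new `:latest` in the registry and redeploys.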
|
|
# ¿ Jan 10, 2023 22:50 |
Keito posted:That's not how sockets work. Bind mounting in a socket with the ro option only means that the container can't delete the socket itself, but you're still giving away full access to control dockerd (which is equivalent to giving away root access to the host system unless you're running dockerd in rootless mode). That's actually why one of the containers I run is docker-socket-proxy https://github.com/Tecnativa/docker-socket-proxy which acts to relay the socket info to whatever service I want without handing over root access to the system itself. Of course, the proxy itself still has access to the real docker socket so it's a potential threat vector, but it limits the number of additional access points to that socket to just one rather than an arbitrarily large number.
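The wiring for it is roughly this (sketch; the dashboard service is a hypothetical consumer that honors DOCKER_HOST, stand in for whatever actually needs the socket):

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - CONTAINERS=1   # allow read-only container listing
      - POST=0         # deny anything that changes state
    networks:
      - socket-net

  dashboard:
    image: example/dashboard          # hypothetical consumer
    environment:
      # talk to the filtered proxy instead of the real socket
      - DOCKER_HOST=tcp://socket-proxy:2375
    networks:
      - socket-net

networks:
  socket-net:
    internal: true     # no route out; only these two can talk
```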
|
|
# ¿ Jan 11, 2023 16:08 |
Traefik or NGINX Proxy Manager are the two options I would recommend.

Traefik is cool because it will automatically pick up and create the proxy routing if you set up docker (or podman) containers on the same server and give them the right labels (and you give it access to the docker.sock file so it gets a stream of data on what's being spun up/down). This means it'll spin up whatever proxy route you need, as you need it, provided you get those labels right in the docker-compose file. It gets somewhat more complicated if you're proxying anything outside of services hosted on the same machine as Traefik, though.

NGINX Proxy Manager (what I use) is nice in that it's got a graphical UI, and it's just as easy to proxy a local service as a service on another machine.

You can also do local DNS resolution with the reverse proxy with both of these if you have something like a Pi-hole (what I do). All my services can be reached at (servicename).internal.(domain).(tld) and cannot be accessed from outside a local IP, by setting the access lists to only allow these ranges:

10.0.0.0/8
172.16.0.0/12
192.168.0.0/16

which covers every possible reserved (RFC 1918) local IP address. Then you can set services that you want accessible to the world at large (like my Nextcloud instance, so I can share items with people) to not use the local IP restriction. Nitrousoxide fucked around with this message at 21:30 on Jan 13, 2023 |
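For reference, the exact RFC 1918 private ranges are easy to sanity-check with Python's stdlib (a quick illustrative snippet, nothing homelab-specific):

```python
import ipaddress

# The RFC 1918 private ranges an "internal only" access list should cover
PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the private IPv4 ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(is_rfc1918("192.168.1.50"))   # True
print(is_rfc1918("172.31.0.1"))     # True - inside 172.16.0.0/12
print(is_rfc1918("172.32.0.1"))     # False - just past the /12
print(is_rfc1918("8.8.8.8"))        # False
```

Note the 172 range is a /12, not a /8: it runs 172.16.0.0 through 172.31.255.255 only.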
|
# ¿ Jan 13, 2023 18:29 |
Well Played Mauer posted:I think I'm gonna get a VM going with Portainer to set up this reverse proxy and other docker-oriented apps. I'll probably throw Debian 11 or whatever on it. I'm not sure how much hardware to provision it, though. The machine I have is a quad-core i5 with 16 gigs of RAM. Can I get away with giving it a couple cores and like 4-5 gigs of RAM? Is that too little, too much?

That's probably plenty. You should also be able to increase the hardware provisioning later if you find it's not performant enough.
|
|
# ¿ Jan 13, 2023 23:46 |
Well Played Mauer posted:I found a dude selling a DS220+ with the 4GB RAM upgrade for like $289 on eBay, then grabbed a couple 6TB WD Red Pluses off Amazon.

Make sure to 3-2-1 backup your server image, or its configs at the very least. You want a backup solution that lets you recover from your house burning down, so you don't lose everything you've set up. I personally don't back up my media folder; that gets way too expensive, so it only exists on my NAS. But I take a monthly disk image of my server with Clonezilla (though you can use the built-in imager for Proxmox, since you're not on bare metal like me) and daily backups of my docker container config folders with duplicity. Both the disk images and the config folders get shot over to my NAS for a local copy on another device, and I use Backblaze B2 storage for an external backup. I have had to use the server image backup once, due to a hardware failure on a previous iteration when the old laptop it ran on died, and I've used the docker config backups a handful of times when I REALLY hosed something up while fiddling with a container's settings.
|
|
# ¿ Jan 14, 2023 14:46 |
Oh, lastly, make sure that you validate your backups work. Spin up a new vm and deploy a backup you made to it and make sure it all works right. There would be nothing worse than thinking you have a good backup solution only to discover it wasn't actually working when it came time to use it to recover from a critical failure.
|
|
# ¿ Jan 14, 2023 17:13 |
Well Played Mauer posted:Got Portainer up and running on a VM. That was surprisingly easy. I installed Calibre-web on it just to see how it works, and aside from some database funkiness between it and Readarr, I got it up and running.

First start-up of a container might take a bit as it creates the config files, but after that it should be faster. I don't use ANY remote-mounted directories for my containers (except media directories); the config folders all live locally on the server to maximize performance. If you're using an external hard drive for your persistent volumes, try the internal SSD instead. You could also try upping the hardware allocation to that VM and see if you get better performance.
|
|
# ¿ Jan 17, 2023 18:53 |
I use the DNS challenge method with Cloudflare for my local wildcard certs, and the standard Let's Encrypt method in NPM for my external-facing services. It's worked well for me for nearly a year now.
|
|
# ¿ Jan 20, 2023 16:04 |
I use a wildcard cert for *.internal.(domain).(tld) so that I only have to set up the Cloudflare challenge once for all my internal services, and they can all use the same SSL cert.
|
|
# ¿ Jan 25, 2023 15:37 |
SamDabbers posted:SANs are the correct way to do it, but then you have to reissue the proxy's certificate every time you want to add or remove a service in your homelab, where you presumably tinker and host relatively short-lived experiments.

Yeah, I use SANs, one each, for all my external services; my internal stuff is all wildcard certs.

Well Played Mauer posted:I've been adding more stuff to the stack since I got up and running and noticed some pretty big performance hits on other services when making changes or rebuilding docker containers on my proxmox machine. After looking at the CPU/memory consumption during the slowdowns and seeing nothing out of the ordinary I finally got off my butt and put a new external SSD onto the machine. Going from a five-year-old Toshiba drive I got as a photo backup to a crucial drive from this decade immediately resolved the issues.

If I ever replace my current bare-metal OptiPlex build with something that has a few slots for internal storage drives, I'll probably switch to Proxmox. It does look pretty great.
|
|
# ¿ Jan 27, 2023 16:09 |