Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Zapf Dingbat posted:

I have a 1000-series Nvidia card passed through on Proxmox and it works fine as a remote Steam machine. I considered using it for Plex but I've never had a problem using CPU-only encoding.

I followed instructions straight off of Proxmox's site.

Same here, I have a 1060 passed through to a VM in Proxmox for a remote gaming computer and it was dead simple to set up. I haven't tried Plex there but I don't see any reason why it wouldn't work or would be any more difficult, it works just like a regular PC with a GPU in it.


Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Scruff McGruff posted:

Same here, I have a 1060 passed through to a VM in Proxmox for a remote gaming computer and it was dead simple to set up. I haven't tried Plex there but I don't see any reason why it wouldn't work or would be any more difficult, it works just like a regular PC with a GPU in it.

Of note, some games' anti-cheat will flare up if it detects virtualization. I don't think it's a ton of games that check for that these days, but it's certainly out there.

Warbird
May 23, 2012

America's Favorite Dumbass

So I think I’ve screwed up.

Last night I set about setting up Gitea on my Synology and was surprised to see that the installation step was taking absolute ages despite being a Docker container. After a bit of digging I think the problem is that the MariaDB instance I'm running is both on a btrfs file system and likely within a share with checksum checking enabled. Both are apparently strongly advised against for anything with a large amount of random writes, which would explain why this git business hasn't finished setting itself up after 7 or so hours, as well as the lackluster performance of DB/VM/some Docker containers in the past.

Naturally my existing volume takes up everything on the NAS so there is no reasonable path forward to pull the data off so I can go about making a smaller volume that’s more performant for that sort of workload. Fun.

This said, I sure do have a Proxmox machine that isn't doing a great deal and would likely be a far better home for both my DB and container infrastructure. Is there any "idiot's guide" for this sort of thing I should know about? Shifting things over isn't going to be particularly hard, but I'd like to avoid gotchas like this in the future.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Thanks everybody. Gonna give it a go today.

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.
What do people have set up for monitoring logs/metrics of their server(s)? Been eyeing up setting up vector on each VM and sending logs to Loki to show in Grafana.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

hogofwar posted:

What do people have set up for monitoring logs/metrics of their server(s)? Been eyeing up setting up vector on each VM and sending logs to Loki to show in Grafana.

I use netdata and their cloud dashboard

cruft
Oct 25, 2007

I just ssh in and run top when it feels slow.

BlankSystemDaemon
Mar 13, 2009



Learn to generate flame graphs with tracing via the USE method that Brendan Gregg invented; that'll give you answers that top (or any other utility) can't.
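For CPU time specifically, a minimal flame graph recipe on Linux looks something like this (assumes `perf` is installed, root access, and a local checkout of Gregg's FlameGraph scripts; the script paths are placeholders):

```
# Sample all CPUs at 99 Hz for 30 seconds, keeping stack traces
perf record -F 99 -a -g -- sleep 30

# Fold the stacks and render an interactive SVG
# (stackcollapse-perf.pl and flamegraph.pl come from Brendan Gregg's
# FlameGraph repo; adjust the paths to wherever you cloned it)
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > cpu.svg
```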

Head Bee Guy
Jun 12, 2011

Retarded for Busting
Grimey Drawer
I'm thinking of repurposing a 2015 i7 MacBook Pro into a Plex server + backup + Pi-hole server. My Plex server is currently on the Linux partition of my gaming rig, which means my roommates can't watch something on Plex if I'm gaming on the Windows partition.

- Is a standard USB external hard drive capable of smoothly serving 4K video via Plex?

- I recall some people complaining about Docker on Mac. Am I better off doing all of the above from a fresh Linux install?

e: I'm somewhat familiar with Linux, but these days I have very little interest in janitoring yet another machine.

Head Bee Guy fucked around with this message at 15:55 on Aug 1, 2023

Well Played Mauer
Jun 1, 2003

We'll always have Cabo

Head Bee Guy posted:

I'm thinking of repurposing a 2015 i7 MacBook Pro into a Plex server + backup + Pi-hole server. My Plex server is currently on the Linux partition of my gaming rig, which means my roommates can't watch something on Plex if I'm gaming on the Windows partition.

- Is a standard USB external hard drive capable of smoothly serving 4K video via Plex?

- I recall some people complaining about Docker on Mac. Am I better off doing all of the above from a fresh Linux install?

e: I'm somewhat familiar with Linux, but these days I have very little interest in janitoring yet another machine.

I did this on a 2019 i9 with a lot of success until I had a drive failure and repurposed some other hardware just to get on a full Linux stack. That said, the MacBook performed admirably.

I just used the native apps for Plex/whatever else you need to manage your media, and used external USB-C drives for everything. I ended up using an external SSD (the Samsung T7) for downloading and unpacking, then moved everything to HDDs to store and serve the content. I didn’t have any issues with the setup outside of a power failure tanking some drives.

You’ll want a wired connection to your router, most likely, but the hardware can handle the load, no problem.

If you wind up having to transcode 4K, that could be an issue since you apparently need a 10-series nvidia card to even begin to pull that off. But that’ll be an issue on anything you use.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
How do you build redundancy into a self-hosted app?

Like if my Nextcloud is up on computer a, but computer a then burns to the ground, is there any way for computer b in another place to automatically run a mirror of that Nextcloud?

Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.

tuyop posted:

How do you build redundancy into a self-hosted app?

Like if my Nextcloud is up on computer a, but computer a then burns to the ground, is there any way for computer b in another place to automatically run a mirror of that Nextcloud?

Containers would be the answer. What you are describing is pretty much what kubernetes was designed for.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

Heck Yes! Loam! posted:

Containers would be the answer. What you are describing is pretty much what kubernetes was designed for.

So docker doesn’t do this kind of thing? Should I even bother looking into kubernetes? Everyone makes it sound so dreadful.

Warbird
May 23, 2012

America's Favorite Dumbass

That’s because it is. Docker Swarm should be looked into first as it does more or less what you’re looking for but isn’t an experience akin to slamming your hog in a car door to maintain and set up. If you’re not an enterprise setup you likely don’t need K8s.

This said, you'd do well to look into High Availability (HA) setups for your app of choice to see if it even plays nice in that sort of setup. If it's not meant for that sort of architecture, it's likely to become a job in and of itself to get it to play nice.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Warbird posted:

That’s because it is. Docker Swarm should be looked into first as it does more or less what you’re looking for but isn’t an experience akin to slamming your hog in a car door to maintain and set up. If you’re not an enterprise setup you likely don’t need K8s.

This said, you'd do well to look into High Availability (HA) setups for your app of choice to see if it even plays nice in that sort of setup. If it's not meant for that sort of architecture, it's likely to become a job in and of itself to get it to play nice.

I would say that high availability is not really necessary for self hosted apps. Disaster recovery, on the other hand, is definitely necessary.

Meaning, if your machine burns to the ground, it should be fairly trivial to spin up a new one without too much hassle. Maybe it takes an hour and a few manual steps - that's OK for self hosted poo poo.

I would focus more on having backups, a scripted way of provisioning the server, and verifying that your disaster recovery process works. The last part is very important, so you don't find out your DR process doesn't work only after a disaster has already occurred.

GitLab Pipelines, Ansible, Docker, and some AWS CLI commands to interact with S3 are my solution for that.

Warbird
May 23, 2012

America's Favorite Dumbass

Hard agree. 3-2-1 it and call it a day. Your home lab should be for fun and whatever the extreme opposite of profit is. Don’t turn it into a job.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I'm currently using Nomad to run my home lab (it's basically a step up from Docker Swarm, but still wayyyy simpler than Kubernetes), and used to do it at my day job, and I hard agree too - in fact, I don't bother with HA at all despite using a tool that can provide it.

There's a massive difference in required effort between "I want to be able to bring the app back online in a few minutes" vs. "I want the apps to go back online automatically", at least when the apps are stateful - and almost anything you'd care to self-host is going to be stateful.

A better reason to learn an orchestration system is to have a few text files that describe the full state of the system, so you can easily put them in a git repo or just copy around, and then you can bring everything back online with "docker compose up" or "nomad run" or "nix something" I guess, even if it's two years later and you've forgotten a bunch of details.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



NihilCredo posted:

I'm currently using Nomad to run my home lab (it's basically a step up from Docker Swarm, but still wayyyy simpler than Kubernetes), and used to do it at my day job, and I hard agree too - in fact, I don't bother with HA at all despite using a tool that can provide it.

There's a massive difference in required effort between "I want to be able to bring the app back online in a few minutes" vs. "I want the apps to go back online automatically", at least when the apps are stateful - and almost anything you'd care to self-host is going to be stateful.

A better reason to learn an orchestration system is to have a few text files that describe the full state of the system, so you can easily put them in a git repo or just copy around, and then you can bring everything back online with "docker compose up" or "nomad run" or "nix something" I guess, even if it's two years later and you've forgotten a bunch of details.

Docker/Podman can also recover your apps from crashes if you set up health checks and restart conditions. It's not gonna drain and migrate stuff to a different physical host if the server itself goes down, but it has a substantial ability to recover.
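As a sketch of what that looks like in a Compose file — the image name, port, and health endpoint below are placeholders, not anything from this thread:

```
# docker-compose.yml -- hypothetical app with auto-restart + health check
services:
  myapp:
    image: example/myapp:latest     # placeholder image
    restart: unless-stopped         # come back after crashes and reboots
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

One caveat worth knowing: `restart` policies react to the process dying, but a failing health check on its own only flips the container's status to "unhealthy"; you need something watching that status (e.g. an autoheal-style container) if you want unhealthy containers restarted automatically.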

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
This is all really helpful and informative. I should definitely be using a proper system like git for this stuff.

Warbird
May 23, 2012

America's Favorite Dumbass

Oh, that reminds me. I was dinking around with Gitea the other day and had everything working except SSH-authenticated repo stuff. How does that even work with a reverse proxy? I tried umpteen different ways and never had any success.

cruft
Oct 25, 2007

Warbird posted:

Oh, that reminds me. I was dinking around with Gitea the other day and had everything working except SSH-authenticated repo stuff. How does that even work with a reverse proxy? I tried umpteen different ways and never had any success.

A reverse proxy usually means HTTP. SSH does not run over HTTP; you want a port forward.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Warbird posted:

Oh, that reminds me. I was dinking around with Gitea the other day and had everything working except SSH-authenticated repo stuff. How does that even work with a reverse proxy? I tried umpteen different ways and never had any success.

What reverse proxy are you using? Most are HTTP/S reverse proxies and may or may not support other protocols; I think nginx has a module for generic non-HTTP TCP connections. In SSH contexts, a "reverse proxy" is usually called a "jump box" and is just a regular SSH server that you happen to tunnel connections through.

The issue with SSH (and many non-HTTP protocols) is that it doesn't have a "Host: someservice.somedomain.com" header; you just connect to a certain IP and port. So a reverse proxy can't look at the incoming request and figure out which service it's meant for. You need to open a dedicated port and forward all SSH connections from that port to Gitea.

IMO, git over HTTPS is gonna be much simpler unless some of your tooling doesn't support it.
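If you do want nginx to carry the SSH traffic anyway, the generic-TCP route mentioned above is its `stream` module. Roughly like this — the port and the Gitea host are placeholders, and note that `stream {}` sits at the top level of nginx.conf, not inside `http {}`:

```
stream {
    server {
        listen 2222;                   # public port you expose for git-over-ssh
        proxy_pass 192.168.1.50:22;    # placeholder: the Gitea box's SSH port
    }
}
```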

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
What are some resources I can look into for “dev ops” stuff like this? There’s a lot of new technologies in the space that I’m mostly unfamiliar with.

Blurb3947
Sep 30, 2022

tuyop posted:

What are some resources I can look into for “dev ops” stuff like this? There’s a lot of new technologies in the space that I’m mostly unfamiliar with.

roadmap.sh is a site that gives a good overview of all the stuff you can learn to be specialized in whatever kind of IT track. Devops, frontend, coding languages, system architecture, cyber security, QA, etc.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I usually just set up my SSH hosts in my SSH config file so I can use whatever alias I want for them. They should have static IPs anyway if you're running them through a reverse proxy, so the configured IPs should hold true.
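For what it's worth, a minimal version of that alias setup looks like this (the hostname, IP, and port are placeholders):

```
# ~/.ssh/config
Host gitea
    HostName 192.168.1.50   # static IP of the Gitea box
    Port 2222               # whatever port you forwarded to its SSH service
    User git
```

Then `git clone gitea:someuser/somerepo.git` works without remembering the IP or port.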

Resdfru
Jun 4, 2004

I'm a freak on a leash.
Has anyone run Steam via Docker or in a VM for remote gaming through a browser or iPad or something?

I was looking into building my kids their own computers, but I realized a possible stopgap till I do that is just throwing a couple VMs up on my server and getting a good/OK Nvidia card for it. The games they play aren't very demanding. Really, at the moment it's one game: Wobbly Life, which I ran on a NUC to see if it worked, and it did. (These containers/VMs won't be on NUCs, just saying it ran on something with no GPU.)

It looks like my options are entire VMs or a container like this: https://hub.docker.com/r/linuxserver/kasm or
https://github.com/Steam-Headless/docker-steam-headless

It seems random people on the internet do this, but I was wondering if anyone here has ever done it and has any thoughts/suggestions.

edit: oh, Steam Headless has support for AMD, and I have an old RX 580 or something lying around. Maybe I can test it myself.

Resdfru fucked around with this message at 02:16 on Aug 8, 2023

Aware
Nov 18, 2003
I don't think headless is quite what you want; you likely want to pass the GPU through to the VM and then stream the GPU's output. I use little HDMI monitor fakers for this purpose - https://www.amazon.com.au/fit-Headless-GS-Resolution-Emulator-Game-Streaming/dp/B01EK05WTY

Or rather I used to, back when I was futzing around with the whole idea. Then you can use Steam Link, Sunshine, or any other streaming setup.

Edit - actually read your link; that does kind of do what you're asking, though the idea of playing games via VNC horrifies me. I suspect it's more for running servers for games that don't have a dedicated server option.

blunt
Jul 7, 2005

Parsec might be what you're looking for; the self-hosted version is free.

https://parsec.app/features

ToxicFrog
Apr 26, 2008


I'm on my yearly-or-so hunt for something to replace Airsonic with and was wondering if anyone here had recommendations.

Airsonic (well, airsonic-advanced) is still working mostly ok, but it has some glitches¹ and the user interface is kind of bad. Also, it's been unmaintained for years.

Hard requirements:
- browse by folder
- listen-in-browser
- cache for offline listening on mobile (anything that supports the subsonic API has this)
- multiple user accounts, with access to different libraries/stars/playlists/etc
- server-side configurable transcoding support (i.e. I need to be able to say "to decode format X, use command Y")

Nice-to-haves:
- DLNA export
- some sort of homeassistant integration

Stuff I've tried and ruled out (both independent servers, and alternate UIs for subsonic-compatible backends):
- Jellyfin (I use it for video but music support is lacking, and no offline listening option)
- Moode (love the UI, but it only plays on a physically connected device, so it's more of a slimp3/squeezebox replacement)
- subplayer, airsonic-refix: no browse by folder support
- jamstash: UI is lacking a lot of basic features like a now-playing queue

Stuff I haven't tried:
- funkwhale, navidrome, ampache: no browse by folder support
- koozic: unmaintained
- substreamer-web: closed source, no idea what it supports, as of earlier this year was very early in development and had some weird issues like "you need to shrink the window to make all the controls appear"
- supysonic, gonic: would probably do what I need on the backend but need to be paired with a suitable UI to function

It's looking like Gonic or (maybe) Supysonic + some sort of web-based frontend would be the best way to go if I'm ditching Airsonic; Gonic meets all of my requirements, Supysonic is missing user access controls but if it comes down to it I can work around that by running multiple instances with different libraries configured. The problem is that both of those are backend-only, and need to be paired with a subsonic-compatible frontend to function. And all of the frontends I've tried have serious issues of their own.

So, anyone have recommendations for stuff I've missed (either subsonic-compatible web frontends, or all-in-one music servers), usage experience with substreamer/supysonic/gonic, or other ideas?

---
¹ The funniest is probably that the browse-tags-by-genre view is full of hundreds of empty genres that do not exist in my library, like "Contemporary Country Country Neo-Traditionalist Country Pop" and "Canadian Bush Swing".

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I also have a hard "browse by folder" requirement & web based front-end, and went down a similar path as you. I've still just been using the OG Subsonic & DSub for my mobile needs. Subsonic hasn't been updated in ages and airsonic-advanced was too glitchy for me. gonic looks pretty nice, I will have to check that out.

I assume you've gone through the https://github.com/basings/selfhosted-music-overview on your quest many times by now

Supersonic looks nice for a web front-end... but no browse by folder support yet! Rats.

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

ToxicFrog posted:

Stuff I've tried and ruled out (both independent servers, and alternate UIs for subsonic-compatible backends):
- Jellyfin (I use it for video but music support is lacking, and no offline listening option)

I use the Finamp player on my iphone and it supports offline listening. It only does audio media from Jellyfin but that's all I need it to do. Not the most full-featured player but does what I need.

I, too, ran into this issue and this was the solution I found that worked best.

ToxicFrog
Apr 26, 2008


Nam Taf posted:

I use the Finamp player on my iphone and it supports offline listening. It only does audio media from Jellyfin but that's all I need it to do. Not the most full-featured player but does what I need.

I, too, ran into this issue and this was the solution I found that worked best.

Needs a functional browser client too, and Jellyfin's music functionality is, well, I wouldn't call it "functional" for day to day use.

fletcher posted:

I also have a hard "browse by folder" requirement & web based front-end, and went down a similar path as you. I've still just been using the OG Subsonic & DSub for my mobile needs. Subsonic hasn't been updated in ages and airsonic-advanced was too glitchy for me. gonic looks pretty nice, I will have to check that out.

I assume you've gone through the https://github.com/basings/selfhosted-music-overview on your quest many times by now

Supersonic looks nice for a web front-end... but no browse by folder support yet! Rats.

None of the emoji on that page render for me, which makes reading it somewhat difficult.

After some rummaging (and chatting with the gonic dev for a while), I think the solution I'm drifting towards is:
- gonic on the backend
--- needs a patch to support importing audio formats, like tracker modules, that TagLib doesn't support
--- needs a custom PATH so that when it uses `ffmpeg` to transcode things it calls a wrapper that can invoke different tools for different formats, rather than blindly calling ffmpeg with the same arguments on everything
- airsonic-refix on the frontend
--- browse-by-file support is available in a PR here
--- some other functionality (play entire high-level directory/artist/genre, album art in browse by file mode) is missing compared to the stock UI but I can do without that if needed
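The PATH-wrapper idea above can be sketched as a tiny dispatch shim. This is just an illustration of the approach, not gonic's actual mechanism; the tool name (openmpt123) and the extension list are assumptions:

```shell
# Sketch of a fake "ffmpeg" shim placed on PATH ahead of the real binary.
# pick_decoder prints which tool should handle a given input file.
pick_decoder() {
    case "$1" in
        *.mod|*.xm|*.it|*.s3m) echo "openmpt123" ;;       # tracker modules
        *)                     echo "/usr/bin/ffmpeg" ;;  # everything else
    esac
}

# The real wrapper would scan "$@" for the argument following -i,
# then exec "$(pick_decoder "$input")" with suitable arguments.
pick_decoder "example.xm"
```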

Probably going to do some hacking on that on the weekend and we'll see how it turns out.

Blurb3947
Sep 30, 2022
Haven't messed with any of this but want to know if it's possible to connect to home via VPN but also get the benefits of my Proton VPN service. I could use Proton at the router level but I'd rather not as I prefer being able to change my locations based on different needs and the VPN config only allows a single server.

Would it be best to set up a server with Wireguard so I can access internal devices out and about but also have Proton on there too? Would that even work?

cruft
Oct 25, 2007

Blurb3947 posted:

Haven't messed with any of this but want to know if it's possible to connect to home via VPN but also get the benefits of my Proton VPN service.

Yes.

Have your home VPN not establish a default route (aka Gateway).

For extra avoiding-unnecessary-slowdown goodness, be sure your Proton VPN isn't trying to route traffic to home.
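Concretely, on the client side of the home WireGuard tunnel that just means keeping AllowedIPs narrow. Everything in this sketch is a placeholder:

```
# Client wg0.conf sketch -- NOT a default route, so Proton keeps 0.0.0.0/0
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <home-server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24   # home subnets only
```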

Quixzlizx
Jan 7, 2007
I have a server that, for now, really only hosts Plex and FoundryVTT, and only Foundry is exposed to the internet. After reading this thread for a bit, I figured I should at least protect myself a little more, and I ended up with a domain ---> Cloudflare DNS/proxy --> Caddy reverse proxy --> Foundry chain, with end-to-end https.

I have a couple of questions about this. Maybe someone could explain to me like I'm an idiot how the reverse proxy is blocking traffic rather than just redirecting it. I had port 30000 forwarded to my server originally, but now I have ports 80 and 443 forwarded, which Caddy is still redirecting to Foundry. Is it that, since I only have a "foundry.mywebsite.com" rule in my Caddy config, that all traffic that doesn't originate from there is blocked, so I don't have to worry about random port sniffers directly scanning my IP address getting anywhere?

I also want to implement fail2ban so that randoms can't brute force their way into Foundry. Unfortunately, Caddy doesn't support CLF, so I can't just use one of f2b's default profiles. I managed to find this link, which is essentially a standard f2b jail, but with a custom regex (which I can't wrap my head around) in order to filter Caddy's logs into a format that f2b can parse. I understand how f2b works, and I'll probably slightly modify his jail configs to drop max retries down to 5, but with a 300 second findtime so that I don't end up with players with 10 failed login attempts over 3 years getting banned, but I just want to make sure that the regex code makes sense, since I don't want to just copy/paste code I don't understand.
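One way to convince yourself the regex makes sense before trusting it: run the failregex against a sample log line by hand. This sketch assumes a Caddy v2 JSON access-log shape (check your own log output — field names like remote_ip have changed between Caddy versions) and substitutes a simplified stand-in for fail2ban's <HOST> token:

```python
import re

# Hand-written sample in the shape of a Caddy v2 JSON access log entry
# (an assumption -- compare against your actual log output)
line = ('{"logger":"http.log.access",'
        '"request":{"remote_ip":"203.0.113.7",'
        '"host":"foundry.example.com","uri":"/join"},'
        '"status":403}')

# A fail2ban-style failregex; fail2ban replaces <HOST> with its own
# host pattern, so we swap in a simple IP-ish pattern to test by hand
failregex = r'"remote_ip":"<HOST>".*"status":403'
pattern = failregex.replace("<HOST>", r"(?P<host>[0-9a-fA-F.:]+)")

m = re.search(pattern, line)
print(m.group("host") if m else "no match")  # -> 203.0.113.7
```

fail2ban also ships a `fail2ban-regex` CLI that does this same check against a real log file, which is worth running before deploying the jail.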

Zapf Dingbat
Jan 9, 2001


Quixzlizx posted:

I have a server that, for now, really only hosts Plex and FoundryVTT, and only Foundry is exposed to the internet. After reading this thread for a bit, I figured I should at least protect myself a little more, and I ended up with a domain ---> Cloudflare DNS/proxy --> Caddy reverse proxy --> Foundry chain, with end-to-end https.

I have a couple of questions about this. Maybe someone could explain to me like I'm an idiot how the reverse proxy is blocking traffic rather than just redirecting it. I had port 30000 forwarded to my server originally, but now I have ports 80 and 443 forwarded, which Caddy is still redirecting to Foundry. Is it that, since I only have a "foundry.mywebsite.com" rule in my Caddy config, that all traffic that doesn't originate from there is blocked, so I don't have to worry about random port sniffers directly scanning my IP address getting anywhere?


So when you say you have the ports forwarded, you mean at your router, right? Sorry if that's a dumb question.

I run NGINX instead of Caddy, so I can't give any exact advice, but how do you know Caddy's blocking the traffic? Have you confirmed where the traffic stops? Do you see Caddy establish a connection in its logs? If so, what sort of errors are you getting? The same goes for Foundry. I don't know what kind of logs Foundry would have, but I would think you could at least tcpdump the traffic.

Quixzlizx
Jan 7, 2007

Zapf Dingbat posted:

So when you say you have the ports forwarded, you mean at your router, right? Sorry if that's a dumb question.

I run NGINX instead of Caddy, so I can't give any exact advice, but how do you know Caddy's blocking the traffic? Have you confirmed where the traffic stops? Do you see Caddy establish a connection in its logs? If so, what sort of errors are you getting? The same goes for Foundry. I don't know what kind of logs Foundry would have, but I would think you could at least tcpdump the traffic.

Maybe I worded my post badly. In my router, I have port 443 forwarded to my server. If someone visits "foundry.mywebsite.com," Caddy correctly sends that traffic to port 30000, which is the port Foundry is listening on.

My question was more a general question about how reverse proxies are supposed to work. I have a specific rule for "foundry.mywebsite.com" in my Caddy config file, and it's working as expected. My question was what happens if someone scans port 443 while not coming from "foundry.mywebsite.com," as in directly using my IP address. Do reverse proxies just not resolve those connections at all, since there's no rule defined for that? Or do I have to explicitly block all attempted connections that don't fit one of my defined rules?

Motronic
Nov 6, 2009

Quixzlizx posted:

My question was what happens if someone scans port 443 while not coming from "foundry.mywebsite.com," as in directly using my IP address. Do reverse proxies just not resolve those connections at all, since there's no rule defined for that? Or do I have to explicitly block all attempted connections that don't fit one of my defined rules?

Generally they won't do anything. Sometimes there is a default page/site. It depends on the proxy and how you configured it/how your install was configured. This is very much a "read the docs" thing, not a "reverse proxies all do this".
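If you'd rather make that behavior explicit in Caddy than rely on the default, a common pattern is a named site plus a catch-all block. The site name and upstream port here are placeholders:

```
foundry.mywebsite.com {
    reverse_proxy localhost:30000
}

# Anything hitting the bare IP (no matching hostname) lands here
:443 {
    abort   # close the connection without responding
}
```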

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Quixzlizx posted:

Maybe I worded my post badly. In my router, I have port 443 forwarded to my server. If someone visits "foundry.mywebsite.com," Caddy correctly sends that traffic to port 30000, which is the port Foundry is listening on.

My question was more a general question about how reverse proxies are supposed to work. I have a specific rule for "foundry.mywebsite.com" in my Caddy config file, and it's working as expected. My question was what happens if someone scans port 443 while not coming from "foundry.mywebsite.com," as in directly using my IP address. Do reverse proxies just not resolve those connections at all, since there's no rule defined for that? Or do I have to explicitly block all attempted connections that don't fit one of my defined rules?

GET requests include a Host header that indicates which website you're trying to reach. A reverse proxy that receives a request without a matching Host header just won't forward that request along anywhere and will return a 403 error to the requester.


Zapf Dingbat
Jan 9, 2001


Sorry about that. Yeah, Caddy would be seeing which domain the outside traffic is trying to reach and forwarding it to the appropriate server.

I don't know if this is best practice or anything, but you can have the proxy handle all the TLS/SSL and have the traffic inside your network be plain HTTP. I've got the Let's Encrypt renew scripts running on the proxy.
