|
Zapf Dingbat posted:I have a 1000-series Nvidia card passed through on Proxmox and it works fine as a remote Steam machine. I considered using it for Plex but I've never had a problem using CPU only for encoding. Same here, I have a 1060 passed through to a VM in Proxmox for a remote gaming computer and it was dead simple to set up. I haven't tried Plex there but I don't see any reason why it wouldn't work or be any more difficult; it works just like a regular PC with a GPU in it.
|
# ? Jul 31, 2023 12:22 |
|
|
# ? Jun 10, 2024 12:09 |
Scruff McGruff posted:Same here, I have a 1060 passed through to a VM in Proxmox for a remote gaming computer and it was dead simple to set up. I haven't tried Plex there but I don't see any reason why it wouldn't work or be any more difficult; it works just like a regular PC with a GPU in it. Of note, some games' anti-cheat will flare up if it detects virtualization. I don't think it's a ton of games that check for that these days, but it's certainly out there.
|
|
# ? Jul 31, 2023 13:36 |
|
So I think I’ve screwed up. Last night I set about setting up Gitea on my Synology and was surprised to see that the installation step was taking absolute ages despite being a Docker container. After a bit of digging I think the problem is that the MariaDB instance I’m running is both on a btrfs file system and likely within a share with checksum checking enabled. Both are apparently strongly advised against for anything with a large amount of random writes, which potentially explains why this git business hasn’t finished setting itself up after 7 or so hours, as well as the lackluster performance of DB/VM/some Docker containers in the past. Naturally my existing volume takes up everything on the NAS, so there is no reasonable path forward to pull the data off and make a smaller volume that’s more performant for that sort of workload. Fun. This said, I sure do have a Proxmox machine that isn’t doing a great deal and would likely be a better home for both my DB and container infrastructure. Is there any “idiot’s guide” for this sort of thing I should know about? Naturally shifting things over isn’t going to be particularly hard, but I’d like to avoid gotchas like this in the future.
|
# ? Jul 31, 2023 13:42 |
|
Thanks everybody. Gonna give it a go today.
|
# ? Jul 31, 2023 15:59 |
|
What do people have set up for monitoring logs/metrics of their server(s)? Been eyeing up setting up vector on each VM and sending logs to Loki to show in Grafana.
|
# ? Jul 31, 2023 23:27 |
hogofwar posted:What do people have set up for monitoring logs/metrics of their server(s)? Been eyeing up setting up vector on each VM and sending logs to Loki to show in Grafana. I use netdata and their cloud dashboard
|
|
# ? Jul 31, 2023 23:45 |
|
I just ssh in and run top when it feels slow.
|
# ? Aug 1, 2023 02:42 |
Learn to generate flamegraphs with tracing via the USE method that Brendan Gregg invented; that'll give you answers that top (or any other utility) can't.
|
|
# ? Aug 1, 2023 10:17 |
|
I’m thinking of repurposing a 2015 i7 macbook pro into a plex server + backup + pihole server. My plex server is currently on the linux partition of my gaming rig, which means my roommates can’t watch something on plex if I’m gaming on the windows partition.
- Is a standard usb external harddrive capable of smoothly serving 4k video via plex?
- I recall some people complaining about docker on mac. Am I better off doing all of the above from a fresh linux install?
e: i’m somewhat familiar with linux, but these days i have very little interest in janitoring yet another machine. Head Bee Guy fucked around with this message at 15:55 on Aug 1, 2023 |
# ? Aug 1, 2023 15:52 |
|
Head Bee Guy posted:I’m thinking of repurposing a 2015 i7 macbook pro into a plex server + back up + pihole server. My plex server is currently on the linux partition of my gaming rig, which means my roommates can’t watch something on plex if I’m gaming on the windows partition. I did this on an i9 2019 with a lot of success until I had a drive failure and repurposed some other hardware just to get on a full Linux stack. That said the MacBook performed admirably. I just used the native apps for Plex/whatever else you need to manage your media, and used external USB-C drives for everything. I ended up using an external SSD (the Samsung T7) for downloading and unpacking, then moved everything to HDDs to store and serve the content. I didn’t have any issues with the setup outside of a power failure tanking some drives. You’ll want a wired connection to your router, most likely, but the hardware can handle the load, no problem. If you wind up having to transcode 4K, that could be an issue since you apparently need a 10-series nvidia card to even begin to pull that off. But that’ll be an issue on anything you use.
|
# ? Aug 1, 2023 16:09 |
How do you build redundancy into a self-hosted app? Like if my Nextcloud is up on computer a, but computer a then burns to the ground, is there any way for computer b in another place to automatically run a mirror of that Nextcloud?
|
|
# ? Aug 4, 2023 03:08 |
|
tuyop posted:How do you build redundancy into a self-hosted app? Containers would be the answer. What you are describing is pretty much what kubernetes was designed for.
|
# ? Aug 4, 2023 03:26 |
Heck Yes! Loam! posted:Containers would be the answer. What you are describing is pretty much what kubernetes was designed for. So docker doesn’t do this kind of thing? Should I even bother looking into kubernetes? Everyone makes it sound so dreadful.
|
|
# ? Aug 4, 2023 03:42 |
|
That’s because it is. Docker Swarm should be looked into first, as it does more or less what you’re looking for but isn’t an experience akin to slamming your hog in a car door to set up and maintain. If you’re not an enterprise setup you likely don’t need K8s. This said, you’d do well to look into High Availability (HA) setups for your app of choice to see if it even plays nice in that sort of setup. If it’s not meant for that sort of architecture, it’s likely to become a job in and of itself to get it to play nice.
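For a sense of scale, a minimal Swarm stack file is just a compose file with a deploy section (service name and image below are placeholders):

```yaml
# docker-stack.yml -- deployed with: docker stack deploy -c docker-stack.yml mystack
version: "3.8"
services:
  myapp:
    image: myapp:latest          # placeholder image
    ports:
      - "8080:80"
    deploy:
      replicas: 2                # Swarm reschedules replicas if a node goes down
      restart_policy:
        condition: on-failure
```

The HA caveat above still bites here: a stateful app needs its data on storage every node can reach, or the rescheduled replica comes up empty.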
|
# ? Aug 4, 2023 03:52 |
Warbird posted:That’s because it is. Docker Swarm should be looked into first as it does more or less what you’re looking for but isn’t an experience akin to slamming your hog in a car door to maintain and set up. If you’re not an enterprise setup you likely don’t need K8s. I would say that high availability is not really necessary for self hosted apps. Disaster recovery, on the other hand, is definitely necessary. Meaning, if your machine burns to the ground, it should be fairly trivial to spin up a new one without too much hassle. Maybe it takes an hour and a few manual steps - that's OK for self hosted poo poo. I would focus more on having backups, a scripted way of provisioning the server, and verifying that your disaster recovery process works. The last part is very important, so you don't find out your DR process doesn't work at the point a disaster has already occurred. Gitlab Pipelines, Ansible, Docker, and some AWS CLI commands to interact with S3 are my solution for that.
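As a concrete (and heavily simplified) sketch of the backup half - the directory names and S3 bucket are made up, and the actual upload is left as a comment since it needs credentials:

```shell
#!/bin/sh
# Minimal disaster-recovery backup sketch: snapshot app data, date it, ship it off-site.
set -eu

DATA_DIR="${DATA_DIR:-./appdata}"      # whatever your containers mount
BACKUP_DIR="${BACKUP_DIR:-./backups}"
STAMP="$(date +%Y-%m-%d)"
ARCHIVE="$BACKUP_DIR/appdata-$STAMP.tar.gz"

mkdir -p "$DATA_DIR" "$BACKUP_DIR"
tar -czf "$ARCHIVE" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"

# Off-site copy -- swap in your own bucket; needs configured AWS credentials:
# aws s3 cp "$ARCHIVE" "s3://my-backup-bucket/appdata/"

echo "wrote $ARCHIVE"
```

Restore is then just pulling the latest archive down and untarring it next to a freshly provisioned server - which is exactly the path worth rehearsing before you actually need it.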
|
|
# ? Aug 4, 2023 04:58 |
|
Hard agree. 3-2-1 it and call it a day. Your home lab should be for fun and whatever the extreme opposite of profit is. Don’t turn it into a job.
|
# ? Aug 4, 2023 06:12 |
|
I'm currently using Nomad to run my home lab (it's basically a step up from Docker Swarm, but still wayyyy simpler than Kubernetes), and used to do it at my day job, and I hard agree too - in fact, I don't bother with HA at all despite using a tool that can provide it. There's a massive difference in required effort between "I want to be able to bring the app back online in a few minutes" vs. "I want the apps to go back online automatically", at least when the apps are stateful - and almost anything you'd care to self-host is going to be stateful. A better reason to learn an orchestration system is to have a few text files that describe the full state of the system, so you can easily put them in a git repo or just copy around, and then you can bring everything back online with "docker compose up" or "nomad run" or "nix something" I guess, even if it's two years later and you've forgotten a bunch of details.
|
# ? Aug 4, 2023 23:26 |
NihilCredo posted:I'm currently using Nomad to run my home lab (it's basically a step up from Docker Swarm, but still wayyyy simpler than Kubernetes), and used to do it at my day job, and I hard agree too - in fact, I don't bother with HA at all despite using a tool that can provide it. Docker/Podman can also recover your apps from crashes if you set up health checks and restart conditions. I mean, it's not gonna drain and migrate stuff to a different physical host if the server itself goes down, but it has a substantial ability to recover.
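For illustration, the compose-level knobs look something like this (image and health endpoint are placeholders):

```yaml
services:
  myapp:
    image: myapp:latest              # placeholder
    restart: unless-stopped          # brings the container back after a crash
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]  # assumed endpoint
      interval: 30s
      timeout: 5s
      retries: 3
```

Worth noting that `restart:` reacts to the process exiting, not to a failing health check - on plain Docker the health status is informational unless something like Swarm (or an autoheal-style sidecar) acts on it.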
|
|
# ? Aug 5, 2023 00:08 |
This is all really helpful and informative. I should definitely be using a proper system like git for this stuff.
|
|
# ? Aug 5, 2023 04:05 |
|
Oh, that reminds me. I was dinking around with Gitea the other day and had everything working except SSH-authenticated repo stuff. How does that even work with a reverse proxy? I tried umpteen different ways of it and never had any success.
|
# ? Aug 5, 2023 13:37 |
|
Warbird posted:Oh, that reminds me. I was dinking around with Gitea the other day and had everything working except SSH-authenticated repo stuff. How does that even work with a reverse proxy? I tried umpteen different ways of it and never had any success. Reverse proxy usually means HTTP. SSH does not run over HTTP. You want a port forward.
|
# ? Aug 5, 2023 14:17 |
|
Warbird posted:Oh, that reminds me. I was dinking around with Gitea the other day and had everything working except SSH-authenticated repo stuff. How does that even work with a reverse proxy? I tried umpteen different ways of it and never had any success. What reverse proxy are you using? Most are HTTP/S reverse proxies, and may or may not support other protocols; I think nginx has a module for generic non-HTTP TCP connections. In SSH contexts, a "reverse proxy" is usually called a "jump box" and is just a regular SSH server that you happen to tunnel connections through. The issue with SSH (and many non-HTTP protocols) is that it doesn't have a "Host: someservice.somedomain.com" header - you just connect to a certain IP and port. So a reverse proxy can't just look at the incoming request and figure out it's meant for a certain service - you need to open a dedicated port and forward all SSH connections from that port to Gitea. IMO, git over HTTPS is gonna be much simpler unless some of your tooling doesn't support it.
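To sketch the dedicated-port approach with nginx's stream module (the port and upstream address here are invented):

```nginx
# nginx.conf -- the stream{} block sits alongside http{}, not inside it
stream {
    server {
        listen 2222;                  # public port dedicated to git-over-SSH
        proxy_pass 192.168.1.50:22;   # the Gitea box's SSH daemon
    }
}
```

Clones then just name the port explicitly, e.g. `git clone ssh://git@mydomain.com:2222/user/repo.git`.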
|
# ? Aug 5, 2023 14:23 |
What are some resources I can look into for “dev ops” stuff like this? There’s a lot of new technologies in the space that I’m mostly unfamiliar with.
|
|
# ? Aug 5, 2023 14:36 |
|
tuyop posted:What are some resources I can look into for “dev ops” stuff like this? There’s a lot of new technologies in the space that I’m mostly unfamiliar with. roadmap.sh is a site that gives a good overview of all the stuff you can learn to be specialized in whatever kind of IT track. Devops, frontend, coding languages, system architecture, cyber security, QA, etc.
|
# ? Aug 5, 2023 14:50 |
I usually just set up my SSH hosts in my SSH config file so I can use whatever alias I want for them. They should have static IPs anyway if you're running them through a reverse proxy, so the configured IPs should hold true.
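Something like this in `~/.ssh/config` (alias, address, and port are examples):

```
Host gitea
    HostName 192.168.1.50
    Port 2222
    User git
```

after which `git clone gitea:user/repo.git` just works, port and all.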
|
|
# ? Aug 5, 2023 18:59 |
|
Has anyone run Steam via Docker or in a VM for remote gaming through a browser or iPad or something? I was looking into building my kids their own computers, but I realized a possible stopgap till I do that is just throwing a couple VMs up on my server and getting a good/ok Nvidia card for it. The games they play aren't very demanding. Really, at the moment it's one game: Wobbly Life, which I ran on a NUC to see if it worked, and it did. (These containers/VMs won't be on NUCs, just saying it ran on something with no GPU.) It looks like my options are entire VMs or a container like this: https://hub.docker.com/r/linuxserver/kasm or https://github.com/Steam-Headless/docker-steam-headless Seems random people on the internet do this, but I was wondering if anyone here has ever done it and has any thoughts/suggestions. edit: oh, Steam Headless has support for AMD; I have an old RX 580 or something lying around. Maybe I can test it myself. Resdfru fucked around with this message at 02:16 on Aug 8, 2023 |
# ? Aug 8, 2023 02:03 |
|
I don't think headless is quite what you want; you likely want to pass through the GPU to the VM, then stream the output of the GPU. I use little HDMI monitor fakers for this purpose - https://www.amazon.com.au/fit-Headless-GS-Resolution-Emulator-Game-Streaming/dp/B01EK05WTY Or rather I used to, when I was futzing around with the whole idea. Then you can use Steam Link, Sunshine, or any other streaming setup. Edit - actually read your link, that does kind of do what you're asking, though the idea of playing games via VNC horrifies me. I suspect it's more for running servers for games that don't have a dedicated server option.
|
# ? Aug 8, 2023 13:56 |
|
Parsec might be what you're looking for, the self hosted version is free. https://parsec.app/features
|
# ? Aug 8, 2023 14:13 |
|
I'm on my yearly-or-so hunt for something to replace airsonic with and was wondering if anyone here had recommendations. Airsonic (well, airsonic-advanced) is still working mostly ok, but it has some glitches¹ and the user interface is kind of bad. Also, it's been unmaintained for years. Hard requirements: - browse by folder - listen-in-browser - cache for offline listening on mobile (anything that supports the subsonic API has this) - multiple user accounts, with access to different libraries/stars/playlists/etc - server-side configurable transcoding support (i.e. I need to be able to say "to decode format X, use command Y") Nice-to-haves: - DLNA export - some sort of homeassistant integration Stuff I've tried and ruled out (both independent servers, and alternate UIs for subsonic-compatible backends): - Jellyfin (I use it for video but music support is lacking, and no offline listening option) - Moode (love the UI, but it only plays on a physically connected device, so it's more of a slimp3/squeezebox replacement) - subplayer, airsonic-refix: no browse by folder support - jamstash: UI is lacking a lot of basic features like a now-playing queue Stuff I haven't tried: - funkwhale, navidrome, ampache: no browse by folder support - koozic: unmaintained - substreamer-web: closed source, no idea what it supports, as of earlier this year was very early in development and had some weird issues like "you need to shrink the window to make all the controls appear" - supysonic, gonic: would probably do what I need on the backend but need to be paired with a suitable UI to function It's looking like Gonic or (maybe) Supysonic + some sort of web-based frontend would be the best way to go if I'm ditching Airsonic; Gonic meets all of my requirements, Supysonic is missing user access controls but if it comes down to it I can work around that by running multiple instances with different libraries configured. 
The problem is that both of those are backend-only, and need to be paired with a subsonic-compatible frontend to function. And all of the frontends I've tried have serious issues of their own. So, anyone have recommendations for stuff I've missed (either subsonic-compatible web frontends, or all-in-one music servers), usage experience with substreamer/supysonic/gonic, or other ideas? --- ¹ The funniest is probably that the browse-tags-by-genre view is full of hundreds of empty genres that do not exist in my library, like "Contemporary Country Country Neo-Traditionalist Country Pop" and "Canadian Bush Swing".
|
# ? Aug 16, 2023 21:08 |
I also have a hard "browse by folder" requirement & web based front-end, and went down a similar path as you. I've still just been using the OG Subsonic & DSub for my mobile needs. Subsonic hasn't been updated in ages and airsonic-advanced was too glitchy for me. gonic looks pretty nice, I will have to check that out. I assume you've gone through the https://github.com/basings/selfhosted-music-overview on your quest many times by now. Supersonic looks nice for a web front-end... but no browse by folder support yet! Rats.
|
|
# ? Aug 17, 2023 04:04 |
|
ToxicFrog posted:Stuff I've tried and ruled out (both independent servers, and alternate UIs for subsonic-compatible backends): I use the Finamp player on my iphone and it supports offline listening. It only does audio media from Jellyfin but that's all I need it to do. Not the most full-featured player but does what I need. I, too, ran into this issue and this was the solution I found that worked best.
|
# ? Aug 17, 2023 11:35 |
|
Nam Taf posted:I use the Finamp player on my iphone and it supports offline listening. It only does audio media from Jellyfin but that's all I need it to do. Not the most full-featured player but does what I need. Needs a functional browser client too, and Jellyfin's music functionality is, well, I wouldn't call it "functional" for day to day use. fletcher posted:I also have a hard "browse by folder" requirement & web based front-end, and went down a similar path as you. I've still just been using the OG Subsonic & DSub for my mobile needs. Subsonic hasn't been updated in ages and airsonic-advanced was too glitchy for me. gonic looks pretty nice, I will have to check that out. None of the emoji on that page render for me, which makes reading it somewhat difficult. After some rummaging (and chatting with the gonic dev for a while), I think the solution I'm drifting towards is: - gonic on the backend --- needs a patch to support importing of audio files like trackers that TagLib doesn't support --- needs a custom PATH so that when it uses `ffmpeg` to transcode things it calls a wrapper that can invoke different tools for different formats, rather than blindly calling ffmpeg with the same arguments on everything - airsonic-refix on the frontend --- browse-by-file support is available in a PR here --- some other functionality (play entire high-level directory/artist/genre, album art in browse by file mode) is missing compared to the stock UI but I can do without that if needed Probably going to do some hacking on that on the weekend and we'll see how it turns out.
|
# ? Aug 18, 2023 00:56 |
|
Haven't messed with any of this but want to know if it's possible to connect to home via VPN but also get the benefits of my Proton VPN service. I could use Proton at the router level but I'd rather not as I prefer being able to change my locations based on different needs and the VPN config only allows a single server. Would it be best to set up a server with Wireguard so I can access internal devices out and about but also have Proton on there too? Would that even work?
|
# ? Sep 11, 2023 03:30 |
|
Blurb3947 posted:Haven't messed with any of this but want to know if it's possible to connect to home via VPN but also get the benefits of my Proton VPN service. Yes. Have your home VPN not establish a default route (aka Gateway). For extra avoiding-unnecessary-slowdown goodness, be sure your Proton VPN isn't trying to route traffic to home.
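The key bit on the client side is keeping `AllowedIPs` narrow - all addresses below are examples:

```ini
# WireGuard client config for the home tunnel
[Interface]
PrivateKey = <client private key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <home server public key>
Endpoint = home.example.com:51820
# Route only the home LAN and the tunnel subnet -- no 0.0.0.0/0,
# so Proton keeps owning the default route.
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24
```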
|
# ? Sep 11, 2023 03:48 |
|
I have a server that, for now, really only hosts Plex and FoundryVTT, and only Foundry is exposed to the internet. After reading this thread for a bit, I figured I should at least protect myself a little more, and I ended up with a domain ---> Cloudflare DNS/proxy --> Caddy reverse proxy --> Foundry chain, with end-to-end https. I have a couple of questions about this.

Maybe someone could explain to me like I'm an idiot how the reverse proxy is blocking traffic rather than just redirecting it. I had port 30000 forwarded to my server originally, but now I have ports 80 and 443 forwarded, which Caddy is still redirecting to Foundry. Is it that, since I only have a "foundry.mywebsite.com" rule in my Caddy config, all traffic that doesn't originate from there is blocked, so I don't have to worry about random port sniffers directly scanning my IP address getting anywhere?

I also want to implement fail2ban so that randoms can't brute force their way into Foundry. Unfortunately, Caddy doesn't support CLF, so I can't just use one of f2b's default profiles. I managed to find this link, which is essentially a standard f2b jail, but with a custom regex (which I can't wrap my head around) in order to filter Caddy's logs into a format that f2b can parse. I understand how f2b works, and I'll probably slightly modify his jail configs to drop max retries down to 5, but with a 300 second findtime, so that I don't end up with players with 10 failed login attempts over 3 years getting banned. I just want to make sure the regex makes sense, since I don't want to copy/paste code I don't understand.
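For the jail side of it, the tweaks described above would land in something like this (the filter name assumes you've saved the linked post's regex as a filter called `caddy-auth`, and the log path is a guess):

```ini
# /etc/fail2ban/jail.d/caddy.local
[caddy-auth]
enabled  = true
port     = http,https
logpath  = /var/log/caddy/access.log   # adjust to wherever Caddy writes its log
maxretry = 5      # ban after 5 failures...
findtime = 300    # ...within 5 minutes
bantime  = 3600
```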
|
# ? Sep 11, 2023 22:11 |
|
Quixzlizx posted:I have a server that, for now, really only hosts Plex and FoundryVTT, and only Foundry is exposed to the internet. After reading this thread for a bit, I figured I should at least protect myself a little more, and I ended up with a domain ---> Cloudflare DNS/proxy --> Caddy reverse proxy --> Foundry chain, with end-to-end https. So when you say you have the ports forwarded, you mean at your router, right? Sorry if that's a dumb question. I run NGINX instead of Caddy, so I can't give any exact advice, but how do you know Caddy's blocking the traffic? Have you confirmed where the traffic stops? Do you see Caddy establish a connection in its logs? If so, what sort of errors are you getting? The same goes for Foundry. I don't know what kind of logs Foundry would have, but I would think you could at least tcpdump the traffic.
|
# ? Sep 21, 2023 18:44 |
|
Zapf Dingbat posted:So when you say you have the ports forwarded, you mean at your router, right? Sorry if that's a dumb question. Maybe I worded my post badly. In my router, I have port 443 forwarded to my server. If someone visits "foundry.mywebsite.com," Caddy correctly sends that traffic to port 30000, which is the port Foundry is listening on. My question was more a general question about how reverse proxies are supposed to work. I have a specific rule for "foundry.mywebsite.com" in my Caddy config file, and it's working as expected. My question was what happens if someone scans port 443 while not coming from "foundry.mywebsite.com," as in directly using my IP address. Do reverse proxies just not resolve those connections at all, since there's no rule defined for that? Or do I have to explicitly block all attempted connections that don't fit one of my defined rules?
|
# ? Sep 21, 2023 20:41 |
|
Quixzlizx posted:My question was what happens if someone scans port 443 while not coming from "foundry.mywebsite.com," as in directly using my IP address. Do reverse proxies just not resolve those connections at all, since there's no rule defined for that? Or do I have to explicitly block all attempted connections that don't fit one of my defined rules? Generally they won't do anything. Sometimes there is a default page/site. It depends on the proxy and how you configured it/how your install was configured. This is very much a "read the docs" thing, not a "reverse proxies all do this".
|
# ? Sep 21, 2023 20:49 |
Quixzlizx posted:Maybe I worded my post badly. In my router, I have port 443 forwarded to my server. If someone visits "foundry.mywebsite.com," Caddy correctly sends that traffic to port 30000, which is the port Foundry is listening on. GET requests include a Host header which indicates which website you're trying to reach. A reverse proxy that gets a request without a matching Host header just won't forward that request along anywhere; it'll typically refuse the connection or return an error (e.g. a 403) to the requester.
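In Caddy terms (domain and port taken from the posts above), the site address is the rule:

```
foundry.mywebsite.com {
    reverse_proxy localhost:30000
}
```

As I understand it, a request that arrives by bare IP matches no site address, so Caddy never proxies it - and over HTTPS it can't even present a certificate for the unknown name, so the handshake itself fails.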
|
|
# ? Sep 21, 2023 21:14 |
|
|
|
Sorry about that. Yeah, so Caddy would be seeing which domain the outside traffic is trying to reach and forwarding it to the appropriate server. I don't know if this is best practice or anything, but you can have the proxy also handle all the TLS/SSL and have the traffic inside your network be plain HTTP. I've got the Let's Encrypt renew scripts running on the proxy.
|
# ? Sep 21, 2023 22:48 |