|
lostleaf posted:I mainly use tailscale for access to the NAS on my network. I normally assign a really simple IP for access, like 10.0.0.5. The IP assigned by Tailscale is pretty random. Can you just use the name assigned by MagicDNS?
|
# ? May 18, 2023 03:41 |
|
|
TransatlanticFoe posted:Can you just use the name assigned by MagicDNS? Oh wow! Thanks! It works! Previously I was trying to access the server using \\server and had to resort to the IP address. I had no idea I could just specify server without the backslashes.
|
# ? May 18, 2023 04:53 |
|
Aware posted:Just use wireguard directly?
|
# ? May 18, 2023 06:19 |
|
lostleaf posted:Oh wow! Thanks! It works!
|
# ? May 18, 2023 09:28 |
|
Since this is the self hosting thread, check out headscale if you want to host your own Tailscale controller on a cheap VPS or something.
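If you go that route, the controller itself is a small single binary, and a containerized deployment on a VPS can be sketched roughly like this (a sketch based on the project's official image; verify paths and the listen port against the headscale docs for the version you deploy):

```yaml
# docker-compose.yml sketch for a self-hosted headscale controller.
# ./config must contain a config.yaml (see headscale's example config).
services:
  headscale:
    image: headscale/headscale:latest
    command: serve
    volumes:
      - ./config:/etc/headscale
      - ./data:/var/lib/headscale
    ports:
      - "8080:8080"   # assumes listen_addr: 0.0.0.0:8080 in config.yaml
    restart: unless-stopped
```

Clients then point at your VPS with `tailscale up --login-server https://your-vps:8080` instead of the official control plane.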
|
# ? May 18, 2023 19:57 |
|
I've been doing a lot of stuff on my own server in my office for a while, but I wonder whether there's anything that can be hosted remotely that's more practical, like on Digital Ocean. Has anyone here had a situation where they found it more convenient to spin a service up remotely for cheap rather than dealing with NAT and reverse proxies? Nothing like Plex or other media, of course.
|
# ? May 19, 2023 04:38 |
Zapf Dingbat posted:I've been doing a lot of stuff on my own server in my office for a while, but I wonder whether there's anything that can be hosted remotely that's more practical, like on Digital Ocean. Has anyone here had a situation where they found it more convenient to spin a service up remotely for cheap rather than dealing with NAT and reverse proxies? Once you have all the provisioning scripted via ansible, docker, etc. it's easy to host on a home server, a remote VM, or a colocated server. $80/mo gets me 2U with a 1 Gbps connection and 200 watts of power in a California-based datacenter (https://dedicated.com/). I found it more cost-effective to use my own hardware rather than a VM; the break-even point was about 2-3 years to cover the initial cost of the hardware for an 8 core box, 128GB RAM, and 32TB of storage. I prefer to keep anything publicly accessible outside of my home network, so I host those things on a remote colocated server that lives in a datacenter. For things that are private, just for me, I host them at home behind a VPN, because I have fiber and solar. There's no way I'm gonna hand over my unencrypted traffic to some third party like Cloudflare.
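As a toy illustration of the "provisioning scripted" point: with the inventory as the only difference, one playbook can target a home box, a rented VM, or a colo machine. Hostnames and file paths here are made up:

```yaml
# playbook.yml: hypothetical minimal provisioning play.
- hosts: mediaserver        # swap inventory entries to move between hosts
  become: true
  tasks:
    - name: Install docker from the distro repos
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ship the compose stack
      ansible.builtin.copy:
        src: docker-compose.yml
        dest: /opt/stack/docker-compose.yml
```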
|
|
# ? May 19, 2023 04:52 |
|
For roughly the same price I just pay OVH for a 6c12t/64GB RAM/1TB NVMe server in my city, which I run Proxmox on with a bunch of VMs and containers. I just use a MikroTik VM in front of them with WireGuard access to a common mgmt network. I sometimes think about paying more for storage and running Plex there, but I think that's better off at home for a few reasons.
|
# ? May 19, 2023 07:13 |
Aware posted:For roughly the same price I just pay OVH for a 6c12t/64GB RAM/1TB NVMe server in my city, which I run Proxmox on with a bunch of VMs and containers. I just use a MikroTik VM in front of them with WireGuard access to a common mgmt network. I sometimes think about paying more for storage and running Plex there, but I think that's better off at home for a few reasons. The key for mine was the storage and latency. Since I host game servers on it, I wanted it located in California where I live, which reduces the number of options quite a bit. I also found that if you want 32TB of storage and a modern CPU, it gets expensive pretty quickly if you're renting the hardware. That's what led me to purchase my own hardware and do the colocation.
|
|
# ? May 19, 2023 19:30 |
|
Oh yeah, I'd love to do the same, but the lowest price for colo here is about $200/mo for 2RU, and it usually comes with a 10M connection or something like 4TB/mo of data. Transit is expensive here, sadly. I have a Dell R740xd (2 CPUs, 8c16t, 112GB RAM, and 24TB + SSD) that I've built up from a single CPU and 16GB RAM since I bought it cheap, and I'd love to colo it, but it just doesn't add up here. Full racks are between $1-1.5k a month, so it just doesn't make sense from a provider's perspective to deal with the hassle of leasing out much less than a quarter or half rack.
|
# ? May 20, 2023 00:52 |
|
I have a Hetzner instance that I've thrown some public-facing stuff onto for a group of friends I play tabletop RPGs with (a wiki, a FoundryVTT install, group scheduling software). I do it because I don’t like exposing my home network and I wanted to learn remote hosting. I think it ends up being $20/month, and if someone gets past the firewall and reverse proxy, all they’re gonna get is an info dump on a fake World of Darkness city and a relatively underpowered botnet machine.
|
# ? May 20, 2023 06:49 |
|
Well Played Mauer posted:I have a Hetzner instance that I've thrown some public-facing stuff onto for a group of friends I play tabletop RPGs with (a wiki, a FoundryVTT install, group scheduling software). I do it because I don’t like exposing my home network and I wanted to learn remote hosting. I think it ends up being $20/month, and if someone gets past the firewall and reverse proxy, all they’re gonna get is an info dump on a fake World of Darkness city and a relatively underpowered botnet machine. TIL you can self-host Foundry. Gonna have to look into it.
|
# ? May 21, 2023 13:40 |
NihilCredo posted:TIL you can self-host Foundry. Gonna have to look into it. Yep, you can self-host it. I do. Here's my redacted compose file if you want it. Note that I pegged it to a 9.x version to keep it compatible with some mods; you may want :latest code:
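The quoted compose file itself didn't survive the copy, so as a stand-in, here's a minimal sketch using the community felddy/foundryvtt image (an assumption; the poster's redacted file may differ). The credentials let the container fetch your licensed copy from foundryvtt.com:

```yaml
# docker-compose.yml sketch, not the poster's actual file.
services:
  foundry:
    image: felddy/foundryvtt:release   # pin a 9.x tag, as the post suggests, for mod compatibility
    environment:
      - FOUNDRY_USERNAME=you@example.com   # placeholder
      - FOUNDRY_PASSWORD=changeme          # placeholder
    volumes:
      - ./data:/data
    ports:
      - "30000:30000"   # Foundry's default port
    restart: unless-stopped
```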
|
|
# ? May 21, 2023 14:13 |
|
Yeah, it tends to work pretty well. It’s a little wonky just getting your access key to install the software, but not horrible. My main issue is the game we’re running has official support on Roll20, so we’re over there until I drag them all back to Cyberpunk Red, which has an amazing fan-made module on Foundry.
|
# ? May 21, 2023 19:14 |
|
I am having the weirdest issue with my VPS and it is driving me god drat insane. A few days ago, while I was on vacation, my personal apiscp server went offline; I was not able to pull up QR codes or attachments from my email on any device. I messaged the VPS company after I was not able to SSH in, and they said they had an outage. The outage resolved, but I still can't log in. I can't SSH in, and I can't use RackNerd's console: it would open, but it won't take any input. So I submitted another ticket, and they asked for the root password. I didn't have it, so I booted the server in rescue mode, chrooted it, updated the root password, and sent it to the provider. The provider says they can log in, so I rebooted the server. The console shows the system booting, I type in my username and password, I am in, and 10 seconds later it stops taking my keystroke input. RackNerd is still telling me that there is no issue because they can still log in. I'm baffled. I even spun up a virtual desktop at work and SSH'd in, and it would time out. Still, RackNerd is able to log in.
|
# ? May 23, 2023 09:29 |
|
If you have backups: Ask them to re-provision it
|
# ? May 23, 2023 12:18 |
|
Malloc Voidstar posted:If you have backups: Ask them to re-provision it Turned out it was the updated kernel. Rolled back via GRUB and everything was fine.
|
# ? May 24, 2023 18:24 |
|
I have a server running TrueNAS Core, and I've recently picked up one of those little micro-PCs to run docker containers on, since TrueNAS really hates you using it for anything other than a storage appliance. I have my media folder on the NAS shared via SMB, which is then mounted on the little PC running Debian; the containers can (should) then access that mount. I've got Radarr set up, which seems to work great. However, when I try to use Sonarr, which is set up the exact same way, just with different folders on the same share mapped with docker, I get issues trying to add my TV show library to it. When I select the folder in 'Import Library', which it can see the contents of just fine, I either get an error 'root folder is not writable by user abc' or I just get a blank page. It's really inscrutable, and I've tried a ton of things with /etc/fstab and the permissions on the NAS etc.; nothing makes a difference except changing whether I get the permissions error or the blank page. When I get the blank page, I can see in the directory that it's actually written a file 'sonarr_write_test.txt' in there, but I can't do anything with it. Anyone else had this error? I'm also having issues with the share not mounting reliably on boot, despite it working every time when running mount -a manually; I imagine that's due to the network taking too long to come up. The sonarr and radarr containers also like to not come up after a reboot because of network fuckery; I can see it in the logs. That's a bit easier to diagnose and fix though, probably.
|
# ? May 25, 2023 18:12 |
|
Generic Monk posted:I have a server running TrueNAS Core, and I've recently picked up one of those little micro-PCs to run docker containers on, since TrueNAS really hates you using it for anything other than a storage appliance. I have my media folder on the NAS shared via SMB, which is then mounted on the little PC running Debian; the containers can (should) then access that mount. Is the user for the docker container set up with the correct UID/GID?
|
# ? May 25, 2023 20:06 |
|
Warbird posted:Is the user for the docker container set up with the correct UID/GID? What's going to be the correct one? The owner/group of the directory on the NAS has UID/GID 1001/1001. Is that passed through Samba? I've just changed the 'PUID' and 'PGID' variables in the compose file to 1001, but it hasn't had any effect and I still just get a blank page (incidentally, the blank page issue persists even if I completely tear down the container and volume, so it must be something related to my config somewhere). There is no UID/GID of 1001 on the machine hosting the containers; do I need to add those? Here's the current compose file code:
code:
|
# ? May 25, 2023 20:42 |
Is your share owned by root? If you add code:
My fstab entry (which works fine for this same thing) looks like this: code:
Is x-systemd.requires= the correct option for your x-systemd.wants= there? I don't see x-systemd.wants= in the man page for fstab. https://manpages.ubuntu.com/manpages/xenial/man5/systemd.mount.5.html Nitrousoxide fucked around with this message at 21:05 on May 25, 2023 |
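For reference, since the quoted fstab entry was lost in the copy: a CIFS mount can force ownership of the mounted files with the uid=/gid= mount options, which is one way to line the share up with a container's PUID/PGID. The paths and credentials file here are placeholders, not the poster's actual entry:

```
# /etc/fstab sketch: present everything on the share as uid/gid 1001
//nas/media  /mnt/media  cifs  credentials=/root/.smbcreds,uid=1001,gid=1001,file_mode=0664,dir_mode=0775  0  0
```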
|
# ? May 25, 2023 20:51 |
|
Nitrousoxide posted:Is your share owned by root? Hmm, the folder and subdirectories are all owned by root with no write permissions for group or other... I assumed it was all fine since Radarr is working perfectly. Let me just edit the file and reboot... Edit: Rebooted, and now the share and all items within are owned by 1001/1001, but I still get the same issue: a blank page when I select my root TV shows directory. If, for the sake of testing, I go into a TV show folder and add that, I don't get a blank page, but I do get the error: code:
Generic Monk fucked around with this message at 21:16 on May 25, 2023 |
# ? May 25, 2023 21:01 |
Generic Monk posted:Hmm, the folder and subdirectories are all owned by root with no write permissions for group or other... I assumed it was all fine since Radarr is working perfectly. Let me just edit the file and reboot... Radarr might be running as root, so it can probably interact fine with a root-owned share. Do you have any uid/gid environment variables set for that container? In the absence of any, the container will default to whatever the Dockerfile had it running as, which is usually uid/gid 0.
|
|
# ? May 25, 2023 21:08 |
|
Nitrousoxide posted:Radarr might be running as root, so it can probably interact fine with a root-owned share. Do you have any uid/gid environment variables set for that container? In the absence of any, the container will default to whatever the Dockerfile had it running as, which is usually uid/gid 0. It's really the exact same, including the PUID/PGID environment variables: code:
Do I need to have a user/group on the host machine (the one hosting the containers) with a UID/GID of 1001 for it to 'pass through' properly? My only real experience janitoring this kind of stuff before has been FreeBSD jails, which needed something like that (the GID inside the jail had to match the GID outside the jail). Generic Monk fucked around with this message at 21:28 on May 25, 2023 |
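For comparison, a typical linuxserver.io-style Sonarr service with the PUID/PGID pinned to the share's owner looks roughly like this (paths, timezone, and the image tag are placeholders, not the poster's actual file):

```yaml
# docker-compose.yml sketch for a linuxserver.io Sonarr container.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1001   # should match the uid the SMB share is mounted as
      - PGID=1001
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - /mnt/media/tv:/tv
    ports:
      - "8989:8989"
    restart: unless-stopped
```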
# ? May 25, 2023 21:18 |
|
Generic Monk posted:Hmm, the folder and subdirectories are all owned by root with no write permissions for group or other... I assumed it was all fine since Radarr is working perfectly. Let me just edit the file and reboot... abc is the username used by linuxserver images. The UID/GID is 911/911. Read the documentation for your image to change the UID/GID used. Spoiler alert: it's the PUID and PGID environment variables. Edit: it seems you knew this. You may not have known that some (all?) linuxserver images do a recursive chown on startup. So if you have two running with the same directory and different UIDs, they're going to be fighting each other. cruft fucked around with this message at 22:56 on May 25, 2023 |
# ? May 25, 2023 21:37 |
|
Generic Monk posted:I'm also having issues with the share not mounting reliably on boot despite working every time running mount -a manually, I imagine that's due to the network taking too long to come up. Probably network-related. I had the same issue with a giant USB drive not being ready in time. I fixed it with a flag in fstab to ignore any issues with the mount and still boot, and then a cronjob that runs on boot and just does mount -a. Dunno if there's a more graceful solution, but it works.
|
# ? May 25, 2023 21:50 |
|
Generic Monk posted:
I had similar problems a few months ago; it was this for me. The permissions need to match the uid/gid.
|
# ? May 25, 2023 22:08 |
|
cruft posted:abc is the username used by linuxserver images. The UID/GID is 911/911. Looking at the output from the sonarr container, I can see that it doesn't seem to be respecting the GID set in the compose file (the weird characters are from their logo, which didn't parse properly when I pasted it into the forum :')): code:
Resdfru posted:Probably network-related. I had the same issue with a giant USB drive not being ready in time. What flag did you use in fstab? Generic Monk fucked around with this message at 11:07 on May 26, 2023 |
# ? May 26, 2023 09:25 |
|
code:
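The quoted code block is empty in this copy of the thread; a common combination of fstab options for a network share that shouldn't be able to hang or fail the boot looks like this (a guess at the kind of thing shown, not the poster's exact entry):

```
# nofail: boot continues even if the mount fails
# _netdev + x-systemd.automount: wait for the network, mount on first access
//nas/media  /mnt/media  cifs  credentials=/root/.smbcreds,nofail,_netdev,x-systemd.automount,x-systemd.mount-timeout=30  0  0
```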
|
# ? May 26, 2023 12:00 |
|
What should I do for my NUC to keep it powered when I have a power loss? It's already behind a UPS, but the UPS is too slow. An inline capacitor for the 19V PSU?
|
# ? May 28, 2023 08:31 |
|
Kivi posted:What should I do for my NUC to keep it powered when I have power loss? A better UPS; I've never heard of a UPS having that issue.
|
# ? May 28, 2023 13:25 |
Yeah, a UPS shouldn’t fail to prevent your power supply from being interrupted. If your UPS supports it, you should also install a NUT server on a device connected to it, so if the power outage extends long enough, everything connected to it can smoothly shut down before its batteries run out. If you also set up a Raspberry Pi or some hyper-low-power device to connect directly to the mains, and set it to always turn on when connected to power, you can have it send out magic packets to your UPS-protected devices so they come back up automatically even after an extended power outage that exceeds your UPS's ability to weather. That should totally automate your network’s ability to heal from a power outage. Edit: oh, you might also want to set your mains-connected magic-packet-sending device to delay sending said packet by 10-20 minutes after coming online. That way you know your power will have been continuously online for that long and that your UPS will have had enough time to recharge. You wouldn't want a flickering power restoration to have your devices trying to reboot over and over again. And you want enough juice in the tank of your UPS to let your devices gracefully shut down again if there's another outage. Nitrousoxide fucked around with this message at 15:45 on May 28, 2023 |
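The delayed magic-packet idea fits in a single cron entry on the always-on device; the MAC addresses, the 15-minute delay, and the wakeonlan package are all assumptions here:

```
# /etc/cron.d/wake-lab on the always-on Pi (sketch).
# Wait 15 min of continuous power before waking anything, so a
# flickering supply doesn't bounce the servers.
@reboot root sleep 900 && wakeonlan AA:BB:CC:DD:EE:01 AA:BB:CC:DD:EE:02
```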
|
# ? May 28, 2023 13:59 |
|
On that subject, is there any sort of configuration I could do/make where a NUT server would work without the main network router being on a UPS as well? My lab stuff is off somewhere else, and I’d have to go buy at least 2 UPSes in order to keep power on the router and a switch I have between the two “ends”.
|
# ? May 28, 2023 14:04 |
|
Warbird posted:On that subject, is there any sort of configuration I could do/make where a NUT server would work without the main network router being on a UPS as well? My lab stuff is off somewhere else, and I’d have to go buy at least 2 UPSes in order to keep power on the router and a switch I have between the two “ends”. Can you power your router with PoE? Then it can run off the UPS in your lab even though it's in another room. Something like this adapter could be useful if your router uses a DC barrel jack and doesn't natively accept standard PoE.
|
# ? May 28, 2023 14:46 |
|
I had the NUC in a surge-only outlet.
|
# ? May 28, 2023 14:51 |
|
SamDabbers posted:Can you power your router with PoE? Then it can run off the UPS in your lab even though it's in another room. Something like this adapter could be useful if your router uses a DC barrel jack and doesn't natively accept standard PoE. Not sure, I’d have to look. Even if so, that would require replacing two switches to support PoE, so that may or may not be viable.
|
# ? May 28, 2023 15:11 |
Well, after royally loving up a bare metal server with a dumbass apt purge command, I've migrated all my bare metal servers to VMs within Proxmox and converted the previous metal into Proxmox nodes in a cluster. Now it's trivial to move a VM between nodes with only a few minutes of downtime for the transfer. I also have a Proxmox Backup Server set up, which allows any node to reach into those backup images and restore a VM. Even if a node utterly crashes, I can restore all the VMs on it on other nodes in the cluster in a few minutes. I'll probably start exploring Proxmox's HA tools another weekend, so it can handle all that itself without any input from me. I'm still running Duplicity on the containers in my servers, so I can do more discrete restores if I need to. Both my container backups and my Proxmox image backups are rsynced to offsite block storage as well as a local repo on my network's NAS. I can do restores locally to not pay exfiltration fees, but can still recover everything even if my house burns down. I had those backups set up before, but now that all the servers are VMs I can do nightly incremental image backups rather than the manual monthly (if I remember) images I was taking on the bare metal stuff. Much happier with this setup.
|
|
# ? Jun 10, 2023 16:30 |
|
I really like proxmox, it's like the perfect thing for me.
|
# ? Jun 11, 2023 13:47 |
|
So with the death of Apollo for Reddit, I'm thinking of going back to Reader. Initial research shows that FreshRSS is everyone's favorite self-hosted option. Anyone have any experience with it or alternative suggestions? This will be exposed via a Cloudflare Tunnel. Because it'll be public, I have a rule that MFA is required, and everything I'm seeing about FreshRSS suggests it doesn't support anything beyond normal auth, so I'm not wild about this. The only other service I have exposed is Overseerr, which uses Plex SSO, so that one is fine.
|
# ? Jun 11, 2023 15:23 |
|
|
You can just set up an nginx reverse proxy; it can enforce MFA in front of any application you're proxying.
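For instance, nginx's auth_request module can gate every request on a subrequest to an SSO/MFA provider. A sketch, assuming something like Authelia answering on 127.0.0.1:9091 and FreshRSS on 127.0.0.1:8080 (both ports are placeholders):

```nginx
location / {
    auth_request /internal/auth;         # a 401/403 here blocks the request
    proxy_pass http://127.0.0.1:8080;    # the proxied app, e.g. FreshRSS
}

location = /internal/auth {
    internal;
    proxy_pass http://127.0.0.1:9091/api/verify;
    proxy_pass_request_body off;         # the auth subrequest needs no body
    proxy_set_header Content-Length "";
}
```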
|
# ? Jun 11, 2023 15:34 |