TransatlanticFoe
Mar 1, 2003

Hell Gem

lostleaf posted:

I mainly use Tailscale for access to the NAS on my network. I normally assign a really simple IP for access, like 10.0.0.5. The IP assigned by Tailscale is pretty random.

Can you just use the name assigned by MagicDNS?
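
With MagicDNS enabled, the bare machine name resolves from anywhere on the tailnet, so the random 100.x.y.z address stops mattering. A quick illustration, assuming a machine named nas (the name and share are hypothetical):

code:
ping nas          # resolves via MagicDNS instead of the 100.x.y.z address
ssh admin@nas     # the same name works for SSH
\\nas\media       # and for SMB paths from Windows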

lostleaf
Jul 12, 2009

TransatlanticFoe posted:

Can you just use the name assigned by MagicDNS?

Oh wow! Thanks! It works!

Previously I was trying to access the server using \\server and had to resort to the IP address. I had no idea I could just specify the server name without the backslashes.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Aware posted:

Just use wireguard directly?

TACD
Oct 27, 2000

lostleaf posted:

Oh wow! Thanks! It works!

This sums up my entire experience with Tailscale. All of your least favourite, most painful networking tasks just immediately work with so little effort that it's almost disorienting.

SamDabbers
May 26, 2003



Since this is the self hosting thread, check out headscale if you want to host your own Tailscale controller on a cheap VPS or something.
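
Standing up the controller is not much work. A minimal sketch of running headscale under Docker, assuming the headscale/headscale image and its default config locations (tag, port, and paths here are illustrative; check the headscale docs for a real deployment):

code:
version: "3.8"

services:
  headscale:
    image: headscale/headscale:latest
    container_name: headscale
    restart: unless-stopped
    command: serve
    volumes:
      - ./config:/etc/headscale      # expects a config.yaml in here
      - ./data:/var/lib/headscale    # state database lives here
    ports:
      - 8080:8080                    # the endpoint your nodes register against
Then point clients at it with tailscale up --login-server=https://your-host and enrol nodes as usual.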

Zapf Dingbat
Jan 9, 2001


I've been doing a lot of stuff on my own server in my office for a while, but I wonder whether there's anything more practical to host remotely, like on DigitalOcean. Has anyone here had a situation where they found it more convenient to spin a service up remotely for cheap rather than dealing with NAT and reverse proxies?

Nothing like Plex or other media, of course.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Zapf Dingbat posted:

I've been doing a lot of stuff on my own server in my office for a while, but I wonder whether there's anything more practical to host remotely, like on DigitalOcean. Has anyone here had a situation where they found it more convenient to spin a service up remotely for cheap rather than dealing with NAT and reverse proxies?

Nothing like Plex or other media, of course.

Once you have all the provisioning scripted via ansible, docker, etc. it's easy to host either on a home server, remote VM, or colocated server. $80/mo gets me 2U with a 1 Gbps connection and 200 watts of power in a California-based datacenter (https://dedicated.com/). I found it to be more cost effective to use my own hardware rather than a VM; the break-even point was about 2-3 years to cover the initial cost of the hardware, for an 8 core box, 128GB RAM, and 32TB of storage.
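
To illustrate the "provisioning scripted" part, a minimal ansible sketch of the idea (the host group, paths, and compose file are made up):

code:
# site.yml - hypothetical playbook; run with: ansible-playbook -i inventory site.yml
- hosts: colo_server
  become: true
  tasks:
    - name: Install Docker from the distro repo
      ansible.builtin.apt:
        name: docker.io
        state: present

    - name: Copy the compose stacks to the host
      ansible.builtin.copy:
        src: stacks/
        dest: /opt/stacks/

    - name: Bring everything up
      ansible.builtin.command: docker compose -f /opt/stacks/compose.yml up -d
The same playbook runs against a home box, a rented VM, or the colo machine; only the inventory changes.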

I prefer to keep anything publicly accessible outside of my home network, so I host those things on a remote colocated server that lives in a datacenter. Things that are private, just for me, I host at home behind a VPN, because I have fiber and solar.

There's no way I'm gonna hand over my unencrypted traffic to some third party like Cloudflare.

Aware
Nov 18, 2003
For roughly the same price I just pay OVH for a 6c12t/64GB RAM/1TB NVMe server in my city, which I run Proxmox on with a bunch of VMs and containers. I just use a MikroTik VM in front of them with WireGuard access to a common mgmt network. I sometimes think about paying more for storage and running Plex there, but I think that's better off at home for a few reasons.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Aware posted:

For roughly the same price I just pay OVH for a 6c12t/64GB RAM/1TB NVMe server in my city, which I run Proxmox on with a bunch of VMs and containers. I just use a MikroTik VM in front of them with WireGuard access to a common mgmt network. I sometimes think about paying more for storage and running Plex there, but I think that's better off at home for a few reasons.

The key for mine was the storage and latency. Since I host game servers on it, I wanted it to be located in California where I live, which reduces the number of options quite a bit. I also found that if you want 32TB of storage and a modern CPU, it gets expensive pretty quickly if you're renting the hardware. That's what led me to purchase my own hardware and do the colocation.

Aware
Nov 18, 2003
Oh yeah, I'd love to do the same, but the lowest price for colo here is about $200/mo for 2RU, and that usually comes with a 10M connection or something like 4TB/mo of data. Transit is expensive here, sadly. I have a Dell R740xd (2 CPU 8c16t, 112GB RAM, and 24TB plus SSD) that I've built up from a single CPU and 16GB RAM since I bought it cheap, which I'd love to colo, but it just doesn't add up here. Full racks are between $1-1.5k a month, so it just doesn't make sense from a provider's perspective to deal with the hassle of leasing out much less than a quarter or half rack.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I have a Hetzner instance onto which I've thrown some public-facing stuff for a group of friends I play tabletop RPGs with (a wiki, a FoundryVTT install, group scheduling software). I do it because I don't like exposing my home network and I wanted to learn remote hosting. I think it ends up being $20/month, and if someone gets past the firewall and reverse proxy, all they're gonna get is an info dump on a fake World of Darkness city and a relatively underpowered botnet machine.

NihilCredo
Jun 6, 2011

curb anger in every possible way:
one outburst will defame you more than many virtues will commend you

Well Played Mauer posted:

I have a Hetzner instance onto which I've thrown some public-facing stuff for a group of friends I play tabletop RPGs with (a wiki, a FoundryVTT install, group scheduling software). I do it because I don't like exposing my home network and I wanted to learn remote hosting. I think it ends up being $20/month, and if someone gets past the firewall and reverse proxy, all they're gonna get is an info dump on a fake World of Darkness city and a relatively underpowered botnet machine.

TIL you can self-host Foundry. Gonna have to look into it.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



NihilCredo posted:

TIL you can self-host Foundry. Gonna have to look into it.

Yep, you can self-host it. I do. Here's my redacted compose file if you want it. Note I pegged it to a 9.x version to keep it compatible with some mods; you may want :latest.

code:
version: "3.8"

services:
  foundry:
    image: felddy/foundryvtt:9.269
    hostname: my_foundry_host
    init: true
    restart: "unless-stopped"
    volumes:
      - type: bind
        source: /DockerAppData/foundryvtt
        target: /data
    environment:
      - FOUNDRY_PASSWORD=${foundry_pass}
      - FOUNDRY_USERNAME=${foundry_username}
      - FOUNDRY_ADMIN_KEY=${foundry_admin_key}
      - FOUNDRY_HOSTNAME=${foundry_url}
      - FOUNDRY_PROXY_SSL=true
      - FOUNDRY_PROXY_PORT=443

    ports:
      - target: 30000
        published: 30000
        protocol: tcp

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Yeah, it tends to work pretty well. It's a little wonky just getting your access key in to install the software, but not horrible. My main issue is the game we're running has official support on Roll20, so we're over there until I drag them all back to Cyberpunk Red, which has an amazing fan-made module on Foundry.

RoboBoogie
Sep 18, 2008
I am having the weirdest issue with my VPS and it is driving me god drat insane


So a few days ago, while on vacation, my personal apiscp server went offline. I wasn't able to pull up QR codes or attachments from my email on any device.

I messaged the VPS company after I wasn't able to SSH in, and they said they had an outage.

The outage resolved; I still can't log in.

I can't SSH in, and I can't use RackNerd's console: it opens, but it won't take any input.

So I submitted another ticket. They asked for the root password; I didn't have it, so I booted the server in rescue mode, chrooted in, updated the root password, and sent it to the provider.

The provider says they can log in.

So I rebooted the server. The console shows the system booting, I type in my username and password, I'm in, and 10 seconds later it stops taking my keystroke input.

RackNerd is still telling me that there is no issue because they can still log in. I'm baffled.

I even spun up a virtual desktop at work and SSH'd in, and it would time out. Still, RackNerd is able to log in.


Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.
If you have backups: Ask them to re-provision it

RoboBoogie
Sep 18, 2008

Malloc Voidstar posted:

If you have backups: Ask them to re-provision it

Turned out it was the updated kernel.

Rolled back via GRUB and everything was fine.
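
For anyone hitting the same thing: you can pin the older kernel from a shell instead of catching the GRUB menu at boot. A rough sketch for a Debian-style setup (the menu entry title is an example; pull the real one from grub.cfg, and GRUB_DEFAULT=saved must already be set in /etc/default/grub, followed by update-grub, for this to stick):

code:
# list the boot entry titles GRUB knows about
grep "^menuentry\|^submenu" /boot/grub/grub.cfg | cut -d"'" -f2

# one-shot boot into the older kernel (title is an example)
sudo grub-reboot "Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 5.10.0-22-amd64"
sudo reboot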

Generic Monk
Oct 31, 2011

I have a server running TrueNAS Core, and I've recently picked up one of those little micro-PCs to run docker containers on, since TrueNAS really hates you using it for anything other than a storage appliance. I have my media folder on the NAS shared via SMB, which is then mounted on the little PC running Debian; the containers can (should) then access that mount.

I've got Radarr set up, which seems to work great. However, when I try to use Sonarr, which is set up the exact same way, just with different folders on the same share mapped with docker, I get issues trying to add my TV show library. When I select the folder in 'Import Library' (it can see the folder's contents just fine), I either get an error 'root folder is not writable by user abc' or I just get a blank page. It's really inscrutable, and I've tried a ton of things with /etc/fstab and the permissions on the NAS etc.; nothing makes a difference except changing whether I get the permissions error or the blank page. When I get the blank page, I can see that it's actually written a file 'sonarr_write_test.txt' in the directory, but I can't do anything with it. Anyone else had this error?

I'm also having issues with the share not mounting reliably on boot, despite it working every time when I run mount -a manually; I imagine that's due to the network taking too long to come up. The sonarr and radarr containers also like to not come up after a reboot because of network fuckery, which I can see in the logs. That's probably a bit easier to diagnose and fix, though.
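
(A quick way to see what the containers are actually fighting is to compare numeric IDs, since usernames on the NAS don't map to usernames on the Debian box. Assuming the share is mounted at /mnt/freenas-media:)

code:
ls -ln /mnt/freenas-media/Videos/TV   # -n prints raw UID/GID instead of names
id                                    # your own UID/GID on the container host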

Warbird
May 23, 2012

America's Favorite Dumbass

Generic Monk posted:

I have a server running TrueNAS Core, and I've recently picked up one of those little micro-PCs to run docker containers on, since TrueNAS really hates you using it for anything other than a storage appliance. I have my media folder on the NAS shared via SMB, which is then mounted on the little PC running Debian; the containers can (should) then access that mount.

I've got Radarr set up, which seems to work great. However, when I try to use Sonarr, which is set up the exact same way, just with different folders on the same share mapped with docker, I get issues trying to add my TV show library. When I select the folder in 'Import Library' (it can see the folder's contents just fine), I either get an error 'root folder is not writable by user abc' or I just get a blank page. It's really inscrutable, and I've tried a ton of things with /etc/fstab and the permissions on the NAS etc.; nothing makes a difference except changing whether I get the permissions error or the blank page. When I get the blank page, I can see that it's actually written a file 'sonarr_write_test.txt' in the directory, but I can't do anything with it. Anyone else had this error?

I'm also having issues with the share not mounting reliably on boot, despite it working every time when I run mount -a manually; I imagine that's due to the network taking too long to come up. The sonarr and radarr containers also like to not come up after a reboot because of network fuckery, which I can see in the logs. That's probably a bit easier to diagnose and fix, though.

Is the user for the docker container set up with the correct UID/GID?

Generic Monk
Oct 31, 2011

Warbird posted:

Is the user for the docker container set up with the correct UID/GID?

What's going to be the correct one? The owner/group of the directory on the NAS has UID/GID 1001/1001. Is that passed through Samba? I've just changed the 'PUID' and 'PGID' variables in the compose file to 1001, but it hasn't had any effect and I still just get a blank page. (Incidentally, the blank page issue persists even if I completely tear down the container and volume, so it must be something related to my config somewhere.)

There is no UID/GID of 1001 on the machine hosting the containers; do I need to add those?

Here's the current compose file

code:
version: "2.1"
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=Etc/UTC
    volumes:
      - sonarr-settings:/config
      - /mnt/freenas-media/Videos/TV:/tv #optional
      - /mnt/freenas-media/Downloads:/downloads #optional
    ports:
      - 8989:8989
    restart: unless-stopped
volumes:
  sonarr-settings:
And the line from /etc/fstab related to the share:

code:
//freenas/media  /mnt/freenas-media/     cifs    credentials=/home/max/freenas.smb,vers=3.0,noperm,x-systemd.automount,x-systemd.after=network-online.target,_netdev 0 0
There's probably a lot of crap in there that doesn't need to be there since I've been troubleshooting the unreliable behaviour on reboot

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Is your share owned by root?

If you add
code:
uid=1001,gid=1001
to your fstab entry's options, does the mount get owned by the same user as your container there (1001)?

My fstab entry (which works fine for this same thing) looks like this:
code:
//192.168.7.228/Media/Media   /media/Media/   cifs   nofail,uid=1000,gid=1000,vers=2.0,iocharset=utf8,credentials=/home/core/.sambacreds
edit: isn't
x-systemd.requires=

the correct option for your x-systemd.after= there? I don't see x-systemd.after= in the man page for fstab.

https://manpages.ubuntu.com/manpages/xenial/man5/systemd.mount.5.html

Nitrousoxide fucked around with this message at 21:05 on May 25, 2023

Generic Monk
Oct 31, 2011

Nitrousoxide posted:

is your share owned by root?

If you add
code:
uid=1001,gid=1001
to your fstab entry's options does it get owned by the same user as your container there (1001)?

Hmm, the folder and subdirectories are all owned by root, with no write permissions for group or other... I assumed it was all fine since Radarr is working perfectly. Let me just edit the file and reboot...

Edit: Rebooted, and now the share and all items within are owned by 1001/1001, but I still get the same issue: a blank page when I select my root TV shows directory. If, for the sake of testing, I go into a TV show folder and add that instead, I don't get a blank page, but I do get the error:

code:
Unable to add root folder
Folder is not writable by user abc
Which is also the same behaviour as before. I'm going to try tearing the container down and recreating it again, but I'm not too hopeful.

Generic Monk fucked around with this message at 21:16 on May 25, 2023

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Generic Monk posted:

Hmm, the folder and subdirectories are all owned by root, with no write permissions for group or other... I assumed it was all fine since Radarr is working perfectly. Let me just edit the file and reboot...

Radarr might be running as root, so it can probably interact fine with a root-owned share. Do you have any UID/GID environment variables set for that container? In the absence of any, the container will default to whatever the Dockerfile had it running as, which is usually UID/GID 0.

Generic Monk
Oct 31, 2011

Nitrousoxide posted:

Radarr might be running as root, so it can probably interact fine with a root-owned share. Do you have any UID/GID environment variables set for that container? In the absence of any, the container will default to whatever the Dockerfile had it running as, which is usually UID/GID 0.

It's really the exact same including the PUID/PGID environment variables:

code:
version: "2.1"
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=Etc/UTC
    volumes:
      - radarr-settings:/config
      - /media/freenas-media/Videos/Films:/movies #optional
      - /media/freenas-media/Downloads:/downloads #optional
    ports:
      - 7878:7878
    restart: unless-stopped
volumes:
  radarr-settings:
Edit: Off the back of what you said, I changed the PUID and PGID to 0 in the Sonarr container and it worked! For debugging purposes, anyway. Meanwhile the Radarr container doesn't seem to care what the UID/GID is set to (I had it set to 1000 before, which was working fine).

Do I need to have a user/group on the host machine (the one hosting the containers) with a UID/GID of 1001 for it to 'pass through' properly? My only real experience janitoring this kind of stuff before has been FreeBSD jails, which needed something like that (the GID inside the jail had to match the GID outside the jail).

Generic Monk fucked around with this message at 21:28 on May 25, 2023

cruft
Oct 25, 2007

Generic Monk posted:

Hmm, the folder and subdirectories are all owned by root, with no write permissions for group or other... I assumed it was all fine since Radarr is working perfectly. Let me just edit the file and reboot...

Edit: Rebooted, and now the share and all items within are owned by 1001/1001, but I still get the same issue: a blank page when I select my root TV shows directory. If, for the sake of testing, I go into a TV show folder and add that instead, I don't get a blank page, but I do get the error:

code:
Unable to add root folder
Folder is not writable by user abc
Which is also the same behaviour as before. I'm going to try tearing the container down and recreating it again, but I'm not too hopeful.

abc is the username used by linuxserver images. The UID/GID is 911/911.

Read the documentation for your image to change the UID/GID used. Spoiler alert: it's PUID and PGID environment variables.

Edit: it seems you knew this. You may not have known that some (all?) linuxserver images do a recursive chown on startup. So if you have two running with the same directory and different UIDs, they're going to be fighting each other.
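
(If you want to verify what a linuxserver container actually ended up running as, the abc user inside it reflects the PUID/PGID you passed. A quick check, assuming the container is named sonarr and the share is at /mnt/freenas-media:)

code:
docker exec sonarr id abc                    # expect uid=1001 gid=1001 if PUID/PGID took
ls -ln /mnt/freenas-media/Videos/TV | head   # compare against ownership on the mount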

cruft fucked around with this message at 22:56 on May 25, 2023

Resdfru
Jun 4, 2004

I'm a freak on a leash.

Generic Monk posted:

I'm also having issues with the share not mounting reliably on boot, despite it working every time when I run mount -a manually; I imagine that's due to the network taking too long to come up.

Probably network related. I had the same issue with a giant USB drive not being ready in time.

I fixed it with a flag in fstab to ignore any issues with the mount and still boot, and then a cronjob that runs on boot and just does mount -a.

Dunno if there's a more graceful solution, but it works.
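
(The cron half of that is just a delayed remount; roughly this, with the delay made up:)

code:
# sudo crontab -e  (mounting needs root)
@reboot sleep 60 && mount -a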

Dyscrasia
Jun 23, 2003
Give Me Hamms Premium Draft or Give Me DEATH!!!!

Generic Monk posted:


Do I need to have a user/group on the host machine (the one hosting the containers) with a UID/GID of 1001 for it to 'pass through' properly? My only real experience janitoring this kind of stuff before has been FreeBSD jails, which needed something like that (the GID inside the jail had to match the GID outside the jail).

I had similar problems a few months ago; it was this for me. The permissions need to match the UID/GID.

Generic Monk
Oct 31, 2011

cruft posted:

abc is the username used by linuxserver images. The UID/GID is 911/911.

Read the documentation for your image to change the UID/GID used. Spoiler alert: it's PUID and PGID environment variables.

Edit: it seems you knew this. You may not have known that some (all?) linuxserver images do a recursive chown on startup. So if you have two running with the same directory and different UIDs, they're going to be fighting each other.

Looking at the output from the Sonarr container, I can see that it doesn't seem to be respecting the GID set in the compose file (the weird characters are from their logo, which didn't parse properly when I pasted it into the forum :')):

code:
───────────────────────────────────────
      ██╗     ███████╗██╗ ██████╗ 
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝ 
   Brought to you by linuxserver.io
───────────────────────────────────────
To support the app dev(s) visit:
Sonarr: https://sonarr.tv/donate
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID:    1000
User GID:    911
Currently I have all the UIDs/GIDs set to 1000 in the compose files, which is the ID of my user on the container host, and which I have set as the owner of the share in /etc/fstab. It doesn't make a difference with Sonarr, which requires UID/GID 0 to import anything.

Radarr... kind of works, which is the worst thing. The download scraping and adding to the queue works perfectly; then it will sometimes copy the file to the movies directory, into a folder with the right name, but it won't rename the file to conform with that structure, nor will it download the .nfo file with the metadata. But it will download the artwork! I remember trying to get this working ages ago, and the weird unreliability was what killed it for me, but I'll keep ploughing away because the idea is really cool and all the pieces are here. my bad, it does seem to work, it just takes ages

Resdfru posted:

Probably network related. I had the same issue with a giant usb drive not being ready in time.

I fixed it with a flag in fstab to ignore any issues with the mount and still boot and then a cronjob that runs on boot that just does mount -a.

Dunno if there's a more graceful solution but it works

What flag did you use in fstab?

Generic Monk fucked around with this message at 11:07 on May 26, 2023

Resdfru
Jun 4, 2004

I'm a freak on a leash.
code:

PARTUUID="abcdef"  /media/mount  ntfs  defaults,nofail  0  0

Kivi
Aug 1, 2006
I care
What should I do for my NUC to keep it powered when I have a power loss?

It's already behind a UPS, but the UPS is too slow. Inline capacitor for the 19V PSU?

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Kivi posted:

What should I do for my NUC to keep it powered when I have a power loss?

It's already behind a UPS, but the UPS is too slow. Inline capacitor for the 19V PSU?

A better UPS; I've never heard of a UPS having that issue.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Yeah, a UPS shouldn't fail to prevent your power supply from being interrupted.

If your UPS supports it, you should also install a NUT server on a device connected to it, so that if the power outage lasts long enough, everything connected can shut down smoothly before the batteries run out.

If you also set up a Raspberry Pi or some hyper-low-power device that's connected directly to the mains and set to always turn on when it gets power, you can have it send out magic packets to your UPS-protected devices so they come back up automatically, even after an extended outage that exceeds what your UPS can weather. That should totally automate your network's ability to heal from a power outage (sketch below).

Edit: Oh, you might also want to set your mains-connected magic-packet device to delay sending said packet by 10-20 minutes after coming online. That way you know the power has been continuously on for that long and your UPS has had enough time to recharge. You wouldn't want a flickering power restoration to have your devices trying to reboot over and over again, and you want enough juice in the tank of your UPS to let your devices gracefully shut down again if there's another outage.
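
A rough sketch of the wake-up half, assuming a Pi with the wakeonlan package installed and made-up MAC addresses; the sleep covers both UPS recharge time and flapping power:

code:
# /etc/cron.d/wol-after-outage on the always-on Pi
# wait ~15 minutes after boot, then wake the UPS-protected servers
@reboot root sleep 900 && wakeonlan AA:BB:CC:DD:EE:01 AA:BB:CC:DD:EE:02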

Nitrousoxide fucked around with this message at 15:45 on May 28, 2023

Warbird
May 23, 2012

America's Favorite Dumbass

On that subject, is there any sort of configuration I could do/make where a NUT server would work without the main network router being on a UPS as well? My lab stuff is off somewhere else, and I'd have to go buy at least two UPSes in order to keep power on the router and a switch I have between the two "ends".

SamDabbers
May 26, 2003



Warbird posted:

On that subject, is there any sort of configuration I could do/make where a NUT server would work without the main network router being on a UPS as well? My lab stuff is off somewhere else, and I'd have to go buy at least two UPSes in order to keep power on the router and a switch I have between the two "ends".

Can you power your router with PoE? Then it can run off the UPS in your lab even though it's in another room. Something like this adapter could be useful if your router uses a DC barrel jack and doesn't natively accept standard PoE.

Kivi
Aug 1, 2006
I care
:ms:

I had the NUC in a surge-only outlet.

Warbird
May 23, 2012

America's Favorite Dumbass

SamDabbers posted:

Can you power your router with PoE? Then it can run off the UPS in your lab even though it's in another room. Something like this adapter could be useful if your router uses a DC barrel jack and doesn't natively accept standard PoE.

Not sure, I'd have to look. Even if so, that would require replacing two switches to support PoE, so that may or may not be viable.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Well, after royally loving up a bare-metal server with a dumbass apt purge command, I've migrated all my bare-metal servers to VMs within Proxmox and converted the previous metal into Proxmox nodes in a cluster. Now it's trivial to move a VM between nodes with only a few minutes of downtime for the transfer. I also have a Proxmox Backup Server set up, which allows any node to reach into those backup images and restore a VM. Even if a node utterly crashes, I can restore all of its VMs onto other nodes in the cluster in a few minutes.

I'll probably start exploring Proxmox's HA tools another weekend, so it can handle all that itself without any input from me.

I'm still running Duplicity on the containers in my servers, so I can do more discrete restores if I need to. Both my container backups and my Proxmox image backups are rsynced to offsite block storage as well as to a local repo on my network's NAS. I can do restores locally to avoid paying exfiltration fees, but I can still recover everything even if my house burns down.

I had those backups set up before, but now that all the servers are VMs, I can do nightly incremental image backups rather than the manual monthly (if I remember) images I was taking of the bare-metal stuff.

Much happier with this setup.
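
(The offsite copy is nothing fancy; roughly this, with the paths and host made up:)

code:
# nightly push of the backup repo to offsite block storage
rsync -az --delete /mnt/backups/ backup@offsite.example.com:/backups/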

SEKCobra
Feb 28, 2011

Hi
:saddowns: Don't look at my site :saddowns:
I really like Proxmox; it's like the perfect thing for me.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
So with the death of Apollo for Reddit, I'm thinking of going back to Reader.

Initial research shows that FreshRSS is everyone's favorite self-hosted option. Anyone have any experience with it, or alternative suggestions? This will be exposed via a Cloudflare Tunnel.

Because it'll be public, I have a rule that MFA is required, and everything I'm seeing about FreshRSS suggests it doesn't support anything beyond normal auth. So I'm not wild about this.

The only other service I have exposed is Overseerr, which uses Plex SSO, so that one is fine.
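
For reference, the FreshRSS container side is simple; a minimal sketch using the linuxserver image, in the same style as the compose files above (tag, port, and IDs are illustrative):

code:
version: "2.1"
services:
  freshrss:
    image: lscr.io/linuxserver/freshrss:latest
    container_name: freshrss
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - freshrss-config:/config
    ports:
      - 8081:80        # FreshRSS listens on 80 inside the container
    restart: unless-stopped
volumes:
  freshrss-config:
The MFA part would have to come from whatever sits in front of it; Cloudflare Access, for example, can require its own login on a tunnel before traffic ever reaches the app.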

Azhais
Feb 5, 2007
Switchblade Switcharoo
You can just set up an nginx reverse proxy; it can enforce MFA in front of any application you're proxying.
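
nginx doesn't do MFA by itself; the usual pattern is auth_request pointed at an authentication gate like Authelia, which handles the TOTP side. A rough sketch (hostnames, ports, and the verify endpoint are assumptions; check your auth provider's docs):

code:
server {
    listen 443 ssl;
    server_name rss.example.com;                   # hypothetical

    location / {
        auth_request /internal/auth;               # gate every request
        proxy_pass http://127.0.0.1:8081;          # the proxied app
    }

    location = /internal/auth {
        internal;
        proxy_pass http://127.0.0.1:9091/api/verify;   # e.g. Authelia's check endpoint
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    }
}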
