Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

hogofwar posted:

So a rough overview would be this?

Packer would create the base VM templates, set up with the basic stuff you would want in each VM.

Terraform would set these up in Proxmox when needed, specifying the different CPU/memory/network config for each VM that is spun up.

Ansible would do the final config, setting up each VM for its own unique usage (a VM that runs Docker, a VM that runs backups, etc.).

Yep. And then if you want to get fancy, you do a Git/CI-CD setup to track everything. You can probably skip Terraform unless you've got some kind of complicated deployment pattern.
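Condensed into commands, the whole flow looks something like this (just a sketch; the file names and inventory are placeholders, not anything from a real setup):

code:
# 1. Bake the base template once (Packer's Proxmox builder)
packer build proxmox.pkr.hcl

# 2. Stamp out VMs from that template (the skippable Terraform step)
terraform apply

# 3. Per-VM configuration
ansible-playbook -i inventory.ini site.yml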

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

Matt Zerella posted:

Yep. And then if you want to get fancy, you do a Git/CI-CD setup to track everything. You can probably skip Terraform unless you've got some kind of complicated deployment pattern.

Without Terraform, would I just manually create the VMs out of the templates, or do I just rely on Ansible to do that?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

hogofwar posted:

Without Terraform, would I just manually create the VMs out of the templates, or do I just rely on Ansible to do that?

I would not manage Proxmox with Ansible. I'm not even sure it's possible, tbh.

Warbird
May 23, 2012

America's Favorite Dumbass

It’s completely possible, though a bit limited. You can create a VM or a container with the given attributes. That’s about it.
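For reference, the community.general collection ships a proxmox_kvm module that covers the clone-from-template part; roughly like this ad-hoc invocation (host, credentials, and names are all placeholders):

code:
ansible localhost -m community.general.proxmox_kvm -a \
  "api_host=pve.example.lan api_user=root@pam api_password=secret \
   node=pve clone=debian-template name=docker-host storage=local-lvm"
Anything past that initial clone is regular playbooks run inside the guest.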

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Discovered the issue with that exFAT drive. I live in Texas, so brownouts and blackouts are pretty common, and I don’t have my external drive (it’s one of those five-HDD arrays you can hardware-RAID and then connect via USB) connected to a UPS. We had a brownout a couple days ago and OS X had another freakout and said it couldn’t repair the drive. Not sure if it’s an OS X thing or, more likely, something with the hardware array being weird.

Regardless, I’m moving the drive off the Mac and onto a machine that has Proxmox installed on it. I’m gonna disable the hardware RAID and do a software one. Is it advisable to create the RAID at the Proxmox level and then just pass it through to the VM, or can I get away with doing it all inside the VM I want managing the drive? I feel like the former is the better way to go.

cruft
Oct 25, 2007

Well Played Mauer posted:

Discovered the issue with that exFAT drive. I live in Texas, so brownouts and blackouts are pretty common, and I don’t have my external drive (it’s one of those five-HDD arrays you can hardware-RAID and then connect via USB) connected to a UPS. We had a brownout a couple days ago and OS X had another freakout and said it couldn’t repair the drive. Not sure if it’s an OS X thing or, more likely, something with the hardware array being weird.

Regardless, I’m moving the drive off the Mac and onto a machine that has Proxmox installed on it. I’m gonna disable the hardware RAID and do a software one. Is it advisable to create the RAID at the Proxmox level and then just pass it through to the VM, or can I get away with doing it all inside the VM I want managing the drive? I feel like the former is the better way to go.

No filesystem handles sudden loss of power well, but the FAT family is particularly awful with it.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Yeah, so I’ve learned. Twice now, lol.

Moved the whole thing over to one of the machines running Proxmox and reformatted everything in ZFS. That was wonderfully painless; now I’m just trying to figure out whether to make it a shared drive on an existing system or use it as an opportunity to spin up a VM for some sort of OpenMediaVault install.

I already have a Synology for backups and important stuff; otherwise I’d be more into making it an actual NAS.

I guess the other option is to decommission the MacBook and run the Plex suite on a VM. It’d be one less OS to have to fret over.
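The ZFS bit really was only a couple of commands from the Proxmox shell, something like this (pool name and device paths made up):

code:
# five independent disks -> one raidz1 pool (survives a single-disk failure)
zpool create -f tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zfs set compression=lz4 tank
zfs create tank/media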

cruft
Oct 25, 2007

MacOS isn't really what I think of when I think of a server operating system. Not that my opinion on MacOS is worth anything, LOL.

20+ years ago I ran a MacOS 9 server for the college Mac lab. I'm sure the Unix-based MacOS is a lot less of a kludge. It was weird to have a Macintosh sitting there on a table in the datacenter, surrounded by rack-mount systems. I wonder if it got self-conscious.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone

You could do it now with docker/podman on a mac, but at that point you're just running linux with extra steps.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Nitrousoxide posted:

You could do it now with docker/podman on a mac, but at that point you're just running linux with extra steps.

The podman CLI tool at least just downloads a Fedora CoreOS image and spins it up in QEMU. Not sure about "(Docker|Podman) Desktop", but they're probably doing pretty much the same thing.
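You can poke at the VM it spins up with the machine subcommands, e.g.:

code:
podman machine init    # downloads the Fedora CoreOS image
podman machine start   # boots it under QEMU
podman machine list    # the VM your podman commands get proxied to
podman machine ssh     # drop into the CoreOS guest itself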

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Please don't use Docker Desktop on Mac. It's very, very bad.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo

Matt Zerella posted:

Please don't use Docker Desktop on Mac. It's very, very bad.

I lived this hell for a work project a while ago.

The MacBook as Plex server was more a first project that worked well enough that I got into self-hosting as a hobby. It’s an Intel from 2019, so I figured I may as well make use of the i9 and RAM. Its shortcomings as a server became pretty apparent once I got into Docker and Proxmox, though.

I’ve been putting off switching over to a VM mostly out of inertia and Diablo 4. The second drive failure in as many brownouts is enough to get me in gear though. Also to get a second UPS :science:

cruft
Oct 25, 2007

Well Played Mauer posted:

I’ve been putting off switching over to a VM mostly out of inertia and Diablo 4

This phrase needs to be immortalized somewhere. Maybe this quote is enough.

Warbird
May 23, 2012

America's Favorite Dumbass

Well Played Mauer posted:

I lived this hell for a work project a while ago.

The MacBook as Plex server was more a first project that worked well enough that I got into self-hosting as a hobby. It’s an Intel from 2019, so I figured I may as well make use of the i9 and RAM. Its shortcomings as a server became pretty apparent once I got into Docker and Proxmox, though.

I’ve been putting off switching over to a VM mostly out of inertia and Diablo 4. The second drive failure in as many brownouts is enough to get me in gear though. Also to get a second UPS :science:

It’s a laptop. It’s its own UPS already.

cruft
Oct 25, 2007

Warbird posted:

It’s a laptop. It’s its own UPS already.

This makes me think I should put that old ThinkPad X220 back into service and relieve the Raspberry Pi 4. Apparently I can get a USB 3.0 ExpressCard for it.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone

Keito posted:

The podman CLI tool at least just downloads a Fedora CoreOS image and spins it up in QEMU, not sure about "(Docker|Podman) Desktop" but probably they are doing pretty much the same thing.

Like I said, Linux with extra steps.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo

Warbird posted:

It’s a laptop. It’s its own UPS already.

Nah I mean for the external drive, not the laptop.

BlankSystemDaemon
Mar 13, 2009

cruft posted:

This makes me think I should put that old ThinkPad X220 back into service and relieve the Raspberry Pi 4. Apparently I can get a USB 3.0 ExpressCard for it.

I have one of those for my old T420, and they're terrible.

cruft
Oct 25, 2007

BlankSystemDaemon posted:

I have one of those for my old T420, and they're terrible.

Everything keeps coming back to just staying the course on the RPi4.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Nitrousoxide posted:

Like I said, Linux with extra steps.

I didn't realize that by extra steps you meant "more layers of indirection", but sure, yeah, it's a lot. In terms of just getting going it's three commands: brew install podman && podman machine init && podman machine start.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Project “move Plex to Linux” is underway. I gave up on NFS in Docker and instead got Samba running on my host machine so poo poo on the other VM that needs to access the drive can.

Next up is getting all the Dockerized arrs to talk to each other and manage files on the drive. The relative directory names are making it a lot more confusing than just directly managing everything in the host machine’s structure.

I think I’m overcomplicating something here. That, or it’s just an inescapable initial pain in the rear end.

BlankSystemDaemon
Mar 13, 2009

How can NFS be a problem?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
And how does NFS come into play with Docker when you're doing bind mounts? Were you bind-mounting a directory that was an NFS mount?

csammis
Aug 26, 2003

Mental Institution

BlankSystemDaemon posted:

How can NFS be a problem?

I’m not sure if this is what the OP is talking about, but some of the arrs (and Plex, I think) use SQLite as their backing store, which is a no-go on a network-mounted file system due to file-locking constraints. This is an annoying problem for me, at least, since it means I can’t just deploy a container and stick the “config” volume out on a NAS and be done with it.

e: unless I know ahead of time that the container isn’t using SQLite, but usually that comes as a surprise when the app’s settings fail to persist or it crashes a lot.
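The pattern that works for me is keeping the SQLite-backed config volume on local disk and only putting bulk media on the NAS, e.g. (container and paths are just an example):

code:
# /config stays on local disk so SQLite's file locking behaves;
# the media volume can live on an NFS/SMB mount without issue
docker run -d --name sonarr \
  -e PUID=1000 -e PGID=100 \
  -v /var/lib/sonarr:/config \
  -v /mnt/nas/media:/media \
  ghcr.io/linuxserver/sonarr:latest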

csammis fucked around with this message at 19:28 on Jul 18, 2023

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Yeah, that was jumbled. The short version is I had some other Docker containers on another VM that accessed the drive via Samba when it was connected to the MacBook.

Since I moved the drive over to another Linux VM, I mounted it on the aforementioned Docker VM via NFS, and the container, Audiobookshelf, couldn’t write to the volume when I re-composed. So I just mounted the drive using Samba and it’s working again.
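In mount terms the difference was basically this (server IP, share name, and credentials file are placeholders):

code:
# what Audiobookshelf couldn't write through:
mount -t nfs 192.168.7.50:/tank/media /mnt/media
# what worked:
mount -t cifs //192.168.7.50/media /mnt/media \
  -o credentials=/root/.smbcred,uid=1000,gid=100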

cruft
Oct 25, 2007

edited to remove a stupid joke. It wasn't even offensive or ragey or anything. It was just too stupid for me (and hardly anything is too stupid for me).

Nitrousoxide
May 30, 2011

do not buy a oneplus phone

Well Played Mauer posted:

Project “move Plex to Linux” is underway. I gave up on NFS in Docker and instead got Samba running on my host machine so poo poo on the other VM that needs to access the drive can.

Next up is getting all the Dockerized arrs to talk to each other and manage files on the drive. The relative directory names are making it a lot more confusing than just directly managing everything in the host machine’s structure.

I think I’m overcomplicating something here. That, or it’s just an inescapable initial pain in the rear end.

If you'd like some compose files that work for me:
You should just have to adjust IPs, paths, and UIDs/GIDs for these, as well as any VPN settings. All of the arrs will be going through your VPN, so if that container breaks or is updated you'll probably need to redeploy the whole compose file.

Plex:
code:
version: "3"

services:
  plex:
    image: ghcr.io/linuxserver/plex:latest
    container_name: plex
    network_mode: host   # host networking keeps Plex discovery (DLNA/GDM) working
    volumes:
      - /path/to/Music:/music
      - /path/to/TV:/tv
      - /path/to/Movies:/movies
      - /path/to/config:/config

    environment:
      - PUID=1000
      - PGID=100
      - VERSION=docker
      - PLEX_CLAIM= #optional
      
    healthcheck:
      # a failed check kills PID 1 so the restart policy brings the container back
      test: curl --fail http://192.168.7.233:32400/web/index.html || kill 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 10s
    restart: unless-stopped
transmission:
code:
version: '3.9'
services:
 transmission-openvpn:
    container_name: vpn_media_server-transmission-openvpn 
    volumes:
        - /path/to/download-folder:/downloads
        - /etc/localtime:/etc/localtime:ro
        - /path/to/Transmission/data:/data
        - /path/to/Transmission/scripts:/scripts
    dns:
      - 1.1.1.1
      - 1.0.0.1
    environment:
        - PUID=1000
        - PGID=100
        - CREATE_TUN_DEVICE=true
        - OPENVPN_PROVIDER= #Replace this with your vpn provider
        - TRANSMISSION_WEB_UI=combustion
        - NORDVPN_COUNTRY=US
        - NORDVPN_CATEGORY=legacy_p2p
        - NORDVPN_PROTOCOL=udp
        - OPENVPN_USERNAME=${vpnusername}
        - OPENVPN_PASSWORD=${vpnpassword}
        - OPENVPN_OPTS=--inactive 3600 --ping 10 --ping-exit 600
        - WEBPROXY_ENABLED=true
        - WEBPROXY_PORT=8888
        - LOCAL_NETWORK=192.168.7.0/24 #IP space for your local network
        - TRANSMISSION_SCRAPE_PAUSED_TORRENTS_ENABLED=false
        - TRANSMISSION_DOWNLOAD_DIR=/downloads
        - TRANSMISSION_INCOMPLETE_DIR=/downloads
        - TRANSMISSION_RATIO_LIMIT=2
        - TRANSMISSION_IDLE_SEEDING_LIMIT=300
        - TRANSMISSION_IDLE_SEEDING_LIMIT_ENABLED=true
        - TRANSMISSION_RATIO_LIMIT_ENABLED=true
    cap_add:
        - NET_ADMIN
    logging:
        driver: json-file
        options:
            max-size: 10m
    ports:
        - 9091:9091   #Transmission
        - 7878:7878   #Radarr
        - 8989:8989   #Sonarr
        - 8686:8686   #Lidarr
        - 9696:9696   #Prowlarr
        - 8888:8888   #Transmission Proxy Port
        - 5055:5055   #Overseer
    restart: always
    healthcheck:
      test: curl --fail http://192.168.7.233:9091 || kill 1
      interval: 10s
      retries: 3
      start_period: 60s
      timeout: 10s
    image: haugene/transmission-openvpn

 prowlarr:
    image: ghcr.io/linuxserver/prowlarr:develop
    container_name: prowlarr
    network_mode: "service:transmission-openvpn" # share the VPN container's network stack
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    depends_on:
        - transmission-openvpn
    volumes:
      - /path/to/prowlarr/config:/config
    healthcheck:
      test: curl --fail http://192.168.7.233:9696 || kill 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 10s
    restart: unless-stopped
 
 radarr:
    image: ghcr.io/linuxserver/radarr:latest
    network_mode: "service:transmission-openvpn"
    container_name: radarr
    environment:
        - PUID=1000
        - PGID=100
        - TZ=America/New_York
        - UMASK=022 #optional
    depends_on:
        - transmission-openvpn
    volumes:
        - /path/to/radarr/config:/config
        - /path/to/media:/media
        - /path/to/download-folder:/downloads
    healthcheck:
      test: curl --fail http://192.168.7.233:7878 || kill 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 10s
    restart: unless-stopped
 
 sonarr:
    image: ghcr.io/linuxserver/sonarr:latest
    network_mode: "service:transmission-openvpn"
    container_name: sonarr
    environment:
        - PUID=1000
        - PGID=100
        - TZ=America/New_York
        - UMASK=022 #optional
    depends_on:
        - transmission-openvpn
    volumes:
        - /path/to/sonarr/config:/config
        - /path/to/media:/media
        - /path/to/download-folder:/downloads
    healthcheck:
      test: curl --fail http://192.168.7.233:8989 || kill 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 10s
    restart: unless-stopped
 
 lidarr:
    image: ghcr.io/linuxserver/lidarr:latest
    network_mode: "service:transmission-openvpn"
    container_name: lidarr
    environment:
        - PUID=1000
        - PGID=100
        - TZ=America/New_York
        - UMASK=022 #optional
    depends_on:
        - transmission-openvpn
    volumes:
        - /path/to/lidarr/config:/config
        - /path/to/media:/media
        - /path/to/download-folder:/downloads
    healthcheck:
      test: curl --fail http://192.168.7.233:8686 || kill 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 10s
    restart: unless-stopped

 overseerr:
    image: lscr.io/linuxserver/overseerr
    network_mode: "service:transmission-openvpn"
    container_name: overseerr
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    depends_on:
        - transmission-openvpn
    volumes:
      - /path/to/overseerr/config:/config
    healthcheck:
      test: curl --fail http://192.168.7.233:5055 || kill 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 10s
    restart: unless-stopped

 recyclarr:
    image: ghcr.io/recyclarr/recyclarr
    container_name: recyclarr
    user: 1000:100
    network_mode: "service:transmission-openvpn"
    volumes:
      - /path/to/recyclarr/config:/config
    environment:
      - TZ=America/New_York
    restart: always

Nitrousoxide fucked around with this message at 20:22 on Jul 18, 2023

cruft
Oct 25, 2007

Nitrousoxide posted:

If you'd like some compose files that work for me:
You should just have to adjust IPs, paths, and UIDs/GIDs for these, as well as any VPN settings. All of the arrs will be going through your VPN, so if that container breaks or is updated you'll probably need to redeploy the whole compose file.

If I were starting this from scratch I would give a serious look to HomelabOS.

Currently I've only given it a casual glance and thought it looked intriguing.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Much appreciated, but I’m a Usenet dude, so I just force SSL/TLS on everything and hope for the best.

At this point I have everything running; I just need to tweak all my settings to actually get it all talking.

BlankSystemDaemon
Mar 13, 2009

I desperately want a DTrace-backed monitoring system like Sun FishWorks.

For those of you who don't know it, it's famously featured in this video:
https://www.youtube.com/watch?v=tDacjrSCeq4

EDIT: There's more information about it in these slides by Cindi McGuire and Brendan Gregg as well as these slides by Bryan Cantrill and Brendan Gregg.

It's nothing short of amazing, because it ties together SMF (which is what System500, launchd for macOS, LaunchPad, and OpenRC all failed at being), FMA (a failure-management framework capable of dealing not just with hardware failures but also software failures, which every OS needs and only Solaris has ever had), and DTrace (a tracing facility for use in production systems at production scale).
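Even a DTrace one-liner shows the flavor of what a FishWorks-style analytics screen is built on, live, system-wide, with no restarts:

code:
# count system calls by process name until Ctrl-C, then print the tally
dtrace -n 'syscall:::entry { @[execname] = count(); }'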

BlankSystemDaemon fucked around with this message at 21:05 on Jul 18, 2023

Mr. Crow
May 22, 2008

Snap City mayor for life

BlankSystemDaemon posted:

I desperately want a DTrace-backed monitoring system like Sun FishWorks.

For those of you who don't know it, it's famously featured in this video:
https://www.youtube.com/watch?v=tDacjrSCeq4

EDIT: There's more information about it in these slides by Cindi McGuire and Brendan Gregg as well as these slides by Bryan Cantrill and Brendan Gregg.

It's nothing short of amazing, because it ties together SMF (which is what System500, launchd for macOS, LaunchPad, and OpenRC all failed at being), FMA (a failure-management framework capable of dealing not just with hardware failures but also software failures, which every OS needs and only Solaris has ever had), and DTrace (a tracing facility for use in production systems at production scale).

The most amazing thing about this video to me has always been that this guy presumably works in there without ear protection, holy poo poo

BlankSystemDaemon
Mar 13, 2009

Mr. Crow posted:

The most amazing thing about this video to me has always been that this guy presumably works in there without ear protection, holy poo poo

I think I asked him about that once, and apparently it was just for the video.

Inceltown
Aug 6, 2019

cruft posted:

If I were starting this from scratch I would give a serious look to HomelabOS.

Currently I've only given it a casual glance and thought it looked intriguing.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I post this as a warning to others who have developed enough experience and confidence to be at peak (or trough, depending on your preference) Dunning-Kruger, like me.

I spent a chunk of my day getting my Plex/Arr suite VM up and running in Proxmox. When I swapped my five-drive hard drive array over to my Proxmox machine, I neglected to disable the hardware RAID (yeah, I know) I'd set up for the MacBook. Proxmox picked up the drive no problem, so I said, "Hey, why not format this as ZFS and then assign that disk to the VM? I'm so smart." Did that, despite Proxmox saying that ZFS isn't compatible with hardware RAID. Whatever, man.

That whole setup lasted right up until I started doing heavy writes to the disk array, at which point poo poo blew up so bad that the Proxmox machine wouldn't boot with the array plugged in. So I went back and reset the drive array to a bunch of independent disks. I figured I'd ZFS that bad boy and throw it into a brand new VM, but I couldn't figure out how to do that without losing terabytes of space, for reasons that aren't yet clear to me. So I ended up passing the USB controller into the VM and creating a software RAID when I reinstalled Ubuntu Server.

The second attempt only took about an hour since I'd jogged my memory earlier in the day. Upside is I've learned how to set up Samba in addition to NFS. And I get a pretty snappy MacBook back, though I'm not sure what I want to do with it.

e: I take some of this back. After more shenanigans with the system freaking out on the software RAID, I think the most recent power outage borked my enclosure. Like, I had to boot into recovery mode to force-assemble the RAID with mdadm despite it saying the drives were stable. Then, after another reboot, the VM would let me access the console in Proxmox but wouldn’t connect to the internet. The other two VMs on the same machine are fine. The only difference is they don’t have the drives passed through to them.

This has been the weirdest computer thing I’ve dealt with in a while.

I’m getting a new enclosure without a RAID controller from Amazon tomorrow to see if that’s what did it.
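For posterity, the recovery-mode dance was roughly this (array name and member devices are placeholders):

code:
# force-assemble the array the enclosure scrambled, then sanity-check it
mdadm --assemble --force /dev/md0 /dev/sd[b-f]
mdadm --detail /dev/md0
cat /proc/mdstat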

Well Played Mauer fucked around with this message at 09:37 on Jul 19, 2023

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
To continue the fun “why not spend a bunch of money when poo poo breaks” trend in my house, I’m replacing my old i7-8700K desktop with a newer machine. This will leave me with a machine that dislikes HDDs but has 32 gigs of RAM and room for an Nvidia 1080 card I have lying around.

My immediate thought was turning it into a Plex machine via Proxmox, but I’m leery about dealing with PCIe passthrough. Have any of y’all tried it, and is there a good walkthrough (video or otherwise) you’d recommend? I’ve googled a bit and liked Craft Computing’s, but wouldn’t mind seeing someone else do it too.

Aware
Nov 18, 2003
Dunno about Proxmox passthrough, but I reckon it’d be quite easy. Unraid does it fine too, and there’s really nothing to lose by giving it a go. Proxmox generally rules for VM stuff.

Zapf Dingbat
Jan 9, 2001

I have a 1000-series Nvidia card passed through on Proxmox and it works fine as a remote Steam machine. I considered using it for Plex, but I’ve never had a problem using CPU-only encoding.

I followed instructions straight off of Proxmox's site.
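From memory, the wiki flow boils down to a few steps (the PCI address and VM ID are placeholders for whatever lspci and qm list show on your box):

code:
# 1. enable the IOMMU: add intel_iommu=on (or amd_iommu=on) to
#    /etc/default/grub, then run update-grub and reboot
# 2. find the GPU's PCI address
lspci -nn | grep -i nvidia
# 3. hand the whole device to the VM
qm set 100 --hostpci0 01:00,pcie=1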

Zapf Dingbat fucked around with this message at 03:40 on Jul 29, 2023

lostleaf
Jul 12, 2009

Zapf Dingbat posted:

I have a 1000-series Nvidia card passed through on Proxmox and it works fine as a remote Steam machine. I considered using it for Plex, but I’ve never had a problem using CPU-only encoding.

I followed instructions straight off of Proxmox's site.

Is it still the case that you can only pass through a discrete GPU? And that two clients can't share the same GPU?

Aware
Nov 18, 2003
I think Nvidia removed that restriction in later drivers. Containers can certainly share a GPU, and I'm pretty sure VMs can now.
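Easy to check on the container side, at least: with the NVIDIA container toolkit installed, any number of containers can hit the same card (image tag is just an example):

code:
# each container sees the same physical GPU via the toolkit runtime
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi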

lostleaf
Jul 12, 2009

Aware posted:

I think Nvidia removed that restriction in later drivers. Containers can certainly share a GPU, and I'm pretty sure VMs can now.

Thanks. It's been a few years since I looked into running VMs.
