Resdfru
Jun 4, 2004

I'm a freak on a leash.

corgski posted:

gonna be great when they go down and take 2/3 of the internet with them because nobody bothered to think about that.

This has already happened a few times, since so many companies use their CDN. Unless you mean permanently.

BlankSystemDaemon
Mar 13, 2009



corgski posted:

gonna be great when they go down and take 2/3 of the internet with them because nobody bothered to think about that.

https://twitter.com/jschauma/status/1592730914227822593
https://twitter.com/jschauma/status/1592732158056988674

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


corgski posted:

Dynamic TXT records have been added!

:eyepop:

Well, there's my new project for the week. Thanks!
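
If anyone else wants that project: assuming corgski means dns.he.net's dynamic DNS endpoint (an assumption on my part), updating a TXT record for an ACME DNS-01 challenge is a single request. The hostname, key, and token below are all placeholders:
code:
# Push a new value into a dynamic TXT record, e.g. for an ACME DNS-01 challenge.
curl "https://dyn.dns.he.net/nic/update" \
  -d "hostname=_acme-challenge.example.com" \
  -d "password=your-per-record-key" \
  -d "txt=acme-validation-token"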

Aware
Nov 18, 2003
drat these companies centralising key infrastructure through the terrible crime of making it easy! I'm not sure how not using CF at home will stick it to them but you do you.

Inept
Jul 8, 2003

gigantic corporations control a huge amount of everything. lol that they tried to put 3 million records in a spreadsheet and were surprised when it poo poo itself

csammis
Aug 26, 2003

Mental Institution

Inept posted:

gigantic corporations control a huge amount of everything

Apparently this is the loving truth. I decided to go with dns.he.net and so far the following poo poo has happened:

  • Apis Networks dot com no longer claims knowledge of my old web hosting even though my nameservers are ns1.apisnetworks.com and ns2.apisnetworks.com
  • Going through Chrome's saved passwords redirects me to Hostineer, to which I can log in and get access to the management of said web hosting. In fact I can manage the DNS records stored on apisnetworks.com there too, but not change the registered nameservers themselves. Okay.
  • Hostineer is GoDaddy? I think?
  • I still have no idea who manages my domain name or who I registered it with twenty years ago so I did a WHOIS and ICANN thinks it's registered through Wild West Domains. What in the crisp blue hell is Wild West Domains??
  • Apparently it is actually Hostineer, which is actually GoDaddy, which is all being managed through secureserver.net
  • Which evidently is where I can change the nameserver records

There isn't a :psyduck: big enough for this. All I want is to put the name in the place to point to the thing :(
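
For anyone untangling a similar registrar mystery, two commands cut through most of it (example.com stands in for the real domain):
code:
# Who is the registrar of record, and which nameservers does the registry list?
whois example.com | grep -iE 'registrar|name server'
# Which nameservers actually answer for the zone right now?
dig +short NS example.com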

Neslepaks
Sep 3, 2003

Gandi is good.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Neslepaks posted:

Rolling your own CA is actually a nightmare and I recommend against it. So many bothersome issues went away when I changed to a LE wildcard instead.

Yep. It's a good exercise to learn if you homelab, though. Most people don't know what a CA is, or what the difference between a private CA and a self-signed cert is.

A few years ago I had to deep-dive on this for work and it definitely took me a few weeks to really wrap my head around PKI. But yeah, at home? No way would I use my own CA.
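
The two shapes are easy to see with openssl. A minimal sketch, not a production setup; every filename and CN here is made up:
code:
# Self-signed: one cert that vouches for itself; every client must trust it individually.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout host.key -out host.crt -subj "/CN=myhost.local"

# Private CA: clients trust ca.crt once, and it can then sign any number of host certs.
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=My Homelab CA"
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr -subj "/CN=myhost.local"
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out host.crt -days 365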

Potato Salad
Oct 23, 2014

nobody cares


Neslepaks posted:

Rolling your own CA is actually a nightmare and I recommend against it. So many bothersome issues went away when I changed to a LE wildcard instead.

hey serious question, why use LE wildcard? It seems like one of the added values of using LE is that it's easy to get specific certs for each individual system / pool of systems and have them automatically maintained

SamDabbers
May 26, 2003



Potato Salad posted:

hey serious question, why use LE wildcard? It seems like one of the added values of using LE is that it's easy to get specific certs for each individual system / pool of systems and have them automatically maintained

A wildcard cert could be useful if you only have one IP address and want to do TLS termination on a reverse proxy.
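
As a sketch of what that looks like (nginx assumed; the names and backend address are placeholders), one wildcard cert terminates TLS for any number of name-based vhosts:
code:
server {
    listen 443 ssl;
    server_name app1.example.com;
    # The same *.example.com cert is reused by every server block on this proxy.
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://192.168.1.10:8080;  # placeholder backend
    }
}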

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I use a wildcard cert for *.internal.(domain).(tld) so that I only have to set up the Cloudflare challenge once for all my internal services, and they can all use the same SSL cert.
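
For reference, with certbot and its dns-cloudflare plugin installed, that's a one-time command (the domain and credentials path are placeholders). Since DNS-01 never touches the hosts themselves, it works for internal-only services too:
code:
# DNS-01 via the Cloudflare API; nothing needs to be reachable from the internet.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d '*.internal.example.com'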

Potato Salad
Oct 23, 2014

nobody cares


SamDabbers posted:

A wildcard cert could be useful if you only have one IP address and want to do TLS termination on a reverse proxy.

I might be old fashioned then. I make sure to enumerate SANs.

Is this a Good Enough™ (for self hosting) kind of thing, especially considering that LE cert key material is only good for a short period of time?

SamDabbers
May 26, 2003



SANs are the correct way to do it, but then you have to reissue the proxy's certificate every time you want to add or remove a service in your homelab, where you presumably tinker and host relatively short-lived experiments.
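
To be fair, the reissue itself is only one certbot command (names made up); the annoyance is just having to rerun it on every change:
code:
# Repeat the existing -d flags, add the new name, and --expand replaces the cert in place.
certbot certonly --nginx --expand \
  -d git.example.com -d wiki.example.com -d media.example.com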

odiv
Jan 12, 2003

Neslepaks posted:

Gandi is good.

Yep, they're my registrar and have been for years.

Was on call over the holidays, so I got a little extra money on my paycheque that I'm going to use to buy one of those fanless, 5-6 Ethernet port computers so I can roll my own router.
https://www.aliexpress.com/item/1005004336924039.html

Am currently thinking VyOS on Proxmox. Wish me luck!

Neslepaks
Sep 3, 2003

Potato Salad posted:

hey serious question, why use LE wildcard? It seems like one of the added values of using LE is that it's easy to get specific certs for each individual system / pool of systems and have them automatically maintained

For me it's partly that it's easier to distribute a wildcard cert to various parts of my infrastructure, some of which may not have direct internet access, combined with a desire to not "leak" my hostnames to the world (every individually-issued cert shows up in the public Certificate Transparency logs, while a wildcard only exposes the parent domain).
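
The distribution part automates nicely with a renewal hook. A sketch, assuming the internal boxes are reachable over SSH (the hostname and paths are placeholders):
code:
# Runs only after a successful renewal; copies the fresh cert and key inward.
certbot renew --deploy-hook \
  'scp /etc/letsencrypt/live/example.com/*.pem backend.internal:/etc/ssl/example.com/'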

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

odiv posted:

Yep, they're my registrar and have been for years.

Was on call over the holidays, so I got a little extra money on my paycheque that I'm going to use to buy one of those fanless, 5-6 Ethernet port computers so I can roll my own router.
https://www.aliexpress.com/item/1005004336924039.html

Am currently thinking VyOS on Proxmox. Wish me luck!

I was a fan of Gandi for a long time but I jumped ship over their handling of this situation: https://news.ycombinator.com/item?id=22001822

I transferred all my domains to AWS

BlankSystemDaemon
Mar 13, 2009



fletcher posted:

I was a fan of Gandi for a long time but I jumped ship over their handling of this situation: https://news.ycombinator.com/item?id=22001822

I transferred all my domains to AWS

Every cloud provider has had data loss, and every one of them has told their customers to keep their own backups.
Data doesn't exist unless there are three copies, on two different media, with one offsite and offline (so it can't be touched even if someone gains physical access to the device). That's the classic 3-2-1 rule, plus keeping that last copy offline.
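
The first two copies are easy to script; only the offline one can't be automated away. A sketch with restic (the paths and remote are made up):
code:
# Copy 2: a repo on a second local drive.
restic -r /mnt/backup init
restic -r /mnt/backup backup /srv/data
# Copy 3: the same data to an offsite repo (sftp here; could be S3, B2, etc.),
# which needs an init of its own first.
restic -r sftp:user@offsite.example.com:/backups backup /srv/data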

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I’ve been adding more stuff to the stack since I got up and running, and noticed some pretty big performance hits on other services when making changes or rebuilding Docker containers on my Proxmox machine. After looking at the CPU/memory consumption during the slowdowns and seeing nothing out of the ordinary, I finally got off my butt and put a new external SSD onto the machine. Going from a five-year-old Toshiba drive I got as a photo backup to a Crucial drive from this decade immediately resolved the issues.

I also post this because my god did proxmox make changing drives on the VM easy. Just go into the hardware allocation and tell it to switch the storage to the other drive. It cloned everything and then I just restarted each VM.
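
There's a CLI equivalent too, one command per disk; the VMID and storage name here are made up (newer Proxmox also spells it qm disk move):
code:
# Move VM 101's first SCSI disk to the storage named "new-ssd",
# removing the old copy once the move succeeds.
qm move_disk 101 scsi0 new-ssd --delete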

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


I have a Ceph pool in my Proxmox cluster that was on some cheap 2.5" laptop spinners, and it was always "meh" - but that was all the additional storage I could fit in my hosts (an assortment of OptiPlex micros). I never really wanted to put anything on the pool because it was so drat slow; any VMs I put there would inevitably have errors or randomly crash. But I still wanted some amount of clustered shared storage, just in case.

I switched those out for some Intel DC SSDs and it's actually quite usable now. Those DC-class SSDs are nice, too, because they have multiple PB of write endurance, compared to the 200-300TB of consumer NVMe drives. So I'm no longer skittish about putting high-IO stuff on them (InfluxDB, Graylog, etc.)

Proxmox good, SSDs gooder

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



SamDabbers posted:

SANs are the correct way to do it, but then you have to reissue the proxy's certificate every time you want to add or remove a service in your homelab, where you presumably tinker and host relatively short-lived experiments.

Yeah I use SANs, one each, for all my external services. My internal stuff is all wildcard certs.

Well Played Mauer posted:

I’ve been adding more stuff to the stack since I got up and running, and noticed some pretty big performance hits on other services when making changes or rebuilding Docker containers on my Proxmox machine. After looking at the CPU/memory consumption during the slowdowns and seeing nothing out of the ordinary, I finally got off my butt and put a new external SSD onto the machine. Going from a five-year-old Toshiba drive I got as a photo backup to a Crucial drive from this decade immediately resolved the issues.

I also post this because my god did proxmox make changing drives on the VM easy. Just go into the hardware allocation and tell it to switch the storage to the other drive. It cloned everything and then I just restarted each VM.

If I ever replace my current bare metal optiplex build with something with a few slots for internal storage drives I'll probably switch to proxmox. It does look pretty great.

CopperHound
Feb 14, 2012

I spent the last 6 months trying out Proxmox and decided it was only making the bullshit I wanted to run at home more complicated. I had it running on my NAS and two USFF PCs, and these are the extra things I struggled with:

  • Nodes of the cluster showing unknown status
  • Containers refusing to start
  • Containers refusing to stop
  • Updating the OS and learning the joys of ballooning log files for each VM and LXC container
  • Trying to find the configuration files or commands to do things not supported in the web interface, like bind mounts (see the sketch just below this list)
  • Container start-up order only applies per node, not across the cluster
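
For posterity, the bind-mount incantation turns out to be one pct command once you find it; the container ID and paths here are placeholders:
code:
# Bind-mount a host directory into LXC container 100, visible inside as /media.
pct set 100 -mp0 /mnt/tank/media,mp=/media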

On the plus side, backups were simple and I managed to do all my fumbling around and transition away without any data loss.

I will admit, in retrospect, I was doing some stuff that seems quite dumb:
I wanted to run some drives with mergerfs and SnapRAID, so I passed them through to an OpenMediaVault VM and shared them back to the cluster with NFS. A lot of my struggles seemed to relate to me not running the NAS bare metal.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Yeah, that (virtualized NAS) is a setup I've tried and would always advise staying away from. It's just been pure pain for me.

I originally wanted to go that route when I moved to a server that supported HBA passthrough because I hated the idea of my NAS sitting at single-digit utilization all the time. Hardware maintenance and reboots sucked, and my new top fear became "what if my hypervisor dies and takes my NAS with it".

Bare-metal NAS is simpler and one less failure point to worry about. I got around the wasted resources issue by using the NAS as a Docker host, which also shrunk my VM footprint by a fair amount.

BlankSystemDaemon
Mar 13, 2009



I think it's safe to say that if you're looking seriously at self-hosting, the first thing you want is a general server OS rather than an appliance OS that's meant for a specific purpose and which was later retrofitted to be able to occasionally do other things.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?
Next month it'll be two years that I've been running TrueNAS CORE virtualized on ESXi, and so far it's been completely hassle-free. It only does storage and shares (NFS, etc.), and then I have a separate Ubuntu LTS VM for containers. I thought I'd spin up some more VMs but haven't had any reason to do so yet.

If I were redoing my setup I'd start with evaluating Proxmox VE instead of going with VMware (especially because of Broadcom, but also because I'd prefer to be using free/open software), but I'd definitely be using virtualization again.

BlankSystemDaemon
Mar 13, 2009



Sure, it's possible - but you're relying on hardware-accelerated virtualization for multi-tenancy, and it still complicates things to the point that it's easy to get a small detail wrong.

Not only does that mean each kernel takes up its own amount of space (ideally it shouldn't be enough to matter, but we all know just how ideal the real world with production workloads usually is), it also means each kernel has its own idea of what needs to be cached.

And while it's certainly possible to partition things if you've got enough resources, there's always an overhead associated, and the same is true for all the peripheral device virtualization you'll need to be doing (because it's only the CPU that's hardware-accelerated) - so it's much harder to fit it into a small thermal/energy envelope.

odiv
Jan 12, 2003

Keito posted:


If I were redoing my setup I'd start with evaluating Proxmox VE instead of going with VMware (especially because of Broadcom, but also because I'd prefer to be using free/open software), but I'd definitely be using virtualization again.

I have TrueNAS Scale on Proxmox VE right now. It's also been hassle-free to this point, but I'm also not using TrueNAS for anything else besides the NAS thing. I probably could have just gone with the tried and true Core, but Scale had just officially come out, so I said what the hell.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



By the way, if you're using OMV and Docker stops working after an update today, it's because of AppArmor. You can either tell Docker not to expect it or install AppArmor; Docker's suggested solution is the latter, and it's linked here:

https://forum.openmediavault.org/index.php?thread/46112-docker-not-working-since-omv-upgrade/&postID=337135#post337135
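
As I read the linked post, it boils down to one of these two (the image name is a placeholder):
code:
# Recommended fix: install AppArmor so Docker finds the profile it expects.
sudo apt install apparmor
sudo systemctl restart docker

# Per-container alternative: run without AppArmor confinement.
docker run --security-opt apparmor=unconfined my-image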

Nitrousoxide fucked around with this message at 06:43 on Feb 3, 2023

forbidden dialectics
Jul 26, 2005





odiv posted:

I have TrueNAS Scale on Proxmox VE right now. It's also been hassle free to this point, but I'm also not using TrueNAS for anything else besides the NAS thing. I probably could have just gone with the tried and true Core but Scale had just officially come out, so I said what the hell.

I've been using Turnkey Fileserver running in a container on Proxmox for NAS duties for years now; it works perfectly fine.

CopperHound
Feb 14, 2012

spincube posted:

I've found a nice Android app that can stream music from a few self-hosted applications: https://symfonium.app It's paid, with a full seven-day trial.

In my experience so far it plays really, really well with a Navidrome install, and offers a few choice features on top: like smart playlists, and falling back to an offline cache, and I appreciate being able to remove parts of the UI that I'll never use.

I have been using this for a bit now and need to ask: is there an app like this for desktop?

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I got Tailscale set up so I can access stuff from away when/if I ever need to. I really dig that I don't need to open ports for Plex or Synology Photos.

This has sent me down a security hardening path. I'm not exactly a target and everything is behind a firewalled router, but I also figure it's worth taking some time to at least put some basic self-defense in place.

Some of what I've been getting set up:
  • Switching to certificates for SSH login (sshd_config sketch below)
  • Not letting my wife set her own password for her Synology account :laffo:
  • Locked the reverse proxy access to the network subnet, though if someone were to get in it wouldn't really matter.
  • Throwing passwords on any front-end service that Tailscale can touch with Authelia/fail2ban, as well as limiting user access to specific machines/ports based on use case
  • Creating accounts with very specific access levels between systems. E.g. paperless-ngx writes to the Synology drive but can only write/read one specific shared folder rather than just logging in with my main account or 777'ing everything.
  • Put everything on the reverse proxy behind SSL using DNS challenge
  • Anything I'm reaching out to that can accept SSL connections, I do
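
The SSH item boils down to a few sshd_config lines once key login is confirmed working; this sketch assumes plain public keys rather than full SSH certificates:
code:
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no   # KbdInteractiveAuthentication on newer OpenSSH
PermitRootLogin prohibit-password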

Is there any obvious/non-obvious stuff I'm missing? I don't really need Fort Knox over here and I don't feel comfortable/need to expose poo poo via open ports, so some of this is more for learning than necessity, but it still seems important to do right.

Trapick
Apr 17, 2006

Well Played Mauer posted:

  • Not letting my wife set her own password for her Synology account :laffo:
Is there any obvious/non-obvious stuff I'm missing? I don't really need Fort Knox over here and I don't feel comfortable/need to expose poo poo via open ports, so some of this is more for learning than necessity, but it still seems important to do right.

Does your wife have physical access? Still risky.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Well Played Mauer posted:

I got Tailscale set up so I can access stuff from away when/if I ever need to. I really dig that I don't need to open ports for Plex or Synology Photos.

This has sent me down a security hardening path. I'm not exactly a target and everything is behind a firewalled router, but I also figure it's worth taking some time to at least put some basic self-defense in place.

Some of what I've been getting set up:
  • Switching to certificates for SSH login
  • Not letting my wife set her own password for her Synology account :laffo:
  • Locked the reverse proxy access to the network subnet, though if someone were to get in it wouldn't really matter.
  • Throwing passwords on any front-end service that Tailscale can touch with Authelia/fail2ban, as well as limiting user access to specific machines/ports based on use case
  • Creating accounts with very specific access levels between systems. E.g. paperless-ngx writes to the Synology drive but can only write/read one specific shared folder rather than just logging in with my main account or 777'ing everything.
  • Put everything on the reverse proxy behind SSL using DNS challenge
  • Anything I'm reaching out to that can accept SSL connections, I do

Is there any obvious/non-obvious stuff I'm missing? I don't really need Fort Knox over here and I don't feel comfortable/need to expose poo poo via open ports, so some of this is more for learning than necessity, but it still seems important to do right.

You can throw https://github.com/authelia/authelia in front of anything that doesn't have a sign-in option built in, to have some authentication for all your services.
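
The nginx wiring for that is mostly one auth_request block. A heavily trimmed sketch, assuming Authelia is reachable at authelia:9091; the full config in their docs sets more headers than this:
code:
# Every request is checked against Authelia before being proxied onward.
location /internal/authelia {
    internal;
    proxy_pass http://authelia:9091/api/verify;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
}
location / {
    auth_request /internal/authelia;
    proxy_pass http://backend:8080;  # placeholder service
}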

I've personally not done this, but I live alone and don't ever have anyone over who has access to the non-guest wifi.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Just found a good deal on an HP ProDesk 600 G4 SFF with an i7-8700. I’m going to throw a bunch of RAM into it and make it my primary server for Docker shenanigans, and keep the mini I have for Pi-hole, Unbound, and NPM.

The extra room to expand seems nice, just in case my MacBook goes and I need a Plex backup, too. But this will put me one Ethernet port over what my router can handle so now I’m looking at switches.

At this point I’m giving myself six months before I’ve got some rack mounted behemoth sitting in my office while my wife tells her friends she wishes I had a gambling problem.

SEKCobra
Feb 28, 2011

Hi
:saddowns: Don't look at my site :saddowns:
Rackmounted switches with nice cabling actually increase waifu satisfaction.

Arishtat
Jan 2, 2011

SEKCobra posted:

Rackmounted switches with nice cabling actually increase waifu satisfaction.

Around these parts spouse satisfaction is directly proportional to meeting WiFi SLAs. She does appreciate the central file storage though (when she remembers to use it).

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I bought a lot of goodwill with the PiHole and Plex setups, actually.

I have a dumb switch question that I haven’t been able to find an answer for, and I’m pretty sure I’m overthinking it, but I’m trying to figure out the optimal setup for what gets plugged into the switch.

My router has one 2.5Gbps port and three 1Gbps ports. The unmanaged switch I’m grabbing has all 2.5Gbps ports. Most of my equipment only has 1Gbps NICs. My question is more about LAN bandwidth than WAN. Basically, do I lose anything if I plug the switch into the 2.5Gbps port, then plug everything into the switch, leaving the 1Gbps ports on the router empty? Or would I be better off putting some of the machines on the 1Gbps ports on the router to spread the distribution around?

Like I said, I’m thinking more about LAN data transfer than anything else. Like, do two machines talking on separate 1Gbps ports free up the stuff on the 2.5Gbps switch, since they wouldn’t be using overhead on the switch to talk, or does it not really work that way?

I think I’m conflating hubs and switches here, but what’s adding to my confusion is that the router claims it can handle 6Gbps over WiFi. So does that mean the router can move that amount of data and the maximum capacity is limited per port (and thus there’s some benefit to using all the ports on the router), or is the WiFi maximum separate from the Ethernet ports, and it’s better to just throw everything into the switch that’s connected to the 2.5Gbps port on the router?

This really feels like a stupid question but I can’t quite turn it over in my head.

Arishtat
Jan 2, 2011

Well Played Mauer posted:

I bought a lot of goodwill with the PiHole and Plex setups, actually.

I have a dumb switch question that I haven’t been able to find an answer for, and I’m pretty sure I’m overthinking it, but I’m trying to figure out the optimal setup for what gets plugged into the switch.

My router has one 2.5Gbps port and three 1Gbps ports. The unmanaged switch I’m grabbing has all 2.5Gbps ports. Most of my equipment only has 1Gbps NICs. My question is more about LAN bandwidth than WAN. Basically, do I lose anything if I plug the switch into the 2.5Gbps port, then plug everything into the switch, leaving the 1Gbps ports on the router empty? Or would I be better off putting some of the machines on the 1Gbps ports on the router to spread the distribution around?

Like I said, I’m thinking more about LAN data transfer than anything else. Like, do two machines talking on separate 1Gbps ports free up the stuff on the 2.5Gbps switch, since they wouldn’t be using overhead on the switch to talk, or does it not really work that way?

I think I’m conflating hubs and switches here, but what’s adding to my confusion is that the router claims it can handle 6Gbps over WiFi. So does that mean the router can move that amount of data and the maximum capacity is limited per port (and thus there’s some benefit to using all the ports on the router), or is the WiFi maximum separate from the Ethernet ports, and it’s better to just throw everything into the switch that’s connected to the 2.5Gbps port on the router?

This really feels like a stupid question but I can’t quite turn it over in my head.

In the absence of information such as switch and router model numbers just plug your LAN devices into the switch and move on with your life OP. For more information I’d suggest posing the question to the home networking thread with the relevant details included.

Mr. Crow
May 22, 2008

Snap City mayor for life

Well Played Mauer posted:

I bought a lot of goodwill with the PiHole and Plex setups, actually.

I have a dumb switch question that I haven’t been able to find an answer for, and I’m pretty sure I’m overthinking it, but I’m trying to figure out the optimal setup for what gets plugged into the switch.

My router has one 2.5Gbps port and three 1Gbps ports. The unmanaged switch I’m grabbing has all 2.5Gbps ports. Most of my equipment only has 1Gbps NICs. My question is more about LAN bandwidth than WAN. Basically, do I lose anything if I plug the switch into the 2.5Gbps port, then plug everything into the switch, leaving the 1Gbps ports on the router empty? Or would I be better off putting some of the machines on the 1Gbps ports on the router to spread the distribution around?

Like I said, I’m thinking more about LAN data transfer than anything else. Like, do two machines talking on separate 1Gbps ports free up the stuff on the 2.5Gbps switch, since they wouldn’t be using overhead on the switch to talk, or does it not really work that way?

I think I’m conflating hubs and switches here, but what’s adding to my confusion is that the router claims it can handle 6Gbps over WiFi. So does that mean the router can move that amount of data and the maximum capacity is limited per port (and thus there’s some benefit to using all the ports on the router), or is the WiFi maximum separate from the Ethernet ports, and it’s better to just throw everything into the switch that’s connected to the 2.5Gbps port on the router?

This really feels like a stupid question but I can’t quite turn it over in my head.

https://www.nvtphybridge.com/full-duplex/

Realistically you won't notice optimizing your switch layout unless you're constantly sending large amounts of data around your house, which would be unusual, for me at least. Like the other guy said, just use it, and if you notice issues then nerd out on it. For example, 4K streaming bitrate is anywhere from 40-128 Mbps, so if you have issues it's probably not the switch. Probably.
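
And if you ever do want to nerd out on it, iperf3 between two boxes will tell you what a given path actually does (the IP is a placeholder):
code:
# On machine A: run a listener.
iperf3 -s
# On machine B: push traffic at A for ten seconds and report throughput.
iperf3 -c 192.168.1.10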

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

Well Played Mauer posted:

I bought a lot of goodwill with the PiHole and Plex setups, actually.

I have a dumb switch question that I haven’t been able to find an answer for, and I’m pretty sure I’m overthinking it, but I’m trying to figure out the optimal setup for what gets plugged into the switch.

My router has one 2.5Gbps port and three 1Gbps ports. The unmanaged switch I’m grabbing has all 2.5Gbps ports. Most of my equipment only has 1Gbps NICs. My question is more about LAN bandwidth than WAN. Basically, do I lose anything if I plug the switch into the 2.5Gbps port, then plug everything into the switch, leaving the 1Gbps ports on the router empty? Or would I be better off putting some of the machines on the 1Gbps ports on the router to spread the distribution around?

Like I said, I’m thinking more about LAN data transfer than anything else. Like, do two machines talking on separate 1Gbps ports free up the stuff on the 2.5Gbps switch, since they wouldn’t be using overhead on the switch to talk, or does it not really work that way?

I think I’m conflating hubs and switches here, but what’s adding to my confusion is that the router claims it can handle 6Gbps over WiFi. So does that mean the router can move that amount of data and the maximum capacity is limited per port (and thus there’s some benefit to using all the ports on the router), or is the WiFi maximum separate from the Ethernet ports, and it’s better to just throw everything into the switch that’s connected to the 2.5Gbps port on the router?

This really feels like a stupid question but I can’t quite turn it over in my head.

You're overthinking it. Just plug everything into your switch and don't worry about it, nothing you have is going to impact network performance in any way.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Appreciated, thanks everyone. I figured this was, in practice, a rabbit hole, but I wasn't entirely sure if there was a good practice I was missing.
