|
corgski posted:gonna be great when they go down and take 2/3 of the internet with them because nobody bothered to think about that. This has already happened a few times, since so many companies use their CDN. Unless you mean permanently?
|
# ? Jan 24, 2023 21:18 |
|
corgski posted:gonna be great when they go down and take 2/3 of the internet with them because nobody bothered to think about that. https://twitter.com/jschauma/status/1592732158056988674
|
|
# ? Jan 24, 2023 22:12 |
|
corgski posted:Dynamic TXT records have been added! Well, there's my new project for the week. Thanks!
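For anyone planning to script against that: if the provider in question is dns.he.net, its dynamic DNS endpoint reportedly accepts a `txt` parameter alongside the usual hostname/key pair. A minimal sketch of building the update request — the endpoint path and parameter names here are assumptions to verify against the provider's docs, and the hostname and key are made up:

```python
from urllib.parse import urlencode

# Sketch of a dynamic TXT record update, assuming an HE-style endpoint
# (dyn.dns.he.net/nic/update) that takes hostname/password/txt query
# parameters -- verify against your provider's documentation first.
def build_txt_update(hostname: str, ddns_key: str, txt_value: str) -> str:
    params = urlencode({
        "hostname": hostname,   # the record to update
        "password": ddns_key,   # per-record DDNS key, not your account password
        "txt": txt_value,       # new TXT payload (e.g. an ACME DNS-01 token)
    })
    return f"https://dyn.dns.he.net/nic/update?{params}"

url = build_txt_update("_acme-challenge.example.com", "per-record-key", "token123")
# Requesting this URL (with urllib.request, requests, or curl) would apply it.
```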
|
# ? Jan 24, 2023 22:52 |
|
drat these companies centralising key infrastructure through the terrible crime of making it easy! I'm not sure how not using CF at home will stick it to them but you do you.
|
# ? Jan 24, 2023 22:58 |
|
gigantic corporations control a huge amount of everything. lol that they tried to put 3 million records in a spreadsheet and were surprised when it poo poo itself
|
# ? Jan 25, 2023 04:11 |
|
Inept posted:gigantic corporations control a huge amount of everything Apparently this is the loving truth. I decided to go with dns.he.net and so far the following poo poo has happened:
There isn't an emote big enough for this. All I want is to put the name in the place to point to the thing.
|
# ? Jan 25, 2023 04:38 |
|
Gandi is good.
|
# ? Jan 25, 2023 07:36 |
|
Neslepaks posted:Rolling your own CA is actually a nightmare and I recommend against it. So many bothersome issues went away when I changed to a LE wildcard instead. Yep. It's a good learning exercise if you homelab, though. Most people don't know what a CA is, or what the difference between a private CA and a self-signed cert is. A few years ago I had to deep-dive on this for work and it definitely took me a few weeks to really wrap my head around PKI infrastructure. But yeah, at home? No way would I use my own CA.
|
# ? Jan 25, 2023 09:36 |
|
Neslepaks posted:Rolling your own CA is actually a nightmare and I recommend against it. So many bothersome issues went away when I changed to a LE wildcard instead. hey serious question, why use LE wildcard? It seems like one of the added values of using LE is that it's easy to get specific certs for each individual system / pool of systems and have them automatically maintained
|
# ? Jan 25, 2023 14:40 |
|
Potato Salad posted:hey serious question, why use LE wildcard? It seems like one of the added values of using LE is that it's easy to get specific certs for each individual system / pool of systems and have them automatically maintained A wildcard cert could be useful if you only have one IP address and want to do TLS termination on a reverse proxy.
|
# ? Jan 25, 2023 14:54 |
I use a wildcard cert for *.internal.(domain).(tld) so that I only have to set up the Cloudflare challenge once for all my internal services, and they can all use the same TLS cert.
|
|
# ? Jan 25, 2023 15:37 |
|
SamDabbers posted:A wildcard cert could be useful if you only have one IP address and want to do TLS termination on a reverse proxy. I might be old-fashioned then. I make sure to enumerate SANs. Is this a Good Enough™ (for self-hosting) kind of thing, especially considering that LE cert key material is only good for a short period of time?
|
# ? Jan 25, 2023 15:46 |
|
SANs are the correct way to do it, but then you have to reissue the proxy's certificate every time you want to add or remove a service in your homelab, where you presumably tinker and host relatively short-lived experiments.
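One wrinkle worth keeping in mind when weighing wildcards against SANs: a TLS wildcard covers exactly one DNS label. A small sketch of that matching rule (hostnames here are made up for illustration):

```python
# Sketch of TLS wildcard matching semantics: per RFC 6125, the * in
# *.internal.example.com covers exactly one DNS label, so a wildcard
# cert won't cover deeper subdomains or the bare domain itself.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # * never spans multiple labels
    head, *rest = p_labels
    return (head == "*" or head == h_labels[0]) and rest == h_labels[1:]

wildcard_matches("*.internal.example.com", "jellyfin.internal.example.com")  # True
wildcard_matches("*.internal.example.com", "a.b.internal.example.com")       # False
wildcard_matches("*.internal.example.com", "internal.example.com")           # False
```

So a homelab that nests services under a second level (svc.a.internal.domain.tld) still needs either extra wildcards or SANs.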
|
# ? Jan 25, 2023 16:32 |
|
Neslepaks posted:Gandi is good. Was on call over the holidays, so I got a little extra money on my paycheque that I'm going to use to buy one of those fanless computers with 5-6 Ethernet ports so I can roll my own router. https://www.aliexpress.com/item/1005004336924039.html Am currently thinking VyOS on Proxmox. Wish me luck!
|
# ? Jan 25, 2023 17:06 |
|
Potato Salad posted:hey serious question, why use LE wildcard? It seems like one of the added values of using LE is that it's easy to get specific certs for each individual system / pool of systems and have them automatically maintained For me it's partly that it's easier to distribute a wildcard cert to various parts of my infrastructure, some of which may not have direct internet access, combined with a desire not to "leak" my hostnames to the world via Certificate Transparency logs.
|
# ? Jan 25, 2023 20:55 |
odiv posted:Yep, they're my registrar and have been for years. I was a fan of Gandi for a long time, but I jumped ship over their handling of this situation: https://news.ycombinator.com/item?id=22001822 I transferred all my domains to AWS.
|
|
# ? Jan 25, 2023 21:05 |
fletcher posted:I was a fan of Gandi for a long time but I jumped ship over their handling of this situation: https://news.ycombinator.com/item?id=22001822 Data doesn't exist unless there are three copies, on two different media, with one offsite and offline (so it can't be touched even if someone gains physical access to the device).
|
|
# ? Jan 25, 2023 21:12 |
|
I’ve been adding more stuff to the stack since I got up and running, and noticed some pretty big performance hits on other services when making changes or rebuilding Docker containers on my Proxmox machine. After looking at the CPU/memory consumption during the slowdowns and seeing nothing out of the ordinary, I finally got off my butt and put a new external SSD on the machine. Going from a five-year-old Toshiba drive I got as a photo backup to a Crucial drive from this decade immediately resolved the issues. I also post this because my god did Proxmox make changing drives on the VM easy. Just go into the hardware allocation and tell it to switch the storage to the other drive. It cloned everything and then I just restarted each VM.
|
# ? Jan 27, 2023 06:06 |
|
I have a Ceph pool in my Proxmox cluster that was on some cheap 2.5" laptop spinners, and it was always "meh" - but that was all the additional storage I could fit in my hosts (an assortment of OptiPlex micros). I never really wanted to put anything on the pool because it was so drat slow; any VMs I put there would inevitably have errors or randomly crash, but I still wanted some amount of clustered shared storage just in case. I switched those out for some Intel DC SSDs and it's actually quite usable now. Those DC-class SSDs are nice, too, because they have multiple PB of write endurance compared to the 200-300TB typical of consumer NVMe drives, so I'm no longer skittish about putting high-IO stuff on them (InfluxDB, Graylog, etc.). Proxmox good, SSDs gooder
|
# ? Jan 27, 2023 15:07 |
SamDabbers posted:SANs are the correct way to do it, but then you have to reissue the proxy's certificate every time you want to add or remove a service in your homelab, where you presumably tinker and host relatively short-lived experiments. Yeah, I use SANs, one each, for all my external services. My internal stuff is all wildcard certs. Well Played Mauer posted:I’ve been adding more stuff to the stack since I got up and running and noticed some pretty big performance hits on other services when making changes or rebuilding docker containers on my proxmox machine. After looking at the CPU/memory consumption during the slowdowns and seeing nothing out of the ordinary I finally got off my butt and put a new external SSD onto the machine. Going from a five-year-old Toshiba drive I got as a photo backup to a crucial drive from this decade immediately resolved the issues. If I ever replace my current bare-metal OptiPlex build with something with a few slots for internal storage drives, I'll probably switch to Proxmox. It does look pretty great.
|
|
# ? Jan 27, 2023 16:09 |
|
I spent the last 6 months trying out Proxmox and decided it was only making the bullshit I wanted to run at home more complicated. I had it running on my NAS and two USFF PCs, and these are the extra things I struggled with:
On the plus side, backups were simple and I managed to do all my fumbling around and transition away without any data loss. I will admit, in retrospect, I was doing some stuff that seems quite dumb: I wanted to run some drives with mergerfs and SnapRAID, so I passed them through to an OpenMediaVault VM and shared them back to the cluster with NFS. A lot of my struggles seemed to relate to me not running the NAS bare metal.
|
# ? Jan 27, 2023 18:19 |
|
Yeah, that (virtualized NAS) is a setup I've tried and would always advise staying away from. It's just been pure pain for me. I originally wanted to go that route when I moved to a server that supported HBA passthrough because I hated the idea of my NAS sitting at single-digit utilization all the time. Hardware maintenance and reboots sucked, and my new top fear became "what if my hypervisor dies and takes my NAS with it". Bare-metal NAS is simpler and one less failure point to worry about. I got around the wasted resources issue by using the NAS as a Docker host, which also shrunk my VM footprint by a fair amount.
|
# ? Jan 28, 2023 06:56 |
I think it's safe to say that if you're looking seriously at self-hosting, the first thing you want is a general server OS rather than an appliance OS that's meant for a specific purpose and which was later retrofitted to be able to occasionally do other things.
|
|
# ? Jan 28, 2023 13:54 |
|
I've run TrueNAS CORE virtualized on ESXi for two years next month, and so far it's been completely hassle-free. It only does storage and shares (NFS, etc.), and then I have a separate Ubuntu LTS VM for containers. Thought I'd spin up some more VMs, but haven't had any reason to do so yet. If I were redoing my setup I'd start with evaluating Proxmox VE instead of going with VMware (especially because of Broadcom, but also because I'd prefer to be using free/open software), but I'd definitely be using virtualization again.
|
# ? Jan 28, 2023 14:41 |
Sure, it's possible - but you're relying on hardware-accelerated virtualization for multi-tenancy, and it still complicates things to the point that it's easy to get a small detail wrong. Not only does each kernel take up its own amount of space (ideally it shouldn't be enough to matter, but we all know how far from ideal the real world with production workloads usually is), each kernel also has its own idea of what needs to be cached. And while it's certainly possible to partition things if you've got enough resources, there's always an overhead associated, and the same is true for all the peripheral device virtualization you'll need to do (because only the CPU is hardware-accelerated) - so it's much harder to fit it all into a small thermal/energy envelope.
|
|
# ? Jan 28, 2023 16:08 |
|
Keito posted:
|
# ? Jan 28, 2023 17:17 |
By the way, if you're using OMV and Docker stops working after an update today, it's because of AppArmor. You can either tell Docker not to expect it or install AppArmor; Docker's suggested solution is the latter, and it's linked here: https://forum.openmediavault.org/index.php?thread/46112-docker-not-working-since-omv-upgrade/&postID=337135#post337135 Nitrousoxide fucked around with this message at 06:43 on Feb 3, 2023 |
|
# ? Feb 3, 2023 06:29 |
|
odiv posted:I have TrueNAS Scale on Proxmox VE right now. It's also been hassle free to this point, but I'm also not using TrueNAS for anything else besides the NAS thing. I probably could have just gone with the tried and true Core but Scale had just officially come out, so I said what the hell. I've been using Turnkey Fileserver running in a container on Proxmox for NAS duties for years now; it works perfectly fine.
|
# ? Feb 3, 2023 06:33 |
|
spincube posted:I've found a nice Android app that can stream music from a few self-hosted applications: https://symfonium.app It's paid, with a full seven-day trial.
|
# ? Feb 3, 2023 20:24 |
|
I got Tailscale set up so I can access stuff from away when/if I ever need to. I really dig that I don't need to open ports for Plex or Synology Photos. This has sent me down a security-hardening path. I'm not exactly a target and everything is behind a firewalled router, but I figure it's worth taking some time to at least put some basic self-defense in place. Some of what I've been getting set up:
Is there any obvious/non-obvious stuff I'm missing? I don't really need Fort Knox over here and I don't feel comfortable/need to expose poo poo via open ports, so some of this is more for learning than necessity, but it still seems important to do right.
|
# ? Feb 3, 2023 23:13 |
|
Well Played Mauer posted:
|
# ? Feb 4, 2023 02:00 |
Well Played Mauer posted:I got Tailscale set up so I can access stuff from away when/if I ever need to. I really dig thatI don't need to open ports for Plex or Synology Photos. You can throw https://github.com/authelia/authelia in front of anything that doesn't have a sign in option built in to have some authentication for all your services. I've personally not done this, but I live alone and don't ever have anyone over who has access to the non-guest wifi.
|
|
# ? Feb 4, 2023 02:45 |
|
Just found a good deal on an HP ProDesk 600 G4 SFF with an i7-8700. I’m going to throw a bunch of RAM into it and make it my primary server for Docker shenanigans, and keep the mini I have for Pi-hole, Unbound, and NPM. The extra room to expand seems nice, just in case my MacBook goes and I need a Plex backup, too. But this will put me one Ethernet port over what my router can handle, so now I’m looking at switches. At this point I’m giving myself six months before I’ve got some rack-mounted behemoth sitting in my office while my wife tells her friends she wishes I had a gambling problem.
|
# ? Feb 4, 2023 04:26 |
|
Rackmounted switches with nice cabling actually increase waifu satisfaction.
|
# ? Feb 4, 2023 09:28 |
|
SEKCobra posted:Rackmounted switches with nice cabling actually increase waifu satisfaction. Around these parts spouse satisfaction is directly proportional to meeting WiFi SLAs. She does appreciate the central file storage though (when she remembers to use it).
|
# ? Feb 4, 2023 14:46 |
|
I bought a lot of goodwill with the PiHole and Plex setups, actually. I have a dumb switch question that I haven’t been able to find an answer for, and I’m pretty sure I’m overthinking it, but I’m trying to figure out the optimal setup for what gets plugged into the switch. My router has one 2.5Gbps port and three 1Gbps ports. The unmanaged switch I’m grabbing has all 2.5Gbps ports. Most of my equipment has only 1Gbps NICs. My question is more about LAN bandwidth than WAN. Basically, do I lose anything if I plug the switch into the 2.5Gbps port and then plug everything into the switch, leaving the 1Gbps ports on the router empty? Or would I be better off putting some of the machines on the router’s 1Gbps ports to spread the distribution around? Like I said, I’m thinking more about LAN data transfer than anything else. Like, do two machines talking on separate 1Gbps router ports free up capacity on the 2.5Gbps switch, since they wouldn’t be going through the switch to talk, or does it not really work that way? I think I’m conflating hubs and switches here, but what’s adding to my confusion is that the router claims it can handle 6Gbps over WiFi. So does that mean the router can move that amount of data and the capacity is limited per port (and thus there’s some benefit to using all the ports on the router), or is the WiFi maximum separate from the Ethernet ports, and it’s better to just throw everything into the switch that’s connected to the 2.5Gbps port on the router? This really feels like a stupid question but I can’t quite turn it over in my head.
|
# ? Feb 4, 2023 15:14 |
|
Well Played Mauer posted:I bought a lot of goodwill with the PiHole and Plex setups, actually. In the absence of information such as switch and router model numbers, just plug your LAN devices into the switch and move on with your life, OP. For more information I’d suggest posing the question to the home networking thread with the relevant details included.
|
# ? Feb 4, 2023 15:48 |
|
Well Played Mauer posted:I bought a lot of goodwill with the PiHole and Plex setups, actually. https://www.nvtphybridge.com/full-duplex/ Realistically you won't notice optimizing your switch layout unless you're constantly sending large amounts of data around your house, which would be unusual, for me at least. Like the other guy said, just use it, and if you notice issues then nerd out on it. For example, 4K streaming bitrate is anywhere from 40-128 Mbps, so if you have issues it's probably not the switch. Probably.
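To put rough numbers on that, reusing the bitrates above (real streams burst, and this ignores protocol overhead, so treat it as a ballpark):

```python
# Back-of-the-envelope: how many worst-case 4K streams fit through a
# single 2.5GbE uplink? (Ignores protocol overhead and bitrate bursts.)
uplink_mbps = 2500   # the 2.5GbE port between switch and router
stream_mbps = 128    # high end of the 40-128 Mbps range cited above

print(uplink_mbps // stream_mbps)  # -> 19 concurrent worst-case streams
```

With that much headroom on a single uplink, splitting hosts across the router's 1Gbps ports buys nothing for a typical home workload.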
|
# ? Feb 4, 2023 16:55 |
|
Well Played Mauer posted:I bought a lot of goodwill with the PiHole and Plex setups, actually. You're overthinking it. Just plug everything into your switch and don't worry about it, nothing you have is going to impact network performance in any way.
|
# ? Feb 4, 2023 17:43 |
|
|
Appreciated, thanks everyone. I figured this was in practice a rabbit hole, but I wasn't entirely sure if there was a good practice I was missing.
|
# ? Feb 4, 2023 18:11 |