corgski
Feb 6, 2007

Silly goose, you're here forever.

Counterpoint: postfix comes with sane defaults now, at least in Debian, and packages like https://www.iredmail.org/ make it even more trivial. The challenge is getting a static IP with good reputation and knowing which RBLs are actually extortion scams and shouldn't be considered when receiving mail. (UCEPROTECT, that's the big extortion scam)

Oh and don't bother trying to deliver to O365 users, that's an uphill battle.

corgski fucked around with this message at 04:22 on Jan 14, 2023


Well Played Mauer
Jun 1, 2003

We'll always have Cabo

corgski posted:

Counterpoint: postfix comes with sane defaults now, at least in Debian, and packages like https://www.iredmail.org/ make it even more trivial. The challenge is getting a static IP with good reputation and knowing which RBLs are actually extortion scams and shouldn't be considered when receiving mail. (UCEPROTECT, that's the big extortion scam)

Oh and don't bother trying to deliver to O365 users, that's an uphill battle.

I have had to deal with email deliverability issues in the past and understand the nightmare from the "I just wanna send transactional emails gently caress" fence. I didn't even think about having to send to ESPs from a brand new IP with at best zero reputation. I was thinking more about hosting the personal domain so I could send MX records and such to push the name@customdomain through Proton's mail servers.

corgski
Feb 6, 2007

Silly goose, you're here forever.

If you're just using protonmail for hosted email on your own domain that takes pretty much all the hassle out of it - all you're doing is setting MX records and you're paying them to host the infrastructure and fight with SORBS when some chucklefuck posts their email username and password on a public forum and it gets picked up to send several gigabytes of dick enhancement pill spam.
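For reference, the record set a hosted-mail arrangement like that typically boils down to is small. Everything below is a placeholder sketch — the actual hostnames, DKIM selectors, and verification strings come from the provider's dashboard, not from here:

```zone
; Illustrative zone records for hosted mail on a custom domain.
; All hosts and values are placeholders -- use the exact ones your
; mail provider's setup page gives you.
example.com.        IN MX    10 mail.example-provider.tld.
example.com.        IN MX    20 mailsec.example-provider.tld.
example.com.        IN TXT   "v=spf1 include:_spf.example-provider.tld ~all"
default._domainkey  IN CNAME dkim.example-provider.tld.
_dmarc              IN TXT   "v=DMARC1; p=quarantine"
```

The MX records do the "push name@customdomain through their servers" part; SPF/DKIM/DMARC are what keep outbound mail from landing in spam.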

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I found a dude selling a DS220+ with the 4GB RAM upgrade for like $289 on eBay, then grabbed a couple 6TB WD Red Pluses off Amazon.

I figure it’s a good entry into the NAS world assuming it’s not a scam. For the stuff I wanna store - media for the most part, with some space reserved for future VM projects when I get tired of external drives connected with USB 3 - the 6TB RAID should cover me until I’m ready to spend money again.

Please god let me stop spending money.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Well Played Mauer posted:

I found a dude selling a DS220+ with the 4GB RAM upgrade for like $289 on eBay, then grabbed a couple 6TB WD Red Pluses off Amazon.

I figure it’s a good entry into the NAS world assuming it’s not a scam. For the stuff I wanna store - media for the most part, with some space reserved for future VM projects when I get tired of external drives connected with USB 3 - the 6TB RAID should cover me until I’m ready to spend money again.

Please god let me stop spending money.

Make sure to 3-2-1 backup your server image or configs at the very least. You want a backup solution that will let you recover from your house burning down, so you don't lose everything you've set up.

I personally don't back up my media folder. That gets way too expensive, so it only exists on my NAS. But I take a monthly disk image of my server with Clonezilla (though you can use the built-in imager for Proxmox, since you're not on bare metal like me) and then daily backups of my docker container config folders with duplicity.

Both the disk images and the docker config folders are shot over to my NAS for a local copy on another device, and I use backblaze b2 storage for an external backup.

I have had to use the server image backup once, due to a hardware failure on a previous iteration when the old laptop it ran on died, and I've used the docker config folder backups a handful of times when I REALLY hosed something up while fiddling with a container's settings.
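The daily config-backup half of that workflow can be sketched with plain tar — duplicity layers encryption and incremental diffs on top of the same idea. All paths here are made-up stand-ins; point it at your real config folder and a NAS-mounted destination from cron:

```shell
#!/bin/sh
set -eu

# backup_configs SRC DEST: tar SRC into a date-stamped archive in DEST,
# pruning archives older than 30 days so the staging dir doesn't grow forever.
backup_configs() {
    src=$1; dest=$2
    mkdir -p "$dest"
    stamp=$(date +%Y-%m-%d)
    tar -czf "$dest/configs-$stamp.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
    find "$dest" -name 'configs-*.tar.gz' -mtime +30 -delete
    echo "$dest/configs-$stamp.tar.gz"
}

# Demo run against a throwaway directory (swap in your real docker config
# folder and a NAS/B2-synced destination for the real thing).
demo=$(mktemp -d)
mkdir -p "$demo/docker-configs"
echo "example: config" > "$demo/docker-configs/app.yml"
ARCHIVE=$(backup_configs "$demo/docker-configs" "$demo/backups")
echo "wrote $ARCHIVE"
```

Shipping the resulting archives to the NAS and then to B2 covers the "local copy on another device plus external copy" part of 3-2-1.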

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Yeah, I currently have my VMs set to monthly backups on the server's local storage, but I'll work up a full Proxmox backup once I get the NAS up and running. That's a great idea.

I had a similar thought on media given it's not the end of the world if it goes poof; it's more annoying than anything else. If instead I used the NAS to hold the media and back up the server/MacBook running the media management, I could stripe the two HDDs in the NAS for extra capacity, then leave the low-risk media on it plus have the server backups as a fallback. That way all three systems would need to die at once to lose everything, and I have 12TB of storage. If I get really crazy, I could pick up a bigger NAS for more redundancy.

The flaw with this plan is I'm pretty interested in pulling our family photos and poo poo away from Google/iCloud, and my wife has made it clear that if we do that and we lose all our photos, my time on the planet will be limited. I guess I could always RAID 1 this thing, and if I run out of storage get a bigger NAS, turn the 2-bay one into an SSD unit to hold a litany of VMs, move the existing drives into the new one, and then complement them with additional drives using SHR or something? This is way ahead of myself but it's fun to think about.

Nulldevice
Jun 17, 2006
Toilet Rascal

Well Played Mauer posted:

Yeah, I currently have my VMs set to monthly backups on the server's local storage, but I'll work up a full Proxmox backup once I get the NAS up and running. That's a great idea.

I had a similar thought on media given it's not the end of the world if it goes poof; it's more annoying than anything else. If instead I used the NAS to hold the media and back up the server/MacBook running the media management, I could stripe the two HDDs in the NAS for extra capacity, then leave the low-risk media on it plus have the server backups as a fallback. That way all three systems would need to die at once to lose everything, and I have 12TB of storage. If I get really crazy, I could pick up a bigger NAS for more redundancy.

The flaw with this plan is I'm pretty interested in pulling our family photos and poo poo away from Google/iCloud, and my wife has made it clear that if we do that and we lose all our photos, my time on the planet will be limited. I guess I could always RAID 1 this thing, and if I run out of storage get a bigger NAS, turn the 2-bay one into an SSD unit to hold a litany of VMs, move the existing drives into the new one, and then complement them with additional drives using SHR or something? This is way ahead of myself but it's fun to think about.

Do RAID1. RAID0 is suicide RAID: one disk goes, and all of your data is gone forever. If you end up needing additional capacity down the line you have options, like adding larger disks one at a time to the existing NAS. To move to 12TB drives, for instance, replace one disk, let it rebuild from the other, then swap the second drive for another 12TB; once that rebuild finishes, the extra space becomes available. Make sure you're running SHR for your pool config so things like this are automated. (Cliff notes version.)

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Oh, lastly, make sure you validate that your backups work. Spin up a new VM, deploy a backup you made to it, and make sure it all works right. There would be nothing worse than thinking you have a good backup solution only to discover it wasn't actually working when it came time to recover from a critical failure.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Will do. Everything NAS related should show up next weekend. Pretty excited about all this. I think in the meantime I’m gonna get a reverse proxy going and then get a homepage set up.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Got Portainer up and running on a VM. That was surprisingly easy. I installed Calibre-web on it just to see how it works, and aside from some database funkiness between it and Readarr, I got it up and running.

One thing I noticed is it takes the containers a good bit of time to fully initialize. I thought I broke something when getting Calibre-web to start up because the web GUI wasn't immediately available. Then I looked at the logs and realized it was still initializing 3-5 minutes after I ran the container. Is that normal, or is the external SSD I have everything on poo poo? (Maybe both?)

Other note: I really need to get a homepage/reverse proxy set up. poo poo's everywhere now and I'm forgetting which service is on which VM.

odiv
Jan 12, 2003

Well Played Mauer posted:

One thing I noticed is it takes the containers a good bit of time to fully initialize. I thought I broke something when getting Calibre-web to start up because the web GUI wasn't immediately available. Then I looked at the logs and realized it was still initializing 3-5 minutes after I ran the container. Is that normal, or is the external SSD I have everything on poo poo? (Maybe both?)
Maybe it's the resources you allocated to the VM? Or is the external SSD on like USB 2 or something?

Well Played Mauer posted:

Other note: I really need to get a homepage/reverse proxy set up. poo poo's everywhere now and I'm forgetting which service is on which VM.
Yeah, that's what I said! :P

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Well Played Mauer posted:

Got Portainer up and running on a VM. That was surprisingly easy. I installed Calibre-web on it just to see how it works, and aside from some database funkiness between it and Readarr, I got it up and running.

One thing I noticed is it takes the containers a good bit of time to fully initialize. I thought I broke something when getting Calibre-web to start up because the web GUI wasn't immediately available. Then I looked at the logs and realized it was still initializing 3-5 minutes after I ran the container. Is that normal, or is the external SSD I have everything on poo poo? (Maybe both?)

Other note: I really need to get a homepage/reverse proxy set up. poo poo's everywhere now and I'm forgetting which service is on which VM.

First startup of a container might take a bit as it creates the config files, but after that it should be faster. I don't use ANY remote-mounted directories (except media directories) for my containers; the config folders all live locally on the server to maximize performance. If you're using an external hard drive for your persistent volumes, try the internal SSD instead.

You could also try upping your hardware allocation to that VM and see if you get better performance.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Well Played Mauer posted:

One thing I noticed is it takes the containers a good bit of time to fully initialize. I thought I broke something when getting Calibre-web to start up because the web GUI wasn't immediately available. Then I looked at the logs and realized it was still initializing 3-5 minutes after I ran the container. Is that normal, or is the external SSD I have everything on poo poo? (Maybe both?)

Try to stick to a single distributor for images who builds off the same base image.

https://www.linuxserver.io are really good about this. All of their containers are built off the same base image. So let's say you have 3 containers from them where the base image is updated. The first image will pull the needed layers and if it shares any of them with the other 2, you already have them downloaded.

This is a nice rundown on it: https://vsupalov.com/docker-image-layers/

I believe hotio do the same with their images too.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Lotta stuff here, thanks y'all.

odiv posted:

Maybe it's the resources you allocated to the VM? Or is the external SSD on like USB 2 or something?

It's USB 3, but the drive was one I had lying around to archive old photos before we upgraded our iCloud subscription, so I wasn't looking for speed from it. I have a new Crucial SSD I'm planning to add to the unit via USB 3; maybe I'll just move the backups over to that and run off it to see if there's an improvement.

Nitrousoxide posted:

First startup of a container might take a bit as it creates the config files, but after that it should be faster. I don't use ANY remote-mounted directories (except media directories) for my containers; the config folders all live locally on the server to maximize performance. If you're using an external hard drive for your persistent volumes, try the internal SSD instead.

You could also try upping your hardware allocation to that VM and see if you get better performance.

Since it's a tiny PC, the only internal SSD is the one running Proxmox, and it wouldn't let me use that to store any of the VMs.

Matt Zerella posted:

Try to stick to a single distributor for images who builds off the same base image.

https://www.linuxserver.io are really good about this. All of their containers are built off the same base image. So let's say you have 3 containers from them where the base image is updated. The first image will pull the needed layers and if it shares any of them with the other 2, you already have them downloaded.

This is a nice rundown on it: https://vsupalov.com/docker-image-layers/

I believe hotio do the same with their images too.

I sorta did this by accident, or have so far. Linuxserver is where I grabbed Calibre-web, so I'll keep using that as my main source.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Not trying to turn this into my blog, so if this is getting annoying I'll stop, but!

I managed to get Nginx Proxy Manager up and running. I decided not to try to set up DDNS for remote access because my use case doesn't really justify that level of internet exposure. I figure I can make Wireguard a project if I ever really need to remote into my little command center. One problem I've run into, though, is I wasn't able to figure out how to log in to Proxmox from the proxy host I set up for it. It'll load the site correctly, but after logging in, it throws me a 401 error that says: "No ticket." I googled around a bit but didn't find a solution that worked for me. I imagine it's all SSL related, but I wasn't able to find anything on the internet about getting that working without exposing the NPM VM to the internet. In the meantime, I can connect by hitting the IP directly, so it's not a huge deal outside of it being the one thing dangling that isn't working.

odiv
Jan 12, 2003

Post the proxy host settings?

Also, I didn't bother setting up proxmox with the reverse proxy running on it because I figured if I was having problems with the proxmox server and needed to troubleshoot then I wouldn't want to remember how to get to it. I might end up offloading some of the network stuff to another box and also roll my own router. We'll see.
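For anyone hitting the same 401 later: the usual culprit when the Proxmox UI sits behind a reverse proxy is that websocket upgrades aren't being passed through and the backend is being hit over the wrong scheme/port. A proxy host that generally works looks like this in raw nginx terms (hostname and upstream IP are placeholders; in NPM this corresponds to scheme `https`, port `8006`, and enabling the Websockets Support toggle):

```nginx
# Sketch of a proxy host for the Proxmox web UI (names/IPs are placeholders).
# The two things that usually matter: talk HTTPS to port 8006, and pass
# websocket upgrades through (the console and login ticket flow need them).
server {
    listen 443 ssl;
    server_name proxmox.home.example;          # your internal hostname

    location / {
        proxy_pass https://192.168.1.10:8006;  # Proxmox host
        proxy_ssl_verify off;                  # self-signed backend cert
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```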

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I managed to get it running after finding a tutorial to create self-signed keys. I whipped one up with a 10-year expiration and threw it into NPM, and now everything's over https and Proxmox et al are happy campers. Now I just gotta figure out why Firefox is being a dick about the pem I created.

Well Played Mauer fucked around with this message at 23:40 on Jan 18, 2023
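A one-shot version of that self-signed setup, for anyone following along. One hedge worth knowing: Firefox (and Chrome) refuse certs that lack a subjectAltName, which is a common reason a hand-rolled pem gets rejected even after the CA/cert is otherwise fine. All hostnames below are placeholders:

```shell
# Generate a 10-year self-signed cert in one shot, with a SAN so modern
# browsers will accept the hostname (certs with only a CN get rejected).
# Hostnames are placeholders. Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout npm.key -out npm.pem \
    -days 3650 \
    -subj "/CN=proxmox.home.example" \
    -addext "subjectAltName=DNS:proxmox.home.example,DNS:*.home.example"

# Sanity check: print the expiry date and the SANs.
openssl x509 -in npm.pem -noout -enddate -ext subjectAltName
```

Drop `npm.pem`/`npm.key` into NPM's custom certificate slot, and import the pem into the browser's trust store (or a personal CA, as discussed below) to quiet the warnings.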

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Well Played Mauer posted:

I managed to get it running after finding a tutorial to create self-signed keys. I whipped one up with a 10-year expiration and threw it into NPM, and now everything's over https and Proxmox et al are happy campers. Now I just gotta figure out why Firefox is being a dick about the pem I created.

This is such a deep rabbit hole you've fallen down but if you learn it (PKI management) you'll be ahead of 75% of the people applying for jobs.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Yeah, I found a couple tutorials on setting up self-signed certs and putting your own CA together, but I haven't had the time, so I started thinking about just doing it the usual way and decided I'd rather just VPN into everything. I'm still using a self-signed cert so I can easily get into Proxmox, but I just turned SSL off for the other stuff so I'm not clicking a buncha times to open, like, Plex.

Tamba
Apr 5, 2010

You can use letsencrypt certificates without exposing your server to the internet by using the DNS challenge method. A lot easier to set up than messing with your own CA and having to distribute that to all your devices
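Under the hood, DNS-01 just asks you to publish a hash in a TXT record, which is why no inbound exposure is needed; ACME clients automate the publishing through the DNS provider's API. The derivation itself is tiny (the token and account-key thumbprint below are made-up stand-ins):

```shell
# DNS-01 in a nutshell: the CA hands you a token, and you publish
#   _acme-challenge.<domain>  TXT  base64url( SHA-256( token "." key-thumbprint ) )
# which the CA then checks over public DNS. Values below are fake.
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
THUMBPRINT="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"

TXT_VALUE=$(printf '%s.%s' "$TOKEN" "$THUMBPRINT" \
    | openssl dgst -sha256 -binary \
    | openssl base64 -A \
    | tr '+/' '-_' | tr -d '=')

echo "_acme-challenge.example.com. 300 IN TXT \"$TXT_VALUE\""
```

In practice certbot, acme.sh, or NPM's built-in Let's Encrypt integration handles all of this; the point is only that the challenge never touches your server's ports.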

Azhais
Feb 5, 2007
Switchblade Switcharoo

Tamba posted:

You can use letsencrypt certificates without exposing your server to the internet by using the DNS challenge method. A lot easier to set up than messing with your own CA and having to distribute that to all your devices

Yeah, I have npm set up to do it itself and keep them updated. Really simple interface with cloudflare's dns

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?
I set up PKI using Vault last year after reading this tutorial and it was relatively simple (if you're a nerd I guess). If you have a domain and the use case is just enabling TLS for HTTP services your devices will be accessing, you should probably be using Let's Encrypt instead of rolling your own CA though.

Potato Salad
Oct 23, 2014

nobody cares


rolling your own CA is so fun though :smithicide:

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Potato Salad posted:

rolling your own CA is so fun though :smithicide:

I use a standalone GUI called XCA to generate an offline root CA and sign a 10-year intermediate that I hand to ADCS to sign everything else. I also have another intermediate off of that for Smallstep CA to do all my ACME poo poo.

It was absolutely exhausting and mind-melting to get everything set up just right, but it's been very hands-off ever since.

And then there's the occasional self-hosted app (NextCloud :argh:) whose mobile app shits the bed if you don't have a globally trusted cert like LetsEncrypt.
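The offline-root-plus-intermediate shape is tool-agnostic; stripped of XCA, ADCS, and Smallstep it reduces to a few openssl calls. A minimal sketch with placeholder names (real setups add CRL/AIA URLs, path length constraints, and an actual safe for `root.key`):

```shell
set -eu

# 1. Offline root: self-signed, long-lived; the key goes in the safe.
openssl req -x509 -newkey rsa:2048 -nodes -days 7300 \
    -keyout root.key -out root.pem -subj "/CN=Home Lab Root CA"

# 2. Intermediate: a CSR signed by the root for 10 years, with CA:TRUE
#    so it can sign leaf certs (this is what you'd hand to ADCS/Smallstep).
openssl req -newkey rsa:2048 -nodes \
    -keyout intermediate.key -out intermediate.csr \
    -subj "/CN=Home Lab Intermediate CA"
printf 'basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign\n' > ca.ext
openssl x509 -req -in intermediate.csr -days 3650 \
    -CA root.pem -CAkey root.key -CAcreateserial \
    -extfile ca.ext -out intermediate.pem

# 3. Verify the chain before putting the root key away.
openssl verify -CAfile root.pem intermediate.pem
```

Only `root.pem` needs distributing to devices; everything below the intermediate chains up to it automatically.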

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I use the DNS challenge method with cloudflare for my local wildcard certs, and use the standard letsencrypt method for NPM for my external facing services. It's worked well for me for nearly a year now

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I've read that streaming media is against the Cloudflare TOS. Would setting this up with DNS challenge and just running everything locally keep me out of their crosshairs?

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
That's specifically for the Cloudflare Tunnel, where they terminate SSL on their end and send the traffic to you over a tunnel.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Ah, got it. I'll give this a try, then. Thanks, y'all.

e: Got that working. Thanks again, I just wasn't googling properly.

Well Played Mauer fucked around with this message at 19:16 on Jan 20, 2023

Coxswain Balls
Jun 4, 2001

Potato Salad posted:

rolling your own CA is so fun though :smithicide:

I went through learning how to do this, complete with the non-networked root CA that I tossed in my safe. It was interesting learning how all that stuff works, although a year later when my certs started expiring I'd forgotten how to do everything properly again.

csammis
Aug 26, 2003

Mental Institution
I've been getting my self-hosting poo poo together after having a tiny hosting plan with Apis Networks (which is now being administered through Hostineer? I guess? I have no idea what happened) for almost two decades. The server stuff is easy enough to self-host, but Apis is also the DNS provider for my domain name. It's costing about $50 a year, so I'd like to get my DNS hosting moved elsewhere while I'm rearranging everything else. What's a goon-recommended DNS provider that just does DNS, preferably coming in at less than $50 annually?

corgski
Feb 6, 2007

Silly goose, you're here forever.

https://dns.he.net

There's no reason to pay for someone to host your nameservers, it's free from Hurricane Electric or a free value-add from just about every registrar these days.

Aware
Nov 18, 2003
I use cloudflare free tier myself. Does the job and has some nice additional features if you want them.

Resdfru
Jun 4, 2004

I'm a freak on a leash.
Cloudflare free is pretty cool. It has an API, and other stuff like Zero Trust you can use.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
I've been using Cloudflare for a while and it's great. I didn't really appreciate it until a friend of mine needed help troubleshooting the business website he'd registered through GoDaddy, and that was a miserable experience.

BlankSystemDaemon
Mar 13, 2009



Cloudflare: Making DNS centralized since 2009.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


corgski posted:

https://dns.he.net

There's no reason to pay for someone to host your nameservers, it's free from Hurricane Electric or a free value-add from just about every registrar these days.

HE.net DNS is great. I would switch back in a heartbeat if they had an API that could do ACME DNS challenges. The only free DNS provider I've found that can do those is Cloudflare, so I'm stuck with them.

Corb3t
Jun 7, 2003

Cloudflare is tight, you can even host your own website from your server for free.

corgski
Feb 6, 2007

Silly goose, you're here forever.

BlankSystemDaemon posted:

Cloudflare: Making DNS centralized since 2009.

gonna be great when they go down and take 2/3 of the internet with them because nobody bothered to think about that.

corgski
Feb 6, 2007

Silly goose, you're here forever.

Cenodoxus posted:

HE.net DNS is great. I would switch back in a heartbeat if they had an API that could do ACME DNS challenges. The only free DNS provider I've found that can do those is Cloudflare, so I'm stuck with them.


quote:

Dynamic TXT records have been added!
We've received requests for dynamic TXT records for use with Let's Encrypt Certificates. We've added them in using the same basic ddns syntax that we already provide with the difference being the use of 'txt=' in place of 'myip='. You will need to create the dynamic TXT record from within the dns.he.net interface before you will be able to make updates. You will not be able to dynamically create and delete these TXT records as doing so would subsequently remove your ddns key associated with the record.

NOTE: A propagation delay of up to 5 minutes may be experienced as the TTL of the record will need to expire and refresh. You should wait before requesting DNS01 validation once you have updated the record.
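Per the announcement above, the update uses the same ddns endpoint as IP updates with `txt=` swapped in for `myip=`. A sketch of the call, with the hostname, ddns key, and challenge value as placeholders (the TXT record and its ddns key must already exist in the dns.he.net interface):

```shell
# Update a pre-created dynamic TXT record at dns.he.net, using the same
# ddns syntax as IP updates but with 'txt=' in place of 'myip='.
# All three values below are placeholders.
HOST="_acme-challenge.example.com"
KEY="your-ddns-key"
VALUE="challenge-token-goes-here"

URL="https://dyn.dns.he.net/nic/update"
ARGS="hostname=${HOST}&password=${KEY}&txt=${VALUE}"

# Print the call rather than firing it, since the credentials are fake.
echo "curl -s \"$URL\" -d \"$ARGS\""

# Per the note above: allow up to 5 minutes for the old TTL to expire
# before asking the CA for DNS-01 validation.
```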


Neslepaks
Sep 3, 2003

Potato Salad posted:

rolling your own CA is so fun though :smithicide:

Rolling your own CA is actually a nightmare and I recommend against it. So many bothersome issues went away when I switched to an LE wildcard instead.
