Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

SEKCobra posted:

I believe for the tunnels you can absolutely run TLS inside of them. Normal web protection does terminate at their firewall.

Cloudflare encrypts the data transferred in the tunnels between their edge nodes and your host running cloudflared, but only after decrypting it once on their end. Whether cloudflared connects to your services via HTTPS or not afterwards doesn't change that.


Mr Crucial
Oct 28, 2005
What's new pussycat?
If you don’t like Cloudflare you could use OpenSSH tunnelling to do basically the same thing to your private VPS. That has the same advantage of being an outbound-initiated connection, so there's no need for a fixed IP or to open any ports. Because it’s just SSH you don’t need any special software on the VPS; pretty much anything Linux would work. You’d need some sort of reverse proxy web server on the VPS to direct traffic down the tunnel once it’s established, but it sounds like you’re already anticipating that.

Detecting if the tunnel goes down and bringing it up again automatically might require some finesse though.
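For the record, something like autossh under systemd handles the reconnection part. A rough sketch — the unit name, ports, and hostname here are all made up, and it assumes key-based auth is already working:

```ini
# /etc/systemd/system/reverse-tunnel.service  (hypothetical unit name)
[Unit]
Description=Persistent reverse SSH tunnel to the VPS
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command; -R: expose local port 8080 as localhost:9000 on the VPS
# -M 0 disables autossh's own monitor port and relies on ServerAlive probes instead
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 9000:localhost:8080 tunnel@vps.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

With `Restart=always` plus the ServerAlive options, plain `ssh` instead of autossh also gets you most of the way there.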

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Mr Crucial posted:

If you don’t like Cloudflare you could use OpenSSH tunnelling to do basically the same thing to your private VPS. That has the same advantage of being an outbound-initiated connection, so there's no need for a fixed IP or to open any ports. Because it’s just SSH you don’t need any special software on the VPS; pretty much anything Linux would work. You’d need some sort of reverse proxy web server on the VPS to direct traffic down the tunnel once it’s established, but it sounds like you’re already anticipating that.

Detecting if the tunnel goes down and bringing it up again automatically might require some finesse though.

I believe that this is one reason why Wireguard tunneling is pretty much recommended over SSH tunneling nowadays? Besides the (arguably) easier configuration, you can set a keepalive which helps when your home connection goes up and down.
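For reference, that keepalive is a single line in the WireGuard peer config — something like this, with the keys, addresses, and hostname all placeholders:

```ini
# /etc/wireguard/wg0.conf on the home machine (placeholder values throughout)
[Interface]
PrivateKey = <home-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
# send a keepalive packet every 25s so the NAT mapping stays open and the
# tunnel quietly re-establishes itself after the home connection flaps
PersistentKeepalive = 25
```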

Anyway, to (begin to) answer my own question, it seems I should not have been searching for "hardened" or "minimal" linux distros (because then I get general-purpose stuff like Alpine that needs configuration), but I should have been looking at router- and firewall-oriented distros.

https://en.wikipedia.org/wiki/List_of_router_and_firewall_distributions

I need to dig further, but OPNSense and VyOS seem the most promising options. IPFire looked interesting, but they seem to have a strange beef against Wireguard and, in the security/cryptography space, I definitely don't want to go against the herd.

e: yeah, replaced pfSense with OPNSense. I didn't know they were related, but from a quick googling OPNSense seems to be the consensus.
vvvvvv

NihilCredo fucked around with this message at 16:40 on Aug 31, 2022

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

NihilCredo posted:

I believe that this is one reason why Wireguard tunneling is pretty much recommended over SSH tunneling nowadays? Besides the (arguably) easier configuration, you can set a keepalive which helps when your home connection goes up and down.

Anyway, to (begin to) answer my own question, it seems I should not have been searching for "hardened" or "minimal" linux distros (because then I get general-purpose stuff like Alpine that needs configuration), but I should have been looking at router- and firewall-oriented distros.

https://en.wikipedia.org/wiki/List_of_router_and_firewall_distributions

I need to dig further, but pfSense and VyOS seem the most promising options. IPFire looked interesting, but they seem to have a strange beef against Wireguard and, in the security/cryptography space, I definitely don't want to go against the herd.

OPNSense is really good as well.

Mr Shiny Pants
Nov 12, 2012

NihilCredo posted:

I believe that this is one reason why Wireguard tunneling is pretty much recommended over SSH tunneling nowadays? Besides the (arguably) easier configuration, you can set a keepalive which helps when your home connection goes up and down.

Anyway, to (begin to) answer my own question, it seems I should not have been searching for "hardened" or "minimal" linux distros (because then I get general-purpose stuff like Alpine that needs configuration), but I should have been looking at router- and firewall-oriented distros.

https://en.wikipedia.org/wiki/List_of_router_and_firewall_distributions

I need to dig further, but OPNSense and VyOS seem the most promising options. IPFire looked interesting, but they seem to have a strange beef against Wireguard and, in the security/cryptography space, I definitely don't want to go against the herd.

e: yeah, replaced pfSense with OPNSense. I didn't know they were related, but from a quick googling OPNSense seems to be the consensus.
vvvvvv

If you don't mind the CLI, OpenBSD is really great for this. I have replaced my own pfSense machines with a couple of BSD VMs. Wireguard works really well on it and the firewall is basically one of the nicest I know.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
My CI/CD pipeline for self hosted stuff at home basically consists of the following stages:
* Test stuff: Run terraform to spin up an EC2 instance for testing, run ansible, run docker-compose, run sanity checks, tear down the test instance
* Deploy stuff: Run ansible to install & config stuff, run docker-compose to start my apps

I like the simplicity of it and it seems to work well. I know there's a million different ways to do this stuff, but is there anything compelling I'm missing out on with this way of doing things?

The only thing I'm not a big fan of is the way secrets are managed. Currently they are defined as secret repository variables and get fed into the build when the pipeline runs.

Azhais
Feb 5, 2007
Switchblade Switcharoo
Hashicorp vault is pretty easy to set up and integrate with Ansible
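Something like this in a playbook, assuming the community.hashi_vault collection is installed and Vault is already unsealed — the paths, URL, and variable names here are all made up:

```yaml
# sketch: pull a secret out of Vault at deploy time instead of keeping it
# in CI repository variables (hypothetical paths/names throughout)
- name: Deploy app with a secret from Vault
  hosts: homeserver
  vars:
    db_password: "{{ lookup('community.hashi_vault.hashi_vault',
                     'secret=secret/data/myapp:db_password url=https://vault.example.com:8200') }}"
  tasks:
    - name: Template out the env file that docker-compose reads
      ansible.builtin.template:
        src: app.env.j2
        dest: /opt/config/myapp/.env
        mode: "0600"
```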

Mr Shiny Pants
Nov 12, 2012
Continuing the Ceph talk, how doable would it be to host it on an RPi cluster? Like a couple of OSDs and the like. What would I need and how fault tolerant could I get it?

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Mr Shiny Pants posted:

Continuing the Ceph talk, how doable would it be to host it on an RPi cluster? Like a couple of OSDs and the like. What would I need and how fault tolerant could I get it?

There actually are arm64 Ceph packages these days, so you no longer have to compile the entire system yourself. Once upon a time that would have been your first problem.

So I suppose that leaves the fact that USB is your only mechanism for attaching mass storage. That feels extremely twitchy to me, especially given that you included "fault tolerant" as a requirement, but I suppose there's no reason not to experiment if you want to. Actually, since there are ARM packages available now, I'm sure someone out there has already done it.

The second thing to think about, if you really want to try Ceph on RPis, is the fact that it's a memory hog. When I was managing a cluster professionally, the recommendation was 1GB of RAM for every TB of storage being managed by a machine. Then there's the overhead for monitor, rgw, and/or mds daemons if you don't have dedicated nodes for those.
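To put rough numbers on that rule of thumb for a Pi — the 1GB-per-TB figure is the one quoted above; the headroom for the other daemons is a guess:

```shell
# rule of thumb from above: ~1 GB RAM per TB of OSD storage on a node
OSD_TB=4            # e.g. one 4 TB disk hanging off the Pi
DAEMON_GB=2         # guessed headroom for co-located mon/mgr/etc daemons
NEED_GB=$(( OSD_TB * 1 + DAEMON_GB ))
echo "need roughly ${NEED_GB} GB RAM"
```

An 8GB Pi 4 squeaks by with a small disk; a 20TB drive per node blows well past anything a Pi has.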

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Mr Shiny Pants posted:

Continuing the Ceph talk, how doable would it be to host it on an RPi cluster? Like a couple of OSDs and the like. What would I need and how fault tolerant could I get it?

Jeff Geerling did this in one of his recent videos. Now, he was clustering them on a dedicated board, so I'm not sure how different it would be to cluster them independently, but he gives a decent overview.
https://www.youtube.com/watch?v=ecdm3oA-QdQ

Mr Shiny Pants
Nov 12, 2012

mdxi posted:

There actually are arm64 Ceph packages these days, so you no longer have to compile the entire system yourself. Once upon a time that would have been your first problem.

So I suppose that leaves the fact that USB is your only mechanism for attaching mass storage. That feels extremely twitchy to me, especially given that you included "fault tolerant" as a requirement, but I suppose there's no reason not to experiment if you want to. Actually, since there are ARM packages available now, I'm sure someone out there has already done it.

The second thing to think about, if you really want to try Ceph on RPis, is the fact that it's a memory hog. When I was managing a cluster professionally, the recommendation was 1GB of RAM for every TB of storage being managed by a machine. Then there's the overhead for monitor, rgw, and/or mds daemons if you don't have dedicated nodes for those.

The memory requirement is a bit of a bummer; I would have loved to use some big drives (20TB or something) and have a low-power, high-density system. USB is (still) a problem, but something like a RockPi with a dedicated SATA port could work, no?

Does it actually need this much memory?

Scruff McGruff posted:

Jeff Geerling did this in one of his recent videos. Now, he was clustering them on a dedicated board, so I'm not sure how different it would be to cluster them independently, but he gives a decent overview.
https://www.youtube.com/watch?v=ecdm3oA-QdQ

Thanks, I'll have a look.

fancyclown
Dec 10, 2012

frameset posted:

My boot SSD died last week for the first time since I switched to using docker for my services.

The rebuild experience was so much smoother and better thanks to docker. I set all my containers to write their app data to /opt/config/$service and then I rdiff-backup /opt/config to a backup disk nightly, I do the same with my fstab.

Getting back online was a case of installing ubuntu server, then mounting my backup disk. Then I copied the media disks and parity disk lines from the fstab backup to the new fstab, and rdiff-backup restored the config directories.

Then I ran docker-compose up -d on my backed up docker-compose file and all my services were back online as if they'd not been gone. I can never go back to losing a weekend on configuring all my services and putting config files back into /etc/ all over the place.

This is good poo poo. I would like to back up the config files of my pi and vm to dropbox. Any recommended app I can use for that?

It would be a nightmare to redo Traefik again from scratch on my pi-hole.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
What's everyone using for PDFs these days? I've been using Drawboard PDF on my SurfaceBook forever and it's great but I'm finally replacing the laptop and figured I'd see what's out there since the new machine doesn't have a touchscreen (the main reason I went with Drawboard initially).

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I've been using Syncthing on my phone to upload my Camera folder to my NAS in "Send Only" mode. I copied a bunch of additional photos to this folder that were on my desktop, and now Syncthing on my phone says "Out of Sync" with a button to override changes. I don't want to override the changes though; I just want this folder on my NAS to have all the photos, and for my phone to have only a subset of them. Is there some other way I should be going about this?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Found the answer, syncthing does not support this use case: https://forum.syncthing.net/t/understanding-override-changes-button-can-it-go-when-default-is-send-only/15889

Neslepaks
Sep 3, 2003

FWIW I use Nextcloud to sync photos, and though it has many warts of its own, it's fine for what you describe.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Neslepaks posted:

FWIW I use Nextcloud to sync photos, and though it has many warts of its own, it's fine for what you describe.

Thanks! That was the solution I was going to try next. I looked at Seafile and Resilio Sync as well, the reviews seemed mixed on them.

It's a bummer though, Syncthing works so well. I just wish it had a checkbox to support this particular use case. It still falls short of the Google Drive/Photos experience of being able to browse your files whether or not they are synced locally yet, and of automatically downloading a particular file or photo at the point you open it on the device.

Nextcloud seems pretty heavyweight though, having to run the PHP app, database, and web server.

For the files synced by Nextcloud, do they appear as just normal files on the filesystem of the server? That's what turned me off of Seafile, that it stores your files in a database.

Neslepaks
Sep 3, 2003

Yeah, they're just files and you can browse them in the app or whatever. You can also two-way sync to a PC using the desktop client if needed.

odiv
Jan 12, 2003

Anyone get into PBX? Just installed the Asterisk add-on in home assistant and thinking about getting into a small home phone system.

SamDabbers
May 26, 2003



odiv posted:

Anyone get into PBX? Just installed the Asterisk add-on in home assistant and thinking about getting into a small home phone system.

I tinkered with Asterisk and FreeSWITCH for a while and it was fun to learn about, but I never ended up actually using it since everything is cell-phone-centric these days.

odiv
Jan 12, 2003

My kids are going to start taking the bus home from school in the new year. Trying to keep from buying them phones as long as I can. Probably a losing battle. :)

I have an old android phone that I can put a SIP client on it that they can use to call us on if they need to. There's also the Home Assistant tablet in the hall that now has a SIP card on one of the screens that they could use for calls. Plus knowing the neighbours just in case (two are home right now with small kids, which is nice).

I was also thinking I could maybe use it for an in-home intercom system which would be useful sometimes.

But yeah, probably just easier to just buy a flip phone that lives at home.

odiv fucked around with this message at 18:11 on Nov 22, 2022

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Bruh you should just start planning now to get refurb/used iPhones. They're de rigueur amongst the youths.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

odiv posted:

Anyone get into PBX? Just installed the Asterisk add-on in home assistant and thinking about getting into a small home phone system.

I rolled a PBX using FreePBX with CallCentric as the VoIP provider. I think newer PBX software is simpler to set up and use than FreePBX, with the caveat that most of it is not free for multiple users. If I had to do it all over again, I wouldn't, because like SamDabbers said: cellphones.

odiv
Jan 12, 2003

Yeah, but I might be able to get free surplussed IP phones!

Did you have your own household extension when you were a kid? How cool would that be!?

(yeah, ok ok, not as cool as an iphone)

SamDabbers
May 26, 2003



odiv posted:

Yeah, but I might be able to get free surplussed IP phones!

I found that they take a lot of room on my desk, I strongly prefer a wireless headset to using a handset receiver, and I almost never call anyone on the PSTN anymore.

That said, enjoy the VoIP rabbit hole if you have the itch to go down it :)

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I have a friend that runs a small VoiP phone service provider, mostly targets small/medium businesses because yeah, nobody uses desk phones anymore outside of an office setting.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
Is there any way to self-host a Speedtest instance? Something I can use to check the quality of my connection to my house when I’m not at home?

Tamba
Apr 5, 2010

You could run an iperf server. Your router might actually already include it.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
I use this as a docker container. https://github.com/librespeed/speedtest
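A minimal compose sketch for it — the image name and internal port are assumptions, so check the repo's README for the current ones:

```yaml
# minimal sketch for self-hosting librespeed behind your VPN/tunnel;
# image name and container port here are assumptions -- verify in the README
version: '3.6'
services:
  speedtest:
    image: ghcr.io/librespeed/speedtest:latest
    restart: always
    ports:
      - '8080:80'   # then browse to http://your.host:8080 from the remote end
```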

Tamba
Apr 5, 2010

What's the best practice for having a bunch of docker containers that want a database?
- A DB for each application, that's only accessible from within that stack or
- A single DB with a user for each application

Giving everything its own DB seems cleaner, but I don't know how bad the overhead is?
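For reference, the first option looks roughly like this per stack — image names and credentials are placeholders:

```yaml
# one stack = one app plus its own DB; the DB only joins the stack's
# default network, so nothing outside the stack can reach it
version: '3.6'
services:
  app:
    image: example/myapp:latest   # placeholder image
    environment:
      DATABASE_URL: postgres://myapp:changeme@db:5432/myapp
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: changeme   # placeholder credential
    volumes:
      - ./db-data:/var/lib/postgresql/data
    # note: no "ports:" section, so the DB isn't exposed on the host at all
```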

CopperHound
Feb 14, 2012

I would say optimizing databases is beyond the scope of personal use. Do a DB for each application and avoid accidentally loving up backups or permissions.

Idk if you want to have each instance in its own container or not tho.

kujeger
Feb 19, 2004

OH YES HA HA
I'd strongly recommend one db each, in separate containers. Makes it a lot easier to manage and reason about.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?
I've been setting up separate databases for each service so far, but the manual work involved with upgrading between major version releases (at least with PostgreSQL) makes it pretty annoying when you've got a bunch, so I'm not sure anymore. Started looking into CockroachDB as it seems pretty nice if going for clustering at some point.

SEKCobra
Feb 28, 2011

Hi
:saddowns: Don't look at my site :saddowns:
I'm running FreePBX, have for years, it's a nice thing to have, I have the phones ring when someone rings the doorbell and also run my alarm system through it for free.

BlankSystemDaemon
Mar 13, 2009



Having one database per container may be easier to think about conceptually, but modern database software has many optimizations to try and make the most out of the hardware+software it's running on.

Among other things, this means that each database instance will be fighting the others for memory to use as cache - this will in turn hurt every instance, probably lead to more swapping, and almost certainly mean the things using the databases run slower.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Deployed a GitLab CE (Community Edition - the fully open-source version of GitLab) instance on my server and it's pretty neat. I really like being able to automate the builds for my packages with the CI/CD pipeline like you can with GitHub, and honestly the whole user interface for CI/CD is better than GitHub's.

Getting it to work with an already-deployed reverse proxy was an ENORMOUS PAIN, because GitLab includes its own reverse proxy baked in, you have to figure out which env flags to set to tell it to "shut this poo poo off", and it's not well documented.

If anyone ever wants to do it, I'll save you 8 hours of knob turning with this compose file:

code:
version: '3.6'
services:
  web:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'url.for.gitlab.tld'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://url.for.gitlab.tld'
        nginx['listen_port'] = 80 #make this match the http port (container side) you opened below for the web traffic for gitlab (not for the container registry)
        nginx['listen_https'] = false
        nginx['redirect_http_to_https'] = false 
        registry_external_url 'https://url.for.container-registry.tld'
        gitlab_rails['registry_enabled'] = true
        gitlab_rails['registry_host'] = "url.for.container-registry.tld"
        gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
        registry_nginx['listen_port'] = 5005 #make this match the port you set to open below
        registry_nginx['listen_https'] = false
        registry_nginx['proxy_set_headers'] = { "X-Forwarded-Proto" => "https", "X-Forwarded-Ssl" => "on"}
    ports:
      - '8006:80' #External port can be whatever here, I picked one that works for me
      - '2224:22' #SSH port
      - '5005:5005' #This is the port for the image registry
    volumes:
      - '/DockerAppData/Gitlab/config:/etc/gitlab'
      - '/DockerAppData/Gitlab/logs:/var/log/gitlab'
      - '/DockerAppData/Gitlab/data:/var/opt/gitlab'
    shm_size: '256m'
And then for your reverse proxy make sure you don't set "force ssl" for the container registry since it uses its own encryption/validation rather than HTTPS.

Nitrousoxide fucked around with this message at 15:35 on Dec 29, 2022

spincube
Jan 31, 2006

I spent :10bux: so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.
Grimey Drawer
I've found a nice Android app that can stream music from a few self-hosted applications: https://symfonium.app It's paid, with a full seven-day trial.

In my experience so far it plays really, really well with a Navidrome install, and offers a few choice features on top, like smart playlists and falling back to an offline cache, and I appreciate being able to remove parts of the UI that I'll never use.

CopperHound
Feb 14, 2012

Thanks for sharing, it's pretty cool that I can pull multiple sources together.

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I'm a complete newbie to this, but over the past month or so I've been using an old Macbook Pro as a Plex media server. Getting everything set up was pretty easy, since there are OS native apps for most of what I need on there. But that has me entering more of a rabbit hole (or pihole, in my most current case) of other stuff I can do from my home machines. I managed to get pihole installed in a docker container using docker desktop, but I still haven't been able to get pihole working on ipv6, apparently due to the fact that docker sucks at ipv6(?).

In that process, I've been looking at a ton of documentation and guides that reference Linux to make changes to configurations and such. Translating where /etc/pihole/setupVars.conf is in macOS isn't particularly difficult, but it's time-consuming, and a lot of commands in Linux just aren't there on macOS. I know I can use brew to grab certain things, but the lack of consistency with Linux in terms of filesystem layout means I'm going to have to translate the majority of guides I find, which just adds a lot of time to the configuration and troubleshooting process.

It seems like a foregone conclusion that if I jump into more self-hosted projects, I'm going to need to get a linux system built out. I used to run linux as a desktop OS nearly exclusively back in the early 2000s, so I'm reasonably confident I can get to a point where I have an opinion on systemd relatively quickly. But I have a few questions to help get started:
  • Is there a distro y'all recommend for this? I'm thinking I'd like to set up a NAS, have a media server, sync photos from devices, get pihole working on ipv6, etc. I'm not a sysadmin by trade so I'm not looking for something I'd need to build completely from scratch. Unraid seems like an option but I've only just started looking at it.
  • As far as hardware goes, what should I be looking for? I figure I'm going to want SSDs for the NAS but what about processor and RAM for a machine that does most of what I want to do? How much should I spend for the non-SSD stuff?
  • Is there a world where I can keep the Macbook working as a server for some things, but not all? It's handling Plex etc. like a champ aside from me getting cute with removable SSDs to hold my media, but maybe I could use the NAS for storage and just run the media server off the Macbook? I don't really use it for anything else, so I figure I can just leave it plugged in somewhere. I could try to get a linux install on the Macbook, but that seems hard as balls and I'd have to reconfigure the Plex stuff anyway, so that's probably out.


Resdfru
Jun 4, 2004

I'm a freak on a leash.
Distro really depends on what you prefer. Probably gonna be CentOS or Debian or Ubuntu or Fedora or insert distro here. You're right in that going Linux to do docker is a much better idea than trying to make it do things on windows or mac but beyond that it's what you want to work with imo. I personally just use Ubuntu 22.04

Hardware wise, depends on what you'll be doing. But most of the poo poo people self host is not really hardware demanding and in fact a lot of people run Nucs and the like and are fine. I personally run a 4th generation i5 as my main server and some Intel Nucs.

Definitely keep the Mac if you want. As long as all your stuff can talk to each other you're good. You could also move it to docker (either migrate or fresh install) so you can install Linux on the Mac and have another Linux node to put poo poo on; then that container can go back to being on the Mac if you want.

I'm no expert though
