Warbird
May 23, 2012

America's Favorite Dumbass

Cenodoxus posted:

For system and application monitoring, I use Telegraf feeding to InfluxDB with dashboards and such in Grafana.

For log aggregation I have Graylog being fed from several inputs depending on the source:
  • rsyslog for Linux hosts and network devices
  • Winlogbeat for Windows hosts, using the Beats format and managed by Graylog Sidecar
  • GELF for Docker and Kubernetes workloads, either using fluent-bit on K8s or the native GELF logging driver in the Docker daemon.

Seems interesting. Any good writeups for that stuff or is it largely self explanatory?

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Warbird posted:

Seems interesting. Any good writeups for that stuff or is it largely self explanatory?

For the most part, these are services I've expanded into over the years: fiddled with them and found opportunities to integrate them.

Graylog can be a bit of a bear because it’s built on top of MongoDB and Elasticsearch, so while the setup process is well documented, you can end up going down a rabbit hole. The official docs are still pretty good though.

You might have better luck starting with InfluxDB and Telegraf since the setup is more streamlined. The official docs are very straightforward, but you can also find a lot of blogs and guides elsewhere.
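If it helps, a minimal Telegraf config pointed at InfluxDB v2 looks roughly like this (the org, bucket, and token are placeholders, not my real values):

```shell
# Write a minimal Telegraf config: one InfluxDB v2 output, three basic inputs.
# Everything quoted below is a placeholder - substitute your own values.
cat > telegraf.conf <<'EOF'
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "${INFLUX_TOKEN}"
  organization = "home"
  bucket = "telegraf"

[[inputs.cpu]]
[[inputs.mem]]
[[inputs.disk]]
EOF
# telegraf --config telegraf.conf --test   # one-shot sample run to verify inputs
```

The commented `--test` invocation runs the inputs once and prints what would be written, which is a quick sanity check before running it as a service.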

mawarannahr
May 21, 2019

NetData is very easy to get running, FWIW. I'd even call it "batteries included."

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Looks like UnRAID might be switching to yearly updates subscription model and that would probably be the end of it for me. It's been a good run but I guess it's finally time to move over to full fat Linux since I'm comfortable with it.

Is ZFS ok on Rocky Linux? I prefer RedHat based distros but I'd like to avoid the bleeding edge Fedora.

If I went Debian I'd probably stick to vanilla Debian as I'm not a big fan of Ubuntu. I'm pretty sure ZFS is fine there.

My plan would be to do most of my configuration with Ansible (so it's repeatable in case of a disaster). And use Portainer for docker management.
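For what it's worth, the Ansible side can stay pretty small. A hypothetical sketch (host group, paths, and package name are all assumptions, not my actual playbook):

```yaml
# Hypothetical minimal play: install Docker and push a directory of compose files.
- hosts: nas
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.package:
        name: docker.io          # Debian's package name; differs on other distros
        state: present
    - name: Copy compose stacks
      ansible.builtin.copy:
        src: stacks/
        dest: /opt/stacks/
```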

Kibner
Oct 21, 2008

Acguy Supremacy

Matt Zerella posted:

Looks like UnRAID might be switching to yearly updates subscription model and that would probably be the end of it for me. It's been a good run but I guess it's finally time to move over to full fat Linux since I'm comfortable with it.

Is ZFS ok on Rocky Linux? I prefer RedHat based distros but I'd like to avoid the bleeding edge Fedora.

If I went Debian I'd probably stick to vanilla Debian as I'm not a big fan of Ubuntu. I'm pretty sure ZFS is fine there.

My plan would be to do most of my configuration with Ansible (so it's repeatable in case of a disaster). And use Portainer for docker management.

I'm guessing TrueNAS Scale is not an option? It runs on Debian, iirc.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Kibner posted:

I'm guessing TrueNAS Scale is not an option? It runs on Debian, iirc.

No, I hate how they use K8s for running containers. And I don't want to run a VM for dockers.

Tamba
Apr 5, 2010

Another option would be Openmediavault. It's Debian as well, and it used to include Portainer as the recommended way to use Docker, but they've gotten rid of that in favor of their own webui for docker compose.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I use OpenMediaVault inside a Proxmox VM. Mostly for legacy reasons though. I started out using OMV on bare metal and heavily utilized the webui. I migrated it to a VM to make backing up/restoring way easier and barely use the webui now. I have a script that updates all my containers and OMV via ssh and I manage the containers via a git repo with compose files.

As a starting point though, OMV is a pretty easy distro to use. It's an easy recommend for newbies.
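A hedged sketch of what that kind of update-over-ssh script can look like. The hostname, paths, and compose layout below are assumptions, not my actual script:

```shell
# Pull new images and restart stacks over ssh, then update the OS via apt
# (OMV rides on Debian). Host and paths are placeholders.
update_host() {
  ssh "root@$1" '
    cd /opt/stacks &&
    git pull --ff-only &&
    for f in */compose.yml; do
      docker compose -f "$f" pull && docker compose -f "$f" up -d
    done &&
    apt-get update && apt-get -y dist-upgrade
  '
}
# update_host omv.local
```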

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
I still can’t or like haven’t gotten around to wrapping my head around git for this stuff. I keep my compose files backed up and they just live in the root of their stack’s directory.

Resdfru
Jun 4, 2004

I'm a freak on a leash.
I have a hacky method. I have a git repo for my compose files and I had a self hosted runner container. When I pushed /merged to main it would kick off github actions which just ran compose up.

I keep all my compose files in 1 directory so I use a command to run compose up on all the files

My runner token expired and I never bothered to generate a new one though. It was useful when I was making a lot of changes but I rarely touch anything anymore so when I do I push to git and then ssh in and pull and run compose up on the container I edited.

I have a variety of tasks on my kanboard that I wanna do to do things better but finding time is always the issue
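The "one command over all the files" bit can be as simple as a glob loop. A sketch, with a hypothetical ~/stacks directory:

```shell
# Bring up every stack in a single directory of compose files.
# The ~/stacks path is hypothetical: point it wherever your files live.
compose_up_all() {
  dir="${1:-$HOME/stacks}"
  for f in "$dir"/*.yml; do
    [ -e "$f" ] || continue            # skip the literal glob if nothing matches
    docker compose -f "$f" up -d
  done
}
compose_up_all    # e.g. right after a git pull of the compose repo
```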

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
I'm still a few steps behind you all. I don't even know where to find my docker compose files on either Unraid or Portainer. It's all abstracted away in the GUI. I have my appdata and docker files all backed up so can get there, but not sure if the items I'm finding are the docker compose files, or the XML that Unraid works with.

I feel like I'm super close to having the full loop on basic Docker stuff closed once I get my head around this issue and then can do advanced poo poo like have all my docker files located elsewhere and in a press of a button have all my dockers fire right back up on a new instance.

That goal in the above paragraph always confuses me, though, since these containers all have stuff sitting in AppData that they need... so if I don't have that backed up or waiting for the container, how is it so turnkey? Then I get right back to the start and my understanding falls apart.

Resdfru
Jun 4, 2004

I'm a freak on a leash.
I don't use unraid or portainer so I could be way off here but

1. If what you're looking at is a compose file, it looks similar to this. Compose is a standardized YAML format; if it's not in this general format, docker won't do what it should. But Portainer or Unraid could be wrapping this in something else, I suppose.
https://docs.linuxserver.io/general/docker-compose/#v1x-compatibility

code:

version: "2.1"
services:
  heimdall:
    image: linuxserver/heimdall
    container_name: heimdall
    volumes:
      - /home/user/appdata/heimdall:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped

2. Your container's compose file will say which volumes it uses to store data. If those locations are backed up, then whatever the container is doing is backed up for the most part. If there is no volume attached, then the container is likely ephemeral.

Now maybe unraid or portainer changes things and it stores that stuff in some sort of app data? I guess they could be keeping all your volumes in the same place.
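One way to check what a running container actually mounts ("heimdall" here is just the container from the example compose file above):

```shell
# Print "source -> destination" for each mount of a running container.
inspect_mounts() {
  docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' "$1"
}
inspect_mounts heimdall 2>/dev/null || echo "docker not running here"
```

Whatever shows up on the left side of the arrows is what needs to be in your backups.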

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Disaster avoided. Looks like current UnRAID licenses are unaffected by the new keys.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



TraderStav posted:

I'm still a few steps behind you all. I don't even know where to find my docker compose files on either Unraid or Portainer. It's all abstracted away in the GUI. I have my appdata and docker files all backed up so can get there, but not sure if the items I'm finding are the docker compose files, or the XML that Unraid works with.

I feel like I'm super close to having the full loop on basic Docker stuff closed once I get my head around this issue and then can do advanced poo poo like have all my docker files located elsewhere and in a press of a button have all my dockers fire right back up on a new instance.

That goal in the above paragraph always confuses me, though, since these containers all have stuff sitting in AppData that they need... so if I don't have that backed up or waiting for the container, how is it so turnkey? Then I get right back to the start and my understanding falls apart.

You can grab the compose files out of portainer by going to settings and then back up Portainer. They should all just be in a zip file you can unzip. It won't be organized with any descriptions for the various compose files so you'll need to go into each one and figure out which service it's for. But assuming you don't have a ton it shouldn't take too long.
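To speed up the figuring-out step, you can grep the unzipped backup for image and container names. The backup filename and layout here are assumptions; check what's actually inside the zip Portainer gives you:

```shell
# Match each anonymous compose file to its service by grepping for
# image/container names across the unzipped backup directory.
# unzip -o portainer-backup.zip -d portainer-backup
list_stack_images() {
  grep -r --include='*.yml' -e 'image:' -e 'container_name:' "$1" 2>/dev/null
}
list_stack_images portainer-backup || true
```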

TransatlanticFoe
Mar 1, 2003

Hell Gem

TraderStav posted:

I'm still a few steps behind you all. I don't even know where to find my docker compose files on either Unraid or Portainer. It's all abstracted away in the GUI.

When I was using stacks, all of my compose files were under the compose/ directory in the mapped volume. They’re each under separate numbered folders by id, you can see which is which by mousing over in Portainer and checking the URL. Once I figured it out I had them symlinked in a separate folder with easier names.
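Once you've matched stack IDs to services, the friendlier symlinks are one line each (the IDs, paths, and names here are all made up):

```shell
# Symlink Portainer's numbered stack folders under human-readable names.
mkdir -p ~/stacks
ln -sfn /data/compose/12 ~/stacks/jellyfin
ln -sfn /data/compose/13 ~/stacks/uptime-kuma
```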

hellotoothpaste
Dec 21, 2006

I dare you to call it a perm again..

So there’s a lot of talk about UniFi here, and my house came with a pair of APs pre-installed with no documentation. The circle-y ones with the LED ring in the center, they route back to a cable cabinet/some sort of PoE adapters plugged into the wall. I used to be a network guy but have literally no time to dig into how to access these things once they’re plugged in (I’ve left them off the network until I know how to admin them).

Any tips from the thread, or a better thread to ask in, etc. regarding these? They’d be fed by gigabit fiber, which right now I have plugged into one of those edgelord gaming routers I picked up during some VR work… which is fine. Any helpful resources on where to start would be great, as I have a feeling there’s a “smart home” way to deal with this poo poo through an app or whatever.

Cool thread, thanks for creating it

Edit: I’m also open to ‘let me google that for you’ burns since honestly I’m being lazy about it right now.

Edit edit: I’m guessing this is the one, same adapter etc. so at least I have a manual for now: https://dl.ubnt.com/guides/UniFi/UniFi_AP_QSG.pdf

Idiot edit: Mine are currently lit blue which doesn’t seem to be a status for the one in the manual. Anyway, it looks exactly the same but definitely consumer-level.

Dumbass edit: It’s UAP-AC-LITE and blue means the network is up somehow, so I guess I really am going to stupidly dive into this late night. Fml

hellotoothpaste fucked around with this message at 09:01 on Feb 21, 2024

Aware
Nov 18, 2003
I got no love for Ubiquiti poo poo but I think you'll probably need to pull them down and hit the reset button. Not sure you can adopt them to a UniFi controller (yeah you'll need that too) if they're already set up elsewhere/aren't factory default.

Resdfru
Jun 4, 2004

I'm a freak on a leash.
You can run APs without a controller, but I'm not sure which features, if any, would be unavailable.

You will probably have to reset them but it's possible they are using default creds

https://lazyadmin.nl/home-network/setup-unifi-ap-without-controller/

As this is the self hosting thread, you can easily self host the unifi controller in docker.
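A hedged sketch of what that looks like, using one commonly used community image (jacobalberty/unifi); double-check the tag, ports, and volume path against that image's docs:

```yaml
services:
  unifi:
    image: jacobalberty/unifi:latest
    container_name: unifi
    ports:
      - 8080:8080   # device inform
      - 8443:8443   # web UI
    volumes:
      - ./unifi:/unifi
    restart: unless-stopped
```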

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
You don't need to keep the controller running; once it's set up you can turn it off and the wifi will continue to work fully. Just turn the controller software back on every few months for updates or when you need to make changes. You can install it in docker, or just on your normal computer.

Having it run somewhere 24/7, you can get logs and statistics and things, check what devices are connected, etc.

And I think you will need to go and reset them with a paper clip or pin in order to be able to adopt them.

unknown
Nov 16, 2002
Ain't got no stinking title yet!


Provided they are not EOL (end of life), you can configure them with the UniFi app and not set up a controller. You get about 90% of the functionality/configuration options with the app.

Motronic
Nov 6, 2009

THF13 posted:

You don't need to keep the controller running; once it's set up you can turn it off and the wifi will continue to work fully.

"Fully" meaning full if you don't use a guest network or expect advanced/assisted handoff between APs.

Dyscrasia
Jun 23, 2003
Give Me Hamms Premium Draft or Give Me DEATH!!!!

mawarannahr posted:

NetData is very easy to get running, FWIW. I'd even call it "batteries included."

Netdata works great for home servers. The docker compose file works out of the box to get all the system stats. I've not tried to get it running with podman on my Fedora system as a child of my Ubuntu server's parent yet.
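For reference, that compose file is roughly this shape; worth checking Netdata's current docs, since the recommended mounts shift between releases:

```yaml
services:
  netdata:
    image: netdata/netdata
    container_name: netdata
    ports:
      - 19999:19999
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /etc/os-release:/host/etc/os-release:ro
```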

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Netdata is good. I ran into a weird issue with Netdata in docker-compose, but my use case was probably a little odd to start with.

If you follow the official instructions, they have you set the container's hostname equal to the FQDN of your host. I also had some other containers for various services that were hosted under the host's FQDN via Traefik routes. One of those containers was Uptime Kuma, trying to probe those services underneath the host's FQDN. So when I ran Netdata, Docker's internal DNS fuckery meant that all those DNS queries from Kuma for my host's FQDN were getting resolved to the Netdata container.

It broke my monitoring for a little while until I figured it out. It wasn't a huge deal - I just commented out the hostname line, but that also meant that my Netdata instance saw the hostname as whatever the container ID happened to be, rather than the name of the host.
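In compose terms the workaround was just this (the FQDN is a placeholder):

```yaml
services:
  netdata:
    image: netdata/netdata
    # hostname: host.example.com   # official docs suggest the host's FQDN here;
    #                              # commented out to dodge the internal-DNS collision
```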

Cenodoxus fucked around with this message at 01:37 on Feb 22, 2024

cruft
Oct 25, 2007

I liked netdata a whole lot too. It was less resource intensive than Grafana/Prometheus, and even had some information on how to take that load down further. At the end of the day, though, it was still taking something close to 8% of the entire CPU on the RPi4, which is what pushed me to write my own thing displaying CPU use only when asked, and going back to the way I did system admin on the Sun 4a labs in the 90s.

But for the casual homelab, I feel like netdata is probably the smart place to start. It has excellent documentation on everything it's collecting and why it's important, and is much simpler to set up than Grafana/Promethus/Exporters, which, frankly, are a pretty daunting task.

:siren: I am running a Raspberry Pi. I realize this is unusual. I am not asking for assistance.

BlankSystemDaemon
Mar 13, 2009



Netdata is pretty neat, but loving hell I wish it was more granular - then again, that'd also increase the probe effect that cruft was just talking about above.

I want someone to do what Sun FishWorks did, which you can see demonstrated here:
https://www.youtube.com/watch?v=tDacjrSCeq4

A good setup of Grafana and Prometheus can kinda get you close to that, but the probe effect is much higher than it would be just using dtrace.

mawarannahr
May 21, 2019

BlankSystemDaemon posted:

Netdata is pretty neat, but loving hell I wish it was more granular - then again, that'd also increase the probe effect that cruft was just talking about above.

I want someone to do what Sun FishWorks did, which you can see demonstrated here:
https://www.youtube.com/watch?v=tDacjrSCeq4

A good setup of Grafana and Prometheus can kinda get you close to that, but the probe effect is much higher than it would be just using dtrace.

Yeah, I was actually trying to do something to log resource usage of a few specific processes (and ideally their threads) and I couldn't find a way to do it by PID. Ended up polling ps in a while-true loop and parsing the output... surely there has got to be a better way in 2024?
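For the record, the poll-ps approach boiled down to something like this (the PID, sample count, and interval are placeholders):

```shell
# Sample one PID's CPU/memory usage N times at a fixed interval.
# Trailing "=" after each column suppresses the ps header line.
sample_pid() {
  pid="$1"; n="${2:-3}"; interval="${3:-5}"
  i=0
  while [ "$i" -lt "$n" ]; do
    ps -o pid=,pcpu=,pmem=,rss=,comm= -p "$pid"
    i=$((i + 1))
    sleep "$interval"
  done
}
sample_pid $$ 2 0     # two samples of the current shell, no delay
# per-thread on Linux: ps -L -o tid=,pcpu=,comm= -p "$pid"
```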

BlankSystemDaemon
Mar 13, 2009



mawarannahr posted:

Yeah, I was actually trying to do something to log resource usage of a few specific processes (and ideally their threads) and I couldn't find a way to do it by PID. Ended up polling ps in a while-true loop and parsing the output... surely there has got to be a better way in 2024?
Unfortunately not, and as you've discovered, even ps has a large probe effect.

I still maintain that the only solution is dtrace, or something with a similar design.
And a packet filter with extensions doesn't count.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Is Traefik my only option for using Google OAUTH easily to protect and unify the logins for my apps?

I'm currently using SWAG with Authelia but I really hate that I have to run a database and Redis just for SSO. This is all internal protection and learning.

Anyway it's pretty easy to turn Authelia on for SWAG but it looks like that's your only option aside from LDAP or basic?

cruft
Oct 25, 2007

Matt Zerella posted:

Is Traefik my only option for using Google OAUTH easily to protect and unify the logins for my apps?

I'm currently using SWAG with Authelia but I really hate that I have to run a database and Redis just for SSO. This is all internal protection and learning.

Anyway it's pretty easy to turn Authelia on for SWAG but it looks like that's your only option aside from LDAP or basic?

I just set up OAuth2-Proxy using my Gitea server as an IdP, but Google is the default. Works great, no database required; it's essentially the architecture I came up with for my (crappier) SimpleAuth, in that it encrypts the persistent data for itself and tells the browser to remember it. There are things you can't do this way, like revoke issued tokens, but for a homelab it's a solid option.

I'm using Caddy, which isn't well-documented for OAuth2-Proxy. Traefik is. Give it a whirl, I think you're going to enjoy it.

e: oh, I looked up SWAG. That's the thing that overlays nginx to make it as simple as Traefik and Caddy for TLS. OAuth2-Proxy provides nginx instructions, too, assuming SWAG exposes nginx config files...

If you're already thinking about using Traefik, I'd like to suggest looking at Caddy, also. I used Traefik for a while, and the TLS stuff was dreamy, but the configuration was just bonkers confusing to me: I was spending so much time debugging my Traefik config. Caddy gave me the auto-TLS, and moved back to "edit a configuration file" which hurt my brain less.
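For reference, a minimal oauth2-proxy config for the Google provider looks roughly like this; every value below is a placeholder:

```
provider = "google"
client_id = "YOUR_CLIENT_ID.apps.googleusercontent.com"
client_secret = "YOUR_CLIENT_SECRET"
cookie_secret = "GENERATE_A_RANDOM_32_BYTE_VALUE"
email_domains = [ "example.com" ]
http_address = "127.0.0.1:4180"
upstreams = [ "http://127.0.0.1:8080/" ]
```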

cruft fucked around with this message at 23:44 on Feb 25, 2024

cruft
Oct 25, 2007

I had a long effortpost about various FLOSS file hosting solutions and why they're all awful. It was too long. I'll put it on my blog or something.

But the short version is: I spent all weekend trying out various things and it's just about as bad as it was 20 years ago when I threw my hands in the air and told everybody to just use Google.

NextCloud is probably the best candidate, but I refuse to run it for multiple reasons, not the least of which is that I can't trust PHP with my tax records.

OwnCloud Infinite Scale is easy, low-resource, and super quick. It doesn't want an external database. I got it going in about 10 minutes. But it uses the filesystem in a screwy way that nothing else can work with. They're clearly targeting Enterprise customers. Which makes sense, really: Enterprise customers don't typically need PhotoPrism to see their files, and they also are willing to pay money for software. They might even appreciate that OCIS can interact natively with Ceph. Not CephFS: just plain old Rados Ceph. I know I would have found that appealing at my previous job. That is not appealing at home. You won't be getting my $0/month and feature requests for family photo albums, OwnCloud GmbH.

At the end of the weekend, I'm back to dufs, which provides WebDAV and a usable (but not awesome) user interface in the browser. I can front-end it with Caddy/OAuth2-Proxy for SSO. Maybe I'll create an SFTP user for the Android devices to run PhotoSync (WebDAV is now protected by MFA OIDC, and PhotoSync can't cope with that).

Other things I looked at but decided were worse than a WebDAV server with built-in JavaScript client: SeaFile, sftpgo, CozyCloud, FileStash, MyDrive, webdav-drive.

cruft fucked around with this message at 00:00 on Feb 26, 2024

Motronic
Nov 6, 2009

cruft posted:

NextCloud is probably the best candidate

This is the saddest true statement I've read in a while.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe
I loved that effort post! I had intended to read it later

cruft
Oct 25, 2007

tuyop posted:

I loved that effort post! I had intended to read it later

Okay, I put it in the hard mode thread. https://forums.somethingawful.com/showthread.php?threadid=4051802&pagenumber=1#post538003958

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Matt Zerella posted:

I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

cruft posted:

NextCloud is probably the best candidate, but I refuse to run it for multiple reasons, not the least of which is that I can't trust PHP with my tax records.

What's the attack vector you are worried about?

cruft
Oct 25, 2007

Matt Zerella posted:

I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here.

I was going to take it down anyway, it was too much. Glad you enjoyed the in-depth crap!


fletcher posted:

What's the attack vector you are worried about?

I do security research, and let's just say PHP (the interpreter) has not impressed me.

For instance, with NextCloud, in order to prevent Apache from serving up your configuration file, complete with database password etc, you have to provide a path exclusion in .htaccess. That's not a NextCloud architecture issue, per se: it's a design PHP encourages.

To answer your specific question: I worry about script kiddies exploiting one of the abundant opportunities for security vulnerabilities to creep into the code due to architectural decisions made years ago by PHP developers. A lot of this would be mitigated by disallowing all access until authenticated, but hiding the whole application behind a gateway would limit functionality.
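To illustrate, the exclusion amounts to something like this at the Apache level (NextCloud ships it as rewrite rules in its bundled .htaccess; the path here is an assumption):

```
<Directory "/var/www/nextcloud/config">
    Require all denied
</Directory>
```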

cruft fucked around with this message at 03:15 on Feb 26, 2024

odiv
Jan 12, 2003

I use nextcloud and it's fine (yes, could be better), but I VPN in to access everything at home, so I'm less concerned security-wise.

Internet Explorer
Jun 1, 2005







Matt Zerella posted:

I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here.

Yeah just to be clear you absolutely do not need to do this. It might make sense to put a little disclaimer/FYI at the top of your "hard mode" posts, but that's more just avoiding unnecessary misunderstanding than anything.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt


I went through a similar journey and tried many of the same things. I'm also running on low-power hardware, a Pi4 with 4GB and a USB HDD (though my desktop can also run services if needed via Nomad), and Nextcloud was getting too clunky.

I eventually ended up with rclone running "serve sftp" in a `restrict,command` SSH key, so it's not even running when a client isn't connected and my clients don't have unrestricted shell access.
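For anyone curious, that authorized_keys entry looks roughly like this (key and path are placeholders; rclone's `serve sftp --stdio` exists for exactly this use):

```
restrict,command="rclone serve sftp --stdio ./files" ssh-ed25519 AAAAC3...example client@phone
```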

On the desktop, I browse files with KDE Dolphin and sync with rclone. On Android, I do both with RoundSync (which is literally rclone in app form).

I use PiGallery2 for photo and video albums; it's indeed as blisteringly fast as it claims. Since it's just a webapp/PWA, I added basic auth with Caddy so I don't need to trust the authors' security. It lacks any ML-based autotagging capabilities like Immich etc., but my desktop has an LLM-capable GPU, and that's a task I would like to eventually automate separately that way.

edit: I didn't stick with it for reasons I now can't remember, but you may want to take a look at KaraDAV. It's fast and puts a real effort into being compatible with Nextcloud client apps. Main downside is it's a small project by a small company, so you may want to be careful with security.

NihilCredo fucked around with this message at 10:26 on Feb 26, 2024
