Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.
don't expose your plex instance to the internet folks

https://arstechnica.com/information...orporate-vault/

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

BedBuglet posted:

I assume you plan to map /dev/(bus|ttyUSB0) into the container? There's cturra, but honestly I would consider rolling your own. It is simple enough to make an ntp server container. Here's a guide if you want.
https://blog.oddbit.com/post/2015-10-09-running-ntp-in-a-container/

yeah that’s the plan, I’ll give that guide a try
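for reference, the device pass-through bit would look something like this - the cturra image name comes from the quote above, but the /dev/ttyUSB0 path, the port, and the SYS_TIME capability are guesses on my part, so treat it as an untested sketch:

code:

# pass the GPS/serial receiver through to the container (device path is an assumption)
docker run -d --name ntp \
  --device /dev/ttyUSB0:/dev/ttyUSB0 \
  --cap-add SYS_TIME \
  -p 123:123/udp \
  cturra/ntp

# if the image is chrony-based, this should show whether it sees a time source
docker exec ntp chronyc sources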

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Heck Yes! Loam! posted:

don't expose your plex instance to the internet folks

https://arstechnica.com/information...orporate-vault/

I’m not a target of a state actor, I’m fine

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Heck Yes! Loam! posted:

don't expose your plex instance to the internet folks

https://arstechnica.com/information...orporate-vault/

If I'm reading this right either the Plex server was used as a jump box or the person who got hacked shared a password?

Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.

Matt Zerella posted:

If I'm reading this right either the Plex server was used as a jump box or the person who got hacked shared a password?

it doesn't say specifically, but you can read between the lines and assume this person shared passwords between their home functions and work functions or had their work stuff stored in their personal lastpass. My assumption would be:

Plex hacked to get control of local machine > keylogger on plex machine > capture passwords to personal LastPass > Personal LastPass provided access to professional resources.

or

Plex hacked to get control of local machine > keylogger on plex machine > idiot used shared passwords and no mfa > access to professional resources.

Motronic
Nov 6, 2009

e.pilot posted:

I’m not a target of a state actor, I’m fine

Yep, this.

I also don't keep my passwords on someone else's computer. And my Plex server is in a DMZ, not on my own desktop or even in the same subnet.

cruft
Oct 25, 2007

Heck Yes! Loam! posted:

don't expose your plex instance to the internet folks

https://arstechnica.com/information...orporate-vault/

:eek:


e.pilot posted:

I’m not a target of a state actor, I’m fine

How were you able to determine this? I might be, I just don't know.

I wonder if there's a way to convince Jellyfin to work with the HTTP Basic auth that I have in front of literally everything else...
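For what it's worth, nginx can bolt Basic auth onto anything it proxies; whether the Jellyfin apps tolerate it is the open question. A rough sketch, with the hostname, cert paths, and htpasswd file all placeholders:

code:

# /etc/nginx/conf.d/jellyfin.conf - untested sketch, names and paths are placeholders
server {
    listen 443 ssl;
    server_name media.example.com;

    ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;

    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/htpasswd;   # create with: htpasswd -c /etc/nginx/htpasswd someuser

    location / {
        proxy_pass http://127.0.0.1:8096;       # Jellyfin's default HTTP port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}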

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Plex having an exploitable RCE is the least :stonk: thing about this incident.

quote:

Already smarting from a breach that put partially encrypted login data into a threat actor’s hands, LastPass on Monday said that the same attacker hacked an employee’s home computer and obtained a decrypted vault available to only a handful of company developers.
...
“The threat actor was able to capture the employee’s master password as it was entered, after the employee authenticated with MFA, and gain access to the DevOps engineer’s LastPass corporate vault.”

An employee with privileged access to customer information put their corporate LastPass vault on their home PC. This same vault held the access keys to their main S3 buckets that store encrypted customer data.

I don't know why nerds insist on letting work stuff anywhere near their personal computers. You do that and you might as well slap an asset tag on it, because they can fire you and seize that poo poo at a moment's notice.

cruft
Oct 25, 2007

Cenodoxus posted:

Plex having an exploitable RCE is the least :stonk: thing about this incident.

An employee with privileged access to customer information put their corporate LastPass vault on their home PC. This same vault held the access keys to their main S3 buckets that store encrypted customer data.

I don't know why nerds insist on letting work stuff anywhere near their personal computers. You do that and you might as well slap an asset tag on it, because they can fire you and seize that poo poo at a moment's notice.

BYOD, baby.

Seems like this must have been the Plex client?

corgski
Feb 6, 2007

Silly goose, you're here forever.

Cenodoxus posted:

Plex having an exploitable RCE is the least :stonk: thing about this incident.

At this point the plex client is all electron so it would be more :stonk: if it didn't have an assload of RCE vulnerabilities.

BlankSystemDaemon
Mar 13, 2009



"I'm fine because I'm not important" only works right up until it's no longer a tool of APTs - once an exploit gets to the level of script kiddies who run it against /0, you're just as hosed as someone getting targeted by an APT.

cruft
Oct 25, 2007

BlankSystemDaemon posted:

"I'm fine because I'm not important" only works right up until it's no longer a tool of APTs - once an exploit gets to the level of script kiddies who run it against /0, you're just as hosed as someone getting targeted by an APT.

It is a naïve take for sure, and in my experience as a professional computer security educator, it tends to lead people into overly risky decisions.

BlankSystemDaemon
Mar 13, 2009



cruft posted:

It is a naïve take for sure, and in my experience as a professional computer security educator, it tends to lead people into overly risky decisions.

Absolutely, and the worst part is, the time between "APT tool" and "script kiddie tool" is getting smaller and smaller.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
I’m also running it in a container on its own vlan through nginx, so meh

anything the internet can touch carries some risk, all you can do is try to limit said risk

cruft
Oct 25, 2007

e.pilot posted:

I’m also running it in a container on its own vlan through nginx, so meh

anything the internet can touch carries some risk, all you can do is try to limit said risk

Right, so let's talk about what else we can do to try and limit risk! Running it in a container is something you've done. How restricted is this container? Does it have write permissions to any important files? How often do you apply software upgrades? (Plex, to its credit, does a good job nagging you about needing to upgrade.)

Let's pretend an attacker were able to get a local shell inside your Plex container. What could they do with that? They could ruin your Plex state, for starters; there probably isn't much you can do to defend against that one other than scheduling regular backups so you can get back quickly. Could they also delete your library? Could they launch an attack against other machines on your network?

Now let's say there's some second script they could run to become root in that container. We call this "privilege escalation". What would that allow them to do?

What if they're able to break out of the container and access things in the host operating system?

I'm using you (e.pilot) as a sort of strawman for my own thinking here. I have to admit I haven't done much more than what you've outlined, and I'm increasingly worried that this exposes me way more than I'd like. Especially since my career makes me a target for grey and black hats.
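To make that concrete, here's the sort of lockdown I have in mind - a sketch using the official plexinc/pms-docker image with made-up host paths. Plex in particular may need some of these relaxed, so treat it as a starting point, not a recipe:

code:

# hardening sketch: immutable root fs, no capabilities, no setuid escalation,
# writable config only, library mounted read-only (host paths are placeholders)
docker run -d --name plex \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -v /srv/plex/config:/config \
  -v /srv/media:/data:ro \
  -p 32400:32400 \
  plexinc/pms-docker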

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
I just have mine behind Tailscale, but I’m also thinking it’s time to reduce what the machine housing the media suite can do. It’s a Mac laptop that used to be a work machine, so there’s some baggage there to take care of.

Beyond just ditching a bunch of apps and removing browser addons, maybe using ufw on my other network machines to block incoming ssh from its local IP? Not sure that’s enough.

I’m on an asus router so I can’t easily vlan the thing.
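The ufw part at least is one rule per machine; assuming the media laptop sits at 192.168.1.50 (placeholder address), something like:

code:

# on each of the other machines: refuse ssh coming from the media box
sudo ufw deny proto tcp from 192.168.1.50 to any port 22
sudo ufw status numbered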

Warbird
May 23, 2012

America's Favorite Dumbass

Misread that as ‘/dev/titty’ and now I’m surprised that the self hosting/Linux world hasn’t debased itself in the usual manner by now.

Surely there is a really good CLI utility named butts or whatever.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

cruft posted:

Right, so let's talk about what else we can do to try and limit risk! Running it in a container is something you've done. How restricted is this container? Does it have write permissions to any important files? How often do you apply software upgrades? (Plex, to its credit, does a good job nagging you about needing to upgrade.)

Let's pretend an attacker were able to get a local shell inside your Plex container. What could they do with that? They could ruin your Plex state, for starters; there probably isn't much you can do to defend against that one other than scheduling regular backups so you can get back quickly. Could they also delete your library? Could they launch an attack against other machines on your network?

Now let's say there's some second script they could run to become root in that container. We call this "privilege escalation". What would that allow them to do?

What if they're able to break out of the container and access things in the host operating system?

I'm using you (e.pilot) as a sort of strawman for my own thinking here. I have to admit I haven't done much more than what you've outlined, and I'm increasingly worried that this exposes me way more than I'd like. Especially since my career makes me a target for grey and black hats.

my plex library is the only thing it has access to on the machine itself, it backs up nightly through means separate from plex to a machine that lives outside of my house in the shed in my yard, I could set it to read only but that’d break the dvr functionality, it’s also entirely :filez: and dvr content, so even worst case it’s not like I’d be losing anything irreplaceable

root level access inside the container wouldn’t gain much if anything, save for deleting and messing with my plex library and files

breaking out of the container would potentially be real bad, but there’s still a lot to get through to get to that point

only one port is open on the firewall
it’s on its own VLAN
it only goes to nginx
nginx only points at plex (and my mastodon instance, which is a much bigger threat vector than plex, but I digress)
docker segregates everything from the host os

again there’s no perfect security, but there are steps you can take to put a lot more layers between an attacker and your stuff, compared to just rawdogging a bare os plex install on your main PC to the internet

you could also just keep everything closed off on your network and only access it via wireguard, but you wouldn’t be able to share with anyone unless you used plex relay which limits to 720p

or if your friends are tech savvy, create a VPN specifically for plex guest access and nothing else, and give them access to that for plex

e: passwords are also all randomly generated or 15+ character pass phrases
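the guest VPN thing is just a normal wireguard peer with its routing pinned down, something like this for the friend's client config (keys, endpoint, and addresses are all placeholders) - actually enforcing "plex only" on the server side still takes a firewall rule on the wg interface, e.g. allow 32400 and drop the rest:

code:

# friend's wg-quick client config - keys, endpoint, and addresses are placeholders
[Interface]
Address = 10.8.0.2/32
PrivateKey = <friend-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# only route traffic for the plex host through the tunnel
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25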

e.pilot fucked around with this message at 19:17 on Mar 1, 2023

cruft
Oct 25, 2007

e.pilot posted:

you could also just keep everything closed off on your network and only access it via wireguard, but you wouldn’t be able to share with anyone unless you used plex relay which limits to 720p

Aye, there's the rub.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and rely on an app's ability to switch user?

For example with mumble, I use a letsencrypt cert via certbot, which needs privileges to:

quote:

This includes Certbot’s --work-dir, --logs-dir, and --config-dir. By default these are /var/lib/letsencrypt, /var/log/letsencrypt, and /etc/letsencrypt respectively.

Mumble can switch users on startup with the uname config parameter. Is that sufficient? Similar for nginx with the user directive.

BlankSystemDaemon
Mar 13, 2009



fletcher posted:

For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and rely on an app's ability to switch user?

For example with mumble, I use a letsencrypt cert via certbot, which needs privileges to:

Mumble can switch users on startup with the uname config parameter. Is that sufficient? Similar for nginx with the user directive.

Since mumble isn't binding to a privileged port, it's probably doing raw socket access - so yes, you should absolutely use its facility to drop privileges, instead of letting it run as root.

If it needs access to letsencrypt, you need to add the mumble user to the letsencrypt group, and set the permissions on the letsencrypt folders properly.

EDIT: Some background on this:
The only possible reason to run something as root is if you're binding on a privileged port, which is 0 through 1023, or if you're using raw socket access.
The rest of the 16-bit range is divided into a few unofficial segments, with 1024 through 49151 being known as user ports, and 49152 to 65535 being known as dynamic/private ports.

User ports are expected to be used by user processes (ie. processes not run as root) and dynamic/private ports are generally meant to be things that haven't been assigned by IANA. Most Unix-likes also include parts of this list under /etc/services (which in turn gets used by netstat and other utilities to convert port numbers into human-readable protocols).
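Concretely, that group dance might look something like this - the group name and the mumble service user are assumptions, and distro packaging differs:

code:

# share the cert directories with a dedicated group (names are assumptions)
sudo groupadd letsencrypt
sudo usermod -aG letsencrypt mumble-server
sudo chgrp -R letsencrypt /etc/letsencrypt/live /etc/letsencrypt/archive
sudo chmod -R g+rX /etc/letsencrypt/live /etc/letsencrypt/archive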

BlankSystemDaemon fucked around with this message at 21:21 on Mar 2, 2023

cruft
Oct 25, 2007

fletcher posted:

For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and rely on an app's ability to switch user?

I would consider that good enough for your homelab. Like, most apps (not all) drop root super early in the process, after opening whatever privileged doodads they need root for.

The need to start anything as root should be reduced quite a bit by providing capabilities to the process when it starts, if you want to go next level.
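With systemd that's a couple of lines in a unit or drop-in; the service and user names below are placeholders:

code:

# /etc/systemd/system/myservice.service.d/caps.conf - placeholder names
[Service]
User=myservice
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes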

corgski
Feb 6, 2007

Silly goose, you're here forever.

fletcher posted:

For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and rely on an app's ability to switch user?

For example with mumble, I use a letsencrypt cert via certbot, which needs privileges to:

Mumble can switch users on startup with the uname config parameter. Is that sufficient? Similar for nginx with the user directive.

Certbot runs independently of murmur as a cron job (e: or systemd timer); the only interaction it would have with murmur is restarting the systemd service to load the new certificates.

But yes having it drop privileges on startup using its own built in method is a fine way of getting it to run as an unprivileged user.
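certbot can even handle that restart itself via a deploy hook; the unit name below is an assumption, so substitute whatever your distro calls murmur's service:

code:

# restart murmur only when a cert actually renews (unit name is an assumption)
sudo certbot renew --deploy-hook "systemctl restart mumble-server"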

corgski fucked around with this message at 21:38 on Mar 2, 2023

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Most stuff could be run as something less than root, especially with port mapping in docker. No reason you couldn't map even your reverse proxy to 8080:80 and 4443:443 (or something). Just need to route the traffic accordingly in your home network too.

Most stuff runs just fine with rootless Podman, which has way less authority over the system than Docker and is a drop in replacement for it.

Shame Portainer doesn't work (well) with it since that makes the whole process of managing your containers way nicer.

I find managing a Portainer stack waaaaay nicer than loving around with systemd stuff to get similar functionality.
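A sketch of that high-port pattern, with the privileged ports redirected on the host - plain iptables shown, nftables or a router rule works just as well, and the image is just an example:

code:

# rootless container publishes only unprivileged ports
podman run -d --name proxy -p 8080:80 -p 4443:443 docker.io/library/nginx

# host redirects 80/443 to the high ports (adjust interface/ports to taste)
sudo iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 4443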

Nitrousoxide fucked around with this message at 21:42 on Mar 2, 2023

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Always follow principle of least privilege wherever possible.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Always follow principle of least privilege wherever possible.

Not just least privilege, but also privilege separation.
If you've got anything that listens on a port, it should be running as its own user/group, and only be allowed to have access to the directories that it needs access to.

Notably, this means not running everything as user nobody - that user is meant to be used for anonymous/untrusted NFS access.
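In practice that's one non-login system account per service plus ownership on its data directory; the names below are just examples, and the nologin path varies by distro:

code:

# dedicated, non-login account for one service (names are examples)
sudo useradd --system --home-dir /var/lib/myservice --shell /usr/sbin/nologin myservice
sudo install -d -o myservice -g myservice -m 0750 /var/lib/myservice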

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

BlankSystemDaemon posted:

Since mumble isn't binding to a privileged port, it's probably doing raw socket access - so yes, you should absolutely use its facility to drop privileges, instead of letting it run as root.

Even if you for some reason do need a privileged port, on Linux it's trivial to grant CAP_NET_BIND_SERVICE instead of giving away full root access. I'm sure FreeBSD has the same kind of system. ;)
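The file-capability version of that, for anyone who hasn't seen it (the binary path is an example):

code:

# let this one binary bind ports below 1024 without running as root
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/myserver
getcap /usr/local/bin/myserver   # verify the capability stuck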

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



BlankSystemDaemon posted:

Not just least privilege, but also privilege separation.
If you've got anything that listens on a port, it should be running as its own user/group, and only be allowed to have access to the directories that it needs access to.

Notably, this means not running everything as user nobody - that user is meant to be used for anonymous/untrusted NFS access.

You can throw a :z or :Z at the end of a volume bind in a Podman deployment to let SELinux limit access to binds. Little "z" will let other stuff beyond just the container access that bind mount, while big "Z" will ONLY let that container access it. Podman (and SELinux) handles all the userspace craziness required to ensure this which is nice so you don't have to.

I think podman also has its own mount functionality now? I haven't tried to use it for network shares, but it would be nice if I could use it to mount network directories without having to resort to system level mounts using fstab.
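On the :z/:Z point, in practice it's just a suffix on the bind mount; the image and paths below are placeholders:

code:

# :Z relabels the host path so only this container's SELinux context can touch it
podman run -d --name web -p 8080:80 \
  -v /srv/site:/usr/share/nginx/html:Z \
  docker.io/library/nginx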

cruft
Oct 25, 2007

Nitrousoxide posted:

Most stuff runs just fine with rootless Podman, which has way less authority over the system than Docker and is a drop in replacement for it.

Is there a way to orchestrate Podman that isn't kubernetes or their "generate a systemd service file to start the container" command?

I'm using Docker Swarm right now and it's just so nice. But it has some CPU overhead and is frankly overkill for a single-node swarm. I've tried a couple times to move everything over to podman or docker-compose and always fall back to swarm after a couple hours of struggle.

e: if I'm being honest, systemd services to start podman containers wouldn't be that bad.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Nitrousoxide posted:

You can throw a :z or :Z at the end of a volume bind in a Podman deployment to let SELinux limit access to binds. Little "z" will let other stuff beyond just the container access that bind mount, while big "Z" will ONLY let that container access it. Podman (and SELinux) handles all the userspace craziness required to ensure this which is nice so you don't have to.

It just relabels the SELinux context of the file hierarchy on the host system. Not that nice IMO since you're changing host (meta)data to make it work.

BlankSystemDaemon
Mar 13, 2009



Keito posted:

Even if you for some reason do need a privileged port, on Linux it's trivial to grant CAP_NET_BIND_SERVICE instead of giving away full root access. I'm sure FreeBSD has the same kind of system. ;)

Yeah, I know there are ways to change which ports are privileged - but they're privileged ports because the underlying processes are expected to be controlled/administrated by the operator of the system.

On FreeBSD, as you surmised, it's mac_portacl(4): https://man.freebsd.org/mac_portacl(4)

Nitrousoxide posted:

I think podman also has its own mount functionality now? I haven't tried to use it for network shares, but it would be nice if I could use it to mount network directories without having to resort to system level mounts using fstab.

AutoFS should be able to handle NFS (which is why, at least on the BSDs, /net/ is documented in hier(7)).

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



cruft posted:

Is there a way to orchestrate Podman that isn't kubernetes or their "generate a systemd service file to start the container" command?

You *can* use Portainer (or yacht I guess) plugged into the Podman socket, since it replicates the docker API on that socket for compatibility. Since Podman doesn't have a system daemon like Docker, though, it won't bring stuff up on boot unless you use systemd or Kubernetes.

cruft posted:

e: if I'm being honest, systemd services to start podman containers wouldn't be that bad.

It's by no means *terrible*, but you're going to be using the CLI rather than a GUI which is a barrier to entry for a lot of folks. It's not like you have to manually make and start the services: "podman generate systemd --new --name {containername}" will make it for you from a currently running container.

Since it just shoves the systemd entries in with everything else running on the system, you also don't really have a good way to export a list of them nicely. It's also 4 commands to set it up rather than the one-click deploy in Portainer or one command in Docker (podman start ..., podman generate ..., systemctl enable {service name}, systemctl start {service name}).

When you want to update a container though systemd is kinda cool since you can just do podman auto-update to update everything in one go (or use --dry-run to get a list of what needs updating and pass along containers to update). You can also set the containers to auto-update in the systemd service with a timer, so you can go completely hands off if you're confident a new image won't bork your container at some point.

Edit: oh, you also have to enable "lingering" if you're using rootless Podman and want your containers to start up before you log in to the system after a boot. Rootful podman systemd'd containers will happily start before any user is logged in though.
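Spelled out, the rootless flow (including the lingering bit) is roughly this, with the container name as a placeholder:

code:

# generate a user unit from an already-running container (name is a placeholder)
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name mycontainer > ~/.config/systemd/user/container-mycontainer.service
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service

# let the user's units start at boot without an interactive login
sudo loginctl enable-linger $USER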

Nitrousoxide fucked around with this message at 00:13 on Mar 3, 2023

cruft
Oct 25, 2007

Nitrousoxide posted:

It's by no means *terrible*, but you're going to be using the CLI rather than a GUI which is a barrier to entry for a lot of folks.

NOT ME, BUDDY! I've been using Unix since before Windows had its own kernel! Hell, sometimes I use ed, just for nostalgia.

But this is good general advice :)

cruft
Oct 25, 2007

cruft posted:

Hell, sometimes I use ed, just for nostalgia.

Then I go out back and shape my massive beard with a couple rocks I've chipped at until they have sharp edges, and walk home barefoot through the forest with a smug look on my face.

Retrograde
Jan 22, 2007

Strange game-- the only winning move is not to play.

e.pilot posted:

I'm self hosting a freepbx container now, give it a call

1-408-709-4378

Couple of pages back but I legit laughed out loud when the Chrono Trigger music came on after pressing 1 a few times.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Retrograde posted:

Couple of pages back but I legit laughed out loud when the Chrono Trigger music came on after pressing 1 a few times.

there’s a few easter eggs in there you can dial from the main menu too

69, 420, 56k, ps1, wii, 0, 8675309, 999, I think there are some others I’m forgetting

it does a fake pregnant pause and ring making you think someone is answering after 5 minutes too

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


We've probably covered something like this so please feel free to point me backwards in the thread if already well covered. Searching on this gets a bit saturated with not quite relevant results or multiple ways to probably do what I want so hoping for some preference guidance.

I have 2 different google drive accounts: 1 personal that's capped at 1-2tb and 1 work account that's unlimited. I have an Unraid NAS at home as well with storage room to spare. I have a wired desktop and a wifi laptop, both running win10, that need access to at minimum the personal google drive data.

What I'd like to do:

Have google drive files physically stored on the NAS, have that storage location mapped as a network drive that my windows machines can access. What is the best way to accomplish this?


Perfect scenario but not necessary if too bothersome to implement:

Do the above but have 2 separate drives mapped for each google drive account and also allow a wifi connected macOS laptop to have similar access.

What say you thread?


e: read/write speed isn't too critical, I'm not going to be streaming videos from this, it's mostly spreadsheets, large DNA sequence data files, and a lot of word processing stuff - papers, figures, etc. for grant and article writing.

e:2 what I've found so far was either using Rclone https://forums.unraid.net/topic/75436-guide-how-to-use-rclone-to-mount-cloud-drives-and-play-files/
or Rsync https://www.youtube.com/watch?v=9oG7gNCS3bQ

Not very sure on the differences between these - is either fine, or are both inferior to something else I don't know about, etc.?
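For reference, the rclone route from that guide seems to boil down to a one-time config plus either a sync or a mount; the remote name and paths below are placeholders, and see the reply below about Drive API access making rclone flaky lately:

code:

# one-time interactive setup of a Drive remote (the name "gdrive" is a placeholder)
rclone config

# either pull a synced copy down to the array...
rclone sync gdrive: /mnt/user/gdrive --progress

# ...or mount it so the network share can expose it live
rclone mount gdrive: /mnt/user/gdrive --vfs-cache-mode writes --daemon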

That Works fucked around with this message at 12:31 on Mar 24, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
google changed something with api access to drive a bit ago that’s made rsync rclone kind of broken, it works but only for a week at a time

I use duplicati to back up to google drive, but that’s strictly backing up, the files in that case aren’t directly readable on google drive

e.pilot fucked around with this message at 22:21 on Mar 24, 2023

cruft
Oct 25, 2007

e.pilot posted:

google changed something with api access to drive a bit ago that’s made rsync kind of broken, it works but only for a week at a time

I use duplicati to back up to google drive, but that’s strictly backing up, the files in that case aren’t directly readable on google drive

Do you mean rclone? Did rsync ever work with drive?

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


If I can't make it work with Google drive for option 1, I'd at least be willing to do some kind of other cloud backup solution (box, dropbox etc) as long as I could get a few TB of storage there. That would give me 2 physical storage spaces (the unraid NAS and another old desktop I have with a large storage drive sitting on it).


Basically I want to plug in a laptop and just use a mapped drive from the NAS (that is also cloud mirrored) for all my google drive stuff, as moving things on and off it via the web browser is far too tedious when I'm working on writing papers while designing figures and doing data analysis and formatting etc.
