|
don't expose your plex instance to the internet folks https://arstechnica.com/information...orporate-vault/
|
# ? Feb 28, 2023 16:13 |
|
|
|
BedBuglet posted:I assume you plan to map /dev/(bus|ttyUSB0) into the container? There's cturra but honestly, I would consider rolling your own. It's simple enough to make an NTP server container. Here's a guide if you want. yeah that’s the plan, I’ll give that guide a try
|
# ? Feb 28, 2023 16:15 |
|
Heck Yes! Loam! posted:don't expose your plex instance to the internet folks I’m not a target of a state actor, I’m fine
|
# ? Feb 28, 2023 16:16 |
|
Heck Yes! Loam! posted:don't expose your plex instance to the internet folks If I'm reading this right either the Plex server was used as a jump box or the person who got hacked shared a password?
|
# ? Feb 28, 2023 16:20 |
|
Matt Zerella posted:If I'm reading this right either the Plex server was used as a jump box or the person who got hacked shared a password? it doesn't say specifically, but you can read between the lines and assume this person shared passwords between their home functions and work functions or had their work stuff stored in their personal lastpass. My assumption would be: Plex hacked to get control of local machine > keylogger on plex machine > capture passwords to personal LastPass > Personal LastPass provided access to professional resources. or Plex hacked to get control of local machine > keylogger on plex machine > idiot used shared passwords and no mfa > access to professional resources.
|
# ? Feb 28, 2023 16:29 |
|
e.pilot posted:I’m not a target of a state actor, I’m fine Yep, this. I also don't keep my passwords on someone else's computer. And my Plex server is in a DMZ, not on my own desktop or even in the same subnet.
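A minimal sketch of what "Plex in a DMZ, not on the same subnet" can look like at the firewall. This assumes nftables with an existing `inet filter` table and `forward` chain; the interface names (vlan50, lan0) and subnets are placeholders for whatever your network actually uses.

```shell
# Assumed layout: DMZ on vlan50 (192.168.50.0/24), trusted LAN on lan0
# (192.168.1.0/24). Assumes an existing "inet filter" table with a
# forward chain; adjust names to match your ruleset.

# Let DMZ hosts answer connections the LAN opened, and nothing more:
nft add rule inet filter forward iifname "vlan50" oifname "lan0" ct state established,related accept
nft add rule inet filter forward iifname "vlan50" oifname "lan0" drop

# The trusted LAN may still reach the DMZ to administer the Plex box:
nft add rule inet filter forward iifname "lan0" oifname "vlan50" accept
```

The point is one-way reachability: a popped Plex box in the DMZ can't open new connections into the LAN, but you can still SSH into it from the LAN side.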
|
# ? Feb 28, 2023 16:32 |
|
Heck Yes! Loam! posted:don't expose your plex instance to the internet folks e.pilot posted:I’m not a target of a state actor, I’m fine How were you able to determine this? I might be, I just don't know. I wonder if there's a way to convince Jellyfin to work with the HTTP Basic auth that I have in front of literally everything else...
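For the "Basic auth in front of literally everything" idea, a hedged nginx sketch. Hostnames, ports, and file paths are placeholders; one caveat is that the Jellyfin mobile/TV apps generally can't answer a Basic auth prompt, so this mostly protects browser access (TLS directives omitted for brevity).

```shell
# Write an nginx vhost that puts HTTP Basic auth in front of Jellyfin.
# jellyfin.example.com, port 8096, and the htpasswd path are placeholders.
cat > /etc/nginx/conf.d/jellyfin.conf <<'EOF'
server {
    listen 80;
    server_name jellyfin.example.com;

    location / {
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass           http://127.0.0.1:8096;
        proxy_set_header     Host $host;
        proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF

# Create a user entry (htpasswd comes from apache2-utils; prompts for a password):
htpasswd -c /etc/nginx/htpasswd alice
```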
|
# ? Feb 28, 2023 17:44 |
|
Plex having an exploitable RCE is the least of this incident. quote:Already smarting from a breach that put partially encrypted login data into a threat actor’s hands, LastPass on Monday said that the same attacker hacked an employee’s home computer and obtained a decrypted vault available to only a handful of company developers. An employee with privileged access to customer information put their corporate LastPass vault on their home PC. This same vault held the access keys to their main S3 buckets that store encrypted customer data. I don't know why nerds insist on letting work stuff anywhere near their personal computers. You do that and you might as well slap an asset tag on it, because they can fire you and seize that poo poo at a moment's notice.
|
# ? Feb 28, 2023 19:16 |
|
Cenodoxus posted:Plex having an exploitable RCE is the least thing about this incident. BYOD, baby. Seems like this must have been the Plex client?
|
# ? Feb 28, 2023 19:54 |
|
Cenodoxus posted:Plex having an exploitable RCE is the least thing about this incident. At this point the plex client is all electron so it would be more surprising if it didn't have an assload of RCE vulnerabilities.
|
# ? Feb 28, 2023 20:24 |
"I'm fine because I'm not important" only works right up until it's no longer a tool of APTs - once an exploit gets to the level of script kiddies who run it against /0, you're just as hosed as someone getting targeted by an APT.
|
|
# ? Feb 28, 2023 20:56 |
|
BlankSystemDaemon posted:"I'm fine because I'm not important" only works right up until it's no longer a tool of APTs - once an exploit gets to the level of script kiddies who run it against /0, you're just as hosed as someone getting targeted by an APT. It is a naïve take for sure, and in my experience as a professional computer security educator, it tends to lead people into overly-risky decisions.
|
# ? Feb 28, 2023 21:05 |
cruft posted:It is a naïve take for sure, and in my experience as a professional computer security educator, it tends to lead people into overly-risky decisions.
|
|
# ? Feb 28, 2023 22:57 |
|
I’m also running it in a container on its own vlan through nginx, so meh. Anything the internet can touch carries some risk, all you can do is try to limit said risk
|
# ? Mar 1, 2023 15:05 |
|
e.pilot posted:I’m also running it in a container on its own vlan through nginx, so meh Right, so let's talk about what else we can do to try and limit risk! Running it in a container is something you've done. How restricted is this container? Does it have write permissions to any important files? How often do you apply software upgrades? (Plex, to its credit, does a good job nagging you about needing to upgrade.) Let's pretend an attacker were able to get a local shell inside your Plex container. What could they do with that? They could ruin your Plex state for starters; there probably isn't much you can do to defend against that one other than scheduling regular backups so you can get back quickly. Could they also delete your library? Could they launch an attack against other machines on your network? Now let's say there's some second script they could run to become root in that container. We call this "privilege escalation". What would that allow them to do? What if they're able to break out of the container and access things in the host operating system? I'm using you (e.pilot) as a sort of strawman for my own thinking here. I have to admit I haven't done much more than what you've outlined, and I'm increasingly worried that this exposes me way more than I'd like. Especially since my career makes me a target for grey and black hats.
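As a sketch of the kind of restrictions those questions point at, here is a hedged `docker run` example. The image name, paths, and port are placeholders, and the official Plex image's init may need some of these relaxed (e.g. adding back CHOWN/SETUID/SETGID) to start; it's meant to show the knobs, not a drop-in command.

```shell
# --read-only makes the container's root filesystem immutable; --tmpfs
# gives it scratch space back. --cap-drop=ALL plus no-new-privileges
# means a shell inside has no capabilities and setuid binaries can't
# escalate. The media volume is mounted :ro so a compromised process
# can't delete the library.
docker run -d --name plex \
  --read-only --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  -v /srv/plex/config:/config \
  -v /srv/media:/media:ro \
  -p 32400:32400 \
  plexinc/pms-docker
```

With a layout like this, "local shell in the container" mostly gets an attacker the config volume and nothing else, and the privilege-escalation step has far fewer footholds.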
|
# ? Mar 1, 2023 15:18 |
|
I just have mine behind Tailscale, but I’m also thinking it’s time to reduce what the machine housing the media suite can do. It’s a Mac laptop that used to be a work machine, so there’s some baggage there to take care of. Beyond just ditching a bunch of apps and removing browser addons, maybe using ufw on my other network machines to block incoming ssh from its local IP? Not sure that’s enough. I’m on an asus router so I can’t easily vlan the thing.
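The ufw idea above is straightforward to express; here's a sketch, with 192.168.1.50 standing in for the media machine's LAN address.

```shell
# On each other machine running ufw: refuse SSH coming from the media
# box's address (placeholder IP).
ufw deny from 192.168.1.50 to any port 22 proto tcp

# Verify the rule landed:
ufw status numbered
```

It's per-machine rather than network-wide, so it doesn't replace a VLAN, but it does mean a compromised media box can't trivially SSH sideways.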
|
# ? Mar 1, 2023 16:07 |
|
Misread that as ‘/dev/titty’ and now I’m surprised that the self hosting/Linux world hasn’t debased themselves in the usual manner by now. Surely there is a really good CLI utility named butts or whatever.
|
# ? Mar 1, 2023 16:23 |
|
cruft posted:Right, so let's talk about what else we can do to try and limit risk! Running it in a container is something you've done. How restricted is this container? Does it have write permissions to any important files? How often do you apply software upgrades? (Plex, to its credit, does a good job nagging you about needing to upgrade.)

my plex library is the only thing it has access to on the machine itself. it backs up nightly, through means separate from plex, to a machine that lives outside of my house in the shed in my yard. I could set it to read only but that’d break the dvr functionality. it’s also entirely dvr content, so even worst case it’s not like I’d be losing anything irreplaceable.

root level access inside the container wouldn’t gain much if anything, save for deleting and messing with my plex library and files.

breaking out of the container would potentially be real bad, but there’s still a lot to get through to get to that point:
only one port is open on the firewall
it’s on its own VLAN
it only goes to nginx
nginx only points at plex (and my mastodon instance, which is a much bigger threat vector than plex, but I digress)
docker segregates everything from the host os

again there’s no perfect security, but there are steps you can take to make the amount of layers to get through more substantial, compared to just rawdogging a bare os plex install on your main PC to the internet.

you could also just keep everything closed off on your network and only access it via wireguard, but you wouldn’t be able to share with anyone unless you used plex relay, which limits to 720p. or, if your friends are tech savvy, create a VPN specifically for plex guest access and nothing else, and give them access to that for plex.

e: passwords are also all randomly generated or 15+ character pass phrases

e.pilot fucked around with this message at 19:17 on Mar 1, 2023 |
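The "VPN specifically for plex guest access and nothing else" idea can be sketched with WireGuard plus a firewall rule per interface. Everything here is a placeholder (keys, subnets, interface names, the Plex host's address), and the iptables rules assume that's the firewall flavour in use.

```shell
# A second WireGuard interface just for guests; each friend gets a peer.
cat > /etc/wireguard/wg-guests.conf <<'EOF'
[Interface]
Address    = 10.66.0.1/24
ListenPort = 51821
PrivateKey = <server-private-key>

[Peer]
# one block per friend
PublicKey  = <friend-public-key>
AllowedIPs = 10.66.0.2/32
EOF

# Firewall the guest interface down to the Plex port only
# (192.168.1.50:32400 is a placeholder):
iptables -A FORWARD -i wg-guests -d 192.168.1.50 -p tcp --dport 32400 -j ACCEPT
iptables -A FORWARD -i wg-guests -j DROP

wg-quick up wg-guests
```

Guests who connect can stream at full quality, but the tunnel can't see anything else on the network.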
# ? Mar 1, 2023 19:04 |
|
e.pilot posted:you could also just keep everything closed off on your network and only access it via wireguard, but you wouldn’t be able to share with anyone unless you used plex relay which limits to 720p Aye, there's the rub.
|
# ? Mar 1, 2023 19:14 |
For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and rely on an app's ability to switch user? For example with mumble, I use a letsencrypt cert via certbot, which needs privileges to: quote:This includes Certbot’s --work-dir, --logs-dir, and --config-dir. By default these are /var/lib/letsencrypt, /var/log/letsencrypt, and /etc/letsencrypt respectively. Mumble can switch users on startup with the uname config parameter. Is that sufficient? Similar for nginx with the user directive.
|
|
# ? Mar 2, 2023 20:40 |
fletcher posted:For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and relying on an apps ability to switch user? Since mumble isn't binding to a privileged port, it's probably doing raw socket access - so yes, you should absolutely use its facility to drop privileges, instead of letting it run as root. If it needs access to letsencrypt, you need to add the mumble user to the letsencrypt group, and set the permissions on the letsencrypt folders properly. EDIT: Some background on this: The only possible reason to run something as root is if you're binding on a privileged port, which is 0 through 1023, or if you're using raw socket access. The rest of the 16-bit range is divided into a few unofficial segments, with 1024 through 49151 being known as user ports, and 49152 to 65535 being known as dynamic/private ports. User ports are expected to be used by user processes (ie. processes not run as root) and dynamic/private ports are generally meant to be things that haven't been assigned by IANA. Most Unix-likes also include parts of this list under /etc/services (which in turn get used by netstat and other utilities to convert port numbers into human-readable protocols). BlankSystemDaemon fucked around with this message at 21:21 on Mar 2, 2023 |
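The "add the mumble user to a group and fix the folder permissions" suggestion might look like this. The group and service-user names are placeholders; the paths are certbot's defaults (certbot keeps the real keys in archive/ with symlinks from live/, so both need to be group-readable).

```shell
# Create a group for cert readers and put the mumble user in it
# ("letsencrypt" and "mumble" are placeholder names):
groupadd -f letsencrypt
usermod -aG letsencrypt mumble

# Group-own the certbot directories and allow group read + traverse:
chgrp -R letsencrypt /etc/letsencrypt/live /etc/letsencrypt/archive
chmod -R g+rX /etc/letsencrypt/live /etc/letsencrypt/archive
```

After this the service never needs root just to read its certificate.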
|
# ? Mar 2, 2023 21:17 |
|
fletcher posted:For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and relying on an apps ability to switch user? I would consider that good enough for your homelab. Like, most apps (not all) drop root super early in the process, after opening whatever privileged doodads they need root for. The need to start anything as root should be reduced quite a bit by providing capabilities to the process when it starts, if you want to go next level.
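The "providing capabilities to the process when it starts" route can be sketched as a systemd unit: the service starts directly as the unprivileged user, and systemd hands it only the one capability it would have needed root for. The unit name, binary path, and flags are placeholders.

```shell
# Start as the mumble user from the outset, with only the low-port
# capability granted; no root, no privilege dropping in the app.
cat > /etc/systemd/system/murmur.service <<'EOF'
[Unit]
Description=Mumble server (capability sketch)

[Service]
User=mumble
ExecStart=/usr/local/bin/murmurd -fg
# Grant only the ability to bind ports below 1024:
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
```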
|
# ? Mar 2, 2023 21:18 |
|
fletcher posted:For the security topic that was discussed recently and avoiding running things as root - is it ok to start as root and relying on an apps ability to switch user? Certbot runs independently of murmur as a cron job (e: or systemd timer), the only interaction it would have with murmur is restarting the systemd service to load the new certificates. But yes having it drop privileges on startup using its own built in method is a fine way of getting it to run as an unprivileged user. corgski fucked around with this message at 21:38 on Mar 2, 2023 |
# ? Mar 2, 2023 21:27 |
Most stuff could be run as something less than root, especially with port mapping in docker. No reason you couldn't map even your reverse proxy to 8080:80 and 4443:443 (or something). Just need to route the traffic accordingly in your home network too. Most stuff runs just fine with rootless Podman, which has way less authority over the system than Docker and is a drop-in replacement for it. Shame Portainer doesn't work (well) with it since that makes the whole process of managing your containers way nicer. I find managing a Portainer stack waaaaay nicer than loving around with systemd stuff to get similar functionality. Nitrousoxide fucked around with this message at 21:42 on Mar 2, 2023 |
|
# ? Mar 2, 2023 21:39 |
|
Always follow principle of least privilege wherever possible.
|
# ? Mar 2, 2023 21:44 |
CommieGIR posted:Always follow principle of least privilege wherever possible. Not just least privilege, but also privilege separation. If you've got anything that listens on a port, it should be running as its own user/group, and only be allowed to have access to the directories that it needs access to. Notably, this means not running everything as user nobody - that user is meant to be used for anonymous/untrusted NFS access.
|
|
# ? Mar 2, 2023 21:54 |
|
BlankSystemDaemon posted:Since mumble isn't binding to a privileged port, it's probably doing raw socket access - so yes, you should absolutely use its facility to drop privileges, instead of letting it run as root. Even if you for some reason need to bind to a privileged port, on Linux it's trivial to grant CAP_NET_BIND_SERVICE instead of giving away full root access. I'm sure FreeBSD has the same kind of system.
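Granting CAP_NET_BIND_SERVICE to a single binary is a two-liner with the libcap tools (the binary path is a placeholder):

```shell
# Give just this binary the right to bind ports below 1024, without root:
setcap 'cap_net_bind_service=+ep' /usr/local/bin/murmurd

# Confirm the file capability took:
getcap /usr/local/bin/murmurd
```

One caveat worth knowing: file capabilities are attached to the inode, so they have to be re-applied whenever the binary is replaced by an upgrade, which is why the systemd-level approach is often more convenient.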
|
# ? Mar 2, 2023 22:16 |
BlankSystemDaemon posted:Not just least privilege, but also privilege separation. You can throw a :z or :Z at the end of a volume bind in a Podman deployment to let SELinux limit access to binds. Little "z" will let other stuff beyond just the container access that bind mount, while big "Z" will ONLY let that container access it. Podman (and SELinux) handles all the userspace craziness required to ensure this which is nice so you don't have to. I think podman also has its own mount functionality too now? I haven't tried to use it for network shares, but it would be nice if I could use it to mount network directories without having to resort to system level mounts using fstab.
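The :z / :Z distinction described above, as command-line sketches (image names and paths are placeholders):

```shell
# Big Z: relabel so only THIS container's SELinux label can touch the
# bind mount - right for per-container state like a config directory.
podman run -d -v /srv/plex/config:/config:Z plexinc/pms-docker

# Little z: shared label, so several containers can use the same mount -
# here also combined with read-only:
podman run -d -v /srv/media:/media:ro,z plexinc/pms-docker
```

Note the caveat from the reply below: the flags relabel the actual files on the host, so don't point :Z at a directory that other host processes also need.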
|
|
# ? Mar 2, 2023 22:23 |
|
Nitrousoxide posted:Most stuff runs just fine with rootless Podman, which has way less authority over the system than Docker and is a drop in replacement for it. Is there a way to orchestrate Podman that isn't kubernetes or their "generate a systemd service file to start the container" command? I'm using Docker Swarm right now and it's just so nice. But it has some CPU overhead and is frankly overkill for a single-node swarm. I've tried a couple times to move everything over to podman or docker-compose and always fall back to swarm after a couple hours of struggle. e: if I'm being honest, systemd services to start podman containers wouldn't be that bad.
|
# ? Mar 2, 2023 22:30 |
|
Nitrousoxide posted:You can throw a :z or :Z at the end of a volume bind in a Podman deployment to let SELinux limit access to binds. Little "z" will let other stuff beyond just the container access that bind mount, while big "Z" will ONLY let that container access it. Podman (and SELinux) handles all the userspace craziness required to ensure this which is nice so you don't have to. It just relabels the SELinux context of the file hierarchy on the host system. Not that nice IMO since you're changing host (meta)data to make it work.
|
# ? Mar 2, 2023 22:52 |
Keito posted:Even if you for some reason need require to a privileged port, on Linux it's trivial to grant CAP_NET_BIND_SERVICE instead of giving away full root access. I'm sure FreeBSD has the same kind of system. On FreeBSD, as you surmised, it's mac_portacl(4): https://man.freebsd.org/mac_portacl(4) Nitrousoxide posted:I think podman also has its own mount functionality too now? I haven't tried to use it for network shares, but it would be nice if I could use it to mount network directories without having to resort to system level mounts using fstab.
|
|
# ? Mar 2, 2023 23:31 |
cruft posted:Is there a way to orchestrate Podman that isn't kubernetes or their "generate a systemd service file to start the container" command? You *can* use Portainer (or yacht I guess) plugged into the Podman socket since it replicates the api for docker in that socket for compatibility. Since Podman doesn't have a system daemon like Docker though it won't bring stuff up on boot unless you use Systemd or Kubernetes cruft posted:e: if I'm being honest, systemd services to start podman containers wouldn't be that bad. When you want to update a container, though, systemd is kinda cool since you can just do podman auto-update to update everything in one go (or use --dry-run to get a list of what needs updating and pass along containers to update). You can also set the containers to auto-update in the systemd service with a timer, so you can go completely hands off if you're confident a new image won't bork your container at some point. Edit: oh you also have to enable "lingering" if you're using rootless Podman if you want to have your containers start up before you log in with your user to the system after a boot. Rootful podman systemd'd containers will happily start before any user is logged in though. Nitrousoxide fucked around with this message at 00:13 on Mar 3, 2023 |
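Putting the pieces from this post together, the rootless workflow looks roughly like this (container and image names are placeholders; `podman generate systemd` was the standard way to do this at the time, though newer Podman releases push quadlet files instead):

```shell
# Create the container with the auto-update label, then generate a
# per-user systemd unit for it:
podman create --name myapp --label io.containers.autoupdate=registry \
    docker.io/library/nginx:latest
podman generate systemd --new --files --name myapp   # writes container-myapp.service

mkdir -p ~/.config/systemd/user
mv container-myapp.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-myapp

# Let rootless services start at boot without an interactive login:
loginctl enable-linger "$USER"

# Later: see what has a newer image, then pull and restart in one go:
podman auto-update --dry-run
podman auto-update
```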
|
# ? Mar 3, 2023 00:07 |
|
Nitrousoxide posted:It's by no means *terrible*, but you're going to be using the CLI rather than a GUI which is a barrier to entry for a lot of folks. NOT ME, BUDDY! I've been using Unix since before Windows had its own kernel! Hell, sometimes I use ed, just for nostalgia. But this is good general advice
|
# ? Mar 3, 2023 00:12 |
|
cruft posted:Hell, sometimes I use ed, just for nostalgia. Then I go out back and shape my massive beard with a couple rocks I've chipped at until they have sharp edges, and walk home barefoot through the forest with a smug look on my face.
|
# ? Mar 3, 2023 00:14 |
|
e.pilot posted:I'm self hosting a freepbx container now, give it a call Couple of pages back but I legit laughed out loud when the Chrono Trigger music came on after pressing 1 a few times.
|
# ? Mar 8, 2023 16:33 |
|
Retrograde posted:Couple of pages back but I legit laughed out loud when the Chrono Trigger music came on after pressing 1 a few times. there’s a few easter eggs in there you can dial from the main menu too: 69, 420, 56k, ps1, wii, 0, 8675309, 999, I think there are some others I’m forgetting. it also does a fake pregnant pause and ring after 5 minutes, making you think someone is answering
|
# ? Mar 8, 2023 20:38 |
We've probably covered something like this so please feel free to point me backwards in the thread if already well covered. Searching on this gets a bit saturated with not-quite-relevant results, or multiple ways to probably do what I want, so hoping for some preference guidance.

I have 2 different google drive accounts: 1 personal that's capped at 1-2tb and 1 work account that's unlimited. I have an Unraid NAS at home as well with storage room to spare. I have a network-wired desktop and a wifi laptop, both running win10, that need access to at minimum the personal google drive data.

What I'd like to do: Have google drive files physically stored on the NAS, and have that storage location mapped as a network drive that my windows machines can access. What is the best way to accomplish this?

Perfect scenario, but not necessary if too bothersome to implement: Do the above but have 2 separate drives mapped for each google drive account, and also allow a wifi-connected macOS laptop to have similar access. What say you thread?

e: read/write speed isn't too critical, I'm not going to be streaming videos from this. it's mostly spreadsheets, large DNA sequence data files, and a lot of word processing stuff: papers, figures etc. for grant and article writing.

e2: what I've found so far was either using Rclone https://forums.unraid.net/topic/75436-guide-how-to-use-rclone-to-mount-cloud-drives-and-play-files/ or Rsync https://www.youtube.com/watch?v=9oG7gNCS3bQ Not very sure on the differences between these / is either fine or are both inferior to something else I don't know about etc.

That Works fucked around with this message at 12:31 on Mar 24, 2023 |
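On the rclone-vs-rsync question: rsync just copies files one way on a schedule, while rclone can *mount* the remote so it behaves like a local directory, which is what the "mapped network drive" setup wants. A sketch of the rclone route from the linked guide, with the remote name and mount path as placeholders:

```shell
# Interactive wizard: creates a remote (call it "gdrive") and walks
# through the Google Drive OAuth authorization.
rclone config

# Mount the remote so the NAS can share it out as a normal folder:
mkdir -p /mnt/user/gdrive
rclone mount gdrive: /mnt/user/gdrive \
    --vfs-cache-mode writes \
    --daemon
```

A second remote and a second mount point would cover the two-account "perfect scenario"; the mounted paths can then be exported over SMB for the Windows and macOS machines.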
|
# ? Mar 24, 2023 11:58 |
|
google changed something with api access to drive a bit ago that’s made rsync kind of broken, it works but only for a week at a time. I use duplicati to back up to google drive, but that’s strictly backing up, the files in that case aren’t directly readable on google drive e.pilot fucked around with this message at 22:21 on Mar 24, 2023 |
# ? Mar 24, 2023 18:11 |
|
e.pilot posted:google changed something with api access to drive a bit ago that’s made rsync kind of broken, it works but only for a week at a time Do you mean rclone? Did rsync ever work with drive?
|
# ? Mar 24, 2023 18:45 |
|
If I can't make it work with Google drive for option 1, at least I'd be willing to do some kind of other cloud backup solution (box, dropbox etc) as long as I could get a few TB of storage there. That would give me 2 physical storage spaces (the unraid NAS and another old desktop I have with a large storage drive sitting on it). Basically I want to plug in a laptop and just use a mapped drive from the NAS (that is also cloud mirrored) for all my google drive stuff, since moving things on and off it via the web browser is far too tedious when I'm working on writing papers while designing figures and doing data analysis and formatting etc.
|
|
# ? Mar 24, 2023 19:05 |