RFC2324
Jun 7, 2012

http 418

cool. so partly how hard they are to delete, and partly because of an actual technical limitation, and partly because no one who works in unix really fully understands what they are doing


Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
You just delete them. The file they point to sticks around until you delete the last hardlink. A softlink is like creating a shortcut to a file; hard links are like having multiple filenames, all of which are equally 'the one true filename'.

Hardlinks are a great way to do incremental backups without duplication, and with the ability to easily prune incremental backups without breaking the other backups (ie. do full backup, do incremental 1, do incremental 2, delete incremental 1 without breaking incremental 2. A lot of backup software won't let you do that due to the work required to decide which bits of 1 to keep so 2 stays valid).
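That prune-safe scheme can be sketched with plain GNU coreutils (cp -al builds the new snapshot as a tree of hard links; the directory names here are made up):

```shell
# Hard-link snapshot sketch (GNU coreutils assumed; names are illustrative).
set -e
rm -rf snapdemo && mkdir snapdemo && cd snapdemo

mkdir data
echo "v1" > data/a.txt
cp -a data backup.1                        # full backup: real copies

cp -al backup.1 backup.2                   # incremental: hard links, no data copied
[ "$(stat -c %h backup.2/a.txt)" -eq 2 ]   # one inode, two names

# Changed files must be replaced (unlink + create), not written in place,
# or the shared inode - and therefore backup.1 - would change too.
rm backup.2/a.txt
echo "v2" > backup.2/a.txt

rm -rf backup.1                            # prune the older snapshot...
cat backup.2/a.txt                         # ...backup.2 still reads "v2"
```

Deleting backup.1 is safe precisely because each snapshot's names are full references to their inodes; no other snapshot has to be repaired.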

Pablo Bluth fucked around with this message at 01:41 on Jan 15, 2022

RFC2324
Jun 7, 2012

http 418

Pablo Bluth posted:

You just delete them. The file they point to sticks around until you delete the last hardlink. A softlink is like creating a shortcut to a file; hard links are like having multiple filenames, all of which are equally 'the one true filename'.

Hardlinks are a great way to do incremental backups without duplication, and with the ability to easily prune incremental backups without breaking the other backups (ie. do full backup, do incremental 1, do incremental 2, delete incremental 1 without breaking incremental 2. A lot of backup software won't let you do that due to the work required to decide which bits of 1 to keep so 2 stays valid).

By "hard to delete" I was referring to being sure you actually did delete the data, and that isn't a technical issue, it's a human one.

BlankSystemDaemon
Mar 13, 2009




The NAS/Storage thread title springs to mind: What is this "File Deletion" You Speak of.

Yaoi Gagarin
Feb 20, 2014

Hard links basically exist because, with the way most Unix file systems (and NTFS, for that matter) work, it would be harder to not have them. The file itself is represented as an inode, and directory entries are just mappings from strings to inodes. You would need some way of figuring out which inodes are still pointed at by a dirent, so they decided to just reference-count the inode and let multiple dirents point to it.
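You can watch the refcounting happen with ln and stat (GNU stat; %i prints the inode number, %h the link count):

```shell
# Two dirents, one inode: create, link, then drop one name.
set -e
rm -rf inodedemo && mkdir inodedemo && cd inodedemo

echo "data" > one
ln one two                                      # second dirent -> same inode
[ "$(stat -c %i one)" = "$(stat -c %i two)" ]   # identical inode numbers
[ "$(stat -c %h one)" -eq 2 ]                   # refcount is now 2

rm one                        # removes a dirent, decrements the count...
cat two                       # ...the inode and its data survive
[ "$(stat -c %h two)" -eq 1 ]
```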

ExcessBLarg!
Sep 1, 2001
Some network filesystems outright don't support cross directory hard links.

They also just serve different purposes. Symlinks often fill the role of providing a canonical name with an easily updatable target, while hard links are more of a deduplication tool.

ExcessBLarg! fucked around with this message at 16:26 on Jan 15, 2022

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
NTFS seems to keep a canonical record of the number of hard links for a given file. With ext, it just seems to be a case of using something like find to scan the entire filesystem.

Edit: stat will report the number of hard links for a given inode, you just have to use find to look up what they actually are. Windows' fsutil seems to be able to list hardlinks quicker.
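The find-based lookup goes something like this (GNU find; -samefile, or -inum with the number stat reports):

```shell
# Enumerate every name pointing at the same inode.
set -e
rm -rf finddemo && mkdir -p finddemo/sub && cd finddemo

echo "x" > a
ln a sub/b                         # second name, in a different directory

stat -c '%h %i' a                  # link count and inode number
find . -samefile a                 # every path sharing a's inode
find . -inum "$(stat -c %i a)"     # same result, looked up by inode number
```

Both find invocations walk the whole tree, which is why this is slow on a large filesystem compared to NTFS's canonical record.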

Pablo Bluth fucked around with this message at 15:00 on Jan 15, 2022

ExcessBLarg!
Sep 1, 2001
Quote != Edit

WattsvilleBlues
Jan 25, 2005

Every demon wants his pound of flesh

VictualSquid posted:

Even if you can't get into the BIOS, there might be a separate boot order menu. It's usually at F8, F11 or F12, and on my Lenovo laptop it's behind a hardware button that needs a paperclip to reach.
And I once used a computer with a corrupt BIOS for quite some time; it only lost its settings when I turned the computer off, but kept them across software reboots.

There should also be a way to chainload your USB stick from whatever boot manager your current distro uses, but I've never worked with that.

Sorry for the delayed response, all goons who answered the call.

F12 is the boot menu shortcut, I'll give it a whirl here, thanks peeps!

BlankSystemDaemon
Mar 13, 2009




In the rare scenario that it occurs in on FreeBSD, I'm really going to appreciate this change to how OOMs are announced.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
For those that run a Postgres container, how do you handle upgrading major versions? Mine are tiny so the dump+restore approach is fine but I didn't know if there was something more automated?

I did see this, which might be it?
https://github.com/tianon/docker-postgres-upgrade

RFC2324
Jun 7, 2012

http 418

Chilled Milk posted:

For those that run a Postgres container, how do you handle upgrading major versions? Mine are tiny so the dump+restore approach is fine but I didn't know if there was something more automated?

I did see this, which might be it?
https://github.com/tianon/docker-postgres-upgrade

https://www.postgresql.org/docs/13/pgupgrade.html

We use this tool. It's pretty straightforward, but still take backups first.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

RFC2324 posted:

https://www.postgresql.org/docs/13/pgupgrade.html

We use this tool. It's pretty straightforward, but still take backups first.

Are you running this on the host system? AFAIK it needs the binaries for both versions available


fake edit: Ooof I remember this thread now from the last time I tried to research this. 2014.
https://github.com/docker-library/postgres/issues/37

MrPablo
Mar 21, 2003

Chilled Milk posted:

Are you running this on the host system? AFAIK it needs the binaries for both versions available


fake edit: Ooof I remember this thread now from the last time I tried to research this. 2014.
https://github.com/docker-library/postgres/issues/37

I haven't tried this, but since the Postgres Docker image is Debian-based, why not do the following:

  1. Start with the image for the new version of Postgres.
  2. Install the postgresql-contrib package (which contains pg_upgrade).
  3. Install the old version of Postgres in the container.
  4. Follow these instructions to run pg_upgrade as usual.
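I haven't tried it either, so treat this as an untested sketch of those steps in Dockerfile form; the package name and the bin/data paths are assumptions based on the Debian/PGDG layout the official image uses:

```dockerfile
# Untested sketch: both major versions' binaries inside the new image.
FROM postgres:14

# The official image carries the PGDG apt repo, so the previous major
# version's server package (which ships the old pg_upgrade-compatible
# binaries) should be installable alongside the new one.
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-13 \
 && rm -rf /var/lib/apt/lists/*

# With both bindirs present, run pg_upgrade as usual (as the postgres
# user, against mounted data directories - these paths are assumptions):
#   pg_upgrade --old-bindir=/usr/lib/postgresql/13/bin \
#              --new-bindir=/usr/lib/postgresql/14/bin \
#              --old-datadir=/var/lib/postgresql/13/data \
#              --new-datadir=/var/lib/postgresql/14/data
```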

RFC2324
Jun 7, 2012

http 418

Chilled Milk posted:

Are you running this on the host system? AFAIK it needs the binaries for both versions available


fake edit: Ooof I remember this thread now from the last time I tried to research this. 2014.
https://github.com/docker-library/postgres/issues/37

oops, sorry, didn't proc that you are running containerized, my bad

Marinmo
Jan 23, 2005

Prisoner #95H522 Augustus Hill
Anyone using Fedora who hasn't executed a dnf upgrade in the last 24 hours or so probably wants to hold back on it, because the maintainer of selinux-policy majorly messed up, causing all sorts of issues with selinux-policy-35.10-1. That update will brick a lot of things, among them cockpit, podman, GDM/Wayland and flatpak. I have no idea how someone pushes things to stable just because of a regex mismatch (which - unlike this - had a workaround), but here we are with more or less broken systems. :downs:

BlankSystemDaemon
Mar 13, 2009




Marinmo posted:

Anyone using Fedora who hasn't executed a dnf upgrade in the last 24 hours or so probably wants to hold back on it, because the maintainer of selinux-policy majorly messed up, causing all sorts of issues with selinux-policy-35.10-1. That update will brick a lot of things, among them cockpit, podman, GDM/Wayland and flatpak. I have no idea how someone pushes things to stable just because of a regex mismatch (which - unlike this - had a workaround), but here we are with more or less broken systems. :downs:
I thought every project had the equivalent of the FreeBSD project's exp-run?

Granted, they take upwards of 100 hours, but it seems like a relatively small price to pay.

BlankSystemDaemon fucked around with this message at 17:29 on Jan 19, 2022

Marinmo
Jan 23, 2005

Prisoner #95H522 Augustus Hill

BlankSystemDaemon posted:

I thought every project had the equivalent of the FreeBSD project's exp-run?

Granted, they take upwards of 100 hours, but it seems like a relatively small price to pay.
Most of these should be incremental updates which tend to fix instead of break things, but they've had a few hiccups before - none like this though as far as I know. There's a new version in Bodhi by now, not sure whether it solves any or all of the issues though.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
Last week Red Hat released a security update for OpenSSL which has caused crashes with Apache and Nginx. Fix was released today.

Bug 2039993 - httpd fails to start with double free after updating to openssl-1.0.2k-23.el7_9

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

RFC2324 posted:

random question thats always bugged me: why are symlink standard practice and not multiple hard links? is it just because its a pain in the rear end to make sure data is really deleted with a hard link, or is there some other reason?

I feel that they are used for different purposes. With symlinks the user interface generally tells you clearly what you are dealing with; it is a clear indication someone intended these files to be identical and remain identical. Hardlinks are usually invisible - the user interface doesn't indicate them in any way. find has the '-links n' option to search for files with N hardlinks, but off the top of my head I can't think of other ways to display the link count. Hardlinks feel like an incidental way to save space if files happen to be identical.

Hardlinks and symlinks also behave differently in various situations, especially in how files might get unlinked. Someone mentioned backups, and that is what I thought of immediately because I use rsnapshot for backups. When you run rsnapshot once you get a daily.0 backup directory. The next time you run it, it will first rename daily.0 to daily.1 and then create a new daily.0 using hardlinks, with every file linked to its copy in daily.1. After that it will run rsync targeting daily.0, and every file that has been modified will be copied as new and unlinked from the previous version. This would of course not work with symlinks. With symlinks I feel I have a pretty good understanding of how unlinking works and what you can do and still keep the files linked. With hardlinks I'm really not clear what the link can withstand: which operations retain the link and which sever it. If I edit a hardlinked file I assume the link stays. rsnapshot uses rsync and that seems to sever the link, but if you copy with standard cp the hardlink is retained. But if you have a file invisibly hardlinked and you overwrite one with cp, I'm not sure you would want the other file to change too.
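For what it's worth, the pattern seems to be: writes that go through an existing name (appending, in-place editing, plain cp onto an existing file) keep the link and change "both" files, while anything that replaces the name with a freshly created file (rm + create, which is roughly what rsync does by default) severs it. A quick coreutils experiment:

```shell
# Which operations sever a hardlink? (GNU coreutils assumed.)
set -e
rm -rf severdemo && mkdir severdemo && cd severdemo

echo "v1" > a
ln a b                         # a and b now share one inode

echo "more" >> a               # append writes through the shared inode:
[ "$(tail -n1 b)" = "more" ]   # link survives, b sees the change

echo "v2" > src
cp src a                       # default cp truncates/rewrites in place:
[ "$(cat b)" = "v2" ]          # still linked, b changed too

rm a                           # replace-the-name (rsync-style): severs it
echo "v3" > a
[ "$(cat b)" = "v2" ]          # b keeps the old content alone...
[ "$(stat -c %h b)" -eq 1 ]    # ...and is now the inode's only name
```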

feedmegin
Jul 30, 2008

On top of this - less relevant these days but hardlinks don't work across filesystems (partitions). Multiple partitions for eg var, usr, home used to be p common in old school UNIX.

feedmegin fucked around with this message at 22:47 on Jan 19, 2022

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm trying to set up Kubuntu 20.04 on a machine with an Nvidia GPU that's giving me some grief. First, the graphics card didn't have a type-C port but the Nvidia driver would try to initialize one, so I had to do some blacklisting. I then reached a point where I had it half taken care of but still would see:

quote:

ucsi_acpi USBC000:00: PPM init failed (-110)

This would knock out the splash screen and it would appear to hang there. I could ALT-F2 to log in and even startx just fine. I then manually installed the latest nvidia Linux drivers from nvidia's own site, plowing over the Ubuntu files like a naughty man. This actually took care of the PPM init error but X still doesn't start automatically. Where should I look to find a disruption in the startup that would cause this? I can still manually startx and go in fine so I am assuming it's either some misconfiguration in the startup for the X server, or I'm silently hitting another error that blocks the startup sequence. I do have this:

quote:

[ 46.628935] sof-audio-pci-intel-cnl 0000:00:1f.3: init of i915 and HDMI codec failed

A pastebin of dmesg if you care:
https://pastebin.com/KHzbXYis

BlankSystemDaemon
Mar 13, 2009




feedmegin posted:

On top of this - less relevant these days but hardlinks don't work across filesystems (partitions). Multiple partitions for eg var, usr, home used to be p common in old school UNIX.
If you want to use boot environments the way they're done in Solaris or FreeBSD, you're going to end up using datasets that're set up much like old school UNIX:
pre:
debdrup@geroi:~ % zfs list
NAME                                      USED  AVAIL     REFER  MOUNTPOINT
zroot                                    22.8G   192G       96K  /zroot
zroot/ROOT                               15.3G   192G       96K  none
zroot/ROOT/14.0-CURRENT-20211230.105549  15.3G   192G     14.8G  /
zroot/ROOT/default                        460K   192G     5.98G  /
zroot/tmp                                 147M   192G      147M  /tmp
zroot/usr                                7.30G   192G       96K  /usr
zroot/usr/home                           5.13G   192G     5.13G  /usr/home
zroot/usr/ports                            96K   192G       96K  /usr/ports
zroot/usr/src                            2.17G   192G     2.17G  /usr/src
zroot/var                                1.12M   192G       96K  /var
zroot/var/audit                            96K   192G       96K  /var/audit
zroot/var/crash                            96K   192G       96K  /var/crash
zroot/var/log                             564K   192G      564K  /var/log
zroot/var/mail                            200K   192G      200K  /var/mail
zroot/var/tmp                              96K   192G       96K  /var/tmp

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Rocko Bonaparte posted:

machine with an Nvidia GPU that's giving me some grief.
It looks like the Nvidia driver really is that much hot garbage. It would hang trying to hit a Type-C port that didn't exist, and it would hang trying to start X. The times it succeeded were when the open-source driver was being loaded instead. So I just gave up on trying to use it at all.

CaptainSarcastic
Jul 6, 2013



Rocko Bonaparte posted:

It looks like the Nvidia driver really is that much hot garbage. It would hang trying to hit a Type-C port that didn't exist, and it would hang trying to start X. The times it succeeded were when the open-source driver was being loaded instead. So I just gave up on trying to use it at all.

It might have been fighting with the iGPU on that i7? At least the processor your system is reporting shows an i7 that the spec sheet says has Intel UHD graphics.

I haven't ever had to deal with installing Linux on a laptop with both Intel and Nvidia GPUs, but it's generally known to frequently be a pain. Also, I haven't touched Ubuntu in like a decade so whatever that distro is doing specifically I am out of the loop on.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Is there any way I can get a local image of CIS Level 1 hardened CentOS/RHEL7 locally at home in a VM? We use it at work and I'd like to be able to spin up a test image to deploy against locally or to test out ansible automations.

I've found a few Ansible routines to apply it, but they're a little hokey and don't seem to support the latest versions of CentOS/RHEL.

other people
Jun 27, 2004
Associate Christ
I don't think RH provides images like that. You could use openscap on a clean minimal install to generate a playbook that should mostly bring a system into compliance and then snapshot that or whatever.
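Sketched out, the openscap route looks roughly like this; the datastream path and profile id come from the scap-security-guide package and vary by distro and version, so check `oscap info` first (untested):

```shell
# List the profiles the shipped datastream provides (path varies by distro).
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Generate an Ansible remediation playbook from the CIS profile,
# apply it to the clean minimal install, then snapshot the VM.
oscap xccdf generate fix \
    --fix-type ansible \
    --profile xccdf_org.ssgproject.content_profile_cis \
    --output cis-remediation.yml \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```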

Kirov
May 4, 2006

Matt Zerella posted:

Is there any way I can get a local image of CIS Level 1 hardened CentOS/RHEL7 locally at home in a VM? We use it at work and I'd like to be able to spin up a test image to deploy against locally or to test out ansible automations.

I've found a few Ansible routines to apply it, but they're a little hokey and don't seem to support the latest versions of CentOS/RHEL.
These have sectioned task-files and j2-templates for various configs:
https://github.com/radsec/CentOS7-CIS
https://github.com/radsec/RHEL7-CIS

Seems pretty nice after a cursory look.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

CaptainSarcastic posted:

It might have been fighting with the iGPU on that i7? At least the processor your system is reporting shows an i7 that the spec sheet says has Intel UHD graphics.

I haven't ever had to deal with installing Linux on a laptop with both Intel and Nvidia GPUs, but it's generally known to frequently be a pain. Also, I haven't touched Ubuntu in like a decade so whatever that distro is doing specifically I am out of the loop on.

I'd be surprised if Nvidia never considered it or if the Linux distributions somehow choke on that, but it's fair to worry about (like everything else when it comes to graphics). I'm fine with the open-source drivers since I really only need two displays and 2D. It's for a remote work setup and not my hardcore shitposting rig gaming machine.

A Real Happy Camper
Dec 11, 2007

These children have taught me how to believe.
I'm mad at my windows plex/other random poo poo server for a variety of reasons. I want to replace it with some flavour of linux, but I'm not sure what the best options are.


It's an i5 2500k running on the integrated gpu, with 8GB of ram, a small SSD for the OS, and a couple big ol' platter drives for storing everything else.
I use it to host a Plex server, a discord bot, and download torrents of public domain motion pictures (man its like that train comes right at me!!)

Ubuntu is looking like the most likely option, since I've used it before (many, many years ago), but I'm not sure if I should go with the desktop or server version (AFAIK the only major difference is the GUI?)

I also want to avoid having to reshuffle my storage too much, would it be able to handle the HDDs I have all my media on without needing to reformat anything? The SSD is going to be nuked, so there's no issue with that.

Because I use OneDrive on my laptop to sync some of my project files, is there a good way to have that stuff sync, too?
I can switch that stuff to google drive (or FTP or "wait until i get home and throw a usb stick in") pretty easily if needed, so that's not a big deal.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Going with what you are used to is a perfectly fine method for choosing a distro.

I assume your drives are on NTFS? That means it will mostly work, but there is a danger of running into strange bugs. You will probably want to reformat them eventually.

You can set up a desktop client for most cloud services in Linux just like in Windows. Or, if you don't want to use a desktop, you can sync through a cronjob with rclone; that is what I do. Google Drive is actually less compatible with Linux than most other cloud services.
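The cronjob variant is just a one-way rclone sync on a timer; `onedrive:` is whatever remote name you created with `rclone config`, and the paths are placeholders (untested sketch):

```shell
# One-way pull from the cloud remote into a local directory.
rclone sync onedrive:Projects "$HOME/projects"

# crontab -e entry to repeat it every 15 minutes:
# */15 * * * * /usr/bin/rclone sync onedrive:Projects /home/you/projects
```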

Mr. Crow
May 22, 2008

Snap City mayor for life

A Real Happy Camper posted:

I'm mad at my windows plex/other random poo poo server for a variety of reasons. I want to replace it with some flavour of linux, but I'm not sure what the best options are.


It's an i5 2500k running on the integrated gpu, with 8GB of ram, a small SSD for the OS, and a couple big ol' platter drives for storing everything else.
I use it to host a Plex server, a discord bot, and download torrents of public domain motion pictures (man its like that train comes right at me!!)

Ubuntu is looking like the most likely option, since I've used it before (many, many years ago), but I'm not sure if I should go with the desktop or server version (AFAIK the only major difference is the GUI?)

I also want to avoid having to reshuffle my storage too much, would it be able to handle the HDDs I have all my media on without needing to reformat anything? The SSD is going to be nuked, so there's no issue with that.

Because I use OneDrive on my laptop to sync some of my project files, is there a good way to have that stuff sync, too?
I can switch that stuff to google drive (or FTP or "wait until i get home and throw a usb stick in") pretty easily if needed, so that's not a big deal.

Use whatever distro you want, same with desktop or server. If you're partially familiar with a terminal and want to use it more, go with server; if the idea of never having a GUI scares you, use the desktop.

https://syncthing.net/ is what I would recommend for an alternative to one drive. It handles out-of-band file syncing really well and is cross platform. You could even have it just act as a sync to your one-drive folder on some other machine but why bother tbh, just use syncthing as is.

a dingus
Mar 22, 2008

Rhetorical questions only
Fun Shoe

I did pretty much what you're trying to do and didn't have any major problems. Ubuntu is always a good bet if you're newer to Linux because there is so much documentation. Go with the GUI version if you're unsure about the command line; nothing is stopping you from running it headless later on, and the overhead from a desktop environment isn't really going to impact you. Run stuff like Plex, Sonarr, torrents etc. in Docker containers if you can, because it will save you further headaches if you decide to swap hardware or OSes later.
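A sketch of that container setup in compose form; the image is the linuxserver.io community Plex image, and the paths and IDs are placeholders for your own (untested):

```yaml
version: "3"
services:
  plex:
    image: lscr.io/linuxserver/plex   # community image; official plexinc/pms-docker also exists
    network_mode: host
    environment:
      - PUID=1000                     # match the host user that owns the media
      - PGID=1000
    volumes:
      - ./plex-config:/config         # config survives container rebuilds
      - /mnt/media:/media             # your existing platter drives, read by Plex
    restart: unless-stopped
```

Keeping /config on the host is what makes swapping hardware or the OS painless later: the container is disposable, the volume isn't.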

Yaoi Gagarin
Feb 20, 2014

How good are BSD jails from a security perspective? I have one for a Minecraft server and am thinking of running nginx in another. Is it OK for both servers to be running as the jail's root account?

BlankSystemDaemon
Mar 13, 2009




VostokProgram posted:

How good are BSD jails from a security perspective? I have one for a Minecraft server and am thinking of running nginx in another. Is it OK for both servers to be running as the jail's root account?
They're good, but like everything else security related you're supposed to use them as part of a broader security design - ie. where you drop privileges for everything, run all daemons and their individual dependencies in separate jails, and all the rest.
Put another way, to paraphrase Poul-Henning Kamp, who created them with the specific purpose of isolating root: if there are any jail escapes, he'd really like to know about them - because they're so rare.

I'm very curious about why you want to run any daemon as root (usually because it can't drop privileges, which is usually an indication that it's badly written) - the only reason to do so is to bind to a port under 1024 without using MAC/portacl or setting the allow.reserved_ports property to 1.

BlankSystemDaemon fucked around with this message at 22:46 on Jan 21, 2022

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

They're good, but like everything else security related you're supposed to use them as part of a broader security design - ie. where you drop privileges for everything, run all daemons and their individual dependencies in separate jails, and all the rest.
Put another way, to paraphrase Poul-Henning Kamp, who created them with the specific purpose of isolating root: if there are any jail escapes, he'd really like to know about them - because they're so rare.

I'm very curious about why you want to run any daemon as root (usually because it can't drop privileges, which is usually an indication that it's badly written) - the only reason to do so is to bind to a port under 1024 without using MAC/portacl or setting the allow.reserved_ports property to 1.

It's not so much that I specifically want to run anything as root as I wasn't sure if it mattered at all since each service is running in its own jail anyway.

where do I set allow.reserved_ports?

Minecraft is not actually a daemon, I just have an sh script that invokes Java.

BlankSystemDaemon
Mar 13, 2009




VostokProgram posted:

It's not so much that I specifically want to run anything as root as I wasn't sure if it mattered at all since each service is running in its own jail anyway.

where do I set allow.reserved_ports?

Minecraft is not actually a daemon, I just have an sh script that invokes Java.
Well, this is getting into the territory of the InfoSec thread so I'll try to keep it short - but all good security is what's called "defense in depth" - ie. you never want to rely on one single thing.

As an example, let's say there's a remote code execution in minecraft/java (or better yet, in log4j since that's a rare 10.0 scoring CVE that exists and is being actively exploited - and is used in minecraft). Now, if java runs with privilege separation, that means that the code that's being run by the attacker is only run as that user.
However, if java is run as root, and you have an unpatched version of FreeBSD, then this advisory (which was an announcement to fix something that, to the best knowledge of everyone I've spoken to about it, hasn't been actively exploited) would mean that your system was at risk if you administrate the jail from outside the jail.
So while it's not a jail escape in and of itself, if you don't follow the recommendations, and insist on using pkg -j, jexec, or other commands that use jail_attach(2) instead of treating a jail like a true multi-tenancy environment by SSHing to it, your jail host is suddenly vulnerable to a complete system takeover if you run the wrong command.

If you see the jail(8) manual page, you can see that it can be set either with -m at runtime or at jail creation (ie via jail.conf, iocage, a webUI or however they're being created) - it depends on your specific setup, so it's hard to give generic recommendations.
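As an untested fragment in jail.conf syntax (the jail name and paths are made up), plus the runtime variant:

```
# /etc/jail.conf - "www" is a hypothetical jail running nginx unprivileged
www {
        path = "/usr/local/jails/www";
        host.hostname = "www.example.org";
        allow.reserved_ports = 1;   # unprivileged bind to ports below 1024
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

Or flip it on a running jail: `jail -m name=www allow.reserved_ports=1`.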

EDIT: It should also be noted that the above security announcement was a theoretical risk for a very long time (I remember being warned about it back when jails were introduced in 4.0, when I started using FreeBSD), and is the explicit reason why there's always been a standing recommendation to SSH to jails.

EDIT2: I failed to keep it short. orz

BlankSystemDaemon fucked around with this message at 01:05 on Jan 22, 2022

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

Well, this is getting into the territory of the InfoSec thread so I'll try to keep it short - but all good security is what's called "defense in depth" - ie. you never want to rely on one single thing.

As an example, let's say there's a remote code execution in minecraft/java (or better yet, in log4j since that's a rare 10.0 scoring CVE that exists and is being actively exploited - and is used in minecraft). Now, if java runs with privilege separation, that means that the code that's being run by the attacker is only run as that user.
However, if java is run as root, and you have an unpatched version of FreeBSD, then this advisory (which was an announcement to fix something that, to the best knowledge of everyone I've spoken to about it, hasn't been actively exploited) would mean that your system was at risk if you administrate the jail from outside the jail.
So while it's not a jail escape in and of itself, if you don't follow the recommendations, and insist on using pkg -j, jexec, or other commands that use jail_attach(2) instead of treating a jail like a true multi-tenancy environment by SSHing to it, your jail host is suddenly vulnerable to a complete system takeover if you run the wrong command.

If you see the jail(8) manual page, you can see that it can be set either with -m at runtime or at jail creation (ie via jail.conf, iocage, a webUI or however they're being created) - it depends on your specific setup, so it's hard to give generic recommendations.

EDIT: It should also be noted that the above security announcement was a theoretical risk for a very long time (I remember being warned about it back when jails were introduced in 4.0, when I started using FreeBSD), and is the explicit reason why there's always been a standing recommendation to SSH to jails.

EDIT2: I failed to keep it short. orz

Thanks for the explanation. I really should read the handbook and manpages instead of just bumbling through poo poo. I'm using truenas's UI to make the jail for me but then doing all the stuff in the jail using the shell (which is I think equivalent to ssh'ing into the jail)

BlankSystemDaemon
Mar 13, 2009




VostokProgram posted:

Thanks for the explanation. I really should read the handbook and manpages instead of just bumbling through poo poo. I'm using truenas's UI to make the jail for me but then doing all the stuff in the jail using the shell (which is I think equivalent to ssh'ing into the jail)
I'm fairly sure that's the exact way that's vulnerable to the exploit that I'm talking about, unless ps -x in the jail includes sshd like this:
pre:
78069  -  S    0:00.01 sshd: debdrup@pts/1 (sshd)


Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Late to reply, but thanks for the CIS stuff to those who answered me. I got everything into a KS config and now have a Packer routine to build them automatically on my Proxmox NUC.
