xzzy
Mar 5, 2009

I'm trying to come up with a scheme to automate an audit of logins on serial ports. Short version: we've got a room with a couple thousand linux servers, using a mix of IPMI and old school DB-9s attached to a serial aggregator, which gives us the ability to troubleshoot machines from POST without having to stand next to the machine itself.

So I'm trying to come up with a way to test whether that serial connection is actually working. The traditional answer is "use an expect script", or these days the pexpect python module, but that's not what I'm hung up on. What I can't figure out is the "rm -rf /" situation.. if I have a process connecting to these consoles and someone left an active shell, there's a chance that a script logging in and pounding enter a couple times could run commands. Or even worse, if someone is actively typing on the command line and this script shows up and smacks enter, what if they were in the middle of typing a destructive command and the script executes it prematurely?

As serial connections don't give any sort of indication of a successful connection, it's mandatory that I send some kind of output over the wire to trigger a prompt.

I can think of a few ways of dealing with this, with varying levels of insanity:

a) Auto-logout any sessions on the console after some period of inactivity. Solves some issues, but not all.
b) Instead of mashing enter, send ctrl-c or ctrl-d or some other "safe" key sequence that generates a login or shell prompt without ever submitting a command (see the sketch after this list)
c) Some kind of script on the server itself that checks for an active login and, if there isn't one, pings a server to initiate a check
d) Send newlines anyway and hope for the best
e) Configure all serial access so that only one connection can be open at a time, would also need auto-logout.
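
For (b), a rough sketch of what the probe could look like, untested; the device path, baud rate, and prompt patterns are all assumptions (pretend the aggregator exposes the console as a local tty):

code:
# hypothetical port; set it raw so nothing gets echoed or mangled
stty -F /dev/ttyUSB0 9600 raw -echo
# start capturing output before poking the console
timeout 3 cat /dev/ttyUSB0 > /tmp/probe.$$ &
# send ctrl-c (0x03) instead of a newline, so a half-typed command
# gets cancelled instead of executed
printf '\003' > /dev/ttyUSB0
wait
# "login:" means nobody's logged in; "$ " or "# " means an open shell
grep -q -e 'login:' -e '\$ ' -e '# ' /tmp/probe.$$ && echo "console alive"
rm -f /tmp/probe.$$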

Anyone ever tried something like this and got some other sensible option that I'm not seeing?

xzzy
Mar 5, 2009

apropos man posted:

What do people use for benchmarking under Linux?

mprime (which is the linux version of prime95) for the cpu, badblocks for the disks. iozone is technically "better" for benchmarking disks because it produces actual statistics that you can make lovely graphs with, but badblocks is installed by default and we mostly use it for burnin, and you can still get i/o rates out of it.

mprime with the -t option will make your processor sweat big time.
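
Roughly what those invocations look like, as a sketch (the disk device is a placeholder, and -w destroys whatever is on it):

code:
# torture test the cpu; assumes mprime is in the PATH
mprime -t
# destructive write-mode burnin on a scratch disk; -s shows progress, -v is verbose
badblocks -wsv /dev/sdX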

xzzy
Mar 5, 2009

mprime is so good at cooking processors we have to throttle our systems during burnin or we start flipping breakers.

One could reasonably argue that we should have bought enough power to allow the systems to actually run at full tilt, but we've discovered it's not really necessary. In production the systems almost never get past 80% of their theoretical max (due to i/o waits) so we spec for that instead.

xzzy
Mar 5, 2009

I like Arch and use it on my workstation because I dig the rolling release system, it's nice always having the latest and greatest on tap. I like literally anything else for machines I actually have to support because gently caress putting something like that on 2000+ machines and not knowing what incompatibilities version changes introduce with user processes.

Give me a distro with an annual point release for that. If I had to ask users to do compliance testing on a shorter cycle than that I'd end up shooting myself.

xzzy
Mar 5, 2009

We switched to xfs because we'd used it for years on our irix systems and trusted it. Then once disks got over 1tb we kept using it so we wouldn't have to drum our fingers for an hour while mkfs.ext does its thing.

We don't have hard data but my group is getting the "feeling" that xfs file systems flip out and go read only more often than ext. It's something on our list to gather data for.

xzzy
Mar 5, 2009

For a few years we did take care of some ext3 systems on large arrays, created with the large file system option, and physicists pretty much instantly filled up the inode table by storing code on those disks.

:downs:

So yeah it's been xfs for a while and I dig it.

xzzy
Mar 5, 2009

And a firewall. A comical number of problems completely go away if you take care to whitelist the small number of incoming connections you actually need.
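
A minimal iptables version of that idea, assuming ssh is the only thing you need to let in:

code:
# default deny on incoming
iptables -P INPUT DROP
# loopback and already-established traffic are fine
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the whitelist: just ssh
iptables -A INPUT -p tcp --dport 22 -j ACCEPT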

xzzy
Mar 5, 2009

peepsalot posted:

So in this case I don't want to share the entire desktop, the laptop isn't remote, i don't need to duplicate its entire UI, I only want to drag one or two windows over to my large monitors which are directly in front of my face, without having to do some physical dock nonsense, and having the laptop hijack the entire monitor.

I can't see how something like that would ever be possible. To make something like that happen you would either need a protocol to live migrate a running process from one machine to another, or some kind of X11-style protocol that allows changing the display of a running application.

I mean it would be a really cool feature, but I can't imagine anyone ever putting development time into it because not many people would ever ask for something like that.

The closest you'll be able to get to something like that with software available right this minute would be VNC, RDP, or maybe DLNA.

xzzy
Mar 5, 2009

I'm sure someone's thought of it before, that's why tools like screen/tmux and VNC exist.

But either it wasn't as easy to implement as you're hoping, or it just never got sufficient interest to see serious development.

xzzy
Mar 5, 2009

I don't do a VPN but I do run a squid proxy on a digital ocean box and it's fine, using the 1core/1gb configuration. It definitely adds latency because of the extra hop, but it's good enough for normal browsing. Like my SA bookmarks page loads in about 950ms without the tunnel, and it's 1100ms with the tunnel. It only hurts when viewing enormous imgur albums, the link saturates pretty quick.

The biggest problem with using digital ocean as a VPN is you'll hit google's captcha all the drat time. I'm guessing digital ocean's less savory customers run all kinds of lovely bots and SEO bullshit and get the ip range flagged for verification constantly.

To get around that, I run a "primary" squid proxy on a raspberry pi at home. :v:
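
The chaining is just squid's cache_peer pointing the pi at the droplet; a sketch of the relevant squid.conf lines on the pi, with placeholder names and networks:

code:
# listen on the lan, forward everything to the upstream proxy
http_port 3128
cache_peer droplet.example.com parent 3128 0 no-query default
never_direct allow all
acl localnet src 192.168.0.0/16
http_access allow localnet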

xzzy
Mar 5, 2009

The powershell disk management tool is a lot more capable, but if it's not formatted with something windows understands there's not much point.

xzzy
Mar 5, 2009

Run coredumpctl then, and have fun. :v:

xzzy
Mar 5, 2009

If for some reason you don't like giving the full path in all your crontab entries, you can set a PATH variable at the top of the crontab.

I tend to prefer full paths, but if the command gets too long it's reasonable to shorten things up.
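
Something like this at the top of the crontab (the script name is made up):

code:
PATH=/usr/local/bin:/usr/bin:/bin
# entries below can now use bare command names
0 3 * * * nightly-cleanup.sh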

xzzy
Mar 5, 2009

Maybe it has a limited path, like it'll search /bin and nothing else.

If you want to know, get to the google, because I ain't trying to figure out the keywords to determine the history or specification of cron. :colbert:

xzzy
Mar 5, 2009

I don't use nginx enough to be any kind of authority, but isn't -t to test the config? If that command doesn't return 0 the service won't start. There is an option to tell systemd not to abort if an ExecStartPre fails: put a minus after the equals sign.

ExecStartPre=-/opt/nginx-1.8.1/sbin/nginx -t

But I imagine you would want the config test to exit cleanly.
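
So the relevant chunk of the unit would look something like this (the ExecStart line is assumed to match the same install):

code:
[Service]
# the minus means "don't abort the start if this command fails"
ExecStartPre=-/opt/nginx-1.8.1/sbin/nginx -t
ExecStart=/opt/nginx-1.8.1/sbin/nginx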

xzzy
Mar 5, 2009

That's actually what you want: store dates in as neutral a format as possible and save all the horrible conversions for the code that displays them to a user. It saves massive headaches down the road when you change time zones or try to merge two sets of data.
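
For example, store UTC and convert only at display time (the time zone is arbitrary):

code:
# what you store
date -u +%Y-%m-%dT%H:%M:%SZ          # e.g. 2016-03-05T18:23:07Z
# what you show the user, converted at the edge
TZ=America/Chicago date -d '2016-03-05T18:23:07Z'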

xzzy
Mar 5, 2009

The main reason to deviate from the packages in the distribution is new features. LTS releases can ship some really outdated software, and if you want the new gizmos you've got a tough decision to make.

I generally draw the line at libraries though. Like on a red hat system I will just live with the python it ships with because replacing it is a colossal pain in the rear end. GCC is the same.. if a user wants a newer compiler they install it in their home area or find a newer os release. Gnome and KDE are right out too.

But different versions of libraries can easily coexist so I'll install whatever.

xzzy
Mar 5, 2009

Is the f2b-sshd chain actually being processed?

The two rules you posted are not identical.

xzzy
Mar 5, 2009

You need to add (or check for) a rule in the INPUT chain telling it to jump to the f2b-sshd chain. So there needs to be a rule somewhere with '-A INPUT -j f2b-sshd' in it.. probably one with a dport of 22, so that any incoming ssh connections get fed through the chain.

I'm garbage at writing iptables rules without testing them several times, so double-check anything below before trusting it.
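
A rough sketch of the check and the missing rule, assuming sshd is on the standard port 22:

code:
# is anything in INPUT jumping to f2b-sshd?
iptables -L INPUT -n | grep f2b-sshd
# if not, feed incoming ssh through the chain
iptables -A INPUT -p tcp --dport 22 -j f2b-sshd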

xzzy
Mar 5, 2009

Plus there's the issue of driving those pixels. The UHD yoga I use at work has integrated graphics and it chugs hard on video. Even 1080p is hard on it because it has to upscale so much.

xzzy
Mar 5, 2009

That's why, if you're planning DNS work, you should always have a second box somewhere that hasn't been accessing the sites you're updating, so you can verify things after the change. Or just reboot; that should clear everything out.

Also, always set the TTL super short a couple of days beforehand.
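
Verifying from that second box is just a couple of dig queries (names are placeholders): one straight at the authoritative server, one at whatever resolver your users sit behind:

code:
# what the zone actually says right now
dig @ns1.example.com www.example.com A +short
# what a big public resolver still has cached
dig @8.8.8.8 www.example.com A +short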

xzzy
Mar 5, 2009

politicorific posted:

I've read that I can just copy the letsencrypt certs from one computer to another: does anyone have a guide for this?

Copy the /etc/letsencrypt folder over.
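
Something like this, assuming the same layout on the destination (hostname is a placeholder; -a keeps the symlinks in live/ intact):

code:
rsync -a /etc/letsencrypt/ newhost:/etc/letsencrypt/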

quote:

I have other Apache problems:
If I initially configure my ports.conf before logging into my wordpress install to use say port 7000, I can configure using HTTP, but HTTPS doesn't work... is this a mysql database problem?

Probably not a MySQL problem, but without some kind of error message it's impossible to help beyond that.

xzzy
Mar 5, 2009

Any Linux distribution should work fine, just don't use KDE or gnome because they'll blow through one gig of ram like it's nothing. Hunt around for a lightweight window manager.. joewm is my current favorite but there's millions out there to choose from.

JavaScript heavy sites will probably chug badly on that processor, but reading forums or Reddit or whatever should be decent enough.

xzzy
Mar 5, 2009

CoreOS! It's the smallest! Containerize that Firefox!

xzzy
Mar 5, 2009

Alpine is actually smaller but usability suffers for it.

Plus CoreOS isn't really a minimal linux, it's just a distribution focused on hosting containers and nothing else.

xzzy
Mar 5, 2009

evol262 posted:

I mean, CoreOS is basically "barest possible requirements to run systemd+containers+cloud-init". I'm not sure how much more minimal you can get and still have usability/management.

CoreOS has some creature comforts like a more fully featured vim that can do syntax highlighting. Alpine is really skin and bones; it trims everything down to the minimum.

xzzy
Mar 5, 2009

Look at the rsync --files-from option.

Could also do it with a file list fed to tar, look at the -T option.

Of course this obligates you to create a text file listing every file you want, but if your needs are specific that might be the only way to do it.
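
A sketch with a made-up list file, paths relative to the source directory:

code:
# filelist.txt is one path per line
rsync -av --files-from=filelist.txt /src/ user@host:/dest/
# same list driving tar instead
tar -cf backup.tar -T filelist.txt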

xzzy
Mar 5, 2009

I tend to leave stuff under the path the software defaults to because I don't give a poo poo where stuff lives. If that partition isn't big enough for whatever reason, I'll put a mountpoint there.

xzzy
Mar 5, 2009

Try using Bluetooth anything on windows. It's still a crapshoot, and you always end up digging through control panels or the registry to try and convince it that your device is there.

I'm partially sympathetic because they have to run on so much different hardware and macOS only has a couple configurations to deal with, but not by much because Linux actually does a lot better on the exact same hardware that windows barfs on.

Hibernation is pretty sketchy too.

xzzy
Mar 5, 2009

Is this going in a script or does it need to be a shell one liner? Because a one liner is kind of messy.

If it's a one shot I'd do something like "ls -t hostname*/dumps/*" and chop the wanted lines from the top of the output.

If it's a script that's going to be around for a while I'd probably do a for loop that runs 'ls -t | head -1' on each "hostname/dumps" directory, because that way you can easily put the newest directory into a variable without a bunch of extra logic.
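
The loop version as a sketch; the "hostname*" glob is an assumption about the layout:

code:
for d in hostname*/dumps; do
    newest=$(ls -t "$d" | head -1)
    echo "$d/$newest"
done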


But the fun part is there's a billion ways to do it. If you were working in python or something you could play with os.walk() and go really bonkers.

xzzy
Mar 5, 2009

evol262 posted:

Quick and dirty
code:
for dir in `ls -d /path/to/hostnames`; do latest=`ls -ltr $dir | tail -n 1 | awk '{print $9}'`; ...

ls -t | head -1 is faster. :spergin:

Which probably doesn't matter unless there's a billion directories being read.

xzzy
Mar 5, 2009

Wicaeed posted:

code:
for i in `find <path>/* -maxdepth 0 -type d` ; do find $i -mindepth 2 -type d | sort -r | head -1 ; done
The folder names are in a YYYYmmdd format so sorting as such works nicely :)

Don't want to get into "tell you how to do your job" territory but this is vulnerable to unexpected directories. If someone creates hostname1/butts/farts/ your command will return that instead of one of the timestamped directories.

If you can guarantee that no one will ever do that you're fine, but if I was publishing data to some cloud service I'd want sanity checks in there.
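
One cheap sanity check is to only accept directories that actually look like dates, e.g.:

code:
find "$i" -mindepth 2 -type d | grep -E '/[0-9]{8}$' | sort -r | head -1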

xzzy
Mar 5, 2009

Try an xstartup that only runs an xterm; that will tell you if it's the wm causing problems. You'll just get a command prompt and nothing else. Alternatively, look for Xorg.0.log under /var/log on the vnc server and see if it has any hints.
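
The bare-bones xstartup for that test can be as short as:

code:
#!/bin/sh
exec xterm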

xzzy
Mar 5, 2009

If you can get an xterm just fine, your window manager is crashing. gently caress if I can troubleshoot gnome though, have fun with that one (or use fvwm :v:).

xzzy
Mar 5, 2009

Why do you care? If you like linux, use it. If you don't, use something else.

xzzy
Mar 5, 2009

apropos man posted:

I would care if I was using something I liked and then a while later it went all crap.

Well, do a U-turn and get out then, because linux has been crap for one reason or another since 1991. :v:

First it really didn't work at all, then it mostly worked but you'd spend all night compiling poo poo, then we got package managers and it was crap because everyone was yelling about which one was the best and you ended up compiling constantly anyway. Then there was an argument about standardizing paths, and it was crap because the only loving thing people ever talked about was SCO, and then no one really cared about linux anymore, it was all KDE vs Gnome. Now it's crap again because of all the bickering about systemd.

xzzy
Mar 5, 2009

No, that makes you an expert, because that's what every other linux admin on the planet is doing.

The final lesson is to learn to identify the point where it's working "well enough" and stop loving with it.

xzzy
Mar 5, 2009

Tigren posted:

Seriously?

Yes, seriously.

xzzy
Mar 5, 2009

Suspicious Dish posted:

it's an unmaintained pile of crud with no regard to security or security design at all, poor user knowledge and practices (linux means i never have to reboot, right?), and a poor community attitude towards fixing major design flaws (the antiquated 50-year-old unix permissions model should save us from exploits, right?)

You forgot to mention selinux, which promised to fix a lot of that.

"Goddamn setting up these contexts is hard, gently caress it, just set it to permissive."

xzzy
Mar 5, 2009

Selinux is conceptually a system of tags. You assign tags to processes and tags to your files, and if the tag of a process matches the tag of a file, selinux allows the process to access the file. Obviously the real implementation is a lot more complex.. selinux has enough granularity to control all file operations, such as normal reads and writes, but it also controls other stuff (like execution, network access, and some other things I'm forgetting).

The problems come from the database selinux uses to assign those contexts to everything.. it's a massive amount of pattern matching, and it means if you happen to install some software that isn't in that database, your job turns into fixing those contexts. Most people just pipe audit.log through audit2allow, which auto-generates rules to allow denied requests to succeed; that works fine, but it's the lazy way to do it. The "correct" method is to identify the context of the process and the file it's trying to work with, and update the file context to allow the necessary access without breaking any other process' access.
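
Concretely, the lazy way versus the correct way look roughly like this (the path and context type are just examples):

code:
# lazy: turn recent denials into a policy module and load it
grep denied /var/log/audit/audit.log | audit2allow -M mylocal
semodule -i mylocal.pp
# correct: fix the file contexts so existing policy allows the access
semanage fcontext -a -t httpd_sys_content_t '/srv/www(/.*)?'
restorecon -Rv /srv/www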
