ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

Is there a good way to block stuff like bittorrent from OpenVPN/PPTP clients?
It's easy enough to use netfilter to block destination port ranges from the VPN subnet.

Protocol-based blocking can be done with deep packet inspection. I haven't deployed it myself, so I don't know how advanced the tools are. I'd start with OpenDPI though. Mind you, obfuscated protocols are tough, so things like encrypted BT might be out.

In general, unless you rate-limit/throttle to make high-bandwidth applications undesirable, possibly with whitelisted sites for things you want to allow (e.g., YouTube, Akamai, whatever), folks will be able to thwart your filtering efforts with encryption.
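For the port-range half of it, a minimal netfilter sketch (the VPN subnet and the port range here are placeholders; assume the VPN hands out 10.8.0.0/24 and you only care about the classic BT ports, keeping in mind that modern clients randomize ports, which is exactly why you end up needing DPI or throttling):
code:
iptables -A FORWARD -s 10.8.0.0/24 -p tcp --dport 6881:6999 -j DROP
iptables -A FORWARD -s 10.8.0.0/24 -p udp --dport 6881:6999 -j DROP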

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

I guess people could use IM, Skype, and that stuff as well, but I don't want anyone pulling torrents or newsgroups. Maybe throttling everything BUT web access is the best idea. If the bandwidth is available, I'd like people to be able to surf as fast as possible.
You could set up an HTTP proxy with no direct Internet access. If you ran a caching proxy, like Squid, you'd even save on your proxy-site bandwidth by caching redundant requests.

Mind you, an HTTP proxy alone isn't encrypted; it would have to be deployed alongside an ssh tunnel. I believe, however, that as long as you tunnel the proxy traffic you don't have to worry about DNS leaks, as URLs are forwarded to the proxy and resolved there.
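A minimal sketch of the tunnel side, assuming Squid is listening on its default port 3128 on the proxy box:
code:
ssh -N -L 3128:localhost:3128 user@proxyhost
Then point the browser at localhost:3128 as its HTTP proxy; the proxy traffic rides the ssh tunnel and name resolution happens on the proxy end.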

Bob Morales posted:

I would rather just offer ssh tunneling but it's too complicated for 99% of people to use/configure.
Are VPNs any easier?

ExcessBLarg! fucked around with this message at 00:48 on Jul 22, 2011

ExcessBLarg!
Sep 1, 2001

Accipiter posted:

Something like this should do the trick:
Does that actually work? I'm pretty sure Linux doesn't do source-based routing, at least not by default anyways. The packets would just get sent out the interface with the highest metric, regardless of what the source address of the packets is.

Actually, didn't this very issue come up a few pages ago?

Edit: Guess not, must've been another thread or something.

ExcessBLarg!
Sep 1, 2001

Smuckles posted:

Would the plan, then, be to buy one 2TB drive to expand /home (using LVM)
Unless you're already running LVM, I don't believe you can expand an existing partition. You'll have to copy everything over to an LVM volume first. Fortunately if you're doing RAID1, you don't even need extra disks. Just create a mirror with a missing disk, create your LVM PVs, VG, LV, then file system. Copy everything over. Then blow away the old disks and add them as spares.
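Roughly, the sequence looks like this (a sketch only; the device names are placeholders and assume the new disk is /dev/sdb and the old one is /dev/sda):
code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -n home -l 100%FREE vg0
mkfs.ext4 /dev/vg0/home
# copy /home over, then repartition the old disk and complete the mirror:
mdadm --add /dev/md1 /dev/sda1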

Also, I'm not quite sure I understand what you're trying to do, you want a 4 TB home? Instead of four disks in a mirror, three 2 TB disks in RAID5 array is a bit cheaper.

But remember, RAID is not backup. If you don't already have offline (or multiple) copies of the data that's going in the array, and you care about it, you'll want to make sure you have an offline backup of it too. The only benefit to RAID is to avoid the restore time associated with a disk failure, but it won't protect you against a PSU failure wiping out all your disks or file system corruption.

ExcessBLarg!
Sep 1, 2001
So I upgraded the kernel on my Debian router machine yesterday to linux-image-3.0.0-1-amd64, and apparently it broke PMTU discovery again (or at least, TCPMSS clamping to PMTU), which means I couldn't send packets larger than 1492 bytes (interface MTU is 1500).
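For reference, the clamping rule I'm talking about is the usual one on the router (a sketch; assumes you're clamping forwarded traffic):
code:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu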

Unfortunately the only symptom I ran into of this was that I couldn't post replies on SA--the connection would time out. I tried clearing cookies/cache/history/etc., different web browsers, different machines, etc. Of course, I didn't try a different Internet connection until after emailing SA devs. :doh:

So yeah, uh, if you can't post on SA, perhaps upgrading to Linux 3.0 is to blame?

ExcessBLarg!
Sep 1, 2001
Why does Ubuntu suddenly suck?

I run Debian myself, so I haven't been following Ubuntu shenanigans too closely. I know folks were upset with the switch to Unity in the latest release, although it's not clear to me why they're even upset with that except that it's different. However, can't you just tick "GNOME Desktop" at the *DM prompt if that's a problem?

Did they fuck up hardware support recently or something?

ExcessBLarg!
Sep 1, 2001

fletcher posted:

I'm trying to figure out how to use screen to split the window and then be able to disconnect and the reconnect to my split windows, what am I doing wrong?
The server-side screen process (the thing you're reattaching to) only has a concept of "windows", a new one of which is created every time you do a 'C-a c'. You can see the list of windows by doing 'C-a "'. How those windows are displayed on the client (one at a time, split-screen simultaneous, etc.) is part of the client-side process only and is lost every time you detach, although an appropriate .screenrc can probably configure them back to the right place.

For your specific example, you should be able to do 'screen -r', 'C-a S', then 'C-a Tab' to the necessary window and 'C-a Space' until you get the one you want to show. Or select it from the 'C-a "' window list.

ExcessBLarg!
Sep 1, 2001

xPanda posted:

Instructions for RHEL/CentOS/SL all seem to recommend putting /boot on its own small RAID1, but this is the old school way and I want to use partitioning so I don't have more RAID devices than I need.
Honestly, just do this. Anything else is going to be more trouble than it's worth.

The problem is that your bootloader also has to be aware of partitions inside a mirror volume and be able to peer into them in order to boot. What's nice about a separate mdadm mirror for /boot is that bootloaders don't have to know about it at all; an mdadm mirror volume is indistinguishable from a non-RAID volume except for the mdadm superblock at the end.
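A sketch of creating such a /boot mirror, assuming sda1/sdb1 are small partitions set aside for it; metadata format 1.0 (like the old 0.90) keeps the superblock at the end, so the bootloader just sees a plain file system:
code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0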

Furthermore, should you decide to move from a mirror setup to a three-disk RAID5, it's still probably better to keep /boot as a mirror, otherwise your bootloader will have to be smart enough to understand the RAID5 parity stripe. Perhaps GRUB 2 has all that poo poo built-in these days, it has quite a lot, but I'm a fan of keeping it simple since who knows how well that code is tested.

xPanda posted:

Also is it my imagination or does Ubuntu not seem to have a /boot partition under default installations?
It probably doesn't as we're well past the days when BIOSes had trouble loading past 500 MB or 8 GB or whatever, and by default "/" is just a regular partition containing a file system that most bootloaders are familiar with.

ExcessBLarg! fucked around with this message at 19:33 on Aug 7, 2011

ExcessBLarg!
Sep 1, 2001

spankmeister posted:

The reason to use a RAID1 for boot is because GRUB doesn't understand RAID,
To be fair, GRUB 2 appears to have some code to parse mdadm superblocks.

It also wouldn't have to run any md daemon for read-only support, assuming it knows about md's RAID 5 stripe scheme and all volumes are available. Can GRUB boot a RAID 5 volume in degraded mode though? I don't know, and quite frankly, that's not something that's worth waiting around to test when the cost of replicating a < 100 MB mirror across all your disks is nil.

spankmeister posted:

Also, this may no longer be the case but it used to be that you needed to run update-grub everytime you changed something in /boot, UNLESS you had a separate /boot partition, then you could skip that step.
Hmm, that's interesting.

I'm assuming the reason to run update-grub was to refresh GRUB's kernel blockmap, if it couldn't fit stage 1.5 (or GRUB 2's equivalent) in the boot track. That may have more to do with the partitioning tool used (some use a one-sector boot track, others 63) than with the existence of a separate /boot.

Or if you used some funky filesystem GRUB knows poo poo about.

ExcessBLarg! fucked around with this message at 22:08 on Aug 7, 2011

ExcessBLarg!
Sep 1, 2001

Dinty Moore posted:

No, 'update-grub' doesn't actually reinstall the bootstrap code in the MBR on its own at all.
Right, it stores the blockmap for stage2, not the kernel, if the boot track isn't large enough. :brainfart:

(We don't have one of those yet?)

ExcessBLarg!
Sep 1, 2001

JHVH-1 posted:

Only other thing you may want to worry about is the device names as sometimes that changes between hardware.
Along these lines, watch out for the device names of wired Ethernet adapters, since they're often pinned to MAC addresses that will change with a new motherboard (if you're using the onboard ones).

In Debian land that means patching up "/etc/udev/rules.d/70-persistent-net.rules", or just blowing it away entirely if there's only one adapter you care about.
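For reference, the entries in that file look roughly like this (the MAC address here is made up). Edit ATTR{address} to the new board's MAC, change the NAME, or just delete the file and let udev regenerate it on the next boot:
code:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"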

ExcessBLarg!
Sep 1, 2001

duck monster posted:

Yes exim4 has/had a loving remote vunerability where you could EMAIL loving untrusted scripts to it via the HeaderX tag :suicide:
Which version of Debian is this?

Ziir posted:

If I wanted to nuke everything and reinstall, do I just need to delete my LVM partitions or do I need to do the whole dd thing again on my entire hard drive?
As long as you're not zeroing the drive in between, it should contain "encrypted data + random noise" right now, which to someone who doesn't know the crypto keys just looks like random noise.

So all you need to do is nuke the crypto (LUKS) header and generate new crypto keys, at which point the ability to decode the old data is lost to everyone and it's effectively noise.
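A minimal sketch of nuking the header, assuming the LUKS container sits directly on /dev/sdb (16 MB comfortably covers the LUKS header and key slots):
code:
dd if=/dev/urandom of=/dev/sdb bs=1M count=16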

As for the slow writing of noise part, if you had done:
code:
dd if=/dev/urandom of=/dev/sdb bs=256k
it shouldn't be that slow. But it might be faster to create the encrypted volume first, then run:
code:
dd if=/dev/zero of=/dev/mapper/crypt0 bs=256k
which zeros the encrypted device, effectively writing random noise to the underlying disk.

ExcessBLarg!
Sep 1, 2001

T.K. posted:

So does the old advice about not using backticks and cat if you need to loop through a file still apply?
I didn't know it was advice, but there are at least two problematic distinctions between those commands.

First, "infile.txt" is tokenized differently, particularly if "infile.txt" has spaces. The "while" variant reads entire lines at a time into f, whereas the "for" variant splits on IFS, which defaults to any whitespace. For example, suppose "infile.txt" contains:
code:
foo bar baz
quux quuux quuuux
then the command "while read f; do echo "$f"; done < infile.txt" yields:
code:
foo bar baz
quux quuux quuuux
while the command "for f in `cat infile.txt`; do echo "$f"; done" yields:
code:
foo
bar
baz
quux
quuux
quuuux
see?

Second, in the "while" case, stdin of any command in the loop is redirected to "infile.txt", which is nasty if you try to do "while read f; do vi "$f"; done < infile.txt":
code:
Vim: Warning: Input is not from a terminal
and a lot of nasty may follow. In the "for" case, stdin is untouched (usually the terminal), and so "for f in `cat infile.txt`; do vi "$f"; done" works as expected.

Misogynist posted:

but that it spawns an unnecessary process and that makes a lot of Unix nerds completely sperg out.
Not that I disagree, but the majority of shell script commands do this. If you're even remotely sensitive to process spawning times, a shell script is not for you. So it's more of a Unix :downs: thing to complain about.

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

You usually end up having to use 'find' or 'xargs'
Bah that link is incredibly wrong!

1. 'rm *.toto' and 'ls *.toto | xargs rm' have the same wildcard expansion, the latter is just needlessly convoluted.

2. 'find . -type f -name *.toto | xargs rm' needs to properly escape the wildcard, otherwise it won't work right if there are any "*.toto" files in the current directory.

3. 'find . -name "*.toto" -exec rm {} ;' works fine, but spawns a separate "rm" process to delete every file, which is a bit suboptimal.

The two best solutions which aren't mentioned are:

1. 'find . -name "*.toto" -exec rm {} +' which works like xargs and spawns as few "rm" processes as possible. The "{} +" may be a non-standard extension in GNU find, I'm not sure, but it does work in BusyBox which is about the only other case I care about these days.

2. 'find . -name "*.toto" -print0 | xargs -0 rm' uses nulls as delimiters, so it properly handles the utterly-perverse case of file names with newlines (as does the above solution).

Edit edit: Better accuracy, more content.

ExcessBLarg! fucked around with this message at 16:28 on Nov 2, 2011

ExcessBLarg!
Sep 1, 2001

Misogynist posted:

The xargs example will likely try to delete all files with names that match those inside any directory named *.toto, which is probably really far away from the right behavior.
Ah, yes. I retract my retraction: that link is incredibly wrong.

Misogynist posted:

I think you read the same bad information on Wikipedia that I just did -- you don't just have to worry about newlines, because xargs will break arguments on any whitespace by default.
drat, you're right!

I had been assuming the behavior of xargs was to split on newlines for years. I use it rather sparingly but, sigh.

ExcessBLarg!
Sep 1, 2001
I write shell scripts with "set -e" so that they'll automatically stop/fail on any failed command. Combined with "set -x" ("set -ex") I'll know exactly where it dies.
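A trivial example of what I mean (the paths here are made up):
code:
#!/bin/sh
set -ex
cd /tmp/build       # script aborts here if the cd fails
make
make install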

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

RedHat recommends 'at least 16GB' swap for 128-256GB RAM.
That's, uh, interesting.

OK, funny thing about swap: it's been pretty much useless since we hit multi-GB RAM machines. Or rather, there are only two things that swap can do usefully:

1. Hold the RAM restore image when you hibernate (suspend-to-disk).

2. Store a few dirty pages that aren't backed by a file and are never going to be read/used again. This makes RAM available for more applications, or even just to buffer disk pages that will be used frequently. The problem is that the machine can't simply "forget" the contents of these dirty pages, because they could be used again (dropping them would be a correctness error), even if in practice they never will be.

For both the above purposes, I've never found a particular reason to use more than 1 GB swap. Why is it otherwise useless? Two reasons:

First, if your swap is backed by (mechanical) disk, it's either not very large, or it's very slow. See, disk capacity has increased many times more than media transfer rates, and especially more than random access times. Writing out a full gig sequentially still takes, what, at least ten seconds? Accessing 1 GB in a random pattern takes far longer. 64 MB of swap, back in the days of 64 MB RAM, could be accessed much faster, so the cost of using it wasn't as high. These days if you're thrashing more than a gig, or even anywhere near a gig, your computation is so slowed down that you're better off serializing what you're doing or giving up.

Second, if your swap is backed by SSD, you're blowing erase cycles. That doesn't matter much for reasons 1 & 2 above, but I wouldn't want to regularly thrash to SSD and replace it that much sooner. The replacement money is better spent on more RAM.

Bob Morales posted:

I have a 32GB system that for some reason wants to use 40MB swap even with 20GB free. :iiam:
Sounds about right. It's 40 MB worth of dirty pages for processes that aren't ever going to do a drat thing and never run again.

ExcessBLarg! fucked around with this message at 03:14 on Mar 2, 2012

ExcessBLarg!
Sep 1, 2001

cr0y posted:

can someone spit me out a command that will relatively quickly calculate the size of a directory that has a MASSIVE amount of folders/files under it? du ain't cutting it
If you have GNU find, you can try this:
code:
find /path/to/dir -type f -printf "%s\n" | awk '{s+=$1} END {print s}'
That will print the sum of the (inode-reported) sizes of all regular files beneath that directory. It might be faster than the du method, particularly if they're large files, because it's not looking up disk block usage. It probably won't make a difference though if they're small files. Also, the size will be over-inflated if you have any hardlinked or sparse files--which is why du computes size in terms of used disk blocks.

In general though what you're asking to do is slow. In the absence of an indexing mechanism, the file system has to, at minimum, look up the inodes for every file underneath that directory, and mechanical disk seeks are going to make that slow.

ExcessBLarg!
Sep 1, 2001

Martytoof posted:

I had a SMART failure on my four-disk software RAID10 array. ... The only way I was alerted to the failure was by checking dmesg ....
I actually prefer software RAID over hardware when performance isn't an issue. It's for the reasons others have mentioned, which is that it's flexible with respect to the hardware environment, i.e., if a SATA controller or the entire box goes bad, I can put together a machine to bring up the array and pull off whatever I need, assuming I don't just continue with the disks in a new machine anyways. It's also far more flexible in terms of notifications.

Which gets to the real point. Folks, make sure your servers are running an MTA and that it's configured properly. And make sure your servers are configured to deliver their mail to an address you actually read. If you do that, then mdadm will spam you with gripe email when an array goes down. That's far more convenient than waiting for it to beep. But you have to test your notification mechanisms to make sure they're working properly.
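An easy way to test the mdadm half of that chain, assuming monitor mode is set up: ask it to fire a test alert for each array, then check that the mail actually lands somewhere you read.
code:
mdadm --monitor --scan --oneshot --test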

ExcessBLarg!
Sep 1, 2001

Wagonburner posted:

For years I've had my sshd port open to the internet on an odd port, like 35666 or something, thinking that this combined with root login disabled helps to mitigate my fairly lax upkeep of my system. ... How safe am I to leave it open to the internet on 22 if I'm kind of a "install linux and forget about it until it breaks" type guy?
Remote-root and ssh vulnerabilities are pretty darn rare in the grand scheme of things. Roughly "once in a decade" type events. And when they do happen, they're huge, like "if you casually read slashdot you'll definitely know" huge.

The last vulnerability of that type that I recall, or at least, that affected machines I run was the Debian Non-Random Number Generator fiasco that resulted in completely predictable SSH and SSL keys when generated with Debian OpenSSL packages from a particular timeframe.

Otherwise, I've been running root-enabled port 22 sshds on machines for a long time and they've never been owned. At least, not by that avenue. Assuming you don't have a trivial password, they're just not going to get cracked unless there is a vulnerability. And if such a vulnerability does come along, you're not a high-priority (i.e., 0-day) target. Of course, running on an odd port, root-disabled, password-auth disabled, active blocking, etc., those are all great too.

Wagonburner posted:

I wonder if there's some type of public ssh I could connect to from work and kind of double-port-fwd?
If you can tolerate limited bandwidth and some latency, you can ssh over Tor if you want.

It's actually a rather nifty solution, as Tor has been used for the past few years (and optimized for) getting around censorship policies. Mind you, these guys are in an arms race to deal with the kind of Internet censorship that's going on in China, Iran, Syria, etc., so it typically does a darn good job with your typical non-whitelist firewall. Plus the Tor folks are trying to encourage a diverse user base so that it doesn't become representative of any one particular user group.

If you do use Tor though, be extra vigilant about connecting to hosts with previously-verified host keys and be wary of "key changed!" notices. It's possible for a Tor exit node to man-in-the-middle you, but that's mitigated by using a previously-verified host key.
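A sketch of one way to do it, assuming Tor's SOCKS proxy is on the default 127.0.0.1:9050 and you have a netcat with SOCKS support (OpenBSD nc; torsocks or connect-proxy work too):
code:
ssh -o ProxyCommand='nc -X 5 -x 127.0.0.1:9050 %h %p' user@host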

Wagonburner posted:

How bad really is using a weak-ish password on a normal account, the account name and pw would have to be guessed right?
Password-cracking becomes a big deal when it can be done offline, i.e., if someone steals your shadow file. Online password cracking is slow and difficult to do if you have a good active blocking strategy. That said, I still encourage strong passwords if not pubkey-only auth.

Goon Matchmaker posted:

Potentially stupid question. How much more secure is a 16384 bit ssh key over whatever the standard is (4096?)

I just spent a very very long time generating one and all it seems to do is make putty choke for a minute when it first connects.
I believe 2048 bits is the present OpenSSH default, up from 1024 bits a few years ago. 768-bit keys are "broken" in the sense that the largest known factored RSA key is 768 bits, although that doesn't yet mean that any 768-bit key is crackable without some serious hardware.

My understanding of RSA key strength among researchers is that 1024 bits is "uncomfortable", that is, some 1024 bit RSA key will likely be cracked soon. 2048 bit keys are estimated to be sufficient until 2030 and 3072 bit keys should be good beyond that.

In other words, if you're generating a reasonably easily replaceable ssh keypair, I see no particular reason to go beyond 2048 bits outside of paranoia. If you need an RSA key for a long-lived purpose, like an SSL (CA) certificate with a 10-year or longer expiration time, you'd probably want to use 3072 bits.

16384 bits is crazy long, and, although I'm not a crypto person, I'd guess that if we reach the point with RSA where we require keys to be that long, switching to ECC or something might end up being the favored approach.

Wagonburner posted:

How does logging in with a key work day to day in the real world? I'll need the key on my android since I use ssh there (busybox and connectbot)
I don't use password-protected ssh keys on my phone since that's really inconvenient. However, I do make sure to use a different keypair for my phone so that if it's ever lost or stolen I can quickly invalidate that key without having to change any other password-protected one.
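Generating a dedicated keypair for the phone is a one-liner; the filename and comment here are just examples:
code:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_phone -C "phone key"
Then append ~/.ssh/id_rsa_phone.pub to authorized_keys on the servers; removing that one line is all it takes to revoke the phone later.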

Wagonburner posted:

I guess any new PC I want to ssh from I'd just copy the key from my phone to the pc? Or do you all use box.com or dropbox or something? In that case your key's only as strong as your dropbox pw right?
The idea of having a password-protected key is that, if the key is stolen or otherwise intercepted, an attacker can't immediately use it to break into your machines without cracking the password first. However, a private-key password is far easier to crack than a password used for online authentication--since the attacker has your private key, he can crack and verify it offline, then use it to successfully attack you. So, key passwords shouldn't be relied on as a security mechanism, i.e., your private key shouldn't be made public. But it does buy you some time to invalidate the key and swap it out for a new one should your private key leak out.

So yes, avoid putting your password-protected private key on dropbox.

spankmeister posted:

If someone is able to obtain your key they will then have to crack it before they can use it, which is something I would not worry about.
I'm far more worried about someone cracking the password on a private key, especially a weak password, than I am about someone cracking a password online, where they get far fewer guess attempts before I notice and ban them.

ExcessBLarg! fucked around with this message at 21:38 on Mar 13, 2012

ExcessBLarg!
Sep 1, 2001
I wouldn't recommend old-school vi exclusively though; go with something a bit more modern like vim. Although both appear arcane and weird to outsiders, vim is surprisingly awesome to use (highlight lines, blocks, multiple undo, syntax highlighting) whereas old-school vi really can be genuinely arcane at times. That said, with knowledge of vim, in the few instances where you do have to fall back on an older vi implementation, 90% of it still works just fine.

Not to start an editor war, but in general, one of vim or emacs is really good to learn well. Learning curves are a bit steep, but I'm far more productive writing code in vim than any IDE I've used. The fact that I can just as trivially write code over a remote shell from any computer, using detachable screen sessions (or tmux, whatever) is icing on the cake.

The consensus of the vim (and emacs, I guess) threads we've had over time is that there are a lot of features in them; you don't have to learn them all at once, but you'll end up learning new tricks all the time and they really do contribute to solid productivity.

ExcessBLarg!
Sep 1, 2001

Kaluza-Klein posted:

Can some one help me with my appalling lack of iptables knowledge?
1. ICMP echo is a meh heartbeat. It tells you that the machine is up, not that your server is operational. It might be better to have the heartbeat test an HTTP connect (or whatever service lives on tcp:80), since that wouldn't require any additional firewall rules and actually tests whether the daemon is running.

2. You shouldn't need the "-A OUTPUT" rule at all, unless that's not your full firewall configuration and/or you're setting a DROP policy on the OUTPUT chain. If the latter, there's bigger problems here.

3. Your source/destination switches are swapped. What you really want is:
code:
iptables -A INPUT -s 1.2.3.4 -p icmp -j ACCEPT
Note the "-s" instead of the "-d".

4. Why not just accept ICMP traffic period? Dropping it is more likely to cause problems than it's going to solve. For one, if the monitor server is ever reIPed, you're going to have to update the firewall rule and will probably forget to. Or the guy who replaces you won't even know that rule is there and be baffled.

ExcessBLarg!
Sep 1, 2001
Also, those firewall rules are bizarrely overspecified. I'm guessing you folks found them from here?

There are a few things about that page that bother me, one of which is the overspecified firewall rules given without motivation. For example, why hardcode the destination IP? Is it to prevent smurf attacks? If so, Linux has been ignoring ICMP echo requests on the broadcast address by default since 2.6.14.

ExcessBLarg!
Sep 1, 2001

spoon0042 posted:

So that's why that stopped working?
Probably. /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts, which is the parameter that controls that, defaults to 1 now. That said, distributions may have been setting that in their sysctl.conf for some time even prior to that.

Pinging broadcasts is a cheap way to figure out which machines on a subnet are up, but nmap can basically do the same thing with unicast addresses so there's not a huge functionality loss.

ExcessBLarg!
Sep 1, 2001

Kaluza-Klein posted:

This "pinging to check if the server is up" is something the hosting service is doing, I had no idea it was there until it yelled at me!
OK, so they're not checking for a specific service. In general, that kind of monitoring is a good thing. :)

Kaluza-Klein posted:

This is just a server I can goof off with and idle on irc with, so hopefully no one is worried that I am learning on it.
Fair enough.

A modern Linux system usually doesn't need a firewall. That said, stateful, "default deny" of incoming TCP/UDP traffic isn't bad. Just open the ports you need and that way you can run whatever daemons without worry that a misconfiguration gets you owned.

ICMP is pretty harmless though. I've never had a problem just allowing it, and I've not found good motivation for blocking it in most circumstances. I have, however, run into problems where things like PMTU discovery breaks because folks unnecessarily filter ICMP and it gets annoying.
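Something like this is about all the firewall a single-purpose box needs (a sketch, assuming ssh is the only service you're exposing; add ports as needed):
code:
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP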

spankmeister posted:

"It's not working! I can't ping!"
To be fair, I'd at least playfully give someone poo poo for blocking ICMP traffic for whitelisted hosts too.

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

Sure it does.
Why?

What attack vector exists that non-firewalled, but properly-configured Linux machines are susceptible to?

I understand running a firewall on machines where semi-trusted users are running riff-raff services/programs that the world shouldn't have access to. I would go as far as to say that these machines should run a firewall. But if you're running a machine with limited services and no riff-raff users, why does it need a firewall?

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

What if you install a piece of software that does something dumb like opening a service/port without you knowledge and you get attacked that way?
That's not a properly configured machine, and it's something that should never happen in a production environment.

In the past, there were systems that were vulnerable to attack merely by virtue of being online, and thus, in absence of timely patching, needed a firewall just to function.

That's no longer the case, you can put a properly-configured Linux machine online without a firewall, and it won't get owned by virtue of inherently running Linux. That's what I mean by need. That said, running a firewall may well be prudent depending on what one intends to do with the machine. But it should be considered as part of risk assessment, not something that absolutely has to be done.

Bob Morales posted:

I understand that in most cases a non-open port is just as good as a firewalled port. But there's no reason to NOT run one just in case.
For the most part I agree. The problem comes in when "running a firewall" means copying and pasting a bunch of feature-breaking iptables mumble without consideration of what much of it really means. For example, indiscriminate blocking of ICMP traffic because "nobody uses ICMP" or misguided assumptions of how these things actually serve as attack vectors.

Long story short, if I'm connecting to a host I should have access to, I expect it to:
  • Respond to pings.
  • Send ICMP errors on things like TTL expiration.
  • Participate in PMTU discovery.
  • Not drop fragmented packets.
Bad firewall configurations will break one or more of the above, which usually ends with me pulling my hair out trying to diagnose why I can't make poo poo work.

ExcessBLarg!
Sep 1, 2001

Martytoof posted:

edit2: oh it's not starting because I need to specify a PROGRAM in mdadm.conf as well?
My autoconfigured mdadm.conf in Debian doesn't have a PROGRAM line. Unfortunately the mdadm.conf manpage is weakly specified here, but according to the mdadm.conf-example file that ships with the documentation (and presumably the sources):
code:
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-event
And yeah, I have "MAILADDR root" and no PROGRAM line.

Also, my fear is that if you specify /bin/true as a PROGRAM, it will invoke that as an alternative to sending mail. Hmm.

Edit: Silly forum shouldn't parse email addresses in code blocks.

ExcessBLarg!
Sep 1, 2001

Crush posted:

How would I go about deleting all lines within a file that do not match a certain string/pattern?
Assuming the pattern "foo", in awk:
code:
awk '/foo/ {print}'
sed:
code:
sed -n '/foo/ p'
and grep:
code:
grep 'foo'
Edit: Despite the obviousness of using grep, sed does have the benefit of the -i (in place) option which changes an existing file as opposed to outputting a new one.
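For example (assuming GNU sed, since -i is an extension), this keeps only the matching lines, modifying the file directly:
code:
sed -i -n '/foo/ p' file.txt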

ExcessBLarg! fucked around with this message at 18:32 on Apr 22, 2012

ExcessBLarg!
Sep 1, 2001

Lysidas posted:

They decided to start from scratch with a well-designed hierarchy in /sys -- as far as I know /proc is semi-deprecated for anything that isn't "information on running processes".
In addition to hierarchy, /sys has rules on the format of data within it, namely that each file entry in /sys should hold a single value that can be read from or written to it.
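For example (assuming the box has an eth0), a sysfs entry is just one value:
code:
$ cat /sys/class/net/eth0/mtu
1500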

/proc is much more free form. There's also debugfs (not to be confused with the user-space ext file system manipulation tool of the same name) that's intended for free-form user-space interaction with modules.

ExcessBLarg!
Sep 1, 2001

Misogynist posted:

In my experience, most of the stability issues with KDE 4 occur in configurations where compositing is turned on by default, even where the video driver in use is a buggy, unstable piece of poo poo on Linux.
Fun bit about compositing. It works great on Intel graphics chips as the drivers are reasonably solid. However, I've had the recent misfortune of using machines with Nvidia GPUs (worse, the kind that desolder themselves too).

Turns out the Nvidia proprietary driver has horrible compositing performance. The worst experience I had was using Firefox, where some websites would just trash the thing and exhibit multi-second latency in scrolling.

Then I switched to the nouveau driver and surprisingly, everything compositing-related works just fine. I don't do much 3D stuff beyond that, so it's good enough for me.

ExcessBLarg!
Sep 1, 2001

Zom Aur posted:

There's absolutely nothing wrong with screen that I know of. :)
There's one particular weakness of screen that I've run into a number of times now, and that is that windows are statically allocated and there's a defined maximum of 40.

That seems like a pretty big number. However, one of my uses for screen is to have a session for an entire rack of machines where each window is connected to a port on a serial concentrator. With 42U racks, the number of 1U machines per rack, minus switch/PSU/whatever overhead, runs uncomfortably close to that 40-window maximum, so it's an issue.

It's easy enough to change that maximum in the source and recompile, but I've thought about writing a patch to dynamically allocate windows instead. However, as soon as you dive into the code you realize it's a hairy mess and unmaintained, and my energy is probably best spent migrating to tmux anyways. Just haven't been able to pull the trigger on that yet.

ExcessBLarg!
Sep 1, 2001

spankmeister posted:

Screen can connect to serial ports? Huh, never knew.
Serial concentrators usually make a bunch of serial ports available over some kind of network connection, so usually you're just ssh'ing or telnetting into them. But yeah, you can run minicom inside a screen session too.

SlightlyMadman posted:

I generally pop into a server to restart services, edit text files, or update from svn, so maybe it's not as useful to me since I don't do serious sysadmin work?
It's probably the opposite. Usually screen isn't all that useful for operations stuff because, like you say, you're just managing system configuration and restarting daemons and that kind of crap.

The big win for screen is where you want to run a long-running job interactively (and thus can't simply background and/or nohup it), and where it would be absolutely painful to restart the job in the event of a lost network connection. For me, it's usually number-crunching scripts that can take many hours, if not days, to run. Just start them in a screen session; then I can detach, go elsewhere, and reattach when I need to.

ExcessBLarg!
Sep 1, 2001

Martytoof posted:

Okay so I have an existing Linux-RAID raid10 array. Four 2TB drives, for a total of 4TB of storage which is no longer sufficient for my users.

Martytoof posted:

It's actually mdadm with a straight ext4 partition, no LVM layer.
Just wanted to mention that with the default mkfs options and a max 4K block-size, ext4 wastes a bunch of space on inode tables (~5-8% of file system size? going from memory), which on multi-TB file systems translates to hundreds of GBs lost to metadata. You may even have noticed that mkfs and fsck run really slowly on multi-TB ext4 partitions.

Most fancier file systems (btrfs, jfs, xfs, etc.) allocate metadata dynamically, and so are a good way to get a very practical amount of space back. Alternatively, you can use mkfs.ext4's "-i" option to allocate fewer inodes in the file system, which is fine if you know you're only going to store large files. But if your users store many small files, there's the chance of running out of inodes.
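For example, something like this allocates one inode per MiB instead of the default one per 16 KiB, which is plenty if the file system will mostly hold large files (a sketch; the device name is a placeholder):
code:
mkfs.ext4 -i 1048576 /dev/md0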

Just thought I'd mention it as I recently was making large ext4 partitions for backups and the static inode allocation problem is, well, painful. If you're doubling your data capacity, it's an opportunity to consider switching file systems.

ExcessBLarg!
Sep 1, 2001

Goon Matchmaker posted:

Is it just me or does Firefox's performance suck under Linux? ... I'm using the nvidia proprietary driver.
The proprietary driver is known to have poor compositing performance and, somehow, Firefox performance under it is really bad. I've switched to nouveau and for non-3D use (so basically compositing and 2D desktop acceleration) it's much better.

ExcessBLarg!
Sep 1, 2001

Crush posted:

When is it preferred to use a tool like xargs or parallel (but mainly xargs) instead of a loop?
Usually the main circumstance in which you would need to use xargs instead of a loop (when they would both work) is when you have to do some relatively trivial operation on a lot of files.

For example, if you're generating a list of files you want to delete, doing 'rm "$i"' in a loop will spawn a new rm process for each file you want to delete. Doing '| xargs rm' spawns rm only once (or at least a small number of times relative to the number of files), so you don't waste as much time in process creation, since the same instance of rm can iterate over a bunch of files internally. But with cores being both fast and plentiful, and storage still kind of slow, you might not notice a difference between the two approaches in most instances.

Of course, you'd have to use xargs when you need to do a single operation on a bunch of files, like if you're providing a list of files to tar to be concatenated into the same archive. And there's times you'd have to use a loop if the command you want to use only takes a single file name argument, like when extracting all the contents of multiple tar files.

Although to be honest, I do try to use "find -exec" as often as possible these days. It's worth noting that 'find ... -exec foo_prog {} +' has similar semantics to xargs, with foo_prog being called with multiple filename arguments, while 'find ... -exec foo_prog {} \;' has similar semantics to a loop, where an instance of foo_prog is called for each file found.

ExcessBLarg!
Sep 1, 2001

Fortuitous Bumble posted:

Is there a good way to edit files in some sort of hex mode over the console?
As mentioned, "xxd"/"xxd -r" is quite useful for this. I suppose some folks might prefer a dedicated terminal-based hex editor, but I'm quite happy with using xxd as a filter.

The only thing to beware of is that if you open binary files in a text editor directly, whether intending to apply xxd as a filter or not, do make sure to open them in binary mode (e.g., "vim -b") so that you don't get line endings added at the very end, or converted, or whatever.
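The usual dance, for reference: open in binary mode, filter the whole buffer through xxd to get a hex dump, edit, filter it back with 'xxd -r', then write.
code:
vim -b file.bin
:%!xxd
:%!xxd -r
:w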

ExcessBLarg!
Sep 1, 2001

fivre posted:

Our legal department (with the review of internal IT) has apparently banned the use of zlib.
Sounds like the "special snowflake" version of a legal department.

If there was a legitimate licensing issue with zlib, it would've come up a long time ago.

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

I'm not sure if flash drives or SD cards have wear leveling algorithims,
They do. I can't say how advanced they are, and I'm sure that even varies across manufacturers.

Bob Morales posted:

Can I securely erase this medium
The MMC/SD specs include "erase" and "secure erase", the latter of which is supposed to blow away sectors and the FTL mapping. Linux exposes these to userspace with the BLKDISCARD and BLKSECDISCARD ioctls, but I'm not aware of many tools that use them.

One notable tool is "make_ext4fs" from Android, used in formatting ext4 partitions and, if used with the "-w" switch on MMC/SD devices, is supposed to securely wipe them (incidentally triggering the SGS2 brick bug). The binary is available for x86 Linux, although I've always built it as part of the (massive) Android source tree.

Google turns up test-discard as a program for repetitively calling BLKDISCARD, and it could be modified to use BLKSECDISCARD. But yeah, it's not something I'm aware of being widely implemented.

Of course, all the above assumes the card's MMC/SD controller actually implements the erase commands and does something reasonable with them.

Bob Morales posted:

(like a spinning HD),
Spinning HDs frequently reallocate "unrecoverable" sectors, which may or may not actually be recoverable with some patience. I don't necessarily expect anyone to recover any unerased data from them, but I can't guarantee that all data I've ever stored on an HD is non-recoverable by someone sufficiently enterprising in hacking disk firmware.

Bob Morales posted:

If the answer to both of those is 'no', you need to encrypt the data before you write it.
That's a good policy.

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

/usr was traditionally put on a separate tape drive,
When was /usr ever put on tape drives? Tape drives are poo poo slow for random access, and /usr is mostly used in a random-access fashion. Network mounted, sure.

Suspicious Dish posted:

It was named "user" because the things in it were in "userspace",
"Userspace" is everything that's not in kernel space. All Unix executables (those found in /bin and /usr/bin) run in userspace.

Honestly the Rob Landley explanation makes the most sense. "/usr" is short for "user", in reference to user home directories. All subsequent attempts to backronym it to something else ("UNIX System Resources" har) are laughable. It's also significant that "usr" is used in reference to user home directories on a number of systems, for example within many AFS cells. I suspect its usage dates back even further.

Suspicious Dish posted:

/bin was for dynamically linked binaries, /sbin was for statically linked binaries.
At least in "modern" usage, /bin and /usr/bin are where binaries used by non-privileged users are stored, whereas /sbin and /usr/sbin are where binaries intended for use by the superuser (root) are stored. This is why /bin and /usr/bin are in $PATH for normal users, while /sbin and /usr/sbin are only in $PATH for root.

Shared libraries were introduced before my day, so I can't say the divide wasn't for the other reason in the past. It would seem weird that their roles changed though.

(Edit: Oh, Solaris did it that way. Silly Solaris.)

Suspicious Dish posted:

So, /bin requires /lib, and /sbin does not.
This is definitely no longer true, if it was at one time.

ExcessBLarg! fucked around with this message at 03:30 on Sep 6, 2012
