|
Bob Morales posted:Is there a good way to block stuff like bittorrent from OpenVPN/PPTP clients? Protocol-based blocking can be done with deep packet inspection. I haven't deployed it myself, so I don't know how advanced the tools are, but I'd start with OpenDPI. Mind you, obfuscated protocols are tough, so things like encrypted BT might be out of reach. In general, unless you rate limit/throttle to make high-bandwidth applications undesirable, possibly with whitelisted sites for things you want to allow (e.g., YouTube, Akamai, whatever), folks will be able to thwart your filtering efforts with encryption.
|
# ¿ Jul 21, 2011 15:04 |
|
|
Bob Morales posted:I guess people could use IM, Skype, and that stuff as well, but I don't want anyone pulling torrents or newsgroups. Maybe throttling everything BUT web access is the best idea. If the bandwidth is available, I'd like people to be able to surf as fast as possible. Mind you, an HTTP proxy alone isn't encrypted; it would have to be deployed alongside an ssh tunnel. I believe, however, that as long as you tunnel the proxy traffic you don't have to worry about DNS leaks, since URLs are forwarded to the proxy and resolved there. Bob Morales posted:I would rather just offer ssh tunneling but it's too complicated for 99% of people to use/configure. ExcessBLarg! fucked around with this message at 00:48 on Jul 22, 2011 |
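For the ssh-tunnel half of this, a client-side sketch (host name and port are made up for illustration): an ~/.ssh/config entry that opens a local SOCKS proxy the browser can point at, so both the HTTP traffic and the DNS lookups ride the encrypted tunnel.

```
# ~/.ssh/config -- host name and port are hypothetical
Host proxybox
    HostName shell.example.com
    DynamicForward 1080    # local SOCKS5 proxy on localhost:1080
    Compression yes
```

Run 'ssh -N proxybox', point the browser's SOCKS setting at localhost:1080 with remote DNS enabled, and URLs resolve at the far end.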
# ¿ Jul 22, 2011 00:42 |
|
Accipiter posted:Something like this should do the trick: Actually, didn't this very issue come up a few pages ago? Edit: Guess not, must've been another thread or something.
|
# ¿ Jul 22, 2011 00:43 |
|
Smuckles posted:Would the plan, then, be to buy one 2TB drive to expand /home (using LVM) Also, I'm not quite sure I understand what you're trying to do: you want a 4 TB home? Instead of four disks in a mirror, three 2 TB disks in a RAID5 array is a bit cheaper. But remember, RAID is not backup. If you don't already have offline (or multiple) copies of the data that's going in the array, and you care about it, you'll want to make sure you have an offline backup of it too. The only benefit of RAID here is avoiding the downtime and restore work associated with a disk failure; it won't protect you against a PSU failure wiping out all your disks, or against file system corruption.
|
# ¿ Jul 23, 2011 17:49 |
|
So I upgraded the kernel on my Debian router machine yesterday to linux-image-3.0.0-1-amd64, and apparently it broke PMTU discovery again (or at least, TCPMSS clamping to PMTU), which means I couldn't send packets larger than 1492 bytes (interface MTU is 1500). Unfortunately the only symptom of this I ran into was that I couldn't post replies on SA--the connection would time out. I tried clearing cookies/cache/history/etc., different web browsers, different machines, etc. Of course, I didn't try a different Internet connection until after emailing the SA devs. So yeah, uh, if you can't post on SA, perhaps upgrading to Linux 3.0 is to blame?
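For anyone hitting the same thing, the usual mitigation on the router is the standard netfilter MSS-clamping rule (this is the generic recipe, not necessarily the exact rule from my setup):

```
# Rewrite the MSS on forwarded TCP SYNs so peers never send
# segments bigger than the path MTU allows (e.g., PPPoE's 1492).
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
```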
|
# ¿ Jul 28, 2011 17:31 |
|
Why does Ubuntu suddenly suck? I run Debian myself, so I haven't been following Ubuntu shenanigans too closely. I know folks were upset with the switch to Unity in the latest release, although it's not clear to me why they're even upset with that except that it's different. However, can't you just tick "GNOME Desktop" at the *DM prompt if that's a problem? Did they fuck up hardware support recently or something?
|
# ¿ Aug 1, 2011 19:52 |
|
fletcher posted:I'm trying to figure out how to use screen to split the window and then be able to disconnect and then reconnect to my split windows, what am I doing wrong? For your specific example, you should be able to do 'screen -r', 'C-a S' to re-split, then 'C-a Tab' to move into the new region and 'C-a Space' until you get the window you want to show. Or select it from the 'C-a "' window list.
|
# ¿ Aug 7, 2011 19:23 |
|
xPanda posted:Instructions for RHEL/CentOS/SL all seem to recommend putting /boot on its own small RAID1, but this is the old school way and I want to use partitioning so I don't have more RAID devices than I need. The problem is that your bootloader then has to be aware of partitions inside a mirror volume and be able to peer into them in order to boot. What's nice about a separate mdadm mirror for /boot is that bootloaders don't have to know about it at all: an mdadm mirror volume is indistinguishable from a non-RAID volume except for the mdadm superblock at the end. Furthermore, should you decide to move from a mirror setup to a three-disk RAID5, it's still probably better to keep /boot as a mirror, otherwise your bootloader will have to be smart enough to understand the RAID5 parity stripe. Perhaps GRUB 2 has all that poo poo built in these days (it has quite a lot), but I'm a fan of keeping it simple, since who knows how well that code is tested. xPanda posted:Also is it my imagination or does Ubuntu not seem to have a /boot partition under default installations? ExcessBLarg! fucked around with this message at 19:33 on Aug 7, 2011 |
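A sketch of what that /boot mirror looks like (device names hypothetical); metadata format 1.0 puts the superblock at the end of the device, which is what makes each member look like a plain partition to the bootloader:

```
# Superblock at the end of the device (format 1.0), so GRUB can
# read either member as if it were an ordinary partition.
mdadm --create /dev/md0 --level=1 --metadata=1.0 \
      --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext2 /dev/md0    # /boot is small and rarely written; no journal needed
```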
# ¿ Aug 7, 2011 19:31 |
|
spankmeister posted:The reason to use a RAID1 for boot is because GRUB doesn't understand RAID, It also wouldn't have to run any md daemon for read-only support, assuming it knows about md's RAID 5 stripe scheme and all volumes are available. Can GRUB boot a RAID 5 volume in degraded mode, though? I don't know, and quite frankly, that's not something worth waiting around to test when the cost of replicating a < 100 MB mirror across all your disks is nil. spankmeister posted:Also, this may no longer be the case but it used to be that you needed to run update-grub everytime you changed something in /boot, UNLESS you had a separate /boot partition, then you could skip that step. I'm assuming the reason to run update-grub was to refresh GRUB's kernel blockmap, if it couldn't fit stage 1.5 (or GRUB 2's equivalent) in the boot track. That may have more to do with the partitioning tool used (some leave a one-sector boot track, others 63 sectors) than with the existence of a separate /boot. Or with using some funky filesystem GRUB knows poo poo about. ExcessBLarg! fucked around with this message at 22:08 on Aug 7, 2011 |
# ¿ Aug 7, 2011 22:06 |
|
Dinty Moore posted:No, 'update-grub' doesn't actually reinstall the bootstrap code in the MBR on its own at all. (We don't have one of those yet?)
|
# ¿ Aug 8, 2011 17:53 |
|
JHVH-1 posted:Only other thing you may want to worry about is the device names as sometimes that changes between hardware. In Debian land that means patching up "/etc/udev/rules.d/70-persistent-net.rules", or just blowing it away entirely if there's only one adapter you care about.
|
# ¿ Aug 9, 2011 17:58 |
|
duck monster posted:Yes exim4 has/had a loving remote vunerability where you could EMAIL loving untrusted scripts to it via the HeaderX tag Ziir posted:If I wanted to nuke everything and reinstall, do I just need to delete my LVM partitions or do I need to do the whole dd thing again on my entire hard drive? So all you need to do is nuke the crypto (LUKS) header and generate new crypto keys, at which point the ability to decode the old data is lost to everyone and it's effectively noise. As for the slow writing of noise part: dd-ing the whole disk from /dev/urandom is slow because the kernel's CSPRNG can't come anywhere near disk write speeds, and there's no need to repeat it when destroying the header alone does the job.
|
# ¿ Sep 2, 2011 15:41 |
|
T.K. posted:So does the old advice about not using backticks and cat if you need to loop through a file still apply? First, the input is tokenized differently, particularly if "infile.txt" contains spaces. The "while read" variant reads one entire line at a time into f, whereas the "for f in $(cat ...)" variant splits on IFS, which defaults to any whitespace, so a single line containing spaces becomes several loop iterations. Second, in the "while" case, stdin of every command in the loop body is redirected from "infile.txt", which is nasty if you try to do "while read f; do vi "$f"; done < infile.txt": vi ends up reading its terminal input from the file too.
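Both points can be demonstrated on a throwaway file (contents made up for illustration):

```shell
printf 'one two three\nfour\n' > infile.txt

# Splits on IFS (any whitespace): four iterations for two lines.
for f in $(cat infile.txt); do echo "for: [$f]"; done

# Reads whole lines: two iterations, spaces preserved.
while read f; do echo "while: [$f]"; done < infile.txt

# The stdin gotcha: this "head" eats the rest of infile.txt,
# so the loop body runs only once.
while read f; do head -n1 >/dev/null; echo "gotcha: [$f]"; done < infile.txt
```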
Misogynist posted:but that it spawns an unnecessary process and that makes a lot of Unix nerds completely sperg out.
|
# ¿ Nov 1, 2011 16:16 |
|
Bob Morales posted:You usually end up having to use 'find' or 'xargs' 1. 'rm *.toto' and 'ls *.toto | xargs rm' have the same wildcard expansion; the latter is just needlessly convoluted. 2. 'find . -type f -name *.toto | xargs rm' needs the wildcard properly escaped, otherwise it won't work right if any file matching "*.toto" exists in the current directory. 3. 'find . -name "*.toto" -exec rm {} \;' works fine, but spawns a separate "rm" process to delete every file, which is a bit suboptimal. The two best solutions, which aren't mentioned, are: 1. 'find . -name "*.toto" -exec rm {} +', which works like xargs and spawns as few "rm" processes as possible. The "{} +" form is actually specified by POSIX, and it also works in BusyBox, which is about the only other find I care about these days. 2. 'find . -name "*.toto" -print0 | xargs -0 rm' uses nulls as delimiters, so it properly handles the utterly-perverse case of file names with newlines (as does the above solution). Edit edit: Better accuracy, more content. ExcessBLarg! fucked around with this message at 16:28 on Nov 2, 2011 |
# ¿ Nov 2, 2011 16:18 |
|
Misogynist posted:The xargs example will likely try to delete all files with names that match those inside any directory named *.toto, which is probably really far away from the right behavior. Misogynist posted:I think you read the same bad information on Wikipedia that I just did -- you don't just have to worry about newlines, because xargs will break arguments on any whitespace by default. I had been assuming the behavior of xargs was to split on newlines for years. I use it rather sparingly but, sigh.
|
# ¿ Nov 2, 2011 21:20 |
|
I write shell scripts with "set -e" so that they'll automatically stop/fail on any failed command. Combined with "set -x" ("set -ex") I'll know exactly where it dies.
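A minimal illustration (the echo/false commands are just stand-ins for real pipeline steps):

```shell
cat > demo.sh <<'EOF'
#!/bin/sh
set -ex            # -e: stop at the first failing command; -x: trace each one
echo "step 1"
false              # fails here, so the script exits with status 1
echo "never reached"
EOF
sh demo.sh || echo "exit status: $?"
```

With -x, each command is traced to stderr, so the last traced line points right at the failure.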
|
# ¿ Nov 26, 2011 21:40 |
|
Bob Morales posted:RedHat recommends 'at least 16GB' swap for 128-256GB RAM. OK, funny thing about swap: it's been pretty much useless since we hit multi-GB RAM machines. Or rather, there are only two things that swap can do usefully: 1. Hold the RAM restore image when you hibernate (suspend-to-disk). 2. Store a few dirty pages that aren't backed by a file and are never going to be read/used again. This makes RAM available for more applications, or even just to buffer disk pages that will be used frequently. Problem is that the machine can't "forget" the contents of these dirty pages because they could be used again (correctness error), even if in practice they never will be. For both the above purposes, I've never found a particular reason to use more than 1 GB of swap. Why is it otherwise useless? Two reasons. First, if your swap is backed by (mechanical) disk, it's either not very large or it's very slow. See, disk capacity has grown many times faster than media transfer rates, and especially faster than random access times. Writing out a full gig sequentially still takes, what, at least ten seconds? Accessing 1 GB in a random pattern takes far longer. 64 MB of swap, back in the days of 64 MB RAM, could be accessed much faster, so the cost of using it wasn't as high. These days if you're thrashing more than a gig, or even anywhere near a gig, your computation is so slowed down that you're better off serializing what you're doing or giving up. Second, if your swap is backed by SSD, you're blowing erase cycles. That doesn't matter much for purposes 1 & 2 above, but I wouldn't want to regularly thrash to SSD and replace it that much sooner. The replacement money is better spent on more RAM. Bob Morales posted:I have a 32GB system that for some reason wants to use 40MB swap even with 20GB free. ExcessBLarg! fucked around with this message at 03:14 on Mar 2, 2012 |
# ¿ Mar 2, 2012 03:09 |
|
cr0y posted:can someone spit me out a command that will relatively quickly calculate the size of a directory that has a MASSIVE amount of folders/files under it? du ain't cutting it
In general though what you're asking to do is slow. In the absence of an indexing mechanism, the file system has to, at minimum, look up the inodes for every file underneath that directory, and mechanical disk seeks are going to make that slow.
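For reference, the usual invocations (nothing avoids the inode walk without an index, but these at least avoid per-directory output; the directory name is made up):

```shell
mkdir -p bigdir/sub && printf 'hello' > bigdir/sub/f.txt

# One summary line instead of a line per subdirectory:
du -sh bigdir

# Apparent sizes only, skipping du's block accounting (GNU find):
find bigdir -type f -printf '%s\n' | awk '{ t += $1 } END { print t }'
```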
|
# ¿ Mar 9, 2012 05:24 |
|
Martytoof posted:I had a SMART failure on my four-disk software RAID10 array. ... The only way I was alerted to the failure was by checking dmesg .... Which gets to the real point. Folks, make sure your servers are running an MTA and configured properly. And make sure your servers are configured to deliver their mail to an address you actually read. If you do that, then mdadm will mail its gripes there when an array degrades, which is far more convenient than waiting for a beep. But you have to test your notification mechanisms to make sure they're working properly.
|
# ¿ Mar 10, 2012 20:03 |
|
Wagonburner posted:For years I've had my sshd port open to the internet on an odd port, like 35666 or something, thinking that this combined with root login disabled helps to mitigate my fairly lax upkeep of my system. ... How safe am I to leave it open to the internet on 22 if I'm kind of a "install linux and forget about it until it breaks" type guy? The last vulnerability of that type that I recall, or at least that affected machines I run, was the Debian Non-Random Number Generator fiasco that resulted in completely predictable SSH and SSL keys when generated with Debian OpenSSL packages from a particular timeframe. Otherwise, I've been running root-enabled port 22 sshds on machines for a long time and they've never been owned. At least, not by that avenue. Assuming you don't have a trivial password, they're just not going to get cracked unless there is a vulnerability. And if such a vulnerability does come along, you're not a high-priority (i.e., 0-day) target. Of course, running on an odd port, root-disabled, password-auth disabled, active blocking, etc., those are all great too. Wagonburner posted:I wonder if there's some type of public ssh I could connect to from work and kind of double-port-fwd? It's actually a rather nifty solution, as Tor has been used (and optimized) for the past few years for getting around censorship policies. Mind you, these guys are in an arms race with the kind of Internet censorship that's going on in China, Iran, Syria, etc., so it typically does a darn good job with your typical non-whitelist firewall. Plus the Tor folks are trying to encourage a diverse user base so that it doesn't become representative of any one particular user group. If you do use Tor, though, be extra vigilant about connecting to hosts with previously-verified host keys and be wary of "key changed!" notices. It's possible for a Tor exit node to man-in-the-middle you, but that's mitigated by using a previously-verified host key.
Wagonburner posted:How bad really is using a weak-ish password on a normal account, the account name and pw would have to be guessed right? Goon Matchmaker posted:Potentially stupid question. How much more secure is a 16384 bit ssh key over whatever the standard is (4096?) My understanding of RSA key strength among researchers is that 1024 bits is "uncomfortable", that is, some 1024-bit RSA key will likely be cracked soon. 2048-bit keys are estimated to be sufficient until 2030, and 3072-bit keys should be good beyond that. In other words, if you're generating an easily-replaceable ssh keypair, I see no particular reason to go beyond 2048 bits outside of paranoia. If you need an RSA key for a long-lived purpose, like an SSL (CA) certificate with an expiration time of 10 years or longer, you'd probably want to use 3072 bits. 16384 bits is crazy long, and, although I'm not a crypto person, I'd guess that if we reach the point with RSA where we require keys that long, switching to ECC or something will end up being the favored approach. Wagonburner posted:How does logging in with a key work day to day in the real world? I'll need the key on my android since I use ssh there (busybox and connectbot) Wagonburner posted:I guess any new PC I want to ssh from I'd just copy the key from my phone to the pc? Or do you all use box.com or dropbox or something? In that case your key's only as strong as your dropbox pw right? So yes, avoid putting your password-protected private key on dropbox. spankmeister posted:If someone is able to obtain your key they will then have to crack it before they can use it, which is something I would not worry about. ExcessBLarg! fucked around with this message at 21:38 on Mar 13, 2012 |
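The belt-and-suspenders settings from this discussion, as an /etc/ssh/sshd_config sketch (the odd port number is the one from the post; the rest are stock sshd directives):

```
# /etc/ssh/sshd_config
Port 35666                   # obscurity: cuts log noise, not actual risk
PermitRootLogin no
PasswordAuthentication no    # key-only auth; weak passwords stop mattering
PubkeyAuthentication yes
```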
# ¿ Mar 13, 2012 21:31 |
|
I wouldn't recommend old-school vi exclusively, though, but something a bit more modern like vim. Although both appear arcane and weird to outsiders, vim is surprisingly awesome to use (highlighting lines and blocks, multiple undo, syntax highlighting), whereas old-school vi really can be genuinely arcane at times. That said, with knowledge of vim, in the few instances where you do have to fall back on an older vi implementation, 90% of it still works just fine. Not to start an editor war, but in general, one of vim or emacs is really worth learning well. The learning curves are a bit steep, but I'm far more productive writing code in vim than in any IDE I've used. The fact that I can just as trivially write code over a remote shell from any computer, using detachable screen sessions (or tmux, whatever), is icing on the cake. The consensus of the vim (and emacs, I guess) threads we've had over time is that there are a lot of features in them; you don't have to learn them all at once, but you'll keep learning new tricks all the time and they really do contribute to solid productivity.
|
# ¿ Apr 1, 2012 17:39 |
|
Kaluza-Klein posted:Can some one help me with my appalling lack of iptables knowledge? 2. You shouldn't need the "-A OUTPUT" rule at all, unless that's not your full firewall configuration and/or you're setting a DROP policy on the OUTPUT chain. If the latter, there are bigger problems here. 3. Your source/destination switches are swapped: on the INPUT chain, the monitoring server's address is the source (-s), not the destination.
4. Why not just accept ICMP traffic, period? Dropping it is more likely to cause problems than to solve them. For one, if the monitor server is ever reIPed, you're going to have to update the firewall rule and will probably forget to. Or the guy who replaces you won't even know that rule is there and be baffled.
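Sketching points 3 and 4 (the monitor's address is hypothetical):

```
# Point 3: on INPUT, the monitoring server is the *source*; the reply
# leaves via the OUTPUT chain's default ACCEPT policy (see point 2).
iptables -A INPUT -p icmp --icmp-type echo-request -s 192.0.2.10 -j ACCEPT

# Point 4: or just accept ICMP outright and stop maintaining the special case.
iptables -A INPUT -p icmp -j ACCEPT
```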
|
# ¿ Apr 3, 2012 17:44 |
|
Also, those firewall rules are bizarrely overspecified. I'm guessing you folks found them from here? There are a few things about that page that bother me, one of which is overspecified firewall rules presented without motivation. For example, why hardcode the destination IP? Is it to prevent smurf attacks? If so, Linux has ignored ICMP echo requests sent to the broadcast address by default since 2.6.14.
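That kernel default is easy to check (1 means broadcast echo requests are ignored):

```shell
# Read-only check; changing it would need root, e.g.:
#   sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
```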
|
# ¿ Apr 3, 2012 18:14 |
|
spoon0042 posted:So that's why that stopped working? Pinging broadcasts is a cheap way to figure out which machines on a subnet are up, but nmap can basically do the same thing with unicast addresses so there's not a huge functionality loss.
|
# ¿ Apr 3, 2012 19:01 |
|
Kaluza-Klein posted:This "pinging to check if the server is up" is something the hosting service is doing, I had no idea it was there until it yelled at me! Kaluza-Klein posted:This is just a server I can goof off with and idle on irc with, so hopefully no one is worried that I am learning on it. A modern Linux system usually doesn't need a firewall. That said, stateful "default deny" of incoming TCP/UDP traffic isn't bad. Just open the ports you need, and that way you can run whatever daemons you like without worrying that a misconfiguration gets you owned. ICMP is pretty harmless, though. I've never had a problem just allowing it, and I've not found good motivation for blocking it in most circumstances. I have, however, run into problems where things like PMTU discovery break because folks unnecessarily filter ICMP, and it gets annoying. spankmeister posted:"It's not working! I can't ping!"
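A sketch of that stateful default-deny policy, keeping ssh and ICMP open (the port list is illustrative, not a recommendation for any particular box):

```
iptables -P INPUT DROP                                    # default deny inbound
iptables -A INPUT -i lo -j ACCEPT                         # loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT             # ssh
iptables -A INPUT -p icmp -j ACCEPT                       # harmless; keeps PMTUD working
```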
|
# ¿ Apr 3, 2012 19:39 |
|
Bob Morales posted:Sure it does. What attack vector exists that non-firewalled, but properly-configured Linux machines are susceptible to? I understand running a firewall on machines where semi-trusted users are running riff-raff services/programs that the world shouldn't have access to. I would go as far as to say that these machines should run a firewall. But if you're running a machine with limited services and no riff-raff users, why does it need a firewall?
|
# ¿ Apr 3, 2012 21:21 |
|
Bob Morales posted:What if you install a piece of software that does something dumb like opening a service/port without you knowledge and you get attacked that way? In the past, there were systems that were vulnerable to attack merely by virtue of being online, and thus, in the absence of timely patching, needed a firewall just to function. That's no longer the case: you can put a properly-configured Linux machine online without a firewall and it won't get owned merely by virtue of running Linux. That's what I mean by "need". That said, running a firewall may well be prudent depending on what one intends to do with the machine. But it should be considered as part of risk assessment, not something that absolutely has to be done. Bob Morales posted:I understand that in most cases a non-open port is just as good as a firewalled port. But there's no reason to NOT run one just in case. Long story short, if I'm connecting to a host I should have access to, I expect it to:
|
# ¿ Apr 3, 2012 22:02 |
|
Martytoof posted:edit2: oh it's not starting because I need to specify a PROGRAM in mdadm.conf as well?
Also, my fear is that if you specify /bin/true as a PROGRAM, mdadm will invoke it as an alternative to sending mail. Hmm. Edit: Silly forum shouldn't parse email addresses in code blocks.
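For reference, a sketch of the relevant mdadm.conf lines (the address is made up); per mdadm(8), MAILADDR alone is enough for mail notifications, and PROGRAM is a separate per-event hook:

```
# /etc/mdadm/mdadm.conf
MAILADDR admin@example.com        # where mdadm --monitor sends its gripes
#PROGRAM /usr/local/sbin/md-event # optional hook, run once per event
```

'mdadm --monitor --scan --test --oneshot' sends a test message for each array, which is the quick way to confirm the MTA path actually works.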
|
# ¿ Apr 5, 2012 16:48 |
|
Crush posted:How would I go about deleting all lines within a file that do not match a certain string/pattern?
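The standard answers, sketched on a throwaway file (pattern and contents made up):

```shell
printf 'keep me\ndrop this\nkeep too\n' > lines.txt

# grep: print only the matching lines
grep 'keep' lines.txt

# sed: delete every line that does NOT match
sed '/keep/!d' lines.txt

# sed: suppress default output, print matches (equivalent)
sed -n '/keep/p' lines.txt
```

All three print the same two "keep" lines; add -i (GNU sed) to edit the file in place.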
ExcessBLarg! fucked around with this message at 18:32 on Apr 22, 2012 |
# ¿ Apr 22, 2012 18:30 |
|
Lysidas posted:They decided to start from scratch with a well-designed hierarchy in /sys -- as far as I know /proc is semi-deprecated for anything that isn't "information on running processes". /proc is much more free form. There's also debugfs (not to be confused with the user-space ext file system manipulation tool of the same name) that's intended for free-form user-space interaction with modules.
|
# ¿ May 3, 2012 17:45 |
|
Misogynist posted:In my experience, most of the stability issues with KDE 4 occur in configurations where compositing is turned on by default, even where the video driver in use is a buggy, unstable piece of poo poo on Linux. Turns out the Nvidia proprietary driver has horrible compositing performance. The worst experience I had was using Firefox, where some websites would just trash the thing and exhibit multi-second latency in scrolling. Then I switched to the nouveau driver and surprisingly, everything compositing-related works just fine. I don't do much 3D stuff beyond that, so it's good enough for me.
|
# ¿ May 4, 2012 20:15 |
|
Zom Aur posted:There's absolutely nothing wrong with screen that I know of. That seems like a pretty big number. However, one of my uses for screen is to have a session for an entire rack of machines where each window is connected to a port on a serial concentrator. With 42U racks, the number of 1U machines per rack, minus switch/PSU/whatever overhead, runs uncomfortably close to screen's 40-window maximum, so it is an issue. It's easy enough to change that maximum in the source and recompile, but I've thought about writing a patch to dynamically allocate windows instead. However, as soon as you dive into the code you realize it's a hairy, unmaintained mess, and my energy is probably better spent migrating to tmux anyway. I just haven't been able to pull the trigger on that yet.
|
# ¿ May 7, 2012 04:25 |
|
spankmeister posted:Screen can connect to serial ports? Huh, never knew. SlightlyMadman posted:I generally pop into a server to restart services, edit text files, or update from svn, so maybe it's not as useful to me since I don't do serious sysadmin work? The big win for screen is where you want to run a long-running job interactively (and thus, can't simply background and/or nohup it), where it's absolutely painful to restart the job in the event of a lost network connection. For me, it's usually running number-crunching scripts that can take many hours, if not days to run. Just start them in a screen session, then I can detach, go elsewhere, and plug in when I need to again.
|
# ¿ May 7, 2012 19:39 |
|
Martytoof posted:Okay so I have an existing Linux-RAID raid10 array. Four 2TB drives, for a total of 4TB of storage which is no longer sufficient for my users. Martytoof posted:It's actually mdadm with a straight ext4 partition, no LVM layer. Most fancier file systems (btrfs, jfs, xfs, etc.) allocate metadata dynamically, and so are a good way to get a very practical amount of space back. Alternatively, you can use mkfs.ext4's "-i" option to allocate fewer inodes in the file system, which is fine if you know you're only going to store large files. But if your users store many small files, there's the chance of running out of inodes. Just thought I'd mention it, as I was recently making large ext4 partitions for backups and the static inode allocation problem is, well, painful. If you're doubling your data capacity, it's an opportunity to consider switching file systems.
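A sketch of the -i option (device name and ratio are illustrative): one inode per 1 MiB of space instead of the default one per 16 KiB, which suits partitions holding only large files:

```
# Fine for big media/backup files; dangerous if users will create
# millions of small files, since ext4 inodes can't be added later.
mkfs.ext4 -i 1048576 /dev/md0
```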
|
# ¿ May 14, 2012 22:41 |
|
Goon Matchmaker posted:Is it just me or does Firefox's performance suck under Linux? ... I'm using the nvidia proprietary driver.
|
# ¿ Jul 30, 2012 16:39 |
|
Crush posted:When is it preferred to use a tool like xargs or parallel (but mainly xargs) instead of a loop? For example, if you're generating a list of files you want to delete, doing 'rm "$i"' in a loop will spawn a new rm process for each file you want to delete. Doing '| xargs rm' spawns rm only once (or at least a small number of times relative to the number of files), so you don't waste as much time in process creation, since the same instance of rm can iterate over a bunch of files internally. But with cores being both fast and plentiful, and storage still kind of slow, you might not notice a difference between the two approaches in most instances. Of course, you'd have to use xargs when you need to do a single operation on a bunch of files, like providing a list of files to tar to be concatenated into the same archive. And there are times you'd have to use a loop, if the command you want to use only takes a single file name argument, like when extracting the contents of multiple tar files. Although to be honest, I try to use "find -exec" as often as possible these days. It's worth noting that 'find ... -exec foo_prog {} +' has similar semantics to xargs, with foo_prog being called with multiple filename arguments, while 'find ... -exec foo_prog {} \;' has similar semantics to a loop, where an instance of foo_prog is called for each file found.
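The two -exec forms side by side, on throwaway files:

```shell
mkdir -p demo && touch demo/a.tmp demo/b.tmp demo/keep.txt

# "+" batches file names into as few rm invocations as possible (xargs-like):
find demo -name '*.tmp' -exec rm {} +

# ";" would instead have run one rm per file (loop-like):
#   find demo -name '*.tmp' -exec rm {} \;

ls demo    # only keep.txt remains
```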
|
# ¿ Jul 31, 2012 16:30 |
|
Fortuitous Bumble posted:Is there a good way to edit files in some sort of hex mode over the console? The only thing to beware of is that if you open binary files in a text editor directly, whether intending to apply xxd as a filter or not, do make sure to open them in binary mode (e.g., "vim -b") so that you don't get line endings added at the very end, or converted, or whatever.
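The round trip the post describes, as vim commands (file name made up):

```
$ vim -b file.bin        " -b: binary mode, no newline munging
:%!xxd                   " filter the buffer into a hex dump; edit the hex column
:%!xxd -r                " convert the dump back to binary
:w
```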
|
# ¿ Aug 20, 2012 16:10 |
|
fivre posted:Our legal department (with the review of internal IT) has apparently banned the use of zlib. If there was a legitimate licensing issue with zlib, it would've come up a long time ago.
|
# ¿ Aug 23, 2012 04:07 |
|
Bob Morales posted:I'm not sure if flash drives or SD cards have wear leveling algorithims, Bob Morales posted:Can I securely erase this medium One notable tool is "make_ext4fs" from Android, used in formatting ext4 partitions and, if used with the "-w" switch on MMC/SD devices, is supposed to securely wipe them (incidentally triggering the SGS2 brick bug). The binary is available for x86 Linux, although I've always built it as part of the (massive) Android source tree. Google yields test-discard as a program for repetitively calling BLKDISCARD and could be modified to use BLKSECDISCARD. But yeah, it's not something I'm aware of being widely implemented. Of course, all the above assumes the card's MMC/SD controller actually implements the erase commands and does something reasonable with them. Bob Morales posted:(like a spinning HD), Bob Morales posted:If the answer to both of those is 'no', you need to encrypt the data before you write it.
|
# ¿ Aug 27, 2012 18:07 |
|
|
Suspicious Dish posted:/usr was traditionally put on a separate tape drive, Suspicious Dish posted:It was named "user" because the things in it were in "userspace", Honestly the Rob Landley explanation makes the most sense: "/usr" is short for "user", in reference to user home directories. All subsequent attempts to backronym it to something else ("UNIX System Resources", har) are laughable. It's also significant that "usr" is used in reference to user home directories on a number of systems, for example within many AFS cells. I suspect its usage dates back even further. Suspicious Dish posted:/bin was for dynamically linked binaries, /sbin was for statically linked binaries. Shared libraries were introduced before my day, so I can't say the divide wasn't for the other reason in the past. It would seem weird for their roles to have changed, though. (Edit: Oh, Solaris did it that way. Silly Solaris.) Suspicious Dish posted:So, /bin requires /lib, and /sbin does not. ExcessBLarg! fucked around with this message at 03:30 on Sep 6, 2012 |
# ¿ Sep 6, 2012 03:24 |