|
Thermopyle posted:On Ubuntu, /etc/cron.d/mdadm is configured to run a redundancy check on the first Sunday of the month only. I commented that out and just set it to run on the 19th of the month, as ZFS is usually running a scrub on the first Sunday of the month and the two together drag the system to its knees.

Check out man 5 crontab. The day-of-week field operates differently than the others, in that having both a day-of-week field and a day-of-month field will cause the command to run when either condition is met.

57 0 19 * 0 = run the command at 00:57 on every Sunday and also on every 19th calendar day
57 0 19 * * = run the command at 00:57 on every 19th calendar day

Salt Fish fucked around with this message at 23:05 on Mar 2, 2014 |
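As a sketch, here are the two schedules as /etc/cron.d entries. The checkarray invocation shown is the usual one from Debian/Ubuntu's /etc/cron.d/mdadm; verify the exact command and times against your own file.

```crontab
# /etc/cron.d format: minute hour day-of-month month day-of-week user command
# dom=19 AND dow=0 both restricted: fires on the 19th OR on any Sunday
57 0 19 * 0	root	/usr/share/mdadm/checkarray --cron --all --idle --quiet

# dow left as *: fires only on the 19th
57 0 19 * *	root	/usr/share/mdadm/checkarray --cron --all --idle --quiet
```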
# ? Mar 2, 2014 22:59 |
|
|
# ? Jun 6, 2024 05:35 |
|
Thermopyle posted:No, yes. /etc/cron.d/monthly?
|
# ? Mar 3, 2014 00:51 |
|
Salt Fish posted:Check out man 5 crontab. The day of week field operates differently than the others in that having both a day of the week field and day of the month field will cause the command to run when either condition is met. Oh, I bet this was it. Thanks!
|
# ? Mar 3, 2014 03:28 |
|
On my Xubuntu system, "$ last" only shows the logins from the last reboot. How can I get it to show all logins going back a long time? Also, my bash history seems to be confused and only saves stuff I've typed when I've been ssh'd in to this computer. If I'm working on the computer itself in a terminal, and then I also ssh in and work on it a bit, the bash history will only show one branch and not the other. How can I get it to store everything?
|
# ? Mar 3, 2014 05:06 |
|
reading posted:On my Xubuntu system, "$ last" only shows the logins from the last reboot. How can I get it to show all logins going back a long time? last just parses a log file (/var/log/wtmp by default). If the log file is recreated at every boot then the information you want just isn't there to parse. The output of last should tell you when the log file was created; how far back it goes appears to be distro-specific. The file on my Debian machine only went back to the last boot, but on Arch it appears to go back to when I originally installed. e: On the Debian machine there is a backup log which contains the older entries (wtmp.1). Check if there is one on your machine too. If so, point last at it like so: code:
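A hedged sketch of what that looks like — wtmp.1 is Debian's rotated-backup name and is an assumption to check in your own /var/log (rotated copies may also be compressed as wtmp.1.gz):

```shell
# last(1) just formats a binary wtmp database; -f points it at another copy.
# /var/log/wtmp.1 is Debian's rotated backup name -- check your /var/log.
if [ -r /var/log/wtmp.1 ]; then
    last -f /var/log/wtmp.1
fi
```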
Xik fucked around with this message at 06:26 on Mar 3, 2014 |
# ? Mar 3, 2014 06:21 |
|
Xik posted:last just parses a log file (/var/log/wtmp by default). If the log file is recreated at every boot then the information you want just isn't there to parse. Debian logrotates these by default (in /etc/logrotate.conf, unless someone finally convinced them to use /etc/logrotate.d/security or something). Arch probably doesn't even have logrotate installed
|
# ? Mar 3, 2014 07:22 |
|
evol262 posted:Debian logrotates these by default (in /etc/logrotate.conf, unless someone finally convinced them to use /etc/logrotate.d/security or something). Arch probably doesn't even have logrotate installed Yep you're right. Debian is set up to rotate wtmp on a monthly basis. Same with Arch (so says logrotate.conf), but of course with Arch logrotate appears to require some interaction before it will work properly.
|
# ? Mar 3, 2014 07:30 |
|
reading posted:Also, my bash history seems to be confused and only saves stuff I've typed when I've been ssh'd in to this computer. If I'm working on the computer itself in a terminal, and then I also ssh in and work on it a bit, the bash history will only show one branch and not the other. How can I get it to store everything? .bash_history only gets written when a shell exits, does that explain what's happening? If you close a local session and then close an ssh session, the ssh session's commands will be the most recent history in a new shell. (Also the previous local history may be gone if the previous ssh session had more than $HISTFILESIZE commands. I think. "man bash" for more on that.) Unless you're asking if it's possible to have the history updated continuously, I'm pretty sure there's no way to do that in bash at least.
|
# ? Mar 3, 2014 17:02 |
|
spoon0042 posted:.bash_history only gets written when a shell exits, does that explain what's happening? If you close a local session and then close an ssh session, the ssh session's commands will be the most recent history in a new shell. (Also the previous local history may be gone if the previous ssh session had more than $HISTFILESIZE commands. I think. "man bash" for more on that.)

In .bashrc:

shopt -s histappend
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"

Of course, you'll have to press enter once to refresh the history if you're trying to get to a command from another session, but...
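Spelled out with comments, as a .bashrc fragment (the ${PROMPT_COMMAND:+...} guard is an addition here, to avoid a stray leading semicolon when PROMPT_COMMAND starts out empty):

```shell
# ~/.bashrc -- share history across concurrent sessions
shopt -s histappend     # append to .bash_history on exit instead of overwriting it

# after every prompt: history -a appends this session's new commands to the
# file, history -n reads in lines other sessions have appended since last read
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND;}history -a; history -n"
```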
|
# ? Mar 3, 2014 18:23 |
|
Misogynist posted:Anyone have any ideas what might be going on? I totally missed this earlier, but it doesn't look like xl2tpd is actually picking it up. Have you tried xl2tpd -D? What client? nm-applet? racoon?
|
# ? Mar 3, 2014 20:16 |
evol262 posted:Chef can be super finicky about string interpolation. Try %x{} instead of `` Still no go. I even changed it to: code:
|
|
# ? Mar 3, 2014 21:09 |
|
fletcher posted:Still no go Do you have permission problems?
|
# ? Mar 3, 2014 21:13 |
evol262 posted:Do you have permission problems? I don't think I do. get_last_database_backup_filename.sh is supposed to be created by the template resource just a few lines above the last snippet I posted, but my last run failed on that ShellOut command because get_last_database_backup_filename.sh doesn't even exist. Here's a bit more of it: code:
|
|
# ? Mar 3, 2014 21:24 |
|
Anyone know how to troubleshoot fedup issues? I'm trying to upgrade from 19 to 20. Fedup runs, downloads a bunch of packages, does the kernel thing, etc. All the preparation seems to work okay, except for a complaint about the steam yum repo I have enabled, but that shouldn't matter since it says I might have to update it manually after the upgrade (fine). Then I reboot, select the update from grub, and instead of upgrading it just continues booting normally. Any way I can figure out what's going wrong exactly?
|
# ? Mar 3, 2014 21:27 |
|
I am pretty sure fedup writes to a log file, probably something like /var/log/fedup.log. Do you have the latest fedup? I think there was a bug in it when 20 was initially released where it wouldn't update 19 to 20, and I had to get the version of fedup in the testing repo. Although I would have thought that would be in the main repos by now.
|
# ? Mar 3, 2014 21:55 |
|
spankmeister posted:Anyone know how to troubleshoot fedup issues? I'm trying to upgrade from 19 to 20. Fedup runs, downloads a bunch of packages, does the kernel thing etc.. Just upgrade with regular yum, really. Fedup is only worth the trouble when there's some big change
|
# ? Mar 3, 2014 21:59 |
|
fletcher posted:This one failed because get_last_database_backup_filename.sh didn't exist when ShellOut tried to run it (what??). Then I flipped restore_from_previous_backup to false and the chef run completes just fine, get_last_database_backup_filename.sh exists and works perfectly when executed manually. Have you tried using an execute resource? Something like: code:
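Presumably something along these lines — a Chef recipe fragment, not standalone Ruby, and the resource name, paths, and attribute are illustrative guesses rather than the poster's actual cookbook:

```ruby
# runs at converge time, after earlier resources (e.g. the template) have run
execute "capture last backup filename" do
  command "bash get_last_database_backup_filename.sh > /tmp/last_backup_filename"
  cwd "/opt/backup"                                      # illustrative path
  only_if { node["myapp"]["restore_from_previous_backup"] }  # illustrative attribute
end
```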
|
# ? Mar 3, 2014 22:10 |
evol262 posted:This really isn't wtf. It's exactly the problem you've been having the entire time, just phrased slightly differently. The run completed fine because it didn't hit the problematic code. I need the return value from get_last_database_backup_filename.sh though, which I thought you couldn't do with the execute resource?
|
|
# ? Mar 3, 2014 22:21 |
|
evol262 posted:Just upgrade with regular yum, really. Fedup is only worth the trouble when there's some big change Isn't Fedup the recommended method for upgrading between releases? e.g 19 to 20 would be considered a big change. Having done a few fedup cycles it certainly doesn't seem to be much of a hassle outside the one instance I got hit by that fedup 0.7 bug.
|
# ? Mar 3, 2014 22:29 |
|
fletcher posted:I need the return value from get_last_database_backup_filename.sh though, which I thought you couldn't do with the execute resource? The problem, IIRC, is that Chef has no idea what to do with your command in backticks while it's compiling the cookbook. "execute" and "script" say "run this on the node", but I'm more of a Puppet/Salt guy than Chef... code:
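The two-phase model is the crux, and it can be sketched in plain Ruby without Chef at all. The file name, command, and output below are made up for the illustration; the point is only that deferred blocks run after earlier deferred work, while backticks written inline would run immediately, before the "template" exists:

```ruby
require "tmpdir"

# Chef evaluates a recipe in two phases: compile (top to bottom; inline
# backticks run here) and converge (resources actually execute, in order).
last_backup = nil
deferred = []            # stands in for Chef's resource collection

Dir.mktmpdir do |dir|
  script = File.join(dir, "get_last.sh")

  # "template" resource: file creation is deferred to converge time
  deferred << -> { File.write(script, "#!/bin/sh\necho backup-2014.sql.gz\n") }

  # "ruby_block" resource: the shell-out is ALSO deferred, so by the time it
  # runs the script above has been written (inline backticks in the recipe
  # body would have run at compile time, before the file existed)
  deferred << -> { last_backup = `sh #{script}`.strip }

  deferred.each(&:call)  # the converge phase
end

puts last_backup         # => backup-2014.sql.gz
```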
|
# ? Mar 3, 2014 22:31 |
|
Varkk posted:Isn't Fedup the recommended method for upgrading between releases? e.g 19 to 20 would be considered a big change. Sure, but I basically never use it. 19->20 is relatively minor. No /bin -> /usr/bin move, cgroups changes, or other nasty stuff. yum upgrade is dead simple 19->20.
|
# ? Mar 3, 2014 22:33 |
|
evol262 posted:I totally missed this earlier, but it doesn't look like xl2tpd is actually picking it up. Have you tried xl2tpd -D? I'll see if I can get any more valuable debugging output out of it. Vulture Culture fucked around with this message at 23:13 on Mar 3, 2014 |
# ? Mar 3, 2014 23:04 |
evol262 posted:The problem, IIRC, is that Chef has no idea what to do with your command in backticks while it's compiling the cookbook. "execute" and "script" say "run this on the node", but I'm more of a Puppet/Salt guy than Chef... Ah, that makes sense. I didn't know about the ruby_block resource. It still fails with the same error though: quote:[2014-03-03T21:46:50+00:00] ERROR: Could not find last_backup_filename
|
|
# ? Mar 3, 2014 23:15 |
fletcher posted:Ah that makes sense. I didn't know about the ruby_block resource. It still fails with the same error though I changed get_last_database_backup_filename.sh to just write the value out to a text file and in Chef I do a last_backup_filename = File.read(...). Seems to work fine. Spent way too much time on this already, I'd like to come back to it again some time though. Thanks for all the help evol262.
|
|
# ? Mar 4, 2014 00:04 |
|
Playing with nfs4 on my home system and I've come across an odd error with showmount from the client side: code:
Edit: solved, aaagh. It helps to put a dot at the end of a network address ewe2 fucked around with this message at 15:02 on Mar 4, 2014 |
# ? Mar 4, 2014 14:46 |
|
Obscure question that's probably due to our san, but I thought I'd check the linux side just in case. Our san / fc / multipath volumes are not behaving the same when it comes to trim / unmap / discard: code:
Due to postgresql's bloaty ways we need volumes at least twice as big as we're going to use, so I'd like to be able to (very carefully) over provision this pool to get the most out of it. I can't do this until the SAN thinks we're using the amount of space that we're actually using. I also don't feel safe testing discard or fstrim until I have a non production volume to try it on (our ~200 other volumes also report discard_max_bytes = 0).
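A quick way to survey this from the Linux side is to read the queue limits the block layer advertises per device; 0 in discard_max_bytes means that device will refuse discards, and mount -o discard / fstrim will fail on filesystems backed by it. (Device names will differ; this just walks whatever /sys exposes.)

```shell
# print discard_max_bytes for every block device the kernel knows about;
# 0 = the device (or the SAN/multipath stack under it) rejects TRIM/UNMAP
for q in /sys/block/*/queue/discard_max_bytes; do
    [ -r "$q" ] || continue
    printf '%s\t%s\n' "$(cat "$q")" "$q"
done
```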
|
# ? Mar 4, 2014 20:54 |
|
Aquila posted:Obscure question that's probably due to our san, but I thought I'd check the linux side just in case. Our san / fc / multipath volumes are not behaving the same when it cames to trim / unmap / discard:
|
# ? Mar 4, 2014 21:01 |
|
Nope: code:
Also nice, undocumented df features Edit: unrelated, woah this looks to be a bad one for security: http://arstechnica.com/security/2014/03/critical-crypto-bug-leaves-linux-hundreds-of-apps-open-to-eavesdropping/ Aquila fucked around with this message at 21:29 on Mar 4, 2014 |
# ? Mar 4, 2014 21:14 |
|
Varkk posted:I am pretty sure Fedup writes to a log file probably something like /var/log/fedup.log I think so, I recall installing it from testing at some point. I'll verify though.
|
# ? Mar 4, 2014 21:26 |
|
Aquila posted:Nope: Why max-depth 1? I guess they may not be sparse files. Some deleted handle held open?
|
# ? Mar 4, 2014 21:33 |
|
evol262 posted:Why max-depth 1? Is there another way to make du not recursively show each subdir?
|
# ? Mar 4, 2014 21:36 |
|
spankmeister posted:Is there another way to make du not to recursively show each subdir? Piping it to grep, obviously. I guess I instinctively type "du <options> | sort -n" and just ignore the spam, since the important directories are at the bottom anyway.
|
# ? Mar 4, 2014 21:52 |
|
evol262 posted:Piping it to grep, obviously. I guess I instinctively type "du <options> | sort -n" and just ignore the spam, since the important directories are at the bottom anyway. Uhh well sure but in that case your question is answered.
|
# ? Mar 4, 2014 21:56 |
|
spankmeister posted:Is there another way to make du not to recursively show each subdir? Something like this? code:
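For the record, a sketch of the usual spellings (the demo tree is made up; with GNU coreutils, -d is the short form of --max-depth since 8.6, and GNU sort's -h does understand human-readable sizes):

```shell
# throwaway demo tree
mkdir -p /tmp/du_demo/src /tmp/du_demo/logs
dd if=/dev/zero of=/tmp/du_demo/logs/big bs=1024 count=64 2>/dev/null

du -h --max-depth=1 /tmp/du_demo             # one level deep, long form
du -h -d 1 /tmp/du_demo                      # same thing, short form
du -sh /tmp/du_demo/*                        # per-argument summaries via the glob
du -h --max-depth=1 /tmp/du_demo | sort -h   # biggest directories last

rm -rf /tmp/du_demo
```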
|
# ? Mar 4, 2014 22:35 |
|
My strange and completely personal du usage patterns: I'm usually only interested in finding out where space is being used, so I only need to go one level deep, though in this specific case I was only interested in some terse output as an example. In theory the more widely used "du -sh *" gives similar enough output when you want to just go one level deep, but I find it less useful (I guess --total would be equivalent, but then I'm entering a "--" option just like --max-depth). I often do pipe it to sort (but without the -h, would be nice if sort could handle that). As for the discard_max_bytes thing, I think it's pretty low level; I'm getting it from: /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/host0/rport-0:0-0/target0:0:0/0:0:0:1/block/sdc/queue/discard_max_bytes For example. All our local boot SSDs report it correctly. I don't think it's a filesystem thing (this is what mount checks when it decides if it's going to allow mounting with discard). I did check for deleted files just in case and only found the normal few WAL logs.
|
# ? Mar 4, 2014 22:37 |
|
Aquila posted:As for the discard_max_bytes thing, I think it's pretty low level, I'm getting it from: You may end up in the SAN thread. My last thought is a thin provisioned LUN where filesystem usage has reduced but the space hasn't been reclaimed on the head
|
# ? Mar 5, 2014 02:52 |
|
spankmeister posted:I think so, I recall installing it from testing at some point. I'll verify though. Nope, I had 0.8.0-3 and the latest is -4. One of the bugs fixed is: 1045168 - failure to boot upgrade environment if /var is not on rootfs. Aaand I have a separate /var. Trying the upgrade later today.
|
# ? Mar 5, 2014 09:49 |
|
Which kernel versions have USB attached SCSI? I can't seem to find clear data on that. wikipedia posted:As of 2012, the Linux kernel also had native UAS support, but it had compatibility problems with Texas Instruments chipsets.[16] The Linux driver had "broken" status from December 2012[17] until September 2013.[18]
|
# ? Mar 6, 2014 01:06 |
|
I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem?
|
# ? Mar 6, 2014 01:14 |
|
|
|
Molten Boron posted:I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem? I'm assuming you've tried "rpm --rebuilddb" - can you paste in the error(s) you're seeing?
|
# ? Mar 6, 2014 01:33 |