Salt Fish
Sep 11, 2003

Cybernetic Crumb

Thermopyle posted:

On Ubuntu, /etc/cron.d/mdadm is configured to run a redundancy check on first Sunday of the month only. I commented that out and just set it to run on the 19th of the month as ZFS is usually running a scrub on the first Sunday of the month and the two together drag the system to its knees.

The problem is...it's still running the mdadm redundancy check on the first Sunday of the month? Is there somewhere else I should look to figure out why mdadm keeps doing this?

Here's the contents of /etc/cron.d/mdadm:

code:
#
# cron.d/mdadm -- schedules periodic redundancy checks of MD devices
#
# Copyright © martin f. krafft <madduck@madduck.net>
# distributed under the terms of the Artistic Licence 2.0
#

# By default, run at 00:57 on every Sunday, but do nothing unless the day of
# the month is less than or equal to 7. Thus, only run on the first Sunday of
# each month. crontab(5) sucks, unfortunately, in this regard; therefore this
# hack (see #380425).
#57 0 * * 0 root if [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ]; then /usr/share/mdadm/checkarray --cron --all --idle --quiet; fi
57 0 19 * 0 root /usr/share/mdadm/checkarray --cron --all --idle --quiet

Check out man 5 crontab. The day-of-week field operates differently from the others: if both day-of-week and day-of-month are restricted (neither is *), the command runs when either one matches.

57 0 19 * 0 = run the command at 00:57 on every Sunday and also on every 19th of the month

57 0 19 * * = run the command at 00:57 only on the 19th of the month.
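
So the fix is just changing that trailing 0 to a *; your line in /etc/cron.d/mdadm would end up something like:

code:
57 0 19 * * root /usr/share/mdadm/checkarray --cron --all --idle --quiet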

Salt Fish fucked around with this message at 23:05 on Mar 2, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

/etc/cron.monthly?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Salt Fish posted:

Check out man 5 crontab. The day-of-week field operates differently from the others: if both day-of-week and day-of-month are restricted (neither is *), the command runs when either one matches.

57 0 19 * 0 = run the command at 00:57 on every Sunday and also on every 19th of the month

57 0 19 * * = run the command at 00:57 only on the 19th of the month.

Oh, I bet this was it. Thanks!

reading
Jul 27, 2013
On my Xubuntu system, "$ last" only shows the logins from the last reboot. How can I get it to show all logins going back a long time?

Also, my bash history seems to be confused and only saves stuff I've typed when I've been ssh'd in to this computer. If I'm working on the computer itself in a terminal, and then I also ssh in and work on it a bit, the bash history will only show one branch and not the other. How can I get it to store everything?

Xik
Mar 10, 2011

Dinosaur Gum

reading posted:

On my Xubuntu system, "$ last" only shows the logins from the last reboot. How can I get it to show all logins going back a long time?

last just parses a log file (/var/log/wtmp by default). If the log file is recreated at every boot then the information you want just isn't there to parse.

The output of last should tell you when the log file begins, and how far back that goes appears to be distro-specific. The file on my Debian machine only went back to the last boot, but on Arch it appears to go back to when I originally installed.
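
It's the "wtmp begins ..." line at the very bottom of the output, something like this (timestamp made up):

code:
$ last | tail -n 1
wtmp begins Mon Mar  3 06:25:01 2014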


e:
On the Debian machine there is a backup log which contains older entries. (wtmp.1). Check if there is one on your machine too. If so, point last to it like so:

code:
last -f /var/log/wtmp.1

Xik fucked around with this message at 06:26 on Mar 3, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

Xik posted:

last just parses a log file (/var/log/wtmp by default). If the log file is recreated at every boot then the information you want just isn't there to parse.

The output of last should tell you when the log file begins, and how far back that goes appears to be distro-specific. The file on my Debian machine only went back to the last boot, but on Arch it appears to go back to when I originally installed.


e:
On the Debian machine there is a backup log which contains older entries. (wtmp.1). Check if there is one on your machine too. If so, point last to it like so:

code:
last -f /var/log/wtmp.1

Debian logrotates these by default (in /etc/logrotate.conf, unless someone finally convinced them to use /etc/logrotate.d/security or something). Arch probably doesn't even have logrotate installed
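
From memory, the stock stanza in /etc/logrotate.conf is something like this (check your own file, it may differ):

code:
/var/log/wtmp {
    missingok
    monthly
    create 0664 root utmp
    rotate 1
}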

Xik
Mar 10, 2011

Dinosaur Gum

evol262 posted:

Debian logrotates these by default (in /etc/logrotate.conf, unless someone finally convinced them to use /etc/logrotate.d/security or something). Arch probably doesn't even have logrotate installed

Yep, you're right. Debian is set up to rotate wtmp on a monthly basis. Same with Arch (so says logrotate.conf), but of course with Arch, logrotate appears to require some interaction before it will work properly. :rolleyes:

Polygynous
Dec 13, 2006
welp

reading posted:

Also, my bash history seems to be confused and only saves stuff I've typed when I've been ssh'd in to this computer. If I'm working on the computer itself in a terminal, and then I also ssh in and work on it a bit, the bash history will only show one branch and not the other. How can I get it to store everything?

.bash_history only gets written when a shell exits, does that explain what's happening? If you close a local session and then close an ssh session, the ssh session's commands will be the most recent history in a new shell. (Also the previous local history may be gone if the previous ssh session had more than $HISTFILESIZE commands. I think. "man bash" for more on that.)

Unless you're asking if it's possible to have the history updated continuously, I'm pretty sure there's no way to do that in bash at least.
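
For reference, you can check what your shell currently has set with:

code:
echo "HISTFILE=$HISTFILE HISTSIZE=$HISTSIZE HISTFILESIZE=$HISTFILESIZE"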

evol262
Nov 30, 2010
#!/usr/bin/perl

spoon0042 posted:

.bash_history only gets written when a shell exits, does that explain what's happening? If you close a local session and then close an ssh session, the ssh session's commands will be the most recent history in a new shell. (Also the previous local history may be gone if the previous ssh session had more than $HISTFILESIZE commands. I think. "man bash" for more on that.)

Unless you're asking if it's possible to have the history updated continuously, I'm pretty sure there's no way to do that in bash at least.

In .bashrc

shopt -s histappend
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }history -a; history -n"

Of course, you'll have to press enter once to refresh the history if you're trying to get to a command from another session, but...

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

Anyone have any ideas what might be going on?

I totally missed this earlier, but it doesn't look like xl2tpd is actually picking it up. Have you tried xl2tpd -D?

What client? nm-applet? racoon?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

evol262 posted:

Chef can be super finicky about string interpolation. Try %x{} instead of ``

Still no go :(

I even changed it to:
code:
find_last_backup_command = Mixlib::ShellOut.new(node["myproject"]["backup_scripts_location"]+"/get_last_database_backup_filename.sh")
find_last_backup_command.run_command
last_backup_filename = find_last_backup_command.stdout
last_backup_filename = last_backup_filename.strip
Still doesn't work, something wreaks havoc on the chef run and by the time it gets to this it's like half the poo poo in the run_list failed silently

evol262
Nov 30, 2010
#!/usr/bin/perl

fletcher posted:

Still no go :(

I even changed it to:
code:
find_last_backup_command = Mixlib::ShellOut.new(node["myproject"]["backup_scripts_location"]+"/get_last_database_backup_filename.sh")
find_last_backup_command.run_command
last_backup_filename = find_last_backup_command.stdout
last_backup_filename = last_backup_filename.strip
Still doesn't work, something wreaks havoc on the chef run and by the time it gets to this it's like half the poo poo in the run_list failed silently

Do you have permission problems?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

evol262 posted:

Do you have permission problems?

I don't think I do. get_last_database_backup_filename.sh is supposed to be created by the template resource just a few lines above the last snippet I posted, but my last run failed on that ShellOut command because get_last_database_backup_filename.sh doesn't even exist.

Here's a bit more of it:

code:
auth = Chef::EncryptedDataBagItem.load("myproject", "auth")

template node["myproject"]["backup_scripts_location"]+"/get_last_database_backup_filename.sh" do
	source "get_last_database_backup_filename.sh.erb"
	mode 0700
	owner "root"
	group "root"
	variables(
		:aws_default_region => node["myproject"]["aws_default_region"],
		:aws_access_key_id => auth["aws_access_key_id"],
		:aws_secret_access_key => auth["aws_secret_access_key"],
		:aws_executable => node["myproject"]["virtualenv"]+"/bin/aws"
	)
end

if node["myproject"]["restore_from_previous_backup"]
	last_backup_filename = ""
	if node["myproject"]["restore_specific_database_backup"]
		last_backup_filename = node["myproject"]["restore_specific_database_backup"]
	else
		find_last_backup_command = Mixlib::ShellOut.new(node["myproject"]["backup_scripts_location"]+"/get_last_database_backup_filename.sh")
		find_last_backup_command.run_command
		last_backup_filename = find_last_backup_command.stdout
		last_backup_filename = last_backup_filename.strip
	end

	raise "Could not find last_backup_filename" if last_backup_filename == ''

	# now download and restore last_backup_filename
end
This one failed because get_last_database_backup_filename.sh didn't exist when ShellOut tried to run it (what??). Then I flipped restore_from_previous_backup to false and the chef run completes just fine, get_last_database_backup_filename.sh exists and works perfectly when executed manually. :wtf:

spankmeister
Jun 15, 2008

Anyone know how to troubleshoot fedup issues? I'm trying to upgrade from 19 to 20. Fedup runs, downloads a bunch of packages, does the kernel thing etc..

All the preparation seems to work okay, except that it complains about the Steam yum repo I have enabled, but that shouldn't matter since it says I might have to update it manually after the upgrade (fine).

Then, I reboot, select the update from grub and instead of upgrading it just continues on booting normally. :confused:

Any way I can figure out what's going wrong exactly?

Varkk
Apr 17, 2004

I am pretty sure fedup writes to a log file, probably something like /var/log/fedup.log.

Do you have the latest fedup? I think there was a bug in it when 20 was initially released where it wouldn't upgrade 19 to 20; I think I had to get the version of fedup from the testing repo. Although I would have thought that fix would be in the main repos by now.

evol262
Nov 30, 2010
#!/usr/bin/perl

spankmeister posted:

Anyone know how to troubleshoot fedup issues? I'm trying to upgrade from 19 to 20. Fedup runs, downloads a bunch of packages, does the kernel thing etc..

All the preparation seems to work okay, except that it complains about the Steam yum repo I have enabled, but that shouldn't matter since it says I might have to update it manually after the upgrade (fine).

Then, I reboot, select the update from grub and instead of upgrading it just continues on booting normally. :confused:

Any way I can figure out what's going wrong exactly?

Just upgrade with regular yum, really. Fedup is only worth the trouble when there's some big change
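
Roughly this, anyway (double-check the Fedora wiki's yum upgrade instructions for the exact steps before running it on anything you care about):

code:
yum update yum
yum clean all
yum --releasever=20 distro-sync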

evol262
Nov 30, 2010
#!/usr/bin/perl

fletcher posted:

This one failed because get_last_database_backup_filename.sh didn't exist when ShellOut tried to run it (what??). Then I flipped restore_from_previous_backup to false and the chef run completes just fine, get_last_database_backup_filename.sh exists and works perfectly when executed manually. :wtf:

This really isn't wtf. It's exactly the problem you've been having the entire time, just phrased slightly differently. The run completed fine because it didn't hit the problematic code.

Have you tried using an execute resource? Something like:

code:
execute "find-backup" do 
    Mixlib::ShellOut..

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

evol262 posted:

This really isn't wtf. It's exactly the problem you've been having the entire time, just phrased slightly differently. The run completed fine because it didn't hit the problematic code.

Have you tried using an execute resource? Something like:

code:
execute "find-backup" do 
    Mixlib::ShellOut..

I need the return value from get_last_database_backup_filename.sh though, which I thought you couldn't do with the execute resource?

Varkk
Apr 17, 2004

evol262 posted:

Just upgrade with regular yum, really. Fedup is only worth the trouble when there's some big change

Isn't Fedup the recommended method for upgrading between releases? e.g. 19 to 20 would be considered a big change.
Having done a few fedup cycles, it certainly doesn't seem to be much of a hassle outside of the one time I got hit by that fedup 0.7 bug.

evol262
Nov 30, 2010
#!/usr/bin/perl

fletcher posted:

I need the return value from get_last_database_backup_filename.sh though, which I thought you couldn't do with the execute resource?

The problem, IIRC, is that Chef runs plain Ruby like your backticks/ShellOut call while it's still compiling the cookbook, before resources like that template have actually converged, which is why the script doesn't exist yet. "execute" and "script" resources say "run this on the node" at converge time, but I'm more of a Puppet/Salt guy than Chef...

code:
ruby_block "find-backup" do
  block do
    #find some way to make this idempotent, I don't know your environment
    node["last-backup"] = `#{the_command}`.strip
  end
  action :create
end

if node["myproject"]["restore_from_previous_backup"]
	last_backup_filename = ""
	if node["myproject"]["restore_specific_database_backup"]
		last_backup_filename = node["myproject"]["restore_specific_database_backup"]
	else
		last_backup_filename = node["last-backup"]
	end

	raise "Could not find last_backup_filename" if last_backup_filename == ''

evol262
Nov 30, 2010
#!/usr/bin/perl

Varkk posted:

Isn't Fedup the recommended method for upgrading between releases? e.g. 19 to 20 would be considered a big change.

Sure, but I basically never use it. 19->20 is relatively minor. No /bin -> /usr/bin move, cgroups changes, or other nasty stuff. yum upgrade is dead simple 19->20.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

I totally missed this earlier, but it doesn't look like xl2tpd is actually picking it up. Have you tried xl2tpd -D?

What client? nm-applet? racoon?
echo "c <connection-name>" > /var/run/xl2tpd/l2tp-control once the IPsec tunnel is up. Interestingly, the same config is working on an Ubuntu 13.10 VM I stood up on my desktop so :confused:

I'll see if I can get any more valuable debugging output out of it.

Vulture Culture fucked around with this message at 23:13 on Mar 3, 2014

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

evol262 posted:

The problem, IIRC, is that Chef runs plain Ruby like your backticks/ShellOut call while it's still compiling the cookbook, before resources like that template have actually converged, which is why the script doesn't exist yet. "execute" and "script" resources say "run this on the node" at converge time, but I'm more of a Puppet/Salt guy than Chef...

code:
ruby_block "find-backup" do
  block do
    #find some way to make this idempotent, I don't know your environment
    node["last-backup"] = `#{the_command}`.strip
  end
  action :create
end

if node["myproject"]["restore_from_previous_backup"]
	last_backup_filename = ""
	if node["myproject"]["restore_specific_database_backup"]
		last_backup_filename = node["myproject"]["restore_specific_database_backup"]
	else
		last_backup_filename = node["last-backup"]
	end

	raise "Could not find last_backup_filename" if last_backup_filename == ''


Ah that makes sense. I didn't know about the ruby_block resource. It still fails with the same error though :(

quote:

[2014-03-03T21:46:50+00:00] ERROR: Could not find last_backup_filename

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

fletcher posted:

Ah that makes sense. I didn't know about the ruby_block resource. It still fails with the same error though :(

I changed get_last_database_backup_filename.sh to just write the value out to a text file and in Chef I do a last_backup_filename = File.read(...). Seems to work fine. Spent way too much time on this already, I'd like to come back to it again some time though. Thanks for all the help evol262.

ewe2
Jul 1, 2009

Playing with nfs4 on my home system and I've come across an odd error with showmount from the client side

code:
root@marvin:~# showmount -e server
rpc mount export: RPC: Authentication error; why = Failed (unspecified error)
The client isn't running mountd, /etc/hosts.{allow,deny} check out OK, and rpcinfo -p server from the client works fine too. Is there some wacky nfs4 extra I'm missing? I'm not using gss/kerberos on the server.

Edit: solved, aaagh. It helps to put a dot at the end of a network address :doh:
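
(For anyone who hits the same thing: the hosts_access(5) prefix syntax wants the trailing dot. Hypothetical example, network made up:)

code:
# /etc/hosts.allow
mountd: 192.168.1.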

ewe2 fucked around with this message at 15:02 on Mar 4, 2014

Aquila
Jan 24, 2003

Obscure question that's probably due to our SAN, but I thought I'd check the Linux side just in case. Our SAN / FC / multipath volumes are not behaving the same when it comes to trim / unmap / discard:

code:
Volume 5   postgresql1 discard_max_bytes = 0        Cap 1.9TB, SAN reported usage: 1.9TB, df reports: 1.1TB
Volume 37  postgresql2 discard_max_bytes = 33553920 Cap 2.0TB, SAN reported usage: 1.4TB, df reports: 785GB
Volume 38  postgresql3 discard_max_bytes = 33553920 Cap 2.0TB, SAN reported usage: 1.5TB, df reports: 896GB
Volume 140 postgresql4 discard_max_bytes = 0        Cap 1.9TB, SAN reported usage: 1.7TB, df reports: 994GB
All are ext4 volumes, all are running on ubuntu 12.04 lts servers, same kernel (3.8.0-33-generic #48~precise1-Ubuntu), same version of multipath tools, same qla2xxx kernel module. SAN is a Hitachi HUS 150 with dp and dt. I am wondering if there's anything on the linux side at volume detection that could vary. Mount options are the same, and this is probably happening before mounting. Volumes were created basically the same, capacity differences are just due to specifying in GB or TB at creation.

Due to postgresql's bloaty ways we need volumes at least twice as big as we're going to use, so I'd like to be able to (very carefully) over provision this pool to get the most out of it. I can't do this until the SAN thinks we're using the amount of space that we're actually using. I also don't feel safe testing discard or fstrim until I have a non production volume to try it on (our ~200 other volumes also report discard_max_bytes = 0).

evol262
Nov 30, 2010
#!/usr/bin/perl

Aquila posted:

Obscure question that's probably due to our SAN, but I thought I'd check the Linux side just in case. Our SAN / FC / multipath volumes are not behaving the same when it comes to trim / unmap / discard:

code:
Volume 5   postgresql1 discard_max_bytes = 0        Cap 1.9TB, SAN reported usage: 1.9TB, df reports: 1.1TB
Volume 37  postgresql2 discard_max_bytes = 33553920 Cap 2.0TB, SAN reported usage: 1.4TB, df reports: 785GB
Volume 38  postgresql3 discard_max_bytes = 33553920 Cap 2.0TB, SAN reported usage: 1.5TB, df reports: 896GB
Volume 140 postgresql4 discard_max_bytes = 0        Cap 1.9TB, SAN reported usage: 1.7TB, df reports: 994GB
All are ext4 volumes, all are running on ubuntu 12.04 lts servers, same kernel (3.8.0-33-generic #48~precise1-Ubuntu), same version of multipath tools, same qla2xxx kernel module. SAN is a Hitachi HUS 150 with dp and dt. I am wondering if there's anything on the linux side at volume detection that could vary. Mount options are the same, and this is probably happening before mounting. Volumes were created basically the same, capacity differences are just due to specifying in GB or TB at creation.

Due to postgresql's bloaty ways we need volumes at least twice as big as we're going to use, so I'd like to be able to (very carefully) over provision this pool to get the most out of it. I can't do this until the SAN thinks we're using the amount of space that we're actually using. I also don't feel safe testing discard or fstrim until I have a non production volume to try it on (our ~200 other volumes also report discard_max_bytes = 0).

I bet if you checked "du -h --apparent-size", it'd match what the SAN thinks.

Aquila
Jan 24, 2003

Nope:

code:
root@postgresql1:/var/lib/postgresql# du -h --apparent-size --max-depth 1
16K	./lost+found
1.1T	./9.3
857M	./9.2
1.1T	.

root@postgresql2:/var/lib/postgresql# du -h --apparent-size --max-depth 1
16K	./lost+found
816G	./9.3
832G	.
(pg2 df now reports 833GB, those numbers in my first post are a week or so old).

Also nice, undocumented df features :(

Edit: unrelated, woah this looks to be a bad one for security: http://arstechnica.com/security/2014/03/critical-crypto-bug-leaves-linux-hundreds-of-apps-open-to-eavesdropping/

Aquila fucked around with this message at 21:29 on Mar 4, 2014

spankmeister
Jun 15, 2008

Varkk posted:

I am pretty sure fedup writes to a log file, probably something like /var/log/fedup.log.

Do you have the latest fedup? I think there was a bug in it when 20 was initially released where it wouldn't upgrade 19 to 20; I think I had to get the version of fedup from the testing repo. Although I would have thought that fix would be in the main repos by now.

I think so, I recall installing it from testing at some point. I'll verify though.

evol262
Nov 30, 2010
#!/usr/bin/perl

Aquila posted:

Nope:

code:
root@postgresql1:/var/lib/postgresql# du -h --apparent-size --max-depth 1
16K	./lost+found
1.1T	./9.3
857M	./9.2
1.1T	.

root@postgresql2:/var/lib/postgresql# du -h --apparent-size --max-depth 1
16K	./lost+found
816G	./9.3
832G	.
(pg2 df now reports 833GB, those numbers in my first post are a week or so old).

Also nice, undocumented df features :(

Why max-depth 1?

I guess they may not be sparse files. Some deleted handle held open?

spankmeister
Jun 15, 2008

evol262 posted:

Why max-depth 1?

I guess they may not be sparse files. Some deleted handle held open?

Is there another way to make du not recursively show each subdir?

evol262
Nov 30, 2010
#!/usr/bin/perl

spankmeister posted:

Is there another way to make du not recursively show each subdir?

Piping it to grep, obviously. I guess I instinctively type "du <options> | sort -n" and just ignore the spam, since the important directories are at the bottom anyway.
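
i.e. something along these lines (path from earlier in the thread, adjust to taste):

code:
du -xk /var/lib/postgresql | sort -n | tail -n 20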

spankmeister
Jun 15, 2008

evol262 posted:

Piping it to grep, obviously. I guess I instinctively type "du <options> | sort -n" and just ignore the spam, since the important directories are at the bottom anyway.

Uhh well sure but in that case your question is answered. ;)

Longinus00
Dec 29, 2005
Ur-Quan

spankmeister posted:

Is there another way to make du not recursively show each subdir?

Something like this?
code:
du -sh *

Aquila
Jan 24, 2003

My strange and completely personal du usage patterns: I'm usually only interested in finding out where space is being used, so I only need to go one level deep, though in this specific case I was only interested in some terse output as an example. In theory the more widely used "du -sh *" gives similar enough output when you want to just go one level deep, but I find it less useful (I guess --total would be equivalent, but then I'm entering a "--" option just like --max-depth). I often do pipe it to sort (but without the -h; would be nice if sort could handle that).

As for the discard_max_bytes thing, I think it's pretty low level, I'm getting it from:

/sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/host0/rport-0:0-0/target0:0:0/0:0:0:1/block/sdc/queue/discard_max_bytes

For example. All our local boot ssd's report it correctly. I don't think it's a filesystem thing (this is what mount checks when it decides if it's going to allow mounting with discard). I did check for deleted files just in case and only found the normal few wal logs.

evol262
Nov 30, 2010
#!/usr/bin/perl

Aquila posted:

As for the discard_max_bytes thing, I think it's pretty low level, I'm getting it from:

/sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/host0/rport-0:0-0/target0:0:0/0:0:0:1/block/sdc/queue/discard_max_bytes

For example. All our local boot ssd's report it correctly. I don't think it's a filesystem thing (this is what mount checks when it decides if it's going to allow mounting with discard). I did check for deleted files just in case and only found the normal few wal logs.

It's device + sysfs + filesystem. All three have to agree. I don't know why multiple LUNs from the same head, all running ext4, would present differently unless they were created with different mkfs options or mounted differently. Does tune2fs or mount show "discard" on some but not others? These aren't all paths to the same LUN, right?

You may end up in the SAN thread. My last thought is a thin-provisioned LUN where filesystem usage has gone down but the space hasn't been reclaimed on the head.
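
A few quick things worth comparing across the good and bad volumes (device and mapper names here are placeholders, substitute your own):

code:
cat /sys/block/sdc/queue/discard_max_bytes
tune2fs -l /dev/mapper/mpath-postgresql1 | grep -i "mount options"
grep discard /proc/mounts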

spankmeister
Jun 15, 2008

spankmeister posted:

I think so, I recall installing it from testing at some point. I'll verify though.

Nope, I had 0.8.0-3 and the latest is -4, one of the bugs fixed is:
1045168 - failure to boot upgrade environment if /var is not on rootfs

Aaand I have a separate var. :downs:

Trying the upgrade later today. :)

Shaocaholica
Oct 29, 2002

Fig. 5E
Which kernel versions have USB attached SCSI? I can't seem to find clear data on that.

wikipedia posted:

As of 2012, the Linux kernel also had native UAS support, but it had compatibility problems with Texas Instruments chipsets.[16] The Linux driver had "broken" status from December 2012[17] until September 2013.[18]
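
I guess rather than chasing version numbers I can also just check a given box directly:

code:
modinfo uas   # is the uas module built for this kernel?
lsusb -t      # shows which driver a plugged-in enclosure actually bound to (uas vs usb-storage)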

Molten Boron
Nov 1, 2010

Fucking boars, hunting whores.
I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem?

xdice
Feb 15, 2006

Molten Boron posted:

I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem?

I'm assuming you've tried "rpm --rebuilddb" - can you paste in the error(s) you're seeing?
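
If a plain rebuild keeps dying, the usual next step is clearing out the stale Berkeley DB environment files and rebuilding from scratch (back them up first, obviously), something like:

code:
mkdir -p /root/rpmdb-backup
cp -a /var/lib/rpm/__db.* /root/rpmdb-backup/
rm -f /var/lib/rpm/__db.*
rpm --rebuilddb
yum clean all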
