Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Doctor w-rw-rw- posted:

Having played with Docker, it leaves a lot to be desired in terms of efficiency vs, say, FreeBSD jails. And as far as base systems go, I'd create a CentOS image for Docker anyways, because you still want a solid, reliable base system.

Would be nice for Red Hat to speed up RHEL's release cycle a little bit, though, because new OS innovations tend to be built on new or young-and-stable-ish kernels.
Are you comparing FreeBSD jails against Docker, or against LXC?

Doctor w-rw-rw-
Jun 24, 2008

Misogynist posted:

Are you comparing FreeBSD jails against Docker, or against LXC?

I forgot that Docker was a layer on top of LXC, whoops. My point is that when I tested Docker recently, running even an empty image got my containers killed relatively quickly.

EDIT: killed due to memory

Doctor w-rw-rw- fucked around with this message at 20:15 on Jan 8, 2014

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
After losing my entire array to two failed disks, I'm doing what I can to set up better monitoring: I configured postfix with a relay so I get emails about failed jobs, mdadm issues, and munin warnings. What else should I use to check disk health? I don't trust SMART tests (though I do have smartmon set up), as I ran long tests on those disks a couple of weeks ago and they passed.

Ashex fucked around with this message at 11:46 on Jan 10, 2014

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Can I still hate RPM? I hate RPM for lots of reasons; here are some. This probably should go in the bitching thread, but oh well.

Due to an external spec file requirement, there is no easy way to reverse-engineer and repackage RPMs like you can with .debs. It makes it very hard to customize deployments by adding text/config files and packaging everything back up, unless people release the source/spec RPMs. It's an architectural decision that I find annoying, pointless, and against the spirit of open source. You can't just deploy a packagename-config package either, because then you're overwriting a default config file owned by one package with another of your own making - a big no-no. Better hope that RPM has a conf.d directory!
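
The closest you can get is one-way (a sketch, with made-up package names):

code:
# Unpack an RPM's payload - you get the files back, but not the spec needed to rebuild:
rpm2cpio mypackage-1.0-1.el6.x86_64.rpm | cpio -idmv
# Contrast with .debs, where the control data comes along and round-trips:
#   dpkg-deb -R mypackage_1.0-1_amd64.deb unpacked/
#   dpkg-deb -b unpacked/ mypackage_1.0-1custom_amd64.deb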

I hate the way pre/post-install scripts are obfuscated. It's very difficult to, again, extract and read them given an RPM: you have to use three separate rpm commands and string together the output, making troubleshooting and testing difficult.

You can't have multiple packages owning a directory, like you can in deb. This means that RPMs tend to leave empty shared directories all over the place when you remove them, instead of doing proper cleanup, since per RPM best practices you don't have packages own ANY directories, or you run the risk of an RPM removing unowned files. This goes against the idea that the system should be in the same state after removing an RPM as it was in before installing it.

During a package upgrade, un-intuitively, the new package pre-install is run BEFORE the old package post-install. That means if you have configuration file generation in a pre-install, or directory cleanups in a post-install, the old post-install can actually remove or modify files that might have been created during the pre-install phase of the new package. This is a huge gotcha and completely backwards from expected functionality. Especially since it only happens during upgrade; a normal removal and then installation of the new version is a different (and to my mind, correct) order.

By and large, RPMs/DEBs are still pretty damned convenient and mostly work, as long as you know the quirks. If you're really worried about completely isolating your app stack, go with an external virtualization solution. Package management does one thing and it does it reasonably well.

Hire me, Red Hat. I will fix this for you. I already have the code for fixing the directory ownage problems!

Bhodi fucked around with this message at 21:34 on Jan 10, 2014

spankmeister
Jun 15, 2008

Question is: does the code you propose retain compatibility?

Qtotonibudinibudet
Nov 7, 2011

Omsk half-wit, tell me, are you a junkie? I just live somewhere around there too; we could go do drugs together

Brother, have you heard the good news about Puppet?

I wish I could work in places where Puppet or similar was feasible.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Of course! I use Puppet every day. But Puppet has scaling/performance problems, especially if you're editing configuration files with Augeas or dealing with large numbers of hosts.

As for compatibility, sure. All it does is keep track of created directories and then sweep them up if they are empty during post-install.

stray
Jun 28, 2005

"It's a jet pack, Michael. What could possibly go wrong?"

fivre posted:

Brother, have you heard the good news about Puppet?

I wish I could work in places where Puppet or similar was feasible.
Speaking of Puppet: can anyone recommend a good resource for getting my feet wet with Puppet? I tried the tutorial VM, but it's super short and I'm still really confused. There has to be a tutorial out there somewhere which walks a person through setting up a server with common services (e.g., SSH, Samba, web server) using Puppet... right?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

stray posted:

Speaking of Puppet: can anyone recommend a good resource for getting my feet wet with Puppet? I tried the tutorial VM, but it's super short and I'm still really confused. There has to be a tutorial out there somewhere which walks a person through setting up a server with common services (e.g., SSH, Samba, web server) using Puppet... right?

It's not Puppet but I found http://gettingstartedwithchef.com/ to be extremely helpful when I started playing around with Chef.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

Of course! I use Puppet every day. But Puppet has scaling/performance problems, especially if you're editing configuration files with Augeas or dealing with large numbers of hosts.

As for compatibility, sure. All it does is keep track of created directories and then sweep them up if they are empty during post-install.
If you have a ton of hosts, you might want to consider going masterless. That said, I was running a few hundred against a single six-core VM without too much issue.
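
Masterless is basically just a local apply on a schedule - a rough sketch, with paths assumed:

code:
# On each node: pull the manifests, then apply locally (no puppet master involved)
cd /etc/puppet && git pull --quiet origin master
puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp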

Salt Fish
Sep 11, 2003

Cybernetic Crumb

Ashex posted:

After losing my entire array to two failed disks, I'm doing what I can to set up better monitoring: I configured postfix with a relay so I get emails about failed jobs, mdadm issues, and munin warnings. What else should I use to check disk health? I don't trust SMART tests (though I do have smartmon set up), as I ran long tests on those disks a couple of weeks ago and they passed.

What is your setup like? Are you using a hardware card? It sounds like you're not checking the array state directly and are instead depending on SMART tests to determine if there is an issue. Maybe I'm missing something?
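
For reference, checking the array state directly is just this (device name assumed):

code:
cat /proc/mdstat
mdadm --detail /dev/md0
# mdadm can also mail you on events by itself:
mdadm --monitor --scan --mail=you@example.com --daemonise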

hubnuts
Dec 10, 2004
hubnuts....yummy in your tummy

evol262 posted:

With that said, I think Docker is a very neat wrapper around LXC. I think CoreOS is a great idea (much as it's similar in concept to SmartOS). But I think the hype is hype, and that the CoreOS people are very good at stirring up HackerNews.

CoreOS experience designer here. We went through YC so Hacker News is our home turf :)

I'm not the most technical, but I run ~25 containers on a personal 5-node CoreOS cluster across AWS and Rackspace, so I have experience with almost everything Docker/systemd/CoreOS. I'd be happy to answer any questions you guys have.

FriedDijaaj
Feb 15, 2012

Suspicious Dish posted:

There's no documentation for GNOME Shell themes or extensions. CSS is considered a convenience for us, and the shell is not designed to be themeable by users.

I am one of the main GNOME Shell developers, so if you have any specific questions trying to find a specific element, I can answer that for you.

Well, if there's no actual documentation, is there a specific place in the gnome-shell source I could investigate to determine what properties are available?

For instance, I'm trying to style the .popup-menu-boxpointer. Where is the -arrow-border-radius property defined for this class?

Doctor w-rw-rw-
Jun 24, 2008

hubnuts posted:

CoreOS experience designer here. We went through YC so Hacker News is our home turf :)

I'm not the most technical, but I run ~25 containers on a personal 5-node CoreOS cluster across AWS and Rackspace, so I have experience with almost everything Docker/systemd/CoreOS. I'd be happy to answer any questions you guys have.
What's the minimum amount of RAM required? I've got a VPS (xen, prgmr to be precise) with a gig of RAM which struggles to even run a single container.

evol262
Nov 30, 2010
#!/usr/bin/perl

Bhodi posted:

Due to an external spec file requirement, there is no easy way to reverse-engineer and repackage RPMs like you can with .debs. It makes it very hard to customize deployments by adding text/config files and packaging everything back up, unless people release the source/spec RPMs. It's an architectural decision that I find annoying, pointless, and against the spirit of open source. You can't just deploy a packagename-config package either, because then you're overwriting a default config file owned by one package with another of your own making - a big no-no. Better hope that RPM has a conf.d directory!

This is actually an intended design decision. Add text/config files with another RPM which depends on the first, but don't repackage, so that packages which are nominally the same version don't end up actually different. Repackaging like that is insane.

Overwriting config files is fine. RPM will just make a .rpmsave; it's a non-issue.


Bhodi posted:

I hate the way pre/post-install scripts are obfuscated. It's very difficult to, again, extract and read them given an RPM: you have to use three separate rpm commands and string together the output, making troubleshooting and testing difficult.

rpm -q --scripts somepackage ?

You should troubleshoot package installation as part of a holistic process, not scripts in a vacuum. RPM macros are terrible, but this is another "works as intended" decision.
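
And it works on package files too, before anything is installed - for instance (package name made up):

code:
rpm -qp --scripts mypackage-1.0-1.el6.x86_64.rpm
rpm -qp --triggers mypackage-1.0-1.el6.x86_64.rpm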

Bhodi posted:

You can't have multiple packages owning a directory, like you can in deb. This means that RPMs tend to leave empty shared directories all over the place when you remove them, instead of doing proper cleanup, since per RPM best practices you don't have packages own ANY directories, or you run the risk of an RPM removing unowned files. This goes against the idea that the system should be in the same state after removing an RPM as it was in before installing it.

Works as intended. There's a %dir macro. It's perfectly supported and is fine practice. You shouldn't have multiple packages owning the same directory, since you can't guarantee which will get removed first. File bugs. RPMs should not be leaving empty directories.
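
A minimal %files fragment (package name and path made up):

code:
%files
%dir %{_sysconfdir}/mypackage
%config(noreplace) %{_sysconfdir}/mypackage/mypackage.conf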

Bhodi posted:

During a package upgrade, un-intuitively, the new package pre-install is run BEFORE the old package post-install. That means if you have configuration file generation in a pre-install, or directory cleanups in a post-install, the old post-install can actually remove or modify files that might have been created during the pre-install phase of the new package. This is a huge gotcha and completely backwards from expected functionality. Especially since it only happens during upgrade; a normal removal and then installation of the new version is a different (and to my mind, correct) order.

You're conflating the ordering and the scripts. A normal removal and then installation runs:

code:
%preun
%postun
%pre
%post
An upgrade runs:

code:
%pre
%post
%preun
%postun

The %pre(install) of the new package is run before the %postun(install) of the old one, but all you're doing there is checking whether it's the last copy of that package and cleaning everything up, right?

It won't actually conflict unless you're removing configuration files in %postun, and why would you?
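
If you do clean up in %postun, the guard is the $1 argument - a sketch:

code:
%postun
# $1 is the number of versions left after this operation:
# 0 = real removal, >= 1 = upgrade, so don't clean up under the new version
if [ $1 -eq 0 ]; then
    rm -rf /var/cache/mypackage
fi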

Bhodi posted:

Hire me, Red Hat. I will fix this for you. I already have the code for fixing the directory ownage problems!
But we like this!

Seriously, all the things you think are bad are things I think are good.

hubnuts posted:

CoreOS experience designer here. We went through YC so Hacker News is our home turf :)

I'm not the most technical, but I run ~25 containers on a personal 5-node CoreOS cluster across AWS and Rackspace, so I have experience with almost everything Docker/systemd/CoreOS. I'd be happy to answer any questions you guys have.
Developers welcome! What are you doing with your containers?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Bhodi posted:

Hire me, Red Hat. I will fix this for you. I already have the code for fixing the directory ownage problems!

Breaking compatibility in a 20-year-old codebase is exactly the sort of thing we won't hire you for.

Yes, rpm is broken. Yes, it's extremely silly that people love it to the point where IBM is paying us to add support for >2GB cpio archives so they don't have to split their disk images into 12 different RPMs. Yes, they distribute disk images as RPMs. No, I have no idea why.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

FriedDijaaj posted:

Well, if there's no actual documentation, is there a specific place in the gnome-shell source I could investigate to determine what properties are available?

For instance, I'm trying to style the .popup-menu-boxpointer. Where is the -arrow-border-radius property defined for this class?

That's part of the boxpointer widget. A boxpointer is a menu with an arrow at the end that sticks out. It's used all over the place, but one quick example is the panel menus at the top. It's also used for the context menu of entries (run dialog, search box, looking glass), for the context menu of apps in the dash on the left of the overview, and for the IBus candidate popup.

The code that draws the border isn't standard CSS, since it has to merge in with the arrow, so we added the -arrow prefix so it wouldn't conflict with normal CSS and confuse the theming engine. It's just like border-radius, but it only takes one length (because we didn't need that feature; we could fix that if we really cared enough), and it might get cut short if the arrow overlaps it, like what can happen at the edge of the screen.
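
In a theme it just looks like CSS with the prefix, something like this (the values here are made up; check the stock gnome-shell.css for the real ones):

code:
.popup-menu-boxpointer {
    -arrow-border-radius: 9px;  /* like border-radius, but one length only */
    -arrow-background-color: rgba(0,0,0,0.9);
    -arrow-border-width: 1px;
    -arrow-border-color: #a5a5a5;
}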

Ashex
Jun 25, 2007

These pipes are cleeeean!!!

Salt Fish posted:

What is your setup like? Are you using a hardware card? It sounds like you're not checking the array state directly and are instead depending on SMART tests to determine if there is an issue. Maybe I'm missing something?

It's purely software with mdadm/LVM. I'd go hardware, but cards cost quite a bit.

kyuss
Nov 6, 2004

Question:

I want to modify my current dual-boot setup in GRUB2 to contain not only Linux and Windows, but also my VMware ESXi 5.1 install residing on a micro USB stick.
Selecting the USB stick manually from the BIOS works, but I'm lost on how to integrate it into GRUB2. Google is of no help either.

:iiam:

How would I go about this?

kyuss fucked around with this message at 10:47 on Jan 12, 2014

midnightclimax
Dec 3, 2011

by XyloJW
What's the easiest/best way to keep two folders in sync via FTP?

Riso
Oct 11, 2008

by merry exmarx
Try rsync

Doctor w-rw-rw-
Jun 24, 2008
Agreed. Rsync, if you want to automate it. Otherwise, some random FTP client that supports mirroring folders would work in a pinch.

midnightclimax
Dec 3, 2011

by XyloJW
Yeah, I figured lftp mirror might be the way to go; I forgot about rsync and curlftpfs. Might try those out next.

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
Another option to consider is ncftp

JHVH-1
Jun 28, 2002

Ashex posted:

Another option to consider is ncftp

Or lftp's mirror command.
http://russbrooks.com/2010/11/19/lftp-cheetsheet

I love lftp.
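
A one-shot mirror looks something like this (host and paths made up; add -R to push instead of pull):

code:
lftp -u user,pass -e "mirror --only-newer /remote/dir /local/dir; quit" ftp.example.com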

evol262
Nov 30, 2010
#!/usr/bin/perl

kyuss posted:

Question:

I want to modify my current dual-boot setup in GRUB2 to contain not only Linux and Windows, but also my VMware ESXi 5.1 install residing on a micro USB stick.
Selecting the USB stick manually from the BIOS works, but I'm lost on how to integrate it into GRUB2. Google is of no help either.

:iiam:

How would I go about this?

set root=(hd1,1) #or whatever
chainloader +1
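
Or, as a proper menu entry in /etc/grub.d/40_custom (the device is a guess; verify at the GRUB prompt first):

code:
menuentry "ESXi (USB stick)" {
    set root=(hd1,1)
    chainloader +1
}
Then regenerate the config with grub2-mkconfig -o /boot/grub2/grub.cfg (or update-grub on Debian and friends).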

telcoM
Mar 21, 2009
Fallen Rib

kyuss posted:

I want to modify my current dual-boot setup in GRUB2 to contain not only Linux and Windows, but also my VMware ESXi 5.1 install residing on a micro USB stick.
Selecting the USB stick manually from the BIOS works [...]

evol262 posted:

set root=(hd1,1) #or whatever
chainloader +1

I'm afraid it won't be quite that easy. (Or if it is, you're very lucky!)

The problem is that when the BIOS is set to boot from a regular HDD, it probably won't fire up BIOS-based USB Storage functionality at all. As far as BIOS is concerned, the boot disk has already been selected and boot is underway, and if USB functionality is desired, the OS that is being booted must do it all. So the USB storage "disk" will probably be inaccessible at the point your HDD-based GRUB does its job.

The situation is similar when booting from a CD-ROM/DVD: when you tell BIOS that you wish to boot from CD-ROM, it does the necessary magic to make the boot media visible as a "regular BIOS-accessible disk device". But when you're booting from a plain old HDD, the magic is not present and the optical discs are invisible until the OS has booted up and loaded the necessary drivers.

But since you already have GRUB2 installed on your HDD, you can rather easily check if it sees the USB media.
Make sure the USB stick is plugged in, and power up the system. When you see the GRUB2 boot menu, press "c" to enter the GRUB command prompt. Then type "ls" without any arguments and press Enter. It should output a list of GRUB disk identifiers, corresponding to all disks and partitions BIOS (and therefore GRUB) sees.

With commands like "hdparm -i (hd0)" or "drivemap -l", you can get more information to help you see what each GRUB disk identifier corresponds to. Once you know the correspondence between the physical devices and GRUB disk identifiers, identifying the partitions should be easy. There is also the "search" command which can be used to look for a particular file on any partition GRUB understands: if the file is found, the command will list the disk/partition identifier(s) where the file was found.
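
In other words, a session at the GRUB prompt might look like this (the output here is invented for illustration):

code:
grub> ls
(hd0) (hd0,msdos1) (hd0,msdos2) (hd1) (hd1,msdos1)
grub> search --file --set=root /some-file-unique-to-the-stick
grub> chainloader +1
grub> boot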

evol262
Nov 30, 2010
#!/usr/bin/perl

telcoM posted:

But since you already have GRUB2 installed on your HDD, you can rather easily check if it sees the USB media.
Make sure the USB stick is plugged in, and power up the system. When you see the GRUB2 boot menu, press "c" to enter the GRUB command prompt. Then type "ls" without any arguments and press Enter. It should output a list of GRUB disk identifiers, corresponding to all disks and partitions BIOS (and therefore GRUB) sees.

With commands like "hdparm -i (hd0)" or "drivemap -l", you can get more information to help you see what each GRUB disk identifier corresponds to. Once you know the correspondence between the physical devices and GRUB disk identifiers, identifying the partitions should be easy. There is also the "search" command which can be used to look for a particular file on any partition GRUB understands: if the file is found, the command will list the disk/partition identifier(s) where the file was found.

It's 2014, and the vast majority of motherboards will support USB HDD, which GRUB2 treats as regular drives.

It doesn't hurt at all to probe, but it shouldn't be necessary unless you're trying to chainload USB from a PXE menu or similar.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Suspicious Dish posted:

Breaking compatibility in a 20 year old codebase is exactly the sort of criteria we won't hire you for.

Yes, rpm is broken. Yes, it's extremely silly that people love it to the point where IBM is paying us to add >2GB cpio archives so they don't have to split their disk images into 12 different RPMs. Yes, they distribute disk images as RPMs. No, I have no idea why.
I was only half joking! You wouldn't want to hire me to code anything, anyway. I'm very much a scripter. I honestly think RPM is adequate, even good, for what it does. Although I slightly disagree about breaking compatibility: RPM may be sacrosanct, but I know of at least one patch that broke yum compatibility between RHEL 5 and 6 (I think), namely the removal of the createrepo flags to specify older hashing/encryption algorithms that work with some older RHEL4 servers.

Sadly, there are no appropriate job openings at RH here in NoVA. I did notice some openings down in Raleigh, but I'm stuck here until next year. I might check back then, since friends tell me NC is pretty nice and hey, Beasley's chicken and waffles!

Bhodi fucked around with this message at 17:03 on Jan 13, 2014

hubnuts
Dec 10, 2004
hubnuts....yummy in your tummy

evol262 posted:

Developers welcome! What are you doing with your containers?

I've written a Heroku-like routing layer with Varnish that's backed by etcd. Right now I use it to route to a bunch of websites I host.

Doctor w-rw-rw- posted:

What's the minimum amount of RAM required? I've got a VPS (xen, prgmr to be precise) with a gig of RAM which struggles to even run a single container.

It should run fine with 512 MB but it really depends on what you're running in the container. Our main use-case is on bare metal with a large amount of RAM, so we don't swap by default.

evol262
Nov 30, 2010
#!/usr/bin/perl

Bhodi posted:

I was only half joking! You wouldn't want to hire me to code anything, anyway. I'm very much a scripter.

If you saw my resume or LinkedIn, my move to Red Hat might surprise you. I went from 7 years of systems admin/engineering into a full-time developer gig. The leap from "scripter" to developer isn't as large as it seems, and many admins are fluent in multiple languages anyway (or fluent enough that a 2-month adjustment period is enough to get you up to speed).

Bhodi posted:

I honestly think RPM is adequate, even good, for what it does. Although I slightly disagree about breaking compatibility: RPM may be sacrosanct, but I know of at least one patch that broke yum compatibility between RHEL 5 and 6 (I think), namely the removal of the createrepo flags to specify older hashing/encryption algorithms that work with some older RHEL4 servers.
Yes, XZ and FileDigest support both happened between RHEL5 and RHEL6. But that didn't break compatibility, in the sense that old RPMs work fine on newer versions of RHEL, basic behavior (like the ownership of directories) didn't change, and there are easy macros to make RPMs which work on earlier RHEL versions. Similarly, RPMs made on RHEL5 have occasional problems on RHEL4. And so forth.

Fortunately, we have Mock, which you should use anyway.
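
For example (the config name depends on your target):

code:
# Build in a clean chroot matching the target release:
mock -r epel-5-x86_64 --rebuild mypackage-1.0-1.el5.src.rpm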

But we don't promise backwards compatibility with RPM, really. We do promise that extant features of RPM will behave the same going forward, and changing RPM internals isn't the same as changing what happens when you build a specfile or SRPM.

Bhodi posted:

Sadly, there are no appropriate job openings at RH here in NoVA. I did notice some openings down in Raleigh, but I'm stuck here until next year. I might check back then, since friends tell me NC is pretty nice and hey, Beasley's chicken and waffles!
We have a lot of remote workers, including myself. My requisition was rewritten from Brno to Remote - US. There's flexibility in some of the positions for the right candidate, I guess. Never hurts to try.

You probably won't get a remote position as a Sysadmin, GSS, or similar. It's very possible as an engineer/developer.

Beve Stuscemi
Jun 6, 2001

This is not necessarily a Linux question, but a Nagios question. Any Nagios experts in the house?

I have Nagios 3 installed on Debian and I can't get plugins to work correctly. Well, the plugins work, but I don't think I'm giving them the right syntax.

Take the check_http plugin, for example. When I test it by running ./check_http -I 10.0.254.84 -S -u /owa/auth/logon.aspx

I get: HTTP OK: HTTP/1.1 200 OK - 8322 bytes in 0.019 second response time |time=0.018892s;;;0.000000 size=8322B;;;0

That's expected.
I put this in a cfg file for Nagios to pick up:

code:
define host {
        use             generic-host
        host_name       exchange02
        alias           exchange02
        address         10.0.254.84
        }

define service {
        use                     generic-service
        host_name               exchange02
        service_description     Exchange-Webmail-Service
        check_command           check_http!-S -u "/owa/auth/logon.aspx"
        }
Now that should work even without the -I flag, because the IP address is defined in the host definition.

I get this in the nagios monitoring interface: HTTP WARNING: HTTP/1.1 403 Forbidden - 1412 bytes in 0.034 second response time

Something is not translating from running the command on the command line to running it in the cfg file. Any ideas? The documentation here says I'm basically doing the correct thing: http://nagios.sourceforge.net/docs/nagioscore/3/en/monitoring-publicservices.html

Ninja Rope
Oct 22, 2005

Wee.
What is the -S flag? My nagios says [-S <version>] and I don't see you supplying a version. You should probably specify a proper HTTP host header with -k 'Host: blah.com'.

Beve Stuscemi
Jun 6, 2001

Ninja Rope posted:

What is the -S flag? My nagios says [-S <version>] and I don't see you supplying a version. You should probably specify a proper HTTP host header with -k 'Host: blah.com'.

-S is SSL

quote:

-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).

The problem is that this works OK on the command line as-is, but I can't get Nagios to accept my syntax for some reason when it picks it up from the cfg file.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Can you post your check_http command definition? It sounds like macro expansion gone bad.
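
The stock definition usually looks something like this (from memory, so check yours):

code:
define command {
        command_name    check_http
        command_line    $USER1$/check_http -H '$HOSTADDRESS$' $ARG1$
        }
Note that -H and -I treat the Host: header differently, which is exactly the kind of mismatch that can turn a command-line 200 into a 403 from Nagios.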

Sheep
Jul 24, 2003
I've presently got a RAID-1 setup that I'd like to encrypt, and I'm wondering whether it would be better to encrypt before or after building the array. At the moment I've got the drives encrypted, so I have to open each partition with cryptsetup and then use the resulting mappers to build the array. I'm thinking I should probably build an empty array and then encrypt that instead, since I'd wind up with just one mapping instead of one for each drive in the RAID, which would probably cut down on access costs somewhere. Any suggestions?

Edit: I guess encrypting the RAID mapping is obviously the better idea, since it encrypts the data just once and then replicates it out instead of encrypting the data X times over. I just needed to type it out to realize that.
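
For the record, the array-then-encrypt layout is just this (device names assumed):

code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptarray
mkfs.ext4 /dev/mapper/cryptarray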

Sheep fucked around with this message at 21:23 on Jan 13, 2014

hackedaccount
Sep 28, 2009
Any ideas on how to run a script as root the FIRST time a system shuts down, and then never again? We need to flip a setting, but I want to automate it and don't want to give the implementation people root. I was thinking of just putting a K script in /etc/rc0.d/ and having the script delete itself at the end.

Is there a better way?

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Unless there is a distro-specific way of doing things, an rc.local script that runs once on startup and then deletes or moves itself (rm $0, or mv $0 $0.done; mv $0.orig $0) is the way to go.

The actual script location/name is going to vary, anywhere from /etc/rc.local (Red Hat, most others) to /etc/init.d/boot.local (SuSE), but all of them have a script that runs post-init, and most are named rc.local.

I would REALLY recommend doing an appropriate test and doing the configuration on startup, not on shutdown, as servers can shut down or hang for any number of reasons and that script might never get run; just stick in an appropriate test to trip it. You can also do what you suggested, but actual rc.X scripts are generally reserved for daemons, and it's a kludgy, non-obvious solution.
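
Something like this at the end of rc.local (every name here is a placeholder):

code:
# run once at startup, then never again
if [ -x /usr/local/sbin/flip-setting.sh ]; then
    /usr/local/sbin/flip-setting.sh && rm -f /usr/local/sbin/flip-setting.sh
fi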

If you're trying to avoid giving people root, there are any number of ways to do that (an ENV variable comes to mind). Besides, if they can shut down a server, they must already have some high-level access, right? Create a sudoers entry.

Edit: Are you trying to pull a server out of monitoring or something?

Bhodi fucked around with this message at 22:10 on Jan 13, 2014

hackedaccount
Sep 28, 2009
It's on first shutdown, not on boot or on first boot - it has to be this way. sudo's kinda out of the picture too because there would be a huge shitstorm about giving non-admins root and blah blah blah.

I won't type up all the details about how and why because ya it's kinda dumb but ENTERPRISE SOFTWARE.

RFC2324
Jun 7, 2012

http 418

hackedaccount posted:

sudo's kinda out of the picture too because there would be a huge shitstorm about giving non-admins root and blah blah blah.

You can configure sudo to only allow certain commands, like, say, the one you want to have run.
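
Something like this (script name made up; edit with visudo):

code:
# /etc/sudoers.d/flip-setting
deployuser ALL=(root) NOPASSWD: /usr/local/sbin/flip-setting.sh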
