|
Doctor w-rw-rw- posted:Having played with Docker, I find it leaves a lot to be desired in terms of efficiency vs., say, FreeBSD jails. And as far as base systems go, I'd create a CentOS image for Docker anyway, because you still want a solid, reliable base system.
|
# ? Jan 8, 2014 18:51 |
|
Misogynist posted:Are you comparing FreeBSD jails against Docker, or against LXC? I forgot that Docker was a layer on top of LXC, whoops. When I tested Docker recently, running an empty image killed my containers relatively quickly, is my point. EDIT: killed due to memory Doctor w-rw-rw- fucked around with this message at 20:15 on Jan 8, 2014 |
# ? Jan 8, 2014 20:11 |
|
After losing my entire array to two disks, I'm doing what I can to set up better monitoring. I've configured postfix with a relay so I get emails about failed jobs/mdadm issues and munin warnings. What else should I use to check disk health? I don't trust SMART tests (though I do have smartmontools set up), as I had run long tests on those disks a couple weeks ago and they passed.
Ashex fucked around with this message at 11:46 on Jan 10, 2014 |
# ? Jan 10, 2014 09:29 |
|
I hate RPM for lots of reasons. Can I still hate RPM? Here are some. Probably should go in the bitching thread, but oh well.

Due to an external spec file requirement, there is no easy way to reverse-engineer and repackage rpms like you can with .debs. It makes it very hard to customize deployments by adding text/config files and packaging it back up unless people release the source/spec rpms. It's an architecture decision that I find annoying and pointless and against the spirit of open source. You can't just deploy a packagename-config package either, because then you're overwriting a default config file owned by one package with another of your own making - a big no-no. Better hope that rpm has a conf.d directory!

I hate the way pre/post-install scripts are obfuscated. It's very difficult to, again, extract/read them given an rpm. You have to use 3 separate rpm commands and string together the output, making troubleshooting and testing difficult.

You can't have multiple packages owning a directory, like you can in deb. This means that rpms tend to leave empty shared directories all over the place when you remove rpms instead of doing proper cleanup, since per rpm best practices you don't have packages own ANY directories or you run the risk of an rpm removing unowned files. This goes against the idea that the system should be in the exact state after removing an rpm that it was in before installing it.

During a package upgrade, unintuitively, the new package's pre-install is run BEFORE the old package's post-uninstall. That means if you have configuration file generation in a pre-install, or directory cleanups in a post-uninstall, the old post-uninstall can actually remove or modify files that might have been created during the pre-install phase of the new package. This is a huge gotcha and completely backwards from expected functionality. Especially since it only happens during upgrade; a normal removal and then installation of the new version uses a different (and to my mind, correct) order.

By and large, RPMs/DEBs are still pretty damned convenient and mostly work, as long as you know the quirks. If you're really worried about completely isolating your app stack, go with an external virtualization solution. Package management does one thing and it does it reasonably well.

Hire me, Red Hat. I will fix this for you. I already have the code for fixing the directory ownage problems! Bhodi fucked around with this message at 21:34 on Jan 10, 2014 |
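[Editor's sketch of the "3 separate rpm commands" inspection dance mentioned above; foo.rpm is a placeholder package, not one from the thread.]

```shell
# Inspecting an rpm without installing it (foo.rpm is a placeholder).
# All the scriptlets (%pre/%post/%preun/%postun) in one shot:
rpm -qp --scripts foo.rpm
# The file list:
rpm -qp --list foo.rpm
# Unpack the payload so the files can be read or diffed:
mkdir extracted && cd extracted
rpm2cpio ../foo.rpm | cpio -idmv
```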
# ? Jan 10, 2014 21:20 |
|
Question is: does the code you propose retain compatibility?
|
# ? Jan 10, 2014 21:41 |
|
Brother, have you heard the good news about Puppet? I wish I could work in places where Puppet or similar was feasible.
|
# ? Jan 10, 2014 21:45 |
|
Of course! I use puppet every day. But there are puppet scaling performance problems, especially if you're editing configuration files with augeas or dealing with large numbers of hosts. As for compatibility, sure. All it does is keep track of created directories and then sweep them up if they are empty during post-install.
|
# ? Jan 10, 2014 21:45 |
|
fivre posted:Brother, have you heard the good news about Puppet?

Speaking of Puppet: can anyone recommend a good resource for getting my feet wet with Puppet? I tried the tutorial VM, but it's super short and I'm still really confused. There has to be a tutorial out there somewhere which walks a person through setting up a server with common services (e.g., SSH, Samba, web server) using Puppet... right?
|
# ? Jan 10, 2014 22:39 |
stray posted:Speaking of Puppet: can anyone recommend a good resource for getting my feet wet with Puppet? I tried the tutorial VM, but it's super short and I'm still really confused. There has to be a tutorial out there somewhere which walks a person through setting up a server with common services (e.g., SSH, Samba, web server) using Puppet... right? It's not Puppet but I found http://gettingstartedwithchef.com/ to be extremely helpful when I started playing around with Chef.
|
|
# ? Jan 10, 2014 22:43 |
|
Bhodi posted:Of course! I use puppet every day. But there are puppet scaling performance problems, especially if you're editing configuration files with augeas or dealing with large numbers of hosts.
|
# ? Jan 11, 2014 00:49 |
|
Ashex posted:After losing my entire array to two disks I'm doing what I can to setup better monitoring, configured postfix with a relay so I get emails about failed jobs/mdadm issues and munin warnings. What else should I use to check disk health? I don't trust SMART tests (but I do have smartmon setup) as I had run long tests on those disks a couple weeks ago and they passed. What is your setup like? Are you using a hardware card? It sounds like you're not checking the array state directly and are instead depending on SMART tests to determine if there is an issue. Maybe I'm missing something?
|
# ? Jan 11, 2014 01:26 |
|
evol262 posted:With that said, I think Docker is a very neat wrapper around LXC. I think CoreOS is a great idea (much as it's similar in concept to SmartOS). But I think the hype is hype, and that the CoreOS people are very good at stirring up HackerNews.

CoreOS experience designer here. We went through YC, so Hacker News is our home turf. I'm not the most technical, but I run ~25 containers on a personal 5-node CoreOS cluster across AWS and Rackspace, so I have experience with almost everything docker/systemd/CoreOS. I'd be happy to answer any questions you guys have.
|
# ? Jan 11, 2014 01:27 |
|
Suspicious Dish posted:There's no documentation for GNOME Shell themes or extensions. CSS is considered a convenience for us, and the shell is not designed to be themeable by users.

Well, if there's no actual documentation, is there a specific place in the gnome-shell source I could investigate to determine what properties are available? For instance, I'm trying to style the .popup-menu-boxpointer. Where is the -arrow-border-radius property defined for this class?
|
# ? Jan 11, 2014 02:52 |
|
hubnuts posted:CoreOS experience designer here. We went through YC so Hacker News is our home turf

What's the minimum amount of RAM required? I've got a VPS (xen, prgmr to be precise) with a gig of RAM which struggles to even run a single container.
|
# ? Jan 11, 2014 03:24 |
|
Bhodi posted:Due to an external spec file requirement, there is no easy way to reverse-engineer and repackage rpms like you can with .debs. It makes it very hard to customize deployments by adding text/config files and packaging it back up unless people release the source/spec rpms. It's an architecture decision that I find annoying and pointless and against the spirit of open source. You can't just deploy a packagename-config package either, because then you're overwriting a default config file owned by one package with another of your own making - a big no-no. Better hope that rpm has a conf.d directory!

This is actually an intended design decision. Add text/config files with another rpm which depends on the first, but don't repackage, so that you end up with packages which are nominally the same version but actually different - that's insane. Overwriting config files is fine. RPM will just make a .rpmsave; it's a non-issue.

Bhodi posted:I hate the way pre/postinstall scripts are obfuscated. It's very difficult to, again, extract/read them given an rpm. You have to use 3 separate rpm commands and string together the output, making troubleshooting and testing difficult.

rpm -q --scripts somepackage ? You should troubleshoot package installation as part of a holistic process, not scripts in a vacuum. RPM macros are terrible, but this is another "works as intended" decision.

Bhodi posted:You can't have multiple packages owning a directory, like you can in deb. This means that rpms tend to leave empty shared directories all over the place when you remove rpms instead of doing proper cleanup, since per rpm best practices you don't have packages own ANY directories or you run the risk of an rpm removing unowned files. This goes against the idea that the system should be in the exact state after removing an rpm that it was in before installing it.

Works as intended. There's a %dir macro. It's perfectly supported and is fine practice. You shouldn't have multiple packages owning the same directory, since you can't guarantee which will get removed first. File bugs. RPMs should not be leaving empty directories.

Bhodi posted:During a package upgrade, unintuitively, the new package's pre-install is run BEFORE the old package's post-uninstall. That means if you have configuration file generation in a pre-install, or directory cleanups in a post-uninstall, the old post-uninstall can actually remove or modify files that might have been created during the pre-install phase of the new package. This is a huge gotcha and completely backwards from expected functionality. Especially since it only happens during upgrade; a normal removal and then installation of the new version uses a different (and to my mind, correct) order.

You're conflating the ordering and the scripts. A normal removal and then installation runs:

%preun %postun %pre %post

An upgrade runs:

%pre %post %preun %postun

The new %pre is run before the old %postun, but all you're doing there is checking if it's the last copy of that package and cleaning everything up, right? It won't actually conflict unless you're removing configuration files in %postun, and why would you?

Bhodi posted:Hire me, Red Hat. I will fix this for you. I already have the code for fixing the directory ownage problems!

Seriously, all the things you think are bad are things I think are good.

hubnuts posted:CoreOS experience designer here. We went through YC so Hacker News is our home turf

Developers welcome! What are you doing with your containers?
|
# ? Jan 11, 2014 04:42 |
|
Bhodi posted:Hire me, Red Hat. I will fix this for you. I already have the code for fixing the directory ownage problems! Breaking compatibility in a 20 year old codebase is exactly the sort of criteria we won't hire you for. Yes, rpm is broken. Yes, it's extremely silly that people love it to the point where IBM is paying us to add >2GB cpio archives so they don't have to split their disk images into 12 different RPMs. Yes, they distribute disk images as RPMs. No, I have no idea why.
|
# ? Jan 11, 2014 05:06 |
|
FriedDijaaj posted:Well, if there's not actual documentation, is there a specific place in the gnome-shell source I could investigate to determine what properties are available? That's part of the boxpointer widget. A boxpointer is a menu with an arrow at the end that sticks out. It's used all over the place, but one quick example is the panel menus on the top. It's also used for the context menu of entries (run dialog, search box, looking glass), for the context menu of apps in the dash on the left of the overview, for the IBus candidate popup. The code that draws the border isn't standard CSS since it has to merge in with the arrow, so we added the -arrow prefix so it wouldn't conflict with normal CSS and make the theming engine get confused. It's just like border-radius, but it only takes one length (because we didn't need that feature, we could fix that if we really cared enough), and it might get cut short if the arrow overlaps with it, like what can happen at the edge of the screen.
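[Editor's note: the stock definitions for these properties live in data/theme/gnome-shell.css in the gnome-shell source tree. A hypothetical user-theme override might look like this - property values here are made-up examples, not the shipped defaults:]

```css
/* Hypothetical override; values are examples only. */
.popup-menu-boxpointer {
    -arrow-border-radius: 9px;      /* rounding of the bubble's corners */
    -arrow-background-color: #2e3436;
    -arrow-border-width: 1px;
    -arrow-border-color: #777777;
    -arrow-base: 24px;              /* width of the arrow's base */
    -arrow-rise: 11px;              /* how far the arrow sticks out */
}
```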
|
# ? Jan 11, 2014 05:10 |
|
Salt Fish posted:What is your setup like? Are you using a hardware card? It sounds like you're not checking the array state directly and are instead depending on SMART tests to determine if there is an issue. Maybe I'm missing something? It's purely software with mdadm/lvm, I'd go hardware but cards cost quite a bit.
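[Editor's sketch: for software RAID specifically, mdadm has a monitor mode of its own that mails on Fail/DegradedArray events, independent of SMART. Mail address and conf path below are placeholders (Debian keeps the file in /etc/mdadm/, RHEL in /etc/).]

```shell
# In mdadm.conf (path varies by distro):
#   MAILADDR you@example.com
# Send a test mail per array to prove delivery works end-to-end:
mdadm --monitor --scan --oneshot --test
# Cheap degraded-array check suitable for cron/munin: an underscore
# in the status brackets of /proc/mdstat means a missing member, e.g. [U_]
grep -q '\[U*_' /proc/mdstat && echo "array degraded" | mail -s mdstat you@example.com
```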
|
# ? Jan 11, 2014 10:41 |
|
Question: I want to modify my current dual boot setup in GRUB2 to not only contain Linux and Windows, but also my VMWare ESXi 5.1 install residing on a micro USB Stick. Selecting the USB stick manually from the BIOS works, but I'm lost at the integration part into GRUB2. Google is of no help either. How would I go about this? kyuss fucked around with this message at 10:47 on Jan 12, 2014 |
# ? Jan 12, 2014 10:44 |
|
What's the easiest/best way to keep two folders in sync via FTP?
|
# ? Jan 12, 2014 14:07 |
|
Try rsync
|
# ? Jan 12, 2014 14:48 |
|
Agreed. Rsync, if you want to automate it. Otherwise, some random FTP client that supports mirroring folders would work in a pinch.
|
# ? Jan 12, 2014 15:04 |
|
Yeah, I figured lftp mirror might be the way to go, forgot about rsync and curlftpfs. Might try that out next.
|
# ? Jan 12, 2014 17:01 |
|
Another option to consider is ncftp
|
# ? Jan 12, 2014 18:17 |
|
Ashex posted:Another option to consider is ncftp Or lftp's mirror command. http://russbrooks.com/2010/11/19/lftp-cheetsheet I love lftp.
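[Editor's sketch of the lftp mirror suggestion; host, credentials and paths are placeholders.]

```shell
# One-shot push of a local tree to an FTP server with lftp's mirror.
# -R reverses direction (local -> remote); --delete prunes remote files
# that no longer exist locally. Drop -R to pull instead.
lftp -u user,secret ftp://ftp.example.com \
     -e "mirror -R --delete --verbose /local/dir /remote/dir; quit"
```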
|
# ? Jan 12, 2014 18:22 |
|
kyuss posted:Question:

set root=(hd1,1) # or whatever
chainloader +1
|
# ? Jan 12, 2014 19:27 |
|
kyuss posted:I want to modify my current dual boot setup in GRUB2 to not only contain Linux and Windows, but also my VMWare ESXi 5.1 install residing on a micro USB Stick.

evol262 posted:set root=(hd1,1) #or whatever

I'm afraid it won't be quite that easy. (Or if it is, you're very lucky!)

The problem is that when the BIOS is set to boot from a regular HDD, it probably won't fire up BIOS-based USB Storage functionality at all. As far as the BIOS is concerned, the boot disk has already been selected and boot is underway; if USB functionality is desired, the OS that is being booted must do it all. So the USB storage "disk" will probably be inaccessible at the point your HDD-based GRUB does its job.

The situation is similar when booting from a CD-ROM/DVD: when you tell the BIOS that you wish to boot from CD-ROM, it does the necessary magic to make the boot media visible as a "regular BIOS-accessible disk device". But when you're booting from a plain old HDD, the magic is not present and the optical discs are invisible until the OS has booted up and loaded the necessary drivers.

But since you already have GRUB2 installed on your HDD, you can rather easily check whether it sees the USB media. Make sure the USB stick is plugged in, and power up the system. When you see the GRUB2 boot menu, press "c" to enter the GRUB command prompt. Then type "ls" without any arguments and press Enter. It should output a list of GRUB disk identifiers, corresponding to all disks and partitions the BIOS (and therefore GRUB) sees. With commands like "hdparm -i (hd0)" or "drivemap -l", you can get more information to help you see what each GRUB disk identifier corresponds to. Once you know the correspondence between the physical devices and GRUB disk identifiers, identifying the partitions should be easy.

There is also the "search" command, which can be used to look for a particular file on any partition GRUB understands: if the file is found, the command will list the disk/partition identifier(s) where it was found.
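[Editor's sketch putting the probing advice and the chainloader together as a menuentry. The device name and UUID are placeholders - confirm them with ls/search at the GRUB prompt, then regenerate grub.cfg (e.g. update-grub on Debian-family systems).]

```cfg
# Hypothetical stanza for /etc/grub.d/40_custom; (hd1,1) and the UUID
# are placeholders, not values from the thread.
menuentry "VMware ESXi 5.1 (USB stick)" {
    insmod part_msdos
    insmod chain
    # Either hardcode the device GRUB reported...
    set root=(hd1,1)
    # ...or locate the stick by filesystem UUID:
    # search --no-floppy --fs-uuid --set=root 1234-ABCD
    chainloader +1
}
```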
|
# ? Jan 12, 2014 22:11 |
|
telcoM posted:But since you already have GRUB2 installed on your HDD, you can rather easily check if it sees the USB media. It's 2014, and the vast majority of motherboards will support USB HDD, which GRUB2 treats as regular drives. It doesn't hurt at all to probe, but it shouldn't be necessary unless you're trying to chainload USB from a PXE menu or similar.
|
# ? Jan 13, 2014 16:07 |
|
Suspicious Dish posted:Breaking compatibility in a 20 year old codebase is exactly the sort of criteria we won't hire you for. Sadly, there's no appropriate job openings at RH here in NoVA. I did notice some openings down in Raleigh, but I'm stuck here until next year. I might check back then, since friends tell me NC is pretty nice and hey, Beasley's chicken and waffles! Bhodi fucked around with this message at 17:03 on Jan 13, 2014 |
# ? Jan 13, 2014 17:00 |
|
evol262 posted:Developers welcome! What are you doing with your containers?

I've written a Heroku-like routing layer with Varnish that's backed by etcd. Right now I use it to route to a bunch of websites I host.

Doctor w-rw-rw- posted:What's the minimum amount of RAM required? I've got a VPS (xen, prgmr to be precise) with a gig of RAM which struggles to even run a single container.

It should run fine with 512 MB, but it really depends on what you're running in the container. Our main use-case is on bare metal with a large amount of RAM, so we don't swap by default.
|
# ? Jan 13, 2014 18:58 |
|
Bhodi posted:I was only half joking! You wouldn't want to hire me to code anything, anyway. I'm very much a scripter.

If you saw my resume or LinkedIn, my move to Red Hat might surprise you. I went from 7 years of systems admin/engineering into a full-time developer gig. The leap from "scripter" to developer isn't as large as it seems, and many admins are fluent in multiple languages anyway (or fluent enough that a 2-month adjustment period is enough to get you up to speed).

Bhodi posted:I honestly think RPM is adequate, even good, for what it does. Although I slightly disagree about breaking compatibility; RPM may be sacrosanct but I know of at least one patch that has broken yum compatibility between RHEL, uh, I think it's 5 and 6: the removal of the createrepo flags to specify older hashing/encryption algorithms that work with some older RHEL4 servers.

Fortunately, we have Mock, which you should use anyway. But we don't promise backwards compatibility with RPM, really. We do promise that extant features of RPM will behave the same going forward, and changing RPM internals isn't the same as changing what happens when you build a specfile or SRPM.

Bhodi posted:Sadly, there's no appropriate job openings at RH here in NoVA. I did notice some openings down in Raleigh, but I'm stuck here until next year. I might check back then, since friends tell me NC is pretty nice and hey, Beasley's chicken and waffles!

You probably won't get a remote position as a sysadmin, GSS, or similar. It's very possible as an engineer/developer.
|
# ? Jan 13, 2014 19:10 |
|
This is not necessarily a Linux question, but a Nagios question. Any Nagios experts in the house? I have Nagios 3 installed on Debian and I can't get plugins to work correctly. Well, the plugins work, but I don't think I'm giving them the right syntax. Take the check_http plugin for example: when I test it by running ./check_http -I 10.0.254.84 -S -u /owa/auth/logon.aspx I get: HTTP OK: HTTP/1.1 200 OK - 8322 bytes in 0.019 second response time |time=0.018892s;;;0.000000 size=8322B;;;0 That's expected. I put this in a cfg file for nagios to pick up: code:
I get this in the nagios monitoring interface: HTTP WARNING: HTTP/1.1 403 Forbidden - 1412 bytes in 0.034 second response time Something is not translating from running the command on the command line to running it in the cfg file. Any ideas? The documentation here says I'm basically doing the correct thing: http://nagios.sourceforge.net/docs/nagioscore/3/en/monitoring-publicservices.html
|
# ? Jan 13, 2014 20:30 |
|
What is the -S flag? My nagios says [-S <version>] and I don't see you supplying a version. You should probably specify a proper HTTP host header with -k 'Host: blah.com'.
|
# ? Jan 13, 2014 20:50 |
|
Ninja Rope posted:What is the -S flag? My nagios says [-S <version>] and I don't see you supplying a version.

-S is SSL:

quote:-S, --ssl=VERSION

The problem is this works OK on the command line as-is, but I can't get it to accept my syntax for some reason when Nagios takes it up from the cfg file.
|
# ? Jan 13, 2014 21:00 |
|
Can you post your check_http command definition? It sounds like macro expansion gone bad.
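[Editor's sketch of what a working command/service pair might look like - a dedicated command sidesteps argument splitting and macro-expansion surprises. Only the check_http flags and IP come from the thread; the object names and template are invented.]

```cfg
# Hypothetical Nagios 3 definitions; names are placeholders.
define command{
        command_name    check_owa
        command_line    $USER1$/check_http -I $HOSTADDRESS$ -S -u /owa/auth/logon.aspx
        }

define service{
        use                     generic-service
        host_name               owa-server
        service_description     OWA login page
        check_command           check_owa
        }
```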
|
# ? Jan 13, 2014 21:15 |
|
I've presently got a situation where I have a RAID-1 setup that I'd like to encrypt, and I'm wondering whether it would be better to encrypt it before or after building the array. At the moment I've got the drives encrypted, so I have to open each partition with cryptsetup and then use the resultant mappers to build the array. I'm thinking I should probably build an empty array and then encrypt that instead, as I'd wind up with just one mapping instead of one for each drive in the RAID, as that would probably cut down on access costs somewhere. Any suggestions? Edit: I guess encrypting the RAID mapping is obviously the better idea since it would be encrypting the data just once and then replicating that out instead of encrypting the data X times over, and I just needed to type it out to realize that fact. Sheep fucked around with this message at 21:23 on Jan 13, 2014 |
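[Editor's sketch of the "encrypt the array, not the members" conclusion; device names are examples only, and these commands destroy data on the named partitions.]

```shell
# Build the mirror from the raw partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# One LUKS layer on top of the md device: data is encrypted once, then mirrored.
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptmd0
mkfs.ext4 /dev/mapper/cryptmd0
mount /dev/mapper/cryptmd0 /mnt/secure
```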
# ? Jan 13, 2014 21:18 |
|
Any ideas on how to run a script as root the FIRST time a system shuts down and then never again? We need to flip a setting but I want to automate it and don't want to give the implementation people root. I was thinking just put a K script in /etc/rc0.d/ and have the script delete itself at the end. Is there a better way?
|
# ? Jan 13, 2014 21:47 |
|
Unless there is a distro-specific way of doing things, an rc.local script that runs once and then deletes/moves itself (rm $0) or (mv $0 $0.done; mv $0.orig $0) on startup is the way to go. The actual script location/name is going to vary, anywhere from /etc/rc.local (Red Hat, most others) to /etc/init.d/boot.local (SuSE), but all of them have a script that runs post-init, and most are named rc.local.

I would REALLY recommend doing an appropriate test and doing the configuration on startup, not on shutdown, as servers can shut down / hang for any number of reasons and that script might never get run. Just stick in your appropriate tests to trip it. You can also do what you suggested, but actual rc.X scripts are generally reserved for daemons, and that's kind of a kludgy, non-obvious solution.

If you're trying to get around giving people root, there are any number of ways (an ENV variable comes to mind) to get around that. Besides, if they can shut down a server, they must already have some high-level access, right? Create a sudoers entry.

Edit: Are you trying to pull a server out of monitoring or something? Bhodi fucked around with this message at 22:10 on Jan 13, 2014 |
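[Editor's sketch of the delete-itself trick in isolation; /tmp paths are placeholders - in real use the file would live at rc.local or wherever the distro runs post-init scripts.]

```shell
# A run-once, self-removing script. The touched marker file stands in
# for whatever one-time work you actually need done.
cat > /tmp/runonce.sh <<'EOF'
#!/bin/sh
# ...the one-time work goes here...
touch /tmp/runonce.done
# last act: remove this script so it can never run twice
rm -- "$0"
EOF
chmod +x /tmp/runonce.sh
/tmp/runonce.sh    # first run: does the work, then deletes itself
```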
# ? Jan 13, 2014 22:01 |
|
It's on first shut down, not on boot or on first boot - it has to be this way. sudo's kinda out of the picture too because there would be a huge shitstorm about giving non-admins root and blah blah blah. I won't type up all the details about how and why because ya it's kinda dumb but ENTERPRISE SOFTWARE.
|
# ? Jan 13, 2014 22:42 |
|
|
hackedaccount posted:sudo's kinda out of the picture too because there would be a huge shitstorm about giving non-admins root and blah blah blah. You can configure sudo to only allow certain commands, like, say, the one you want to have run.
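[Editor's sketch of such an entry; the group name and script path are placeholders, and sudoers files should always be edited via visudo.]

```cfg
# Hypothetical /etc/sudoers.d/flip-setting fragment.
# Members of "deploy" may run exactly this one script as root, nothing else:
%deploy ALL=(root) NOPASSWD: /usr/local/sbin/flip-setting.sh
```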
|
# ? Jan 13, 2014 22:45 |