Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Nope, turned it off completely (not just turning off the policies), still doesn't do poo poo.

Salt Fish
Sep 11, 2003

Cybernetic Crumb
I'm pretty noob so I could use a bit of help. I want to write a one-liner to quickly compare the PHP configurations on two servers. Here is what I have so far:

ssh user@remote "php -i" | diff <(php -i)

The basic idea is that I want to feed diff the remote php -i output and the local php -i output. That one throws an interesting error:

diff: missing operand after `/dev/fd/63'

I was also thinking:

diff <(php -i) <(ssh user@remote "php -i")

This one actually gives me a password prompt but then feeds characters into the password field before I can type it.

Soup in a Bag
Dec 4, 2009

Salt Fish posted:

ssh user@remote "php -i" | diff <(php -i)

You have to tell diff to use STDIN for the second file. That's the missing operand the error is referring to. Try:

ssh user@remote "php -i" | diff <(php -i) -

Salt Fish
Sep 11, 2003

Cybernetic Crumb

Soup in a Bag posted:

You have to tell diff to use STDIN for the second file. That's the missing operand the error is referring to. Try:

ssh user@remote "php -i" | diff <(php -i) -

Amazing! It works! Thank you. I'm not familiar with the use of "-". What would I call it if I were searching for it in documentation?

Soup in a Bag
Dec 4, 2009
I don't know of a term for that use of '-' as a filename. It's just a linux/unix convention that some programs (e.g. diff, tar, cat) use to indicate stdin/stdout as the input/output.
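A quick way to see the convention in action (file paths here are just throwaway examples):

```shell
# '-' as a filename tells diff to read that side from standard input
printf 'a\nb\n' > /tmp/left.txt

# identical input: diff prints nothing and exits 0
printf 'a\nb\n' | diff /tmp/left.txt -

# cat uses the same convention to splice stdin between real files
echo 'from stdin' | cat - /tmp/left.txt
```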

waffle iron
Jan 16, 2004

Salt Fish posted:

Amazing! It works! Thank you. I'm not familiar with the use of "-". What would I call it if I were searching for it in documentation?

Alternatively you could use /dev/stdin. On Linux that is a symlink to /proc/self/fd/0. Every Unix process opens three file descriptors by default: 0 is stdin, 1 is stdout and 2 is stderr.

waffle iron fucked around with this message at 09:05 on Jan 5, 2014

cowboy beepboop
Feb 24, 2001

Is it possible to run a Windows VM in the background (or remotely?) and have windows applications open in their own windows on my Gnome 3 desktop?

Empathy + SIPE just doesn't work well enough to replace Lync yet :(

RFC2324
Jun 7, 2012

http 418

my stepdads beer posted:

Is it possible to run a Windows VM in the background (or remotely?) and have windows applications open in their own windows on my Gnome 3 desktop?

Empathy + SIPE just doesn't work well enough to replace Lync yet :(

VirtualBox has a seamless mode that I used to do something like this with. You still need the Windows taskbar to show, tho.

cowboy beepboop
Feb 24, 2001

Cheers I will check it out. Maybe I can do some awful hack to hide the taskbar as well

nitrogen
May 21, 2004

Oh, what's a 217°C difference between friends?

my stepdads beer posted:

Is it possible to run a Windows VM in the background (or remotely?) and have windows applications open in their own windows on my Gnome 3 desktop?

Empathy + SIPE just doesn't work well enough to replace Lync yet :(

You try Pidgin + SIPE? I use that and it works great, except for the screen sharing and other multimedia bits.

If that's what you need, then yeah, you're on the right track.

Experto Crede
Aug 19, 2008

Keep on Truckin'
What would be the best way to run a script automatically at boot?

hackedaccount
Sep 28, 2009
What's the script gonna do? Hit us with a bit more info because the answer can vary depending on what yer doin.

nitrogen
May 21, 2004

Oh, what's a 217°C difference between friends?

Experto Crede posted:

What would be the best way to run a script automatically at boot?

Also, what flavor of linux/UNIX? It'll depend on that, too.

Basically, though, the generic answers without more information are (in order of generic properness):

1) Init script/upstart/systemd (plus to this way is you can run the same script, or a similar one, at shutdown too)
2) rc.local (or your unix's equivalent) (lazy answer, especially if whatever you start doesn't need to be properly shut down on reboot/poweroff)
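For option 1 on a systemd distro, the whole thing can be a tiny oneshot unit. The unit name and script path below are made up, adjust to taste:

```ini
# /etc/systemd/system/myscript.service  (hypothetical name/path)
[Unit]
Description=Run my script once at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable myscript.service`. Adding an ExecStop= line is what gets you the matching run-at-shutdown behavior.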

Megaman
May 8, 2004
I didn't read the thread BUT...

Experto Crede posted:

What would be the best way to run a script automatically at boot?

/etc/rc.local

It's bash, doesn't get much simpler

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Experto Crede posted:

What would be the best way to run a script automatically at boot?

It really depends on what you're trying to do.

cowboy beepboop
Feb 24, 2001

nitrogen posted:

You try Pidgin + SIPE? I use that and it works great, except for the screen sharing and other multimedia bits.

If that's what you need, then yeah, you're on the right track.

That's what I am currently using, unfortunately Pidgin doesn't integrate well with Gnome 3 like Empathy does - I often miss people chatting to me. I'll probably just stick with it instead of running VMs now I think about it.

3spades
Mar 20, 2003

37! My girlfriend sucked 37 dicks!

Customer: In a row?

CentOS posted:

With great excitement I'd like to announce that we are joining the Red Hat family. The CentOS Project ( http://www.centos.org ) is joining forces with Red Hat.
Working as part of the Open Source and Standards team ( http://community.redhat.com/ ) to foster rapid innovation beyond the platform into the next generation of emerging technologies.
Working alongside the Fedora and RHEL ecosystems, we hope to further expand on the community offerings by providing a platform that is easily consumed, by other projects to promote their code while we maintain the established base.

We are also launching the new CentOS.org website (http://www.centos.org ).
http://lists.centos.org/pipermail/centos-announce/2014-January/020100.html

New site looks pretty nice.

Doctor w-rw-rw-
Jun 24, 2008
Wait, what? A Red Hat-debranded RHEL without support is now an official Red Hat product?

Not sure I totally understand, though I guess from a competitive point of view it pulls the rug out from under the other derivatives if the CentOS->RHEL upsell keeps you attached to Red Hat.

EDIT: BTW, http://www.centos.org/variants/ <--there's a misspelling of "variants" as "varients"

Doctor w-rw-rw- fucked around with this message at 23:15 on Jan 7, 2014

JHVH-1
Jun 28, 2002
I guess instead of fighting it they could give a support or upgrade path and make money off it easier. Would be nice if turnover time between RHEL releases and the corresponding CentOS release was shortened.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
It's not a Red Hat product. This does not mean CentOS will be supported by Red Hat. It's one of the many projects we help out with through our "Open Source & Standards Team". We contribute design resources, consulting services, etc. I don't think we're providing any engineering effort to CentOS specifically.

The Open Source & Standards Team does also investigate third-party upstreams where we could contribute meaningfully. Our contributions to OpenStack came through an investigation from that team.

Does anybody have any other questions about this or about Red Hat in general and our involvement with upstreams?

Doctor w-rw-rw-
Jun 24, 2008

Suspicious Dish posted:

It's not a Red Hat product. This does not mean CentOS will be supported by Red Hat. It's one of the many projects we help out with through our "Open Source & Standards Team". We contribute design resources, consulting services, etc. I don't think we're providing any engineering effort to CentOS specifically.

Officially endorsed, then, which is arguably more important to the decisionmakers who would prefer to go from free Red Hat-endorsed community product to paid Red Hat-supported enterprise product. And key developers now receive a Red Hat paycheck. Don't get me wrong, I'm not worried, and I think Red Hat is very trustworthy. The firewalls go further to serve that trust than most companies would, even.

While I can certainly guess, I'm curious as to the strategic reasons for doing this. Though I suppose that's not something that is necessarily OK to talk about openly.

spankmeister
Jun 15, 2008

JHVH-1 posted:

I guess instead of fighting it they could give a support or upgrade path and make money off it easier. Would be nice if turnover time between RHEL releases and the corresponding CentOS release was shortened.

AFAIK they already do have a support/upgrade path. Basically you RHEL-ify your system in-place then buy support.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I just double-checked, and all I'm allowed to say is already in the FAQ: http://community.redhat.com/centos-faq/#_motivations

The private explanation is almost the same exact wording, but with one extra sentence in the middle somewhere. You guys have pretty much already figured out what that sentence is.

Personally, I'm curious about this supposed "firewall" between RHEL and CentOS. Our infrastructure doesn't have this (all the source code to everything we do is open everywhere) and memo-list and announce-list talk about RHEL planning all the time.

Suspicious Dish fucked around with this message at 00:02 on Jan 8, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

Suspicious Dish posted:

Personally, I'm curious about this supposed "firewall" between RHEL and CentOS. Our infrastructure doesn't have this (all the source code to everything we do is open everywhere) and memo-list and announce-list talk about RHEL planning all the time.

I have a couple ideas, but I'm not exactly an expert on internal infra, and they may not be feasible.

Maybe it's just the honor system and not attending RHEL planning calls.

Experto Crede
Aug 19, 2008

Keep on Truckin'
Basically, what it will do is, every time the system boots, copy a clean hosts file (/etc/cleanhost) over the one currently in /etc/hosts.

I'm sharing my work machine with someone and we both need to make changes to it for testing sites, so I want a clean canvas every time the system boots.

Edit: Also this box is running Mint 15

Experto Crede fucked around with this message at 00:53 on Jan 8, 2014

JHVH-1
Jun 28, 2002

Experto Crede posted:

Basically, what it will do is, every time the system boots, copy a clean hosts file (/etc/cleanhost) over the one currently in /etc/hosts.

I'm sharing my work machine with someone and we both need to make changes to it for testing sites, so I want a clean canvas every time the system boots.

Edit: Also this box is running Mint 15

I never knew this existed personally, but have you seen this?
http://blog.tremily.us/posts/HOSTALIASES/
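If you do end up going the copy-at-boot route instead, it's a one-liner in rc.local (assuming the clean copy really does live at /etc/cleanhost like you described):

```shell
#!/bin/sh
# /etc/rc.local -- restore a pristine hosts file on every boot
cp /etc/cleanhost /etc/hosts
exit 0
```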

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

I have a couple ideas, but I'm not exactly an expert on internal infra, and they may not be feasible.

Maybe it's just the honor system and not attending RHEL planning calls.
The single biggest problem the CentOS project has is that when a new major version is on the table, it takes the CentOS devs months (closer to a year, in the case of CentOS 6.0) to get the build system rejiggered to the point where they can produce a useful set of packages. Even if it's one guy from Red Hat letting them borrow some of the build infrastructure, I think it's a great move, especially as they're gearing up for a RHEL 7 release and trying to move customers onto it.

If they're not doing that, I don't really see the point either :dominic:

nitrogen
May 21, 2004

Oh, what's a 217°C difference between friends?

Suspicious Dish posted:

I just double-checked, and all I'm allowed to say is already in the FAQ: http://community.redhat.com/centos-faq/#_motivations

The private explanation is almost the same exact wording, but with one extra sentence in the middle somewhere. You guys have pretty much already figured out what that sentence is.

Personally, I'm curious about this supposed "firewall" between RHEL and CentOS. Our infrastructure doesn't have this (all the source code to everything we do is open everywhere) and memo-list and announce-list talk about RHEL planning all the time.

Does that sentence happen to mention a company with headquarters on Twin Dolphin Drive in Redwood City at all? oracle (Yeah, I know you can't answer either way, but gently caress those guys.)

I bring that up, because I had a funny with that company today.

Those loving guys keep trying to poach my customers when we have to call for Oracle support. Last time I called using a customer support agreement (what we usually do for much larger customers we support) they go, "You know, if you dump $MYCOMPANY and allow us to run your infrastructure, we'll do it cheaper and better!"

Considering the customer in question left Oracle and paid ETF's to come to us, we all laughed at that little bit afterwards, but still. They are a scummy company, even if their DB product is pretty decent.

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

The single biggest problem the CentOS project has is that when a new major version is on the table, it takes the CentOS devs months (closer to a year, in the case of CentOS 6.0) to get the build system rejiggered to the point where they can produce a useful set of packages. Even if it's one guy from Red Hat letting them borrow some of the build infrastructure, I think it's a great move, especially as they're gearing up for a RHEL 7 release and trying to move customers onto it.

If they're not doing that, I don't really see the point either :dominic:
Much of the CentOS 6 delay was due to absenteeism in project leadership, from what I remember, hence SL picking up the torch and beating them out of the gate.

I'd be surprised if any of the CentOS people landed on the build engineering team, which is intrinsically tied into various internal systems that I don't think are public knowledge.

The speculated reasons match the internal ones, basically, and it benefits us to make sure EL distros are up to date and in use instead of dumping to Ubuntu or something.

What Dish meant is that, though some teams maintain their own internal gerrit, everyone in engineering (and maybe everyone in the entire company) has access to browse sources. Actually submitting builds is tighter, but it's not really structured to "firewall" off people from source.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
It's not like RHEL has a lot of source code to it. It's mostly upstream code from the respective projects, and upstream build scripts and projects. Most of the stuff in RHEL that would affect CentOS is more about planning and strategizing and making decisions than anything else. And I'm not actually sure why the CentOS team feels they should be out of the loop from that — does CentOS want to be a "pure" downstream of us?

So, personally, I'm just as confused by this decision as you guys are.

nitrogen posted:

Does that sentence happen to mention a company with headquarters on Twin Dolphin drive in Redwood city at all?

Actually, it doesn't mention Oracle or any other companies at all. It provides some more internal motivation as to why CentOS is a net win for us. It's mostly a repeat of what's already said there, and parts of the story are public already: other distributions like Debian and Ubuntu are winning in my butt space, and we need to represent ourselves better in order to capture that market.


I mean, Oracle isn't even trying to hide the fact that it's scummy or that OEL is a "supported" fork of RHEL that costs less. I mean, look:

http://linux.oracle.com/documentation/OL6/Red_Hat_Enterprise_Linux-6-Deployment_Guide-en-US.pdf

We try to be a true open-source company and release our manuals as CC-BY-SA, so this is entirely legal for them to do. But this sort of stuff where people just take the hard work we do makes me really cynical towards the "open-source dream" where nobody owns any IP. We've long tried to claim that nobody would be as big of a poo poo to steal all your stuff. And we're right: people didn't steal our work and release it as their own. Only companies can do something that shameless.

Suspicious Dish fucked around with this message at 06:22 on Jan 8, 2014

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
Lost two disks in my RAID 5 array a couple days ago, so I'm rebuilding with RAID 6 now. This will be an 8TB volume; is ext4 still the best choice? I'd like to try out XFS, but not if I'm going to see reduced performance/reliability.

Ashex fucked around with this message at 09:40 on Jan 8, 2014

fishmonger
Jan 26, 2004

This is a title.

my stepdads beer posted:

Cheers I will check it out. Maybe I can do some awful hack to hide the taskbar as well

I'm doing this now, actually. Must use Lync for voice calls - no more desk phone, all Lync at my company now.
Bah.
Lync sucks bad enough that I usually use my cell phone anyway. Not sure why I'm bothering.

Doctor w-rw-rw-
Jun 24, 2008

Suspicious Dish posted:

Actually, it doesn't mention Oracle or any other companies at all. It provides some more internal motivation as to why CentOS is a net win for us. It's mostly a repeat of what's already said there, and parts of the story are public already: other distributions like Debian and Ubuntu are winning in my butt space, and we need to represent ourselves better in order to capture that market.

As someone who generally likes what Ubuntu has done and continues to do for desktop Linux, I tend to think less of sysadmins who see fit to use it for a serious server in production (though I keep it to myself). I have a lot of trouble taking Ubuntu seriously as a server distro, so I have trouble understanding why other people would. Is it really just the intersection of "free" and "officially endorsed" that makes people use it for that?

As much as I like Ubuntu, I really hope this hurts their server market share significantly. Godspeed.

FriedDijaaj
Feb 15, 2012
I am trying to update an old gnome shell theme. Is there documentation somewhere that lists all the elements available to be themed and what CSS properties can be applied to them? Currently, I'm just picking and choosing elements in the adwaita theme that I recognize, applying a CSS property, and refreshing the theme to see what happens.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

Doctor w-rw-rw- posted:

As someone who generally likes what Ubuntu has done and continues to do for desktop Linux, I tend to think less of sysadmins who see fit to use it for a serious server in production (though I keep it to myself). I have a lot of trouble taking Ubuntu seriously as a server distro, so I have trouble understanding why other people would. Is it really just the intersection of "free" and "officially endorsed" that makes people use it for that?

As much as I like Ubuntu, I really hope this hurts their server market share significantly. Godspeed.

As de facto admin for a CS/bioinformatics/genetics research group:
  • Packages that are more up-to-date in default repositories
  • Can actually be upgraded in-place from one version to the next, which has never given me any problems at all with server installs. It's far less disruptive for users than a full reinstall (including having to redo a lot of custom/one-off software installs)
  • It matches what a lot of us use on our desktop machines -- most of us use desktop Linux and nobody uses anything besides an Ubuntu variant

Personal:
  • I irrationally hate RPM because of bad experiences in Red Hat 7.2 a long time ago
  • Ubuntu has concrete plans to ditch Python 2 very soon

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Lysidas posted:

As de facto admin for a CS/bioinformatics/genetics research group:
  • Packages that are more up-to-date in default repositories
  • Can actually be upgraded in-place from one version to the next, which has never given me any problems at all with server installs. It's far less disruptive for users than a full reinstall (including having to redo a lot of custom/one-off software installs)
  • It matches what a lot of us use on our desktop machines -- most of us use desktop Linux and nobody uses anything besides an Ubuntu variant

Personal:
  • I irrationally hate RPM because of bad experiences in Red Hat 7.2 a long time ago
  • Ubuntu has concrete plans to ditch Python 2 very soon
Every distro has such poo poo variety of packaged R modules (particularly Bioconductor) that I never saw much of a difference once you start getting into highly custom builds. My HPC cluster back at Cold Spring Harbor Laboratory had over 3,000 custom RPMs that we built, and I'm not sure building them as debs for Ubuntu would have been any easier.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

FriedDijaaj posted:

I am trying to update an old gnome shell theme. Is there documentation somewhere that lists all the elements available to be themed and what CSS properties can be applied to them? Currently, I'm just picking and choosing elements in the adwaita theme that I recognize, applying a CSS property, and refreshing the theme to see what happens.

There's no documentation for GNOME Shell themes or extensions. CSS is considered a convenience for us, and the shell is not designed to be themeable by users.

I am one of the main GNOME Shell developers, so if you have any specific questions trying to find a specific element, I can answer that for you.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Lysidas posted:

  • I irrationally hate RPM because of bad experiences in Red Hat 7.2 a long time ago

I want to bring this up, because it keeps being brought up. I'm not going to defend RPM or DEB: they both have some cool features, but both are irreparably broken in my opinion. I'm just going to say that it doesn't matter anymore, and that's a big failing.

Docker was a great innovation, because it completely invalidated the "RPM or DEB" argument, and allowed separate, vertically integrated apps to be shipped as bundles. A lot of big companies (Oracle, GitHub, VMWare) ship their apps as OVFs or Docker containers now, not as RPMs/DEBs.
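The whole pitch fits in a four-line Dockerfile. The app and package names below are placeholders, but the shape is real:

```dockerfile
# hypothetical app bundle: its dependencies travel inside the image,
# so the host's package manager never hears about them
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y libsomedep1
COPY myapp /opt/myapp/
CMD ["/opt/myapp/bin/myapp"]
```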

With Docker, the underlying OS doesn't even matter. (But, as you might notice, they built it on Ubuntu's kernel! Yes, Ubuntu server is a thing.) A lot of distros made the "technically pure" decision to prevent packages from bundling libraries, which meant that if some app you were using used an old version of a library and wasn't rebuilt, you couldn't upgrade the app or your distro. You just got locked out.

RHEL's solution to this is to just not upgrade the distro, or to do it very, very carefully and never break ABI. That's a terrible solution. Docker containers are really cool tech, because now apps are separate from the OS: separate, vertically integrated components isolated from the underlying system.

The ideal OS in this case is not RHEL, it's CoreOS, which is just systemd and like two other things. There's no rpm, no dpkg, no nothing. Docker completely destroyed that dumb argument.

</rant>

Sorry, I've been pretty annoyed by our lack of vision and real product with RHEL. It's too hard to change it because our hundreds of thousands of customers morph it into something else, and meanwhile we're getting our rear end kicked by newer tech. The Innovator's Dilemma at work here.

Doctor w-rw-rw-
Jun 24, 2008
Having played with Docker, it leaves a lot to be desired in terms of efficiency vs, say, FreeBSD jails. And as far as base systems go, I'd create a CentOS image for Docker anyways, because you still want a solid, reliable base system.

Would be nice for Red Hat to speed up RHEL's release cycle a little bit, though, because new OS innovations tend to be built on new or young-and-stable-ish kernels.

evol262
Nov 30, 2010
#!/usr/bin/perl

Lysidas posted:

As de facto admin for a CS/bioinformatics/genetics research group:
  • Packages that are more up-to-date in default repositories
  • Can actually be upgraded in-place from one version to the next, which has never given me any problems at all with server installs. It's far less disruptive for users than a full reinstall (including having to redo a lot of custom/one-off software installs)
  • It matches what a lot of us use on our desktop machines -- most of us use desktop Linux and nobody uses anything besides an Ubuntu variant

Personal:
  • I irrationally hate RPM because of bad experiences in Red Hat 7.2 a long time ago
  • Ubuntu has concrete plans to ditch Python 2 very soon

All of these advantages are also present in Fedora, which is on the same release cadence.

I don't really have anything against Ubuntu, though (despite my feelings towards Canonical). Breaking ABI in LTS releases is stupid. Other than that, people should use whatever.

Doctor w-rw-rw- posted:

Having played with Docker, it leaves a lot to be desired in terms of efficiency vs, say, FreeBSD jails. And as far as base systems go, I'd create a CentOS image for Docker anyways, because you still want a solid, reliable base system.

Would be nice for Red Hat to speed up RHEL's release cycle a little bit, though, because new OS innovations tend to be built on new or young-and-stable-ish kernels.

Docker works on top of LXC. It's a very thin wrapper. LXC is very similar in efficiency to any other container-based virt.

Many of the OS innovations get backported to RHEL kernels, even if only as tech releases. But yes, the release cycle is slow. This is a common complaint internally as well. Keep waiting for cool new stuff in the next version of Fedora and all of a sudden it's been 3 years since the last version of RHEL and you're nowhere close.

Suspicious Dish posted:

I want to bring this up, because it keeps being brought up. I'm not going to defend RPM or DEB: they both have some cool features, but both are irreparably broken in my opinion. I'm just going to say that it doesn't matter anymore, and that's a big failing.
I'll defend RPM in the sense that "hating RPM" from RH 6.3 is not the same as "hating RPM" when it's backed by yum and you're not stuck tracking down 300 packages for dependencies on your own, and the majority of people I've spoken to who "hate RPM" "love apt" as if apt is somehow the competitor instead of DEB. DEB is just as broken as RPM. APT, DNF, Yum, Zypper, and any other package manager mostly makes these debates pointless.

I prefer the design decisions in RPM to those in DEB, but it's a wash.

Suspicious Dish posted:

Docker was a great innovation, because it completely invalidated the "RPM or DEB" argument, and allowed separate, vertically integrated apps to be shipped as bundles. A lot of big companies (Oracle, GitHub, VMWare) ship their apps as OVFs or Docker containers now, not as RPMs/DEBs.

With Docker, the underlying OS doesn't even matter. (But, as you might notice, they built it on Ubuntu's kernel! Yes, Ubuntu server is a thing.) A lot of distros made the "technically pure" decision to prevent packages from bundling libraries, which meant that if some app you were using used an old version of a library and wasn't rebuilt, you couldn't upgrade the app or your distro. You just got locked out.
This is exactly the way OSX apps are shipped, and Windows DLL packing is similar. I generally feel that Docker is something old repackaged as something new -- spewing libraries all over /opt/someapp/lib and setting LD_LIBRARY_PATH has many of the same advantages as Docker's bundling.

The underlying OS matters in the sense that Docker is still using LXC, and all the limitations of LXC apply. With Docker being ported to every distro known to man, you're still going to have to worry about ABI. The kernel still matters. Especially for Oracle, VMware, and other products which rely on kernel modules, which are nontrivial to work with in LXC.

Suspicious Dish posted:

RHEL's solution to this is to just not upgrade the distro, or to do it very, very carefully and never break ABI. That's a terrible solution. Docker containers are really cool tech, because now apps are separate from the OS: separate, vertically integrated components isolated from the underlying system.

The ideal OS in this case is not RHEL, it's CoreOS, which is just systemd and like two other things. There's no rpm, no dpkg, no nothing. Docker completely destroyed that dumb argument.

</rant>

Sorry, I've been pretty annoyed by our lack of vision and real product with RHEL. It's too hard to change it because our hundreds of thousands of customers morph it into something else, and meanwhile we're getting our rear end kicked by newer tech. The Innovator's Dilemma at work here.
With that said, I think Docker is a very neat wrapper around LXC. I think CoreOS is a great idea (much as it's similar in concept to SmartOS). But I think the hype is hype, and that the CoreOS people are very good at stirring up HackerNews.

Containerization is the way forward, whether that's OpenStack instances, OpenShift gears, Docker containers, or something else. Docker's done very well at making something readily available and easy to use for the average user, and it's convenient for RSA, Oracle, or whomever to be able to ship a small Dockerfile to build things. But it's not a replacement for a full-fledged OS (much as containerized systems are OSes) that needs tweaking to anything below the software level.

The strategy taken by RHEL (and SLES) is why companies like Oracle have traditionally been willing to work with us. IBM doesn't want to muck with MVFS to keep up with upstream. They want to sell ClearCase licenses. This applies to everyone else. I agree that there's a lack of vision with RHEL's product direction, but in the same sense that all operating systems have this problem, basically. RHEL is slowly becoming something to build products on top of. Gluster, oVirt, OpenStack, Condor, et al. and their corresponding downstream products are where we go now. RHEL needs to be stable enough to keep going with traditional Linux workloads and flexible enough to build on top of until companies start shipping appliances instead of applications.

But Docker is not necessarily a solution to that problem (though it's certainly a solution to some problems, like "how do I easily ship a development environment without worrying about Vagrant or hosting expanding qcow images, vmdks, and whatever other format is popular next year").
