Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

enterprises running a relatively homogeneous environment
I don't want to sound like a troll here, but ahahahahahahaha, good lord, for all their bluster this isn't the way enterprises run at all. It's a mishmash of un-standardized tooling, reproducing the same system six times in six different departments using six different sets of software packages. If you're lucky enough to run a central one of something, it's some horrible software package with Tivoli or BMC or CA on the box that SELinux will never, ever protect and the vendor would never, ever support if it did.

I see the use cases for SELinux: it's a great way of buttressing the security of a handful of applications that everyone runs, like OpenSSH or Apache. But for most organizations, the real vulnerabilities are going to come from weird line-of-business apps requiring a 5-year-old version of Java, or custom in-house software that's never, ever going to have someone writing Mandatory Access Control policies for it. It was a great idea for the world of shared LAMP hosting, but it's a really old and weird and outmoded concept in the cloud age.

Varkk posted:

If it is really slowing you down set it to permissive and check what it is logging and then change the settings to allow what you are trying to do.
Others have touched on this before, but I'd generally rather focus that time on reading real audit logs (web, IDS, etc.) for actual malicious behavior than spend time blowing at fairy dust to make the system do the thing I already told it to do.


For the record, SELinux generally works well out of the box in RHEL6+ for things that don't need to modify content (!!), and I don't recommend people disable it outright. But disabling protection of a specific daemon is often a much more reliable way to fix an apparent problem than running some audit2allow duct-tape-and-staples security management system that provides no discernible value to the business.
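To make the per-daemon option concrete, here's a rough sketch of both routes on RHEL 6+. The `httpd_t` domain and the `mypol` module name are just examples, and `semanage` comes from the policycoreutils-python package; this all assumes root on an SELinux-enabled box:

```shell
# Put a single domain into permissive mode instead of disabling SELinux
# system-wide; only that daemon's denials stop being enforced:
semanage permissive -a httpd_t     # stop enforcing policy for httpd only
semanage permissive -l             # list domains currently permissive
semanage permissive -d httpd_t     # put it back in enforcing later

# The audit2allow route instead: turn logged denials into a local module.
# "mypol" is an arbitrary example module name.
grep AVC /var/log/audit/audit.log | audit2allow -M mypol
semodule -i mypol.pp               # load the generated policy module
```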

Vulture Culture fucked around with this message at 04:49 on Feb 26, 2014


evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

I don't want to sound like a troll here, but ahahahahahahaha, good lord, for all their bluster this isn't the way enterprises run at all. It's a mishmash of un-standardized tooling, reproducing the same system six times in six different departments using six different sets of software packages. If you're lucky enough to run a central one of something, it's some horrible software package with Tivoli or BMC or CA on the box that SELinux will never, ever protect and the vendor would never, ever support if it did.
Having spent years in an enterprise environment, I don't disagree. But you're not an enterprise-wide admin. You're working on a team which almost certainly deploys the same set of packages (and their updates) over and over again on top of kickstarted, centrally managed (cfengine, puppet, satellite, hpsa, whatever) systems. While there's some variation, each of those departments probably has their own admin team responsible for security. You don't need unified policies with different package sets.

But the point of not disabling SELinux is that you don't need a competent person watching IDS logs, it doesn't need to wait for intervention after Tripwire or chrootkit, and you don't need to gently caress with it. If you don't want to bother with audit2allow, just run your poo poo in unconfined_t and let SELinux do its job in packages you don't control. But audit2allow gives you a general idea of what your package is trying to do, and is a reasonable next step after "don't run that poo poo as root", which many businesses did within the last few years. Again, layering is the point. Audit logs are great for finding out what happened after the fact. They don't do a drat thing to stop it unless you configure Tripwire to be even more intrusive than SELinux and revert every change made.

No, it's not appropriate for cloud environments, really. But that's still a relatively small business segment, honestly. Far more people have pets than cattle in 2014. And being a "cloud" company still doesn't help you when there's a vulnerability in HAproxy, varnish, or whatever's fronting your stack. It's not a cure-all. But you lose nothing.

JHVH-1
Jun 28, 2002
On the project I am on now, security and ease of operations end up lower on the list, and there are various separate teams doing their own thing that I don't even get involved with personally most of the time. We have part of our application done in a nice way that we can automate, but a large chunk is just dumb stuff that gets delivered without any kind of packaging (literally just tar/war/zip stuff), and so many bits and pieces with a LAMP stack here, JBoss here, Tomcat here, IIS Windows machines here (barf).
I wish it was unified and our team that manages it had more say in the initial design but the whole thing was designed and specced out before they even had an operations team hired.

I spent the whole 10 hours of my shift going through logs in Splunk, and I end up doing more app debugging and log analysis in production and staging environments than any actual sysadmin work.
The people that do all the planning care more about API response times and error thresholds in the logs than server uptime or locking everything down. Though at least everything is behind a firewall, which is usually behind a CDN as well, and they do have penetration testing done.

I guess it could be worse, but some big companies would rather do something stupid than spend some time to lay down good groundwork. Like a certain company I heard of that would rather have the vendor rewrite their entire OS instead of fixing their own code to make it no longer dependent on big-endian architecture.

nitrogen
May 21, 2004

Oh, what's a 217°C difference between friends?
If you disable selinux, you're a bad admin, and you should feel bad.

My employer disables SELinux on builds because most (99%) of our customers demand it. If a customer wants it, then I will enable it with a specific policy file that allows our monitoring and backup tools.

It's sad how most of my team doesn't understand SELinux either.

Riso
Oct 11, 2008

by merry exmarx
How does SELinux compare with AppArmor?

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

evol262 posted:

Having spent years in an enterprise environment, I don't disagree. But you're not an enterprise-wide admin. You're working on a team which almost certainly deploys the same set of packages (and their updates) over and over again on top of kickstarted, centrally managed (cfengine, puppet, satellite, hpsa, whatever) systems.
Keep in mind, you're coming from the perspective of working in banking, where rigid policies like always running SELinux are both desirable and encouraged (can never have too much security, and drat the inflexibility!)

Contrast that with a generic startup in Cali, where a dev reads about some new tech on his lunch break and they spin it up onto a production server in the afternoon to see what it can do.

There is no such thing as a generic "enterprise environment". There are various tools and their ability to align with business interests.

evol262
Nov 30, 2010
#!/usr/bin/perl

Bhodi posted:

Keep in mind, you're coming from the perspective of working in banking, where rigid policies like always running SELinux are both desirable and encouraged (can never have too much security, and drat the inflexibility!)
Oh, if only banking actually worked this way.

But I also worked in defense, a Usenet/CDN provider, and a meteorology company.

Bhodi posted:

Contrast that with a generic startup in Cali, where a dev reads about some new tech on his lunch break and they spin it up onto a production server in the afternoon to see what it can do.
Banking is pretty much the opposite of this, in the sense that our software had been ported from OS/2->HPUX->AIX->Linux, with all the mess that entailed (tellers using X-forwarded applications in 2012 which relied on scripts written in 1998 which bypassed xauth; recursive symlink loops, LD_LIBRARY_PATH all over the place, everything running as root, etc).

Implementing SELinux was a total overhaul, and an initiative that I took to get our environment under control. There was a significant amount of pushback.

Bhodi posted:

There is no such thing as a generic "enterprise environment". There are various tools and their ability to align with business interests.
That's just splitting hairs. Enterprise implies a lot of things, mostly financial, but "enterprise environment" isn't weasel words. Startups in Cali are not enterprise. Companies with less than 500 employees are almost certainly not enterprise. Companies where you're wearing 10 hats because there's not a dedicated team to handle it (AD, email, network, storage, Oracle, other "big" products) probably aren't enterprise. This really isn't the topic of this thread, and I didn't say there's a generic "enterprise environment".

But making broad generalizations about companies large enough to need to mandate which tools to use (PowerBroker, HPSA, Tivoli, etc.), and which probably have teams to manage them, is not asserting that there aren't multiple tools. It's that there's a certain amount of administrative overhead and cargo cult in operations of that size, and while they're all different in their own way, they're more alike than not.

Buckhead
Aug 12, 2005

___ days until the 2010 trade deadline :(
I am running a command via cron, but stderr is outputting "stdin: is not a tty"

From much Googling, I have deduced that this is because of how the bash command is being called. However, making various changes to the .profile file does not change things (one thing I saw was take out the "mesg y" line, but my .profile is already set to "mesg n").

My current crontab line goes like: /bin/bash -l -c 'command'

What options should I modify when calling bash to execute this correctly?

evol262
Nov 30, 2010
#!/usr/bin/perl

Buckhead posted:

I am running a command via cron, but stderr is outputting "stdin: is not a tty"

From much Googling, I have deduced that this is because of how the bash command is being called. However, making various changes to the .profile file does not change things (one thing I saw was take out the "mesg y" line, but my .profile is already set to "mesg n").

My current crontab line goes like: /bin/bash -l -c 'command'

What options should I modify when calling bash to execute this correctly?

But it's not allocated a tty.

What's in .profile?
What's in .bashrc?
[[ $- == *i* ]] && Do interactive stuff...

e: Don't use "bash -l" in cron.
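Spelled out, the usual pattern looks something like this ($- contains "i" only in interactive shells; the crontab line and the mesg call are illustrative):

```shell
# crontab entry: plain bash -c, no -l, so cron doesn't source login files
# that assume a terminal:
# 0 3 * * * /bin/bash -c '/path/to/command'

# In .profile / .bashrc, guard anything tty-dependent behind an
# interactivity check so cron jobs skip it entirely:
if [[ $- == *i* ]]; then
    mesg n    # tty-only bits live inside the interactive branch
fi
```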

evol262 fucked around with this message at 20:03 on Feb 26, 2014

SurgicalOntologist
Jun 17, 2004

I have a really weird problem, not sure if here or the Python thread is the place for it but I think it's more of a Linux question.

I have a Python library that creates a console script using the setuptools library (as in, the console script gets created automatically, I didn't hack it together). It works great on my home PC (Ubuntu 13.10) but on the lab PC (12.04) there's a weird issue. It just runs and returns, nothing happens. Which is weird because it's supposed to at least output the usage message if it can't figure out what to do.

The script is called exp. So I tried which exp and sure enough it's on the path and looks good. So I tried `which exp` and it works fine. In fact, all the commands and options and help messages, etc. work great by using `which exp` instead of just exp. How can that be if they both point to the same place?

evol262
Nov 30, 2010
#!/usr/bin/perl

SurgicalOntologist posted:

I have a really weird problem, not sure if here or the Python thread is the place for it but I think it's more of a Linux question.

I have a Python library that creates a console script using the setuptools library (as in, the console script gets created automatically, I didn't hack it together). It works great on my home PC (Ubuntu 13.10) but on the lab PC (12.04) there's a weird issue. It just runs and returns, nothing happens. Which is weird because it's supposed to at least output the usage message if it can't figure out what to do.

The script is called exp. So I tried which exp and sure enough it's on the path and looks good. So I tried `which exp` and it works fine. In fact, all the commands and options and help messages, etc. work great by using `which exp` instead of just exp. How can that be if they both point to the same place?

Is something in the script relying on an absolute path? Does /path/to/exp work?

alias

which -a exp

head -n 1 `which exp`

python /path/to/exp

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
edit: ughhhh I need to test this again to make sure this is what I'm actually seeing

fletcher fucked around with this message at 03:00 on Feb 27, 2014

SurgicalOntologist
Jun 17, 2004

evol262 posted:

Is something in the script relying on an absolute path? Does /path/to/exp work?

alias

which -a exp

head -n 1 `which exp`

python /path/to/exp

Thanks for the suggestions. I forgot I had made an alias, a while back before I figured out how to make a console script. :doh:

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Why is the default file for Vagrant called Vagrantfile and not Vagrantfile.rb? It's really annoying having to manually set the language syntax in Sublime Text.

crazysim
May 23, 2004
I AM SOOOOO GAY

fletcher posted:

Why is the default file for vagrant called Vagrantfile and not Vagrantfile.rb? It's really annoying having to manually set the language syntax in sublime text.

Rakefile, Makefile, etc.

Here's a GitHub issue on a project that I use that talks a bit about this.

https://github.com/test-kitchen/test-kitchen/issues/182

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Thanks for the link!

Ok this next one is driving me crazy, I'm totally stumped.

One of my recipes in Chef is for restoring from the most recent backup. The backups are stored on s3, so I use aws-cli to just list the bucket and nab the most recent one:

code:
the_command = "AWS_DEFAULT_REGION='#{node["myproject"]["backups_region"]}' AWS_ACCESS_KEY_ID='#{auth["backups_key"]}' AWS_SECRET_ACCESS_KEY='#{auth["backups_secret"]}' #{aws_app} s3 ls s3://path/to/my/backups/ | tail -n 1 | awk '{ print $4; }'"
Chef::Log.info the_command
last_backup_filename = `#{the_command}`
last_backup_filename = last_backup_filename.strip
raise "Could not find last_backup_filename" if last_backup_filename == ''
When I have that in my recipe, my chef run is all messed up. There's a recipe before this one in the run_list that sets up a virtualenv and installs aws-cli, but after this fails with the "Could not find last_backup_filename" message, my virtualenv doesn't even exist.

If I instead use:
code:
the_command = "AWS_DEFAULT_REGION='#{node["myproject"]["backups_region"]}' AWS_ACCESS_KEY_ID='#{auth["backups_key"]}' AWS_SECRET_ACCESS_KEY='#{auth["backups_secret"]}' #{aws_app} s3 ls s3://path/to/my/backups/ | tail -n 1 | awk '{ print $4; }'"
Chef::Log.info the_command
last_backup_filename = `echo 'somebackup.tar.gz'`
last_backup_filename = last_backup_filename.strip
raise "Could not find last_backup_filename" if last_backup_filename == ''
...then everything works just fine. The virtualenv is correctly setup before this, and I can grab the_command from the log output and execute it manually just fine. What is going on here??

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Any ideas?? It seems like such a simple thing I'm trying to do. And I thought it worked just fine a couple weeks ago!

evol262
Nov 30, 2010
#!/usr/bin/perl

fletcher posted:

Any ideas?? It seems like such a simple thing I'm trying to do. And I thought it worked just fine a couple weeks ago!

Chef can be super finicky about string interpolation. Try %x{} instead of ``

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

I have a weird thing going on with youtube or maybe flash in general, where viewing videos in 480p or 240p makes the contrast all messed up and black turns grey. Any other resolution does not have this issue.

Here are screenshots to show what I mean. The video content pictured is supposed to be just a black screen for 10hrs, which I chose just to test this (though the same issue happens on any other video I view).

480p (also happens at 240p):


720p:


I'm on Thinkpad W510 w/ NVIDIA Corporation GT216GLM [Quadro FX 880M]
Ubuntu 13.10 w/ cinnamon desktop,

I tried logging in to other desktops: GNOME 3, GNOME "flashback", Cinnamon w/ software rendering; all do the same thing.

I tried switching the version of nvidia drivers (had nvidia-304, switched to nvidia-319), didn't make a difference.

Is my video card taking a poo poo, or what the hell is going on here? Has anyone else experienced something like this?

Also I've been running this setup for a pretty long time, but only noticed this issue in the last couple weeks it seems like.

peepsalot fucked around with this message at 07:04 on Mar 1, 2014

Xik
Mar 10, 2011

Dinosaur Gum
They both look black to me, does that help?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Xik posted:

They both look black to me, does that help?

I have special eyes.

RFC2324
Jun 7, 2012

http 418

Xik posted:

They both look black to me, does that help?

same here.

Xik
Mar 10, 2011

Dinosaur Gum

peepsalot posted:

I have special eyes.

Have you tried viewing this thread (or those screenshots) from another device? It's possible whatever you are seeing can't be captured at a software level, which would probably indicate a hardware issue.

Da Mott Man
Aug 3, 2012


Xik posted:

Have you tried viewing this thread (or those screenshots) from another device? It's possible whatever you are seeing can't be captured at a software level, which would probably indicate a hardware issue.

Nope, the second one is definitely darker, just checked it in photoshop to make sure my eyes weren't playing tricks.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Are you using Flash or HTML5? Try the other one?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Suspicious Dish posted:

Are you using Flash or HTML5? Try the other one?
I was using Flash, it looks like HTML5 fixes it. I guess I'll keep it that way until I get "This video is unavailable" and have to switch back again.

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

Suspicious Dish posted:

Are you using Flash or HTML5? Try the other one?

Youtube HTML5 on Linux: an exciting way to use all of a CPU core!

Well, with Firefox+Gstreamer anyway.

evol262
Nov 30, 2010
#!/usr/bin/perl

scroogle nmaps posted:

Youtube HTML5 on Linux: an exciting way to use all of a CPU core!

Well, with Firefox+Gstreamer anyway.

The Gstreamer devs are giving you incentive to avoid that cesspool

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

scroogle nmaps posted:

Youtube HTML5 on Linux: an exciting way to use all of a CPU core!

Well, with Firefox+Gstreamer anyway.

GStreamer 1.0 is a lot better about resource management, so it should be a lot better as soon as Firefox is ported.

Hardware-accelerated decoding is also on the roadmap. It's not really possible under X11 right now.

Xik
Mar 10, 2011

Dinosaur Gum

evol262 posted:

The Gstreamer devs are giving you incentive to avoid that cesspool

What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome.

Before I got html5 working I relied solely on youtube-dl, but it is a little inconvenient when you just want to quickly play something that is embedded.

Longinus00
Dec 29, 2005
Ur-Quan

Xik posted:

What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome.

Before I got html5 working I relied solely on youtube-dl, but it is a little inconvenient when you just want to quickly play something that is embedded.

I like to open YouTube urls in VLC. Hardware acceleration and no flash.

evol262
Nov 30, 2010
#!/usr/bin/perl

Xik posted:

What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome.

Before I got html5 working I relied solely on youtube-dl, but it is a little inconvenient when you just want to quickly play something that is embedded.

I meant YouTube as the cesspool.

Hollow Talk
Feb 2, 2014

Xik posted:

What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome.

Before I got html5 working I relied solely on youtube-dl, but it is a little inconvenient when you just want to quickly play something that is embedded.

Have you tried ViewTube? → https://userscripts.org/scripts/show/87011

It's a greasemonkey script which lets you pick how to open files, and you can use it in Firefox and Chrome, though you need to install it manually on the latter. You also get to choose which format you'd like to play (flv, MP4, webm or whatever else YT might offer), and it works with some other sites like Vimeo as well!



edit: It also conveniently gets rid of in-video advertisements. :)

Xik
Mar 10, 2011

Dinosaur Gum

evol262 posted:

I meant YouTube as the cesspool.

Oh right, I suppose. I don't have an account and don't "browse" it or anything like that so I don't really see the community side of it. It's just the de-facto video hosting site and it's frequently embedded in posts here so it sucks not to have it.


That's pretty cool. I'll have to try it out. I'll probably still use HTML5 for stuff that is embedded, but if I want to download a long IT-related talk or something, it would probably be a little more convenient than using youtube-dl.

mcbexx
Jul 4, 2004

British dentistry is
not on trial here!



Is there a way to keep Ubuntu 12.04 from falsely reporting a low battery status every couple of seconds on an old notebook (2003 HP nx7000)? The battery icon shows a 95% charge and about 2 hours left, yet the warning dialog pops up every 20 seconds.

The battery should still be able to hold a charge, at least it did when running Windows XP on it before, so I'm assuming the power management has trouble dealing with the info from the old hardware.

I'm fairly new to Ubuntu and Linux in general, so if there's a solution, I'd appreciate step by step instructions. Thanks!

Hollow Talk
Feb 2, 2014

Xik posted:

Oh right, I suppose. I don't have an account and don't "browse" it or anything like that so I don't really see the community side of it. It's just the de-facto video hosting site and it's frequently embedded in posts here so it sucks not to have it.


That's pretty cool. I'll have to try it out. I'll probably still use HTML5 for stuff that is embedded, but if I want to download a long IT-related talk or something, it would probably be a little more convenient than using youtube-dl.

It can do HTML5 as well! :science: I really just use it as my day-to-day youtube player, and I have yet to look back.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Anyone good at IPsec VPNs?

I've got a system on EC2 that's running Openswan+xl2tpd for client VPN access, using PSK authentication. My Windows desktop and my Mac laptop can connect just fine (though my Mac laptop needs the routes defined manually after the connection comes up). I can ping the VPN server, and I can ping through the VPN server to other hosts. My Ubuntu 12.04 system, though, doesn't like the connection. I can bring up IPsec and L2TP, and the ppp0 interface gets created, but no data goes over it and after a minute or so it just times out and drops the interface:

code:
Mar  2 17:09:01 client xl2tpd[26850]: Connecting to host server.company.net, port 1701
Mar  2 17:09:03 client xl2tpd[26850]: Connection established to [server EIP], 1701.  Local: 20824, Remote: 62931 (ref=0/0).
Mar  2 17:09:03 client xl2tpd[26850]: Calling on tunnel 20824
Mar  2 17:09:03 client xl2tpd[26850]: check_control: Received out of order control packet on tunnel 62931 (got 0, expected 1)
Mar  2 17:09:03 client xl2tpd[26850]: handle_packet: bad control packet!
Mar  2 17:09:03 client xl2tpd[26850]: check_control: Received out of order control packet on tunnel 62931 (got 0, expected 1)
Mar  2 17:09:03 client xl2tpd[26850]: handle_packet: bad control packet!
Mar  2 17:09:03 client xl2tpd[26850]: Call established with [server EIP], Local: 31066, Remote: 42850, Serial: 3 (ref=0/0)
Mar  2 17:09:03 client xl2tpd[26850]: start_pppd: I'm running:
Mar  2 17:09:03 client xl2tpd[26850]: "/usr/sbin/pppd"
Mar  2 17:09:03 client xl2tpd[26850]: "passive"
Mar  2 17:09:03 client xl2tpd[26850]: "nodetach"
Mar  2 17:09:03 client xl2tpd[26850]: ":"
Mar  2 17:09:03 client xl2tpd[26850]: "auth"
Mar  2 17:09:03 client xl2tpd[26850]: "debug"
Mar  2 17:09:03 client xl2tpd[26850]: "file"
Mar  2 17:09:03 client xl2tpd[26850]: "/etc/ppp/options.xl2tpd.client"
Mar  2 17:09:03 client xl2tpd[26850]: "ipparam"
Mar  2 17:09:03 client xl2tpd[26850]: "[server EIP]"
Mar  2 17:09:03 client xl2tpd[26850]: "/dev/pts/8"
Mar  2 17:09:03 client pppd[29840]: pppd 2.4.5 started by root, uid 0
Mar  2 17:09:03 client pppd[29840]: using channel 5
Mar  2 17:09:03 client pppd[29840]: Using interface ppp0
Mar  2 17:09:03 client pppd[29840]: Connect: ppp0 <--> /dev/pts/8
Mar  2 17:09:03 client pppd[29840]: sent [LCP ConfReq id=0x1 <mru 1410> <asyncmap 0x0> <magic 0xfaf2305d> <pcomp> <accomp>]
Mar  2 17:09:03 client pppd[29840]: rcvd [LCP ConfReq id=0x1 <mru 1410> <asyncmap 0x0> <auth chap MS-v2> <magic 0x496af711> <pcomp> <accomp>]
Mar  2 17:09:03 client pppd[29840]: sent [LCP ConfAck id=0x1 <mru 1410> <asyncmap 0x0> <auth chap MS-v2> <magic 0x496af711> <pcomp> <accomp>]
Mar  2 17:09:03 client pppd[29840]: rcvd [LCP ConfAck id=0x1 <mru 1410> <asyncmap 0x0> <magic 0xfaf2305d> <pcomp> <accomp>]
Mar  2 17:09:03 client pppd[29840]: sent [LCP EchoReq id=0x0 magic=0xfaf2305d]
Mar  2 17:09:03 client pppd[29840]: rcvd [LCP EchoReq id=0x0 magic=0x496af711]
Mar  2 17:09:03 client pppd[29840]: sent [LCP EchoRep id=0x0 magic=0xfaf2305d]
Mar  2 17:09:03 client pppd[29840]: rcvd [CHAP Challenge id=0x27 <42c40a12593db77e2f90f92a905d790d>, name = "vpn-west-1"]
Mar  2 17:09:03 client pppd[29840]: sent [CHAP Response id=0x27 <f3472a2f55e64611b084dafac64a621100000000000000006893ff70449652c9f1aa21c2b69b6b46939a5c44a2ca761c00>, name = "vpn"]
Mar  2 17:09:03 client pppd[29840]: rcvd [LCP EchoRep id=0x0 magic=0x496af711]
Mar  2 17:09:03 client pppd[29840]: rcvd [CHAP Success id=0x27 "S=4C4D1F68707657849000A5EA93C8D82515360EF7 M=Access granted"]
Mar  2 17:09:03 client pppd[29840]: CHAP authentication succeeded
Mar  2 17:09:03 client pppd[29840]: sent [IPCP ConfReq id=0x1 <compress VJ 0f 01> <addr [client external IP]> <ms-dns1 0.0.0.0> <ms-dns2 0.0.0.0>]
Mar  2 17:09:03 client pppd[29840]: rcvd [IPCP ConfReq id=0x1 <compress VJ 0f 01> <addr [server internal IP]>]
Mar  2 17:09:03 client pppd[29840]: sent [IPCP ConfAck id=0x1 <compress VJ 0f 01> <addr [server internal IP]>]
Mar  2 17:09:03 client pppd[29840]: rcvd [IPCP ConfRej id=0x1 <ms-dns1 0.0.0.0> <ms-dns2 0.0.0.0>]
Mar  2 17:09:03 client pppd[29840]: sent [IPCP ConfReq id=0x2 <compress VJ 0f 01> <addr [client external IP]>]
Mar  2 17:09:03 client pppd[29840]: rcvd [IPCP ConfAck id=0x2 <compress VJ 0f 01> <addr [client external IP]>]
Mar  2 17:09:03 client pppd[29840]: local  IP address [client external IP]
Mar  2 17:09:03 client pppd[29840]: remote IP address [server internal IP]
Mar  2 17:09:03 client pppd[29840]: Script /etc/ppp/ip-up started (pid 29843)
Mar  2 17:09:03 client pppd[29840]: Script /etc/ppp/ip-up finished (pid 29843), status = 0x0
Mar  2 17:09:33 client pppd[29840]: sent [LCP EchoReq id=0x1 magic=0xfaf2305d]
Mar  2 17:10:03 client pppd[29840]: sent [LCP EchoReq id=0x2 magic=0xfaf2305d]
Mar  2 17:10:08 client xl2tpd[26850]: Maximum retries exceeded for tunnel 20824.  Closing.
Mar  2 17:10:08 client xl2tpd[26850]: Terminating pppd: sending TERM signal to pid 29840
Mar  2 17:10:08 client xl2tpd[26850]: Connection 62931 closed to [server EIP], port 1701 (Timeout)
Mar  2 17:10:08 client pppd[29840]: Terminating on signal 15
Mar  2 17:10:08 client pppd[29840]: Modem hangup
Mar  2 17:10:08 client pppd[29840]: Connect time 1.1 minutes.
Mar  2 17:10:08 client pppd[29840]: Sent 0 bytes, received 0 bytes.
Mar  2 17:10:08 client pppd[29840]: Script /etc/ppp/ip-down started (pid 29943)
Mar  2 17:10:08 client pppd[29840]: Connection terminated.
Mar  2 17:10:08 client pppd[29840]: Waiting for 1 child processes...
Mar  2 17:10:08 client pppd[29840]:   script /etc/ppp/ip-down, pid 29943
Mar  2 17:10:08 client pppd[29840]: Script /etc/ppp/ip-down finished (pid 29943), status = 0x0
Mar  2 17:10:08 client pppd[29840]: Exit.
Mar  2 17:10:13 client xl2tpd[26850]: Unable to deliver closing message for tunnel 20824. Destroying anyway.
If it makes any difference, the Windows and Mac systems are doing 2x NAT traversal, while the Linux system is only doing 1x NAT traversal (it's directly connected to the Internet with a public IP), but I can't imagine that's a determining factor here.

Anyone have any ideas what might be going on?
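Not a fix, but the pattern in that log (EchoReqs sent after IPCP completes, no replies, 0 bytes received) usually means traffic is dying one-way inside the tunnel, and on a directly-connected host that's classically an ESP/MTU problem. A hedged way to narrow it down (eth0 and the 10.0.0.1 peer address are just example names):

```shell
# Watch whether L2TP/ESP traffic actually leaves and comes back:
tcpdump -ni eth0 'esp or udp port 1701 or udp port 4500'

# Once ppp0 is up, probe the path MTU through the tunnel with
# don't-fragment pings, shrinking -s until the ping gets through:
ping -c 3 -M do -s 1400 10.0.0.1

# If only large packets die, clamping the ppp interface MTU often helps:
ip link set ppp0 mtu 1400
```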

Vulture Culture fucked around with this message at 20:45 on Mar 2, 2014

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

On Ubuntu, /etc/cron.d/mdadm is configured to run a redundancy check on the first Sunday of the month only. I commented that out and set it to run on the 19th of the month instead, as ZFS is usually running a scrub on the first Sunday of the month and the two together drag the system to its knees.

The problem is...it's still running the mdadm redundancy check on the first Sunday of the month? Is there somewhere else I should look to figure out why mdadm keeps doing this?

Here's the contents of /etc/cron.d/mdadm:

code:
#
# cron.d/mdadm -- schedules periodic redundancy checks of MD devices
#
# Copyright © martin f. krafft <madduck@madduck.net>
# distributed under the terms of the Artistic Licence 2.0
#

# By default, run at 00:57 on every Sunday, but do nothing unless the day of
# the month is less than or equal to 7. Thus, only run on the first Sunday of
# each month. crontab(5) sucks, unfortunately, in this regard; therefore this
# hack (see #380425).
#57 0 * * 0 root if [ -x /usr/share/mdadm/checkarray ] && \ (no table breaking)
[ $(date +\%d) -le 7 ]; then /usr/share/mdadm/checkarray --cron --all --idle --quiet; fi
57 0 19 * 0 root /usr/share/mdadm/checkarray --cron --all --idle --quiet
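One thing worth ruling out here: per crontab(5), when both the day-of-month and day-of-week fields are restricted (neither is `*`), the job runs when *either* one matches. So `57 0 19 * 0` fires on the 19th *and* on every Sunday, which would produce exactly this symptom. If the intent is the 19th only, the day-of-week field has to stay a wildcard; a sketch:

```shell
# /etc/cron.d/mdadm -- day-of-week left as * so only day-of-month applies
57 0 19 * * root /usr/share/mdadm/checkarray --cron --all --idle --quiet
```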

more like dICK
Feb 15, 2010

This is inevitable.
Is the file a symlink? Have you restarted crond?


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

more like dICK posted:

Is the file a symlink? Have you restarted crond?

No, yes.
