|
evol262 posted: enterprises running a relatively homogeneous environment

I see the use cases for SELinux: it's a great way of buttressing the security of a handful of applications that everyone runs, like OpenSSH or Apache. But for most organizations, the real vulnerabilities are going to come from weird line-of-business apps requiring a 5-year-old version of Java, or custom in-house software that's never, ever going to have someone writing Mandatory Access Control policies for it. It was a great idea for the world of shared LAMP hosting, but it's a really old and weird and outmoded concept in the cloud age.

Varkk posted: If it is really slowing you down set it to permissive and check what it is logging and then change the settings to allow what you are trying to do.

For the record, SELinux generally works well out of the box in RHEL 6+ for things that don't need to modify content (!!), and I don't recommend people disable it outright. But disabling protection of a specific daemon is often a much more reliable way to fix an apparent problem than running some audit2allow duct-tape-and-staples security management system that provides no discernible value to the business.

Vulture Culture fucked around with this message at 04:49 on Feb 26, 2014 |
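For the record, the per-daemon escape hatch looks like this on RHEL 6 (the domain and boolean names are examples, not from the post — substitute whatever domain audit.log is actually complaining about):

```shell
# Put a single domain into permissive mode instead of disabling
# SELinux system-wide (semanage comes from policycoreutils-python):
semanage permissive -a httpd_t

# Often a targeted boolean is enough, with no policy writing at all:
setsebool -P httpd_can_network_connect on
```

Either of these beats setenforce 0 across the board, since the rest of the targeted policy keeps enforcing.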
# ? Feb 26, 2014 04:36 |
|
|
|
Misogynist posted: I don't want to sound like a troll here, but ahahahahahahaha, good lord, for all their bluster this isn't the way enterprises run at all. It's a mishmash of un-standardized tooling, reproducing the same system six times in six different departments using six different sets of software packages. If you're lucky enough to run a central one of something, it's some horrible software package with Tivoli or BMC or CA on the box that SELinux will never, ever protect and the vendor would never, ever support if it did.

But the point of not disabling SELinux is that you don't need a competent person watching IDS logs, it doesn't need to wait for intervention after Tripwire or chkrootkit, and you don't need to gently caress with it. If you don't want to bother with audit2allow, just run your poo poo in unconfined_t and let SELinux do its job in packages you don't control. But audit2allow gives you a general idea of what your package is trying to do, and is a reasonable next step after "don't run that poo poo as root", which many businesses did within the last few years.

Again, layering is the point. Audit logs are great for finding out what happened after the fact. They don't do a drat thing to stop it unless you configure Tripwire to be even more intrusive than SELinux and revert every change made.

No, it's not appropriate for cloud environments, really. But that's still a relatively small business segment, honestly. Far more people have pets than cattle in 2014. And being a "cloud" company still doesn't help you when there's a vulnerability in HAProxy, Varnish, or whatever's fronting your stack. It's not a cure-all. But you lose nothing.
|
# ? Feb 26, 2014 06:12 |
|
On the project I am on now, security and ease of operations end up lower on the list, and there are various separate teams doing their own thing that I don't even get involved with personally most of the time. We have part of our application done in a nice way that we can automate, but a large chunk is just dumb stuff that gets delivered without any kind of packaging (literally just tar/war/zip stuff), and so many bits and pieces: a LAMP stack here, JBoss here, Tomcat here, IIS Windows machines here (barf). I wish it was unified and our team that manages it had more say in the initial design, but the whole thing was designed and specced out before they even had an operations team hired.

I spent the whole 10 hours of my shift going through logs in Splunk, and I end up doing more app debugging and log analysis in production and staging environments than any actual sysadmin work. The people that do all the planning care more about API response time and error thresholds in the logs than server uptime or locking everything down... Though at least everything is behind a firewall, which is most often behind a CDN as well, and they do have penetration testing done. I guess it could be worse, but some big companies would rather do something stupid than spend some time to lay down good groundwork. Like a certain company I heard of that would rather have the vendor rewrite their entire OS instead of fixing their own code to make it no longer dependent on big-endian architecture.
|
# ? Feb 26, 2014 07:06 |
|
If you disable SELinux, you're a bad admin, and you should feel bad. My employer disables SELinux on builds because most (99%) of our customers demand it. If a customer wants it, then I will enable it with a specific policy file that allows our monitoring and backup tools. It's sad how most of my team doesn't understand SELinux either.
|
# ? Feb 26, 2014 15:22 |
|
How does SELinux compare with AppArmor?
|
# ? Feb 26, 2014 16:24 |
|
evol262 posted: Having spent years in an enterprise environment, I don't disagree. But you're not an enterprise-wide admin. You're working on a team which almost certainly deploys the same set of packages (and their updates) over and over again on top of kickstarted, centrally managed (cfengine, puppet, satellite, hpsa, whatever) systems.

Contrast that with a generic startup in Cali, where a dev reads about some new tech on his lunch break and they spin it up onto a production server in the afternoon to see what it can do. There is no such thing as a generic "enterprise environment". There are various tools and their ability to align with business interests.
|
# ? Feb 26, 2014 16:43 |
|
Bhodi posted: Keep in mind, you're coming from a perspective of working in banking, where rigid policies like always running SELinux is both desirable and encouraged (can never have too much security, and drat the inflexibility!)

But I also worked in defense, a Usenet/CDN provider, and a meteorology company.

Bhodi posted: Contrast that with a generic startup in Cali, where a dev reads about some new tech on his lunch break and they spin it up onto a production server in the afternoon to see what it can do.

Implementing SELinux was a total overhaul, and an initiative that I took to get our environment under control. There was a significant amount of pushback.

Bhodi posted: There is no such thing as a generic "enterprise environment". There are various tools and their ability to align with business interests.

But making broad assumptions about how companies operate, when they're large enough to even need to mandate which tools to use (PowerBroker, HPSA, Tivoli, etc.) and probably have teams to manage them, is not asserting that there aren't multiple tools. It's that there's a certain amount of administrative overhead and cargo cult in operations of that size, and while they're all different in their own way, they're more alike than not.
|
# ? Feb 26, 2014 17:35 |
|
I am running a command via cron, but stderr is outputting "stdin: is not a tty". From much Googling, I have deduced that this is because of how the bash command is being called. However, making various changes to the .profile file does not change things (one suggestion I saw was to take out the "mesg y" line, but my .profile is already set to "mesg n"). My current crontab line goes like: /bin/bash -l -c 'command'. What options should I modify when calling bash to execute this correctly?
|
# ? Feb 26, 2014 19:47 |
|
Buckhead posted: I am running a command via cron, but stderr is outputting "stdin: is not a tty"

But it's not allocated a tty. What's in .profile? What's in .bashrc? Anything interactive in there should be guarded:

[[ $- == *i* ]] && do_interactive_stuff...

e: Don't use "bash -l" in cron. evol262 fucked around with this message at 20:03 on Feb 26, 2014 |
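A sketch of that guard in context (the echo bodies are placeholders for whatever tty-dependent commands live in .profile): $- holds the shell's option flags, and "i" appears only in interactive shells, so a cron-spawned shell skips the guarded branch.

```shell
#!/bin/bash
# In .profile/.bashrc, wrap tty-dependent commands (mesg, stty, etc.)
# so non-interactive shells, like the one cron starts, never run them.
if [[ $- == *i* ]]; then
  echo "interactive shell: ok to touch the tty"
else
  echo "non-interactive shell: skipping tty setup"
fi
```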
# ? Feb 26, 2014 19:56 |
|
I have a really weird problem, not sure if here or the Python thread is the place for it but I think it's more of a Linux question. I have a Python library that creates a console script using the setuptools library (as in, the console script gets created automatically, I didn't hack it together). It works great on my home PC (Ubuntu 13.10) but on the lab PC (12.04) there's a weird issue. It just runs and returns, nothing happens. Which is weird because it's supposed to at least output the usage message if it can't figure out what to do. The script is called exp. So I tried which exp and sure enough it's on the path and looks good. So I tried `which exp` and it works fine. In fact, all the commands and options and help messages, etc. work great by using `which exp` instead of just exp. How can that be if they both point to the same place?
|
# ? Feb 26, 2014 21:57 |
|
SurgicalOntologist posted: I have a really weird problem, not sure if here or the Python thread is the place for it but I think it's more of a Linux question.

Is something in the script relying on an absolute path? Does /path/to/exp work? Try:

alias
which -a exp
head -n 1 `which exp`
python /path/to/exp
|
# ? Feb 26, 2014 22:17 |
edit: ughhhh I need to test this again to make sure this is what I'm actually seeing
fletcher fucked around with this message at 03:00 on Feb 27, 2014 |
|
# ? Feb 27, 2014 02:42 |
|
evol262 posted:Is something in the script relying on an absolute path? Does /path/to/exp work? Thanks for the suggestions. I forgot I had made an alias, a while back before I figured out how to make a console script.
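For posterity, the failure mode is easy to reproduce (the paths and the alias body below are made up for illustration): a shell alias shadows the console script, so typing exp runs the alias, while `which exp` resolves to the file on PATH, and invoking that path bypasses aliases entirely.

```shell
#!/bin/bash
# Aliases are normally off in non-interactive shells; enable them so
# this sketch behaves like an interactive session.
shopt -s expand_aliases

# A stand-in "console script" on PATH (hypothetical location).
mkdir -p /tmp/exp-demo
printf '#!/bin/sh\necho real-script\n' > /tmp/exp-demo/exp
chmod +x /tmp/exp-demo/exp
export PATH=/tmp/exp-demo:$PATH

alias exp='true'   # the forgotten alias: runs and prints nothing

exp                # alias wins: "it just runs and returns"
"$(which exp)"     # which finds the file; running the path skips the alias
```

The last line prints real-script; the bare exp prints nothing.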
|
# ? Feb 27, 2014 19:25 |
Why is the default file for vagrant called Vagrantfile and not Vagrantfile.rb? It's really annoying having to manually set the language syntax in sublime text.
|
|
# ? Feb 27, 2014 20:30 |
|
fletcher posted: Why is the default file for vagrant called Vagrantfile and not Vagrantfile.rb? It's really annoying having to manually set the language syntax in sublime text.

Same convention as Rakefile, Makefile, etc. Here's a GitHub issue on a project I use that talks a bit about this: https://github.com/test-kitchen/test-kitchen/issues/182
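Worth noting: the header Vagrant generates already carries vim/emacs modelines hinting that it's Ruby; Sublime has no built-in equivalent, so a syntax-detection plugin (or adding Vagrantfile to the Ruby syntax's file patterns) is the usual workaround. A minimal Vagrantfile for reference (the box name is just an example):

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"  # example box name
end
```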
|
# ? Feb 27, 2014 20:34 |
Thanks for the link! Ok this next one is driving me crazy, I'm totally stumped. One of my recipes in Chef is for restoring from the most recent backup. The backups are stored on S3, so I use aws-cli to just list the bucket and nab the most recent one: code:
If I instead use: code:
|
|
# ? Feb 27, 2014 23:36 |
Any ideas?? It seems like such a simple thing I'm trying to do. And I thought it worked just fine a couple weeks ago!
|
|
# ? Mar 1, 2014 02:04 |
|
fletcher posted:Any ideas?? It seems like such a simple thing I'm trying to do. And I thought it worked just fine a couple weeks ago! Chef can be super finicky about string interpolation. Try %x{} instead of ``
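For clarity, %x{} and backticks are the same mechanism in plain Ruby — both shell out and return stdout as a String — so the swap mostly sidesteps quoting/escaping headaches inside a recipe rather than changing semantics. A standalone sketch (echo stands in for the aws s3 ls pipeline):

```ruby
# Both forms run a shell command and capture its stdout.
from_backticks = `echo newest-backup.tar.gz`.strip
from_percent_x = %x{echo newest-backup.tar.gz}.strip

puts from_percent_x                     # prints "newest-backup.tar.gz"
puts from_backticks == from_percent_x   # prints "true"
```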
|
# ? Mar 1, 2014 02:48 |
|
I have a weird thing going on with YouTube, or maybe Flash in general, where viewing videos in 480p or 240p makes the contrast all messed up and black turns grey. Any other resolution does not have this issue. Here are screenshots to show what I mean. The video content pictured is supposed to be just a black screen for 10hrs, which I chose just to test this (though the same issue happens on any other video I view). 480p (also happens at 240p): 720p:

I'm on a Thinkpad W510 w/ NVIDIA Corporation GT216GLM [Quadro FX 880M], Ubuntu 13.10 w/ Cinnamon desktop. I tried logging in to other desktops: GNOME 3, GNOME "flashback", Cinnamon w/ software rendering; all do the same thing. I tried switching the version of the nvidia drivers (had nvidia-304, switched to nvidia-319), didn't make a difference. Is my video card taking a poo poo or what the hell is going on here? Has anyone else experienced something like this? Also I've been running this setup for a pretty long time, but only noticed this issue in the last couple weeks it seems like. peepsalot fucked around with this message at 07:04 on Mar 1, 2014 |
# ? Mar 1, 2014 06:59 |
|
They both look black to me, does that help?
|
# ? Mar 1, 2014 07:06 |
|
Xik posted:They both look black to me, does that help? I have special eyes.
|
# ? Mar 1, 2014 07:13 |
|
Xik posted:They both look black to me, does that help? same here.
|
# ? Mar 1, 2014 07:24 |
|
peepsalot posted:I have special eyes. Have you tried viewing this thread (or those screenshots) from another device? It's possible whatever you are seeing can't be captured at a software level, which would probably indicate a hardware issue.
|
# ? Mar 1, 2014 07:30 |
|
Xik posted:Have you tried viewing this thread (or those screenshots) from another device? It's possible whatever you are seeing can't be captured at a software level, which would probably indicate a hardware issue. Nope, the second one is definitely darker, just checked it in photoshop to make sure my eyes weren't playing tricks.
|
# ? Mar 1, 2014 07:35 |
|
Are you using Flash or HTML5? Try the other one?
|
# ? Mar 1, 2014 07:35 |
|
Suspicious Dish posted:Are you using Flash or HTML5? Try the other one?
|
# ? Mar 1, 2014 07:43 |
|
Suspicious Dish posted:Are you using Flash or HTML5? Try the other one? Youtube HTML5 on Linux: an exciting way to use all of a CPU core! Well, with Firefox+Gstreamer anyway.
|
# ? Mar 1, 2014 07:45 |
|
scroogle nmaps posted:Youtube HTML5 on Linux: an exciting way to use all of a CPU core! The Gstreamer devs are giving you incentive to avoid that cesspool
|
# ? Mar 1, 2014 08:00 |
|
scroogle nmaps posted:Youtube HTML5 on Linux: an exciting way to use all of a CPU core! GStreamer 1.0 is a lot better about resource management, so it should be a lot better as soon as Firefox is ported. Hardware-accelerated decoding is also on the roadmap. It's not really possible under X11 right now.
|
# ? Mar 1, 2014 08:03 |
|
evol262 posted: The Gstreamer devs are giving you incentive to avoid that cesspool

What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome. Before I got HTML5 working I relied solely on youtube-dl, but it's a little inconvenient when you just want to quickly play something that is embedded.
|
# ? Mar 1, 2014 08:06 |
|
Xik posted:What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome. I like to open YouTube urls in VLC. Hardware acceleration and no flash.
|
# ? Mar 1, 2014 09:01 |
|
Xik posted:What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome. I meant YouTube as the cesspool.
|
# ? Mar 1, 2014 15:42 |
|
Xik posted:What do you mean? It's not like there are a huge array of choices. Flash has been abandoned on the platform unless you want to get locked into Chrome. Have you tried ViewTube? → https://userscripts.org/scripts/show/87011 It's a greasemonkey script which lets you pick how to open files, and you can use it in Firefox and Chrome, though you need to install it manually on the latter. You also get to choose which format you'd like to play (flv, MP4, webm or whatever else YT might offer), and it works with some other sites like Vimeo as well! edit: It also conveniently gets rid of in-video advertisements.
|
# ? Mar 1, 2014 16:13 |
|
evol262 posted: I meant YouTube as the cesspool.

Oh right, I suppose. I don't have an account and don't "browse" it or anything like that, so I don't really see the community side of it. It's just the de facto video hosting site and it's frequently embedded in posts here, so it sucks not to have it.

Hollow Talk posted: Have you tried ViewTube? → https://userscripts.org/scripts/show/87011

That's pretty cool. I'll have to try it out. I'll probably still use HTML5 for stuff that is embedded, but if I want to download a long IT-related talk or something, it would probably be a little more convenient than using youtube-dl.
|
# ? Mar 1, 2014 22:29 |
|
Is there a way to keep Ubuntu 12.04 from falsely reporting a low battery status every couple of seconds on an old notebook (2003 HP nx7000)? The battery icon shows a 95% charge and about 2 hours left, yet the warning dialog pops up every 20 seconds. The battery should still be able to hold a charge; at least it did when running Windows XP on it before, so I'm assuming the power management has trouble dealing with the info from the old hardware. I'm fairly new to Ubuntu and Linux in general, so if there's a solution, I'd appreciate step-by-step instructions. Thanks!
|
# ? Mar 1, 2014 23:51 |
|
Xik posted:Oh right, I suppose. I don't have an account and don't "browse" it or anything like that so I don't really see the community side of it. It's just the de-facto video hosting site and it's frequently embedded in posts here so it sucks not to have it. It can do HTML5 as well! I use it really as my day-to-day youtube player, and I have yet to look back.
|
# ? Mar 2, 2014 00:58 |
|
Anyone good at IPsec VPNs? I've got a system on EC2 that's running Openswan+xl2tpd for client VPN access, using PSK authentication. My Windows desktop and my Mac laptop can connect just fine (though my Mac laptop needs the routes defined manually after the connection comes up). I can ping the VPN server, and I can ping through the VPN server to other hosts. My Ubuntu 12.04 system, though, doesn't like the connection. I can bring up IPsec and L2TP, and the ppp0 interface gets created, but no data goes over it and after a minute or so it just times out and drops the interface: code:
Anyone have any ideas what might be going on? Vulture Culture fucked around with this message at 20:45 on Mar 2, 2014 |
# ? Mar 2, 2014 18:14 |
|
On Ubuntu, /etc/cron.d/mdadm is configured to run a redundancy check on first Sunday of the month only. I commented that out and just set it to run on the 19th of the month as ZFS is usually running a scrub on the first Sunday of the month and the two together drag the system to its knees. The problem is...it's still running the mdadm redundancy check on the first Sunday of the month? Is there somewhere else I should look to figure out why mdadm keeps doing this? Here's the contents of /etc/cron.d/mdadm: code:
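For comparison, the stock Ubuntu entry fakes "first Sunday" by combining the Sunday weekday field with a day-of-month test inside the command (reconstructed from memory — check it against your actual file), so a fixed-date schedule needs the weekday match dropped too, or the date test will still gate it:

```shell
# Stock-style entry: fires every Sunday, but the embedded date test
# limits it to the first Sunday (day of month <= 7). Note that cron
# requires % to be escaped inside the command field.
#57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --idle --quiet

# Fixed date instead: the 19th of each month, whatever weekday:
57 0 19 * * root [ -x /usr/share/mdadm/checkarray ] && /usr/share/mdadm/checkarray --cron --all --idle --quiet
```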
|
# ? Mar 2, 2014 18:52 |
|
Is the file a symlink? Have you restarted crond?
|
# ? Mar 2, 2014 19:23 |
|
|
|
more like dICK posted:Is the file a symlink? Have you restarted crond? No, yes.
|
# ? Mar 2, 2014 21:06 |