|
Nice, those are both really good ideas, thanks. I have been obsessively trying to secure those wordpress sites as much as possible since they got hacked, so I should definitely implement that second idea. gently caress, I am always confused by permissions though... what's the best way to give www-data read-only access? Should I set the owner to my user account and the group to www-data, then give the group read permissions only? I use wp-cli to do all the wordpress updates...
|
# ? Dec 24, 2014 18:40 |
|
|
fuf posted:Nice, those are both really good ideas thanks Assuming a Wordpress install in /var/www/blagoblag.com, cd /var/www && chown -R root:www-data blagoblag.com && chown -R www-data:www-data blagoblag.com/wp-content/uploads should do the trick. So long as www-data isn't the owner and doesn't have write access via the group permissions, it'll be fine. Leave www-data as the group so you can ensure www-data gets read permissions. cd first simply to avoid accidentally hitting enter and chowning /. Basically never start a chown -R with /, unless you really like to live dangerously. Also, I don't know if you have or not, but make sure you never chmod 777 stuff; that's just asking for trouble and is terrible advice that seems to live on only in the PHP community.
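A quick way to sanity-check that scheme — a sketch against a throwaway directory, using chmod modes since chown to www-data needs root (a real install would use the chown commands above; GNU/Linux stat assumed):

```shell
# Create a stand-in for the WordPress tree (paths are placeholders).
site=$(mktemp -d)
mkdir -p "$site/wp-content/uploads"

# Everything: owner read/write, group read-only, no world access.
# Capital X sets the execute (traverse) bit on directories only.
chmod -R u=rwX,g=rX,o= "$site"

# The uploads dir is the one place the web server's group may write.
chmod -R u=rwX,g=rwX,o= "$site/wp-content/uploads"

stat -c '%A' "$site" "$site/wp-content/uploads"
# prints drwxr-x--- then drwxrwx---
```

The order matters: set the restrictive mode on the whole tree first, then re-open the uploads directory, since the first `-R` pass would clobber it otherwise.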
|
# ? Dec 24, 2014 18:53 |
|
Thanks Thalagyrt, super helpful. btw I was moments away from signing up for your managed vps service the other day, but all of my traffic is from the UK. you should get a european location!
|
# ? Dec 24, 2014 19:27 |
|
Can you guys recommend a way for me to monitor mysql and essentially ensure it is always running? I don't know why, but it occasionally crashes, in which case I get a Pingdom alert. I've set up a script that lets me quickly log in from my phone (using WorkflowHQ) and restart either the process or the machine, but I really need to figure out why it is happening and find a way to respond to it as needed. Where to start?
|
# ? Dec 24, 2014 22:10 |
|
Something as small as supervisord or runit might be enough to ensure it's always running. Monit might be a better choice for error recovery and telling you when it fell over.
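For Monit specifically, a minimal check might look something like this — a sketch only; the pidfile and socket paths are Debian-ish defaults and may differ on your system:

```
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
  start program = "/usr/sbin/service mysql start"
  stop program  = "/usr/sbin/service mysql stop"
  if failed unixsocket /var/run/mysqld/mysqld.sock then restart
  if 3 restarts within 5 cycles then alert
```

The unixsocket test catches the case where mysqld is technically running but not answering, which a plain pidfile check would miss.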
|
# ? Dec 24, 2014 22:27 |
|
minato posted:Something as small as supervisord or runit might be enough to ensure it's always running. Thanks. I was actually just looking at Monit.
|
# ? Dec 24, 2014 22:47 |
|
This might work on Wordpress, but I remember using it for other applications that had a directory just for images and other media, where you knew for sure scripts were never going to run from it. The place I was working at the time had customers who just wouldn't switch to an application that wasn't written horribly. There is also a directive that disables just PHP, "php_flag engine off", which you can put in Apache's config or in a .htaccess file depending on how your server is set up.
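For example, dropped into an uploads directory (this assumes mod_php with .htaccess overrides allowed; it has no effect under PHP-FPM/FastCGI setups):

```
# .htaccess in e.g. wp-content/uploads/ — turn the PHP engine off
# so an uploaded .php file is served as text instead of executed
php_flag engine off
```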
|
# ? Dec 24, 2014 22:56 |
|
Hooray, my wish came true and rpmfusion-nonfree started repackaging Dropbox's horrible RPMs.
|
# ? Dec 25, 2014 01:22 |
|
Is it me, or is Docker really awesome? I just installed a Wordpress container on a fresh Debian install. Smooooth..... Is it too good to be true? Am I being retarded for installing and running Wordpress through Docker? Mr Shiny Pants fucked around with this message at 20:29 on Dec 27, 2014 |
# ? Dec 27, 2014 20:08 |
|
Mr Shiny Pants posted:Is it me, or is Docker really awesome? I just installed a Wordpress container on a fresh Debian install. Smooooth..... Docker is great. Just, you know, be aware that the container will not be updated automatically when you update the host OS. You'll need to either update things inside the running container, or download and run a newer image (and figure out how to keep your data: either export/import it, or -- much better -- keep the data on a separate docker volume).
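A sketch of the separate-volume approach (the volume and container names are placeholders; the official `wordpress` image keeps its files under /var/www/html, but check the docs of whatever image you actually use):

```shell
# Keep the site's files on a named volume that outlives any one container.
docker volume create wp_data
docker run -d --name blog -v wp_data:/var/www/html wordpress

# Later, to upgrade: pull a newer image, replace the container, keep the volume.
docker pull wordpress
docker rm -f blog
docker run -d --name blog -v wp_data:/var/www/html wordpress
```

The container stays disposable; the volume is the only thing you need to back up.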
|
# ? Dec 27, 2014 20:39 |
|
I am a huge Docker fanboy (well, specifically a containerization fanboy). Compared to the host + configuration hell I've been lumped with for the last few years, managing and running containers is a breath of fresh air. Just being able to have a multitude of apps available on a single system without any chance of them conflicting is a huge benefit for both Dev and Ops. It's not a panacea, it's still early days and it's yet to reach maturity. But I truly think that most Linux apps will be delivered this way in the future.
|
# ? Dec 27, 2014 20:44 |
|
As awesome as Docker is, it is amusing to see the containerization community rediscover all the problems that traditional packaging systems worked through fifteen years ago.
|
# ? Dec 27, 2014 20:52 |
|
Misogynist posted:As awesome as Docker is, it is amusing to see the containerization community rediscover all the problems that traditional packaging systems worked through fifteen years ago. True, but it is pretty nice when it works, and it seems like it has a lot of mindshare. I've just installed a couple of programs that had Docker containers readily available. It feels a bit wasteful though, creating almost complete Linux installations to run an app. The part about keeping my data is also something I am not really keen on yet. The Wordpress container runs its own MySQL installation. How do I extract my stuff? Another thing is that you don't know how well they packaged the container and the application settings. Maybe they left dumb defaults in the container; how do you figure this out? Might be my newbness though.
|
# ? Dec 27, 2014 21:01 |
|
Go check the security fuckup thread in yospos for some docker hilarity. It's the wave of the future no doubt but someone is going to do it better
|
# ? Dec 27, 2014 21:01 |
|
Mr Shiny Pants posted:True, but it is pretty nice when it works. It seems like it has a lot of mindshare. I've just installed a couple of programs that had Docker containers readily available.

Mr Shiny Pants posted:The part of keeping my data is also something I am not really keen on yet. The Wordpress container runs it's own MySQL installation. How do I extract my stuff?

Mr Shiny Pants posted:Another thing is that you don't know how well they packaged the container and the application settings. Maybe they left dumb defaults in the container, how do you figure this out? Might be my newbness though.

When Shellshock hit, I was really glad my applications weren't from some loving community Docker repository.

Janitor Prime posted:Go check the security fuckup thread in yospos for some docker hilarity. It's the wave of the future no doubt but someone is going to do it better

Vulture Culture fucked around with this message at 21:32 on Dec 27, 2014
# ? Dec 27, 2014 21:27 |
|
Misogynist posted:If it's done correctly, it's pretty far from a complete Linux installation. It shouldn't be significantly heavier than an omnibus install of an app running on a native image, but people are still figuring out the best way to Dockerize their applications. Thanks for the post, seems like I will be installing Wordpress on its own machine. Isn't that IT in a nutshell? Old is new again? I've seen it countless times now.
|
# ? Dec 27, 2014 21:41 |
|
One of Docker's hooks is "wow, it's so easy, I just downloaded this community Docker build of application X and had it running in seconds!" and that's great for experimentation, but the community is really not where people should be getting their production containers. To my mind, the value of Docker is "wow, it's so easy, I just downloaded this build of application X from our internal build server* and had it running in seconds!" * where the build server is maintained by a competent DevOps group who will audit the build process and take care of any Shellshock-style vulnerabilities.
|
# ? Dec 27, 2014 22:01 |
|
minato posted:One of Docker's hooks is "wow, it's so easy, I just downloaded this community Docker build of application X and had it running in seconds!" and that's great for experimentation, but the community is really not where people should be getting their production containers. To my mind, the value of Docker is "wow, it's so easy, I just downloaded this build of application X from our internal build server* and had it running in seconds!" p.s. gently caress "devops groups"
|
# ? Dec 27, 2014 22:54 |
|
|
GregNorc posted:Technically an OSX question, but I'd rather do this on the CLI than use some proprietary app. Delete everything in that folder every two weeks, or delete files older than two weeks every two weeks? e: In any case, the Trash is located in /Users/username/.Trash/, so what you would want is to move the files there instead of outright deleting them. If what you want is to move everything in that folder to the trash, it's something like "mv ~/screenshots/* ~/.Trash/", possibly creating a new directory first to keep them sorted.
http://launched.zerowidth.com/ Launchd will make sure that if your laptop is off or suspended when the command was supposed to run, the command will run when your laptop wakes up. For example, this will run the command at 1800 hours, on the 1st and 15th every month: http://launched.zerowidth.com/plists/2f9afac0-7049-0132-b41f-3625a680fe4a kujeger fucked around with this message at 23:56 on Dec 27, 2014 |
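A sketch of the move-to-Trash step described above, shown against throwaway directories so it's safe to run as-is (swap in ~/screenshots and ~/.Trash for real use):

```shell
# Stand-ins for ~/screenshots and ~/.Trash (demo placeholders).
src=$(mktemp -d)
trash=$(mktemp -d)
touch "$src/shot1.png" "$src/shot2.png"

# Move everything into a dated subfolder of the trash instead of
# deleting outright, so a bad run is recoverable.
dest="$trash/screenshots-$(date +%Y-%m-%d)"
mkdir -p "$dest"
mv "$src"/* "$dest"/

ls "$dest"   # lists shot1.png, shot2.png
```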
# ? Dec 27, 2014 23:38 |
|
I want to do a minimal (headless) linux install for a dedicated VM host. I will use KVM. Do any particular distros stand out for this use case? Obvious concerns that come to mind are stability and security. It's a home server that will only ever be a single host.
|
# ? Dec 28, 2014 15:08 |
|
Death Vomit Wizard posted:I want to do a minimal (headless) linux install for a dedicated VM host. I will use KVM. Do any particular distros stand out for this use case? Obvious concerns that come to mind are stability and security. It's a home server that will only ever be a single host. The standard Ubuntu/Debian/CentOS choices would be good. Go with what you're familiar with.
|
# ? Dec 28, 2014 18:09 |
|
That is good news. Coming from a background of Debian and Ubuntu desktop use, I look forward to trying out CentOS for this project. Checking out the amount of free, quality documentation out there for RHEL has made me pretty excited.
|
# ? Dec 29, 2014 10:21 |
|
Red Hat is pretty heavily invested in KVM which makes RHEL and by extension CentOS a good platform for it.
|
# ? Dec 29, 2014 10:31 |
|
I have a server running right beneath my laptop. It's connected to the same network. Both are running Ubuntu. Short of hooking up a monitor, how do I find out its IP?
|
# ? Dec 30, 2014 03:34 |
|
If the server is configured to use DHCP, just check the leases on whatever device hands out DHCP. If not, grab the network info from your laptop and try running a scan of your subnet with nmap. It should locate the server unless it's severely firewalled off. Assuming it's a home lab or something else you're in control of, that is. Randomly port scanning other people's networks is frowned upon. vvv cool Docjowles fucked around with this message at 03:53 on Dec 30, 2014 |
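The nmap step is essentially a one-liner; a sketch, where the subnet is derived from the laptop's own address and -sn restricts nmap to host discovery (only run this against a network you control):

```shell
# Work out the local IPv4 subnet (e.g. 192.168.1.42/24) from the first
# globally-scoped address; ends up empty if none is configured.
subnet=$(ip -o -4 addr show scope global 2>/dev/null | awk '{print $4; exit}')
echo "subnet: ${subnet:-unknown}"

# -sn: ping scan only (host discovery, no port scan).
if [ -n "$subnet" ] && command -v nmap >/dev/null 2>&1; then
  nmap -sn "$subnet"
fi
```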
# ? Dec 30, 2014 03:47 |
|
Docjowles posted:If the server is configured to use DHCP, just check the leases on whatever device hands out DHCP. If not, grab the network info from your laptop and try running a scan of your subnet with nmap. It should locate the server unless it's severely firewalled off. Assuming it's a home lab or something else you're in control of, that is. Randomly port scanning other people's networks is frowned upon. Yeah found it with nmap, thanks.
|
# ? Dec 30, 2014 03:50 |
|
Liam Emsa posted:Yeah found it with nmap, thanks. http://bash.org/?5273
|
# ? Dec 30, 2014 07:00 |
|
That's what the eject command is for. Are there any decent linux news and discussion podcasts?
|
# ? Dec 30, 2014 18:10 |
|
madpanda posted:Are there any decent linux news and discussion podcasts? Seconding this. I listened to This Week in Enterprise Tech a bit, but it's very... broad. I'm still looking for something good and a bit more technically-focused.
|
# ? Dec 30, 2014 18:14 |
|
TWiT's FLOSS weekly podcast is good in my opinion. It's always on top of recent events and interviews people from interesting projects. http://twit.tv/show/floss-weekly
|
# ? Dec 30, 2014 19:55 |
|
I heard a story once about a local university where they were tracking down and replacing old computers. They had found every machine except for one. Eventually they had to follow the cable physically, and it led them straight into a wall. The machine had been behind the wall, built over, and been just sitting there turned on for like a decade with no one noticing.
|
# ? Dec 31, 2014 00:57 |
|
Liam Emsa posted:I heard a story once about a local university where they were tracking down and replacing old computers. They had found every machine except for one. Eventually they had to follow the cable physically, and it led them straight into a wall. The machine had been behind the wall, built over, and been just sitting there turned on for like a decade with no one noticing. Is this on Snopes yet? Not saying I doubt you personally, but I've probably heard this exact story ten different times. I will say that (before I worked there) my company went through a full data center move. They found a couple of servers that had been racked, powered on, connected to the network... and then never used. We had been in that DC for many years, so those boxes spent their entire 3-year amortization (or whatever) just sitting idle. loving inventory, how does it work?
|
# ? Dec 31, 2014 03:23 |
|
Docjowles posted:Is this on Snopes yet? Not saying I doubt you personally but I've probably heard this exact story ten different times. Nah, I think this one was real, I was pretty sure it was UNC, and this came up: http://www.theregister.co.uk/2001/04/12/missing_novell_server_discovered_after/ quote:In the kind of tale any aspiring BOFH would be able to dine out on for months, the University of North Carolina has finally located one of its most reliable servers - which nobody had seen for FOUR years.
|
# ? Dec 31, 2014 03:51 |
|
Docjowles posted:Is this on Snopes yet? Not saying I doubt you personally but I've probably heard this exact story ten different times. Ask again in the poo poo that pisses you off thread. I have seen two people claim it happened to them there. Or just read the thread; it was in the current incarnation that I read it (or possibly the Ticket Came In thread).
|
# ? Dec 31, 2014 14:49 |
|
|
|
I am having a hell of a time figuring out what switch I need to flip in order to get kernel debugging messages to show up. This is under Debian 7 with the following kernel:code:
code:
code:
|
# ? Dec 31, 2014 20:39 |
|
If you run "sudo dmesg", do you see it there? I mostly work on Red Hat so it may not be true of Debian, but IIRC syslog is not configured to log kernel messages by default. You'd want an entry in /etc/rsyslog.conf that captures kern.* messages into /var/log/kern.log.
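The rsyslog entry in question is a one-line selector/action pair (standard rsyslog selector syntax; the target path is just the conventional Debian location):

```
# /etc/rsyslog.conf — send all kernel-facility messages to a file
kern.*    /var/log/kern.log
```

Restart rsyslog afterwards (e.g. service rsyslog restart) for it to take effect.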
|
# ? Dec 31, 2014 20:55 |
|
|
Yes, I have been using dmesg to check the logs. Apparently something in my driver code is hosed - because a simple Hello World driver prints kernel messages as expected. So it appears to be a false alarm. Thanks for your feedback anyway!
|
# ? Dec 31, 2014 22:07 |