DholmbladRU
May 4, 2006
Having trouble performing something pretty trivial: displaying a blob image on a page. The images render as broken links, meaning I'm missing something.


code to save img to database

php:
<?
if (isset($tmpName) && isset($mimeType) && isset($fileName)) {
    // Read the file
    $fp = fopen($tmpName, 'r');
    $data = fread($fp, filesize($tmpName));
    $data = addslashes($data);
    fclose($fp);

    if ($data) {
        echo $fileName;
        echo $id;

        $parameters = array(
            ':id'           => null,
            ':workorder_id' => $id,
            ':filename'     => $fileName,
            ':mime'         => $mimeType,
            ':category_id'  => null,
            ':FileOrder'    => null,
            ':photoBlob'    => $data,
        );

        DB::insert('inspection_photos')
            ->values(array_keys($parameters))
            ->parameters($parameters)
            ->execute($this->db);
    }
}
?>
retrieving img
php:
<?
public function get_photos_by_id($id)
{
    $result = DB::query(Database::SELECT, 'SELECT *
                                           FROM inspection_photos
                                           WHERE workorder_id = :id')
        ->parameters(array(':id' => $id))
        ->as_object()
        ->execute($this->db);

    return $result;
}
?>
poor attempt at displaying
php:
<?
echo "<img src='data:".$photos[$i]->mime.";base64," 
                                    . base64_encode($photos[$i]->photoBlob). "'/>";
?>
Result (the blob is a lot longer, but I truncated it for this post):

code:
<img src="data:image/jpeg;base64,/9j/4S/+RXhpZ2mSl1aEk+bkAbWXsMZP">


Wheany
Mar 17, 2006


DholmbladRU posted:

PHP code:
        $data = addslashes($data);

My guess is it's this part

DholmbladRU
May 4, 2006

Wheany posted:

My guess is it's this part

Looks like it only uploads a portion of the photo. Is it possible some php.ini setting is preventing the whole file from being uploaded?
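
Two things worth checking on that theory: PHP caps upload sizes via the upload_max_filesize and post_max_size directives in php.ini, and MySQL's plain BLOB type tops out at 64 KB (MEDIUMBLOB goes to 16 MB), so an undersized column will also silently truncate the data. A minimal sketch for checking the PHP side; the form field name 'photo' here is made up, use whatever the upload form actually posts:

php:
<?php
// Print the current PHP-side upload limits.
echo 'upload_max_filesize = ' . ini_get('upload_max_filesize') . "\n";
echo 'post_max_size       = ' . ini_get('post_max_size') . "\n";

// UPLOAD_ERR_INI_SIZE means the file blew past upload_max_filesize.
if (isset($_FILES['photo']) && $_FILES['photo']['error'] !== UPLOAD_ERR_OK) {
    echo 'Upload failed, error code ' . $_FILES['photo']['error'] . "\n";
}
?>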


obstipator
Nov 8, 2009

There's no reason to use addslashes() on that data, since your database query is parameterized. addslashes/mysql_real_escape_string/etc. are for preventing SQL injection, which isn't necessary when everything going into the query is parameterized. What's probably happening is that the extra slashes are making the result an invalid image. Also, you may want to consider saving the images to the file system instead of the database, and just storing the file location in the table for later retrieval.

If it's not addslashes, then it could be that the image is bigger than the space you've allowed for it in the database, so it's getting truncated.
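
A minimal sketch of the save path with the addslashes() call dropped, keeping the same Kohana-style query builder as the original post - the parameterized query handles the raw binary as-is:

php:
<?php
// Read the raw bytes; no escaping, the parameter binding takes care of it.
$data = file_get_contents($tmpName);

$parameters = array(
    ':id'           => null,
    ':workorder_id' => $id,
    ':filename'     => $fileName,
    ':mime'         => $mimeType,
    ':category_id'  => null,
    ':FileOrder'    => null,
    ':photoBlob'    => $data,
);

DB::insert('inspection_photos')
    ->values(array_keys($parameters))
    ->parameters($parameters)
    ->execute($this->db);
?>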

DholmbladRU
May 4, 2006

obstipator posted:

There's no reason to use addslashes() on that data, since your database query is parameterized. addslashes/mysql_real_escape_string/etc. are for preventing SQL injection, which isn't necessary when everything going into the query is parameterized. What's probably happening is that the extra slashes are making the result an invalid image. Also, you may want to consider saving the images to the file system instead of the database, and just storing the file location in the table for later retrieval.

If it's not addslashes, then it could be that the image is bigger than the space you've allowed for it in the database, so it's getting truncated.

Are there any real benefits to storing the files in the file system vs. storing them in the database? Is retrieval quicker from the file system?

obstipator
Nov 8, 2009

I don't know many specifics on this, but filesystems were designed to store files, so I've always gone with them.

The main benefit of storing things on the filesystem is that you won't have to do any coding to output the image. Apache and friends already know how to serve and cache it, so if the same image pops up multiple times or for multiple users, it's better optimized. By storing it in the db, you have to do some extra processing to base64 it (negligible for the most part), and it also costs more bandwidth, since the base64'd version is longer.

I've never done any comparisons between filesystem and database, so who knows - for your purpose it might be perfectly fine.
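
A sketch of the filesystem approach, with made-up paths and column names: move the upload onto disk, store only the path in the table, and let the web server serve the file:

php:
<?php
// Move the upload somewhere the web server can reach; the unique prefix
// avoids filename collisions. Paths and column names here are made up.
$dir      = '/var/www/uploads/inspection_photos';
$basename = uniqid('photo_', true) . '.jpg';

if (!move_uploaded_file($tmpName, $dir . '/' . $basename)) {
    die('could not move upload');
}

// Store only the location; the blob never touches the database.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO inspection_photos (workorder_id, filename, mime, path)
                       VALUES (:workorder_id, :filename, :mime, :path)');
$stmt->execute(array(
    ':workorder_id' => $id,
    ':filename'     => $fileName,
    ':mime'         => $mimeType,
    ':path'         => $basename,
));

// Displaying it is then a plain <img> tag Apache can serve and cache:
echo "<img src='/uploads/inspection_photos/" . htmlspecialchars($basename) . "'/>";
?>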

Wheany
Mar 17, 2006


DholmbladRU posted:

Looks like it only uploads a portion of the photo.

Judging from what?

Is the image partially intact as shown in the browser after the round trip (browser -> php/uploaded image -> database -> php -> back to browser), or are you looking at the database blob and finding it shorter than expected?

Spades
Sep 18, 2011

DholmbladRU posted:

Are there any real benefits to storing the files in the file system vs. storing them in the database? Is retrieval quicker from the file system?

I'd say that in the majority of cases, serving from the file system should be considerably faster.

The advantage that comes to mind immediately for storing images in the file system is that there's no need to get PHP or the database involved in serving them, which should considerably shorten the request time.

Additionally, if you don't have some kind of server-side caching in place, the whole process (PHP request, database seek, etc.) repeats every time the image is viewed, even by the same user. Generally speaking, this means extremely poor scalability - something I noticed quickly on a previous project where I tried this approach for the sake of experimentation.

I haven't researched it, but I surmise having BLOB fields in your relations would play some kind of havoc with your DB page sizes and push out seek times. I seem to remember some RDBMSs like MSSQL have specialized support for BLOB-style fields that need to be accessed as binary files.

Since none of this is helpful if you've actually decided to use a database to store images: storing them in the database but caching them to the file system when they actually need to be viewed might be a workable compromise if you're concerned about speed.
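
A sketch of that cache-to-disk workaround; $photo is assumed to be one row from the earlier get_photos_by_id(), with an id column, and the cache path is made up:

php:
<?php
// Write the blob out once; afterwards the web server serves the static
// copy and PHP/MySQL stay out of the request entirely.
function cached_photo_url($photo)
{
    $file = '/var/www/cache/photos/' . $photo->id . '.jpg';

    if (!file_exists($file)) {
        file_put_contents($file, $photo->photoBlob);  // populate on first view
    }
    return '/cache/photos/' . $photo->id . '.jpg';
}

// echo "<img src='" . cached_photo_url($photos[$i]) . "'/>";
?>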


mclifford82
Jan 27, 2009

Not sure if this is the best place for this question or a new thread, but I really need a jumping-off point here.

I'm basically trying to understand what functions/concepts I need to know in PHP to get the Twitter Streaming API working. I have a simple node.js script that accomplishes this, but I'd prefer something that runs in a hosted environment (mine doesn't run node.js) and uses a data store I'm more familiar with (MySQL, versus MongoDB in the node.js version).

So I turn to the PHP goons in the hope someone can kick me in the right starting direction. Or should I keep doing it this way (it is very fast at collection), and then process the tweets from Mongo into MySQL for reporting?

edit: Also worth noting: I have scripts that get called every hour and write results to MySQL from the Twitter REST API, but you get far fewer results that way, and not in real time. One of the concepts I'm having trouble wrapping my mind around with PHP is how to get a script to just run perpetually and accept input from Twitter. I think there's foundation knowledge I'm missing here.

code:
var util = require('util'),
    twitter = require('twitter'),
    mongo = require("mongodb");
var twit = new twitter({
	consumer_key: 'super',
	consumer_secret: 'secret',
	access_token_key: 'stuff',
	access_token_secret: 'man'
});

var host = "127.0.0.1";
var port = mongo.Connection.DEFAULT_PORT;
var db = new mongo.Db("nodejs-seahawks", new mongo.Server(host, port, {}));
var tweetCollection;
db.open(function(error) {
	console.log("We are connected.");
	
	db.collection("tweet", function(error, collection) {
		tweetCollection = collection;
	});
});

// Initialize the twitter stream
twit.stream('statuses/filter', {track: '#Seahawks'}, function(stream) {
	stream.on('data', function(data) {
		tweetCollection.insert(data, function(error) {
			if (error) {
				console.log("Error inserting: ", error);
			} else {
				console.log("Inserted tweet: ", data.text);
			}
		});
	});	
});


Spades
Sep 18, 2011
I'm spitballing, as I'm not sure precisely what you're aiming to do, but if you're looking for any streaming/push-style behavior you'll likely need to look into PHP's WebSocket implementations.

mclifford82
Jan 27, 2009

Spades posted:

I'm spitballing, as I'm not sure precisely what you're aiming to do, but if you're looking for any streaming/push-style behavior you'll likely need to look into PHP's WebSocket implementations.

Basically, I want to set up groupings of related hashtags and store as many tweets as possible during a 1-3 hour window of time. I guess a decent example might be #BreakingBad, where I'd want to store any tweet with that hashtag during the hour the show airs on television. Then I'll want to do some analysis on the back end of that, because what fun is just raw data?

I'll have a look into WebSockets, thanks for the recommendation. I'm still very very green when it comes to any kind of networking.

Thanks again, and to anyone else who replies as well.

McGlockenshire
Dec 16, 2005

mclifford82 posted:

I'm basically trying to understand what functions/concepts I need to know in PHP to get the Twitter Streaming API working. I have a simple node.js script that accomplishes this, but I'd prefer something that runs in a hosted environment (mine doesn't run node.js) and uses a data store I'm more familiar with (MySQL, versus MongoDB in the node.js version).

[...]

Basically, I want to set up groupings of related hashtags and store as many tweets as possible during a 1-3 hour window of time.

You aren't going to get away with running *anything* PHP for hours at a time on shared hosting. Unless you have a shell there and have already received permission to spawn long-running processes, you'll want to find another hosting option before going the PHP route.

If you end up using PHP, check out the phirehose library to handle the Twitter Streaming API.
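
For reference, the basic phirehose pattern looks roughly like this: you subclass it and implement enqueueStatus(), which gets called once per tweet. This is a sketch from memory; check the library's README for the exact class and constant names:

php:
<?php
require_once 'Phirehose.php';
require_once 'OauthPhirehose.php';

// OauthPhirehose expects the app credentials as constants.
define('TWITTER_CONSUMER_KEY', 'super');
define('TWITTER_CONSUMER_SECRET', 'secret');

class HashtagConsumer extends OauthPhirehose
{
    // Called once per tweet; keep it fast and hand heavy work elsewhere.
    public function enqueueStatus($status)
    {
        $data = json_decode($status, true);
        if (isset($data['text'])) {
            // INSERT into MySQL here instead of printing.
            echo $data['user']['screen_name'] . ': ' . $data['text'] . "\n";
        }
    }
}

$stream = new HashtagConsumer('access_token', 'access_secret', Phirehose::METHOD_FILTER);
$stream->setTrack(array('#Seahawks'));
$stream->consume();  // blocks and runs until the process is killed
?>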

Wheany
Mar 17, 2006


McGlockenshire posted:

You aren't going to get away with running *anything* PHP for hours at a time on shared hosting.

Mclifford82 could just run it locally on his own computer or some abandoned netbook or something, do whatever analysis offline and then push the results in one go, if they need to be available publicly.

DholmbladRU
May 4, 2006
I am generating a PDF in PHP using dompdf. This works, and I am able to generate the PDF from a view containing HTML and PHP. One requirement I have is to include another PDF inside the generated PDF. However, I don't need to embed the PDF with a PDF viewer like in the link below:

http://pdfobject.com/examples/simplest-full-window.html

All I really need is a static "image" of the PDF inside the parent PDF. I don't know if this makes sense. Does anyone have experience doing this with dompdf or HTML?

Spades
Sep 18, 2011
Could you clear up what you mean by a static image? I'm not sure what behavior you're attempting to create.

McGlockenshire
Dec 16, 2005

Are you just trying to combine multiple PDFs into a single document?

DholmbladRU
May 4, 2006
Ah yeah, I guess I need to merge the PDFs. With dompdf you build the PDF you'd like to render with HTML. Initially I was thinking I could render the "embedded" PDF with HTML, but the only options seemed to involve some PDF player that lets you scroll through and highlight things. I will investigate merging PDFs, thanks.
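
One way to do the merge is FPDI (a separate library from dompdf), which imports pages from an existing PDF as flat, image-like templates - which also happens to match the "static image of the PDF" requirement. A sketch, assuming the classic FPDI/FPDF API:

php:
<?php
require_once 'fpdf.php';
require_once 'fpdi.php';

$pdf = new FPDI();

// Append every page of each source document to the output.
foreach (array('generated-by-dompdf.pdf', 'attachment.pdf') as $source) {
    $pageCount = $pdf->setSourceFile($source);
    for ($i = 1; $i <= $pageCount; $i++) {
        $tpl = $pdf->importPage($i);   // imported pages are flattened
        $pdf->AddPage();
        $pdf->useTemplate($tpl);
    }
}

$pdf->Output('merged.pdf', 'F');  // 'F' = write to file
?>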

revmoo
May 25, 2006

I've been annoyed at having to run composer manually in the past, and I don't want to use Git hooks and lock files. I'm thinking about doing the reverse of the typical setup: committing my vendor folder contents and .gitignore-ing the composer binary. Can anyone think of a downside to doing it that way? I never want to pull in a third-party lib without testing it first, so the auto-update functionality seems only useful on dev machines. I worked at a couple of places that just ran composer after a clone or pull, but that seems stupid, especially if you deploy via Git.

musclecoder
Oct 23, 2006

revmoo posted:

I've been annoyed at having to run composer manually in the past, and I don't want to use Git hooks and lock files. I'm thinking about doing the reverse of the typical setup: committing my vendor folder contents and .gitignore-ing the composer binary. Can anyone think of a downside to doing it that way? I never want to pull in a third-party lib without testing it first, so the auto-update functionality seems only useful on dev machines. I worked at a couple of places that just ran composer after a clone or pull, but that seems stupid, especially if you deploy via Git.

I'm not quite sure what you're talking about, but it sounds like a bad idea. You shouldn't commit your vendor directory.

How often are you running Composer manually?

All of my code bases have a build-dev script that downloads the PHARs I need (they're ignored) and then runs Phing to build the application itself. So if I need to nuke everything and start over, I just run build-dev. Or if I'm just starting out, I just run build-dev. This way I don't have to muck with tons of command-line arguments.

But don't commit your vendor directory; it should be completely ignored.

Edit: Also, don't deploy with Git. Use a real deployment tool.

revmoo
May 25, 2006

I know, I created devops at my last job but I haven't had time to do anything with the current setup.

What is so bad about committing vendor/? It seems reckless to me to deploy auto-updating libs to a production system without fully evaluating new versions in a development environment. It also seems a lot simpler to be able to git clone and immediately have a fully operational app. Assuming you want to manually evaluate every lib you use in an app, is there anything else wrong with committing it? I've never heard an argument for .gitignore-ing it that I agreed with, but everybody seems to be very against it.

McGlockenshire
Dec 16, 2005

Most of the arguments against committing vendor/ are pretty low-key, but they can matter.

I have a simple CRUD application with minimal requirements, and my vendor/ is 6 megs (15 megs on disk!) with 1300 files and 75k SLOC (so says sloccount, at least). Depending on what's included and how often it's updated, keeping it in source control also means needlessly duplicating vendor changes in your repo. This can make for quite a mess.

You should just be committing your composer.json and composer.lock files. Don't worry about the update feature - you have to invoke it manually. Composer will only install the exact versions from the exact URLs specified in the .lock file.
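
A minimal sketch of that workflow (monolog here is just an arbitrary example package): pin dependencies in composer.json, commit the composer.lock that install generates, and never commit vendor/:

code:
{
    "require": {
        "monolog/monolog": "1.7.*"
    }
}
Then 'php composer.phar install' reads composer.lock and fetches those exact versions; 'php composer.phar update' is the only command that re-resolves versions and rewrites the lock file.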

musclecoder
Oct 23, 2006

revmoo posted:

I know, I created devops at my last job but I haven't had time to do anything with the current setup.

What is so bad about committing vendor/? It seems reckless to me to deploy auto-updating libs to a production system without fully evaluating new versions in a development environment. It also seems a lot simpler to be able to git clone and immediately have a fully operational app. Assuming you want to manually evaluate every lib you use in an app, is there anything else wrong with committing it? I've never heard an argument for .gitignore-ing it that I agreed with, but everybody seems to be very against it.

Ideally you would check out a test branch, install the new lib you were evaluating, run your tests against it, ensure it doesn't break anything and works as expected, commit composer.json and .lock, and then merge your branch into master and deploy. Are you really just doing everything in master and deploying straight from there? Running 'php composer.phar install' installs everything from composer.lock, not whatever the latest version of every library is.

Also, you can check out your code and immediately have it run? Does that mean you're committing credentials in your configuration files?

The entire point of having a vendor system (and every major language has one: Ruby, Python, Scala, Java, Go, Clojure, JavaScript, etc.) is so that you don't have to commit a bunch of libraries to your code base.

v1nce
Sep 19, 2004

Muscle, I agree with what you've said in essence, but we have our own approach at my current workplace that I'd like opinions on. We're an agency dealing with a wide variety of sites and clients with varying budgets and needs, so finding a process that works for the vast majority has been difficult. I agree and disagree with varying parts of what follows.

For deployment we exclusively use Git; we have a repo on the production server with a post-commit hook that pulls the latest staging branch onto the staging vhost. This is used for testing and client sign-off before changes go live.
For changes to go into production, you have to log in to the server itself and do a manual pull. This safeguards us against doing something dumb and accidentally pushing crap to the live site.

Provided nobody farts about on the live server and everyone keeps to our git-flow-esque branching procedure, everything's been working just fine.

In regards to the vendor directory, we .gitignore the entire contents and keep it out of the repo, provided the site is continuously being worked on - usually during the initial development phase, or if the site is actively developed day-to-day, which actually only applies to a handful of our client base.

For all other clients, it's not unusual for the vendor directory to have its packages locked to specific versions and then checked into git. This is typically done to ensure git deployment is seamless, to keep an entire snapshot of the site, and to avoid issues with sub-dependencies, since the contents of the vendor directory aren't expected to change.
After a given period we might update the package versions to something more recent, run the site through a series of end-user tests to ensure nothing is broken, and then send the updated site live.

Now for something I really don't agree with: we do commit credentials to source control in the vast majority of cases. This was originally enforced to create a "dev is as close to live as possible" experience. Across our three environments - dev, staging, and production - we use Apache environment variables to detect when we're in dev or staging; otherwise production is assumed.

One change we're looking into is using environment variables to pass the credentials to the configuration script. This seems like a double-edged sword: you'd need access to the server itself to see where the variables originate (apache/nginx config), but a dump of $_ENV would reveal your credentials. That's a rare thing to dump, mind you, and if that kind of data about your site is out there, you probably have other problems.

My preferred alternative is basically what everyone else does: a local-only, git-ignored file containing credentials, based on a versioned .dist copy with dummy data.
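
A minimal sketch of that pattern - the committed .dist file carries the shape and dummy values, while the real config.php is listed in .gitignore:

php:
<?php
// config.php.dist -- committed with dummy data; copy to config.php
// (git-ignored) and fill in per-environment values.
return array(
    'db_dsn'      => 'mysql:host=localhost;dbname=app',
    'db_user'     => 'CHANGEME',
    'db_password' => 'CHANGEME',
);
?>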

We currently have no procedure for database versioning. One possible method is to produce SQL files containing ALTER statements and keep a version number of the DB in the DB itself. A deployment script would then check the database version number (e.g. 6), execute any file with a higher number (e.g. 7, 8, 9), and update the stored version to the highest file number.

I'd be interested to hear opinions on the above, and any suggestions for the SQL database versioning approach would be welcome.
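
For what it's worth, the numbered-ALTER-file scheme described above fits in a surprisingly small script. A sketch, assuming a one-row schema_version table and migration files named sql/7.sql, sql/8.sql, and so on:

php:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Current schema version lives in the database itself.
$current = (int) $pdo->query('SELECT version FROM schema_version')->fetchColumn();

$files = glob('sql/*.sql');
natsort($files);  // sort numerically: 7, 8, 9, 10, ...

foreach ($files as $file) {
    $number = (int) basename($file, '.sql');
    if ($number <= $current) {
        continue;  // already applied
    }
    $pdo->exec(file_get_contents($file));  // run the ALTER statements
    $pdo->prepare('UPDATE schema_version SET version = ?')->execute(array($number));
    echo "applied migration $number\n";
}
?>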

musclecoder
Oct 23, 2006

v1nce posted:

Muscle, I agree with what you've said in essence, but we have our own approach at my current workplace that I'd like opinions on. We're an agency dealing with a wide variety of sites and clients with varying budgets and needs, so finding a process that works for the vast majority has been difficult. I agree and disagree with varying parts of what follows.

I own a small software development consultancy (probably similar to where you work) and we manage about 10 different projects. We normalize them by basing them all on Symfony, to ensure everything is consistent.

v1nce posted:

For deployment we exclusively use Git; we have a repo on the production server with a post-commit hook that pulls the latest staging branch onto the staging vhost. This is used for testing and client sign-off before changes go live.
For changes to go into production, you have to log in to the server itself and do a manual pull. This safeguards us against doing something dumb and accidentally pushing crap to the live site.

Provided nobody farts about on the live server and everyone keeps to our git-flow-esque branching procedure, everything's been working just fine.

Don't do this. It doesn't scale if you have to add more servers. Learn to use a proper deployment tool like Capistrano (and eventually Chef/Puppet/Ansible) to automate all of this. It's silly to do this manually, and you could easily prevent broken builds from going out by using some sort of continuous integration server (CI Joe/Jenkins/CircleCI).

v1nce posted:

In regards to the vendor directory, we .gitignore the entire contents and keep it out of the repo, provided the site is continuously being worked on - usually during the initial development phase, or if the site is actively developed day-to-day, which actually only applies to a handful of our client base.

For all other clients, it's not unusual for the vendor directory to have its packages locked to specific versions and then checked into git. This is typically done to ensure git deployment is seamless, to keep an entire snapshot of the site, and to avoid issues with sub-dependencies, since the contents of the vendor directory aren't expected to change.
After a given period we might update the package versions to something more recent, run the site through a series of end-user tests to ensure nothing is broken, and then send the updated site live.

It should always be ignored and installed using "php composer.phar install" whenever you build and release your software. There's no reason to muddy up your repository with hundreds of library-style PHP files.

v1nce posted:

Now for something I really don't agree with: we do commit credentials to source control in the vast majority of cases. This was originally enforced to create a "dev is as close to live as possible" experience. Across our three environments - dev, staging, and production - we use Apache environment variables to detect when we're in dev or staging; otherwise production is assumed.

One change we're looking into is using environment variables to pass the credentials to the configuration script. This seems like a double-edged sword: you'd need access to the server itself to see where the variables originate (apache/nginx config), but a dump of $_ENV would reveal your credentials. That's a rare thing to dump, mind you, and if that kind of data about your site is out there, you probably have other problems.

All of this is horrible and you should stop doing it immediately. It's really bad for a few reasons.

First, if your code is compromised from GitHub or wherever you store it, your credentials are compromised too. There's never a good reason to store credential files in version control. Also, what happens when a dev accidentally commits an invalid credential and now your production site isn't working because the Stripe key is broken?

Next, it makes development harder. Devs must now bend their environment to the will of the committed credentials. If you're using a third-party API, do you store both test and live credentials in your config files? Or do devs have to swap out credentials when testing something? How do you write automated tests against production credentials?

v1nce posted:

My preferred alternative is basically what everyone else does: a local-only, git-ignored file containing credentials, based on a versioned .dist copy with dummy data.

There's a reason everyone else does this: it's the right way to do things.

v1nce posted:

We currently have no procedure for database versioning. One possible method is to produce SQL files containing ALTER statements and keep a version number of the DB in the DB itself. A deployment script would then check the database version number (e.g. 6), execute any file with a higher number (e.g. 7, 8, 9), and update the stored version to the highest file number.

I'd be interested to hear opinions on the above, and any suggestions for the SQL database versioning approach would be welcome.

This is called a database migration, and there are libraries available for every major framework as well as stand-alone tools. Find one you like and start using it. If you had a proper deployment process in place (like Capistrano plus Phing), adding database migrations would be simple and automated.

I love this stuff so much I even wrote a book on it. I've already plugged it once in this thread and I don't want to spam it up, but you can shave $2 off with coupon code goon. It'll teach you how to do everything I covered in this post.

Edit: I should also mention that 12Factor, developed by Heroku, is a great resource on how to build modern web apps.


spacebard
Jan 1, 2007

musclecoder posted:

This is called a database migration, and there are libraries available for every major framework as well as stand-alone tools. Find one you like and start using it. If you had a proper deployment process in place (like Capistrano plus Phing), adding database migrations would be simple and automated.

I agree.

However, I'm always nervous about automating major updates. I like to babysit: run the migration/update script, inspect the data at the end, and then be ready to roll back or not. I do this on staging, but I still get nervous about everything regarding production deployments, so I feel as if I have to do it there too. Better unit and functional tests in development have reduced this fear over time, but it's still there.

There are also times when data is too large or sensitive to create functional/integration tests against (on limited infrastructure). For those cases it's great to have staging serve specifically as a deployment test.

If a company has the resources, it's best to throw away the concept of three fixed environments. If I need to test, develop, or stage, I spin up a box via Vagrant and Puppet, backport the database, install, go, and do my worrying there. Then I spin the site down later. Not everyone has those resources, though.

revmoo
May 25, 2006

musclecoder posted:

Ideally you would check out a test branch, install the new lib you were evaluating, run your tests against it, ensure it doesn't break anything and works as expected, commit composer.json and .lock, and then merge your branch into master and deploy. Are you really just doing everything in master and deploying straight from there? Running 'php composer.phar install' installs everything from composer.lock, not whatever the latest version of every library is.

Also, you can check out your code and immediately have it run? Does that mean you're committing credentials in your configuration files?

The entire point of having a vendor system (and every major language has one: Ruby, Python, Scala, Java, Go, Clojure, JavaScript, etc.) is so that you don't have to commit a bunch of libraries to your code base.

Yes, credentials are committed, but the git server is secure enough that it's not at the top of the list of things sending me screaming from the building, unlike so many others. We're dealing with hundreds and hundreds of sites that all have their own quirks and needs, so while I totally understand, and have enough experience with continuous deployment to know the benefits, I simply haven't had the time.

Anyway, I'm making really good progress on my goal, which is to have composer + query builder + Eloquent integrated into our monolithic framework. I actually have everything working at feature parity with Laravel, and I'm now trying to figure out how to integrate it into our framework installer and develop a workflow for composer. Composer is awesome, but honestly it's really limited for the kind of scenario I'm trying to set up. For example, I burned almost half a day just trying to figure out how to programmatically call composer and its methods in a way that integrates well with our framework and installer. My question about committing vendor/ is just part of a larger picture of trying to develop a workflow that makes sense for our very unusual needs.

v1nce
Sep 19, 2004

I think it's fair to say I don't understand exactly what you're trying to accomplish. It sounds like you're trying to make your framework handle all aspects of setting up your project, including Git, composer, blah blah blah?

Due to the myriad of projects we work on, I've been investigating a modified version of Varying Vagrant Vagrants, in the hope that future projects could just clone down the project repo and run "vagrant up <project>", which would provision a close-to-production virtual machine within minutes and require no further intervention on the developer's part. This system would need the capability to bring up several virtual machines, several operating systems, different web stacks, and so on.

I have this working for a multi-repo, multi-db project right now, but I still need the OK from higher-ups to do a public release. It's probably only really useful to agencies, but the hope is it'll save hours of setup time between devs.

DarkLotus
Sep 30, 2001

I have a question for you PHP guys who do this for a living, either on your own or for a company.

I'm getting the ball rolling on a personal project and have a few questions.

When you're hired by a customer to develop a full-featured app from the ground up, do you use an MVC framework every time? Or does it depend on the project? If it depends on the project, what helps you decide whether you need a framework or not?

If you build something without an MVC framework, do you use pre-packaged libraries and classes to handle authentication, user management, PDO, email, and other functionality, or do you roll your own?

I've never actually built anything from nothing with PHP; I've always modified or added features to existing packages or projects. I'm having a hard time deciding where and how to get started, and was looking for advice.

Impotence
Nov 8, 2010
It depends on what the project is. For some things I don't use any OO or MVC whatsoever.
For some things I use Laravel; some things are client-requested.

I really like microframeworks, though, and hate things like Zend and Laravel and CI.

DarkLotus
Sep 30, 2001


Biowarfare posted:

It depends on what the project is. For some things I don't use any OO or MVC whatsoever.
For some things I use Laravel; some things are client-requested.

I really like microframeworks, though, and hate things like Zend and Laravel and CI.

This project will have a customer interface and a backend admin area. It will require user authentication, 2FA, incoming email parsing, outgoing email for account-related notices, and multiple pages for displaying and managing information stored in the database.

In the past, anything I've built for fun has always been procedural; the only time I used a class was for a particular task like sending email. Now using PDO is drat near a requirement, both to avoid SQL injection and because it's the smart thing to do; you don't send emails using the mail function directly anymore; and user authentication and overall site security need to be handled carefully to prevent CSRF and XSS attacks.

If you use microframeworks, why, and which ones? If a client isn't specifically requesting things be done a certain way, as long as it works in the end and is fast and easy to manage, how would you determine your development method?

Pseudo-God
Mar 13, 2006

I would definitely use a framework in this case. I have been using Laravel for the past few months and am quite happy with it. It will do many things that you would otherwise do by hand, which can save you months of time.

v1nce
Sep 19, 2004

Exactly that; I rarely build anything from the ground up unless I have absolutely no choice, or I need to bring some ancient project up to modern standards. I'd much rather work with a miniature framework than do everything "manually".

Just as you might choose something like Wordpress for a basic CMS site rather than spend months rolling your own (because why be Denny Different?), if you're going to make a much more complex site it's easier to start from a well-tested, proven framework with a lot of shortcuts and helpers.

Further to that, Wordpress has numerous plugins which - assuming they don't suck - can give your basic CMS site extra functionality within minutes. Frameworks have exactly the same capability, and can have extra modules and plugins tacked on to provide all sorts of features you don't want to spend forever writing yourself. ACL? There are modules for that. MongoDB? There's a module for that. LessCSS compiling, CSS and JS minification, image processing? There are modules for those. And all of these should have unit tests and documentation, so you can prove (or disprove) their capacity to perform.

And it's not just about making your life easier; as a developer you have a duty to ensure your code is secure and bug-free, so in an ideal world you'd have 100% unit-test coverage, because if something important breaks and the client decides to sue, you want to prove you made every effort and took reasonable steps to do things right. Unless you have unlimited time and money, that's simply not going to happen when you're developing a system from the ground up. Be smart and use a unit-tested, industry-recognised framework, and all the basic stuff like DB interaction and system security should already be watertight, leaving you to think about the app and not the basics.

Now, each framework is a specific tool for a specific job, even if they all purport to be a veritable Swiss Army knife. For instance, Zend Framework 2 can do drat near everything quite well, but it has an extremely steep learning curve, takes a while to get even a simple site up and running, and is a real bitch to debug. CakePHP, on the other hand, is relatively easy to get going, but a lot of people don't like its structure (testing capability, active record, etc.).

If I were to suggest a framework right now, I'd probably side with Pseudo-God and point you to Laravel. I'm not a huge fan of everything it does, but as a lightweight, fast, easy-to-learn framework it's probably at the top of my list for someone new.

Also, if you have a huge list of requirements you know you need to fulfil and you haven't decided on a technology stack just yet, now is a good time to grab a framework off the shelf and search for packages related to what you need. Laravel might have modules for handling IMAP interaction, two-factor auth, etc.

I also like to get a "vertical slice" (albeit a prototype) up and running as soon as possible, because then I know whether all the tech will play nicely together, or whether I have a roadblock in the basic packages I want to use. For instance, you may want to deliver the ability for a user to log in (basic Laravel page and DB interaction, 2FA test), see a list of incoming emails (reading from IMAP), and send a notification (outbound email). If you get this going, you know everything is going to be OK; but if you run into problems, like framework restrictions or the IMAP module not supporting the Microsoft Exchange server or whatever, you'll discover them early on and not when you're 90% complete.

I had exactly this experience recently using MongoDB with ZF2; support wasn't quite up to scratch for a mission-critical system to rely on, so in the starting phase of the project we fell back to another framework that wasn't suffering from the problems we were experiencing, rather than hacking away at MongoDB support until it maybe worked. A bit of a bummer, but we only lost two days rather than several weeks.


DarkLotus
Sep 30, 2001


Thanks a bunch for the detailed post. I'm going to try Laravel and see what I can do with it. I at least have a better idea of what to focus on in the beginning, instead of worrying about everything at once.
Worst-case scenario, I can build functionality later where a package doesn't exist. That may be the case for some of the very specific needs I'll have down the road.

PleasantDilemma
Dec 5, 2006

Has anyone written an Excel file using PHP, and do you have a recommendation on a library? PHPExcel is the best I've found, but I thought I would ask here too.
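
For reference, the basic PHPExcel flow looks roughly like this (a sketch; see the library's docs for the full API):

php:
<?php
require_once 'PHPExcel.php';

// Build a small spreadsheet and write it out as .xlsx.
$excel = new PHPExcel();
$sheet = $excel->getActiveSheet();
$sheet->setTitle('Report');
$sheet->setCellValue('A1', 'Hello');
$sheet->setCellValue('B1', 42);

$writer = PHPExcel_IOFactory::createWriter($excel, 'Excel2007');
$writer->save('report.xlsx');
?>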

Houston Rockets
Apr 15, 2006

Does anyone have experience with phpredis in a production environment? I can't seem to get a timeout to trigger, despite setting the timeout to a very low number in testing. Maybe I need to set a higher-level socket timeout or something? I want to exercise some edge cases! I'm a Python person, so I'm not very familiar with the PHP ecosystem. We're using ZF1, if that matters.

putin is a cunt
Apr 5, 2007

I have a question about testing that you lot may be able to help me with. I'm working on a footy tipping system that will use dates quite a bit to control things. For example, I'll want it to understand the concept of the "current" football round. I'll also want it to know when a game has started and ended, that sort of thing.

This is fine logically - it's easy enough to write this stuff - but how would you go about testing it? For example, maybe I want to make sure notification emails are sent out at the correct time, or that games are locked down from tipping once they've begun. How would I test this? I could manually change the date/time of my dev environment, but that's a lot of stuffing around.

spacebard
Jan 1, 2007

Gnack posted:

This is fine logically - it's easy enough to write this stuff - but how would you go about testing it? For example, maybe I want to make sure notification emails are sent out at the correct time, or that games are locked down from tipping once they've begun. How would I test this? I could manually change the date/time of my dev environment, but that's a lot of stuffing around.

Hopefully your classes are loosely coupled enough that you can create a unit test based on PHPUnit_Framework_TestCase. Then you can assert that your isLocked method (or whatever) returns true or false for a given input, and have high confidence in it.

putin is a cunt
Apr 5, 2007

spacebard posted:

Hopefully your classes are loosely coupled enough that you can create a unit test based on PHPUnit_Framework_TestCase. Then you can assert that your isLocked method (or whatever) returns true or false for a given input, and have high confidence in it.

Sorry, I've not used that before - will it allow me to define an arbitrary point in time and have my app behave as though it is currently that time?

Blinkz0rz
May 27, 2001

Gnack posted:

Sorry, I've not used that before - will it allow me to define an arbitrary point in time and have my app behave as though it is currently that time?

I think what he's suggesting is that you should make your Match class's isLocked method accept a datetime rather than assume the current output of date(). That way you can arbitrarily change whatever time you want to use to test whether a match is locked.
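
A sketch of that idea - the class and method names here are made up, but the point is that "now" comes in as a parameter, so a PHPUnit test can pin it to any moment:

php:
<?php
class FootyMatch
{
    private $kickoff;

    public function __construct(DateTime $kickoff)
    {
        $this->kickoff = $kickoff;
    }

    // Tipping locks at kickoff; "now" is injected instead of calling date().
    public function isLocked(DateTime $now)
    {
        return $now >= $this->kickoff;
    }
}

class FootyMatchTest extends PHPUnit_Framework_TestCase
{
    public function testLocksAtKickoff()
    {
        $match = new FootyMatch(new DateTime('2014-03-01 19:40:00'));

        $this->assertFalse($match->isLocked(new DateTime('2014-03-01 19:39:59')));
        $this->assertTrue($match->isLocked(new DateTime('2014-03-01 19:40:00')));
    }
}
?>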


putin is a cunt
Apr 5, 2007

Blinkz0rz posted:

I think what he's suggesting is that you should make your Match class's isLocked method accept a datetime rather than assume the current output of date(). That way you can arbitrarily change whatever time you want to use to test whether a match is locked.

Ah okay, thank you. I did wonder if that's what he was getting at. I was hoping to avoid that, but it makes sense, so I think I'll go with that approach. Thanks, both of you.
