revmoo
May 25, 2006

#basta
Anyone have a guide for Laravel 5 custom authentication? The current docs are flat-out incorrect in a number of areas (e.g. listing paths that no longer exist in L5) and missing a ton of crucial info.


DarkLotus
Sep 30, 2001

Lithium Hosting
Personal, Reseller & VPS Hosting
30-day no risk Free Trial &
90-days Money Back Guarantee!

revmoo posted:

Anyone have a guide for Laravel 5 custom authentication? The current docs are flat-out incorrect in a number of areas (e.g. listing paths that no longer exist in L5) and missing a ton of crucial info.

This is one reason I've stuck with L4 for my projects. I've extended authentication and other core features in ways that don't work in L5, and I can't find supporting docs to make them work.
There are only a couple of things L5 offers that L4 doesn't that I'm interested in, and not enough to force me to make the switch yet.

revmoo
May 25, 2006

#basta
I'm definitely regretting using L5 at this point; it's in no way finished. Unfortunately this app is like 60% built and I'm not sure downgrading would be worth the energy.

I've gone through a couple of different L5 custom auth guides, and even copying their code verbatim I get random, nonsensical errors.

I think I'm just going to write my user auth from scratch.

Mogomra
Nov 5, 2005

simply having a wonderful time

revmoo posted:

I think I'm just going to write my user auth from scratch.

I feel your pain. :(

Writing stuff like that from scratch is the point of using something like Laravel, right?

revmoo
May 25, 2006

#basta

Mogomra posted:

I feel your pain. :(

Writing stuff like that from scratch is the point of using something like Laravel, right?

Yes, it's my fault for just downloading Laravel and installing it without doing the research and realizing that v5 is still at a super-alpha stage of development.

spiritual bypass
Feb 19, 2008

Grimey Drawer
Could you use a Symfony component for it?

http://symfony.com/doc/current/components/security/authentication.html

revmoo
May 25, 2006

#basta
I really don't need all that much. I'm tying into another site's authentication system so I already have all the code I need, it's just a matter of forcing it into the L5 middleware in a way that works.

v1nce
Sep 19, 2004

Plant your brassicas in may and cover them in mulch.
Can you discern anything useful by looking at both the Github-based docs and the framework Unit tests?

substitute
Aug 30, 2003

you for my mum
Does this help?

https://laracasts.com/series/laravel-5-fundamentals/episodes/15

revmoo
May 25, 2006

#basta
Ended up figuring it out by trial and error. Also that remember_token requirement is really annoying since I'm using an external db for auth.

an skeleton
Apr 23, 2012

scowls @ u
I'm working on a Laravel 4.2 project at work and have been for several months. The whole process has been a bit of a mess -- I'm an intern, and the project started off being led (and solely developed) by an individual who had a somewhat poor understanding of software design principles, mainly separation of concerns. As an intern, I had worked on several of the other projects at the company, solving non-trivial issues *but* not usually having to architect things or be involved with deployment on any level. Well, that guy got fired, and responsibility for the project has been more-or-less handed off to another intern and me. Anyway, that's just a bit of background -- here is the meat of the problem.

We began doing "staging" deployments recently, and the largest issue we've run into (besides the fact that our deployment team is rather opaque about how they want to do deployment) has been managing packages/components in the "/vendor" folder.

I have a somewhat functional understanding -- I know you run composer update to update/install your packages and rewrite the lock file, composer install reads the composer.lock file so you get the exact dependency versions it records, and composer dump-autoload creates a new autoload file for your project if it's somehow forgotten how things are mapped. If that's babby's first understanding of composer, well, it's because this is babby's first PHP project.
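Concretely, my mental model of the three commands is this (just a sketch; the flags are the ones the Composer docs describe):

```shell
# Re-resolve composer.json, grab the newest allowed versions, and REWRITE
# composer.lock. Run on a dev machine; commit the updated lock file after.
composer update

# Install exactly the versions recorded in composer.lock (falls back to
# resolving composer.json only when no lock file exists). The deploy command.
composer install --no-dev --optimize-autoloader

# Regenerate vendor/autoload.php's class maps, e.g. after adding classes
# the existing autoloader doesn't know about.
composer dump-autoload
```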

Anyways, more important info. Deployment (henceforth referred to as NetOps) has kept changing how they want the packages. First they wanted the project zipped up in a tarball; then they wanted to pull down the repository themselves. Likewise, at first we didn't have the /vendor/ files in the repository; now we do.

It took us around 4 hours to deploy on staging (it takes me and the other intern about 5 minutes to upgrade the existing "test" server)... Besides general communication issues, most of our problems have centered around what I just mentioned: whether to include /vendor in the repository or not, whether composer should be run on the staging server, and, for me, confusion about what a correct composer.json file looks like -- for example, a lot of the items that are in composer.lock, and are part of the laravel framework, are not in the composer.json file. Is this correct?

So if I wanted to do a fresh install of the project on a new environment (and didn't want to include the /vendor folder in the repository), I would need to include the composer.lock file and run install, *not* just the composer.json?

As a bonus question... how much should we expect the NetOps team to learn in terms of composer? Is it reasonable to expect them to run composer commands from the command line? They seem to be very resistant to us having any kind of access to the staging server at all, which makes sense I guess.

Sorry if this is a bit of a rant, I'll accept any and all help including any suggestions or resources on how to better understand composer and maintaining dependencies.
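For reference, here's roughly what I understand our composer.json boils down to (made-up project name, versions just for illustration) -- if I've got it right, only the direct requirements live here, while composer.lock records every resolved package, including all of the framework's own dependencies:

```json
{
    "name": "acme/intern-project",
    "require": {
        "laravel/framework": "4.2.*"
    },
    "require-dev": {
        "phpunit/phpunit": "4.1.*"
    }
}
```

So seeing symfony/* and friends only in composer.lock would be expected rather than a mistake.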

revmoo
May 25, 2006

#basta
This is a holy war I'm willing to dive into. You should commit /vendor as a regular part of your source control. Why? Because all the stuff in there is what actually constitutes your 'app'. The way so many teams depend on a 'composer install' as a key point in their deployment process is asinine and utterly stupid. Most of them actually open themselves up to all sorts of issues regarding breaking changes, etc. Assuming your .lock file is actually set right, and assuming there are no DDoSes on Github during your deploy, this whole stupid process CAN actually work, but that doesn't make it a good idea. The idea of running a local repo as a workaround is just so much layering of stupid on top of stupid it's unbelievable. It's adding complexity for the sake of adding complexity.

I think there are certain workflows and projects for which this whole thing can work, but for the average group deploying average apps, it is a huge minefield to depend on composer for deploys. A 'git clone' should be ALL you need to spin up an app, nothing more. (Excluding DB stuff, but Laravel has awesome migrations so it's a moot point.)

We have push-button deploys to three different environments at my work including production. An average deploy to any environment is 60 seconds. There is zero downtime. There are zero 'extra steps.' I don't care how bloated my VCS is.

musclecoder
Oct 23, 2006

I'm all about meeting girls. I'm all about meeting guys.
As revmoo said, yes, this is a holy war, because you absolutely should not commit your /vendor directory. First, Composer caches the vendors, so as long as you don't change a package's version between deployments, you'll be pulling from a local cache on the server itself. The /vendor directory can be quite large, so there's no reason to commit thousands of files to your repo (one of my projects has over 12,000 files alone). Composer also supports different environments (require vs require-dev), so you don't need to commit and deploy the stuff you only need for development.

I've done thousands of deployments using Composer, GitHub, and Packagist over the last 3 or so years, and I can only recall one or two times that GitHub has been down (and of course, with the aforementioned caching, it's not a big deal if it is, as my packages rarely change).

Now, a good argument against installing vendors on a production system is the security component. Though most installations are over https, if it's ever somehow MITM'ed you could deploy malicious code to production.

Finally, setting up Satis (or Toran Proxy to support the developer of Composer - https://toranproxy.com/) is not really that difficult. You could easily set up Jenkins or Bamboo or whatever to build your final application as a tarball, RPM, or .deb, save it in an artifact repository, and deploy a single, prebuilt application to your servers.
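The build stage itself is only a few commands. This is a made-up sketch, not our actual CI config -- project name, version, and clone URL are all placeholders:

```shell
# Hypothetical CI job: produce one deployable artifact so production
# never has to run Composer itself.
set -e
APP=goon-project          # made-up project name
VERSION=1.4.0             # made-up release version

git clone --depth 1 https://example.com/acme/${APP}.git "build/${APP}"
cd "build/${APP}"

# Install exactly what composer.lock records, minus require-dev packages
composer install --no-dev --optimize-autoloader

cd ..
tar czf "${APP}-${VERSION}.tar.gz" "${APP}"
# ...then push the tarball to your artifact repository for NetOps to deploy
```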

For development, all of my repositories have a build-dev script in them that cleans out your development environment, installs everything you need, runs your migrations, builds your CSS and JavaScript, and you're good to go. A sample looks like this:

code:
#!/bin/bash

GREEN="\033[1;32m"
RED="\033[1;31m"
BLUE="\033[1;34m"
YELLOW="\033[1;33m"
ENDCOLOR="\033[0m"

echo -e "${BLUE}[BEGIN]${ENDCOLOR} Beginning Goon Project development build process."
echo

echo -e "${BLUE}[CHECK]${ENDCOLOR} ./app/config/build.settings exists."
if [ ! -f ./app/config/build.settings ]
then
    echo -e "${RED}[FAILURE]${ENDCOLOR} The ./app/config/build.settings file does not exist. Create it from ./app/config/build.settings.template."
    exit 1
fi
echo -e "${GREEN}[OK]${ENDCOLOR} ./app/config/build.settings exists."
echo

echo -e "${BLUE}[INSTALL]${ENDCOLOR} Installing Composer."
curl -s https://getcomposer.org/installer | php >/dev/null 2>&1

echo -e "${BLUE}[INSTALL]${ENDCOLOR} Installing PHPUnit."
wget -qO phpunit.phar https://phar.phpunit.de/phpunit-4.1.3.phar --no-check-certificate

echo -e "${BLUE}[INSTALL]${ENDCOLOR} Installing Phing."
wget -qO phing.phar http://www.phing.info/get/phing-2.6.1.phar
echo

echo -e "${BLUE}[BEGIN]${ENDCOLOR} Building the Goon Project."

if [ ! -d "log" ]; then
    mkdir log
fi

php phing.phar -Dbuild_settings_file=app/config/build.settings build

echo -e "${GREEN}[FINISHED]${ENDCOLOR} Building the Goon Project."

echo -e "${BLUE}[BEGIN]${ENDCOLOR} Compiling Goon Project Sass."
compass compile src/GoonProject/AppBundle/Resources/config/compass
echo -e "${GREEN}[FINISHED]${ENDCOLOR} Compiling Goon Project Sass."

echo
echo -e "${GREEN}# [SUCCESS] The Goon Project is ready to go!${ENDCOLOR}"
echo

exit 0
The very first time you clone the repo, you copy app/config/build.settings.template to app/config/build.settings, update the settings (or leave them alone in most of the cases), and run build-dev in your Vagrant and you're good to go.

TheOtherContraGuy
Jul 4, 2007

brave skeleton sacrifice
Hey, I need to learn PHP for work, but I only know Python. Does anyone have any good resources for bridging the gap between the two languages?

v1nce
Sep 19, 2004

Plant your brassicas in may and cover them in mulch.
Saw this on a cursory google search, might help bridge the gap for you: http://hyperpolyglot.org/scripting

Alright, let's get a bit more in-depth. Holy war aside, there are a few obvious ways you can manage your vendors directory.

Option 1: Don't commit /vendors, rely on Composer to fetch your dependencies every time
This is the approach musclecoder and I use. You don't commit /vendors, and you let Composer fetch all your dependencies from the internet each time using composer install.

Pros
  • You aren't muddying your repository with dependencies (clean history, smaller repo)
  • composer.lock tells you exactly what you checked out and when
  • Each git-managed dependency is still linked to the origin, so you have history, ability to branch, blah blah blah
  • People don't go making local changes to 3rd party libs. Branch and check that out instead.
Cons
  • You don't have your dependencies stored anywhere permanent. Every /vendor directory is essentially temporary
  • If you don't specify a version/branch/hash of a composer managed dependency, running a "composer update" could update you to a breaking/broken/insecure version of the dependency
  • If a 3rd party lib gets taken down, or the host isn't available (eg. DDoS on github), you won't be able to grab the libs you need.
  • Deployment requires you to run "composer install" on a production machine, or transfer the files in another manner.
Use this when
  • You know what you're doing with Composer
  • You're happy to put a modicum of trust in the 3rd party you're grabbing code from.
  • You have some control over your deployment process.
Maybe don't use this when
  • You're paranoid about 3rd parties
  • You need to keep a static copy of the site and all its required dependencies (historical, legal or contractual needs)
  • You don't have much control over your deployment process.
Option 2: Commit vendors to your VCS with everything else

This basically inverts everything. You instead commit all dependencies to the same repo as your project code.

Pros
  • Everything is kept together. Your repo is one entire project, essentially a direct snapshot of everything you need.
  • Deployment process is simplified. You just throw the code from your repo at the server.
  • You can still use composer, you just check in code when you update a 3rd party lib.
  • There's no way for a dependency update to just "show up", because significant human intervention is required to get it into the project.
Cons
  • Makes your VCS repo really fat. On smaller projects this isn't so bad, but on enterprise stuff this can get really out of hand, especially if you update dependencies frequently to stay abreast of features and security updates.
  • If you remove a dependency, the size it added to your project will still be in VCS unless you rewrite history (bad!).
  • You don't typically get to keep the link to the 3rd party VCS. You can mitigate this somewhat by using Composer.
  • If your repo becomes unreasonably huge, you need to either rewrite history to eradicate the vendors files, or make a new repo and drop history.
  • People can and do make local changes. These are difficult to find and track, get missed, and unless you unit test for them, bad things can easily happen.
Use this when
  • You don't trust 3rd parties so much and want to vet everything you bring into your codebase.
  • Your project is small and doesn't depend on hundreds of megabytes of dependencies.
  • You want a static copy of your site in the one repository. Good for history, or a disparate team who don't all "get" composer, or for code you'll only come back to in 1-2 years time (see also: working at an agency).
  • You need a dead simple deployment process.
Maybe don't use this when
  • You have enough dependencies to cripple a simple VCS checkout.
  • You update dependencies frequently
Option 3: Commit vendors to a sister repository, or maintain a deployment repository

You don't want to muddy your main VCS with a gazillion bytes of dependencies and their history, but you want one repo as a snapshot of your entire code and deps, OR you want to maintain a snapshot of your deps, so together they form an entire history.
You can do this by making a second repo which encompasses the content of your /vendors directory, or when you go to make a release version you can copy everything to a release directory which is managed by a different repository.
You can maintain a relationship between the two by giving the corresponding commits in each repo the same tag. Easy.

Pros
  • You have a clean VCS where you do your work, and you have a fat VCS where you keep your dependencies, or releases.
  • Gives you a full history and full copies of all the code and stored somewhere
Cons
  • You have to maintain two repos. Unless you outline this process, people are likely to trip over their own dicks when they encounter it.
  • Makes your internal process for a deployment more complicated. The external deployment is simplified.
Use this when
  • You need full history, and to store dependencies, but you have a LOT of dependencies.
Maybe don't use this when
  • Your developers are easily confused by internal procedures (juniors!)
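The tagging bookkeeping from Option 3 is mechanically trivial. Toy sketch with made-up repo names and a made-up tag, runnable anywhere git is installed:

```shell
#!/bin/sh
# Two throwaway repos standing in for the app repo and its vendor/release twin.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q app
git init -q app-vendor

TAG="v1.4.0"    # made-up release tag shared by both repos
for repo in app app-vendor; do
    git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
        commit -q --allow-empty -m "release $TAG"
    git -C "$repo" tag "$TAG"
done

git -C app tag && git -C app-vendor tag    # both print v1.4.0
```

Checking out "the release" later is then just a checkout of that tag in each repo.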

Option 4: Fork all of the things
You'll typically only see this at a large SaaS company. You just fork everything you use into your own local repositories. This maintains history, ensures you keep control over the code, lets you use composer, and keeps your repositories clean.
You'd never see this in an agency/freelance where hundreds of projects might pass through.

an skeleton posted:

So if I wanted to do a fresh install of the project on a new environment (and didn't want to include the /vendor folder in the repository), I would need to include the composer.lock file and run install, *not* just the composer.json?

As a bonus question... how much should we expect the NetOps team to learn in terms of composer? Is it reasonable to expect them to run composer commands from the command line? They seem to be very resistant to us having any kind of access to the staging server at all, which makes sense I guess.

Oddly, NetOps/DevOps is a role I'm constantly trying to eradicate via heavy automation. Ideally whoever pushes the final deployment of your code wouldn't have much of a manual process to run - they just push "go" and it takes care of itself.
From the sound of it, you have a process where you push code over the fence, and it ends up on a live server via a black-box process that NetOps manage. If that's the case, you could look at making a pre-deployment process which looks a bit like this:
  • Fire up an empty server
  • Check out code and dependencies
  • Run test suite
  • Compact all code to a tarball
  • Throw tarball at NetOps team
Where your teams are not well integrated, it's nicer to settle on an agreement of what you will blindly provide to NetOps. Think of it as code/deployment and separation of concerns. If developers had control over the deployment process then it would be fine to run "composer install", but the more you can prevent NetOps having to do, the smoother your deployments will go.

That said, the more complicated your system and your code, the more NetOps will have to put up with. For instance, in our system we have some software dependencies at the server level, and about 5 or 6 cache-warming commands we run before new code hits production, including composer install. This is fine for us, because the upper echelons of the Dev team are also DevOps and maintain the servers. We're trying to simplify this so our entire server infrastructure rebuilds at deployment time (AWS, docker, etc!) and then our versioned server-side dependencies will get pulled down the same way our composer dependencies do, and the entire thing will be automated from top to bottom.

Fake edit: Yikes, I need to talk less.

Real edit:
When talking about using git as a deployment tool, there's nothing stopping you from adding a git hook which, upon pull/push runs "composer install" on the server in your checkout directory, and when complete it swaps that new version of code into the live environment, giving you seamless deployment. Automated deployments are always the best deployments.
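A hook like that might look like this. Paths and branch name are made up; treat it as a sketch rather than something battle-tested:

```shell
#!/bin/sh
# Hypothetical post-receive hook on the server's bare repo: build the pushed
# code next to the live directory, then cut over via a symlink flip.
set -e

RELEASES=/var/www/app/releases     # made-up layout
LIVE=/var/www/app/current          # symlink the webserver points at
NEW="$RELEASES/$(date +%Y%m%d%H%M%S)"

mkdir -p "$NEW"
git --work-tree="$NEW" checkout -f master   # export the pushed code
cd "$NEW"
composer install --no-dev --optimize-autoloader

# Only repoint the symlink once the build succeeded, so a failed
# composer run never takes the site down.
ln -sfn "$NEW" "$LIVE"
```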

v1nce fucked around with this message at 04:26 on May 23, 2015

an skeleton
Apr 23, 2012

scowls @ u
No, that's all really great and exactly what I needed, thank you. I'll try and represent this stuff well and report back with whatever the outcome is.

v1nce
Sep 19, 2004

Plant your brassicas in may and cover them in mulch.
Hello, thread. I'm looking for a couple of opinions.

In the Symfony2 project I'm looking after, we have the following repository access pattern:
php:
<?php
class MyController
{
    public function myAction(Request $request)
    {
        $id = (int) $request->get('id', null);
        $someStuff = $this->getDoctrine()->getRepository('MyBundle:SomeEntity')->findSomeStuff($id, true);
    }
}
?>
There's been a suggestion to change it to something like this:
php:
<?php
class MyController
{
    public function myAction(Request $request)
    {
        $id = (int) $request->get('id', null);

        $someService = $this->get('service.some_service');
        $someStuff = $someService->findSomeStuff($id, true);
    }
}

class SomeService
{
    protected $someEntityRepository;

    // Omitted: constructor

    public function findSomeStuff($id, $includeDeleted)
    {
        $query = $this->someEntityRepository->findSomeStuff($id, $includeDeleted);
        return $query->getResult();
    }
}
?>
These two approaches obviously lead to exactly the same result.

Is there any major benefit to moving the Repository access behind a service rather than accessing the repo directly in the controller?
Is it a good idea to put the repository in the service?

spacebard
Jan 1, 2007

Football~

v1nce posted:

Is there any major benefit to moving the Repository access behind a service rather than accessing the repo directly in the controller?
Is it a good idea to put the repository in the service?

I guess it's whichever makes it easier to mock the Repository service for testing MyController, which is probably the second option, even though that depends on mocking the service container too, right?

It would probably be "better" to inject the Repository service into myAction by making MyController a service too. I asked several people whether there would be a big performance impact from doing that, and even with a ton of routes there wouldn't be any.

DimpledChad
May 14, 2002
Rigging elections since '87.
I'm not super familiar with Symfony, but in the first example, does $this->getDoctrine() use the dependency injection container? I.e., if you can already mock out the doctrine dependency in a testing environment, I see no reason to add an extra layer of indirection – it just adds cognitive overhead. But if putting the repository access in a service is the only way to make the database access mock-able, then you might want to do it.

musclecoder
Oct 23, 2006

I'm all about meeting girls. I'm all about meeting guys.
Use a ParamConverter to convert the parameter from the request into the object you're looking for (or throw a 404): http://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/annotations/converters.html and http://symfony.com/doc/current/best_practices/controllers.html

php:
<?php

class MyController extends Controller
{

    /**
     * @ParamConverter ....
     */
    public function myAction(SomeEntity $someEntity, Request $request)
    {
        // ....
    }

}

v1nce
Sep 19, 2004

Plant your brassicas in may and cover them in mulch.

spacebard posted:

I guess whichever is easier to mock the Repository service for testing MyController [...]
It would probably be "better" to inject the Repository service into myAction by making MyController a service too
My head was so full of other stuff yesterday I didn't even think about testability while we were discussing it. Thank you.
We don't do controllers-as-services because the other devs here have grown up using shortcut methods like $this->makeMyLifeEasy() on the Controller, and services seem to confuse people. I thought about moving to that method for my own code, but for consistency I've avoided it unless I can persuade the other leads it's a good move and something we should adopt in the long term.

musclecoder posted:

Use a ParamConverter to convert the parameter from the request into the object you're looking for (or throw a 404): http://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/annotations/converters.html and http://symfony.com/doc/current/best_practices/controllers.html
That's a good approach I didn't know was available, thanks for pointing it out. Unfortunately the "load a thing by param" was just a practical example, and there are lots of other scenarios which call on Repositories in the controller, because reasons.

DimpledChad posted:

does $this->getDoctrine() use the dependency injection container? I.e., if you can already mock out the doctrine dependency in a testing environment, I see no reason to add an extra layer of indirection – it just adds cognitive overhead. But if putting the repository access in a service is the only way to make the database access mock-able, then you might want to do it.
Yep, it's a symfony-provided shortcut method in the Controller class for just $this->container->get('doctrine').

Unless I've missed something, I'd really rather not mock three levels of objects if I can avoid it (container, doctrine, repo). Although some argue that Controllers are better tested with functional tests, and leave the unit tests to service-level stuff.
It doesn't help that some of our controllers contain code that should be in a service to begin with.

Tomahawk
Aug 13, 2003

HE KNOWS
If I need to pull some metrics (10 or so queries) for the last X number of days from a SQL database (each day has its own metrics), how do I do that so I'm not doing a loop that runs 10*X select queries? This is in Symfony, for reference, but I'm just using raw SQL. I don't think I can really use an IN statement with an array to group everything, because I'm selecting using BETWEEN two timestamps.

Tomahawk fucked around with this message at 04:05 on May 28, 2015

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Tomahawk posted:

If I need to pull some metrics (10 or so queries) for the last X number of days from a SQL database (each day has its own metrics), how do I do that so I'm not doing a loop that runs 10*X select queries? This is in Symfony, for reference, but I'm just using raw SQL. I don't think I can really use an IN statement with an array to group everything, because I'm selecting using BETWEEN two timestamps.

I'd post this over in the database megathread and maybe provide some more details (column names, sample rows, what you want the output to look like)

musclecoder
Oct 23, 2006

I'm all about meeting girls. I'm all about meeting guys.

Tomahawk posted:

I can't really use an IN statement with an array to group everything I don't think, because I'm selecting by using BETWEEN 2 timestamps.

Like fletcher said, we'll need more details - but one thing to be aware of is that if you're doing a BETWEEN two dates and one date doesn't have any records, it obviously won't show up in the results. You can get around this by building a pre-set table of all dates and LEFT JOINing that, or if you're using Postgres, use the generate_series() to generate a series of empty data between the two dates and COALESCE() the results.
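The rough Postgres shape, with made-up table/column names (an events table with a ts timestamp column):

```sql
-- One row per day for the last 30 days, with 0 filled in for quiet days.
SELECT d::date          AS day,
       COUNT(e.thing)   AS things   -- COUNT over the LEFT JOIN already
                                    -- yields 0; SUM/AVG would need
                                    -- COALESCE(..., 0)
FROM generate_series(now() - interval '29 days', now(), interval '1 day') AS d
LEFT JOIN events e
       ON e.ts >= d::date
      AND e.ts <  d::date + interval '1 day'
GROUP BY d::date
ORDER BY day;
```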

Tomahawk
Aug 13, 2003

HE KNOWS

musclecoder posted:

Like fletcher said, we'll need more details - but one thing to be aware of is that if you're doing a BETWEEN two dates and one date doesn't have any records, it obviously won't show up in the results. You can get around this by building a pre-set table of all dates and LEFT JOINing that, or if you're using Postgres, use the generate_series() to generate a series of empty data between the two dates and COALESCE() the results.

Cross-posted to the DB thread now, but they're just pretty straightforward count/avg select statements for each day.

Right now I am using a loop that decrements the time by one day, and executes a bunch of

code:

COUNT(something) FROM thing WHERE timestamp BETWEEN firstTime and secondTime

Each day is an array of metrics that all eventually gets put into a CSV. It feels like there's a much better way to do this.

spiritual bypass
Feb 19, 2008

Grimey Drawer
Could you be more specific about these statements? I bet they can be combined, but it's impossible to say how unless I actually know what they are.

Tomahawk
Aug 13, 2003

HE KNOWS

rt4 posted:

Could you be more specific about these statements? I bet they can be combined, but it's impossible to say how unless I actually know what they are.

Well, I guess my main concern is executing SQL statements in a loop (since I have been taught that that is very bad) rather than how many there are in that loop re: combining them. Is there a way I could run something like

code:
SELECT COUNT(thing) FROM table WHERE timestamp BETWEEN startTime and endTime
for 30 different timestamps instead of modifying the timestamp in the loop and then executing that statement 30 times?


Edit: Nevermind, someone in the DB thread pointed me in the right direction for my needs with:

NihilCredo posted:

"group by extract( day from dateadd(hour, -2, ts))" ?

Thanks for the help all and apologies for not spotting the DB thread in the first place! :)
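For posterity, the single-query shape I ended up aiming for looks something like this (same made-up names as my earlier snippet; the exact date function depends on your DB -- DATE() in MySQL, date_trunc()/::date in Postgres):

```sql
-- One grouped query instead of 30 per-day COUNTs.
SELECT DATE(timestamp)  AS day,
       COUNT(something) AS things
FROM thing
WHERE timestamp >= :windowStart    -- e.g. 30 days ago
  AND timestamp <  :windowEnd
GROUP BY DATE(timestamp)
ORDER BY day;
```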

Tomahawk fucked around with this message at 19:08 on May 28, 2015

revmoo
May 25, 2006

#basta
I'm parsing the output of an API that, depending on the object you're querying, may or may not have extremely recursive output, i.e. nested arrays and objects that can be up to 10 levels deep.

Is there a smarter way of transforming this data than doing a poo poo ton of ugly isset() crap everywhere? I'm looking at probably 300 lines of isset() and nested loops just to check whether data exists or not, and it's getting extremely unwieldy. I feel like I'm going about this the wrong way.

Heskie
Aug 10, 2002
Phone posting but I believe array_filter() removes keys with empty values, could that help, or a recursive variant?

Fake edit: just re read and I don't think my reply will help, sorry!

spacebard
Jan 1, 2007

Football~

revmoo posted:

I'm parsing the output of an API that, depending on the object you're querying, may or may not have extremely recursive output, i.e. nested arrays and objects that can be up to 10 levels deep.

Is there a smarter way of transforming this data than doing a poo poo ton of ugly isset() crap everywhere? I'm looking at probably 300 lines of isset() and nested loops just to check whether data exists or not, and it's getting extremely unwieldy. I feel like I'm going about this the wrong way.

Seems like one of those times a recursive function may come in handy. I needed to parse arbitrarily nested list elements in an HTML partial the other day, so I wrote a recursive function to walk through the DOMDocument via XPath queries.

v1nce
Sep 19, 2004

Plant your brassicas in may and cover them in mulch.
I was going to suggest RestRemoteObject, but it's old, there's an insane amount of black magic, you need to use Zend, and I'm pretty sure it doesn't play well with deep arrays.

What sort of data is being spat out by the API? Is there a reliable way to map the sub-items, or are you identifying the "type" of data returned by the keys that are available?

Myself, I'd make a parser that can map the response to objects - some recursion, reflection and maybe the ability to custom parse an object or two. This might get really hairy if the API structure is garbage, though. If you provide some more info, we might be able to come up with something.
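A hedged sketch of the "map the response to objects" approach: recursively convert nested associative arrays into stdClass objects so downstream code gets property access instead of key-checking. A real version might use reflection to hydrate typed DTOs, as v1nce suggests; this is just the simplest possible shape, and `toObject()` is a made-up name:

```php
<?php
// Hypothetical mapper: associative arrays become stdClass objects,
// numeric lists stay arrays, scalars pass through untouched.
function toObject($value)
{
    if (!is_array($value)) {
        return $value;
    }
    // Treat sequential integer keys (and the empty array) as a list.
    $isList = $value === [] || array_keys($value) === range(0, count($value) - 1);
    $mapped = array_map('toObject', $value);
    return $isList ? $mapped : (object) $mapped;
}

$api = ['id' => 1, 'tags' => ['a', 'b'], 'meta' => ['author' => ['name' => 'x']]];
$obj = toObject($api);
echo $obj->meta->author->name, PHP_EOL; // x
```

If the API's structure is stable enough, the next step would be swapping stdClass for per-type classes so bad payloads fail loudly at mapping time rather than deep inside business logic.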

Hogscraper
Nov 6, 2004

Audio master

Tomahawk posted:


Edit: Nevermind, someone in the DB thread pointed me in the right direction for my needs with:


Beware of BETWEEN. It won't match your start or end time. Better to use

code:
SELECT
	COUNT(thing)
FROM
	table
WHERE
	timestamp >= startTime
AND
	timestamp <= endTime
#GROUP BY
#	Group by stuff
Your date_add code looks weird. What did they tell you in the db thread?

musclecoder
Oct 23, 2006

I'm all about meeting girls. I'm all about meeting guys.
BETWEEN includes the start and end values in Postgres http://www.postgresql.org/docs/9.4/static/functions-comparison.html and MySQL https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between
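A quick sanity check of that point — BETWEEN is inclusive on both ends. Demonstrated here with SQLite via PDO for a self-contained run; MySQL and Postgres behave the same way for this operator:

```php
<?php
// BETWEEN matches both boundary values: 1 and 5 are inside [1, 5], 6 is not.
$pdo = new PDO('sqlite::memory:');

$low  = (int) $pdo->query('SELECT 1 BETWEEN 1 AND 5')->fetchColumn();
$high = (int) $pdo->query('SELECT 5 BETWEEN 1 AND 5')->fetchColumn();
$out  = (int) $pdo->query('SELECT 6 BETWEEN 1 AND 5')->fetchColumn();

var_dump($low, $high, $out); // int(1) int(1) int(0)
```

So `ts BETWEEN startTime AND endTime` and the explicit `>= ... <=` pair from the post above are equivalent.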

First Time Caller
Nov 1, 2004

Has anyone used Zookeeper to manage locking between concurrent gearman jobs?

Experto Crede
Aug 19, 2008

Keep on Truckin'
I'm having real trouble sending data to an API and I'm not sure why it's not working.

Basically, I need to send user supplied data to an API running on another server via a GET request. It works fine when manually running the URL, but trying to do it via PHP doesn't seem to work properly.

This is the code chunk I'm using to generate the request and send it:

code:
$request = http_build_query(array('brand'=>$brand,'id'=>$id,'note'=>$note),'','&amp;');
$r = new HttpRequest("http://xxx.xxx.xxx.xxx/api.php?$request");
$r -> send();
I've tried it using curl as well with no joy; I just installed the pecl http extension to test, but got the same result. The request seems to send okay (I get a 200 code when checking the status), but it doesn't do what's expected (add the urlencoded note to a database).

The weird thing is if I just echo the url with the request, and then copy and paste the url, it kicks off as expected.

Is it possible something on the API's end at all? I'm starting to think that because I've tried a few methods with no success, but I'm definitely not excluding myself being an idiot.

Experto Crede fucked around with this message at 16:11 on Jun 13, 2015

McGlockenshire
Dec 16, 2005

GOLLOCKS!
So unless whatever HttpRequest class that is is doing something magic and weird, that code is broken.

The third param to http_build_query is the separator. When you are embedding a link in an HTML document, using &amp; is correct, but this is a direct request, so you want an actual ampersand there instead of the HTML entity.
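The difference is easy to see side by side. The parameter values here are made up; what matters is the third argument to `http_build_query()`:

```php
<?php
// The third argument is the pair separator. '&amp;' is only correct when the
// query string will be embedded in an HTML document; a direct HTTP request
// needs a literal '&', or the server sees mangled keys like 'amp;id'.
$params = ['brand' => 'acme', 'id' => 42, 'note' => 'hi there'];

$forHtml    = http_build_query($params, '', '&amp;'); // brand=acme&amp;id=42&amp;note=hi+there
$forRequest = http_build_query($params, '', '&');     // brand=acme&id=42&note=hi+there

echo $forRequest, PHP_EOL;
```

Pasting the echoed URL into a browser "works" either way because the browser happens to be an HTML context, which is exactly why the bug hides when tested by hand.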

Impotence
Nov 8, 2010
Lipstick Apathy
'&amp;'

if that isn't vbulletin mangling it, im reasonably sure it uses that literally so you'll end up with unusable urls of &amp;x=y&amp;a=b instead of &x=y&a=b

Experto Crede
Aug 19, 2008

Keep on Truckin'
Turns out it was me being an idiot. First I had a bug in my code stopping curl sending the requests out, then using the &‌amp; (Which I did after echoing the URL caused issues because &‌‌not was being seen as ¬ in HTML encoding) stopped the request being accepted properly by the API after I fixed the initial curl issue.

Thanks for your help!

Experto Crede fucked around with this message at 17:07 on Jun 15, 2015

spiritual bypass
Feb 19, 2008

Grimey Drawer
I'm about to embark on a project that involves a bunch of different websites that have the same backend behavior. Would it be a sensible approach to use a single installation of Symfony with a bundle for each site's frontend plus a bundle of shared backend code?


0zzyRocks
Jul 10, 2001

Lord of the broken bong

rt4 posted:

I'm about to embark on a project that involves a bunch of different websites that have the same backend behavior. Would it be a sensible approach to use a single installation of Symfony with a bundle for each site's frontend plus a bundle of shared backend code?

Are they all going to be on the same server? Are they going to be installed by your client/customer in their own environment?
