|
Anyone have a guide for Laravel 5 custom authentication? The current docs are flat-out incorrect in a number of areas (ie listing paths that no longer exist in L5), and missing a ton of crucial info.
|
# ? May 21, 2015 14:40 |
|
|
|
revmoo posted:Anyone have a guide for Laravel 5 custom authentication? The current docs are flat-out incorrect in a number of areas (ie listing paths that no longer exist in L5), and missing a ton of crucial info. This is one reason I've stuck with L4 for my projects. I've extended authentication and other core features that don't work in L5 and I can't find supporting docs to make it work. There are only a couple of things L5 offers that L4 doesn't that I am interested in, not enough to force me to make the switch yet.
|
# ? May 21, 2015 14:48 |
|
I'm definitely regretting using L5 at this point; it's in no way finished. Unfortunately this app is like 60% built and I'm not sure downgrading would be worth the energy. I've gone through a couple different L5 custom auth guides, and even copying their code verbatim I get random nonsensical errors. I think I'm just going to write my user auth from scratch.
|
# ? May 21, 2015 15:07 |
|
revmoo posted:I think I'm just going to write my user auth from scratch. I feel your pain. Writing stuff like that from scratch is the point of using something like Laravel, right?
|
# ? May 21, 2015 15:10 |
|
Mogomra posted:I feel your pain. Yes, it's my fault for just downloading Laravel and installing it without researching and realizing that v5 is still in a super alpha stage of development.
|
# ? May 21, 2015 15:23 |
|
Could you use a Symfony component for it? http://symfony.com/doc/current/components/security/authentication.html
|
# ? May 21, 2015 15:32 |
|
I really don't need all that much. I'm tying into another site's authentication system so I already have all the code I need, it's just a matter of forcing it into the L5 middleware in a way that works.
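For what it's worth, the middleware half of that is plain PHP: a class with a handle() method that either passes the request along or bails. A rough sketch (the class name, property, and remote-check method below are invented for illustration, not Laravel's shipped auth; the remote check stands in for your existing code):

```php
<?php

// Hypothetical middleware sketch: RemoteAuthenticate, $isLoggedIn, and
// checkRemoteSession() are invented names, not part of Laravel itself.
class RemoteAuthenticate
{
    public function handle($request, Closure $next)
    {
        if (!$this->checkRemoteSession($request)) {
            // In a real app you'd return redirect('login') or abort(401).
            return 'denied';
        }

        return $next($request);
    }

    private function checkRemoteSession($request)
    {
        // Stand-in for the external site's session check.
        return !empty($request->isLoggedIn);
    }
}
```

The framework-specific part is just registering it, which in L5 means adding an entry to the $routeMiddleware array in app/Http/Kernel.php.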
|
# ? May 21, 2015 15:39 |
|
Can you discern anything useful by looking at both the Github-based docs and the framework Unit tests?
|
# ? May 22, 2015 03:48 |
|
Does this help? https://laracasts.com/series/laravel-5-fundamentals/episodes/15
|
# ? May 22, 2015 05:16 |
|
Ended up figuring it out by trial and error. Also that remember_token requirement is really annoying since I'm using an external db for auth.
|
# ? May 22, 2015 14:08 |
|
I'm working on a Laravel 4.2 project at work and have been for several months. The whole process has been a bit of a mess -- I'm an intern, and the project started out being led (and solely developed) by an individual who had a somewhat poor understanding of software design principles, mainly separation of concerns. As an intern, I had worked on several of the other projects at the company, solving non-trivial issues *but* not usually having to architect things or be involved with deployment on any level. Well, that guy got fired, and responsibility for the project has been more-or-less handed off to another intern and me. Anyway, that's just a bit of background -- here is the meat of the problem.

We began doing "staging" deployments recently, and the largest issue we've run into (besides the fact that our deployment team is rather opaque about how they want to do deployment) has been managing packages/components in the /vendor folder. I have a somewhat functional understanding -- I know you run composer update to update/install your packages, composer install reads the composer.lock file to reproduce an exact picture of your dependencies, and composer dump-autoload regenerates the autoload files for your project if it's somehow forgotten how things are mapped (?). If that's babby's first understanding of Composer, well, it's because this is babby's first PHP project.

Anyways, more important info. Deployment (henceforth referred to as NetOps) has changed how they want the packages. First, they wanted the project zipped up in a tarball. Then, they wanted to pull down the repository themselves. First, we didn't have /vendor files in the repository. Now, we do. It took us around 4 hours to deploy on staging (it takes me and the other intern about 5 minutes to upgrade the existing "test" server)...

Besides general communication issues, most of our problems have centered around what I just mentioned: whether to include /vendor in the repository or not, and whether composer should be run on the staging server. I'm also confused about what a correct composer.json file looks like -- for example, a lot of the items that are in composer.lock, and are part of the Laravel framework, are not in the composer.json file. Is this correct? So if I wanted to do a fresh install of the project in a new environment (and didn't want to include the /vendor folder in the repository), I would need to include the composer.lock file and run composer install, *not* just use the composer.json? As a bonus question... how much should we expect the NetOps team to learn in terms of Composer? Is it reasonable to expect them to run composer commands from the command line? They seem to be very resistant to us having any kind of access to the staging server at all, which makes sense I guess. Sorry if this is a bit of a rant; I'll accept any and all help, including any suggestions or resources on how to better understand Composer and maintaining dependencies.
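On the composer.json question: yes, that's expected. composer.json lists only your *direct* requirements, while composer.lock pins every package, including Laravel's own transitive dependencies, at exact versions. A minimal L4-era composer.json looks something like this (package names and version constraints here are illustrative):

```json
{
    "name": "acme/my-app",
    "require": {
        "php": ">=5.4.0",
        "laravel/framework": "4.2.*"
    },
    "require-dev": {
        "phpunit/phpunit": "~4.0"
    }
}
```

Everything laravel/framework itself depends on (the symfony/* packages, etc.) shows up only in the lock file, which is why committing composer.lock and running composer install gives reproducible installs.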
|
# ? May 22, 2015 22:10 |
|
This is a holy war I'm willing to dive into. You should commit /vendor as a regular part of your source control. Why? Because all the stuff in there is what actually constitutes your 'app'. The way so many teams depend on a 'composer install' as a key point in their deployment process is asinine and utterly stupid. Most of them actually open themselves up to all sorts of issues regarding breaking changes, etc. Assuming your .lock file is actually set right, and assuming there are no DDoSes on GitHub during your deploy, this whole stupid process CAN actually work, but that doesn't make it a good idea. The idea of running a local repo as a workaround is just so much layering of stupid on top of stupid it's unbelievable. It's adding complexity for the sake of adding complexity. There are certain workflows and projects for which this whole thing can work, but for the average group deploying average apps, it is a huge minefield to depend on Composer for deploys. A 'git clone' should be ALL you need to spin up an app, nothing more (excluding DB stuff, but Laravel has awesome migrations so it's a moot point). We have push-button deploys to three different environments at my work, including production. An average deploy to any environment is 60 seconds. There is zero downtime. There are zero 'extra steps.' I don't care how bloated my VCS is.
|
# ? May 23, 2015 01:43 |
|
As revmoo said, yes, this is a holy war, because you absolutely should not commit your /vendor directory. First, Composer caches the vendors, so as long as you don't change a package's version between deployments, you'll be pulling from a local cache on the server itself. The /vendor directory can be quite large, so there's no reason to commit thousands of files to your repo (one of my projects has over 12,000 files alone). Composer also supports different environments (require vs require-dev), so you probably don't need to commit and deploy the stuff you need only for development. I've done thousands of deployments using Composer, GitHub, and Packagist over the last 3 or so years and I can only recall one or two times that GitHub has been down (and of course, with the aforementioned caching, it's not a big deal if it is, as my packages rarely change). Now, a good argument against installing vendors on a production system is the security component. Though most installations are over https, if it's ever somehow MITM'ed you could deploy malicious code to production. Finally, setting up Satis (or Toran Proxy to support the developer of Composer - https://toranproxy.com/) is not really that difficult. You could easily set up Jenkins or Bamboo or whatever to build your final application as a tarball, RPM, or .deb, save it in an artifact repository, and deploy a single, prebuilt application to your servers. For development, all of my repositories have a build-dev script in them that cleans out your development environment, installs everything you need, runs your migrations, builds your CSS and JavaScript, and you're good to go. A sample looks like this:
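A build-dev script along those lines might look like the following sketch (every command and path here is an assumption based on the description, not the actual script):

```sh
#!/bin/sh
# Hypothetical build-dev sketch: clean, install, migrate, build assets.
set -e

rm -rf vendor/                  # clean out the dev environment
composer install --dev          # pull dependencies per composer.lock
php artisan migrate --force     # or app/console doctrine:migrations:migrate
npm install && gulp build       # build CSS and JavaScript
echo "dev environment ready"
```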
|
# ? May 23, 2015 02:30 |
|
Hey, I need to learn PHP for work, but I only know Python. Does anyone have any good resources for learning PHP that bridge the gap between the two languages?
|
# ? May 23, 2015 02:39 |
|
Saw this on a cursory google search, might help bridge the gap for you: http://hyperpolyglot.org/scripting

Alright, let's get a bit more in-depth. Holy war aside, there are a few different ways you can manage your vendors directory.

Option 1: Don't commit /vendors; rely on Composer to fetch your dependencies every time. This is the approach musclecoder and I use. You don't commit /vendors, and you let Composer fetch all your dependencies from the internet each time using composer install.

Option 2: Commit everything. This basically inverts Option 1. You instead commit all dependencies to the same repo as your project code.

Option 3: Keep a separate snapshot repo. You don't want to muddy your main VCS with a gazillion bytes of dependencies and their history, but you want one repo as a snapshot of your entire code and deps, OR you want to maintain a snapshot of your deps, so together they form an entire history. You can do this by making a second repo which encompasses the content of your /vendors directory, or, when you go to make a release version, you can copy everything to a release directory which is managed by a different repository. You can maintain a relationship between the two by giving the corresponding commits in each repo the same tag. Easy.

Option 4: Fork all of the things. You'll typically only see this at a large SaaS company. You just fork everything you use into your own local repositories. This maintains history, ensures you keep control over the code, lets you use Composer, and keeps your repositories clean. You'd never see this in an agency/freelance shop where hundreds of projects might pass through.

an skeleton posted:So if I wanted to do a fresh install of the project on a new environment (and didn't want to include the /vendor folder in the repository), I would need to include the composer.lock file and run install, *not* just the composer.json? Yes -- commit both composer.json and composer.lock, and run composer install on the new environment; install reads the lock file, so you get exactly the versions everyone else has. Oddly, NetOps/DevOps is a role I'm constantly trying to eradicate via heavy automation. Ideally whoever pushes the final deployment of your code wouldn't have much of a manual process to run -- they just push "go" and it takes care of itself. From the sound of it, you have a process where you push code over the fence, and it ends up on a live server via a black-box process that NetOps manage. If that's the case, you could look at making a pre-deployment process which looks a bit like this:
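Sketching that pre-deployment process as a build step (tool names, repo URL, and paths below are assumptions; in practice this would live in Jenkins or similar):

```sh
#!/bin/sh
# Hypothetical CI build step: produce a self-contained artifact so NetOps
# never has to run composer themselves.
set -e

git clone --depth 1 git@example.com:acme/my-app.git build/   # repo URL is a placeholder
cd build/
composer install --no-dev --optimize-autoloader   # vendors baked in from composer.lock
tar czf ../my-app-$(date +%Y%m%d%H%M).tar.gz .    # hand this tarball to NetOps
```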
That said, the more complicated your system and your code, the more NetOps will have to put up with. For instance, in our system we have some software dependencies at the server level, and about 5 or 6 cache-warming commands we run before new code hits production, including composer install. This is fine for us, because the upper echelons of the Dev team are also DevOps and maintain the servers. We're trying to simplify this so our entire server infrastructure rebuilds at deployment time (AWS, docker, etc!) and then our versioned server-side dependencies will get pulled down the same way our composer dependencies do, and the entire thing will be automated from top to bottom. Fake edit: Yikes, I need to talk less. Real edit: When talking about using git as a deployment tool, there's nothing stopping you from adding a git hook which, upon pull/push runs "composer install" on the server in your checkout directory, and when complete it swaps that new version of code into the live environment, giving you seamless deployment. Automated deployments are always the best deployments. v1nce fucked around with this message at 04:26 on May 23, 2015 |
# ? May 23, 2015 04:17 |
|
No, that's all really great and exactly what I needed, thank you. I'll try and represent this stuff well and report back with whatever the outcome is.
|
# ? May 23, 2015 07:09 |
|
Hello, thread. I'm looking for a couple of opinions. In the Symfony2 project I'm looking after, we have the following repository access pattern:

```php
<?php

class MyController
{
    public function myAction(Request $request)
    {
        $id = (int) $request->get('id', null);
        $someStuff = $this->getDoctrine()
            ->getRepository('MyBundle:SomeEntity')
            ->findSomeStuff($id, true);
    }
}
```

versus pushing the repository access behind a service:

```php
<?php

class MyController
{
    public function myAction(Request $request)
    {
        $id = (int) $request->get('id', null);
        $someService = $this->get('service.some_service');
        $someStuff = $someService->findSomeStuff($id, true);
    }
}

class SomeService
{
    protected $someEntityRepository;

    // Omitted: constructor

    public function findSomeStuff($id, $includeDeleted)
    {
        $query = $this->someEntityRepository->findSomeStuff($id, $includeDeleted);

        return $query->getResult();
    }
}
```

Is there any major benefit to moving the Repository access behind a service rather than accessing the repo directly in the controller? Is it a good idea to put the repository in the service?
|
# ? May 27, 2015 05:43 |
|
v1nce posted:Hello, thread. I'm looking for a couple of opinions. I guess whichever is easier to mock the Repository service for testing MyController, which is probably the second option even though that depends on mocking the service container too, right? It would probably be "better" to inject the Repository service into myAction by making MyController a service too. I asked several people about whether there would be a big performance impact, and even with a ton of routes, there wouldn't be any.
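The injection point is easy to demo without the framework: if the controller takes the repository (or service) in its constructor, a test just hands it a fake, no container mocking involved. A stripped-down sketch (no Symfony classes; all names below are invented):

```php
<?php

// Hypothetical sketch of constructor injection: SomeStuffFinder stands in
// for the repository/service, and the controller never touches a container.
interface SomeStuffFinder
{
    public function findSomeStuff($id, $includeDeleted);
}

class MyController
{
    private $finder;

    public function __construct(SomeStuffFinder $finder)
    {
        $this->finder = $finder;
    }

    public function myAction($id)
    {
        return $this->finder->findSomeStuff((int) $id, true);
    }
}

// A hand-rolled fake: no container or Doctrine mocking required.
class FakeFinder implements SomeStuffFinder
{
    public function findSomeStuff($id, $includeDeleted)
    {
        return array('id' => $id, 'deleted' => $includeDeleted);
    }
}
```

With this shape, `new MyController(new FakeFinder())` is a complete test fixture, which is the main argument for controllers-as-services.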
|
# ? May 27, 2015 11:23 |
|
I'm not super familiar with Symfony, but in the first example, does $this->getDoctrine() use the dependency injection container? I.e., if you can already mock out the doctrine dependency in a testing environment, I see no reason to add an extra layer of indirection – it just adds cognitive overhead. But if putting the repository access in a service is the only way to make the database access mock-able, then you might want to do it.
|
# ? May 27, 2015 18:19 |
|
Use a ParamConverter to convert the parameter from the request into the object you're looking for (or throw a 404): http://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/annotations/converters.html and http://symfony.com/doc/current/best_practices/controllers.html

```php
<?php

class MyController extends Controller
{
    /**
     * @ParamConverter ....
     */
    public function myAction(SomeEntity $someEntity, Request $request)
    {
        // ....
    }
}
```
|
# ? May 27, 2015 19:38 |
|
spacebard posted:I guess whichever is easier to mock the Repository service for testing MyController [...] We don't do controllers-as-services because the other devs here have grown up using shortcut methods like $this->makeMyLifeEasy() on the Controller, and services seem to confuse people. I thought about moving to that method for my own code, but for consistency I've avoided it unless I can persuade the other leads it's a good move and something we should adopt in the long term. musclecoder posted:Use a ParamConverter to convert the parameter from the request into the object you're looking for (or throw a 404): http://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/annotations/converters.html and http://symfony.com/doc/current/best_practices/controllers.html DimpledChad posted:does $this->getDoctrine() use the dependency injection container? I.e., if you can already mock out the doctrine dependency in a testing environment, I see no reason to add an extra layer of indirection -- it just adds cognitive overhead. But if putting the repository access in a service is the only way to make the database access mock-able, then you might want to do it. Unless I've missed something, I'd really rather not mock three levels of objects if I can avoid it (container, doctrine, repo). Although some argue that Controllers are better tested with functional tests, leaving the unit tests to service-level stuff. It doesn't help that some of our controllers contain code that should be in a service to begin with.
|
# ? May 28, 2015 00:34 |
|
If I need to pull some metrics (10 or so queries) for the last X days from a SQL database (each day has its own metrics), how do I do that without doing a loop that runs 10*X select queries? This is in Symfony, for reference, but I'm just using raw SQL. I don't think I can really use an IN statement with an array to group everything, because I'm selecting using BETWEEN 2 timestamps.
Tomahawk fucked around with this message at 04:05 on May 28, 2015 |
# ? May 28, 2015 03:48 |
Tomahawk posted:If I need to pull some metrics (10 or so queries) for the last X amount of days from a SQL database (each day has it's own metrics), how do I do that so I'm not doing a loop that does 10*X select queries? This is in Symfony for reference but I'm just using raw SQL. I can't really use an IN statement with an array to group everything I don't think, because I'm selecting by using BETWEEN 2 timestamps. I'd post this over in the database megathread and maybe provide some more details (column names, sample rows, what you want the output to look like)
|
|
# ? May 28, 2015 05:22 |
|
Tomahawk posted:I can't really use an IN statement with an array to group everything I don't think, because I'm selecting by using BETWEEN 2 timestamps. Like fletcher said, we'll need more details - but one thing to be aware of is that if you're doing a BETWEEN two dates and one date doesn't have any records, it obviously won't show up in the results. You can get around this by building a pre-set table of all dates and LEFT JOINing that, or if you're using Postgres, use the generate_series() to generate a series of empty data between the two dates and COALESCE() the results.
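The gap-filling trick above translates to something runnable anywhere; here's a sketch using SQLite's recursive CTE in place of Postgres's generate_series() (table and column names are invented):

```php
<?php

// Demo of the "fill in missing days" trick: LEFT JOIN a generated date
// series so days with no rows still appear (with count 0). SQLite's
// recursive CTE stands in for Postgres's generate_series().
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec("CREATE TABLE metrics (ts TEXT)");
$pdo->exec("INSERT INTO metrics VALUES ('2015-05-27 10:00:00'), ('2015-05-29 11:00:00')");

$rows = $pdo->query("
    WITH RECURSIVE days(day) AS (
        SELECT '2015-05-27'
        UNION ALL
        SELECT DATE(day, '+1 day') FROM days WHERE day < '2015-05-29'
    )
    SELECT days.day, COUNT(metrics.ts) AS cnt
    FROM days
    LEFT JOIN metrics ON DATE(metrics.ts) = days.day
    GROUP BY days.day
    ORDER BY days.day
")->fetchAll(PDO::FETCH_ASSOC);
// the empty middle day (2015-05-28) still shows up, with cnt 0
```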
|
# ? May 28, 2015 13:30 |
|
musclecoder posted:Like fletcher said, we'll need more details - but one thing to be aware of is that if you're doing a BETWEEN two dates and one date doesn't have any records, it obviously won't show up in the results. You can get around this by building a pre-set table of all dates and LEFT JOINing that, or if you're using Postgres, use the generate_series() to generate a series of empty data between the two dates and COALESCE() the results. Cross-posted from the DB thread now, but they're just pretty straightforward count/avg select statements for each day. Right now I am using a loop that decrements the time by one day and executes a bunch of code:
Each day is an array of metrics that all eventually get put into a CSV. It feels like there's a much better way to do this.
|
# ? May 28, 2015 13:51 |
|
Could you be more specific about these statements? I bet they can be combined, but it's impossible to say how unless I actually know what they are.
|
# ? May 28, 2015 14:28 |
|
rt4 posted:Could you be more specific about these statements? I bet they can be combined, but it's impossible to say how unless I actually know what they are. Well, I guess my main concern is executing SQL statements in a loop (since I have been taught that that is very bad), rather than how many there are in that loop re: combining them. Is there a way I could run something like code:
Edit: Nevermind, someone in the DB thread pointed me in the right direction for my needs with: NihilCredo posted:"group by extract( day from dateadd(hour, -2, ts))" ? Thanks for the help all and apologies for not spotting the DB thread in the first place! Tomahawk fucked around with this message at 19:08 on May 28, 2015 |
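For anyone else landing here: the point of that suggestion is that a single GROUP BY query replaces the whole per-day loop. A self-contained sketch using SQLite (table and column names invented; on MySQL/Postgres you'd group on DATE_FORMAT/EXTRACT or similar):

```php
<?php

// Hypothetical metrics table; in-memory SQLite just to show the shape.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE metrics (ts TEXT, value INTEGER)');

$stmt = $pdo->prepare('INSERT INTO metrics (ts, value) VALUES (?, ?)');
foreach (array(
    array('2015-05-27 09:00:00', 10),
    array('2015-05-27 17:30:00', 30),
    array('2015-05-28 12:00:00', 5),
) as $row) {
    $stmt->execute($row);
}

// One query, one row per day -- instead of 10*X queries in a loop.
$result = $pdo->query(
    "SELECT DATE(ts) AS day, COUNT(*) AS cnt, AVG(value) AS avg_value
     FROM metrics
     WHERE ts BETWEEN '2015-05-27 00:00:00' AND '2015-05-28 23:59:59'
     GROUP BY DATE(ts)
     ORDER BY day"
)->fetchAll(PDO::FETCH_ASSOC);
```

Each returned row is one day's metrics, ready to drop into the CSV.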
# ? May 28, 2015 17:04 |
|
I'm parsing the output of an API that, depending on the object you're querying, may or may not have extremely recursive output, i.e. nested arrays and objects that can be up to 10 levels deep. Is there a smarter way of transforming this data than doing a poo poo ton of ugly isset() crap everywhere? I'm looking at probably 300 lines of isset() and nested loops just to check whether data exists or not, and it's getting extremely unwieldy. I feel like I'm going about this the wrong way.
|
# ? Jun 9, 2015 17:06 |
|
Phone posting, but I believe array_filter() removes keys with empty values -- could that help, or a recursive variant? Fake edit: just re-read and I don't think my reply will help, sorry!
|
# ? Jun 9, 2015 17:14 |
|
revmoo posted:I'm parsing the output of an API that, depending on the object you're querying, either may or may not have extremely recursive output, IE nested arrays and objects that can be up to 10 levels deep. Seems like one of those times a recursive function may come in handy. I needed to parse an arbitrarily nested list of elements in an HTML partial file the other day, so I wrote a recursive function to walk through the DOMDocument via xpath queries.
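A recursive walker can indeed replace most of the isset() boilerplate: one function visits every node, however deep, and the existence checks become lookups on a flat map. A generic sketch (plain arrays/objects; the function name and dotted-path convention are invented):

```php
<?php

// Hypothetical recursive walker: visits every scalar leaf in a nested
// structure of arrays/objects and records it under a dotted path.
function walkApiResponse($node, $path = '', array &$out = array())
{
    if (is_object($node)) {
        $node = get_object_vars($node);
    }

    if (is_array($node)) {
        foreach ($node as $key => $value) {
            $childPath = $path === '' ? (string) $key : $path . '.' . $key;
            walkApiResponse($value, $childPath, $out);
        }
    } else {
        $out[$path] = $node; // scalar leaf
    }

    return $out;
}
```

`walkApiResponse($decoded)` gives you a flat path => value map, so the "does this exist" checks become simple array_key_exists() calls instead of nested isset() chains.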
|
# ? Jun 9, 2015 20:18 |
|
I was going to suggest RestRemoteObject, but it's old, there's an insane amount of black magic, you need to use Zend, and I'm pretty sure it doesn't play well with deep arrays. What sort of data is being spat out by the API? Is there a reliable way to map the sub-items, or are you identifying the "type" of data returned by the keys that are available? Myself, I'd make a parser that can map the response to objects - some recursion, reflection and maybe the ability to custom parse an object or two. This might get really hairy if the API structure is garbage, though. If you provide some more info, we might be able to come up with something.
|
# ? Jun 10, 2015 02:07 |
|
Tomahawk posted:
Beware of BETWEEN. It won't match your start or end time. Better to use code:
|
# ? Jun 10, 2015 03:05 |
|
BETWEEN includes the start and end values in Postgres http://www.postgresql.org/docs/9.4/static/functions-comparison.html and MySQL https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between
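A quick way to convince yourself, since this trips people up (SQLite here, but Postgres and MySQL agree on the inclusive semantics):

```php
<?php

// BETWEEN is inclusive at both ends: n BETWEEN 2 AND 3 means n >= 2 AND n <= 3.
$pdo = new PDO('sqlite::memory:');
$hits = $pdo->query(
    "SELECT n
     FROM (SELECT 1 AS n UNION SELECT 2 UNION SELECT 3 UNION SELECT 4) AS t
     WHERE n BETWEEN 2 AND 3
     ORDER BY n"
)->fetchAll(PDO::FETCH_COLUMN);
// both endpoints, 2 and 3, come back
```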
|
# ? Jun 10, 2015 03:15 |
|
Has anyone used Zookeeper to manage locking between concurrent gearman jobs?
|
# ? Jun 13, 2015 06:23 |
|
I'm having real trouble sending data to an API and I'm not sure why it's not working. Basically, I need to send user-supplied data to an API running on another server via a GET request. It works fine when manually running the URL, but trying to do it via PHP doesn't seem to work properly. This is the code chunk I'm using to generate the request and send it: code:
The weird thing is that if I just echo the URL for the request and then copy and paste it, the request kicks off as expected. Could it be something on the API's end? I'm starting to think that, because I've tried a few methods with no success, but I'm definitely not excluding myself being an idiot. Experto Crede fucked around with this message at 16:11 on Jun 13, 2015 |
# ? Jun 13, 2015 16:08 |
|
So unless whatever HttpRequest class that is is doing something magic and weird, that code is broken. The third param to http_build_query is the separator. When you are embedding a link in an HTML document, using &amp; is correct, but this is a direct request, so you want an actual ampersand (&) there instead of the HTML entity.
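You can see the difference directly; the third argument to http_build_query is the separator string:

```php
<?php

$params = array('x' => 'y', 'a' => 'b');

// For URLs embedded in HTML, the entity separator is fine:
$forHtml = http_build_query($params, '', '&amp;');

// For a request you send yourself (curl etc.), use a literal ampersand:
$forDirect = http_build_query($params, '', '&');
```

The first produces `x=y&amp;a=b`, which a server will parse as a parameter literally named `amp;a`; the second produces the usable `x=y&a=b`.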
|
# ? Jun 15, 2015 05:02 |
|
'&amp;' -- if that isn't vbulletin mangling it, I'm reasonably sure it gets used literally, so you'll end up with unusable URLs of x=y&amp;a=b instead of x=y&a=b
|
# ? Jun 15, 2015 09:37 |
|
Turns out it was me being an idiot. First I had a bug in my code stopping curl from sending the requests out. Then, using &amp; (which I did after echoing the URL caused issues, because a &not in the query string was being rendered as ¬ by the HTML encoding) stopped the request being accepted properly by the API once I fixed the initial curl issue. Thanks for your help! Experto Crede fucked around with this message at 17:07 on Jun 15, 2015 |
# ? Jun 15, 2015 17:04 |
|
I'm about to embark on a project that involves a bunch of different websites that have the same backend behavior. Would it be a sensible approach to use a single installation of Symfony with a bundle for each site's frontend plus a bundle of shared backend code?
|
# ? Jun 18, 2015 14:33 |
|
|
|
rt4 posted:I'm about to embark on a project that involves a bunch of different websites that have the same backend behavior. Would it be a sensible approach to use a single installation of Symfony with a bundle for each site's frontend plus a bundle of shared backend code? Are they all going to be on the same server? Are they going to be installed by your client/customer in their own environment?
|
# ? Jun 22, 2015 05:42 |