|
Pollyanna posted:One of the senior engineers on the team says that some of our service code shouldn’t actually set any default values for a given model (which needs it to build the form), and that the default values for the new model should instead be determined by whatever the form partial defines as the defaults. I really dislike the idea of our views being the ones in charge of default values, but I don’t know if I’m being too dogmatic about it. Am I overly concerned about this? No, you are correct. That's loving weird. Defaults and constraints should be in postgres, and maybe also in the models. In the views is wrong.
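For what it's worth, a minimal sketch of keeping the defaults out of the view, using a hypothetical `widgets` table (the table, column, and migration version are all illustrative): the database owns the default, and the model mirrors it so a fresh record already carries the value when the form is built.

```ruby
# The default lives in Postgres...
class AddDefaultStatusToWidgets < ActiveRecord::Migration[5.2]
  def change
    change_column_default :widgets, :status, from: nil, to: "draft"
  end
end

# ...and is mirrored in the model with the attribute API (Rails 5+),
# so Widget.new.status is already "draft" before the form partial renders.
class Widget < ApplicationRecord
  attribute :status, :string, default: "draft"
end
```

With that in place the form partial just renders whatever the new model instance holds, and never needs its own hardcoded defaults.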
|
# ? May 30, 2019 21:17 |
|
Pollyanna posted:One of the senior engineers on the team says that some of our service code shouldn’t actually set any default values for a given model (which needs it to build the form), and that the default values for the new model should instead be determined by whatever the form partial defines as the defaults. I really dislike the idea of our views being the ones in charge of default values, but I don’t know if I’m being too dogmatic about it. Am I overly concerned about this? While I agree that views should be as dumb as possible, you can make a compromise where, once the form is submitted, the back end sets default values for anything that is missing. I don't think this is a worthy fight to have, but mostly, meh: it should be done the other way.
|
# ? May 30, 2019 22:42 |
|
Pollyanna posted:One of the senior engineers on the team says that some of our service code shouldn’t actually set any default values for a given model (which needs it to build the form), and that the default values for the new model should instead be determined by whatever the form partial defines as the defaults. I really dislike the idea of our views being the ones in charge of default values, but I don’t know if I’m being too dogmatic about it. Am I overly concerned about this? Have they explained why?
|
# ? May 30, 2019 22:44 |
|
Pollyanna posted:One of the senior engineers on the team says that some of our service code shouldn’t actually set any default values for a given model (which needs it to build the form), and that the default values for the new model should instead be determined by whatever the form partial defines as the defaults. I really dislike the idea of our views being the ones in charge of default values, but I don’t know if I’m being too dogmatic about it. Am I overly concerned about this? That's definitely weird and I'd want to know why? If the form's object is being built deeper than in the controller action I guess there could be weird side-effects. But even then I'd reach for custom constructor methods before putting that logic in the view.
|
# ? May 30, 2019 23:43 |
|
I’m guessing I just misunderstood him, or maybe we were actually talking about putting the default values (if not present) in a FormBuilder, though I didn’t think FormBuilders could do that. His main point is that form generation should account for the defaults such that you don’t have to hardcode any default values, but I don’t know how you avoid hardcoding defaults somewhere. This is why I hate Rails forms
|
# ? May 31, 2019 14:54 |
|
Aquarium of Lies posted:Rails 6 has parallel testing built in, which we’re looking forward to. Other devs have tried parallelizing in the past without much luck, so I’m curious to see if it’ll work better with 6. The Rails 6 parallel testing is for Test::Unit, not RSpec. There's an open issue about supporting it in rspec-rails, but "We'd love to support parallel tests with RSpec itself, its a rather large task however." Still, the multi-database connection stuff is nice.
|
# ? Jun 15, 2019 08:13 |
|
Been bumbling my way through learning Rails, wondering if I'm setting up my tests even semi-correctly. I have a Project model that belongs to a user, and I'm testing that the controller's show route only allows the authenticated user to see their own projects. Is there anything glaringly wrong with this test? code:
|
# ? Jul 11, 2019 05:26 |
|
ddiddles posted:Been bumbling my way through learning Rails, wondering if I'm setting up my tests even semi correctly. Nothing glaringly wrong; in fact, it's a very good test for a first rails app. It often helps to keep each test to a single `expect` statement, so that each test description matches a single simple expectation, keeping the individual tests smaller and easier to reason about. It's not obvious in the code what `authorize_header` does. It seems to set some kind of global, or similar? Effectively, it looks like a side-effect, and while Rails' own `render` within a controller is itself a nasty side effect, it's best to avoid them where possible. There's no real need for the last 2 expectations: you've tested the expected behaviour, and in production, I don't think users can re-authorize and change identities within the lifespan of a request. rubocop is great at giving feedback and explaining why certain "best practises" exist, too. code:
code:
|
# ? Jul 11, 2019 10:46 |
|
Trammel posted:It often helps to keep each test to a single `expect` statement, so that each test description matches a single simple expectation, keeping the individual tests smaller and easier to reason about. This seems like a good idea until your app and test suite grow: the setup and teardown for a bunch of tiny tests can really add up quickly. The approach you have now is completely reasonable for an integration test (which these are). For unit tests, where you aren't creating records or calling services, one expect per test is more reasonable.
|
# ? Jul 11, 2019 13:47 |
|
Aren’t controller specs getting deprecated?
|
# ? Jul 11, 2019 14:13 |
|
Integration tests, which go through the request flow, aren't. Testing the controllers directly is indeed deprecated; test through the routes instead. That's specific to Test::Unit, the default test harness in Rails. Dunno if RSpec is changing anything.
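In RSpec terms the distinction looks roughly like this (route and status are illustrative): a request spec issues a real HTTP request through the routing engine and middleware stack, where the old controller specs invoked the action directly.

```ruby
# spec/requests/projects_spec.rb -- a request spec, the style that
# replaces direct controller testing.
RSpec.describe "Projects", type: :request do
  it "rejects unauthenticated access" do
    get "/projects/1" # goes through routing, middleware, and the controller
    expect(response).to have_http_status(:unauthorized)
  end
end
```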
|
# ? Jul 11, 2019 14:47 |
|
necrotic posted:Integration tests aren't, which go through the request flow. Testing the controllers directly is indeed deprecated, test the routes. rspec broke controller tests a long, long time ago. They're still possible, but pretty hard with the newest versions. Mostly integration tests with a few model unit tests thrown in for good measure is the way to go.
|
# ? Jul 11, 2019 14:54 |
|
That's describing a controller test. Route tests are integration tests: they go through the routing engine.
|
# ? Jul 11, 2019 15:13 |
|
Trammel posted:... This is awesome, thanks for all that info. Going to read up on request specs. Coming from the front end world, this test environment is already 1000x better to work in.
|
# ? Jul 11, 2019 16:19 |
|
ddiddles posted:This is awesome, thanks for all that info. I've been really impressed with Cypress for front-end integration testing.
|
# ? Jul 11, 2019 16:24 |
|
necrotic posted:This seems like a good idea until your app and test suite grow: the setup and teardown for a bunch of tiny tests can really add up quickly. The approach you have now is completely reasonable for an integration test (which these are). For unit tests, where you aren't creating records or calling services, one expect per test is more reasonable. I'm definitely biased towards lots of unit tests, with instance doubles, with a few integration tests to prove out basic success and failure paths. But the integration vs unit tests debate has been endlessly argued and I don't think the correct way is proven one way or the other. I do think it's probably easier to refactor for speed later by combining tests than to refactor for clarity. Then again, how many integration tests do you want in a scenario like this? There's likely to be more than one route with authentication and access limitations, and it's going to get tedious, repetitive and slow testing these limitations on every route, even using shared examples. Can we test the authentication and access control in a single place, without integration tests?
|
# ? Jul 11, 2019 19:04 |
|
I need to generate PDFs from a template PDF supplied by a client. I’m currently using an ancient version of prawn to handle this, as support for template PDFs was extracted from prawn back in 2014. I'm aware that there's the prawn-templates gem, but this feels like a temporary solution, as there was a reason template support was dropped. I’ve looked into switching over to wicked, but it looks like I’ll need to convert the supplied templates into HTML? This isn’t really ideal, as the client updates the template semi-regularly and it would be a huge pain to re-create the HTML each time. Are there any other alternatives out there?
|
# ? Jul 17, 2019 11:59 |
|
Tea Bone posted:I need to generate PDFs from a template PDF supplied by a client. I’m currently using an ancient version of prawn to handle this as support for template PDFs was extracted from prawn back in 2014. Can you explain more about the problem? I think you could do that by using PDF forms (the best approach) or by overlaying a different PDF on top of it (a hacky workaround). Parsing and generating a PDF according to a template in a way that preserves its appearance is extremely hard, and would be misusing the format, IMO.
|
# ? Jul 17, 2019 12:15 |
|
xtal posted:Can you explain more about the problem? I think you could do that by using PDF forms (the best approach) or by overlaying a different PDF on top of it (a hacky workaround.) Parsing and generating a PDF according to a template in a way that preserves its appearance is extremely hard, and would be misusing the format, IMO. Yeah sure, basically it's a web portal which generates a variety of pdfs for the end user to download based off of values they submit in a form. The client who owns the portal sends us a batch of template pdfs intermittently (every 3 months or so), usually with just minor copy changes, but occasionally entirely new layouts/requirements to re-order or add new fields to the generated pdf. Ideally I want to just drop the templates into a library and point the script that generates the output pdf at the template location. At the moment I generate a new pdf with prawn and point prawn's template parameter at our template pdfs, then I draw text boxes at the required x,y coordinates to fill in the blanks. I'm not too sure about the internal workings of prawn, but I suspect it's doing what you suggested in your second point (it just overlays my text boxes as a new PDF over the template PDF). I agree that PDF forms are probably the way to go, but that means either us having to add the form fields to the template or getting the client to supply them with the fields in place already. I'm starting to think perhaps generating the output PDF from another PDF is the wrong way of going about this. I'd perhaps be better off generating everything as an image and then converting it to PDF? The output pdf doesn't need to be editable (or even selectable), but the templates are usually multiple pages long with only one or two of those pages requiring anything overlaid.
|
# ? Jul 17, 2019 13:26 |
|
Tea Bone posted:I'm starting to think perhaps generating the output PDF from another PDF is the wrong way of going about this. I'd perhaps be better off generating everything as an image then converting it to PDF? The output pdf doesn't need to be editable (or even selectable). But the templates are usually multiple pages long with only one or two of those pages requiring anything overlayed.
|
# ? Jul 18, 2019 17:15 |
|
Another question about rails specs. Switched over to using request specs rather than controller specs, but I'm repeating myself a lot checking that routes are protected by auth. code:
|
# ? Jul 20, 2019 01:40 |
|
ddiddles posted:Another question about rails specs. Switched over to using request specs rather than controller, but I'm repeating myself a lot checking that routes are protected by auth. You could also do: https://relishapp.com/rspec/rspec-core/docs/example-groups/shared-examples
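A sketch of what that shared-example pattern can look like for the auth checks (all names are made up for illustration):

```ruby
# spec/support/shared_examples/authenticated_route.rb
RSpec.shared_examples "an authenticated route" do
  it "rejects requests without credentials" do
    make_request # each including spec supplies this via `let`
    expect(response).to have_http_status(:unauthorized)
  end
end

# In any request spec:
RSpec.describe "Projects", type: :request do
  it_behaves_like "an authenticated route" do
    let(:make_request) { get "/projects/1" }
  end
end
```

Each protected route then gets the auth check in one line instead of a copy-pasted test.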
|
# ? Jul 20, 2019 02:40 |
|
Exactly what I was looking for, thanks!
|
# ? Jul 20, 2019 04:20 |
|
I'm using FactoryBot in my test cases. How do I create a related record that can be referenced multiple times in a single factory? For instance, instead of using association :owner in a :car factory, I'd create the :owner record first and then explicitly define its properties in the :car factory. Example of what I thought might work: code:
So, I'm either going about building my factories wrong, OR I have my environments somehow misconfigured?
|
# ? Jul 23, 2019 14:50 |
|
GeorgieMordor posted:I'm using FactoryBot in my test cases. How do I create a related record that can be referenced multiple times in a single factory? For instance, instead of using association :owner in a :car factory, I'd create the :owner record first and then explicitly define it's properties in the :car factory. Example of what I thought might work: So, there are definitely ways of doing this, yes, but... why are you denormalizing your database like this? Use a join or an include to get that information from the DB, don't save (and then have to sync) it to the model.
|
# ? Jul 23, 2019 15:17 |
|
kayakyakr posted:So, there are definitely ways of doing this, yes, but... why are you denormalizing your database like this? Use a join or an include to get that information from the DB, don't save (and then have to sync) it to the model. Also my thoughts, though this is a legacy application I've been brought on board for so I'm at the mercy of the current functionality. At least for now. For the sake of supporting these convoluted models in tests / factories though, you indicated some ways to do this -- what are they?
|
# ? Jul 23, 2019 15:27 |
|
GeorgieMordor posted:Also my thoughts, though this is a legacy application I've been brought on board for so I'm at the mercy of the current functionality. At least for now. You'd use after hooks to do this. From way back in the factorygirl days: https://thoughtbot.com/blog/aint-no-calla-back-girl
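Sketched with hypothetical `:car`/`:owner` factories, the callback approach from that article looks something like this, assuming the legacy schema really does denormalize the owner's name onto the car:

```ruby
FactoryBot.define do
  factory :car do
    model_name { "Wagon" }

    # Build the owner once, then reference the same record from
    # several attributes instead of declaring separate associations.
    after(:build) do |car|
      owner = build(:owner, name: "Alice")
      car.owner = owner
      car.registered_owner_name = owner.name # denormalized legacy column
    end
  end
end
```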
|
# ? Jul 23, 2019 15:36 |
|
kayakyakr posted:You'd use after hooks to do this. From way back in the factorygirl days: https://thoughtbot.com/blog/aint-no-calla-back-girl FactoryGirl, wow. See now you're making me feel old. Thanks for the link -- I'll check that out.
|
# ? Jul 23, 2019 16:04 |
|
You could also probably use a let! code:
https://relishapp.com/rspec/rspec-core/docs/helper-methods/let-and-let
|
# ? Jul 23, 2019 21:06 |
|
Awesome Animals posted:You could also probably use a let! Or actually, could use factorybot's version of this, the transient: https://github.com/thoughtbot/factory_bot/blob/master/GETTING_STARTED.md#transient-attributes
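With transient attributes the same idea looks like this (again with hypothetical factories): the transient value never lands on the model, it's just an input the factory can reuse in several places.

```ruby
FactoryBot.define do
  factory :car do
    # Not a model attribute; just a reusable input to the factory.
    transient do
      owner_record { build(:owner, name: "Alice") }
    end

    # Both attributes refer to the same owner record.
    owner { owner_record }
    registered_owner_name { owner_record.name }
  end
end
```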
|
# ? Jul 23, 2019 21:35 |
|
kayakyakr posted:Or actually, could use factorybot's version of this, the transient: https://github.com/thoughtbot/factory_bot/blob/master/GETTING_STARTED.md#transient-attributes Oh, neat. I’ll try this at my next opportunity to do so
|
# ? Jul 23, 2019 23:00 |
|
I'm working through a Rails update now: pre-existing Rails 4.2.* app to 5.2.3. The application runs against Postgres and is utilizing some json and jsonb fields. While testing the upgrade we learned that some of these fields are being saved as strings instead of accessible JSON, which then breaks subsequent lookups when those fields can't be accessed as parsed hashes. Further reading found this is a known thing with Rails upgrades, but I'm not totally understanding where the disconnect is happening. Trying to debug, I see that params come through the request looking like this: code:
What's tripping me up is if I try to post the same structure in a debug tool like Paw or Postman, I get a response from Rails that it's unparsable JSON. Yet, if the original requester sends it over in that shape, the payload is saved to the database without any error responses, though it is saved incorrectly as a string.
|
# ? Jul 25, 2019 15:37 |
|
An engineer is proposing monkey patching two new methods into String, for an “encrypt”/“decrypt” functionality (real basic, nothing fancy). I was under the impression that monkey patching is bad, but why exactly is that the case?
|
# ? Jul 25, 2019 15:40 |
|
Pollyanna posted:An engineer is proposing monkey patching in two new methods to String, for an “encrypt”/“decrypt” functionality (real basic, nothing fancy). I was under the impression is monkey patching bad, but why exactly is this the case? Overriding an existing method is heinous because it can lead to incorrect assumptions and elusive bugs. Adding new methods to an existing class isn't as bad. Given that Rails is already a quilt sewn from monkeypatches, adding a couple String methods seems okay; but my first preference would still be to encapsulate those features in a dedicated class or module, especially if the implementation requires internal state.
|
# ? Jul 25, 2019 17:33 |
|
DaTroof posted:Overriding an existing method is heinous because it can lead to incorrect assumptions and elusive bugs. Adding new methods to an existing class isn't as bad. Given that Rails is already a quilt sewn from monkeypatches, adding a couple String methods seems okay; but my first preference would still be to encapsulate those features in a dedicated class or module, especially if the implementation requires internal state. Good response. GeorgieMordor posted:I'm working through a Rails update now. Pre-existing Rails 4.2.* app to 5.2.3. I think that's coming from the original requester. That's a string being encoded and wrapped as another string. So the request is being sent like: post('endpoint', { json: JSON.stringify(someJson) }) Right? So that happens with a lot of libraries (I think including jquery) when you don't provide a content type with the ajax post. They automatically encode things as strings and don't leave them as JSON. Only thing I can think of with Postman is that you're actually posting unparseable JSON for one reason or another.
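The failure mode described above can be reproduced with nothing but the `json` stdlib: if the client stringifies the payload itself and the request body is then encoded again, the server parses one level and is left holding a string where it expected a hash.

```ruby
require "json"

payload = { "temp" => 72, "tags" => ["a", "b"] }

# A client that stringifies the payload itself, then lets its HTTP
# library encode the request body again, produces this shape:
request_body = JSON.generate({ "data" => JSON.generate(payload) })

params = JSON.parse(request_body)
params["data"].class # => String, not Hash: the data is one parse too deep

# Saving params["data"] into a json/jsonb column therefore stores a
# quoted string; it needs a second parse to get the object back.
JSON.parse(params["data"]) == payload # => true
```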
|
# ? Jul 25, 2019 22:37 |
|
Pollyanna posted:An engineer is proposing monkey patching in two new methods to String, for an “encrypt”/“decrypt” functionality (real basic, nothing fancy). I was under the impression is monkey patching bad, but why exactly is this the case? To flip this around, why do they want to do that? It isn't any better than having a class with static methods, which is what ActiveSupport does.
|
# ? Jul 25, 2019 22:46 |
|
Yeah, I'd think you would want a class so you know it's encrypted, otherwise it'd be difficult to tell.
|
# ? Jul 26, 2019 00:01 |
|
kayakyakr posted:Good response. The requester in this case is a hardware device running on Android, but I don’t have total familiarity with that code base so I’ll see if I can isolate how it’s shaping the payload before it comes over. What’s confusing is that the same requests coming from this device did not have this problem with Rails 4, so I’m trying to isolate whether the payload was always incorrectly shaped and “just worked”, OR whether the Rails 5 handling of json and jsonb fields needs the attention and would rectify the situation. Like, I proved I can build a serializer for the model field that will fix the problem for a single JSON array and save it properly, for instance. Nested JSON still has problems though.
|
# ? Jul 26, 2019 13:51 |
|
e: incorrect post here
|
# ? Jul 26, 2019 13:58 |
|
DaTroof posted:Overriding an existing method is heinous because it can lead to incorrect assumptions and elusive bugs. Adding new methods to an existing class isn't as bad. Given that Rails is already a quilt sewn from monkeypatches, adding a couple String methods seems okay; but my first preference would still be to encapsulate those features in a dedicated class or module, especially if the implementation requires internal state. This is perfect. The only thing I'd add is that if you only need your monkey patch in a few specific places, take a look at refinements. They're very weird because of how they handle scoping, but they work well when you need an explicit extension on a base class in only a few places, and they let you easily remove the patch later if you find a cleaner solution, instead of patching globally.
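A small self-contained sketch of a refinement (names invented for illustration): the extra String method is only visible where the refinement has been activated with `using`.

```ruby
# The refinement: defines String#shout, but only for code that opts in.
module StringMarking
  refine String do
    def shout
      upcase + "!"
    end
  end
end

class Announcer
  using StringMarking # lexically scoped to this class body

  def self.announce(msg)
    msg.shout # visible here
  end
end

Announcer.announce("deploy")   # => "DEPLOY!"
"deploy".respond_to?(:shout)   # => false outside the refinement's scope
```

Deleting the `using` line (and the module) later is a local change, unlike unwinding a global monkey patch.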
|
# ? Jul 26, 2019 13:59 |