|
necrotic posted:The matcher docs explicitly state STDOUT is not checked as they replace $stdout for the check. That did the trick - thanks!
|
# ¿ Jun 5, 2017 22:15 |
|
|
# ¿ May 20, 2024 15:17 |
|
I have a high level question. We now have several more or less unrelated applications, and we'd like to have a common mechanism for emailing the help desk from any of them. We're thinking in terms of a shared widget that we can link to from every app. What are some good ways to go about this? I don't want to have a completely separate copy of the code in each app, and I don't think this really warrants a separate application. Is a gem the right way to go, or is there a better way to develop and use this independently of any particular application? Peristalsis fucked around with this message at 22:21 on Jun 8, 2017 |
# ¿ Jun 8, 2017 22:18 |
|
kayakyakr posted:Making a gem for this is fairly easy, especially if the gem doesn't need to know anything specialized about the app that it's embedded in. That's generally the right way to go. necrotic posted:Echoing kayakyakr you can use a gem for this. You can even publish private gems using a service like Gem Fury, or hosting your own internal repo. Thanks folks - I'll try making my very first gem. I don't know if it's going to involve any JavaScript at this point, but if it does, it can still just go in the gem, right?
|
# ¿ Jun 9, 2017 15:30 |
|
I'm reviewing some code that has some new rake tasks in a file, where the guy who wrote it has declared variables outside of the tasks. He seems to be using them as a way to pass data between tasks. For example: myfile.rake code:
|
# ¿ Jul 25, 2017 22:15 |
|
The Milkman posted:That definitely 'feels' like it shouldn't work. I guess depending on how those tasks are invoked my_hash is a top level variable still in memory? If it actually works and it's just a one-off I suppose it doesn't hurt too much. A MIRACLE posted:Rake tasks shouldn't be sharing variables like that. Offload the "populates and returns a hash" parts to a `lib/my_library.rb` class Thanks all. I helped him remove some of the dependencies, and we're going to leave the rest in there for now. Coding around them would basically require completely re-creating this portion of the data import. He claims that the variables don't persist between rake runs - assuming that's true, it relieves my concern about them clogging up memory indefinitely. necrotic posted:It works because my_hash is a variable scoped to the namespace block, and the subsequent task blocks have access to the parent scope(s). I forgot to mention/include that the dependencies are explicit, so that the third task does automatically run the second task first.
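necrotic's explanation can be sketched in plain Ruby: `namespace` takes a block, so a local variable declared inside it is captured by the closures that define each task. A minimal, self-contained sketch (the task names and the `$import_sum` global are hypothetical, purely for illustration):

```ruby
require 'rake'
include Rake::DSL # make the task/namespace DSL available outside a Rakefile

namespace :import do
  my_hash = {} # a local of the namespace block, shared by the task closures below

  task :fetch do
    my_hash[:rows] = [1, 2, 3] # first task populates the shared hash
  end

  # Explicit prerequisite: invoking import:sum runs import:fetch first.
  task sum: :fetch do
    $import_sum = my_hash[:rows].sum # visible here via the enclosing scope
  end
end

Rake::Task['import:sum'].invoke
puts $import_sum
```

As noted in the post, these locals only live for the duration of one rake process; nothing persists between runs.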
|
# ¿ Jul 26, 2017 21:45 |
|
I'm not clear how RVM sets up default and global gemsets. The documentation here seems pretty clear, but also doesn't coincide with what I'm seeing on my system. For example, if I read it correctly, the file ~/.rvm/gemsets/global.gems determines what is in the global gemset when I install a new ruby version. If I have additional or different gems specified in ~/.rvm/gemsets/<ruby implementation> or /gemsets/<ruby implementation>/<version number>, then that affects things, too (not sure whether they are appended, or one supersedes the others). However, I just installed ruby 2.4.1 with RVM, and my ~/.rvm/gemsets/ruby-2.4.1@global/gems directory has a bunch of gem directories that are not represented in ~/.rvm/gemsets/global.gems (which only contains gem-wrappers, rubygems-bundler, rake, and rvm). Further, when I use the new ruby with the global gemset, then execute gem list, it shows everything in global.gems, as well as 2 or 3 other gems (like bigdecimal) that I don't see specified anywhere. So, how is the initial global gemset determined, and why do I seem to have extra gems installed in it? Also, I created a new gemset for my new ruby version, then made it the default by executing rvm use 2.4.1@newset --default, as in the instructions here. However, my (default) gemset still exists, and executing rvm use 2.4.1 with no gemset specified still switches me to the (default) gemset, and not the one I tried to make default. Am I misunderstanding something here? Peristalsis fucked around with this message at 21:51 on Aug 24, 2017 |
# ¿ Aug 24, 2017 17:40 |
|
The CI pipeline for one of our applications seems to be recreating the database every time it runs, by running all migrations. I've pointed out that the rails documentation (and the boilerplate comments in schema.rb) say not to create databases with migrations, but the lead programmer for this product is concerned that schema.rb may not work correctly because it may not be truly database agnostic (dev machines use Sqlite3, production server uses Oracle). Schema.rb looks to me like it's pretty much completely abstracted away from the actual SQL, and I'd have assumed that whatever DSL runs to keep migrations database agnostic also keeps rails db:reset (or its equivalents) agnostic. One former employee even said that at his previous job, they didn't keep the migrations around in source control once they started to get old, because they just aren't very useful after a while. So: 1) Is CI an exception to the general rule, where migrations should be used for creating the database, or should it also use schema.rb? 2) Do we need to worry that a schema.rb generated on a dev box using Sqlite3 might not work properly under Oracle?
|
# ¿ Oct 4, 2017 19:26 |
|
Sivart13 posted:Good stuff Thanks! I admit that isn't exactly what I wanted to hear, but at least maybe I won't make an rear end of myself at the next meeting now. Is it worthwhile to switch my dev db to MySQL? By "worthwhile", I mean are there less likely to be compatibility issues between MySQL and Oracle than between SQLite and Oracle, or is it just exchanging one set of issues for another? I admit I like how easy it is to use SQLite, despite having some reservations about it when I started using it, but it wouldn't be hard to use MySQL on my dev box instead.
|
# ¿ Oct 5, 2017 16:23 |
|
A minor optimization issue came up the other day in a project, and I was surprised how much trouble I had speeding up some work with models. I did some crude profiling, found a couple of sections of a page load that were hammering the database (relatively speaking), and tried a few things to eager load association data to reduce the database hits. I was successful in reducing the number of queries in at least one place, but it increased the total page load time for the page in question. I tried using includes() and eager_load() on an object's associated objects (so, on a CollectionProxy, I think), and at least one time I did manage to collapse a bunch of queries down to one, but it didn't have the desired effect. I also tried `left_outer_joins` at one point, with similarly disappointing results. I can't spend much more time on this right now (it's just something another developer was concerned about, and just a nice-to-have refactoring issue for now), but I'm curious if there are any obvious or common pitfalls to using these methods. Possible issues that I would explore if I had more time:
The method being run to populate and display this page is heavy with mathematical computations, and in a pinch we can probably speed things up by tweaking the schema to store some partial results, but I'd like to clean up the SQL before deciding if we need to denormalize the database. This project uses Rails 5.1.4, and ruby 2.4.0. Peristalsis fucked around with this message at 17:17 on Nov 15, 2017 |
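For what it's worth, the loading strategies in question produce very different query shapes, which can explain why "fewer queries" got slower. A sketch, using the Experiment/Plate/Sample models described later in the thread (so treat the association names as assumptions):

```ruby
# preload: one query per table - three round trips, but each result set
# is small and simple.
Experiment.preload(plates: :samples)

# eager_load: a single LEFT OUTER JOIN across all three tables - one round
# trip, but every experiment/plate column is repeated on each sample row,
# so the result set (and Ruby-side object instantiation) can cost more.
Experiment.eager_load(plates: :samples)

# includes: lets Rails choose; it preloads unless a condition references
# the association, in which case it switches to the join form.
Experiment.includes(plates: :samples)

# left_outer_joins: performs the join without instantiating the
# associated records at all - useful for filtering, not for access.
Experiment.left_outer_joins(plates: :samples).distinct
```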
# ¿ Nov 15, 2017 17:12 |
|
kayakyakr posted:I'd guess the reasons why you didn't see a speedup are these, in order of likelihood: The Milkman posted:Agreed on all three points. I would try Postgres locally if possible, for many reasons. I wouldn't necessarily expect sqlite to reflect certain db optimizations. If you're on a Mac, Homebrew does a pretty decent job of installing/running it these days, easier than Postgres.app even. Thanks folks! I have MySQL installed already for another project, maybe I'll switch this project over to it for a quick test. I assume it's a serious enough db to show some results if there should be improvements. I'm working on Ubuntu, but I do have a MacBook for meetings and whatnot that I can use if I do end up installing Postgres. Edit: Also, regarding item 2) above, I have no doubt that the computations involved in this particular page load are taking up time, but I don't think that would explain why collapsing 50 - 100 db hits down to 1 or 2 would slow the total load time. The profiling I used did actually show that the new query itself took longer than the ones it replaced. Peristalsis fucked around with this message at 18:35 on Nov 16, 2017 |
# ¿ Nov 16, 2017 18:30 |
|
Doh004 posted:How are your indexes looking? I'm not sure about the indexes - I'll have to look in to that. Just switching to MySQL didn't do much good, so I'm doing something wrong in my changes. The data model is roughly this: Experiments have many Plates Plates have many Samples Samples have many DataPoints One of the places I'm trying to optimize is an Experiment method that returns a subset of its samples, based on sample_type. So, this code code:
There are a number of different things that could be tried here, but I just wanted to see if I could pre-load some data to start off, and changed code:
code:
|
# ¿ Nov 16, 2017 19:57 |
|
necrotic posted:SQLite is plenty fast. The issue is both populating a tree of objects from the queries and then filtering those objects in ruby. To get performance here you really need to leverage the database for querying what you want. Thanks, I'll look into restructuring the loop so it only goes over the samples once. Also, I did check the indexing: samples are indexed on plate_id, and plates are indexed on experiment_id - I'm not an expert, but that seems like the obvious list of things to index. I suspect there's a larger, structural problem here - something being done brute-force that could be done better, or maybe repeated queries over the same data. It's not really my project, so I'm not putting in the time I would need to really tear apart the existing code, but I'm baffled that pre-loading data didn't at least help some to do a slow thing faster, even if it should be doing something else to begin with.
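Regarding "leverage the database for querying what you want": a sketch of the difference (the association and column names are assumptions based on the descriptions above):

```ruby
# Filtering in Ruby forces every sample to be loaded and instantiated,
# then discards most of them:
experiment.plates.flat_map(&:samples).select { |s| s.sample_type == type }

# With a has_many :through, the database does the filtering and can use
# the plate_id / experiment_id indexes mentioned above:
class Experiment < ApplicationRecord
  has_many :plates
  has_many :samples, through: :plates
end

experiment.samples.where(sample_type: type) # one indexed query
```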
|
# ¿ Nov 20, 2017 17:10 |
|
I'm a little late to the STI discussion, but is there an accepted better way to model inheritance of Model classes in Rails? The only other way I've seen is to have a separate table for each child model, with references back to the parent model. So, something like this: code:
code:
code:
Is STI just considered kind of an ugly hack that's too loose? For example, it seems unfortunate to me that with STI, if classes B and C both inherit from A, B objects can access attributes intended to be used only by C objects (though they'd have to use exact db attribute names, rather than association names that can be specific to subclasses). Our current product is starting to use a few STI models, and if this is a recipe for future problems, I'd like to know now.
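For reference, a sketch of the two shapes being compared, using the A/B/C naming from the post (table and association names are illustrative only):

```ruby
# STI: one table holds every column for A, B, and C, plus a `type` column.
# Nothing stops a B instance from reading a column only C is meant to use -
# that's the looseness described above.
class A < ApplicationRecord; end # single shared table with a `type` column
class B < A; end
class C < A; end

# The hand-rolled alternative ("class table inheritance"): one table per
# subclass, each holding only its own columns plus a reference to the
# shared parent row.
class BDetail < ApplicationRecord
  belongs_to :a # b_details table: a_id plus B-specific columns only
end
```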
|
# ¿ Mar 6, 2018 20:19 |
|
Thanks for your STI input. It sounds like STI is sort of there because it's easy and often good enough. Now I have a question about FactoryBot best practices. Using sequences for unique names is fine with RSpec tests, but if you use a factory to generate dev data from Rails console, the sequences restart every time you restart the console, which means they aren't unique any more. I've been replacing sequence and Faker::Name.unique instances with names with timestamps appended to manually unique-ify things as collisions come up. Another dev doesn't feel that FactoryBot should be used in Rails console in this way, and that it's only for setting up and executing tests. I think it's crazy to manually set up data with multiple complex associations by hand every time I want to try something out when that's exactly what factories do for us. Is he right that it's a best practice not to use Factories from the console? We also use factories in our seeds.rb for populating dev data - I'm not sure why that would be okay, but creating more data from the console isn't. (And I just found out that we may have a factory being used in some production data import code, too - I'm pretty sure that isn't a good idea.)
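One middle ground, if factories do stay in console use: make the sequence itself collision-resistant across processes instead of patching individual call sites. A sketch (the factory and attribute names are made up):

```ruby
require 'securerandom'

FactoryBot.define do
  factory :experiment do
    # A bare sequence(:name) { |n| "Experiment #{n}" } restarts at 1 in every
    # new console session, colliding with rows created earlier. Folding in a
    # per-process random suffix keeps console-created records unique without
    # hand-editing names at each call site.
    sequence(:name) { |n| "Experiment #{n}-#{SecureRandom.hex(3)}" }
  end
end
```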
|
# ¿ Mar 7, 2018 17:19 |
|
Slimy Hog posted:I'm not even sure how you would use a factory in production. I haven't verified that it's there (another developer told me), but it's for a data import from another system. I assume it's just a way to quickly create model objects from values read in from a text file. Edit: Yeah, it's used extensively in an import rake task. Peristalsis fucked around with this message at 20:10 on Mar 7, 2018 |
# ¿ Mar 7, 2018 20:08 |
|
xtal posted:I think one of us is misunderstanding factories That's possible. I'm talking about the FactoryBot gem used to set up test data, not the standard factory design pattern. I'm also very tired today, so I apologize if I garbled my descriptions.
|
# ¿ Mar 8, 2018 00:28 |
|
ToadStyle posted:Polymorphic inheritance maybe? I'm not sure what you mean. Isn't all inheritance polymorphic by definition? I'm specifically asking about how to model class hierarchies in relational database tables, not how to structure them in code.
|
# ¿ Mar 8, 2018 16:21 |
|
I'm working with CarrierWave to upload files as attachments to experiment objects. For dev, I have a local minio docker container for file storage, and I guess I'm using fog in some way to interact with it - mostly I'm copying similar code out of another project. The files seem to upload fine, but when a file is deleted by the user, I can't seem to scrub it from the minio instance. Here's my setup: code:
code:
NoMethodError at /datafiles/10023: undefined method `remove_previously_stored_files_after_update' for 0:Fixnum

I've tried each approach individually, and both together, with no luck.
|
# ¿ Aug 14, 2018 18:45 |
|
manero posted:I'd think you wouldn't need to clean up the attached file, when you call @datafile.destroy, carrierwave should take care of removing the file out of storage for you This is what seems to be happening for the other project from which I copied some of the setup and code (according to a developer on the project - I haven't verified it myself). It makes me think I just have something configured wrong.
|
# ¿ Aug 15, 2018 17:00 |
|
manero posted:I'd try removing all the stuff like the entire clean_s3 method, and your destroy action should just be @datafile.destroy, don't bother with the calls to remove_uploaded_file! and save The issue ended up being that my migration for the new Datafile object had an integer for the mounted column, instead of a string. I fixed that, and all seems to be well.
|
# ¿ Aug 15, 2018 22:33 |
|
The team I'm on is at the point where it would be good to get on the same page about what we should be testing, and how we should be testing it. Are there books or other resources you have used that were valuable in adopting good conventions? This can be Rails-specific, or more general. Our main problem right now is that we all have different ideas about our testing goals, and none of us are particularly right. We're all somewhat new to using structured test suites, and would benefit from a standard approach to using unit vs. feature tests, coverage goals, writing for test speed vs. readability, etc. We could also change our testing tools, if warranted. Right now, we use RSpec with Capybara. The lead developer for the project has used Cucumber in the past, and didn't care for it compared to RSpec, and I also tend to prefer simple, straightforward test tools to ones that force you to be chatty and try to have a conversation with the computer, rather than just executing assertions. That said, we'd have to find some pretty compelling benefits to change to a different system entirely.
|
# ¿ Aug 24, 2018 16:40 |
|
I have a weird problem. I'm uploading spreadsheet and/or CSV files for processing. Some of the CSV files have a Byte Order Mark (BOM) at the beginning of the file and it's screwing everything up. The BOM is prepended to the first header value in the top row, and I'm having trouble figuring out how to deal with this. I've tried a number of different things, from cleaning the key in each key-value pair to trimming the stream directly in tempfile, but I keep hitting a couple of roadblocks. 1) Using String#start_with? and the like to locate the BOM doesn't work 2) File.open is supposed to allow you to specify a BOM-friendly mode, but I'm using Roo, which doesn't seem to have a way to specify that. I tried using the CSV library for csv files, but it also doesn't seem to be able to digest this file. Has anyone else dealt with this successfully?
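For reference, Ruby's IO layer can consume a BOM itself via the `bom|utf-8` open mode, before CSV (or anything layered on top of it) ever sees the bytes. A self-contained sketch with the stdlib CSV class, writing a BOM'd file and reading it back both ways (Roo is a separate question; this only shows the stdlib behavior):

```ruby
require 'csv'
require 'tempfile'

BOM = "\xEF\xBB\xBF" # the UTF-8 byte order mark, as Excel often writes it

file = Tempfile.new(['data', '.csv'])
file.write("#{BOM}name,count\nwidget,3\n")
file.close

# Naive read: the BOM stays glued to the first header, as described above.
plain = CSV.read(file.path, headers: true, encoding: 'utf-8')

# bom|utf-8 mode: the IO layer strips the BOM before parsing.
clean = CSV.read(file.path, headers: true, encoding: 'bom|utf-8')

puts plain.headers.first.codepoints.first.to_s(16) # the stowaway U+FEFF
puts clean.headers.first
```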
|
# ¿ Nov 27, 2018 19:42 |
|
Pollyanna posted:I’m at my wits end with these stupid loving tests. Is there any way to debug why Capybara would intermittently take five seconds to find and click on a link in a page? We’re often but not always spending like 5600ms on any click_link or click_button call, and I’m having a gently caress of a time figuring out why. I don’t think it’s an async thing and we don’t seem to be spending much time in the DB or anything, so I suspect that Capybara itself is having trouble getting what it wants. I feel your pain. In my experience, capybara is kind of erratic in ways like this, and it can be very difficult to figure out why. It seems odd to me that it always eventually finds your links - I've had more experiences where it just can't find some button or link that's right in front of it, causing a slow failure as it sits through its wait time, not a slow pass. Maybe someone who knows it better will comment further, but here are a few things to look at:

* Capybara locates things on the page to click on by screen coordinates, and sometimes the screen rendering makes the control move after capybara locates it, but before it tries to click on it. Capybara then tries to click on some empty part of the page (or another control entirely).
* If you have a screen that's slow to load, you can tell capybara to find some late-loading feature before you do other things, as a sort of check that the screen is done rendering before you expect too much.
* It's better to use click_on, click_link, etc. than to activate widgets and explicitly call some callback on it. I've resorted to the latter at times when I just can't figure out what the hell is going on, and always ended up regretting it later.
* The first test that is run seems to take a long time on the application I work on, and I assume that's just because it's loading the app or something while doing it.
* Try not to use before(:all) blocks to save time. It can speed things up, but has always caused me more headaches than it's worth. Just stick with before(:each)/background.
* Keeping each test short and specific can help locate exactly what step is causing the problem, and/or prevent the combination of steps that's causing the problem from being in the same test together. This may make the tests take longer, as each before and after block run more times for lots of short tests than for a few long ones, but keeping them simple makes them more understandable and maintainable.
* Finding the root cause of capybara issues can take a lot longer than randomly changing things around (the order of steps, which tests do what things, different ways of doing the same things, etc.) until something works. I'd always prefer to know what the gently caress is actually going on, but with capybara in particular, figuring that out can be a frustrating dead end. And sometimes when you stumble across a fix, you can figure out what the problem must have been much better than trying to deduce it from first principles.

Using within blocks is good for preventing ambiguous searches, but I wouldn't expect it to fix full seconds of delay just from shortening the parsing of HTML. Good luck!
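The "find a late-loading element first" and within suggestions above might look like this in a spec (the selectors are placeholders):

```ruby
# find() retries until Capybara.default_max_wait_time elapses, so it
# doubles as a "page finished rendering" barrier before anything is clicked.
find('#results-panel')

# Scoping the click avoids ambiguous matches and whole-page searches.
within('#results-panel') do
  click_link 'Details'
end
```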
|
# ¿ Jan 5, 2019 17:48 |
|
I hope this is a simple question to answer, but I can't quite find it elsewhere. I have an index page with some filter params. There's a link on the page to download a csv version of the page, so the controller looks roughly like this: code:
code:
Changing the link like so: <%= link_to(items_path(format: 'csv', filters: params[:filters])) %> leads to "unable to convert unpermitted parameters to hash".
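That error comes from handing a raw ActionController::Parameters object to a URL helper; since Rails 5 those objects no longer convert to a hash unless the keys are permitted. A sketch of the usual fix (the filter key names here are invented):

```ruby
# In the controller, whitelist the known filter keys once and reuse:
def filter_params
  params.fetch(:filters, {}).permit(:status, :created_after)
end
helper_method :filter_params

# In the view, pass the permitted hash to the path helper:
#   <%= link_to('CSV', items_path(format: :csv, filters: filter_params.to_h)) %>
```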
|
# ¿ May 13, 2019 21:54 |
|
Aquarium of Lies posted:My company has a large Ruby on Rails app that's 6+ years old, but thankfully is relatively up-to-date version wise (currently running on Rails 5). During the initial production, we chose to use JRuby + puma for performance reasons. This works fine in production, but one of the biggest issues our devs have with the codebase is how slow development is. Between JRuby (especially startup time) and poorly written specs, it takes 35-45 minutes to run all our specs. This makes it really annoying to wait for our CI to go green even for very small changes. I don't know anything about JRuby, but if you're not already doing it, this gem can parallelize your tests. It's not especially smart about how it divides up the tests between threads/processes, but it does shorten our test runs by quite a bit. I believe Rails or RSpec is supposed to be parallelized by default in a not-too-distant release, too.
|
# ¿ May 16, 2019 15:52 |
|
I have a Capybara test that passes locally, but fails intermittently (almost always at the moment) on our CI. The test clicks on a link to open a menu, then clicks on a link on the menu. That menu link isn't visible, and I don't know why. Does anyone have any suggestions of something I could try? Here's roughly how the test works: code:
I can comment out/delete this test - it's just verifying that a menu link works - but I'd like to fix it if I can.
|
# ¿ Oct 31, 2019 20:27 |
|
I must be doing something very stupid here, but I can't get a method to take the right number of parameters. I'm trying to do this: code:
rubydoc posted:#number_field_tag(name, value = nil, options = {}) ⇒ Object However, when I try to pass in name, value, and an options hash, I get: code:
I'm using Rails version 5.2.4.3. Edit: Okay, technically the rails docs don't say the form helper method takes the same params as number_field_tag, they say it takes the same options. The method signature for the form helper method is this: quote:number_field(object_name, method, options = {}) public That still looks like three params to me, and I have no idea what the "method" param is supposed to be. Could there be a difference between the form objects returned by form_tag and form_for? Peristalsis fucked around with this message at 21:07 on Aug 28, 2020 |
# ¿ Aug 28, 2020 20:37 |
|
Tea Bone posted:The boot strap form gem redefines most of the helpers and they don't always work 1:1. I've come up against this before. If you can, look into the source code for the gem, but off the top of my head I think you might need to pass the value as a named parameter. Try value:123 or default:123 Thanks for your response. Neither value nor default worked, but I think I'm going to just omit the default value for now, so I can move on and get some work done. If I get some time, I might look into the underlying code, but I'm not sure I'm willing to do that. I've always had trouble with the helper methods (even without bootstrap, if I recall correctly), and I'm not sure I want to take on any additional frustration or delays right now.
|
# ¿ Aug 28, 2020 21:34 |
|
Tea Bone posted:No worries, but that's strange Well, now it works (and foo="bar" is in there, too). I looked at another place in our codebase that used the value parameter, and which also wasn't populating the UI with it, and that seems to be working now, too. I think I'm going to call it a week, see if it's still bothering me on Monday. Thanks for your help!
|
# ¿ Aug 28, 2020 23:52 |
|
enki42 posted:I don't think that's bootstrap, that's standard for all the _tag vs form instance methods - the value is by default derived from the form's object and so you never pass it in (although you can override it with a value option). Are you talking about the number of parameters it accepts/requires here? It still seems bad to me to document the signature as taking 3 params, but only accepting 2. Or am I looking at the wrong documentation? enki42 posted:Think of it this way - if that method had 3 arguments, you'd have to unnecessarily pass the value every time, when 95% of the time you just want to use the current value of whatever your model is. I'm not saying that it should or shouldn't take 3 arguments, just that the documentation should match the behavior enki42 posted:To be honest, I'd consider reworking your form a bit anyway so that the controller initializes a model (without saving it), and build the form off that. That way you can specify your default values in your controller (or even your model or your DB if it's a universal default), which is going to be less buried when you come back to it, can be tested if you needed to for some reason, and you can share the same form for editing and creating if you need to do both. This is a form that is collecting metadata to use to create multiple objects. It takes in a base name and the number of objects desired, and creates them. So, you pass in "My Object", and 3, and when you click submit, the system creates 3 objects, named My Object 1, My Object 2, and My Object 3. So, there is no single model to initialize in the controller before calling the form. I'm certainly open to better ideas, but using form_tag seemed the easiest way to make a form to collect data that isn't directly related to a single model.
|
# ¿ Aug 31, 2020 16:41 |
|
necrotic posted:The docs you linked are for the bare number_field_tag, not the form helper version. The form helper version automatically defaults to calling (essentially) model.__send__(:field_name) to get the default value (the second param in the one you linked). Sorry, I updated my original post with this link. That still looks like 3 params to me, and there's no explanation of any of them, except to say that the options are the same as number_field_tag's options.
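For the record, the two same-named helpers live on different modules, which is why the signatures never lined up. A side-by-side sketch (the field names are invented):

```ruby
# ActionView::Helpers::FormTagHelper - standalone tag, value passed explicitly:
number_field_tag(:copies, 3, min: 1, max: 10)

# ActionView::Helpers::FormHelper - the `method` argument is the model
# attribute name; the value is read from the object, so there is no
# value argument:
number_field(:order, :copies, min: 1, max: 10)

# On a form builder (form_for / form_with), the object is implicit:
#   f.number_field(:copies, min: 1, max: 10)
```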
|
# ¿ Sep 3, 2020 03:26 |
|
Jaded Burnout posted:Short version: you're looking at the documentation for a similar method with the same name but on a different class. Thanks so much. This has been a long source of frustration for me, and I probably just never realized I was looking at the documentation for the wrong method(s).
|
# ¿ Sep 15, 2020 17:50 |
|
I've moved a couple of partials to be rendered asynchronously with the render_async gem. They were slow loading tabs on a show page, and moving them broke tests, because (I think) the tests don't know to wait for the asynchronous tabs to finish loading. I'd like a way to continue testing these tabs. I found that RSpec/capybara is supposed to be able to render partials, but when I try that in feature tests, render is not a recognized method name. This app doesn't have dedicated controller tests, which is where I assume this is actually supposed to go, and I don't know if those allow standard UI expectations anyway. Does anyone have any suggestions for a good way to test this?
|
# ¿ Jan 11, 2021 23:52 |
|
A MIRACLE posted:post your test? is it a controller type test? It's a very convoluted feature test - I'm replacing one of the helper methods it uses with a new method just to render and check the partial: code:
I also tried moving the render command into the test file itself (instead of a helper), and got the same error there. Like I said, I assume render only works in controller tests, but there are no dedicated controller tests in this app. I can add one to see if it works, but I wondered if there's another best practice way to approach this, preferably keeping it in a Capybara feature test. With all the JavaScript and asynchronous stuff being done these days, others must have had (and solved, I hope) similar problems. Update: I found a way to make it work - thanks for your input. Peristalsis fucked around with this message at 02:22 on Jan 13, 2021 |
# ¿ Jan 12, 2021 16:14 |
|
I added render_async to a multi-tabbed page, to let a couple of tabs load in the background while the rest of the page is viewable. This worked okay, except for one thing - there's a collapsible panel at the bottom of each tab that loads in a collapsed state, and won't open when clicked on in the new branch. I'm not very good with JavaScript (and I've never looked at coffeescript at all), but I've been able to determine that the coffeescript method used to open and close the panel isn't firing any more for the tabs I changed. It still works okay on tabs I didn't modify. I'm assuming there's some issue with the script not hooking up with the haml code properly due to the asynchronous loading, but I really have no idea what to do from here. There are no JS errors showing in the browser when I click on the non-responsive panel. If anyone has any suggestions or even useful background info, I'd be grateful.
|
# ¿ Feb 18, 2021 22:16 |
|
A MIRACLE posted:This is the kind of thing I would hop on a screen share call for. But it’s basically you’re at the point that you need to learn the chrome or Firefox debuggers and how to set breakpoints in your front end enki42 posted:Also, I know this doesn't directly solve your problem, but we recently subbed out render_async for Turbo, and it's a way smoother experience IMO (it also natively supports things like not loading a tabs content until it's visible in the DOM, so you can take a lot of that javascript work off your plate). Thanks! As I was showering this morning, it occurred to me that the problem might be that the JS that isn't firing is in a document ready or document loaded event handler, and since the page is already loaded when the async call is made, it isn't getting attached to the new parts of the page correctly. I don't think we're using turbo* - will something like this still work? Edit: The Using Default Events section of the documentation looks promising. When all else fails, read the rest of the instructions, eh? Peristalsis fucked around with this message at 16:11 on Feb 19, 2021 |
# ¿ Feb 19, 2021 16:08 |
|
Never mind, I'm an idiot.
Peristalsis fucked around with this message at 21:00 on Apr 19, 2021 |
# ¿ Apr 19, 2021 20:47 |
|
I'm using the new delegated types feature in rails, and it seems interesting and functional, but I'm thinking about how we basically have two model objects for a single conceptual object now, and what to do about it. I was wondering if anyone else had any thoughts. To use the example from the documentation: code:
code:
What happens when you want to retrieve a bunch of entries for something else, though? You get a collection of entries, each with an attached entryable object. You can access its associated entryable object to call methods and retrieve attributes from it, but you need to remember that the indirection is necessary. I've read some suggestions to delegate appropriate method calls on the Entry model to its entryable object, at least for methods defined in every delegated class, but I guess I'm a little surprised that something like that isn't baked into the mechanism.

And what about when you just retrieve a bunch of messages directly, without any comments, and without going through the Entry model? You have a collection of messages, each with its associated entry object. But much, if not most, of the data for that message is actually on its entry object. So as you process your messages, you have an indirection issue to remember here, too. You could have the message model delegate the entry-related calls back to the entry, but it seems to me like an anti-pattern to have two classes where each is delegating to the other*.

Maybe the answer is to access messages and comments strictly through Entry - Entry.messages.where(...) and Entry.comments.where(...) instead of Message.where(...) and Comment.where(...) - and always remember that you have an entry object, rather than the associated delegated class. That probably works fine, but it seems unsatisfying to me. I guess my problem is that with all of this boilerplate, I should be able to treat the entry-entryable pair as a single logical object in the code. Otherwise, I'm not really gaining a lot over hand-rolling a solution.

Another issue that has come up is with has_many, through relationships with the delegated classes. code:
I could add a many-to-many relationship directly between user and messages (and comments), and maybe that's the right answer, but then I either have to restructure some code that already uses the user-entry link (since I'm adding this to an existing project), or I have duplicate connections between each user and its messages (and comments), and have increased the potential for a data mismatch. I don't know, I guess I just feel like this feature isn't quite ready for prime time, and that it should offer a little more convenience, or at least more documentation setting best practices for how to think of the resulting pairs of objects (i.e. do we now approach this as a bunch of entries with some delegated objects attached, or a bunch of messages and comments, each with some entry details attached?) * Instead of delegation, I could use a method_missing method on one class telling it to go look at the other class before raising an exception, and that should be pretty equivalent.
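The "delegate shared calls downward" approach mentioned above can be sketched like this, extending the Entry/Message example from the Rails docs (the delegated method names, :title and :body, are placeholders for whatever interface the entryable classes share):

```ruby
class Entry < ApplicationRecord
  belongs_to :account
  delegated_type :entryable, types: %w[Message Comment]

  # Forward the shared interface so callers can treat an Entry as one
  # logical object instead of reaching through entry.entryable everywhere.
  delegate :title, :body, to: :entryable
end

module Entryable
  extend ActiveSupport::Concern
  included do
    has_one :entry, as: :entryable, touch: true
  end
end

class Message < ApplicationRecord
  include Entryable
end
```

Delegating the other direction too (Message back to Entry) would reintroduce the mutual-delegation smell described in the post, so this sketch only forwards downward.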
|
# ¿ Mar 2, 2023 22:27 |
|
|
|
Thanks for your feedback. It's nice to have some confirmation at least that I'm not missing something easy and obvious. Gmaz posted:If I looked at it from the second approach then I would probably keep the models separated and create custom objects when I need to combine data. Could you expand on what you mean by this, exactly? Do you mean you wouldn't use the delegated types construct at all, and just use regular composition, or you'd create another, separate class for handling merged entries and entryables?
|
# ¿ Mar 6, 2023 02:57 |