Peristalsis
Apr 5, 2004
Move along.

necrotic posted:

The matcher docs explicitly state STDOUT is not checked as they replace $stdout for the check.

https://relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/output-matcher

So if your code uses the global instead of the constant it would work.

That did the trick - thanks!

Peristalsis
Apr 5, 2004
Move along.
I have a high level question.

We now have several more or less unrelated applications, and we'd like to have a common mechanism for emailing the help desk from any of them. We're thinking in terms of a shared widget that we can have a link to in every app. What are some good ways to go about this?

I don't want to have a completely separate copy of the code in each app, and I don't think this really warrants a separate application. Is a gem the right way to go, or is there a better way to develop and use this independent of any particular application?

Peristalsis fucked around with this message at 22:21 on Jun 8, 2017

Peristalsis
Apr 5, 2004
Move along.

kayakyakr posted:

Making a gem for this is fairly easy, especially if the gem doesn't need to know anything specialized about the app that it's embedded in. That's generally the right way to go.


necrotic posted:

Echoing kayakyakr you can use a gem for this. You can even publish private gems using a service like Gem Fury, or hosting your own internal repo.

If it's something JS based you could maybe get away with a shared script on your CDN instead of a gem.

Thanks folks - I'll try making my very first gem. I don't know if it's going to involve any JavaScript at this point, but if it does, it can still just go in the gem, right?
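From a quick look, a gem skeleton is mostly just a gemspec plus a lib/ directory - something like this (a sketch; the gem name and metadata here are made up):

code:
# helpdesk_widget.gemspec
Gem::Specification.new do |spec|
  spec.name    = 'helpdesk_widget'
  spec.version = '0.1.0'
  spec.authors = ['Our Team']
  spec.summary = 'Shared help-desk email form for our Rails apps'
  spec.files   = Dir['lib/**/*', 'app/**/*']
  spec.add_dependency 'rails', '>= 5.0'
end
If the JavaScript ends up mattering, my understanding is the gem can be built as a Rails engine so it carries assets and views along with the Ruby code.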

Peristalsis
Apr 5, 2004
Move along.
I'm reviewing some code that adds new rake tasks to a file, and the guy who wrote it has declared variables outside of the tasks. He seems to be using them as a way to pass data between tasks.

For example:

myfile.rake
code:
namespace :imports do
  task :first_task => :environment do
    # Import some stuff into the database
  end

  my_hash = {}
  task :second_task => :environment do
    # Do some stuff that populates and uses my_hash
  end

  task :third_task => :environment do
    # Do some more stuff that uses my_hash
  end
end
I don't want to be overly pedantic about it, but this seems like a really bad idea. I haven't (yet) found much about it by Googling, so I thought I'd ask here - is this an accepted/acceptable pattern, or should I tell him to re-work his code? These tasks will hopefully only be run once in production at a new system's go-live, so I'm not too worried about elegance or long-term maintainability; it just looks sketchy to me.

Peristalsis
Apr 5, 2004
Move along.

The Milkman posted:

That definitely 'feels' like it shouldn't work. I guess, depending on how those tasks are invoked, my_hash is a top-level variable still in memory? If it actually works and it's just a one-off, I suppose it doesn't hurt too much.


A MIRACLE posted:

Rake tasks shouldn't be sharing variables like that. Offload the "populates and returns a hash" parts to a `lib/my_library.rb` class

Thanks all. I helped him remove some of the dependencies, and we're going to leave the rest in there for now. Coding around them would basically require completely re-creating this portion of the data import. He claims that the variables don't persist between rake runs - assuming that's true, it relieves my concern about them clogging up memory indefinitely.

necrotic posted:

It works because my_hash is a variable scoped to the namespace block, and the subsequent task blocks have access to the parent scope(s).

It's not good practice, but if the tasks are set up properly it's not terrible. In this case each task should depend on the previous tasks by listing them in an array, with :environment first (phone-posting code):

code:
task dick: :environment

task fart: [:environment, :dick]
# invoking fart runs environment -> dick -> fart

I forgot to mention/include that the dependencies are explicit, so that the third task does automatically run the second task first.
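For what it's worth, the extraction A MIRACLE suggested would look roughly like this (a sketch; the class and file names are invented):

code:
# lib/import_helpers.rb
class ImportHelpers
  # Build and return the hash the later tasks need, instead of
  # sharing a namespace-scoped variable between tasks.
  def self.build_hash
    {}
  end
end

# myfile.rake
namespace :imports do
  task second_task: [:environment, :first_task] do
    my_hash = ImportHelpers.build_hash
    # populate and use my_hash
  end
end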

Peristalsis
Apr 5, 2004
Move along.
I'm not clear how RVM sets up default and global gemsets. The documentation here seems pretty clear, but also doesn't coincide with what I'm seeing on my system.

For example, if I read it correctly, the file ~/.rvm/gemsets/global.gems determines what is in the global gemset when I install a new ruby version. If I have additional or different gems specified in ~/.rvm/gemsets/<ruby implementation> or /gemsets/<ruby implementation>/<version number>, then that affects things, too (not sure whether they are appended, or one supersedes the others).

However, I just installed ruby 2.4.1 with RVM, and my ~/.rvm/gemsets/ruby-2.4.1@global/gems directory has a bunch of gem directories that are not represented in ~/.rvm/gemsets/global.gems (which only contains gem-wrappers, rubygems-bundler, rake, and rvm). Further, when I use the new ruby with the global gemset, then execute gem list, it shows everything in global.gems, as well as 2 or 3 other gems (like bigdecimal) that I don't see specified anywhere.

So, how is the initial global gemset determined, and why do I seem to have extra gems installed in it?

Also, I created a new gemset for my new ruby version, then made it the default by executing rvm use 2.4.1@newset --default, as in the instructions here. However, my (default) gemset still exists, and executing rvm use 2.4.1 with no gemset specified still switches me to the (default) gemset, and not the one I tried to make default. Am I misunderstanding something here?

Peristalsis fucked around with this message at 21:51 on Aug 24, 2017

Peristalsis
Apr 5, 2004
Move along.
The CI pipeline for one of our applications seems to be recreating the database every time it runs, by running all migrations. I've pointed out that the rails documentation (and the boilerplate comments in schema.rb) say not to create databases with migrations, but the lead programmer for this product is concerned that schema.rb may not work correctly because it may not be truly database agnostic (dev machines use Sqlite3, production server uses Oracle). Schema.rb looks to me like it's pretty much completely abstracted away from the actual SQL, and I'd have assumed that whatever DSL runs to keep migrations database agnostic also keeps rails db:reset (or its equivalents) agnostic. One former employee even said that at his previous job, they didn't keep the migrations around in source control once they started to get old, because they just aren't very useful after a while.

So:
1) Is CI an exception to the general rule, where migrations should be used for creating the database, or should it also use schema.rb?
2) Do we need to worry that a schema.rb generated on a dev box using Sqlite3 might not work properly under Oracle?
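For reference, the alternative I have in mind for the pipeline is the documented schema load task rather than replaying migrations (assuming a standard Rails 5 setup):

code:
bundle exec rails db:schema:load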

Peristalsis
Apr 5, 2004
Move along.

Sivart13 posted:

Good stuff

Thanks! I admit that isn't exactly what I wanted to hear, but at least maybe I won't make an ass of myself at the next meeting now.

Is it worthwhile to switch my dev db to MySQL? By "worthwhile", I mean: is it less likely that there will be compatibility issues between MySQL and Oracle than between SQLite and Oracle, or is it just exchanging one set of issues for another? I admit I like how easy it is to use SQLite, despite having some reservations about it when I started using it, but it wouldn't be hard to use MySQL on my dev box instead.

Peristalsis
Apr 5, 2004
Move along.
A minor optimization issue came up the other day in a project, and I was surprised how much trouble I had speeding up some work with models. I did some crude profiling, found a couple of sections of a page load that were hammering the database (relatively speaking), and tried a few things to eager load association data to reduce the database hits. I was successful in reducing the number of queries in at least one place, but it increased the total page load time for the page in question. I tried using includes() and eager_load() on an object's associated objects (so, on a CollectionProxy, I think), and at least one time I did manage to collapse a bunch of queries down to one, but it didn't have the desired effect. I also tried `left_outer_joins` at one point, with similarly disappointing results.

I can't spend much more time on this right now (it's just something another developer was concerned about, and just a nice-to-have refactoring issue for now), but I'm curious if there are any obvious or common pitfalls to using these methods.

Possible issues that I would explore if I had more time:
  • The examples I found all used these methods on the model directly instead of on a collection (so, users = User.includes(:profiles) rather than already having a collection, and calling users.includes(:profiles).each do...), but that seems unlikely to be an issue.
  • I didn't have a huge data set, so I guess it's conceivable that the benefit of one large query over many small ones won't show up until I have more data (I think I replaced 50 - 100 small queries with the join, and it made it take longer - could I need to replace thousands of small queries before I see a benefit?)
  • Maybe sqlite3 isn't a great db for seeing the benefit of doing this
  • I might just need to be more careful, and re-write more of the code to use these methods at more appropriate places, though I was surprised that none of my efforts helped at all.

The method being run to populate and display this page is heavy with mathematical computations, and in a pinch we can probably speed things up by tweaking the schema to store some partial results, but I'd like to clean up the SQL before deciding if we need to denormalize the database.

This project uses Rails 5.1.4, and ruby 2.4.0.

Peristalsis fucked around with this message at 17:17 on Nov 15, 2017

Peristalsis
Apr 5, 2004
Move along.

kayakyakr posted:

I'd guess the reasons why you didn't see a speedup are these, in order of likelihood:

1) sqlite3 is not a good database. Set up a local postgres. You can do it in Docker these days.
2) the reason why page load is taking a long time is not database query speed but Ruby speed. Things like GROUP and SUM are cheap and fast in a database and slow and expensive in Ruby; push them into the database where possible. Looking more at your algorithms will yield other improvements.
3) The way you are using the objects is not benefiting from the includes at all.

I would isolate querying and rendering and run metrics on either side of that to identify where the time is truly being spent.

The Milkman posted:

Agreed on all three points. I would try Postgres locally if possible, for many reasons. I wouldn't necessarily expect sqlite to reflect certain db optimizations. If you're on a Mac, Homebrew does a pretty decent job of installing/running it these days, easier than Postgres.app even.


Thanks folks! I have MySQL installed already for another project, maybe I'll switch this project over to it for a quick test. I assume it's a serious enough db to show some results if there should be improvements. I'm working on Ubuntu, but I do have a MacBook for meetings and whatnot that I can use if I do end up installing Postgres.

Edit: Also, regarding item 2) above, I have no doubt that the computations involved in this particular page load are taking up time, but I don't think that would explain why collapsing 50 - 100 db hits down to 1 or 2 would slow the total load time. The profiling I used did actually show that the new query itself took longer than the ones it replaced.

Peristalsis fucked around with this message at 18:35 on Nov 16, 2017

Peristalsis
Apr 5, 2004
Move along.

Doh004 posted:

How are your indexes looking?

Also, please try to replicate the environment used in production on your development machine as much as possible.

I'm not sure about the indexes - I'll have to look into that. Just switching to MySQL didn't do much good, so I'm doing something wrong in my changes.

The data model is roughly this:
Experiments have many Plates
Plates have many Samples
Samples have many DataPoints

One of the places I'm trying to optimize is an Experiment method that returns a subset of its samples, based on sample_type. So, this code
code:
exp = Experiment.find(3)
exp.of_sample_type('reaction')
should loop over each sample in an experiment, and return an array of the ones with sample_type of 'reaction'.

There are a number of different things that could be tried here, but I just wanted to see if I could pre-load some data to start off, and changed
code:
self.plates.map(&:samples).flatten.each do |sample|
...
end
to

code:
self.plates.includes(samples: :data_points).map(&:samples).flatten.each do |sample|
...
end
I also tried eager_load(), left_outer_joins(), and have been fiddling with just executing a fresh query instead of looping over the loaded data. It's also possible I could try to do a fast query, and intersect the results with exp.samples to filter further. But I'd like to avoid the many queries to data_points that occur when iterating over the resulting sample collection.
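The fresh-query version I'm experimenting with looks roughly like this (a sketch; it assumes sample_type is a column on samples and that Sample belongs_to :plate):

code:
# in Experiment - one query instead of filtering loaded objects in Ruby
def of_sample_type(sample_type)
  Sample.joins(:plate)
        .where(plates: { experiment_id: id }, sample_type: sample_type)
        .includes(:data_points)
end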

Peristalsis
Apr 5, 2004
Move along.

necrotic posted:

SQLite is plenty fast. The issue is both populating a tree of objects from the queries and then filtering those objects in ruby. To get performance here you really need to leverage the database for querying what you want.

That map.flatten call had to iterate all the stuff twice before you even filter. So that's at least three full iterations of the full results in ruby.

Thanks, I'll look into restructuring the loop so it only goes over the samples once.

Also, I did check the indexing: samples are indexed on plate_id, and plates are indexed on experiment_id - I'm not an expert, but that seems like the obvious list of things to index.

I suspect there's a larger, structural problem here - something being done brute-force that could be done better, or maybe repeated queries over the same data. It's not really my project, so I'm not putting in the time I would need to really tear apart the existing code, but I'm baffled that pre-loading data didn't at least help some to do a slow thing faster, even if it should be doing something else to begin with.

Peristalsis
Apr 5, 2004
Move along.
I'm a little late to the STI discussion, but is there an accepted better way to model inheritance of Model classes in Rails? The only other way I've seen is to have a separate table for each child model, with references back to the parent model. So, something like this:

code:
class Vehicle < ApplicationRecord
  # id as integer
  # number_of_wheels as integer
end
code:
class Bicycle < Vehicle
  # vehicle_id as integer
  # brake_type as string
  # frame_type
end
code:
class Car < Vehicle
  # vehicle_id as integer
  # num_cylinders_in_engine as integer
  # hybrid as boolean
end
This has the drawback that every access of a vehicle subclass model requires reading two rows from the db, as well as requiring the right wiring to be in the class defs. I've not seen this done in Rails, and I don't even know if the class inheritance will work correctly/automatically in a case like this.

Is STI just considered kind of an ugly hack that's too loose? For example, it seems unfortunate to me that with STI, if classes B and C both inherit from A, B objects can access attributes intended to be used only by C objects (though they'd have to use exact db attribute names, rather than association names that can be specific to subclasses).

Our current product is starting to use a few STI models, and if this is a recipe for future problems, I'd like to know now.
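For comparison, the STI version of that hierarchy is a single table with a type column holding the subclass name (sketch):

code:
# Schema: vehicles[ id, type, number_of_wheels, brake_type,
#                   num_cylinders_in_engine, hybrid ]
class Vehicle < ApplicationRecord
end

class Bicycle < Vehicle
end

class Car < Vehicle
end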

Peristalsis
Apr 5, 2004
Move along.
Thanks for your STI input. It sounds like STI is sort of there because it's easy and often good enough.

Now I have a question about FactoryBot best practices. Using sequences for unique names is fine in RSpec tests, but if you use a factory to generate dev data from the Rails console, the sequences restart every time you restart the console, which means they aren't unique any more. I've been replacing sequence and Faker::Name.unique calls with timestamp-suffixed names to manually unique-ify things as collisions come up. Another dev doesn't feel that FactoryBot should be used in the Rails console this way, and that it's only for setting up and executing tests. I think it's crazy to manually set up data with multiple complex associations by hand every time I want to try something out, when that's exactly what factories do for us. Is he right that it's a best practice not to use factories from the console? We also use factories in our seeds.rb for populating dev data - I'm not sure why that would be okay, but creating more data from the console isn't. (And I just found out that we may have a factory being used in some production data import code, too - I'm pretty sure that isn't a good idea.)
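Here's the kind of thing I've been doing to the factories (a sketch; the factory and attribute names are made up):

code:
FactoryBot.define do
  factory :experiment do
    # Sequence counters restart with every console session, so append a
    # timestamp to keep names unique across sessions.
    sequence(:name) { |n| "Experiment #{n} #{Time.now.to_i}" }
  end
end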

Peristalsis
Apr 5, 2004
Move along.

Slimy Hog posted:

I'm not even sure how you would use a factory in production.

I haven't verified that it's there (another developer told me), but it's for a data import from another system. I assume it's just a way to quickly create model objects from values read in from a text file.

Edit: Yeah, it's used extensively in an import rake task.

Peristalsis fucked around with this message at 20:10 on Mar 7, 2018

Peristalsis
Apr 5, 2004
Move along.

xtal posted:

I think one of us is misunderstanding factories

That's possible. I'm talking about the FactoryBot gem used to set up test data, not the standard factory design pattern. I'm also very tired today, so I apologize if I garbled my descriptions.

Peristalsis
Apr 5, 2004
Move along.

ToadStyle posted:

Polymorphic inheritance maybe?

I'm not sure what you mean. Isn't all inheritance polymorphic by definition? I'm specifically asking about how to model class hierarchies in relational database tables, not how to structure them in code.

Peristalsis
Apr 5, 2004
Move along.
I'm working with CarrierWave to upload files as attachments to experiment objects. For dev, I have a local minio docker container for file storage, and I guess I'm using fog in some way to interact with it - mostly I'm copying similar code out of another project. The files seem to upload fine, but when a file is deleted by the user, I can't seem to scrub it from the minio instance.

Here's my setup:
code:
class Datafile < ApplicationRecord
  ...
  mount_uploader :upload_file, DatafileUploader

  private

    def clean_s3
      upload_file.remove!
    rescue => ex
      puts "Error removing file: #{ex.message}"
      false
    end
end
code:
class DatafilesController < ApplicationController
  ...
  def destroy
    @datafile.remove_upload_file!
    @datafile.save
    @datafile.destroy
    ...
  end
end
I have two approaches to deleting the remote file above, from Removing uploaded files and a related StackOverflow question. The private method doesn't fail, it just doesn't affect the minio file storage, whereas removing the uploaded file in the controller causes this error:

NoMethodError at /datafiles/10023

undefined method `remove_previously_stored_files_after_update' for 0:Fixnum

I've tried each approach individually, and both together, with no luck.

Peristalsis
Apr 5, 2004
Move along.

manero posted:

I'd think you wouldn't need to clean up the attached file; when you call @datafile.destroy, CarrierWave should take care of removing the file from storage for you

This is what seems to be happening for the other project from which I copied some of the setup and code (according to a developer on the project - I haven't verified it myself). It makes me think I just have something configured wrong.

Peristalsis
Apr 5, 2004
Move along.

manero posted:

I'd try removing all the extra stuff, like the entire clean_s3 method, and your destroy action should just be @datafile.destroy; don't bother with the calls to remove_upload_file! and save

The issue ended up being that my migration for the new Datafile object had an integer for the mounted column, instead of a string. I fixed that, and all seems to be well.
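In case anyone hits the same thing, the fix was a one-line migration (the table and column names are from my app; the migration version is assumed):

code:
class ChangeUploadFileToString < ActiveRecord::Migration[5.2]
  def change
    # CarrierWave stores the filename in the mounted column, so it must be a string
    change_column :datafiles, :upload_file, :string
  end
end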

Peristalsis
Apr 5, 2004
Move along.
The team I'm on is at the point where it would be good to get on the same page about what we should be testing, and how we should be testing it. Are there books or other resources you have used that were valuable in adopting good conventions? This can be Rails-specific, or more general. Our main problem right now is that we all have different ideas about our testing goals, and none of us are particularly right. We're all somewhat new to using structured test suites, and would benefit from a standard approach to using unit vs. feature tests, coverage goals, writing for test speed vs. readability, etc.

We could also change our testing tools, if warranted. Right now, we use RSpec with Capybara. The lead developer for the project has used Cucumber in the past, and didn't care for it compared to RSpec, and I also tend to prefer simple, straightforward test tools to ones that force you to be chatty and try to have a conversation with the computer, rather than just executing assertions. That said, we'd have to find some pretty compelling benefits to change to a different system entirely.

Peristalsis
Apr 5, 2004
Move along.
I have a weird problem. I'm uploading spreadsheet and/or CSV files for processing. Some of the CSV files have a Byte Order Mark (BOM) at the beginning of the file and it's screwing everything up. The BOM is prepended to the first header value in the top row, and I'm having trouble figuring out how to deal with this.

I've tried a number of different things, from cleaning the key in each key-value pair to trimming the stream directly in tempfile, but I keep hitting a couple of roadblocks.
1) Using String#start_with? and the like to locate the BOM doesn't work
2) File.open is supposed to allow you to specify a BOM-friendly mode, but I'm using Roo, which doesn't seem to have a way to specify that. I tried using the CSV library for csv files, but it also doesn't seem to be able to digest this file.

Has anyone else dealt with this successfully?
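For reference, the two things I keep circling back to look like this (a sketch; the path is hypothetical):

code:
# Let Ruby consume the BOM while reading the file...
content = File.read(path, mode: 'r:bom|utf-8')

# ...or strip a UTF-8 BOM off a string that was read without that mode
header = header.sub(/\A\uFEFF/, '')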

Peristalsis
Apr 5, 2004
Move along.

Pollyanna posted:

I'm at my wits' end with these stupid fucking tests. Is there any way to debug why Capybara would intermittently take five seconds to find and click on a link in a page? We're often but not always spending like 5600ms on any click_link or click_button call, and I'm having a fuck of a time figuring out why. I don't think it's an async thing and we don't seem to be spending much time in the DB or anything, so I suspect that Capybara itself is having trouble getting what it wants.

Edit: we are doing within blocks, though it doesn’t seem to help much.

Edit 2: you know what? Our test suite being a bloated piece of shit is extremely low priority. I am bringing no value by doing this. Fuck this shit.

I feel your pain. In my experience, capybara is kind of erratic in ways like this, and it can be very difficult to figure out why. It seems odd to me that it always eventually finds your links - I've had more experiences where it just can't find some button or link that's right in front of it, causing a slow failure as it sits through its wait time, not a slow pass.

Maybe someone who knows it better will comment further, but here are a few things to look at:
* Capybara locates things on the page to click on by screen coordinates, and sometimes the screen rendering makes the control move after capybara locates it, but before it tries to click on it. Capybara then tries to click on some empty part of the page (or another control entirely).
* If you have a screen that's slow to load, you can tell capybara to find some late-loading feature before you do other things, as a sort of check that the screen is done rendering before you expect too much (see the sketch after this list).
* It's better to use click_on, click_link, etc. than to activate widgets and explicitly call some callback on it. I've resorted to the latter at times when I just can't figure out what the hell is going on, and always ended up regretting it later.
* The first test that is run seems to take a long time on the application I work on, and I assume that's just because it's loading the app or something while doing it.
* Try not to use before(:all) blocks to save time. It can speed things up, but has always caused me more headaches than it's worth. Just stick with before(:each)/background.
* Keeping each test short and specific can help locate exactly what step is causing the problem, and/or prevent the combination of steps that's causing the problem from being in the same test together. This may make the tests take longer, as each before and after block run more times for lots of short tests than for a few long ones, but keeping them simple makes them more understandable and maintainable.
* Finding the root cause of capybara issues can take a lot longer than randomly changing things around (the order of steps, which tests do what things, different ways of doing the same things, etc.) until something works. I'd always prefer to know what the fuck is actually going on, but with capybara in particular, figuring that out can be a frustrating dead end. And sometimes when you stumble across a fix, you can work out what the problem must have been more easily than by deducing it from first principles.
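To illustrate the late-loading check from the second bullet above (a sketch; the selector is invented):

code:
# Wait for an element that only renders once the page has settled,
# then interact with the control you actually care about.
expect(page).to have_css('#summary-panel', wait: 10)
click_link('Reports')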

Using within blocks is good for preventing ambiguous searches, but I wouldn't expect it to fix full seconds of delay just from shortening the parsing of HTML.

Good luck!

Peristalsis
Apr 5, 2004
Move along.
I hope this is a simple question to answer, but I can't quite find it elsewhere.

I have an index page with some filter params. There's a link on the page to download a csv version of the page, so the controller looks roughly like this:

code:
def index
  # Set up @items variable with filtered/selected Item rows

  respond_to do |format|
    format.html
    format.csv { send_data(Item.csv_export(@items)) }
  end
end
The index page looks sort of like this:

code:
<%= link_to('CSV', items_path(format: 'csv')) %>

# Show table of item objects
The problem is that I need to pass params[:filters] through the link to the csv option, so that it will print what's on the view, but I don't see an obvious way to do it.

Changing the link like so, <%= link_to('CSV', items_path(format: 'csv', filters: params[:filters])) %>, leads to "unable to convert unpermitted parameters to hash".
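The workaround I'm going to try is permitting the filter keys before handing them to the path helper (a sketch; the filter keys are made up):

code:
<%= link_to('CSV', items_path(format: 'csv',
      filters: params[:filters]&.permit(:status, :category)&.to_h)) %>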

Peristalsis
Apr 5, 2004
Move along.

Aquarium of Lies posted:

My company has a large Ruby on Rails app that's 6+ years old, but thankfully is relatively up-to-date version wise (currently running on Rails 5). During the initial production, we chose to use JRuby + puma for performance reasons. This works fine in production, but one of the biggest issues our devs have with the codebase is how slow development is. Between JRuby (especially startup time) and poorly written specs, it takes 35-45 minutes to run all our specs. This makes it really annoying to wait for our CI to go green even for very small changes.

Despite having no prior experience with Ruby or Rails, I'm working on digging into options to speed up spec time. One of the easiest wins is to switch from JRuby to Ruby MRI. Currently very little of our code actually requires JRuby, and the parts that do can easily be updated not to. The benefits seem to be a better-supported Ruby implementation, faster turnaround on bug fixes and new features, and an improved dev process. Is there a reason in 2019 to continue to stick with JRuby?

Obviously if I go forward with this, I'd do a gradual rollout (we have our app distributed on multiple servers and scaling up/down is practically trivial) to get performance metrics of our actual app/loads for comparison. I'm just looking for any high-level opinions between the two to decide whether this is worth pursuing further.

I don't know anything about JRuby, but if you're not already doing it, this gem can parallelize your tests. It's not especially smart about how it divides up the tests between threads/processes, but it does shorten our test runs by quite a bit. I believe Rails or RSpec is supposed to be parallelized by default in a not-too-distant release, too.

Peristalsis
Apr 5, 2004
Move along.
I have a Capybara test that passes locally, but fails intermittently (almost always at the moment) on our CI.

The test clicks on a link to open a menu, then clicks on a link on the menu. That menu link isn't visible, and I don't know why. Does anyone have any suggestions of something I could try?

Here's roughly how the test works:
code:
scenario 'test link' do
  login_as(admin_user)
  visit('/')
  click_link('Admin')
  expect(page).to have_link('Versions', wait: 10)  # Delay to let screen resolve - Test fails here on CI
  click_link('Versions')
  expect(page).to have_current_path(versions_path)
end
The 'Versions' link isn't found on CI. I don't think I have access to the CI server, certainly I can't log in and use a debugger to explore what's going on. Maybe I could save off a screen shot, but I'm not sure if that would work or how I would retrieve it.

I can comment out/delete this test - it's just verifying that a menu link works - but I'd like to fix it if I can.
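One thing I may try: Capybara can dump a screenshot mid-test, and the CI could keep it as a build artifact (a sketch; the path is arbitrary):

code:
# Capture the page state just before the assertion that fails on CI.
page.save_screenshot('tmp/capybara/admin_menu.png')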

Peristalsis
Apr 5, 2004
Move along.
I must be doing something very stupid here, but I can't get a method to take the right number of parameters.

I'm trying to do this:
code:
<%= bootstrap_form_tag url: new_route_for_this_ticket, layout: :horizontal do |f| %>
  <div class='container-fluid'>
    <div class='row'>
      <div class='col-sm-6'>
        <%= f.number_field(:num_new_things, 1, step: 1, min: 1) %>    # <----- The problem line
      </div>
    </div>
  </div>
<% end %>
According to the rails documentation, the number_field method of the form object takes the same parameters as the number_field_tag helper method:

rubydoc posted:

#number_field_tag(name, value = nil, options = {}) ⇒ Object

However, when I try to pass in name, value, and an options hash, I get:

code:
wrong number of arguments (given 3, expected 1..2)
Am I having a mental break, or is the documentation just wrong here? It works fine if I omit the value parameter, but then I can't set a default, which would be nice to do. Explicitly putting curly braces around the options hash doesn't make a difference.

I'm using Rails version 5.2.4.3.


Edit:
Okay, technically the rails docs don't say the helper method takes the same params as the number_field_tag helper method; they say it takes the same options. The method signature for the form helper method is this:

quote:

number_field(object_name, method, options = {}) public

That still looks like three params to me, and I have no idea what the "method" param is supposed to be.

Could there be a difference between the form objects returned by form_tag and form_for?

Peristalsis fucked around with this message at 21:07 on Aug 28, 2020

Peristalsis
Apr 5, 2004
Move along.

Tea Bone posted:

The bootstrap form gem redefines most of the helpers, and they don't always work 1:1. I've come up against this before. If you can, look into the source code for the gem, but off the top of my head I think you might need to pass the value as a named parameter. Try value: 123 or default: 123

Thanks for your response. Neither value nor default worked, but I think I'm going to just omit the default value for now, so I can move on and get some work done. If I get some time, I might look into the underlying code, but I'm not sure I'm willing to do that. I've always had trouble with the helper methods (even without bootstrap, if I recall correctly), and I'm not sure I want to take on any additional frustration or delays right now.

Peristalsis
Apr 5, 2004
Move along.

Tea Bone posted:

No worries, but that's strange
code:
<%= f.number_field(:num_new_things, value:1, step: 1, min: 1) %>
is working fine for me? Any arbitrary options should just get passed through as HTML attributes. Unless you have another gem or a monkey patch interfering with the :value option, it should work.
If you throw another gibberish param in there (say foo: 'bar') and then inspect the HTML of the rendered element, does foo show up as an attribute?

i.e:
code:
<%= f.number_field(:num_new_things, step: 1, min: 1, foo:'bar') %>
should render as
code:
<input  step="1" min="1" foo="bar" name="num_new_things" id="num_new_things" class="form-control" type="number">

Well, now it works (and foo="bar" is in there, too). I looked at another place in our codebase that used the value parameter, and which also wasn't populating the UI with it, and that seems to be working now, too. I think I'm going to call it a week, see if it's still bothering me on Monday. Thanks for your help!

Peristalsis
Apr 5, 2004
Move along.

enki42 posted:

I don't think that's bootstrap, that's standard for all the _tag vs form instance methods - the value is by default derived from the form's object and so you never pass it in (although you can override it with a value option).

Are you talking about the number of parameters it accepts/requires here? It still seems bad to me to document the signature as taking 3 params, but only accepting 2. Or am I looking at the wrong documentation?

enki42 posted:

Think of it this way - if that method had 3 arguments, you'd have to unnecessarily pass the value every time, when 95% of the time you just want to use the current value of whatever your model is.

I'm not saying that it should or shouldn't take 3 arguments, just that the documentation should match the behavior

enki42 posted:

To be honest, I'd consider reworking your form a bit anyway so that the controller initializes a model (without saving it), and build the form off that. That way you can specify your default values in your controller (or even your model or your DB if it's a universal default), which is going to be less buried when you come back to it, can be tested if you needed to for some reason, and you can share the same form for editing and creating if you need to do both.

This is a form that is collecting metadata to use to create multiple objects. It takes in a base name and the number of objects desired, and creates them. So, you pass in "My Object", and 3, and when you click submit, the system creates 3 objects, named My Object 1, My Object 2, and My Object 3. So, there is no single model to initialize in the controller before calling the form. I'm certainly open to better ideas, but using form_tag seemed the easiest way to make a form to collect data that isn't directly related to a single model.

Peristalsis
Apr 5, 2004
Move along.

necrotic posted:

The docs you linked are for the bare number_field_tag, not the form helper version. The form helper version automatically defaults to calling (essentially) model.__send__(:field_name) to get the default value (the second param in the one you linked).

Sorry, I updated my original post with this link.

That still looks like 3 params to me, and there's no explanation of any of them, except to say that the options are the same as number_field_tag's options.

Peristalsis
Apr 5, 2004
Move along.

Jaded Burnout posted:

Short version: you're looking at the documentation for a similar method with the same name but on a different class.

Thanks so much. This has been a long source of frustration for me, and I probably just never realized I was looking at the documentation for the wrong method(s).

Peristalsis
Apr 5, 2004
Move along.
I've moved a couple of partials to be rendered asynchronously with the render_async gem. They were slow loading tabs on a show page, and moving them broke tests, because (I think) the tests don't know to wait for the asynchronous tabs to finish loading. I'd like a way to continue testing these tabs. I found that RSpec/capybara is supposed to be able to render partials, but when I try that in feature tests, render is not a recognized method name. This app doesn't have dedicated controller tests, which is where I assume this is actually supposed to go, and I don't know if those allow standard UI expectations anyway. Does anyone have any suggestions for a good way to test this?

Peristalsis
Apr 5, 2004
Move along.

A MIRACLE posted:

post your test? is it a controller type test?

It's a very convoluted feature test - I'm replacing one of the helper methods it uses with a new method just to render and check the partial:

code:
def check_for_controls_on_publication_partial(can_edit, can_delete, can_create)
  render partial: 'myDir/myPartial', locals: {...}
end
This raises undefined method `render'.

I also tried moving the render command into the test file itself (instead of a helper), and got the same error there. Like I said, I assume render only works in controller tests, but there are no dedicated controller tests in this app. I can add one to see if it works, but I wondered if there's another best practice way to approach this, preferably keeping it in a Capybara feature test. With all the JavaScript and asynchronous stuff being done these days, others must have had (and solved, I hope) similar problems.


Update: I found a way to make it work - thanks for your input.

Peristalsis fucked around with this message at 02:22 on Jan 13, 2021

Peristalsis
Apr 5, 2004
Move along.
I added render_async to a multi-tabbed page, to let a couple of tabs load in the background while the rest of the page is viewable. This worked okay, except for one thing - there's a collapsible panel at the bottom of each tab that loads in a collapsed state, and won't open when clicked on in the new branch.

I'm not very good with JavaScript (and I've never looked at coffeescript at all), but I've been able to determine that the coffeescript method used to open and close the panel isn't firing any more for the tabs I changed. It still works okay on tabs I didn't modify. I'm assuming there's some issue with the script not hooking up with the haml code properly due to the asynchronous loading, but I really have no idea what to do from here. There are no JS errors showing in the browser when I click on the non-responsive panel. If anyone has any suggestions or even useful background info, I'd be grateful.

Peristalsis
Apr 5, 2004
Move along.

A MIRACLE posted:

This is the kind of thing I would hop on a screen share call for. But it’s basically you’re at the point that you need to learn the chrome or Firefox debuggers and how to set breakpoints in your front end

Dm me tomorrow if you’re still having issues I can try to help


enki42 posted:

Also, I know this doesn't directly solve your problem, but we recently subbed out render_async for Turbo, and it's a way smoother experience IMO (it also natively supports things like not loading a tabs content until it's visible in the DOM, so you can take a lot of that javascript work off your plate).

But even in the render_async world, you need to make sure that whatever page initialization code you have listens for an event indicating that the page has loaded, and then runs. If you're using render_async with turbolinks, it will fire a "turbolinks:load" event on the document. In our codebase, we set up a list of events to be run on pageload, and then trigger them with this (we're using turbo instead of turbolinks, but this all works the same with a different event name):

code:
document.addEventListener('turbo:load', () => {
  events.pageload.forEach(event => event.call());
});
(and then any page can register an event to be run on pageload)


Thanks! As I was showering this morning, it occurred to me that the problem might be that the JS that isn't firing is in a document ready or document loaded event handler, and since the page is already loaded when the async call is made, it isn't getting attached to the new parts of the page correctly. I don't think we're using turbo* - will something like this still work?

Edit: The Using Default Events section of the documentation looks promising. When all else fails, read the rest of the instructions, eh?

Peristalsis fucked around with this message at 16:11 on Feb 19, 2021

Peristalsis
Apr 5, 2004
Move along.
Never mind, I'm an idiot.

Peristalsis fucked around with this message at 21:00 on Apr 19, 2021

Peristalsis
Apr 5, 2004
Move along.
I'm using the new delegated types feature in rails, and it seems interesting and functional, but I'm thinking about how we basically have two model objects for a single conceptual object now, and what to do about it. I was wondering if anyone else had any thoughts.

To use the example from the documentation:

code:
# Schema: entries[ id, account_id, creator_id, created_at, updated_at, entryable_type, entryable_id ]
class Entry < ApplicationRecord
  belongs_to :account
  belongs_to :creator
  delegated_type :entryable, types: %w[ Message Comment ]
end

module Entryable
  extend ActiveSupport::Concern

  included do
    has_one :entry, as: :entryable, touch: true
  end
end

# Schema: messages[ id, subject, body ]
class Message < ApplicationRecord
  include Entryable
end

# Schema: comments[ id, content ]
class Comment < ApplicationRecord
  include Entryable
end
The benefit is that you can construct a list of entries, and treat them all (more or less) the same. You can even display comments and messages in the same list if you have a partial for each, by using the entry.entryable_name method, something like this:

code:
<%= render "entries/entryables/#{entry.entryable_name}", entry: entry %>
The stated use case is making display and pagination of multiple, related types easier, and so far so good.

What happens when you want to retrieve a bunch of entries for something else, though? You get a collection of entries, each with an attached entryable object. You can access its associated entryable object to call methods and retrieve attributes from it, but you need to remember that the indirection is necessary. I've read some suggestions to delegate appropriate method calls on the Entry model to its entryable object, at least for methods defined in every delegated class, but I guess I'm a little surprised that something like that isn't baked into the mechanism.
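The delegation suggestion would look something like this (a sketch; it assumes :title and :summary are methods every entryable implements):

code:
class Entry < ApplicationRecord
  belongs_to :account
  belongs_to :creator
  delegated_type :entryable, types: %w[ Message Comment ]

  # Forward shared calls to the underlying Message or Comment.
  delegate :title, :summary, to: :entryable
end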

And what about when you just retrieve a bunch of messages directly, without any comments, and without going through the Entry model? You have a collection of messages, each with its associated entry object. But much, if not most, of the data for that message is actually on its entry object. So as you process your messages, you have an indirection issue to remember here, too. You could have the message model delegate the entry-related calls back to the entry, but it seems to me like an anti-pattern to have two classes where each delegates to the other*.

Maybe the answer is to access messages and comments strictly through Entry - Entry.messages.where(...) and Entry.comments.where(...) instead of Message.where(...) and Comment.where(...) - and always remember that you have an entry object, rather than the associated delegated class. That probably works fine, but it seems unsatisfying to me. I guess my problem is that with all of this boilerplate, I should be able to treat the entry-entryable pair as a single logical object in the code. Otherwise, I'm not really gaining a lot over hand-rolling a solution.

Another issue that has come up is with has_many, through relationships with the delegated classes.
code:
# user.rb

has_many :entries_users, class_name: 'EntriesUser'
has_many :entries, through: :entries_users
has_many :messages, through: :entries
has_many :comments, through: :entries
Some version of this works for retrieving comments or messages through their entries, but it requires a has_many, through built on top of another has_many, through. So, while I can access user.comments, rails understandably scoffs at user.comments << Comment.create(...).

I could add a many-to-many relationship directly between user and messages (and comments), and maybe that's the right answer, but then I either have to restructure some code that already uses the user-entry link (since I'm adding this to an existing project), or I have duplicate connections between each user and its messages (and comments), and have increased the potential for a data mismatch.

I don't know, I guess I just feel like this feature isn't quite ready for prime time, and that it should offer a little more convenience, or at least more documentation setting best practices for how to think of the resulting pairs of objects (i.e. do we now approach this as a bunch of entries with some delegated objects attached, or a bunch of messages and comments, each with some entry details attached?)


* Instead of delegation, I could use a method_missing method on one class telling it to go look at the other class before raising an exception, and that should be pretty equivalent.

Peristalsis
Apr 5, 2004
Move along.
Thanks for your feedback. It's nice to have some confirmation at least that I'm not missing something easy and obvious.

Gmaz posted:

If I looked at it from the second approach then I would probably keep the models separated and create custom objects when I need to combine data.

Could you expand on what you mean by this, exactly? Do you mean you wouldn't use the delegated types construct at all, and just use regular composition, or you'd create another, separate class for handling merged entries and entryables?
