KoRMaK
Jul 31, 2012



Vagrant! Vagrant was the word I was looking for.



kayakyakr posted:

Sounds like you either have someone dedicated to full devops, or you are losing money on it.

Mind, my business is in early stage startups, so I'm a big fan of path of least resistance.

It's one person: he's the co-owner of the company. He spent probably a week on it up front, and spends maybe a couple hours a week on it moving forward, less as we get further away from the initial deployment phase. He isn't dedicated to it in the sense that it's his only job; he's a consultant and PM most of the time.


So, you're close, sort of. He is our devops guy.


e: I got the azure ruby 233 docker image running on azure! Now I'm trying to figure out what's different between mine and theirs that's preventing SSH on mine. I think it's the fact that the container is exiting immediately, so I gotta put something in there to keep it running until I get the rest going.


Peristalsis
Apr 5, 2004
Move along.
I'm working with CarrierWave to upload files as attachments to experiment objects. For dev, I have a local minio docker container for file storage, and I guess I'm using fog in some way to interact with it - mostly I'm copying similar code out of another project. The files seem to upload fine, but when a file is deleted by the user, I can't seem to scrub it from the minio instance.

Here's my setup:
code:
class Datafile < ApplicationRecord
  ...
  mount_uploader :upload_file, DatafileUploader

  private

    def clean_s3
      upload_file.remove!
    rescue => ex
      puts "Error removing file: #{ex.message}"
      false
    end
end
code:
class DatafilesController < ApplicationController
  ...
  def destroy
    @datafile.remove_upload_file!
    @datafile.save
    @datafile.destroy
    ...
  end
end
I have two approaches to deleting the remote file above, from Removing uploaded files and a related StackOverflow question. The private method doesn't fail, it just doesn't affect the minio file storage, whereas removing the uploaded file in the controller causes this error:

NoMethodError at /datafiles/10023

undefined method `remove_previously_stored_files_after_update' for 0:Fixnum

I've tried each approach individually, and both together, with no luck.

xtal
Jan 9, 2011

by Fluffdaddy
I'm wrong

xtal fucked around with this message at 01:01 on Aug 15, 2018

manero
Jan 30, 2006

I'd think you wouldn't need to clean up the attached file yourself: when you call @datafile.destroy, CarrierWave should take care of removing the file from storage for you

Peristalsis
Apr 5, 2004
Move along.

manero posted:

I'd think you wouldn't need to clean up the attached file yourself: when you call @datafile.destroy, CarrierWave should take care of removing the file from storage for you

This is what seems to be happening for the other project from which I copied some of the setup and code (according to a developer on the project - I haven't verified it myself). It makes me think I just have something configured wrong.

manero
Jan 30, 2006

Peristalsis posted:

This is what seems to be happening for the other project from which I copied some of the setup and code (according to a developer on the project - I haven't verified it myself). It makes me think I just have something configured wrong.

I'd try removing all that stuff, including the entire clean_s3 method; your destroy action should just be @datafile.destroy. Don't bother with the calls to remove_upload_file! and save

Peristalsis
Apr 5, 2004
Move along.

manero posted:

I'd try removing all that stuff, including the entire clean_s3 method; your destroy action should just be @datafile.destroy. Don't bother with the calls to remove_upload_file! and save

The issue ended up being that my migration for the new Datafile object had an integer for the mounted column, instead of a string. I fixed that, and all seems to be well.
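For anyone hitting the same `0:Fixnum` error: a sketch of the corrective migration, assuming the table is datafiles and the mounted column is upload_file as in the code above (class and version number are illustrative):

```ruby
# Hypothetical migration: CarrierWave's mount_uploader stores the filename in
# the mounted column, so it must be a string, not an integer.
class ChangeUploadFileTypeOnDatafiles < ActiveRecord::Migration[5.2]
  def up
    change_column :datafiles, :upload_file, :string
  end

  def down
    change_column :datafiles, :upload_file, :integer
  end
end
```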

Peristalsis
Apr 5, 2004
Move along.
The team I'm on is at the point where it would be good to get on the same page about what we should be testing, and how we should be testing it. Are there books or other resources you have used that were valuable in adopting good conventions? This can be Rails-specific, or more general. Our main problem right now is that we all have different ideas about our testing goals, and none of us are particularly right. We're all somewhat new to using structured test suites, and would benefit from a standard approach to using unit vs. feature tests, coverage goals, writing for test speed vs. readability, etc.

We could also change our testing tools, if warranted. Right now, we use RSpec with Capybara. The lead developer for the project has used Cucumber in the past, and didn't care for it compared to RSpec, and I also tend to prefer simple, straightforward test tools to ones that force you to be chatty and try to have a conversation with the computer, rather than just executing assertions. That said, we'd have to find some pretty compelling benefits to change to a different system entirely.

manero
Jan 30, 2006

Peristalsis posted:

The team I'm on is at the point where it would be good to get on the same page about what we should be testing, and how we should be testing it. Are there books or other resources you have used that were valuable in adopting good conventions? This can be Rails-specific, or more general. Our main problem right now is that we all have different ideas about our testing goals, and none of us are particularly right. We're all somewhat new to using structured test suites, and would benefit from a standard approach to using unit vs. feature tests, coverage goals, writing for test speed vs. readability, etc.

We could also change our testing tools, if warranted. Right now, we use RSpec with Capybara. The lead developer for the project has used Cucumber in the past, and didn't care for it compared to RSpec, and I also tend to prefer simple, straightforward test tools to ones that force you to be chatty and try to have a conversation with the computer, rather than just executing assertions. That said, we'd have to find some pretty compelling benefits to change to a different system entirely.

I feel like Cucumber is way overkill, and tended to slow me down. I also never had "business people" writing specs on how the system should behave, so writing stuff in Cucumber was just an extra layer I didn't need.

I've heard good stuff about Everyday Rails Testing, but haven't read it yet; it might be worth checking out:

https://leanpub.com/everydayrailsrspec

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
My boy Noel Rappin's Test Prescriptions books are my go-to.

https://pragprog.com/book/nrtest3/rails-5-test-prescriptions

I also recommend his Take My Money book if you deal with any sort of payment processing.

xtal
Jan 9, 2011

by Fluffdaddy
Just use minitest like our lord dhh gave you. Rails is omakase, and all

fantastic in plastic
Jun 15, 2007

The Socialist Workers Party's newspaper proved to be a tough sell to downtown businessmen.

xtal posted:

Just use minitest like our lord dhh gave you. Rails is omakase, and all

I think my company's CTO unironically holds this position.

Pollyanna
Mar 5, 2005

Milk's on them.


There’s something rather common that I’ve seen in Rails applications, and that’s an attempt at modeling tabular data with ActiveRecord. i.e. you have a Table model, where each Table has many TableRows and TableColumns, each in a specific non-primary key order. Both my previous and current job have attempted to display tabular data with customizable columns e.g. custom order, custom column naming, static columns, and inclusion/exclusion.

I’m wondering if a straight-up Excel-like implementation ala Rows and Columns is even a good fit for ActiveRecord. What have others done to implement tabular data in ActiveRecord (and therefore SQL) with arbitrary schema?

Edit: another interesting twist on this is that we will actually be pulling the tabular data from another source that provides the data as a CSV and determines the schema of the original tabular data, and our role will be to sort, rename, include/exclude, and add to this schema’s columns. So now we will have to keep up to date with them.

Pollyanna fucked around with this message at 14:56 on Oct 19, 2018

manero
Jan 30, 2006

milk moosie posted:

There’s something rather common that I’ve seen in Rails applications, and that’s an attempt at modeling tabular data with ActiveRecord. i.e. you have a Table model, where each Table has many TableRows and TableColumns, each in a specific non-primary key order. Both my previous and current job have attempted to display tabular data with customizable columns e.g. custom order, custom column naming, static columns, and inclusion/exclusion.

I’m wondering if a straight-up Excel-like implementation ala Rows and Columns is even a good fit for ActiveRecord. What have others done to implement tabular data in ActiveRecord (and therefore SQL) with arbitrary schema?

Edit: another interesting twist on this is that we will actually be pulling the tabular data from another source that provides the data as a CSV and determines the schema of the original tabular data, and our role will be to sort, rename, include/exclude, and add to this schema’s columns. So now we will have to keep up to date with them.

I haven't run into this problem, but it always seems weird when you're trying to model something that already exists, e.g. database tables.

It seems like maybe AR is holding you back, I could see something where you dynamically create database tables and columns, but that might be a recipe for madness. Maybe a NoSQL solution is a better fit?

kayakyakr
Feb 16, 2004

Kayak is true

manero posted:

I haven't run into this problem, but it always seems weird when you're trying to model something that already exists, e.g. database tables.

It seems like maybe AR is holding you back, I could see something where you dynamically create database tables and columns, but that might be a recipe for madness. Maybe a NoSQL solution is a better fit?

Or a JSONB field in postgres to hold each row's data.

But yeah, AR & postgres isn't the best way to describe a totally free-form sort of table.

Pollyanna
Mar 5, 2005

Milk's on them.


What gives me pause is that there’s no reason you can’t model it by making a Table with many TableColumns, each with default_name, custom_name, static_value, sort_order, and enabled? as attributes. It would technically work. I agree, I think it should just be as simple as a jsonb field on a TableSchema, but I can’t really explain why we would go with that instead of the basic relational model.

To be more explicit about this, we are doing the following:

- Downloading either a CSV or a set of JSON objects representing tabular data
- Removing/deselecting columns
- Renaming columns
- Adding columns with static values (don’t ask)
- Reordering the columns
- Uploading the result to S3

and not actually persisting the data to our database. So it pretty much boils down to allowing our customers to specify a pre-processor for our tabular data.

I also just don’t want to push back with an implementation of my own because it seems like being too difficult maybe? Or being too risky instead of just going with the prescribed model? Idk if that’s reasonable, that’s why I wanna justify it.

I guess it’s more of a matter of how I communicate a better solution to the rest of the team. I sketched out the processing steps and it’d be a lot easier and straightforward using a model that just has a few arrays and jsonbs for renames, ordering, static columns, and a whitelist. I figured that out by starting with the data that we want to process itself and going from there, instead of assuming ActiveRecord from the start. It’s basically the data we’d be building by querying Postgres, just without having to go through the database as much. We do enough of that already. I don’t know how to tell them this without getting “yeah but why not just do it our way” in response, I need a compelling reason. I mean, I’m convinced, but I’m the one that put it together.
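Concretely, the pre-processing steps above can be sketched in plain Ruby with the spec held as data, the way it could sit in a few jsonb/array columns. All names below are invented for illustration, not from an actual schema:

```ruby
# The "pre-processor" as data: whitelist, renames, static columns, and order.
SPEC = {
  "whitelist" => %w[name email],
  "renames"   => { "name" => "Full Name" },
  "statics"   => { "Source" => "import" },
  "order"     => ["Full Name", "Source", "email"],
}.freeze

# Apply the spec to rows (arrays of hashes, e.g. parsed CSV) without ever
# persisting the data itself.
def transform(rows, spec)
  rows.map do |row|
    kept    = row.slice(*spec["whitelist"])                       # drop columns
    renamed = kept.transform_keys { |k| spec["renames"].fetch(k, k) }
    merged  = renamed.merge(spec["statics"])                      # add statics
    spec["order"].to_h { |col| [col, merged[col]] }               # reorder
  end
end

transform([{ "name" => "Ada", "email" => "ada@example.com", "secret" => "x" }], SPEC)
# => [{"Full Name"=>"Ada", "Source"=>"import", "email"=>"ada@example.com"}]
```

The point of the sketch: the relational Table/TableColumn model and the jsonb spec carry the same information, but the spec-as-data version maps one-to-one onto the processing steps.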

Pollyanna fucked around with this message at 20:16 on Oct 19, 2018

manero
Jan 30, 2006

milk moosie posted:

What gives me pause is that there’s no reason you can’t model it by making a Table with many TableColumns, each with default_name, custom_name, static_value, sort_order, and enabled? as attributes. It would technically work. I agree, I think it should just be as simple as a jsonb field on a TableSchema, but I can’t really explain why we would go with that instead of the basic relational model.

To be more explicit about this, we are doing the following:

- Downloading either a CSV or a set of JSON objects representing tabular data
- Removing/deselecting columns
- Renaming columns
- Adding columns with static values (don’t ask)
- Reordering the columns
- Uploading the result to S3

and not actually persisting the data to our database. So it pretty much boils down to allowing our customers to specify a pre-processor for our tabular data.

I also just don’t want to push back with an implementation of my own because it seems like being too difficult maybe? Or being too risky instead of just going with the prescribed model? Idk if that’s reasonable, that’s why I wanna justify it.

I guess it’s more of a matter of how I communicate a better solution to the rest of the team. I sketched out the processing steps and it’d be a lot easier and straightforward using a model that just has a few arrays and jsonbs for renames, ordering, static columns, and a whitelist. I figured that out by starting with the data that we want to process itself and going from there, instead of assuming ActiveRecord from the start. It’s basically the data we’d be building by querying Postgres, just without having to go through the database as much. We do enough of that already. I don’t know how to tell them this without getting “yeah but why not just do it our way” in response, I need a compelling reason. I mean, I’m convinced, but I’m the one that put it together.

Hmm, so it kinda boils down to a list of ETL steps. Perhaps instead of storing the actual data, store the transformation steps?

Or maybe check out Kiba for some inspiration: https://github.com/thbar/kiba

xtal
Jan 9, 2011

by Fluffdaddy

milk moosie posted:

There’s something rather common that I’ve seen in Rails applications, and that’s an attempt at modeling tabular data with ActiveRecord. i.e. you have a Table model, where each Table has many TableRows and TableColumns, each in a specific non-primary key order. Both my previous and current job have attempted to display tabular data with customizable columns e.g. custom order, custom column naming, static columns, and inclusion/exclusion.

I’m wondering if a straight-up Excel-like implementation ala Rows and Columns is even a good fit for ActiveRecord. What have others done to implement tabular data in ActiveRecord (and therefore SQL) with arbitrary schema?

Edit: another interesting twist on this is that we will actually be pulling the tabular data from another source that provides the data as a CSV and determines the schema of the original tabular data, and our role will be to sort, rename, include/exclude, and add to this schema’s columns. So now we will have to keep up to date with them.

You already have tabular data and it's called the RDBMS. Re-implementing it in AR is notoriously difficult because it's an anti-pattern.

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
Right now we have two databases. We need to get data (about 300k rows) from a secondary database, use 3 different primary keys to the primary database to retrieve information, and then write the data to a file.

The current system (all with active record) is this:

Establish connection to the secondary database
Make a request for the 300k rows
Reconnect to primary database
Map the rows using the primary keys, then inject a hash to hold the data from those rows
Then create an array collating the data.

It's slow as gently caress and I'm sure it's far from the standard way to go about it. What's the correct way to get massive amounts of data connected between databases?

manero
Jan 30, 2006

MasterSlowPoke posted:

Right now we have two databases. We need to get data (about 300k rows) from a secondary database, use 3 different primary keys to the primary database to retrieve information, and then write the data to a file.

The current system (all with active record) is this:

Establish connection to the secondary database
Make a request for the 300k rows
Reconnect to primary database
Map the rows using the primary keys, then inject a hash to hold the data from those rows
Then create an array collating the data.

It's slow as gently caress and I'm sure it's far from the standard way to go about it. What's the correct way to get massive amounts of data connected between databases?

AR is slow.... Maybe look into the Sequel gem, which I've used to pretty good effect for syncing records between databases. You wouldn't even need to create models, you can just write the queries by hand and deal with wiring stuff up manually if it's not too bad.

Kiba Pro also has a SQL source, so you could use that as an intermediary if you don't want to hook it all up yourself.

kayakyakr
Feb 16, 2004

Kayak is true
Why would you disconnect from your primary database? Don't use active record against the secondary database, just make raw SQL calls. Why are the databases separate? Is rails the right system to use? Should this sort of reporting be done in a background task?

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
It's using ActiveRecord::Base.establish_connection to swap databases. The really slow part is pulling the associated data from the primary database. It's just a huge find query reduced to a hash: Addresses.where(id: query.map(&:address_id)).pluck(:fields).inject({}) {...}

It's typically done in the job queue.

necrotic
Aug 2, 2005
I owe my brother big time for this!
Use query.pluck(:address_id) instead of the map call.

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
After switching back to the database with the Addresses table I can't pluck anymore right? That does another select query and I no longer have access to that table.

necrotic
Aug 2, 2005
I owe my brother big time for this!
Can you do the pluck before switching? I assumed it was still a query object and not the result based on the name :-/

edit: if these are different models I'm pretty sure you can change the connection on just the relevant model. I worked on a system a few years back that had different databases for a couple different models always.

necrotic fucked around with this message at 03:51 on Oct 31, 2018

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
I'd like to set up the model to use the database, but unfortunately, due to the way they set up their databases (there are multiple versions of this database, one per year, for some reason I'm not sure of), I can't set that up in a non-hacky way.

Did some research and decided on doing something kind of like pluck_in_batches:

Ruby code:
    addresses = {}
    num_batches = (address_ids.length / 1000.0).ceil
    0.upto(num_batches - 1) do |iterator|
      Address.where(id: address_ids[iterator * 1000, 1000])
             .pluck(:id, :country, :city, :postal)
             .each do |address|
               addresses[address[0]] = address[1..-1]
             end
    end
I don't think there's a built in batch like .find_in_batches or .find_each that doesn't create ActiveRecord objects, is there?

xtal
Jan 9, 2011

by Fluffdaddy

MasterSlowPoke posted:

I'd like to set up the model to use the database, but unfortunately, due to the way they set up their databases (there are multiple versions of this database, one per year, for some reason I'm not sure of), I can't set that up in a non-hacky way.

Did some research and decided on doing something kind of like pluck_in_batches:

Ruby code:
    addresses = {}
    num_batches = (address_ids.length / 1000.0).ceil
    0.upto(num_batches - 1) do |iterator|
      Address.where(id: address_ids[iterator * 1000, 1000])
             .pluck(:id, :country, :city, :postal)
             .each do |address|
               addresses[address[0]] = address[1..-1]
             end
    end
I don't think there's a built in batch like .find_in_batches or .find_each that doesn't create ActiveRecord objects, is there?

Ruby code:
Address
  .in_batches
  .flat_map { |group| group.pluck(:id, :country, :city, :postal) }
  .each_with_object({}) { |(key, *value), hash| hash[key] = value }
Something like this?

xtal
Jan 9, 2011

by Fluffdaddy
Using multiple DBs with ActiveRecord is always going to be painful (right now, at least.) I would try to do as much of this with SQL as possible. This will let you avoid AR's shortcomings with regard to your use case, and also give you better performance.
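As a sketch of the "push it down to SQL" route (table and column names are borrowed from the snippets above, so treat them as assumptions):

```ruby
# Run the lookup as one raw query instead of materializing AR objects.
# select_rows returns plain arrays, which is all this export job needs.
# Assumes address_ids is a non-empty array of integers.
ids  = address_ids.map(&:to_i).join(", ") # to_i guards against SQL injection here
rows = ActiveRecord::Base.connection.select_rows(<<~SQL)
  SELECT id, country, city, postal
  FROM addresses
  WHERE id IN (#{ids})
SQL

addresses = rows.each_with_object({}) { |(id, *rest), h| h[id] = rest }
```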

8ender
Sep 24, 2003

clown is watching you sleep
I'm not sure what your budget / resources are like for infrastructure, but this is starting to feel like a job for Elasticsearch. A small service running against the secondary DB can regularly serialize the needed data into ES. Your Rails app can then very quickly grab the array of keys it needs from ES and start querying its own DB.

Peristalsis
Apr 5, 2004
Move along.
I have a weird problem. I'm uploading spreadsheet and/or CSV files for processing. Some of the CSV files have a Byte Order Mark (BOM) at the beginning of the file and it's screwing everything up. The BOM is prepended to the first header value in the top row, and I'm having trouble figuring out how to deal with this.

I've tried a number of different things, from cleaning the key in each key-value pair to trimming the stream directly in tempfile, but I keep hitting a couple of roadblocks.
1) Using String#start_with? and the like to locate the BOM doesn't work
2) File.open is supposed to allow you to specify a BOM-friendly mode, but I'm using Roo, which doesn't seem to have a way to specify that. I tried using the CSV library for the CSV files, but it also doesn't seem to be able to digest this file.

Has anyone else dealt with this successfully?

xtal
Jan 9, 2011

by Fluffdaddy

Peristalsis posted:

I have a weird problem. I'm uploading spreadsheet and/or CSV files for processing. Some of the CSV files have a Byte Order Mark (BOM) at the beginning of the file and it's screwing everything up. The BOM is prepended to the first header value in the top row, and I'm having trouble figuring out how to deal with this.

I've tried a number of different things, from cleaning the key in each key-value pair to trimming the stream directly in tempfile, but I keep hitting a couple of roadblocks.
1) Using String#start_with? and the like to locate the BOM doesn't work
2) File.open is supposed to allow you to specify a BOM-friendly mode, but I'm using Roo, which doesn't seem to have a way to specify that. I tried using the CSV library for the CSV files, but it also doesn't seem to be able to digest this file.

Has anyone else dealt with this successfully?

The Roo documentation says it accepts `File` objects: https://github.com/roo-rb/roo/blob/c83efbb8774d53701db5ec6815cc7e720389caa2/README.md#usage

Try using `File.open` with the BOM-mode stuff you read, then pass the file handle to `Spreadsheet.open`. This is probably also how you'd do it if you used the CSV stdlib.
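For the plain-CSV case, the BOM-friendly mode looks like this (file name and contents are made up for the demo):

```ruby
require "csv"
require "tempfile"

# Excel-style exports often prepend a UTF-8 BOM (bytes EF BB BF).
Tempfile.create(["report", ".csv"]) do |f|
  f.write("\xEF\xBB\xBFname,age\nAlice,30\n")
  f.flush

  # Without BOM handling the first header comes back as "\xEF\xBB\xBFname".
  # The "bom|utf-8" external encoding tells Ruby to strip a leading BOM.
  table = File.open(f.path, "r:bom|utf-8") { |io| CSV.parse(io.read, headers: true) }
  table.headers # => ["name", "age"]
end
```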

Volguus
Mar 3, 2009
I am working to integrate a Ruby on Rails application in our workflow (developed by some other company) and I have a question:

How can I get rid of the RAILS_HOSTNAME env variable in a Rails app? Whenever the Rails application redirects, it uses the environment variable to construct the Location URL. The problem I have with that is that I can't (apparently, or I don't know how) put the application behind a load balancer, since the redirects are all wrong and basically nothing works. Plus, if I need to spawn a new machine (we're hosting in AWS), it needs to be reconfigured with the new environment variable (which I know can be automated, but I wanna get rid of it completely).
The application itself is configured on that host to use nginx as reverse-proxy, but I doubt nginx is the problem here.

I tried setting RAILS_HOSTNAME to the name of the load balancer, but that made everything stop working completely (nginx throws Bad Gateway and doesn't even seem to try to contact the Rails app).

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
RAILS_HOSTNAME isn't a standard var. So it depends on what the custom logic is set up to do. It sounds somewhat like something we use to enforce the proper domain name on production apps:

code:
# config/environments/production.rb
...
  if ENV['APPLICATION_HOST'].present?
    config.middleware.use Rack::CanonicalHost,
                          ENV['APPLICATION_HOST'],
                          cache_control: 'max-age=3600'
  end
Does it blow up when it's just not set, or set to an empty string?

If you really need to, ENV is just a Hash loaded at boot, so you can delete the entry in ruby.
code:
ENV.delete("RAILS_HOSTNAME")

Volguus
Mar 3, 2009

The Milkman posted:

RAILS_HOSTNAME isn't a standard var. So it depends on what the custom logic is set up to do. It sounds somewhat like something we use to enforce the proper domain name on production apps:

code:
# config/environments/production.rb
...
  if ENV['APPLICATION_HOST'].present?
    config.middleware.use Rack::CanonicalHost,
                          ENV['APPLICATION_HOST'],
                          cache_control: 'max-age=3600'
  end
Does it blow up when it's just not set, or set to an empty string?

If you really need to, ENV is just a Hash loaded at boot, so you can delete the entry in ruby.
code:
ENV.delete("RAILS_HOSTNAME")

The only place I've seen it used (found via grep) is in config/environments/production.rb, as follows:

code:
config/environments/production.rb:  Rails.application.routes.default_url_options[:host] = ENV['RAILS_HOSTNAME']
config/environments/production.rb:  config.action_mailer.default_url_options = { host: ENV['RAILS_HOSTNAME'] }
config/environments/production.rb:  config.action_controller.default_url_options = { host: ENV['RAILS_HOSTNAME']}
So it could be just used by the framework when doing redirects? What I would like would be for the framework to use the host in the URL it was contacted with to do the redirects, nothing more, nothing less.
That is, if I access the app via http://localhost:1234/path then the redirect should keep localhost:1234. But if I talk with the app via http://somedomain/path, then use somedomain.
Is that possible to achieve via some setting or does it have to be done manually at controller /routes level?

kayakyakr
Feb 16, 2004

Kayak is true

Volguus posted:

The only place I've seen it used (found via grep) is in config/environments/production.rb, as follows:

code:
config/environments/production.rb:  Rails.application.routes.default_url_options[:host] = ENV['RAILS_HOSTNAME']
config/environments/production.rb:  config.action_mailer.default_url_options = { host: ENV['RAILS_HOSTNAME'] }
config/environments/production.rb:  config.action_controller.default_url_options = { host: ENV['RAILS_HOSTNAME']}
So it could be just used by the framework when doing redirects? What I would like would be for the framework to use the host in the URL it was contacted with to do the redirects, nothing more, nothing less.
That is, if I access the app via http://localhost:1234/path then the redirect should keep localhost:1234. But if I talk with the app via http://somedomain/path, then use somedomain.
Is that possible to achieve via some setting or does it have to be done manually at controller /routes level?

Make sure you use _path and not _url in any helpers. That will use the current url.

Generally default url options are only used in the case of email where there is no page context. Otherwise, if you're using _path, it'll be relative.
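For anyone following along, the distinction looks like this (the customers route is just an example):

```ruby
# config/routes.rb
resources :customers

# In views/controllers, every named route gets both helpers:
customers_path  # => "/customers"  (relative; the browser keeps whatever host it used)
customers_url   # => "http://<default_url_options[:host]>/customers"  (absolute)

# A host-independent redirect, so it works the same behind a load balancer:
redirect_to customers_path
```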

Volguus
Mar 3, 2009

kayakyakr posted:

Make sure you use _path and not _url in any helpers. That will use the current url.

Generally default url options are only used in the case of email where there is no page context. Otherwise, if you're using _path, it'll be relative.

That completely flew over my head because in the app/helpers folder, the helpers are all empty:
code:
module CustomersHelper
end
I obviously agree with you, the host part of the URL should never be taken into consideration when relative paths work, but I don't see where anything is being done here. I mean, there are classes that have zero contents, but still, things are happening, which is why I assume there's some library/framework making all the happening happen, it just needs to be configured correctly.

Pardot
Jul 25, 2001




Helper methods that are created from the routes.rb resources have both a _path suffix and a _url suffix. https://guides.rubyonrails.org/routing.html#path-and-url-helpers

xtal
Jan 9, 2011

by Fluffdaddy
I don't think I understand the problem so let me spit out some nonsense.

For one, HTTP/1.1 as originally specified (RFC 2616) required an absolute URL in the Location header, even though many/most clients and servers handle relative ones; RFC 7231 later relaxed this. So it's plausible, though I haven't verified, that Rails will implicitly use your hostname when issuing redirects so that the Location is absolute.

But to the core of the issue: what is the problem with this, and why does it cause problems with your load balancer? Are the absolute URLs referring to specific web servers instead of the load balancer itself? Where is that environment variable being configured, and can you change it to the load balancer's host?

Your solution with referer- or host-based redirection is 99.999% OK but also presents a small security flaw for open redirects.

Ideally, a single-tenant app should be addressed by one authoritative hostname (and also one scheme -- HTTPS), which is configured as the one hostname in Rails. This should be the web server or the load balancer if there is one.

xtal fucked around with this message at 02:47 on Dec 2, 2018

Pollyanna
Mar 5, 2005

Milk's on them.


I’m at my wit’s end with these stupid loving tests. Is there any way to debug why Capybara would intermittently take five seconds to find and click on a link in a page? We’re often but not always spending like 5600ms on any click_link or click_button call, and I’m having a gently caress of a time figuring out why. I don’t think it’s an async thing and we don’t seem to be spending much time in the DB or anything, so I suspect that Capybara itself is having trouble getting what it wants.

Edit: we are doing within blocks, though it doesn’t seem to help much.

Edit 2: you know what? Our test suite being a bloated piece of poo poo is extremely low priority. I am bringing no value by doing this. gently caress this poo poo.

Pollyanna fucked around with this message at 21:17 on Jan 4, 2019


Peristalsis
Apr 5, 2004
Move along.

Pollyanna posted:

I’m at my wits end with these stupid loving tests. Is there any way to debug why Capybara would intermittently take five seconds to find and click on a link in a page? We’re often but not always spending like 5600ms on any click_link or click_button call, and I’m having a gently caress of a time figuring out why. I don’t think it’s an async thing and we don’t seem to be spending much time in the DB or anything, so I suspect that Capybara itself is having trouble getting what it wants.

Edit: we are doing within blocks, though it doesn’t seem to help much.

Edit 2: you know what? Our test suite being a bloated piece of poo poo is extremely low priority. I am bringing no value by doing this. gently caress this poo poo.

I feel your pain. In my experience, capybara is kind of erratic in ways like this, and it can be very difficult to figure out why. It seems odd to me that it always eventually finds your links - I've had more experiences where it just can't find some button or link that's right in front of it, causing a slow failure as it sits through its wait time, not a slow pass.

Maybe someone who knows it better will comment further, but here are a few things to look at:
* Capybara locates things on the page to click on by screen coordinates, and sometimes the screen rendering makes the control move after capybara locates it, but before it tries to click on it. Capybara then tries to click on some empty part of the page (or another control entirely).
* If you have a screen that's slow to load, you can tell capybara to find some late-loading feature before you do other things, as a sort of check that the screen is done rendering before you expect too much.
* It's better to use click_on, click_link, etc. than to activate widgets and explicitly call some callback on it. I've resorted to the latter at times when I just can't figure out what the hell is going on, and always ended up regretting it later.
* The first test that is run seems to take a long time on the application I work on, and I assume that's just because it's loading the app or something while doing it.
* Try not to use before(:all) blocks to save time. It can speed things up, but has always caused me more headaches than it's worth. Just stick with before(:each)/background.
* Keeping each test short and specific can help locate exactly what step is causing the problem, and/or prevent the combination of steps that's causing the problem from being in the same test together. This may make the tests take longer, as each before and after block run more times for lots of short tests than for a few long ones, but keeping them simple makes them more understandable and maintainable.
* Finding the root cause of capybara issues can take a lot longer than randomly changing things around (the order of steps, which tests do what things, different ways of doing the same things, etc.) until something works. I'd always prefer to know what the gently caress is actually going on, but with capybara in particular, figuring that out can be a frustrating dead end. And sometimes when you stumble across a fix, you can figure out what the problem must have been much better than trying to deduce it from first principles.

Using within blocks is good for preventing ambiguous searches, but I wouldn't expect it to fix full seconds of delay just from shortening the parsing of HTML.
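The "wait for a late-loading feature" tip from the list above might look like this in an RSpec/Capybara feature spec (the selector, path, and link text are made up):

```ruby
scenario "downloading results" do
  visit experiments_path

  # have_css blocks (up to Capybara.default_max_wait_time) until the element
  # appears, so the clicks below don't race an unfinished page render.
  expect(page).to have_css("#results-table")

  click_link "Download CSV"
end
```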

Good luck!
