xtal
Jan 9, 2011

by Fluffdaddy
e: didn't mean to post this

xtal fucked around with this message at 08:51 on Sep 6, 2016

Peristalsis
Apr 5, 2004
Move along.
Is it okay to use Ruby's object_id as a hash key?

I tried using objects as keys, but that backfired. For example, this code seems to work:
code:
foo = Bar.new
foo.my_attr = "bleah"
my_hash = {}
my_hash[foo] = "meh"
my_hash[foo]  # => "meh"
# Changing an attribute of foo doesn't hurt anything:
foo.my_attr = "yuck"
my_hash[foo]  # => "meh"
but as soon as I saved foo, it failed:
code:
...
foo.save
my_hash[foo]  # => nil
foo.object_id doesn't seem to change when I save foo, or even when I destroy it, but I guess I'm a little gun-shy at this point. The reason this is coming up is that I need to have a hash associating things with a set of model objects, each of which may or may not be persisted. Ruby 2.1.1p76 and Rails 4.2.5, if that matters. (Though if that matters, I probably shouldn't do it.)

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
Note we're talking Rails here, not Ruby. In Ruby, it's perfectly fine to use an object as a hash key and not expect it to change. In Rails, model objects are ephemeral representations of your real, underlying data, and it's not safe to key on anything that isn't in the database. object_id is not safe to use either, but object.id is, assuming you've already saved the object - so save before assigning to the hash, if you can, and just use that. (Or, if you have a variety of objects, use a pair of keys that includes the class as well.)
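
A rough sketch of that last idea, in case it helps (Bar and the attribute names here are stand-ins, not anything from your app) - keying on [class, id] stays stable across reloads of the same row, as long as the record is saved before you build the hash:

Ruby code:
bar = Bar.find(5)
lookup = {}
# Two AR objects loaded from the same row share a class and an id,
# so a [class, id] pair works as a stable composite key.
lookup[[bar.class, bar.id]] = "meh"

same_row = Bar.find(5)                   # a different Ruby object...
lookup[[same_row.class, same_row.id]]    # => "meh"  ...but the same key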

GlyphGryph fucked around with this message at 17:24 on Sep 6, 2016

xtal
Jan 9, 2011

by Fluffdaddy

Peristalsis posted:

Is it okay to use Ruby's object_id as a hash key?

I tried using objects as keys, but that backfired. For example, this code seems to work:
code:
foo = Bar.new
foo.my_attr = "bleah"
my_hash = {}
my_hash[foo] = "meh"
my_hash[foo]  # => "meh"
# Changing an attribute of foo doesn't hurt anything:
foo.my_attr = "yuck"
my_hash[foo]  # => "meh"
but as soon as I saved foo, it failed:
code:
...
foo.save
my_hash[foo]  # => nil
foo.object_id doesn't seem to change when I save foo, or even when I destroy it, but I guess I'm a little gun-shy at this point. The reason this is coming up is that I need to have a hash associating things with a set of model objects, each of which may or may not be persisted. Ruby 2.1.1p76 and Rails 4.2.5, if that matters. (Though if that matters, I probably shouldn't do it.)

When you use an object as a hash key, Ruby calls the `hash` method on the object. (eta: I am actually not 💯 about this.) For `ActiveRecord::Base` this delegates to `id.hash`. The reason it breaks when you save is that the record gets assigned an ID, which changes the result of the hash function. Using `object_id` is probably fine, but I think the most elegant approach is overriding the `hash` method on whatever class you're working with.
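
Here's a plain-Ruby sketch of that mechanism with a made-up Box class (nothing Rails-specific), showing why a key whose `hash` changes after insertion stops being found:

Ruby code:
class Box
  attr_accessor :id

  def hash
    id.hash
  end

  def eql?(other)
    other.is_a?(Box) && other.id == id
  end
end

box = Box.new
h = { box => "meh" }
h[box]        # => "meh"

box.id = 42   # the key's hash value changes, much like saving assigns an AR record its id
h[box]        # => nil  -- the hash is still looking in the old bucket
h.rehash      # recompute key positions from the current hash values
h[box]        # => "meh"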

xtal fucked around with this message at 17:33 on Sep 6, 2016

Peristalsis
Apr 5, 2004
Move along.

GlyphGryph posted:

Note we're talking Rails here, not Ruby. In Ruby, it's perfectly fine to use an object as a hash key and not expect it to change. In Rails, model objects are ephemeral representations of your real, underlying data, and it's not safe to key on anything that isn't in the database. object_id is not safe to use either, but object.id is, assuming you've already saved the object - so save before assigning to the hash, if you can, and just use that.

Alas, I can't, which is why this came up.

xtal posted:

When you use an object as a hash key, Ruby calls the `hash` method on the object. (eta: I am actually not 💯 about this.) For `ActiveRecord::Base` this delegates to `id.hash`. The reason it breaks when you save is that the record gets assigned an ID, which changes the result of the hash function. Using `object_id` is probably fine, but I think the most elegant approach is overriding the `hash` method on whatever class you're working with.

I appreciate both your inputs - I'll think on this some more. At least I've done due diligence to make sure there's no obvious approach or ubiquitous pattern to avoid this issue.

For now, I've kept the model objects as keys, and put a big, ugly comment in the code that it's brittle, and only works here because the code is strictly being used in a view, and the objects won't change or be persisted between the hash being constructed and being accessed. I need to move on with the work (this is all just to reformat a single, confusing display on a form), but I'll keep thinking about better ways to go about this. It's already an obnoxiously complex data structure, and every other approach I think of makes it even worse (and might not work any better, if object_id isn't reliable, either). I'm hoping to avoid creating a whole new class just to populate some data to show on a screen, but if the overall approach I'm taking turns out to work and be sensible, then that may be the less bad alternative.

xtal
Jan 9, 2011

by Fluffdaddy

Peristalsis posted:

Alas, I can't, which is why this came up.


I appreciate both your inputs - I'll think on this some more. At least I've done due diligence to make sure there's no obvious approach or ubiquitous pattern to avoid this issue.

For now, I've kept the model objects as keys, and put a big, ugly comment in the code that it's brittle, and only works here because the code is strictly being used in a view, and the objects won't change or be persisted between the hash being constructed and being accessed. I need to move on with the work (this is all just to reformat a single, confusing display on a form), but I'll keep thinking about better ways to go about this. It's already an obnoxiously complex data structure, and every other approach I think of makes it even worse (and might not work any better, if object_id isn't reliable, either). I'm hoping to avoid creating a whole new class just to populate some data to show on a screen, but if the overall approach I'm taking turns out to work and be sensible, then that may be the less bad alternative.

Whether or not `object_id` works depends on how you're using it. It will stay the same after you save it, but it will change if you copy the object or load another one from the database. It's the ID of the object in Ruby space, defined on `Object`.

I think that all you need is something like this, provided there is some attribute on the record that is unique (for the purpose of hash equivalence). Then you can use `Something`s as hash keys that will persist between saves and retrievals.

Ruby code:
class Something < ApplicationRecord
  delegate :hash, to: :some_unique_identifier_such_as_name
end
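
One caveat I'd add (hedged, since I haven't dug through the AR internals recently): Ruby's Hash checks keys with `eql?` as well as `hash`, and ActiveRecord defines `eql?` in terms of the database id, so you'd probably want to override that too so it agrees with the delegated `hash`:

Ruby code:
class Something < ApplicationRecord
  delegate :hash, to: :some_unique_identifier_such_as_name

  # Hash uses eql? to confirm a key match once the hash buckets line up,
  # so equality has to be based on the same attribute as the hash value.
  def eql?(other)
    other.is_a?(Something) &&
      other.some_unique_identifier_such_as_name == some_unique_identifier_such_as_name
  end
end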

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
I'm not sure I entirely understand your situation, but every time I've been in a similar one it was because I was doing something stupid that I didn't need to be doing and approaching the problem completely wrong.

But if it's only for a particular view, using the object ID should be alright. I just don't understand how you ever get into a situation where you are saving an object in a view - that part makes no sense at all. If it was just view formatting, your previous method would have worked fine... So something bad is going on with your code.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

GlyphGryph posted:

I'm not sure I entirely understand your situation, but every time I've been in a similar one it was because I was doing something stupid that I didn't need to be doing and approaching the problem completely wrong.


I'll echo this sentiment. Listen to your intuition.

Peristalsis
Apr 5, 2004
Move along.

GlyphGryph posted:

I'm not sure I entirely understand your situation, but every time I've been in a similar one it was because I was doing something stupid that I didn't need to be doing and approaching the problem completely wrong.

But if it's only for a particular view, using the object ID should be alright. I just don't understand how you ever get into a situation where you are saving an object in a view - that part makes no sense at all. If it was just view formatting, your previous method would have worked fine... So something bad is going on with your code.


The Milkman posted:

I'll echo this sentiment. Listen to your intuition.

It's a weird (and annoying) situation, but I'm not saving any data from code in the view - I apologize if that's the impression I gave. In the form for a model object of class Foo, I need to sort and display a bunch of Bar objects that can be linked to the Foo object. For each Bar object, I also need to display its linked Baz objects. This was displayed in a doubly-nested table that was difficult for users to understand, and impossible to untangle on the code side. (Oh, and it didn't work right.) I've simplified it to a singly-nested table with headers in the second column that take the place of one of the previous nesting levels, but I still need to know, for each Bar object, what Baz objects it has for each attribute value. I'm not confident that the Bar objects will always be persisted when they get to this form, so I didn't want to use the database id as a hash key. That's why I wanted to use the Bar object itself as the key of a hash.

If I were reading that paragraph I just wrote, my eyes would glaze over halfway through, and I'd go look at E/N for a while. I realize this is a mess of code I've inherited, but I'm trying to clean it up incrementally, and believe it or not, my proposed hash, above, is simpler and clearer than the quagmire of sorting logic that it's replacing. That said, I DON'T want to be the guy who asks for help and then ignores/argues about advice, but this particular view is trying to show something that is kind of complicated by its nature, and I think I can only simplify it so much without losing functionality.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Peristalsis posted:

this is a mess of code I've inherited

Okay, now it makes sense!

Peristalsis posted:

It's a weird (and annoying) situation, but I'm not saving any data from code in the view - I apologize if that's the impression I gave.

The original example you gave was literally calling "foo.save".

If you're not saving any of this data, the original way you were doing it should work fine, so I'm not sure how you ended up having this problem now. Considering it's inherited code though, who knows what nonsense was being done before that was causing the problems.

Peristalsis
Apr 5, 2004
Move along.

GlyphGryph posted:

Okay, now it makes sense!

The original example you gave was literally calling "foo.save".

Oh, right, sorry. I was just using that to demonstrate that I found a case where using the whole object as a hash key didn't work, and so I was leery about doing it at all (even though I don't think that case applies to my situation while the hash will be in scope). I guess I figured that if it wasn't dependable when saving, it might not be dependable in general.

GlyphGryph posted:

If you're not saving any of this data, the original way you were doing it should work fine, so I'm not sure how you ended up having this problem now. Considering it's inherited code though, who knows what nonsense was being done before that was causing the problems.

Excellent. I just felt icky using objects as hash keys, and so I played around with it in the console for a while, where I found that saving an object ruined its use as a hash key. From that, I inferred that maybe it's not a great idea at all.

Peristalsis
Apr 5, 2004
Move along.
I'm having a capybara issue.

In one of my tests, this command returns an empty collection:
code:
all("input[type='checkbox'][data-dfID]")
I put a debugger there, and printed out page.html, which shows that I have multiple elements of this form:
code:
<input type="checkbox" name="sample_files[]" data-dfID="5" value="datafile_id=5">
I've also tried specifying the data-dfID value, with no luck:
code:
all("input[type='checkbox'][data-dfID='5']")
I have a JQuery selector at the top of the file that seems to use the same selection criteria with no problem:
code:
$('input[type="checkbox"][data-dfID]').on('change', syncSameFileCheckboxes);
So, why does the original all() method return an empty result? Is all() not capable of finding data-* attributes or something?

I'm pretty sure I can find a way around this, but I'd really like to know what I'm doing wrong or misunderstanding.

Peristalsis fucked around with this message at 18:14 on Sep 16, 2016

Sivart13
May 18, 2003
I have neglected to come up with a clever title
Does your element show up in the full list of elements retrieved by
code:
all("input[type='checkbox']")
?

"all" is immediate, so if your checkbox is created via JavaScript, it might not be in the initial content retrieved by "all".

If you use "find" instead, like
code:
find("input[type='checkbox'][data-dfID]")
it will do an implicit wait for 2-5 seconds, polling the page to see if that thing exists.

Otherwise there may be some quirk between Capybara's and jQuery's interpretation of the selectors. Worst case, you should always be able to get the full list of checkbox nodes from "all" and filter them in Ruby for the ones with the right data attribute.
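
For that worst case, something like this ought to work - note I've written the attribute name in lowercase, since HTML parsers normalize attribute names to lowercase and some drivers look them up case-sensitively:

Ruby code:
# Grab every checkbox, then keep the ones that actually carry the data attribute;
# element['attr-name'] returns nil when the attribute is absent.
boxes = all("input[type='checkbox']").select { |box| box['data-dfid'] }
ids   = boxes.map { |box| box['data-dfid'] }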

Peristalsis
Apr 5, 2004
Move along.

Sivart13 posted:

Does your element show up in the full list of elements retrieved by
code:
all("input[type='checkbox']")
?

Yes.

Sivart13 posted:

"all" is immediate, so if your checkbox is created via JavaScript, it might not be in the initial content retrieved by "all".

The checkboxes aren't created by JavaScript, they're created in a loop in the _form.html.erb file.

Sivart13 posted:

If you use "find" instead, like
code:
find("input[type='checkbox'][data-dfID]")
it will do an implicit wait for 2-5 seconds, polling the page to see if that thing exists.

That find command returns this message:
code:
Capybara::ElementNotFound: Unable to find css "input[type='checkbox'][data-dfID]"

Sivart13 posted:

Otherwise there may be some quirk between Capybara's and jQuery's interpretation of the selectors. Worst case, you should always be able to get the full list of checkbox nodes from "all" and filter them in Ruby for the ones with the right data attribute.

I was able to collect them by name, instead. I just wondered why my first attempt didn't work. I guess data-* is a new-ish addition to HTML, and I wonder if all and find aren't able to process it yet. I think I read that the "data-" part is ignored sometimes, but replacing the [data-dfID] with [dfID] also didn't help.

necrotic
Aug 2, 2005
I owe my brother big time for this!

Peristalsis posted:

I was able to collect them by name, instead. I just wondered why my first attempt didn't work. I guess data-* is a new-ish addition to HTML, and I wonder if all and find aren't able to process it yet. I think I read that the "data-" part is ignored sometimes, but replacing the [data-dfID] with [dfID] also didn't help.

The data- prefix is not ignored in CSS selectors; it's only dropped when you access the data-* attributes in JS through the element's dataset property.

Peristalsis
Apr 5, 2004
Move along.
On a completely different note, I have a side project coming up that involves writing a custom scheduling app. It's for a research lab that needs to send text messages to their subjects on a pre-determined schedule, and send occasional ad hoc texts on demand.

The lab has an account with TextMark, which I can use to send the texts via cURL commands to some URL somewhere. My question is about the overall structure of the program. Is this the kind of thing that the rufus-scheduler gem does well, or should I just have a main loop somewhere that checks every 30 seconds whether it's time to send the next text(s)?

It also seemed like a not-too-bad use case for a second thread. The main thread would just be the normal app, which allows users to manage subject lists, text schedules, etc. The second thread would be the scheduled texting thread, which sends a text and then sleeps until the next text is due. That's probably overkill, but I'd enjoy tinkering with multi-threading, I think, as long as I'm not just inviting a world of pain and despair by dabbling in thread stuff. (As I understand it, the Global Interpreter Lock means there isn't ever genuinely concurrent execution of multiple threads, but rather it enforces a sort of time-slicing between the threads at the Ruby level. I don't anticipate this being a demanding app, and I think time-slicing would be just fine in performance terms.)

Anyway, I'm looking for feedback on what architectural approach or approaches make the most sense here.

Also, is RVM the normal way to use RoR on Macs? I recently acquired a used MacBook, and would like to be able to use it to work on this program. I just don't know what the standard way is to use Macs for rails development.

xtal
Jan 9, 2011

by Fluffdaddy

Peristalsis posted:

Also, is RVM the normal way to use RoR on Macs? I recently acquired a used MacBook, and would like to be able to use it to work on this program. I just don't know what the standard way is to use Macs for rails development.

The preferred option is to use the latest Ruby version installed through Homebrew. The next best option is virtualizing your applications through Vagrant or Docker and installing the necessary Ruby version in each one.

Maintaining multiple versions of Ruby is one of the worst parts of the language's tooling, and you should avoid it if you can.

If for some reason you genuinely need multiple versions of Ruby on one host, use a simpler and more secure option than RVM, which is basically the worst thing you can use. You can compile Ruby versions to separate `$PREFIX`es and change your `$PATH` to point to the one used by your project, for example. There are tools to automate this; I believe they're called chruby and ruby-install.

kayakyakr
Feb 16, 2004

Kayak is true

xtal posted:

The preferred option is to use the latest Ruby version installed through Homebrew. The next best option is virtualizing your applications through Vagrant or Docker and installing the necessary Ruby version in each one.

Maintaining multiple versions of Ruby is one of the worst parts of the language's tooling, and you should avoid it if you can.

If for some reason you genuinely need multiple versions of Ruby on one host, use a simpler and more secure option than RVM, which is basically the worst thing you can use. You can compile Ruby versions to separate `$PREFIX`es and change your `$PATH` to point to the one used by your project, for example. There are tools to automate this; I believe they're called chruby and ruby-install.

I disagree with almost all of this. Use RVM or rbenv if you find that you need a specific version of ruby. Otherwise use whatever comes on homebrew I guess?

prom candy
Dec 16, 2005

Only I may dance
Just use rbenv on your dev machine. In a perfect world we'd all have portable dev environments for each of our apps, but that's a considerable amount of work to get set up, and not without its downsides either.

8ender
Sep 24, 2003

clown is watching you sleep

prom candy posted:

Just use rbenv on your dev machine. In a perfect world we'd all have portable dev environments for each of our apps, but that's a considerable amount of work to get set up, and not without its downsides either.

Docker has made our dev environment mostly portable with the gigantic caveat that Docker with Virtualbox on a mac uses a tremendous amount of resources all the time.

xtal
Jan 9, 2011

by Fluffdaddy

8ender posted:

Docker has made our dev environment mostly portable with the gigantic caveat that Docker with Virtualbox on a mac uses a tremendous amount of resources all the time.

Isn't it possible to use Docker on a Mac without an explicit Linux VM now? I don't know, because I've always used Vagrant, but virtualizing your development environment is easy to set up (google and copy and paste into a Vagrantfile) and literally all upsides. Most of all you don't mess around with that poo poo that works at the shell-level and will ruin your day when you're scripting or logging in over SSH.

good jovi
Dec 11, 2000

'm pro-dickgirl, and I VOTE!

Handling multiple ruby versions is a breeze. rbenv has an extension called "ruby-build" (also available through Homebrew). Just install that and drop a .ruby-version file in each project. No problem*

Peristalsis posted:

On a completely different note, I have a side project coming up that involves writing a custom scheduling app. It's for a research lab that needs to send text messages to their subjects on a pre-determined schedule, and send occasional ad hoc texts on demand.

The lab has an account with TextMark, which I can use to send the texts via cURL commands to some URL somewhere. My question is about the overall structure of the program. Is this the kind of thing that the rufus-scheduler gem does well, or should I just have a main loop somewhere that checks every 30 seconds whether it's time to send the next text(s)?

It also seemed like a not-too-bad use case for a second thread. The main thread would just be the normal app, which allows users to manage subject lists, text schedules, etc. The second thread would be the scheduled texting thread, which sends a text and then sleeps until the next text is due. That's probably overkill, but I'd enjoy tinkering with multi-threading, I think, as long as I'm not just inviting a world of pain and despair by dabbling in thread stuff. (As I understand it, the Global Interpreter Lock means there isn't ever genuinely concurrent execution of multiple threads, but rather it enforces a sort of time-slicing between the threads at the Ruby level. I don't anticipate this being a demanding app, and I think time-slicing would be just fine in performance terms.)

To address your actual question, the first approach that comes to mind is to have a cron job running that is responsible for the actual sending of messages. So the rails app is just responsible for dumping scheduled messages into the database. Decide on a minimum resolution for message scheduling (say, 10 minutes), and then have the cron job run on that schedule (say, every 10 minutes). The job will then look in the database for any messages scheduled to go out in the past that haven't been sent yet and send them.

That's just one approach, though, and assumes a certain scale (relatively small). Other options might depend on the infrastructure available to you. If you wanted to use a message queue like RabbitMQ or SQS, they both have a concept of delayed messages. Instead of putting SMSs in the database, you could just throw them on the queue with a delay of (send_time - current_time) and then again have a separate process responsible for consuming messages off that queue and just sending them out right away.

In general, something like this wouldn't be done with threading inside the web app process, but by a separate process (cron job, sidekiq worker, whatever) solely responsible for handling that out-of-band work. The web process should be dedicated to serving up web pages, and anything not essential to the rendering of those pages should be shipped off.

*it's ruby, so "no problem" means "no problem as long as your use case exactly matches that of the program's author"
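
If it helps, here's a rough sketch of the cron-job flavor (the model, the columns, and the TextmarkClient wrapper are invented placeholders, not anything real):

Ruby code:
# lib/tasks/texts.rake
# crontab entry, e.g.:  */10 * * * *  cd /path/to/app && bin/rake texts:send_due RAILS_ENV=production
namespace :texts do
  desc "Send any scheduled texts whose send time has passed"
  task send_due: :environment do
    ScheduledText.where(sent_at: nil)
                 .where("send_at <= ?", Time.current)
                 .find_each do |text|
      TextmarkClient.deliver(text)        # whatever wraps the HTTP call to the SMS provider
      text.update!(sent_at: Time.current) # mark it sent so the next run skips it
    end
  end
end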

Peristalsis
Apr 5, 2004
Move along.

good jovi posted:

To address your actual question, the first approach that comes to mind is to have a cron job running that is responsible for the actual sending of messages. So the rails app is just responsible for dumping scheduled messages into the database. Decide on a minimum resolution for message scheduling (say, 10 minutes), and then have the cron job run on that schedule (say, every 10 minutes). The job will then look in the database for any messages scheduled to go out in the past that haven't been sent yet and send them.

That's just one approach, though, and assumes a certain scale (relatively small). Other options might depend on the infrastructure available to you. If you wanted to use a message queue like RabbitMQ or SQS, they both have a concept of delayed messages. Instead of putting SMSs in the database, you could just throw them on the queue with a delay of (send_time - current_time) and then again have a separate process responsible for consuming messages off that queue and just sending them out right away.

In general, something like this wouldn't be done with threading inside the web app process, but by a separate process (cron job, sidekiq worker, whatever) solely responsible for handling that out-of-band work. The web process should be dedicated to serving up web pages, and anything not essential to the rendering of those pages should be shipped off.

Thanks for the comments. My first thought was either cron jobs or something with the clockwork gem (which we use at work for some nightly processing stuff), but I thought it would be nicer to have the whole thing be more self-contained (i.e. a single RoR app that just works, not something that interacts with another ruby process or depends on cron). I'm not sure how good the technical people are who will be deploying and maintaining the application, so the fewer moving parts there are, the better.* I'd also prefer to avoid cron, for simplicity. I don't know anything about the server this will ultimately run on, or what permissions the application will have. All that said, if using cron or a ruby analog to it is the best approach, then that's probably what I'll do.

I'll take a look at sidekiq, too. The way we used clockwork just exported the scheduling responsibility, not the functionality we actually wanted to schedule. If sidekiq can encapsulate and export the actual job we want done, it might be just what I want.

One reason that I was considering using another thread was to avoid having to decide on some minimum interval of resolution (like "check the database every n minutes"), and cater to the scheduled times instead. Being able to put a thread to sleep for (next_scheduled_time - current_time) minutes had a certain appeal. However, that's me being anal - something that checks the database every few minutes will work fine, it just bothers me on an aesthetic level.

This should be a pretty small application. I think they're looking at 200 human subjects over a few months. I don't know how many days each subject is active, but it should translate to a few hundred rows in a subjects table, (many) thousands of rows in a scheduled_texts table, and ... not much else, really. The only potential performance issue I anticipate right now is if many people need to get a text at exactly the same time, and the system ends up sending some of them late because it only sends one at a time. Even if 200 subjects all get a text at the very same moment, it's hard to imagine that actually taking more than a few minutes to process. I would like to make this as robust as possible, though, in case the protocol changes, or I find that I can use this somewhere else, or whatever. If nothing else, I'd like to keep this program around as part of a portfolio, so it should demonstrate good programming practices.

Oh, one more question - is there any reason I shouldn't use sqlite for the production environment? That would help keep the whole application self-contained, and I wouldn't have to trust someone else to set up a database server.


* Even the nightly processing where I work is currently not running, because nobody restarted it for a year, and a bug got released to production in that time. When I finally made sure that the clockwork gem was restarted during our last release, the nightly processing immediately crashed, because there's a syntax error in clock.rb.



xtal posted:

The preferred option is to use the latest Ruby version installed through Homebrew. The next best option is virtualizing your applications through Vagrant or Docker and installing the necessary Ruby version in each one.

Maintaining multiple versions of Ruby is one of the worst parts of the language's tooling, and you should avoid it if you can.

If for some reason you genuinely need multiple versions of Ruby on one host, use a simpler and more secure option than RVM, which is basically the worst thing you can use. You can compile Ruby versions to separate `$PREFIX`es and change your `$PATH` to point to the one used by your project, for example. There are tools to automate this; I believe they're called chruby and ruby-install.

kayakyakr posted:

I disagree with almost all of this. Use RVM or rbenv if you find that you need a specific version of ruby. Otherwise use whatever comes on homebrew I guess?

prom candy posted:

Just use rbenv on your dev machine. In a perfect world we'd all have portable dev environments for each of our apps, but that's a considerable amount of work to get set up, and not without its downsides either.

8ender posted:

Docker has made our dev environment mostly portable with the gigantic caveat that Docker with Virtualbox on a mac uses a tremendous amount of resources all the time.

xtal posted:

Isn't it possible to use Docker on a Mac without an explicit Linux VM now? I don't know, because I've always used Vagrant, but virtualizing your development environment is easy to set up (google and copy and paste into a Vagrantfile) and literally all upsides. Most of all you don't mess around with that poo poo that works at the shell-level and will ruin your day when you're scripting or logging in over SSH.

good jovi posted:

Handling multiple ruby versions is a breeze. rbenv has an extension called "ruby-build" (also available through Homebrew). Just install that and drop a .ruby-version file in each project. No problem*

*it's ruby, so "no problem" means "no problem as long as your use case exactly matches that of the program's author"

Thanks for the feedback. It sounds like there are multiple reasonable approaches. One of the machines I'm dealing with has very limited resources, so I'll probably keep it very simple on that one, at least. It's tempting to use this as an excuse to learn docker, but I'm already learning how to develop with a Mac, how to use Homebrew, etc., so maybe I'll just stick with RVM or rbenv for now. I guess I don't really anticipate needing multiple ruby versions or gemsets on a Mac any time soon, I just wanted to avoid any obvious, stupid mistakes.

Peristalsis fucked around with this message at 17:32 on Sep 20, 2016

Pardot
Jul 25, 2001




Peristalsis posted:

Oh, one more question - is there any reason I shouldn't use sqlite for the production environment? That would help keep the whole application self-contained, and I wouldn't have to trust someone else to set up a database server.

The main thing to be aware of is that sqlite locks the entire database for writes. Writes probably don't take that long, so you won't see problems until several people are using the app at the same time. I haven't carefully read all you've posted so far, but it seems like you'd probably be fine with sqlite.

Regardless of whatever you go with, please make sure you take good backups, test restores of them, and have the backup off of the machine.

xtal
Jan 9, 2011

by Fluffdaddy
Concurrent writes are one caveat, but using a dynamically-typed database is not a good idea irrespective of scale.

Peristalsis
Apr 5, 2004
Move along.

Pardot posted:

The main thing to be aware of is that sqlite locks the entire database for writes. Writes probably don't take that long, so you won't see problems until several people are using the app at the same time. I haven't carefully read all you've posted so far, but it seems like you'd probably be fine with sqlite.

Regardless of whatever you go with, please make sure you take good backups, test restores of them, and have the backup off of the machine.

Thanks for the heads up on locking.

The posted link posted:

When any process wants to write, it must lock the entire database file for the duration of its update. But that normally only takes a few milliseconds.

As long as we're really talking about millisecond delays, I think I'll be okay.

I plan to mention backups to the lab before I deliver the application, but I don't think I'll have direct access for ongoing maintenance or testing. I'll be willing to talk with their tech people about it, and I'll probably have to talk with them about setting up their first rails app, but ultimately, it's going to be on them to do the right thing.

xtal posted:

dynamically-typed database

Okay, that's a new one on me. I think I can work around it, but I didn't even realize this was a thing.

xtal
Jan 9, 2011

by Fluffdaddy
Say you're the back-end developer on a project with several front-end developers, and they ask for a shared development server so they don't need to run a Rails server locally. Is it crazy for me to say no? My concern is that they will end up wanting arbitrarily many servers to avoid colliding work, and I would rather solve this problem by reducing the friction for running your own server.

xtal fucked around with this message at 00:51 on Sep 21, 2016

KoRMaK
Jul 31, 2012



xtal posted:

Say you're the back-end developer on a project with several front-end developers, and they ask for a shared development server so they don't need to run a Rails server locally. Is it crazy for me to say no?
lol it's not even that hard.

But also, if they're doing frontend, how are they gonna precompile assets without a local vm?

How are they gonna avoid conflicts without git branches?


If I was you, I'd probably leave them to run their own local vms, and tell them to deal with conflict resolution on their own. Git gud i guess

xtal
Jan 9, 2011

by Fluffdaddy

KoRMaK posted:

lol it's not even that hard.

But also, if they're doing frontend, how are they gonna precompile assets without a local vm?

How are they gonna avoid conflicts without git branches?


If I was you, I'd probably leave them to run their own local vms, and tell them to deal with conflict resolution on their own. Git gud i guess

It's clearly not hard, but being the easiest solution doesn't mean it's the best solution (for the reasons mentioned in the rest of the post). As for the rest of your post, the front-end is a separate codebase from the back-end. How they work on the front-end repo is unaffected by this decision.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

xtal posted:

Say you're the back-end developer on a project with several front-end developers, and they ask for a shared development server so they don't need to run a Rails server locally. Is it crazy for me to say no? My concern is that they will end up wanting arbitrarily many servers to avoid colliding work, and I would rather solve this problem by reducing the friction for running your own server.

IMO, the entire rails environment is a "front end developer" application and it's completely reasonable to expect them to run a Rails server locally. I'm honestly not even sure how a "shared development server" would even work? The concept does seem a bit crazy.

Oddly enough, we do have "development servers" for the actual backend developers, because those are the people who don't want or need to deal with the hassle of running the actual rails server.

Peristalsis
Apr 5, 2004
Move along.

GlyphGryph posted:

IMO, the entire rails environment is a "front end developer" application and it's completely reasonable to expect them to run a Rails server locally. I'm honestly not even sure how a "shared development server" would even work? The concept does seem a bit crazy.

I agree with this - I can't imagine going back to a shared dev environment like the one we had for J2EE development at my last job. If you install RoR, you already have the dev rails server - I really don't know how else anyone would want to do this.

Are these developers trying to avoid checking code out to their local system for some reason? Maybe there's some confusion between "rails server" and the "server" that they want to have host this shared environment. Are your developers completely new to rails? Are they using horrible, old hardware that can't run modern tools? Are they averse to using version control software for some reason?

If I were you, I'd make sure exactly what problem you/they are trying to solve - what it looks like they're asking for sounds crazy. Maybe they just want a staging/test server (i.e. a shared environment for testing the developers' changes together before pushing to production) - that would make more sense, and shouldn't cause the exponential server problems you're concerned about.

Peristalsis fucked around with this message at 17:06 on Sep 21, 2016

xtal
Jan 9, 2011

by Fluffdaddy
Let me clarify. The Rails server is API only and we have mobile apps that connect to it. They're completely ignorant of the fact that it's Rails.

Right now, unless a developer is working on both the app and the server at once, they point their app at the staging API while they're working. We want to QA a risky feature on staging, which would break the front-end developers' workflow.

The solutions I've come up with are: for the front-end and QA engineers to run their own Rails server; or for me to spin up what's basically a second staging server so we have one for front-end devs and one for QA.

I prefer the first approach because you can run any version you want, and I expect that if we have two staging servers tomorrow it will be three when developers are working on incompatible features.

xtal fucked around with this message at 17:31 on Sep 21, 2016

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

xtal posted:

Let me clarify. The Rails server is API only and we have mobile apps that connect to it. They're completely ignorant of the fact that it's Rails.

Right now, unless a developer is working on both the app and the server at once, they point their app at the staging API while they're working. We want to QA a risky feature on staging, which would break the front-end developers' workflow.

The solutions I've come up with are: for the front-end and QA engineers to run their own Rails server; or for me to spin up what's basically a second staging server so we have one for front-end devs and one for QA.

I prefer the first approach because you can run any version you want, and I expect that if we have two staging servers tomorrow it will be three when developers are working on incompatible features.

Ah, okay, so in this scenario the Rails server genuinely is just a back-end thing, they aren't actually doing development *on* the server and don't need to modify it.

In that case, yeah, having servers running for them makes perfect sense and is probably worth doing. We have like half a dozen internal servers - some autobuild and deploy whenever our development branch is updated, and one that holds our last major stable release which is generally for normal QA and detached service endpoints. They all reset their data daily. At my last job we also had a few empty server "slots" that anyone could quickly and easily deploy to if they wanted some specific feature to be looked at by other people before it got merged.

I'd say something like that makes perfect sense, especially if you can automate it. You don't want your QA testers generally running around on a constantly changing server, but sometimes you do want them to *look* at something before it's merged.

Plus they provide other benefits, like being able to get feedback on in progress major features from clients before they are merged to the main branch.

Summary: Multiple ad hoc QA servers and a separate "stable" server for detached services and normal QA seem perfectly reasonable, and isn't what I originally thought you meant at all.

Depending on your team, it might also be perfectly reasonable to say no (if you've got reliable db dumps and decent build scripts that make it easy, for example), but most places I have been through have large enough dependency stacks that even the actual Rails developers often pull from shared servers for chunks of their stack (I don't spool up an image server locally, for example, unless I absolutely need to; I just use our "latest" version)

GlyphGryph fucked around with this message at 18:06 on Sep 21, 2016

Peristalsis
Apr 5, 2004
Move along.
Never mind - I think I figured it out. The partial creates the field, and javascript is used to populate it.

A former developer created a partial for selecting datafiles from our system, and used it in multiple forms. I'm trying to use this partial in a new place, and so I'm looking at it for the first time. The very last line in the partial is this:
code:
<%= locals[:f].hidden_field(:datafile_ids, multiple: true, class: "hidden_file_select") %>
The bootstrap_form_for parameter is being passed into the partial as locals[:f], and the partial then accesses that, presumably as a means of passing information back to the main form in a format that will automatically be passed as a submit parameter. It seems like a bad approach to me, but maybe it's an idiom I'm not familiar with. Is this normal?

Peristalsis fucked around with this message at 21:29 on Oct 3, 2016

Peristalsis
Apr 5, 2004
Move along.
I have a completely new problem now.

We just discovered that one of the objects in our application can't be saved*, because it tries to save a row in a related has_many, through: ... link table without a created_at or updated_at field, and those columns have a non-null constraint on them.

I thought these rows were automatically populated by Rails. We did upgrade from Rails 3.2(?) to 4.2 in the last year - does Rails 4 handle this differently - maybe saving the row before populating the timestamps and saving again? These columns were apparently being populated appropriately in January of this year (when I think we were still on Rails 3.2 for this application).

* This is true when updating the object - and I think when creating a new one, though I need to do more testing to determine the exact conditions that trigger the error.

Peristalsis fucked around with this message at 22:18 on Oct 3, 2016

xtal
Jan 9, 2011

by Fluffdaddy

Peristalsis posted:

I have a completely new problem now.

We just discovered that one of the objects in our application can't be saved, because it tries to save a row in a related has_many, through: ... link table without a created_at or updated_at field, and those columns have a non-null constraint on them.

I thought these rows were automatically populated by Rails. We did upgrade from Rails 3.2(?) to 4.2 in the last year - does Rails 4 handle this differently - maybe saving the row before populating the timestamps and saving again? These columns were apparently being populated appropriately in January of this year (when I think we were still on Rails 3.2 for this application).

Check the callbacks on the join table model for the one that updates timestamps. It's unusual to me that this doesn't Just Work, but if the callback is there, then you know whatever is writing the join-table row isn't going through the AR model. Otherwise there's a problem with the timestamp columns that is eluding Rails' heuristics for adding those callbacks.
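
A couple of quick console checks for that (YourJoinModel is just a placeholder for whatever the join class is):

Ruby code:
# Does Rails even see timestamp columns on the join model?
YourJoinModel.column_names.grep(/_at$/)   # expect ["created_at", "updated_at"]

# Is automatic timestamping turned on at all?
ActiveRecord::Base.record_timestamps      # expect true

# Note: SQL-level writes (update_all, raw inserts, etc.) bypass these
# callbacks entirely, which is one way NULL timestamps can sneak in.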

Peristalsis
Apr 5, 2004
Move along.

xtal posted:

Check the callbacks on the join table model for the one that updates timestamps. It's unusual to me that this doesn't Just Work, but if the callback is there, then you know whatever is writing the join-table row isn't going through the AR model. Otherwise there's a problem with the timestamp columns that is eluding Rails' heuristics for adding those callbacks.

Thanks - do you mean check our own code, or under the hood of the Rails voodoo? From our perspective, the entire join table model consists of the two belongs_to statements - that's all that's in the class:
code:
class FooBar < ActiveRecord::Base
  belongs_to :foo
  belongs_to :bar
end
I did find that the association is handled weirdly in the update controller method.
Given the classes Foo, Bar, and FooBar, FooController.update does the following:
code:
@foo = Foo.find(params[:id])
@foo.bars = []
@foo.update_attributes(foo_params)  # <-- I'm not sure where foo_params comes from just yet, either
...
respond_to do |format|
  if @foo.save
    # Go to @foo's details screen
  else
    # Go back to edit screen
  end
end
It attempts to delete all associations and then build them back up from scratch, and right now I'm guessing either that's the source of the problem, or foo_params is (since I don't see that defined anywhere). Maybe foo_params is some built-in helper I'm not familiar with, but if not, I need to figure out where it's coming from.

Peristalsis fucked around with this message at 15:54 on Oct 4, 2016

xtal
Jan 9, 2011

by Fluffdaddy
Yes, Rails automatically adds callbacks if it sees those fields in the database. So you can see if there's something going wrong there, but I think your conclusion thus far is likely.

Peristalsis
Apr 5, 2004
Move along.

xtal posted:

Yes, Rails automatically adds callbacks if it sees those fields in the database. So you can see if there's something going wrong there, but I think your conclusion thus far is likely.

It turns out to be quite the mess.

update_attributes is a deprecated method that updates the object's attribute values, then tries to save. Saving there is premature, but that's okay, because the object isn't valid at this point. So we have an obsolete update that has to fail, being used to change attribute values, when I think assign_attributes would have worked just fine. Then, in the "..." part, there's a call to a homegrown sanitizing method that changes attributes with empty string values into nils. That method uses attribute_names, which is also deprecated (or maybe just moved). And there are multiple versions of this sanitizing method scattered throughout the code.

I think I just need to rewrite this part (at least) from scratch - the code that exists just doesn't seem worth saving. I am curious, though: what's the "right" way to remove all associated objects from another object? Is the line @foo.bars = [] okay?

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
foo_params isn't any sort of built-in helper (that I'm aware of), just a convention, mostly used with Strong Parameters. If it's not plainly defined in the controller, it sounds like you have some "Clever" metaprogramming going on

Peristalsis posted:

I am curious, though: what's the "right" way to remove all associated objects from another object? Is the line @foo.bars = [] okay?
Generally yes, but it depends on your intention, on what the dependent setting on the association is, and also on when you want the removal to happen. If you just want to delete them right away, you could use foo.bars.clear
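
For reference, the conventional shape is roughly this (the attribute names are guesses, since I haven't seen the rest of the controller):

Ruby code:
class FoosController < ApplicationController
  def update
    @foo = Foo.find(params[:id])
    # update assigns and saves in one step; with bar_ids: [] permitted below,
    # submitting a new set of ids replaces the association, so there's no need
    # to blank out @foo.bars by hand first.
    if @foo.update(foo_params)
      redirect_to @foo
    else
      render :edit
    end
  end

  private

  # The Strong Parameters convention: a private method in the same controller
  # that whitelists what mass assignment is allowed to touch.
  def foo_params
    params.require(:foo).permit(:name, bar_ids: [])
  end
end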
