|
quote:Is there any solution to the n+1 problem in rails for update/creates?
|
# ? Jun 1, 2017 22:47 |
|
Gmaz posted:activerecord-import might help, that is if you're using AR. I recently had to build a custom CSV import module, and I ended up using this gem to do the actual insertions. It works pretty much as advertised, though I had one model that just would not work with the recursive inserts and I couldn't figure out why.
|
# ? Jun 1, 2017 23:16 |
I just write custom sql for mass insert operations
|
|
# ? Jun 1, 2017 23:38 |
|
A MIRACLE posted:I just write custom sql for mass insert operations and/or this http://api.rubyonrails.org/classes/ActiveRecord/Relation.html#method-i-update_all
|
# ? Jun 2, 2017 10:53 |
|
Does that work for creating records? That's the specific question.
|
# ? Jun 2, 2017 15:21 |
|
Arachnamus posted:and/or this http://api.rubyonrails.org/classes/ActiveRecord/Relation.html#method-i-update_all I found update_all, but it's only useful if you're inserting the SAME VALUE(s) for multiple records.
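Since hand-written mass-insert SQL came up, here is a rough sketch of the multi-row INSERT shape such a helper produces. Everything here is hypothetical (table, columns, and the naive quoting); real code should escape values through `ActiveRecord::Base.connection.quote`, or just use the activerecord-import gem, which generates statements of this shape and runs model validations too.

```ruby
# Build one multi-row INSERT instead of N single-row inserts (the write
# half of the N+1 problem). Naive quoting, for illustration only.
def bulk_insert_sql(table, columns, rows)
  values = rows.map do |row|
    "(" + row.map { |v| v.is_a?(String) ? "'#{v}'" : v.to_s }.join(", ") + ")"
  end
  "INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{values.join(', ')}"
end

bulk_insert_sql("widgets", %w[name price], [["bolt", 5], ["nut", 3]])
# => "INSERT INTO widgets (name, price) VALUES ('bolt', 5), ('nut', 3)"
```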
|
# ? Jun 2, 2017 15:23 |
|
I have two Rails 5.1 projects, one backend API and one frontend dashboard that gets JSON data from the backend API (no access to the backend DB). What would be the proper strategy for handling integration-test 'fixture' data across a separate backend? Do I need to basically snapshot multiple versions of the database and restore them to reset the state of the data during testing? Since they are both Rails projects, are there any gems that can help?
|
# ? Jun 4, 2017 03:47 |
|
Stupid question: why are they two separate projects? This kind of stuff is easier when everything is smooshed together, though there are understandable reasons not to. How does the frontend project work exactly? Does it access the JSON API in JavaScript-land or server side? Is it mostly a JavaScript SPA or a bunch of different Rails views? Does it keep state on the server?

What I've done recently is:
* Write Capybara tests where the 'main' server is the backend one (so I can load data in it using factories, because I like factories)
* The Capybara tests have a global `before(:suite)` which spawns the frontend server (in such a way that it knows how to talk to the backend server that Capybara just spun up automatically)
* The Capybara frontend server URL is set to something that hits the frontend server, so `visit '/'` loads data from that server

The challenges here are:
* Getting Capybara to load the backend rails app, if you don't want to put these tests in the backend repo
* Starting your frontend server and making sure it dies with the testrunner process whether the tests finish successfully or not
* Being able to configure your frontend server on boot to hit a custom backend endpoint URL

Though when I did this, the 'frontend server' was just a development server written in Node that served a static React app, so there may be more or different challenges with what you're doing. I haven't seen many gems that help with this problem, other than specialized solutions like `ember-cli-rails` (for Ember apps with a Rails backend) which help you write Capybara tests that spin up a frontend and backend server automatically.
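For the "make sure your frontend server dies with the testrunner" part, here is a minimal sketch of the spawn-and-cleanup step. The command and env var are placeholders; in RSpec you'd call this from the global `before(:suite)`.

```ruby
# Spawn a long-running dev server and guarantee cleanup when the test
# process exits, whether the suite passed or crashed.
def spawn_server(*cmd, env: {})
  pid = Process.spawn(env, *cmd)
  at_exit do
    Process.kill("TERM", pid)
    Process.wait(pid)
  rescue Errno::ESRCH, Errno::ECHILD
    # server already exited; nothing to clean up
  end
  pid
end

# Hypothetical usage in spec_helper.rb:
# RSpec.configure do |config|
#   config.before(:suite) do
#     spawn_server("yarn", "start", env: { "API_URL" => "http://localhost:3001" })
#   end
# end
```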
|
# ? Jun 5, 2017 01:47 |
|
I have an RSpec question. Why does this test pass: code:
code:
code:
Peristalsis fucked around with this message at 21:19 on Jun 5, 2017 |
# ? Jun 5, 2017 21:16 |
|
The matcher docs explicitly state STDOUT is not checked as they replace $stdout for the check. https://relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/output-matcher So if your code uses the global instead of the constant it would work.
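A plain-Ruby sketch of what the matcher does under the hood shows why writes through the `STDOUT` constant slip past it (the helper name is made up; RSpec's actual implementation differs in detail):

```ruby
require "stringio"

# Swap the $stdout global for a StringIO, run the block, restore. This is
# essentially what RSpec's output(...).to_stdout matcher does: puts goes
# through $stdout and gets captured, but the STDOUT constant still points
# at the real IO object and is unaffected by the swap.
def capture_stdout
  captured = StringIO.new
  original, $stdout = $stdout, captured
  yield
  captured.string
ensure
  $stdout = original
end

capture_stdout { puts "hi" }        # => "hi\n"  (captured via $stdout)
capture_stdout { STDOUT.puts "hi" } # => ""      (bypasses the swapped global)
```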
|
# ? Jun 5, 2017 21:49 |
|
necrotic posted:The matcher docs explicitly state STDOUT is not checked as they replace $stdout for the check. That did the trick - thanks!
|
# ? Jun 5, 2017 22:15 |
|
I have a high level question. We now have several more or less unrelated applications, and we'd like a common mechanism for emailing the help desk from any of them. We're thinking in terms of a shared widget that we can link to in every app. What are some good ways to go about this? I don't want a completely separate copy of the code in each app, and I don't think this really warrants a separate application. Is a gem the right way to go, or is there a better way to develop and use this independent of any particular application? Peristalsis fucked around with this message at 22:21 on Jun 8, 2017 |
# ? Jun 8, 2017 22:18 |
|
Peristalsis posted:I have a high level question. Making a gem for this is fairly easy, especially if the gem doesn't need to know anything specialized about the app that it's embedded in. That's generally the right way to go.
|
# ? Jun 8, 2017 23:33 |
|
blugbee posted:I have two Rails 5.1 projects, one backend API and one frontend dashboard that gets JSON data from the backend API (no access to the backend DB). Rails is supposed to use a separate ad hoc database when you're running tests, so you shouldn't need to do any snapshot/restore trickery with your database. Hopefully I'm understanding the question correctly. I don't have much experience with integration testing that spans two applications split into backend and frontend. I'm curious why you went this route?

Either way, if I'm understanding this properly, I'd probably split it into three parts. For the backend, use something like FactoryGirl with Faker to set up dummy data and write RSpec request tests with it. Test your JSON requests with mock data and expected response output. Your goal is to make sure your API behaves the way it should given the mock input.

Then on the front end you'd do whatever tests you need interface-wise, and on the data side construct your test input to match what your API would respond with in various request scenarios. If you follow the above strategy for your backend, then what you're looking to do is make sure your front end actually uses API responses properly.

For the integration part that glues it together, you might set up a test instance of your backend application on AWS or whatever and have your frontend hit its endpoints to make sure it communicates with the API successfully instead of using mock data. Basically pretend to be a user, and don't repeat the tests you've done on the other layers. Amish Ninja fucked around with this message at 23:56 on Jun 8, 2017 |
# ? Jun 8, 2017 23:50 |
|
Peristalsis posted:I have a high level question. Echoing kayakyakr you can use a gem for this. You can even publish private gems using a service like Gem Fury, or hosting your own internal repo. If it's something JS based you could maybe get away with a shared script on your CDN instead of a gem.
|
# ? Jun 9, 2017 00:31 |
|
kayakyakr posted:Making a gem for this is fairly easy, especially if the gem doesn't need to know anything specialized about the app that it's embedded in. That's generally the right way to go. necrotic posted:Echoing kayakyakr you can use a gem for this. You can even publish private gems using a service like Gem Fury, or hosting your own internal repo. Thanks folks - I'll try making my very first gem. I don't know if it's going to involve any JavaScript at this point, but if it does, it can still just go in the gem, right?
|
# ? Jun 9, 2017 15:30 |
|
You can include assets in a gem, yes: http://guides.rubyonrails.org/asset_pipeline.html#adding-assets-to-your-gems
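If it helps, the skeleton for shipping assets in a gem is tiny. A hedged sketch with hypothetical names, following the guide linked above — subclassing `Rails::Engine` is what makes the gem's `app/assets` directory visible to the host app's asset pipeline:

```ruby
# lib/help_widget/engine.rb (hypothetical gem "help_widget")
# The engine registers app/assets/javascripts/help_widget.js, which
# lives inside the gem, with the host app's pipeline.
module HelpWidget
  class Engine < ::Rails::Engine
  end
end
```

The host app can then pull it in with `//= require help_widget` in its application.js manifest.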
|
# ? Jun 9, 2017 16:00 |
|
The backend is an API that serves JSON data for different user types (consumers vs. businesses) and a mobile app. One of the frontends happens to be a web portal written in React/Rails for only one type of user, so we didn't want to mix the code in with the backend. Amish Ninja posted:For the backend use something like FactoryGirl with Faker to set up dummy data for the back end and set up rspec request tests with it. Test your json requests with mock data and expected response output. Your goal is to make sure your API is behaving the way it should given the mock input. We've got this part under control. What I was hoping is that I could reuse all of the FactoryGirl/Faker stuff we've already written for the backend tests to "set up" the database for Capybara on the frontend. Sivart13 posted:What I've done recently is: I will try this if it will allow me to reuse the factories. Thanks
|
# ? Jun 11, 2017 16:05 |
|
It would, as you are spawning from the backend, where the factories live. ember-cli-rails makes this painless with Ember, but I think you'd basically have to spawn a process manually for React (and then kill it in an after-suite cleanup hook), as I'm not sure of a gem that does it for you.
|
# ? Jun 11, 2017 16:24 |
|
Hey guys, I was hoping you could help me with a few questions. Fair warning, this is gonna be long and I'm not a native English speaker. I'm not even halfway through my Comp Sci masters degree but I was recently able to get an internship with a small local business. I haven't coded in any real capacity before this and our coursework has been done with C and Python, but for the last couple of months I've been learning Ruby because this is a Rails shop. At this point they have 4 developers and 2 interns (the other guy has been here for nearly 2 years), but I get the impression that they've lost a couple of devs over the past year or so. I've recently been assigned to handle support cases for one of our old Rails projects over the summer while the real devs are on vacation. I was given a basic run through the system and I'm starting to get comfortable with the Rails framework, but I'll be mostly on my own for the next month or so. From what I understand, this is a typical legacy system: the original developers are long gone, there is very little documentation and there are NO tests of any kind. It runs on Ruby 1.9.3 and Rails 3.2, which has been deprecated for a pretty long time. I've started reading some books to prepare for this, like Working Effectively with Legacy Code, Rails AntiPatterns and Rails 4 Test Prescriptions. My manager was upfront about the situation and told me that the code I'll be maintaining is due for some tender love and refactoring and that he'd like to hear about some of my ideas after the summer. The system is a bit of a mess, at least to my untrained eyes, and so I'd love to get some feedback on the ideas I'm getting about it. First order of business: upgrade Ruby and Rails I've looked at the changes from Ruby 1.9.3 to 2.4 (it seems sensible to keep using latest version) and they don't seem too bad, but the changes to Rails might be worse. 
Since the app is relatively simple, I'm going to suggest starting from a new Rails 5 application and migrating models one by one until an issue appears (fixing any antipatterns I spot along the way), then fixing issues as they come up. I'm going to spend some time this summer tracking down the correct versions of the gems we use, and I suspect some can even be discarded entirely in favor of features that are now built into Rails (or are simple enough to reimplement internally, depending on how much of the gem's functionality we really need). I'll be extrapolating from this guide. The scariest part is the lack of tests. My plan is to write some rough integration tests for the user-facing functionality and use those as guideposts. Now, this app is basically just a cache layer for a proprietary data store. Our customers are all in the public sector: municipalities and counties who use a variety of document archiving software to manage everything from building permits to marriage licenses. These software solutions (which are not ours) store their records in an Oracle database, and only the archivists can really understand the arcane systems they use to manage them. What our product does is make these documents accessible in a pretty, easy-to-use interface to the public and the civil servants who use them every day. Older versions of our software hooked directly into the original database where these documents were stored and simply displayed them. This was a bad idea, since the database didn't really belong to us and our app didn't properly handle legal requirements like shielding sensitive documents from unauthorized access. poo poo went south, lessons were learned, and the current version stores the information we need in our own database. It's still an Oracle database, because that's what our developers have always used.
Getting data from the proprietary database is done through a data pump (a term I gather is borrowed from the Oracle vernacular) that connects to the web services provided by these proprietary archiving systems. Going through the web service guarantees that we don't publish anything we're not allowed to. Some of our clients have millions of records (and this data pump code looks pretty shoddy), so the process can take upwards of two hours for some customers. To avoid showing users inconsistent data (and to speed up searches), the old developers used materialized views. Now, I'm pretty new to databases, but these seem to be another layer of caching? Short snippet demonstrating one of these materialized views (with names changed to protect the innocent): code:
Is there a better way to solve this? My first thought was simply to pump data into temporary tables and replace the old table once the pump job completes. This should ensure that the database is always in a consistent state for the app, right? Anyway, the data pump (which I've been told was hacked together in a single weekend) has been a source of many bugs. It's a plain old Ruby script of about 2000 lines, run through a Rake task which we schedule through cron or Windows Task Scheduler (a lot of our customers insist on Windows servers). The code itself is ugly as hell: global variables, rescuing Exception without handling any of them, unused methods, terrible variable names, and inconsistent updating of the materialized views. I really want to fix it, but I think rewriting it might be easier. My idea for improving the whole process was to make this an ActiveJob, or even a series of specialized ActiveJobs for the different kinds of records we retrieve through the web service. This way we could give clients more control over when they schedule data refreshes, as well as the ability to fetch individual cases/journals/permits if something is out of sync. It would also mean we could schedule these jobs the same way regardless of OS, which is always a plus. Speaking of OS, the way we deploy this app seems off. Everything is done by hand, so it's pretty error-prone. In the Olden Days, every customer had their own repository in our GitLab which was synced with the master repository. This allowed the old devs to create custom functionality whenever a customer requested it, but it was a giant pain to maintain. Installation was done by hand: remote into the machine, pull the repo, set up the database.yml and migrate the database.
The new way they cooked up still has individual repositories for all of our customers, BUT the main functionality has been stuffed into a Rails Engine. Custom features are implemented as modules which are switched off for all customers except the one who requested them. When we need to manage an installation or upgrade, we manually remote into their servers, switch the engine repo to the correct version, pull the migrations into the customer app and migrate their database. Customers don't all buy their upgrades at the same time, so some are running older versions of the engine. I feel like I'm missing something fundamental about the separate repos, because as far as I can tell the ONLY thing that differs between customers is the database settings. Surely we could solve this some other way, like having all customers pull from the engine repo and using it as a Rails app directly. What's the normal way of deploying apps like these?
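On the staging-table idea from the post: loading into a staging table and swapping names once the pump finishes does keep readers on a consistent snapshot. A sketch of the statement sequence (table name hypothetical; note that Oracle commits each DDL statement implicitly, so this is three fast renames rather than one atomic transaction):

```ruby
# Produce the rename/drop statements for a staging-table swap: the app
# keeps reading `table` while `table_staging` is being loaded, then the
# tables trade places. RENAME x TO y is Oracle's syntax.
def swap_statements(table)
  [
    "RENAME #{table} TO #{table}_old",
    "RENAME #{table}_staging TO #{table}",
    "DROP TABLE #{table}_old"
  ]
end

swap_statements("documents")
# => ["RENAME documents TO documents_old",
#     "RENAME documents_staging TO documents",
#     "DROP TABLE documents_old"]
```

In the app, each statement would be run through `ActiveRecord::Base.connection.execute` at the end of the pump job.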
|
# ? Jul 19, 2017 20:21 |
Maista posted:First order of business: upgrade Ruby and Rails I've looked at the changes from Ruby 1.9.3 to 2.4 (it seems sensible to keep using latest version) and they don't seem too bad, but the changes to Rails might be worse. Since the app is relatively simple, I'm going to suggest starting from a new Rails 5 application and just migrating models one by one until an issue appears (fixing any antipatterns I spot along the way) and then fixing them one by one. I'm going to spend some time this summer tracking down the correct versions of the gems we use and I suspect there are some that can even be discarded entirely in favor of features that are now built into Rails (or simple enough to reimplement internally, depending on how much of the gems functionality we really need). I'll be extrapolating from this guide. The scariest part is the lack of tests. My plan is to write some rough integration tests for the user-facing functionality and use those as guideposts. Adding tests sounds like a good project. Upgrading the stack when the real devs are on vacation sounds like a bad project. Do you not have a backlog of tickets to pull from? Maista posted:
I would be interested in the context for this, but there's nothing wrong with including database-specific raw SQL; every codebase has that
|
|
# ? Jul 19, 2017 21:30 |
|
So you've described about 20 different issues. I'll try to approach them one at a time:

1. Yeah, you're going to have some fun upgrading 3.2 -> 5.1. The 4 -> 5 migration isn't really all that bad, with just a few config changes being breaking, but I seem to remember the 3 -> 4 migration being a bit of a pain. You can approach it two ways: write an entirely new version of the app in Rails 5, or go through the upgrade process. You can probably clear up all the upgrade crap in a day or two. Switch to Bundler (if it's not already on Bundler), then bump versions and fix until it works. If you want to do it right, you'd write tests first, then upgrade, and you'll know you did it well because the tests will all pass. Rewriting always takes longer than you think it will, and you still need to write tests to make sure it all works.

2. This is where I would spend most of my time: if you can find a better solution for migrating data from the proprietary DBs into your own, you'll vastly improve the environment. The more stable it is, the better. The worst part of that is that it's Oracle. Yuck. If I had my druthers, I'd migrate it all to Postgres and not look back. Your predecessors' solution to run it via a deployable rake task on cron or the scheduler wasn't bad. It sucks for maintenance, but you've gotta have something that lives near your clients' servers, and better a standalone rake task than the entire dang app. Which leads us to:

3. You're describing a multi-tenant application. This is a very, very common use case. Generally that's done via one database and one server, with all records tagged with a particular client's ID. Any request that comes in is scoped to that client, and features are gated using one system or another. Rails loves to be a monolith. Every microservice implementation I've seen has not been worth the pain of development to get there (since services are never as isolated as advertised).
Running on a client's hardware is always going to cause some ugliness with deploys. You can probably hack Capistrano 3's environments to configure deploys so it's at least not a manual process.
|
# ? Jul 19, 2017 21:31 |
|
kayakyakr posted:The worst part of that is that it's Oracle. Yuck. If I had my druthers, I'd migrate it all to postgres and not look back. Oracle is like the mob: you don't just leave the family, unless it's in a coffin. At my last govt job, out of a team of six tasked with migrating from Oracle to Postgres, we had one guy whose entire job was figuring out how to break that contract without throwing the whole operation into tens of millions of dollars in legal costs. Oracle is a loving nightmare.
|
# ? Jul 19, 2017 23:50 |
|
I'm reviewing some code that adds new rake tasks in a file, where the guy who wrote it declared variables outside of the tasks. He seems to be using them as a way to pass data between tasks. For example: myfile.rake code:
|
# ? Jul 25, 2017 22:15 |
|
That definitely 'feels' like it shouldn't work. I guess depending on how those tasks are invoked my_hash is a top level variable still in memory? If it actually works and it's just a one-off I suppose it doesn't hurt too much.
|
# ? Jul 25, 2017 22:54 |
Rake tasks shouldn't be sharing variables like that. Offload the "populates and returns a hash" parts to a `lib/my_library.rb` class
|
|
# ? Jul 25, 2017 23:03 |
|
It works because my_hash is a variable scoped to the namespace block, and the subsequent task blocks have access to the parent scope(s). It's not good practice, but if the tasks are set up properly it's not terrible. In this case the tasks should depend on the previous tasks, listed in an array with :environment first (phone posting code): code:
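A standalone sketch of the prerequisite form described above (task names are made up; the `require`/`extend` lines are only needed outside a Rakefile, and in a Rails `.rake` file you'd list `:environment` first as noted):

```ruby
require "rake"
extend Rake::DSL  # only needed outside a Rakefile

namespace :import do
  my_hash = {}

  task :populate do
    my_hash[:rows] = [1, 2, 3]
  end

  # Declaring :populate as a prerequisite guarantees it runs first, so the
  # shared variable is filled in no matter which task the user invokes.
  task process: [:populate] do
    puts my_hash[:rows].sum
  end
end

Rake::Task["import:process"].invoke  # prints 6
```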
|
# ? Jul 26, 2017 00:09 |
|
The Milkman posted:That definitely 'feels' like it shouldn't work. I guess depending on how those tasks are invoked my_hash is a top level variable still in memory? If it actually works and it's just a one-off I suppose it doesn't hurt too much. A MIRACLE posted:Rake tasks shouldn't be sharing variables like that. Offload the "populates and returns a hash" parts to a `lib/my_library.rb` class Thanks all. I helped him remove some of the dependencies, and we're going to leave the rest in there for now. Coding around them would basically require completely re-creating this portion of the data import. He claims that the variables don't persist between rake runs - assuming that's true, it relieves my concern about them clogging up memory indefinitely. necrotic posted:It works because my_hash is a variable scoped to the namespace block, and the subsequent task blocks have access to the parent scope(s). I forgot to mention/include that the dependencies are explicit, so that the third task does automatically run the second task first.
|
# ? Jul 26, 2017 21:45 |
|
Rake runs almost always happen in their own unique processes. The exception can be in development when using Spring, Spork, or Zeus, which start a single process and share it to make iterating faster. It should never be an issue in production environments.
|
# ? Jul 27, 2017 00:19 |
|
I'm not clear on how RVM sets up default and global gemsets. The documentation here seems pretty clear, but it doesn't coincide with what I'm seeing on my system. For example, if I read it correctly, the file ~/.rvm/gemsets/global.gems determines what is in the global gemset when I install a new ruby version. If I have additional or different gems specified in ~/.rvm/gemsets/<ruby implementation> or /gemsets/<ruby implementation>/<version number>, those affect things too (I'm not sure whether they are appended, or whether one supersedes the others). However, I just installed ruby 2.4.1 with RVM, and my ~/.rvm/gemsets/ruby-2.4.1@global/gems directory has a bunch of gem directories that are not represented in ~/.rvm/gemsets/global.gems (which only contains gem-wrappers, rubygems-bundler, rake, and rvm). Further, when I use the new ruby with the global gemset and execute gem list, it shows everything in global.gems, as well as 2 or 3 other gems (like bigdecimal) that I don't see specified anywhere. So, how is the initial global gemset determined, and why do I seem to have extra gems installed in it? Also, I created a new gemset for my new ruby version, then made it the default by executing rvm use 2.4.1@newset --default, as in the instructions here. However, my (default) gemset still exists, and executing rvm use 2.4.1 with no gemset specified still switches me to the (default) gemset, not the one I tried to make default. Am I misunderstanding something here? Peristalsis fucked around with this message at 21:51 on Aug 24, 2017 |
# ? Aug 24, 2017 17:40 |
|
2.4 moved some core packages to gems, iirc. Things like BigDecimal and minitest are core gems shipped with ruby now.
|
# ? Aug 24, 2017 18:26 |
|
Anyone have any specific sites that teach Ruby well? Working on a Rails application at work and I'm unfamiliar with both Ruby and Rails (coming from an ASP.NET C# MVC background). I usually use Pluralsight but I wanted to check and see if anyone else knew any good ones? Some of my coworkers are using Codecademy.
|
# ? Aug 25, 2017 16:08 |
|
jiffypop45 posted:Anyone have any specific sites that teach ruby well? Working on a rails application at work and I'm unfamiliar with both ruby and rails (coming from an asp.net c# mvc background). I usually use pluralsight but I wanted to check and see if anyone else knew any good ones? Some of my coworkers are using code academy. I'd definitely check out either The Odin Project or Hartl's online book (free). Although I haven't picked up Ruby myself, I know a lot of people who use it and these are pretty often recommended resources. I've done a bit of the Hartl tutorial and it was nice because it didn't spend too much time on basics that any non-beginner should know.
|
# ? Aug 25, 2017 16:32 |
|
jiffypop45 posted:Anyone have any specific sites that teach ruby well? Working on a rails application at work and I'm unfamiliar with both ruby and rails (coming from an asp.net c# mvc background). I usually use pluralsight but I wanted to check and see if anyone else knew any good ones? Some of my coworkers are using code academy. Michael Hartl's Rails Tutorial was the best thing when I got started with Rails. I haven't followed it since and it has been 5 years or so, but I have been meaning to go back and check it out for Rails 5. Railscasts used to be the de facto resource until the guy basically rage quit. It's still helpful for older versions of Rails or learning some of the basics, though much of the material is fairly out of date. For more up-to-date video tutorials, GoRails seems to have picked up the torch. RubyTapas is another long-running site that focuses more on pure Ruby. I jumped on SafariBooksOnline back when they had a sale for $199 a year (I believe it's $399 a year normally). If you have a serious tech book buying addiction, like I did, I feel like it can really pay for itself. Having instant access to both books and videos from dozens of publishers has been really nice, and I know they have a lot of up-to-date books on Rails and Ruby there.
|
# ? Aug 25, 2017 20:34 |
|
I apologize if this question is too broad. I'm a GIS guy and I'm putting together a simple mapping application using Leaflet to display / interact with the map. My question is what is the best way to get Leaflet into Rails. The options appear to be:
* Using a gem
* Dropping the JavaScript library into the vendor folder
* Pointing to the Leaflet CDN
Are there any pros / cons between those options?
|
# ? Aug 28, 2017 19:15 |
|
PotatoJudge posted:I apologize if this question is too broad. I'm a GIS guy and I'm putting together a simple mapping application using Leaflet to display / interact with the map. My question is what is the best way to get Leaflet into Rails. The options appear to be:
* Gem: Can manage through Bundler, and compile it in with the rest of your application JS; may not be up-to-date with the latest version
* Vendoring: Can compile it in with the rest of your application JS; have to update by hand
* CDN: Causes a separate request to the CDN which may or may not be faster than compiling it in with your application JS; the CDN may go down, and versions could be updated/yanked out from under you

If there's a gem that's being actively maintained I tend to go that route, as it allows you to manage the version alongside the rest of the application. But I've been bitten maintaining legacy projects where the CDNs go poof. That's probably less of a concern with something decently popular like Leaflet. Vendoring is fine, but it's more of a last resort for smaller one-off libraries you don't mess with much. Keep in mind that with Rails 5.1 you can now use yarn/webpack or whatever to grab stuff over NPM, which is likely the way to go in the future.
|
# ? Aug 28, 2017 20:20 |
|
The Milkman posted:Gem: Can manage through Bundler, and compile it in with the rest of your application JS; may not be up-to-date with the latest version It looks like none of the various gems have been updated in a while so I'll probably vendor it. No big deal for this, it's just going to be a proof of concept / training exercise for myself but it is good to have a concise listing of the pros / cons to look at down the road.
|
# ? Aug 28, 2017 22:09 |
Is anyone good at Sidekiq here? I have a deferred job on its own queue, and I want to limit that queue to a single enqueued job at a time. Here's some pseudocode for what I'm trying to achieve; I'm just not sure how to do it with the Sidekiq API: Ruby code:
edit: not thoroughly though. found one solution using the 'Stats' module like this: Ruby code:
A MIRACLE fucked around with this message at 22:07 on Sep 15, 2017 |
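One way to keep the check-then-push guard testable is to inject the queue-size lookup; in the app that lambda would be something like `-> { Sidekiq::Queue.new("deferred").size }`, which is a cheap Redis LLEN. Note there's still a small race between the check and the push; the sidekiq-unique-jobs gem exists if that matters. A sketch with hypothetical names:

```ruby
# Enqueue only when the queue is currently empty. queue_size is a lambda
# so the Redis-touching part (Sidekiq::Queue#size in a real app) can be
# swapped out in tests.
def enqueue_if_idle(queue_size:)
  return false if queue_size.call > 0
  yield  # e.g. MyWorker.perform_async(args)
  true
end

enqueue_if_idle(queue_size: -> { 0 }) { } # => true  (queue empty, job pushed)
enqueue_if_idle(queue_size: -> { 3 }) { } # => false (a job is already waiting)
```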
|
# ? Sep 15, 2017 21:52 |
Just wanted to post here and thank y'all for the help over the past, uh, 7 years or so. The initial enthusiasm here for the platform inspired me to learn web development and is the primary reason I have such a kick rear end career now. Cheers y'all
|
|
# ? Oct 3, 2017 04:10 |
|
|
Someone just got a promotion.
|
# ? Oct 3, 2017 17:55 |