|
Vagrant! Vagrant was the word I was looking for. kayakyakr posted:Sounds like you either have someone dedicated to full devops, or you are losing money on it. It's one person, he is the co-owner of the company, and he spent like probably a week on it, and then spends like maybe a couple hours a week on it moving forward, less as we get further away from the initial deployment phase. He isn't dedicated to it in the sense that it's his only job, he is a consultant and PM most of the time. So, you're close, sort of. He is our devops guy. e: I got the azure ruby 233 docker image running on azure! Now I'm trying to figure out what is different between mine and theirs that is causing mine to prevent ssh. I think it's the fact that the container is closing immediately so I gotta put something in there until I get the rest going.
|
# ? Aug 6, 2018 19:51 |
|
|
# ? Jun 13, 2024 06:08 |
|
I'm working with CarrierWave to upload files as attachments to experiment objects. For dev, I have a local minio docker container for file storage, and I guess I'm using fog in some way to interact with it - mostly I'm copying similar code out of another project. The files seem to upload fine, but when a file is deleted by the user, I can't seem to scrub it from the minio instance. Here's my setup: code:
code:
NoMethodError at /datafiles/10023
undefined method `remove_previously_stored_files_after_update' for 0:Fixnum
I've tried each approach individually, and both together, with no luck.
|
# ? Aug 14, 2018 18:45 |
|
I'm wrong
xtal fucked around with this message at 01:01 on Aug 15, 2018 |
# ? Aug 15, 2018 00:24 |
|
I'd think you wouldn't need to clean up the attached file, when you call @datafile.destroy, carrierwave should take care of removing the file out of storage for you
|
# ? Aug 15, 2018 00:42 |
|
manero posted:I'd think you wouldn't need to clean up the attached file, when you call @datafile.destroy, carrierwave should take care of removing the file out of storage for you This is what seems to be happening for the other project from which I copied some of the setup and code (according to a developer on the project - I haven't verified it myself). It makes me think I just have something configured wrong.
|
# ? Aug 15, 2018 17:00 |
|
Peristalsis posted:This is what seems to be happening for the other project from which I copied some of the setup and code (according to a developer on the project - I haven't verified it myself). It makes me think I just have something configured wrong. I'd try removing all the stuff like the entire clean_s3 method, and your destroy action should just be @datafile.destroy, don't bother with the calls to remove_uploaded_file! and save
|
# ? Aug 15, 2018 20:11 |
|
manero posted:I'd try removing all the stuff like the entire clean_s3 method, and your destroy action should just be @datafile.destroy, don't bother with the calls to remove_uploaded_file! and save The issue ended up being that my migration for the new Datafile object had an integer for the mounted column, instead of a string. I fixed that, and all seems to be well.
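For anyone hitting the same `0:Fixnum` error: CarrierWave stores the uploaded filename in the mounted column, so that column has to be a string. A minimal sketch of the fix as a migration — the table/column names and Rails version are assumptions, not taken from the post:

```ruby
# Hypothetical migration -- datafiles/:file and the [5.2] version are guesses.
class ChangeFileColumnOnDatafiles < ActiveRecord::Migration[5.2]
  def up
    # CarrierWave writes the stored filename here, so it must be a string;
    # an integer column is what produces the `... for 0:Fixnum` NoMethodError.
    change_column :datafiles, :file, :string
  end

  def down
    change_column :datafiles, :file, :integer
  end
end
```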
|
# ? Aug 15, 2018 22:33 |
|
The team I'm on is at the point where it would be good to get on the same page about what we should be testing, and how we should be testing it. Are there books or other resources you have used that were valuable in adopting good conventions? This can be Rails-specific, or more general. Our main problem right now is that we all have different ideas about our testing goals, and none of us are particularly right. We're all somewhat new to using structured test suites, and would benefit from a standard approach to using unit vs. feature tests, coverage goals, writing for test speed vs. readability, etc. We could also change our testing tools, if warranted. Right now, we use RSpec with Capybara. The lead developer for the project has used Cucumber in the past, and didn't care for it compared to RSpec, and I also tend to prefer simple, straightforward test tools to ones that force you to be chatty and try to have a conversation with the computer, rather than just executing assertions. That said, we'd have to find some pretty compelling benefits to change to a different system entirely.
|
# ? Aug 24, 2018 16:40 |
|
Peristalsis posted:The team I'm on is at the point where it would be good to get on the same page about what we should be testing, and how we should be testing it. Are there books or other resources you have used that were valuable in adopting good conventions? This can be Rails-specific, or more general. Our main problem right now is that we all have different ideas about our testing goals, and none of us are particularly right. We're all somewhat new to using structured test suites, and would benefit from a standard approach to using unit vs. feature tests, coverage goals, writing for test speed vs. readability, etc. I feel like Cucumber is way overkill, and tended to slow me down. I also never had "business people" writing specs on how the system should behave, so writing stuff in Cucumber was just an extra layer I didn't need. I've heard good stuff about Everyday Rails Testing, but haven't read it yet... it might be worth checking out: https://leanpub.com/everydayrailsrspec
|
# ? Aug 24, 2018 17:10 |
|
My boy Noel Rappin's Test Prescriptions books are my go-to. https://pragprog.com/book/nrtest3/rails-5-test-prescriptions I also recommend his Take My Money book if you deal with any sort of payment processing.
|
# ? Aug 24, 2018 17:20 |
|
Just use minitest like our lord dhh gave you. Rails is omakase, and all
|
# ? Aug 24, 2018 22:37 |
|
xtal posted:Just use minitest like our lord dhh gave you. Rails is omakase, and all I think my company's CTO unironically holds this position.
|
# ? Aug 25, 2018 02:29 |
|
There’s something rather common that I’ve seen in Rails applications, and that’s an attempt at modeling tabular data with ActiveRecord. i.e. you have a Table model, where each Table has many TableRows and TableColumns, each in a specific non-primary key order. Both my previous and current job have attempted to display tabular data with customizable columns e.g. custom order, custom column naming, static columns, and inclusion/exclusion. I’m wondering if a straight-up Excel-like implementation à la Rows and Columns is even a good fit for ActiveRecord. What have others done to implement tabular data in ActiveRecord (and therefore SQL) with arbitrary schema? Edit: another interesting twist on this is that we will actually be pulling the tabular data from another source that provides the data as a CSV and determines the schema of the original tabular data, and our role will be to sort, rename, include/exclude, and add to this schema’s columns. So now we will have to keep up to date with them. Pollyanna fucked around with this message at 14:56 on Oct 19, 2018 |
# ? Oct 19, 2018 14:45 |
|
milk moosie posted:There’s something rather common that I’ve seen in Rails applications, and that’s an attempt at modeling tabular data with ActiveRecord. i.e. you have a Table model, where each Table has many TableRows and TableColumns, each in a specific non-primary key order. Both my previous and current job have attempted to display tabular data with customizable columns e.g. custom order, custom column naming, static columns, and inclusion/exclusion. I haven't run into this problem, but it always seems weird when you're trying to model something that already exists, e.g. database tables. It seems like maybe AR is holding you back, I could see something where you dynamically create database tables and columns, but that might be a recipe for madness. Maybe a NoSQL solution is a better fit?
|
# ? Oct 19, 2018 15:46 |
|
manero posted:I haven't run into this problem, but it always seems weird when you're trying to model something that already exists, e.g. database tables. Or a JSONB field in postgres to hold each row's data. But yeah, AR & postgres isn't the best way to describe a totally free-form sort of table.
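A hedged sketch of that jsonb route, if the rows do end up in Postgres — table and column names are invented for illustration:

```ruby
# Hypothetical migration: one jsonb blob per row instead of a TableCell model.
class AddDataToTableRows < ActiveRecord::Migration[5.2]
  def change
    add_column :table_rows, :data, :jsonb, default: {}, null: false
    # GIN index makes containment queries (data @> '{"col":"val"}') usable
    add_index :table_rows, :data, using: :gin
  end
end
```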
|
# ? Oct 19, 2018 16:43 |
|
What gives me pause is that there’s no reason you can’t model it by making a Table with many TableColumns, each with default_name, custom_name, static_value, sort_order, and enabled? as attributes. It would technically work. I agree, I think it should just be as simple as a jsonb field on a TableSchema, but I can’t really explain why we would go with that instead of the basic relational model. To be more explicit about this, we are doing the following:
- Downloading either a CSV or a set of JSON objects representing tabular data
- Removing/deselecting columns
- Renaming columns
- Adding columns with static values (don’t ask)
- Reordering the columns
- Uploading the result to S3
and not actually persisting the data to our database. So it pretty much boils down to allowing our customers to specify a pre-processor for our tabular data. I also just don’t want to push back with an implementation of my own because it seems like being too difficult maybe? Or being too risky instead of just going with the prescribed model? Idk if that’s reasonable, that’s why I wanna justify it. I guess it’s more of a matter of how I communicate a better solution to the rest of the team. I sketched out the processing steps and it’d be a lot easier and more straightforward using a model that just has a few arrays and jsonbs for renames, ordering, static columns, and a whitelist. I figured that out by starting with the data that we want to process itself and going from there, instead of assuming ActiveRecord from the start. It’s basically the data we’d be building by querying Postgres, just without having to go through the database as much. We do enough of that already. I don’t know how to tell them this without getting “yeah but why not just do it our way” in response, I need a compelling reason. I mean, I’m convinced, but I’m the one that put it together. Pollyanna fucked around with this message at 20:16 on Oct 19, 2018 |
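The whole pre-processor described above can be sketched in plain Ruby with the stdlib CSV, no ActiveRecord needed. Every parameter name here (whitelist, renames, static_columns, order) is invented — they just stand in for the arrays/jsonbs the TableSchema-style model would hold:

```ruby
require "csv"

# Apply the four customer-configurable steps to raw CSV text:
# deselect columns, rename columns, add static-value columns, reorder.
def preprocess(csv_text, whitelist:, renames:, static_columns:, order:)
  table = CSV.parse(csv_text, headers: true)

  rows = table.map do |row|
    kept = row.to_h.select { |col, _| whitelist.include?(col) }  # deselect
    kept = kept.transform_keys { |col| renames.fetch(col, col) } # rename
    kept.merge(static_columns)                                   # add statics
  end

  # Reorder: keep only columns that actually exist, in the requested order
  headers = order & (rows.first&.keys || [])
  CSV.generate do |out|
    out << headers
    rows.each { |r| out << headers.map { |h| r[h] } }
  end
end
```

The output string is what you'd hand to the S3 upload, so nothing ever touches the database.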
# ? Oct 19, 2018 18:54 |
|
milk moosie posted:What gives me pause is that there’s no reason you can’t model it by making a Table with many TableColumns, each with default_name, custom_name, static_value, sort_order, and enabled? as attributes. It would technically work. I agree, I think it should just be as simple as a jsonb field on a TableSchema, but I can’t really explain why we would go with that instead of the basic relational model. Hmm, so it kinda boils down to a list of ETL steps. Perhaps instead of storing the actual data, store the transformation steps? Or maybe check out Kiba for some inspiration: https://github.com/thbar/kiba
|
# ? Oct 19, 2018 21:26 |
|
milk moosie posted:There’s something rather common that I’ve seen in Rails applications, and that’s an attempt at modeling tabular data with ActiveRecord. i.e. you have a Table model, where each Table has many TableRows and TableColumns, each in a specific non-primary key order. Both my previous and current job have attempted to display tabular data with customizable columns e.g. custom order, custom column naming, static columns, and inclusion/exclusion. You already have tabular data and it's called the RDBMS. Re-implementing it in AR is notoriously difficult because it's an anti-pattern.
|
# ? Oct 20, 2018 23:38 |
|
Right now we have two databases. We need to get data (about 300k rows) from a secondary database, use 3 different primary keys to the primary database to retrieve information, and then write the data to a file. The current system (all with active record) is this:
- Establish connection to the secondary database
- Make a request for the 300k rows
- Reconnect to primary database
- Map the rows using the primary keys, then inject a hash to hold the data from those rows
- Then create an array collating the data.
It's slow as gently caress and I'm sure it's far from the standard way to go about it. What's the correct way to get massive amounts of data connected between databases?
|
# ? Oct 30, 2018 19:30 |
|
MasterSlowPoke posted:Right now we have two databases. We need to get data (about 300k rows) from a secondary database, use 3 different primary keys to the primary database to retrieve information, and then write the data to a file. AR is slow.... Maybe look into the Sequel gem, which I've used to pretty good effect for syncing records between databases. You wouldn't even need to create models, you can just write the queries by hand and deal with wiring stuff up manually if it's not too bad. Kiba Pro also has a SQL source, so you could use that as an intermediary if you don't want to hook it all up yourself.
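A sketch of the Sequel approach — the connection URLs, table names, and column names are all assumptions for illustration, not taken from the post:

```ruby
require "sequel"

# Two plain connections, no models, no establish_connection swapping
SECONDARY = Sequel.connect(ENV["SECONDARY_DATABASE_URL"])
PRIMARY   = Sequel.connect(ENV["DATABASE_URL"])

# Pull just the key column out of the secondary DB -- bare values, no objects
ids = SECONDARY[:source_rows].select_map(:address_id)

# Query the primary DB in batches; each row comes back as a plain Hash
ids.each_slice(10_000) do |batch|
  PRIMARY[:addresses].where(id: batch).select(:id, :street).each do |row|
    # row => { id: 1, street: "..." } -- write it to the file here
  end
end
```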
|
# ? Oct 30, 2018 19:41 |
|
Why would you disconnect from your primary database? Don't use active record against the secondary database, just make raw SQL calls. Why are the databases separate? Is rails the right system to use? Should this sort of reporting be done in a background task?
|
# ? Oct 30, 2018 22:26 |
|
It's using ActiveRecord::Base.establish_connection to swap databases. The really slow part is pulling the associated data from the primary database. It's just a huge find query reduced to a hash: code:
Addresses.where(id: query.map(&:address_id)).pluck(:fields).inject({}) {...}
It's typically done in the job queue.
|
# ? Oct 31, 2018 00:40 |
|
Use query.pluck(:address_id) instead of the map call.
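To make the difference concrete (relation and column names taken from the snippet above; the rest is a sketch):

```ruby
# map(&:address_id) instantiates a full ActiveRecord model for every one of
# the 300k rows, only to throw all but one attribute away:
ids = query.map(&:address_id)

# pluck pushes the projection into SQL (SELECT address_id FROM ...) and
# returns bare values -- no model objects are ever built:
ids = query.pluck(:address_id)
```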
|
# ? Oct 31, 2018 02:17 |
|
After switching back to the database with the Addresses table I can't pluck anymore right? That does another select query and I no longer have access to that table.
|
# ? Oct 31, 2018 02:20 |
|
Can you do the pluck before switching? I assumed it was still a query object and not the result based on the name :-/ edit: if these are different models I'm pretty sure you can change the connection on just the relevant model. I worked on a system a few years back that had different databases for a couple different models always. necrotic fucked around with this message at 03:51 on Oct 31, 2018 |
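A sketch of that per-model connection pattern — the `:secondary_db` key (in database.yml) and the table name are assumptions:

```ruby
# An abstract base class pins its whole subtree to the other database,
# so the default connection for everything else is never swapped.
class LegacyRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :secondary_db  # hypothetical database.yml entry
end

class SourceRow < LegacyRecord
  self.table_name = "source_rows"    # hypothetical table name
end

# SourceRow queries hit the secondary DB; Address etc. stay on the primary.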
# ? Oct 31, 2018 03:26 |
|
I'd like to set up the model to use the database, but unfortunately due to the way they set up their databases (there are multiple versions of this database, 1 per year, for some reason I'm not sure of) I can't set that up in a non-hacky way. Did some research and decided on doing something kind of like pluck_in_batches: Ruby code:
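A hedged sketch of plucking in batches with stock ActiveRecord — model and column names are assumed. `in_batches` yields relations rather than arrays of records, so `pluck` still runs as one projected SELECT per batch:

```ruby
# Build the id => attributes lookup without instantiating 300k models
lookup = {}
Addresses.where(id: address_ids).in_batches(of: 10_000) do |batch|
  batch.pluck(:id, :street, :city).each do |id, street, city|
    lookup[id] = { street: street, city: city }
  end
end
```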
|
# ? Oct 31, 2018 06:53 |
|
MasterSlowPoke posted:I'd like to set up the model to use the database, but unfortunately due to the way they set up their databases (there's multiple versions of this database, 1 per year for some reason I'm not sure) I can't set that up in a non-hacky way. Ruby code:
|
# ? Nov 1, 2018 04:47 |
|
Using multiple DBs with ActiveRecord is always going to be painful (right now, at least.) I would try to do as much of this with SQL as possible. This will let you avoid AR's shortcomings with regard to your use case, and also give you better performance.
|
# ? Nov 1, 2018 04:52 |
|
I'm not sure what your budget / resources are like for infrastructure but this is starting to feel like a job for Elasticsearch. A small service running with DB2 can regularly serialize the needed data. Your Rails app can very quickly grab an array of the keys you need from ES and start querying its own DB.
|
# ? Nov 6, 2018 09:08 |
|
I have a weird problem. I'm uploading spreadsheet and/or CSV files for processing. Some of the CSV files have a Byte Order Mark (BOM) at the beginning of the file and it's screwing everything up. The BOM is prepended to the first header value in the top row, and I'm having trouble figuring out how to deal with this. I've tried a number of different things, from cleaning the key in each key-value pair to trimming the stream directly in tempfile, but I keep hitting a couple of roadblocks:
1) Using String#start_with? and the like to locate the BOM doesn't work
2) File.open is supposed to allow you to specify a BOM-friendly mode, but I'm using Roo, which doesn't seem to have a way to specify that.
I tried using the CSV library for csv files, but it also doesn't seem to be able to digest this file. Has anyone else dealt with this successfully?
|
# ? Nov 27, 2018 19:42 |
|
Peristalsis posted:I have a weird problem. I'm uploading spreadsheet and/or CSV files for processing. Some of the CSV files have a Byte Order Mark (BOM) at the beginning of the file and it's screwing everything up. The BOM is prepended to the first header value in the top row, and I'm having trouble figuring out how to deal with this. The Roo documentation says it accepts `File` objects: https://github.com/roo-rb/roo/blob/c83efbb8774d53701db5ec6815cc7e720389caa2/README.md#usage Try using `File.open` with the BOM-mode stuff you read, then pass the file handle to `Spreadsheet.open`. This is probably also how you'd do it if you used the CSV stdlib.
|
# ? Nov 28, 2018 02:54 |
|
I am working to integrate a Ruby on Rails application in our workflow (developed by some other company) and I have a question: How can I get rid of the RAILS_HOSTNAME env variable in a rails app? Whenever the rails application redirects, it uses the environment variable to construct the Location url. The problem I have with that is that I cannot (apparently, or I don't know how) put the application behind a load balancer, since the redirects are all wrong and basically nothing works. Plus, not to mention, if I need to spawn a new machine (we're hosting in AWS) then it needs to be reconfigured with the new environment variable (which I know can be automated, but I wanna get rid of it completely). The application itself is configured on that host to use nginx as a reverse-proxy, but I doubt nginx is the problem here. I tried setting the RAILS_HOSTNAME to the name of the load balancer but that made everything stop working completely (nginx throws Bad Gateway and doesn't seem to even try to contact the rails app).
|
# ? Nov 30, 2018 16:05 |
|
RAILS_HOSTNAME isn't a standard var. So it depends on what the custom logic is set up to do. It sounds somewhat like something we use to enforce the proper domain name on production apps:code:
If you really need to, ENV is just a Hash loaded at boot, so you can delete the entry in ruby. code:
|
# ? Nov 30, 2018 17:30 |
|
The Milkman posted:RAILS_HOSTNAME isn't a standard var. So it depends on what the custom logic is set up to do. It sounds somewhat like something we use to enforce the proper domain name on production apps: The only place I've seen it used (found via grep) is in config/environments/production.rb, as follows: code:
That is, if I access the app via http://localhost:1234/path then the redirect should keep localhost:1234. But if I talk with the app via http://somedomain/path, then use somedomain. Is that possible to achieve via some setting or does it have to be done manually at controller /routes level?
|
# ? Nov 30, 2018 18:51 |
|
Volguus posted:The only place I've seen it used (found via grep) is in config/environments/production.rb, as follows: Make sure you use _path and not _url in any helpers. That will use the current url. Generally default url options are only used in the case of email where there is no page context. Otherwise, if you're using _path, it'll be relative.
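For example, with `resources :datafiles` in routes.rb (the id is borrowed from the error earlier in the thread), both helpers exist for every route:

```ruby
datafile_path(10023)  # => "/datafiles/10023" -- relative, works behind any host
datafile_url(10023)   # absolute URL; the host comes from the current request
                      # (or default_url_options when there is no request,
                      # e.g. in mailers)
```

So as long as controllers and views stick to the `_path` form, redirects stay relative to whatever hostname the load balancer presented.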
|
# ? Nov 30, 2018 18:55 |
|
kayakyakr posted:Make sure you use _path and not _url in any helpers. That will use the current url. That completely flew over my head because in the app/helpers folder, the helpers are all empty: code:
|
# ? Nov 30, 2018 19:08 |
|
Helper methods that are created from the routes.rb resources have both a _path suffix and a _url suffix. https://guides.rubyonrails.org/routing.html#path-and-url-helpers
|
# ? Nov 30, 2018 19:16 |
|
I don't think I understand the problem so let me spit out some nonsense. For one, HTTP 301 requires an absolute URL in the Location header, despite the fact that many/most clients and servers can handle relative. So, it's plausible, though I haven't verified, that Rails will implicitly use your hostname when issuing redirects so that it is absolute. But to the core of the issue, what is the problem with this, and why does it cause problems with your load balancer? Are the absolute URLs referring to specific web servers instead of the load balancers themselves? Where is that environment variable being configured, and can you change it to the load balancer's host? Your solution with referer- or host-based redirection is 99.999% OK but also presents a small security flaw for open redirects. Ideally, a single-tenant app should be addressed by one authoritative hostname (and also one scheme -- HTTPS), which is configured as the one hostname in Rails. This should be the web server or the load balancer if there is one. xtal fucked around with this message at 02:47 on Dec 2, 2018 |
# ? Dec 2, 2018 02:43 |
|
I’m at my wits’ end with these stupid loving tests. Is there any way to debug why Capybara would intermittently take five seconds to find and click on a link in a page? We’re often but not always spending like 5600ms on any click_link or click_button call, and I’m having a gently caress of a time figuring out why. I don’t think it’s an async thing and we don’t seem to be spending much time in the DB or anything, so I suspect that Capybara itself is having trouble getting what it wants. Edit: we are doing within blocks, though it doesn’t seem to help much. Edit 2: you know what? Our test suite being a bloated piece of poo poo is extremely low priority. I am bringing no value by doing this. gently caress this poo poo. Pollyanna fucked around with this message at 21:17 on Jan 4, 2019 |
# ? Jan 4, 2019 20:35 |
|
|
Pollyanna posted:I’m at my wits’ end with these stupid loving tests. Is there any way to debug why Capybara would intermittently take five seconds to find and click on a link in a page? We’re often but not always spending like 5600ms on any click_link or click_button call, and I’m having a gently caress of a time figuring out why. I don’t think it’s an async thing and we don’t seem to be spending much time in the DB or anything, so I suspect that Capybara itself is having trouble getting what it wants. I feel your pain. In my experience, capybara is kind of erratic in ways like this, and it can be very difficult to figure out why. It seems odd to me that it always eventually finds your links - I've had more experiences where it just can't find some button or link that's right in front of it, causing a slow failure as it sits through its wait time, not a slow pass. Maybe someone who knows it better will comment further, but here are a few things to look at:
* Capybara locates things on the page to click on by screen coordinates, and sometimes the screen rendering makes the control move after capybara locates it, but before it tries to click on it. Capybara then tries to click on some empty part of the page (or another control entirely).
* If you have a screen that's slow to load, you can tell capybara to find some late-loading feature before you do other things, as a sort of check that the screen is done rendering before you expect too much.
* It's better to use click_on, click_link, etc. than to activate widgets and explicitly call some callback on them. I've resorted to the latter at times when I just couldn't figure out what the hell was going on, and always ended up regretting it later.
* The first test that is run seems to take a long time on the application I work on, and I assume that's just because it's loading the app while doing it.
* Try not to use before(:all) blocks to save time. It can speed things up, but has always caused me more headaches than it's worth. Just stick with before(:each)/background.
* Keeping each test short and specific can help locate exactly which step is causing the problem, and/or prevent the combination of steps that's causing the problem from being in the same test together. This may make the tests take longer, as each before and after block runs more times for lots of short tests than for a few long ones, but keeping them simple makes them more understandable and maintainable.
* Finding the root cause of capybara issues can take a lot longer than randomly changing things around (the order of steps, which tests do what things, different ways of doing the same things, etc.) until something works. I'd always prefer to know what the gently caress is actually going on, but with capybara in particular, figuring that out can be a frustrating dead end. And sometimes when you stumble across a fix, you can figure out what the problem must have been much better than trying to deduce it from first principles.
Using within blocks is good for preventing ambiguous searches, but I wouldn't expect it to fix full seconds of delay just from shortening the parsing of HTML. Good luck!
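A sketch of the "find a late-loading element first" tip — the selector and path helper are invented for illustration:

```ruby
# In a Capybara feature spec:
visit experiments_path

# find() blocks (up to the wait: timeout) until the element exists, so the
# page is known to be fully rendered before any clicking starts -- which
# also keeps the click coordinates stable.
find("#results-table", wait: 10)

within("#results-table") do
  click_link "Details"
end
```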
|
# ? Jan 5, 2019 17:48 |