ChickenWing
Jul 22, 2010

:v:

Harik posted:

I had an interesting git merge experience today:

Someone merged with what was basically "resolve: mine", blowing away every other change.

Something like
code:
------ master --x--  A, !B
 \---- A ------/
  \--- B -----/
I want to see the changes at X that led to branch B being reverted. A merge that makes additional changes should show those changes somewhere, right?

I'll have a talk with the developer about how the hell he managed that - it was a stock GitLab merge of a pull request via the GitLab web interface to a moving dev branch, something that should happen thousands of times every day.

Are there any changes from B (like, from files that weren't conflicting)? Perhaps you merged a branch that wasn't up-to-date with the merge from B (e.g. merge local, but remote is 2 ahead and contains the merge).
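If you still have the merge, you can also diff it against each of its parents to see what the merge itself did (assuming X is the merge commit's hash):

code:
# what the merge changed relative to master's side (first parent)
git diff X^1 X
# what it changed relative to branch B's side (second parent)
git diff X^2 X
# and the topology around it
git log --oneline --graph -10 X
A merge that kept "mine" wholesale will show an empty diff against its first parent.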

James Baud
May 24, 2015

by LITERALLY AN ADMIN
I read back a few pages and didn't see anything quite like the problem I'm currently considering addressing.


I have a large long-lived project that, for better or worse, includes moderately sized binary assets in source control, although now and then bigger ones have snuck in.

A git clone with full history is quite a few gigabytes in size, and we currently bootstrap new clones by plugging in a hard drive and doing a straight copy. The real master isn't git and nobody pushes to it because $LARGECO and in-house systems galore, but people like working in git and this mostly works well enough...

I'd like to reduce the size of the repository while preserving the full history for everything except the binaries, which I'm willing to reset to "current version was there since the beginning of time / whenever else". It's okay to write off bisection and the other consequences of rewriting history in my context, although in a perfect world maybe the commits that changed these binaries wholesale would still include a bogus metadata change so that they'd retain the association in history.

Before I try too hard, is anyone familiar with a tool that can do this sort of bulk commit rewriting for me?

Edison was a dick
Apr 3, 2010

direct current :roboluv: only

James Baud posted:

I read back a few pages and didn't see anything quite like the problem I'm currently considering addressing.


I have a large long-lived project that, for better or worse, includes moderately sized binary assets in source control, although now and then bigger ones have snuck in.

A git clone with full history is quite a few gigabytes in size, and we currently bootstrap new clones by plugging in a hard drive and doing a straight copy. The real master isn't git and nobody pushes to it because $LARGECO and in-house systems galore, but people like working in git and this mostly works well enough...

I'd like to reduce the size of the repository while preserving the full history for everything except the binaries, which I'm willing to reset to "current version was there since the beginning of time / whenever else". It's okay to write off bisection and the other consequences of rewriting history in my context, although in a perfect world maybe the commits that changed these binaries wholesale would still include a bogus metadata change so that they'd retain the association in history.

Before I try too hard, is anyone familiar with a tool that can do this sort of bulk commit rewriting for me?

If you know what file paths need to be removed, you should be able to do it with something like:

code:
# '-- --all' rewrites every ref, not just the currently checked-out branch
git filter-branch --index-filter 'git rm --cached --ignore-unmatch doc/bigasset.bin data/bigmodels.bin' -- --all
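Note that filter-branch keeps backups of the old refs under refs/original, so the repo won't actually shrink until you drop those and repack. Something like:

code:
git for-each-ref --format='%(refname)' refs/original | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git gc --prune=now --aggressive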

James Baud
May 24, 2015

by LITERALLY AN ADMIN

James Baud posted:

I'd like to reduce the size of the repository while preserving the full history for everything except the binaries, which I'm willing to reset to "current version was there since the beginning of time / whenever else".
...
Before I try too hard, is anyone familiar with a tool that can do this sort of bulk commit rewriting for me?

A bit of looking around also turned up https://rtyley.github.io/bfg-repo-cleaner/

Its "--strip-blobs-bigger-than" option appears to do exactly what I want, indiscriminately (trimming old versions, preserving current ones), in what it claims is significantly less time than git-filter-branch.

lifg
Dec 4, 2000
<this tag left blank>
Muldoon

James Baud posted:

A bit of looking around also turned up https://rtyley.github.io/bfg-repo-cleaner/

Its "--strip-blobs-bigger-than" option appears to do exactly what I want, indiscriminately (trimming old versions, preserving current ones), in what it claims is significantly less time than git-filter-branch.

I've used this before, and it did the job perfectly.

netcat
Apr 29, 2008
Kind of related to the thread: I'm looking for some kind of version-handled binary storage repository to deliver stuff like VM images, application binaries, etc. We've used Nexus in the past, but I'm looking for alternatives since we're not really using Maven.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

netcat posted:

Kind of related to the thread: I'm looking for some kind of version-handled binary storage repository to deliver stuff like VM images, application binaries, etc. We've used Nexus in the past, but I'm looking for alternatives since we're not really using Maven.

Artifactory works pretty well for that kind of thing.

Taffer
Oct 15, 2010


I have some questions about monorepos and if they should be broken up, and how.

At work we have an enormous monorepo. It stores gigabytes of prebuilt binaries for dependencies, has mountains of C++ code, then all the various apps and implementations, all in the same repo. This causes a number of problems. First and foremost, the repo is slow: oftentimes simply doing git status will take 5+ minutes. I'm not sure if this is due to the sheer number of files git is scanning, an enormous history, or simply raw repo size. The other big issue is that we have an unreadable git history. One person will be working on the build system while another works on C++ and two more are working on apps, etc., all making commits at the same time. So many merge commits. Did I mention that most of the time everyone is working on the same branch? It's like pulling teeth to get them to use separate branches.

Anyway, I badly want to break it up, but I have some questions, especially if I'm going to convince my boss that we need to do this.

1) If repos are nested, will they have slowdown problems similar to a monorepo's? I'm guessing no, but I want to be sure. E.g. the repo for iOS is contained within the "core" repo that holds the build system, etc.

2) When a new feature requires work in multiple repos and they all have a feature branch, how do you... sync these branches? Particularly with CI. If I have two PRs open on two separate repos and they're supposed to go together, is there any way for a CI system to automatically understand this, or is that a lost hope?

Unrelated question: is there any way to make cmake download a git repo directly instead of creating a bunch of unnecessary folders? E.g. home/my-repo instead of home/my-repo/src/my-repo.

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Taffer posted:

I have some questions about monorepos and if they should be broken up, and how.

At work we have an enormous monorepo. It stores gigabytes of prebuilt binaries for dependencies, has mountains of C++ code, then all the various apps and implementations, all in the same repo. This causes a number of problems. First and foremost, the repo is slow: oftentimes simply doing git status will take 5+ minutes. I'm not sure if this is due to the sheer number of files git is scanning, an enormous history, or simply raw repo size. The other big issue is that we have an unreadable git history. One person will be working on the build system while another works on C++ and two more are working on apps, etc., all making commits at the same time. So many merge commits. Did I mention that most of the time everyone is working on the same branch? It's like pulling teeth to get them to use separate branches.

Anyway, I badly want to break it up, but I have some questions, especially if I'm going to convince my boss that we need to do this.

1) If repos are nested, will they have slowdown problems similar to a monorepo's? I'm guessing no, but I want to be sure. E.g. the repo for iOS is contained within the "core" repo that holds the build system, etc.

2) When a new feature requires work in multiple repos and they all have a feature branch, how do you... sync these branches? Particularly with CI. If I have two PRs open on two separate repos and they're supposed to go together, is there any way for a CI system to automatically understand this, or is that a lost hope?

Unrelated question: is there any way to make cmake download a git repo directly instead of creating a bunch of unnecessary folders? E.g. home/my-repo instead of home/my-repo/src/my-repo.
You probably want to use submodules for this, not wacky cmake scripts. You can update multiple submodules in a single root repo commit to address the CI issue.
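Roughly (submodule names are made up):

code:
# bump two submodules to their upstream branch heads...
git submodule update --remote -- ios core-build
# ...then record both new pointers in one root-repo commit
git add ios core-build
git commit -m "Bump ios and core-build for feature X"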

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Taffer posted:

I have some questions about monorepos and if they should be broken up, and how.

At work we have an enormous monorepo. It stores gigabytes of prebuilt binaries for dependencies, has mountains of C++ code, then all the various apps and implementations, all in the same repo. This causes a number of problems. First and foremost, the repo is slow: oftentimes simply doing git status will take 5+ minutes. I'm not sure if this is due to the sheer number of files git is scanning, an enormous history, or simply raw repo size. The other big issue is that we have an unreadable git history. One person will be working on the build system while another works on C++ and two more are working on apps, etc., all making commits at the same time. So many merge commits. Did I mention that most of the time everyone is working on the same branch? It's like pulling teeth to get them to use separate branches.

Anyway, I badly want to break it up, but I have some questions, especially if I'm going to convince my boss that we need to do this.

1) If repos are nested, will they have slowdown problems similar to a monorepo's? I'm guessing no, but I want to be sure. E.g. the repo for iOS is contained within the "core" repo that holds the build system, etc.

2) When a new feature requires work in multiple repos and they all have a feature branch, how do you... sync these branches? Particularly with CI. If I have two PRs open on two separate repos and they're supposed to go together, is there any way for a CI system to automatically understand this, or is that a lost hope?

Unrelated question: is there any way to make cmake download a git repo directly instead of creating a bunch of unnecessary folders? E.g. home/my-repo instead of home/my-repo/src/my-repo.

There's a reasonable chance the sole cause of the performance issue is the gigabytes of binaries. Use BFG and clean those things out of your source control history. Put them in a package manager or worst case use Git-LFS.

Taffer
Oct 15, 2010


Ralith posted:

You probably want to use submodules for this, not wacky cmake scripts. You can update multiple submodules in a single root repo commit to address the CI issue.

We had a lot of issues with submodules in the past, but I did not know about being able to update the submodules in a single commit. I'll have to look into that, thanks.


New Yorp New Yorp posted:

There's a reasonable chance the sole cause of the performance issue is the gigabytes of binaries. Use BFG and clean those things out of your source control history. Put them in a package manager or worst case use Git-LFS.

Everything bigger than 10MB is already in Git-LFS. But would that actually change performance? I thought Git-LFS just provided a means to put big files on a separate server when using something like github, but otherwise worked the same. But yeah, I'd definitely love to know some best-practices with prebuilt binaries - we definitely need them, because building all of the dependencies would take literal days, but we need a better way to manage them.

Taffer fucked around with this message at 20:01 on Jan 4, 2018

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Taffer posted:

Everything bigger than 10MB is already in Git-LFS. But would that actually change performance? I thought Git-LFS just provided a means to put big files on a separate server when using something like github, but otherwise worked the same. But yeah, I'd definitely love to know some best-practices with prebuilt binaries - we definitely need them, because building all of the dependencies would take literal days, but we need a better way to manage them.
I use Nix to solve that problem. Commit a declarative specification of how to get your dependencies, not the binaries.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Taffer posted:

We had a lot of issues with submodules in the past, but I did not know about being able to update the submodules in a single commit. I'll have to look into that, thanks.

Everything bigger than 10MB is already in Git-LFS. But would that actually change performance? I thought Git-LFS just provided a means to put big files on a separate server when using something like github, but otherwise worked the same. But yeah, I'd definitely love to know some best-practices with prebuilt binaries - we definitely need them, because building all of the dependencies would take literal days, but we need a better way to manage them.

To clarify: you update the submodule repositories themselves separately and independently with whatever feature they need, in some number of commits. You then make a single commit to the main repo that updates all the submodule references in that repository to the versions you need. This is what we do; it works. You will suffer the general submodule annoyances since updating a submodule reference is opaque and atomic, which makes conflicts and diffs more difficult. If you can avoid splitting the repo then that's preferable, in my opinion; submodules are evil.


Git stores every version of every file, and every clone carries the entire history. For text, file versions can be stored efficiently as deltas, so performance is pretty good. Binaries generally don't delta well, so every clone stores a complete copy of every version of every binary ever committed. Even if it's "just" 1 MB binaries, if you update them often -- and it sounded like you have some build artifacts in there, is that right? -- you will bloat the repo pretty fast, and a big object store makes everyday operations slower simply because there's more data for git to churn through. How big is your .git for the repo in question?
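An easy way to check:

code:
# human-readable size of the packed object store
git count-objects -vH
The size-pack figure is roughly what every clone has to carry around.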

Git-LFS tries to get around git's issues with binaries by basically replacing the file stored in the repo with a hash and a URL, which LFS then resolves more or less transparently. Git itself no longer needs to store a copy of every version of the binary; it just stores the hashes and URLs used to resolve it, which are small, so the total repo size stays down and basic repo operations stay responsive. You just have to deal with LFS's own overhead instead, which includes keeping its own cache of every version of the binaries and so on. You can store small binaries that are rarely updated directly in the tree if you want, but I'd be very hesitant to put anything >100kb in there. Never store actual artifacts of the build that need to be constantly updated in git itself; that's asking for pain. Ideally, if it can't be line-diffed then it doesn't go in git.
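You can see the pointer mechanism directly; a quick sketch (file name made up):

code:
git lfs track '*.psd'
git add .gitattributes big-texture.psd
git commit -m "Track PSDs via LFS"
# git itself stores only a small pointer (version, oid, size), not the data:
git show HEAD:big-texture.psd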

Store prebuilt binaries and build artifacts in a package manager or an artifact store with some useful API. We use Artifactory; I have no real concept of how it compares to other options. Store a text specification in JSON, XML or something in git, saying that this project has a dependency on libfoo 2.4.2, libbar 12.0.2, and so on. On configuring the build, download each requested version if it's not already present wherever the user has specified they store their externals. As a first version you can do this by making cmake download and unzip binaries from a server if it fails to find them at configure time; it's hacky, but it's what we ran on for years (to my shame). Using nix or nuget or some other actual package management system is ultimately a lot simpler than rolling your own crappy package manager in cmake or some other language, though.

Xerophyte fucked around with this message at 14:22 on Jan 5, 2018

Thom ZombieForm
Oct 29, 2010

I will eat you alive
I will eat you alive
I will eat you alive
For a personal project, I have two remotes: one is a Heroku for deployment, and the other a public GitHub repo. The Heroku remote needs a file with my API key, which resides in a folder listed in the .gitignore so as not to be tracked by the public GitHub repo. Is there a preferred method of splitting up this git project to accommodate this?

karms
Jan 22, 2006

by Nyc_Tattoo
Yam Slacker
Don't store credentials in a repository?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Thom ZombieForm posted:

For a personal project, I have two remotes: one is a Heroku for deployment, and the other a public GitHub repo. The Heroku remote needs a file with my API key, which resides in a folder listed in the .gitignore so as not to be tracked by the public GitHub repo. Is there a preferred method of splitting up this git project to accommodate this?

Your project should read the api key out of an environment variable.
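On Heroku specifically, you set it once through the CLI and it shows up in your dyno's environment (key name made up):

code:
heroku config:set API_KEY=whatever
heroku config:get API_KEY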

The Fool
Oct 16, 2003


Thom ZombieForm posted:

For a personal project, I have two remotes: one is a Heroku for deployment, and the other a public GitHub repo. The Heroku remote needs a file with my API key, which resides in a folder listed in the .gitignore so as not to be tracked by the public GitHub repo. Is there a preferred method of splitting up this git project to accommodate this?

Use a CD pipeline that handles secrets in a sane way.

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


Those are all the correct answers, but if you find yourself really needing to distribute secrets in-repo, there's always git-crypt.
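Rough shape of it, if you go that route (the path and email here are made up):

code:
# one-time setup in the repo
git-crypt init
git-crypt add-gpg-user dev@example.com
# mark which paths get transparently encrypted
echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes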

Thom ZombieForm
Oct 29, 2010

I will eat you alive
I will eat you alive
I will eat you alive
Thanks, all. I went along with setting an environment variable... I admittedly know very little about deployments / web servers in general, so I'm going to read through the Heroku documentation. Thanks!

Volguus
Mar 3, 2009

Thermopyle posted:

Your project should read the api key out of an environment variable.

What's the advantage of an environment variable vs. a file? The value of the variable still has to be set somewhere (.bashrc, or the equivalent for other shells/Windows). And the file can still be protected so it's readable only by its owner. What does the "environment" bring to the table?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Volguus posted:

What's the advantage of an environment variable vs. a file? The value of the variable still has to be set somewhere (.bashrc, or the equivalent for other shells/Windows). And the file can still be protected so it's readable only by its owner. What does the "environment" bring to the table?

Try accidentally committing an environment variable!

Also, PaaS providers have you configure env vars through your platform configuration pages.

Thermopyle fucked around with this message at 03:51 on Jan 7, 2018

Volguus
Mar 3, 2009

Thermopyle posted:

Try accidentally committing an environment variable!

Also, PaaS providers have you configure env vars through your platform configuration pages.

I've seen people commit their entire $HOME, ssh keys and all. If you have a proper .gitignore you should be good, though. Sure, service X requiring you to use env variables is one thing, but using env variables because of some perceived notion of stronger security... I don't think it's there.

Data Graham
Dec 28, 2009

📈📊🍪😋



My feeling has been that it keeps the keys out of the home directory entirely, and centralizes them at the proxy server level.

Like, with my Django apps I put the keys into SetEnv statements in the Apache (or equivalent) conf, and pass them into my app from os.environ via wsgi.py. Then if the Apache conf is owned by root with mode 600, nothing in userland can read it even if it gets shell access.

I don't know if that's Best Practice though, it's just what I've settled on. Would love to know if there's a better solution in general use.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Volguus posted:

I've seen people commit their entire $HOME, ssh keys and all. If you have a proper .gitignore you should be good, though. Sure, service X requiring you to use env variables is one thing, but using env variables because of some perceived notion of stronger security... I don't think it's there.

I mean...you're arguing that because some other things get exposed it's ok to expose something else?

What are the advantages of not using env vars? They're super simple to both create and read...

Volguus
Mar 3, 2009

Thermopyle posted:

I mean...you're arguing that because some other things get exposed it's ok to expose something else?

What are the advantages of not using env vars? They're super simple to both create and read...

I am not arguing that at all. If one has to use a file to store secrets (env vars are ultimately set from a file too), one should put the secrets file in .gitignore, or better yet, not have it in the repo at all. Done. As opposed to env vars, a file can also be structured (please don't use YAML), instead of just prop1=value2 in your .bashrc or wherever. The advantage of a file is ease of development: a development version can be mailed around, put in some location, and off one goes without spending more time than needed setting up who knows what.

And even in production, running the service with specific env variables is a technique highly dependent on the OS, tools, etc. With init.d, say, it's relatively easy to change your init script to export certain variables, but then any time you want to change them you have to mess with the script itself. With Apache you modify the .conf file; on Windows it's a whole other issue, to say nothing of everyone's favourite init system, systemd. Would you rather those values live in their config files than in one of your own? Not to mention that changing systems also changes the way they're handled, further increasing the chance of errors.

The idea is, env vars do not provide a superior secrets-storing technique compared to just a file. There are ways (heard it in the Amazon thread?) in which, on AWS, one can store sensitive things in S3 to be read only by an authorized host that must perform some bloody and mystical ritual before it is given the file, but that's a completely different thing and obviously not applicable to everyone (and I am not familiar with the details). But env vars can be a bit more cumbersome to work with and not very flexible.

necrotic
Aug 2, 2005
I owe my brother big time for this!

Data Graham posted:

My feeling has been that it keeps the keys out of the home directory entirely, and centralizes them at the proxy server level.

Like, with my Django apps I put the keys into SetEnv statements in the Apache (or equivalent) conf, and pass them into my app from os.environ via wsgi.py. Then if the Apache conf is owned by root with mode 600, nothing in userland can read it even if it gets shell access.

I don't know if that's Best Practice though, it's just what I've settled on. Would love to know if there's a better solution in general use.

Don't run Apache as root yo.

Data Graham
Dec 28, 2009

📈📊🍪😋



Or the user I launch Apache with, yeah but good point

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Volguus posted:

I am not arguing that at all. If one has to use a file to store secrets (env vars are ultimately set from a file too), one should put the secrets file in .gitignore, or better yet, not have it in the repo at all. Done. As opposed to env vars, a file can also be structured (please don't use YAML), instead of just prop1=value2 in your .bashrc or wherever. The advantage of a file is ease of development: a development version can be mailed around, put in some location, and off one goes without spending more time than needed setting up who knows what.

And even in production, running the service with specific env variables is a technique highly dependent on the OS, tools, etc. With init.d, say, it's relatively easy to change your init script to export certain variables, but then any time you want to change them you have to mess with the script itself. With Apache you modify the .conf file; on Windows it's a whole other issue, to say nothing of everyone's favourite init system, systemd. Would you rather those values live in their config files than in one of your own? Not to mention that changing systems also changes the way they're handled, further increasing the chance of errors.

The idea is, env vars do not provide a superior secrets-storing technique compared to just a file. There are ways (heard it in the Amazon thread?) in which, on AWS, one can store sensitive things in S3 to be read only by an authorized host that must perform some bloody and mystical ritual before it is given the file, but that's a completely different thing and obviously not applicable to everyone (and I am not familiar with the details). But env vars can be a bit more cumbersome to work with and not very flexible.

I do not agree with much of this, but I don't think environment variables are so much better than a file that I'm willing to put any more effort into the discussion!

Like, I wouldn't go rip out a file-based secrets system to replace it with env vars, but I would start a new project with env vars.

In fact, I wouldn't have even brought it up except for the fact that the original question was for heroku where you definitely should use env vars and not a file. Heroku is basically built around 12factor.

edit: Though Meltdown and Spectre might make me reconsider!

Thermopyle fucked around with this message at 19:19 on Jan 7, 2018

Volguus
Mar 3, 2009

Thermopyle posted:

edit: Though Meltdown and Spectre might make me reconsider!

Are you thinking that env vars are in memory all the time vs a file whose contents may be discarded after use?

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


What's nice about env vars is that they can be different every time the process is run, so if you put all configuration into them, environmental parity and credential rotation can both get a lot faster and less error-prone.

Although with docker you can get the same benefits by just mounting a config file at run time, so as long as you can automate updating that file (or files), it's not really env vars uber alles.
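i.e. either of these, per run (image and paths made up):

code:
# secret via environment variable
docker run -e API_KEY="$API_KEY" example/myapp
# or the same secret via a read-only mounted config file
docker run -v /etc/myapp/config.json:/app/config.json:ro example/myapp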

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Volguus posted:

Are you thinking that env vars are in memory all the time vs a file whose contents may be discarded after use?

No, I'm thinking about the whole paradigm.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

From a developer's point of view, you can take a third option by reading your configuration options as command line parameters. This enables the ops guys to use whichever system works best for them - want to use env vars? Launch the app with "--option $OPTION". Want to use a file? Put the launch command in a file, or possibly multiple ones for different setups.

(Also, be a cool dude and support --help.)

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

NihilCredo posted:

From a developer's point of view, you can take a third option by reading your configuration options as command line parameters. This enables the ops guys to use whichever system works best for them - want to use env vars? Launch the app with "--option $OPTION". Want to use a file? Put the launch command in a file, or possibly multiple ones for different setups.

(Also, be a cool dude and support --help.)
Do not put secrets on the command line. The command line is world-readable.

Volguus
Mar 3, 2009

NihilCredo posted:

From a developer's point of view, you can take a third option by reading your configuration options as command line parameters. This enables the ops guys to use whichever system works best for them - want to use env vars? Launch the app with "--option $OPTION". Want to use a file? Put the launch command in a file, or possibly multiple ones for different setups.

(Also, be a cool dude and support --help.)

Spring Boot went the whole nine yards on this: default options in a properties file on the classpath, overridden by a profile-specific classpath file (dev vs. prod), overridden by a file on disk in the current folder, overridden by environment vars, overridden by command-line parameters.
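So, for a stock Spring Boot app, each of these overrides the one before it (port numbers made up):

code:
java -jar app.jar                      # application.properties from the classpath
SERVER_PORT=8081 java -jar app.jar     # env var beats the files
java -jar app.jar --server.port=8082   # command line beats everything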

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


Ralith posted:

Do not put secrets on the command line. The command line is world-readable.

I don't know what you mean by this, but it sounds serious, so please tell me.

necrotic
Aug 2, 2005
I owe my brother big time for this!
Command-line options show up in process inspection, which any user can do.

Top and friends can do this very easily.
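e.g. on Linux (PID made up):

code:
# any local user can read another process's arguments
ps -ef | grep myapp
tr '\0' ' ' < /proc/12345/cmdline
# the environment, by contrast, is readable only by the owner (or root)
cat /proc/12345/environ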

Edison was a dick
Apr 3, 2010

direct current :roboluv: only
You can modify your command line to remove the password, but for security purposes you have to assume it's already too late.

I'm equally skeptical of putting it in an environment variable.

Ideally you'd use the kernel keyring or a desktop password manager, so you can take precautions like keeping the secret in locked memory where it can't be spilled to swap, but I've only a theoretical idea of how it should be done.
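With keyutils it looks something like this, though again, only a theoretical sketch (key name made up):

code:
# stash the secret in the current user's kernel keyring
keyctl add user api_key 'whatever' @u
# later, look it up by name and read it back
keyctl print "$(keyctl search @u user api_key)"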

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Volguus posted:

Spring Boot went the whole nine yards on this: default options in a properties file on the classpath, overridden by a profile-specific classpath file (dev vs. prod), overridden by a file on disk in the current folder, overridden by environment vars, overridden by command-line parameters.
AKA "god dammit where the hell is this option getting set" and "well it worked on my machine"

Edison was a dick posted:

I'm equally skeptical of putting it in an environment variable.
Environment variables (of a specific process) are not world readable.

Axiem
Oct 19, 2005

I want to leave my mind blank, but I'm terrified of what will happen if I do
In the past, I've used scripts that looked for an environment variable and, if it wasn't there, fell back to a file. In practice, the CI server used the environment variable and local development used the file.

This is probably terrible.
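Something like this, anyway (file path made up):

code:
# prefer the env var; fall back to the local dev file
if [ -n "$API_KEY" ]; then
    key="$API_KEY"
else
    key="$(cat ./config/api_key)"
fi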

necrotic
Aug 2, 2005
I owe my brother big time for this!
I use a tool in dev that loads a file with environment data before starting anything. In prod, the data is set explicitly in the environment.
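In dev it's roughly dotenv-style (filename made up):

code:
# export everything defined in .env, then start the app
set -a; . ./.env; set +a
exec ./myapp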
