Hughlander
May 11, 2005

Sulla-Marius 88 posted:

I'm trying to start from scratch and I'm getting some bizarre behaviour here. I'm trying to use SourceTree on windows to manage it. I'm hoping to have git on the server, that can handle versioning and auto-deploy to the working files. So I have:

Server:
/home/username/http/hosts/* is the directory that holds all my files.
/home/username/http/dev_git/.git holds the non-bare repository. I did 'git init' in /dev_git/ to create it -- folder has nothing else in it.

In SourceTree I modify the remote repository to point to the new one:

ssh://username@server.ip.address.here:port/home/username/http/hosts/dev_git/.git

And try to push 3 php files that I've modified, should be about 10 kb total. It spends a couple minutes uploading at goddamn 2-5mb/s before warning with something weird and erroring out, so I decide to delete the local content and try again from scratch in case it's screwing up because of the changed context or it's trying to upload every single file in my local copy of the site code, rather than just the 3 files in the commit.

So I delete the local, start a whole new project in sourcetree and try to connect. And it starts downloading at 1.5mb/s for like 30+ seconds. I'm on internet with a pretty shallow data transfer limit so I had to cancel it after about 30 seconds because honestly where the hell is it getting this from? The whole size of the git folder on the server is 22mb. I just cancelled, deleted the local copy, and tried again -- about a minute at 2.5mb/s download and still no end in sight.

I'm super confused and I don't know why it's doing this. Essentially what I want is:

The production directory on the server.
An exact copy of that code on my local machine (I already have this).
A way so that changes made on the local machine can be pushed to the production directory while keeping a record of changes.

I still don't understand why the initial setup didn't work, why it really didn't want me to push directly to the master branch of the original repository. I also don't know why it's trying to download so much when all the data involved is only like 22mb.

e: Removed a bit of whinging.

Ok so a few problems it seems.

1) If you only did a git init in the server's dev_git, then yes, when you pushed to it you pushed all of your refs, not just the three changed files. Instead you should have done a git clone into hosts.
2) SourceTree should only ever point at hosts; by pointing it at dev_git you're defeating what you're trying to do. (Or I'm misunderstanding it.)
3) What you need is a post-receive hook on the server that cds to dev_git and checks out the copy that was just pushed.
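Point 3 is the standard push-to-deploy pattern. A minimal self-contained sketch of the whole loop — throwaway mktemp paths stand in for the real dev_git/hosts layout on the server:

```shell
# Sketch: bare repo + post-receive hook deploying into a working directory.
# All paths are throwaway stand-ins for the server's dev_git and hosts dirs.
set -e
base=$(mktemp -d)

# 1. Bare repository that receives pushes (stands in for dev_git).
git init --bare -q "$base/dev_git"

# 2. Working directory the site is served from (stands in for hosts).
mkdir "$base/hosts"

# 3. post-receive hook: after every push, check master out into hosts/.
cat > "$base/dev_git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$base/hosts git checkout -f master
EOF
chmod +x "$base/dev_git/hooks/post-receive"

# 4. A "local" clone standing in for the SourceTree checkout.
git clone -q "$base/dev_git" "$base/local" 2>/dev/null
cd "$base/local"
echo '<?php echo "hi";' > index.php
git add index.php
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first page"
git push -q origin HEAD:master

# The hook has now checked index.php out into the hosts directory.
ls "$base/hosts"
```

Nothing here ever touches the deployed files directly; the hook owns the working directory.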


Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Combat Pretzel posted:

Anyone here using this Visual Studio Online "cloud" repository thing? How smooth does it work? It seems to be free for up to 5 users with my MSDN account, seeing how I'm just one, this may fit.

If you use the Git option it's just that... Git. All my interactions with Azure stuff have been great so far.


I also signed up for BizSpark, 3 years of MSDN and all MS software free. Its Azure license is modified from the standard MSDN one and explicitly allows production use.

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Combat Pretzel posted:

Anyone here using this Visual Studio Online "cloud" repository thing? How smooth does it work? It seems to be free for up to 5 users with my MSDN account, seeing how I'm just one, this may fit.

The only negative I've heard from people is that your VSO site can't be "public". So if you want to show someone what you are doing, I believe you have to invite them. This might have changed with BUILD but I haven't heard anything. Besides that, it's TFS with the option of Git or TFVC. This video from BUILD was a pretty good overview.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
It seems to have gone public only a few days ago, so I suppose that was a restriction. I'm not bothered about showing off my stuff; I mainly want an integrated version control solution to avoid any gently caress ups. Up until now, it was infrequent archiving with WinRAR.

hirvox
Sep 8, 2009
For version control and work item tracking purposes, VSO is indistinguishable from on-premises TFS. The only real restriction is that you have no control over the Build Controller/Agent servers that Team Builds run on. Automatic unit test runs and deploying to web-accessible servers (preferably Azure) are still doable, but compiling projects with loads of dependencies, running tightly-coupled integration tests and deploying to on-premises, heavily firewalled servers can be difficult.

New-ShitPost
Jul 25, 2011

hirvox posted:

For version control and work item tracking purposes, VSO is indistinguishable from on-premises TFS. The only real restriction is that you have no control over the Build Controller/Agent servers that Team Builds run on. Automatic unit test runs and deploying to web-accessible servers (preferably Azure) are still doable, but compiling projects with loads of dependencies, running tightly-coupled integration tests and deploying to on-premises, heavily firewalled servers can be difficult.

It is possible to connect an on-premises build controller to VSO.

hirvox
Sep 8, 2009
Good to know, thanks. Do you know how the build controller/agent accounts need to be configured in this case? Is this blog post still accurate?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

hirvox posted:

For version control and work item tracking purposes, VSO is indistinguishable from on-premises TFS. The only real restriction is that you have no control over the Build Controller/Agent servers that Team Builds run on. Automatic unit test runs and deploying to web-accessible servers (preferably Azure) are still doable, but compiling projects with loads of dependencies, running tightly-coupled integration tests and deploying to on-premises, heavily firewalled servers can be difficult.

The other restriction is that Microsoft's new release management package doesn't support VSO yet -- it requires an on-prem TFS instance still.

It's weird because it really doesn't do anything other than pull down build information from TFS, so I'm assuming it's a limitation in how the original developers handled authentication. I wouldn't be surprised if Update 2 changes that.

New Yorp New Yorp fucked around with this message at 19:54 on Apr 6, 2014

Sulla Faex
May 14, 2010

No man ever did me so much good, or enemy so much harm, but I repaid him with ENDLESS SHITPOSTING

Thanks guys. I spent a bit more time reading about bare vs non-bare repositories, proper git workflow, etc. I'm still a bit hazy but I figure that will clear with experience - there's only so much you can read about the proper conceptual way of doing things without having any real world experience to tie it to.

I followed this example:

http://www.sitepoint.com/one-click-app-deployment-server-side-git-hooks/

And now I have two repositories, a bare repository on my server and a cloned remote on my local machine. The bare takes pushes from my local machine and the post-receive hook automatically pushes it out to the working directory. I've tested this and it works for new files (e.g. 'test.txt').

However I also want to equalise/mirror the code in the working directory with an identical copy I already have on my local machine, without re-uploading and overwriting everything. I copied all the local code into the cloned repository (which is identical to the working directory on the server) and git added them all to 'working copy changes'. I tried to run 'git update-index --skip-worktree' but it wasn't working: it said 'unable to mark file (filename)'. I assumed this was because the files hadn't been added to the index yet (git ls-files -o showed every single file). So I tried to stage them all, but *.* doesn't work, and nor does * on Windows; you have to go through and run skip-worktree for each folder and file individually, with a trailing '/' for folders because git doesn't recognise them as folders otherwise.

But even that isn't working. I think it's doing something different -- it's not resetting the tracked changes, but just temporarily excluding them from the commit. So I guess my final question is -- now that I have the above setup, is there a way to tell git that all the files in this directory are the latest versions and not to upload them, but to track subsequent changes? I know for a fact that they are identical to what's on the server now in the working directory, but I want to tell git that and only have it track subsequent changes.

The hit to my bandwidth probably isn't worth the amount of time I've spent on it, I'm just really confused as to why you can't set 'ignore all changes' for a file or a range of files or even the whole repository, then copy a bunch of stuff in, then turn off the 'ignore changes' option, and have git only track and commit changes from that point on.

e: I committed the changes and I'm not sure if it uploaded all the files in record time or whether something I did affected it, but it now appears to be tracking files normally and what have you, so I can get back to developing and just hit 'commit' and 'push' to deploy. :)

Sulla Faex fucked around with this message at 13:28 on Apr 7, 2014

Hughlander
May 11, 2005

Sulla-Marius 88 posted:

Thanks guys. I spent a bit more time reading about bare vs non-bare repositories, proper git workflow, etc. I'm still a bit hazy but I figure that will clear with experience - there's only so much you can read about the proper conceptual way of doing things without having any real world experience to tie it to.

I followed this example:

http://www.sitepoint.com/one-click-app-deployment-server-side-git-hooks/

And now I have two repositories, a bare repository on my server and a cloned remote on my local machine. The bare takes pushes from my local machine and the post-receive hook automatically pushes it out to the working directory. I've tested this and it works for new files (e.g. 'test.txt').

However I also want to equalise/mirror the code in the working directory with an identical copy I already have on my local machine, without re-uploading and overwriting everything. I copied all the local code into the cloned repository (which is identical to the working directory on the server) and git added them all to 'working copy changes'. I tried to run 'git update-index --skip-worktree' but it wasn't working: it said 'unable to mark file (filename)'. I assumed this was because the files hadn't been added to the index yet (git ls-files -o showed every single file). So I tried to stage them all, but *.* doesn't work, and nor does * on Windows; you have to go through and run skip-worktree for each folder and file individually, with a trailing '/' for folders because git doesn't recognise them as folders otherwise.

But even that isn't working. I think it's doing something different -- it's not resetting the tracked changes, but just temporarily excluding them from the commit. So I guess my final question is -- now that I have the above setup, is there a way to tell git that all the files in this directory are the latest versions and not to upload them, but to track subsequent changes? I know for a fact that they are identical to what's on the server now in the working directory, but I want to tell git that and only have it track subsequent changes.

The hit to my bandwidth probably isn't worth the amount of time I've spent on it, I'm just really confused as to why you can't set 'ignore all changes' for a file or a range of files or even the whole repository, then copy a bunch of stuff in, then turn off the 'ignore changes' option, and have git only track and commit changes from that point on.

e: I committed the changes and I'm not sure if it uploaded all the files in record time or whether something I did affected it, but it now appears to be tracking files normally and what have you, so I can get back to developing and just hit 'commit' and 'push' to deploy. :)

I feel you're missing something fundamental here with 'equalizing the code'. Everything should either be in git already or be something that can be generated automatically from something that is in git. You should only ever need to push once from your local box to the development server. At no point should you ever interact with the remote working copy directly. If you have everything already in git, then just running git checkout -f after setting GIT_WORK_TREE is all it takes. Try it on your local box for a bit to get comfortable with it, and take the remote server out of it.

I.e., if you have your local git repo in /home/web, do something like make a /home/deploy:
code:
mkdir /home/deploy
cd /home/web
export GIT_WORK_TREE=/home/deploy
git checkout -f master
Confirm everything works there; if it doesn't, add what's needed to /home/web, commit it, and try again. Once it works, push origin master and confirm it works on the remote server.

Sulla Faex
May 14, 2010

No man ever did me so much good, or enemy so much harm, but I repaid him with ENDLESS SHITPOSTING
You might be misunderstanding what I meant by 'equalise the code'. In any case, it looks like git, or something I did while setting it up, broke the laravel framework I had in place, and I can't figure out why even after a few hours of trying to get things working, so I'm going to sideline git for a while. I've already wasted a number of days just trying to set it up, and the overall gains just aren't going to be worth the investment considering I'm the sole developer on this hobby stuff. The documentation just isn't enough for someone in my position, so I'm going to uninstall it. I'll keep an eye out for a reasonable introductory resource to git in case I get a chance to take another crack at it in the future. Thanks for your help anyway guys.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Sulla-Marius 88 posted:

You might be misunderstanding what I meant by 'equalise the code'. In any case, it looks like git, or something I did while setting it up, broke the laravel framework I had in place, and I can't figure out why even after a few hours of trying to get things working, so I'm going to sideline git for a while. I've already wasted a number of days just trying to set it up, and the overall gains just aren't going to be worth the investment considering I'm the sole developer on this hobby stuff. The documentation just isn't enough for someone in my position, so I'm going to uninstall it. I'll keep an eye out for a reasonable introductory resource to git in case I get a chance to take another crack at it in the future. Thanks for your help anyway guys.

:psyduck:

Git is easy when you get your head around it, you just need to get your head around it. It seems like you're trying to use it as a deployment system or something, which it's really not.

Check out the book, and play around with toy projects if you haven't already. I know from your other SH/SC posts that you're the sole non-awful developer at your job, so you should try to improve yourself to the point where you use source control, instead of getting dragged down to the level of your coworkers. Even being able to keep a personal repo for cowboy projects there is going to be a monster help.

Volmarias fucked around with this message at 23:18 on Apr 7, 2014

Sulla Faex
May 14, 2010

No man ever did me so much good, or enemy so much harm, but I repaid him with ENDLESS SHITPOSTING
Yeah I know, I feel like a poo poo and I felt like a poo poo when I was writing those posts; it was just a really frustrating weekend. Normally I'm more than happy to research stuff and figure it out on my own, it was just a combination of trying to do the right thing (but in my very limited personal time) for a personal project that would have taken less time than I spent trying to set up the source control for it, and just getting super conflicting information from the reports. The official git documentation I found (including sections from that book) seemed to be aimed either at people using public servers (github etc) so it's too low level, or at people already versed in versioning who are coming from subversion or mercurial, so the conceptual introductions tend to be "think of Mercurial's X, but instead of Y we do Z!". All the tutorial blog posts were self-contradictory -- you have people saying not to push to non-bare repositories ever, other people saying it's perfectly fine with a post-receive hook, and when I finally found something that seemed to be working there was some niggling permissions error (either with the push or the post-receive deploy) that had at some point managed to gently caress up my existing code, so I had to wipe the whole thing and reset to scratch (pre-git).

It was just one of those weekends where you finally have a bit of free time to work on stuff and you have to choose between food shopping, fixing a broken apartment, finding out why your Italian bank account shows minus 60 Euros, and about 3 different projects. And every coffee-riddled hour I was just running up against this seemingly impenetrable wall of conflicting information that leeched the hours away until I finally reset everything on the final day of my 4-day weekend having accomplished sweet gently caress all - I had time to work on a different (non-computer) project for 2 hours, then dinner, then bed. It was just overall a :psyduck: weekend for me and I shouldn't have sperged about it here, I was just trying to desperately salvage something from what should have been the only 4 days of straight productivity I'll get all year. So apologies again and I'm already queuing up that book, I'll get things set up on my VM before I try to migrate my existing work to it.

Sulla Faex fucked around with this message at 08:47 on Apr 8, 2014

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Sulla-Marius 88 posted:

Yeah I know, I feel like a poo poo and I felt like a poo poo when I was writing those posts; it was just a really frustrating weekend. Normally I'm more than happy to research stuff and figure it out on my own, it was just a combination of trying to do the right thing (but in my very limited personal time) for a personal project that would have taken less time than I spent trying to set up the source control for it, and just getting super conflicting information from the reports. The official git documentation I found (including sections from that book) seemed to be aimed either at people using public servers (github etc) so it's too low level, or at people already versed in versioning who are coming from subversion or mercurial, so the conceptual introductions tend to be "think of Mercurial's X, but instead of Y we do Z!". All the tutorial blog posts were self-contradictory -- you have people saying not to push to non-bare repositories ever, other people saying it's perfectly fine with a post-receive hook, and when I finally found something that seemed to be working there was some niggling permissions error (either with the push or the post-receive deploy) that had at some point managed to gently caress up my existing code, so I had to wipe the whole thing and reset to scratch (pre-git).

It was just one of those weekends where you finally have a bit of free time to work on stuff and you have to choose between food shopping, fixing a broken apartment, finding out why your Italian bank account shows minus 60 Euros, and about 3 different projects. And every coffee-riddled hour I was just running up against this seemingly impenetrable wall of conflicting information that leeched the hours away until I finally reset everything on the final day of my 4-day weekend having accomplished sweet gently caress all - I had time to work on a different (non-computer) project for 2 hours, then dinner, then bed. It was just overall a :psyduck: weekend for me and I shouldn't have sperged about it here, I was just trying to desperately salvage something from what should have been the only 4 days of straight productivity I'll get all year. So apologies again and I'm already queuing up that book, I'll get things set up on my VM before I try to migrate my existing work to it.

I apologize, I was being a shitlord with my post.

The main difference between a bare repo and a non-bare repo is whether a branch is checked out in the workspace for that repo. If not, it is a bare repo, and only the git metadata exists. This matters because git doesn't want to update someone's branch out from under them while they're working on it, so git will complain if you push to THE CHECKED OUT BRANCH (read: HEAD) of a non-bare repo. It is totally fine with you pushing to branches that are not the HEAD for that remote.

The solution to this is generally to have a bare repo be the one you're using as your remote. Non-bare repos are useful in certain non-traditional cases, such as using your desktop as a remote for your laptop, so that you can pick up where you left off without pushing anything to your "real" upstream before you're ready.
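The refusal is easy to reproduce locally in a handful of commands — a throwaway sketch, all repos under mktemp:

```shell
set -e
base=$(mktemp -d)

# A non-bare repo: its default branch is checked out in a work tree.
git init -q "$base/nonbare"
git -C "$base/nonbare" -c user.name=d -c user.email=d@e commit -q --allow-empty -m init

# A bare copy: just the git metadata, no checkout at all.
git clone -q --bare "$base/nonbare" "$base/bare.git"

# A local clone with one new commit to push.
git clone -q "$base/nonbare" "$base/work"
git -C "$base/work" -c user.name=d -c user.email=d@e commit -q --allow-empty -m change

# Pushing to the branch that is checked out in the non-bare remote is refused...
if git -C "$base/work" push -q origin HEAD 2>/dev/null
then echo "non-bare push: accepted (unexpected)"
else echo "non-bare push: refused"
fi

# ...while the identical push into the bare repo goes through.
git -C "$base/work" push -q "$base/bare.git" HEAD
echo "bare push: accepted"
```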

A quick piece of advice; if you need git help fast, the #git channel on Freenode is very helpful and not at all filled with awful neckbeards like the rest of Freenode.

Gul Banana
Nov 28, 2003

sorry if i contributed to misleading you- i meant to suggest a post-receive hook *in a bare repository*

Sulla Faex
May 14, 2010

No man ever did me so much good, or enemy so much harm, but I repaid him with ENDLESS SHITPOSTING
Absolutely no need to apologise, it's 100% my fault. I underestimated the time needed for preparation/research and then dragged you guys into it when I got tangled up and started running out of time. I'm reading the git book now in my free time - and not just on the stuff that seems to be relevant, but everything. You're right in that it's something I'll be needing (I've needed it already) long into the future so the investment in learning it properly, rather than just at a functional level, will pay off. And since I'm no longer delaying projects until I've set it up there's no rush to cobble a functional understanding together, so I can focus on learning it properly.

Mr. Crow
May 22, 2008

Snap City mayor for life

Ender.uNF posted:

And to answer my own question:

https://github.com/schacon/git-presentations

Some excellent Keynote presentations to work with. Thanks, schacon!

Thanks for this!

evensevenone
May 12, 2001
Glass is a solid.
http://git-man-page-generator.lokaltog.net/

ninjeff
Jan 19, 2004

I like git but this is the best

accipter
Sep 12, 2003
I think I am looking for a recommendation for version control software, but let me explain my work flow first.

I have developed a toolbox of various Python scripts, some of which are used on a specific project/task. For a specific task, I copy the files into a local directory that contains all of the scripts that I used on this project. This way I can always go back to a previous project and have working scripts. I used to have the location of the shared scripts in my PATH environment variable, but then changes to those files had the potential to break old projects.

Is there a way to improve this process? I was thinking it might be nice to have some sort of version control. However, due to security restrictions at our company, outgoing ports are blocked (e.g., git doesn't even work through the proxy), and I don't want to open any ports. Is there a term for this type of organization?

Edison was a dick
Apr 3, 2010

direct current :roboluv: only

accipter posted:

Is there a term for this type of organization?

Big IT; Mordac, preventer of Information Systems; Cancer.

SurgicalOntologist
Jun 17, 2004

accipter posted:

I think I am looking for a recommendation for version control software, but let me explain my work flow first.

I have developed a toolbox of various Python scripts, some of which are used on a specific project/task. For a specific task, I copy the files into a local directory that contains all of the scripts that I used on this project. This way I can always go back to a previous project and have working scripts. I used to have the location of the shared scripts in my PATH environment variable, but then changes to those files had the potential to break old projects.

Is there a way to improve this process? I was thinking it might be nice to have some sort of version control. However, due to security restrictions at our company, outgoing ports are blocked (e.g., git doesn't even work through the proxy), and I don't want to open any ports. Is there a term for this type of organization?

Use a different virtualenv for each project, install a different version of the shared scripts into each (you will have to create a setup.py for the toolbox to make it installable).

More specifically: Use git or mercurial for the shared scripts (if you can't push to a remote repository, then just keep it local). Tag various versions of this repo. Create a new virtualenv for each project, and make a requirements.txt file that identifies what version of the toolbox to use.

I suppose that last step is not technically necessary but it will be nice to be able to refer to it later if you need to recreate the virtualenv for some reason.
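In command form the suggestion looks roughly like this. An illustrative workflow, not meant to run verbatim: `toolbox`, the project path, and the `v1.0` tag are all placeholder names, and it assumes the toolbox repo has a setup.py so pip can install it.

```shell
# One-time: put the shared toolbox under local version control (no server needed).
cd ~/toolbox
git init
git add .
git commit -m "initial toolbox import"
git tag v1.0                      # freeze a version that projects can pin to

# Per project: an isolated environment pinned to one toolbox version.
cd ~/projects/bridge-study        # hypothetical project directory
virtualenv env
. env/bin/activate
pip install "git+file://$HOME/toolbox@v1.0#egg=toolbox"

# Record the pin so the environment can be rebuilt later.
echo "git+file://$HOME/toolbox@v1.0#egg=toolbox" > requirements.txt
```

Old projects keep whatever tag they were pinned to, so editing the toolbox can never break them.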

Tomed2000
Jun 24, 2002

I just started working on this new project and I originally cloned a copy of the origin/master repo (they're using github) to fool around a bit and get the app up and running. I've since been brought on as a developer and I'm expected to do the typical fork & pull request to submit changes but their repo is loving massive -- it's seriously like 3GB or something and it took me 5+ hours to clone it last time. Is there a way to just fork it on github and update the .git folder so that it points to my forked repo instead of their origin? I tried doing some google searching to accomplish this but maybe I'm searching for the wrong terms because I can't find anything.

ExcessBLarg!
Sep 1, 2001

Tomed2000 posted:

Is there a way to just fork it on github and update the .git folder so that it points to my forked repo instead of their origin?
If you're going to track their changes for a while--that is, this isn't a one-time pull request--you'll probably want to have two remotes in your local repository. One points to their repo so you can track changes, while the other points to your repo so you can push your changes for review.

So the two main options are:

1. Keep origin pointing at their repo, and add a new remote for your "private" repo. To do that, run "git remote add private your_repo_url".

2. Make origin point to your repo, then add their repo as "upstream" or something. I think "git remote set-url origin your_repo_url" does the right thing, but I'd probably sooner just edit .git/config and change the "url =" line under '[remote "origin"]'. After that, optionally do "git remote add upstream their_repo_url".

Personally I prefer option #1 as it keeps your tracking branches pointing at their repo so that it's easy to pull upstream changes, but #2 is probably closer to the canonical git usage model. You can also change which remote a tracking branch refers to, so it's not a big deal either way.
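In commands, with local throwaway repos standing in for the GitHub URLs:

```shell
set -e
base=$(mktemp -d)
git init -q --bare "$base/theirs.git"    # stands in for their GitHub repo
git init -q --bare "$base/yours.git"     # stands in for your fork
git clone -q "$base/theirs.git" "$base/clone" 2>/dev/null
cd "$base/clone"

# Option 1: origin keeps pointing at their repo; add yours as a second remote.
git remote add private "$base/yours.git"

# Option 2 would instead be:
#   git remote set-url origin "$base/yours.git"
#   git remote add upstream "$base/theirs.git"

git remote -v    # origin -> theirs.git, private -> yours.git
```

Either way you avoid re-cloning the 3GB repo; only the remote URLs in .git/config change.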

ExcessBLarg! fucked around with this message at 00:27 on Apr 18, 2014

Gazpacho
Jun 18, 2004

by Fluffdaddy
Slippery Tilde

accipter posted:

I think I am looking for a recommendation for version control software, but let me explain my work flow first.

I have developed a toolbox of various Python scripts, some of which are used on a specific project/task. For a specific task, I copy the files into a local directory that contains all of the scripts that I used on this project. This way I can always go back to a previous project and have working scripts. I used to have the location of the shared scripts in my PATH environment variable, but then changes to those files had the potential to break old projects.

Is there a way to improve this process? I was thinking it might be nice to have some sort of version control. However, due to security restrictions at our company, outgoing ports are blocked (e.g., git doesn't even work through the proxy), and I don't want to open any ports. Is there a term for this type of organization?
git does not require GitHub or any other active server. It can push/pull to a network folder if that's all you have. Same goes for Mercurial.
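For example — a throwaway directory standing in for the mounted network folder:

```shell
set -e
share=$(mktemp -d)    # stands in for a network folder, e.g. a mounted drive

# One-time: create a bare repo on the "share"...
git init -q --bare "$share/toolbox.git"

# ...then any machine that mounts the share can use it as a normal remote.
work=$(mktemp -d)
git clone -q "$share/toolbox.git" "$work/toolbox" 2>/dev/null
cd "$work/toolbox"
echo "print('hello')" > tool.py
git add tool.py
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first tool"
git push -q origin HEAD
```

No daemon, no open ports: push and pull go over the filesystem.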

e: Misunderstood what Tomed was asking.

Gazpacho fucked around with this message at 01:01 on Apr 18, 2014

Tomed2000
Jun 24, 2002

ExcessBLarg! posted:

If you're going to track their changes for a while--that is, this isn't a one-time pull request--you'll probably want to have two remotes in your local repository. One points to their repo so you can track changes, while the other points to your repo so you can push your changes for review.

So the two main options are:

1. Keep origin pointing at their repo, and add a new remote for your "private" repo. To do that, run "git remote add private your_repo_url".

2. Make origin point to your repo, then add their repo as "upstream" or something. I think "git remote set-url origin your_repo_url" does the right thing, but I'd probably sooner just edit .git/config and change the "url =" line under '[remote "origin"]'. After that, optionally do "git remote add upstream their_repo_url".

Personally I prefer option #1 as it keeps your tracking branches pointing at their repo so that it's easy to pull upstream changes, but #2 is probably closer to the canonical git usage model. You can also change which remote a tracking branch refers to, so it's not a big deal either way.

Thanks! I went with option 1 and did a pull request and now I think I understand the process. Most guides say to fork and clone which is essentially the same as option 2 as I see it.

RICHUNCLEPENNYBAGS
Dec 21, 2010
I'm using Git. How do you guys handle configuration with sensitive data (connection strings, passwords, etc.)? Especially if you want different configuration for each branch?

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

RICHUNCLEPENNYBAGS posted:

I'm using Git. How do you guys handle configuration with sensitive data (connection strings, passwords, etc.)? Especially if you want different configuration for each branch?

Generally, you don't. Sensitive data shouldn't live in git, mostly because once it's in the history there's no easy way to get it back out. You should have a different strategy than source control for your config files.

wolffenstein
Aug 2, 2002
 
Pork Pro

RICHUNCLEPENNYBAGS posted:

I'm using Git. How do you guys handle configuration with sensitive data (connection strings, passwords, etc.)? Especially if you want different configuration for each branch?

shell environment variables seem like a decent way to do it if that's an option.

accipter
Sep 12, 2003

Gazpacho posted:

git does not require GitHub or any other active server. It can push/pull to a network folder if that's all you have. Same goes for Mercurial.

e: Misunderstood what Tomed was asking.

I didn't know this. Thanks!

Steve French
Sep 8, 2003

wolffenstein posted:

shell environment variables seem like a decent way to do it if that's an option.

This seems to be a common thing to do, but I'd like to point out that it makes it really really easy to accidentally leak sensitive data to third parties. It's common (and not hard to imagine) for error reporting mechanisms to dump the environment along with a stack trace or whatever; if you're putting sensitive poo poo directly in the environment that will go along for the ride. AWS does this and it drives me nuts; the environment variable should be a reference to a file containing the sensitive data, not the data itself.
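A minimal sketch of that pattern — the environment holds only a path, and the variable name and file location here are made up:

```shell
set -e
# The secret lives in a file with tight permissions...
secret_dir=$(mktemp -d)
printf 's3cr3t\n' > "$secret_dir/db_password"
chmod 600 "$secret_dir/db_password"

# ...and the environment carries only its path, which is harmless to dump in logs.
export DB_PASSWORD_FILE="$secret_dir/db_password"

# The application resolves it at startup, keeping the value itself out of the env.
DB_PASSWORD=$(cat "$DB_PASSWORD_FILE")
echo "password loaded: ${#DB_PASSWORD} chars"   # prints: password loaded: 6 chars
```

If an error reporter dumps the environment now, it leaks a filename, not the password.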

ExcessBLarg!
Sep 1, 2001

RICHUNCLEPENNYBAGS posted:

How do you guys handle configuration with sensitive data (connection strings, passwords, etc.)? Especially if you want different configuration for each branch?
Put a symlink in the repo that points to a config file outside of it. Then you can change the target of the symlink in different branches to point to different config files, so that the config changes on branch checkout.
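A sketch of that layout (paths made up): the real config files live outside the repo, and each branch commits a symlink with a different target. Git stores the link target, not the file contents, so a branch checkout swaps the config.

```shell
# Configs outside the repo:
mkdir -p /tmp/configs /tmp/symrepo
echo 'db=dev'  > /tmp/configs/dev.conf
echo 'db=prod' > /tmp/configs/prod.conf

# Inside the repo, the dev branch commits a link to the dev config:
cd /tmp/symrepo && git init -q .
ln -sf /tmp/configs/dev.conf app.conf
git add app.conf    # records the link target, not the file contents
```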

RICHUNCLEPENNYBAGS
Dec 21, 2010

Volmarias posted:

Generally, you don't. Sensitive data shouldn't live in git, mostly because there's no getting it out of git. You should have a different strategy than source control for your config files.

Well, I mean, that was kind of my question: what's the "something else" that's Git-aware?

ExcessBLarg! posted:

Put a symlink in the repo that points to a config file outside of it. Then you can change the target of the symlink in different branches to point to different config files, so that the config changes on branch checkout.
I guess I should mention I'm working in Windows... as far as I know I could make a shortcut (which won't work) or a junction (which is like a hardlink so I'm back to where I started).

ExcessBLarg!
Sep 1, 2001

RICHUNCLEPENNYBAGS posted:

I guess I should mention I'm working in Windows... as far as I know I could make a shortcut (which won't work) or a junction (which is like a hardlink so I'm back to where I started).
As I understand it, modern Windows has support for symlinks, although making use of them with git requires effort.

Dylan16807
May 12, 2010

ExcessBLarg! posted:

As I understand it, modern Windows has support for symlinks, although making use of them with git requires effort.
The usual answer uses hard links to pretend to be symlinks, which is really fragile. Windows does technically have symlinks, but you can't make them as a limited user, so git hasn't really bothered adding support.

ambushsabre
Sep 1, 2009

It's...it's not shutting down!
Hey guys, I guess this would be the place to post this since it has to do with git / GitHub? Anyways, when popcorn-time went down multiple times, I was really annoyed at how hard it was to find the latest copy of the official source. So I made a tool that downloads the latest copy of the source every time you push to GitHub and stores it on S3 for you as a single persistent link, so you don't have to worry about GitHub being down if you want to offer the source as a single zip on your site or whatever. It's called anam.io and it's gotten a little bit of press and attention over the last day or so. Hopefully you guys find it useful!

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

ambushsabre posted:

Hey guys, I guess this would be the place to post this since it has to do with git / GitHub? Anyways, when popcorn-time went down multiple times, I was really annoyed at how hard it was to find the latest copy of the official source. So I made a tool that downloads the latest copy of the source every time you push to GitHub and stores it on S3 for you as a single persistent link, so you don't have to worry about GitHub being down if you want to offer the source as a single zip on your site or whatever. It's called anam.io and it's gotten a little bit of press and attention over the last day or so. Hopefully you guys find it useful!

this is really cool, but it'd be even better if it also offered the ability to download tags as well as the latest default branch

ambushsabre
Sep 1, 2009

It's...it's not shutting down!

the talent deficit posted:

this is really cool, but it'd be even better if it also offered the ability to download tags as well as the latest default branch

yup, working on backing up the wiki etc as we speak. glad you liked it, spread the word!

Hughlander
May 11, 2005

Git question time!
Not sure if anyone has gone through this before, but before I write some scripts I figured I'd ask.

I have a submodule that's a static library used by several projects. Currently almost all of them have been merged into a common repo for ease of CI, with what used to be 2 submodules. I've resisted going this route since we do have a project that's not in that repo and it'd be a pain to deal with. However, at this point the pain of being a submodule is greater.

The current workflow is to make a feature branch of the submodule, do your thing, and merge it back into a develop branch. Any push fires off about 8-10 minutes of tests. When those tests have passed on develop, go to the uber repo, update the submodule, and push that. Tests then run across all the projects there, taking about 20 minutes to pass. If they pass, the SHA you pushed is fast-forwarded to become the new master.

If they fail, though, you start over with, worst case, 30 more minutes of tests gating you. (Usually more like 40, since if you go back to your own branch to fix it you'll need to merge into the submodule's develop again.)

So I know that you can 'fairly' easily take a submodule and merge it into a project while keeping the history, by adding a remote on each repo and merging from one to the other. But I got to thinking: could you do the reverse? Once you've done that and are making changes in the uber-repo, could you then merge back into the submodule, applying a filter-branch to remove any commits/changes that aren't in the submodule, and keep its history up to date?

It should be possible, but before I write that reverse merge I'm just wondering:
A) Anyone has seen/done it before?
B) Does there already exist a script that will do it?
C) Other than 'Don't be stupid and have these uber repos,' which is out of my hands, is there anything wrong with the idea?
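Not the full round-trip, but the extraction half of the idea can be sketched like this (repo paths and directory names made up, and note filter-branch gets slow on big histories): rewrite the uber-repo so only commits touching the submodule's directory survive, producing a history the standalone repo could fetch and merge.

```shell
# Helper so the demo commits work without global git config:
g() { git -c user.name=g -c user.email=g@example.com "$@"; }

# A tiny stand-in "uber" repo with a lib/ subdirectory:
mkdir -p /tmp/uber/lib && cd /tmp/uber && g init -q .
echo one > lib/a.txt && echo app > main.txt
g add . && g commit -qm "touches both lib/ and the app"

# Rewrite a clone so only lib/'s history survives, rooted at top level:
g clone -q /tmp/uber /tmp/lib-extract && cd /tmp/lib-extract
export FILTER_BRANCH_SQUELCH_WARNING=1
g filter-branch --prune-empty --subdirectory-filter lib -- --all

# /tmp/lib-extract now holds lib/'s files at its root, app-only commits
# pruned; the submodule repo can add it as a remote, fetch, and merge.
```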


Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
Developers typically have power user or admin rights; I just assume it isn't a high priority.


Git question... I think the answer is no, but I wanted to make sure. We have a bunch of branches going at the same time. Often I find myself needing to make changes to the ignore file as I clean things up (moving binary dependencies to a private NuGet repo, excluding build products, etc.). Is there any way to set an ignore file that is committed as part of the repo but applies to all branches? Right now I have to cherry-pick and merge to mine and everyone else's named branches, plus the legit branches; otherwise some of our devs accidentally commit crap.

Granted, this won't be a problem forever as I finish cleaning up and our team gets more comfortable with git, but it seems like some teams would want the ability to force certain settings in a repo this way.
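For what it's worth, git does have two branch-independent ignore mechanisms; the catch, matching the "answer is no" suspicion, is that neither is committed, so they're per-clone or per-user rather than enforced by the repo (paths made up):

```shell
mkdir -p /tmp/ignrepo && cd /tmp/ignrepo && git init -q .

# Repo-local excludes: apply on every branch in this clone only.
echo 'bin/' >> .git/info/exclude
mkdir -p bin && touch bin/out.dll stray.txt
git status --porcelain    # shows stray.txt but not bin/

# User-wide alternative, applies to all repos for this user:
#   git config core.excludesFile ~/.gitignore_global
```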
