|
OAuth keys aren't like DVD CSS keys; they can be easily revoked and replaced with no real impact on anybody at all. And nobody really bothers to reverse engineer binaries for them, because they just aren't that interesting to begin with.
|
# ? Oct 5, 2011 01:06 |
|
|
# ? May 9, 2024 22:48 |
|
Xik posted:. . . such a common problem these days that the problem would have been "solved" somehow. Especially with the market being flooded with useless applications which use things like the twitter, facebook, flikr API etc....

Just about every single one of those is a case where anything you want to make public is readable/scrapeable via some sort of API or RSS feed, so each user would need his own key and it becomes a configuration option by default.

PS: I'll add that managing things like .NET strong name key files is also problematic for OSS operations -- you'd really want it in the SCM system, but that is really not the sort of thing that should be in the wild. The best way to manage it in that scenario is to have a separate, private VCS to handle those items. But that could still be a PITA to deal with for test dependencies and the like.

wwb fucked around with this message at 01:11 on Oct 5, 2011 |
# ? Oct 5, 2011 01:09 |
|
Fair enough. I appreciate the input and your time. I think I'll just go with storing the OAuth Consumer secret in the application config file and be done with it.
|
# ? Oct 5, 2011 02:38 |
|
wwb posted:Just about every single one of those is a case where anything you want to make public is readable/scrapeable via some sort of api or RSS feed or something where each user would need his own key so it becomes a configuration option by default.

We've been dealing with this at work via e-mail, unfortunately. The code in source control looks to environment variables for the API keys, and when an engineer starts he's mailed a file that stuffs those API keys into his local environment.
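A minimal sketch of that setup (file name and variable names are made up; the real mailed file would hold your actual keys):

```shell
# Sketch: keep API keys in a file that lives outside source control and is
# mailed to each engineer. File name and variable names are hypothetical.
cat > "$HOME/.app-secrets" <<'EOF'
export TWITTER_CONSUMER_KEY="abc123"
export TWITTER_CONSUMER_SECRET="s3cret"
EOF
chmod 600 "$HOME/.app-secrets"

# Engineers source it from their shell profile; the committed code reads
# the keys from the environment instead of from a checked-in config file.
. "$HOME/.app-secrets"
echo "$TWITTER_CONSUMER_KEY"
```

The file itself never touches the repo, so the committed code is safe to publish.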
|
# ? Oct 5, 2011 18:24 |
|
What is the best way (or any way) to convert a git repo to an svn repo, and keep the history? I'm having trouble googling for a solution since google seems to interpret "git to svn" as "svn to git", which is supremely unhelpful. Backstory: the git issues I described way earlier in the thread have come to a head, and we are migrating off of github to beanstalk. I'd really like to see if we can keep file history when doing so.
|
# ? Oct 6, 2011 16:26 |
|
Clone the empty svn repo as a git-svn remote, dcommit the git branch you want to push to svn, and wait a few hours.
|
# ? Oct 6, 2011 18:00 |
|
You'll probably also have to go through and squash any divergent history with interactive rebase, because I doubt that'd be handled gracefully.
|
# ? Oct 6, 2011 18:03 |
|
Git question. If I have the following:code:
code:
code:
minute fucked around with this message at 01:30 on Oct 8, 2011 |
# ? Oct 8, 2011 01:28 |
|
Rebase creates new commits, complete with new timestamps.
|
# ? Oct 8, 2011 01:32 |
|
You can, however, use the comically long --committer-date-is-author-date option to force git rebase to use the author date as the committer date in the newly written commits, thus making both rebased branches identical.
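A self-contained demo of that flag (branch names and dates are made up; the throwaway repo exists only to show the effect):

```shell
# Demo in a throwaway repo: rebase 'feature' onto 'master' while forcing the
# committer date to match the preserved author date.
cd "$(mktemp -d)" && git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name you
export GIT_AUTHOR_DATE='2011-10-08T12:00:00+0000'
export GIT_COMMITTER_DATE='2011-10-08T12:00:00+0000'
git commit -q --allow-empty -m "base"
git checkout -qb feature
git commit -q --allow-empty -m "feature work"
git checkout -q master
git commit -q --allow-empty -m "divergence"
unset GIT_AUTHOR_DATE GIT_COMMITTER_DATE

git rebase -q --committer-date-is-author-date master feature
git log -1 --format='%at %ct'   # author and committer timestamps now match
```

Without the flag, the rewritten commits would get "now" as their committer date and the two rebased branches would hash differently.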
|
# ? Oct 8, 2011 15:08 |
|
Or you can run: code:
|
# ? Oct 8, 2011 18:09 |
|
In other news, SVN 1.7 is out, and I can finally convert this drat client to not have the .svn directories everywhere. Assuming svn upgrade ever finishes, that is...
|
# ? Oct 16, 2011 21:33 |
|
How do I state in my .gitignore that I want to ignore all files except for *.cpp files? I'm trying things like this to no avail (one pattern per line): * !*.cpp
|
# ? Oct 17, 2011 19:23 |
|
Jam2 posted:How do I state in my .gitignore that I want to ignore all files except for *.cpp files? Just tried this and it worked fine: code:
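For reference, a whitelist-style .gitignore along those lines might look like this (a sketch, not necessarily the exact file posted; the `!*/` line un-ignores directories, which matters because git never descends into a directory it has already ignored):

```shell
# Sketch of a whitelist .gitignore: ignore everything except *.cpp files.
# The !*/ pattern un-ignores directories so git still descends into them
# and can find nested .cpp files.
cat > .gitignore <<'EOF'
*
!*/
!*.cpp
EOF
```

You can sanity-check the result with `git check-ignore <path>`, which exits 0 for ignored paths and 1 for paths git will still see.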
|
# ? Oct 17, 2011 21:24 |
|
I've been using mercurial for a couple years now, mostly for small projects with a small number of people working on them. Can someone explain to me the usefulness of in-repo branches?

Our workflow is as follows. There's a central repo which I clone onto my machine. I don't work on that clone though; I treat it as my "main" repo, which is the only one which will have changes sync'd with the central repo. I then clone my main repo whenever I have to work on something (a bug, a feature, etc). So my "feature branch" is just another clone of my main repo. code:
|
# ? Oct 18, 2011 18:55 |
|
I'm using branches to collaborate with another remote developer. We don't have access to each other's machines, so we share our changes through the main server. If a third developer wanted to check out our stuff he can just switch to the branch instead of having to ask us to share.
|
# ? Oct 18, 2011 19:28 |
|
epswing posted:I've been using mercurial for a couple years now, mostly for small projects with a small number of people working on them.

Bugfixes on code that is running in production somewhere. I tag the revision that is released. Then I go on coding new stuff. When I need to fix something in the production code, I go back to the tagged revision, fix the bug in a new branch called 'maintenance', and then merge the fix into default. From then on, more bugfixes all go into the maintenance branch. (Which gets merged into default every time.)

Calling the maintenance branch 'maintenance' doesn't do anything except make it more obvious that this is the maintenance branch.
|
# ? Oct 18, 2011 19:38 |
|
epswing posted:Can someone explain to me the usefulness of in-repo branches?
|
# ? Oct 18, 2011 20:16 |
|
MEAT TREAT posted:I'm using branches to collaborate with another remote developer. We don't have access to each others machines, so we share our changes through the main server. If a third developer wanted to check out our stuff he can just switch to the branch instead of having to ask us to share.

I don't quite understand this (probably because of my lack of experience with mercurial). There's a main server; what wouldn't a 3rd dev have access to if you didn't use branches?

uXs posted:Bugfixes on code that is running in production somewhere.

Pull from central repo into my main repo. Clone my main repo, update back to the tagged version running in production, fix the code, commit, merge the fix into tip, and push the fix back into my main repo. Which gets pulled into the central repo. I don't need branches for this workflow. And "other work" isn't affected because I'm working on other work in other clones.

Mithaldu posted:Being able to switch between branches within less than a second without having to mess around with different directories?

This isn't a great reason. It's pretty much the same functionality. I guess someone might not want a number of clones (directories) lying around, but then again sometimes I do, because I want to run two versions of the program concurrently, maybe I'm trying to compare something.

Edit: I want to be clear, I'm not saying "I hate branches", or "branches are bad for you", or anything like that. I'm asking what their benefit is over just cloning? Seems the same to me.

epswing fucked around with this message at 21:15 on Oct 18, 2011 |
# ? Oct 18, 2011 21:09 |
|
epswing posted:Edit: I want to be clear, I'm not saying "I hate branches", or "branches are bad for you", or anything like that. I'm asking what is their benefit over just cloning? Seems the same to me.

1) Clones require you to work in a different directory.
2) Clones consume more disk space (not trivial on some projects I've dealt with - I work with one repository right now which consists of hundreds of binary blobs which change frequently).
3) 'git branch' is near instantaneous, where 'git clone' on a large repo like the one in #2 can consume a lot of time.

I used to use Mercurial as well, and the generally accepted practice on the project team that I was on at the time was just to clone rather than using named branches (granted we didn't have the massive binary blob repo I mentioned in #2). In the git world, named branches are so quick and cheap that there's no reason not to use them.
|
# ? Oct 18, 2011 21:40 |
|
crazyfish posted:Clones consume more disk space (not trivial on some projects

Ahh, yeah, that's legitimate. Thanks. I haven't had to deal with a massive repo, so I haven't come across this yet.

crazyfish posted:In the git world, named branches are so quick and cheap that there's no reason not to use them.

Are they quicker/cheaper than in mercurial?
|
# ? Oct 18, 2011 22:00 |
|
epswing posted:Pull from central repo into my main repo. Clone my main repo, update back to the tagged version running in production, fix the code, commit, merge the fix into tip, and push the fix back into my main repo. Which gets pulled into the central repo.

We're basically doing the same, except that I don't have to do the cloning. So mine is less work. Also less disk space. The biggest difference, though, is that afterwards I can see which commits were done in maintenance and which were done in default.

In mercurial, both methods are pretty much the same. You can have the same functionality with anonymous branches or named branches, with or without clones. There's a small difference in workflow, but the biggest one, to me, is that naming your branches makes it more obvious what you are doing. You can name your branches in clones too, and the result would be the same. It's just more work and I don't see the point.
|
# ? Oct 18, 2011 22:04 |
|
epswing posted:I don't quite understand this (probably because of my lack of experience with mercurial). There's a main server, what wouldn't a 3rd dev have access to if you didn't use branches?

Let's assume I'm using a cloned repo to do my feature. Since I'm not going to check my feature into the default branch until it's done, no one else can see those changes unless I share my cloned repository with them through hg serve or some other means.

Now, if I use a named branch instead, I can push my changes to the main server without disrupting anyone else working in default. The main benefit is that my changes are shared with the rest of the team. If I then add a third member to help with the feature, he doesn't have to do anything but switch to my branch and begin working.
|
# ? Oct 18, 2011 22:24 |
|
epswing posted:This isn't a great reason. It's pretty much the same functionality.

Stuff I normally work on looks like this (and this is still rather tame): Not only is being able to switch between branches there a huge boon, having the repo I work in be aware of and contain the data of all branches at the same time also means I can do poo poo like this: If it's not exactly clear: I'm able to diff arbitrary commits in different branches and then diff arbitrary files between those commits.

epswing posted:I guess someone might not want having a number of clones (directories) lying around, but then again sometimes I do, because I want to run two versions of the program concurrently, maybe I'm trying to compare something.

At the same time I also have a flock of branches around to test various ideas and between which I switch a lot.

Edit:

epswing posted:Are they quicker/cheaper than in mercurial?

Mithaldu fucked around with this message at 22:30 on Oct 18, 2011 |
# ? Oct 18, 2011 22:27 |
|
Mercurial is also a little crazy with how it treats branches that you're done with. Merging a branch in doesn't "close" it, but just makes it "inactive". And the last changeset in that branch will still show up in hg heads. The weirdness can be avoided by using hg branches -a and hg heads -t, but in a sane world, those options would be the default.
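If you want a merged branch actually gone from those lists, you have to close it explicitly. A sketch in a throwaway repo (names are illustrative):

```shell
# Sketch: explicitly closing a merged branch so it drops out of
# 'hg heads' and 'hg branches'. Repo and branch names are made up.
export HGUSER=demo
hg init closedemo && cd closedemo
echo a > f.txt; hg add f.txt; hg commit -m "base"

hg branch -q done-feature
echo b > g.txt; hg add g.txt; hg commit -m "feature"
hg commit --close-branch -m "close done-feature"

hg update -q default
hg merge -q done-feature
hg commit -m "merge done-feature"
cd ..
```

After the --close-branch commit, the branch no longer clutters `hg heads` or `hg branches`, which is the behavior you'd arguably want by default.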
|
# ? Oct 18, 2011 22:28 |
|
epswing posted:Ahh, yeah, that's legitimate. Thanks. I haven't had to deal with a massive repo, so I haven't come across this yet.

Since you consider disk space a legit reason, here's another one: my dev disk has 143 git checkouts lying around. Each of those has on average 3 branches. So if I had each of those in one single directory, I'd have around 430 directories to juggle, most of which are duplicates of each other. That's a massive mental overhead I can just completely avoid.
|
# ? Oct 18, 2011 22:34 |
|
Mithaldu posted:Just tried this and it worked fine: code:
|
# ? Oct 18, 2011 23:16 |
|
Jam2 posted:
This seems to do the right thing, with subdir and subsubdir being the name of sub-directories: code:
|
# ? Oct 18, 2011 23:29 |
|
Mithaldu posted:Stuff i normally work on looks like this (and this is still rather tame):

Oh man, I'd love to post the revision graph from our monolithic SQL initial data load file that needs to be modified for practically every change made (a terrible, terrible idea, by the way). If those graphs are complex, ours are an abomination. Every single commit is a merge, and we usually have 3-5 "live" branches going. And we're a 10-dev shop. I don't even want to think about what someplace like Microsoft or Apple must look like.

Now take that one awful file, add about 10 more of similarly awful complexity, and then make changesets sometimes average around a hundred files changed at once, generally with 10% or so requiring merges.

Could we figure out a way to work with directories as branches? Maybe. But instead of a half hour to do the merge on one of those big issues, we'd be looking at days, where by the time you were done with a merge, you needed to merge with what somebody else had merged, and then spend a couple months dealing with the untracked bugs that crept in from screwed-up merges where you had no idea where anything was merged from or why.
|
# ? Oct 18, 2011 23:46 |
|
Mithaldu posted:As long as you've not gone beyond a certain point in complexity the two mechanisms appear to be the same. That changes once you do go beyond that though. (Usually the complexity is determined simply by how many people are involved and how many different ideas they have, not by any virtue of the project.)

Gotcha. Thanks for the input, y'all.
|
# ? Oct 18, 2011 23:46 |
|
wellwhoopdedooo posted:Oh man, I'd love to post the revision graph from our monolithic SQL initial data load file that needs to be modified for practically every change made. (a terrible, terrible idea by the way) If those graphs are complex, ours are an abomination. Every single commit is a merge, and we usually have 3-5 "live" branches going. And we're a 10-dev shop. I don't even want to think about what someplace like Microsoft or Apple must look like.

I can honestly say that if you're having that sort of problem, maybe you guys need to evaluate your workflow in terms of how you chop up tasks between devs. That just sounds loving ugly.
|
# ? Oct 20, 2011 08:20 |
|
wellwhoopdedooo posted:Oh man, I'd love to post the revision graph from our monolithic SQL initial data load file that needs to be modified for practically every change made. (a terrible, terrible idea by the way) If those graphs are complex, ours are an abomination. Every single commit is a merge, and we usually have 3-5 "live" branches going. And we're a 10-dev shop. I don't even want to think about what someplace like Microsoft or Apple must look like.

Sounds like you guys need to learn how to rebase. The solution to this kind of thing is that you take your branch, then you rebase it up one single commit on the master branch, towards head, then fix the conflicts that popped up because of that one rebase; then you rinse and repeat that process, moving up commit by commit on the master branch until you're on head and can just fast-forward the master tag onto the head of your branch.

This means instead of having to deal with ALL the differences and conflicts at once, you can take them piecemeal AND end up with a cleaner history.
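Scripted, the commit-at-a-time approach looks something like this sketch (branch names are assumptions, and it must be run inside an existing repo; when a step hits a conflict you'd resolve it, `git rebase --continue`, and re-run the loop):

```shell
# Sketch: advance 'feature' over 'master' one upstream commit at a time,
# so conflicts arrive in small batches instead of all at once.
# Assumes an existing repo with 'feature' and 'master' branches.
for rev in $(git rev-list --reverse "$(git merge-base feature master)"..master); do
    git rebase -q "$rev" feature || break   # stop here on conflict, fix, re-run
done
```

Each iteration rebases the branch onto the next master commit, so every conflict is attributable to exactly one upstream change.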
|
# ? Oct 20, 2011 08:35 |
|
I'm setting up Git for me and two other developers. We all work on our own machines and then push our code to a Github repo. The general workflow is:

1) Make changes
2) Commit and push
3) Log into production server
4) Pull changes
5) Build project (in Visual Studio)

I'm trying to see if there's a way to automate 3 and 4, and possibly 5. Is there a way to remotely trigger a Git pull for another repo? So that when we commit and push, we could run a command that has the production server automatically pull down the changes?
|
# ? Oct 24, 2011 21:39 |
|
Typically you have some daemon running on the production machine that scans your GitHub repo for changes. When it detects them, it pulls the changes and compiles your project.
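A bare-bones stand-in for such a daemon is a cron entry (a sketch only; the path, branch, and build command are all placeholders for whatever the project actually uses):

```crontab
# hypothetical crontab line on the production machine: every 5 minutes,
# fast-forward if upstream moved, then rebuild
*/5 * * * * cd /srv/app && git pull --ff-only origin master && msbuild App.sln
```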
|
# ? Oct 24, 2011 21:53 |
|
Is there a reason you don't just push to the production server?
|
# ? Oct 24, 2011 21:56 |
|
Xik posted:Is there a reason you don't just push to the production server? I'm guessing firewalls and some such.
|
# ? Oct 24, 2011 22:15 |
|
Xik posted:Is there a reason you don't just push to the production server? Even if you did, you'd still have to log in and reset to HEAD, right? Or does pushing actually clobber the working directory?
|
# ? Oct 25, 2011 01:19 |
|
It's not very hard to add a post-update hook to do so.
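An end-to-end sketch of that hook, with throwaway local paths standing in for the real server (a bare 'central' repo whose post-update hook refreshes a deploy directory after every push):

```shell
# Sketch with hypothetical paths: a bare repo whose post-update hook
# force-checkouts master into a deploy directory after every push.
top=$(mktemp -d) && cd "$top"
git init -q --bare central.git
mkdir deploy
cat > central.git/hooks/post-update <<EOF
#!/bin/sh
GIT_WORK_TREE=$top/deploy git checkout -f master
EOF
chmod +x central.git/hooks/post-update

# a developer clone: commit and push; the hook deploys automatically
git clone -q central.git work 2> /dev/null
cd work
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name you
echo hello > index.txt && git add index.txt && git commit -qm "first deploy"
git push -q origin master 2> /dev/null
cd "$top"
ls deploy                               # index.txt is now deployed
```

On a real server you'd point GIT_WORK_TREE at the live directory and add a build step after the checkout.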
|
# ? Oct 25, 2011 02:08 |
|
Strict 9 posted:I'm setting up Git for me and two other developers. We all work on our own machines and then push our code to a Github repo. The general workflow is:

I never understood why people use source control to push things into production. Or why they apparently don't have a test server. Or why they actually build things on their production servers. Or why they have Visual Studio installed on a production server. Maybe it's just me.
|
# ? Oct 25, 2011 09:05 |
|
|
|
MEAT TREAT posted:Typically you have some Daemon running on the production machine that scans your GitHub repo for changes. When it detects them, it pulls the changes and compiles your project.

Plorkyeran posted:It's not very hard to add a post-update hook to do so.

That sounds fine - any examples of how to accomplish that in a Windows Server environment?
|
# ? Oct 25, 2011 13:33 |