|
Golbez posted:Not sure which command(s) to do for this, which seems simple, but being a relative git neophyte I'm not sure:

The specific command you're thinking of is git revert, which creates a new commit reverting the changes made by one or more prior commits. The main advantage is that it doesn't rewrite history: the revert gets its own entry in the commit log, and it works even if you've already pushed the branch or others have pulled it. That said, if you're only interested in commit A and not the branch itself, you can (as PiotrLegnica said) merge only A back in or create a new branch from it, ignore B and C, and possibly delete this branch entirely once A is dealt with. If you are specifically attached to this branch, revert or reset are pretty much your only options.
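For reference, a minimal sketch of what git revert does; the repo path, file names, and commit messages here are all invented for illustration:

```shell
# Hypothetical demo repo; names and messages are made up.
set -e
rm -rf /tmp/revert-demo && mkdir /tmp/revert-demo && cd /tmp/revert-demo
git init -q
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "base"
echo "feature A" > a.txt && git add a.txt && git commit -q -m "A"
echo "feature B" > b.txt && git add b.txt && git commit -q -m "B"
# Undo commit B without rewriting history: git creates a new commit whose
# diff is the inverse of B, so already-pushed history is untouched.
git revert --no-edit HEAD
git log --oneline
```

After the revert, b.txt is gone from the working tree but commit B is still in the log, which is exactly why this is safe on a branch others have already pulled.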
|
# ? Sep 21, 2012 00:46 |
|
|
I think my local git repository just blew up. Any time I try to change branches it says "fatal: bad revision 'refs/heads/temp'". Any ideas on recovery?
|
# ? Sep 21, 2012 17:00 |
|
|
# ? Sep 21, 2012 18:02 |
|
Lysidas posted:
1. There is no refs/heads/temp! There's a refs/heads/TEMP (not what the error says!) and that folder is empty.
2. .git/HEAD is a file. It has a hash in it. This is the hash that's showing up in my prompt (which usually shows what branch I have checked out).
3. git fsck gives that same error, then a long, long list of dangling blobs, trees, and commits.
EDIT: All of this was using Cygwin git. When I switched to msysgit I was able to change branches and clean things up. But it's still giving the error about refs/heads/temp, which has me pretty nervous. Dromio fucked around with this message at 19:13 on Sep 21, 2012 |
# ? Sep 21, 2012 18:56 |
|
Dromio posted:1. There is no refs/heads/temp! There's a refs/heads/TEMP (not what the error says!) and that folder is empty.

File systems in Windows** are not case-sensitive, so they're the same thing.

** I mean the traditional Windows file systems; I guess you could mount actual case-sensitive file systems in Windows though
|
# ? Sep 22, 2012 03:05 |
|
No Safe Word posted:File systems in Windows** are not case-sensitive so they're the same thing. Maybe, but most things in Cygwin do care about the case. I think git under Cygwin does. Anyway, the folder is empty. No idea what if anything should be in there.
|
# ? Sep 22, 2012 04:14 |
|
Dromio posted:Maybe, but most things in Cygwin do care about the case. I think git under Cygwin does.

You shouldn't have a folder there; folders only exist under .git/refs when they're the prefix of a longer ref, such as refs/heads/TEMP/foo. Remove the folder to see if that fixes it.
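To illustrate the point about directories under .git/refs (branch names here are invented): a loose branch ref is just a file containing a commit hash, and a directory only appears when a ref name contains a slash.

```shell
# Demo of git's loose-ref layout; branch names are made up.
set -e
rm -rf /tmp/refs-demo && mkdir /tmp/refs-demo && cd /tmp/refs-demo
git init -q
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "initial"
git branch TEMP/foo            # slash in the name...
ls .git/refs/heads             # ...so a TEMP/ directory appears alongside the default branch
cat .git/refs/heads/TEMP/foo   # the ref itself is a plain file holding a commit hash
```

An empty directory left behind there corresponds to no ref at all, which is why deleting it is safe.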
|
# ? Sep 22, 2012 11:30 |
|
Edison was a dick posted:You shouldn't have a folder there, folders only exist in .git/refs when it's the beginning of another ref, such as refs/heads/TEMP/foo. Remove the folder, to see if that fixes it. I seem to have quite a few empty folders in there. Deleting that one did get rid of the error. git fsck still complains about a lot of other dangling items. I've made a new clone of the remote repository and copied over my local branches from the bad one. I don't trust it anymore.
|
# ? Sep 23, 2012 00:57 |
|
Is there something fundamentally wrong with doing source control in the following way:
- git init on the directory that contains my source
- Stage and commit my source to this "Source Repo"
- Create a folder on my SkyDrive
- Add the SkyDrive folder as a remote to my Source Repo
- Push from my Source Repo to the SkyDrive repo
In this way I end up with a "bare" git repo on SkyDrive. Is there potentially some way to get a repo on SkyDrive that actually has a working directory structure? I'd like to be able to take a look at my files in a pinch without having to clone the repo somewhere... Also, how would this approach work if I were to decide to start doing development on two machines?
|
# ? Sep 23, 2012 03:29 |
|
shodanjr_gr posted:Is there something fundamanetally wrong with doing source control in the following way:

No, that's how you do this. If you don't want a bare repo, git init the remote before pushing. Just note that pushing to a repo with a checked-out working copy can crap out sometimes; it's generally better to keep the remote bare.

shodanjr_gr posted:Also, how would this approach work if I were to decide to start doing development on two machines?

Push on one machine, pull on the other, repeat until you get bored of the project. Just regular DVCS usage.
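A sketch of that workflow, using invented local paths to stand in for the SkyDrive folder and the two machines:

```shell
# /tmp/skydrive stands in for the synced SkyDrive folder; all paths are hypothetical.
set -e
rm -rf /tmp/skydrive /tmp/project /tmp/laptop
mkdir -p /tmp/skydrive /tmp/project
git init -q --bare /tmp/skydrive/project.git   # bare: no working copy for pushes to trip over

cd /tmp/project
git init -q
git config user.name demo && git config user.email demo@example.com
echo "hello" > readme.txt
git add readme.txt && git commit -q -m "first commit"
git remote add skydrive /tmp/skydrive/project.git
git push -q skydrive HEAD

# Second machine: clone from the synced folder, then push/pull as usual.
git clone -q /tmp/skydrive/project.git /tmp/laptop
```

The clone on the second machine gets a full working copy, so day-to-day use is just push on one machine, pull on the other.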
|
# ? Sep 23, 2012 03:42 |
|
I'm working on some CI stuff for a client using TFS. They necessarily have very large solutions with lots of dependencies, so we made the decision to do binary references instead of having solutions with hundreds of project references, with CI builds set up to automatically update the binaries on build, and then rebuild any projects using those binaries. I just finished mapping out what-triggers-what, and there's one massive solution that's used by basically every other project. It triggers 65 additional builds. One particular build ends up being run 18 times. Running all 66 builds would probably take 3-4 hours, thus entirely defeating the purpose of CI. I don't have any questions, I just wanted to vent.
|
# ? Sep 24, 2012 21:43 |
|
We may be setting up integrated issue tracking & version control from scratch at work in the near future. Is there anywhere I should look that has a comparison of externally hosted systems for this (github, bitbucket, etc), and general reading I should do on this topic? For background I've used a mix of svn and git for personal stuff for years now, and a coworker has worked with cvs and perforce before at previous jobs, but neither of us has done much with bugtrackers and we have zero process in place at this time.
|
# ? Sep 24, 2012 22:17 |
|
Ithaqua posted:I'm working on some CI stuff for a client using TFS. They necessarily have very large solutions with lots of dependencies, so we made the decision to do binary references instead of having solutions with hundreds of project references, with CI builds set up to automatically update the binaries on build, and then rebuild any projects using those binaries. Jenkins also has this problem. A coworker solved this by writing a program to trigger builds such that it wouldn't redundantly trigger the same job multiple times. Unfortunately this requires him to manually say he wants to run tests, so it doesn't scale well to many developers.
|
# ? Sep 25, 2012 08:55 |
|
Not doing much on that scale, but typically we include binaries for our libraries, with the projects living independently with their own CI stack. The libraries in question are very stable though, perhaps one change a year. If you are on TeamCity you can have it make a NuGet feed for you, making this very painless.

Otto Skorzeny posted:We may be setting up integrated issue tracking & version control from scratch at work in the near future. Is there anywhere I should look that has a comparison of externally hosted systems for this (github, bitbucket, etc), and general reading I should do on this topic? For background I've used a mix of svn and git for personal stuff for years now, and a coworker has worked with cvs and perforce before at previous jobs, but neither of us has done much with bugtrackers and we have zero process in place at this time.

I've done this pretty successfully with Redmine. SVN integration is easy -- just point it at the server. DVCS is a little more involved, as you need a local clone and a cron job to update it. But it worked out in our case -- we wanted a local copy of the repos anyhow.
|
# ? Sep 25, 2012 13:11 |
|
Definitely don't use a cron job for syncing unless you have no other choice (and if you have no other choice, you probably chose poorly when deciding on a repo host). Post-receive (git) / changegroup (hg) hooks on the upstream work much better.
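For git, a minimal post-receive hook might look like this (paths invented; note the unset GIT_DIR, a classic gotcha when a hook operates on a different repo):

```shell
# Upstream bare repo plus a mirror clone that a tracker like Redmine could read.
set -e
rm -rf /tmp/upstream.git /tmp/mirror /tmp/dev
git init -q --bare /tmp/upstream.git
git clone -q /tmp/upstream.git /tmp/mirror

# The hook fires after every push, so the mirror updates immediately
# instead of waiting on an every-minute cron job.
cat > /tmp/upstream.git/hooks/post-receive <<'EOF'
#!/bin/sh
unset GIT_DIR   # hooks run with GIT_DIR pointing at the bare repo; clear it first
git -C /tmp/mirror fetch -q --all
EOF
chmod +x /tmp/upstream.git/hooks/post-receive

# Simulate a developer pushing:
git clone -q /tmp/upstream.git /tmp/dev
cd /tmp/dev
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "trigger the hook"
git push -q origin HEAD
```

After the push, the mirror already has the new commit with no polling involved.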
|
# ? Sep 25, 2012 15:08 |
|
We have a bunch of scripts that have higher permissions than git, so we use an every-minute cron job to git pull and then update the permissions. Probably not the best way of doing it.

A question of my own: We have a website. In this website are various modules. Someone has expressed interest in buying one of those modules as a service. And of course, they might want features we don't want, but we may want to give bugfixes to both, etc. We're relatively new to git, so we've been trying to figure out how to handle this. Do we give that module its own git repository, and them their own repository, and our site keeps its own repository, and we simply merge between repos? Can you do that? Or do we give it its own repository and somehow have branches off master? So it would look like:

* Their version
 \
  \   * Our version
   \ /
    * HEAD/master

I guess the main question is, how does this get done in more experienced houses? Thanks.
|
# ? Sep 25, 2012 15:37 |
|
Golbez posted:We have a website. In this website are various modules. Someone has expressed interest in buying one of those modules as a service. And of course, they might want features we don't want, but we may want to give bugfixes to both, etc. We're relatively new to git so we've been trying to figure out how to handle this.

The first thing I would try is extracting the shared code into whatever packaging/dependency format your language uses, giving it its own repo and versioning, and having the other sites consume it. If you can't do that, there's git submodules, which you can read up on, but they're a distant second.
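A quick sketch of the submodule route, with made-up paths standing in for the module and site repos (the protocol.file.allow setting is only needed on recent git for local-path submodules):

```shell
set -e
rm -rf /tmp/module.git /tmp/module-work /tmp/site
# The shared module gets its own repo and history:
git init -q --bare /tmp/module.git
git clone -q /tmp/module.git /tmp/module-work
cd /tmp/module-work
git config user.name demo && git config user.email demo@example.com
echo "shared code" > module.txt
git add module.txt && git commit -q -m "module v1"
git push -q origin HEAD

# The site pins the module at a specific commit via a gitlink entry:
git init -q /tmp/site
cd /tmp/site
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "initial"
git -c protocol.file.allow=always submodule add /tmp/module.git shared
git commit -q -m "pin shared module"
```

Each consumer records the exact module commit it uses, so a bugfix in the module only reaches a site when that site explicitly updates its submodule pointer.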
|
# ? Sep 25, 2012 17:06 |
|
/home got nuked yesterday as part of a reinstall of Linux. Did we have backups? lol of course not. Which means ~git is gone, as is our gitolite installation. The last backup of ~git we have is from Friday morning. A few questions:
* I should just be able to put the ~git backup back where it goes and have things work, right? After installing git and gitolite, of course.
* We should be able to view our commits locally with gitk, right? And then recreate them and push to the server?
* Or, would git simply see our commits as newer than what it has and happily accept them?
|
# ? Sep 30, 2012 23:30 |
|
Golbez posted:A few questions:

If ~git is just a folder with repos, then yes, just put it back where it was and it'll work. (But I've never used gitolite, so it might not be just that.) If you have a local clone, just push to the remote as usual. No need to recreate anything, and commit dates will not change. DVCS is called "distributed" because every repo is equal; there is no predesignated "main" repo. If two repos have the same changesets, they are identical in every respect. Cat Plus Plus fucked around with this message at 23:40 on Sep 30, 2012 |
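A sketch of why the re-push just works (all paths hypothetical): commits are content-addressed, so a surviving clone simply sends whatever objects the restored backup is missing, with hashes and dates unchanged.

```shell
set -e
rm -rf /tmp/backup.git /tmp/server.git /tmp/survivor
# The "server" as of Friday's backup:
git init -q --bare /tmp/backup.git
git clone -q /tmp/backup.git /tmp/survivor
cd /tmp/survivor
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "old commit (already in the backup)"
git push -q origin HEAD
git commit -q --allow-empty -m "new commit (made after the backup)"

# Disaster + restore: the backup becomes the new server repo.
cp -r /tmp/backup.git /tmp/server.git
git remote set-url origin /tmp/server.git
git push -q origin HEAD   # only the missing objects travel; nothing is recreated
```

After the final push the restored server has the full history again, including the commit the backup never saw.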
# ? Sep 30, 2012 23:38 |
|
Has anyone using GitHub ever made a commit with a file that included some sensitive information, with merges, issues, and whatnot pointing to it? I doubt anyone will ever see it, and it's nothing major, but I just wonder if it's even possible to purge files and information like that. Especially if it ever happens with an important password or something of the sort.
|
# ? Oct 10, 2012 12:14 |
|
ufarn posted:Has anyone using GitHub ever made a commit with a file that included some sensitive information with merges, issues, and whatnot pointing to it? https://help.github.com/articles/remove-sensitive-data
|
# ? Oct 10, 2012 12:29 |
|
Wow, that's great. Thanks a bunch.
|
# ? Oct 10, 2012 12:39 |
|
ufarn posted:Has anyone using GitHub ever made a commit with a file that included some sensitive information with merges, issues, and whatnot pointing to it?

Change your password. If you've pushed at all, especially in a popular repo (you said merges and issues point to it), consider it compromised.
|
# ? Oct 10, 2012 14:56 |
|
Suspicious Dish posted:Change your password. If you've pushed at all, especially in a popular repo (you said merges and issues point to it) consider it compromised.
|
# ? Oct 10, 2012 15:41 |
|
Can you use that sort of thing to change the initial commit of a git repo? I've not gotten that to work in the past, but I wasn't using GitHub's instructions either.
|
# ? Oct 10, 2012 17:00 |
|
filter-branch works fine on the initial commit; it's just rebase -i that doesn't.
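The GitHub article's approach at the time was filter-branch; here's a sketch with an invented filename (current docs point at git filter-repo instead, but the idea is the same):

```shell
set -e
rm -rf /tmp/leak
git init -q /tmp/leak && cd /tmp/leak
git config user.name demo && git config user.email demo@example.com
echo "hunter2" > passwords.txt          # the hypothetical leaked file
echo "docs" > README
git add . && git commit -q -m "initial commit (oops)"
echo "real work" > app.txt
git add app.txt && git commit -q -m "add app"

# Rewrite every commit, including the initial one, dropping the file:
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch passwords.txt' \
  --prune-empty --tag-name-filter cat -- --all
```

After this the file is absent from every rewritten commit; a force-push and a password rotation are still needed, since the old objects already lived on the remote.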
|
# ? Oct 10, 2012 17:47 |
|
I just ran their commands and got this, for people interested:

Sh code:
Does the last line just mean that I haven't pushed to my remote master at GitHub (which is true)? Or did something fail?
|
# ? Oct 10, 2012 18:48 |
|
We're migrating to git (from SVN) and we're having some issues with builds. (I'm a DBA, not a developer. Every release we've had since migrating to git has been a clusterfuck of late additions and "oh god why isn't this in the build augh", including one where we didn't finalise what was to be released until half a business hour before the release was due to go to production. We're currently a very process-driven, risk-averse place trying to become more "agile".)

Our developers work several weeks in advance of releases, and we're pushing for a two-week release cycle (used to be quarterly major releases and monthly minor releases). As far as I can tell, the process is:
- testers or external users raise issues
- BAs do their thing, create a bunch of Jira issues and sub-issues and documents and poo poo.
- Developers come along and develop that stuff, then commit it to a branch (which branch? ).

Every two weeks or so, we schedule a release:
- There's a meeting to decide what goes into the build for that release.
- A new branch is created.
- The development team leader sends out a list of those Jiras ordered by which developer worked on them.
- The developers commit the code for those Jiras to the release branch, which is built and then deployed.

Somewhere along the line, roughly half the poo poo for any given release goes missing, so the release fails functional and/or integration testing, so the developers send stuff directly to the infrastructure guys (DBA + applications engineers) to be manually integrated into the build. Apparently, the weak point of this is identifying the Jiras that need to go for a build, because some of them aren't properly tagged as sub-issues and/or the dev TL doesn't chase all the way down the tree of issues. Something tells me that using git and doing a build should not be this hard. What are we doing wrong? Alternatively, how should we be doing it?
|
# ? Oct 11, 2012 09:26 |
|
Sounds like it is a "people not knowing how to do poo poo" issue -- no reason stuff should go missing unless you've got git noobs at the helm and/or people aren't pushing poo poo. That said, your branching model should look something like http://nvie.com/posts/a-successful-git-branching-model/ barring better input. There is also an extension that supports said model in git or hg; it is called git-flow or hg-flow.
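The mechanics of that branching model in plain git commands (branch and version names here are invented):

```shell
set -e
rm -rf /tmp/flow
git init -q /tmp/flow && cd /tmp/flow
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "initial"
trunk=$(git symbolic-ref --short HEAD)   # master or main, depending on local config
git branch develop                       # long-lived integration branch

# Features branch off develop and merge back with an explicit merge commit:
git checkout -q -b feature/login develop
git commit -q --allow-empty -m "add login"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login

# Cutting a release: branch off develop, stabilise, then merge to trunk and tag:
git checkout -q -b release/1.0 develop
git checkout -q "$trunk"
git merge -q --no-ff -m "release 1.0" release/1.0
git tag -a -m "version 1.0" v1.0
```

The --no-ff merges keep each feature and release visible as its own bubble in the history, which is the main selling point of the nvie model.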
|
# ? Oct 11, 2012 11:54 |
|
Personally, I dislike git-flow; I think it's overly complex. I prefer the GitHub model where you have master, branch from master, do your work, deploy that branch, and if it goes well, merge it into master. They wrote about it here: https://github.com/blog/1241-deploying-at-github

I suspect, Thel, that when you say branches or code are missing, the developers aren't pushing them to the canonical remote? What is your git server setup like? Do you use your own git server or GitHub/BitBucket/etc?

Deploying with GitHub is very easy, I think. I have an organization set up for my company. That's the canonical repository, and all deploys are done from its master branch. No other branches get pushed to it. Every developer on my team has a GitHub account. They fork the canonical repository, do their work, and push as many branches to their fork as they like. When their code is ready to be deployed, a pull request is opened: code review, a run through the CI/build server, everything looks good. OK, merge their code, one more CI/build server run through, and if that goes well, automatically deploy with Capistrano. It requires good, disciplined engineers, but it works amazingly well.
|
# ? Oct 11, 2012 12:12 |
|
^^^ Good points. If I were setting things up today, knowing DVCSes as well as I do now, I'd probably look at more pull requests for workflow over branches. Now, I still think you need branching in the main repository depending on how many different production "versions" exist (we keep prod, qa and CI running for all projects).

quote:Somewhere along the line, roughly half the poo poo for any given release goes missing, so the release fails functional and/or integration testing, so the developers send stuff directly to the infrastructure guys (DBA + applications engineers) to be manually integrated into the build.

I missed this. This is really your problem. You need dev/integration in the SCM. No code changes should go out except via SCM, through the correct channels. Period.
|
# ? Oct 11, 2012 12:52 |
|
GitHub flow shines if you're doing continuous deployment. If you have versions and releases and the like, the complexity git-flow adds can be a help rather than a hindrance.
|
# ? Oct 11, 2012 21:04 |
|
^^^ Exactly. Basically all we've got is continuously deployed webapps with too many cooks in the kitchen, so maintaining concurrent development with concurrent QA with concurrent production is key. We were doing the same general sort of workflow with SVN, which made me a badass svn merger with no fear. But it is so much better with hg or git.
|
# ? Oct 11, 2012 21:11 |
|
Our flow, based on that webpage and modified for our needs, for our continually-updated website is:
* Time to code! Make a new branch off master named after the ticket for the bug/feature.
* Keep it local. We have development repos, but I never merge into mine, since there's almost never any need for someone to view the code.
* When it's time to test, merge into test (which was branched off master some time ago) and wait.
* When it's signed off on, merge into master, tag it with a version number, push, then delete the original branch.
Sometimes, for whatever reason (a branch got merged into test but was cancelled before going live, etc.), test starts to get out of sync with master, and we have to nuke it and rebuild it with all of our development branches. Golbez fucked around with this message at 21:41 on Oct 11, 2012 |
# ? Oct 11, 2012 21:38 |
|
wwb posted:Which made me a badass svn merger with no fear. Did you write svn or something?
|
# ? Oct 11, 2012 22:02 |
|
Golbez posted:Our flow, based on that webpage and modified for our needs, for our continually-updated website is: I assume by "merge into master" you mean the feature branch? If you were instead just merging test into master, you'd never have to worry about it getting out of sync. e: In that scenario, you'd also want to make your branches off of test, so you would know that you were caught up with the latest changes. e2: Hey, wait a minute, why are you merging into test before testing? Why not just make sure your ticket branch is up-to-date and then test it directly? Doc Hawkins fucked around with this message at 22:11 on Oct 11, 2012 |
# ? Oct 11, 2012 22:08 |
|
Suspicious Dish posted:Did you write svn or something? Nope. But apparently I tried merges people thought not possible. It really helped being on a gigabit connection to the svn server, made blowing things away relatively painless when merges got fubar.
|
# ? Oct 11, 2012 22:42 |
|
Doc Hawkins posted:I assume by "merge into master" you mean the feature branch? If you were instead just merging test into master, you'd never have to worry about it getting out of sync.

Right, sorry, I meant merge the feature branch into testing. Test never gets merged into master. We branch off master because of the aforementioned problems with test getting out of sync - I want to start with a 'pristine' copy of the code, not one that's been dirtied up with things in beta testing. Merging into test is for when other people have to test it. I can test it just fine on my own development environment and whatnot; test goes to an actual server that people other than me can log in to and view, and anyone can merge to it, not just me, so I might have a half dozen features being tested up there at any given time, and the other dev might have some too. So, here's how a bill becomes law:

git checkout -b hd1234 master
[coding, committing, tested by myself, and now ready for external testing]
git checkout test
git merge hd1234
[if people want changes, these are made to hd1234, which is then remerged into test. if they sign off on it, then...]
git checkout master
git merge hd1234
git tag -a -m "this is a feature" 5.0.1
git branch -d hd1234
|
# ? Oct 11, 2012 23:00 |
|
e: This is turning into the vcs version of the "cobol in any language" joke. Doc Hawkins fucked around with this message at 01:24 on Oct 12, 2012 |
# ? Oct 12, 2012 01:15 |
|
|
Doc Hawkins posted:
We have three servers: Live, 'test', and 'dev'. They can't test on dev because it only has my current branch up at any moment, and I don't want people snooping around code I'm actively working on. To answer the second question: 1) Because when I merge a branch into master, I'm just merging THAT branch. If I merged test into master, I'd be merging all 15 branches currently in active testing into master, which is obviously not what we want. 2) I would also be merging any branches that were killed after test but before master.
|
# ? Oct 12, 2012 04:34 |