|
I'm not familiar with git-stash, myself. What I'd do is something like:

code:
git add -p <files with changes you want to keep>   # use the patch interface to add the chunks you want to keep
git commit
git checkout -b wacky_feature
git add <files with ugly changes>
git commit
|
# ? Aug 2, 2010 12:40 |
|
FigBug posted:A while ago I saw an ad for a product that was aimed at management. It was a tool that you connect to subversion and it tells you which of your employees are working hard, how much they are doing, etc. The idea was that you could promote/fire people based on the output of this tool. It sounded hilarious, and now I want to try it, but I can't remember what it was called. Anybody know what I'm talking about?

This is an old page 3 post, but the app you are thinking of could have been Atlassian FishEye. Its ideal use is not what you listed, but it could be treated that way. It maps an SVN/git/Mercurial etc. repo and presents reports on lines of code (LoC) and commit frequency, and if it's linked in with Crucible, JIRA and Bamboo it will show build results and associated issues and allow for peer code review.

Our dev team uses FishEye to track change sets during standups or in meetings when there isn't access to a computer with Versions or Tortoise. We also have Crucible for peer code review: we can look at a changed file in FishEye and reviewers can add comments inline in the code. If there are problems with the code you can raise a JIRA issue to fix it. Of course we also use JIRA with GreenHopper for Agile dev. Assign a story to a dev, with some technical tasks representing each unit of work, and when each unit of work is committed it's tagged with a JIRA ID, which links JIRA to FishEye.

* I don't work for Atlassian, but I love their products and we use them religiously at work.
|
# ? Aug 2, 2010 12:43 |
|
ColdPie posted:I'm not familiar with git-stash, myself. What I'd do is something like:

Note that "git gui" provides a nicer interface for staging individual diff chunks, too.
|
# ? Aug 2, 2010 13:22 |
|
ToxicFrog posted:Note that "git gui" provides a nicer interface for staging individual diff chunks, too.

Tell me, for future reference, is there any way to directly do the whole 'edit a patch before it hits staging' thing without going through git add -i? Like, a "git add -p filename.c" or something? I want to leave the file in the working directory untouched, but add a subset of its changes to the staging area. This is especially nice when I get to use my external text editor or diff'er. I often find myself fixing a few bugs here and there incidental to whatever branch I'm working on, and a lot of times these bug fixes should really be in the master branch. I'm hoping to figure out a nice way to do this.
|
# ? Aug 2, 2010 22:43 |
|
Magicmat posted:This led me to try to use GitX, but the hunk sizes were too big (I sometimes wanted to keep only 1 of 3 lines all right next to each other), which led me to git add -i and, eventually, the patch command therein, which was just what I was looking for in step 1.

Yeah. It's "git add -p filename.c".
|
# ? Aug 2, 2010 22:46 |
|
ColdPie posted:Yeah. It's "git add -p filename.c".

Is that the preferred method for the use-case I described above? That worked right now, when I was already in branch B and wanted to commit the bugfixes to B and then create a new branch of other changes, C. What if, though, I'm 3 commits deep into C, fix a few bugs, and then want to commit just those bugfixes to B? What's the best way then? I'm starting Pro Git now. Maybe that will give me a better grounding than the scattered "git tutorial" pages I've relied on so far.
|
# ? Aug 2, 2010 22:50 |
|
Magicmat posted:Haha, no kidding? I'm good at guessing, it seems :p

Yeah, just give that a read-through. It'll probably answer your questions fairly well. Make sure you read chapter 9, which will give you some insight into how git works internally, which can help you manipulate the porcelain better.

To answer your question, you can change branches with a dirty working tree freely, so long as your uncommitted changes don't conflict with changes that occur due to switching branches. So you can just leave the changes you want in B in place, use git-checkout to switch to B, and then commit them. A less "up in the air" method would be to just commit them on top of C and cherry-pick the commit over to B, then 'git checkout C; git reset --hard HEAD^' to reset C to not include those changes.
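The cherry-pick route above can be sketched end to end in a throwaway repo. Branch names B and C come from the discussion; the file names and messages are invented for the demo:

```shell
#!/bin/sh
# Demo of the cherry-pick approach: a bugfix committed on branch C is copied
# to branch B, then dropped from C. Runs in a scratch repo; names are examples.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you

echo base > file.txt
git add file.txt && git commit -qm "base"
git branch B                       # B stays at the base commit
git checkout -qb C
echo feature >> file.txt && git commit -qam "feature work"
echo bugfix > fix.txt
git add fix.txt && git commit -qm "bugfix"    # oops, this belongs on B

git checkout -q B
git cherry-pick C                  # copy C's tip commit (the bugfix) onto B
git checkout -q C
git reset -q --hard HEAD^          # remove the bugfix commit from C
```

Afterwards B has the base commit plus the bugfix, and C keeps only its feature work.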
|
# ? Aug 2, 2010 23:02 |
|
I'm having trouble understanding the way git works. I know that with SVN, I update the code in my source folders locally, and when I've made changes I commit them. But with git there are local and remote repos, and that just threw me off for some reason. Anyone know of any good guides for SVN-to-Git or some good tutorials or something?
|
# ? Aug 10, 2010 18:04 |
|
Some people swear by the Git Parable, and I do like it as an explanation. I enjoyed Pro Git, particularly the chapter "Git Basics". There's also Git Magic.
|
# ? Aug 10, 2010 18:18 |
|
And if for some reason all of those websites fail you there is http://gitready.com/
|
# ? Aug 10, 2010 23:38 |
|
I think I have a better grasp on it now than before, but I don't want to pay for a private repository. Anyone know of any good tutorials on how to set one up on Linux? More specifically something Debian/Debian-based? I've found some for public ones, but I don't want my source code public. e: I'm not sure I really want to use ssh, but I can if need be. Yakattak fucked around with this message at 02:41 on Aug 11, 2010 |
# ? Aug 11, 2010 02:20 |
|
Yakattak posted:I think I have a better grasp on it now than before, however I don't want to pay for a private repository. Anyone know of any good tutorials on how to set one up on Linux? More specifically something Debian/Debian based? I've found some for public ones, but I don't want my source code public.

Um. git init.
|
# ? Aug 11, 2010 02:50 |
|
Yakattak posted:I think I have a better grasp on it now than before, however I don't want to pay for a private repository. Anyone know of any good tutorials on how to set one up on Linux? More specifically something Debian/Debian based? I've found some for public ones, but I don't want my source code public.

If you have your own server (and it sounds like you do), you can have all the private hosting in the world for free: go to your home directory on the server, create a bare repository there, and push to it over SSH.

Edit: Or just fork over the $7 a month for 5 private repos on GitHub and support an amazing product.
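A minimal sketch of that do-it-yourself setup, with a local bare repository standing in for the one in your home directory on the server (over the network the clone URL would be something like user@yourserver:project.git; all names here are placeholders):

```shell
#!/bin/sh
# Sketch of self-hosted private repo setup. The bare repo plays the role of
# the one on your server; paths and names are placeholders.
set -e
top=$(mktemp -d)
git init -q --bare "$top/project.git"       # on the server: bare repo, no working tree

git clone -q "$top/project.git" "$top/work" # on your machine; over SSH this URL
cd "$top/work"                              # would be user@yourserver:project.git
git config user.email you@example.com
git config user.name you
echo hello > readme.txt
git add readme.txt && git commit -qm "first commit"
git push -q origin HEAD                     # publish to the "server"
```

Anyone with an SSH account on that box (and read permission on the directory) can then clone and push the same way.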
|
# ? Aug 11, 2010 03:05 |
|
Alright, I got git working and all that, and in my research, I found this: http://od-eon.com/blogs/horia/two-git-tips-will-make-you-smile/ Most notably, making a global .gitignore:

code:
git config --global core.excludesfile ~/.gitignore
|
# ? Aug 16, 2010 16:22 |
|
Magicmat posted:Is there a better way?

In the future, when you're working on a new feature, make a branch for it. Then make a bunch of small commits anytime you've made progress at all. I commit after I get a test or two passing. You'll end up with like 20 commits for that feature. When it's done, pop over to your master branch, or wherever, and run

code:
git merge --squash featurebranch

This will pull in all your little commits, and make them just one commit that has the whole feature. You get the best of both worlds: easy loving around with code while you're writing it, and a nice, clean, linear history of your master branch that has just the features. http://reinh.com/blog/2009/03/02/a-git-workflow-for-agile-teams.html
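A compact demo of that workflow in a scratch repo (branch, file, and message names are all invented):

```shell
#!/bin/sh
# Demo of the squash workflow: several tiny commits on a feature branch become
# exactly one commit on master.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial"
git branch -M master                # make sure the main branch is called master

git checkout -qb featurebranch
for n in 1 2 3; do                  # stand-in for "20 little commits"
  echo "step $n" >> feature.txt
  git add feature.txt
  git commit -qm "wip $n"
done

git checkout -q master
git merge -q --squash featurebranch # stages the combined diff, doesn't commit
git commit -qm "Add feature"        # master gets exactly one new commit
```

The wip commits still exist on featurebranch; master's history only ever sees "Add feature".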
|
# ? Aug 16, 2010 17:02 |
|
Pardot posted:In the future, when you're working on a new feature, make a branch for it. Then make a bunch of small commits anytime you've made progress at all. I commit after I get a test or two passing. You'll end up with like 20 commits for that feature.

Much more flexible than merge --squash is rebase --interactive (or just rebase -i). It lets you pick which commits get squashed, lets you edit the commits or the commit messages as it replays them, etc.
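A scripted stand-in for that interactive session: normally you would edit the todo list by hand in your editor, but here sed plays the editor (via GIT_SEQUENCE_EDITOR) so the sketch can run unattended. Everything in the scratch repo is invented:

```shell
#!/bin/sh
# Sketch of squashing with rebase -i. GIT_SEQUENCE_EDITOR stands in for you
# editing the todo list; GIT_EDITOR=true accepts the combined message as-is.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "base"
for n in 1 2 3; do
  echo "$n" >> f.txt
  git add f.txt
  git commit -qm "wip $n"
done

# Change lines 2-3 of the todo list from "pick" to "squash", i.e. fold the
# last two commits into the first one.
GIT_EDITOR=true \
GIT_SEQUENCE_EDITOR='sed -i -e "2,3s/^pick/squash/"' \
git rebase -qi HEAD~3
```

After this the three wip commits have collapsed into a single commit on top of "base".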
|
# ? Aug 16, 2010 21:35 |
|
Pardot posted:In the future, when you're working on a new feature, make a branch for it. Then make a bunch of small commits anytime you've made progress at all. I commit after I get a test or two passing. You'll end up with like 20 commits for that feature. ColdPie posted:Much more flexible than merge --squash is rebase --interactive (or just rebase -i). It lets you pick which commits get squashed, lets you edit the commits or the commit messages as it replays them, etc.
|
# ? Aug 17, 2010 00:39 |
|
That's when you put your work on said sub-feature into a new branch and roll back to work on the main one.
|
# ? Aug 17, 2010 06:24 |
|
ColdPie posted:Much more flexible than merge --squash is rebase --interactive (or just rebase -i). It lets you pick which commits get squashed, lets you edit the commits or the commit messages as it replays them, etc.

Yeah, I used to do the interactive rebase, but it turns out I always just want all of them squashed, so I got lazy and just do merge --squash. It saves a step or two, and if you just do git commit and leave out the message, it still lets you see all the old commit messages. Interactive rebase is awesome when you need it though, no doubt about it.
|
# ? Aug 17, 2010 06:27 |
|
Total newbie question here, but why rebase instead of merge in git? I'm trying to think of a good reason (and I assume there must be since everybody uses it so much) but I am failing. I'm reading Pro Git right now and the chapter on rebasing explains nicely what rebasing is, but not why it is desirable other than to "clean up" a commit history -- but why are 3 inline commits 'cleaner' than a merge of a branch with 3 commits in it? (other than saving a single 'merge' commit, I guess.)
|
# ? Aug 18, 2010 08:28 |
|
Suppose you're working on a local feature branch and the commit history looks like this:

code:
      C---E---G   (your local branch)
     /
A---B---D---F     (origin/master)

Merging gives you A, B, C, D, E, F, G (and a merge commit); rebasing gives you A, B, D, F, C, E, G. Most of the time the latter is far easier to work with; your unpushed local commits are all in one clump rather than mixed in with commits made to the remote branch, which makes it easier to review or rewrite them.
|
# ? Aug 18, 2010 09:29 |
|
Magicmat posted:Total newbie question here, but why rebase instead of merge in git? I'm trying to think of a good reason (and I assume there must be since everybody uses it so much) but I am failing.

To put it simply: Git likes the idea of having a linear history with only "worthwhile" commits, and it gives you as many tools as possible to make that happen. Also, rebasing a branch is a cleaner way of updating it than merging from master is, simply because it means less work for git internally.
|
# ? Aug 18, 2010 09:37 |
|
On the other hand, merging is preferable if you have important history that you want to preserve. The kind of history I'm talking about is verified test cases, etc. If you rebase, then all of your commits change and become untested. If you merge, then you simply have one more commit to test -- the merge commit. "Test" here means whatever happens to verify the code works. Merging is also important if you're sharing your code, since rebasing changes the commit objects.

In addition to what folks above are saying, rebasing can work better when you're distributing your code to another project via email or something. You can think of it as always putting your changes "on top of" the latest version of the code. Then when you send it off, the maintainer won't have to merge your code into the latest revision -- it's already based on top of that.
|
# ? Aug 18, 2010 12:30 |
|
Magicmat posted:Total newbie question here, but why rebase instead of merge in git? I'm trying to think of a good reason (and I assume there must be since everybody uses it so much) but I am failing.

It's because of babbies who can't handle a non-linear history, even though a non-linear history is what actually happened. I never used rebase and I can't see why I ever would use it. If you can't see a good reason, don't use it. (And squashing history is even worse: you're not only changing history, you're actually destroying it. Destroying history in a version control system, think about it. Eww.)
|
# ? Aug 18, 2010 14:25 |
|
uXs posted:It's because of babbies who can't handle a non-linear history, even though a non-linear history is what actually happened.

It can be nice for rewriting unintended history. Two days ago I committed a change and forgot to include a file, then did 5 more commits on top of that before I noticed. Since I had not pushed the changes live yet, they weren't in anyone's history but my own, so I was free to rewrite it. Using git rebase -i HEAD~6, I spooled back to the offending commit, amended it with the file I meant to add, then git replayed the remaining 5 commits and everything was fine. It may not be a big thing, but it helps people who're reading the commit history and would otherwise have wondered where the file was that the commit message alluded to.
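That fix can be replayed in a scratch repo. As in the earlier sketch, sed stands in for hand-editing the todo list; every file and message here is invented for the demo:

```shell
#!/bin/sh
# Recreating the fix above: a commit several steps back is missing a file.
# rebase -i rewinds to it ("edit"), we amend it, then git replays the rest.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "base"
echo change > a.txt
git add a.txt
git commit -qm "add feature (forgot helper.txt)"
for n in 1 2 3 4 5; do
  echo "$n" >> a.txt
  git commit -qam "followup $n"
done

# Mark the offending commit (first line of the todo list) as "edit".
GIT_SEQUENCE_EDITOR='sed -i -e "1s/^pick/edit/"' git rebase -qi HEAD~6
echo helper > helper.txt
git add helper.txt
git commit -q --amend --no-edit     # fold the missing file into that commit
git rebase --continue               # replay the 5 followups on top
```

The five followup commits keep their messages; only their hashes change, which is why this is safe only before you've pushed.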
|
# ? Aug 18, 2010 14:38 |
|
Apparently y'all misunderstood the purpose of rebasing. It isn't intended to avoid merges, it's intended to avoid excessive merges. i.e. it is the difference between this:

code:
A---B---C---D          master
 \   \   \   \
  W---X---Y---Z        topic (master merged in after every upstream commit)

and this:

code:
A---B---C---D              master
             \
              W---X---Y---Z   topic (rebased onto master once)
pseudorandom name fucked around with this message at 19:56 on Aug 18, 2010 |
# ? Aug 18, 2010 19:49 |
|
uXs posted:(And squashing history is even worse: you're not only changing history, you're actually destroying it. Destroying history in a version control system, think about it. Eww.)
|
# ? Aug 27, 2010 01:21 |
|
There is history that is useful and there is history that is useless. Git and similar systems give you the ability to commit early and commit often, and the ability to turn that into something with a high signal-to-noise ratio later. I don't see anything wrong with that.
|
# ? Aug 27, 2010 01:38 |
|
What's the best way of working with a project with a number of local-only configuration changes in git? Right now I have a branch that I continually rebase and never commit, but that's kind of a pain. I can't .gitignore them because some of the files have already been versioned. I can git update-index --assume-unchanged them, but git stash fucks things up when I do. Anyone have any other hints?
|
# ? Aug 31, 2010 06:50 |
|
Using git, can I export a set of files to their own repository, preserving history? I have a project where I want to take part of it and make it its own project now.
|
# ? Aug 31, 2010 12:23 |
|
Profane Obituary! posted:Using git, can I export a set of files to their own repository, preserving history? I have a project where I want to take part of it and make it it's own project now. Check out http://github.com/apenwarr/git-subtree
|
# ? Aug 31, 2010 19:51 |
|
Argue posted:What's the best way of working with a project with a number of local-only configuration changes in git? Right now I have a branch that i continually rebase and never commit, but that's kind of a pain. I can't .gitignore them because some of the files have already been versioned. I can git update-index --assume-unchanged them, but git stash fucks things up when I do. Anyone have any other hints?

Some questions:
If you answer 'no' to 1 or 3 for technical or political reasons, then I don't really have any useful advice. Otherwise, I would use a default configuration file (probably named 'default.conf') that is version controlled, and an override file that is .gitignored (maybe named 'local.conf' or 'override.conf' or something). I've used this style with a lot of success in the past.
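A sketch of that default-plus-override layout for a shell-sourced config. File names match the post's suggestion; the variables and values are invented:

```shell
#!/bin/sh
# default.conf is version controlled; local.conf is listed in .gitignore and
# never committed. Sourcing defaults first lets local values win.
set -e
cd "$(mktemp -d)"
printf 'db_host=db.example.com\ndebug=0\n' > default.conf   # committed defaults
printf 'debug=1\n' > local.conf                             # per-machine override

. ./default.conf
[ -f ./local.conf ] && . ./local.conf   # any variable set here overrides the default
echo "db_host=$db_host debug=$debug"    # prints: db_host=db.example.com debug=1
```

The same layering works for most config formats (INI, properties, etc.) as long as the app reads the default file first and the override file second.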
|
# ? Aug 31, 2010 20:05 |
|
I'm checking some stuff out over SVN and it's crazy slow. Tortoise window says 0kbps most of the time, spiking up every now and then. In about 2 hours I've got about 100MB. I've got a tech guy looking at the server and network side, but is there anything I could do that might help? I wonder if it's something concerning the repository, because I've been getting stuff from another repository without any problems.
|
# ? Sep 1, 2010 11:31 |
|
Is it http:// or svn:// ? http is slow, especially for big repos.
|
# ? Sep 2, 2010 06:19 |
|
BizarroAzrael posted:I'm checking some stuff out over SVN and it's crazy slow. Tortoise window says 0kbps most of the time, spiking up every now and then. In about 2 hours I've got about 100MB. I've got a tech guy looking at the server and network side, but is there anything I could do that might help? I wonder if it's something concerning the repository, because I've been getting stuff from another repository without any problems.

To follow up on this, I've had a new hard drive installed but it performs the same. I've also done the same checkout on a different machine and it's much quicker, though it also seems to slow down on larger directories. I don't think my network connection is slower or anything; could it be anything else?
|
# ? Sep 15, 2010 16:43 |
|
BizarroAzrael posted:To follow up on this, I've had a new hard drive installed but it performs the same. I've also done the same checkout on a different machine and it's much quicker, though it also seems to slow down on larger directories. I don't think my network connection is slower or anything, could it be anything else?

In case I'm not talking to myself: I think the issue is just massive directories of thousands of large files; network usage is very low and spikes to fetch a file every now and then. Seems to be an SVN issue. To get around this, I thought I might make a script that breaks the checkout down into lots of small checkouts/updates of single files. Here's what I had in mind:

1. Check out just the empty root directory of the project.
2. Run "svn list -R" on the new project directory to populate a list of the files I will want.
3. For each file in this list, run "svn update --depth empty [file]".

However, I've done some experimenting, and if I create a versioned directory in this way and then attempt a normal update on it, it won't pick up the missing files that should be inside. I can put them in with their own separate updates, but I think this means I will have a checkout that cannot receive any new files from the repository. Is there some solution to this? Is this idea workable?
|
# ? Sep 16, 2010 17:48 |
|
How big is what you are trying to check out? Are you talking gigabytes? I've checked out up to about 4GB before and it's never been too much of a problem. Have you tried using the command line instead of tortoise and seeing what happens?
|
# ? Sep 16, 2010 19:55 |
|
I think big diffs in binary files slow it down too. So if you check in images or DLLs that change often, that could hurt.
|
# ? Sep 16, 2010 21:20 |
|
sonic bed head posted:How big is what you are trying to check out? Are you talking gigabytes? I've checked out up to about 4GB before and it's never been too much of a problem. Have you tried using the command line instead of tortoise and seeing what happens?

4-6GB I think, not including the .svn directory; over 11,000 files as I recall. These are uncompressed sound files, by the way, not text or code.
|
# ? Sep 17, 2010 00:39 |
|
I actually did a Robocopy from another copy on another machine, which seemed to go fine. Will this be alright for things like commits, or will it identify me as the person I copied it from? Might this be a better solution than a normal checkout?
|
# ? Sep 17, 2010 11:23 |