|
Volmarias posted:You might want to take a look at http://sethrobertson.github.com/GitBestPractices/#sausage . I'm personally a proponent of hiding the sausage making, and I consider the ability to rewrite your own history (and rebase -i in general) to be a killer feature of git, but I'm a bit of a perfectionist and it has bitten me in the past on occasion.
|
# ? Dec 3, 2013 07:25 |
|
|
|
Less Fat Luke posted:This is an excellent article, thanks! Sure is, exactly what I was looking for.
|
# ? Dec 3, 2013 11:27 |
|
In the repository summary, GitHub seems to think a few of my Java projects are CSS. Any idea why that might be?
|
# ? Dec 6, 2013 12:36 |
|
Woodsy Owl posted:In the repository summary, GitHub seems to think a few of my Java projects are CSS. Any idea why that might be? Because of your coding style. Coding style? Style sheets? Get it? Get it? I'll show myself out.
|
# ? Dec 6, 2013 13:35 |
|
TFS questions ahead: We have a continuous integration build. It fails whenever a unit test fails, and out of our several hundred unit tests, about 95% of them are data mapping tests. We use XPath for data mapping, and any of our business folks can change the XPaths at any time, which means non-code changes can fail the build. Our resident TFS expert says this is just how TFS builds work: failed unit tests set the build status to "partially succeeded" if everything else is fine, and a partial success fails a CI build. He offered the following as solutions:

1. Fix the failing unit tests. (For one, they aren't code issues, they're data issues, so this is pretty much out. I'm not commenting out entire unit tests when they fail.)
2. Disable tests in the build definition. (Sort of defeats the purpose of having a dedicated space to double-check our data mapping, but I guess it's not a horrible idea.)
3. Turn off gated check-ins. (There's a reason we enabled them in the first place.)

My limited research shows that, in an after-build event, we can set the build definition to pass if the code built correctly, even if there are failed tests. Is this possible? I was thinking something like this, which is a bit of code I found online but modified: code:
|
# ? Dec 6, 2013 18:06 |
|
Ignoring failures is definitely the wrong solution. If external people supplying broken xpath queries isn't a problem then your unit tests shouldn't be using them at all, and if it is a problem then you should be treating them as code changes with your CI system sending angry emails to people who break them or rejecting changes to them which would break the tests. Treating "code changes" and "data changes" as totally different sorts of things is rarely actually sensible.
|
# ? Dec 6, 2013 18:17 |
|
Plorkyeran posted:Ignoring failures is definitely the wrong solution. If external people supplying broken xpath queries isn't a problem then your unit tests shouldn't be using them at all, and if it is a problem then you should be treating them as code changes with your CI system sending angry emails to people who break them or rejecting changes to them which would break the tests. Treating "code changes" and "data changes" as totally different sorts of things is rarely actually sensible. This is the correct answer.
|
# ? Dec 6, 2013 18:22 |
|
Plorkyeran posted:Ignoring failures is definitely the wrong solution. If external people supplying broken xpath queries isn't a problem then your unit tests shouldn't be using them at all, and if it is a problem then you should be treating them as code changes with your CI system sending angry emails to people who break them or rejecting changes to them which would break the tests. Treating "code changes" and "data changes" as totally different sorts of things is rarely actually sensible. You're misunderstanding the problem. Perfectly valid code check-ins are failing and being rejected because of a database change that the person checking in did not submit. We're not trying to ignore test failures, we're trying to prevent workflow from grinding to a halt because someone who's not a developer made a half-minded change to an XPath. The source is entirely internal. Bad XPaths don't gently caress over our business, and they can be reverted with our versioning.
|
# ? Dec 6, 2013 18:25 |
|
Not sure what you are using for a test platform, but NUnit and such have an Assert.Inconclusive() option that could be good here -- it will flag the test and show you something is off kilter without grinding everything to a halt. I'd really want a halt myself -- that is the point of all this CI stuff: if something is broken, it isn't a perfectly valid check-in unless you are into creating regressions.
|
# ? Dec 6, 2013 18:32 |
|
I'm a victim of lovely naming schemes and I should probably clarify that it's a gated checkin, not continuous integration, so developers cannot check anything in when a haphazard xpath change is made. I'm perfectly fine with being notified that a unit test is failing, as it's my responsibility to ensure that xpath changes that begin failing are a result of intentional changes and to change the test accordingly, but the other half dozen developers need to be able to check in in the meantime.
|
# ? Dec 6, 2013 18:35 |
|
Tha Chodesweller posted:You're misunderstanding the problem. Perfectly valid code check-ins are failing and being rejected because of a database change that the person checking in did not submit. We're not trying to ignore test failures, we're trying to prevent workflow from grinding to a halt because someone who's not a developer made a half-minded change to a xpath. If a database change is failing your test, it's not a unit test. The best solution is to decouple your tests from your database.
|
# ? Dec 6, 2013 18:37 |
|
Tha Chodesweller posted:You're misunderstanding the problem. Perfectly valid code check-ins are failing and being rejected because of a database change that the person checking in did not submit. We're not trying to ignore test failures, we're trying to prevent workflow from grinding to a halt because someone who's not a developer made a half-minded change to a xpath. Most likely these aren't unit tests but integration tests. A bad XPath put into a Staging or QA environment shouldn't be breaking your unit tests. They shouldn't be depending on what is in a database; that data should be mocked/faked out. Maybe take all of the tests that use these XPath values from your database and put them into another project that your CI runs occasionally, and only run the unit tests on check-in, so the build fails only for code problems. EDIT: Ithaqua! What he said
|
# ? Dec 6, 2013 18:37 |
|
gariig posted:Most likely these aren't unit tests but are integration tests. A bad XPath put into a Staging or QA environment shouldn't be breaking your unit tests. They shouldn't be depending on what is in a database that data should be mocked/faked out. Maybe take all of the tests that use these XPath values from your database and put them into another project that your CI runs occasionally and only have unit tests be ran on check-in where the code portion of the build fails. That's actually a good solution. Any way to trigger this build any time the aforementioned build completes?
|
# ? Dec 6, 2013 18:40 |
|
gariig posted:Most likely these aren't unit tests but are integration tests. A bad XPath put into a Staging or QA environment shouldn't be breaking your unit tests. They shouldn't be depending on what is in a database that data should be mocked/faked out. Maybe take all of the tests that use these XPath values from your database and put them into another project that your CI runs occasionally and only have unit tests be ran on check-in where the code portion of the build fails. Yeah, it gets tricky when you have tests that depend on your database. Ideally, you have your database changes source controlled and handled through SSDT deployments, not by having some random person go and update rows. Your CI build ends up looking like this:
Your nightly build looks like this:
Tha Chodesweller posted:That's actually a good solution. Any way to trigger this build any time the aforementioned build completes? You can also set up test categories and specify that only certain test categories should be run as part of a build. If your build and tests run fast, just set up three builds:
Then you'll get the feedback on your failing integration tests pretty rapidly, but your devs won't end up blocked. [edit] I'm actually not sure how gated checkin and rolling build will interact with each other, I've never tried it before. [edit2] Regarding Gated Checkin: I really strongly recommend against gated checkins unless your devs have a history of checking in total noncompiling garbage. New Yorp New Yorp fucked around with this message at 18:54 on Dec 6, 2013 |
# ? Dec 6, 2013 18:42 |
|
Well, there are a lot of problems with how we handle data changes now, but that won't change overnight. At least pulling out these tests won't grind the original build to a standstill, and it shouldn't take more than an hour to pull them out of the original source. It really makes sense since we don't need to be testing the xpath changes every drat time we build. So thanks for the ideas, guys, probably a little easier than build definition hackery.
|
# ? Dec 6, 2013 18:52 |
|
So am I completely missing something, or is TFS actually requiring me to download the branches that I want to delete before it allows me to delete them? I'm pruning some completed branches, but I don't have them checked out locally, so VS isn't giving me the option to delete those branches. When I download a branch, the option becomes available. Does it want me to personally deliver the news to each branch and its children that they're worthless to me now and I am going to cut them off?
|
# ? Dec 11, 2013 18:14 |
|
No Safe Word posted:So am I completely missing something or is TFS actually requiring me to download the branches that I want to delete before it actually allows me to delete it? I'm pruning some completed branches but I don't have them checked out locally, so it's not giving me the option in VS to delete that branch. When I download it, it becomes available. Just map the branch, do a non-recursive get with the "tf get" command, then you can delete it. It's weird, I know.
|
# ? Dec 11, 2013 20:15 |
|
Ithaqua posted:Just map the branch, do a non-recursive get with the "tf get" command, then you can delete it. It's weird, I know. It was already mapped, and rather than having to fire up the CLI client (which I'm not averse to, it's just silly when I don't have to) I just got the files and then immediately deleted them. Clunky.

Though the other clunky thing I had to do: I had a branch structure like this:

code:
A
|
v
B
|
v
C
..and wanted to reparent C to A instead of B. So after finding the "Reparent..." menu item (no right-click, just under File > Source Control > Branching and Merging, naturally), the only thing I could reparent to was... the thing it was already parented to (B). So apparently I had to do a baseless merge with my branch's grandparent (via the command line this time) just to establish the relationship so that I could then reparent it. I have no idea why this is the only way to do that (it seems).
|
# ? Dec 11, 2013 20:26 |
|
TFS 2013 has Git support nowadays, but the client-side tools are still lacking. I've downloaded Git Extensions like VS 2013 helpfully suggested, and the history view in Git GUI has helped a lot. Still, getting used to new ways of making the proverbial sausages has left my commit chains rather messy. As far as I understand, git rebase can do some black magic and rework any local commits into more coherent commits before they are pushed to a repository. But any attempts to use it so far have turned into a big mess of conflicting edits, even when I've already merged everything back into a single branch before doing the rebase and made zero changes to the commit list that git rebase -i shows. Also, is there a way to configure the Git Bash command-line tools to use Visual Studio for merging and conflict resolution? I've taken a look at the files that git creates for conflicts during a rebase, and it seems like git uses a completely different way of merging: it puts all of the changes into the same file and uses its own delimiters to separate them from each other. Visual Studio's merging assumes that it receives the files from the different sources as pristine and expects the user to build a third, merged file from them.
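Git can in fact hand conflicts to an external three-way tool instead of leaving inline markers. A sketch of wiring up Visual Studio's merge tool from Git Bash -- the vsDiffMerge.exe path is an assumption for a default VS 2013 (12.0) install, so adjust it for your machine:

```shell
# Tell git to use Visual Studio's three-way merge tool. The .exe path below is
# an assumption for a default VS 2013 install -- adjust for your machine.
git config --global merge.tool vsdiffmerge
git config --global mergetool.vsdiffmerge.cmd '"C:/Program Files (x86)/Microsoft Visual Studio 12.0/Common7/IDE/vsDiffMerge.exe" "$REMOTE" "$LOCAL" "$BASE" "$MERGED" /m'
git config --global mergetool.vsdiffmerge.trustExitCode true
```

With that in place, running `git mergetool` on a conflicted rebase feeds VS the base, local, and remote versions as separate pristine files, so you get the three-way view instead of git's inline <<<<<<< markers.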
|
# ? Jan 14, 2014 11:54 |
If I have a commit structure like dis:

code:
A <- B (b1) <- C <- D (HEAD, b2)
..and I want a new branch with what D changed but not C, is the way to do it:

1. Check out b1 to move HEAD to before C
2. Create a new branch, b3
3. Generate a diff between B and D as a patch
4. Apply patch, commit as E

code:
A <- B (b1) <- C <- D (b2)
     |
     -> E (HEAD, b3)
|
|
# ? Jan 14, 2014 14:42 |
|
git rebase -i B, delete the line for C, save and exit.
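That whole flow can be sketched in a throwaway repo. File names are stand-ins, and the `sed` one-liner is just a non-interactive substitute for opening the editor and deleting C's line:

```shell
# Throwaway repo reproducing A <- B (b1) <- C <- D, then dropping C on a new branch
git init -q rebase-demo && cd rebase-demo
git config user.email you@example.com && git config user.name you
for f in a b c d; do
  echo "$f" > "$f.txt" && git add "$f.txt" && git commit -q -m "commit $f"
done
git branch b1 HEAD~2                  # b1 now points at B
git checkout -q -b b3                 # rewrite on a fresh branch; b2's history is untouched
# non-interactive stand-in for "git rebase -i b1, delete C's line, save and exit":
GIT_SEQUENCE_EDITOR='sed -i "/commit c/d"' git rebase -i b1
ls                                    # a.txt b.txt d.txt -- C's change gone, D's kept
```

Because the rebase runs on b3, the original branch still has C and D, which matches the A/B/C/D diagram above with E hanging off B.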
|
# ? Jan 14, 2014 15:09 |
|
Plorkyeran posted:git rebase -i B, delete the line for C, save and exit. It sounds like he wants to keep the changes. Step 3 in his example (diff between B and D) would include the changes in C. Instead of deleting the commit from the rebase UI he should squash (or fixup) it.
|
# ? Jan 14, 2014 17:29 |
necrotic posted:It sounds like he wants to keep the changes. Step 3 in his example (diff between B and D) would include the changes in C. Instead of deleting the commit from the rebase UI he should squash (or fixup) it. Don't want to keep the changes in C. But of course a diff from B to D would include them; I want what D changed but not C. Presumably interactive rebase as stated does that.
|
|
# ? Jan 14, 2014 20:18 |
|
oiseaux morts 1994 posted:Don't want to keep changes in C. But of course D would have them, I want what D changed but not C. Presumably interactive rebase as stated does that Then yeah, what Plorkyeran said. I was reading your step 3 as `git diff B D` which would have included the C changes.
|
# ? Jan 14, 2014 20:38 |
|
I love rebase, but if anyone else has pulled this branch, you'd be better off using `git revert C` to make a new commit which cancels out whatever C did. Unless C added plaintext passwords or something.
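A sketch of the revert route in a scratch repo (file names are hypothetical; `HEAD~1` here is commit C):

```shell
# Throwaway repo: undo commit C with a new commit instead of rewriting history
git init -q revert-demo && cd revert-demo
git config user.email you@example.com && git config user.name you
echo base > file.txt && git add file.txt && git commit -q -m "B"
echo oops > mistake.txt && git add mistake.txt && git commit -q -m "C"
echo more >> file.txt && git commit -q -am "D"
git revert --no-edit HEAD~1           # new commit: cancels C, touches nothing else
ls                                    # file.txt only -- mistake.txt gone from the tip
```

Note the caveat stands: revert only fixes the tip. C itself stays in history, which is why it's no help for leaked passwords.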
|
# ? Jan 14, 2014 22:14 |
|
Doc Hawkins posted:I love rebase, but if anyone else has pulled this branch, you'd be better off using `git revert C` to make a new commit which cancels out whatever C did. Yeah, you should only rebase (or do any other kind of history rewriting) on "private" branches.
|
# ? Jan 14, 2014 22:48 |
|
I had completely forgotten that with Windows 8.1, SkyDrive is integrated into the Windows file system and for all intents and purposes acts like any other folder on your PC. I was thinking of using it as a private repository rather than hosting my code on Bitbucket and the like, and Git doesn't seem to mind or even notice that it's working on a remote folder located somewhere else in the world. Is there anything about this that would make it a dumb idea that I'm not seeing? Looks like this works fine with Google Drive as well from what I can see.
|
# ? Jan 16, 2014 02:17 |
|
I finally got around to learning about git's interactive rebasing and why did it take me so long to do this, it's incredible!
|
# ? Jan 16, 2014 02:29 |
|
Just prepare for tears when you accidentally delete a line and do a :wq on instinct...
|
# ? Jan 16, 2014 04:19 |
|
evensevenone posted:Just prepare for tears when you accidentally delete a line and do a :wq on instinct... Then you get to learn about the reflog!
|
# ? Jan 16, 2014 04:39 |
|
Sailor_Spoon posted:Then you get to learn about the reflog! Which, to be fair, is just as awesome as interactive rebase, and is totally worth learning when you reach that level anyway.
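The recovery move, sketched in a throwaway repo (a bad `reset --hard` standing in for the botched rebase):

```shell
# Throwaway repo: lose a commit with a bad reset, then get it back via the reflog
git init -q reflog-demo && cd reflog-demo
git config user.email you@example.com && git config user.name you
echo one > f.txt && git add f.txt && git commit -q -m "one"
echo two >> f.txt && git commit -q -am "two"
git reset -q --hard HEAD~1            # oops: commit "two" no longer on any branch
git reflog                            # HEAD@{0} is the reset, HEAD@{1} is the old tip
git reset -q --hard 'HEAD@{1}'        # back where we were
```

The same trick works after a rebase gone wrong: the pre-rebase tip is still in the reflog, usually a few `HEAD@{n}` entries back.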
|
# ? Jan 16, 2014 05:03 |
Speaking of which, what are the conditions for commit removal? If I "orphan" a commit by having it as part of no branch subtree, I can garbage collect it manually but documentation I've read is a bit vague about automatic GC, along the lines of "eventually". It might be good to know if I'm scouring the reflog for something that no longer exists.
|
|
# ? Jan 16, 2014 14:47 |
|
The default is two weeks for orphaned objects not in your reflog (i.e. things like remote branches that you fetched then deleted without ever checking out), and 90 days for things in the reflog.
|
# ? Jan 16, 2014 14:59 |
|
oiseaux morts 1994 posted:Speaking of which, what are the conditions for commit removal? If I "orphan" a commit by having it as part of no branch subtree, I can garbage collect it manually but documentation I've read is a bit vague about automatic GC, along the lines of "eventually". It might be good to know if I'm scouring the reflog for something that no longer exists. from the git gc docs: man git gc posted:--prune=<date> So it sounds like 2 weeks, but you can get it from the reflog for up to 90 days.
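You can also force the "eventually" by hand, which makes the mechanics visible -- a sketch in a scratch repo (never do this in a repo you care about):

```shell
# Throwaway repo: orphan a commit, then watch manual GC actually remove it
git init -q gc-demo && cd gc-demo
git config user.email you@example.com && git config user.name you
echo one > f.txt && git add f.txt && git commit -q -m "one"
echo two >> f.txt && git commit -q -am "two"
sha=$(git rev-parse HEAD)
git reset -q --hard HEAD~1               # "two" now lives only in the reflog
git cat-file -e "$sha" && echo "still recoverable"
git reflog expire --expire=now --all     # drop the reflog entries...
git gc --prune=now --quiet               # ...then the orphaned object is collected
git cat-file -e "$sha" 2>/dev/null || echo "gone for good"
```

Until that `reflog expire` step, the orphaned commit stays fetchable by hash, which is why the defaults above amount to "90 days of safety net".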
|
# ? Jan 16, 2014 15:05 |
|
DSauer posted:I had completely forgotten that with Windows 8.1 Skydrive is integrated into the Window's file system and for all intents and purposes acts like any other folder on your PC. I was thinking of using it as a private repository rather than hosting my code on Bitbucket and the sort and Git doesn't seem to mind or even notice that its working on a remote folder located somewhere else in the world. Is there anything about this that would make it a dumb idea that I'm not seeing? Looks like this works fine with Google Drive as well from what I can see. I did the same thing but with Dropbox. It works fine, but don't treat any of the cloud file sync services as backup. They can and will sync corrupted data if your hardware goes bad. Also you may want to turn on two factor authentication on your Microsoft account depending how valuable your code is. Then make sure your account recovery methods are current, because that bit me in the rear end and I lost access to Dropbox and GitHub amongst many other accounts.
|
# ? Jan 17, 2014 16:43 |
|
There are multiple reasons why using a cloud file storage solution is a dumb thing for source control. First, while having the code in "the cloud" is a convenient benefit of source control in 2014, it isn't why one uses it in the first place -- cloud file storage can't help you manage concurrent versions, and it doesn't have much of a concept of diffing, branching, merging, or any of the other very source-code-specific things one can do with a modern SCM system. Second, most development generates a bunch of rapidly changing trash files which are local-only and best ignored from the sources. No need to sync those up to "the cloud" nor subject everyone else to downloading them. And Dropbox / Box / SkyDrive don't have a .gitignore equivalent either. I hope this helps your moral compass.
|
# ? Jan 17, 2014 20:01 |
|
Yep I agree with all of that, and the fact that a cloud storage solution would also happily mirror a corrupted file system has put that whole idea to bed. Thanks for the input folks.
|
# ? Jan 17, 2014 22:07 |
|
Dropbox does have a history of each file, so you could roll back. It would be annoying and difficult, though, since the only information you have is timestamps, and I don't think there's any sort of diff viewer. Also, branches don't exist. This is also why storing a git repo on Dropbox is a pretty bad idea: when you switch branches it assumes all your files changed.
|
# ? Jan 19, 2014 03:45 |
|
I wouldn't object too strongly to storing a bare repository in Dropbox, that you'd use after e.g. git clone ~/dropbox-shared/repo.git. You'd then push to that bare repository and the pushed objects would be synchronized as normal. Each machine that you use could have its own non-bare clone of that shared bare repo. Repacking the bare repository would cause a lot of churn in the Dropbox content, but that wouldn't happen very often. I still consider this inferior to synchronizing changes purely through Git transport protocols, but it doesn't bother me that much.
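A minimal sketch of that layout, using a local path as a stand-in for the synced folder (the `dropbox-shared` directory name is illustrative only):

```shell
# "$HOME/dropbox-shared" stands in for the real synced Dropbox folder (assumption)
git init -q --bare "$HOME/dropbox-shared/repo.git"
git clone -q "$HOME/dropbox-shared/repo.git" work && cd work
git config user.email you@example.com && git config user.name you
echo hello > readme.txt && git add readme.txt && git commit -q -m "first"
git push -q origin HEAD               # pushed objects land in the synced bare repo
```

The nice property is that your working tree (and its branch switching) never lives inside the synced folder; only whole pushed objects do, which avoids the churn problem described above.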
|
# ? Jan 19, 2014 04:46 |
|
|
|
This is exactly the use-case Git-Annex was designed for.
|
# ? Jan 19, 2014 14:15 |