Less Fat Luke
May 23, 2003

Exciting Lemon

Volmarias posted:

You might want to take a look at http://sethrobertson.github.com/GitBestPractices/#sausage . I'm personally a proponent of hiding the sausage making, and I consider the ability to rewrite your own history (and rebase -i in general) to be a killer feature of git, but I'm a bit of a perfectionist and it has bitten me in the past on occasion.
This is an excellent article, thanks!


Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.

Less Fat Luke posted:

This is an excellent article, thanks!

Sure is, exactly what I was looking for.

Woodsy Owl
Oct 27, 2004
In the repository summary, GitHub seems to think a few of my Java projects are CSS. Any idea why that might be?

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Woodsy Owl posted:

In the repository summary, GitHub seems to think a few of my Java projects are CSS. Any idea why that might be?

Because of your coding style :haw:

Coding style? Style sheets? Get it? Get it?

I'll show myself out.

Macichne Leainig
Jul 26, 2012

by VG
TFS questions ahead:

We have a continuous integration build. It fails whenever a unit test fails, and out of our several hundred unit tests, about 95% of them are data mapping tests. We use XPath for data mapping, and any of our business folks can change the XPaths at any time, which means non-code changes can fail the build.

Our resident TFS expert says this is just how TFS builds work - failed unit tests set the build status to "partially succeeded" if everything else is fine, and a partial success fails a CI build. He offered the following as solutions:

1. Fix the failing unit tests. (For one, they aren't code issues, they're data issues, so this is pretty much out. I'm not commenting out entire unit tests when they fail.)
2. Disable tests in the build definition. (Sort of defeats the purpose of having a dedicated space to double-check our data mapping, but I guess it's not a horrible idea.)
3. Turn off gated check-ins. (There's a reason we enabled them in the first place.)

My limited research suggests that, in an after-build event, we can set the build definition to pass as long as the code compiled, even if there are failed tests. Is this possible? I was thinking something like this, which is a bit of code I found online and modified:

code:
 <Target Name="AfterTest">

    <!-- Refresh the build properties. -->
    <GetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                        BuildUri="$(BuildUri)"
                        Condition=" '$(IsDesktopBuild)' != 'true' ">
      <Output TaskParameter="TestSuccess" PropertyName="TestSuccess" />
    </GetBuildProperties>

    <!-- Set CompilationStatus to Succeeded if TestSuccess is false. -->
    <SetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                        BuildUri="$(BuildUri)"
                        CompilationStatus="Succeeded"
                        Condition=" '$(IsDesktopBuild)' != 'true' and '$(TestSuccess)' != 'true' " />
</Target>
This should set the test phase to succeeded regardless of failed tests, correct?

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Ignoring failures is definitely the wrong solution. If external people supplying broken XPath queries isn't a problem, then your unit tests shouldn't be using them at all; if it is a problem, then you should be treating them as code changes, with your CI system sending angry emails to people who break them or rejecting changes that would break the tests. Treating "code changes" and "data changes" as totally different sorts of things is rarely sensible.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Plorkyeran posted:

Ignoring failures is definitely the wrong solution. If external people supplying broken XPath queries isn't a problem, then your unit tests shouldn't be using them at all; if it is a problem, then you should be treating them as code changes, with your CI system sending angry emails to people who break them or rejecting changes that would break the tests. Treating "code changes" and "data changes" as totally different sorts of things is rarely sensible.

This is the correct answer.

Macichne Leainig
Jul 26, 2012

by VG

Plorkyeran posted:

Ignoring failures is definitely the wrong solution. If external people supplying broken XPath queries isn't a problem, then your unit tests shouldn't be using them at all; if it is a problem, then you should be treating them as code changes, with your CI system sending angry emails to people who break them or rejecting changes that would break the tests. Treating "code changes" and "data changes" as totally different sorts of things is rarely sensible.

You're misunderstanding the problem. Perfectly valid code check-ins are failing and being rejected because of a database change that the person checking in did not submit. We're not trying to ignore test failures; we're trying to prevent workflow from grinding to a halt because someone who's not a developer made a half-minded change to an XPath.

The source is entirely internal. Bad XPaths don't gently caress over our business, and they can be reverted with our versioning.

wwb
Aug 17, 2004

Not sure what you are using for a test platform, but NUnit and the like have an Assert.Inconclusive() option that could be good here -- it will flag the test and show you something is off kilter without bringing everything to a halt.

That said, I'd really want a halt myself -- that is the point of all this CI stuff: if something is broken, it isn't a perfectly valid check-in, unless you are into creating regressions.

Macichne Leainig
Jul 26, 2012

by VG
I'm a victim of lovely naming schemes, and I should probably clarify that it's a gated check-in, not continuous integration, so when a haphazard XPath change is made, developers cannot check anything in. I'm perfectly fine with being notified that a unit test is failing -- it's my responsibility to ensure that XPath changes that start failing tests are intentional and to update the tests accordingly -- but the other half dozen developers need to be able to check in in the meantime.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Tha Chodesweller posted:

You're misunderstanding the problem. Perfectly valid code check-ins are failing and being rejected because of a database change that the person checking in did not submit. We're not trying to ignore test failures; we're trying to prevent workflow from grinding to a halt because someone who's not a developer made a half-minded change to an XPath.

The source is entirely internal. Bad XPaths don't gently caress over our business, and they can be reverted with our versioning.

If a database change is failing your test, it's not a unit test.

The best solution is to decouple your tests from your database.

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Tha Chodesweller posted:

You're misunderstanding the problem. Perfectly valid code check-ins are failing and being rejected because of a database change that the person checking in did not submit. We're not trying to ignore test failures; we're trying to prevent workflow from grinding to a halt because someone who's not a developer made a half-minded change to an XPath.

The source is entirely internal. Bad XPaths don't gently caress over our business, and they can be reverted with our versioning.

Most likely these aren't unit tests but integration tests. A bad XPath put into a Staging or QA environment shouldn't be breaking your unit tests; they shouldn't depend on what is in a database at all -- that data should be mocked or faked out. Maybe take all of the tests that use these XPath values from your database, put them into another project that your CI runs occasionally, and only run the unit tests on check-in, so the build fails only on actual code problems.

EDIT: Ithaqua! What he said

Macichne Leainig
Jul 26, 2012

by VG

gariig posted:

Most likely these aren't unit tests but integration tests. A bad XPath put into a Staging or QA environment shouldn't be breaking your unit tests; they shouldn't depend on what is in a database at all -- that data should be mocked or faked out. Maybe take all of the tests that use these XPath values from your database, put them into another project that your CI runs occasionally, and only run the unit tests on check-in, so the build fails only on actual code problems.

That's actually a good solution. Any way to trigger this build any time the aforementioned build completes?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

gariig posted:

Most likely these aren't unit tests but integration tests. A bad XPath put into a Staging or QA environment shouldn't be breaking your unit tests; they shouldn't depend on what is in a database at all -- that data should be mocked or faked out. Maybe take all of the tests that use these XPath values from your database, put them into another project that your CI runs occasionally, and only run the unit tests on check-in, so the build fails only on actual code problems.

EDIT: Ithaqua! What he said

Yeah, it gets tricky when you have tests that depend on your database. Ideally, you have your database changes source controlled and handled through SSDT deployments, not by having some random person go and update rows.

Your CI build ends up looking like this:
  • Build software
  • Run unit tests (no external dependencies)

Your nightly build looks like this:
  • Build software
  • Run unit tests (no external dependencies)
  • Deploy / publish SSDT database changes
  • Run integration tests, since your database changes have been pushed now

Tha Chodesweller posted:

That's actually a good solution. Any way to trigger this build any time the aforementioned build completes?

You can also set up test categories and specify that only certain test categories should be run as part of a build.

If your build and tests run fast, just set up three builds:
  • Gated checkin / CI, run only tests in the "ActualUnitTest" category
  • Rolling build, run once every 15/30/60 minutes
  • Nightly build

Then you'll get the feedback on your failing integration tests pretty rapidly, but your devs won't end up blocked.

[edit]
I'm actually not sure how gated checkin and rolling build will interact with each other, I've never tried it before.

[edit2]
Regarding Gated Checkin: I really strongly recommend against gated checkins unless your devs have a history of checking in total noncompiling garbage.

New Yorp New Yorp fucked around with this message at 18:54 on Dec 6, 2013

Macichne Leainig
Jul 26, 2012

by VG
Well, there are a lot of problems with how we handle data changes now, but those won't be fixed overnight. At least pulling out these tests won't grind the original build to a standstill, and it shouldn't take more than an hour to pull them out of the original source.

It really makes sense since we don't need to be testing the xpath changes every drat time we build. So thanks for the ideas, guys, probably a little easier than build definition hackery. :v:

No Safe Word
Feb 26, 2005

So am I completely missing something, or does TFS actually require me to download the branches that I want to delete before it allows me to delete them? I'm pruning some completed branches, but I don't have them checked out locally, so VS isn't giving me the option to delete them. When I download a branch, the option becomes available.

Does it want me to personally deliver the news to that branch and its children that they're worthless to me now and I am going to cut them off?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

No Safe Word posted:

So am I completely missing something, or does TFS actually require me to download the branches that I want to delete before it allows me to delete them? I'm pruning some completed branches, but I don't have them checked out locally, so VS isn't giving me the option to delete them. When I download a branch, the option becomes available.

Does it want me to personally deliver the news to that branch and its children that they're worthless to me now and I am going to cut them off?

Just map the branch, do a non-recursive get with the "tf get" command, then you can delete it. It's weird, I know.

No Safe Word
Feb 26, 2005

Ithaqua posted:

Just map the branch, do a non-recursive get with the "tf get" command, then you can delete it. It's weird, I know.

It was already mapped, and rather than having to fire up the CLI client (which I'm not averse to, it's just silly when I don't have to) I just got the files and then immediately deleted them. Clunky :(

Though the other clunky thing I had to do was I had a branch structure like this:


A
|
v
B
|
v
C

...and wanted to reparent C to A instead of B. So after finding the "Reparent..." menu item (no right-click; it's under File > Source Control > Branching and Merging, naturally), the only thing I could reparent to was... the thing it was already parented to (B). So apparently I had to do a baseless merge with my branch's grandparent (via the command line this time) just to establish the relationship so that I could then reparent it. I have no idea why this is (seemingly) the only way to do that.

hirvox
Sep 8, 2009
TFS 2013 has Git support nowadays, but the client-side tools are still lacking. I've downloaded Git Extensions like VS 2013 helpfully suggested, and the history view in Git GUI has helped a lot.

Still, getting used to new ways of making the proverbial sausages has left my commit chains rather messy. As far as I understand, git rebase can do some black magic and rework local commits into more coherent commits before they are pushed to a repository. But my attempts to use it so far have turned into a big mess of conflicting edits, even when I've already merged everything back into a single branch before doing the rebase and made zero changes to the commit list that git rebase -i shows.

Also, is there a way to configure the Git Bash command-line tools to use Visual Studio for merging and conflict resolution? I've taken a look at the files that git rebase creates for conflicts, and it seems like git uses a completely different way of merging: it puts all of the changes into the same file and uses its own delimiters to separate them from each other. Visual Studio's merging assumes that it receives the files from the different sources as pristine and expects the user to build a third, merged file from them.
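For the last question, git can be pointed at Visual Studio's bundled vsDiffMerge as its merge tool. Treat this as a sketch: the install path and the argument order below are assumptions that vary by VS version, so check yours before copying.

```shell
# Sketch: make `git mergetool` launch Visual Studio's vsDiffMerge.
# The path and argument order are assumptions -- verify for your VS install.
git config --global merge.tool vsdiffmerge
git config --global mergetool.vsdiffmerge.cmd '"C:/Program Files (x86)/Microsoft Visual Studio 12.0/Common7/IDE/vsDiffMerge.exe" "$REMOTE" "$LOCAL" "$BASE" "$MERGED" //m'
git config --global mergetool.vsdiffmerge.trustExitCode true
git config --global mergetool.keepBackup false
# After a conflicted merge or rebase step, run: git mergetool
```

With trustExitCode set, saving the merged result in VS and closing it tells git the conflict is resolved, so you never have to touch the conflict-marker version of the file by hand.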

o.m. 94
Nov 23, 2009

If I have a commit structure like dis
pre:
commits   :  A <- B (b1) <- C <- D (HEAD, b2)
and suddenly I decide C is bullshit, how can I remove C from the proceedings, as if it were never made, so it goes A <- B <- D? My thinking is to:

1. Check out b1 to move HEAD to before C
2. Create a new branch, b3
3. Generate a diff between B and D as a patch
4. Apply patch, commit as E

pre:
commits   : A <- B (b1) <- C <- D (b2)
                 |
                 -> E (HEAD, b3)
and then I can just discard b2 at my leisure? Or is there a quicker way of doing this and I'm being dumb?

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
git rebase -i B, delete the line for C, save and exit.
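A runnable toy version of that drop, using the non-interactive form `git rebase --onto` so there's no editor step (repo layout and file names are made up to match the A/B/C/D example):

```shell
# Toy repo reproducing A <- B (b1) <- C <- D (HEAD, b2), then dropping C.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com && git config user.name you
echo a > a.txt && git add . && git commit -qm A
echo b > b.txt && git add . && git commit -qm B
git branch b1                     # b1 points at B
echo c > c.txt && git add . && git commit -qm C
echo d > d.txt && git add . && git commit -qm D
git checkout -qb b2               # b2 points at D

# Drop C: replay everything after C (here, just D) onto B.
git rebase --onto b1 b2~1 b2      # b2~1 is C
git log --format=%s               # D, B, A -- C is gone
```

This is exactly what deleting C's line in `rebase -i` does; `--onto` just spells out the range explicitly.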

necrotic
Aug 2, 2005
I owe my brother big time for this!

Plorkyeran posted:

git rebase -i B, delete the line for C, save and exit.

It sounds like he wants to keep the changes. Step 3 in his example (diff between B and D) would include the changes in C. Instead of deleting the commit from the rebase UI he should squash (or fixup) it.

o.m. 94
Nov 23, 2009

necrotic posted:

It sounds like he wants to keep the changes. Step 3 in his example (diff between B and D) would include the changes in C. Instead of deleting the commit from the rebase UI he should squash (or fixup) it.

Don't want to keep the changes in C. But of course D would have them; I want what D changed, but not C. Presumably interactive rebase as stated does that.

necrotic
Aug 2, 2005
I owe my brother big time for this!

oiseaux morts 1994 posted:

Don't want to keep the changes in C. But of course D would have them; I want what D changed, but not C. Presumably interactive rebase as stated does that.

Then yeah, what Plorkyeran said. I was reading your step 3 as `git diff B D` which would have included the C changes.

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


I love rebase, but if anyone else has pulled this branch, you'd be better off using `git revert C` to make a new commit which cancels out whatever C did.

Unless C added plaintext passwords or something.
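A toy sketch of the revert approach (commit messages made up; `HEAD~1` stands in for C's actual hash):

```shell
# Undo a shared/pushed commit with a new commit instead of rewriting history.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com && git config user.name you
echo good > file.txt && git add . && git commit -qm "B: good change"
echo bad > file.txt && git commit -qam "C: bad change"
echo more > other.txt && git add . && git commit -qm "D: unrelated"

# Create a new commit that cancels out C; B, C, and D all stay in history.
git revert --no-edit HEAD~1       # HEAD~1 is C
cat file.txt                      # back to "good"
```

Because nothing upstream of the revert is rewritten, anyone who already pulled the branch can fetch the revert commit normally, with no forced pushes.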

necrotic
Aug 2, 2005
I owe my brother big time for this!

Doc Hawkins posted:

I love rebase, but if anyone else has pulled this branch, you'd be better off using `git revert C` to make a new commit which cancels out whatever C did.

Unless C added plaintext passwords or something.

Yeah, you should only rebase (or any kind of history changes) on "private" branches.

Sauer
Sep 13, 2005

Socialize Everything!
I had completely forgotten that with Windows 8.1, SkyDrive is integrated into the Windows file system and for all intents and purposes acts like any other folder on your PC. I was thinking of using it as a private repository rather than hosting my code on Bitbucket and the like, and Git doesn't seem to mind or even notice that it's working on a remote folder located somewhere else in the world. Is there anything about this that would make it a dumb idea that I'm not seeing? It looks like this works fine with Google Drive as well, from what I can see.

raminasi
Jan 25, 2005

a last drink with no ice
I finally got around to learning about git's interactive rebasing, and why did it take me so long to do this? It's incredible!

evensevenone
May 12, 2001
Glass is a solid.
Just prepare for tears when you accidentally delete a line and do a :wq on instinct...

good jovi
Dec 11, 2000

'm pro-dickgirl, and I VOTE!

evensevenone posted:

Just prepare for tears when you accidentally delete a line and do a :wq on instinct...

Then you get to learn about the reflog!

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Sailor_Spoon posted:

Then you get to learn about the reflog!

Which, to be fair, is just as awesome as interactive rebase, and is totally worth learning when you reach that level anyway.
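A minimal sketch of that recovery workflow in a throwaway repo, for anyone who hasn't met the reflog yet:

```shell
# "Lose" a commit with reset --hard, then get it back via the reflog.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com && git config user.name you
echo one > a.txt && git add . && git commit -qm first
echo two > b.txt && git add . && git commit -qm precious

git reset --hard HEAD~1           # oops: "precious" is gone from the branch
test ! -e b.txt                   # b.txt really is gone from the worktree

git reflog                        # every place HEAD has pointed recently
git reset --hard 'HEAD@{1}'       # jump back to the pre-reset state
cat b.txt                         # "precious" and its files are back
```

The same trick recovers from a botched interactive rebase: the pre-rebase commit is still in the reflog, so `git reset --hard` back to it undoes the whole thing.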

o.m. 94
Nov 23, 2009

Speaking of which, what are the conditions for commit removal? If I "orphan" a commit by having it as part of no branch subtree, I can garbage collect it manually but documentation I've read is a bit vague about automatic GC, along the lines of "eventually". It might be good to know if I'm scouring the reflog for something that no longer exists.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
The default is two weeks for orphaned objects not in your reflog (i.e. things like remote branches that you fetched then deleted without ever checking out), and 90 days for things in the reflog.

Hughlander
May 11, 2005

oiseaux morts 1994 posted:

Speaking of which, what are the conditions for commit removal? If I "orphan" a commit by having it as part of no branch subtree, I can garbage collect it manually but documentation I've read is a bit vague about automatic GC, along the lines of "eventually". It might be good to know if I'm scouring the reflog for something that no longer exists.

from the git gc docs:

man git gc posted:

--prune=<date>
Prune loose objects older than date (default is 2 weeks ago, overridable by the config variable gc.pruneExpire). This option is on by default.

The optional configuration variable gc.reflogExpire can be set to indicate how long historical entries within each branch’s reflog should remain available in this repository. The setting is expressed as a length of time, for example 90 days or 3 months. It defaults to 90 days.

The optional configuration variable gc.reflogExpireUnreachable can be set to indicate how long historical reflog entries which are not part of the current branch should remain available in this repository. These types of entries are generally created as a result of using git commit --amend or git rebase and are the commits prior to the amend or rebase occurring. Since these changes are not part of the current project most users will want to expire them sooner. This option defaults to 30 days.

So it sounds like 2 weeks for loose objects, but reflog entries keep commits recoverable for 30 days (amend/rebase leftovers) to 90 days (entries on the current branch).
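Those expiry windows are plain config settings, so they can be inspected or changed per repository; the values below are just the documented defaults written out explicitly:

```shell
# Make git-gc's expiry windows explicit in a repo's config.
set -e
cd "$(mktemp -d)" && git init -q .
git config gc.pruneExpire "2.weeks.ago"          # loose objects not in any reflog
git config gc.reflogExpire "90 days"             # reachable reflog entries
git config gc.reflogExpireUnreachable "30 days"  # amend/rebase leftovers

git config gc.reflogExpire                       # confirm the setting took
# To sweep everything unreachable immediately (destructive!):
# git gc --prune=now
```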

wolffenstein
Aug 2, 2002
 
Pork Pro

DSauer posted:

I had completely forgotten that with Windows 8.1 Skydrive is integrated into the Window's file system and for all intents and purposes acts like any other folder on your PC. I was thinking of using it as a private repository rather than hosting my code on Bitbucket and the sort and Git doesn't seem to mind or even notice that its working on a remote folder located somewhere else in the world. Is there anything about this that would make it a dumb idea that I'm not seeing? Looks like this works fine with Google Drive as well from what I can see.

I did the same thing, but with Dropbox. It works fine, but don't treat any of the cloud file sync services as backup; they can and will sync corrupted data if your hardware goes bad. Also, you may want to turn on two-factor authentication on your Microsoft account, depending on how valuable your code is. And make sure your account recovery methods are current, because that bit me in the rear end and I lost access to Dropbox and GitHub amongst many other accounts.

wwb
Aug 17, 2004

There are multiple reasons why using a cloud file storage solution is a dumb thing for source control.

First, while having the code in "the cloud" is a convenient benefit of source control in 2014, it isn't why one uses it in the first place -- cloud file storage can't help you manage concurrent versions, and it doesn't have much of a concept of diffing, branching, merging, or any of the other very source-code-specific things one can do with a modern SCM system.

Second, most development generates a bunch of rapidly changing trash files which are local-only and best ignored from the sources. No need to sync those up to "the cloud", nor to subject everyone else to downloading them. And Dropbox / Box / SkyDrive don't have a .gitignore concept either.

I hope this helps your moral compass.

Sauer
Sep 13, 2005

Socialize Everything!
Yep, I agree with all of that, and the fact that a cloud storage solution would also happily mirror a corrupted file system has put that whole idea to bed. Thanks for the input, folks.

evensevenone
May 12, 2001
Glass is a solid.
Dropbox does keep a history of each file, so you could roll back. It would be annoying and difficult, though, since the only information you have is timestamps, and I don't think there's any sort of diff viewer. Also, branches don't exist.

This is also why storing a git repo on Dropbox is a pretty bad idea: when you switch branches, it assumes all your files changed.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug
I wouldn't object too strongly to storing a bare repository in Dropbox, that you'd use after e.g. git clone ~/dropbox-shared/repo.git. You'd then push to that bare repository and the pushed objects would be synchronized as normal. Each machine that you use could have its own non-bare clone of that shared bare repo.

Repacking the bare repository would cause a lot of churn in the Dropbox content, but that wouldn't happen very often.

I still consider this inferior to synchronizing changes purely through Git transport protocols, but it doesn't bother me that much.
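A sketch of that layout, using a plain local directory to stand in for the Dropbox folder (paths are made up):

```shell
# Bare repo in a synced folder; each machine clones and pushes to it.
set -e
WORK=$(mktemp -d)
git init -q --bare "$WORK/dropbox-shared/repo.git"  # lives in the synced folder

# "Laptop" gets a normal non-bare clone and pushes work back.
git clone -q "$WORK/dropbox-shared/repo.git" "$WORK/laptop"
cd "$WORK/laptop"
git config user.email you@example.com && git config user.name you
echo hello > readme.txt && git add . && git commit -qm initial
git push -q origin HEAD           # only the bare repo's objects get synced

# A second machine clones the same bare repo and sees the pushed work.
git clone -q "$WORK/dropbox-shared/repo.git" "$WORK/desktop"
cat "$WORK/desktop/readme.txt"
```

Since only the bare repository sits in the synced folder, branch switches and build trash in the working copies never churn the sync client, which sidesteps the problem evensevenone described.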


Edison was a dick
Apr 3, 2010

direct current :roboluv: only
This is exactly the use-case Git-Annex was designed for.
