uXs
May 3, 2005

Mark it zero!
My workflow (with hg) is this:

(Sidenote: we have 4 environments: Dev (local development), Test (in-company testing), QA (client testing), and Production.)

Stuff gets done on the default branch. At some point, I want to release something to the test environment. I make sure my local copy is on the changeset I want to release, and I run my deployment tool. It calls msbuild a bunch of times to create installation files and everything. I also use a numbering scheme for SQL scripts, and my deployment tool runs the necessary scripts as well. It's pretty neat. Anyway, afterwards it tags the changeset as 'Test'.

When I've tested everything in the Test environment, I run the deployment tool again. It retrieves the changeset tagged as 'Test' to a folder (not my development folder because I don't want to have to clean it when I want to move from Test to QA), and it runs the whole deployment cycle again, building and running scripts and moving everything to QA. Afterwards the 'Test' changeset gets tagged as 'QA'.

When the clients have tested everything in QA (or when they haven't and I want to deploy anyway), I run the deployment tool again, and this time it deploys everything to Production. Afterwards, the 'QA' changeset gets tagged as 'Release'.

(Of course, when a bug is found in Test or QA, I don't have to keep moving along on the release track, I can start again from Dev once I've fixed the bug.)

So, I use the tags as a way to track what version is deployed where, and, crucially, for my deployment tool to know what it has to release. Deployment is as simple as pushing a button.
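
The hg side of it boils down to a handful of commands. A rough sketch, with revisions and paths made up for illustration:

code:
  # release to Test: build from the changeset I'm sitting on, then mark it
  hg update -r <target-changeset>
  # ...msbuild, SQL scripts, copy everything to the Test environment...
  hg tag -f Test                       # -f lets the tag move on the next Test release

  # promote Test -> QA: export the tagged changeset to a clean folder and deploy from there
  hg archive -r Test C:\deploy\qa-staging
  # ...build, run scripts, copy everything to QA...
  hg tag -f -r Test QA                 # the same changeset now also carries the QA tag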

Then, bugfixes:

Obviously, it's possible that bugs are discovered in the production version of the program. But development has most likely been going on, and most of the time the bleeding edge version is not ready for release. So it's not possible to fix the bug on the bleeding edge version and release that. We have to go back to the released version, fix the bug there, and release that.

And that's exactly what happens: I go back to the release version (conveniently tagged with the 'Release' tag), and fix the bug. I commit the fix, but this time not on the 'default' branch, but on a named branch called 'maintenance'. (For hg there's no functional difference between named and unnamed branches, it's just nice to be able to see if a commit was on the default branch or in maintenance.) And then I just run the deployment tool again. The changeset with the fix gets deployed to Test and tagged as Test, deployed to QA and tagged as QA, and deployed to production and tagged as Release.

Afterwards, I switch back to the default branch, and merge the bugfix into it. Done!
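
In hg terms, the maintenance round trip is roughly this (commit messages are just examples):

code:
  hg update -r Release           # back to what's actually in production
  hg branch maintenance          # next commit lands on the 'maintenance' named branch
                                 # (add -f if the branch already exists from an earlier fix)
  # ...fix the bug...
  hg commit -m "Fix the bug"
  # ...run the deployment tool: the Test, QA, and Release tags move to this changeset...
  hg update default
  hg merge maintenance           # carry the fix forward into ongoing development
  hg commit -m "Merge maintenance into default"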

TL/DR: source control system is used to track releases, but not as a deployment tool. The deployment tool uses source control to gather what it needs and afterwards tags the changeset it used. I think it's a great system and so should you.

nexus6
Sep 2, 2011

If only you could see what I've seen with your eyes
Thanks for the replies. Given that we're talking about websites here, and that currently 'deploying' is simply a case of copying PHP, CSS, or image files with either FTP or rsync, what's the advantage of using a deployment tool? Given that my company's been going for almost four years with no source control, whilst promising it for the past two years, I think I'm going to need to do some serious convincing to change the current processes.

Don't get me wrong, it's pretty easy to show why we should use source control, but I can't see why we need to sign up for a tool like Deploy or Beanstalk to do something which, as far as I understand it, for our projects just requires copying files over to a different location.

uXs
May 3, 2005

Mark it zero!
My deployment tool is just a program I wrote myself. It can execute programs (used for msbuild and command-line HG commands), and copy files from one location to the other.

The main advantage is one-click deployment of just about anything. (Once you set it up right.) Because having to do that manually is tedious and error-prone.

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.

nexus6 posted:

Thanks for the replies. Given that we're talking about websites here, and that currently 'deploying' is simply a case of copying PHP, CSS, or image files with either FTP or rsync, what's the advantage of using a deployment tool? Given that my company's been going for almost four years with no source control, whilst promising it for the past two years, I think I'm going to need to do some serious convincing to change the current processes.

Don't get me wrong, it's pretty easy to show why we should use source control, but I can't see why we need to sign up for a tool like Deploy or Beanstalk to do something which, as far as I understand it, for our projects just requires copying files over to a different location.

You can use automated scripts if you like. I use git for deploying the code to production, and then everything else is automated scripting, usually using Fabric for Python at the moment. Longer term I'm looking into configuration management, but that's a little deeper still. Either way, the business case is a one-command deployment where you go grab a coffee and just enter SSH/public key passwords as required: a MUCH more predictable and less error-prone workflow than anything manual.

Even if it's just file copying, using version control to do it is much more stable, and wastes much less time, than doing it manually.
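
Even a dumb shell script gets you most of the way there. Not Fabric, but the same idea, sketched with a made-up host and paths:

code:
  #!/bin/sh
  set -e
  # export exactly what's committed (no stray local edits), then sync it up
  rm -rf /tmp/site && mkdir -p /tmp/site
  git archive HEAD | tar -x -C /tmp/site
  rsync -az --delete /tmp/site/ deploy@www.example.com:/var/www/site/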

wwb
Aug 17, 2004

Another way to do the deployment is to use a CI server such as TeamCity or Jenkins (which is free). Loads of advantages there -- like being able to check the CI server to see when / if something got out to production, the ability to run automated tests as part of the deployment process, and boatloads of traceability overall.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

nexus6 posted:

Thanks for the replies. Given that we're talking about websites here, and that currently 'deploying' is simply a case of copying PHP, CSS, or image files with either FTP or rsync, what's the advantage of using a deployment tool? Given that my company's been going for almost four years with no source control, whilst promising it for the past two years, I think I'm going to need to do some serious convincing to change the current processes.

Don't get me wrong, it's pretty easy to show why we should use source control, but I can't see why we need to sign up for a tool like Deploy or Beanstalk to do something which, as far as I understand it, for our projects just requires copying files over to a different location.

You'll never need to hear someone say "Whoops, I forgot to ...." when it comes to production.

Argue
Sep 29, 2005

I represent the Philippines
What I'm asking for is probably bordering on sorcery, but our client wants us to make some private modifications to an open source library. They've mirrored the project on SVN, but the real project is on Git. I've used git-svn to check out the project, and now I'm wondering if there's a way to add the real repository as a remote, so that I can easily pull changes into our private copy.

I realize this is probably impossible, especially since the SVN repository doesn't even share the same history as the Git repository, but if anyone has any tips on how to manage this project, I'd greatly appreciate it.

uXs
May 3, 2005

Mark it zero!
I guess the first thing I'd do is use Git, and then look into the Git equivalent of Hg's MQ.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
That's not really an issue. Add the upstream as a remote, cherry-pick new commits from the upstream (when I last used it, git-svn barfed on merges and with no shared history the first one would be goofy), and dcommit them to svn. Write a script to automate the cherry-picking if the project isn't mostly dead.
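
In commands, the loop is roughly this (URLs and the commit id are placeholders):

code:
  git svn clone https://svn.internal.example/mirror libwork && cd libwork
  git remote add upstream https://github.com/example/library.git
  git fetch upstream
  git cherry-pick <upstream-commit>   # replay one upstream change onto the svn-tracking branch
  git svn dcommit                     # push the replayed commit(s) back to the SVN mirror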

raminasi
Jan 25, 2005

a last drink with no ice
We're using a third-party library in a project that requires a license key to use. The way it works is that you include the license key in the first call into the library, and their license terms require that you actually embed the drat thing in your source code (you can't pull it out of a config file at runtime). So, we've been doing this, and it's been working fine. The problem is that we've gotten a directive from on high to make our codebase publicly available. This presents two problems.

The first is that anyone looking at the source history will be able to see the key. We're using git, and everything I've read about rewriting git history has to do with retroactively blowing away whole files, but this thing doesn't live in its own file and even if it did I'd have to change the build files as well. I'm not even sure what this change would look like - a global, historical, find-and-replace to turn the key into "KEY GOES HERE" or something? Can git do this? Do I want it to? Is there any option that's easier than actually starting a new repo from scratch? We can't have the vendor generate a new key for us because the software doesn't actually phone home so there's no way to invalidate an existing key (I assume the library just does some math to make sure the provided key is good - I don't see how there could be a master list they could update somewhere.)

The second problem is how to deal with this going forward. What I've thought of is having a build script that does the aforementioned find-and-replace, but in reverse: to copy the key into the codebase (from an unversioned config file or something), build the project, and then remove it. Does this sound reasonable?

ToxicFrog
Apr 26, 2008


GrumpyDoctor posted:

We're using a third-party library in a project that requires a license key to use. The way it works is that you include the license key in the first call into the library, and their license terms require that you actually embed the drat thing in your source code (you can't pull it out of a config file at runtime). So, we've been doing this, and it's been working fine. The problem is that we've gotten a directive from on high to make our codebase publicly available. This presents two problems.

The first is that anyone looking at the source history will be able to see the key. We're using git, and everything I've read about rewriting git history has to do with retroactively blowing away whole files, but this thing doesn't live in its own file and even if it did I'd have to change the build files as well. I'm not even sure what this change would look like - a global, historical, find-and-replace to turn the key into "KEY GOES HERE" or something? Can git do this?

It totally can; the command you're thinking of is git filter-branch. It rewrites history by running a shell command on every commit; make the command something like sed -r -i -e 's/<license key>/XXXXXXXXXXXX/' license.clj and then make sure you don't push the old version of the branch and there you go.

I've used it myself to retroactively remove copyrighted materials I was using for testing from a repo before putting it on github.
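
If the key really is smeared across several files, a tree-filter variant of the same trick should work too. Back the repo up first; the key strings here are obviously placeholders:

code:
  git filter-branch --tree-filter \
    "find . -type f -exec sed -i 's/THE-REAL-KEY/KEY-GOES-HERE/g' {} +" \
    -- --all
  # then check the result, force-push the rewritten branches, and have everyone re-clone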

quote:

The second problem is how to deal with this going forward. What I've thought of is having a build script that does the aforementioned find-and-replace, but in reverse: to copy the key into the codebase (from an unversioned config file or something), build the project, and then remove it. Does this sound reasonable?

This sounds kind of ugly, to be honest, but I can't think of a better approach given the constraints you're under.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
It'd probably be easier to just move the actual definition of the variable defining the key to its own unversioned file.

raminasi
Jan 25, 2005

a last drink with no ice

Plorkyeran posted:

It'd probably be easier to just move the actual definition of the variable defining the key to its own unversioned file.

The only concern I had with that was that it seemed like if someone just cloned the repo and tried to build it wouldn't be as clear why it wasn't working. With a script, the clear problem is that the script is trying to point to "/pick/a/license/key/file.txt", but if a source file is missing it looks like I just hosed up at first glance.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

ToxicFrog posted:

It totally can; the command you're thinking of is git filter-branch. It rewrites history by running a shell command on every commit; make the command something like sed -r -i -e 's/<license key>/XXXXXXXXXXXX/' license.clj and then make sure you don't push the old version of the branch and there you go.
Note that if the repo is nontrivially sized then running this on a ramdisk is very much worth the effort, since it has to check out and commit -a every commit in the repository.
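
filter-branch has a -d flag for pointing that scratch directory somewhere else, so on Linux something like this keeps the churn on tmpfs (the path is just an example):

code:
  git filter-branch -d /dev/shm/fb-scratch --tree-filter "<your filter command>" -- --all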

Gazpacho
Jun 18, 2004

by Fluffdaddy
Slippery Tilde

uXs posted:

I guess the first thing I'd do is use Git, and then look into the Git equivalent of Hg's MQ.
That would be guilt, but in this case the patches are only used temporarily. Running "git diff" in a clone of the original git project and then "git apply" in a git-svn clone for every new change on the default branch should be enough to keep them in sync. You should keep the source project and your mods in separate SVN branches so you can coordinate merges.
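
The sync step itself is only a couple of commands. Roughly, with made-up revision names:

code:
  # in a clone of the upstream git project
  git diff <last-synced-commit> origin/master > /tmp/upstream.patch

  # in the git-svn clone of the private mirror
  git apply --index /tmp/upstream.patch
  git commit -m "Sync with upstream"
  git svn dcommit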

I recently started a project where Mercurial is standard and I can say all kinds of good things about MQ. It's the next best thing to git if you must use Mercurial instead.

Gazpacho fucked around with this message at 06:44 on Jun 27, 2013

evensevenone
May 12, 2001
Glass is a solid.

GrumpyDoctor posted:

The problem is that we've gotten a directive from on high to make our codebase publicly available.

Do you have to give people your whole git repo? Why not just build source packages for each release, and have the build process take care of scrubbing the file(s) as needed?

It seems like there could be a lot of crap in a git repo you might not want to make public.

raminasi
Jan 25, 2005

a last drink with no ice

evensevenone posted:

Do you have to give people your whole git repo? Why not just build source packages for each release, and have the build process take care of scrubbing the file(s) as needed?

It seems like there could be a lot of crap in a git repo you might not want to make public.

I'll happily concede inexperience with this sort of thing, but that sounds like at best the same amount of work as this. There isn't anything else I can think of in the source code that shouldn't be public - I'm pretty sure I've gotten all the swear words out of comments.

wwb
Aug 17, 2004

GrumpyDoctor posted:

We're using a third-party library in a project that requires a license key to use. The way it works is that you include the license key in the first call into the library, and their license terms require that you actually embed the drat thing in your source code (you can't pull it out of a config file at runtime). So, we've been doing this, and it's been working fine. The problem is that we've gotten a directive from on high to make our codebase publicly available. This presents two problems.

The first is that anyone looking at the source history will be able to see the key. We're using git, and everything I've read about rewriting git history has to do with retroactively blowing away whole files, but this thing doesn't live in its own file and even if it did I'd have to change the build files as well. I'm not even sure what this change would look like - a global, historical, find-and-replace to turn the key into "KEY GOES HERE" or something? Can git do this? Do I want it to? Is there any option that's easier than actually starting a new repo from scratch? We can't have the vendor generate a new key for us because the software doesn't actually phone home so there's no way to invalidate an existing key (I assume the library just does some math to make sure the provided key is good - I don't see how there could be a master list they could update somewhere.)

The second problem is how to deal with this going forward. What I've thought of is having a build script that does the aforementioned find-and-replace, but in reverse: to copy the key into the codebase (from an unversioned config file or something), build the project, and then remove it. Does this sound reasonable?

What platform?

Presuming .NET stuff, would it be OK to include the file as an embedded resource rather than in the code? Build-wise, you can handle that by having a dummy file in the git source control, then using a second repo (this is where I use svn) to pull in the key file before you build for release.
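
Build-wise that can be as dumb as a little pre-build script; something along these lines, with invented URLs, paths, and file names:

code:
  # pull the real key file from the private repo over the dummy one, build, then put the dummy back
  svn export --force https://svn.internal.example/keys/LicenseKey.resx src/Resources/LicenseKey.resx
  msbuild MyApp.sln /p:Configuration=Release
  git checkout -- src/Resources/LicenseKey.resx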

For the public repo I would just make a clean break and stand up an independent repo.

wwb fucked around with this message at 19:10 on Jun 27, 2013

duck monster
Dec 15, 2004

Argue posted:

What I'm asking for is probably bordering on sorcery, but our client wants us to make some private modifications to an open source library. They've mirrored the project on SVN, but the real project is on Git. I've used git-svn to check out the project, and now I'm wondering if there was a way to add the real repository as a remote, so that I can easily pull in changes into our private copy.

I realize this is probably impossible, especially since the SVN repository doesn't even share the same history as the Git repository, but if anyone has any tips on how to manage this project, I'd greatly appreciate it.

Sort of off-topic, but if you're combining open and closed source components, be careful about the licence. It's OK to combine GPL and closed source code as long as it's not distributed outside your organization (the client's organization.... I don't know?), but that isn't necessarily true of other share-alike licenses. You're good to go on all the MIT/BSD/Apache type licenses though.

ExcessBLarg!
Sep 1, 2001

Gazpacho posted:

That would be guilt,
I've used guilt some but I found it to be buggy on occasion, and I don't think it's actively maintained.

Honestly I've found that interactive rebases are a totally reasonable alternative to mq/guilt. The downside is that you can't "comment out" entries in a series file (interactive rebase list) that you intend to reapply later.
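
For anyone who hasn't used it, the mq-ish part is just:

code:
  git rebase -i HEAD~5    # open the last five commits in an editor
  # reorder the lines, or change 'pick' to edit / squash / fixup / drop, then save and quit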

Gazpacho posted:

I recently started a project where Mercurial is standard and I can say all kinds of good things about MQ. It's the next best thing to git if you must use Mercurial instead.
So Mercurial's philosophy, at least at one time, was that revision history should/is/must be immutable and its design is a reflection of that. I suppose there's still a valid argument that most history should/is/must be immutable (e.g., anything that git requires a force-push for), but even there folks appreciate the concept of "mutable recent history" which MQ provides.

When I was a Mercurial user I really appreciated MQ for that feature. However, at the same time I was bothered that my workflow required a nearly-orthogonal extension to the base revision control system, particularly one that can be confusing to explain to others. With Git, interactive rebase is pretty much part of the standard workflow, so once folks get used to Git, it's just part of the core system.

These days Mercurial bookmarks and rebase might be a closer approximation of Git, but I haven't used them.

Victor
Jun 18, 2004
Are there any books/articles that talk not about specific git commands, but about how to think in git? It would be written for someone who is more familiar with something like svn or cvs -- someone who knows a few things about source control, but not a lot. There are some features of git that one would never appreciate without a motivating use case. You wouldn't always know that there is an excellent git command (or a few commands) for said 'motivating use case' if you had never read about said use case!

Make sense?

duck monster
Dec 15, 2004

Here's git in a nutshell:

1) Instead of a central repository everyone pushes to, everyone's a repository and can push/pull to/from each other.
2) Don't be afraid of merging and branching. Git's really smart about stuff and will start honking if things won't work.

Aaand that's about the basics.
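
Or, as commands:

code:
  git clone <url>          # get your own full copy of the repository, history and all
  git commit -a            # record changes locally
  git pull                 # grab changes from another repository
  git push                 # send yours back
  git checkout -b thing    # branches are cheap; merge back later with 'git merge thing'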

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

duck monster posted:

Here's git in a nutshell:

1) Instead of a central repository everyone pushes to, everyone's a repository and can push/pull to/from each other.
2) Don't be afraid of merging and branching. Git's really smart about stuff and will start honking if things won't work.

Aaand that's about the basics.

That doesn't really answer the question, though. As to myself, I don't know about any book or tutorial, though I would recommend "Git for ages 4 and up" if you're still used to SVN/P4 where branching/tagging are actually heavyweight operations.

Take a look at filter-branch; it's actually a decent command for representing git's philosophy of rewriting history.

MononcQc
May 29, 2007

Git from the bottom up [PDF] is the only git book I ever thought made sense to get the philosophy behind the tool.

Git is a leaky abstraction with a bad interface over its underlying format, and all efforts to hide this fact away end up being more confusing than anything else. Going in and understanding the underlying format itself is, as far as I can tell, the best way to ever feel comfortable with git and what it does to your repositories.

People have to stop acting as if the problem were that git is distributed and SVN is not. Mercurial is easy to grasp. Git is harder because the way people interact with it is bad: it tries to imitate what you'd do with SVN while failing at it. The git people put this facade in front of it that badly hides its internals, and every power user ignores it, while others are left staring at it, trying not to see the guts behind it, when that's precisely what you need to get.

Victor
Jun 18, 2004

MononcQc posted:

Git from the bottom up [PDF] is the only git book I ever thought made sense to get the philosophy behind the tool.
Thanks! This is very much the kind of thing I was asking for. At the very least, it looks like a great start. I was thinking something longer might be useful as well, something that provides nice motivations for a variety of uses of the various git commands. "Here's how to use git elegantly."

Gazpacho
Jun 18, 2004

by Fluffdaddy
Slippery Tilde
When I've used CVS or Perforce or TFS in a team project, that has always been accompanied by pressure to submit only fully-functional, fully-tested changes to the repository. My experience with git (and Mercurial MQ) has been more like a personal save-game for coding. Like, imagine that you're playing through some really difficult boss fight in a video game, which requires you to complete some sequence of hits against the enemy. So you make a hit, save the game, make another hit, save the game, and so on while keeping an eye on your health to make sure you can get through the rest of the fight. Then maybe you decide to try a different strategy, so you roll back to earlier in the fight and try that strategy. And now imagine that you have all these saved games organized in a tree with branches for your different strategies. When you finally finish the sequence, it looks like you played through like a pro. That's how you can think of git.

MonocQc, I have no frigging idea what you're talking about.

Victor
Jun 18, 2004
I actually did that for the first time, while trying to troubleshoot a problem. The thing I didn't do was take notes in some DEBUGLOG file, which I should have. At least I thought I shouldn't make my commit messages too long... which could be wrong. I'm not sure what the protocol is for that, whether it's a project-specific thing, or what. One thing that you miss out on is that you can choose to merge only the steps which headed toward the solution, so that the commit log isn't cluttered up with debugging crap.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Gazpacho posted:

When I've used CVS or Perforce or TFS in a team project, that has always been accompanied by pressure to submit only fully-functional, fully-tested changes to the repository.

That's more a symptom of poor branching practices than anything else. I always encourage development to take place in a dev branch, where the only barrier to checkin should be that the code compiles. It's only as you push the code back down towards the trunk (through any other intermediate branches) that things like CI come into play.

MononcQc
May 29, 2007

Gazpacho posted:


MonocQc, I have no frigging idea what you're talking about.

I think git has a lovely interface that is unintuitive and annoying. Whereas CVS(nt), SVN, or even hg will support similar sets of operations, and transitioning from one to the other is usually easily done without needing to drop previously acquired knowledge, git will use similar terminology to represent fundamentally different operations (some examples). I find it useful to try and forget about previously acquired knowledge when dealing with git, whereas with other source control tools I've used I could reuse former knowledge far more easily. That's purely my opinion, though.

---

Git's way of doing things is also intimately related to how it represents commits as linked lists/trees of diffs/patches that can be applied, and not knowing about them severely limits you -- squashing, reordering commits, rebasing, cherry-picking them and whatnot are things you are expected to do that use them.

This is probably related to how git allows you to change history: To change history, you need to understand how git represents data, in a way similar to how you need to know how to manipulate pointers in a data structure.

This is something you do not need when using SVN or mercurial, where you could happily imagine the tool works by taking a full snapshot of the entire project for every commit. You could also imagine them working the same way git works internally and disabling mutability, or you could also imagine them tracking individual files and attaching them to commits, if it felt comfortable, and use these tools fine 99% of the time. The rest is just optimization and implementation details you do not need to worry about.

Basically, CVS, mercurial, and SVN let you ignore their internal representation if you don't want to hear about it. It's possible the user has a mental model of how things work that isn't the same as reality, and that is not a problem, and will very likely never be one, because the interfaces these programs present to you properly abstract those details away from you. You can usually carry that mental model across each of these tools with minimal overhead and keep being productive.

When you get to use git, though, that mental model has to go if it's not the one git uses already. To use a lot of major functionality in git, you need to understand how it represents changes, and how this representation ties in with the git vocabulary. The upside of it is that you usually get more flexibility out of the box by understanding this.

I can't exactly remember which mental model I had when I came to git, but I remember it was the wrong one. Until I managed to figure it out (through 'Git from the bottom up'), it was always a huge, huge pain to deal with commands that relied on an internal representation I felt I should not need to know about to use them.

And I kept feeling I should not need to know about this internal representation because people kept telling me git was easy and simple and "here's the command you need to use, it's simple dammit". In hindsight, you definitely want to know more about the details. You can't stick to the interface in git the way you could in SVN, CVSNT, mercurial, or whatever without walling yourself off from a major part of the features, no matter what people say.

MononcQc fucked around with this message at 18:11 on Jul 8, 2013

Gazpacho
Jun 18, 2004

by Fluffdaddy
Slippery Tilde
I've tried to get around the problem by creating feature branches in Perforce. It hasn't worked, and IMO can't work as I would need it to, because feature branches don't support the use case of trying different strategies for one feature. If I had to use it (or Subversion) again, I'd want to put git in front of it.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

MononcQc posted:

it represents commits as linked lists/trees of diffs/patches that can be applied

You lost me here, because this is flat out wrong. Git commits have pointers to full snapshots of the project content. Tools like rebase treat the differences between commits as diffs/patches, but Git internally does not store data in this way. The only tool that I know of that operates in this manner is darcs, but I haven't kept up with VCS news since starting to use Git.

I agree overall, though, that fundamentally understanding Git's data model is required to use it in any real capacity.

MononcQc
May 29, 2007

Lysidas posted:

You lost me here, because this is flat out wrong. Git commits have pointers to full snapshots of the project content. Tools like rebase treat the differences between commits as diffs/patches, but Git internally does not store data in this way. The only tool that I know of that operates in this manner is darcs, but I haven't kept up with VCS news since starting to use Git.

I agree overall, though, that fundamentally understanding Git's data model is required to use it in any real capacity.

Yeah, I was overly vague, but went with a short way to mention it. I'm not actually aware of how git stores things on disk, but only of the tree/blob/commit representation as described in the "Repository: Directory content tracking" chapter of Git from the Bottom Up.

duck monster
Dec 15, 2004

Volmarias posted:

That doesn't really answer the question, though. As to myself, I don't know about any book or tutorial, though I would recommend "Git for ages 4 and up" if you're still used to SVN/P4 where branching/tagging are actually heavyweight operations.

Take a look at filter-branch; it's actually a decent command for representing git's philosophy of rewriting history.

Honestly, for 90% of use cases it doesn't really get much harder than push, pull, and commit. I mean, for bigger projects, sure, you need to start talking about branching and tagging and the like, but for dicking around and keeping code safe, synced, and up to date, git's pretty straightforward.

evensevenone
May 12, 2001
Glass is a solid.
Maybe it's my ADD but I branch even on personal projects now. I usually make a feature branch, do a bunch of commits and then when it's done I squash and rebase back on master.

I just like to have master always be working.
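
Which works out to something like this (branch name invented):

code:
  git checkout -b some-feature      # hack away in as many messy commits as you like
  git rebase -i master              # squash the mess into one or two tidy commits
  git checkout master
  git merge --ff-only some-feature  # master only ever moves forward to a known-working state
  git branch -d some-feature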

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Lysidas posted:

You lost me here, because this is flat out wrong. Git commits have pointers to full snapshots of the project content. Tools like rebase treat the differences between commits as diffs/patches, but Git internally does not store data in this way.

Git packfiles, which are what gets sent over the network and what's used as an archival format for not-recently-accessed objects, are a delta storage format.
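
You can see it by poking at a pack directly (assuming the repo has been packed at some point):

code:
  git gc                                                # repack loose objects into a packfile
  git verify-pack -v .git/objects/pack/pack-*.idx | head
  # the trailing columns show which objects are stored as deltas and how deep each chain is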

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

evensevenone posted:

Maybe it's my ADD but I branch even on personal projects now. I usually make a feature branch, do a bunch of commits and then when it's done I squash and rebase back on master.

I just like to have master always be working.

As do I. I mean, that's one of the main points of git... branches are easy.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

Suspicious Dish posted:

Git packfiles, which are what are sent over the network, and used as an archiving format for not-recently-accessed objects, are a delta storage format.

Very true, but that isn't really what MononcQc meant. He implied that Git commits fundamentally are patches against the previous version (a la darcs), so that if you make a commit Y on top of commit X, Y is defined and stored as "X plus these changes".

This isn't the case; commit Y isn't a diff but an annotated pointer to a tree object that specifies the full contents of every file in the project. Whether these objects (commits, trees, blobs) are stored as deltas is really an implementation detail and a file size optimization. It doesn't mean you can say that "Git stores commits as patches" -- packfile delta chains are mutable and in fact older objects are often stored as deltas against newer ones. (The rationale is that you're more likely to need to access newer objects, so CPU time shouldn't be spent to reconstruct those.)

The most accurate answer to "does Git store diffs" is something like "technically yes, but not in the way that you meant".
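
Easy to see for yourself:

code:
  git cat-file -p HEAD             # a commit: a 'tree' pointer, parents, author, message -- no diff in sight
  git cat-file -p 'HEAD^{tree}'    # the full directory listing that commit points at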

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
The plumbing operates on full-tree blobs, but the porcelain mostly exposes commits as if they were diffs. Thinking of commits as deltas is incorrect, but for many purposes is still the right thing to do.

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS
If I already know Mercurial and Darcs, should I take the time to learn Git? The posts here make Git seem complicated, but I would learn it if it has significant advantages over those other two.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Tequila Bob posted:

If I already know Mercurial and Darcs, should I take the time to learn Git? The posts here make Git seem complicated, but I would learn it if it has significant advantages over those other two.

If you need it then learn it. If you're curious then learn it. It has its pros and cons compared to other DVCS systems.

I'm not sure what kind of answer you're looking for.
