|
sonic bed head posted:I am pretty sure that your understanding of what svn:ignore does is incorrect. svn:ignore has nothing to do with the repository/server. It is just a setting that tells the client to ignore that directory when updating/committing. I can't guess what's happening with epswing, though.
|
# ¿ May 28, 2009 17:06 |
|
|
Kekekela posted:Cool, thanks. That being the case, why do I have to do the "Update" in step 2?
|
# ¿ Aug 22, 2009 03:44 |
|
I have a question about version numbers as they relate to version control.

At the company where I recently worked, the version number of a software package was stored in a file that's part of the project. The checked-in version number string usually ended in "-dev", like "1.4.0-dev", for development builds of the software. Release candidates had version strings like "1.4.0-rc3". The company made hardware devices as well as the software to run them, so it was common for developers to test changes by loading their development builds on real hardware. The test department could tell at a glance that a system was running a development version of software, and bring issues to the attention of the developer who made it instead of entering them in the issue tracker.

Whenever the lead developer tagged a release candidate or released version, he/she would do so by modifying the version number string, committing that file, tagging the software at that state, and then restoring and committing the "-dev" suffix. It's this last part of the process that I think is a little clumsy, and I'm wondering how to improve it.

For a personal project, I can tell that (for example) the binary in my project directory reports "v2.4.10" but it's actually "v2.4.10 plus changes that I'm still working on". It isn't nearly as important to have the "-dev" suffix for development builds of a personal project, but I got used to that process and I quite like it. How would you handle this? I'm using Git, and I considered always having the version number as "-dev" in the master branch, then tagging a release by making a branchless commit that changes the version number. For example: code:
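A sketch of that branchless-commit idea (all names here are hypothetical, and this assumes the version string lives in a file called VERSION):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com && git config user.name dev

# master always carries the -dev version string
echo "2.4.10-dev" > VERSION
git add VERSION && git commit -qm "work in progress"

# Release: a single commit off master, reachable only through the tag
git checkout -q --detach
echo "2.4.10" > VERSION
git commit -qam "Release v2.4.10"
git tag -a v2.4.10 -m "Release v2.4.10"

# master itself never saw the version bump
git checkout -q master
```

The annotated tag keeps the release commit alive even though no branch points at it, and master never has to flip the suffix back and forth.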
Should I care about this at all? It's really only useful to get information about a binary that I didn't produce and am not familiar with. I don't keep old binaries around (that's what version control is for) and I'm the only one working on this project, so this is probably a moot point.
|
# ¿ Sep 12, 2009 03:04 |
|
BizarroAzrael posted:Is there a good way to find conflicting files in a working copy without attempting an update on it? We have guys that often get conflicts, and it would be nice if I could script something that finds the problems so they can be fixed by hand, rather than have them kill the overnight update and the subsequent build process. You could maybe do a 'svn diff -r BASE:HEAD file-in-working-copy' for each remotely updated file, but you'd basically be reimplementing (or wrapping) patch to determine whether it can cleanly apply the diff. I have a more general question for you, though: do I understand correctly that this nightly update/build happens on each developer's machine? Why? Is it so hard to do a manual svn update whenever there are changes that you think you should have? Does the build of your project take several hours or something? I'm just curious. EDIT: Free Bees is right; I meant svn status instead of svn update. Lysidas fucked around with this message at 06:58 on Sep 19, 2009 |
# ¿ Sep 18, 2009 19:41 |
|
supster posted:I'm trying to get some sort of version control working with my remote web server. I set up an SVN repository (on a hosted service) and figured that if I map my remote web server as a drive in Windows (via FTP) then I would be able to use the TortoiseSVN context menus normally in the mapped folder. However they don't appear at all - what are my other options? Committing files with TortoiseSVN over a drive-letter-mapped FTP server is a bad idea. It isn't the worst idea, and you can probably get it to work, but there are much better ways. TortoiseSVN is probably configured to not display its context menus on network drives, and this is a good thing. You might transfer everything in bulk to a directory on your machine, and commit from there. You might consider a distributed version control system like Git or Mercurial -- you can just run git init or hg init on the live web server code, commit it, and clone the repository onto your machine.
|
# ¿ Oct 21, 2009 16:09 |
|
supster posted:Unfortunately FTP is pretty much the only way I can access the files.
|
# ¿ Oct 21, 2009 21:39 |
|
supster posted:Can you explain in some more detail how a distributed system will help me where SVN can't? Maybe you can expand on "synchronize it afterwards". How exactly are you editing these files remotely if you don't have the capability to do anything remotely? Are you using a text editor built into an FTP client or something (this is a horrible idea)? When you say that everything is done remotely, are you editing files on the live site (this is an even worse idea)? You are wrong about not wanting to do anything locally, by the way. Normal development practice is to work in a local copy of the files (a Subversion working copy or a Git working tree or whatever) and commit to the repository as you make your changes. When a feature is complete or stable enough to be pushed to production, you can use the same version control system to update the files on the server (either svn up or git pull). Whose arm do you have to twist to have your client let you use non-braindead development practices? I repeat: What OS is the web server running?
|
# ¿ Oct 21, 2009 23:44 |
|
LightI3ulb posted:I have an SVN that stores reports that can be of quite excessive size through use of a perl module. I've been tasked with trying to find a way to have the module be able to add a file to the svn if the file does not already exist. Your question about the .svn/entries file is quite strange and I think you're making this a lot harder than it is. That, or I'm totally misunderstanding your question. To list what's in the repository, use svn ls (this is much better than running 'svn checkout' and parsing that output). To add a file, svn add.
|
# ¿ Dec 21, 2009 18:21 |
|
nelson posted:So if your source files are ASCII text files and you or the people you work with are like me you may want to go with CVS. nelson posted:I've managed to screw something up every VCS I've come in contact with . Try Git and let us know how badly you break it.
|
# ¿ Jan 16, 2010 14:49 |
|
nelson posted:CVS is generally better than RCS for group work nelson posted:I've managed to screw something up every VCS I've come in contact with . Never use CVS for a new project, and never use it at all if you can avoid it.
|
# ¿ Jan 16, 2010 19:47 |
|
Lamont Cranston posted:This is probably really stupid, but I want to know if it's possible. I have a project that I'd been working on and storing in a git repository. About a month ago I decided to scrap everything and start over, and stored that project in a separate repository. Now, there's obviously no reason to keep both of them around, but I'd like to have the old project's history. Is there a way that I can sort of stitch them together, so that after the last commit on the original project, the new project begins? (And it would appear that all of the old files were deleted because the first commit of the new project was empty) This is certainly possible, and not stupid. It's called a "graft" in Git terminology; see http://git.wiki.kernel.org/index.php/GraftPoint for details. This is a temporary mechanism that adds a fake parent to whatever commits you want. You can make it permanent by running git filter-branch after setting up a graft point. I'm not quite sure what the filter-branch arguments should be for this, but I'm sure it's somewhere on http://www.kernel.org/pub/software/scm/git/docs/git-filter-branch.html EDIT: Be warned that git filter-branch rewrites history in bulk, changing the SHA-1 values of all commits downstream from where you grafted onto the old history. People won't like this if you've shared your history and they've based their work on yours. EDIT 2: Actually it's quite easy: code:
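A runnable sketch of the graft-then-filter-branch dance (the repo layout is a stand-in; the grafts file format is "<commit> <fake-parent>"):

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com && git config user.name dev

# Stand-in for the old project's history
echo old > file && git add file && git commit -qm "old project"
old_tip=$(git rev-parse HEAD)

# Stand-in for the from-scratch rewrite: an unrelated root commit
git checkout -q --orphan rewrite
git rm -q -rf .
echo new > file && git add file && git commit -qm "new project begins"
new_root=$(git rev-parse HEAD)

# Graft the old tip in as the new root's fake parent...
echo "$new_root $old_tip" > .git/info/grafts
# ...and bake it in for real (this rewrites the SHA-1s on 'rewrite')
git filter-branch -f -- --all
rm -f .git/info/grafts
```

After the filter-branch run, the graft file can go away because the fake parent is now a real one. Modern Git deprecates info/grafts in favor of git replace --graft, but the mechanics are the same.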
Lysidas fucked around with this message at 05:26 on Jan 22, 2010 |
# ¿ Jan 22, 2010 05:08 |
|
Would you ever turn down a job because of the version control system they use? I'd like to imagine that this has happened at least once, somewhere: quote:(during interview/tour/whatever)
|
# ¿ Feb 12, 2010 20:40 |
|
beuges posted:My current idea is to create a repository for each client, and give my dev partner access to the ones he is involved with, rather than having to deal with permissions on a single repository, and likely exposing all my clients even if the actual contents of their projects is restricted. You can import that back into a different Subversion repository, but that's a bit more complicated (not to mention pointless).
|
# ¿ Mar 5, 2010 22:54 |
|
beuges posted:the impression I've gotten from this thread is that the git clients for windows pretty much suck. I also recommend looking into SmartGit.
|
# ¿ Mar 6, 2010 16:28 |
|
Dromio posted:
code:
|
# ¿ Mar 9, 2010 18:06 |
|
He doesn't need --shared: code:
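Something along these lines, with local paths standing in for the two machines (over SSH you'd use an ssh:// URL for the remote instead):

```shell
set -e
# "Home machine": a plain bare repository; no --shared needed for one user
bare=$(mktemp -d)/myproject.git
git init -q --bare "$bare"

# "Work machine": an ordinary repository that pushes to it
work=$(mktemp -d) && cd "$work"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo hello > file && git add file && git commit -qm "first commit"

git remote add home "$bare"
git push -q home master
```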
If he's pushing to his home machine, he's probably the only user who will do so. If multiple people will use this repository, Gitosis is a much better idea anyway.
|
# ¿ Mar 9, 2010 18:18 |
|
Zhentar posted:Not only is the msysgit GUI eye-rape, but when I was trying it out last week trying to checkout a branch would crash it. TortoiseGit is at least usable, although it's still pretty easy to mess up. As far as I know, TortoiseGit depends on an external Git executable (e.g. msysgit). SmartGit definitely does. If you had serious problems with msysgit, I'm not sure how TortoiseGit could be more stable. (I feel like repeating myself for some reason: I couldn't fathom using Subversion without a good GUI (I really enjoyed SmartSVN), but I've found a GUI to be bothersome and useless for Git. Its command line interface is so clean and easy, partially because of the index. I don't have to specify the paths/files as part of the commit command, unlike SVN. Man up and use the CLI )
|
# ¿ Mar 9, 2010 18:53 |
|
Git or Mercurial, without a doubt. If you're starting from scratch, you have no excuse to use Subversion (centralized version control is obsolete). Distributed version control is never overkill in any situation; actually it's much quicker and easier to get a repository running. Because of this, I use Git to version control things that I never would have thought of before. I heartily recommend Git, but Mercurial's better Windows support might be better for you. As mentioned earlier in this thread, TortoiseHg is great and just hit 1.0.
|
# ¿ Mar 11, 2010 17:42 |
|
Sizzlechest posted:He's looking to set up a repository at work. He didn't explicitly say it, but I'm going to go out on a limb and guess the following are true: A distributed setup does not, in itself, increase the odds of larger merge conflicts. Merging as late as possible increases these odds. What kind of VCS do you think encourages people to delay merging as long as they can? One where repeated merging is painful (and destroys history), or one in which merging is clean, easy and fast? When I have a long-running topic branch, I regularly merge the master branch into it and fix any merge conflicts as they arise. Then, when I merge in the other direction, it's trivial. (Note that this is equivalent to keeping unstable changes in your SVN working copy and repeatedly 'svn up'ing whenever anyone else commits.) It's absurd to suggest that a DVCS would inspire employees to fork the codebase and never contribute their changes back to the mainline. I do, however, have the right to say "I want to work in private" -- I get to incrementally commit and refine my changes until they're ready to share with others. My experimental branches don't clutter up everyone else's history unless I decide to push them to the central repository, but I can share them if I want to. I'll go so far as to say that starting from scratch with Subversion is stupid. Even for two people, a distributed system (and the branching capabilities that it comes with) is superior in every way.
|
# ¿ Mar 11, 2010 18:43 |
|
Milde posted:Just out of curiosity, what makes Git easier to use than Mercurial for you? I use both, but I find Mercurial a lot simpler UI-wise. I primarily use command-line tools, and I've completely fallen in love with the index. I'm a big fan of staging only parts of a file; it's a neat feature. I also really like the ease with which Git lets you rewrite history -- as far as I know, Mercurial doesn't ship with features like rebasing, commit amending, etc. (not to mention git filter-branch). Essentially, I understand Git a lot better than Mercurial, so I see it as easier to use. ("what? you mean hg pull doesn't merge? that's stupid") I should probably stop stating this as objective truth
|
# ¿ Mar 11, 2010 19:17 |
|
Sizzlechest posted:No, I said your cockamamie hybrid setup is.
|
# ¿ Mar 11, 2010 19:30 |
|
Sizzlechest posted:So...you're implementing a centralized system? Linus said it much better than I will: http://lwn.net/Articles/246381/ In a development organization, changes should eventually make their way into a central repository. A DVCS gives you a lot of freedom in how this happens (like through a single person who collects changes, for example). Sizzlechest posted:Working copy changes in subversion also run instantly. Commits to a centralized server whether in svn or git will take longer, obviously. Sizzlechest posted:And you don't need a decentralized VCS to accomplish this. The type of branching you're talking about is fast and easy in SVN, too. That's not the kind of branching and merging Linus was talking about when he advocates a decentralized system for Linux. You've got thousands of developers working independently, working on things that may never get into an actual release. They need a way to work on their own and merge later. Two developers working in the same shop don't need to work this way, nor is it advantageous for them to do so. Subversion absolutely does not keep track of branching and merging "just fine".
As the Subversion book says, "The bottom line is that Subversion's merge-tracking feature has an extremely complex internal implementation, and the svn:mergeinfo property is the only window the user has into the machinery." This just screams to me that the Subversion devs painted themselves into a corner with merging, and the best thing to do is scrap the system and redesign it from scratch. Incidentally, this is exactly what happened with Git, Mercurial, and others!
|
# ¿ Mar 11, 2010 19:53 |
|
I'd heard that Mercurial didn't have lightweight local branches like Git, then heard that it did, but I didn't realize that it was through an extension and that these bookmarks can't be easily pushed or pulled. Git's local branches seem much easier to me. I just remembered that I strongly dislike using .hgtags -- I think it's bizarre that you can check out a tag, but immediately after doing that, the tag kind-of no longer exists. Git got it right; branches and tags are just pointers to commits. Tags don't move, but branches do. Mercurial's command set seems simpler, but Git's history model makes much more sense to me. I believe that this is the difference between 'simple' and 'easy': Mercurial is easier to start using, but Git's design makes a lot more sense.
|
# ¿ Mar 11, 2010 20:11 |
|
Sizzlechest posted:It's a specious argument since a centralized server doesn't require the same kind of merge tracking as a distributed one. There's a single repository everyone is working from. People aren't creating personal branches to work in and then merging them with public ones. They're working from a shared branch or trunk. Merge conflicts get resolved on incremental commits. Merging between branches and the trunk is tracked sufficiently. I suppose you could do private branching in Subversion, but there's no logical reason you'd want to, nor was it designed with that in mind. If you're not making branches, then what do you do with experimental changes that aren't ready to go live yet? Do they sit in your working copy until they're ready to commit? (That's a horrible idea.) Do you commit them to the trunk? (That's even worse; what happens if you need to quickly fix a bug, but your experimental changes are mixed in with other important bug fixes that you can't easily cherry-pick?) Sizzlechest posted:There's no reason why a private company like the one mentioned needs to implement a distributed system. They don't need to have one of them play "merge manager" and reject on a problem like Linus does. They can handle conflicts as they happen, which improves communication. I absolutely agree that two people should use a centralized workflow -- both of them have individual repositories and have push access to the central repository. DVCS tools still have vastly superior features. Thermopyle posted:I'm fairly noobish when it comes to this stuff...but I think the argument you need to make is why shouldn't they use a DVCS.
|
# ¿ Mar 11, 2010 21:03 |
|
Bonus posted:This is a constant back and forth discussion here on SA. People will claim how you should use SVN because it's "good enough" and "you don't need more". Even after being faced with arguments that distributed version control systems can do pretty much everything that centralized ones can do and that they can do it better. Hell, this happened a few pages ago in this thread. At the time, I wasn't as experienced with Git and I hadn't yet Seen The Light. Now that I have, I'm an enormous jackass to anyone who suggests starting from scratch with Subversion.
|
# ¿ Mar 11, 2010 21:21 |
|
In case you didn't know, the msysgit version of Git 1.7.0.2 was released a few days ago. If you're using 1.6.5.1, I strongly recommend updating. Apparently there has been a lot of work in the Git mainline to make it perform better on Windows, and it definitely shows in this release. It feels much snappier to me.
|
# ¿ Mar 12, 2010 20:29 |
|
Knackered posted:Long ago, I was going to say something about your original post:

Knackered posted:I was thinking of using a simple SVN over GIT/Mercurial.

Since I'm in this thread, I may as well share some useful information too:

Zhentar posted:AutoCrLf handling is, at best, poor. It will mark files as modified immediately after checking them out, which will interfere with a lot of operations (no switching for you).

I've seen this happen and I believe I have an explanation for it. I would guess that your repository contains files with CR+LF line endings (i.e. the actual content of the blob objects). If core.autocrlf = true, Git will translate everything to a CR+LF line ending on checkout. However, since it would be committed with LF line endings instead of the CR+LF in the current commit, it's marked as modified (even after a git reset --hard or git stash).

I believe that the best way to fix this is to commit the changes, thereby fixing the line endings in the actual repository. This will have detrimental effects on git blame, though. I'm sure you could fix the line endings in all previous commits with git filter-branch, but that can be quite disruptive (see the documentation on recovering from an upstream rebase).

Off the top of my head (i.e. I haven't had to do this in a while and I'm on a Linux machine now), I've found this sequence of operations to fix most CR/LF problems except for the one that I described above: code:
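A sketch of that kind of fix-up sequence (my reconstruction, not the original snippet): enable the conversion setting, then force Git to re-checkout the files so the new line-ending policy actually gets applied to the working copy.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com && git config user.name dev

# A file committed with plain LF line endings
printf 'one\ntwo\n' > file.txt
git add file.txt && git commit -qm "add file"

# The fix-up: turn on conversion, then delete and restore the working
# files so they're rewritten with the new line-ending policy
git config core.autocrlf true
rm file.txt
git checkout -q -- .
```

After this, file.txt on disk has CR+LF endings while the blob in the repository keeps LF, which is the normalized state autocrlf expects.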
|
# ¿ Apr 13, 2010 18:01 |
|
HFX posted:I've been using git with an svn connector. I had been doing all rebase and commits from the master after merging local branches. However, the rebase operation has an unfortunate side effect of removing the local git history for the branch being merged back to trunk. Can someone recommend me a way to keep the history and still be able to do updates from SVN? The short answer: create a new branch (like git branch old) before running git svn rebase. I'm not sure if you should make new commits on the old branch, though. Repeated rebases of the same commits might not be handled correctly (though they seemed to be in my quick test).
|
# ¿ Apr 14, 2010 18:36 |
|
Lexical Unit posted:I'm running git from Mac OS X and developing on a Windows VM. This seems like a very bad idea, and I'm not surprised in the slightest that you're having problems. Are you using Git over a network mount, or something like that? I recommend installing msysgit in the Windows VM and using Git from there. If you're using it on OS X, you're probably well-versed in the command line and won't mind a sub-par GUI (though I really like SmartGit, and it just hit version 1.5). You should probably make sure that the repository's core.autocrlf value is set to false. msysgit sets that to true by default for new or cloned repositories, but Git on OS X won't. Your Git repository probably contains CR+LF line endings, and that combination can cause problems (as I mentioned a few posts ago).
|
# ¿ Apr 15, 2010 01:38 |
|
Thermopyle posted:I've got a project hosted on github, that I'm totally rewriting. Starting from scratch. Rewrite it from scratch, committing locally to do so. When you're ready, push your changes to GitHub. If you aren't comfortable having your work in only one place (I'm not), make a bare clone of your repository, preferably copy it to a different machine that you have access to, and push to that repository quite often.
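That backup arrangement, sketched with local paths (the bare clone would normally live on another machine, reached over SSH):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo rewrite > README && git add README && git commit -qm "start the rewrite"

# A bare clone as the off-machine safety copy
backup=$(mktemp -d)/project.git
git clone -q --bare . "$backup"
git remote add backup "$backup"

# ...keep working, and push to it often
echo more >> README && git commit -qam "more rewrite work"
git push -q backup master
```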
|
# ¿ May 26, 2010 15:56 |
|
Spengler posted:It may be as much a philosophical question as a practical one, but pushing to an extra repo to avoid branching in a situation where it seems warranted seems more like it's extra work than a shortcut. It absolutely is appropriate to make a new branch, and I guess I considered it basic enough that I didn't mention it in my post (oops). I didn't think Thermopyle wanted to push that new branch to GitHub yet; he did say that he didn't want his rewrite available on the project page and I interpreted that as "anywhere on the project page". Now that I think about this more, pushing the new branch to GitHub is probably what he wanted (or close enough).
|
# ¿ May 27, 2010 14:38 |
|
Detailed instructions here (this describes a superset of what you want to do): http://progit.org/2010/03/17/replace.html

Scott Chacon posted:For example, say you want to split your history into one short history for new developers and one much longer and larger history for people interested in data mining. You can graft one history onto the other by replacing the earliest commit in the new line with the latest commit on the older one. This is nice because it means that you don't actually have to rewrite every commit in the new history, as you would normally have to do to join them together (because the parentage affects the SHAs).
|
# ¿ Jun 1, 2010 05:01 |
|
Zhentar posted:AutoCrLf handling is, at best, poor. It will mark files as modified immediately after checking them out, which will interfere with a lot of operations (no switching for you). ToxicFrog posted:The problem he refers to is not with the editor, but with git itself. It's possible to get into a situation where the following happens: Lysidas posted:I've seen this happen and I believe I have an explanation for it. I would guess that your repository contains files with CR+LF line endings (i.e. the actual content of the blob objects). I was very interested to see this commit when git pulling my clone of Git itself. Finn Arne Gangstad <finnag@pvv.org> posted:autocrlf: Make it work also for un-normalized repositories
|
# ¿ Jun 22, 2010 20:58 |
|
epswing posted:I should have clarified that I'd have a clone specifically for accepting changes. In fact I could have as many clones as I have pushing developers, each hg serveing on a different port. So you'd be creating a "public" repository for each of them to push to? You've independently created the "integration manager" workflow described at http://whygitisbetterthanx.com/#any-workflow (ignore for the moment that this page is Git evangelism; the DVCS workflows described are mostly tool-agnostic). Having a clone for each developer to push to (that you create and manage) is a strange idea; it makes more sense for each developer to manage that clone themselves. Each of them can publish their code into their public repository, and you pull from those at your leisure. If you like what you see, push it to the central repository (which the developers should regularly pull from). It's a good idea for developers to have separate public/private repositories: one that is used for actual work and one that's used to publish changes to others when a feature/branch/etc. is stable enough to integrate. If you're pulling directly from their workspace without warning, you're likely to obtain half-done features/changes/etc.
|
# ¿ Jun 29, 2010 17:07 |
|
Your post didn't explicitly say this, and I want to rule it out before any more investigation: did you commit the change(s) before switching back to master? Checking out a new branch should preserve local un-committed modifications (and untracked files) that don't conflict with the branch you're switching to.
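A quick demonstration of that behavior (file names are just illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo one > file && git add file && git commit -qm "base"

echo two >> file          # a local, un-committed modification
git checkout -q -b topic  # the modification rides along...
git checkout -q master    # ...in both directions
```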
|
# ¿ Jul 17, 2010 06:21 |
|
Argue posted:What's the best way of working with a project with a number of local-only configuration changes in git? Right now I have a branch that I continually rebase and never commit, but that's kind of a pain. I can't .gitignore them because some of the files have already been versioned. I can git update-index --assume-unchanged them, but git stash fucks things up when I do. Anyone have any other hints? Some questions:
If you answer 'no' to 1 or 3 for technical or political reasons, then I don't really have any useful advice. Otherwise, I would use a default configuration file (probably named 'default.conf') that is version controlled, and an override file that is .gitignored (maybe named 'local.conf' or 'override.conf' or something). I've used this style with a lot of success in the past.
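A minimal sketch of that layout (the file names are just my usual convention, not anything the tools require):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com && git config user.name dev

# Version-controlled defaults; the override file is ignored entirely
printf 'debug=false\nport=8080\n' > default.conf
echo 'local.conf' > .gitignore
git add default.conf .gitignore && git commit -qm "default configuration"

# Each developer's overrides live outside version control
printf 'debug=true\n' > local.conf
```

The application reads default.conf first and lets anything in local.conf win, so git stash, rebase, and friends never touch the local settings.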
|
# ¿ Aug 31, 2010 20:05 |
|
CHRISTS FOR SALE posted:Hey git experts, I installed git on my jailbroken iPhone, and I'd like to know why I can't clone repos. Here's what happens when I try to: http://gist.github.com/608718
|
# ¿ Oct 3, 2010 18:12 |
|
Yakattak posted:For some god drat reason, git isn't making commits when I merge . For instance, I'm on master and I want to merge in issue3. Does the merge output contain "Fast-forward" or "Merge made by recursive."? If this is the situation: code:
If you really want a new commit after the merge (i.e. you want the history to look like this afterward): code:
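In other words, force the merge commit with --no-ff (a minimal sketch, with a made-up change on the branch):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo base > file && git add file && git commit -qm "base"

git checkout -q -b issue3
echo fix >> file && git commit -qam "fix the bug"

git checkout -q master
# --no-ff creates a real merge commit even though a fast-forward is possible
git merge -q --no-ff -m "Merge branch 'issue3'" issue3
```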
|
# ¿ Nov 11, 2010 23:09 |
|
Yakattak posted:Yeah it does fast forward. What are the drawbacks to not fast forwarding? The only drawback is that it makes the history less linear and slightly harder to follow if you're looking at it later (EDIT: and if there are a lot of other crazy complicated merges in this vicinity). On the other hand, if you don't fast-forward, the structure of the graph can easily tell you which commits were related to issue3. You lose this information if you do a fast-forward, but this isn't a consideration if you prefix your commit messages with "issue3" or "i3" or something like that. Lysidas fucked around with this message at 23:33 on Nov 11, 2010 |
# ¿ Nov 11, 2010 23:27 |
|
|
God drat git diff --stat @{yesterday}.. is awesome. Makes me feel like I've accomplished something after some major refactoring. Just thought I'd chime in with that.
Lysidas fucked around with this message at 21:01 on May 27, 2011 |
# ¿ May 27, 2011 17:18 |