wwb
Aug 17, 2004

The clients are generally compatible.

SVN does not have a concept of shelve -- that is really a DCVS feature in my experience. You might want to look at Git-SVN or HG-SVN where you can work locally in git or hg and then commit to the SVN repo when you get stuff baked enough.

When I had to do this stuff in SVN, the option was to export a patch and then roll the working copy back to green.
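A minimal sketch of that patch-and-revert workaround, assuming a plain `svn` client on PATH and a working copy rooted in the current directory (the `shelved.patch` name and the `run` parameter are made up for illustration):

```python
import subprocess

def shelve(patch_path="shelved.patch", run=subprocess.run):
    """Poor-man's shelve for SVN: dump the working-copy diff to a patch
    file, then revert the working copy back to a clean state."""
    # Write the current working-copy changes out as a patch...
    with open(patch_path, "w") as f:
        run(["svn", "diff"], stdout=f, check=True)
    # ...then roll the working copy back to green.
    run(["svn", "revert", "--recursive", "."], check=True)
    return patch_path
    # Re-apply later with:  patch -p0 < shelved.patch
```

The `run` hook is just there so the command sequence can be exercised without a real repo; in practice you'd call `shelve()` as-is.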


wwb
Aug 17, 2004

Sorry; I probably spend too much time with this stuff. Basically, there are two kinds of version control in the world -- centralized and distributed. Centralized systems have a single database you need to connect to in order to do much of anything. Distributed systems mean everyone who has checked out the repo has a full-fidelity copy of the whole thing. There are lots of other differences in terms of workflow, but they all spring from this.

CVS, TFS (at least in previous versions) and SVN are centralized. Git, HG and all the stuff the cool kids are on are distributed -- including at Microsoft, which has announced that TFS will support Git as an SCM back-end.

A good start would be to first go read the first parts of the red bean SVN book (http://svnbook.red-bean.com/) and then go hit up http://hginit.com for a good HG intro written for someone used to SVN.

wwb
Aug 17, 2004

Zhentar posted:

Shelve is fairly high up on the SVN feature wishlist. It didn't make it into 1.8, but SVN 1.9 may well have it.

Cool; I have pretty much given up on it at this point -- 1.7 clients get you a much better local DB option, which we've taken advantage of, but anything new is going into HG.

wwb
Aug 17, 2004

I'm pretty fond of http://nvie.com/posts/a-successful-git-branching-model/ as a starting point -- we use it on some rather ugly multi-headed multi-stakeholder beasts that tend to have multiple concurrent changes while also keeping a current release branch for emergency patches, etc.

We were using a somewhat similar strategy with SVN; the challenge there is getting the changes back in from the branches, but patch files are a reasonable workaround at times.

TeamCity is a godsend here -- it can now work against HG or git branches, so you can get really wild with stuff. Also nice is that you could probably live inside the free SKU for TeamCity, especially starting out, so you aren't going to be asking for more than some space on a VM and some database space you probably already have handy.

wwb
Aug 17, 2004

If you are deploying as often as they do with small changes I wouldn't get so hung up on that. I'd suspect any sort of massive refactor probably has a lot more code review involved.

wwb
Aug 17, 2004

Atlassian has released SourceTree for Windows -- an excellent GUI git client that blows just about any of the other windows options I'm aware of away.

http://sourcetreeapp.com/

wwb
Aug 17, 2004

No explorer integration that I can see from here, but everything I've got has TortoiseGit and TortoiseHg installed, so I wouldn't see it in all likelihood. I will note it plays well with other tools -- you can use TortoiseGit to let you know changes are on the file system, and even to commit simple ones, and then use SourceTree's vastly superior UI to browse the tree and the changelogs.

FWIW, I use it pretty heavily on the mac with no finder integration and I find I don't miss it in general.

Side note: why can't the guys who wrote tortoise git stop smoking crack and get with the guys who wrote tortoise hg and help craft tortoise-dcvs that does both? As shown, a single tool can work with both repos successfully as the logic from a user standpoint is nearly identical.

wwb
Aug 17, 2004

Arcsech posted:

This is wonderful and thank you for posting it - I'm working on a yearlong senior project with a fair amount of code and trying to get the other people in my group to use git is like herding cats. I gave them the Github client which is basically a diff viewer plus a commit button which helped somewhat, but it's terrible as far as checking out old versions or resolving conflicts. Maybe I can get them to use this instead.

Never write code with bloody electrical engineers. Just never do it (I am an electrical engineer).

Ugh, sorry. I get to work with designers who see source control as a horrible impediment on creativity -- not the insane enabler we see it as in the development shop.

Anyhow, out of curiosity, why did you choose git over HG? Git clients are loads better now, but HG is still easier to use in just about every respect while being powerful enough for just about anything.

wwb
Aug 17, 2004

Personally I like the no-editing-history-without-jumping-through-hoops thing, and I don't find it a downside not to have perfect code on every commit, but I'm probably in the lunatic fringe.

wwb
Aug 17, 2004

I would probably split that into a named branch but we do CI based on named branches as well so we might be odd in this respect. You can close the branches in any event so they don't have to stay alive forever.

wwb
Aug 17, 2004

Another way to do the deployment is to use a CI server such as TeamCity or Jenkins, which is free. Loads of advantages there -- like being able to go check the CI server to see when / if something got out to production, the ability to run automated tests as part of the deployment process, and boatloads of traceability overall.

wwb
Aug 17, 2004

GrumpyDoctor posted:

We're using a third-party library in a project that requires a license key to use. The way it works is that you include the license key in the first call into the library, and their license terms require that you actually embed the drat thing in your source code (you can't pull it out of a config file at runtime). So, we've been doing this, and it's been working fine. The problem is that we've gotten a directive from on high to make our codebase publicly available. This presents two problems.

The first is that anyone looking at the source history will be able to see the key. We're using git, and everything I've read about rewriting git history has to do with retroactively blowing away whole files, but this thing doesn't live in its own file and even if it did I'd have to change the build files as well. I'm not even sure what this change would look like - a global, historical, find-and-replace to turn the key into "KEY GOES HERE" or something? Can git do this? Do I want it to? Is there any option that's easier than actually starting a new repo from scratch? We can't have the vendor generate a new key for us because the software doesn't actually phone home so there's no way to invalidate an existing key (I assume the library just does some math to make sure the provided key is good - I don't see how there could be a master list they could update somewhere.)

The second problem is how to deal with this going forward. What I've thought of is having a build script that does the aforementioned find-and-replace, but in reverse: to copy the key into the codebase (from an unversioned config file or something), build the project, and then remove it. Does this sound reasonable?

What platform?

Presuming this is .NET stuff, would it be OK to include the key as an embedded resource rather than in the code? Build-wise, you can have a dummy file in the git repo, then use a second repo (this is where I use SVN) to pull in the real key file before you build for release.
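A sketch of that build-time swap, assuming the key lives in a single file; every path and name here is hypothetical:

```python
import shutil
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def real_key(dummy_path, key_path):
    """Temporarily replace the committed dummy key file with the real one
    (kept outside the public repo), restoring the dummy afterwards."""
    dummy_path, key_path = Path(dummy_path), Path(key_path)
    saved = dummy_path.read_bytes()          # stash the dummy contents
    shutil.copyfile(key_path, dummy_path)    # drop the real key in place
    try:
        yield                                # run the release build here
    finally:
        dummy_path.write_bytes(saved)        # put the dummy back
```

Usage would be something like `with real_key("src/LicenseKey.txt", "../secrets/LicenseKey.txt"): run_build()`, with the secrets path kept out of the public repo.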

For the public repo I would just make a clean break and stand up an independent repo.

wwb fucked around with this message at 19:10 on Jun 27, 2013

wwb
Aug 17, 2004

I have never heard of anyone using TFS on non-.NET projects, though things have changed and it might be in the art of the possible. I really don't see how it would support Ant though.

I would not touch it with a ten foot pole and I'd probably fail anyone taking my agile course for claiming it isn't bloated managerware. But I'm old school and dogmatic like that.

I would also fail you for doing something dumb like hosting your own SCM in 2013.

I would use Bitbucket as it supports git or hg, lets you have private repositories and has all the wiki / issue tracking features one could need. You can easily integrate that (or github for that matter) with jenkins and stay very agile and probably pick up all the points possible for that stage.

wwb
Aug 17, 2004

I admit the modern version is certainly markedly better than what came before, and that the current iteration might even be a viable SCM platform, but I have still yet to see anyone who isn't:

a) a microsoft employee or
b) a TFS consultant or
c) a non-coding manager / accountant

express any positive sentiment about TFS.

wwb
Aug 17, 2004

What happens when a client has two different projects that have two different codebases?

wwb
Aug 17, 2004

One thing that does help with feature freeze and such is continuous deployment -- if you are constantly deploying small changes then the fear factor goes way down.

Whether this is possible depends on the nature of the business.

wwb
Aug 17, 2004

mnd posted:

If, like me, you use Bitbucket for Mercurial hosting, and if, like me, you would like it if they supported the "largefiles" Mercurial extension, please add your vote (and optionally a comment) here:

https://bitbucket.org/site/master/issue/3843/largefiles-support-bb-3903

That is all.

Commented / voted I think.

wwb
Aug 17, 2004

Yeah, even 125 repos is $200/month which gets you close to $5k/year based on off the street pricing @ https://github.com/plans.

Anyhow, if you want unlimited private repos and to pay per user (like we do), bitbucket works with that model, which is why we went with them.

wwb
Aug 17, 2004

Good point on the math. It's unclear what he is talking about, but "i meant github" I took to mean github.com, not GitHub Enterprise.

wwb
Aug 17, 2004

That makes a bit more sense.

quote:

if access to your source is managed by a vendor then oh man has something gone massively wrong.

I'm not entirely sure I agree with this in 2013 -- at least in the bandwidth-rich western world. Bitbucket or GitHub do a vastly better job of hosting DCVS systems than I can; why am I going to take that problem on when I can pay someone else a pittance to handle it for me? Remember that with a DCVS, disaster recovery setups are easy -- just have a script that hg-pulls your stuff onto a server you also back up.

I'll note I don't work for a software company so I might think differently in that case. But probably not to be honest.

wwb fucked around with this message at 19:02 on Oct 22, 2013

wwb
Aug 17, 2004

The question I ask myself is "going forward is github going to have more outages than the dunce running our infrastructure?"

wwb
Aug 17, 2004

SCM-wise HG is generally on par with git and arguably better in many ways that you tend to run into for projects short of maintaining the linux kernel. Client-app-wise, the command line options are vastly more approachable to begin with. TortoiseHG is arguably better than TortoiseSVN or any existing git client on windows*. For closed-source stuff bitbucket's model is generally superior to github's; feature-wise both are on par and perhaps bitbucket is ahead as they support both git and hg. Outside of "git is what the cool kids are using" there is very little reason to pick git over HG in 2013.

*SourceTree is coming on pretty strong here. It will likely pick up HG capabilities soon too.

wwb
Aug 17, 2004

rotor posted:

I can't believe people actually use hosted version control

Didn't we have this debate a few pages ago? Anyhow, that is a strong statement -- care to back it up?

wwb
Aug 17, 2004

I do agree with the point for an engineering organization, or at least some place generating its own IP. That said, at least here in the US, the legal protections against someone pulling something from your rented storage space are very strong at the end of the day, so I wouldn't be too hung up on that.

For the rest I see where you are coming from, but my general experience is that outages from the big cloud SCM providers (GitHub / Bitbucket) are about as frequent as the outages one will see with an on-premises installation of anything -- especially if you account for parts of the dev team perhaps not being on premises, so you are fundamentally only as reliable as the premises' ISP.

Finally, putting my credit card down for a 25 person bitbucket account ($25/mo) has been fundamentally more stupidly easy than:

* standing up a new server to host all the HG / HG web bits
* the continual care and feeding of that server in terms of disk space needs, patching both the HG stack and the underlying OS, and all the other stuff that needs doing
* being the guy who takes the call when something doesn't work, whether it be my fault or God's fault.

wwb
Aug 17, 2004

uXs posted:

I set up the HG server at work. Extra work after it was set up: none whatsoever. Disk space used: extremely low.

And that's with 400+ repositories on it.

The only thing that worries me is that I forgot how to set it up and that the HG installation is getting out of date.

quote:

Backups are made, but not my job. Also since it's distributed, backups are on people's PC's anyway.

Unfortunately I'm managing the IT side of the shop, so I can't skip backups and updates with a straight face. It isn't a whole lot of work -- our on-premises SVN repos are not a whole lot of hassle either, except in the event they crash hard. We are anal enough to back up bitbucket as well -- using a 30-line python script.
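Not the actual script, but a sketch of the shape such a backup takes, assuming `hg` is on PATH and you maintain the list of clone URLs yourself (all names and paths here are made up):

```python
import subprocess
from pathlib import Path

def backup_command(repo_url, backup_root):
    """Build the hg command that backs up one repo: clone on the first
    run, pull on every run after that."""
    dest = Path(backup_root) / repo_url.rstrip("/").rsplit("/", 1)[-1]
    if dest.exists():
        # Already cloned once -- just pull new changesets into the backup.
        return ["hg", "pull", "-R", str(dest), repo_url]
    # First run -- clone without populating a working directory.
    return ["hg", "clone", "--noupdate", repo_url, str(dest)]

def backup_all(repo_urls, backup_root="/srv/backup/hg"):
    for url in repo_urls:
        subprocess.run(backup_command(url, backup_root), check=True)
```

Run it from cron on a box you already back up and the "what if bitbucket dies" scenario reduces to re-hosting the clones.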

On the flip side the cloud provider, when combined with CI, has made it easy enough that we have been able to do code edits from an iPhone and push them out to the live site.

wwb
Aug 17, 2004

Plorkyeran posted:

A dag of immutable objects with pointers to various objects in the graph should not be a "complex thing" for a programmer.

This. And use hg to get essentially the same featureset in a wrapper that is not explicitly designed to make you feel stupid.

wwb
Aug 17, 2004

Gul Banana posted:

speaking of git! does anyone know about good git server/wrapper software that is *easy to host on Windows*?
imagine an environment where words like "linux" or "java" are met by an angry mob, or a pink slip. it doesn't have to do much; stuff like web access, pull request hosting, code reviews, that would all be *nice* but all I really want is something that works faster and more reliably than "put your repo.git directories on a share", which is all we've managed to date.

Alternatively, HG works reasonably well on windows and is reasonably easy to set up -- see http://www.jeremyskinner.co.uk/mercurial-on-iis7/

A bonus is that the windows clients are better too.

wwb
Aug 17, 2004

Not sure what you are using for a test platform, but NUnit and such have an Assert.Inconclusive() option that could be good here -- it will flag the test and show you something is off kilter without pulling a halt.

I'd really want a halt myself -- that is the point of all this CI stuff. If something is broken it isn't a valid check-in, unless you are into creating regressions.

wwb
Aug 17, 2004

There are multiple reasons why using a cloud file storage solution is a dumb thing for source control.

First, while having the code in "the cloud" is a convenient benefit of source control in 2014 it isn't why one uses it in the first place -- cloud file storage can't help you manage concurrent versions, it doesn't have much of a concept of diffing, branching, merging or any of the other very source code-specific things one can do with a modern SCM system.

Second, most development generates a bunch of rapidly changing trash files which are local-only and best ignored from the sources. There is no need to sync those up to "the cloud", nor to make everyone else download them. And Dropbox / Box / SkyDrive don't have a .gitignore concept either.
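For illustration, a few typical .gitignore entries for exactly that kind of local-only trash (the patterns are generic examples, not from any particular project):

```
# build output and local scratch files -- never belong in the repo
bin/
obj/
*.log
*.tmp
.vs/
```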

I hope this helps your moral compass.

wwb
Aug 17, 2004

aerique posted:

To weigh in on the Git + Dropbox combination. On a first look it might seem like a good idea and then you'll ask for opinions on a forum and people will tell you it is a stupid thing to do with well-founded arguments so you don't do it and continue looking for alternatives.

And they're right. You shouldn't use it for your company's code base, BUT I've been using this combo for years for personal projects and holy poo poo is it convenient (and it hasn't failed me yet).

Point taken but I fail to see how git + dropbox is more convenient than, say, bitbucket using git.

wwb
Aug 17, 2004

evensevenone posted:

If you use multiple machines, git can get a little bit iffy. I.e. say you are on machine A and you push some changes. Then later on machine B you were working on that branch and forgot to fetch. You commit some changes and then decide to rebase. Then you push, but because you had rebased, you had to do a git push --force (because it wasn't a fast-forward). Now whatever you had on machine A was lost (or will be lost once you pull), and git wouldn't have warned you because you had done --force.

This really bites you if you set up your remotes with mirror=push, where you don't even have to do --force to make non-fast-forward pushes.

Having a single working directory/repo that is synced through other means (i.e. dropbox or whatever) prevents that problem, because git is only dealing with a single working directory/index. I don't think it's a good idea per se, but there is a gap there that is filled by syncing rather than using git 100%.

I'm probably just old school, and used to working on 3+ computers, but I never have had this problem outside of being too drunk to remember to push things.

wwb
Aug 17, 2004

The easiest way I know of to stand up an SVN server is to run Windows and install http://www.visualsvn.com/server/ FWIW

wwb
Aug 17, 2004

^^^ that man speaks the truth.

wwb
Aug 17, 2004

paberu posted:

Sadly Mercurial doesn't like working with large files (anything over 10mb), so my choice is either Perforce or SVN for the assets.

I've got a few dozen repos with loads of 10-50MB files in bitbucket here with no issues. We stop at about 50MB because things tend to time out over HTTP, but we could go much larger if we went over ssh.

With SVN we did store loads of 100MB+ files in said VisualSVN setup. That was OK for those of us onsite on a 100-1000Mbit connection to the server, but for the remote guy, especially the guy living on the farm on a 1.5Mbit connection, it was pure hell. SVN working copies tend to get corrupted, and the "fix" is typically to blow them away and check the entire thing out again.

wwb
Aug 17, 2004

Actually that is a very, very workable idea if you can get the lanes worked out -- I've done such things with great success a few times in the past.

It does work best when the lanes are separate enough that they don't need to cross a lot. Like in this case, where it sounds like there is a codebase and then some associated graphics assets that are likely generated elsewhere and probably move on a different cycle than the codebase anyhow.

wwb
Aug 17, 2004

necrotic posted:

Setting up gitolite on a small VPS is incredibly simple. If you need anything more than what Gitolite offers (pull requests and such) there are open source offers (GitLab HQ) that work fairly well and can be simple to setup if you have experience with Rails.

A half day of effort tops to save a 2-person shop $84/year (does Bitbucket's free plan allow you to share private repos? I thought it didn't...)

Bitbucket gives you free private repos for up to 5 users; you need to pay beyond that.

The real cost in that server isn't standing it up or configuring it; it is the care and feeding of it, updating it as needed, keeping it backed up, and all the other stuff one needs to do with production line-of-business servers. The real fun comes in when gitolite's upgrade steps fail and you are stuck trying to fix what is a fairly complex web app with a fairly complex service behind it (git), hoping to god you didn't just lose 6 months' worth of work.

On the flip side, with DCVSes, you can do disaster recovery with bitbucket (or github for that matter) with a pretty simple python script that parses through your repos and stashes a copy somewhere on whatever schedule you'd like. Worst case scenario is bitbucket dies and you need to stand up gitolite and import your repo.

This also does not contemplate the fact that the cloud providers are in an arms race and are continually pushing the feature set envelope -- how far behind github is gitolite now?

This note is coming from a very grizzled old devops guy who regularly manages a fleet of 60+ web apps on various servers / platforms / clouds and has no fear of hosting just about anything himself.

For merge compare tools I tend to be a bit tactical. Heavy lifting tends to go through KDiff3, because that is what TortoiseHg packs and I've got the most seat time there. Just looking and understanding tends to happen in whatever web interface I am near -- any change can be reflected from bitbucket through redmine to TeamCity. TeamCity's browser is really badassed if you've never gone that way.

wwb
Aug 17, 2004

And you've also got the same recourse you had when your repo went down -- a DCVS means the repo going down is no longer a work stoppage, just an inconvenience.

wwb
Aug 17, 2004

^^^^ something like that would be a pretty good approach. Could do it internally to the controller as well where you swap out underlying implementations.

The other thing you could do is use #ifdef a bit and have two controllers in one file, choosing based on a build environment variable.
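The suggestion above is #ifdef in the compiled language; here is the moral equivalent sketched in Python, swapping implementations on an environment variable (all class and variable names are made up for illustration):

```python
import os

class StubPaymentController:
    """Fake implementation wired in for dev/test builds."""
    def charge(self, amount):
        return "stub-ok"   # no real side effects

class LivePaymentController:
    """Real implementation wired in for production builds."""
    def charge(self, amount):
        return "live-ok"   # would call the real gateway here

def make_controller():
    # Pick the implementation from a build/environment variable --
    # the moral equivalent of two #ifdef'd controllers in one file.
    if os.environ.get("BUILD_ENV") == "production":
        return LivePaymentController()
    return StubPaymentController()
```

The win either way is that call sites never change; only the build/environment flips which implementation is behind them.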

wwb
Aug 17, 2004

^^^^ This. And perhaps a script to check out the repo, build these files, rsync it together, and commit it for ya.


wwb
Aug 17, 2004

So now the apache subversion project isn't using subversion anymore: https://issues.apache.org/jira/browse/INFRA-7524
