Space Whale
Nov 6, 2014
I'm extremely interested in this and somewhat experienced in using it, but not at all in setting it up. Since I'm the only one who cares enough to do this where I work at present, I figure I'll learn about it and get it happening. And since CoC is where we talk about stuff, I figured I'd ask the people here who have done it how to not suck at it.

Also, I didn't see a thread for this yet!

Everyone knows about git these days, and a lot of people even use it. One awesome thing git lets you do is set up a branch that will, when code is pushed to it, trigger a build and deployment! This helps out with testing, catching bugs, and making sure things are checked in properly, and makes it that much harder for poo poo to sneak into production. It also means you can get real slick and script out stuff so I press a button and spin up a VM with a test db (with test data, or old real data!) and a test server and other test things, and if I break poo poo being a cute cheeky idiot in my sandbox, nothing else is broken, whee!

I'm a C#/.NET/MVC/~full stack~ guy. Everything is windows this IIS that. We use bitbucket as our git repo host. What are my (GOOD) options to set this up? Which ones have which advantages? Is this a thing I can do in powershell or do I really need to get and configure something complicated?

Finally, how hard is it to arm-twist management, if we have outside IT, into in-housing it or just getting really chummy with them? We literally share an open floor with our IT/hosting company, so it shouldn't be a huge barrier, but if I understand this correctly, this is the kind of thing that needs to happen.

wwb
Aug 17, 2004

'sup hg / C# / MVC / fullstack buddy. Thanks for starting this thread -- it is greatly needed.

quote:

I'm a C#/.NET/MVC/~full stack~ guy. Everything is windows this IIS that. We use bitbucket as our git repo host. What are my (GOOD) options to set this up? Which ones have which advantages? Is this a thing I can do in powershell or do I really need to get and configure something complicated?

First, you've got the right idea -- at the end of the day this deployment process needs to be callable from a command line, perhaps with a few parameters to tell it where to shoot. I would develop the pointy end of the process there, if for no other reason than it is a lot easier to debug your local powershell environment than to rely upon a build server doing fancy hoo-ha to do everything. Moreover, if you can deploy via the command line you always have a way out, and can get poo poo deployed when the fancy tools and the rest of the system are failing you. One other angle is that there are some things you need to do in development and others you need to do in production (or qa / staging / whatever you call it, which is really production with stale data). I'll start at the dev side.

For achieving frictionless development -- the idea of building projects by spinning up VMs and poo poo like that -- you need to start and stop with Vagrant. It makes that sane, rational, trackable and repeatable. Generally we don't use Vagrant for .NET stuff, as Microsoft has done a pretty amazing job of setting things up so you can fly off IIS Express / SQL Express locally and get close enough to prod to not need to wait on VMs to spin up. But our stuff tends to be relatively uncomplicated and/or architected to make sure we can keep this dev story up while hiding complexity. YMMV.

In terms of scripting things, powershell is a good option to some extent. We managed to get pretty jiggy with MSBuild back when there weren't better options. These days I'm starting to fall in love with FAKE for a variety of reasons. Or at least it is a good excuse to try F#.

When you get to the QA / test part of this story, this is where you are really going to want to get some tools in play. Personally, I love TeamCity and find it to be worth every cent of the rather cheap license. You can probably get to the same places with Jenkins if you don't have a budget. Microsoft has also rolled out some TFS online stuff, but I'm loath to depend on them for that part of my development cycle. Anyhow, the build server is really a pretty fancy job runner and reporting system that happens to be tuned to talk to source control and test suites. The real magic tends to happen on the build slaves -- this is where the builds actually execute. For instance, we run 4 -- 3 in various network segments and a mac build server for iOS stuff. How I'd approach something like what you are talking about would be:

1) Make sure I can build things and get things orchestrated to be ready to run tests from the command line. State is the bane of software development and this is no different -- the biggest challenge is dealing with the DB. We use RoundHousE for our SQL/.NET stuff to provide a CLI to handle those tasks. I like it because it works the way I want -- that is, it does things like mount a database backup and execute sql migration scripts in a defined order -- while not making presumptions about my code or my DB. Moreover, it is a command line executable we can ship with the code, so I can xcopy files, run stuff, and it just works when systems fail me. Figuring out a way to schlep the data to the tests is a trick. If it isn't sensitive then an http download is the least friction. Source control will usually melt with real databases, especially binary sql backups. Anyhow, figure out a way to get the data you need mounted near your tests, or near enough to load in. Better yet, figure out how to write tests without database state, or tests which create their own database state.
2) Figure out what the deployment flow should be -- i.e., do you back up before you pull the trigger? What is the rollback plan? Etc, etc. FWIW our general plan is "rollback is rolling out a quick bugfix", as rollback gets real tricky real fast. Also FWIW, we settled on a basic flow for the .NET stuff of first building the binaries and other packages on the build machine and then xcopying them into hot IIS directories (a rough sketch of that flow follows this list). This has worked very, very well with minimal downtime, etc. We have about zero session dependence, which is what would get broken by this unless you are running out-of-proc sessions.
3) Configure the build server to execute your build based on whatever criteria you want -- we are pretty manual here, at least for the deployment ones, but you can base it on branches / tags / what have you depending on how your flow works. That is just a configuration option for the build server. This is also the place where you would integrate the test suite, so it will stop the final "deploy this poo poo" step if it fails. Or not, depending on what management needs.
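
Something like this is the shape of (1) and (2) as a script -- the paths, database name, and rh.exe switches here are just for illustration, not our actual deploy scripts (check rh.exe /? rather than trusting my memory of the switch names):

code:
  # hypothetical paths -- adjust for your environment
  $site   = "D:\sites\MyApp"                                    # live IIS physical path
  $backup = "D:\backups\MyApp_$(Get-Date -Format yyyyMMddHHmm)"

  # back up the current site so there is at least a manual way back
  Copy-Item $site $backup -Recurse

  # run database migrations with RoundHousE, which ships alongside the code
  .\tools\rh.exe /s=DBSERVER /d=MyAppDb /f=.\db\scripts /silent
  if ($LASTEXITCODE -ne 0) { throw "database migration failed" }

  # drop the freshly built binaries into the hot IIS directory
  robocopy .\build\output $site /MIR
  if ($LASTEXITCODE -ge 8) { throw "robocopy failed with exit code $LASTEXITCODE" }

The /MIR is what makes it a true xcopy-style deploy; if you keep user-uploaded content under the site root you'd want to exclude it with /XD.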

In terms of IT and management, to some extent it depends on the people and mentality. I know part of our shop hates this poo poo -- they feel [kinda rightly] like I'm replacing them with robots. The other part loves it, because who wants to run through 2 pages of deployment instructions when they could be off solving real problems? Presuming the management is reasonably forward-thinking, they are easy to sell -- just tell them google and amazon are doing continuous delivery so you should be too.

So, a lot of the specifics depend on your environment, but the first step is to make your things automatable. Then it is a matter of picking tools to do the automating. Getting deployments down to "run this script and wait for the smoke to clear" gets you 90% of the way there; the rest is just fancy reporting.
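
And for what it's worth, the top-level "run this script" bit doesn't need to be anything fancier than this kind of shape (the script name, parameters, and the helper scripts it calls are all invented for illustration):

code:
  # deploy.ps1 -- hypothetical top-level entry point
  param(
      [Parameter(Mandatory = $true)]
      [string]$TargetServer,

      [ValidateSet("QA", "Staging", "Production")]
      [string]$Environment = "QA"
  )

  Write-Host "Deploying to $Environment on $TargetServer"

  .\build.ps1                               # compile + run the test suite
  .\migrate-db.ps1 -Server $TargetServer    # RoundHousE / sql scripts
  .\copy-site.ps1  -Server $TargetServer    # robocopy into the IIS directory

  Write-Host "Done -- go watch for smoke"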

Hope this helps, happy to dive into more details if needed.

[edit: the first cut needed some wordsmithing]

wwb fucked around with this message at 00:59 on Jan 22, 2015

xpander
Sep 2, 2004
I too was hoping for a thread like this!

For anyone that uses Ansible - do you have experience using the Docker module? If so, how do you get it to start a container and then issue a command to it? I have a small project to get mongo installed, and I can get the container started and then log into the box/container to do stuff. However, I'm trying to get replication set up on it, and when I try to actually issue commands via the module, it just seems to kind of die.

EAT THE EGGS RICOLA
May 29, 2008

Docker has had a few crazy bugs recently that would make it insane to use in production, hasn't it?

Dirk Pitt
Sep 14, 2007

haha yes, this feels good

Toilet Rascal
We were in the process of setting up Octopus Deploy (https://octopusdeploy.com) at my former enterprise job before I left. I found it much easier to set up than TFS Release Management, plus I know some of the devs and they built a very affordable platform.

I am now an iOS developer and we are exploring that. At night I do Xamarin contract work and bought a mac mini this afternoon to run as my build automation environment. I will post this weekend when I get it set up.

xpander
Sep 2, 2004

EAT THE EGGS RICOLA posted:

Docker has had a few crazy bugs recently that would make it insane to use in production, hasn't it?

I can't argue with this; it's neat tech but still a bit premature for my taste. However, I'm being paid to make that happen (in fact I'm just replacing a manual process with an Ansible script to set up these containers). I find the Ansible module to be a bit limited, but I fully expect I just don't entirely know what I'm doing. This is also on CoreOS, but that doesn't make a huge diff -- all the action is happening in the container. Anyway I'm going to keep on truckin', I just don't know if there are any obvious mistakes I'm making.

wwb
Aug 17, 2004

Speaking of iOS, if anyone has unlocked whatever magic voodoo it takes to get apps successfully building for distribution on a headless setup using a build server, please share.

Kallikrates
Jul 7, 2002
Pro Lurker
iOS CI is a pain because apple likes to break 3rd party toolchains. I think jenkins + xctool has been fixed, but there was a long lag between xcode6 and the fix where the toolset was broken. We are experimenting with scripting xcode bots in the hopes that apple will break their own toolchain less often. Right now we have some basic task balancing and bots that can run tests and distribute prerelease builds. We've never done command-line delivery to the app store, though.

At the end of the day any approach you take is going to need mac hardware. And configuring your server headlessly might not be possible, because certain steps need dialog boxes clicked (afterwards it should run headless fine).

Everyone wonders how apple does all this internally since the tools change so much release to release.

wwb
Aug 17, 2004

Yeah, we admitted defeat on the hardware front -- I've got a mac mini running in the CI rack now. It actually has been generally useful as a CI slave for our LAMP deployments too.

The approach we were trying was to use a bash script to do the fundamental execution using the apple toolchain. We never could quite get it to actually use the keychain we wanted it to use, nor could we cheat and have it use the SSH user's keychain. Some of this probably has to do with limited seat time building stuff on macs but then again I got android building release / store ready signed artifacts inside of an hour so it can't be that tricky.

Doh004
Apr 22, 2007

Mmmmm Donuts...
I've had success running TeamCity + xctool for CI and deployments to TestFlight. It works in a headless state; however, it's frustrating as hell, as I'll have to go in and update the provisioning profiles on the machine as well as check some dialogs if they ever come up randomly.

Space Whale
Nov 6, 2014
And it's officially "configure TeamCity with our bitbucket" day. We're ahead of schedule with our sprint, so I won't be too rushed with this.

This is also the first ops/IT thing I've done ever, whee!

So far the triggering is working, but the build stuff is not. Time to learn MSBuild or just copy poo poo from my dev environment and be a lazyfuck :geno:

kitten smoothie
Dec 29, 2001

Kallikrates posted:

iOS CI is a pain because apple likes to break 3rd party toolchains. I think jenkins + xctool has been fixed, but there was a long lag between xcode6 and the fix where the toolset was broken. We are experimenting with scripting xcode bots in the hopes that apple will break their own toolchain less often. Right now we have some basic task balancing and bots that can run tests and distribute prerelease builds. We've never done command-line delivery to the app store, though.

At the end of the day any approach you take is going to need mac hardware. And configuring your server headlessly might not be possible, because certain steps need dialog boxes clicked (afterwards it should run headless fine).

Everyone wonders how apple does all this internally since the tools change so much release to release.

Yeah, the Xcode 6 migration was a horrible pain in the rear end for us because of this.

We're running a Mac Pro that hosts a bunch of OS X VMs as Jenkins slaves, and from time to time we need to remote desktop into them to screw with provisioning issues and such, but otherwise it runs fine headless.

Space Whale
Nov 6, 2014
So now build, msbuild, and nuget. But in what order?

Is there a way to trigger a build from the web interface or do I have to push a comment commit to git and wait a minute every time? :effort:

Edit: OK, screw it, wiping EVERYTHING and starting from scratch. The last time anyone even tried this was in November of last year and they gave up pretty quickly.

Space Whale fucked around with this message at 20:26 on Jan 30, 2015

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Space Whale posted:

So now build, msbuild, and nuget. But in what order?

Is there a way to trigger a build from the web interface or do I have to push a comment commit to git and wait a minute every time? :effort:

NuGet package restore can be configured to run automatically as part of the build process (it modifies the *.*proj files to include a reference to an MSBuild target). If you're talking about publishing binaries into nuget, that's a different story.

Also, you should need minimal-to-zero MSBuild fuckery to build. Proj files are msbuild files that already know how to build everything. The goal of your build system should be to get you deployable binaries. Once you have those, you're done. If you try to write release scripts in MSBuild, you will be miserable and hate life. Trust me on this.
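
Put another way, the whole build step can stay about this thin -- the solution name and output path below are placeholders, and nuget restore assumes NuGet 2.7+ on the agent:

code:
  # restore packages, then let the solution / proj files do the real work
  nuget restore MySolution.sln
  msbuild MySolution.sln /p:Configuration=Release /p:OutDir=C:\build\output\ /m
  if ($LASTEXITCODE -ne 0) { throw "build failed" }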

New Yorp New Yorp fucked around with this message at 20:29 on Jan 30, 2015

Space Whale
Nov 6, 2014

Ithaqua posted:

NuGet package restore can be configured to run automatically as part of the build process (it modifies the *.*proj files to include a reference to an MSBuild target). If you're talking about publishing binaries into nuget, that's a different story.

TeamCity itself saw it as one step in an orderable set of processes to run with the build when triggered. I'm going to do a fresh install of whatever build tools and TeamCity from scratch and see if doing it right with up-to-date versions of everything is easier, meh.

No Safe Word
Feb 26, 2005

Ithaqua posted:

Also, you should need minimal-to-zero MSBuild fuckery to build. Proj files are msbuild files that already know how to build everything. The goal of your build system should be to get you deployable binaries. Once you have those, you're done. If you try to write release scripts in MSBuild, you will be miserable and hate life. Trust me on this.

Having cut my teeth on MSBuild poo poo at work, I'll second this. But I'm also happy to help people avoid some of the same lovely mistakes I made.

Mistake #1 for people not to repeat: do not change the default target of your csproj files just to enable some other poo poo to happen on build. Use BeforeBuild/AfterBuild.

Space Whale
Nov 6, 2014
I have absolutely no idea what I am doing or what is going on, I'm spoiled on just clicking "build" :downs:

gently caress.

Edit: OK, ran installers, manually ran scripts, TC cannot find build agents, hooray.

Edit2: "toolsVersion: Selected MSBuild tools version is not supported by selected MSBuild version" No wonder nobody else wanted to do this. At least my agent has found the TC and vice versa?

Feh: It thought it was MSBuild 4.0 when I had 12.0 installed. Now it can't detect default targets. I'm going to go to the .NET thread for this now.

Yay: works!

Space Whale fucked around with this message at 23:08 on Jan 30, 2015

Dust!!!
May 13, 2003

Woosh!

wwb posted:

Some of this probably has to do with limited seat time building stuff on macs but then again I got android building release / store ready signed artifacts inside of an hour so it can't be that tricky.

I'm the "automated Android and iOS builds" developer where I work. The actual process of building release artifacts can be very simple for both platforms. Getting to this point in iOS is much more difficult than it is with Android because of code signing.

Here's what I did:

Setup:
  1. Export signing identities using Xcode on dev computer (manager wasn't in favor of a separate build server account)
  2. Import signing identities on the build server
  3. Install Shenzhen (makes it really easy to build IPAs and distribute them)

The import step puts everything in a login keychain.

Current build script:
  1. security unlock-keychain (required to unlock the login keychain)
  2. ipa build (provided by Shenzhen)

The previous build script specified the identity and the .mobileprovision filename. However, release build uploads started failing when we switched from a Mac Mini (set up by a consultant with his account) to a virtual Mac (set up by me). Builds were successful but were signed with the wrong certificate. The only way I knew to fix it was by using Xcode defaults for the identity and provisioning profile. Shenzhen can easily do that, so I used that instead of writing my own build script.

Thankfully, the build server is now capable of building all 60+ apps. The previous script used an incorrect provisioning profile for a few apps (despite matching bundle IDs).

duck monster
Dec 15, 2004

EAT THE EGGS RICOLA posted:

Docker has had a few crazy bugs recently that would make it insane to use in production, hasn't it?

Yeah, I was working at a govt department with a *lot* of Django apps and we tried to set up a hg -> jenkins -> docker -> test -> deploy kind of chain, but docker was just too fragile to really be worth it. It's awesome in theory but in practice it's just not solid enough to do what we wanted it to, which ultimately was to build an in-house heroku-type setup so the various coders in the sub-departments could deploy their poo poo without being given the keys to the castle on our various servers. Plus as far as security goes it's actually *less* secure than a chroot jail.

There's also that CoreOS thing that you're supposed to deploy it onto, but we found etcd to be completely flaky.

Decairn
Dec 1, 2007

At work our C embedded-device code (Microchip) and C#-based server stack is historically in Subversion source control. Everything gets built from CruiseControl.NET. Newer stuff is just switching over to Java; that's stored in a private Git repository and continuously built in Jenkins. Developers use their own builds locally at the dev and unit test phase, and automated builds for anything system test and beyond. Each team has their own server or two managing this, and regularly reconfigures them for new branches beyond trunk. Docs and issues are managed in Confluence and JIRA.

No automated deploy for us. We're not that advanced! We do have some Selenium scripts defined to do automated testing of UI.

Decairn fucked around with this message at 20:39 on Feb 1, 2015

wwb
Aug 17, 2004

Dust!!! posted:

I'm the "automated Android and iOS builds" developer where I work. The actual process of building release artifacts can be very simple for both platforms. Getting to this point in iOS is much more difficult than it is with Android because of code signing. . .

Thanks. I think Shenzhen might be the missing link I was looking for.

revmoo
May 25, 2006

#basta
We drank the Atlassian Kool-Aid and I configured CI using Bamboo backed by Stash. It's pretty swank. The only thing I don't love about it is that the pricing is a little aggressive. Coming from Jenkins, though, I like the UI a lot better.

Fatz
Jul 1, 2011
OctopusDeploy is a wonderful thing. Pretty much changed the way I look at deploying crap.

I hooked it in to TFS and Azure along with code metrics, code analysis and automated testing, etc. Everything starts as a gated checkin. If the code makes it through the 'gauntlet' it's automatically deployed.

One aggravating thing about OctopusDeploy: there's a utility exe called OctoPack that wraps nuget.exe and produces an almost-NuGet package. What gets created doesn't install correctly as a NuGet package in Visual Studio, only as a package in Octopus Deploy. OctopusDeploy uses OctoPack for creating their deployment packages, not for creating NuGet packages for consumption in other development projects -- despite OctoPack generating a .nupkg file.
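
For reference, the usual way OctoPack gets kicked off is an msbuild property on the build server -- roughly this, with the property names as I remember them and the version number made up:

code:
  # produce the Octopus-consumable package as part of the build
  msbuild MyWeb.csproj /p:Configuration=Release /p:RunOctoPack=true /p:OctoPackPackageVersion=1.2.3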

Space Whale
Nov 6, 2014
So the "branch off of master, merge all into a dev branch; merge that dev branch into master to deploy" pattern has my architect a bit antsy, since with TFS it was hell. I've also seen at least one nasty git merge, but we were having spaghetti merges that looked like a cladogram of the protists.

...why not just use that dev as your master? What would make it all go pear shaped? I know that the project files can get a bit messy, but seriously, why?

Hughlander
May 11, 2005

Space Whale posted:

So the "branch off of master, merge all into a dev branch; merge that dev branch into master to deploy" pattern has my architect a bit antsy, since with TFS it was hell. I've also seen at least one nasty git merge, but we were having spaghetti merges that looked like a cladogram of the protists.

...why not just use that dev as your master? What would make it all go pear shaped? I know that the project files can get a bit messy, but seriously, why?

I feel we are missing some context here. Usually the reason to do that is to use the named branch refs as tags representing state. I.e., the tip of master is always what's on production. Places with more environments will use tags, though. Like where I am, we use the pattern, on deployment, of making two tags: CURRENT/ENVNAME and ENVNAME/YYYYMMDDHHMM. So, for example, CURRENT/PRODUCTION and QA23/201502090844. The first gets overwritten with each deployment but is great for scripts to boot from, and the latter is a history and deployment log that is useful when reproing bugs. But even then, when prod is smoke tested it's then set with a git checkout -B master.
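
As git commands in the deploy script it's basically this (the tag names follow the examples above; everything else here is illustrative):

code:
  $stamp = Get-Date -Format yyyyMMddHHmm

  git tag -f CURRENT/PRODUCTION            # moving pointer, overwritten every deploy
  git tag "PRODUCTION/$stamp"              # permanent record of what shipped when
  git push --force origin CURRENT/PRODUCTION "PRODUCTION/$stamp"

  # once prod has been smoke tested, point master at the deployed commit
  git checkout -B master
  git push origin master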

Mr Shiny Pants
Nov 12, 2012

Space Whale posted:

So the "branch off of master, merge all into a dev branch; merge that dev branch into master to deploy" pattern has my architect a bit antsy, since with TFS it was hell. I've also seen at least one nasty git merge, but we were having spaghetti merges that looked like a cladogram of the protists.

...why not just use that dev as your master? What would make it all go pear shaped? I know that the project files can get a bit messy, but seriously, why?

For me personally it makes it easy to reason about the state of a project. Development is the branch everybody works in and merges into. Master is what "works". If something breaks or a merge goes awry, master is still good.

It also makes it easy to get a baseline system that works: just use the master branch and deploy it.

Space Whale
Nov 6, 2014
What I mean is after you coalesce your feature branches into dev, and you're ready to deploy, sometimes that dev branch doesn't merge smoothly into master. So, why not just overwrite master with it?

Hughlander
May 11, 2005

Space Whale posted:

What I mean is after you coalesce your feature branches into dev, and you're ready to deploy, sometimes that dev branch doesn't merge smoothly into master. So, why not just overwrite master with it?

You're missing what should be a step, it seems. Long-lived branches should be constantly rebasing themselves onto the latest master. If this is done then there is no merge back into master; rather, master is a fast-forward of the long-lived branch. If you go down that route you literally should use git merge --ff-only at the end.
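
In command form, that flow is roughly (branch names are just examples):

code:
  # on the long-lived branch: keep pulling the latest master underneath your work
  git checkout dev
  git fetch origin
  git rebase origin/master

  # at release time master should only ever fast-forward; --ff-only makes git
  # refuse (rather than create a merge commit) if that isn't possible
  git checkout master
  git merge --ff-only dev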

Space Whale
Nov 6, 2014
That raises another question I've gotten.

Let's say that there are breaking changes made by work being done in the API or architecture of the project, or whatever. These changes all need to be shared and the breaking bits fixed. Would pushing that to master early then doing the rebase/merge from master onto their branches be the best way to do this?

Mr Shiny Pants
Nov 12, 2012

Space Whale posted:

That raises another question I've gotten.

Let's say that there are breaking changes made by work being done in the API or architecture of the project, or whatever. These changes all need to be shared and the breaking bits fixed. Would pushing that to master early then doing the rebase/merge from master onto their branches be the best way to do this?

The way I look at it, if there are breaking changes then you don't want them in master at first. What if you come across some unforeseen side effect of these breaking changes and you want to roll back? It is a lot easier to keep master as a known good state, and if everything passes testing and you are sure you have delivered a stable version, fast-forward master to the development branch. You share out the development branch so people are merging in, and working on, the version that has the changes they need.

This seems reasonable to me. :)

I am not a Git guru or a development guru, so if anybody has a different perspective please share. I'd like to learn as well.

Decairn
Dec 1, 2007

Space Whale posted:

What I mean is after you coalesce your feature branches into dev, and you're ready to deploy, sometimes that dev branch doesn't merge smoothly into master. So, why not just overwrite master with it?

Go back a couple of posts - because master is "what works". It is also what everyone understands is reliable (production-ready or in production) without having to ask whether it works or not. If dev doesn't merge smoothly into master you've likely broken some unit tests or caused a regression, and you need to reassess what should be the reference to move on with. You cannot move on until that is resolved. Also, you can have continuous integration on master that you set and forget until it complains after a bad merge; there's less maintenance to do.

Hughlander
May 11, 2005

Space Whale posted:

That raises another question I've gotten.

Let's say that there are breaking changes made by work being done in the API or architecture of the project, or whatever. These changes all need to be shared and the breaking bits fixed. Would pushing that to master early then doing the rebase/merge from master onto their branches be the best way to do this?

Depends on how large and crippling they are. If the person doing the change fixes it in the master branch that is going to be pushed, then the only thing that really matters for the most part is what some other engineer is working on in their branch -- which is the point of frequently rebasing from master. However, one time we had a java project that was removing a level of the package namespace (com.company.project.tech to com.company.tech) along with other large changes. It basically involved moving every file in the entire project down a level. What the architect doing that did was just call a meeting late in the day of, "Ok guys, this is going out tomorrow, everyone merge this branch in and we'll sit here quietly working on it, making sure that all of your long-lived branches are clean and pass tests." But that was literally a one-time thing and the only example of that I can think of. I've done plenty of API rewrites as part of an old and crusty code-base and the mantra is basically, "It works on master and passes tests; if it doesn't work on your branch, why do you have such a long-lived branch?"

Master_Odin
Apr 15, 2010

My spear never misses its mark...

ladies
So I'm a complete newbie at CI but for a project I'm loosely associated with, I feel somewhat obligated to step in and force myself to learn as some suggestions on testing are incredibly dumb (like using JUnit to test C++ code?!).

The setup is a repo on GitHub, and it would need (ideally) to run a test suite of PHP, C++, and JS (with unit testers actually written in those languages, and JUnit). What would be the ideal way to do this and hook it up to something like Travis CI, as doing it on a local server would involve quite a hassle I'd like to avoid?

Mongolian Queef
May 6, 2004

Does anyone use Trac and can recommend any of these plugins for CI?
http://trac.edgewall.org/wiki/PluginList#ContinuousIntegration
I see Hudson and Jenkins on there, but I have never used any of them (or any at all) so any recommendation/suggestion is appreciated.

Fatz
Jul 1, 2011
Just finished hooking up Sandcastle to our CI build because I'm a little tired of being asked for documentation that nobody uses.

If you're in the .NET world, Sandcastle has been around for ages. It's what MS uses/used for their MSDN documentation generation. Someone released the source to the public ages ago and I've been using it on and off since. It's been a good 5 years since I touched it last, so I grabbed the latest last week. It really has come a long way. It integrates into Visual Studio as a project type now: just add a new "help" project to your solution, add the *.*proj files as documentation sources, and poof -- instant docs... provided you've XML-commented your code everywhere.

Using Team Build (TFS) for building, I created a helper script that publishes the generated help to a website after a successful build and includes the URL in the build notifications.
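
The publish step itself is nothing clever -- roughly this shape, with the output folder, target share, and URL all being placeholders rather than our real ones:

code:
  # copy the generated help site somewhere IIS already serves
  $helpOutput = ".\Help"                         # Sandcastle website output (placeholder)
  $docsSite   = "\\docserver\wwwroot\MyAppDocs"  # hypothetical share behind the docs site

  robocopy $helpOutput $docsSite /MIR
  if ($LASTEXITCODE -ge 8) { throw "publishing documentation failed" }

  Write-Host "Docs: http://docserver/MyAppDocs"  # URL that goes into the build notification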

..btt
Mar 26, 2008

Fatz posted:

Just finished hooking up Sandcastle to our CI build because I'm a little tired of being asked for documentation that nobody uses.

If you're in the .NET world, Sandcastle has been around for ages. It's what MS uses/used for their MSDN documentation generation. Someone released the source to the public ages ago and I've been using it on and off since. It's been a good 5 years since I touched it last, so I grabbed the latest last week. It really has come a long way. It integrates into Visual Studio as a project type now: just add a new "help" project to your solution, add the *.*proj files as documentation sources, and poof -- instant docs... provided you've XML-commented your code everywhere.

Using Team Build (TFS) for building, I created a helper script that publishes the generated help to a website after a successful build and includes the URL in the build notifications.

Interesting - I last looked at this about a year ago. Similarly, we are sometimes asked by higher-ups for standalone documentation no dev would use. At the time it seemed abandoned; it hadn't been touched since 2012 or so.

Does it still have crazy build times? A fairly complex project that took about 10 minutes for our ancient build server to build would take on the order of an hour to build the docs, so it's been disabled since I set it up.

Fatz
Jul 1, 2011

..btt posted:

Interesting - I last looked at this about a year ago. Similarly, we are sometimes asked by higher-ups for standalone documentation no dev would use. At the time it seemed abandoned; it hadn't been touched since 2012 or so.

Does it still have crazy build times? A fairly complex project that took about 10 minutes for our ancient build server to build would take on the order of an hour to build the docs, so it's been disabled since I set it up.

An hour? No, nothing that bad, but there still is a lot of overhead. The longest build I've got right now is about 2 minutes, so it's tough for me to give reliable numbers, but Sandcastle increased that one to 3 minutes. This seems to have caught on at my company and other teams are doing it now. Build times haven't been a complaint yet. My hunch is this will end up being for release compiles only, or behind some other flag to turn it off.

Added bonus: watching the IIS logs and seeing how many of the people asking for documentation actually use it.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Is it possible to transfer an Atlassian product license (looking at JIRA+Stash+Bamboo) between machines? I'm looking at getting a personal license. I'd like to throw it on my server bare-metal, but if it's gonna be bitchy about getting moved I'll toss it in a VM instead.

Sorry, not quite on the thread topic, just seemed like people here might have some answers :shobon:

Paul MaudDib fucked around with this message at 03:21 on Feb 28, 2015

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Is it just me, or is Chef loving awful? I decided to exit my Windows comfort zone a bit and play with Chef this weekend, and it has not been a happy, smooth experience. Doing everything requires endless dicking around with config files and googling, and no one seems to have any clear instructions on how to set up and use the thing properly.

I got it to the point where I can invoke a really simple cookbook on a target server (literally babby's first cookbook), but it's just been an arduous, painful process. I'm dreading figuring out how to manage cookbook dependencies and write some sort of useful recipe.

Also, the way they named things is super irritating. KNIFE RECIPE COOKBOOK KITCHEN. I'm surprised they call servers "nodes" instead of "POTS AND PANS LOL GET IT COOKING IDIOMS"

Puppet is up next on my list of things to play with... am I going to be just as miserable setting it up and using it, or is it better?

Less Fat Luke
May 23, 2003

Exciting Lemon
Puppet is much, much better in my opinion, though I actually find Ansible the nicest of the bunch; Ansible doesn't try to make you worry as much about a dependency graph as Puppet does. Essentially an Ansible playbook is a sequence of things to run on a server. Right now my job uses the whole cattle-instead-of-pets concept, so setting up Amazon images is a one-shot thing with Ansible (and then we snapshot the image for scaling purposes). At my previous role we had physical servers that were much longer-lived, and the Puppet model of ensuring everything was always in the proper working order was great for that. So yeah, both have their strengths :)

  • Reply