AlexG
Jul 15, 2004
If you can't solve a problem with gaffer tape, it's probably insoluble anyway.
Ansible is decent, and it is easy to get started with. There are cases where it does things with ease that are essentially impossible with Puppet. For example, we use Ansible to manage zero-downtime rolling restarts of cloud services. This is not reasonable with Puppet because it has next-to-no ability to coordinate actions across multiple machines. To some extent you can hack stuff up, but I have no wish to write or maintain that sort of thing when I could just use Ansible. Also, Puppet is a pain to debug. For me, the final straw was realizing that for all its vaunted declarative dependency specifications, Puppet cannot cope with two modules that both want the same system package installed (there are ways to hack it, but screw that).


evensevenone
May 12, 2001
Glass is a solid.
Ansible is pretty easy to get going, but it doesn't manage state at all; it's basically glorified shell scripts that run in parallel. So if you run a playbook that adds a package, and then later decide you don't want that package, you need to add a rule to remove it.

If you have to manage long-running physical servers, you need to think a bit carefully about what you're doing; the state of a machine is going to be the sum of all Ansible recipes that have ever been run on it, not just the most recent one. So things like "lineinfile" that seem super convenient can bite you in the rear end.

I think it's pretty good if your process is to always spin up a clean VM and run ansible against it, and then if you want to change something you always spin up a new one and kill the old one (i.e. the "cattle not pets" mentality).

Also, do yourself a favor and store your inventory files in a separate place than your playbooks so you don't have to do a commit to add a new host.


edit: I don't know if Chef or Puppet are any better at this, I was just surprised at how little Ansible actually does.

NovemberMike
Dec 28, 2008

What about Saltstack? I've been playing around with it and it seems nice. Anyone have real opinions?

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
I run a small consulting business, doing development for a bunch of different clients. I've been looking at setting up a CI/build server to get things a bit more organized. Most of our dev is C# - typically MVC for web, a mixture of WCF and Web API for services, and some windows services/console apps as well. However, we also do a reasonable amount of work for a couple of clients in C on the Raspberry Pi (we currently build everything using the CodeBlocks IDE on the pi itself, but do most of the dev on Windows under Visual Studio), and some embedded C dev on credit card terminals as well (NetBeans IDE with Cygwin+ARM cross compilers). There's also one java project using Eclipse and Maven and Spring and some other junk.

What I'm hoping to find is some setup which can do a nightly (as well as on-demand) build of all the projects that have changed, along with updating the version numbers for all the assemblies/projects being built, and committing the updated version resources back to source control. I know about the x.x.*.* versioning that Visual Studio can do, but I want something a bit more deterministic. Post-build, if it's a web project, deploy to IIS somewhere; if it's a service, stop the existing one, deploy, restart; and if it's just a standalone app or one of the Pi/credit card projects, just push to a dedicated share, with the current version number in the name somewhere. What I want is for all the nightly builds/deploys to update the dev environment automatically. Then, once we're ready to move something into test, just change the deployment URLs and credentials and initiate an on-demand build, and have everything update the test environment as well. I should be able to create some shell scripts or makefiles or whatever for the Raspberry Pi and credit card projects if there aren't any already, as well as writing some helper apps to update the build numbers for the non-.NET projects where necessary. Source control is svn and git at the moment, with potentially some clients using TFS in the future.

Is there a single product that can do this? Or a set of products that work reasonably well together? We've had a few instances in the past with source control mismanagement and having difficulty tying specific versions of binaries to source control revisions, so I'm wanting to automate as much of this as possible.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

beuges posted:

What I want is for all the nightly builds/deploys to update the dev environment automatically. Then, once we're ready to move something into test, just change the deployment URLs and credentials and initiate an on-demand build, and have everything update the test environment as well.

This is not a good practice, IMO. You should be building a set of binaries, testing them against a dev/integration environment, then promoting those binaries to the next environment in the release path. There are tools out there to help you manage releases like this. Overextending the build process to deploy software is really painful and inflexible.

wwb
Aug 17, 2004

beuges posted:

I run a small consulting business, doing development for a bunch of different clients. I've been looking at setting up a CI/build server to get things a bit more organized. Most of our dev is C# - typically MVC for web, a mixture of WCF and Web API for services, and some windows services/console apps as well. However, we also do a reasonable amount of work for a couple of clients in C on the Raspberry Pi (we currently build everything using the CodeBlocks IDE on the pi itself, but do most of the dev on Windows under Visual Studio), and some embedded C dev on credit card terminals as well (NetBeans IDE with Cygwin+ARM cross compilers). There's also one java project using Eclipse and Maven and Spring and some other junk.

What I'm hoping to find is some setup which can do a nightly (as well as on-demand) build of all the projects that have changed, along with updating the version numbers for all the assemblies/projects being built, and committing the updated version resources back to source control. I know about the x.x.*.* versioning that Visual Studio can do, but I want something a bit more deterministic. Post-build, if it's a web project, deploy to IIS somewhere; if it's a service, stop the existing one, deploy, restart; and if it's just a standalone app or one of the Pi/credit card projects, just push to a dedicated share, with the current version number in the name somewhere. What I want is for all the nightly builds/deploys to update the dev environment automatically. Then, once we're ready to move something into test, just change the deployment URLs and credentials and initiate an on-demand build, and have everything update the test environment as well. I should be able to create some shell scripts or makefiles or whatever for the Raspberry Pi and credit card projects if there aren't any already, as well as writing some helper apps to update the build numbers for the non-.NET projects where necessary. Source control is svn and git at the moment, with potentially some clients using TFS in the future.

Is there a single product that can do this? Or a set of products that work reasonably well together? We've had a few instances in the past with source control mismanagement and having difficulty tying specific versions of binaries to source control revisions, so I'm wanting to automate as much of this as possible.

Check out TeamCity if you want a product or Jenkins if you want free. I wouldn't get hung up on promoting binaries unless you've got really long test cycles.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Anyone have cool tricks for managing SSH / Ansible host files?

I use primarily iTerm to ssh into servers via keys. We use ansible for installs/deployments in dev and prod and have a normal CI environment that I manage. Our real failing right now is keeping track of all our different hosts and being able to quickly look up / ssh into them and look around, because we keep one ansible host file per application checked into the code repository instead of a centralized CMDB.

Basically, I was hoping someone had a program that could read in various Ansible hosts files (we have one per application) and spit out .ssh/config and iTerm profiles for me/us to use. I could write one myself, but it seems silly when someone has almost certainly done it already.
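If nothing turns up, it really is only a few lines. A rough Python sketch of the idea (the inventory layout, file glob, and host variables here are just examples of how ours look, not anything standard):

```python
"""Sketch: read INI-style Ansible inventories, emit ssh_config entries.

Assumes one 'hosts' file per application checkout; ansible_host,
ansible_user, and ansible_port are standard Ansible inventory variables.
"""
import glob
import shlex

def parse_inventory(path):
    """Very loose parser: '[group]' headers and 'alias key=value ...' host
    lines; skips [group:vars]/[group:children] sections and comments."""
    hosts = {}
    skip = False
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith(('#', ';')):
                continue
            if line.startswith('['):
                skip = ':' in line   # [group:vars] or [group:children]
                continue
            if skip:
                continue
            parts = shlex.split(line)
            alias = parts[0]
            kv = dict(p.split('=', 1) for p in parts[1:] if '=' in p)
            hosts[alias] = kv
    return hosts

def ssh_config_entry(alias, kv):
    """Render one Host block for ~/.ssh/config."""
    lines = [f'Host {alias}']
    if 'ansible_host' in kv:
        lines.append(f'    HostName {kv["ansible_host"]}')
    if 'ansible_user' in kv:
        lines.append(f'    User {kv["ansible_user"]}')
    if 'ansible_port' in kv:
        lines.append(f'    Port {kv["ansible_port"]}')
    return '\n'.join(lines)

if __name__ == '__main__':
    # hypothetical layout: each app repo checked out alongside this script
    for path in sorted(glob.glob('*/hosts')):
        for alias, kv in sorted(parse_inventory(path).items()):
            print(ssh_config_entry(alias, kv))
            print()
```

The iTerm profile half would be the same walk emitting its plist/JSON format instead.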

Or, maybe there is a better solution I'm not seeing?

Dren
Jan 5, 2001

Pillbug
The project I'm working on is split into two deliverables (in separate git repos) where the second depends on the first. Jenkins is set up to take the necessary artifacts from the last successful build of the first project when it builds the second, and there's a way to kick off a build of everything that packages up all the artifacts at the end and makes an ISO with all the deliverables. This all works pretty OK. I'd like to be able to kick off the whole process on a tag instead of master in order to create a release. Is there any option besides duplicating all of the projects and pointing them at the tag instead of master?

Also, the jenkins vagrant plugin is kind of rear end, are there any alternatives? We ended up scripting some stuff inside the jobs to make vagrant work.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
You can set "Branch Specifier" as ${TAG_NAME} and set a string build parameter TAG_NAME with the default name as origin/master or origin/HEAD or whatever, and only change it when you want to build a release. Presumably the options are the same and you're using a post-build action to trigger parameterized build of the second job. You can add "Current build parameters" as an option and it'll forward the tag along.

Edit: You may also have to mess around with the refspec, because by default the git plugin may not fetch tags automatically depending on which version you're using. Setting refspec to "+refs/tags/*:refs/remotes/origin/tags/*" and "*/tags/${TAG_NAME}" as branch specifier should do the trick.

Never hooked vagrant up to jenkins, sorry.

Bhodi fucked around with this message at 23:27 on Mar 2, 2015

Dren
Jan 5, 2001

Pillbug

Bhodi posted:

You can set "Branch Specifier" as ${TAG_NAME} and set a string build parameter TAG_NAME with the default name as origin/master or origin/HEAD or whatever, and only change it when you want to build a release. Presumably the options are the same and you're using a post-build action to trigger parameterized build of the second job. You can add "Current build parameters" as an option and it'll forward the tag along.

Edit: You may also have to mess around with the refspec, because by default the git plugin may not fetch tags automatically depending on which version you're using. Setting refspec to "+refs/tags/*:refs/remotes/origin/tags/*" and "*/tags/${TAG_NAME}" as branch specifier should do the trick.

Never hooked vagrant up to jenkins, sorry.

I'll check that out, thanks

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

Ithaqua posted:

This is not a good practice, IMO. You should be building a set of binaries, testing them against a dev/integration environment, then promoting those binaries to the next environment in the release path. There are tools out there to help you manage releases like this. Overextending the build process to deploy software is really painful and inflexible.

That definitely makes sense. Not sure what I was thinking by wanting to rebuild to the test environment rather than just re-deploy.

I think it's time to stop staring at Wikipedia features tables trying to find something that does everything I need, and just install Jenkins and TeamCity in VMs and see which are easiest to configure and manage for my needs.

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
I dunno if there's a better thread to ask this in, but does anyone have any experience with Selenium for automated browser testing? We've got a whole lot of old Selenium RC tests and for some reason haven't updated our Selenium Server or test runners since 2.22. Nothing uses WebDriver as far as I can tell. We'd like to upgrade the whole thing so we can test against current versions of FF and Chrome but also keep testing against IE8.

I guess I'd also like to know what (if anything) is better than Selenium for this purpose.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
I'm running a jenkins job that kicks off xvfb-run with an rspec suite that includes selenium-webdriver and firefox. It uh, just kinda works? It's slightly newer, I think 2.36? I can check Monday morning if you need version specifics. It was pretty cut and dried; I just googled around for some tutorials. I can forward them along, but it sounds like you've already got something set up.

AFAIK Selenium WebDriver is where it's at. There are a few options for the display layer, but I found xvfb convenient, and it was the first one I tried that worked basically out of the box on Linux, so that's what I went with. It takes about 4 seconds to spin up X and Firefox. Some people apparently like PhantomJS, but I dunno

Bhodi fucked around with this message at 05:19 on Mar 7, 2015

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
If you don't want to manage a Selenium server, you can use a 3rd party service like SauceLabs. Otherwise, WebDriver is where it's at.

duck monster
Dec 15, 2004

NovemberMike posted:

What about Saltstack? I've been playing around with it and it seems nice. Anyone have real opinions?

We ran a very large government department with it. Science clusters, Windows servers, various Linux boxes, virtual hosts and servers, the lot. It's very nice. Like all of these things, there's a bit of a learning curve, but honestly I found it much easier than Puppet.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Ithaqua posted:

This is not a good practice, IMO. You should be building a set of binaries, testing them against a dev/integration environment, then promoting those binaries to the next environment in the release path. There are tools out there to help you manage releases like this. Overextending the build process to deploy software is really painful and inflexible.
This is generally sound advice, but it is context-dependent. For example, if you're building a minified web application using something like Webpack, there's a good chance you won't be able to reuse exactly the same build artifacts for each environment, especially if you have debug flags. Even for standard COTS software releases, you typically want a separate debug build on dev/QA than the release build you put on staging and production.

Vulture Culture fucked around with this message at 20:12 on Mar 11, 2015

TodPunk
Feb 9, 2013

What do you mean, "TRON isn't a documentary?"

duck monster posted:

We ran a very large government department with it. Science clusters, Windows servers, various Linux boxes, virtual hosts and servers, the lot. It's very nice. Like all of these things, there's a bit of a learning curve, but honestly I found it much easier than Puppet.

SaltStack is locally developed here in Utah, so I get a lot of exposure to it. Their docs suck at helping the learning curve, and every presentation I've ever seen fails to explain what you can/can't do with it or why you'd care. I just did a book review for Learning SaltStack, and it made me realize it's NOT just me that has this problem. I hope they improve their docs with a simplistic version of this book (it was written by one of the devs); they need it.

That said, SaltStack is amazing and scales incredibly well if you can keep it organized (the tools make it easy to do so if you keep it in mind). We're using it in our new AWS deployment to replace some legacy junk. Only 6 servers, but it's still a devops godsend. I'll never touch puppet again. Ansible was also worth looking at, but we found the project changing too much at the time.

Referral-less Amazon link! http://www.amazon.com/Learning-Saltstack-Colton-Myers/dp/1784394602

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES
We're a C#/ASP.NET/SQL shop that builds both an enterprise-level website and a related desktop application. When I first got here, it was all CruiseControl.NET driving MSBuild from TFS. At some point, my bosses were like, "Hey, let's go full Microsoft!" and so I ported the build system to use TeamBuild 2010 in ways that Microsoft probably wasn't intending for that release. Needless to say, I've become quite practiced at customizing these drat .xaml workflow templates and doing eldritch things with build definitions.

We've also gone through about four different methods for managing our configurations, including

  • Hardcoding it into the build workflows (driven by environment-specific values coming from external .xml files), meaning environment-specific configuration happened at build time, but the build finishes with a build deployable on all targeted environments -- BAD!
  • Writing our own tool to perform xpathreplace-based configuration post-build. This meant we could configure for an environment we didn't anticipate when we ran the build. -- Slightly less BAD!
  • Just checking all the config files for all the environments right into TFS. - Less bad, but maintenance heavy.
  • Web/app.config transforms -- Our next step, only small portions of the application have been modified to use this technique, but it seems pretty flexible and cuts down on config maintenance.
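For the curious, the xpath-replace idea in the second bullet boils down to something like this (a Python stand-in for our actual in-house tool; the keys, environments, and file layout are invented for illustration):

```python
"""Sketch of post-build config rewriting: apply per-environment values
to a Web.config-style appSettings section after the build, so one build
output can be configured for any environment, including ones you didn't
anticipate at build time. All names here are made up."""
import xml.etree.ElementTree as ET

# per-environment values, normally loaded from external files
ENVIRONMENTS = {
    'dev':  {'DbServer': 'dev-sql01',  'ApiUrl': 'https://dev.example.test'},
    'test': {'DbServer': 'test-sql01', 'ApiUrl': 'https://test.example.test'},
}

def configure(xml_text, env):
    """Rewrite <appSettings><add key=... value=.../> entries for one env."""
    root = ET.fromstring(xml_text)
    values = ENVIRONMENTS[env]
    for add in root.findall('./appSettings/add'):
        key = add.get('key')
        if key in values:
            add.set('value', values[key])
    return ET.tostring(root, encoding='unicode')

if __name__ == '__main__':
    template = ('<configuration><appSettings>'
                '<add key="DbServer" value="PLACEHOLDER"/>'
                '<add key="ApiUrl" value="PLACEHOLDER"/>'
                '</appSettings></configuration>')
    print(configure(template, 'test'))
```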

Meanwhile, another group has spun off here that is going full Java/node.js/Mongo/Rabbit, etc. They're using TeamCity and Gitlab and having just an awful time of it.

I kind of miss doing actual dev, but being "the build guy" is kind of a nice thing to have on my resume. Just wish TeamBuild was in more demand.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

bgreman posted:

We're a C#/ASP.NET/SQL shop that builds both an enterprise-level website and a related desktop application. When I first got here, it was all CruiseControl.NET driving MSBuild from TFS. At some point, my bosses were like, "Hey, let's go full Microsoft!" and so I ported the build system to use TeamBuild 2010 in ways that Microsoft probably wasn't intending for that release. Needless to say, I've become quite practiced at customizing these drat .xaml workflow templates and doing eldritch things with build definitions.

We've also gone through about four different methods for managing our configurations, including

  • Hardcoding it into the build workflows (driven by environment-specific values coming from external .xml files), meaning environment-specific configuration happened at build time, but the build finishes with a build deployable on all targeted environments -- BAD!
  • Writing our own tool to perform xpathreplace-based configuration post-build. This meant we could configure for an environment we didn't anticipate when we ran the build. -- Slightly less BAD!
  • Just checking all the config files for all the environments right into TFS. - Less bad, but maintenance heavy.
  • Web/app.config transforms -- Our next step, only small portions of the application have been modified to use this technique, but it seems pretty flexible and cuts down on config maintenance.

Meanwhile, another group has spun off here that is going full Java/node.js/Mongo/Rabbit, etc. They're using TeamCity and Gitlab and having just an awful time of it.

I kind of miss doing actual dev, but being "the build guy" is kind of a nice thing to have on my resume. Just wish TeamBuild was in more demand.

Team build is getting a full rewrite in 2015 that will be much less awful. Check Chris Patterson's blog on MSDN.

You shouldn't have your build responsible for deployment in general. Config transforms encourage bad practices like one build per environment instead of a binary promotion model.

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES

Ithaqua posted:

Team build is getting a full rewrite in 2015 that will be much less awful. Check Chris Patterson's blog on MSDN.

We've canceled the TFS 2013+ upgrade since our .NET application has been transitioned into "maintenance mode." Whatever the hell that means. We're still doing active development on it, but I guess the bigwigs decided not to sign off on the costs of the upgrade. So I'm stuck with TFS 2010 and the attendant Team Build.

Like I said, we're moving away from a model where the build drives the config. Basically at the end of the process, the binaries are there and the configs for all environments are available for the actual deployment process. Not that the deployment process to any of our formal environments is at all automated, but that's outside my purview at the present time.

syphon
Jan 1, 2001
It's definitely a trade-off. On one hand, having the deployment configs generated with the build gives you an atomic "package" that's easily reproducible. On the other, it's not very agile, and if you need to deploy that build to a new environment, you're screwed. Alternately, keeping your deployment configs separate from your build creates more flexibility, but they become another setting you're dependent on (it becomes more difficult to wholly reproduce a 'deployment', because a change could have come from either the build or the config side of things).

You can use source control to try to mitigate this problem. Things like Git's 'tag' or Perforce's 'label' (I'm sure TFS has an equivalent) can help, but I've never found a rock-solid solution to this problem once you start deploying your app to many different 'environments'.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I'd create a repo with a git branch/tag for each environment config. You have something that says "docker: bring up server with configuration 'production'" and then it does "git archive --format=tar.gz git://gitconfigrepo:production | tar -xz". I can't remember the exact syntax to archive only a specific branch or tag, but I wrote it up as a script a while back. I'll check on that.

There's probably a better mechanism for making the config update atomically when you change it, to prevent a window where configs are broken; maybe subtrees or submodules? But I think that works for a simple deployment scheme.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
We had branch-per-configtype for a while (one for prod, dev, staging, test, etc.), and it went really badly because it turns out that changes frequently have to be made to multiple branches simultaneously (e.g. updating which SSL ciphers an Apache instance uses), and the branches drift apart from each other.

A more robust solution is to have code that generates the configs, taking the environment type as a parameter, e.g. "generate_config.sh --env=prod". This keeps everything together in a single branch, and as a bonus it's easier to write tests that each env's config is correct.
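A toy version of the idea, in Python rather than shell (all settings and values invented), showing why it beats per-environment branches: the shared bits live in one place, so a change like the SSL cipher update lands everywhere at once, and the output is trivial to assert on in tests.

```python
"""Generate per-environment configs from one source instead of one
branch per environment. BASE holds shared settings; OVERRIDES holds
only what differs per env. All names/values are illustrative."""

BASE = {
    'apache_ssl_ciphers': 'ECDHE+AESGCM:!aNULL',  # changed once, applies everywhere
    'log_level': 'info',
}

OVERRIDES = {
    'prod':    {'workers': 16},
    'staging': {'workers': 4},
    'dev':     {'workers': 2, 'log_level': 'debug'},
}

def generate_config(env):
    """Return the merged config dict for one environment."""
    if env not in OVERRIDES:
        raise ValueError(f'unknown environment: {env}')
    return {**BASE, **OVERRIDES[env]}

if __name__ == '__main__':
    for env in OVERRIDES:
        # every env inherits the shared cipher string -- no branch drift
        assert generate_config(env)['apache_ssl_ciphers'] == BASE['apache_ssl_ciphers']
    print(generate_config('prod'))
```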

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
You also need to consider what it means to reproduce a deployment. Reproducing a deploy from a week ago might be a reasonable use case if you have a really nasty regression on a public system. On the other hand, how often are you going to be deploying months-old code where it's worth it to invest that time up front to make it work, as opposed to when (if) it comes up?

syphon
Jan 1, 2001
Tools like Chef have done a really good job of mitigating this problem. Your cookbooks have 'defaults' which can be overridden per environment. This answers minato's stated problem of "configuration drift" across environments. Then, you can enforce versioning of your cookbooks in order to create reproducible deployments.

Managing the mapping of App Version to Cookbook Version is a bit of a pain, but I think the benefits outweigh the costs.
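Roughly, the defaults-plus-environment-override pattern works like this (a simplified Python stand-in, not Chef's actual attribute precedence rules; all attribute names are invented):

```python
"""Illustration of cookbook 'default' attributes overridden by
environment-level attributes, with a deep merge so an environment
only states what differs from the defaults."""

def deep_merge(defaults, overrides):
    """Return defaults with overrides layered on top, recursing into
    nested dicts; neither input is mutated."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# hypothetical cookbook defaults and per-environment overrides
COOKBOOK_DEFAULTS = {'app': {'port': 8080, 'threads': 8, 'debug': False}}
ENV_OVERRIDES = {
    'prod': {},                                   # prod takes all defaults
    'dev':  {'app': {'threads': 2, 'debug': True}},
}

if __name__ == '__main__':
    print(deep_merge(COOKBOOK_DEFAULTS, ENV_OVERRIDES['dev']))
```

Pinning a cookbook version per environment then pins this whole merged result, which is what makes the deployments reproducible.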

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

syphon posted:

Tools like Chef have done a really good job of mitigating this problem. Your cookbooks have 'defaults' which can be overridden per environment. This answers minato's stated problem of "configuration drift" across environments. Then, you can enforce versioning of your cookbooks in order to create reproducible deployments.

Managing the mapping of App Version to Cookbook Version is a bit of a pain, but I think the benefits outweigh the costs.
And if that's a big concern -- I've rarely found it to be in practice -- Jamie Winsor's practices outlined here are a big help:

https://www.youtube.com/watch?v=Dq_vGxd-jps

syphon
Jan 1, 2001
Ah I briefly met him at this year's Chefconf. Seemed like a really smart guy.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

syphon posted:

Tools like Chef have done a really good job of mitigating this problem. Your cookbooks have 'defaults' which can be overridden per environment. This answers minato's stated problem of "configuration drift" across environments. Then, you can enforce versioning of your cookbooks in order to create reproducible deployments.

Managing the mapping of App Version to Cookbook Version is a bit of a pain, but I think the benefits outweigh the costs.
An issue I keep coming across is an elephants-all-the-way-down problem of then having to have an associated prod/dev/test/whatever for all your management code/servers when you use puppet/chef/ansible/whatever.

For example, I built a jenkins test suite that pulls a branch from git and runs a bunch of tests on our cloud environment including creating VMs on a bunch of different vlans with configs using the tool we distribute to users. But now I need to be able to reproduce jenkins itself in both prod and dev, so I have a separate repo for the jenkins configs. And I need a program to be able to import/export, so I wrapped ansible around that and have some ansible tasks to pull/push configs to the various jenkins servers. But wait, the jenkins configs are subtly different because, for example, prod jenkins needs to pull from the prod branch and dev from dev, so now I have to munge it through a tool to dynamically make the jenkins configs.

It's ugly, and now I have 3 repos to manage and try to keep in sync, each with its own versioning and release process. It's messy, but it's the best I could come up with. My sister group dealing with our openstack silo has it three or four times as bad.

All these cloud products enable people to easily do continuous integration on them, but not the app itself.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Bhodi posted:

An issue I keep coming across is an elephants-all-the-way-down problem of then having to have an associated prod/dev/test/whatever for all your management code/servers when you use puppet/chef/ansible/whatever.

For example, I built a jenkins test suite that pulls a branch from git and runs a bunch of tests on our cloud environment including creating VMs on a bunch of different vlans with configs using the tool we distribute to users. But now I need to be able to reproduce jenkins itself in both prod and dev, so I have a separate repo for the jenkins configs. And I need a program to be able to import/export, so I wrapped ansible around that and have some ansible tasks to pull/push configs to the various jenkins servers. But wait, the jenkins configs are subtly different because, for example, prod jenkins needs to pull from the prod branch and dev from dev, so now I have to munge it through a tool to dynamically make the jenkins configs.

It's ugly and now I have 3 repos to manage and try and keep in sync, all with different versions and good release process. It's messy but the best I could come up with. My sister group dealing with our openstack silo has it three or four times as bad.

All these cloud products enable people to easily do continuous integration on them, but not the app itself.

I have one project at work that has provisioning scripts for its very own jenkins instance. It's smart enough to realize when it can reuse one already built, but sometimes I really do have to build the whole environment. Config management is a special kind of hell.

syphon
Jan 1, 2001
The enterprise version of Jenkins has a Templates Plugin for creating similar jobs from a template. That doesn't solve the "elephants all the way down" problem you describe (I love that term, btw) but it might help with this one specific instance.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

minato posted:

We had branch-per-configtype for a while (one for prod, dev, staging, test, etc), and it went really badly because it turns out that many changes frequently have to be made to multiple branches simultaneously (e.g. updating what SSL ciphers an Apache instance uses), and the branches drift apart from each other.

A more robust solution is to have code that generates the configs, taking the environment type as a parameter. e.g. "generate_config.sh --env=prod'. This keeps everything together in a single branch, and as a bonus it's easier to write tests that each env's config is correct.

That is a better solution and I'll keep that in mind.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

An issue I keep coming across is an elephants-all-the-way-down problem of then having to have an associated prod/dev/test/whatever for all your management code/servers when you use puppet/chef/ansible/whatever.

For example, I built a jenkins test suite that pulls a branch from git and runs a bunch of tests on our cloud environment including creating VMs on a bunch of different vlans with configs using the tool we distribute to users. But now I need to be able to reproduce jenkins itself in both prod and dev, so I have a separate repo for the jenkins configs. And I need a program to be able to import/export, so I wrapped ansible around that and have some ansible tasks to pull/push configs to the various jenkins servers. But wait, the jenkins configs are subtly different because, for example, prod jenkins needs to pull from the prod branch and dev from dev, so now I have to munge it through a tool to dynamically make the jenkins configs.

It's ugly and now I have 3 repos to manage and try and keep in sync, all with different versions and good release process. It's messy but the best I could come up with. My sister group dealing with our openstack silo has it three or four times as bad.
I don't buy that this is a problem that Chef and its kin don't solve, honestly. Chef supports trivially versioning cookbooks (the server + Berkshelf do this easily, future Chef versions will go even further with Policyfile), and it's super-easy to template out the config so the same template produces all the correct configurations.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Vulture Culture posted:

I don't buy that this is a problem that Chef and its kin don't solve, honestly. Chef supports trivially versioning cookbooks (the server + Berkshelf do this easily, future Chef versions will go even further with Policyfile), and it's super-easy to template out the config so the same template produces all the correct configurations.

In your example it would be having recipes for setting up your Postgres, RabbitMQ, Bookshelf, all the components of Chef. And because presumably you need to be able to test upgrades and patches while your dev instance is supporting other people's work in dev, you need a separate entire instance for your own testing of those scripts. Maybe Chef can bootstrap itself with its own files, I don't know, but you need those too. At some point you have to evaluate if it's all useful and just compromise, as was brought up in the cloud thread, but it's obnoxious to deal with when your systems can't manage themselves.

Less Fat Luke
May 23, 2003

Exciting Lemon
Is there any place that provides a SonarQube setup as a service? I'd like to use it for a couple Android projects but from what I understand the configuration and management (and integration with other services) is a total nightmare and I'd pay a lot to not think about it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

In your example it would be having recipes for setting up your Postgres, RabbitMQ, Bookshelf, all the components of Chef. And because presumably you need to be able to test upgrades and patches while your dev instance is supporting other people's work in dev, you need a separate entire instance for your own testing of those scripts. Maybe Chef can bootstrap itself with its own files, I don't know, but you need those too. At some point you have to evaluate if it's all useful and just compromise, as was brought up in the cloud thread, but it's obnoxious to deal with when your systems can't manage themselves.
I deal with these situations all the time by bootstrapping Test Kitchen instances with chef-solo and validating the results with Serverspec. Test Kitchen finally got multi-node support, so the weird integration cases got much easier to support. I don't have any permanent infrastructure assigned to testing environments.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Oh yeah? I have a serverspec question for you, actually...

Are you wrapping stuff for testing multiple hosts inside a rakefile? I can't figure out how to test multiple servers in one spec file because of the weird-rear end instantiation of rspec tests.

I really wanted to get it working and tried basically everything: you can loop it, but you can't reset and recreate the SSH connection variable even from :all or :before hooks, so I ended up going with a rakefile loop that tests per-server, as the (very limited) documentation suggested.
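For anyone else hitting this: the per-server loop boils down to launching one rspec process per host and passing the host in through the environment (the TARGET_HOST convention from the serverspec docs). A dry-run sketch with made-up hostnames:

```shell
#!/bin/sh
# One rspec process per host: each process builds its own SSH connection
# from ENV['TARGET_HOST'] in spec_helper, sidestepping the "can't reset
# the connection inside one rspec run" problem.
set -eu

# Echo stub so this sketch is a dry run; set RSPEC="bundle exec rspec"
# to actually execute the specs.
RSPEC="${RSPEC:-echo rspec}"

for host in web1 web2 db1; do
  TARGET_HOST="$host" $RSPEC spec/base || echo "FAILED: $host"
done
```

It's clunky compared to one spec run covering everything, but it matches how rspec wants to instantiate things.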

rspec variable scope is loving weird, the ordering is loving weird, nothing makes sense

lament.cfg
Dec 28, 2006

we have such posts
to show you




Long-story warning.

I am a dev who is rapidly falling into the realm of responsibility which would be best served by CI/DevOps/whatever tools.

We are a primarily-C# Windows desktop software shop with an SVN repo, and we do a bad job at config management. I am trying to fix that.

Our current build process:

Everyone checks in to the single SVN repo trunk. We very rarely branch, unless there is a specific dichotomy of features being delivered to multiple "Buildings". Occasionally, a build is required. We do not do anything approaching "continuous". It is all driven by whatever the next official delivery is. We have a guy who owns a VM that has our "build server" on it, which is in reality an instance of Visual Studio and some scripts he wrote to update from SVN and run a build with the right flags. That build is then dropped on a fileshare, and the msi is installed on ~10 machines in our lab for testing.

Our official delivery configuration requirements look like this:

10 separate "Buildings"
~10 "Rooms" within each "Building"

So we have, give or take, 100 configuration files that get fed to the application depending on which "room" within which "building" the app is being deployed on. We do have an app that auto-generates the xml config based on the input of "Building" and "Room", so we aren't actually manually managing 100 configs, but due to some politics, we may need 100 discrete config files for CM reasons. We currently manually install our app on each "Room"'s workstation.
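If discrete files really are required for CM, one option (sketched here with invented names and a trivial XML body) is to keep the generator as the single source of truth and emit all ~100 files in one run, so the archived files can never drift from each other:

```shell
#!/bin/sh
# Emit one XML config per building/room pair from a single generator,
# so the ~100 archived files are reproducible rather than hand-managed.
set -eu
mkdir -p configs
for b in $(seq 1 10); do
  for r in $(seq 1 10); do
    cat > "configs/building${b}_room${r}.xml" <<EOF
<config building="$b" room="$r" />
EOF
  done
done
echo "generated $(ls configs | wc -l) configs"
```

The generated tree can then be checked in wholesale whenever the generator changes, which keeps the CM people happy without anyone hand-editing 100 files.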

Things that I know we need (to fix):
Build process. We need it to not be one guy's VM, we need it to be automated, we need it to be via some kind of "industry standard" method instead of a hacked together clusterfuck.
Deployment. Our Lab and the "Buildings" are not available externally via any networking. So someone does have to bring a CD or hard drive with the build. From that point in, we *do* have the capability of introducing some kind of remote/automatic deployment system.
Version control. We should probably not just be adding poo poo into the trunk constantly and calling a specific build number as the 'released' version. This is another kludge.
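On the version control point, even without changing hosting, tagging each release would be a small first step; a dry-run sketch (repo URL and version number are placeholders):

```shell
#!/bin/sh
# Tag a release instead of blessing "trunk at build N" from memory.
# Dry run by default: drop the SVN=... override to run the real command.
set -eu
SVN="${SVN:-echo svn}"
REPO="https://svn.example.com/repo"   # placeholder URL
VERSION="1.4.0"                       # placeholder version

# In Subversion a tag is just a cheap server-side copy:
$SVN copy "$REPO/trunk" "$REPO/tags/release-$VERSION" -m "Tag release $VERSION"
```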
TL;DR We are a mess and I need help.

Where do I start? Technologies, books, blogs, anything.

syphon
Jan 1, 2001
This is the book I've seen most people recommend as the place to get started with CI/CD. It's commonly referred to as the "Black book" - http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

As far as things to do first, I'd say set up a CI server like Jenkins. It'll build .Net solutions just fine (although other for-pay tools like TeamCity do it a bit better). Get it pulling from your SVN server, running msbuild with every commit, then publishing a versioned artifact to your file share somewhere.
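Concretely, that first Jenkins job's build step can start as a few lines. A dry-run sketch with echo stubs and invented paths (replace the stubs with the real binaries on the build node; Jenkins supplies BUILD_NUMBER itself):

```shell
#!/bin/sh
# What a first Jenkins build step might do: update, build, publish a
# versioned artifact. Echo stubs stand in for svn/msbuild so this is a
# dry run; replace them with the real tools on the build node.
set -eu
SVN="${SVN:-echo svn}"
MSBUILD="${MSBUILD:-echo msbuild}"
BUILD_NUMBER="${BUILD_NUMBER:-42}"    # set by Jenkins at run time

$SVN update .
$MSBUILD MySolution.sln /p:Configuration=Release
# Publish with the build number in the name so every artifact is traceable:
echo "would copy installer to //fileshare/builds/MyApp-$BUILD_NUMBER.msi"
```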

From there you can work on automating your deploy process, then improving your branching strategy. It may be worth going with GitHub for source control (it doesn't have to be publicly available), as GitHub has tremendous workflows for easy branching and code reviews and whatnot.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
It's worth taking a look at Visual Studio Online if you're already in the Microsoft world. That will cover your source control and build, and they'll be adding a redesigned release/deployment experience later this year.


lament.cfg
Dec 28, 2006

we have such posts
to show you




syphon posted:

This is the book I've seen most people recommend as the place to get started with CI/CD. It's commonly referred to as the "Black book" - http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

As far as things to do first, I'd say set up a CI server like Jenkins. It'll build .Net solutions just fine (although other for-pay tools like TeamCity do it a bit better). Get it pulling from your SVN server, running msbuild with every commit, then publishing a versioned artifact to your file share somewhere.

From there you can work on automating your deploy process, then improving your branching strategy. It may be worth going with GitHub for source control (it doesn't have to be publicly available), as GitHub has tremendous workflows for easy branching and code reviews and whatnot.

I literally have a copy of that book open on my desk. Good to know I'm starting in the right place!

I will get Jenkins up and running as Step 1.

RE: GitHub: when you say publicly available, does that mean it's hosted locally, or on their servers but private? We also have a requirement that nothing can be hosted by a third party (which is horrible but it comes with the territory). This also applies to VS Online, which I assume hits the same roadblock. I'll check them both out for suitability though, thank you both.

EDIT: Github Enterprise supports local hosting and is $250/user/year-ish.

lament.cfg fucked around with this message at 21:22 on Jun 2, 2015
