syphon
Jan 1, 2001
Yeah, you can use your own hosted version of GitHub (GHE), but it costs money and resources to stand up and support, so the ROI drops dramatically. Given those options I'd stick with SVN for now. No reason to change EVERYTHING at first!

I'd probably tackle your challenges in this order:
--1) Set up CI builds with something like Jenkins
--2) Automate your deployments
--3) Create a branch model that supports CI/CD. A Trunk or 'Mainline' setup is pretty common for this, so you're not too far off base.
--4) Automate your testing (this is a huge endeavor so don't expect to get this one done easily :))
--5) Tie it all together in a Continuous Delivery Pipeline

One of the biggest challenges I see from moving teams into a CI/CD model is the concept of "Every check-in must be capable of shipping all the way to release". If they've been doing it for a long time, people get way too used to the concept of "I can always check-in a fix later" and break the build or commit their half-written code. The more devs you have working in this mindset, the longer your build/deploy/tests will be broken, people will be blocked, and you're not releasing software. The idea is to set up your branch plan so that it allows people to commit frequently, but ALSO not commit junk to the Trunk branch and break everyone else (Github is really good at this by default).
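The branch-plan idea above boils down to a gate: a change only lands on trunk if the candidate build (trunk plus the change) is green. A toy sketch of that gate, with all names made up (this is not any real CI tool's API):

```python
# Sketch of a gated check-in: a change only reaches trunk if the
# candidate build (trunk + change) passes the test suite.
# Everything here is illustrative, not a real CI tool's API.

def gated_commit(trunk, change, test_suite):
    """Merge `change` into `trunk` only if tests pass on the result."""
    candidate = trunk + [change]          # build trunk-plus-change
    if all(test(candidate) for test in test_suite):
        return candidate                  # green: the change lands
    return trunk                          # red: trunk stays untouched

# Toy example: "commits" are strings, a "test" checks an invariant.
no_todo_markers = lambda commits: all("TODO" not in c for c in commits)

trunk = ["init", "feature-a"]
trunk = gated_commit(trunk, "feature-b", [no_todo_markers])        # lands
trunk = gated_commit(trunk, "half-done TODO", [no_todo_markers])   # rejected
```

Pull requests plus required status checks are exactly this gate in practice: the broken check-in never reaches the branch everyone else builds on.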

syphon fucked around with this message at 23:18 on Jun 2, 2015

Stoph
Mar 19, 2006

Give a hug - save a life.
I'm trying to switch my team to Atlassian Stash (enterprise Bitbucket), since we already license other Atlassian products like JIRA and Confluence.

We currently use SVN and everyone commits directly to trunk. It feels like going back in time to the Stone Age. I keep asking my supervisor how I can send him a pull request so he can review my code before it goes into the mainline branch. He has no idea. However, he does want to start using SonarQube for code review. Baby steps, he tells me.

I'm hoping that perhaps we can use some of the tips here:

http://blogs.atlassian.com/2013/01/atlassian-svn-to-git-migration-technical-side/

I feel like proper code quality and CI is an insurmountable task until we switch to a pull request based workflow.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

syphon posted:

One of the biggest challenges I see from moving teams into a CI/CD model is the concept of "Every check-in must be capable of shipping all the way to release". If they've been doing it for a long time, people get way too used to the concept of "I can always check-in a fix later" and break the build or commit their half-written code. The more devs you have working in this mindset, the longer your build/deploy/tests will be broken, people will be blocked, and you're not releasing software. The idea is to set up your branch plan so that it allows people to commit frequently, but ALSO not commit junk to the Trunk branch and break everyone else (Github is really good at this by default).

The other big challenge is to get people to start using feature flags and short-lived dev branches so you can ship your code even if a feature is half-completed. The killer is usually database stuff -- it's hard to get into the mindset of never (rarely) introducing breaking database schema changes.
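The flag half of this is simple in code: the half-finished path ships dark, the proven path keeps serving. A minimal sketch (function and flag names are made up):

```python
# Minimal feature-flag sketch: half-finished code ships dark behind a
# flag, so trunk stays releasable at every commit. Names are illustrative.

FLAGS = {"new_checkout": False}   # off in production until the feature is done

def legacy_checkout(cart):
    return {"total": sum(cart), "engine": "legacy"}

def new_checkout(cart):
    raise NotImplementedError("still being built behind the flag")

def checkout(cart, flags=FLAGS):
    if flags.get("new_checkout"):
        return new_checkout(cart)     # half-written path, dark in prod
    return legacy_checkout(cart)      # proven path keeps shipping

order = checkout([5, 7])   # safe to release: the flag is off
```

The database side is the usual expand/contract discipline: add the new nullable column first, backfill, flip the flag, and only drop the old column once nothing behind any flag state still reads it.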

syphon
Jan 1, 2001

Stoph posted:

I feel like proper code quality and CI is an insurmountable task until we switch to a pull request based workflow.
I don't really know SVN at all, but I've used both Perforce and Github. Perforce has a code-review system that relies on "Shelved Changelists"; is there anything similar for SVN?

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
SVN itself has no code review functionality built in, but there are plenty of code review tools that can be used with SVN.

syphon
Jan 1, 2001

Ithaqua posted:

The other big challenge is to get people to start using feature flags and short-lived dev branches so you can ship your code even if a feature is half-completed. The killer is usually database stuff -- it's hard to get into the mindset of never (rarely) introducing breaking database schema changes.
Feature flags are great, but as the team/product gets larger they can turn into their own nightmare. For example, if you have 20 different devs contributing to the same product and they all put their code behind feature flags, that's 2^20 permutations of the app that should be tested (each feature should be tested against every possible combination of every OTHER feature flag). What if another team has to roll back their changes or turn their feature off? Are you sure your code works reliably with features A B and C turned on but features X Y and Z turned off?
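For scale: n independent on/off flags give 2^n combinations, which is why exhaustive testing dies fast and all-pairs coverage (every pair of flags seen in every joint setting somewhere) is the usual compromise. A quick check of the numbers:

```python
from itertools import combinations

n = 20
full = 2 ** n        # every on/off combination of 20 independent flags

# All-pairs coverage only requires each (flag_i, flag_j) pair to appear
# in all 4 joint settings somewhere in the suite; the pair count is tiny
# by comparison (real pairwise suites pack many pairs into one run):
pair_settings = len(list(combinations(range(n), 2))) * 4

print(full, pair_settings)   # 1048576 vs 760
```

So the testing burden is real, but it grows like n^2 under pairwise coverage rather than 2^n under exhaustive coverage.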

New Yorp New Yorp
Jul 18, 2003


syphon posted:

Feature flags are great, but as the team/product gets larger they can turn into their own nightmare. For example, if you have 20 different devs contributing to the same product and they all put their code behind feature flags, that's 2^20 permutations of the app that should be tested (each feature should be tested against every possible combination of every OTHER feature flag). What if another team has to roll back their changes or turn their feature off? Are you sure your code works reliably with features A B and C turned on but features X Y and Z turned off?

I rarely see a team of 20 devs all working on totally isolated features. It's more commonly 20 devs working on 1 or 2 Big New Features (that will reasonably span multiple sprints and thus be good candidates for feature flagging), and 3-5 little features/bugfixes that aren't big enough to warrant a flag.

syphon
Jan 1, 2001
Yeah my example was certainly a bit exaggerated... but my company has a pretty large monolithic app where it's not unheard of to have 10+ feature flags spanning multiple releases all going at once. I'm willing to admit we're a bit of an edge case there. :)

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

syphon posted:

Feature flags are great, but as the team/product gets larger they can turn into their own nightmare.
All true, but this problem is largely QA's. It's up to them to figure out a sane testing plan, and maybe convince Dev to decouple modules so you're not facing 2^n flag combinations.

Feature flags can really hit Devs when you have a framework that can enable flags for specific users, like those who might want to use the beta version of a new feature, or when you're doing A/B testing on a subset of your customers. When a bug report comes in, you need to double-check which features that user has turned on in order to replicate the behavior.
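Per-user flags like this are commonly implemented with stable hashing, so a given user lands in the same bucket on every request. A sketch (the hashing scheme and names are illustrative, not any particular flag framework):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into 0-99; enabled if under `percent`.
    Stable: the same user always gets the same answer for a feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# The same user is consistently in or out of a 10% beta:
first = in_rollout("user-42", "beta_search", 10)
again = in_rollout("user-42", "beta_search", 10)
everyone = in_rollout("user-42", "beta_search", 100)
```

The determinism is also what makes the bug-report problem tractable: given the user id and the flag configuration at the time, you can recompute exactly which features they saw (or, better, log the resolved flag set with every report).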

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES

Ithaqua posted:

The other big challenge is to get people to start using feature flags and short-lived dev branches so you can ship your code even if a feature is half-completed. The killer is usually database stuff -- it's hard to get into the mindset of never (rarely) introducing breaking database schema changes.

Do you work at my office? We've been revamping our CI/CD scheme over the last six months, migrating from many long-lived "feature" branches to one trunk branch + feature flags, and the pushback has been incredible, particularly with database stuff.

New Yorp New Yorp
Jul 18, 2003


bgreman posted:

Do you work at my office? We've been revamping our CI/CD scheme over the last six months, migrating from many long-lived "feature" branches to one trunk branch + feature flags, and the pushback has been incredible, particularly with database stuff.

I work in many offices, helping people figure out how to do this stuff better. :)

syphon
Jan 1, 2001

bgreman posted:

Do you work at my office? We've been revamping our CI/CD scheme over the last six months, migrating from many long-lived "feature" branches to one trunk branch + feature flags, and the pushback has been incredible, particularly with database stuff.
I'm not surprised by this at all. It seems to me that DB changes (schema mostly) are the hardest to make forward/backward compatible (and thus able to be put behind a feature flag). FWIW though, it was a HUGE win when my company was finally able to accomplish this! Keep fighting the good fight!

Dren
Jan 5, 2001

Pillbug
I've been trying out GoCD for the last couple of days as an alternative to Jenkins. My current project has four git repos that are relevant to a deployable build:
  • Source 1 repo
  • Source 2 repo (needs artifacts from Source 1 repo to build, this repo is an implementation of a public API from Source 1 repo)
  • VM repo (contains Vagrantfiles and bootstrap shell scripts for 3 different dev environments)
  • deploy repo (contains some scripts to glue together artifacts from the builds in each environment into an ISO)

We've already got most of a solution cobbled together in Jenkins, but the support for the Vagrant stuff is not great, the support for passing artifacts between job stages is not great (it works, but eh...), and figuring out how to get Jenkins to build a release from a tag was an awful problem that looked like it'd require a ton of headache-inducing engineering.

So far GoCD seems like it has some niceties that Jenkins lacks. It puts the idea of a pipeline build where artifacts are published between pipeline stages right up front. Potentially solving the issue of building tagged releases, it has the ability to kick off an entire pipeline from a specific commit hash. It also puts all config into a single XML file that can be backed up and shared which is much easier to deal with than whatever Jenkins does (last I looked config was split across lots of directories). One thing that's a blessing/curse is GoCD won't support a multi-line script right in a task the way that Jenkins does. I shouldn't complain, I've said for years that keeping huge build scripts in Jenkins and outside of source control is a bad thing, but being restricted to not scripting in there at all feels like tough love.

Something I was able to do with GoCD that was useful, and I guess I could've done this with Jenkins since you can script anything in there, is set up a dummy git repo related to my project and a job that takes a source tarball off a directory on my machine rather than a git checkout (GoCD requires some kind of a data source for every pipeline, their word for this is "Material", and Materials are restricted to being artifacts from other pipelines/SCM/package repos). Building from arbitrary stuff is incredibly useful for me in testing before I commit anything, especially since I've got three environments (one of which has three build configurations that must be run). Whenever I want to test something I run a script that publishes the source tarball and pushes a commit to the related git repo to trigger the GoCD build. I realize it breaks the whole idea of CD, where everything is traceable back to some origin point, but it's damn useful to be able to use the CD machinery to run my test builds. All in all I don't think it's so bad; I've isolated this piece off in a pipeline group just for me.

One thing I'm having a bit of trouble with, and this was a problem on Jenkins too, is what to do about automating provisioning of the Vagrant-managed build environments. At the moment I'm working on a script to check out the vm repo, duplicate it N times (the number of build agents I want), and use vagrant to provision a bunch of machines. It's got some smarts in it so that if I rerun it in the same workspace it'll avoid reprovisioning the environments that didn't change and disable/delete/vagrant destroy the ones that did before reprovisioning. I'm not running this script inside of GoCD but I suppose I could. Is there a better way to manage this? Should I look at moving my shell-script provisioning to Chef or similar, and making these machines with that instead of Vagrant? (The machines are all hosted on vSphere.)
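The "skip what didn't change" part of a script like that is usually just fingerprinting each environment's provisioning inputs and comparing against the hash recorded at the last successful provision. A sketch of that decision logic, deliberately not tied to Vagrant's CLI (env names and configs are made up):

```python
import hashlib
import json

def fingerprint(env_config: dict) -> str:
    """Stable hash of an environment's provisioning inputs."""
    blob = json.dumps(env_config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def plan(envs: dict, recorded: dict) -> dict:
    """Per environment: skip if unchanged, otherwise destroy and reprovision."""
    actions = {}
    for name, cfg in envs.items():
        if recorded.get(name) == fingerprint(cfg):
            actions[name] = "skip"
        else:
            # e.g. shell out to `vagrant destroy -f && vagrant up` here
            actions[name] = "destroy+provision"
    return actions

envs = {"centos": {"box": "centos/7", "ram": 2048},
        "solaris": {"box": "solaris11", "ram": 4096}}
recorded = {"centos": fingerprint({"box": "centos/7", "ram": 2048})}
actions = plan(envs, recorded)   # centos unchanged; solaris never provisioned
```

Config-management tools like Chef do essentially this convergence check per resource instead of per whole VM, which is the main argument for moving the shell provisioning into one of them.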

I've also got some VMs for testing deployments on my local machine that I'd like to transition over to the CD system. They're snapshotted centos and solaris machines where I just restore the snapshots to the default state after the OS installed. Not much to them.

One step I intend to take once I get things a bit more set up is to try unifying our repos in a master repo with subtrees. That way the master repo could be tagged for deliveries (and keep everything else in sync) rather than tagging each repo.

Another piece of our system is a roll-your-own artifact server where CentOS repos, OpenCSW repos, and MSYS2/MinGW64 repos were all snapshotted and mirrored. I'm interested to know whether any of the artifact server products can constantly take updates from upstream yum, so the mirror is fully up to date, but also present a server with a point-in-time snapshot of that yum repo so it can always reprovision the same way. I've set something like this up before for a RHEL-based product using a custom yum plugin. It was actually pretty cool, I think. This project doesn't do that snapshotted, constantly-updated yum stuff, but it does have artifacts. Would there be any reason to investigate an artifact server as opposed to nginx?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

a worthy uhh posted:

10 separate "Buildings"
~10 "Rooms" within each "Building"
...
Our Lab and the "Buildings" are not available externally via any networking. So someone does have to bring a CD or hard drive with the build.

Be honest with me here, are you delivering medical services? Customized config on both a facility and room level, airgapped system, maybe like a cyclotron treatment facility or something? hire me esp. if you know Thornton

I used to hear horror stories about cyclotrons and cobbled-together bullshit in ClearCase. Including one about a guy who had someone bring in a scorched plexiglas testing cube and damn near shat himself when he realized that it was nuking the cube. I'm honestly amazed there's never been another Therac. If that's what you're doing, make goddamn sure that that can never ever happen. Especially with continuous integration.

Paul MaudDib fucked around with this message at 06:34 on Jun 4, 2015

Paul MaudDib
May 3, 2006

I have also sold our team on continuous deployment to staging, at least. We still don't have any goddamned testing because my predecessors were retards but I have permission to start fixing it going forward, just like the rest of the codebase. I think I may be getting promoted to the senior engineer on the project, my boss gave me a nice talk about how much I have improved "process" on the project and talked about how we needed someone to make strategic decisions and onboard newbies.

We have a fuckload of supported browsers and OSs, an unsustainable amount if humans have to monitor the differences. We obviously need unit tests too, but as a way to automate regression testing, I'm strongly considering implementing a visual diff tool with an adjustable alert threshold type thing that monitors difference from a baseline browser as well as version over version change. I think that would be pretty straightforward and would give us a big red flag when something breaks.
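The core of a visual diff tool like that is just a pixel-difference ratio checked against an adjustable threshold; real tools add perceptual weighting and masking on top. A bare-bones sketch of the idea (screenshots here are plain lists of RGB tuples, not any imaging library's types):

```python
def diff_ratio(baseline, candidate):
    """Fraction of pixels that differ between two equal-size screenshots,
    each given as a flat list of (r, g, b) tuples."""
    assert len(baseline) == len(candidate)
    changed = sum(1 for a, b in zip(baseline, candidate) if a != b)
    return changed / len(baseline)

def visual_regression(baseline, candidate, threshold=0.01):
    """Raise the big red flag when more than `threshold` of pixels moved."""
    return diff_ratio(baseline, candidate) > threshold

base = [(255, 255, 255)] * 100
same = list(base)
broken = [(0, 0, 0)] * 5 + base[5:]        # 5% of pixels changed

ok = visual_regression(base, same)          # False: identical render
flagged = visual_regression(base, broken)   # True: 5% > 1% threshold
```

Run per browser/OS against both the baseline browser and the previous version's capture and you get exactly the two alerts described: cross-browser drift and version-over-version change.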

Paul MaudDib fucked around with this message at 05:26 on Jun 4, 2015

Paul MaudDib
May 3, 2006


a worthy uhh posted:

RE: Github, When you say publicly available, does that mean it's hosted locally, or on their servers but private? We also have a requirement that nothing can be hosted by a third party (which is horrible but it comes with the territory). This also addresses VS Online, which I assume hits the same roadblock. I'll check them both out for suitability though, thank you both.

Hosted or solely hosted? Git is a distributed version control system; every developer holds a full copy of the repo. If Github fucks you over, any one of them is capable of restoring you all on their own. Put it on a backed-up system and you're golden.

If you mean your legal team has a problem with someone else having access to your source (i.e. you have a problem with them hosting it at all), that's also bullshit. If Github disclosed someone's source just for funsies, the repercussions would put them out of business in a week. Hate to put it quite like this, but Github is a Facebook-level business; your software isn't worth fucking it up over. Even if you're a whole complex of highly profitable medical facilities. There are much, much bigger fish than you who are retaining their services.

Get your boss to stand ground on these issues, neither are acceptable complaints.

PHI isn't the same thing at all - it doesn't need to go into the repo. We deal with PHI and we don't do it. Keep that shit on an audit-logged server. Our reporting is nuts; I have actually been considering revamping our audit logging into a separate service (DB + macaroons) since what we have now is incompatible with our current system layout. Right now we can't guarantee that someone wouldn't start a transaction, do queries, and then roll the audit log back. It's not a "yesterday" issue because our network is super isolated from the net, so there's like 2-3 people who could do that. It works in staging b/c a junior dev rolled back audit logging and then built functionality on top of the concept of being able to perform un-loggable SELECT queries on PHI (he's being fired next week). I'm actually going to muscle that issue again tomorrow; we have a team that's building on that functionality for a release in 2-3 weeks and I really cannot allow that to deploy into prod.
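One standard fix for that rollback hole is to write audit records on a connection (and therefore a transaction) separate from the application's, committed before the audited query runs, so rolling back the application transaction can't take the log entry with it. A sqlite sketch of the principle (table and actor names made up):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "app.db")
app = sqlite3.connect(db)      # the application's transaction
audit = sqlite3.connect(db)    # an independent connection for audit rows

app.execute("CREATE TABLE phi (id INTEGER, note TEXT)")
app.execute("CREATE TABLE audit_log (actor TEXT, action TEXT)")
app.commit()

# The audit entry commits on its own connection BEFORE the audited work,
# so nothing the application does later can roll it back:
audit.execute("INSERT INTO audit_log VALUES ('jr_dev', 'SELECT phi')")
audit.commit()

# Application work inside its own transaction...
app.execute("INSERT INTO phi VALUES (1, 'sensitive')")
app.rollback()                 # ...vanishes on rollback

rows = app.execute("SELECT COUNT(*) FROM phi").fetchone()[0]        # 0
logged = app.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]  # 1
```

Same idea scales to a separate audit service: the application can't un-log an access because it never holds the transaction the log entry lives in.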

syphon posted:

I'd probably tackle your challenges in this order:
--1) Set up CI builds with something like Jenkins
--2) Automate your deployments
--3) Create a branch model that supports CI/CD. A Trunk or 'Mainline' setup is pretty common for this, so you're not too far off base.
--4) Automate your testing (this is a huge endeavor so don't expect to get this one done easily :))
--5) Tie it all together in a Continuous Delivery Pipeline

One of the biggest challenges I see from moving teams into a CI/CD model is the concept of "Every check-in must be capable of shipping all the way to release". If they've been doing it for a long time, people get way too used to the concept of "I can always check-in a fix later" and break the build or commit their half-written code. The more devs you have working in this mindset, the longer your build/deploy/tests will be broken, people will be blocked, and you're not releasing software. The idea is to set up your branch plan so that it allows people to commit frequently, but ALSO not commit junk to the Trunk branch and break everyone else (Github is really good at this by default).

This is really good advice - this is what I've been doing with Bamboo. Check out the "git-flow" model - that's the most sensible model I've seen so far. "Develop" is equivalent to "trunk" or "nightly". We deliver straight to a testing environment for "develop" plus one instance for every release branch we're maintaining. Testing is next up on the list, at both a "testers bill time to write selenium tests" and "developers roll unit tests into their time from now on" level.

Paul MaudDib fucked around with this message at 06:25 on Jun 4, 2015

Plorkyeran
Mar 22, 2007


Dren posted:

So far GoCD seems like it has some niceties that Jenkins lacks. It puts the idea of a pipeline build where artifacts are published between pipeline stages right up front. Potentially solving the issue of building tagged releases, it has the ability to kick off an entire pipeline from a specific commit hash. It also puts all config into a single XML file that can be backed up and shared which is much easier to deal with than whatever Jenkins does (last I looked config was split across lots of directories). One thing that's a blessing/curse is GoCD won't support a multi-line script right in a task the way that Jenkins does. I shouldn't complain, I've said for years that keeping huge build scripts in Jenkins and outside of source control is a bad thing, but being restricted to not scripting in there at all feels like tough love.

Something I was able to do with GoCD that was useful, and I guess I could've done this with Jenkins since you can script anything in there, is set up a dummy git repo related to my project and a job that takes a source tarball off a directory on my machine rather than a git checkout (GoCD requires some kind of a data source for every pipeline, their word for this is "Material", and Materials are restricted to being artifacts from other pipelines/SCM/package repos). Building from arbitrary stuff is incredibly useful for me in testing before I commit anything, especially since I've got three environments (one of which has three build configurations that must be run). Whenever I want to test something I run a script that publishes the source tarball and pushes a commit to the related git repo to trigger the GoCD build. I realize it breaks the whole idea of CD, where everything is traceable back to some origin point, but it's damn useful to be able to use the CD machinery to run my test builds. All in all I don't think it's so bad; I've isolated this piece off in a pipeline group just for me.

FWIW this is all trivially doable with Jenkins (to the extent that anything involving Jenkins can be said to be trivial), but I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Plorkyeran posted:

FWIW this is all trivially doable with Jenkins (to the extent that anything involving Jenkins can be said to be trivial), but I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane.
TeamCity is free up to three build agents and very reasonably priced beyond that

Plorkyeran
Mar 22, 2007

okay?

that's sort of a non sequitur

Vulture Culture
Jul 14, 2003


Plorkyeran posted:

okay?

that's sort of a non sequitur
"I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane."

teamcity is p. good and handles artifact deps really nicely

wwb
Aug 17, 2004

It isn't if you understand the pricing model.

TeamCity is a freemium product. It is free at a certain level -- that is 3 build agents and 20 or so projects. Beyond that you need to pay but if you've got that much going on then it is pretty cheap compared to a lot of products -- like $2k or so a year.

Dren
Jan 5, 2001


Plorkyeran posted:

FWIW this is all trivially doable with Jenkins (to the extent that anything involving Jenkins can be said to be trivial), but I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane.

I understand and I've done this sort of thing in Jenkins before. The end result with Jenkins was sort of complex and opaque. There's something to be said for the way GoCD presents a pipeline flow showing you everything upstream and downstream of a pipeline stage. Having a visualization of your complicated build be a key part of the app is really nice. To be fair, maybe that's available in Jenkins and I haven't seen it.

One thing I don't like about GoCD so far is it doesn't seem to give you direct access to the workspace of your agents via the web client. It hasn't been a big problem so far, and I believe the idea is you alleviate that problem by publishing test reports as artifacts, but I haven't gotten around to figuring out how to publish my test reports yet and the quick and dirty solution of direct workspace access would have been adequate for me.

I'm toying with the idea of writing a pair of GoCD and reviewboard plugins to automatically build stuff that gets submitted for review then display in reviewboard if a review was properly built or not. I'd need to rework our git stuff and test reviewboard a bit more before I tried it though.

Vulture Culture posted:

TeamCity is free up to three build agents and very reasonably priced beyond that

I looked at TeamCity for about as long as it took to find the page with the pricing model. For my project I need lots of agents, and I can't imagine the tool is worth being bound by the licensing restrictions. Maybe if they let you use 10 or 20 for free I'd have tried it, but I can't even get a full trial build of my stuff going in order to test out TeamCity without a minimum of 6 agents (many more if I want to really get things going).

Plorkyeran
Mar 22, 2007


Dren posted:

I understand and I've done this sort of thing in Jenkins before. The end result with Jenkins was sort of complex and opaque. There's something to be said for the way GoCD presents a pipeline flow showing you everything upstream and downstream of a pipeline stage. Having a visualization of your complicated build be a key part of the app is really nice. To be fair, maybe that's available in Jenkins and I haven't seen it.
Not stock, but as with everything there's a plugin for it.

JimboMaloi
Oct 10, 2007

(disclaimer: I work for ThoughtWorks, though not on the Go team)

I'm actually currently dealing with the Jenkins vs. Go CD discussion, and while it's true that you can get some semblance of a visualization in Jenkins, it doesn't compare to what Go CD gives you out of the box. Where the Jenkins visualization really falls apart is when you have to actually chain pipelines together. I will give credit, though, that the (relatively) new Workflow Plugin seems to be a big improvement over previous Jenkins plugins, particularly by actually having support for the diamond dependency problem, where you only want pipeline D to run if pipelines B and C were successful using a common artifact from pipeline A. But Go CD is free now, so if it suits your needs there's no longer a reason not to use it. If you've got the budget you should definitely look into TeamCity though; it's a very solid piece of software. Just whatever you do, don't use ElectricCommander.
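The diamond condition reduces to a small predicate: D runs only when B and C both succeeded AND both consumed the same artifact from A (otherwise you'd fan in builds of two different versions). A toy sketch of that check (the result format is made up, not GoCD's or Jenkins' API):

```python
def should_run_d(results):
    """`results` maps pipeline name -> (status, upstream_artifact_id).
    D runs only if B and C both passed AND consumed the same A artifact."""
    b_status, b_artifact = results["B"]
    c_status, c_artifact = results["C"]
    return b_status == "passed" == c_status and b_artifact == c_artifact

ok = should_run_d({"B": ("passed", "A#41"), "C": ("passed", "A#41")})
mixed = should_run_d({"B": ("passed", "A#41"), "C": ("passed", "A#40")})   # version skew
failed = should_run_d({"B": ("failed", "A#41"), "C": ("passed", "A#41")})  # red upstream
```

The artifact-identity half is the part older Jenkins plugin chains tended to get wrong: each downstream job would just grab "latest", so B and C could silently build different commits.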

quote:

One thing I'm having a bit of trouble with, and this was a problem on Jenkins too, is what to do with automating provisioning of the Vagrant managed build environments. At the moment I'm working on a script to checkout the vm repo then duplicate it N times (number of build agents I want) and use vagrant to provision a bunch of machines. It's got some smarts in it so that if I rerun it in the same workspace it'll avoid reprovisioning the environments that didn't change and disable/delete/vagrant destroy the ones that did before reprovisioning. I'm not running this script inside of GoCD but I suppose I could. Is there a better way to manage this issue? Should I be looking at using something like moving my shell script provisioning to Chef or whatever then making these machines with that instead of Vagrant? (The machines are all hosted on vSphere)

You're on the right track in that Vagrant is the wrong tool for what you're trying to do. Chef (or Ansible or Puppet or Salt) will do what you want once you've wrapped your head around how they actually work.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture
i want to get all state off a jenkins server because our provisioning is in a state of quantum uncertainty. ideally, you should be able to run a simple bootstrap script and there should be a provisioning server ready and waiting for you inside our vpc. however, i want to retain build logs and numbers. i looked at thinBackup but it seems pretty heavyweight for what i want. i thought about writing a plugin that writes logs to s3 and job numbers/status to postgres but i already manage like 15 postgres dbs and i don't want to add any more. is an ebs block store device mounted at /var/lib/jenkins/jobs a terrible, terrible idea or do you think i can get away with it? i'll just add a line to the script that shuts down any running provisioning server and detaches the block device before starting a new one and attaching it

anyways, bad plan?

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
What are you conceivably going to do with these logs? Do you really need full console logs and build numbers? Why?

That seems like a lot of effort for way too much data that's almost certainly worthless. What about generating / uploading a (junit) test report instead?

Or, do some groovy parsing and peel off what you actually need...

I get the log hoarder mentality, but unless you really do have the capability and manpower to go back and do heavy analysis with correlation to networking, storage, or whatever, with a feedback loop to actually drive change, it's kind of wasted. If your provisioning system really is that much of a mess, most likely you're going to get a shrug and a "well, it works now", so you might want to refocus your effort.

Also, build numbers aren't really useful in and of themselves, which is why I suggested a test report so you can tie it to whatever number is actually meaningful to your attempt - git id, tag, date or whatever.
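"Tie it to a number that's actually meaningful" can be as simple as composing the build identifier from git metadata instead of a CI auto-increment. A sketch (the format itself is just an example, not any tool's convention):

```python
import datetime

def build_id(tag: str, sha: str, when: datetime.datetime) -> str:
    """Compose a traceable build identifier from git metadata instead of
    an auto-incremented CI build number."""
    return f"{tag}+{when:%Y%m%d}.{sha[:8]}"

bid = build_id("v2.3.1",
               "9fceb02d0ae598e95dc970b74767f19372d61af8",
               datetime.datetime(2015, 7, 1))
# e.g. "v2.3.1+20150701.9fceb02d"
```

An id like that survives a Jenkins rebuild from scratch, which is exactly what the sequential build number doesn't: reset the server and `#347` means nothing, but the tag and commit hash still resolve in the repo.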

If you go down the road of trying to capture the exact state of the jobs dir, the first time you need to reset the build ids or clear the logs it's going to be a mess. You say you never need to do that, but there are some reasons why you might need to, anything from running out of inodes to re-engineering your jobs, or maybe a future version of Jenkins changes the format. You'd be painting yourself into a corner if you went by that method, never mind the extremely convoluted solution.

Bhodi fucked around with this message at 03:27 on Jul 1, 2015

the talent deficit
Dec 20, 2003

Bhodi posted:

What are you conceivably going to do with these logs? Do you really need full console logs and build numbers? Why?

That seems like a lot of effort for way too much data that's almost certainly worthless. What about generating / uploading a (junit) test report instead?

Or, do some groovy parsing and peel off what you actually need...

I get the log hoarder mentality, but unless you really do have the capability and manpower to go back and do heavy analysis with correlation to networking, storage, or whatever, with a feedback loop to actually drive change, it's kind of wasted. If your provisioning system really is that much of a mess, most likely you're going to get a shrug and a "well, it works now", so you might want to refocus your effort.

Also, build numbers aren't really useful in and of themselves, which is why I suggested a test report, so you can tie it to whatever identifier is actually meaningful to your project: git id, tag, date, or whatever.

If you go down the road of trying to capture the exact state of the jobs dir, the first time you need to reset the build ids or clear the logs it's going to be a mess. You say you never need to do that, but there are plenty of reasons why you might, anything from running out of inodes to re-engineering your jobs to a future version of Jenkins changing the format. You'd be painting yourself into a corner with that method, never mind the extremely convoluted solution.

you have made some really good points, particularly that no one is ever actually going to look at the logs and we should be using meaningful ids and not whatever garbage jenkins generates. i'm already enforcing that artifacts have to be saved somewhere other than the provisioning vm disk if you want to retain them, so i guess i could do the same with logs/reports. worst case i can always add a post-build step to the template job that moves logs to elasticsearch or something
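That post-build step can stay small. A minimal sketch in Python of the "key the log by a meaningful id" idea, where the job name, index name, and Elasticsearch URL are all hypothetical placeholders, not anything from the thread:

```python
import json
import urllib.request

def build_log_document(git_sha, job_name, log_text):
    """Package a console log keyed by a meaningful id (the git sha),
    not whatever build number Jenkins happened to assign."""
    return {
        "_id": git_sha,   # the meaningful identifier, per the post above
        "job": job_name,
        "log": log_text,
    }

def ship_to_elasticsearch(doc, base_url="http://elastic:9200/jenkins-logs/_doc"):
    """PUT the document under its git-sha id; URL and index are illustrative."""
    req = urllib.request.Request(
        f"{base_url}/{doc['_id']}",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)  # network call; not exercised here
```

Because the document id is the git sha, re-running the same commit overwrites its own log instead of piling up meaningless build numbers.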

TheresNoThyme
Nov 23, 2012
Anyone have experience using devops tools to stand up and support an internal cloud?

I'm being asked for input on doing so, and while my role is definitely not-devops-guy, it's also will-get-screwed-by-bad-infrastructure-decisions-guy. Assuming it stabilizes in the future, I figure Docker is a good starting point for enabling scalability, and it leaves the door open to an external cloud later. Even with its current issues, it seems pretty much too-big-to-fail at the moment.

When it comes to hooking docker into actual deployment and cloud management, though, I'm generally pretty distrustful of the whole "just write puppet scripts to do everything!" methodology, but when I look at what I'd consider more reliable, framework-style options (like, say, Kubernetes) there doesn't seem to be much maturity there either. Obviously it works fine for Google et al, but I'm not confident our internal team could be more successful using it in its current state than with a homebrew approach.

TheresNoThyme fucked around with this message at 14:59 on Jul 6, 2015

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
I'm a huge fan of Docker and we have a lot of stuff running in Docker in production, but it's mostly hand-tooled at the moment. The PaaS offerings for Docker are plentiful, but as you say, immature. Kubernetes and Mesos are your best bets for now, but they'll also require significant time investments. Mesos/Marathon is more of a SOA development framework than a PaaS.

If you're a relatively small shop without much experience, then I'd recommend Dockerizing-all-the-things (just because it frees Ops from package-management hell) and deploying with hand-tooled puppet scripts, with a view to replacing those scripts over time with whatever Docker PaaS ends up winning the race.
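For the curious, a hand-tooled Puppet deploy of a container can be as small as one resource from the garethr/docker module (later puppetlabs-docker); the image name, registry, and port below are made up for illustration, not a definitive setup:

```puppet
# assumes the garethr/docker (now puppetlabs-docker) module is installed;
# image name and ports are illustrative only
docker::run { 'webapp':
  image => 'internal-registry/webapp:latest',
  ports => ['8080:8080'],
}
```

The appeal of this approach is exactly what the post describes: the puppet script is disposable glue you can later swap for a PaaS without touching the images themselves.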

TheresNoThyme
Nov 23, 2012
Thanks for the input, that's the way I am leaning as well and it's nice to have input from someone already down the road on Docker. I had not heard of Mesos, will have to check that out.

It's a bit annoying because I just know I'm going to be back to "well it works on my local!" for the inevitable early slew of puppet script problems. I guess I just need to accept baby steps here and get the transparency docker provides, then worry about the other stuff later.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

TheresNoThyme posted:

Thanks for the input, that's the way I am leaning as well and it's nice to have input from someone already down the road on Docker. I had not heard of Mesos, will have to check that out.

It's a bit annoying because I just know I'm going to be back to "well it works on my local!" for the inevitable early slew of puppet script problems. I guess I just need to accept baby steps here and get the transparency docker provides, then worry about the other stuff later.

we run mesos in production and it's basically zero help. the stuff on mesos requires just as much hand holding as the stuff we deploy via jenkins/ansible

Lord Of Texas
Dec 26, 2006

Ithaqua posted:

The other big challenge is to get people to start using feature flags and short-lived dev branches so you can ship your code even if a feature is half-completed. The killer is usually database stuff -- it's hard to get into the mindset of never (rarely) introducing breaking database schema changes.

"Breaking" database schema changes can be part of a toggles/feature flags approach too, the key to making that easy is having an SOA architecture where you don't have 50 different apps reading and writing from the same database tables.

If you instead have your tables behind a service that manages them, you can work around those changes within the bounded context and ensure you're not impacting anything used in production (e.g. if someone added a not-null column that's not used yet, you can have your service insert default values into that column for the time being).

Of course, if you're refactoring the entire schema structure, your changes to the service itself are probably going to be too catastrophic to push to prod either; not everything fits neatly behind a feature flag.
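The "service fills in the new column until callers catch up" trick is the expand phase of the expand/contract migration pattern. A minimal sketch in Python, where the column name (`region`) and its default are hypothetical:

```python
NEW_COLUMN_DEFAULT = "unknown"  # hypothetical default for the not-yet-used column

def insert_user(row: dict) -> dict:
    """Expand phase: the schema already has a not-null 'region' column,
    but no caller sends it yet, so the service supplies a default before
    the write. In the later contract phase this shim is simply deleted."""
    row = dict(row)  # don't mutate the caller's dict
    row.setdefault("region", NEW_COLUMN_DEFAULT)
    return row
```

Because the shim lives in one service rather than in 50 apps, removing it later is a one-line change, which is the whole argument for hiding the tables behind a bounded context.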

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

TheresNoThyme posted:

Anyone have experience using devops tools to stand up and support an internal cloud?
Anything but OpenStack. :ptsd:

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Vulture Culture posted:

Anything but OpenStack. :ptsd:
Definitely. OpenStack is (mostly) fine to use, but only a masochist would want to manage it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

Definitely. OpenStack is (mostly) fine to use, but only a masochist would want to manage it.
(For anyone who is a masochist who wants to manage it, I've done every performance deep-dive there is to do. Ask away.)

Dirk Pitt
Sep 14, 2007

haha yes, this feels good

Toilet Rascal
So I spent part of the weekend setting up fastlane, a delivery mechanism for iOS, to work in our new app. Next up is getting a CI server that can do the work of calling the appropriate workflow.

Should I just setup a Jenkins server on one of our Mac minis and go to town?

Pollyanna
Mar 5, 2005

Milk's on them.


What options are there for centralized config options/keys? One of our projects relies on a single config file that's copied over to every new Dev machine, and when changes in the config file happen, it makes everyone else's outdated and causes problems with failing tests and poo poo. Is there a service that offers a "centralized" config file or ENV variables?

Hughlander
May 11, 2005

Pollyanna posted:

What options are there for centralized config options/keys? One of our projects relies on a single config file that's copied over to every new Dev machine, and when changes in the config file happen, it makes everyone else's outdated and causes problems with failing tests and poo poo. Is there a service that offers a "centralized" config file or ENV variables?

If the source requires it to build/run, and changes to it break things, you may consider it part of the source that needs control. Maybe by a source control system.

syphon
Jan 1, 2001
The idea is to treat all of your configs as code (which basically means put them in source control and run them through whatever "build/deploy/test" processes are applicable). There are various "Configuration Management" tools (chef, puppet, ansible, salt) that support and encourage these practices.
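For Pollyanna's specific problem, the smallest version of "configs as code" doesn't even need chef or puppet yet: commit a defaults file to the repo and let per-developer environment variables override it. A sketch, with the file name and keys invented for illustration:

```python
import json
import os

def load_config(path="config.defaults.json"):
    """Merge version-controlled defaults with environment overrides, so the
    'one config file copied to every dev machine' stops silently drifting."""
    with open(path) as f:
        config = json.load(f)
    for key in config:                               # env var wins if set
        env_val = os.environ.get(f"APP_{key.upper()}")
        if env_val is not None:
            config[key] = env_val
    return config
```

When the defaults change, everyone picks them up on the next pull, and machine-specific values live only in the environment instead of in an unversioned file.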


0zzyRocks
Jul 10, 2001

Lord of the broken bong

syphon posted:

The idea is to treat all of your configs as code (which basically means put them in source control and run them through whatever "build/deploy/test" processes are applicable). There are various "Configuration Management" tools (chef, puppet, ansible, salt) that support and encourage these practices.

I've been using Chef for a while now, and after you get past the learning curve it's really good. It's just a lot to take in at once... you have half a dozen command line tools, 3 available provisioners, cookbooks, recipes, attributes, environments, roles, nodes, files, templates, resources, LWRPs, etc. etc. etc. It can be pretty overwhelming, especially if you don't know Ruby either. But once you get into it, learn some of the best practices, write a couple cookbooks, and frequent the IRC channel, you're set. I'm helping the company I work at now design and implement webapp servers (LNMP stack mainly) managed by Chef, to begin the process of migrating from hand-managed servers to config-VCS-tested bliss. It's also a GREAT way to segue into contributing to open source, since everything Chef is on GitHub.

I'm even using it for side projects because, in concert with Vagrant, it makes setting up a local development instance a piece of cake when you need it to match existing production servers.
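For anyone staring down that learning curve, a recipe is just Ruby calling Chef's built-in resources; here's a non-runnable sketch (it only executes inside a chef-client run) of the nginx slice of the LNMP stack described above, with the template source name made up:

```ruby
# minimal Chef recipe sketch; 'nginx.conf.erb' is an illustrative template name
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]
end
```

Most of the scary vocabulary (attributes, roles, environments) layers on top of exactly this resource/notification core.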
