Dren
Jan 5, 2001

Pillbug
The project I'm working on is split into two deliverables (in separate git repos) where the second deliverable depends on the first. Jenkins is set up so that when it builds the second project it takes the necessary artifacts from the last successful build of the first project, and there's a way to kick off a build of everything that packages up all of the artifacts at the end and makes an ISO with all of the deliverables. This all works pretty OK. I'd like to be able to kick off the whole process on a tag instead of master in order to create a release. Is there any option besides duplicating all of the projects and pointing them at the tag instead of master?

Also, the jenkins vagrant plugin is kind of rear end; are there any alternatives? We ended up scripting some stuff inside the jobs to make vagrant work.


Dren
Jan 5, 2001

Pillbug

Bhodi posted:

You can set "Branch Specifier" as ${TAG_NAME} and set a string build parameter TAG_NAME with the default name as origin/master or origin/HEAD or whatever, and only change it when you want to build a release. Presumably the options are the same and you're using a post-build action to trigger parameterized build of the second job. You can add "Current build parameters" as an option and it'll forward the tag along.

Edit: You may also have to mess around with the refspec, because by default the git plugin may not fetch tags automatically depending on which version you're using. Setting refspec to "+refs/tags/*:refs/remotes/origin/tags/*" and "*/tags/${TAG_NAME}" as branch specifier should do the trick.

Never hooked vagrant up to jenkins, sorry.

I'll check that out, thanks

Dren
Jan 5, 2001

Pillbug
I've been trying out GoCD for the last couple of days as an alternative to Jenkins. My current project has four git repos that are relevant to a deployable build:
  • Source 1 repo
  • Source 2 repo (needs artifacts from Source 1 repo to build; this repo is an implementation of a public API from Source 1 repo)
  • VM repo (contains Vagrantfiles and bootstrap shell scripts for 3 different dev environments)
  • deploy repo (contains some scripts to glue together artifacts from the builds in each environment into an ISO)

We've already got most of a solution cobbled together in Jenkins, but the support for the vagrant stuff is not great, the support for passing artifacts between job stages is not great (it works, but eh..), and figuring out how to get Jenkins to build a release from a tag was an awful problem that looked like it'd require a ton of headache-inducing engineering.

So far GoCD seems like it has some niceties that Jenkins lacks. It puts the idea of a pipeline build, where artifacts are published between pipeline stages, right up front. Potentially solving the issue of building tagged releases, it can kick off an entire pipeline from a specific commit hash. It also puts all config into a single XML file that can be backed up and shared, which is much easier to deal with than whatever Jenkins does (last I looked, config was split across lots of directories). One thing that's a blessing/curse is that GoCD won't support a multi-line script right in a task the way Jenkins does. I shouldn't complain, I've said for years that keeping huge build scripts in Jenkins and outside of source control is a bad thing, but being restricted from scripting in there at all feels like tough love.

Something I was able to do with GoCD that was useful (and I guess I could've done this with Jenkins, since you can script anything in there) is set up a dummy git repo related to my project and a job that takes a source tarball from a directory on my machine rather than a git checkout. (GoCD requires some kind of data source for every pipeline; their word for this is "Material", and Materials are restricted to artifacts from other pipelines, SCM, or package repos.) Building from arbitrary stuff is incredibly useful for me in testing before I commit anything, especially since I've got three environments (one of which has three build configurations that must be run). Whenever I want to test something I run a script that publishes the source tarball and pushes a commit to the related git repo to trigger the GoCD build. I realize it breaks the whole idea of CD, where everything is traceable back to some origin point, but it's drat useful to be able to use the CD machinery to run my test builds. All in all I don't think it's so bad; I've isolated this piece off in a pipeline group just for me.
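
For reference, the trigger script is roughly this shape (a sketch only; the paths, repo, and filenames are placeholders, not our real ones):

code:
    #!/bin/sh
    # Package the working tree and poke the dummy git repo that the GoCD
    # pipeline watches as its Material. All names here are made up.
    set -e

    SRC_DIR="$HOME/projects/source1"
    DROP_DIR="/var/ci/drops"              # directory the pipeline job reads the tarball from
    TRIGGER_REPO="$HOME/ci/trigger-repo"  # dummy git repo registered as the Material

    tar -czf "$DROP_DIR/source1.tar.gz" -C "$SRC_DIR" .

    cd "$TRIGGER_REPO"
    date -u > last-trigger                # change a file so there's something to commit
    git add last-trigger
    git commit -m "trigger test build"
    git push origin master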

One thing I'm having a bit of trouble with, and this was a problem on Jenkins too, is how to automate provisioning of the Vagrant-managed build environments. At the moment I'm working on a script that checks out the VM repo, duplicates it N times (the number of build agents I want), and uses vagrant to provision a bunch of machines. It's got some smarts in it so that if I rerun it in the same workspace it'll avoid reprovisioning the environments that didn't change and vagrant destroy the ones that did before reprovisioning them. I'm not running this script inside of GoCD but I suppose I could. Is there a better way to manage this? Should I look at moving my shell-script provisioning to Chef or the like and building these machines with that instead of Vagrant? (The machines are all hosted on vSphere.)
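
The core loop of the script is shaped something like this (sketch only; N, the paths, and the repo URL are stand-ins, and the vSphere provider is assumed to be configured in the Vagrantfile):

code:
    #!/bin/sh
    # Provision N copies of the VM repo as build agents; reprovision only
    # the ones whose definition changed. Names and paths are invented.
    set -e
    N=4
    VM_REPO="git@git.internal:vm.git"
    WORKSPACE="$HOME/agents"

    for i in $(seq 1 "$N"); do
        dir="$WORKSPACE/agent-$i"
        [ -d "$dir" ] || git clone "$VM_REPO" "$dir"
        cd "$dir"
        old=$(git rev-parse HEAD)
        git pull --ff-only
        if [ "$old" != "$(git rev-parse HEAD)" ]; then
            vagrant destroy -f            # env definition changed: tear it down first
        fi
        vagrant up                        # no-op if the env is already up and unchanged
    done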

I've also got some VMs for testing deployments on my local machine that I'd like to transition over to the CD system. They're snapshotted CentOS and Solaris machines where I just restore the snapshots to the default state from right after the OS was installed. Not much to them.

One step I intend to take once I get things a bit more set up is to try unifying our repos in a master repo with subtrees. That way the master repo could be tagged for deliveries (keeping everything else in sync) rather than tagging each repo separately.
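
If the subtree route pans out, the unification itself should be about this much work (hypothetical repo URLs and prefixes):

code:
    # Pull each existing repo into the master repo as a subtree
    # (repo URLs and prefixes are placeholders).
    git subtree add --prefix=source1 git@git.internal:source1.git master
    git subtree add --prefix=source2 git@git.internal:source2.git master
    git subtree add --prefix=vm      git@git.internal:vm.git      master
    git subtree add --prefix=deploy  git@git.internal:deploy.git  master

    # After that, one tag on the master repo covers a whole delivery:
    git tag -a v1.0 -m "release 1.0"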

Another piece of our system is a roll-your-own artifact server where CentOS repos, OpenCSW repos, and MSYS2/MinGW64 repos are all snapshotted and mirrored. I'm interested to know whether any of the artifact server products can do something like continuously take updates from an upstream yum repo, so the mirror stays fully up to date, while still being able to present a server with a point-in-time snapshot of that repo so it can always reprovision the same way. I've set something like this up before for a RHEL-based product using a custom yum plugin. It was actually pretty cool, I think. This project doesn't do the snapshotted, constantly-updated yum stuff, but it does have artifacts. Would there be any reason to investigate an artifact server as opposed to plain nginx?

Dren
Jan 5, 2001

Pillbug

Plorkyeran posted:

FWIW this is all trivially doable with Jenkins (to the extent that anything involving Jenkins can be said to be trivial), but I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane.

I understand and I've done this sort of thing in Jenkins before. The end result with Jenkins was sort of complex and opaque. There's something to be said for the way GoCD presents a pipeline flow showing you everything upstream and downstream of a pipeline stage. Having a visualization of your complicated build be a key part of the app is really nice. To be fair, maybe that's available in Jenkins and I haven't seen it.

One thing I don't like about GoCD so far is that it doesn't seem to give you direct access to the workspace of your agents via the web client. It hasn't been a big problem so far, and I believe the idea is that you alleviate it by publishing test reports as artifacts, but I haven't gotten around to figuring out how to publish my test reports yet, and the quick-and-dirty solution of direct workspace access would have been adequate for me.

I'm toying with the idea of writing a pair of GoCD and reviewboard plugins to automatically build stuff that gets submitted for review, then show in reviewboard whether a review built properly or not. I'd need to rework our git stuff and test reviewboard a bit more before I tried it, though.

Vulture Culture posted:

TeamCity is free up to three build agents and very reasonably priced beyond that

I looked at TeamCity for about as long as it took to find the page with the pricing model. For my project I need lots of agents and I can't imagine the tool is worth being bound by the licensing restrictions. Maybe if they let you use 10 or 20 for free I'd have tried it, but I can't even get a full trial build of my stuff going in order to test out TeamCity without a minimum of 6 agents (many more if I want to really get things going).

Dren
Jan 5, 2001

Pillbug
What do people use to store artifacts between build steps?

My specific scenario is that I'm in Jenkins and I want there to be two build steps in my pipeline.
Step 1: if the relevant part of the source tree changed, create a vagrant box using packer and publish it somewhere.
Step 2: get the latest box from wherever it's published and do stuff.

My problem is I don't know where to put the box in between the steps.

stuff I've looked into:
  • As I understand it stash/unstash don't work because they don't persist between builds and because they're bad for large files.
  • The external workspace manager plugin seems like it might work but could be messy. It's also way overkill: I want to save off one file, not deal with the entire workspace persisting and whatever weird side effects that will have.
  • storing as a jenkins artifact and using the copy artifact plugin - seems like it might work if I hack around with ${BUILD_REVISION}
  • using apache archiva - I got archiva set up, used maven to deploy my artifact with mvn deploy:deploy-file, then... I'm stuck. There doesn't seem to be a way to download the artifact back out with maven. dependency:get fails because the artifact is not in central. dependency:copy fails because I don't have a pom file for my project (it's not a java project; I don't want a pom file). The archiva REST API gives me an error 204 when I try to download the artifact with wget.

Is there any way to get maven to work? My googling seemed to tell me that people use maven for non-java artifacts all the time and this would be no problem. Failing that, is there anything like archiva that has an interface for putting a file w/ a version and then also an interface for getting that file back?

Dren
Jan 5, 2001

Pillbug

Plorkyeran posted:

We store things as jenkins artifacts and use the copy artifact plugin and it's awful and a common source of spurious build failures.

Based on examples I saw I assumed that was the case; it's why I'm hunting for a different solution.

Dren
Jan 5, 2001

Pillbug
I got maven to behave. If you set up an archiva, here's what you need to do to deploy whatever you like into it. You need maven 3.1+ because of this bug. This assumes your archiva is running on archiva.myinternaldomain.com:8080, that you've set up a user with credentials jenkins/password, and that maven is installed. In this example I'm uploading my packer box.

code:
create ~/.m2/settings.xml:

    <settings>
        <servers>
            <server>
                <id>archiva.internal</id>
                <username>jenkins</username>
                <password>password</password>
            </server>
            <server>
                <id>archiva.snapshots</id>
                <username>jenkins</username>
                <password>password</password>
            </server>
        </servers>
    </settings>

upload packer artifact to archiva:                                                         
                                                                                
    cd ~/projects/packer
    mvn deploy:deploy-file -Dfile=packer_centos7_virtualbox.box -DrepositoryId=archiva.snapshots -Durl=http://archiva.myinternaldomain.com:8080/repository/snapshots/ -DgroupId=com.myinternaldomain.packer -DartifactId=packer-centos7-virtualbox -Dversion=1.0

look at packer artifact on the web:                                             
                                                                                
    http://archiva.myinternaldomain.com:8080/repository/snapshots
    http://archiva.myinternaldomain.com:8080/repository/snapshots/com/myinternaldomain/packer/packer-centos7-virtualbox/1.0/

download packer artifact:                                                       
                                                                                
    mvn org.apache.maven.plugins:maven-dependency-plugin:3.0.0:get -DremoteRepositories=http://archiva.myinternaldomain.com:8080/repository/snapshots -Dartifact=com.myinternaldomain.packer:packer-centos7-virtualbox:1.0:box
    mvn org.apache.maven.plugins:maven-dependency-plugin:3.0.0:copy -Dartifact=com.myinternaldomain.packer:packer-centos7-virtualbox:1.0:box -DoutputDirectory=. -Dmdep.stripVersion=true
Some things to note:
  • There might be a way to not store the pw in plaintext but it looked like a pita to set up.
  • You need to use the snapshots repository if you want to be able to reupload and overwrite the artifact (which is important for a CI pipeline). There seem to be two modes for maven repositories, snapshot and release. Snapshot lets you overwrite stuff, release is permanent (unless an admin intervenes).
  • The group id can be whatever you want. Maven seems to identify artifacts primarily by group id, artifact name, and version. There are some other fields like classifier; I saw something that said classifier should map to the extension of the artifact (default is jar).
  • Don't specify a classifier on the deploy-file step; it will mess with the filename and you won't be able to download the artifact with dependency:get.
  • Do specify the classifier on the dependency:get and dependency:copy steps.
  • The fully qualified name for the dependency plugin was necessary for me because the version I got when I used dependency:copy required a POM file to be in my project. The 3.0.0 version does not.
  • -Dmdep.stripVersion=true gives you back the original filename. If you don't use it -1.0 (or whatever your version is) gets appended to the filename.
  • The dependency:get step downloads the artifact to your local repository at ~/.m2/repository. That repository filling up could become a problem, I don't know. My solution will probably be to rm -rf it from time to time.

I haven't integrated this process with jenkins yet but I feel fairly good about it.


Dren
Jan 5, 2001

Pillbug
Can anyone share their thoughts on how containers turn updating from a system problem (run yum update once on the system) into an every-app problem? Is this not a big deal in practice, or does it turn out that updates don't happen as easily or as often as they should?

Dren
Jan 5, 2001

Pillbug
I'm kind of a docker noob, but can't you build a docker image that is fully provisioned, manually version it, publish it to a docker repository, and pull that for a CI build so you get a fresh one each time without eating the cost of provisioning? Building and publishing the image in CI is possible too: check if the current version is in the repository and, if not, have CI build it and push it.
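
Something like this, as a sketch (the registry, image name, and build script are all invented):

code:
    #!/bin/sh
    # Reuse a pre-provisioned build image; rebuild and push it only when
    # the Dockerfile changes. Registry and image names are placeholders.
    set -e
    IMAGE="registry.internal/ci/build-env"
    TAG="$(sha256sum Dockerfile | cut -c1-12)"   # "manual" version keyed off the Dockerfile

    if ! docker pull "$IMAGE:$TAG" >/dev/null 2>&1; then
        docker build -t "$IMAGE:$TAG" .
        docker push "$IMAGE:$TAG"
    fi

    # Fresh container each build, but no provisioning cost.
    docker run --rm -v "$PWD:/src" -w /src "$IMAGE:$TAG" ./build.sh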

Dren
Jan 5, 2001

Pillbug

EssOEss posted:

To have a second process for building the image? Sure you can, but there is no obvious need to unless you plan to reuse it (e.g. for a whole build cluster). Or do you have some benefit in mind that I do not immediately think of?

Harik said he wants to

Harik posted:

set up my build environment so that the pre-build state is cached and can spin up instantly.

Your process has these steps:

EssOEss posted:

  1. Build a Docker image that contains your toolchain and your inputs.
  2. Spawn a new container based on this image and have it execute the build process for your app.
  3. Mount directories on the host to capture build output files and/or reuse any temporary build files you wish to keep.

What I'm talking about is having step 1 be performed once, possibly by a CI step, every time the toolchain changes (which shouldn't be very often) instead of once for every build. This would meet Harik's goal of having the pre-build state cached so it can spin up instantly.

Dren
Jan 5, 2001

Pillbug

EssOEss posted:

Ah, I see. That would be taken care of by this part:


You get caching for free. No need to push any images for it.

Where will devs get the environments from? Do they have to build them themselves?

Dren
Jan 5, 2001

Pillbug
I have used Jenkins since it was Hudson, but I've probably only set up 5 or 6 projects on it in total and never to the level of deployment, just build and some test. I tried out GoCD for a project a few years ago and very much liked its UI, pipelines, automatically versioned configuration, and the concept that there is one artifact to ship between stages. But when I had a problem it was tough to google (Jenkins doesn't have this issue), and when I wanted a plugin to trigger builds from phabricator I needed to write it myself (again, not a problem with Jenkins, because it's the default so people have already written the plugins).

Once Jenkins got pipelines, which were my favorite part of GoCD, I went back to Jenkins. We have bitbucket, and the bitbucket multibranch project plugin is pretty good. It will pick up feature branches and PRs, build them in their own workspaces, and delete them when the branches go away. (It requires a Jenkinsfile in the top-level directory of the repo, so you have to use pipeline.) A nice consequence of this plugin is that Jenkinsfile changes can be developed on a feature branch so that master doesn't get cluttered with your failures.

As some of you have observed, Jenkins plugins mostly stink. I used to end up scripting many things myself inside of Jenkins (and consequently outside of SCM), but thanks to vagrant things aren't so bad anymore. My projects now have a scripts directory at the top level with scripts that launch vagrant-controlled envs to do the various build tasks, and my pipeline simply calls those scripts. Devs can call the scripts locally if they wish, or they can run the build commands directly if they need more control. If they don't know what order to run the commands in, worst case they can look at the Jenkinsfile. This means that all that needs to be installed on a Jenkins agent is vagrant and virtualbox, no other software stack. I would like to switch some stuff to docker but I haven't gotten there yet. Some other people at my company are working on it and I'm hoping to use their work.
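
As a sketch, each wrapper script is little more than this (the machine name and build command are made up for the example):

code:
    #!/bin/sh
    # scripts/build.sh - what the Jenkinsfile calls; devs can run it locally too.
    set -e
    cd "$(dirname "$0")/.."                  # run from the project root
    vagrant up build-env
    vagrant ssh build-env -c "cd /vagrant && make all"
    vagrant halt build-env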


Dren
Jan 5, 2001

Pillbug

Twlight posted:

How are you backing up your jenkins configs? We have a nightly job that copies them straight to SCM so we save all those job scripts. I've done a bunch of custom scripting within jenkins since, as you rightly said, the plugins can be hit or miss. We're finally moving to bitbucket, and the multi branch project plugin sounds great; I'd love to move to using more pipeline stuff, as currently our jobs are a bit too "one dimensional".

We have a not great setup and we're going to transition to something better. Right now it's:

* When I set Jenkins up I wrote a step-by-step of what I did, beginning with installing Ubuntu
* The ESXi VM containing Jenkins gets backed up
* Agents are not backed up at all.

We don't have very many jobs on there right now, but as we move more stuff over we'll obviously need a better solution. I'd like to have the Jenkins setup all be in Ansible, with that in SCM along with the Jenkins config files, so that the whole thing could be torn down and rebuilt if need be. I'd like the Ansible stuff for the agents to be in SCM as well. I'm not too worried about the jobs themselves, since multibranch bitbucket jobs are not very much configuration and the Jenkinsfile for each project is already in SCM.
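
In the meantime, a nightly copy-to-SCM job along the lines of what you described seems like cheap insurance. Something like this sketch (paths and the backup repo are invented, and it assumes the backup repo is already cloned):

code:
    #!/bin/sh
    # Hypothetical nightly backup of Jenkins config into a git repo.
    set -e
    BACKUP_REPO="$HOME/jenkins-config-backup"   # existing clone of the backup repo
    cd /var/lib/jenkins

    # Grab top-level config plus per-job config, skipping builds/workspaces.
    cp *.xml "$BACKUP_REPO/"
    rsync -a --exclude='builds/' --exclude='workspace/' \
          --include='*/' --include='*.xml' --exclude='*' \
          jobs/ "$BACKUP_REPO/jobs/"

    cd "$BACKUP_REPO"
    git add -A
    git commit -m "nightly jenkins config backup" || true   # no-op when nothing changed
    git push origin master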
