Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Anyone dug into Spinnaker's guts? It's a horror show of "I want this for my cloud" with the vaguest sense of cohesion.


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
As a user of Jenkins for nearly 6 years now (back when it was still known as Hudson), I can't really say it's great for a lot of software projects besides really bloated enterprise ones where you're totally cool with writing more code for the sake of automating more weird things in your build process. A lot of the plugins really piss me off (the SSH agent plugin still doesn't support the "new" ssh private key format, and the errors are completely bizarre).

Jenkins' scripted and declarative pipelines are where most competitors went philosophically (think Travis CI), but Jenkins is still the same under the hood, with that horrific XML-based configuration that defines what is ultimately a freestyle job. That technical baggage is of little value to you as a user and ultimately undermines the experience. Wrappers like Jenkins Job Builder can paper over it, but they never get away from this harsh reality. Even so, after writing 50+ Jenkins pipeline scripts of both types, I'd rather just use Jenkins Job Builder: https://docs.openstack.org/infra/system-config/jjb.html https://docs.openstack.org/infra/jenkins-job-builder/

Jenkins requires a lot of up-front investment - more than most other CI options nowadays - and 90%+ of the time I'd rather just build artifacts either with a bunch of shell scripts (the way developers have run things locally since the dawn of time) or with something that has time-saving opinions baked in. I really don't like picking and choosing my buggy plugin-of-the-week for test result parsing or tool installation.
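For reference, a JJB definition is just YAML that gets compiled into that freestyle-job XML; a minimal one looks something like this (job name, URL, and script path are placeholders):

```yaml
# Jenkins Job Builder definition; `jenkins-jobs update` renders this
# into freestyle-job XML and pushes it to the master.
- job:
    name: example-build
    description: 'Build and test (managed by JJB, do not edit in the UI)'
    scm:
      - git:
          url: https://example.com/repo.git
          branches:
            - master
    builders:
      - shell: |
          ./scripts/build.sh
```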

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
The more I use Jenkins the more I am convinced that if you can do something with a shell script rather than the Jenkins plugin built specifically for that task, you should.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

Plorkyeran posted:

The more I use Jenkins the more I am convinced that if you can do something with a shell script rather than the Jenkins plugin built specifically for that task, you should.

My philosophy as well. Only use a plugin if you need it for orchestrating multiple slaves or for improving the UI experience for developers. Script everything else.
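That philosophy in Jenkinsfile form is pretty short; each stage just shells out to a script in the repo (the script paths are made up), so the interesting logic stays in SCM and runs the same on a laptop:

```groovy
// Declarative pipeline where every stage delegates to a repo script,
// keeping the pipeline itself trivial. Script paths are illustrative.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './scripts/build.sh' }
        }
        stage('Test') {
            steps { sh './scripts/test.sh' }
        }
        stage('Publish') {
            steps { sh './scripts/publish.sh' }
        }
    }
}
```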

Docjowles
Apr 9, 2009

I have to assume that most people who manage non-trivial Jenkins deployments hate it. It's a pain in the rear end for all the reasons cited already. Plus performance can get ungodly slow, though someone linked an amazingly in-depth blog post on Jenkins GC tuning a while ago, and you are my hero for that.

The problem is that the list of strictly better open source projects out there with comparable feature sets is as follows:

Uhhhh... *Beavis and Butthead laugh* Yeah.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

As a user of Jenkins for nearly 6 years now (back when it was still known as Hudson), I can't really say it's great for a lot of software projects besides really bloated enterprise ones where you're totally cool with writing more code for the sake of automating more weird things in your build process. A lot of the plugins really piss me off (the SSH agent plugin still doesn't support the "new" ssh private key format, and the errors are completely bizarre).

Jenkins' scripted and declarative pipelines are where most competitors went philosophically (think Travis CI), but Jenkins is still the same under the hood, with that horrific XML-based configuration that defines what is ultimately a freestyle job. That technical baggage is of little value to you as a user and ultimately undermines the experience. Wrappers like Jenkins Job Builder can paper over it, but they never get away from this harsh reality. Even so, after writing 50+ Jenkins pipeline scripts of both types, I'd rather just use Jenkins Job Builder: https://docs.openstack.org/infra/system-config/jjb.html https://docs.openstack.org/infra/jenkins-job-builder/

Jenkins requires a lot of up-front investment - more than most other CI options nowadays - and 90%+ of the time I'd rather just build artifacts either with a bunch of shell scripts (the way developers have run things locally since the dawn of time) or with something that has time-saving opinions baked in. I really don't like picking and choosing my buggy plugin-of-the-week for test result parsing or tool installation.
Is this still your take with the Blue Ocean stuff, or is this take solely restricted to old-style pipeline management? I'm looking for a decent CI setup for Chef and other infrastructure code.

Erwin
Feb 17, 2006

Vulture Culture posted:

Is this still your take with the Blue Ocean stuff, or is this take solely restricted to old-style pipeline management? I'm looking for a decent CI setup for Chef and other infrastructure code.

Jenkins works fine for cookbooks. Chef works around a lot of the annoyances of Jenkins because it handles its own upstream dependency 'artifact' juggling. You only need one Jenkinsfile for all of your cookbooks, especially if you use a sort of feature-flag setup to turn different steps on or off (for instance, linting with rubocop or cookstyle, but not both). You can put a config file of some sort in each cookbook repo to specify which steps to enable.
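A sketch of that shared-Jenkinsfile-plus-config-file idea (the file name, keys, and stage are invented; `readJSON` comes from the Pipeline Utility Steps plugin):

```groovy
// Hypothetical: each cookbook repo carries a ci-config.json like
//   {"lint": "cookstyle"}
// and the one shared Jenkinsfile branches on it per repo.
pipeline {
    agent any
    stages {
        stage('Lint') {
            steps {
                script {
                    def cfg = readJSON file: 'ci-config.json'
                    if (cfg.lint == 'cookstyle') {
                        sh 'cookstyle .'
                    } else if (cfg.lint == 'rubocop') {
                        sh 'rubocop .'
                    } // else: repo opted out of linting
                }
            }
        }
    }
}
```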

If you're defining everything as code (Jenkinsfiles, etc) then Blue Ocean is mostly about looking nice I think. But, if you want to configure each job through the GUI, then you'll get more out of it. I don't do that so I don't have much to say about Blue Ocean.

If 'other infrastructure code' is Terraform, check out kitchen-terraform: https://github.com/newcontext-oss/kitchen-terraform

Using a cookbook to install and configure Jenkins is a whole other level of frustration. The key things are that new versions of Jenkins often break Chef's official Jenkins cookbook, and they don't care about fixing it. Also, installing plugins with dependencies takes forever (literally hours to days) because the cookbook doesn't handle dependency resolution. You're better off gathering a list of all the plugins you want plus their dependencies and having your wrapper cookbook install each one without dependencies. There's an easy way to get that list with a Groovy script from a running Jenkins instance (you'd spin up a temporary Jenkins master, hand-pick your plugins and install them, then get that list and plop it into an attribute in your cookbook). But really, running the Jenkins master in a Docker container is far less annoying, just because of the way the official image handles plugin installation.
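The script-console snippet for dumping that plugin list is along these lines (run it in Manage Jenkins -> Script Console on the throwaway master, then paste the output into your cookbook attribute):

```groovy
// Dump every installed plugin (dependencies included) at its pinned
// version, one per line, e.g. "git:3.3.0".
Jenkins.instance.pluginManager.plugins
    .sort { it.shortName }
    .each { println "${it.shortName}:${it.version}" }
```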

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Erwin posted:

If you're defining everything as code (Jenkinsfiles, etc) then Blue Ocean is mostly about looking nice I think. But, if you want to configure each job through the GUI, then you'll get more out of it. I don't do that so I don't have much to say about Blue Ocean.
I completely disagree with this sentiment because unless you're using something that spits out the fully-formed XML, like Job Builder, the Groovy pipeline scripts are by far the best way to couple the jobs with the code they manage within your change control.

It's true it has some pretty features like tracking and displaying times of individual stages, but that's secondary IMO to dynamically generated jobs that can have actual logic paths, which is a huge step above the previous extremely crude job chaining based on successful return codes.

Yeah, don't use Chef to install/manage the Jenkins instance itself; you're asking for heartache. I used Ansible and built a pretty tight deploy script on CentOS 7. I'll share if you need it - PM me. Docker would be equally good; doing it over again, I'd probably use Docker.

Bhodi fucked around with this message at 16:21 on Jul 28, 2017

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe
If anyone is still using Puppet, we've come up with a pretty nice way to handle testing: Puppet slaves and masters are built in containers via Jenkins, then after builds everything gets tested via serverspec. This has made things a lot smoother than what we were doing previously, which was just throwing things into production for funsies.
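For anyone curious, a serverspec check for a Puppet master image looks roughly like this (package, service, and port names are my guesses, not Twlight's setup; it needs the serverspec gem and a built container/host to run against):

```ruby
require 'serverspec'

# :exec runs the checks against the local machine (e.g. inside the
# freshly built container).
set :backend, :exec

describe package('puppetserver') do
  it { should be_installed }
end

describe service('puppetserver') do
  it { should be_enabled }
  it { should be_running }
end

describe port(8140) do
  it { should be_listening }
end
```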

Erwin
Feb 17, 2006

Bhodi posted:

I completely disagree with this sentiment because unless you're using something that spits out the fully-formed XML, like Job Builder, the Groovy pipeline scripts are by far the best way to couple the jobs with the code they manage within your change control.
I think that's what I was saying though? What I meant is that if you're using the GUI to configure bespoke jobs, you'll get more out of Blue Ocean than just a pretty interface. You'll get more out of Jenkins as a whole with Jenkinsfiles in your repos instead of doing anything manually. I think we're on the same page. However, the extent of my Blue Ocean experience is opening a job in Blue Ocean to see a better view of the pipeline flow. I've done nothing else in Blue Ocean specifically.

edit:

ultrabay2000 posted:

Can anyone offer any pros/cons of GoCD compared to Jenkins currently? We're evaluating both of these. GoCD seems to have a nicer UI out of the box but Jenkins is a lot more widely used. It seems GoCD was particularly strong with value streams but Jenkins has made progress on that.

I'm leaning towards Jenkins because it's free and more widely used but we use GoCD heavily already.
I did a proof-of-concept buildout with GoCD at my old job. I put everything we did in Hudson into GoCD (Java, Perl, Python, MATLAB, SQL, and Node.js) and it all worked, and GoCD is way better looking. We then went to Jenkins not because GoCD didn't work, but because it was a small team and Jenkins is just so much more Googleable. GoCD's forums were (are) in Google Groups and it was very hard to find any answers. With Jenkins, it's almost guaranteed that whatever you're trying to do has been done and discussed online by countless other people.

edit2: Also GoCD expects all of your application servers to be managed by GoCD. It's not necessary, but I think that's their philosophy.

Erwin fucked around with this message at 16:43 on Jul 28, 2017

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Erwin posted:

If 'other infrastructure code' is Terraform, check out kitchen-terraform: https://github.com/newcontext-oss/kitchen-terraform
This isn't what I was actually looking to test, but this looks insanely useful for a litany of other reasons

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
There are two major components to Blue Ocean: the use of the in-repo Jenkinsfile to configure things, and the pretty new UI.

Jenkinsfiles are an improvement, but don't really solve any of the actual problems I have with Jenkins. You can't run a Jenkinsfile locally, so the fact that you can put logic in them is just as much of a trap as it always has been. Jobs which were working yesterday will still break tomorrow because someone updated a plugin that you aren't even using.

The new UI is a broken pile of garbage. It looks better than the old UI, but is incredibly slow and buggy.

Erwin
Feb 17, 2006

Plorkyeran posted:

There are two major components to Blue Ocean: the use of the in-repo Jenkinsfile to configure things, and the pretty new UI.

Jenkinsfiles are an improvement, but don't really solve any of the actual problems I have with Jenkins. You can't run a Jenkinsfile locally, so the fact that you can put logic in them is just as much of a trap as it always has been. Jobs which were working yesterday will still break tomorrow because someone updated a plugin that you aren't even using.

The new UI is a broken pile of garbage. It looks better than the old UI, but is incredibly slow and buggy.

Jenkinsfile/groovy pipeline definition is part of the Pipeline plugin, not Blue Ocean. You can use them without installing Blue Ocean. I do agree that not being able to run pipeline code locally sucks, and most new pipelines have a few failed runs at the beginning while you iterate and push like a chump. If I need to do something complicated, I usually try to put as much of the logic as I can in a Rakefile or equivalent, which I can test locally, then just call rake tasks from the pipeline.
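The same pattern works with plain shell instead of rake: keep the logic in a script that both developers and the pipeline call, so it's testable locally. A sketch (script name and tasks are invented):

```shell
#!/bin/sh
# ci/build.sh (name invented) -- all the real logic lives here, so it
# runs the same on a laptop as from a `sh './ci/build.sh'` pipeline step.
set -eu

lint() { echo "linting..."; }        # stand-in for rubocop/shellcheck/etc.
unit() { echo "running tests..."; }  # stand-in for the real test runner
pkg()  { echo "packaging..."; }      # stand-in for artifact creation

task="${1:-all}"
case "$task" in
  lint)    lint ;;
  test)    unit ;;
  package) pkg ;;
  all)     lint; unit; pkg ;;
  *) echo "unknown task: $task" >&2; exit 1 ;;
esac
```

The pipeline stays a dumb dispatcher, and a failed run means the script is broken, not the Jenkinsfile.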

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
One of the recommendations I read from the Cloudbees folks is that you should be using Jenkins pipeline jobs to orchestrate across nodes primarily while your Jenkins freestyle jobs and your build systems like Rake, make, Gradle, Maven, etc. are what should be the fundamental build unit. Given how easy it is to show several jobs in parallel in a pipeline job (I still dislike the trigger and blocking conventions) that seems to make some sense and it also shows what the developers' priorities have been in the design of pipelines.

Vulture Culture posted:

Is this still your take with the Blue Ocean stuff, or is this take solely restricted to old-style pipeline management? I'm looking for a decent CI setup for Chef and other infrastructure code.
As said above, Blue Ocean is a separate view layer for Jenkins, requiring per-branch Jenkinsfiles, that takes the place of the various terrible-looking build pipeline dashboards out there. In vanilla Jenkins, I spent a few days trying to get some of those dashboards installed to show several build pipelines at once on our office TVs, ordered by different criteria, and I just hated the experience. Unfortunately, Blue Ocean is also super limited because it only worked for Maven builds and JUnit, so it appears to be highly coupled to the tooling, and this is a problem if you're looking for more flexibility, which should be 99%+ of Jenkins users out there now, I hope. It is total demo-ware and pie-in-the-sky for the realities of the horrific war-crime tribunal of software at my employer.


Also, I think Jenkins will become less and less relevant anywhere but the largest companies as people start using containers as build environments, so you can use CI like Drone.io (or something similar but hosted). I freakin' hate trying to set up auto-install tooling in Jenkins, and I'm tired of spending weeks trying to set up rvm and rbenv and pip w/ virtualenv and whatever other build system will fail to work in Jenkins like it does on a developer laptop, while people are confused about why it takes so much effort to make jobs in Jenkins.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
I started trying to rig up a combo Terraform+Ansible AWS setup... does everyone really just use local-exec to run ansible-playbook? feels too easy, like a trap

it's strange, i like both tools so far but i cannot get my head around the notion of arbitrary clients running arbitrary versions of them connecting into live poo poo, seems bananas. but if you try to have a dedicated node for it, all of a sudden you're in a recursive spiral.
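Yes, the usual pattern really is that simple: a local-exec provisioner that shells out to ansible-playbook once the instance exists. A sketch in 0.x-era syntax (AMI, playbook, and resource names are placeholders):

```hcl
# Terraform brings the instance up...
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# ...then a local-exec provisioner configures it with Ansible. The
# trailing comma makes the IP a one-host inline inventory.
resource "null_resource" "configure" {
  triggers = {
    instance_id = "${aws_instance.app.id}"
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i '${aws_instance.app.public_ip},' site.yml"
  }
}
```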

Dren
Jan 5, 2001

Pillbug
I have used Jenkins since it was Hudson, but I've probably only set up 5 or 6 projects on it in total, and never to the level of deployment, just build and some test. I tried out GoCD for a project a few years ago and very much liked its UI, pipelines, automatically versioned configuration, and the concept that there is one artifact to ship between stages. But when I had a problem it was tough to Google (Jenkins doesn't have this issue), and when I wanted a plugin to trigger builds from Phabricator I needed to write it myself (again, not a problem with Jenkins, because it's the default so people have already written plugins). Once Jenkins got pipelines, which were my favorite part of GoCD, I went back to Jenkins. We have Bitbucket, and the Bitbucket multibranch project plugin is pretty good. It will pick up feature branches and PRs, then build them in their own workspaces and delete them when the branches go away. (It requires a Jenkinsfile in the top-level directory, so you have to use pipelines.) A nice consequence of this plugin is that Jenkinsfile changes can be developed on a feature branch so that master doesn't get cluttered with your failures.

Like some of you have observed Jenkins plugins mostly stink. I used to end up scripting many things myself inside of Jenkins (and consequently outside of SCM) but thanks to vagrant things aren't so bad anymore. My projects now have a scripts directory at the top level with scripts that launch vagrant controlled envs to do the various build tasks and my pipeline simply calls those scripts. Devs can call the scripts locally if they wish or they can run the build commands locally if they need more control. If they don't know what order to run the commands in then worst case they can look at the jenkinsfile. This means that what needs to be installed on a Jenkins agent is vagrant and virtualbox, no other software stack. I would like to switch some stuff to docker but I haven't gotten there yet. Some other people at my company are working on it and I'm hoping to use their work.

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe

Dren posted:

I have used Jenkins since it was Hudson, but I've probably only set up 5 or 6 projects on it in total, and never to the level of deployment, just build and some test. I tried out GoCD for a project a few years ago and very much liked its UI, pipelines, automatically versioned configuration, and the concept that there is one artifact to ship between stages. But when I had a problem it was tough to Google (Jenkins doesn't have this issue), and when I wanted a plugin to trigger builds from Phabricator I needed to write it myself (again, not a problem with Jenkins, because it's the default so people have already written plugins). Once Jenkins got pipelines, which were my favorite part of GoCD, I went back to Jenkins. We have Bitbucket, and the Bitbucket multibranch project plugin is pretty good. It will pick up feature branches and PRs, then build them in their own workspaces and delete them when the branches go away. (It requires a Jenkinsfile in the top-level directory, so you have to use pipelines.) A nice consequence of this plugin is that Jenkinsfile changes can be developed on a feature branch so that master doesn't get cluttered with your failures.

Like some of you have observed Jenkins plugins mostly stink. I used to end up scripting many things myself inside of Jenkins (and consequently outside of SCM) but thanks to vagrant things aren't so bad anymore. My projects now have a scripts directory at the top level with scripts that launch vagrant controlled envs to do the various build tasks and my pipeline simply calls those scripts. Devs can call the scripts locally if they wish or they can run the build commands locally if they need more control. If they don't know what order to run the commands in then worst case they can look at the jenkinsfile. This means that what needs to be installed on a Jenkins agent is vagrant and virtualbox, no other software stack. I would like to switch some stuff to docker but I haven't gotten there yet. Some other people at my company are working on it and I'm hoping to use their work.

How are you backing up your Jenkins configs? We have a nightly job which copies them straight to SCM so we save all those job scripts. I've done a bunch of custom scripting within Jenkins since, as you rightly said, the plugins can be hit or miss. We're finally moving to Bitbucket, and the multibranch project plugin sounds great; I'd love to move to using more pipeline stuff, as currently our jobs are a bit too "one-dimensional".

Dren
Jan 5, 2001

Pillbug

Twlight posted:

How are you backing up your Jenkins configs? We have a nightly job which copies them straight to SCM so we save all those job scripts. I've done a bunch of custom scripting within Jenkins since, as you rightly said, the plugins can be hit or miss. We're finally moving to Bitbucket, and the multibranch project plugin sounds great; I'd love to move to using more pipeline stuff, as currently our jobs are a bit too "one-dimensional".

We have a not great setup and we're going to transition to something better. Right now it's:

* When I set Jenkins up I wrote a step by step of what I did beginning at installing ubuntu
* The ESXi VM containing Jenkins gets backed up
* Agents are not backed up at all.

We don't have very many jobs on there right now but as we move more stuff on there we'll obviously need a better solution. I'd like to have the Jenkins setup all be in Ansible and that be in SCM along with the Jenkins config files so that the whole thing could be torn down and rebuilt if need be. I'd like Ansible stuff for the agents to be in SCM as well. I'm not too worried about the jobs themselves since multibranch bitbucket jobs are not very much configuration and the Jenkinsfile for each project is already in SCM.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Twlight posted:

How are you backing up your Jenkins configs? We have a nightly job which copies them straight to SCM so we save all those job scripts. I've done a bunch of custom scripting within Jenkins since, as you rightly said, the plugins can be hit or miss. We're finally moving to Bitbucket, and the multibranch project plugin sounds great; I'd love to move to using more pipeline stuff, as currently our jobs are a bit too "one-dimensional".
Those are great for backing up the jobs themselves, but for Jenkins itself you're still looking at tar/scp or using one of their plugin wrappers, which do the same thing. I use Ansible, which I pull into a local repository, then check in / tag to keep our dev/prod in sync. I have an Ansible job for a pull and one for a push, so I can manually update dev, tweak it to where it's good, then pull it locally, check it in, then push/install it to prod. I have a .gitignore to prevent pushing anything secret into SCM, so initial pushes have to be from the working directory directly. It's kinda half-assed, but it worked for me, since I was the only one managing it.

Here's my pull as an example; the push is the same, just reversed. I also have a separate install script which installs Jenkins from scratch and sets up the keys and such.
pre:
---
- name: Get plugin list
  shell: "ls -1 {{ jenkins_dir }}/plugins/*.jpi*"
  register: plugin_list
  changed_when: false

- name: Pull plugins
  fetch: flat=yes src="{{ item }}" dest="files/plugins/" fail_on_missing=yes
  with_items: "{{ plugin_list.stdout_lines }}"

- name: Get jobs list
  shell: "cd {{ jenkins_dir }} && ls -1 {{ jenkins_dir }}/jobs"
  register: jenkins_jobs
  changed_when: false

- name: Pull jobs
  fetch: flat=yes src="{{ jenkins_dir }}/jobs/{{ item }}/config.xml" dest="files/jobs/{{ item }}.xml" fail_on_missing=yes
  with_items: "{{ jenkins_jobs.stdout_lines }}"

- name: Get XML list
  shell: "ls -1 {{ jenkins_dir }}/*.xml"
  register: xml_list

- name: Pull XML
  fetch: flat=yes src="{{ item }}" dest="files/xml/" fail_on_missing=yes
  with_items: "{{ xml_list.stdout_lines }}"

- name: Get secrets list
  shell: "ls -p {{ jenkins_dir }}/secrets | grep -v '/$'"
  register: secrets_list

- name: Pull secrets
  fetch: flat=yes src="{{ jenkins_dir}}/secrets/{{ item }}" dest="files/secrets/" fail_on_missing=yes 
  with_items: "{{ secrets_list.stdout_lines }}"

- name: Pull secret.key
  fetch: flat=yes src="{{ jenkins_dir }}/secret.key" dest="files/secret.key" fail_on_missing=yes

- name: Get user list
  shell: "cd {{ jenkins_dir }} && ls -1 {{ jenkins_dir }}/users"
  register: jenkins_users
  changed_when: false

- name: Pull users
  fetch: flat=yes src="{{ jenkins_dir }}/users/{{ item }}/config.xml" dest="files/users/{{ item }}.xml" fail_on_missing=yes
  with_items: "{{ jenkins_users.stdout_lines }}"
Note that even using pipelines, you still have to save the job xml which says "I'm a pipeline script! I look for my Jenkinsfile in git://..."

I should probably just stick the whole thing on git, but :effort:. I'd do it if anyone would use it, I guess

Bhodi fucked around with this message at 18:13 on Jul 29, 2017

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Code re-use for devops config type stuff is so much worse than in regular software it's mind-boggling.

Has anyone found a decent Terraform module or plugin for generating CloudFormation blueprints? How about a Terraform plugin that can do multi-stage deployment variants like swapping ASGs, blue-green deploys, slow roll-outs, etc.? I don't think even enterprise Terraform does this stuff yet and that bothers me more than it should. We have a tool that's basically a ton of horrific ERBs cobbled together loosely with our Puppet modules and it generates CF blueprints while half-assing some deployment methods (but better than what Terraform has, sadly), and I'd like to have a migration path off of it to Terraform while preserving our existing templates to some extent.

Skier
Apr 24, 2003

Fuck yeah.
Fan of Britches
As far as I know, Terraform wants to own all of the state for its resources, which prevents using it with CloudFormation templates.

Troposphere (https://github.com/cloudtools/troposphere) works well enough for making CloudFormation templates from Python code instead of hand-coding CloudFormation. At the past few jobs, we've checked in the Troposphere code to do infra as code.
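What Troposphere buys you is typed Python classes with validation on top of what is, underneath, just a dict serialized to JSON. A stdlib-only sketch of that underlying idea (resource and bucket names are invented):

```python
import json

# Hand-building the JSON that troposphere's typed classes (Template,
# s3.Bucket, ...) would emit for you; names here are illustrative.
def make_template():
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ArtifactBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": "example-artifacts"},
            }
        },
    }

template_json = json.dumps(make_template(), indent=2)
print(template_json)
```

Because it's ordinary Python either way, the template code can be unit-tested like any other module before a stack ever launches.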

A few years ago I used Route53 weighted CNAMEs to do slow deploys. The new stack comes up with a CNAME matching the existing one (fooservice.whatever.io). The old service has a weight of 100, the new service 0. Gradually shift the weights toward the new stack and eventually remove the old service version's CloudFormation stack.

This is a bit of an old way of doing things, but it works well for blue/green deploys. AWS CodeDeploy has rolling deploys and ECS does as well; I think CodeDeploy has more nuanced options available. This video from re:Invent 2015 has good info: https://www.youtube.com/watch?v=A4NSyUbAEkw
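The weight math behind that is simple: traffic splits in proportion to record weights, so moving weight from the old record to the new one shifts traffic gradually. A toy model (record names and numbers are invented, not real Route53 behavior in every detail):

```python
import random

# Toy weighted-record resolver: each lookup picks a record with
# probability proportional to its weight.
def pick_endpoint(weights, rng=random.random):
    total = sum(weights.values())
    r = rng() * total
    for name, weight in weights.items():
        r -= weight
        if r < 0:
            return name
    return name  # guard against floating-point edge cases

# Start: everything goes to the old stack.
step_one = {"old-stack": 100, "new-stack": 0}
# Later: shift roughly a quarter of traffic to the new stack.
step_two = {"old-stack": 75, "new-stack": 25}
```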

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Keep in mind that there are some major issues with scaling CloudFormation past a certain point. If you're already getting started with Terraform you may as well just do things the Terraform way and avoid those problems altogether.

Also, don't do scaling groups with Terraform. Consider it this way: Terraform is how you set up your immutable-ish infrastructure; it's where your IAM roles, security groups, load balancers, S3 buckets, and the like go.

Use something like Spinnaker to manage clustered services and their configuration instead.
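In other words, keep Terraform scoped to the long-lived pieces, roughly like this (all names and IDs are placeholders):

```hcl
# Slow-changing infrastructure only; ASGs and deploys live elsewhere
# (e.g. Spinnaker). Everything below is illustrative.
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = "vpc-12345678"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts"
}
```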

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe
Is anyone doing anything with the AWS spot market? I've got things going where we check the spot market, then template our Terraform configs for our testing scaling groups, then spin them up. It's nice that we can get extra capacity much cheaper with the spot market. I've been saving the prices in Elasticsearch as well so we have some historical data to look at; not that anyone is ever interested, but it's neat to look at the trends.
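The "check the market, then pick a bid" step reduces to a pure function. One possible rule (this is invented, not Twlight's actual logic): bid a margin above the recent price peak so brief spikes don't kill the instances, capped at the on-demand price so you never overpay.

```python
# Hypothetical bid-picking rule for spot instances.
def spot_bid(recent_prices, on_demand_price, margin=1.2):
    if not recent_prices:
        return on_demand_price  # no history: fall back to on-demand
    return min(max(recent_prices) * margin, on_demand_price)

# e.g. recent peak $0.12/hr against $0.50/hr on-demand -> bid $0.144/hr
```

A pure function like this is also trivial to backtest against the historical prices already sitting in Elasticsearch.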

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I used Troposphere for some personal demos in the past but never thought it'd work particularly well for a larger codebase going forward, and never tried to adopt it professionally. Most of the other opsy people here aren't fundamentally coders, and it's usually easier to push something that has more documentation (also, I don't want to answer Python questions all day). I totally forgot that Troposphere works for OpenStack, though, so it may work well enough for me to use it for service stacks instead of Terraform. So that would be Terraform for the infrastructure stack, and each service's stack could be maintained as testable Python. This makes too much sense, so it won't fly here, I think.

But really, I can't do basically 90%+ of what most shops on AWS do, for various reasons ranging from "we're moving off of AWS eventually, don't give them more money" to "we seem to make the literal worst possible technical decisions but somehow keep the lights on." For example, for each of my deployments I have to file a DNS change ticket and wait for someone to change the A records and CNAMEs. All the problems of classic IT, few of the advantages of cloud.

Blinkz0rz posted:

Also, don't do scaling groups with Terraform. Consider it this way: Terraform is how you set up your immutable-ish infrastructure; it's where your IAM roles, security groups, load balancers, S3 buckets, and the like go.
We're familiar with some of the scalability limits of CloudFormation because of how our stack generators work. We ran out of stack outputs due to how we were giving a subnet for every service in a 15+ service system, for example. But at least CFN supports rollbacks... usually.

I've got Spinnaker and Urbancode Deploy POCs on my roadmap out into next year but there's no point if the software is super stateful like the mess here. Postgres as service discovery, wtf. I'm about to use this https://github.com/adrianlzt/hiera-postgres-backend :smithicide:

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

necrobobsledder posted:

Postgres as service discovery, wtf. I'm about to use this https://github.com/adrianlzt/hiera-postgres-backend :smithicide:

:wtf: my brother have you heard of zookeeper, etcd, or consul?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I know about them and know how to use them (I've designed single-button deploys of 20+ ZK node clusters with 500+ clients before for Hadoop). The developers.... do not and have no interest in doing so. Management has nothing on the roadmap with the words "service discovery" in it at all.

I'm trying to cloud-wash / port a legacy system that keeps track of system configuration in the same database where business transactions happen (it's a Grails app that's grown to monstrous size as a batch and event processing system frontend, but nobody has learned anything other than Grails and other web transaction stacks for 8 years). There's a unique table for each service type; when you boot a new node, it calls the uber-database, adds itself to its respective tables if it doesn't find its self-assigned ID in the rows, and ops customizes its always-unique, customer-specific configuration in the UI. The way to automate replacement of an existing node is, upon boot, to run a DB transaction that finds the primary key of a node with the roles you're replacing, deletes the old node's row, and modifies the primary key of the node that just registered to match the deleted one (the UUID is separate from the row ID).

These are all solved problems with ZK and friends but I think it'll be a cold day in hell before we get around to service discovery of any sort, so this will be a long-term approach that works at the "scale" here.
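The replace-a-node transaction described above can be sketched against an in-memory SQLite table (the schema, role name, and UUIDs are all invented):

```python
import sqlite3

# Minimal stand-in for the "uber-database" node registry.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE nodes (
    row_id    INTEGER PRIMARY KEY,
    node_uuid TEXT UNIQUE,
    role      TEXT)""")

def register(uuid, role):
    """A booting node adds itself if its UUID isn't already present."""
    conn.execute(
        "INSERT OR IGNORE INTO nodes (node_uuid, role) VALUES (?, ?)",
        (uuid, role))
    conn.commit()

def replace_node(new_uuid, role):
    """Take over the row id (and thus the config) of the old node."""
    with conn:  # one transaction: commits on success, rolls back on error
        row = conn.execute(
            "SELECT row_id FROM nodes WHERE role = ? AND node_uuid != ?",
            (role, new_uuid)).fetchone()
        if row is None:
            return None
        old_id = row[0]
        conn.execute("DELETE FROM nodes WHERE row_id = ?", (old_id,))
        conn.execute("UPDATE nodes SET row_id = ? WHERE node_uuid = ?",
                     (old_id, new_uuid))
        return old_id

register("uuid-old", "batch-worker")  # the node being replaced
register("uuid-new", "batch-worker")  # the freshly booted node
took_over = replace_node("uuid-new", "batch-worker")
```

It works at small scale, but it has all the race conditions that ZK/etcd/Consul leases and ephemeral nodes exist to solve.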

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
goondolances

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

I know about them and know how to use them (I've designed single-button deploys of 20+ ZK node clusters with 500+ clients before for Hadoop). The developers.... do not and have no interest in doing so. Management has nothing on the roadmap with the words "service discovery" in it at all.

I'm trying to cloud-wash / port a legacy system that keeps track of system configuration in the same database where business transactions happen (it's a Grails app that's grown to monstrous size as a batch and event processing system frontend, but nobody learned anything other than Grails and other web transaction stacks for 8 years). There's a unique table for each service type; when a new node boots, it calls the uber-database, adds itself to its respective tables if it doesn't find its self-assigned ID in the rows, and ops customizes its always-unique, customer-specific configuration in the UI. The way to automate replacement of an existing node is, upon boot, to run a DB transaction that finds the primary key of a node with the roles you're replacing, deletes the old node's row, and rewrites the primary key of the node that just registered to the one just deleted (the UUID is separate from the row ID).

These are all solved problems with ZK and friends but I think it'll be a cold day in hell before we get around to service discovery of any sort, so this will be a long-term approach that works at the "scale" here.
I expect exactly this, with a few Mad Libs substitutions, in basically any Puppet environment nowadays.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
So back to the earlier question of CI / CD options besides Jenkins, there's one I forgot about that I think is worth looking into seriously, partly because of the CloudFoundry heritage - Concourse CI. It even has a nice comparison of what's wrong with Jenkins, GoCD, and TravisCI. It's "resource-centric" and builds natively run in containers, which side-steps some of the problems with Jenkins but can be a downer if you require a Lovecraftian enterprise horror dependency like a mainframe or dongles (if someone can figure out how to virtualize hardware USB dongles in AWS or GCP, let me know). The best part though is that you can run your builds locally before you make an idiot of yourself constantly hitting build and watching for syntax errors. When builds are a set of resource declarations with their dependencies linked, it's a lot like Puppet or Chef, and that's probably a more natural way of defining a build than a linear sequence of steps plus some hodge-podge of parallel steps. There are a lot of custom resources, including one for Terraform. Feature comparison page
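For anyone who hasn't seen one, a Concourse pipeline is roughly this shape (resource/job names, the repo URI, and the test script are all placeholders):

```yaml
resources:
- name: app-repo
  type: git
  source:
    uri: https://example.com/app.git   # placeholder repo
    branch: master

jobs:
- name: unit-tests
  plan:
  - get: app-repo
    trigger: true        # new commits kick the job off
  - task: run-tests
    config:
      platform: linux
      image_resource:    # the container the task runs in
        type: docker-image
        source: {repository: alpine}
      inputs:
      - name: app-repo
      run:
        path: sh
        args: [-exc, "cd app-repo && ./run-tests.sh"]
```

The local-run trick, assuming I'm remembering the fly syntax right, is `fly -t <target> execute -c task.yml`, which ships your local checkout up as the task input so you can iterate without pushing.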

Vulture Culture posted:

I expect exactly this, with a few Mad Libs substitutions, in basically any Puppet environment nowadays.
Well yeah, Puppet and Chef are mostly useful for very stateful systems that shouldn't have nodes go up and down frequently, and they're real awkward for elastic systems. I had enough problems with just Chef node registration and de-registration.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

necrobobsledder posted:

Well yeah, Puppet and Chef are mostly useful for very stateful systems that shouldn't have nodes go up and down frequently, and they're real awkward for elastic systems. I had enough problems with just Chef node registration and de-registration.

Fwiw we bake images and provision them using chef and then run chef-solo on every instance in the fleet to complete provisioning and do dynamic user management.
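The chef-solo half of that flow needs very little wiring; a minimal solo config would look something like this (paths and the attributes file are invented, cookbooks assumed baked into the image):

```ruby
# /etc/chef/solo.rb -- chef-solo reads everything from local disk,
# no Chef server round-trip
cookbook_path "/opt/chef-repo/cookbooks"
json_attribs  "/etc/chef/firstboot.json"   # role/env-specific run list and attrs
log_level     :info
```

Then cloud-init or user data just runs `chef-solo -c /etc/chef/solo.rb` at first boot to finish provisioning on top of the baked image.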

We've been looking at converting part of our fleet to use chef server because we don't have a great way to provision parts of our fleet in different ways but I'm sure they'll bring a whole bunch of other issues.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Chef-solo and masterless Puppet are just superpowered shell scripts if you're going with immutable infrastructure / golden image semantics instead of continuous configuration management, but they still have their place (we have a two-pass image build with Puppet, and launch configurations set Hiera variables, which gets you a whole new codepath that's not tested, of course). When you're trying to avoid re-deploying a bunch of containers just to patch them, or to propagate a tweaked value to certain systems, you could do it with applications written to watch a configuration store like Zookeeper or etcd, but most developers are bad at dynamic configuration. You could use something like Netflix's Eureka, which seems to have some capability to re-configure applications on the fly, but deploying a new foundational stack seems drastic (I'm using Dynomite at work for a greenfield project, and that's the impression I got from how Dynomite Manager handles node configuration).
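The watch-a-key-and-reconfigure pattern those apps would need, as a toy in-process sketch (the real thing would hang a watch off ZK or etcd instead of this fake store; key names and the tunable are made up):

```python
from typing import Callable

class ConfigStore:
    """Stand-in for a ZooKeeper/etcd namespace: set() fires registered watchers."""
    def __init__(self):
        self._values = {}
        self._watchers = {}  # key -> list of callbacks

    def watch(self, key: str, callback: Callable[[str], None]):
        self._watchers.setdefault(key, []).append(callback)
        if key in self._values:           # fire once with the current value
            callback(self._values[key])

    def set(self, key: str, value: str):
        self._values[key] = value
        for cb in self._watchers.get(key, []):
            cb(value)

class Worker:
    """An app that re-reads a tunable live instead of needing a redeploy."""
    def __init__(self, store: ConfigStore):
        self.batch_size = 100             # default until the store says otherwise
        store.watch("/app/batch_size", self._reconfigure)

    def _reconfigure(self, value: str):
        self.batch_size = int(value)      # applied in place, no restart

store = ConfigStore()
w = Worker(store)
store.set("/app/batch_size", "250")       # ops tweaks the knob centrally
print(w.batch_size)
# → 250
```

This is the part most developers skip: the app has to be written so that `_reconfigure` is safe to call mid-flight, which is exactly why "just redeploy" keeps winning.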

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
We've been using Archaius and a distributed sidecar service we wrote for configuration management and it works extremely well. We have an ohai plugin that we wrote with the intent of using it to tune different settings on running nodes but we've never actually used it in production.

I'm hesitant to post the repo even though it's open source, though, for fear of doxxing myself.

Warbird
May 23, 2012

America's Favorite Dumbass

Oh neat, we have a devops thread. Quick devops hot takes:
Docker - cool and good
Puppet - bane of my existence
Jenkins - cool and good
Jenkins plugins - bane of my existence



So did everyone else get thrust into the role of DevOps Engineer with no training or support, or is that just me?

Also, Docker question because I'm bad at my job: it seems the most practical way to have containers accessible to connections outside your network is to get something set up using docker-gen and nginx/apache/whatever. Is that correct? I'm piddling around with getting stuff working on a VPS but I'm having a time getting anything to be accessible that's not on port 80.
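That's the usual shape, yeah - docker-gen just templates out a reverse-proxy config for you. A hand-written nginx equivalent for one container looks roughly like this (hostname and ports invented):

```nginx
# Everything arrives on port 80; nginx fans out to the container's
# published port, so only 80/443 need to be open externally.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;  # container's -p 8080:... mapping
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

So the container itself never needs a public port; nginx routes on the Host header and you publish as many apps as you want through 80.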

Fake edit - seems the killjoys in InfoSec kill any traffic outside a given series of ports. Completely understandable, but it would have been nice to be notified.

Warbird fucked around with this message at 20:33 on Aug 4, 2017

No Safe Word
Feb 26, 2005

Warbird posted:

Oh neat, we have a devops thread. Quick devops hot takes:
Docker - cool and good
Puppet - bane of my existence
Jenkins - cool and good
Jenkins plugins - bane of my existence



So did everyone else get thrust into the role of DevOps Engineer with no training or support, or is that just me?

I got it because I was the only one who actually tried to put that sort of stuff in place on my own time. So yeah basically. Though I am getting support more and more as I reveal what kind of a festering mess has arisen as a result of everyone doing their own thing.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
"DevOps" in the purely engineering-skillset sense is the work most ops people can't or don't want to do, and most software engineers can't or don't want to do either. Even among Googlers and Facebookers, it's not like people want to be on-call for a pay bump of less than $20k / yr (this is mostly for an SRE-type role rather than a backoffice engineering-efficiency org, where conditions tend to be better). Most places' builds and deployments are utter crap, and being asked to fix them is probably a fool's errand without a lot of managerial support and incentives (unless the reason releases suck is entirely "we never had anyone who knew how to do it" rather than the typical "we have bad development practices and sling code until the last minute and throw it over to someone else" situation that exists in most companies larger than 50 people).

I picked this route expecting that most organizations would wise up and rally around releasing faster and testing better, because the economics would wipe out the companies with bad practices. I chose wrong.

necrobobsledder fucked around with this message at 23:04 on Aug 4, 2017

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
IMO devops is more of a philosophy, and really what the traditional systems engineer role from 10 years ago has turned into.

And also the willingness to code and whatnot, and to know where automation can help the business.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Gyshall posted:

IMO devops is more of a philosophy
Not actually an opinion, anyone putting DevOps in a job title has literally no idea what DevOps is

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
I'm glad SRE is a known thing because even though that's not my actual official title I can still point to it and say that's what I do rather than hand wave "devops-y engineering work"

Pollyanna
Mar 5, 2005

Milk's on them.


I've always seen SRE stuff described as "be on PagerDuty and get frantic calls when alarms go off" while "doing devops" is Docker, CI/CD, and AWS. :shrug:

I've dealt with Docker before in a very limited capacity so I listed it in my resume, but apparently it's a whole field of study now. And drat near everything wants AWS experience now.

Pollyanna fucked around with this message at 19:06 on Aug 5, 2017


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
SRE is basically embedding people strong on ops within dev teams to ensure that someone who deeply understands both systems and operational concerns can help out with cross-cutting concerns like tracing, logging, instrumentation, and general observability.
