Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Ithaqua posted:

This is not a good practice, IMO. You should be building a set of binaries, testing them against a dev/integration environment, then promoting those binaries to the next environment in the release path. There are tools out there to help you manage releases like this. Overextending the build process to deploy software is really painful and inflexible.
This is generally sound advice, but it's context-dependent. For example, if you're building a minified web application using something like Webpack, there's a good chance you won't be able to reuse exactly the same build artifacts for each environment, especially if you have debug flags. Even for standard COTS software releases, you typically want a debug build on dev/QA that's separate from the release build you put on staging and production.

Vulture Culture fucked around with this message at 20:12 on Mar 11, 2015

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
You also need to consider what it means to reproduce a deployment. Reproducing a deploy from a week ago might be a reasonable use case if you have a really nasty regression on a public system. On the other hand, how often are you going to be deploying months-old code? Is it worth investing that time up front to make it work, as opposed to when (if) it comes up?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

syphon posted:

Tools like Chef have done a really good job of mitigating this problem. Your cookbooks have 'defaults' which can be overridden per environment. This answers minato's stated problem of "configuration drift" across environments. Then, you can enforce versioning of your cookbooks in order to create reproducible deployments.

Managing the mapping of App Version to Cookbook Version is a bit of a pain, but I think the benefits outweigh the costs.
And if that's a big concern -- I've rarely found it to be in practice -- Jamie Winsor's practices outlined here are a big help:

https://www.youtube.com/watch?v=Dq_vGxd-jps

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

An issue I keep coming across is an elephants-all-the-way-down problem of then having to have an associated prod/dev/test/whatever for all your management code / servers when you use puppet/chef/ansible/whatever.

For example, I built a jenkins test suite that pulls a branch from git and runs a bunch of tests on our cloud environment, including creating VMs on a bunch of different vlans with configs generated by the tool we distribute to users. But now I need to be able to reproduce jenkins itself in both prod and dev, so I have a separate repo for the jenkins configs. And I need a way to import/export those configs, so I wrapped ansible around that and have some ansible tasks to pull/push configs to the various jenkins servers. But wait, the jenkins configs are subtly different because, for example, prod jenkins needs to pull from the prod branch and dev from dev, so now I have to munge it through a tool to dynamically generate the jenkins configs.

It's ugly, and now I have 3 repos to manage and try to keep in sync, all with different versions and their own release processes. It's messy but the best I could come up with. My sister group dealing with our openstack silo has it three or four times as bad.
I don't buy that this is a problem that Chef and its kin don't solve, honestly. Chef makes versioning cookbooks trivial (the server + Berkshelf do this easily, and future Chef versions will go even further with Policyfile), and it's super-easy to template out the config so the same template produces all the correct configurations.
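
To make the templating point concrete, here's a minimal sketch of the idea in Python; in Chef you'd do the same thing with an ERB template plus per-environment attributes, and every value below is invented:
code:
from string import Template

# One template, many environments: the per-environment attributes are the only
# thing that changes, so the configs can't drift apart structurally.
CONFIG_TEMPLATE = Template(
    "jenkins_url: https://jenkins-${env}.example.com\n"
    "scm_branch: ${branch}\n"
)

ENVIRONMENTS = {
    "dev":  {"env": "dev",  "branch": "dev"},
    "prod": {"env": "prod", "branch": "prod"},
}

for name, attrs in ENVIRONMENTS.items():
    print("### %s" % name)
    print(CONFIG_TEMPLATE.substitute(attrs))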

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

In your example it would be having recipes for setting up your Postgres, RabbitMQ, bookshelf, all the components of Chef. And because presumably you need to be able to test upgrades and patches while your dev instance is supporting other people's work in dev, you need an entirely separate instance for your own testing of those scripts. Maybe Chef can bootstrap itself with its own files, I don't know, but you need those too. At some point you have to evaluate whether it's all useful and just compromise, as was brought up in the cloud thread, but it's obnoxious to deal with when your systems can't manage themselves.
I deal with these situations all the time by bootstrapping Test Kitchen instances with chef-solo and validating the results with Serverspec. Test Kitchen finally got multi-node support, so the weird integration cases got much easier to support. I don't have any permanent infrastructure assigned to testing environments.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Plorkyeran posted:

FWIW this is all trivially doable with Jenkins (to the extent that anything involving Jenkins can be said to be trivial), but I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane.
TeamCity is free up to three build agents and very reasonably priced beyond that

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Plorkyeran posted:

okay?

that's sort of a non sequitur
"I can definitely see the value in a tool that actually points you in the right direction rather than basically requiring a knowledgeable consultant to end up with anything remotely sane."

teamcity is p. good and handles artifact deps really nicely

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

TheresNoThyme posted:

Anyone have experience using devops tools to stand up and support an internal cloud?
Anything but OpenStack. :ptsd:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

Definitely. OpenStack is (mostly) fine to use, but only a masochist would want to manage it.
(For anyone who is a masochist who wants to manage it, I've done every performance deep-dive there is to do. Ask away.)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Alternatively, https://vaultproject.io/

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sedro posted:

Has anyone used TeamCity + Vagrant? I basically want TC to 'vagrant up' then run its build agent inside the VM. I could use a different build agent for each build but that immediately puts me into their enterprise pricing tier.
You won't be running the build agent in the VM, but you can just invoke vagrant up/vagrant ssh as build steps. Text in, text out. Just be aware that certain classes of failures might cause your VM instances to not get destroyed correctly. You'll also be limited to one build at a time on the host, regardless of how many VMs you can create.
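
Roughly what those build steps could shell out to, as a sketch (the build command and paths are made up):
code:
import subprocess

def step(cmd):
    # fail the build step loudly if vagrant exits non-zero
    subprocess.run(cmd, shell=True, check=True)

try:
    step("vagrant up")
    # whatever your project's actual build entry point is
    step("vagrant ssh -c 'cd /vagrant && ./build.sh'")
finally:
    # always try to tear the VM down, even when the build fails,
    # so half-dead instances don't pile up on the agent
    subprocess.run("vagrant destroy -f", shell=True)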

If you need dynamic agent support, you might consider having it spin up EC2 instances; that's directly integrated into the product.

Vulture Culture fucked around with this message at 18:15 on Oct 21, 2015

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sagacity posted:

What do you guys consider the most painless way of deploying docker containers to AWS? They have something called ECS and Elastic Beanstalk which seems nice, but is it actually good to use when you just want to automate everything or should I look at a different automation layer like (random selection) Convox or Rancher?
EC2 Container Service isn't bad. You have to follow the directions exactly and make sure you create the IAM role and launch the container host instances with it; setup is a little touchy if you get these parts wrong. The weakest part of ECS is load balancing: it seems like right now you can only send a single port to a service via an ELB, which is bizarre if you want to do really basic stuff like listen on both HTTP and HTTPS.
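
For illustration, a boto3 sketch of wiring a service to a classic ELB as the API stood around then; every name here is made up, and the single loadBalancers entry is exactly the one-port limitation I'm talking about:
code:
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=2,
    role="ecsServiceRole",  # the ECS service role used to register with the ELB
    loadBalancers=[{
        "loadBalancerName": "web-elb",
        "containerName": "web",
        "containerPort": 80,  # one port per service; no second entry for 443
    }],
)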

It takes less than 15 minutes to get set up, though, so you might as well just give it a shot.

If you're not tied down to AWS, Google Container Engine is a pretty well-fleshed-out container grid.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sagacity posted:

AFAICT that'll mainly leave me with AWS, Google or Azure, since other options require me to do a lot of the infra management myself. Do you guys have a preference for one of these three managed options?
I can't speak for Azure but GCE's network is a loving ghetto

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

wins32767 posted:

Any advice on hiring a devops person as a manager who has done related work (development, systems administration) but not the devops role specifically?
I swear to God I'm going to lose my poo poo and harpoon the next person who refers to "devops role." The very idea of a DevOps role is completely antithetical to DevOps.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

I think it's totally possible for there to be a DevOps role, because someone has to ensure the teams are adhering to the DevOps principles. I don't think it's enough for the managers to walk in and pronounce that "We're doing DevOps now. Dev, talk to Ops more often, and vice-versa." Someone has to keep the ball from getting dropped. Are Devs just as much on the on-call hook as Ops when prod falls over? Is Ops getting invited to Dev's design/architecture meetings? Is there shared ownership of the build-test-deploy pipeline, and who is responsible for maintaining and developing that?
So instead of having a manager make everyone magically play ball, you hire a non-manager with no authority whatsoever to magically make everyone play ball. Now instead of two silos, you have two slightly smaller silos and an engineer screaming from the mountaintop in between them. You're right that you need culture change to happen, but this doesn't help enable it.

minato posted:

In the same way that there's a ScrumMaster who (amongst other things) is responsible for keeping the team aligned to Agile principles, there can be a "DevOpsMaster" who has a good understanding of the principles and the mandate to enforce them. It's not a given that this person would necessarily be a manager; in some companies, managers are more concerned with developing their reports' abilities than with deciding how they do their job.
Sure, most teams benefit from having leads and organizers. But you wouldn't call that ScrumMaster a Scrum or a Scrum Role.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

wins32767 posted:

Ok, so let me change the conversation a bit here. What kinds of skills should a small but very rapidly growing company look for in their first hire in a role that needs to handle some operations work? There isn't enough work for a full-time operations position, nor will there be for at least a year, but I want to lay a good foundation.
What kind of stack? What duties are they going to be expected to perform? Who will they be sharing the load with?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Erwin posted:

Right, the term is Thought Leader.
don't trigger me

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's probably not a zero-effort thing to get the app working as a 12-factor app, but it's not exactly an ordeal either
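
The bulk of it is usually just reading config from the environment instead of baked-in files. A minimal sketch, with made-up variable names:
code:
import os

# 12-factor config: everything environment-specific comes from the environment,
# so the same build artifact runs unchanged in dev, staging, and prod.
DATABASE_URL = os.environ["DATABASE_URL"]
CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379/0")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"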

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Ben Grahams Ghost posted:

I'm hoping this is the best place for this question. I'm trying to set up a small cluster to get a better handle on ELK and automation/management (using Ansible/etc). Right now I'm just running VMs on VMware Player, and I have baby's first two-machine cluster up and running.

Is there a better tool I can use to accomplish this? Would something like Vagrant allow me to easily spin up machines (ideally five or so, I think), or should I just scale the VM setup?
Depends what you're trying to do. If you're looking to have a bunch of VMs set up so that you can test a bunch of different playbooks corresponding to various server roles, for instance, you might be better off with something like Vagrant and Test Kitchen with the Ansible provisioner. If you're trying to set up clusters of interrelated services, stick with how you're already doing things for your sanity's sake.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
TeamCity is a great build server but nothing is very good at handling the deploy end of CD without a ton of duct tape and glue. That said, I've almost never run into weird operational problems with it, unlike Jenkins.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Ithaqua posted:

That's why the deployment piece is being foisted off onto configuration management systems for the most part. Overextending a build system to do deployments sucks. Plus, having builds push bits encourages building per environment instead of promoting changes from one environment to the next.
While configuration management is extremely useful for solving some of the drift problems that hinder code deploys, it doesn't really solve (m)any deployment problems for anything except really trivial cases, though. Automating the CD pipeline to any sane degree (that is, without wads of tape) requires insane cognitive investments in stuff like service discovery and container schedulers (or virtual machine images, if that's your thing). Even commercial solutions around Docker like EC2 Container Service or Google Container Engine are really warty and weird, and at absolute best there's no standardization between ZK, etcd, Consul/Serf, and their kin when it comes to actually communicating and messaging between your service instances. Blue/green deploys as handled by some of these systems are pretty cool, but they can't effectively handle patterns like swapping out messaging or service discovery systems between deployed versions.

The product I'm managing is sitting somewhere between 3,000 and 4,000 service instances now, and there's nothing yet that operates at that scale and is understandable by normal humans.

Vulture Culture fucked around with this message at 21:45 on Feb 2, 2016

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

if deploy is your problem (and not building) i would encourage you to look elsewhere. aws codedeploy is pretty good if you are on aws
If they have no intention of ever properly supporting WebSockets on ELB, though, it would be nice if their deploy tooling would at least support another load balancer.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

YO MAMA HEAD posted:

I only really focus on devopsy stuff for a few days a month (or when something breaks) but I was surprised I hadn't heard anything at all about Otto. Has anyone checked it out? On the dev side I like the idea of a simpler Appfile but I didn't understand provisioning—if we need a dev box with requirements that are in any way out of the ordinary (SoX for audio processing), how do we make that happen?

I didn't even explore any of the deployment functionality etc. but for some of our simpler projects it would be nice to not have to mess with a Bamboo pipeline.
Otto's nice if you need to integrate a deploy process with Terraform. If you're not already using Terraform, it's total overkill.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sedro posted:

I'm looking to separate my build server from my deployment. Currently TeamCity builds a feature branch then runs a shell script to start an environment locally (passing in the branch name). Instead I want to push the artifact to a QA server and deal with it there. A web interface to manage those environments would be nice too. What should I be looking for (besides a proper devops engineer)?

This is for the development team, not hosting customers. The build artifact is a set of docker images.
Do exactly what you're doing now (maybe replace your shell script with some Compose configs if you need to), but point your docker client at a different server
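
As a sketch with the Docker SDK for Python (hostname and image names are made up); exporting DOCKER_HOST and keeping your existing shell script or Compose files gets you the same thing:
code:
import docker

# Point the client at the QA box's Docker daemon instead of the local one.
client = docker.DockerClient(base_url="tcp://qa-docker.example.com:2375")

# Pull the image the build produced and run it on the remote daemon.
client.images.pull("registry.example.com/myapp", tag="feature-xyz")
client.containers.run("registry.example.com/myapp:feature-xyz", detach=True)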

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Erwin posted:

Anybody know when Policyfiles are going to be considered not experimental/ready for use in ChefDK? I'm trying to improve my cookbook workflow but I don't want to get too invested in Berkshelf if it's going away soon.
I got so loving sick of both these options that I manage my dependency versioning by hand now; it's really not that hard. I basically run berks-api on my Chef server for Test Kitchen, but berks never touches production

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's a really valuable tool for resolving dependency chains in test suites, especially if you're targeting local forks of things, but the "conventional wisdom" in Chef is based around things that work for teams of 20 Chef-focused engineers and are absolutely irrelevant for small teams.

We're rapidly shrinking the number of things we manage with Chef anyway, so it's becoming less relevant to us in any case. CM systems like Chef and Puppet are the God Object development anti-pattern applied to infrastructures. Abstractions aren't ever as fixed as you think they are, and we spend more time unspinning and respinning balls of yarn than we do getting work done with the tool. As we move towards considerably-less-mutable infrastructures, idempotence is just another added cost. I'm feeling much less stressed with a big pile of Dockerfiles, honestly.

Vulture Culture fucked around with this message at 19:28 on Feb 10, 2016

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

wwb posted:

SSH from windows is pretty easy these days.

Microsoft is working on an official solution, but for now you can just go download portable git and add its bin folder to your path -- that will get you cross-compiled ports of most of the *nix toolchain, including ssh.exe.
msys2+pacman is super easy to use these days too.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Rocko Bonaparte posted:

Is there anything out there that spells out how to do a gated check-in between git, Gerrit, and TeamCity? I want to have a pushed commit stage in Gerrit for review, but I want TeamCity to then turn around and run the built-in tests. I'd prefer that the report somehow show up in Gerrit. The reviewer can then see how well the change worked against the repository's tests before possibly reviewing broken code.
There's not much to it if you already have Gerrit and TeamCity running. You need to make sure your VCS build trigger in TC includes refs/changes/* (everything pending review), and then TC needs to know what to do with the build when it's done, which means a plugin for posting a Verified value back to Gerrit. As of TeamCity 10, JetBrains bundles the Commit Status Publisher plugin which handles this, or you can use a different plugin on an older version.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

revmoo posted:

I'm quite happy with my deployment methodology and I'm not interested in changing it. I would definitely like to explore Docker for infrastructure management but I couldn't imagine using it to deploy code.
this is severely backwards

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

i think blue/green encourages/enables some really harmful practices like treating your standby environment as a staging/integration environment and relaxing requirements on api compatibility. i think in the small (like using blue/green for a particular subsystem like a database or an application group) blue/green can be okay but if you can do blue/green in the small you can probably just do gradual replacement where you can have multiple versions deployed simultaneously without impacting users. basically, i think if you have a healthy blue/green procedure you don't need it, and if you need it you probably have a hard time deploying regularly
Hot take: if it's even possible to use your standby environment as a live staging environment, you already don't have a healthy blue/green procedure.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr. Crow posted:

Anyone have experience setting up teamcity in a docker container behind a reverse proxy which is also in a container (nginx)?
TeamCity's a weird application to run in a container or even a configuration management setting. It wants to own your config files, not coexist with something else that's trying to manage them. You can't roll back easily because of the database migrations between versions. Stuffing it into a container in any normal way breaks its built-in upgrade process.

This is one of those applications I would generally file under "do not Dockerize" unless you have a mandate to run it on Kubernetes or ECS or something.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sedro posted:

I run teamcity in a docker container. There's nothing to it. Are you having a specific problem?

The latest teamcity can store its build configuration in code and version control it. They even have official docker images now. https://www.jetbrains.com/teamcity/whatsnew/
Oh, hey, that's nice. I haven't played with version 10 yet. Listen to this person.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

smackfu posted:

Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push", which runs all our integration tests before pushing and is pretty good at keeping the build green. But it does require discipline.
Gerrit. Process is good if it keeps people from doing dumb poo poo like hurriedly committing untested, broken code and breaking the build for everyone else.

Integration tests as a gate for merge are bad on big codebases, though. They take a long time. You should have enough unit test coverage to handle most of the clear and obvious build-breaking bugs, and run your integration tests overnight. (This guidance varies if you happen to be doing continuous delivery.)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

do people complaining about waiting on integration tests not do code review? we don't merge anything in less than 12 hours (unless it's a critical fix) because all prs have to go through extensive code review. that always takes longer than running integration tests
Unsolicited opinion: if a code review takes 12 hours, your change batches are probably too big. Most of the code reviews I submit can be completed in a minute or two (this is obviously not true for enormous refactors)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

fletcher posted:

We use the ELK stack for logging and I ran out of disk space for elasticsearch way sooner than I thought I would. How can I tell which log entries (e.g. by hostname or something) are consuming the most disk space?
Can you post your index template for logstash-* and some better information about what kinds of logs you're ingesting and how? Docjowles' great points aside, you probably have a whole pile of duplicated data and a number of analyzed fields that don't need to be analyzed.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

fletcher posted:

This is gonna be a dumb question as I am a total newbie to the whole ELK stack but how do I see what my index template for logstash-* is?
Okay, so you're almost certainly using the default index template that ships with Logstash! That would be why everything is taking up so much space. Those links everyone else posted will definitely help you get your storage growth under control.
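
For reference, one way to pull the template down and eyeball it, assuming the cluster is on localhost:9200 and the template name starts with "logstash":
code:
import requests

# Adjust the host and template name pattern to whatever your setup actually uses.
resp = requests.get("http://localhost:9200/_template/logstash*")
resp.raise_for_status()

for name, body in resp.json().items():
    print(name)
    print(body.get("mappings", {}))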

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

It's fine to Dockerize a database as long as you don't expect it to keep its data for a long, long time, or you have a solid grasp on the data volume's lifecycle (see: Postgres K8s operator), but for most people in production with big ol' clusters and such that doesn't apply. I use Docker containers for launching temporary databases in CI builds and to compare/contrast different configuration settings for different use cases.
Also awesome for ephemeral test environments for developers. Static infrastructure is fine for prod and maybe staging, but gently caress it in the face everywhere else.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
You can update mappings on multiple indexes by specifying a wildcard.

https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html#_multi_index
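
A rough sketch with requests; the field name is made up, and this assumes a recent typeless cluster (older versions want the mapping type in the URL and string/not_analyzed instead of keyword):
code:
import requests

# The wildcard index name applies the change to every matching index.
# You can add new fields this way, but you can't change the type of an
# existing field.
resp = requests.put(
    "http://localhost:9200/logstash-*/_mapping",
    json={"properties": {"hostname": {"type": "keyword"}}},
)
resp.raise_for_status()
print(resp.json())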

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


If you're an early-stage startup you can also get up to $100,000 in free money from AWS!

(Most of the other big players offer similar startup programs if you've got the backing of a major fund.)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NihilCredo posted:

I'm looking for a Docker container (or Compose setup) to integrate into my own solutions that will create an SSL termination proxy using Let's Encrypt. Nothing fancy like subdomains or anything, I'd just like it to be as idiot-proof as one can hope for.

There seem to be several such projects around, with various degrees of popularity and support:

https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion (the most well-documented)
https://hub.docker.com/r/zerossl/client/
https://hub.docker.com/r/certbot/certbot/

If you couldn't guess, this is new territory for me. Are there any reasons why this is a bad idea, or any other critical information I should be aware of? Have any of you guys used similar solutions?

I think you want Caddy
