FamDav
Mar 29, 2008

necrobobsledder posted:

Containers do not replace configuration management, but they do let you get to a place where you can at least separate application configuration and artifacts from operations concerns such as logging, monitoring, and kernel tuning. The model also makes it easier to enforce 12-factor applications instead of letting lazy developers dump files all over a filesystem.

One new problem that arises is that containers can quickly become outdated if they're built in the typical lazy manner that pulls in far more dependencies than necessary (all those lazy people using Ubuntu base images are scary, man). However, you can update your container hosts very quickly (may God have mercy on your soul if you're not using container orchestration), and many patching tasks become specific to application containers. This takes much of the change-management burden off operations teams and helps achieve higher density and better separation of containers. For example, you can reschedule containers onto a set of nodes that is isolated from known-updated containers.

I still use Puppet and Chef to provision and maintain my worker nodes that host my containers.

One of the nicest benefits of distinguishing infrastructure from applications is that you can set up your infrastructure to rolling-replace itself every, say, 48 hours, which puts tight bounds on your fleet's heterogeneity and overall age.

Mr. Crow
May 22, 2008

Snap City mayor for life
On the DB-in-containers chat, here is a good HN thread that discusses it. It's relatively old but still relevant: https://news.ycombinator.com/item?id=13582757

TLDR: draw your own conclusions, it's not a silver bullet (and I don't think anyone is suggesting it is) but I'm personally going to keep an open mind and evaluate it per project as it comes up.

I don't think the usual knee-jerk 'THE DB IS SACRED, NEVER TOUCH IT' responses help anything.

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe
We've had great traction using containers to simplify our current bare-metal environment. By reducing the number of app server types, we can simplify our entire footprint, making things much easier to manage overall. Deployments are a snap now as well. I've also been using Docker Swarm and deploying directly via the API, which has been pretty simple to set up with some Python.

Ganson
Jul 13, 2007
I know where the electrical tape is!
Containers are great if you need 2 of something ephemeral... err no I need 20... err no I need 50... err no I need 2 again.

They're also great if you want to pick something up in one place, put it somewhere else, and have it work perfectly.

However, if you store persistent data in them you are going to have a very, very bad time. Docker volumes are just a joke, and while the LVM disk driver stuff is better, it's still not somewhere I'd store any data I drat well expect to still be there when I come back for it. The only case where I would consider putting a DBMS in a container is if I had to scale it to vast quantities of data (I'm thinking petabytes), where 'database' has become a service in and of itself. The operational overhead at that point might be worth it.

Boutique data stores, or things that coordinate over gossip protocols (your HBases or Cassandras or Elasticsearches), may also be an exception, but if you're at the point where you know you need something more complicated than a boring DBMS (and not just because it'll look snazzy on your resume), you can probably figure out a way to do it with containers that doesn't cause all your production data to disappear like the pastor who got my sister preggers, or involve making posts to the moderately successful Tony Danza support group located at forums.somethingawful.com.

Basically don't put databases or files you care about in docker containers and you'll be fine.

Ganson fucked around with this message at 06:05 on Jun 27, 2017

Ganson
Jul 13, 2007
I know where the electrical tape is!

Dren posted:

Can anyone share their thoughts on how containers turn updating from a system-wide problem (run yum update once on the system) into a per-app problem? Is this not a big deal in practice, or does it turn out that updates don't happen as easily or as often as they should?

All the major orchestration platforms have built-in support for blue/green or rolling deploys, and they work fairly well (last I looked, Kubernetes was the furthest behind, but that was a while ago). You handle it much like you would handle updating hosts behind an ELB or a static VM: once you have your base image updated, you tell the orchestration platform what you want and it'll handle it for you. Mesos was really nice in this regard; you could get real complicated with your requirements (I need at least 2 healthy hosts at all times, never go over this many hosts in existence, and so on).
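On Kubernetes these days, those same constraints are expressed as a rolling-update strategy on a Deployment. A minimal sketch, where the app name and image are placeholders:

code:
# "at least 2 healthy at all times" -> replicas - maxUnavailable >= 2
# "never go over this many"         -> at most replicas + maxSurge exist
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod down, so at least 2 stay healthy
      maxSurge: 1         # at most 4 pods exist during the rollout
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:2.0   # the updated image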

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
I'm working on automating builds for CI in a more methodical way instead of the ad-hoc bespoke scripts I have now.

It's kind of a unique situation: a decently sized chunk of C++ code that gets built for both the host (native unit tests) and ARM (deployment artifacts). That means the build environment needs two sets of compilers and two sets of dependencies - or two build environments, and each commit needs to be built twice.

I'm working with gitlab-ci and the docker-runner process to build in containers, but I honestly have no idea how to set up my build environment so that the pre-build state is cached and can spin up instantly.

As a lesser feature, I'd like to keep the build temporaries around to speed up the process, as a full build takes 10 minutes or so and I'd rather use dependency resolution to rebuild as needed instead of rebuilding everything every time.

Where do I start? I've been doing things the old-fashioned way for so long that when I try to dive into modern methodology, every guide expects me to have already mastered things like building my own Docker image, or to be doing something trivial that uses a standard one.

EssOEss
Oct 23, 2006
128-bit approved
I do not know anything about gitlab-ci or docker-runner, as I use a rather different toolchain, but I suppose the principles are the same. Off the top of my head, without diving too much into whatever makes your solution special, I would structure the build process (to be executed from start to finish for each build) as:

  • Build a Docker image that contains your toolchain and your inputs.
  • Spawn a new container based on this image and have it execute the build process for your app.
  • Mount directories on the host to capture build output files and/or reuse any temporary build files you wish to keep.

To build the container image, I would create a Dockerfile consisting of the following steps (a minimal sketch follows the list):

  • FROM your_preferred_operating_system_image (e.g. ubuntu or microsoft/windowsservercore or whatever you please)
  • Now do whatever is needed to set up your build environment (e.g. install compilers using RUN apt-get install bla bla bla or whatever else you need)
  • ADD your_input_files (I assume they are in some folder available to your automated build process)
  • Set your build script (added to the image in the previous step) as the entry point command (ENTRYPOINT /root/whatever.sh). This script should do all the "building" work, generate all the needed output, and run any tests you want to run.
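Concretely, a sketch of such a Dockerfile - the base image, package names, and script path are placeholders, with the cross-compiler chosen to match the ARM use case above:

code:
FROM ubuntu:16.04

# Set up the build environment; this layer is cached until it changes
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    g++-arm-linux-gnueabihf

# Add the inputs last so the toolchain layers stay cached between builds
ADD . /src

# The build script does all the "building" work and runs the tests
ENTRYPOINT ["/src/build.sh"]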

On every build, you would first execute a "docker build" command to generate the image from the above Dockerfile. This process caches the result of each command in the Dockerfile (so if the commands that set up your build environment do not change, it does not re-execute that part on every build, skipping right to the "add input files" step). Note that this caching feature may leave a lot of intermediate image layers around that you may need to clean up at some point (I would suggest a weekly or monthly script that executes "docker image prune" to clean the cache); that is nothing specific to your scenario - Docker is always quite messy with temporary data.

After building the image, you would execute the container using "docker run" and wait for your build script to finish, thereafter processing any outputs it generated (into the directories on the host that you mapped into the container using --volume). You can also inspect the exit code of the container to determine whether the build script signaled success or failure.
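In shell terms, the per-build commands would be something like this (the tag and mount paths are illustrative):

code:
# Rebuild the image; unchanged Dockerfile steps come from the layer cache
docker build -t build-env:latest .

# Run the build, mounting host directories for outputs and kept temporaries
docker run --rm \
    --volume "$PWD/out:/src/out" \
    --volume "$PWD/tmp:/src/tmp" \
    build-env:latest

# docker run --rm exits with the build script's exit code
echo "build exited with $?"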

Makes sense? Ask away if not.

EssOEss fucked around with this message at 21:09 on Jun 28, 2017

Dren
Jan 5, 2001

Pillbug
I'm kind of a Docker noob, but can't you build a Docker image that is fully provisioned, version it manually, publish it to a Docker registry, and pull that for each CI build, so you get a fresh environment every time without eating the cost of provisioning? Building and publishing the image in CI is possible too - check whether the current version is in the registry, and if not, have CI build and push it.
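As a sketch of that check-then-push step in a CI script (the registry host and version variable are made up):

code:
IMAGE="registry.example.com/build-env:${TOOLCHAIN_VERSION}"

# Only build and push if this version isn't already in the registry
if ! docker pull "$IMAGE" >/dev/null 2>&1; then
    docker build -t "$IMAGE" -f Dockerfile.toolchain .
    docker push "$IMAGE"
fi

# Every CI build then starts from the prebuilt, fully provisioned image
docker run --rm --volume "$PWD/out:/src/out" "$IMAGE"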

EssOEss
Oct 23, 2006
128-bit approved
To have a second process for building the image? Sure you can, but there is no obvious need to unless you plan to reuse it (e.g. for a whole build cluster). Or do you have some benefit in mind that I am not immediately thinking of?

Dren
Jan 5, 2001

Pillbug

EssOEss posted:

To have a second process for building the image? Sure you can, but there is no obvious need to unless you plan to reuse it (e.g. for a whole build cluster). Or do you have some benefit in mind that I am not immediately thinking of?

Harik said he wants to

Harik posted:

set up my build environment so that the pre-build state is cached and can spin up instantly.

Your process has these steps:

EssOEss posted:

  1. Build a Docker image that contains your toolchain and your inputs.
  2. Spawn a new container based on this image and have it execute the build process for your app.
  3. Mount directories on the host to capture build output files and/or reuse any temporary build files you wish to keep.

What I'm talking about is having step 1 performed once - possibly by a CI step - every time the toolchain changes (which shouldn't be very often), instead of once for every build. This would meet Harik's goal of having the pre-build state cached so it can spin up instantly.

EssOEss
Oct 23, 2006
128-bit approved
Ah, I see. That would be taken care of by this part:

EssOEss posted:

This process will cache the result from each command in the Dockerfile

You get caching for free. No need to push any images for it.

Dren
Jan 5, 2001

Pillbug

EssOEss posted:

Ah, I see. That would be taken care of by this part:


You get caching for free. No need to push any images for it.

Where will devs get the environments from? Do they have to build them themselves?

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Dren posted:

Where will devs get the environments from? Do they have to build them themselves?

You could have devs build it themselves; host an internal Docker registry (if it's somehow private/confidential), push the image there, and have devs pull the base image once; or push the base image to Docker Hub and do the same.
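The mechanics are the same either way; only the registry host changes (the hostname here is made up):

code:
# Push the base image to an internal registry
docker tag build-env:1.0 registry.internal.example.com/build-env:1.0
docker push registry.internal.example.com/build-env:1.0

# Devs then pull it once
docker pull registry.internal.example.com/build-env:1.0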

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Private Docker registries can be fairly easy to set up, provided you have certificates that don't suck (read: self-signed certs that you have to add to clients are a pain in the rear end). I'm hosting one at work as an nginx container terminating the SSL connection and proxying to the Docker registry container, both on a single instance in an ASG sized to 1, backed by an S3 bucket. Pulling images out of AWS can sting a bit cost-wise if you have a lot of developers pulling locally, but if you're primarily pulling from within AWS it's pretty nice.
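The registry half of that setup looks roughly like this - bucket, region, and container names are illustrative, and on EC2 the S3 credentials come from the instance profile:

code:
# Official registry image, configured via env vars to store layers in S3
docker run -d --name registry \
    -e REGISTRY_STORAGE=s3 \
    -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
    -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
    registry:2

# nginx terminates TLS on 443 and proxies to the registry container
docker run -d --name registry-proxy \
    --link registry:registry \
    -p 443:443 \
    -v /etc/nginx/certs:/etc/nginx/certs:ro \
    -v /etc/nginx/registry.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx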

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
If you're already on AWS, you can use ECR registries for pretty cheap.
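The flow is the usual tag-and-push, with the CLI minting temporary credentials (account ID and region are placeholders; this is the 2017-era get-login flow):

code:
aws ecr create-repository --repository-name myapp

# Prints a docker login command with a temporary token; run it
$(aws ecr get-login --no-include-email --region us-east-1)

docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest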

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I didn't want even a chance of the registry container being accessible on the public Internet, and since ECR didn't support provisioning inside a VPC last I saw, that was a no-go for me. The EC2 instance + EBS probably costs more than what ECR would cost us, but with a $280k+ / mo AWS bill born of gross mismanagement (110 RDS instances idling 95% of the time, rawr), I'm not being paid to care about cost efficiency anymore.

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

necrobobsledder posted:

I didn't want even a chance of the registry container being accessible on the public Internet, and since ECR didn't support provisioning inside a VPC last I saw, that was a no-go for me. The EC2 instance + EBS probably costs more than what ECR would cost us, but with a $280k+ / mo AWS bill born of gross mismanagement (110 RDS instances idling 95% of the time, rawr), I'm not being paid to care about cost efficiency anymore.

ECR requires authentication, though.

Also Jesus Christ I'd give my left nut to have management that gave that few of a shits about AWS spend

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You do not want this management. At all. Our infrastructure is horrific, and having consulted across the public sector and about half the Fortune 100, I'd put this in the bottom 10% of organizational capability around infrastructure management. While I don't face organizational barriers to doing a lot of things other places would care about, there are so many other problems stemming from the lack of organization that it's a different form of paralysis.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

uncurable mlady posted:

ECR requires authentication, though.

Also Jesus Christ I'd give my left nut to have management that gave that few of a shits about AWS spend
This is such a weird take because all I hear management talk about is the AWS bill every month

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Bhodi posted:

This is such a weird take because all I hear management talk about is the AWS bill every month

My problem is that they just don't want to spend money, period. Our main CI server is seven years old and rapidly failing, but when I ask for money for a new server, I get rebuffed. When I say we're going to extend its useful life by offloading build agents to EC2, I get complaints that we're spending too much on AWS. We run a lot of testing in the cloud because when we ran it locally, we'd lose end-to-end test runs to overload on our VMware cluster and all of the runs would fail, costing us days and missed milestones. Put it in the cloud, and now they bitch about spend.

Sometimes you just can't please anyone except yourself.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

uncurable mlady posted:

My problem is that they just don't want to spend money, period. Our main CI server is seven years old and rapidly failing, but when I ask for money for a new server, I get rebuffed. When I say we're going to extend its useful life by offloading build agents to EC2, I get complaints that we're spending too much on AWS. We run a lot of testing in the cloud because when we ran it locally, we'd lose end-to-end test runs to overload on our VMware cluster and all of the runs would fail, costing us days and missed milestones. Put it in the cloud, and now they bitch about spend.

Sometimes you just can't please anyone except yourself.
Sorry, yeah. I misread "gave that few of a shits" as "gave that few shits". I think everyone struggles with AWS sticker shock, which is simply silly when put next to typical yearly capital spend on datacenter poo poo. Maybe it would help if they only billed yearly; then you could file it next to the hundreds of thousands of dollars, if not millions, that larger corps already hand various companies.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Most places I've seen that bitch about cost treat AWS like a datacenter without even looking at options like reserved instances, S3 durability and redundancy tiers, and bandwidth contracts (yes, they do have them to help lower egress costs substantially, mostly of use when you get to petabytes / mo in transfers).

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

necrobobsledder posted:

bandwidth contracts (yes, they do have them to help lower egress costs substantially, mostly of use when you get to petabytes / mo in transfers).

How does this work?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Blinkz0rz posted:

How does this work?
I swear I saw this on an AWS page several months ago, but I've since tried repeatedly to find it again, in vain - so it's possible it was offered at one point and pulled; be warned. The gist was this: you pay for an annualized contract on reserved bandwidth for your account, with the first discount tier starting at about 10 TB / mo for maybe 5% off. Our recorded outbound bandwidth was about 5+ PB annually, and the discount at that level was somewhere around 15%+ with the fields I saw, which may have been sufficient for us to reconsider the plan. AWS regularly offers its largest customers really, really substantial discounts on its rates. I've now worked at supposedly the #3 and #5 biggest customers in terms of AWS spend, and when you're talking $30MM+ / mo you can ask for a lot of discounts on instances and services, but network seemed to be off the table for us.

I really do swear I saw such a document, because I was about to march into the engineering director's office with it to rebut the egress-cost argument (the prime mover away from AWS) and avoid having to deploy our junky flagship product into an expensive OpenStack cluster that would tie us to a datacenter.

Che Delilas
Nov 23, 2009
FREE TIBET WEED
I don't know how well documented it is for anybody to find - I suspect it's really not - but you should try to get in contact with your Amazon rep (I'm not sure which kind of rep you'd be looking for; my experience with this is secondhand). My company uses a fraction of what you're describing, and we negotiated several tens of thousands of dollars' worth of credit and support/training. We were clearly going to be long-term customers, and I bet they'd be amenable to helping you control your costs in the same vein, in order to keep you around.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


If you're an early-stage startup you can also get up to $100,000 in free money from AWS!

(Most of the other big players offer similar startup programs if you've got the backing of a major fund.)

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I'm looking for a Docker container (or Compose file) to integrate into my own solutions that will act as an SSL termination proxy using Let's Encrypt. Nothing fancy like subdomains or anything; I'd just like it to be as idiot-proof as one can hope for.

There seem to be several such projects around, with varying degrees of popularity and support:

https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion (the most well-documented)
https://hub.docker.com/r/zerossl/client/
https://hub.docker.com/r/certbot/certbot/

If you couldn't guess, this is new territory for me. Are there any reasons why this is a bad idea, or any other critical information I should be aware of? Have any of you guys used similar solutions?
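From what I can tell, the first project's README boils down to a Compose file shaped roughly like this - service names, domain, and email are placeholders, so treat it as a sketch rather than a working config:

code:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/nginx/certs
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  myapp:
    image: myapp:latest
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=admin@example.com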

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NihilCredo posted:

I'm looking for a Docker container (or Compose file) to integrate into my own solutions that will act as an SSL termination proxy using Let's Encrypt. Nothing fancy like subdomains or anything; I'd just like it to be as idiot-proof as one can hope for.

There seem to be several such projects around, with varying degrees of popularity and support:

https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion (the most well-documented)
https://hub.docker.com/r/zerossl/client/
https://hub.docker.com/r/certbot/certbot/

If you couldn't guess, this is new territory for me. Are there any reasons why this is a bad idea, or any other critical information I should be aware of? Have any of you guys used similar solutions?

I think you want Caddy
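Caddy obtains and renews the Let's Encrypt certificates automatically, so the idiot-proof SSL-terminating proxy comes down to a Caddyfile of about two lines. A sketch in current Caddy syntax, with a placeholder domain and upstream:

code:
example.com {
    reverse_proxy myapp:8080
}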

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Vulture Culture posted:

I think you want Caddy

Oh drat, I read about that some months ago but had totally forgotten about it. Thanks!

Cancelbot
Nov 22, 2006

Canceling spam since 1928

How straightforward is it to do the following in AWS (probably lambda):

1. Receive HTTP POST from $deploy_tool on success
2. Take one of the instances that was deployed to
3. Save AMI of server
4. Update launch configuration of the auto scaling group to use new AMI

Essentially a success-triggered AMI bake before we can actually use spinnaker like real devops :v:
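A sketch of what the Lambda handler might look like with boto3 - the event shape, resource names, and which launch-configuration fields get copied are all assumptions:

code:
import time
import boto3

ec2 = boto3.client('ec2')
autoscaling = boto3.client('autoscaling')

def handler(event, context):
    # 1-2. Assume $deploy_tool's success POST carries an instance it hit
    instance_id = event['instance_id']
    stamp = int(time.time())

    # 3. Bake an AMI from that instance (in practice, wait for the AMI
    # to reach 'available' before using it)
    ami = ec2.create_image(
        InstanceId=instance_id,
        Name='deploy-bake-%d' % stamp,
        NoReboot=True,
    )

    # 4. Launch configurations are immutable, so copy the old one with
    # the new AMI and point the ASG at the copy
    old = autoscaling.describe_launch_configurations(
        LaunchConfigurationNames=['myapp-lc'],
    )['LaunchConfigurations'][0]
    new_name = 'myapp-lc-%d' % stamp
    autoscaling.create_launch_configuration(
        LaunchConfigurationName=new_name,
        ImageId=ami['ImageId'],
        InstanceType=old['InstanceType'],
        SecurityGroups=old['SecurityGroups'],
    )
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName='myapp-asg',
        LaunchConfigurationName=new_name,
    )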

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Changing the AMI in the launch configuration of the ASG won't rotate the instances, if that's what you're trying to do automatically. You can use CloudFormation with an update policy that will roll your change through. In my experience it's usually better to treat ASGs as immutable and to launch new ASGs with new launch configurations, so it's easier to attach and detach groups of instances atomically to an ELB.
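For reference, that update policy looks roughly like this in a CloudFormation template (sizes and pause time are placeholders, and the group's other required properties are elided):

code:
MyASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName: !Ref MyLaunchConfig
    MinSize: '2'
    MaxSize: '6'
    # ...subnets and the rest of the group's properties...
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 2   # keep capacity up while instances rotate
      MaxBatchSize: 1
      PauseTime: PT5M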

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Rotation isn't a problem, as in the end everything will run the same version; it's purely a scaling crutch for the time being until I can get a proper process in place.

Our retarded process:
Octopus does a rolling deploy to 3 servers at a time, which is slow and painful, but there are 5 independent large services per instance, all deploying at different times, which makes it hell for us to manage.

When the ASG scales up, it triggers Octopus to do a re-deploy, but this must wait for the instance to come online, then register with DNS, then register with Octopus, and then run the deploy steps - which currently takes us 2 hours for every 5 instances we scale, due to bullshit reasons such as having to redeploy all 5 services, with their post-deploy tests, for every loving server that comes online. If I can bake every successful deployment into an AMI with a short retention policy, scaling gets faster at the expense of some sanity at the ASG level.

This is a crutch before using Spinnaker properly to blue-green all their poo poo, because they're paranoid as gently caress when deploying even minor changes - they need a 40-minute post-deploy suite :suicide:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cancelbot posted:

How straightforward is it to do the following in AWS (probably lambda):

1. Receive HTTP POST from $deploy_tool on success
2. Take one of the instances that was deployed to
3. Save AMI of server
4. Update launch configuration of the auto scaling group to use new AMI

Essentially a success-triggered AMI bake before we can actually use spinnaker like real devops :v:
This is two Terraform resources: an aws_ami and an aws_autoscaling_group. $ci_tool can run the thing itself without a webhook if you're using something like Jenkins for your continuous delivery pipeline.

e: if you wanted to you could run Terraform from a Lambda function though
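Sketched out - note it's the aws_ami_from_instance resource that bakes from a running instance, and the variable names and sizes here are made up:

code:
resource "aws_ami_from_instance" "bake" {
  name               = "deploy-bake-${var.build_number}"
  source_instance_id = "${var.deployed_instance_id}"
}

resource "aws_launch_configuration" "lc" {
  name_prefix   = "myapp-"
  image_id      = "${aws_ami_from_instance.bake.id}"
  instance_type = "m4.large"

  lifecycle {
    create_before_destroy = true # launch configurations are immutable
  }
}

resource "aws_autoscaling_group" "asg" {
  name                 = "myapp-asg"
  launch_configuration = "${aws_launch_configuration.lc.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 2
  max_size             = 6
}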

Vulture Culture fucked around with this message at 16:30 on Jul 7, 2017

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Whoa, mind blown. We use Terraform for standing up environments but never thought it could be used to bake at deploy time!

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Dren posted:

Harik said he wants to


Your process has these steps:


What I'm talking about is having step 1 performed once - possibly by a CI step - every time the toolchain changes (which shouldn't be very often), instead of once for every build. This would meet Harik's goal of having the pre-build state cached so it can spin up instantly.

Hey, I realized I forgot to thank you for this. I meant to, but the forums took lower priority than a new baby and new hires to train. Still working on it, but the majority of the pain is getting the custom dev environment set up for ARM cross-compiling to the target, with the specific library versions that will exist there.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
What are y'all goons using to do Continuous Delivery in a Java based environment?

I'm moving to a new job at the end of the month, and where my current job is doing CI/CD with Python to deliver a SaaS solution, the new shop does end user software with a number of products and JAR files meant to be installed and run by an end user.

I'm particularly interested in how to provide a desktop/GUI for sales/support folks, so that they can always leverage the latest version or the last few versions of the software.

OWLS!
Sep 17, 2009

by LITERALLY AN ADMIN

Gyshall posted:

the new shop does end user software with a number of products and JAR files meant to be installed and run by an end user.

Abandon all hope all ye who enter here.

(I sure as hell hope you have some sort of auto-update/deployment mechanism, because relying on your users to do anything is a recipe for hell.)

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Sorry, to clarify: that is what the end product is (as in, a customer would be installing it). I'm not going to rely on any internal user here. Where I'm at now, we built a web front end that talks to the Jenkins API and basically has a big button that says "GIVE ENVIRONMENT"; it spits out a Docker container running the web app and a URL for the user who pressed the button. It works quite well for the business dev/support folks, but at Java Place (TM) I don't think I'll have the luxury of a web application, and I don't want to be in the business of distributing Docker containers to non-technical folks either.

I'm interested in creating a similar pipeline, but instead delivering a full X environment with the JAR already pre-installed for their use.

Ideally, the X environment would run on a cluster of some sort, or on AWS, but I'm not sure just yet. Just wondering if anyone has been able to tackle something like this without the use of Citrix, etc.

https://guacamole.incubator.apache.org/ seems pretty good and close to what I'm looking for.

JHVH-1
Jun 28, 2002

Gyshall posted:

What are y'all goons using to do Continuous Delivery in a Java based environment?

I'm moving to a new job at the end of the month, and where my current job is doing CI/CD with Python to deliver a SaaS solution, the new shop does end user software with a number of products and JAR files meant to be installed and run by an end user.

I'm particularly interested in how to provide a desktop/GUI for sales/support folks, so that they can always leverage the latest version or the last few versions of the software.

Bleh. My last job had a Java-based app they were moving away from to a hosted SaaS. Everything was in Java, though, built in Bamboo with Gradle/Grunt, I think. Parts of the app were eventually built in dependent plans, so if you updated one part of the app it triggered any parent plans. The bits and pieces got stored back in Maven, and the master build plan just took all of that and assembled it.

The server-side pieces got shipped off to OpsWorks stacks in AWS, which worked nicely for Java apps (auto scaling based on CPU, plus extra time-based servers added during peak hours to reduce waiting on spin-up).

There were some test servers that automatically got started for nightly tests I believe.

ultrabay2000
Jan 1, 2010


Can anyone offer pros/cons of GoCD compared to Jenkins currently? We're evaluating both. GoCD seems to have a nicer UI out of the box, but Jenkins is a lot more widely used. GoCD seemed particularly strong with value streams, but Jenkins has made progress on that front.

I'm leaning towards Jenkins because it's free and more widely used but we use GoCD heavily already.
