Mr. Crow
May 22, 2008

Snap City mayor for life
Just going to plug this: The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices (https://www.amazon.com/dp/152391744X/ref=cm_sw_r_cp_apa_LBNbybJWVDACY) is a pretty good book. It covers a lot of the whole CI/CD cycle, ties it all together in a way that's hard to find anywhere online, and has tons of examples.

It's also pretty up-to-date in my experience, excellent for people getting their feet wet, and it probably has some tips or tricks for a more established organization.

Votlook
Aug 20, 2005
Does anyone have experience doing blue-green deployments in versioned infrastructures?
At work we are using AWS CloudFormation templates to manage our infrastructure.
We're mostly happy with this, but updating CloudFormation stacks is a bit of a black box at times,
so we want to move to blue-green deployments.
I get the impression that blue-green deployments don't really fit in CloudFormation.
Does anyone have experience with this, or is CloudFormation the wrong tool for the job?

Votlook
Aug 20, 2005

FamDav posted:

It's also important to realize that a dockerfile generates a new layer for every docker command it executes. So if you download an entire compiler toolchain into your image just to discard it after you perform compilation, you are still downloading that toolchain on every docker pull. They still(!) haven't even given users an option to auto squash dockerfiles.

Of course the people at docker are far too busy implementing new buzzword features.
There is a workaround though: just chain commands with &&. If you install the compiler toolchain, compile the code, and remove the toolchain in a single RUN command, the toolchain is not included in the final image.
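As a rough illustration of that pattern (the base image, package names, and build command here are just placeholders):

code:
FROM debian:jessie
COPY . /src
# install the toolchain, build, and remove the toolchain in one RUN,
# so no layer of the final image ever contains the compiler
RUN apt-get update && \
    apt-get install -y build-essential && \
    make -C /src && \
    apt-get purge -y build-essential && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*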

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Votlook posted:

Does anyone have experience doing blue-green deployments in versioned infrastructures?
At work we are using AWS CloudFormation templates to manage our infrastructure.
We're mostly happy with this, but updating CloudFormation stacks is a bit of a black box at times,
so we want to move to blue-green deployments.
I get the impression that blue-green deployments don't really fit in CloudFormation.
Does anyone have experience with this, or is CloudFormation the wrong tool for the job?

blue green is kinda bad but you can do blue green with cfn. separate your resources into blue and green and then update whichever is the standby on one pass, then flip the load balancer/dns/config on a second pass when you are ready to flip. optionally do a third pass where you clean up the old live resources
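roughly what that two-pass flow looks like with the cli (stack names, templates, and parameter names are all made up):

code:
# pass 1: update the standby (green) stack to the new application version
aws cloudformation update-stack --stack-name app-green \
  --template-body file://app.yaml \
  --parameters ParameterKey=AppVersion,ParameterValue=2.0.0

# pass 2: once green is healthy, repoint the routing stack (DNS/LB) at it
aws cloudformation update-stack --stack-name app-routing \
  --template-body file://routing.yaml \
  --parameters ParameterKey=ActiveEnvironment,ParameterValue=green

# optional pass 3: tear down or recycle the old blue resources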

the talent deficit fucked around with this message at 00:14 on Oct 19, 2016

EkardNT
Mar 31, 2011

quote:

blue green is kinda bad but you can do blue green with cfn. separate your resources into blue and green and then update whichever is the standby on one pass, then flip the load balancer/dns/config on a second pass when you are ready to flip. optionally do a third pass where you clean up the old live resources

I'm curious: what drawbacks have you encountered with blue-green deployments?

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





i think blue/green encourages/enables some really harmful practices like treating your standby environment as a staging/integration environment and relaxing requirements on api compatibility. i think in the small (like using blue/green for a particular subsystem like a database or an application group) blue/green can be okay but if you can do blue/green in the small you can probably just do gradual replacement where you can have multiple versions deployed simultaneously without impacting users. basically, i think if you have a healthy blue/green procedure you don't need it, and if you need it you probably have a hard time deploying regularly

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

i think blue/green encourages/enables some really harmful practices like treating your standby environment as a staging/integration environment and relaxing requirements on api compatibility. i think in the small (like using blue/green for a particular subsystem like a database or an application group) blue/green can be okay but if you can do blue/green in the small you can probably just do gradual replacement where you can have multiple versions deployed simultaneously without impacting users. basically, i think if you have a healthy blue/green procedure you don't need it, and if you need it you probably have a hard time deploying regularly
Hot take: if it's even possible to use your standby environment as a live staging environment, you already don't have a healthy blue/green procedure.

Mr. Crow
May 22, 2008

Snap City mayor for life
Anyone have experience setting up teamcity in a docker container behind a reverse proxy which is also in a container (nginx)?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr. Crow posted:

Anyone have experience setting up teamcity in a docker container behind a reverse proxy which is also in a container (nginx)?
TeamCity's a weird application to run in a container or even a configuration management setting. It wants to own your config files, not coexist with something else that's trying to manage them. You can't roll back easily because of the database migrations between versions. Stuffing it into a container in any normal way breaks its built-in upgrade process.

This is one of those applications I would generally file under "do not Dockerize" unless you have a mandate to run it on Kubernetes or ECS or something.

Sedro
Dec 31, 2008

Mr. Crow posted:

Anyone have experience setting up teamcity in a docker container behind a reverse proxy which is also in a container (nginx)?

I run teamcity in a docker container. There's nothing to it. Are you having a specific problem?

The latest teamcity can store its build configuration in code and version control it. They even have official docker images now. https://www.jetbrains.com/teamcity/whatsnew/
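If it helps anyone, spinning up the official server image is roughly this (the mount paths are what the image's docs suggest, so treat them as assumptions and check the README):

code:
docker run -d --name teamcity-server \
  -v /srv/teamcity/data:/data/teamcity_server/datadir \
  -v /srv/teamcity/logs:/opt/teamcity/logs \
  -p 8111:8111 \
  jetbrains/teamcity-server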

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sedro posted:

I run teamcity in a docker container. There's nothing to it. Are you having a specific problem?

The latest teamcity can store its build configuration in code and version control it. They even have official docker images now. https://www.jetbrains.com/teamcity/whatsnew/
Oh, hey, that's nice. I haven't played with version 10 yet. Listen to this person.

Mr. Crow
May 22, 2008

Snap City mayor for life

Sedro posted:

I run teamcity in a docker container. There's nothing to it. Are you having a specific problem?

The latest teamcity can store its build configuration in code and version control it. They even have official docker images now. https://www.jetbrains.com/teamcity/whatsnew/

It works wonderfully when I use the IP and port directly, but when I try to put it behind nginx as a reverse proxy it alternates between 404s, 502s, and not rendering 80% of the pages (and again 404ing when trying to access certain parts of TeamCity).

Mostly I'm having nginx problems. I guess you need some special settings for websockets, but I haven't had much luck thus far, and our internet is out today, so hurray.
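For reference, the usual nginx bits for proxying websockets look something like this (the upstream name and port are placeholders for wherever the TeamCity container is listening):

code:
location / {
    proxy_pass http://teamcity:8111;
    proxy_http_version 1.1;
    # these two headers are what let the websocket upgrade through the proxy
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}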




Related but tangential question: can someone explain to me the benefit of a docker data container vs. just mounting a volume directly? It seems like an unnecessary layer of indirection; you're replacing being wired to the volume with being wired to the container. I must be missing something. You can share volumes between multiple containers in my experience, so...?

ultrabay2000
Jan 1, 2010


I want to look into setting up some automated build systems. I don't have any past experience with the likes of Jenkins etc., and I'm also not super close with anyone who is strong in this area. It seems fairly straightforward, but I'm wondering if there's a thread list of recommended reading for getting started. Looking for things that cover design considerations and process issues or philosophies. Not too concerned with the how-to-set-up-Jenkins tutorial kind of stuff.

This book seems pretty well reviewed. The only catch is that I don't use Java, but I don't think that would be a big issue.

Sedro
Dec 31, 2008

Mr. Crow posted:

Related but tangential question: can someone explain to me the benefit of a docker data container vs. just mounting a volume directly? It seems like an unnecessary layer of indirection; you're replacing being wired to the volume with being wired to the container. I must be missing something. You can share volumes between multiple containers in my experience, so...?

You're right, there's no reason to use data containers now that named volumes are a thing.
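If it's useful, the named-volume flavor is just this (volume, image, and container names are made up), and the same volume can be mounted into as many containers as you like:

code:
docker volume create appdata
docker run -d --name app    -v appdata:/var/lib/app myapp:1.0
docker run -d --name worker -v appdata:/var/lib/app myworker:1.0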

spacebard
Jan 1, 2007

Football~

ultrabay2000 posted:

I want to look into setting up some automated build systems. I don't have any past experience with the likes of Jenkins etc., and I'm also not super close with anyone who is strong in this area. It seems fairly straightforward, but I'm wondering if there's a thread list of recommended reading for getting started. Looking for things that cover design considerations and process issues or philosophies. Not too concerned with the how-to-set-up-Jenkins tutorial kind of stuff.

This book seems pretty well reviewed. The only catch is that I don't use Java, but I don't think that would be a big issue.

The Jenkins UI is fairly straightforward to work with, but in the end I find it more manageable to use the Job DSL plugin and write job definitions in Groovy. It makes it easier to review changes in version control. The API documentation is easy enough to look through and lists dependencies for various functions.

Basically have one job that builds the rest via Groovy. Automate your automation.
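A tiny Job DSL sketch of that shape (the repo URL, service names, and build command are made up):

code:
// seed job script: generates one build job per listed service
['billing', 'frontend'].each { svc ->
    job("build-${svc}") {
        scm {
            git("https://git.example.com/${svc}.git", 'master')
        }
        triggers {
            scm('H/15 * * * *')
        }
        steps {
            shell('./gradlew clean build')
        }
    }
}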

ultrabay2000
Jan 1, 2010


Alright that's helpful. I'm starting to see how Jenkins could get unwieldy.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice
(I apologize in advance for screwing up the nomenclature, since my docker experience extends to about a week so far.) Anyone have a suggestion for managing docker services for use in deploying different branches of code? I'm using Jenkins to run shell commands after images are built and stored in a private repo.

Example:
Branch A has image built (1.0).
Branch B has image built (2.0).
I want to deploy Branch A using `docker service update --image` since A is just the next version of that branch.
I want to deploy Branch B using docker service separately, instead of doing `docker service update --image` (right?).

Should I just have multiple docker services started already and just tell Jenkins to push something to whatever service respectively i.e. Service-A or Service-B?
Should I be starting up new services for all the one off branches that want to run independently? Is there a way to stop services after they aren't being used?

The goal is to allow QA to test multiple things independently if needed so that developers aren't waiting to push code because current QA environment is being used. We currently just have DEV/QA/PROD environments and builds get pushed around to the environments as needed but we're trying to migrate to docker since infrastructure team has drunk the koolaid. I'm just a developer, but do devops a lot for our team since we're sorta in control of our own destiny in terms of build/deploy and I'm the only one with any sort of experience.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

poemdexter posted:


The goal is to allow QA to test multiple things independently if needed so that developers aren't waiting to push code because current QA environment is being used. We currently just have DEV/QA/PROD environments and builds get pushed around to the environments as needed but we're trying to migrate to docker since infrastructure team has drunk the koolaid. I'm just a developer, but do devops a lot for our team since we're sorta in control of our own destiny in terms of build/deploy and I'm the only one with any sort of experience.

It sounds like you're not continuously integrating. You shouldn't need multiple environments to QA multiple features.

Walked
Apr 14, 2003

I'm coming from an infrastructure background (lots and lots of ops), with a lot of experience with scripting and automation (PowerShell, Python, and some C#).

Tomorrow I have a freaking 6hr panel interview for a Senior DevOps Engineer position, including VTC with team members across the US, and leading a roundtable discussion on a topic of my choosing (I'm covering Server 2016 / Docker and managing it with a custom API for shared development environments, complete with a working demo on a laptop).

Just a bit nervous, as it's rare that I do an interview on this scale; the group of guys actually seem great, and normally I'd laugh if someone asked for an interview of that duration, but I'm going to give it a shot. It gets me out of my comfort zone too, which is cool.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

New Yorp New Yorp posted:

It sounds like you're not continuously integrating. You shouldn't need multiple environments to QA multiple features.

What do you mean? Basically when new code gets checked in onto the release branch, a build kicks off and the artifact gets deployed onto the QA environment overriding the previous version. What I'm trying to do is handle the case where we need 2 different versions running at the same time that can be accessed separately.

Mr. Crow
May 22, 2008

Snap City mayor for life

poemdexter posted:

(I apologize in advance for screwing up the nomenclature, since my docker experience extends to about a week so far.) Anyone have a suggestion for managing docker services for use in deploying different branches of code? I'm using Jenkins to run shell commands after images are built and stored in a private repo.

Example:
Branch A has image built (1.0).
Branch B has image built (2.0).
I want to deploy Branch A using `docker service update --image` since A is just the next version of that branch.
I want to deploy Branch B using docker service separately, instead of doing `docker service update --image` (right?).

Should I just have multiple docker services started already and just tell Jenkins to push something to whatever service respectively i.e. Service-A or Service-B?
Should I be starting up new services for all the one off branches that want to run independently? Is there a way to stop services after they aren't being used?

The goal is to allow QA to test multiple things independently if needed so that developers aren't waiting to push code because current QA environment is being used. We currently just have DEV/QA/PROD environments and builds get pushed around to the environments as needed but we're trying to migrate to docker since infrastructure team has drunk the koolaid. I'm just a developer, but do devops a lot for our team since we're sorta in control of our own destiny in terms of build/deploy and I'm the only one with any sort of experience.

Maybe I'm misunderstanding but just use tags? e.g. latest-dev, latest-qa, latest-prod

The `latest` tag with docker images is confusing and doesn't actually mean it's the latest version of an image; you have to explicitly tag it as such. There are a few baked-in convenience features with it (it's used if you don't specify a tag with `docker run`, etc.), but fundamentally there is nothing different about it. Just create your own convention of always using latest-XXX to get the latest version of whichever 'branch' you want.
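For what it's worth, that convention boils down to something like this after each build (registry, image, and service names are made up):

code:
# tag the freshly built image with the branch convention and push it
docker tag myapp:build-1234 registry.example.com/myapp:latest-qa
docker push registry.example.com/myapp:latest-qa

# then point the matching service at that tag
docker service update --image registry.example.com/myapp:latest-qa myapp-qa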

I'm not sure there is a way to detect if a service isn't being used; it seems like it would have to be baked into the service. I guess you could check the logs and see the last time it printed anything.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

poemdexter posted:

What do you mean? Basically when new code gets checked in onto the release branch, a build kicks off and the artifact gets deployed onto the QA environment overriding the previous version. What I'm trying to do is handle the case where we need 2 different versions running at the same time that can be accessed separately.

It comes back to the question of why you want two different versions to begin with. If work is being continuously integrated, you (theoretically) only need to be testing a single version -- the version you're trying to get pushed out the door.

This is assuming a web app, of course. Standalone applications are a different ballgame.

smackfu
Jun 7, 2004

We are upgrading our enterprise-wide Jenkins to v2.something from v1.something. Anything especially cool we should look into using? Our install is pretty locked down as far as team level customization.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

smackfu posted:

We are upgrading our enterprise-wide Jenkins to v2.something from v1.something. Anything especially cool we should look into using? Our install is pretty locked down as far as team level customization.
The biggest feature is multibranch pipelines, where it creates a folder and one job per branch; when you run the job, it looks for and executes a Jenkinsfile in the root of the chosen branch. This file can contain everything a jenkins job does, including setting optional/mandatory parameters and running programs and such. One other new feature that comes with this is a stage view, where you can see success/fail for each self-defined "stage" of the build, and since it's in Groovy you can create arbitrarily complex logic and build order. There's also a new parallelization ability within a stage.

It's a solid move toward storing your jenkins configuration within your source control environment rather than outside it, and also a move to a more program-esque file config over the standard GUI of 1.x.

here's a pretty good primer https://wilsonmar.github.io/jenkins2-pipeline/
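A minimal scripted Jenkinsfile along those lines (stage names and build commands are just placeholders) looks roughly like:

code:
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        sh './gradlew clean assemble'
    }
    stage('Test') {
        // run two suites in parallel within a single stage
        parallel unit: {
            sh './gradlew test'
        }, integration: {
            sh './gradlew integrationTest'
        }
    }
}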

Bhodi fucked around with this message at 16:06 on Nov 10, 2016

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

New Yorp New Yorp posted:

It comes back to the question of why you want two different versions to begin with. If work is being continuously integrated, you (theoretically) only need to be testing a single version -- the version you're trying to get pushed out the door.

This is assuming a web app, of course. Standalone applications are a different ballgame.

The process you are describing is 99% the case here. What I'm trying to set up is that 1% case where we need to test something separate. I'm managing a slew of microservices so we have both web app and standalone API pieces.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice
A good example of this is a feature we might have wrapped in a feature toggle so we'd sorta wanna test it feature on and feature off in parallel manually.

Docjowles
Apr 9, 2009

poemdexter posted:

A good example of this is a feature we might have wrapped in a feature toggle so we'd sorta wanna test it feature on and feature off in parallel manually.

Is the feature on/off state compiled into your code or something? Another approach would be to set a default value, but allow the feature to toggle on/off via a config file, DB entry, API endpoint, or whatever you prefer. And then deploy the same artifact to all QA servers, but using config management / etcd / whatever you normally use to configure the app, toggle the flag on some boxes and not others.
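In docker terms that could be as simple as running the same image twice with the flag flipped via an environment variable (the image tag and flag name are made up):

code:
# same artifact everywhere; only the flag differs per QA instance
docker run -d --name qa-toggle-on  -e FEATURE_NEW_CHECKOUT=true  myapp:1.2.3
docker run -d --name qa-toggle-off -e FEATURE_NEW_CHECKOUT=false myapp:1.2.3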

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

Docjowles posted:

Is the feature on/off state compiled into your code or something? Another approach would be to set a default value, but allow the feature to toggle on/off via a config file, DB entry, API endpoint, or whatever you prefer. And then deploy the same artifact to all QA servers, but using config management / etcd / whatever you normally use to configure the app, toggle the flag on some boxes and not others.

True. I'm starting to think the 1% of cases is really not something we should be trying to handle. The once in a blue moon situations just seem to get the attention of management more often than the CI process that runs smooth as butter for dozens of releases...

Walked
Apr 14, 2003

Walked posted:

I'm coming from an infrastructure background (lots and lots of ops), with a lot of experience with scripting and automation (PowerShell, Python, and some C#).

Tomorrow I have a freaking 6hr panel interview for a Senior DevOps Engineer position, including VTC with team members across the US, and leading a roundtable discussion on a topic of my choosing (I'm covering Server 2016 / Docker and managing it with a custom API for shared development environments, complete with a working demo on a laptop).

Just a bit nervous, as it's rare that I do an interview on this scale; the group of guys actually seem great, and normally I'd laugh if someone asked for an interview of that duration, but I'm going to give it a shot. It gets me out of my comfort zone too, which is cool.

Holy poo poo that was a brutal interview.

Yes, throw me a laptop and have me play code golf with PowerShell while streaming to staff across the country and I'm up on a projector.

I mean I got all their technical exercise questions done/right but that's so far outside of my comfort zone it was mentally fatiguing as heck.

And then I had to present.

Smart guys and a well-organized devops team though, even if it was a thorough interview.

smackfu
Jun 7, 2004

Bhodi posted:

The biggest feature is multibranch pipelines, where it creates a folder and one job per branch; when you run the job, it looks for and executes a Jenkinsfile in the root of the chosen branch. This file can contain everything a jenkins job does, including setting optional/mandatory parameters and running programs and such. One other new feature that comes with this is a stage view, where you can see success/fail for each self-defined "stage" of the build, and since it's in Groovy you can create arbitrarily complex logic and build order. There's also a new parallelization ability within a stage.

It's a solid move toward storing your jenkins configuration within your source control environment rather than outside it, and also a move to a more program-esque file config over the standard GUI of 1.x.

here's a pretty good primer https://wilsonmar.github.io/jenkins2-pipeline/

Thanks, that sounds very cool. I love storing config in version control. I hope they don't lock it down on us too much.

Virigoth
Apr 28, 2009

Corona rules everything around me
C.R.E.A.M. get the virus
In the ICU y'all......



I just got told to halt our Jenkins 2 upgrade process due to our merger closing early. Anyone dealt with splitting Jenkins that might have some life experiences or best practices? We have a single box in AWS and run about 250 slave executors. 4000+ jobs and growing daily. We average around 80 jobs concurrent and the system is starting to die slowly on me.

Definitely use Job DSL and Jenkinsfiles. We're in the middle of converting over to those while getting rid of our chained freestyle jobs in favor of pipelines.

Docjowles
Apr 9, 2009

Jesus. I'll be curious how that goes for you.

What kind of EC2 instance, and Java heap/GC/etc settings are you using on the master? We have a much smaller though still busy Jenkins instance, and it falls over every week or two under normal usage. I haven't taken the time to dig into it because it's not a huge deal if Jenkins is down a few minutes a month, but I would like to stabilize it without just wallpapering over the problem with way too much hardware.

edit: this reminds me I need to go make sure no one is running builds directly on the master again. For a while someone was running a horrible PHP-based job that ate all the RAM. I stamped that out, but now I'm wondering if someone else has done it again, intentionally or not.

Docjowles fucked around with this message at 02:09 on Nov 12, 2016

Virigoth
Apr 28, 2009

Corona rules everything around me
C.R.E.A.M. get the virus
In the ICU y'all......



We use a c4.2xlarge for the master. I should say it's 250 slave executors on other standalone Jenkins slave boxes. I'll look up data tomorrow and post it. I can also toss up a script that checks for jobs on master if you need an easy one to run and get a report back on a timer.

Docjowles
Apr 9, 2009

Virigoth posted:

We use a c4.2xlarge for the master. I should say it's 250 slave executors on other standalone Jenkins slave boxes. I'll look up data tomorrow and post it. I can also toss up a script that checks for jobs on master if you need an easy one to run and get a report back on a timer.

Sure, that'd be super helpful. As well as any non-default JVM tuning you may have done. Although we actually are on Jenkins 2.x now so it may not be comparable. We upgraded in part in the hopes of better stability, but nothing's that easy.

Our master's VM is comparable to a c4.xlarge (so half the horsepower) but coordinating WAY fewer jobs. Which is making me very suspicious when it crashes, since it really shouldn't be doing much work. All of our jobs are supposed to take place on slaves, too. But there's a few devs from Ye Olden Days of the company who just do whatever the hell they want, because that's how it was done in 2006 (with 1/20th the number of engineers competing for resources), and it was good enough then :bahgawd:

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
I had instability running openjdk and switched to sun but am running far fewer jobs than you. Obviously bump the memory settings in the launch options. There were some more aggressive heap and garbage collection options I dug out of the internet a while back that helped but those were only valid for 1.x

Hughlander
May 11, 2005

Virigoth posted:

We use a c4.2xlarge for the master. I should say it's 250 slave executors on other standalone Jenkins slave boxes. I'll look up data tomorrow and post it. I can also toss up a script that checks for jobs on master if you need an easy one to run and get a report back on a timer.

We use an 8xl for about the same load, with the JVM tuned for large heaps. We also stop nightly for a clean backup.

bolind
Jun 19, 2005



Pillbug
Does anyone know how to make the gerrit trigger not post one single comment upon completion? Situation is thus:

  • Three jenkins jobs are triggered when a patch is sent for review.
  • One of these jobs is an "info-only" and doesn't vote regardless of failure/success.
  • Currently these three jobs all produce one single, combined comment in the review.

This means that, in the case that the info-only job is the last to finish, we need to sit around and wait for that. It would be nice if it could be separated. Any ideas?

Virigoth
Apr 28, 2009

Corona rules everything around me
C.R.E.A.M. get the virus
In the ICU y'all......



Hughlander posted:

We use an 8xl for about the same load, with the JVM tuned for large heaps. We also stop nightly for a clean backup.

What type of tuning do you have in place? I've only owned this beast for about 6 months and it has mostly been trying to figure out what the gently caress the last guy did (no notes) and stabilizing the platform. We're rebooting once a week right now to stop ~bad things~ from happening. I'm learning Java/Groovy slowly but am still a bit fuzzy on making sure it is set up right.

My current Java options look like this (Java 1.7):
code:
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dgroovy.use.classvalue=true -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC \
  -javaagent:/opt/newrelicjava/newrelic.jar -Xms8192m -Xmx10240m -XX:PermSize=8192m -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York"
We also have a script that runs GC every 10-15 minutes or so.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Anybody here use Sonarqube? I set some settings via the web UI and I was hoping it would update the conf/sonar.properties file, but that doesn't seem to be the case. I was hoping to see how it was storing things like the SMTP settings. I guess it must be in the database somewhere?

Dreadrush
Dec 29, 2008
Hi I'm very new to the whole Docker thing and am trying to learn more about it.

I want to be able to deploy an nginx server that will host static files for my website. The static files are compiled by running webpack.

Currently, I have two containers:
web: uses FROM node:latest to take my source files, and builds the dist static files
nginx: uses FROM nginx to run nginx

I have a docker-compose file setup to run the two dockerfiles.

How can "copy" the files generated in the web container to the nginx container? The web container doesn't even have any server running - it has only created static files.
Should I only have one container (nginx) and be generating the static files all inside that?

I tried hosting an express server in the web container and using a volume to share the generated files, which worked fine; however, on multiple deployments it seemed I had to do extra work to delete all the volumes first, and it felt like I was doing this wrong. Also, in the end all I need is the nginx server with static files, with no node server running at any time.
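For context, a rough docker-compose sketch of the shared-volume setup described above (service names and paths are assumptions):

code:
version: '2'
services:
  web:
    build: ./web            # FROM node:latest; runs webpack into /app/dist
    volumes:
      - static:/app/dist
  nginx:
    build: ./nginx          # FROM nginx; serves the shared volume
    ports:
      - "80:80"
    volumes:
      - static:/usr/share/nginx/html:ro
volumes:
  static: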
