22 Eargesplitten
Oct 10, 2010



Going through the Docker tutorials I don't even know enough Linux shell to know what some of these commands are doing, so welp. :smith: Guess I'm not doing this stuff without learning a lot more Linux fundamentals.

Mao Zedong Thot
Oct 16, 2008


Yeah if you want to get into SRE/devops/ops stuff you absolutely need to familiarize yourself with Linux. Knowing general OS stuff is more helpful than knowing whatever the hot poo poo that runs on top of it. It's all gonna boil down to /proc and shells and cgroups and kernels and netstat and iptables etc.
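
e.g. just getting comfortable poking at a live box goes a long way. A few harmless commands to start with (nothing here is Docker-specific, it's all plain Linux):
code:
# every process exposes its state under /proc; $$ is the current shell's own pid
cat /proc/$$/status          # name, state, memory, threads of this shell
ls -l /proc/$$/fd            # the file descriptors it has open

# what's listening on the machine?
netstat -tlnp                # or `ss -tlnp` on newer systems

# what is the firewall doing?
iptables -L -n -v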

22 Eargesplitten
Oct 10, 2010



I thought I knew a bit from using Fedora in college and on a VM on my laptop, but I look at some of these commands and don't know what the hell they mean. Like:

code:
strace -c -f -S name whoami 2>&1 1>/dev/null | tail -n +3 | head -n -2 | awk '{print $(NF)}'
Like why the hell is there a linked list involved here? Why is it sending whatever output 2 is to the address of 1 and then sending 1 to /dev/null? I heard in the IT threads that Docker was pretty easy to learn, but now I realize that's only if you're already a Linux admin, and every shop I've worked at has been Windows. :smith:

I'm sorry if I'm being overdramatic or stupid here, but Docker and Kubernetes seem really cool to me and I dread my job every morning and now I see that I've got years to go of Linux work before I can start being useful with this stuff and even if I could get into a junior Linux admin role that's probably a pay cut I can't afford at this point.

22 Eargesplitten fucked around with this message at 05:13 on Jul 29, 2018

Methanar
Sep 26, 2013

by the sex ghost
trace the syscalls that the whoami program makes during its operation (counting them, following forks, sorting the summary by name); point stderr, where strace writes its summary, at the pipe and send whoami's own stdout to /dev/null. Skip the two header lines, drop the two summary lines at the bottom, and print the last whitespace-delimited column: the syscall names.
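
Or, the same pipeline with each piece commented, in case that reads easier:
code:
# strace -c        count syscalls and print a summary table (strace writes it to stderr)
#        -f        follow any child processes
#        -S name   sort the summary table by syscall name
# 2>&1 1>/dev/null redirections apply left to right: stderr is pointed at the pipe
#                  first, then whoami's own stdout is thrown away to /dev/null
# tail -n +3       start at line 3, i.e. skip the two header lines
# head -n -2       drop the last two lines (the separator and the totals row)
# awk              print the last whitespace-delimited field of each line: the syscall name
strace -c -f -S name whoami 2>&1 1>/dev/null | tail -n +3 | head -n -2 | awk '{print $(NF)}'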

LochNessMonster
Feb 3, 2005

I need about three fitty


22 Eargesplitten posted:

Going through the Docker tutorials I don't even know enough Linux shell to know what some of these commands are doing, so welp. :smith: Guess I'm not doing this stuff without learning a lot more Linux fundamentals.

Get the Sander van Vugt book for RHCSA and start working on Linux. It’s really fundamental and an RHCSA level of understanding will get you a long way.

On the side you can still explore Docker.

22 Eargesplitten
Oct 10, 2010



Yeah, you're right. If I can study Docker I can study for the RHCSA.

Does Lynda have any good courses for Linux basics?

E: Looks like they have a course by Grant McWilliams, here goes nothing. At least until I can get the book, the library has the second edition from 2013 which I assume is somewhat out of date.

22 Eargesplitten fucked around with this message at 19:51 on Jul 29, 2018

2nd Rate Poster
Mar 25, 2004

i started a joke

22 Eargesplitten posted:

E: Looks like they have a course by Grant McWilliams, here goes nothing. At least until I can get the book, the library has the second edition from 2013 which I assume is somewhat out of date.

The basics don't change enough for it to matter; the biggest difference between 2013 and now is the change to systemd and the deprecation of ifconfig/netstat.

Systemd can be a beast, I won't lie, but if you're operating at the level of "start an app, read some logs, restart an app," the important bits are learnable in a few days. If you focus on redhat stuff, the service restart procedures don't even have to change.
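
For that level of day-to-day work it's basically a handful of commands. A quick sketch, with "myapp" standing in for whatever unit you actually run:
code:
systemctl status myapp       # is it running, when did it start, last few log lines
systemctl restart myapp      # stop + start
systemctl enable myapp       # start it at boot

journalctl -u myapp -f       # read its logs; -u filters by unit, -f follows like tail -f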

22 Eargesplitten
Oct 10, 2010



Thanks, I'll swing by after work and pick it up from the library. I'm going through a Linux basics course on Lynda since the RHCSA course recommended a year of Linux experience or basics coursework before taking it. It looks like at least in the Denver/Boulder area of CO a junior Linux admin should make around what I'm making now. I would like to make more, but I'm willing to put that off a year or so if it means getting on the right track.

LochNessMonster
Feb 3, 2005

I need about three fitty


22 Eargesplitten posted:

Thanks, I'll swing by after work and pick it up from the library. I'm going through a Linux basics course on Lynda since the RHCSA course recommended a year of Linux experience or basics coursework before taking it. It looks like at least in the Denver/Boulder area of CO a junior Linux admin should make around what I'm making now. I would like to make more, but I'm willing to put that off a year or so if it means getting on the right track.

If you don’t start at the bottom (linux basic stuff) you’ll need a lot more time figuring out (basic linux) stuff while working on Docker, K8s, and probably a lot of other stuff too.

I get you want to get on the gravy train but to effectively do that you need some Linux experience. This does not have to be 1 year of experience but you should be comfortable with a lot of cli work. It’s not rocket science but it does take some time.

Personally I think RHCSA is a pretty good foundation.

cheque_some
Dec 6, 2006
The Wizard of Menlo Park
I have had this tab open for like a year after someone posted it here. I was using it to brush up on some kernel-level stuff I was a little fuzzy on, but it seems like it has a lot of material starting from the beginner level as well.

https://linuxjourney.com/

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Vulture Culture posted:

If your big problem around distributed tracing is context propagation (it's ours for sure), consider OpenCensus instead of trying to deal with OpenTracing directly

Can you elaborate on this? I'm only starting to get into distributed tracing and I'd be interested to hear about your experiences with it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

uncurable mlady posted:

Can you elaborate on this? I'm only starting to get into distributed tracing and I'd be interested to hear about your experiences with it.
The main issue with tracing is that all of your calls need to be correlated together; each logged call needs to know which other call was responsible for invoking it in order to produce a distributed call graph. In a normal system, this sort of information is contained in the stack. In a distributed system, you're responsible for propagating it from place to place. (Okay, maybe your RPC framework or service mesh handles some of the hard parts for you.)

Some languages make this really easy. For example, in Java, you typically have one thread servicing each request, and you have thread-local storage you can use so that you can seamlessly drop that context into upstream calls. In Go, there are some clever and horrifying ways to emulate goroutine-local storage. If your calls to upstream services all go through a library, you can probably code the context propagation into that library directly - just pull out of your thread-local, goroutine-local, whatever storage - and not touch any of your APIs when you want to add tracing. But if you're using something coroutine-driven like Node.js, where there's no concept of request-local storage because it doesn't make sense, you have to design your entire API so that everything you call that conceivably might call an upstream function passes around your request/span context. This actually isn't hard in greenfield - cumbersome, maybe, but not hard - but it's basically a no-go for existing codebases, especially large ones.

OpenCensus standardizes the context, not just the APIs for logging calls. Hopefully, it should promote interoperability with third-party libraries in a way that OpenTracing can't, because the exporter definition is decoupled from the trace collector. If you're familiar with Prometheus, it takes a similar philosophical approach to where it demarcates responsibility over exported data.

Vulture Culture fucked around with this message at 16:56 on Jul 31, 2018

22 Eargesplitten
Oct 10, 2010



LochNessMonster posted:

If you don’t start at the bottom (linux basic stuff) you’ll need a lot more time figuring out (basic linux) stuff while working on Docker, K8s, and probably a lot of other stuff too.

I get you want to get on the gravy train but to effectively do that you need some Linux experience. This does not have to be 1 year of experience but you should be comfortable with a lot of cli work. It’s not rocket science but it does take some time.

Personally I think RHCSA is a pretty good foundation.

True.

Going to be hard to get into anything Linux before I have a cert too. CO is in a weird situation where unemployment is super low but there’s also a ton of people applying for everything.

I’ll take a look at that Linux Journey site today.

Pile Of Garbage
May 28, 2007



Gyshall posted:

Check out Jenkins Job Builder. We mostly use Jenkins for deploys these days, and let Gitlab handle builds.

Cheers, thanks, this looks useful! Also, today I worked out how to use Pipeline Declarations defined in shared libraries, which will make things infinitely easier.

Warbird
May 23, 2012

America's Favorite Dumbass

Me: Developer, give me your code and your vendor contacts so we can do a DevOps.
Dev: Why? I don't want to. I'll do the deploying.
Me: What.

Every day. Every drat day.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Warbird posted:

Me: Developer, give me your code and your vendor contacts so we can do a DevOps.
Dev: Why? I don't want to. I'll do the deploying.
Me: What.

Every day. Every drat day.

If "I'll do the deployment" means "the development team is taking ownership of deployment automation" then that is devops. "Give me your code so I can deploy it" is the exact opposite of devops.

Warbird
May 23, 2012

America's Favorite Dumbass

Which would be perfectly fine by me if the company or teams involved were doing so for that reason. That being said, yeah, it's a traditional ops setup. This is old-fashioned "job security by obfuscation" on the part of the dev.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Warbird posted:

Which would be perfectly fine by me if the company or teams involved were doing so for that reason. That being said, yeah, it's a traditional ops setup. This is old-fashioned "job security by obfuscation" on the part of the dev.

so how is a siloed operations team deploying software "doing a devops"?

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
the idea that everyone on a team should be involved in all three of writing, releasing, and deploying code is a bad one

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Ploft-shell crab posted:

the idea that everyone on a team should be involved in all three of writing, releasing, and deploying code is a bad one

it's not that everyone is responsible, it's that there's no walls or organizational barriers between ops and dev. you don't need to do the deploys and write the code, you just need to be on a team that can write code and deploy it and maintain it

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

it's not that everyone is responsible, it's that there's no walls or organizational barriers between ops and dev. you don't need to do the deploys and write the code, you just need to be on a team that can write code and deploy it and maintain it

Your team has responsibility over a product. This means your team builds, QAs, ships, and runs it in production. There may be an ops or build engineering center of excellence whose job is to facilitate everyone moving faster, but the goal is to get people-bottlenecks out of the way so you can own your own destiny.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
What are some rules of thumb for deciding that it's necessary and justifiable to deploy a service discovery and configuration management solution (ZK, Consul, etcd)? Is it the number of config changes / day? Is it total services with SLAs? My context is that I've never had the opportunity to stop a system from becoming too big to configure and manage at scale and would like to better know when to put my foot down based upon others' experiences. My experience either as a consultant or FTE is being brought in after it's obvious SD is long overdue, so I can only point at consequences like a nerd Ghost of Christmas Future (oftentimes ignored, though, unlike in Dickens' story). Relying upon team consensus doesn't work when nobody's used it before either. Here are my wild-rear end guesses:

  • You can't deploy config changes fast or responsively enough via Ansible / Salt / Chef / Puppet
  • More services are in development than there are engineers being hired to understand how to configure them
  • SCM is not working out for how you manage configurations (too many secrets, rotations awkward, etc.)
  • Developers hard-coding settings into applications or out-of-sync config files have manifested as actual production incidents requiring a formal deployment to fix
  • You need several DNS cutovers for internal services (databases, queues) going down (scheduled or unscheduled) and foresee no end to it (it is not a one-off change)
  • You have dedicated operations engineers and at least one is spending more than 50% of their time managing configurations for deployments (Deployment Dave exists)
  • App code is being written to do client-side load balancing / recovery / fail-fast instead of straightforward use of a queue / topic / endpoint

Methanar
Sep 26, 2013

by the sex ghost
Does anyone actually understand prometheus or use it in any intelligent sort of way?

freeasinbeer
Mar 26, 2015

by Fluffdaddy
I love it, but getting it set up is often pretty manual, and I’d also concede it really only makes sense in the kubernetes world because everything is preinstrumented for it.

What particularly are you having issues with?

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
We run it just like any other container in whatever infrastructure we're monitoring, using python endpoints where needed in our images. For the apps we actually support, we provide patterns for our devs to reference to expose a prom-friendly endpoint.
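
A "prom-friendly endpoint" is just plain text over HTTP; roughly what a scrape of a hypothetical app looks like:
code:
# myapp:8000 is a placeholder for whatever container exposes the metrics
curl -s http://myapp:8000/metrics | head
# typical output:
#   # HELP http_requests_total Total HTTP requests handled
#   # TYPE http_requests_total counter
#   http_requests_total{method="get",code="200"} 1027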

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

What are some rules of thumb for deciding that it's necessary and justifiable to deploy a service discovery and configuration management solution (ZK, Consul, etcd)? Is it the number of config changes / day? Is it total services with SLAs? My context is that I've never had the opportunity to stop a system from becoming too big to configure and manage at scale and would like to better know when to put my foot down based upon others' experiences. My experience either as a consultant or FTE is being brought in after it's obvious SD is long overdue, so I can only point at consequences like a nerd Ghost of Christmas Future (oftentimes ignored, though, unlike in Dickens' story). Relying upon team consensus doesn't work when nobody's used it before either. Here are my wild-rear end guesses:

  • You can't deploy config changes fast or responsively enough via Ansible / Salt / Chef / Puppet
  • More services are in development than there are engineers being hired to understand how to configure them
  • SCM is not working out for how you manage configurations (too many secrets, rotations awkward, etc.)
  • Developers hard-coding settings into applications or out-of-sync config files have manifested as actual production incidents requiring a formal deployment to fix
  • You need several DNS cutovers for internal services (databases, queues) going down (scheduled or unscheduled) and foresee no end to it (it is not a one-off change)
  • You have dedicated operations engineers and at least one is spending more than 50% of their time managing configurations for deployments (Deployment Dave exists)
  • App code is being written to do client-side load balancing / recovery / fail-fast instead of straightforward use of a queue / topic / endpoint
"Service discovery" is an umbrella term that means a lot of different things, and SD isn't mutually exclusive with configuration management workflows—many tools facilitate them. In the current container era, people usually use the term to refer to the use of cluster health and status data to drive the configuration and wiring of decoupled or loosely-coupled systems that speak a standard interface like HTTP, and typically reconfigure on-the-fly with no downtime. But lots of things qualify as service discovery. Chef doing node discovery to rewrite config files and trigger hot service reloads or cluster reconfigurations is a form of service discovery. So is Active Directory using DNS SRV records to identify Kerberos and LDAP servers in the year 2000.

From a high level, I'd bucket the needs for service discovery into one or more of the following reasons:

  1. Different people or teams are responsible for the different layers of a service and coordinating them to deploy changes has become problematic.
  2. It becomes complex and error-prone to change all of the dependency layers, including orthogonal ones like backup and monitoring, when new service instances are deployed.
  3. Due to auto-scaling or other factors, application instances are highly ephemeral and it is now impractical to maintain configuration lists by hand.
  4. It is impractical to orchestrate fixed configuration changes to all the consumers of a service due to scale, diversity of clients, ownership, network perimeters, ephemerality of clients, or other reasons.
  5. Your application instances themselves have no good way of consuming static configuration, e.g. as an inevitability of moving towards immutable infrastructures

The other things you've mentioned are interesting rules of thumb, but aren't business problems, necessarily. Lots of companies get along having a Deployment Dave or an entire team of them. This is probably something you explicitly want if you're in a highly regulated environment like doctor/patient-facing hospital systems, even. If they need to deploy code more frequently, and the coordination between teams introduces unacceptable delays or error rates, then you're at point (1). App code being written to solve a problem often isn't an issue because a lot of app code is composable across standard interfaces (think of WSGI, Rack, or Express middleware).

At the end of the day: spot problems proactively and mitigate them, spot problems reactively and solve them. Anything else is just consulting fees.

Vulture Culture fucked around with this message at 15:17 on Aug 14, 2018

Hadlock
Nov 9, 2004

Methanar posted:

Does anyone actually understand prometheus or use it in any intelligent sort of way?

We're using it to slowly but surely replace our existing creaky nagios setup; we have ~50-75 hosts across three datacenters and are slowly migrating towards three AWS/k8s clusters, and it works great. If you're not using it with Grafana then you're only using half of the package. The monitoring/alerting in prometheus exists, but it is super rudimentary, especially compared to what grafana gives you.

We recently had a project to add redis caching to the main app, which someone decided should be handled by external microservices, so we instrumented it in prometheus/grafana. The beauty of prometheus is that pretty much any app can provide simple metrics that prometheus can store, and then Grafana can graph them easily. There's a bit of a learning curve to it all, but it's straightforward once you've been using it for a couple of weeks.

General workflow is: thing i want to monitor -> point an exporter at it -> point prometheus at it -> build a graph for the data in grafana -> setup thresholds in grafana and set alerts on them.
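
Concretely, for a host-metrics case that whole chain is something like this (hostnames are placeholders; node_exporter is the usual exporter for host metrics):
code:
# 1. the exporter exposes metrics over HTTP
curl -s http://somehost:9100/metrics | grep '^node_load1'

# 2. add somehost:9100 as a target under scrape_configs in prometheus.yml,
#    then prometheus stores the series and you can query it back
curl -s 'http://prometheus:9090/api/v1/query?query=node_load1'

# 3. point a grafana graph at the prometheus datasource and alert on thresholds there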



You can see in the first image that the apps don't always need to expose a ton of information; you can add prometheus endpoints with little effort.

Another beautiful thing about grafana is that you can add annotations using a simple one-liner curl (technically an API request). In the center image there are some transparent blue bits on the right of the graphs; that is where our QA dept is running jmeter performance tests. The other blue lines on the far left are annotations representing deploys; you can mouse over them and get a description of what's going on, with an HTML link to the bamboo/jenkins job. Pretty useful when things start lagging out: you go look in grafana to see what lagged, and boom, there is the spike in the graph, and wow, look at that, we just deployed new code in that environment. Click on the annotation, go to the deploy job, see who committed the code, get it fixed. This happens about 3x a week.
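
The annotation call is roughly this shape (host, API key, and dashboard/panel IDs below are placeholders; it's Grafana's annotations HTTP API):
code:
# time defaults to "now", and the text field can carry an HTML link back to the build job
curl -s -X POST http://grafana:3000/api/annotations \
  -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"dashboardId": 12, "panelId": 3, "tags": ["deploy"], "text": "deployed build #456 to staging"}'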

Hadlock fucked around with this message at 17:32 on Aug 14, 2018

Methanar
Sep 26, 2013

by the sex ghost
So how do you all handle things like alerting only when a condition has been violated for a certain length of time?

Specific example. I want to alert when a deployment is less than 80% available for 5 minutes. What do I type?

A really naive approach is to say:

code:
kube_deployment_status_replicas != kube_deployment_status_replicas_available
or
code:
kube_deployment_status_replicas_unavailable > 0
But these have no real regard for how long pods have been unavailable or how many are actually missing.

A slightly more intelligent approach would be to use promQL's max_over_time() to pull only the largest value out of a time range. None of the series related to deployment replicas return range-vectors though.

And it's impossible to turn instant vectors into a range vector without dealing with a lot of confusing BS in prometheus. At which point I'd just bite the bullet and use alertmanager-proper with its time threshold for this.
https://github.com/prometheus/prometheus/issues/1227
https://www.robustperception.io/composing-range-vector-functions-in-promql

I guess I could do this but prometheus alerting is horrible to work with and test.
code:
  rules:
  - alert: BrokenDeployment
    expr: |
      kube_deployment_status_replicas != kube_deployment_status_replicas_available
    for: 5m
How do you do this in prometheus proper, anyway? Seriously, just keep editing the config map, restarting the server, and hoping you did it right on your first try? Then later commit the changes back to the yaml manifest that is intended to be authoritative for prometheus?
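
The least-bad loop I've found so far is linting the file locally and poking the reload endpoint instead of restarting (file name and host are placeholders):
code:
# validate the rules file before it goes anywhere near the configmap
promtool check rules alert-rules.yml

# if prometheus was started with --web.enable-lifecycle, this reloads config in place
curl -X POST http://prometheus:9090/-/reload
# otherwise a SIGHUP to the prometheus process does the same thing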

I'd much rather use grafana.


Same deal for trying to alert on pods restarting too often, or being evicted frequently.


These all seem like extremely common things that you would want to do. How are you supposed to do this?

Methanar fucked around with this message at 23:51 on Aug 14, 2018

Hadlock
Nov 9, 2004

Methanar posted:

I'd much rather use grafana.

I just use Grafana

IAmKale
Jun 7, 2007

やらないか

Fun Shoe
This is kind of a long shot, but are any of you aware of a Jenkins plugin/technique/etc... that'll let me tear down Docker containers, running on a remote host, when a branch in a Multibranch Pipeline is removed?

I'm adding cleanup capabilities to an orchestration tool we're working on. Our projects' Jenkinsfiles spin up remote Docker containers to serve the project, but when the branch is removed the containers persist. I can't seem to find anything in Jenkins that would let me set up "docker kill"/etc... commands in response to "onRemove" events for that project.

Methanar
Sep 26, 2013

by the sex ghost

IAmKale posted:

This is kind of a long shot, but are any of you aware of a Jenkins plugin/technique/etc... that'll let me tear down Docker containers, running on a remote host, when a branch in a Multibranch Pipeline is removed?

I'm adding cleanup capabilities to an orchestration tool we're working on. Our projects' Jenkinsfiles spin up remote Docker containers to serve the project, but when the branch is removed the containers persist. I can't seem to find anything in Jenkins that would let me set up "docker kill"/etc... commands in response to "onRemove" events for that project.

ssh user@remote 'sudo docker kill thing'

IAmKale
Jun 7, 2007

やらないか

Fun Shoe

Methanar posted:

ssh user@remote 'sudo docker kill thing'
How I remove the containers and images isn't the issue; it's where I'd place such commands in a Jenkins multi-branch pipeline lifecycle.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice
If you're on GitHub, add a webhook on branch deletion to trigger a job.
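
Roughly, via the GitHub API (owner, repo, token, and the receiving Jenkins URL are placeholders; the "delete" event fires when a branch or tag is removed):
code:
curl -s -X POST https://api.github.com/repos/OWNER/REPO/hooks \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d '{"name": "web", "active": true, "events": ["delete"],
       "config": {"url": "https://jenkins.example.com/your-cleanup-endpoint/", "content_type": "json"}}'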

crazypenguin
Mar 9, 2005
nothing witty here, move along
I've got a medium-sized (50 project/repos) jenkins build thing going with the newish multibranch pipeline approach. It works mostly great, but I have a few issues.

If I can't fix these... oh well. This works acceptably already. I just wanted to check to see if I was missing anything obvious.

  • To do downstream integration testing/building, I manually put a list of other jobs to build in each Jenkinsfile. This slightly annoys me. It means in addition to downstream projects having to know about their upstream dependencies (of course!), I also have to update the upstream projects to inform them what downstream projects should be rebuilt on a change. Not the worst, but if there's a better way of dealing with this, I don't know about it.
  • I have problems with diamond dependencies. If project A is a dependency of B and C, obviously I have A rebuild B and C on a change. But if D has dependencies on B and C, I kinda get screwed. Right now, if A changes, I end up rebuilding D twice, redundantly, via both B and C. If there's a smart way to handle this stuff, I don't know about it.
  • Coordinated changes are annoying. If we have to change two projects at the same time (e.g. make breaking change in one, fix its use in another), we have a half-assed thing right now where it tries to build branches of the same name, so we can sorta do that. But then when merging these branches in multiple projects at once, we overwhelm jenkins with a lot of rebuilds, most of them redundant again. (If A depends on B, then committing to both rebuilds A and B, then A again, this time downstream of B.)

It seems like these should be common issues, but I dunno if I'm missing something, or if this is just part of the fun.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

crazypenguin posted:

I've got a medium-sized (50 project/repos) jenkins build thing going with the newish multibranch pipeline approach. It works mostly great, but I have a few issues.

If I can't fix these... oh well. This works acceptably already. I just wanted to check to see if I was missing anything obvious.

  • To do downstream integration testing/building, I manually put a list of other jobs to build in each Jenkinsfile. This slightly annoys me. It means in addition to downstream projects having to know about their upstream dependencies (of course!), I also have to update the upstream projects to inform them what downstream projects should be rebuilt on a change. Not the worst, but if there's a better way of dealing with this, I don't know about it.
  • I have problems with diamond dependencies. If project A is a dependency of B and C, obviously I have A rebuild B and C on a change. But if D has dependencies on B and C, I kinda get screwed. Right now, if A changes, I end up rebuilding D twice, redundantly, via both B and C. If there's a smart way to handle this stuff, I don't know about it.
  • Coordinated changes are annoying. If we have to change two projects at the same time (e.g. make breaking change in one, fix its use in another), we have a half-assed thing right now where it tries to build branches of the same name, so we can sorta do that. But then when merging these branches in multiple projects at once, we overwhelm jenkins with a lot of rebuilds, most of them redundant again. (If A depends on B, then committing to both rebuilds A and B, then A again, this time downstream of B.)

It seems like these should be common issues, but I dunno if I'm missing something, or if this is just part of the fun.

For the first issue we keep our automation code separate from the actual code and it works pretty well. You still keep your pom or whatever is required to build with the code in the code repo, but you keep all the automation and coordination between separate jobs in another repo. The pattern where the jenkinsfile lives with the code leads only to headaches.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Alternatively, we're using Jenkins Job Builder for build definitions (which supports Jenkinsfile pipelines) and have minimal problems. We treat the Jenkinsfile/pom/Gradle/whatever as the actual build logic; however our teams want to handle their actual artifact building is up to them.
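
For reference, the JJB loop itself is pretty small (paths below are placeholders; credentials live in jenkins_jobs.ini):
code:
pip install jenkins-job-builder

# render the job XML locally without touching Jenkins, to see what would change
jenkins-jobs test jobs/ -o rendered/

# push the YAML job definitions to the Jenkins master configured in jenkins_jobs.ini
jenkins-jobs update jobs/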

Hadlock
Nov 9, 2004

Is there a recipe for auto-updating kubernetes config maps from a canonical source like GitHub?

When updating a config map, it looks like there's a unique footer generated at the get-configmap step to prevent two people from checking the same file out at the same time, so it's not a simple Jenkins job of "git pull, kubectl replace -f configmap.yaml".

Both Prometheus and Grafana use config maps, and several others as well.
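
The closest I've come up with is regenerating the manifest straight from the file and piping it to apply, so the server-side resourceVersion never comes into it, but I don't know if that's the intended pattern (names and paths are placeholders):
code:
# rebuild the configmap purely from the file that came out of git
kubectl create configmap prometheus-config \
  --from-file=prometheus.yml=prometheus.yml \
  --dry-run=client -o yaml | kubectl apply -f -
# (older kubectl: plain --dry-run instead of --dry-run=client)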

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
nginx-ingress does too but i have no idea how

FlapYoJacks
Feb 12, 2009
So while Jenkins has been perfectly fine, I am trying teamcity as well. I do like it quite a bit. However, one of the plugins I use with Jenkins allows me to spawn a new AMI as a runner from AWS when a pull request comes in.

Is there any way to do this with Teamcity? I already have a runner set up, but I’m not sure if I can automate spawning a new AMI easily.

Methanar
Sep 26, 2013

by the sex ghost

ratbert90 posted:

So while Jenkins has been perfectly fine, I am trying teamcity as well

why
