fletcher posted:Jenkins Pipeline docs talk about this new buildingTag condition: https://jenkins.io/doc/book/pipeline/syntax/#built-in-conditions Found the answer to my question, I need to update my Pipeline Model Definition Plugin to take advantage of that new feature: https://wiki.jenkins.io/display/JENKINS/Pipeline+Model+Definition+Plugin
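For anyone else landing here, buildingTag slots into a declarative when block once the plugin is updated. A minimal sketch, assuming a recent Pipeline Model Definition Plugin; the stage names and scripts are made up:

```groovy
// Declarative Pipeline: only run the publish stage when building a tag.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // placeholder build script
            }
        }
        stage('Publish') {
            when { buildingTag() }   // built-in condition: true for tag builds
            steps {
                sh './publish.sh'    // placeholder publish script
            }
        }
    }
}
```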
|
|
# ? May 7, 2018 22:27 |
|
Hey Kubernetes friends, I'm here again to ask how you are solving a few more unsolved problems.

1) There is an issue causing DNS lookups from within pods to frequently time out, which is bad. This is tracked in a whole bunch of places as they are still trying to figure out who is responsible for fixing it. The root cause appears to be a very low level issue in libc or even the kernel, so for most of us it's going to be "who has the best workaround" unless you want to run bleeding edge packages on all your nodes. Most of our apps aren't too bothered, but we have one lovely PHP app that is getting absolutely destroyed by this. Which is unfortunately responsible for a lot of our revenue, because of course it is. Some light reading:

https://tech.xing.com/a-reason-for-unexplained-connection-timeouts-on-kubernetes-docker-abd041cf7e02
https://github.com/kubernetes/kubernetes/issues/62628
https://github.com/kubernetes/kubernetes/issues/56903
https://github.com/weaveworks/weave/issues/3287 (problem is not specific to Weave, but we happen to use Weave)

Have you seen this too? And if so, how are you mitigating it?

2) Authentication. We're currently doing simple certificate based auth. You authenticate with a cert, and then get authorized to do or not do stuff based on comparing your cert's CN to the defined roles and rolebindings. Which works ok. But curious if anyone is doing anything fancier or that you just like better. We're at the point where we want to start writing automation for access requests and want to make sure we're automating a good process instead of just speeding up trash. My main issue with the cert approach is that kubernetes doesn't appear to support revoking certs/checking a CRL. You can effectively block a cert by removing its CN from any rolebindings, but that still doesn't give me the same warm fuzzies as totally blocking the cert in the first place.

Docjowles fucked around with this message at 03:47 on May 8, 2018
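The workaround we've seen floated most often (short of running patched kernels) is telling glibc not to fire the A and AAAA lookups in parallel over the same socket, which is the race that conntrack loses. A sketch using the pod dnsConfig field, assuming a cluster recent enough to support it; the pod and image names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: php-app                      # placeholder name
spec:
  containers:
    - name: app
      image: example/php-app:latest  # placeholder image
  dnsConfig:
    options:
      # Retries a failed lookup on a fresh socket instead of reusing the
      # one whose conntrack entry got clobbered, avoiding the 5s timeout.
      - name: single-request-reopen
```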
# ? May 8, 2018 03:39 |
|
RBAC with CoreOS Dex authing against LDAP works really well for us.
|
# ? May 8, 2018 04:03 |
|
We keep adding new build nodes to our Jenkins, making my builds slower, because it's increasingly unlikely it'll run on a node with my build docker image cached.
|
# ? May 8, 2018 17:24 |
|
Vanadium posted:We keep adding new build nodes to our Jenkins, making my builds slower, because it's increasingly unlikely it'll run on a node with my build docker image cached. Assign labels to a small number of agents and then pin your own build to those labels.
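i.e. something like this in the Jenkinsfile, assuming you've put a label like docker-cache on a couple of agents (the label name is made up):

```groovy
pipeline {
    // Only schedule this build on agents carrying the docker-cache label,
    // so the build image stays warm on a small, predictable set of nodes.
    agent { label 'docker-cache' }
    stages {
        stage('Build') {
            steps {
                sh 'docker build .'
            }
        }
    }
}
```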
|
# ? May 8, 2018 18:15 |
|
The Jenkins people aren't so hyped about that idea because they want load to be Optimally Distributed and my build times aren't quite bad enough to start fights over vv
|
# ? May 8, 2018 20:28 |
|
Vanadium posted:The Jenkins people aren't so hyped about that idea because they want load to be Optimally Distributed and my build times aren't quite bad enough to start fights over vv After a couple weeks, your base images should be on all the agents by then, right?
|
# ? May 8, 2018 21:04 |
|
I keep being sent to assignments where all the build agents are running on the same server. Still, that pisses me off less than when I learned that where I just was, all the heavy CAD drawing generation stuff was running on the same physical server for all environments. Friggin computers, how do they work?
|
# ? May 8, 2018 21:46 |
|
poemdexter posted:After a couple weeks, your base images should be on all the agents by then, right? Naturally, they regularly clear the caches!
|
# ? May 8, 2018 22:02 |
|
I've inherited a Jenkins system that looks pretty neglected. Seems like it's been pretty ad hoc with devs adding tests, nodes, etc. as needed and no one 'owns' this server, so there's a ton of cruft and nobody in charge of cleaning it up. And of course Jenkins itself is effectively unmaintained.

Is it SOP to have devs log in and add whatever they want to the CI, or is someone supposed to be a gatekeeper of sorts to ensure stuff is getting added correctly, is useful, running on the right node(s), etc.? If no one is taking responsibility for it, doesn't it just eventually become useless (old tests no one cares about keep running and probably failing, for example), no maintenance is getting done, etc.?

There are (at least) three different teams using this server -- is it better to set up each team with their own Jenkins (or other CI server)? Or, since Jenkins is mostly queuing stuff on other nodes, does it not really matter, and we can just shove everything onto a single Jenkins and each team manages their own nodes? I'm wondering if, for example, QA is better served using some other tool, or if everyone should just be forced to use the same system. Which is probably ok, they're using it now, right?

If you were going to set up a new CI would you use Jenkins, or is there something slightly better these days?
|
# ? May 9, 2018 18:56 |
|
mr_package posted:I've inherited a Jenkins system that looks pretty neglected. Seems like it's been pretty ad hoc with devs adding tests, nodes, etc. as needed and no one 'owns' this server so there's a ton of cruft and nobody in charge of cleaning it up. And of course Jenkins itself is effectively unmaintained. Is it SOP to have devs log in and add whatever they want to the CI or is someone supposed to be a gatekeeper of sorts to ensure stuff is getting added correctly, is useful, running on the right node(s), etc.? If no one is taking responsibility for it doesn't it just eventually become useless (old tests no one cares about keep running and probably failing for example), no maintenance is getting done etc. I was in a similar spot. I pushed Concourse to our organization and have been happy. The multi-tenancy for projects is much, much better than Jenkins. It's got a steep learning curve, though.
|
# ? May 9, 2018 20:00 |
|
We have one person in charge of ensuring Jenkins itself is in a working state, and all the jobs are managed by the individual teams that need them. It's never been an issue. As long as you don't go installing plugins willy-nilly, and you give all your jobs appropriate labels for what nodes they can run on, there's nothing that can really conflict between teams.
|
# ? May 9, 2018 21:01 |
|
It can also depend how many teams you're talking about, how big they are, and how busy Jenkins is. Whoever initially set up our Jenkins master did one for the whole company. Which was probably fine at the time. Then we grew a bunch and there's like 15 teams all sharing it and the master is almost always executing jobs. This makes it a loving pain in the rear end to do upgrades/maintenance without making someone mad that we interrupted their jobs. Or someone decides to install a plugin and it breaks some other random team's poo poo. We're looking at splitting that up now into per-team masters, and having the tech lead for each one own its care and feeding.
|
# ? May 9, 2018 21:52 |
|
We are moving over to Google Cloud Builder. We love it, but I probably should have used Drone or Concourse; there are a lot of rough edges where Cloud Builder makes you execute bash to do what you want.

The real change for me was how I thought about builds, and the new way of working that it introduces. It basically mounts a volume in a series of containers, and then executes the individual docker containers as commands in the build pipeline. So you don't need uber docker containers that have all your build tools, and can instead call each one at a time to do one specific task. We often found that we were installing a bunch of poo poo into a container, or maybe using onbuild if we were trying to be fancy. Changing it to only build things in the shared volume means that our docker images have become tiny.

It has decent integration into gcloud, we are finding it really easy to key off events using pub/sub, and we can use cloud functions to accomplish what we want. It's cheap as hell: we've been doing 20+ builds a day and it's cost us next to nothing so far this month. I also love that I don't have to manage agents and have what seems like infinite capacity. I really like it and am trying to get our stuff out of Jenkins as fast as possible.
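The shared-volume model looks roughly like this in a cloudbuild.yaml: each step is its own container image, and /workspace persists between steps. The app name and step contents here are invented for illustration:

```yaml
# cloudbuild.yaml sketch: every step runs in its own container, sharing
# the /workspace volume, so no single image needs all the build tools.
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['test']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```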
|
# ? May 10, 2018 00:27 |
|
Docjowles posted:It can also depend how many teams you're talking about, how big they are, and how busy Jenkins is. Whoever initially set up our Jenkins master did one for the whole company. Which was probably fine at the time. Then we grew a bunch and there's like 15 teams all sharing it and the master is almost always executing jobs. This makes it a loving pain in the rear end to do upgrades/maintenance without making someone mad that we interrupted their jobs. Or someone decides to install a plugin and it breaks some other random team's poo poo.
|
# ? May 10, 2018 00:53 |
|
Yeah, a single Jenkins is enough. We've moved to Jenkins Job Builder and require all jobs to exist as code. We keep all of our infrastructure as code in the same Bitbucket projects as the other related projects, so we can basically monitor a $PROJECT-jobs repo for changes and treat our jobs as a CI entity themselves, all based around pull requests. We use the docker plugin for Jenkins to execute our jobs across various swarm clusters based on labels. Access control to Jenkins is handled by our IT group using Active Directory and the basic matrix plugin.

The biggest thing with Jenkins is controlling the permissions for Joe Dev to just create jobs or install plugins whenever. When I started here it was mostly a developer cluster gently caress where everyone was admin with fifty or so nodes of varying stability. We're about 100 devs strong now across six product teams and have actually been pretty stable in terms of Jenkins hygiene using this type of setup.

Still mulling how to handle our actual continuous release process in a similar way. Maybe leverage Spinnaker, but I'm not sure if there is overlap with what we're doing now.
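For reference, a Jenkins Job Builder definition in one of those $PROJECT-jobs repos looks something like this; the project, label, and script names are all made up:

```yaml
# Jenkins Job Builder: the job exists only as this YAML, applied via
# `jenkins-jobs update`, so changes go through pull requests like any code.
- job:
    name: myproject-pr-build          # placeholder job name
    project-type: freestyle
    node: docker-swarm                # label restricting which agents run it
    scm:
      - git:
          url: ssh://git@bitbucket.example.com/myproject/myproject.git
          branches:
            - '**'
    builders:
      - shell: |
          ./ci/build.sh               # placeholder build entry point
```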
|
# ? May 10, 2018 01:36 |
|
Spinnaker is a nightmare. But it is sort of the only one that has got a decent workflow for k8s that isn’t helm.
|
# ? May 10, 2018 01:39 |
|
I have a weird problem where I'm running containers on a dedicated docker host without any orchestration layer. In the past I've just run jwilder's nginx auto-config magic thing, but it only works for HTTP/S; the TCP plugin isn't wired up. And now we're adding TCP (web socket) connections, so any suggestions? We have kubernetes elsewhere, but this is going into a place where we can't use K8S, yet. Trying to use DNS and avoid hard coding poo poo.
|
# ? May 10, 2018 01:40 |
|
Hadlock posted:I have a weird problem where I'm running containers on a dedicated docker host without any orchestration layer. In the past I've just run jwilder's nginx auto-config magic thing, but it only works for HTTP/S; the TCP plugin isn't wired up. Is it in the cloud or bare metal? Traefik maybe? It can do "manual" configs and can do auto configs from a bunch of different sources of truth.
|
# ? May 10, 2018 01:51 |
|
Well the idea is that it goes from one Jenkins I am responsible for to a bunch of Jenkinses individual teams are responsible for. We provide a platform and then the teams are delegated access to do what they need on it. But I take it I'm doing something very wrong here so am open to suggestions. I'm trying to do the neighborly DevOps thing here.

We get a disproportionate number of tickets requesting changes to Jenkins, upgrades, new plugins, new nodes. Everyone wants their change now. Yet if it's down for 10 seconds, HipChat starts blowing up with "hey is Jenkins down for anyone else?!? Are Jerbs aren't running" comments. I want to get out of the business of managing Jenkins. Unfortunately it's also critical to the business, and a shitton of jobs have built up in there over the years, so just switching to something better isn't possible overnight.

How do you all deal with this? Features of the paid Cloudbees version? Schedule a weekly maintenance window and tell people "tough poo poo, wait til Wednesday night, and at that time the thing will be restarted so don't schedule or run stuff then"? Some other incredibly obvious thing I am missing?
|
# ? May 10, 2018 02:04 |
|
freeasinbeer posted:Is it in the cloud or bare metal? Bare metal. We are Trojan horsing new services as containers into our prod setup. I will take a look at Traefik; haven't looked at it since the pre-Rancher 1.0 days.
|
# ? May 10, 2018 02:07 |
|
Docjowles posted:Well the idea is that it goes from one Jenkins I am responsible for to a bunch of Jenkinses individual teams are responsible for. We provide a platform and then the teams are delegated access to do what they need on it. But I take it I'm doing something very wrong here so am open to suggestions. I'm trying to do the neighborly DevOps thing here. Just foster an environment for shadow IT to thrive in. The problem will probably take care of itself.
|
# ? May 10, 2018 02:09 |
|
If you are farming it out, why not farm it out to something new that forces appdevs to maintain their own CI pipelines. I’d just offer the carrot of not Jenkins and let devs who complain move over to concourse/drone/travis.
|
# ? May 10, 2018 02:19 |
|
Docjowles posted:Well the idea is that it goes from one Jenkins I am responsible for to a bunch of Jenkinses individual teams are responsible for. We provide a platform and then the teams are delegated access to do what they need on it. But I take it I'm doing something very wrong here so am open to suggestions. I'm trying to do the neighborly DevOps thing here. I have a customer that has a similar problem with VSTS build. One central "ops" group, but lots of individual teams that all have different build requirements and a constantly shifting sea of crap that needs to be installed for their builds to work. Some of which break builds for other groups. Great fun. Something I've been experimenting with for them is containerizing the agent and build environment. Let them be responsible for maintaining their own build stuff, then it's just a matter of giving them the means of running their containers. The problem I've been having is that Windows containers are still kind of, uh, lovely. And that this customer is totally incompetent and I'm not sure they'd be capable of understanding or maintaining a containerized solution, but that's not a technology problem. I don't see why something similar couldn't be applied to Jenkins.
|
# ? May 10, 2018 02:30 |
|
Hadlock posted:We are Trojan horsing new services as containers in to our prod setup
|
# ? May 10, 2018 03:51 |
|
What seems common is when devs refuse to properly learn how to write Jenkins jobs (super stateful job nodes hand-jammed with tools and random rear end config files, or running it all on master) and want you to help them with their jobs while claiming that they can't move off of Jenkins due to so much "investment" into it. When you've written all their deployment pipelines for them and picked out the plugins, it is YOU that has made the hard investments, not the team that uses it.
|
# ? May 10, 2018 23:10 |
|
Hadlock posted:Bare metal Traefik is fine if all you care about is L7 HTTP ingress. Right now we're using it for our HTTP services but we want to do L4 as well for stuff that doesn't talk HTTP, so we're keeping a close eye on Istio/Envoy to see what surfaces there. We'd also like to see about doing load balancing via anycast; we use Calico for our container networking and they don't seem to have much interest in implementing anycast LB in the short-term.
|
# ? May 10, 2018 23:13 |
|
you cannot jenkins your way out of the devops trap
|
# ? May 11, 2018 00:31 |
|
Docjowles posted:Well the idea is that it goes from one Jenkins I am responsible for to a bunch of Jenkinses individual teams are responsible for. We provide a platform and then the teams are delegated access to do what they need on it. But I take it I'm doing something very wrong here so am open to suggestions. I'm trying to do the neighborly DevOps thing here. You're either the Jenkins farmer of the group or you're not. Once you are the designated Jenkins farmer, if you want to get out of that role you will probably need to change companies. Once you find the one guy on the team who is willing to take Jenkins tickets with minimal complaints, you just shovel all the Jenkins tickets down their throat until they choke and die, and/or quit. There is nothing less professionally fulfilling than being a Jenkins farmer. Spending all day tomorrow setting up our first four Jenkins container pipelines at work.
|
# ? May 11, 2018 06:28 |
|
necrobobsledder posted:What seems common is when devs refuse to properly learn how to write Jenkins jobs (super stateful job nodes hand jammed with tools and random rear end config files or running it all on master) and want you to help them with their jobs while claiming that they can’t move off of Jenkins due to so much “investment” into it. When you’ve written all their deployment pipelines for them and picked out the plugins, it is YOU and not the team that has made the hard investments, not the team that uses it. Our Jenkins people seem to have a strong opinion about devs not loving with the nodes and just doing everything weird in containers, which seems extremely reasonable. I try to make the active part of all my Jenkins jobs a single line invoking a shell script from the repo, but maybe I'm overdoing the "I don't want to get too invested in Jenkins" thing. We have a suboptimal setup where we use the same job for all branches of a given repo, so trying out changes to the job in a branch is unnecessarily painful otherwise too.
|
# ? May 11, 2018 11:13 |
|
Multi-SCM jobs are pretty trivial tbh.
|
# ? May 11, 2018 14:05 |
|
Vanadium posted:I try to make the active part of all my Jenkins jobs a single line invoking a shell script from the repo, but maybe I'm overdoing the ”I don't want to get too invested in jenkins” thing. We have a suboptimal setup where we use the same job for all branches of a given repo, so trying out changes to the job in a branch is unnecessarily painful otherwise too.
|
# ? May 11, 2018 14:19 |
|
chutwig posted:Traefik is fine if all you care about is L7 HTTP ingress. Right now we're using it for our HTTP services but we want to do L4 as well for stuff that doesn't talk HTTP We just stood up a bunch of nginx pretty much exactly for this purpose. Bonus points for having to do UDP balancing across services.
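For the L4 side, plain nginx with the stream module covers TCP, and UDP as well in any remotely recent build. A sketch with made-up upstream addresses and ports:

```nginx
# nginx.conf fragment: L4 load balancing via the stream module
# (compiled in by default in the mainline packages).
stream {
    upstream websocket_backend {
        server 10.0.0.11:9000;     # placeholder backends
        server 10.0.0.12:9000;
    }
    server {
        listen 9000;               # raw TCP pass-through
        proxy_pass websocket_backend;
    }
    server {
        listen 5514 udp;           # UDP balancing, e.g. syslog
        proxy_pass 10.0.0.13:5514;
    }
}
```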
|
# ? May 11, 2018 15:27 |
|
I honestly don't know how I would use a build pipeline. We just have Jenkins build a thing and then file it away for later, manually triggered deployment, stopping the build if tests fail or whatever. With everybody being excited about pipelines I feel like I'm missing something huge. I could see triggering deploys automatically if we had more confidence in our tests and uh our deploy workflow, is that what makes it a pipeline?
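For concreteness, here's roughly what I imagine our thing would look like as a declarative pipeline, with a manual gate instead of auto-deploy; all the stage contents are invented:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh './build.sh' } }
        stage('Test')  { steps { sh './test.sh' } }   // failure stops everything here
        stage('Archive') {
            // "file it away for later": keep the artifact from this run
            steps { archiveArtifacts artifacts: 'dist/**' }
        }
        stage('Deploy') {
            when { branch 'master' }
            input { message 'Deploy to production?' }  // manual trigger, as today
            steps { sh './deploy.sh' }
        }
    }
}
```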
|
# ? May 11, 2018 16:05 |
|
I don't think this was posted here yet, but Humble Bundle is doing a pretty decent looking package of DevOps books / materials - https://www.humblebundle.com/books/devops-books
|
# ? May 11, 2018 17:12 |
|
I think pipelines are more useful when you don't want to do multiple Jenkins jobs chained together with upstream / downstream nonsense and want a pretty view of build steps to show people at a glance how lovely your build and deployment process is.

Vanadium posted:Our Jenkins people seem to have a strong opinion about devs not loving with the nodes and just doing everything weird in containers, which seems extremely reasonable.

2. The problem I've hit with "use a single shell script for the whole drat build" is that when you need credentials you'll need to use environment variables or secrets files or whatnot, and Jenkins in a shell step will rape and pillage it and its descendent shells for your shell variables and try to substitute in environment variables or some other weird crap, which makes debugging super duper awful in my experience with Jenkins. It's how a shell script inside my Jenkinsfile buried under a Python subprocess call did the following: code:
code:
code:
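FWIW, the mangling usually comes from Groovy interpolating double-quoted strings before the shell ever runs. Keeping secrets in withCredentials and single-quoting the sh step sidesteps it; a sketch, with a made-up credential ID:

```groovy
withCredentials([string(credentialsId: 'deploy-token', variable: 'DEPLOY_TOKEN')]) {
    // Single quotes: Groovy passes the string through untouched, so the
    // shell expands $DEPLOY_TOKEN at runtime and the secret never lands
    // in a Groovy string (or, unmasked, in the build log).
    sh 'bash ./deploy.sh "$DEPLOY_TOKEN"'
}
```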
|
# ? May 11, 2018 17:14 |
|
I mean I try to avoid putting anything inside the groovy files apart from sh 'bash do-the-actual-build.sh' and if that manages to do something non-intuitive with variable substitution I'm gonna be hella impressed. Using string interpolation in one lang to hopefully get correctly quoted values into another lang is just hell, I hope I don't end up with a Jenkins setup where I can't just put whatever into the environment instead. I guess the thing with triggering downstream jobs is moot when all my jobs pin or vendor their dependencies anyway, welp.
|
# ? May 11, 2018 17:46 |
|
necrobobsledder posted:2. The problem I've hit with "use a single shell script for the whole drat build" is that when you need credentials you'll need to use environment variables or secrets files or whatnot, and Jenkins in a shell step will rape and pillage it and its descendent shells for your shell variables and try to substitute in environment variables or some other weird crap, which makes debugging super duper awful in my experience with Jenkins.
|
# ? May 11, 2018 18:45 |
|
How do I authenticate against vault?
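One common split is machines via AppRole and humans via LDAP, both just trading an identity for a short-lived token. A CLI sketch; the role, usernames, and secret paths are all made up:

```sh
# CI side: exchange role_id/secret_id for a token (AppRole auth method).
vault write auth/approle/login \
    role_id="$ROLE_ID" secret_id="$SECRET_ID"

# Human side: log in through the LDAP auth method instead.
vault login -method=ldap username=docjowles

# Either token can then read what its policies allow:
vault kv get secret/myapp/deploy-token
```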
|
# ? May 11, 2018 18:51 |
|
Vanadium posted:How do I authenticate against vault?
|
# ? May 11, 2018 19:19 |