|
Anyone have cool tricks for managing SSH / Ansible host files? I primarily use iTerm to ssh into servers via keys. We use ansible for installs/deployments in dev and prod, and I manage a normal CI environment. Our real failing right now is keeping track of all our different hosts and being able to quickly look one up / ssh into it and look around, because we keep one ansible host file per application checked into that application's code repository instead of a centralized CMDB. Basically, I was hoping someone had a program that could read in our various ansible hosts files (we have one per application) and spit out .ssh/config and iTerm profiles for me/us to use. I could write one myself, but it seems silly when someone has almost certainly done it already. Or maybe there's a better solution I'm not seeing?
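To show the kind of thing I mean, here's a minimal sketch. It assumes the simple INI-style inventory format (plain host lines, optional `ansible_ssh_host`/`ansible_ssh_user` vars) and the glob path is made up; a real version would also want group handling and iTerm profile output:

```python
#!/usr/bin/env python
"""Sketch: read INI-style Ansible inventories, emit ssh_config Host blocks.

Assumes the simple inventory format: [group] headers, one host per line,
optional key=value vars. Paths and var names here are illustrative.
"""
import glob
import shlex


def parse_inventory(text):
    """Yield (hostname, hostvars) for each host line in an INI inventory."""
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()
        if not line or line.startswith('['):
            continue  # skip blanks, comments, and [group] headers
        parts = shlex.split(line)
        host = parts[0]
        hostvars = dict(p.split('=', 1) for p in parts[1:] if '=' in p)
        yield host, hostvars


def to_ssh_config(hosts):
    """Render parsed hosts as ssh_config stanzas."""
    blocks = []
    for host, hostvars in hosts:
        lines = ['Host %s' % host]
        if 'ansible_ssh_host' in hostvars:
            lines.append('    HostName %s' % hostvars['ansible_ssh_host'])
        if 'ansible_ssh_user' in hostvars:
            lines.append('    User %s' % hostvars['ansible_ssh_user'])
        blocks.append('\n'.join(lines))
    return '\n\n'.join(blocks)


if __name__ == '__main__':
    entries = []
    for path in glob.glob('*/hosts'):  # made-up layout: one inventory per app
        with open(path) as f:
            entries.extend(parse_inventory(f.read()))
    print(to_ssh_config(entries))
```

You'd cron that (or hook it into CI) and point `Include` in your ssh config at the output.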
|
# ¿ Mar 2, 2015 17:04 |
|
You can set "Branch Specifier" as ${TAG_NAME} and set a string build parameter TAG_NAME with the default name as origin/master or origin/HEAD or whatever, and only change it when you want to build a release. Presumably the options are the same and you're using a post-build action to trigger parameterized build of the second job. You can add "Current build parameters" as an option and it'll forward the tag along. Edit: You may also have to mess around with the refspec, because by default the git plugin may not fetch tags automatically depending on which version you're using. Setting refspec to "+refs/tags/*:refs/remotes/origin/tags/*" and "*/tags/${TAG_NAME}" as branch specifier should do the trick. Never hooked vagrant up to jenkins, sorry. Bhodi fucked around with this message at 23:27 on Mar 2, 2015 |
# ¿ Mar 2, 2015 23:19 |
|
I'm running a jenkins job that kicks off xvfb-run with an rspec suite that includes selenium-webdriver and firefox. It uh, just kinda works? It's slightly newer, I think 2.36? I can check monday morning if you need version specifics. It was pretty cut and dried, I just googled around for some tutorials; I can forward them along, but it sounds like you've already got something set up. AFAIK selenium webdriver is where it's at. There are a few options for the display layer, but I found xvfb to be convenient and it was the first one I tried that worked basically out of the box on linux, so that's what I went with. It takes about 4 seconds to spin up X and firefox. Some people apparently like PhantomJS but I dunno Bhodi fucked around with this message at 05:19 on Mar 7, 2015 |
# ¿ Mar 7, 2015 05:16 |
|
syphon posted:Tools like Chef have done a really good job of mitigating this problem. Your cookbooks have 'defaults' which can be overridden per environment. This answers minato's stated problem of "configuration drift" across environments. Then, you can enforce versioning of your cookbooks in order to create reproducible deployments. For example, I built a jenkins test suite that pulls a branch from git and runs a bunch of tests on our cloud environment, including creating VMs on a bunch of different vlans with configs using the tool we distribute to users. But now I need to be able to reproduce jenkins itself in both prod and dev, so I have a separate repo for the jenkins configs. And I need a program to be able to import/export those, so I wrapped ansible around that and have some ansible tasks to pull/push configs to the various jenkins servers. But wait, the jenkins configs are subtly different because, for example, prod jenkins needs to pull from the prod branch and dev from dev, so now I have to munge it through a tool to dynamically generate the jenkins configs. It's ugly, and now I have 3 repos to manage and try to keep in sync, all with different versions and each needing a good release process. It's messy but the best I could come up with. My sister group dealing with our openstack silo has it three or four times as bad. All these cloud products enable people to easily do continuous integration on them, but not the app itself.
|
# ¿ May 28, 2015 22:14 |
|
Vulture Culture posted:I don't buy that this is a problem that Chef and its kin don't solve, honestly. Chef supports trivially versioning cookbooks (the server + Berkshelf do this easily, future Chef versions will go even further with Policyfile), and it's super-easy to template out the config so the same template produces all the correct configurations. In your example it would be having recipes for setting up your Postgres, RabbitMQ, Bookshelf, all the components of Chef itself. And because presumably you need to be able to test upgrades and patches while your dev instance is supporting other people's work in dev, you need an entire separate instance for your own testing of those scripts. Maybe Chef can bootstrap itself with its own files, I don't know, but you need those too. At some point you have to evaluate whether it's all useful and just compromise, as was brought up in the cloud thread, but it's obnoxious to deal with when your systems can't manage themselves.
|
# ¿ May 29, 2015 13:44 |
|
Oh yeah? I have a serverspec question for you, actually... Are you wrapping stuff for testing multiple hosts inside a rakefile? I can't figure out how to test multiple servers in one spec file because of the weird-rear end instantiation of rspec tests. I really wanted to get it working but I tried basically everything; you can loop it, but you can't tear down the ssh connection variable and recreate it, even if you put it in an :all or :before block, so I ended up having to go with a rakefile loop that tests per-server, as the (very limited) documentation suggested. rspec variable scope is loving weird, the ordering is loving weird, nothing makes sense
|
# ¿ May 29, 2015 17:17 |
|
What are you conceivably going to do with these logs? Do you really need full console logs and build numbers? Why? That seems like a lot of effort for way too much data that's almost certainly worthless. What about generating / uploading a (junit) test report instead? Or do some groovy parsing and peel off what you actually need... I get the log hoarder mentality, but unless you really do have the capability and manpower to go back and do heavy analysis with correlation to networking, storage, or whatever, with a feedback loop to actually drive change, it's kind of wasted. If your provisioning system really is that much of a mess, most likely you're going to get a shrug and a "well, it works now", so you might want to refocus your effort. Also, build numbers aren't really useful in and of themselves, which is why I suggested a test report, so you can tie it to whatever number is actually meaningful to your attempt - git id, tag, date or whatever. If you go down the road of trying to capture the exact state of the jobs dir, the first time you need to reset the build ids or clear the logs it's going to be a mess. You say you never need to do that, but there are some reasons why you might, anything from running out of inodes to re-engineering your jobs, or maybe a future version of Jenkins changes the format. You'd be painting yourself into a corner if you went that route, never mind the extremely convoluted solution. Bhodi fucked around with this message at 03:27 on Jul 1, 2015 |
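For the junit route: a report is just a little XML, so the "peel off what you actually need" step can be tiny. A sketch, where the `results` structure is a made-up stand-in for whatever your parsing pulls out of the run:

```python
"""Sketch: emit a minimal JUnit-style XML report that Jenkins can archive,
instead of hoarding raw console logs. The `results` shape is made up."""
import xml.etree.ElementTree as ET


def junit_report(suite_name, results):
    """results: list of dicts with 'name' and an optional 'failure' message."""
    suite = ET.Element('testsuite', name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for r in results if 'failure' in r)))
    for r in results:
        case = ET.SubElement(suite, 'testcase', name=r['name'])
        if 'failure' in r:
            ET.SubElement(case, 'failure', message=r['failure'])
    return ET.tostring(suite, encoding='unicode')


if __name__ == '__main__':
    # Hypothetical checks peeled out of a provisioning run's console output
    print(junit_report('provisioning', [
        {'name': 'vm_created'},
        {'name': 'dns_registered', 'failure': 'timeout after 300s'},
    ]))
```

Archive that with the junit publisher and you get trend graphs and per-check history for free, tied to whatever id you stamp on the suite.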
# ¿ Jul 1, 2015 03:14 |
|
Let me know how it works. I considered doing that but settled on a standard secrets directory.
|
# ¿ Jul 12, 2015 22:02 |
|
Stoph posted:At my job Heroku is still banned. JIRA is hosted on-premises so it's fine. Bitbucket, when hosted on-premises, is marketed as a fork called Atlassian Stash.
|
# ¿ Feb 1, 2016 14:40 |
|
Jenkins works for me, but I wouldn't call that enthusiastic approval. It's a pain to wrangle, and for something that enables continuous integration, saving/storing/updating its own state is a real PITA.
|
# ¿ Feb 2, 2016 16:28 |
|
Honestly, it sounds like you can cut bamboo out of all of that. Jenkins should be able to do all of what it's doing. I use serverspec + rspec + selenium to do integration testing on our project and I'm using the same "RspecJunitFormatter" converter you probably found on github for jenkins to read. Right now, it's just a hot mess of applications stacked on top of each other and hotglued with shell scripts just like what you're doing.
|
# ¿ May 12, 2016 16:21 |
|
What are people using to get production configs into docker images? Are you baking them in with a dockerfile, using a pull-on-first-boot type method or managing the overall env from the top-down using kubernetes or similar? Something entirely different?
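For context, the pull-on-first-boot variant I mean is roughly this: an entrypoint that fetches config if it isn't already there, then execs the real process. A sketch only; the env var names and default path are made up:

```python
#!/usr/bin/env python
"""Sketch of a pull-on-first-boot entrypoint: fetch config once, then
hand off to the real service. CONFIG_URL / CONFIG_PATH are made-up names."""
import os
import sys
import urllib.request


def ensure_config(url, path):
    """Download config only if it isn't already present (baked into the
    image, mounted as a volume, or pulled on a previous boot)."""
    if os.path.exists(path):
        return False
    data = urllib.request.urlopen(url).read()
    with open(path, 'wb') as f:
        f.write(data)
    return True


# Only act as an entrypoint when actually configured as one
if __name__ == '__main__' and 'CONFIG_URL' in os.environ:
    ensure_config(os.environ['CONFIG_URL'],
                  os.environ.get('CONFIG_PATH', '/etc/app/app.conf'))
    os.execvp(sys.argv[1], sys.argv[1:])  # hand off PID 1 to the real process
```

In a Dockerfile you'd set this as ENTRYPOINT and pass the service command as CMD, so the image stays environment-agnostic and the config source is injected at run time.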
|
# ¿ Sep 22, 2016 17:19 |
|
revmoo posted:It seems simple to just wave a hand and say it's easy, but I can see a ton of pitfalls; session management being a huge one. Right now our deploys are less than 1ms between old/new code and we're able to carry on existing sessions without dumping users. I don't see how Docker could do that without adding a bunch of extra layers of stuff.
|
# ¿ Sep 22, 2016 17:37 |
|
I know that Azure supports docker so maybe it isn't as new/buggy as it once was? Are you in the ? Maybe you could try it there first.
|
# ¿ Sep 28, 2016 16:18 |
|
smackfu posted:We are upgrading our enterprise-wide Jenkins to v2.something from v1.something. Anything especially cool we should look into using? Our install is pretty locked down as far as team level customization. It's a solid move toward storing your jenkins configuration within your source control environment rather than outside it, and also a move to a more program-esque file config over the standard GUI of 1.x. Here's a pretty good primer: https://wilsonmar.github.io/jenkins2-pipeline/ Bhodi fucked around with this message at 16:06 on Nov 10, 2016 |
# ¿ Nov 10, 2016 15:59 |
|
I had instability running OpenJDK and switched to Sun's JDK, but am running far fewer jobs than you. Obviously bump the memory settings in the launch options. There were some more aggressive heap and garbage collection options I dug out of the internet a while back that helped, but those were only valid for 1.x
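For what it's worth, on an RPM-based install the knobs lived in /etc/sysconfig/jenkins; something shaped like this, though the numbers are placeholders from memory and the GC flags were only ever tried against 1.x:

```shell
# /etc/sysconfig/jenkins -- placeholder sizes, tune to your box.
# G1 was the "more aggressive" collector I landed on back then.
JENKINS_JAVA_OPTIONS="-Xms2g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
```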
|
# ¿ Nov 12, 2016 12:53 |
|
A cheap hack I used to get around this problem was the "Use custom workspace" option and just set the second job to use the first's workspace. Looks like you solved it in a much cleaner way.
|
# ¿ May 5, 2017 18:41 |
|
uncurable mlady posted:ECR requires authentication, though.
|
# ¿ Jul 1, 2017 16:58 |
|
uncurable mlady posted:My problem is that they just don't want to spend money, period. Our main CI server is seven years old and rapidly failing, but when I ask for money to get a new server, it gets rebuffed. When I say that we're going to extend the useful life by offloading build agents to EC2, I get complained at that we're spending too much on AWS. We run a lot of testing in the cloud because when we ran it locally, we'd lose end to end test runs because there was too much load on our VMware cluster and all of the runs would fail, causing lost days and missed milestones. Put it in the cloud, now they bitch about spend.
|
# ¿ Jul 1, 2017 17:22 |
|
Erwin posted:If you're defining everything as code (Jenkinsfiles, etc) then Blue Ocean is mostly about looking nice I think. But, if you want to configure each job through the GUI, then you'll get more out of it. I don't do that so I don't have much to say about Blue Ocean. It's true it has some pretty features like tracking and displaying times of individual stages, but that's secondary IMO to dynamically generated jobs that can have actual logic paths, which is a huge step above the previous extremely crude job chaining based on successful return codes. Yeah don't use chef to install/manage the jenkins instance itself, you're asking for heartache. I used ansible and built a pretty tight deploy script on centos 7. I'll share if you need, can PM me. Docker would be equally good. Doing it over again, I'd probably use docker. Bhodi fucked around with this message at 16:21 on Jul 28, 2017 |
# ¿ Jul 28, 2017 16:18 |
|
Twlight posted:How are you backing up your jenkins configs? we have a nightly job which copies it straight to SCM so we save all those job scripts, I've done a bunch of custom scripting within jenkins since as you rightly said the plugins can be hit or miss. We're finally moving to bitbucket and the multi branch project plugin sounds great i'd love to move to using more pipeline stuff as currently our jobs are a bit too "one dimensional" Here's my pull as an example; the push is the same except reversed. I also have a separate install script which installs jenkins from scratch and sets up the keys and such. pre:
---
- name: Get plugin list
  shell: "ls -1 {{ jenkins_dir }}/plugins/*.jpi*"
  register: plugin_list
  changed_when: false

- name: Pull plugins
  fetch: flat=yes src="{{ item }}" dest="files/plugins/" fail_on_missing=yes
  with_items: "{{ plugin_list.stdout_lines }}"

- name: Get jobs list
  shell: "cd {{ jenkins_dir }} && ls -1 {{ jenkins_dir }}/jobs"
  register: jenkins_jobs
  changed_when: false

- name: Pull jobs
  fetch: flat=yes src="{{ jenkins_dir }}/jobs/{{ item }}/config.xml" dest="files/jobs/{{ item }}.xml" fail_on_missing=yes
  with_items: "{{ jenkins_jobs.stdout_lines }}"

- name: Get XML list
  shell: "ls -1 {{ jenkins_dir }}/*.xml"
  register: xml_list

- name: Pull XML
  fetch: flat=yes src="{{ item }}" dest="files/xml/" fail_on_missing=yes
  with_items: "{{ xml_list.stdout_lines }}"

- name: Get secrets list
  shell: "ls -p {{ jenkins_dir }}/secrets | grep -v '/$'"
  register: secrets_list

- name: Pull secrets
  fetch: flat=yes src="{{ jenkins_dir }}/secrets/{{ item }}" dest="files/secrets/" fail_on_missing=yes
  with_items: "{{ secrets_list.stdout_lines }}"

- name: Pull secret.key
  fetch: flat=yes src="{{ jenkins_dir }}/secret.key" dest="files/secret.key" fail_on_missing=yes

- name: Get user list
  shell: "cd {{ jenkins_dir }} && ls -1 {{ jenkins_dir }}/users"
  register: jenkins_users
  changed_when: false

- name: Pull users
  fetch: flat=yes src="{{ jenkins_dir }}/users/{{ item }}/config.xml" dest="files/users/{{ item }}.xml" fail_on_missing=yes
  with_items: "{{ jenkins_users.stdout_lines }}"
I should probably just stick the whole thing on git. I'd do it if anyone would use it, I guess Bhodi fucked around with this message at 18:13 on Jul 29, 2017 |
# ¿ Jul 29, 2017 17:58 |
|
fletcher posted:Appreciate your reply on the go
|
# ¿ Nov 11, 2017 03:38 |
|
Hadlock posted:Googling for advice on this is useless, every dashboard company has SEO'd good modern discussion off the first couple of pages.
|
# ¿ Nov 11, 2017 03:43 |
|
I find ansible great for one-offs and initial configuration but completely hopeless when it comes to consistency and conformity remediation (did someone edit a file where they shouldn't? revert it). It's not great at working through bastion hosts, and the static nature of the hosts file is at odds with the fast-moving and mercurial nature of cloud VMs. Including files depending on variables is its token nod to modularization, but puppet has an entire dependency tree with resolution. They're mostly different tools and I don't feel like they overlap all that much, which is why every company using ansible is ansible plus something else.
|
# ¿ Dec 14, 2017 16:48 |
|
Vulture Culture posted:It works fine with bastion hosts if you feed it a proper SSH config, and dynamic inventory scripts have been supported, with cloud examples shipped in contrib, for years now uncurable mlady posted:why do you have servers where this is a problem? cattle, not pets.
|
# ¿ Dec 15, 2017 04:43 |
|
Vulture Culture posted:The requirement is an executable bit. Hope this helps!
|
# ¿ Dec 15, 2017 04:59 |
|
Space Whale posted:Obviously a ruby script can just loop and then "if response is that it's done, bust out of that loop." My question is how do I have Jenkins act as some sort of a dashboard for that. Just use stdout from the ruby script? https://jenkins.io/doc/book/pipeline/ Bhodi fucked around with this message at 16:45 on Jan 24, 2018 |
# ¿ Jan 24, 2018 16:42 |
|
Get a Linux Academy subscription and run through some of their labs. It launches real containers in AWS on their dime and you can get guided hands-on experience, which it seems like you need, because you aren't experienced enough yet to even ask the right questions.
|
# ¿ Jul 27, 2018 04:35 |
|
I like the look of blue ocean, especially for quick visual reporting of echo commands in jobs, even if the whole thing is weirdly slow, especially compared to the beta. My beef is that file upload from parameters is STILL broken in pipeline. C'mon guys this is basic jenkins functionality. https://issues.jenkins-ci.org/browse/JENKINS-27413?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&showAll=true
|
# ¿ Sep 18, 2018 19:53 |
|
LochNessMonster posted:I’ve taken that particular piece of advice very seriously.
|
# ¿ Sep 19, 2018 16:07 |
|
that way lies a factory factory factory
|
# ¿ Sep 25, 2018 01:42 |
|
what the hell is 12 factor, do i have yet another bullshit invented term to learn? jesus, i hate computers. I interviewed a guy today whose resume looked like it was a word jumble from tech trends magazine. "Implemented machine learning and AI neural networks to solve automation and monitoring" sort of stuff. Don't be like that guy. if you actually know how to install and manage kubernetes jesus let me hire you please, because 95% of candidates have flubbed the "what network layer provider did you decide on" because they used EKS or equiv and it's all done for you On that note, gently caress you amazon give me EKS and route 53 in govcloud already It is pretty funny that almost everyone has moved away from puppet/chef to ansible, which makes sense if you aren't worried about state validation/enforcement because you just rebuild golden image containers all day errday but dammit, we've got 3 years invested in this stack and you'll have to pry chef from our senior devs's cold dead hands it's rough hearing candidate after candidate suck air through their teeth and say well I'm pretty rusty on chef, I've been doing ansible for the last three years and I just sit there silently nodding but we won't budge no sir Bhodi fucked around with this message at 03:14 on Sep 29, 2018 |
# ¿ Sep 29, 2018 02:41 |
|
quote is not edit also i should probably also stop drunkposting in serious threads but you're not the boss of me
Bhodi fucked around with this message at 03:08 on Sep 29, 2018 |
# ¿ Sep 29, 2018 02:54 |
|
Zorak of Michigan posted:How would you feel about "flannel because it was the first one I tried and it worked?" 99% of people who didn't use eks just mashed "I'm feeling lucky" and then "brew install kops" I tell everyone who's trying to get into or up on tech that you have no idea the kind of talent you're up against Docjowles posted:set up kubernetes to run your company’s old rear end Java monolith Bhodi fucked around with this message at 04:14 on Sep 29, 2018 |
# ¿ Sep 29, 2018 04:01 |
|
Docjowles posted:Are you vaguely aware of how to write and operate modern applications, where modern is like TYOOL 2012? It is that. https://12factor.net/. Plus the usual "make your app stateless!!!" "ok but the state has to live someplace, what do we do with that?" "???????????" Actually more than hate it, the fact that someone typed that inane drivel up and is apparently treating it like the new testament is THERE IS NO CAROL level crazy. I lose respect for anyone who takes self-evident do-the-thing-like-you'd-do-it as some enlightened idea. "Run admin/management tasks as one-off processes" come the gently caress on. This advice is on the same level as "Use e-mail or chat to coordinate tasks with your coworkers!" Docjowles posted:I mean this is basically how we got k8s into our org at my current job, except it was from the Director level instead of a rando ops engineer so we actually had power to tell people to rewrite their poo poo to not monopolize an entire worker node with one 64GB Java heap container that cannot tolerate downtime ever Bhodi fucked around with this message at 04:42 on Sep 29, 2018 |
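To be fair about how small the actual content is: the entire "store config in the environment" factor, sketched with made-up variable names and defaults, is this:

```python
"""The whole 'config in the environment' factor: same code in every
environment, differences injected via env vars. Names and defaults are
made up for illustration."""
import os


def load_config(env):
    """Pull settings from a process environment, with dev-friendly defaults."""
    return {
        'database_url': env.get('DATABASE_URL', 'postgres://localhost/dev'),
        'log_level': env.get('LOG_LEVEL', 'info'),
    }


if __name__ == '__main__':
    print(load_config(os.environ))
```

And note what it conspicuously doesn't answer: the DATABASE_URL still has to point at state living someplace.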
# ¿ Sep 29, 2018 04:33 |
|
Yeah, but at its root it's a problem with a monetary figure large enough that it tends to get decided on business interests way above the technical level, and as we all know business interests are quarter-based, with maybe thinking out up to a year. So the solution is to keep the Frankenstein stack shambling along until it falls over or you can't buy parts for it anymore. At least when things were physical there was a solid "Sorry, we have to rebuild it anyway, we're moving datacenters / hardware is being EOL'd". With containers, it's going to be interesting in a decade, since in theory you could run garbage code in perpetuity; there's no hard requirement to replace/rebuild. Or maybe the next thing will be incompatible with containers in the same way containers are incompatible with vcenter / xen. It's probably going to be worse than I even expect, because at some point there will come a time where you literally cannot build another container due to your CI/CD software being EOL'd but your current containers run just fine, so I fully expect to see some container built a decade prior running somewhere, like those Windows NT installs still running in water treatment plants, only connected up to the internet Bhodi fucked around with this message at 19:48 on Sep 30, 2018 |
# ¿ Sep 30, 2018 19:38 |
|
We've been testing nfs on gluster (ganesha) for our k8s persistent storage and early results are promising. We don't plan on using it as anything more than a file store though, as it's beginning to feel like an abstraction layer cake with each layer doing its own i/o buffering, and that's making me nervous. offlining a gluster node during write-read tests and watching everything freeze for several seconds then recover cleanly was pretty cool. anyone who's done this already, is there any reason NOT to have a peer heal back into the cluster automatically on boot? Bhodi fucked around with this message at 13:12 on Oct 4, 2018 |
# ¿ Oct 4, 2018 13:06 |
|
Windows containers are so busted even microsoft threw up their hands and recommends porting your code to .NET Core and running it on linux for microservices. Extremely Penetrated posted:I did this last spring. It was awful so I wrote a guide on how I did it. https://github.com/mooseracer/WindowsDocker Bhodi fucked around with this message at 04:29 on Oct 18, 2018 |
# ¿ Oct 18, 2018 04:26 |
|
Vault is such a solid and well-put-together product; I didn't know Terraform had so many issues.
|
# ¿ Oct 26, 2018 14:18 |
|
Vulture Culture posted:I've found it to not actually be bad for any of these use cases, with judicious use of per-environment projects and clear separation of modules, but the documentation does make it incredibly obtuse how you should structure this and split code between projects so as to not blow your foot off with a sawed-off shotgun. terraform import has been great, but still isn't supported on enough resources and can be mind-bogglingly annoying to use on providers with (IMO) hostile APIs like AWS.
|
# ¿ Oct 27, 2018 16:44 |