Bhodi
Anyone have cool tricks for managing SSH / Ansible host files?

I primarily use iTerm to ssh into servers via keys. We use Ansible for installs/deployments in dev and prod, and I manage a normal CI environment. Our real failing right now is keeping track of all our different hosts and being able to quickly look them up / ssh into them and look around, because we keep one Ansible host file per application checked into the code repository instead of a centralized CMDB.

Basically, I was hoping someone had a program that could read in our various ansible hosts files (we have one per application) and spit out .ssh/config entries and iTerm profiles for me/us to use. I could write one myself (rough sketch of the idea below), but it seems silly when someone has almost certainly done it already.

Or, maybe there is a better solution I'm not seeing?
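For what it's worth, the dumb version of the generator is only a few lines. Here's a rough Groovy sketch; the checkout root, the hosts filename, and the User/IdentityFile values are all assumptions, and it deliberately ignores INI niceties like host ranges and group vars:
code:
#!/usr/bin/env groovy
// Scan every app checkout for its ansible hosts file (INI format) and
// emit Host stanzas suitable for ~/.ssh/config.
def stanzas = new StringBuilder()
new File('/path/to/checkouts').eachFileRecurse { f ->
    if (f.name != 'hosts') return
    f.eachLine { raw ->
        def line = raw.trim()
        if (!line || line.startsWith('#') || line.startsWith('[')) return
        def host = line.tokenize()[0]      // drop inline key=value host vars
        if (host.contains('=')) return     // skip group-level variable lines
        stanzas << "Host ${host}\n    User deploy\n    IdentityFile ~/.ssh/id_rsa\n\n"
    }
}
new File(System.getProperty('user.home'), '.ssh/config.ansible').text = stanzas.toString()
println 'Wrote ~/.ssh/config.ansible; Include it from ~/.ssh/config'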


Bhodi
You can set "Branch Specifier" to ${TAG_NAME} and add a string build parameter TAG_NAME with a default of origin/master or origin/HEAD or whatever, and only change it when you want to build a release. Presumably the options are the same and you're using a post-build action to trigger a parameterized build of the second job. You can add "Current build parameters" as an option and it'll forward the tag along.

Edit: You may also have to mess around with the refspec, because by default the git plugin may not fetch tags automatically depending on which version you're using. Setting refspec to "+refs/tags/*:refs/remotes/origin/tags/*" and "*/tags/${TAG_NAME}" as branch specifier should do the trick.
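For anyone doing this from a Jenkinsfile instead of the GUI, a rough pipeline-syntax equivalent of the same checkout (the repo URL is a placeholder, and TAG_NAME is the string parameter described above):
code:
node {
    checkout([$class: 'GitSCM',
              branches: [[name: "*/tags/${env.TAG_NAME}"]],
              userRemoteConfigs: [[url: 'git@example.com:group/repo.git',  // placeholder
                                   refspec: '+refs/tags/*:refs/remotes/origin/tags/*']]])
}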

Never hooked vagrant up to jenkins, sorry.


Bhodi
I'm running a jenkins job that kicks off xvfb-run with an rspec suite that includes selenium-webdriver and firefox. It uh, just kinda works? It's slightly newer, I think 2.36? I can check Monday morning if you need version specifics. It was pretty cut and dried; I just googled around for some tutorials. I can forward them along, but it sounds like you've already got something set up.

AFAIK selenium webdriver is where it's at. There are a few options for the display layer, but I found xvfb convenient, and it was the first one I tried that worked basically out of the box on linux, so that's what I went with. It takes about 4 seconds to spin up X and firefox. Some people apparently like PhantomJS, but I dunno.
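The guts of the job are genuinely small. As a hedged sketch, the pipeline version would look something like this (spec path and bundler setup are assumptions; it presumes xvfb, firefox, and the selenium-webdriver gem are already on the agent):
code:
node {
    // xvfb-run wraps the suite in a throwaway X server
    sh 'xvfb-run --auto-servernum bundle exec rspec spec/integration'
}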


Bhodi

syphon posted:

Tools like Chef have done a really good job of mitigating this problem. Your cookbooks have 'defaults' which can be overridden per environment. This answers minato's stated problem of "configuration drift" across environments. Then, you can enforce versioning of your cookbooks in order to create reproducible deployments.

Managing the mapping of App Version to Cookbook Version is a bit of a pain, but I think the benefits outweigh the costs.
An issue I keep coming across is an elephants-all-the-way-down problem: you then have to have an associated prod/dev/test/whatever for all your management code / servers when you use puppet/chef/ansible/whatever.

For example, I built a jenkins test suite that pulls a branch from git and runs a bunch of tests on our cloud environment, including creating VMs on a bunch of different vlans with configs using the tool we distribute to users. But now I need to be able to reproduce jenkins itself in both prod and dev, so I have a separate repo for the jenkins configs. And I need a program to be able to import/export, so I wrapped ansible around that and have some ansible tasks to pull/push configs to the various jenkins servers. But wait: the jenkins configs are subtly different (for example, prod jenkins needs to pull from the prod branch and dev from dev), so now I have to munge them through a tool to dynamically generate the jenkins configs.

It's ugly, and now I have 3 repos to manage and try to keep in sync, all with different versions and their own release processes. It's messy, but it's the best I could come up with. My sister group dealing with our openstack silo has it three or four times as bad.

All these cloud products make it easy for people to do continuous integration on top of them, but not for the management tooling itself.

Bhodi

Vulture Culture posted:

I don't buy that this is a problem that Chef and its kin don't solve, honestly. Chef supports trivially versioning cookbooks (the server + Berkshelf do this easily, future Chef versions will go even further with Policyfile), and it's super-easy to template out the config so the same template produces all the correct configurations.

In your example it would be having recipes for setting up your Postgres, RabbitMQ, Bookshelf, all the components of Chef. And because presumably you need to be able to test upgrades and patches while your dev instance is supporting other people's work in dev, you need an entire separate instance just for your own testing of those scripts. Maybe chef can bootstrap itself with its own files, I don't know, but you need those too. At some point you have to evaluate whether it's all useful and just compromise, as was brought up in the cloud thread, but it's obnoxious to deal with when your systems can't manage themselves.

Bhodi
Oh yeah? I have a serverspec question for you, actually...

Are you wrapping stuff for testing multiple hosts inside a rakefile? I can't figure out how to test multiple servers in one spec file because of the weird-rear end instantiation of rspec tests.

I really wanted to get it working, but I tried basically everything: you can loop it, but you can't reset the ssh connection variable and recreate it even if you put it in :all or :before, so I ended up going with a rakefile loop that tests per-server, as the (very limited) documentation suggested.

rspec variable scope is loving weird, the ordering is loving weird, nothing makes sense

Bhodi
What are you conceivably going to do with these logs? Do you really need full console logs and build numbers? Why?

That seems like a lot of effort for way too much data that's almost certainly worthless. What about generating / uploading a (junit) test report instead?

Or, do some groovy parsing and peel off what you actually need...
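By "groovy parsing" I mean something in this shape (hedged sketch; currentBuild.rawBuild needs script-security approval, and the patterns and file name are invented):
code:
// keep only the interesting console lines and archive a small summary
def interesting = currentBuild.rawBuild.getLog(500).findAll { it =~ /ERROR|FAILED/ }
writeFile file: 'error-summary.txt', text: interesting.join('\n')
archiveArtifacts artifacts: 'error-summary.txt'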

I get the log-hoarder mentality, but unless you really do have the capability and manpower to go back and do heavy analysis, with correlation to networking, storage, or whatever, and a feedback loop to actually drive change, it's kind of wasted. If your provisioning system really is that much of a mess, most likely you're going to get a shrug and a "well, it works now", so you might want to refocus your effort.

Also, build numbers aren't really useful in and of themselves, which is why I suggested a test report: you can tie it to whatever number is actually meaningful to your attempt, be that git id, tag, date, or whatever.

If you go down the road of trying to capture the exact state of the jobs dir, the first time you need to reset the build ids or clear the logs it's going to be a mess. You say you never need to do that, but there are reasons why you might: anything from running out of inodes to re-engineering your jobs, or maybe a future version of Jenkins changing the format. You'd be painting yourself into a corner with that method, never mind how convoluted the solution is.


Bhodi
Let me know how it works; I considered doing that but settled on a standard secrets directory.

Bhodi

Stoph posted:

At my job Heroku is still banned. JIRA is hosted on premise so it's fine. BitBucket is marketed as a fork called Atlassian Stash if it's hosted on premise.
You've got it backwards. Bitbucket is now the on-prem Stash replacement, carrying the same name as the hosted product. It has a few new features; we upgraded last month.

Bhodi
Jenkins works for me but I wouldn't call it enthusiastic approval.

It's a pain to wrangle, though, and for something that enables continuous integration, saving/storing/updating its own state is a real PITA.

Bhodi
Honestly, it sounds like you can cut Bamboo out of all of that. Jenkins should be able to do everything it's doing.

I use serverspec + rspec + selenium to do integration testing on our project, and I'm using the same "RspecJunitFormatter" converter you probably found on github for jenkins to read.
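The glue for that is tiny; a sketch assuming the rspec_junit_formatter gem (report file name invented):
code:
node {
    // '|| true' so the junit step, not the shell exit code, decides pass/fail
    sh 'bundle exec rspec --format RspecJunitFormatter --out rspec.xml || true'
    junit 'rspec.xml'
}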

Right now, it's just a hot mess of applications stacked on top of each other and hotglued with shell scripts just like what you're doing.

Bhodi
What are people using to get production configs into docker images? Are you baking them in with a dockerfile, using a pull-on-first-boot type method or managing the overall env from the top-down using kubernetes or similar? Something entirely different?

Bhodi

revmoo posted:

It seems simple to just wave a hand and say it's easy, but I can see a ton of pitfalls; session management being a huge one. Right now our deploys are less than 1ms between old/new code and we're able to carry on existing sessions without dumping users. I don't see how Docker could do that without adding a bunch of extra layers of stuff.
It might not work for you, but I've seen this handled by halting new connections and doing rolling restarts, or, if you're using a shared cache, by a mechanism that transfers sessions.

Bhodi
I know that Azure supports docker so maybe it isn't as new/buggy as it once was? Are you in the :yaycloud:? Maybe you could try it there first.

Bhodi

smackfu posted:

We are upgrading our enterprise-wide Jenkins to v2.something from v1.something. Anything especially cool we should look into using? Our install is pretty locked down as far as team level customization.
The biggest feature is multibranch pipelines: it creates a folder and one job per branch, and when you run the job it looks for and executes a Jenkinsfile in the root of the chosen branch. This file can contain everything a jenkins job does, including setting optional/mandatory parameters and running programs and such. Another new feature that comes with this is the stage view, where you can see success/fail for each self-defined "stage" of the build, and since it's in Groovy you can create arbitrarily complex logic and build order. There's also a new parallelization ability within a stage.

It's a solid move toward storing your jenkins configuration within your source control environment rather than outside it, and also a move to a more program-like file config over the standard GUI of 1.x.
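A minimal Jenkinsfile sketch of stages plus the in-stage parallelization (the job content and branch logic are invented):
code:
node {
    stage('Build') {
        checkout scm               // multibranch hands you the right branch
        sh 'make build'
    }
    stage('Test') {
        parallel(
            unit:        { sh 'make test-unit' },
            integration: { sh 'make test-integration' }
        )
    }
    stage('Deploy') {
        if (env.BRANCH_NAME == 'master') {   // arbitrary logic works here
            sh 'make deploy'
        }
    }
}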

here's a pretty good primer https://wilsonmar.github.io/jenkins2-pipeline/


Bhodi
I had instability running OpenJDK and switched to the Sun JDK, but I'm running far fewer jobs than you. Obviously bump the memory settings in the launch options. There were some more aggressive heap and garbage collection options I dug out of the internet a while back that helped, but those were only valid for 1.x.

Bhodi
A cheap hack I used to get around this problem was the "Use custom workspace" option: just set the second job to use the first's workspace. Looks like you solved it in a much cleaner way.
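In pipeline terms that hack is just the ws step; a sketch (the workspace path is a guess, so check where your node actually puts it):
code:
node {
    // point this job at the first job's workspace instead of its own
    ws("${env.JENKINS_HOME}/workspace/first-job") {
        sh 'ls'   // the first job's files are visible here
    }
}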

Bhodi

uncurable mlady posted:

ECR requires authentication, though.

Also Jesus Christ I'd give my left nut to have management that gave that few of a shits about AWS spend
This is such a weird take, because all I hear management talk about is the AWS bill every month.

Bhodi

uncurable mlady posted:

My problem is that they just don't want to spend money, period. Our main CI server is seven years old and rapidly failing, but when I ask for money to get a new server, it gets rebuffed. When I say that we're going to extend the useful life by offloading build agents to EC2, I get complained at that we're spending too much on AWS. We run a lot of testing in the cloud because when we ran it locally, we'd lose end to end test runs because there was too much load on our VMware cluster and all of the runs would fail, causing lost days and missing milestones. Put it in the cloud, now they bitch about spend.

Sometimes you just can't please anyone except yourself.
Sorry, yeah. I misread "gave that few of a shits" as "gave that few shits". I think everyone struggles with AWS sticker shock, which is simply silly when put next to typical yearly capital spend on datacenter poo poo. Maybe it would be helpful if they only billed yearly; then you could file it next to the hundreds of thousands of dollars, if not millions, larger corps already give various companies.

Bhodi

Erwin posted:

If you're defining everything as code (Jenkinsfiles, etc) then Blue Ocean is mostly about looking nice I think. But, if you want to configure each job through the GUI, then you'll get more out of it. I don't do that so I don't have much to say about Blue Ocean.
I completely disagree with this sentiment, because unless you're using something that spits out the fully formed XML, like Jenkins Job Builder, the groovy pipeline scripts are by far the best way to couple the jobs with the code they manage within your change control.

It's true it has some pretty features like tracking and displaying times of individual stages, but that's secondary IMO to dynamically generated jobs that can have actual logic paths, which is a huge step above the previous extremely crude job chaining based on successful return codes.

Yeah, don't use chef to install/manage the jenkins instance itself; you're asking for heartache. I used ansible and built a pretty tight deploy script on CentOS 7. I'll share if you need it; you can PM me. Docker would be equally good. Doing it over again, I'd probably use docker.


Bhodi

Twlight posted:

How are you backing up your jenkins configs? We have a nightly job which copies it straight to SCM so we save all those job scripts. I've done a bunch of custom scripting within jenkins since, as you rightly said, the plugins can be hit or miss. We're finally moving to bitbucket, and the multi branch project plugin sounds great; I'd love to move to using more pipeline stuff, as currently our jobs are a bit too "one dimensional".
Those are great for backing up the jobs themselves, but for jenkins itself you're still looking at tar/scp or one of the plugin wrappers that do the same thing. I use ansible, which I pull into a local repository, then check in / tag to keep our dev/prod in sync. I have an ansible job for a pull and one for a push, so I can manually update dev, tweak it to where it's good, then pull it locally, check it in, and push/install it to prod. I have a .gitignore to prevent pushing anything secret into source control, so initial pushes have to come from the working directory directly. It's kinda half-assed, but it worked for me, since I was the only one managing it.

Here's my pull as an example, the push is the same except reversed. I also have a separate install script which installs jenkins from scratch and sets up the keys and such.
code:
---
- name: Get plugin list
  shell: "ls -1 {{ jenkins_dir }}/plugins/*.jpi*"
  register: plugin_list
  changed_when: false

- name: Pull plugins
  fetch: flat=yes src="{{ item }}" dest="files/plugins/" fail_on_missing=yes
  with_items: "{{ plugin_list.stdout_lines }}"

- name: Get jobs list
  shell: "cd {{ jenkins_dir }} && ls -1 {{ jenkins_dir }}/jobs"
  register: jenkins_jobs
  changed_when: false

- name: Pull jobs
  fetch: flat=yes src="{{ jenkins_dir }}/jobs/{{ item }}/config.xml" dest="files/jobs/{{ item }}.xml" fail_on_missing=yes
  with_items: "{{ jenkins_jobs.stdout_lines }}"

- name: Get XML list
  shell: "ls -1 {{ jenkins_dir }}/*.xml"
  register: xml_list

- name: Pull XML
  fetch: flat=yes src="{{ item }}" dest="files/xml/" fail_on_missing=yes
  with_items: "{{ xml_list.stdout_lines }}"

- name: Get secrets list
  shell: "ls -p {{ jenkins_dir }}/secrets | grep -v '/$'"
  register: secrets_list

- name: Pull secrets
  fetch: flat=yes src="{{ jenkins_dir}}/secrets/{{ item }}" dest="files/secrets/" fail_on_missing=yes 
  with_items: "{{ secrets_list.stdout_lines }}"

- name: Pull secret.key
  fetch: flat=yes src="{{ jenkins_dir }}/secret.key" dest="files/secret.key" fail_on_missing=yes

- name: Get user list
  shell: "cd {{ jenkins_dir }} && ls -1 {{ jenkins_dir }}/users"
  register: jenkins_users
  changed_when: false

- name: Pull users
  fetch: flat=yes src="{{ jenkins_dir }}/users/{{ item }}/config.xml" dest="files/users/{{ item }}.xml" fail_on_missing=yes
  with_items: "{{ jenkins_users.stdout_lines }}"
Note that even using pipelines, you still have to save the job xml which says "I'm a pipeline script! I look for my Jenkinsfile in git://..."

I should probably just stick the whole thing on git, but :effort:. I'd do it if anyone would use it, I guess


Bhodi

fletcher posted:

Appreciate your reply on the go :D

I think the catch here is that I want these to be separate jobs.

So one job that runs every commit: test kitchen, chef deploy, packer builds

A separate job that runs at specific times of the day: Apply the most recent AMI to the test environment

I could just write a little python script to figure out what the newest AMI is but I figured there was a way to do this by passing around jenkins parameters
The two non-janky ways to pass variables are params (if one job launches another, via the parameterized trigger plugin) and writing the value to a file and using the Copy Artifact plugin to pass the file in. The slightly janky way is to use a shared custom workspace and do the file thing as well. There are really janky groovy ways to query variables of previous jobs, and you can also query the localhost API from within the job, but I really don't recommend doing that.
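Hedged sketches of the two non-janky options in pipeline syntax (job names, parameter names, and the file name are all invented; assumes the Copy Artifact plugin):
code:
// option 1: job A triggers job B and hands the value over directly
// (assume amiId was captured earlier, e.g. from the packer output)
build job: 'apply-ami', parameters: [string(name: 'AMI_ID', value: amiId)]

// option 2: job A archives a file, job B pulls it with Copy Artifact
// in job A:
writeFile file: 'ami-id.txt', text: amiId
archiveArtifacts artifacts: 'ami-id.txt'
// in job B:
copyArtifacts projectName: 'build-ami', selector: lastSuccessful()
def newestAmi = readFile('ami-id.txt').trim()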

Bhodi

Hadlock posted:

Googling for advice on this is useless, every dashboard company has SEO'd good modern discussion off the first couple of pages.
If someone finds one, let me know. I threw together something in http://shopify.github.com/dashing but was never totally happy with the results.

Bhodi
I find ansible great for one-offs and initial configuration but completely hopeless when it comes to consistency and conformity remediation (did someone edit a file where they shouldn't? revert it). It's not great at working through bastion hosts, and the static nature of the hosts file is at odds with the fast-moving and mercurial nature of cloud VMs. Including files depending on variables is its token nod to modularization, but puppet has an entire dependency tree with resolution. They're mostly different tools, and I don't feel like they overlap all that much, which is why every company using ansible is really running ansible plus something else.

Bhodi

Vulture Culture posted:

It works fine with bastion hosts if you feed it a proper SSH config, and dynamic inventory scripts have been supported, with cloud examples shipped in contrib, for years now
I remember trying dynamic inventory, but there was some requirement that made me unable to use it; I can't recall what anymore, though. Probably due to the booze I used to cope :yotj:
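For the thread's benefit, the bar here is genuinely low: a dynamic inventory is just any executable that answers --list with JSON. A sketch (the groups and hosts are invented):
code:
#!/usr/bin/env groovy
import groovy.json.JsonOutput

if (args && args[0] == '--list') {
    println JsonOutput.toJson([
        web  : [hosts: ['web01.example.com', 'web02.example.com']],
        _meta: [hostvars: [:]]
    ])
} else {
    println '{}'   // for --host <name> calls; nothing host-specific here
}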

uncurable mlady posted:

why do you have servers where this is a problem? cattle, not pets.
the same answer applies here: it's legacy systems, it's always legacy systems :yotj:

Bhodi

Vulture Culture posted:

The requirement is an executable bit. Hope this helps! :c00lbutt:
I think it had to do with being unable to pull the appropriate amount of information out of our custom machine-tracker service for post-install poo poo, because it was on a 10-minute delay due to polling or something equally annoying

Bhodi

Space Whale posted:

Obviously a ruby script can just loop and then "if response is that it's done, bust out of that loop." My question is how do I have Jenkins act as some sort of a dashboard for that. Just use stdout from the ruby script?
Are you running jenkins 2.x? If so, yes: a pipeline script is exactly what you're looking for. Just use the built-in "stage" functionality to get pretty green/red boxes, derived from script exit codes or string scraping or whatever you want. Sadly this requires access to jenkins, which you don't have(?)

https://jenkins.io/doc/book/pipeline/
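Something in this shape, roughly (2.x scripted pipeline; the endpoint, timeout, and script names are invented):
code:
node {
    stage('Wait for environment') {
        timeout(time: 30, unit: 'MINUTES') {
            waitUntil {
                // poll until the endpoint answers; curl -f fails on HTTP errors
                sh(script: 'curl -sf http://env.example.com/status', returnStatus: true) == 0
            }
        }
    }
    stage('Verify') {
        sh './check.rb'   // the stage goes red or green on the exit code
    }
}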


Bhodi
Get a Linux Academy subscription and run through some of their labs. It launches real containers in AWS on their dime, and you get guided hands-on experience, which it seems like you need, because you aren't experienced enough yet to even ask the right questions.

Bhodi
I like the look of Blue Ocean, especially for quick visual reporting of echo commands in jobs, even if the whole thing is weirdly slow, especially compared to the beta. My beef is that file upload from parameters is STILL broken in pipeline. C'mon, guys, this is basic jenkins functionality.
https://issues.jenkins-ci.org/browse/JENKINS-27413?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&showAll=true

Bhodi

LochNessMonster posted:

I’ve taken that particular piece of advice very seriously.

Trying to figure out how groovy uses escape characters in specific corner cases is mindblowing though.

code:
node {
    echo 'No quotes, pipeline command in single quotes'
    sh 'echo $BUILD_NUMBER'
    echo 'Double quotes are silently dropped'
    sh 'echo "$BUILD_NUMBER"'
    echo 'Even escaped with a single backslash they are dropped'
    sh 'echo \"$BUILD_NUMBER\"'
    echo 'Using two backslashes, the quotes are preserved'
    sh 'echo \\"$BUILD_NUMBER\\"'
    echo 'Using three backslashes still results in preserving the single quotes'
    sh 'echo \\\"$BUILD_NUMBER\\\"'
    echo 'To end up with \" use \\\\\\\" (yes, seven backslashes)'
    sh 'echo \\\\\\"$BUILD_NUMBER\\\\\\"'
    echo 'This is fine and all, but we cannot substitute Jenkins variables in single quote strings'
    def foo = 'bar'
    sh 'echo "${foo}"'
    echo 'This does not interpolate the string but instead tries to look up "foo" on the command line, so use double quotes'
    sh "echo \"${foo}\""
    echo 'Great, more escaping is needed now. How about just concatenate the strings? Well that gets kind of ugly'
    sh 'echo \\\\\\"' + foo + '\\\\\\"'
    echo 'We still needed all of that escaping and mixing concatenation is hideous!'
    echo 'There must be a better way, enter dollar slashy strings (actual term)'
    def command = $/echo \\\"${foo}\\\"/$
    sh command
    echo 'String interpolation works out of the box as well as environment variables, escaped with double dollars'
    def vash = $/echo \\\"$$BUILD_NUMBER\\\" ${foo}/$
    sh vash
    echo 'It still requires escaping the escape but that is just bash being bash at that point'
    echo 'Slashy strings are the closest to raw shell input with Jenkins, although the non dollar variant seems to give an error but the dollar slash works fine'
}
This is a crime. Did you have to figure all that out on your own?

Bhodi
that way lies a factory factory factory

Bhodi
what the hell is 12 factor, do i have yet another bullshit invented term to learn? jesus, i hate computers

I interviewed a guy today whose resume looked like it was a word jumble from tech trends magazine. "Implemented machine learning and AI neural networks to solve automation and monitoring" sort of stuff. Don't be like that guy.

if you actually know how to install and manage kubernetes, jesus, let me hire you please, because 95% of candidates have flubbed the "what network layer provider did you decide on" question, since they used EKS or equivalent and it's all done for you

On that note, gently caress you amazon give me EKS and route 53 in govcloud already

It is pretty funny that almost everyone has moved away from puppet/chef to ansible, which makes sense if you aren't worried about state validation/enforcement because you just rebuild golden-image containers all day errday. but dammit, we've got 3 years invested in this stack and you'll have to pry chef from our senior devs' cold dead hands

it's rough hearing candidate after candidate suck air through their teeth and say "well, I'm pretty rusty on chef, I've been doing ansible for the last three years" and i just sit there silently nodding, but we won't budge, no sir


Bhodi
quote is not edit. also i should probably stop drunkposting in serious threads, but you're not the boss of me :colbert:


Bhodi

Zorak of Michigan posted:

How would you feel about "flannel because it was the first one I tried and it worked?"

I don't have k8s on my resume, no sir.
you're hired, all we got was five seconds of dead air and one hesitant explanation of VPCs. by mid week we would have considered even saying the word flannel or calico a win

99% of people who didn't use eks just mashed "I'm feeling lucky" and then "brew install kops"

I tell everyone who's trying to get into or up on tech that you have no idea the kind of talent you're up against

Docjowles posted:

set up kubernetes to run your company’s old rear end Java monolith
don't doxx me


Bhodi

Docjowles posted:

Are you vaguely aware of how to write and operate modern applications, where modern is like TYOOL 2012? It is that. https://12factor.net/. Plus the usual "make your app stateless!!!" "ok but the state has to live someplace, what do we do with that?" "???????????"
Thanks, I hate it

Actually, more than hate it: the fact that someone typed that inane drivel up and is apparently treating it like the new testament is THERE IS NO CAROL levels of crazy. I lose respect for anyone who takes self-evident do-the-thing-like-you'd-do-it advice as some enlightened idea. "Run admin/management tasks as one-off processes", come the gently caress on. This advice is on the same level as "Use e-mail or chat to coordinate tasks with your coworkers!"

Docjowles posted:

I mean this is basically how we got k8s into our org at my current job, except it was from the Director level instead of a rando ops engineer so we actually had power to tell people to rewrite their poo poo to not monopolize an entire worker node with one 64GB Java heap container that cannot tolerate downtime ever
we're on target to convert our 50-server m4.large low-CPU-load app onto 12 k8s nodes while automating away half our ops team (autocorrected to oops team, not far off), and we're going to look like loving wizards


Bhodi
Yeah, but at its root it's a problem with a monetary figure large enough that it tends to get decided on business interests way above the technical level, and as we all know, business interests are quarter-based, with maybe a year of lookahead. So the solution is to keep the Frankenstein stack shambling along until it falls over or you can't buy parts for it anymore.

At least when things were physical there was a solid "Sorry, we have to rebuild it anyway; we're moving datacenters / the hardware is being EOL'd." With containers, it's going to be interesting in a decade, since in theory you could run garbage code in perpetuity; there's no hard requirement to replace/rebuild. Or maybe the next thing will be incompatible with containers in the same way containers are incompatible with vcenter / xen.

It's probably going to be worse than I even expect, because at some point you literally won't be able to build another container, your CI/CD software having been EOL'd, while your current containers run just fine. I fully expect to see some container built a decade prior running somewhere, like those Windows NT installs still running in water treatment plants, only connected up to the internet.


Bhodi
We've been testing NFS on gluster (ganesha) for our k8s persistent storage, and early results are promising. We don't plan on using it as anything more than a file store, though, as it's beginning to feel like an abstraction layer cake, with each layer doing its own I/O buffering, and that's making me nervous.

Offlining a gluster node during write-read tests and watching everything freeze for several seconds then recover cleanly was pretty cool.

Anyone who's done this already: is there any reason NOT to have a peer heal back into the cluster automatically on boot?


Bhodi
Windows on containers is so busted that even Microsoft threw up their hands and recommends porting your code to .NET Core and running it on linux for microservices.

Extremely Penetrated posted:

I did this last spring. It was awful so I wrote a guide on how I did it. https://github.com/mooseracer/WindowsDocker
This is a crime scene; you are a criminal. I hope you feel bad about this and also yourself.


Bhodi
Vault is such a solid and well-put-together product; I didn't know Terraform had so many issues.


Bhodi

Vulture Culture posted:

I've found it to not actually be bad for any of these use cases, with judicious use of per-environment projects and clear separation of modules, but the documentation does make it incredibly obtuse how you should structure this and split code between projects so as to not blow your foot off with a sawed-off shotgun. terraform import has been great, but still isn't supported on enough resources and can be mind-bogglingly annoying to use on providers with (IMO) hostile APIs like AWS.
Out of curiosity, before I implement things the wrong way: any blogs or guides on doing this? I have to deal with multiple nearly airgapped environments (we can sync git, but only as a release process with a zipfile, and no syncing of environment state data in/out, because it contains hostnames). I'm trying to come up with decent tools and processes to handle promotion to higher environments from dev/qa, and the fact that a real live person has to perform the sync really puts a damper on your traditional commit-build-test-tag-deploy framework.

  • Reply