|
Punkbob posted:
"I'd go with ingress-nginx (but not the one by nginx, inc). It's a first-class project of Kubernetes and has good support. I'd really go with this both on-prem and in the cloud, as it's the most full-featured ingress controller at the moment."

ingress-nginx it is, then! Thanks! Ideally we'd have something like jwilder/nginx-proxy, which I've been using casually at home for years now and love the simplicity of, but I guess this isn't a whole lot more complex.
|
# ? Jan 9, 2018 08:28 |
|
|
Punkbob posted:
"Mea culpa. I guess Google just wasn't worried about in-transit encryption and HIPAA peering. That's a bummer that I misunderstood."

yeah, sorry if i was rude. i get super concerned when anyone is making a compliance/security decision based on mistaken information, which there is a ton of around cloud services. combination of complexity and feature velocity :/.
|
# ? Jan 9, 2018 14:59 |
|
And he works for Amazon.
|
# ? Jan 9, 2018 15:04 |
|
Stringent posted:
"And he works for Amazon."

I do, and sorry if i sounded like i was shilling. if they had made the same statement about AWS networking (which is also authenticated but not encrypted within the region) i would've corrected them, because making policy/security decisions on incorrect information is harmful to just about everybody, including other cloud providers.
|
# ? Jan 9, 2018 16:44 |
|
EssOEss posted:
"As in, some GUI options in the installer? If they just change runtime settings and do not affect the installer, I would simply do a post-MSI step that sets the appropriate regkeys or whatnot."

These are all just GUI options, yes. A few are represented via switches, but most are not. Specific stuff like enabling or disabling features or creating services is a pain in particular. If I go and set the system's environment variables in advance, it solves some of the issues. AutoIt is getting the job done, but I'm interested in the registry tool someone mentioned. Just doing vanilla silent installs and then tweaking the registry may be the better option in the long run.
|
# ? Jan 9, 2018 18:06 |
For a machine that is part of an auto scaling group, how do you make sure filebeat has had a chance to send everything to logstash before the machine goes bye-bye?
|
|
# ? Jan 11, 2018 23:23 |
|
You'll probably want to use an ASG lifecycle hook and write a script to do the validation, e.g. checking last-modification times.
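A rough sketch of the kind of validation the hook could run, in Python: wait until Filebeat's registry file has stopped changing, then let the instance terminate. The idle threshold, timeout, and hook names below are assumptions for illustration, not anything Filebeat or AWS prescribes.

```python
import os
import time


def wait_until_idle(path, idle_seconds=30, timeout=300, poll=1.0):
    """Return True once `path` has gone `idle_seconds` without modification,
    or False if that never happens within `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        idle_for = time.time() - os.path.getmtime(path)
        if idle_for >= idle_seconds:
            return True
        time.sleep(poll)
    return False


# Once the registry goes idle, tell the ASG to proceed with termination:
#   aws autoscaling complete-lifecycle-action \
#       --lifecycle-hook-name drain-filebeat \
#       --auto-scaling-group-name my-asg \
#       --lifecycle-action-result CONTINUE \
#       --instance-id "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
# (hook name and ASG name here are hypothetical)
```

You'd point this at Filebeat's registry file (e.g. `/var/lib/filebeat/registry` on a package install, but check your own config) rather than at the log files themselves, since the registry records what has actually been shipped.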
|
# ? Jan 11, 2018 23:56 |
|
Problem: $work wants to "use Jenkins" to parse poo poo from an XML file and use it to send another request as a RESTful POST to something, according to some schedule, then have some kind of report visible to a user based on the response from said endpoint, for the purposes of scheduled test automation. It's reading the file for params for the tests to run, then asking the service on the other side of the endpoint to run said job.

I've used TeamCity before, and even set it up once, but just "check out a distro, kick off a build when poo poo changes, unit tests, then some deploy script." Jenkins is something I used once at home to magically just make "oh I see you committed, I'll run build deploy," and since I said I did that months ago I'm now the expert. I also push for automation when possible, so here I am.

Is Jenkins the way to go with this? Are plugins available that would make this easy? Is this an XY problem? I do not yet have any real requirements at all; they just want a proof of concept that this can be used in this way.
|
# ? Jan 17, 2018 00:14 |
|
Jenkins is an alright cron, but it's also potentially overkill. I say potentially because it does give you nice stuff like "some schlub can just go look at the last job report," and the ability to use it for other things (builds, deploys). Your use case is also satisfied with cron and email, of course, so it really depends on your org and future plans.
|
# ? Jan 17, 2018 00:24 |
|
For background, I'm at a cable company that just bought another cable company; my boss is new, and I'm even newer. So there are no real requirements yet, and this seems to be proof-of-concept fishing. I also haven't done any automation work like this in ages, so I feel incredibly rusty. What's the idiomatic way to get this out the door? I'm brand new to Jenkins and I've never green-fielded anything like this before. I'm sponging documentation but I hate not having anything to show.
|
# ? Jan 17, 2018 00:30 |
|
Space Whale posted:
"What's the idiomatic way to get this out the door? I'm brand new to Jenkins and I've never green-fielded anything like this before. I'm sponging documentation but I hate not having anything to show."
|
# ? Jan 17, 2018 00:41 |
|
Do not use a Jenkins plugin for anything other than integrating with Jenkins itself.
|
# ? Jan 17, 2018 02:51 |
|
There's always the thorny issue of needing an SCM plugin like GitHub or Bitbucket if you want to use Jenkins multi-branch pipelines. But after seeing how the Bitbucket plugin gradually causes socket leaks in the Jenkins master somehow (varying with branch-scanning frequency and the average number of branches per repo, it appears), I'm inclined to believe that even those plugins are too much for a sane, scalable Jenkins server to handle without me rebooting or restarting it weekly and HA-ing it around with some horrific NFS-backed Jenkins home directory. It's kinda scary how my current place, with 1/4 the number of developers of my last place, easily quadruples the commit/build frequency, and shows the warts of Jenkins' master nodes far faster as a result.
|
# ? Jan 17, 2018 04:22 |
|
tbh you'd probably be better off with a python script, a cron job, and an http server (if you wanna be fancy, put it all in docker containers)
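A minimal sketch of that script, assuming a made-up XML shape and a placeholder endpoint; the element names, attributes, and URL are all stand-ins for whatever the real file and service look like.

```python
import json
import urllib.request
import xml.etree.ElementTree as ET


def build_payload(xml_text):
    """Pull test parameters out of the XML. The <test> elements and
    name/suite attributes here are hypothetical."""
    root = ET.fromstring(xml_text)
    return {
        "tests": [
            {"name": t.get("name"), "suite": t.get("suite")}
            for t in root.iter("test")
        ]
    }


def post_job(url, payload):
    """POST the payload as JSON and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Drop a call to these two functions in a `main`, point cron at it, and you have the whole "scheduled test kickoff" half of the problem; the report half can be as simple as writing the response somewhere your http server can show it.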
|
# ? Jan 17, 2018 04:36 |
I've got a Packer job that cranks out AMIs. Each AMI it creates has unique SSH host keys, which makes it kinda hard to manage the ~/.ssh/known_hosts file. Should part of the Packer build be placing a known SSH host key on the machine, so that in effect every AMI it produces has the same SSH host key? (These are all private AMIs.)
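If you do go the shared-host-key route, a Packer sketch might look like the fragment below; the paths and filenames are placeholders. Two caveats worth hedging: a shared private host key means anyone who can read the image can impersonate any instance built from it, and cloud-init on many stock AMIs regenerates host keys on first boot, so you may need to disable that behavior too.

```json
{
  "provisioners": [
    {
      "type": "file",
      "source": "keys/ssh_host_ed25519_key",
      "destination": "/tmp/ssh_host_ed25519_key"
    },
    {
      "type": "shell",
      "inline": [
        "sudo mv /tmp/ssh_host_ed25519_key /etc/ssh/ssh_host_ed25519_key",
        "sudo chmod 600 /etc/ssh/ssh_host_ed25519_key",
        "sudo ssh-keygen -y -f /etc/ssh/ssh_host_ed25519_key | sudo tee /etc/ssh/ssh_host_ed25519_key.pub"
      ]
    }
  ]
}
```

An alternative that avoids the shared-secret problem entirely is an SSH host certificate authority: sign each instance's unique host key and put one `@cert-authority` line in known_hosts.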
|
|
# ? Jan 23, 2018 02:34 |
|
Jenkins to schedule a .bat (I'm in Windows) to then call a ruby script ~~on my machine~~ works. The actual Jenkins server/environment that already exists I have no access to, nor any contact with the team that wanted this proof of concept. lol

As far as "reporting" or "a dashboard": the thing done by the ruby script is to call another server with a REST call that puts XML in a JSON (lol again) to "run these automated tests." These automated tests are not build tests; rather, they're integration tests to make sure all manner of devices that use our network actually work. It's cool but outside of scope.* To get the status of the job, you make another REST call, a GET with the test job ID. So you'd need to periodically keep hitting it to get status, and if it's done, then say "I'm done."

Obviously a ruby script can just loop and then "if response is that it's done, bust out of that loop." My question is how I have Jenkins act as some sort of a dashboard for that. Just use stdout from the ruby script? Because I have no clear requirements, I'm still just scratching my head. I'm not even sure what the people with Jenkins are doing, except for the fact that they do use it for builds and would like to have daily/whatever test runs. OK, fine. But I should be able to talk to them and see their environment.

*The testing I do involves an IR transmitter with IR fibers taped to Rokus, set-top boxes, Xboxes, and so on, simulating a remote and going through channel guides, channel switching on the STBs, Roku channels, and authentication (including OCR for the codes), and yadda yadda. Kinda cool. Basically every HO/CO has one of these test machines to catch outages and to test the rollout of new poo poo so the internet and TV stay on. This is the testing called by the ruby script called by Jenkins. We already have this implemented with automated builds for CI; I guess they just want to hit a machine daily too, on top of that, for some reason?
|
# ? Jan 23, 2018 17:05 |
|
I talked my boss into Kubernetes, he sold his boss on it and got our CTO jumping out of his seat pointing at the screen in the board room; halp. We're a 2003-era Java company; GitHub is like Jesus, Kubernetes is like having a conversation with God himself. Oh God, oh God. Our app is a fancy Java CRUD app that scales horizontally very well.
|
# ? Jan 24, 2018 09:27 |
|
You're hosed unless you have a massively forward-thinking ops team who will manage your cluster(s).
|
# ? Jan 24, 2018 13:10 |
|
Are you running everything on-prem? Node management of k8s is a pain in the rear end, but if they're gung-ho and willing to throw bucks at it, maybe look @ Tectonic or RH OpenShift? How big of a cluster are we talking about here?

If you're in AWS, use kops until Amazon EKS hits GA, then move over to that. If you're in Google, lucky you, because GKE is the poo poo and does all the node poo poo for you really easily.

If your application isn't _already_ containerized in Docker then yeah, you bit off a big chunk. First-pass containerization + orchestration in k8s is pretty steep. Get the poo poo running on a MacBook with Docker CE Edge w/ Kubernetes enabled before doing anything else.
|
# ? Jan 24, 2018 15:10 |
|
Hadlock posted:
"I talked my boss in to kubernetes, he sold my boss on it and got our CTO jumping out of his seat pointing at the screen in the board room; halp"

Good luck! Honestly the hard part is Docker at this point, and then getting used to all the concepts. You can pretty much convert to using GKE, or deploy on kops on AWS, in a week if you know what you are doing and your poo poo is dockerized. On-prem is a bit harder (although not that hard), and OpenShift is kind of heavily bifurcated from Kubernetes in a way that I'd avoid. Tectonic is closer to mainline Kubernetes.

Also, the comment about your ops team is important: if they are gonna be a bunch of babies or negative nancies, then welcome to a shitshow.

Edit: I went back and looked at your old posts; you are in AWS already with kops, so this should be easy. PM me if you want to bounce ideas, or I can point you to a few Slacks that'd be able to help you with questions.

freeasinbeer fucked around with this message at 15:52 on Jan 24, 2018
# ? Jan 24, 2018 15:49 |
|
Hadlock posted:
"I talked my boss in to kubernetes, he sold my boss on it and got our CTO jumping out of his seat pointing at the screen in the board room; halp"

Just get the Java app working in Lambda and go serverless.
|
# ? Jan 24, 2018 16:26 |
|
Space Whale posted:
"Obviously a ruby script can just loop and then "if response is that it's done, bust out of that loop." My question is how do I have Jenkins act as some sort of a dashboard for that. Just use stdout from the ruby script?"

https://jenkins.io/doc/book/pipeline/

Bhodi fucked around with this message at 16:45 on Jan 24, 2018
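A declarative pipeline for this use case might look something like the sketch below. The script names and cron schedule are placeholders; the point is that each `stage` shows up as its own green/red box on the job page, driven by the scripts' exit codes.

```groovy
pipeline {
    agent any
    triggers { cron('H 6 * * *') }   // run daily; schedule is a placeholder
    stages {
        stage('Kick off tests') {
            steps {
                // hypothetical script that POSTs the job and saves the job ID
                bat 'ruby start_tests.rb > job_id.txt'
            }
        }
        stage('Wait for result') {
            steps {
                // hypothetical script that polls the REST endpoint;
                // a non-zero exit code turns this stage red
                bat 'ruby poll_until_done.rb job_id.txt'
            }
        }
    }
}
```

Stdout from the ruby scripts lands in the console log for each stage, so the "dashboard" comes for free.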
# ? Jan 24, 2018 16:42 |
|
Moving a monolithic Java application is typically a problem not because of Java, but because the typical company writing in Java is simply not going to be able to deploy anything remotely modern that fits the patterns Kubernetes really needs for you to be successful. I have tried to do the migration steps just to get applications somewhat stateless and monitored at about 15 different companies/customers now, and basically all of them were failures for cultural reasons rather than some technical reason; that's what keeps them stuck on 90s-style J2EE app servers. I've seen monolithic Django apps fail to move to Docker containers, similarly.

If it requires more than about 30% of the code to be changed, you are empirically better off completely rewriting the application. I really want to find that paper; I've seen it before and it was eye-opening just how hard it is to maintain and migrate software systems. Go greenfield with K8S in such companies or don't even try. I really mean it.
|
# ? Jan 24, 2018 20:41 |
|
Punkbob posted:
"deploying on kops on AWS in a week if you know what you are doing"

Kops has a lot of weird edge cases that are show-stoppers when they crop up, like slotting into pre-existing infrastructure-as-code or using pre-existing bastion hosts. Also, it's not CI-friendly in any way. It made me sad, 'cause the dev team is super nice and helpful; they just built it to fit their use case and then had to do a bunch of work to make it more generalized.
|
# ? Jan 24, 2018 22:57 |
|
Blinkz0rz posted:
"Kops has a lot of weird edge cases that are show-stoppers when they crop up like slotting into pre-existing infrastructure-as-code or using pre-existing bastion hosts. Also it's not CI friendly in any way."

I completely agree, and I think it's a larger issue with the kube community. But I also sort of agree with their perspectives, and think that a lot of orthodoxy needs to be questioned in how infra is handled.
|
# ? Jan 24, 2018 23:17 |
|
necrobobsledder posted:
"I have tried to do the migration steps just to get applications somewhat stateless and monitored at about 15 different companies / customers now and basically all of them are failures for cultural reasons rather than some technical reason"
|
# ? Jan 25, 2018 00:08 |
|
Kops has been an interesting experience. 6 months ago I would have recommended against it unless you were forced to use it, but with their 1.8 release they've made great improvements for customizing your deployments. I like the direction they're going, but once EKS comes out I would take a serious look at that if you have to run on AWS.
|
# ? Jan 25, 2018 03:58 |
|
Cerberus911 posted:
"Kops has been an interesting experience. 6 months ago I would have recommended against it unless you were forced to use it, but with their 1.8 release they've made great improvements for customizing your deployments."
|
# ? Jan 25, 2018 06:05 |
|
Bhodi posted:
"Are you running jenkins 2.X? If so, yes. A pipeline script is exactly what you're looking for. Just use the built-in "stage" functionality to have pretty green / red boxes, derived from either script exit codes or string scraping or whatever you want."

Sadly this requires access to Jenkins, which I don't have(?). In a half hour I'm finally calling the guy wanting me to do that and actually getting requirements.
|
# ? Jan 26, 2018 22:29 |
|
The guy on the call wanted me to just do it in the course of 30 minutes, on the spot, over WebEx. Apparently we already have a script-running system that does this and he just doesn't know how to use it. So Jenkins is some magic bullet some manager heard of, and nobody realizes all it would do is fire off a script I'd have to write; it's not some magical test-management system. And gently caress maintaining scripts, we already have a system we wrote to do this.

I have to talk to my boss and explain this is already doable. I dragged in the new PM/Scrum/etc. person of many hats and she just looked lost and confused the entire time, but she can corroborate what I saw so people know I'm not lying, because this is unbelievable.

edit: the most sr dev here is pissed because our new test execution engine is coming out really soon and does what they want, and they don't want to wait, so I'm wasting time on old bs. fgsfds

Space Whale fucked around with this message at 01:16 on Jan 27, 2018
# ? Jan 27, 2018 00:04 |
|
Huh, we went from... no AWS, no Docker, no Kubernetes to... one Kubernetes cluster for ops and one Kubernetes cluster for our reporting team this month, and a third Kubernetes cluster in prod (really, a limited customer-facing late-stage alpha) in mid-February.

Right now we're using kops 1.8.x to manage/create the clusters; as my friend describes it, "a high leverage tool." Injecting a cluster into an existing VPC on its own subnet(s) seems to work, and we have some existing infrastructure-as-code Tectonic stuff that a contractor sort of maintains. I think my kops stuff makes him mildly irritated, but kops has an "export as Terraform" option, so... I guess I'll just do that and then merge it into our Terraform codebase? Haven't figured that part out yet. I'd prefer to just spin these things up in their own VPC as god intended.

So: kops to deploy/maintain the cluster, nginx-ingress to handle reverse proxying, kube-lego for SSL. I've spun up Prometheus and Grafana but haven't had a chance to wire them up or anything. Will have to back into RBAC; right now everything is controlled from either my user or a helm/tiller user.

Do we have a kubernetes thread yet? Or is this it? Boss really wants to get us out of this managed-server hell and into AWS; we're using this as a beachhead to get there, so things are moving pretty fast.
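For reference, the nginx-ingress + kube-lego combo above is mostly wired together by annotations on the Ingress object. A hedged sketch, using the 2018-era `extensions/v1beta1` API; the hostname, service name, and secret name are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"     # route through nginx-ingress
    kubernetes.io/tls-acme: "true"           # tells kube-lego to fetch a cert
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: myapp-tls                  # kube-lego stores the cert here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp
              servicePort: 80
```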
|
# ? Jan 27, 2018 11:03 |
|
We're doing the same. We've gone from 0 to Ceph+3 Kubernetes clusters (dev, test, prod) in about 2 weeks, and we're going full-tilt towards getting our applications container-native over the next two months. We opted for kubespray over kops since we're running big swaths of our infrastructure on-premises, though.
|
# ? Jan 27, 2018 14:32 |
|
Hadlock posted:
"Huh we went from... no AWS, no docker, no Kubernetes to... One kubernetes cluster for ops, one kubernetes cluster for our reporting team, this month; and a third kubernetes cluster in Prod (really, a limited customer facing late-stage alpha) in mid-february."

I'd switch from kube-lego to cert-manager; it's by the same folks, but it's a better spin on what kube-lego does and has features like DNS verification, so you don't have to expose everything to the world.

Edit: the team that uses kops does do the Terraform export, but it's one of those things where I don't understand why they do it or fight so hard with it, besides just really liking Terraform.

freeasinbeer fucked around with this message at 14:39 on Jan 27, 2018
# ? Jan 27, 2018 14:37 |
|
i just wrapped up a big project consulting on a failed migration from self-hosted to k8s-on-aws. i think they would have probably succeeded if they'd done self-hosted k8s (or openshift), or gone straight from self-hosted to aws, but where they ended up was a huge mess.

i'm reserving judgement on aws eks until it's actually ga, but i think right now k8s on aws is a mistake unless you need to also support self-hosted k8s, or you are starting from something easily ported to k8s (like all your applications already running in docker in production). almost every legacy project is going to have a hard enough time moving to aws rds/aws ec2/aws ecs/... without also throwing docker and k8s into the mix.
|
# ? Jan 27, 2018 22:07 |
|
I migrated dozens of JVM services to Kubernetes without much effort or pain, but they were all stateless to begin with and containerizing them was easy too.
|
# ? Jan 27, 2018 22:25 |
|
On-prem kube should be harder to do, mostly because so much native functionality isn't there. I am genuinely confused about how it would work better on-prem than in AWS. I'd like to know what went wrong.
|
# ? Jan 28, 2018 02:35 |
|
the talent deficit posted:
"i just wrapped up a big project consulting on a failed migration from self hosted to k8s-on-aws. i think they would have probably succeeded if they'd done self hosted k8s (or openshift) or just straight self hosted to aws but where they ended up was a huge mess"

Do you have a debrief of what went wrong? There is so much Kubernetes running on AWS that they made a service specifically to help more people use it there. Our TAMs were happy they pushed the Kubernetes service out so fast, because enterprise requests for it were blowing up their time.
|
# ? Jan 28, 2018 02:47 |
|
AWWNAW posted:
"I migrated dozens of JVM services to Kubernetes without much effort or pain, but they were all stateless to begin with and containerizing them was easy too."

java -jar thing.jar was containerization before it was cool
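In that spirit, a self-contained jar really does containerize in a few lines. A hypothetical minimal Dockerfile, period-appropriate for a Java 8 shop; the jar path and base image are assumptions:

```dockerfile
# Minimal image for a self-contained ("fat") jar; names are placeholders.
FROM openjdk:8-jre-alpine
COPY target/thing.jar /app/thing.jar
# These experimental flags (8u131+) make the JVM respect the
# container's cgroup memory limit instead of the host's RAM.
ENTRYPOINT ["java", "-XX:+UnlockExperimentalVMOptions", \
            "-XX:+UseCGroupMemoryLimitForHeap", "-jar", "/app/thing.jar"]
```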
|
# ? Jan 28, 2018 03:10 |
|
Punkbob posted:
"On prem kube should be harder to do mostly because so much native functionality isn't there. I am genuinely confused on how it would work better on prem then in AWS. I'd like to know what went wrong."

My team runs 20 or so k8s clusters. In the past few months we've started building hardware clusters in our data centers, as we can requisition hardware, and holy poo poo are they nice. Way more capacity, globally routable pods and services (using kube-router with BGP). It's opening up k8s to a lot of teams that were previously blocked by having to go through an ingress to reach services. We do our own cluster provisioning with Chef, and it was not a big deal to change our recipes to support both hardware and VM clusters.
|
# ? Jan 28, 2018 06:12 |
|
|
I do openshift if anyone wants help with that.
|
# ? Jan 28, 2018 06:16 |