|
necrobobsledder posted:Containers do not replace configuration management. However, they let you get to a place where you can at least separate application configuration and artifacts from operations concerns such as logging, monitoring, kernel tuning, etc. The model also makes it easier to enforce 12-F applications instead of allowing lazy developers to dump files all over a filesystem. One of the nicest benefits of distinguishing infrastructure from applications is that you can set up infra to rolling-replace every, say, 48 hours, and you can put tight bounds on your fleet heterogeneity and overall age.
|
# ? Jun 13, 2017 16:12 |
|
On DB / container chat, here is a good HN thread that discusses it. It's relatively old but still relevant: https://news.ycombinator.com/item?id=13582757 TLDR: draw your own conclusions; it's not a silver bullet (and I don't think anyone is suggesting it is), but I'm personally going to keep an open mind and evaluate it per project as it comes up. I don't think the usual knee-jerk 'THE DB IS SACRED, NEVER TOUCH IT' responses help anything.
|
# ? Jun 13, 2017 17:51 |
|
We've had great traction using containers to simplify our current bare-metal environment. By reducing the number of app server types, we can simplify our entire footprint, making things much easier to manage overall. Deployments are a snap now as well. I've also been using Docker Swarm and deploying directly via the API, which has been pretty simple to set up with some python.
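For anyone curious what "deploying directly via the API with some python" can look like: a minimal stdlib-only sketch that builds the JSON payload for the Swarm create-service endpoint. The service name, image, and replica count are made-up examples, and actually sending the request (to the engine's `/services/create` endpoint over the unix socket or a TCP-exposed API) is the part the docker CLI normally does for you.

```python
import json

def service_spec(name, image, replicas):
    """Payload shape for POST /services/create in the Docker Engine API."""
    return {
        "Name": name,
        "TaskTemplate": {"ContainerSpec": {"Image": image}},
        "Mode": {"Replicated": {"Replicas": replicas}},
    }

def create_service_request(spec):
    # Returns (method, path, body) for the HTTP call; sending it requires
    # a reachable Docker API endpoint, which is out of scope here.
    return ("POST", "/services/create", json.dumps(spec))

if __name__ == "__main__":
    spec = service_spec("web", "nginx:1.13", replicas=3)
    method, path, body = create_service_request(spec)
    print(method, path)
```

Scaling a service up or down is then just the same payload with a different `Replicas` value, which is most of what the orchestrator API amounts to day to day.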
|
# ? Jun 21, 2017 16:21 |
|
Containers are great if you need 2 of something ephemeral... err no I need 20... err no I need 50... err no I need 2 again. They're also great if you want to pick something up in one place, put it somewhere else, and have it work perfectly. However, if you store persistent data in them you are going to have a very, very bad time. Docker volumes are just a joke, and the LVM disk driver stuff is better but still not somewhere I'd store any data I'm going to go back and drat well expect to still be there.

The only case where I would consider putting a DBMS in a container is if I have to scale it to vast quantities of data (I'm thinking petabytes), where 'database' has become a service in and of itself. The operational overhead at that point might be worth it. Boutique data stores, or things that coordinate over gossip protocols (your hbases or cassandras or elasticsearches), may also be an exception. But if you're at the point where you know you need something more complicated than a boring DBMS (and not just because it'll look snazzy on your resume), you can probably figure out a way to do it with containers that doesn't cause all your production data to disappear like the pastor who got my sister preggers, or involve making posts to the moderately successful Tony Danza support group located at forums.somethingawful.com.

Basically: don't put databases or files you care about in docker containers and you'll be fine. Ganson fucked around with this message at 06:05 on Jun 27, 2017 |
# ? Jun 27, 2017 05:38 |
|
Dren posted:Can anyone share their thoughts on how containers turn updating from a system problem (run yum update once on the system) into an every-app problem? Is this not a big deal in practice, or does it turn out that updates don't happen as easily or as often as they should? All the major orchestration platforms have built-in support for blue/green or rolling deploys, and they work fairly well (last I looked Kubernetes was the furthest behind, but that was a while ago). You handle it much like you would handle updating hosts behind an ELB or a static VM: once you have your base image updated, you tell the orchestration platform what you want and it'll handle it for you. Mesos was really nice in this regard; you could get real complicated with your requirements (I need at least 2 healthy hosts at all times, never go over this amount of hosts in existence, etc. etc.).
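For what it's worth, those Mesos-style constraints map onto what the platforms expose as deploy strategy settings. A hypothetical Kubernetes Deployment fragment (the name and counts are invented for illustration):

```yaml
# "Keep at least 2 of 3 replicas healthy, never run more than 1 extra
# instance" expressed as a rolling-update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica down at a time
      maxSurge: 1         # at most one replica over the desired count
```

A rolling base-image update is then just changing the image tag in the pod template and letting the controller walk the replacement through under those bounds.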
|
# ? Jun 27, 2017 06:11 |
|
I'm working on automating builds for CI in a more methodical way, instead of the ad-hoc bespoke scripts I have now. It's kind of a unique situation: a decently sized chunk of C++ code that gets built for both the host (native unit tests) and ARM (deployment artifacts). That means the build environment needs two sets of compilers and two sets of dependencies, or two build environments and each commit needs to be run twice. I'm working with gitlab-ci and the docker-runner process to build in containers, but I honestly have no idea how to set up my build environment so that the pre-build state is cached and can spin up instantly. As a lesser feature, I'd like to keep the build temporaries around to speed up the process, as a full build takes 10 minutes or so and I'd rather use dependency resolution to rebuild as needed instead of always rebuilding from scratch. Where do I start? I've been doing things the old-fashioned way for so long that when I try to dive into modern methodology, every guide expects me to have already mastered things like building my own docker image, or to be doing something trivial that uses a standard one.
|
# ? Jun 28, 2017 02:47 |
|
I do not know anything about gitlab-ci or docker-runner, as I use a rather different toolchain, but I suppose the principles are the same. Off the top of my head, without diving too much into whatever makes your situation special, I would structure the build process around a container image that is rebuilt from a Dockerfile on every build: first the commands that set up the build environment (compilers and dependencies), then a step that adds your input files, and finally the command that runs your build script.
On every build, you would first execute a "docker build" command to generate the image from the above Dockerfile. This process caches the result of each command in the Dockerfile, so if the commands that set up your build environment do not change, that part is not re-executed on every build and Docker skips right to the "add input files" step. Note that this caching may leave a lot of intermediate image layers around that you will need to clean up at some point (I would suggest a weekly/monthly script that runs "docker image prune" to clear the cache); nothing specific to your scenario - Docker is always quite messy with temporary data.

After building the image, you would execute the container using "docker run" and wait for your build script to finish, then process any outputs it generated (into the directories on the host that you mapped into the container using --volume). You can also inspect the exit code of the container to determine whether the build script signaled success or failure.

Make sense? Ask away if not. EssOEss fucked around with this message at 21:09 on Jun 28, 2017 |
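Spelled out, a hypothetical Dockerfile along those lines (the base image, package list, and build script path are all invented for illustration):

```dockerfile
# Expensive environment setup comes first so Docker's layer cache
# skips it when only the sources change.
FROM ubuntu:16.04

# 1. Set up the build environment (cached until this list changes)
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential gcc-arm-linux-gnueabihf cmake \
    && rm -rf /var/lib/apt/lists/*

# 2. Add the input files (invalidates the cache from here on)
COPY . /src
WORKDIR /src

# 3. Run the build script when the container starts
CMD ["./build.sh"]
```

The per-build driver is then roughly `docker build -t builder . && docker run --rm -v "$PWD/out:/out" builder`; a non-zero exit code from `docker run` means the build script failed.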
# ? Jun 28, 2017 21:07 |
|
I'm kind of a docker noob but can't you build a docker image that is fully provisioned, manually version it, publish it to a docker repository, and pull that for a CI build so you can have a fresh one each time but not eat the cost of provisioning? Building and publishing the image in CI is possible too – check if the current version is in the repository and if not have CI build it and push it.
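A sketch of that check-then-build step, assuming the docker CLI is available in CI and treating the repo name as a placeholder. The tag is derived from the Dockerfile contents, so the published image gets reused until the build environment definition actually changes:

```python
import hashlib
import subprocess

def env_tag(dockerfile_text):
    # Version the build environment by the content that defines it:
    # same Dockerfile -> same tag -> CI reuses the published image.
    return "env-" + hashlib.sha256(dockerfile_text.encode()).hexdigest()[:12]

def ensure_image(repo, tag):
    """Pull the tagged image; build and push it only when the pull fails.

    Shells out to the docker CLI and assumes CI is already logged in
    to the registry behind `repo` (a placeholder name).
    """
    ref = f"{repo}:{tag}"
    if subprocess.run(["docker", "pull", ref]).returncode != 0:
        subprocess.run(["docker", "build", "-t", ref, "."], check=True)
        subprocess.run(["docker", "push", ref], check=True)
    return ref
```

The nice property is that nobody has to remember to bump a version number: editing the Dockerfile changes the hash, and the next CI run builds and publishes the new environment automatically.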
|
# ? Jun 29, 2017 02:43 |
|
To have a second process for building the image? Sure you can, but there is no obvious need to unless you plan to reuse it (e.g. for a whole build cluster). Or do you have some benefit in mind that I do not immediately think of?
|
# ? Jun 29, 2017 07:04 |
|
EssOEss posted:To have a second process for building the image? Sure you can, but there is no obvious need to unless you plan to reuse it (e.g. for a whole build cluster). Or do you have some benefit in mind that I do not immediately think of? Harik said he wants to Harik posted:setup my build environment so that the pre-build state is cached and can spinup instantly. Your process runs the image build as one of its steps on every build.
What I'm talking about is having step 1 be performed once, possibly by a CI step, every time the toolchain changes (which shouldn't be very often) instead of once for every build. This would meet Harik's goal of having the pre-build state cached so it can spin up instantly.
|
# ? Jun 29, 2017 19:37 |
|
Ah, I see. That would be taken care of by this part: EssOEss posted:This process will cache the result from each command in the Dockerfile You get caching for free. No need to push any images for it.
|
# ? Jun 29, 2017 20:44 |
|
EssOEss posted:Ah, I see. That would be taken care of by this part: Where will devs get the environments from? Do they have to build it themselves?
|
# ? Jun 29, 2017 21:28 |
|
Dren posted:Where will devs get the environments from? Do they have to build it themselves? You could either have devs build it themselves, host an internal Docker registry (if it's somehow private/confidential) and have them pull the base image from there, or push the base image to Docker Hub and do the same.
|
# ? Jun 30, 2017 01:22 |
|
Private Docker registries can be fairly easy to set up, provided you have certificates that don't suck (read: self-signed certs that you have to add to clients are a pain in the rear end). I'm hosting one at work as an nginx container terminating the SSL connection and proxying to the Docker registry container. These run on a single instance in an ASG with a size of 1, backed by an S3 bucket. Pulling images out of AWS can slightly suck cost-wise if you have a lot of developers pulling locally, but if you're primarily pulling from within AWS it's pretty nice.
|
# ? Jun 30, 2017 03:44 |
|
If you're already on AWS, you can use ECR registries for pretty cheap.
|
# ? Jun 30, 2017 12:24 |
|
I didn't want to have even a chance of the container registry being accessible on the public Internet, and since ECR doesn't support provisioning inside a VPC last I saw, that was a no-go for me. The EC2 instance + EBS probably costs more than what ECR would cost us, but with a $280k+ / mo AWS bill from gross mismanagement (110 RDS instances idling 95% of the time, rawr) I'm not being paid to care about cost efficiency anymore.
|
# ? Jun 30, 2017 13:25 |
|
necrobobsledder posted:I didn't want to have even a chance of the container being accessible on the public Internet and since ECR doesn't support provisioning inside a VPC last I saw, that was a no-go for me. The EC2 instance + EBS probably costs more than what ECR would cost us, but with a $280k+ / mo AWS bill from gross mismanagement (110 RDS instances idling 95% of the time, rawr) I'm not being paid to care about cost efficiency anymore. ECR requires authentication, though. Also, Jesus Christ, I'd give my left nut to have management that gave so few shits about AWS spend.
|
# ? Jul 1, 2017 04:47 |
|
You do not want this management. At all. Our infrastructure is horrific, and even as a consultant who has worked across the public sector and about half the Fortune 100, I think this is in the bottom 10% of organizational capability around infrastructure management. While I don't have organizational barriers to doing a lot of things other places would care about, there are so many other problems from the lack of organization that it's a different form of paralysis.
|
# ? Jul 1, 2017 15:33 |
|
uncurable mlady posted:ECR requires authentication, though. This is such a weird take, because all I hear management talk about is the AWS bill every month.
|
# ? Jul 1, 2017 16:58 |
|
Bhodi posted:This is such a weird take because all I hear management talk about is the AWS bill every month My problem is that they just don't want to spend money, period. Our main CI server is seven years old and rapidly failing, but when I ask for money to get a new server, it gets rebuffed. When I say that we're going to extend the useful life by offloading build agents to EC2, I get complained at that we're spending too much on AWS. We run a lot of testing in the cloud because when we ran it locally, we'd lose end to end test runs because there was too much load on our VMware cluster and all of the runs would fail, causing lost days and missing milestones. Put it in the cloud, now they bitch about spend. Sometimes you just can't please anyone except yourself.
|
# ? Jul 1, 2017 17:14 |
|
uncurable mlady posted:My problem is that they just don't want to spend money, period. Our main CI server is seven years old and rapidly failing, but when I ask for money to get a new server, it gets rebuffed. When I say that we're going to extend the useful life by offloading build agents to EC2, I get complained at that we're spending too much on AWS. We run a lot of testing in the cloud because when we ran it locally, we'd lose end to end test runs because there was too much load on our VMware cluster and all of the runs would fail, causing lost days and missing milestones. Put it in the cloud, now they bitch about spend.
|
# ? Jul 1, 2017 17:22 |
|
Most places I've seen that bitch about cost treat AWS like a datacenter without even looking at options like reserved instances, S3 durability and redundancy tiers, and bandwidth contracts (yes, they do have them to help lower egress costs substantially, mostly of use when you get to petabytes / mo in transfers).
|
# ? Jul 1, 2017 21:34 |
|
necrobobsledder posted:bandwidth contracts (yes, they do have them to help lower egress costs substantially, mostly of use when you get to petabytes / mo in transfers). How does this work?
|
# ? Jul 3, 2017 01:56 |
|
Blinkz0rz posted:How does this work? I really swear I saw such a document, because I was about to march into the engineering director's office with it to argue against the egress-cost argument (the prime mover away from AWS), to avoid having to deploy our junky flagship product into an expensive OpenStack cluster and tie us to a datacenter.
|
# ? Jul 3, 2017 02:41 |
|
I don't know how well documented it is for anybody to find - I suspect it's really not - but you should try to get in contact with your Amazon rep (I'm not sure which kind of rep you'd be looking for, my experience with this is secondhand). My company uses a fraction of what you're describing and we negotiated several tens of thousands of dollars worth of credit and support/training. We were clearly going to be long-term customers, and I bet they'd be amenable to helping you control your costs in order to keep you around in the same vein.
|
# ? Jul 3, 2017 06:55 |
|
If you're an early-stage startup you can also get up to $100,000 in free money from AWS! (Most of the other big players offer similar startup programs if you've got the backing of a major fund.)
|
# ? Jul 5, 2017 03:40 |
|
I'm looking for a Docker container (or Compose setup) to integrate into my own solutions that will create an SSL termination proxy using Let's Encrypt. Nothing fancy like subdomains or anything; I'd just like it to be as idiot-proof as one can hope for. There seem to be several such projects around, with various degrees of popularity and support: https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion (the most well-documented) https://hub.docker.com/r/zerossl/client/ https://hub.docker.com/r/certbot/certbot/ If you couldn't guess, this is new territory for me. Are there any reasons why this is a bad idea, or any other critical information I should be aware of? Have any of you guys used similar solutions?
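For reference, the documented pattern for the first project is an nginx-proxy container plus the Let's Encrypt companion watching the Docker socket. A rough Compose sketch based on that pattern; the domain, email, and app image are placeholders, so treat it as a starting point rather than a known-good config:

```yaml
version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - certs:/etc/nginx/certs:rw    # companion writes the issued certs here
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html   # serves the ACME http-01 challenge files
      - /var/run/docker.sock:/var/run/docker.sock:ro
  myapp:
    image: myorg/myapp               # hypothetical app container
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=admin@example.com
volumes:
  certs:
  vhost:
  html:
```

The idiot-proof part is that adding another backend is just another service with those three environment variables; the proxy and companion pick it up from the Docker socket without any config file edits.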
|
# ? Jul 6, 2017 17:50 |
|
NihilCredo posted:I'm looking for a Docker container (or Compose) to integrate in my own solutions that will create a SSL termination proxy using Let's Encrypt. Nothing fancy like subdomains or anything, I'd just like it to be as idiot-proof as one can hope for. I think you want Caddy
|
# ? Jul 6, 2017 18:08 |
|
Vulture Culture posted:I think you want Caddy Oh drat, I read about that some months ago but had totally forgotten about it. Thanks!
|
# ? Jul 6, 2017 20:21 |
|
How straightforward is it to do the following in AWS (probably Lambda)?
1. Receive HTTP POST from $deploy_tool on success
2. Take one of the instances that was deployed to
3. Save an AMI of the server
4. Update the launch configuration of the auto scaling group to use the new AMI
Essentially a success-triggered AMI bake before we can actually use Spinnaker like real devops
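A rough sketch of what that Lambda body could look like with boto3. The event fields, the app name, and the decision to copy instance type and security groups from the old launch configuration are all assumptions, not anything $deploy_tool actually guarantees:

```python
import time

def lc_name(app, image_id):
    # Launch-configuration names are immutable in AWS, so each new
    # AMI bake needs a fresh, unique name.
    return f"{app}-{image_id}-{int(time.time())}"

def handler(event, context):
    """Hypothetical webhook handler: bake an AMI from one deployed
    instance, then point the ASG's launch configuration at it."""
    import boto3  # imported here so the helper above stays dependency-free

    instance_id = event["instance_id"]   # assumed field in the deploy POST
    asg_name = event["asg_name"]         # assumed field in the deploy POST

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # 1. Save an AMI of the freshly deployed server (NoReboot avoids
    #    downtime at the cost of a non-quiesced filesystem snapshot).
    ami = ec2.create_image(
        InstanceId=instance_id,
        Name=f"deploy-{int(time.time())}",
        NoReboot=True,
    )["ImageId"]

    # 2. Copy the old launch configuration's settings into a new one,
    #    then switch the ASG over to it.
    old = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )["AutoScalingGroups"][0]["LaunchConfigurationName"]
    lc = autoscaling.describe_launch_configurations(
        LaunchConfigurationNames=[old]
    )["LaunchConfigurations"][0]

    new_name = lc_name("myapp", ami)
    autoscaling.create_launch_configuration(
        LaunchConfigurationName=new_name,
        ImageId=ami,
        InstanceType=lc["InstanceType"],
        SecurityGroups=lc.get("SecurityGroups", []),
    )
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        LaunchConfigurationName=new_name,
    )
    return {"ami": ami, "launch_configuration": new_name}
```

One gotcha: `create_image` returns before the AMI is actually usable, so in practice you would wait on the EC2 `image_available` waiter before flipping the ASG over, or new scale-ups will fail until the image finishes.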
|
# ? Jul 7, 2017 13:36 |
|
Changing the AMI in the launch configuration of the ASG won't rotate the instances, if that's what you're trying to do automatically. You can use CloudFormation with an update policy that will roll your change through. In my experience it's usually better to treat ASGs as immutable and to launch new ASGs with new launch configurations, so it's easier to attach and detach groups of instances atomically to and from an ELB.
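For the CloudFormation route, the update policy in question looks like this; the numbers are placeholders to tune per fleet:

```yaml
# Hypothetical fragment on an AWS::AutoScaling::AutoScalingGroup resource:
# rolls replacement instances through the ASG when its launch
# configuration changes.
UpdatePolicy:
  AutoScalingRollingUpdate:
    MinInstancesInService: 2   # keep at least 2 healthy during the roll
    MaxBatchSize: 1            # replace one instance at a time
    PauseTime: PT5M            # wait up to 5 minutes between batches
```

Without this block, a stack update that changes the launch configuration only affects instances launched afterwards, which is exactly the "won't rotate the instances" behavior described above.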
|
# ? Jul 7, 2017 15:00 |
|
Rotation isn't a problem, as in the end everything will run the same version; it's purely a scaling crutch for the time being until I can get a proper process in place.

Our retarded process: Octopus does a rolling deploy to 3 servers at a time, which is slow and painful, but there are 5 independent large services per instance, all deploying at different times, which makes it hell for us to manage. When the ASG scales up it triggers Octopus to do a re-deploy, but this must wait for the instance to come online, then register with DNS, then register with Octopus, and then run the deploy steps. It's currently taking us 2 hours to scale for every 5 instances, due to bullshit reasons such as having to redeploy all 5 services with their post-deploy tests for every loving server that comes online.

If I can bake every successful deployment into an AMI with a short retention policy, that makes scaling faster at the expense of some sanity at the ASG level. This is a crutch before using Spinnaker properly to blue-green all their poo poo, because they're paranoid as gently caress when deploying even minor changes, to the point of needing a 40-minute post-deploy suite.
|
# ? Jul 7, 2017 15:47 |
|
Cancelbot posted:How straightforward is it to do the following in AWS (probably lambda): e: if you wanted to you could run Terraform from a Lambda function though Vulture Culture fucked around with this message at 16:30 on Jul 7, 2017 |
# ? Jul 7, 2017 16:14 |
|
Whoa, mind blown. We use Terraform for standing up an environment, but I never thought it could be used to build at deploy time!
|
# ? Jul 8, 2017 20:37 |
|
Dren posted:Harik said he wants to setup my build environment so that the pre-build state is cached and can spinup instantly. Hey, I realized I forgot to thank you for this. I meant to, but forums took lower priority than a new baby and new hires to train. Still working on it, but the majority of the pain is getting the custom dev environment set up for ARM cross-compiling to the target with the specific library versions that will exist there.
|
# ? Jul 15, 2017 03:54 |
|
What are y'all goons using to do Continuous Delivery in a Java-based environment? I'm moving to a new job at the end of the month, and where my current job does CI/CD with Python to deliver a SaaS solution, the new shop does end-user software with a number of products and JAR files meant to be installed and run by an end user. I'm particularly interested in how to solve providing a desktop/GUI for sales/support folk, so that they can always leverage the latest version or last few versions of the software.
|
# ? Jul 18, 2017 03:24 |
|
Gyshall posted:the new shop does end user software with a number of products and JAR files meant to be installed and run by an end user. Abandon all hope all ye who enter here. (I sure as hell hope you have some sort of auto-update/deployment mechanism, because relying on your users to do anything is a recipe for hell.)
|
# ? Jul 18, 2017 06:07 |
|
Sorry, to clarify, that is what the end product is (as in a customer would be installing it). Not going to rely on any internal user here - where I'm at now, we built a web front end that talks to the Jenkins API, and basically has a big button that says "GIVE ENVIRONMENT" and spits out a docker container running the web app and a URL to navigate to for that user who pressed the button. It works quite well for the business dev/support folks, but at Java Place (TM), I don't think I'm going to have the luxury of a web application, and I don't want to be in the business of distributing dockers to non-technical folks either. I'm interested in creating a similar pipeline but instead delivering a full X environment with the Jar already pre-installed for their use. Ideally, the X environment will be running on a cluster of some sorts, or AWS, but I'm not sure just yet. Just wondering if anyone has been able to tackle something like this without the use of Citrix, etc. https://guacamole.incubator.apache.org/ seems pretty good and close to what I'm looking for.
|
# ? Jul 18, 2017 13:37 |
|
Gyshall posted:What are y'all goons using to do Continuous Delivery in a Java based environment? Bleh. My last job had a Java-based app they were moving away from to a hosted SaaS. Everything was in Java though, built in Bamboo with Gradle/Grunt, I think. Parts of the app were eventually built in dependent plans, so if you updated one part of the app it triggered any parent plans, and the bits and pieces got stored back in Maven. The master build plan just took all that and assembled it. The server-side things got sent off to OpsWorks stacks in AWS, which worked nicely for Java apps (auto scaling based on CPU, plus extra time-based servers added during peak hours to reduce waiting for spin-up). There were also some test servers that automatically got started for nightly tests, I believe.
|
# ? Jul 18, 2017 22:15 |
|
Can anyone offer any pros/cons of GoCD compared to Jenkins currently? We're evaluating both of these. GoCD seems to have a nicer UI out of the box but Jenkins is a lot more widely used. It seems GoCD was particularly strong with value streams but Jenkins has made progress on that. I'm leaning towards Jenkins because it's free and more widely used but we use GoCD heavily already.
|
# ? Jul 27, 2017 15:36 |