|
Vulture Culture posted:First, I'm going to challenge "no syncing of environmental state data in/out". You can use Consul, etcd, any S3-compatible datastore (e.g. Ceph or OpenStack Swift), local Artifactory, or a REST endpoint to handle first-party storage of the state information. Any of these options will work fine in an airgapped configuration. If you're doing cloud in an airgapped environment, I assume you're running in an OpenStack environment, so you should be able to just use whatever you're currently running for object storage.

I also need to find a way of giving terraform derived variables for the vpcs and such to create hosts, and to figure out what makes sense in this context. I think modules are likely the answer here. Fortunately, we're really only looking at terraform to create random one-off machines for apps that don't belong in containers, and we're using chef to manage apps, so we're using terraform strictly as our instance deployment manager.

The hierarchy thing is a good thing to note, definitely. I can see how this can get complex, fast.

Bhodi fucked around with this message at 21:55 on Oct 27, 2018 |
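(For the derived-variables part, the module pattern would look something like this sketch - module path, variable, and output names are made up:)

pre:
module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

# one-off instance consuming a value derived inside the module
resource "aws_instance" "one_off" {
  ami           = var.base_ami
  instance_type = "t3.medium"
  subnet_id     = module.vpc.private_subnet_id
}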
# ¿ Oct 27, 2018 21:42 |
|
Anyone else going to re:invent? Can't wait to look at all the stuff I can't / won't use
|
# ¿ Nov 25, 2018 00:04 |
|
Ah yeah, that one's next month. Our group split, half are going to reinvent and half are going there.
|
# ¿ Nov 25, 2018 00:09 |
|
toadoftoadhall posted:When commissioning a new server, I run through a mental checklist I've cobbled together from linode and digital ocean tutorials for CentOS or Ubuntu machines. Eg:

Less pithy answer: pick your favorite STIG/hardening guide and make it into an image / kickstart / autoinstall. Basically, don't do it by hand.
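(As a sketch of the bake-it-in approach: OpenSCAP ships STIG profiles you can apply during the image build - package names and the profile id vary by distro/release, so treat these as placeholders:)

pre:
# run in your image build (kickstart %post, packer provisioner, etc.),
# not by hand on live hosts
yum -y install openscap-scanner scap-security-guide
oscap xccdf eval --remediate \
  --profile xccdf_org.ssgproject.content_profile_stig \
  /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml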
|
# ¿ Nov 26, 2018 17:12 |
|
Docjowles posted:Current re:Invent status: Waiting in an hour long line to even register for the conference. Going to miss my first session. No food or coffee because everyplace that serves those also has an hour long line.

The shuttle buses are the silliest thing to me; it takes an hour to get ferried across the street because this city is hell on earth. It's like everything is done and designed in the worst way possible, deliberately. It's 4pm and feels like 10pm. I spent all day getting pitched to, and what little I learned could have been conveyed in a 10 minute blog post. I want to pull the plug on this entire week.
|
# ¿ Nov 27, 2018 00:53 |
|
necrobobsledder posted:How do you guys do continuous prod deployments to systems that have message-queue-based communication and handle heterogeneous application component versions consuming from shared queues? In a synchronous processing world that'd be an endpoint handled by different versions of the service.

We have a web request frontend; clients upload large artifacts separately (S3, their own hosting service, etc.), reference them in their API request, and processing is picked up asynchronously via SQS queues serialized as < 1 KB XML messages across several upstream services that self-report the status of their tasks to the primary Aurora MySQL DB.

I'm trying to set up an architecture in AWS using a canary / blue-green approach with environment-specific SQS queues, load balancers, and instances, but shared data stores like S3 buckets and DBs. DB updates to apps will be done by mutating their views, not by changing the actual underlying DB structures (the latency hit isn't measurable for us in tests so far). This would allow us to make a bunch of changes in production as necessary, cherry-pick messages from queues to run through a deployment candidate's queues, and roll back changes faster than we do now (a deployment process straight out of 1995, but in AWS, and with 90% of our services unable to be shut down on demand without losing customer data, which really, really, really is a pain in the rear end).
|
# ¿ Dec 13, 2018 03:54 |
|
StabbinHobo posted:this somewhat impossible recursive chasing of a way to abstract away a state assumption is, in large part, why kafka was invented.
|
# ¿ Dec 14, 2018 14:46 |
|
it's all https://github.com/brandonhilkert/fucking_shell_scripts
|
# ¿ Dec 18, 2018 18:49 |
|
Blinkz0rz posted:Hot take: deploying kubernetes (properly) and maintaining deployment systems on top of it takes more work (and reaps fewer rewards) than a mostly-working existing system.

Of course, we're not deploying kubernetes properly; we're literally throwing together whatever we can get into production as fast as possible, because we were given 4 months to go from 0 to fully deployed, with December being one of those months. So it's a toss-up as to how well it actually performs. We already ran into this unresolved issue running on m5s: https://github.com/kubernetes/kubernetes/issues/49926 and so I look forward to stepping on other land mines.
|
# ¿ Dec 19, 2018 01:56 |
|
We're using artifactory because it allows us to host all of our other repos (rpm, nuget, gem, etc) at the same time, and because it's S3-backed it's fairly economical and we don't really have to worry about space issues.
|
# ¿ Feb 8, 2019 23:28 |
|
smackfu posted:Ironically often the reason GitHub is blocked is because people check company code into it so they can work on it at home. Now that they have free private repos maybe this will be less of a problem.
|
# ¿ Feb 10, 2019 23:36 |
|
Anyone have a good aws terraform example config for something like that? We're bringing up a new vpc from scratch and want to switch to autogenerating the subnets, sgs, iam roles and such, and I don't really want to fall into any obvious traps. Any good whitepapers or blogs on this? Like, some vpcs are nearly permanent, so you might want a different state store for them than for your apps, just to prevent accidents? Stuff like that.
|
# ¿ Feb 21, 2019 14:27 |
|
That terraform blog post was amazing, I need more words like that.
|
# ¿ Feb 27, 2019 02:59 |
|
The first two bullet points work fine with github; you're describing tags, and your test env is labeling all your commits to your feature branches with whether they passed your tests. You can absolutely restrict pull requests to only tags.

Depending on the frequency of commits / size of your dev team, you may not need the two-tiered approach you laid out, and if you do, it's more commonly implemented as unit testing feature branches (your Test), then merging into dev if passing, and then periodically tagging dev branch commits for integration testing (this would be manual in your case; sometimes it's weekly or daily or whatever) as a prerequisite for merging a release into master. If it ends up failing, you just do an additional commit into dev from your feature branch and kick the test off again - it's not really necessary to track it back to the commit of the feature branch like you're suggesting.

The benefit of doing it this way is that you can test multiple feature commits at the same time on a periodic basis, it conveniently follows common business requirements like sprints and quarterly releases, and if you have REALLY long tests you can tune the auto testing to fit them instead of having them queue behind each other as devs frantically try to get their features in at 3pm on a Friday before the end of the sprint.

Bhodi fucked around with this message at 03:23 on Mar 13, 2019 |
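(The tagging step itself is tiny - in shell run by the CI job, something like this sketch, with the tag naming scheme made up:)

pre:
# CI labels the exact commit that passed the integration suite
tag="ci-pass/$(git rev-parse --short HEAD)"
git tag -a "$tag" -m "integration suite passed"
git push origin "$tag"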
# ¿ Mar 13, 2019 03:08 |
|
FISHMANPET posted:We're way more on the ops side than dev side, so we basically have zero formal software development process requirements. And generally the changes we're working on are small enough that only one person is working on them. We don't do "releases" we just push code when we write it. And we've never used tags (should we be?).

Tags are good, but only if you care about seeing whether a specific commit passed tests at a glance. Making releases in github does functionally the same thing.

For my own stuff (I may have asked this before): anyone have good terraform whitepapers on infra design for multiple environments and app deployment with a CI/CD pipeline? We're building out a new env from scratch, and it's been decreed that we're not going to be using ansible and jenkins will be the orchestrator, so I need to figure out a way to wedge absolutely everything I can into a terraform git repo, including application configuration. Does it even have a templating feature? I'll probably be leveraging our existing chef infra for the hard stuff but woof, it's going to suck to split code like that.

Bhodi fucked around with this message at 22:22 on Apr 21, 2019 |
# ¿ Apr 21, 2019 22:18 |
|
chutwig posted:I would recommend spending some time getting to know Packer so that you can build AMIs. There's a temptation to use Chef to both lay down your baseline and then configure instance-specific settings, which is problematic because it takes more time to run and you need to add in a reboot for security patches/kernel updates to take effect, which is annoying to deal with. Moving the baseline setup into Packer means you spend less time converging, don't need to reboot, minimize the complexity of the cookbooks, and have a known good AMI built on a schedule that you can use across the rest of your infrastructure. Bhodi fucked around with this message at 20:43 on Apr 22, 2019 |
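(The bake flow chutwig describes comes out roughly like this in Packer's newer HCL template form - region, source-AMI filter, and the single patch step are placeholder assumptions, not a tested build:)

pre:
locals {
  ts = formatdate("YYYYMMDDhhmm", timestamp())
}

source "amazon-ebs" "baseline" {
  region        = "us-east-1"
  instance_type = "t3.small"
  ssh_username  = "ec2-user"
  ami_name      = "baseline-${local.ts}"

  source_ami_filter {
    filters     = { name = "amzn2-ami-hvm-*-x86_64-gp2" }
    owners      = ["amazon"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.baseline"]

  # patching/hardening happens at bake time, so instances launch current
  # and never need an in-place kernel-update reboot
  provisioner "shell" {
    inline = ["sudo yum -y update"]
  }
}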
# ¿ Apr 22, 2019 20:39 |
|
Umbreon posted:If anyone here has some spare time to answer:
|
# ¿ May 5, 2019 14:14 |
|
I told the interns this year that my job is "computer janitor". I helpfully explained that I "Janitor the computers, you know, tidy up the cloud"
|
# ¿ Jul 4, 2019 16:00 |
|
I'm getting really fed up with declarative poo poo for systems management and just want to go back to procedural. Things really do run in cycles, don't they? We're back to fancy shell scripts.
|
# ¿ Aug 22, 2019 03:13 |
|
Necronomicon posted:Can anybody provide some conventional wisdom re: Terraform backends in AWS? Specifically regarding things like multiple managed environments. Should each deployment have its own specific S3 bucket and DynamoDB table? For instance, I currently have four deployments - Company A Staging, Company A Production, Company B Staging, and Company B Production. Is there a clever way of keeping all of those state and lock files in the same location to keep things nice and clean, or is it better for them each to have their own isolated environment?

You can keep them all in one bucket: leave the backend block empty in code and feed the per-environment values in when you run terraform init.

backend.config posted:
bucket = "my-s3-bucket-name"

backend.tf posted:
terraform {
  backend "s3" {}
}

In fact, I advocate for multiple state files within the env. Like, if you are using terraform to deploy EVERYTHING, I highly HIGHLY suggest you isolate your vpc, security group, and IAM stuff from ec2 instance deployment and management. Ignore this at your own peril.
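(Each environment then gets its own init invocation - the key override is the per-deployment part; the key layout here is just an example:)

pre:
terraform init \
  -backend-config=backend.config \
  -backend-config="key=company-a-staging/terraform.tfstate"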
|
# ¿ Sep 23, 2019 20:19 |
|
Necronomicon posted:...but Terraform yelled at me, since apparently you can't use variables or expressions within a backend config. So I'm stuck hard-coding (at the very least) the key for every single deployed environment, which annoys the hell out of me.
|
# ¿ Sep 23, 2019 20:21 |
|
The problem with that sentence is that "should" has to be bolded, underlined, and in 24 point font
|
# ¿ Nov 14, 2019 20:56 |
|
Vulture Culture posted:One extremely low-lift way to resolve this is to use remote state with S3 and object versioning on the bucket.
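(For reference, flipping versioning on for an existing state bucket is a single call - bucket name is a placeholder:)

pre:
aws s3api put-bucket-versioning \
  --bucket my-terraform-state \
  --versioning-configuration Status=Enabled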
|
# ¿ Nov 15, 2019 01:33 |
|
flux good tho?
|
# ¿ Dec 11, 2019 06:31 |
|
Whereas I think cfengine (of which he was the author), chef, and the application of promise theory onto individual servers in general are fundamentally flawed and outdated ways of managing systems. But I'm not going to write a 500 page treatise about it.
|
# ¿ Dec 14, 2019 19:20 |
|
Zorak of Michigan posted:Re container chat, my org is still in its infancy in containerizing workloads. I've been advocating Kubernetes because, when I tinkered with Swarm, I couldn't imagine it scaling up to the number of different teams I would hope would eventually be using our container environment. Is there something easier to live with for an on-prem deployment than Kubernetes that can still support multiple siloed teams deploying to it?

IMO docker-compose is good enough for the majority of stuff that doesn't aggressively autoscale. The last few pages talk a bit about this.

Bhodi fucked around with this message at 02:28 on Dec 18, 2019 |
# ¿ Dec 18, 2019 02:25 |
|
12 rats tied together posted:There will always be tools to glue together and you'll always have to glue them together with a mixture of automation and human process. With this in mind I judge a tool mostly by its ability to accomplish what I need it to, and for it to play nicely with other tools and arbitrary code. There's a sweet spot in there that obviously hugely varies based on your org, but in mine Terraform has a long way to go before it is "better enough" than a locally optimal piece of tech (ARM, Cloudformation, ROS, etc) to justify using, just as an example.

We have a gently caress terraform "FTF" box on the whiteboard, and everyone adds a tally mark when they discover something dumb, like the fact that you can't use an index on modules, along with the auto-closed or stale-as-hell git issues full of people begging for support.
|
# ¿ Feb 10, 2020 22:41 |
|
I think fargate is what they're trying to sell in that niche, but I've never used it. If you are trying to go to docker images, the gold standard is GKE and everyone else has a long way to go to catch up. EKS is a pretty lovely offering because you still have to manage all the hard poo poo yourself, and they make you pay a premium for it on top of the hassle. It's kind of like Directory Service, which is another sub-par offering that is almost worse than just running the poo poo on your own.

Bhodi fucked around with this message at 23:19 on Feb 12, 2020 |
# ¿ Feb 12, 2020 17:43 |
|
Remember: AWS hates you.
|
# ¿ Feb 13, 2020 16:47 |
|
CyberPingu posted:Yeah I went down the output and dependency route. Cheers.

Here's the git issue - feel free to throw your pleas on top of the multi-year thread with hundreds of posts: https://github.com/hashicorp/terraform/issues/17101

I've got a dozen git issues right behind that one that we've run into and documented, and last time we had hashicorp on the phone for renewal talks we pinned them to the wall about their lack of responsiveness to these kinds of multi-year, architecture-destroying issues. I could write hundreds of words on the problems we've discovered and our lovely band-aid workarounds as we try to move to code-driven deployment. I almost want to type it up and publish it in blog format, to at least make people aware of the absolutely massive list of gotchas terraform has - things we wish we knew a year ago when we decided to use the new hotness (pronounced: hot mess).

The tl;dr is modules suck, fail at encapsulation, and only have partial functionality - critical things like looping and dependencies are missing or only partially implemented.

Bhodi fucked around with this message at 17:44 on Feb 27, 2020 |
# ¿ Feb 27, 2020 16:55 |
|
Blinkz0rz posted:What's the current hotness for managing Jenkins job definitions in code? Is it still pipelines with Jenkinsfiles in the project repo or something else?

Blue Ocean has made pretty large strides in the ease of setting up new Jenkinsfile-style jobs; a new repo is pretty much 4 clicks of the next button, which was a perfect low-effort solution for our end users' needs.
|
# ¿ Mar 2, 2020 17:11 |
|
necrobobsledder posted:Shared libraries come with their own baggage of fun, and while Jenkins is JVM based it’s never been easy for me to write pipelines or jobs in anything except Groovy. Like, if I want to try Kotlin for our jobs, that’s not going to be convenient as I re-run jobs repeatedly to figure out another class loader problem when using anything other than plain old Java and Jenkins classes. It’s user hostile while being “user friendly” in its own universe of misery.

The jenkinsfile itself is similar to the sketch below, with a bunch of predefined steps which lint branches, deploy and test a dev instance/container on PRs, and tag/upload on the master branch. It's just 200 lines of generic code that pulls in config files/secrets from vault and runs shell commands. You'd be insane to try and develop your actual tests in groovy. You can use conditional build steps to skip specific stages, but in the end jenkins is best used for just some really simple flow-control goo that's executed through hooks from your source control, like line 80 in this: https://github.com/jenkinsci/pipeline-examples/blob/master/declarative-examples/jenkinsfile-examples/mavenDocker.groovy

We use a jenkins docker image and launch all our jobs as docker containers. For our tests, all jenkins does is execute test/*.sh, and if anything returns non-zero the job fails. The scripts in the test directory can execute complex rspec code or sometimes something as basic as a curl test against a built container. This decoupling of tests has the advantage of being tool-agnostic and allows you to test outside of the jenkins pipeline with just a local docker daemon and a local checkout of the repo. All of this could easily be ported to concourse or whatever CI hotness you choose.

It CAN be very complex but it really shouldn't be.

Bhodi fucked around with this message at 05:08 on Mar 3, 2020 |
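(A minimal declarative sketch of that shape - stage names, the build image, and the make targets are stand-ins, not our actual file:)

pre:
pipeline {
    agent { docker { image 'internal/build-tools:latest' } }
    stages {
        stage('Lint') {
            steps { sh 'make lint' }
        }
        stage('Test') {
            // every script under test/ must exit zero or the job fails
            steps { sh 'set -e; for t in test/*.sh; do "$t"; done' }
        }
        stage('Tag and upload') {
            when { branch 'master' }
            steps { sh 'make tag upload' }
        }
    }
}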
# ¿ Mar 3, 2020 04:25 |
|
The Fool posted:Also known as “the real world” for a ton of enterprises

Vulture Culture posted:Yeah, I've never been able to run a Helm chart in production without modifying something about it

Flux seems to work better, at least for our needs.

Bhodi fucked around with this message at 22:10 on Apr 17, 2020 |
# ¿ Apr 17, 2020 22:06 |
|
Methanar posted:I hate these things https://github.com/terraform-aws-modules/terraform-aws-security-group
|
# ¿ Apr 20, 2020 04:19 |
|
12 rats tied together posted:I agree the various "with_", "loop_control", etc, features in ansible have all been really bad. J2's "{% for %}" though is totally fine and coincidentally it also hasn't changed since 2011 or whatever.

I'm starting in on ansible hard for the first time, and yeah, maybe I should just make a custom filter, I guess? It shouldn't be this hard to parse this and make the index names 7.1_repo1, 7.1_repo2, and 7.2_repo3:

pre:
repos:
  rhel7:
    "7.1":
      - name: repo1
        baseurl: http://whatever1
      - name: repo2
        baseurl: http://whatever2
    "7.2":
      - name: repo3
        baseurl: http://whatever3
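(One non-plugin way to do it, as a sketch using the built-in dict2items and subelements filters; the fact name is made up:)

pre:
# pairs each version key with each repo entry under it,
# then accumulates "<version>_<name>" strings
- name: build repo index names
  ansible.builtin.set_fact:
    repo_names: "{{ repo_names | default([]) + [item.0.key ~ '_' ~ item.1.name] }}"
  loop: "{{ repos.rhel7 | dict2items | subelements('value') }}"

# repo_names -> ["7.1_repo1", "7.1_repo2", "7.2_repo3"]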
|
# ¿ Apr 24, 2020 01:00 |
|
12 rats tied together posted:I spent some time with this and yeah, this definitely sucks/is a good time to use a filter plugin. I would suggest that, in general, a yaml file (interacted w/ via single templated parameter value in ansible-playbook) is not the place for this data. If you asked me to do it from scratch:

Followup question for ansible people: what's the best-practice way of using a local boto .aws/config profile to run aws commands on remote hosts? I'd think there would be a blessed way of doing this, but the docs seem to imply copying your entire .aws/ directory over temporarily, or exporting a mess of environment variables (which you'd need to parse from the local profile first). Has anyone developed a task block or plugin to streamline this, or do I get to reinvent the wheel?
|
# ¿ Apr 28, 2020 15:28 |
|
In this particular case, I needed to proxy my own permissions to copy some stuff to an S3 bucket the instance normally doesn't have access to. I could pass them through environment variables, but to use the profile I'd need to first parse my local boto profile config through a localhost connection and register a variable, and I was just kinda hoping there was something built-in to do it all for me and make them ephemeral.

Most of the aws-related tasks support profiles now; it's just that those have to be local to the instance they're being run on. The need was definitely an edge case, since most everything can be done through local commands, as you noted. In the end I got lazy and gave the machine role additional creds for a few hours.
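(The env-var route ends up as a sketch like this - profile name, bucket, and paths are made up; lookups run on the control node, so nothing from ~/.aws gets copied to the host:)

pre:
- name: copy artifact to the restricted bucket using my local profile's creds
  ansible.builtin.command: aws s3 cp /tmp/artifact s3://example-restricted-bucket/
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('ini', 'aws_access_key_id section=myprofile file=~/.aws/credentials') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('ini', 'aws_secret_access_key section=myprofile file=~/.aws/credentials') }}"
  no_log: true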
|
# ¿ Apr 29, 2020 04:52 |
|
Vulture Culture posted:Is it difficult to generate this via Jinja templates rather than doing all the weirdo machinations in Ansible YAML?
|
# ¿ May 9, 2020 16:16 |
|
IMO, I heavily, heavily recommend and prefer that secrets (and environment-specific properties when possible) get pulled in as environment variables rather than via a command line -e other_vars flag or an ansible plugin. It's more secure, extensible, and portable. It's straightforward and works with pretty much everything. It allows you to swap your hashicorp secrets solution for an LDAP one, it allows you to stick stuff in a docker image or kubernetes and have it "just work(tm)", it allows you to easily iterate and test locally without relying on outside servers, and it allows you to override those secrets locally on a given run when needed. It also lets you hook into not-ansible things like automated rspec/junit tests using the same method. It's just a better, more flexible approach than locking yourself into only-ansible with a community-supported plugin. There are definitely use cases for ansible plugins in general; it's just not my preferred method in this use case.

HOWEVER, all that said, putting secrets in -e flags is wildly insecure for a variety of reasons, so Do Not Do That regardless of what solution you finally decide on.

Bhodi fucked around with this message at 19:10 on Jan 20, 2021 |
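(The whole pattern is about this small a sketch - vault path and variable name are examples; the playbook reads it with lookup('env', 'DB_PASSWORD'), and swapping the export source for LDAP or anything else changes nothing downstream:)

pre:
# pull the secret from wherever, expose it as an env var, run the playbook
export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"
ansible-playbook site.yml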
# ¿ Jan 20, 2021 18:54 |
|
Using ansible as your secrets entrypoint definitely works, as long as you buy into ansible as the wrapper for anything that conceivably needs to access those secrets. That hasn't been a good fit for me in the past, for example when you need secrets to access or modify things ansible isn't a good fit for - network hardware, AWS services, basically anything that isn't at the OS or application level. Just as an example, if your CI/CD testing wants to stand up an entire stack including VPC and EBS teardown, you're probably going to be running terraform or cloudformation. If you go that route, you've also got to manage accessing the same secrets in multiple different ways, or wrap the whole thing in ansible - you may find that to be one Matryoshka too deep. It's better to have some sort of smaller wrapper to manage secrets outside ansible, something that's straightforward, relatively secure, and broadly supported by literally every CI/CD tool - environment variables. Yes, it's definitely an extra step and probably not as clean.

For something much more contained, such as a single-repository application without any external dependencies and with a straightforward compile/test/deploy, ansible works great. It starts to work less great when ansible is only a small tool in your overall CI/CD box rather than your entrypoint, and your task is to try and keep them all in sync within the same pipeline / process.

I completely agree that if you're baking in the secrets, you probably don't need environment variables; you're replacing them with cloud-init metadata or some file on the system that's external to ansible in a similar way. From the ansible side you plug both in very similarly. Maybe a fact would be the right approach instead? I'd need to think on it and dig into details.

Bhodi fucked around with this message at 19:51 on Jan 20, 2021 |
# ¿ Jan 20, 2021 19:44 |