Mr Shiny Pants
Nov 12, 2012
So I started working with Terraform some more and can I just say that it is loving awesome? I forgot how jaded I've become regarding software in general, but this just made me smile.

And it just loving worked, no weird error messages, no configuration errors, no nothing. Add provider, put in the API key and you're off.
Pretty awesome for a change, especially seeing the stuff it does (let's not get carried away, it just calls APIs) but still, I was pleasantly surprised.

So I've been reading some more about it and I've come to the following setup:
I read that keeping all your state in one file leads to long planning phases so I've decided to separate my infrastructure by update frequency.
Resource groups, subnets and all the other stuff in one, instances and the like in the other. I might even put the network stuff in a separate file.
One thing though, I am already using a "globals" module and using the state of the "top" infrastructure file as a data source. It seems there is just no way around that?

Is this smart?

Methanar
Sep 26, 2013

by the sex ghost

Mr Shiny Pants posted:

So I started working with Terraform some more and can I just say that it is loving awesome? I forgot how jaded I've become regarding software in general, but this just made me smile.

And it just loving worked, no weird error messages, no configuration errors, no nothing. Add provider, put in the API key and you're off.
Pretty awesome for a change, especially seeing the stuff it does (let's not get carried away, it just calls APIs) but still, I was pleasantly surprised.

So I've been reading some more about it and I've come to the following setup:
I read that keeping all your state in one file leads to long planning phases so I've decided to separate my infrastructure by update frequency.
Resource groups, subnets and all the other stuff in one, instances and the like in the other. I might even put the network stuff in a separate file.
One thing though, I am already using a "globals" module and using the state of the "top" infrastructure file as a data source. It seems there is just no way around that?

Is this smart?

Separate things by area of concern. Shared infra like vpc subnet network stuff should be independent.

Then do one terraform project per stack of whatever you're doing.

Tag management is another perfect example of something that is shared and should be broken out into a module and attached to other terraform projects. We have an org-wide set of tf modules with all the environments defined in a big struct full of key/values: which cost accounting tags to use, what the chef server urls are, what the aws region is, that sort of thing. Just one place to update and then everything else inherits.
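Roughly, that kind of shared module can look like this (the names, regions, and URLs below are made up for illustration, not our actual values):
code:
# modules/globals/main.tf -- hypothetical org-wide settings module
variable "environment" {
  type = string
}

locals {
  environments = {
    prod = {
      aws_region      = "us-east-1"
      chef_server_url = "https://chef.prod.example.com"
      cost_center_tag = "cc-1234"
    }
    staging = {
      aws_region      = "us-east-1"
      chef_server_url = "https://chef.staging.example.com"
      cost_center_tag = "cc-5678"
    }
  }
}

output "settings" {
  value = local.environments[var.environment]
}

# Consuming project:
#   module "globals" {
#     source      = "git::https://example.com/org/tf-modules.git//globals"
#     environment = "prod"
#   }
#   ...then reference module.globals.settings.aws_region and friends.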

Mr Shiny Pants
Nov 12, 2012

Methanar posted:

Separate things by area of concern. Shared infra like vpc subnet network stuff should be independent.

Then do one terraform project per stack of whatever you're doing.

Tag management is another perfect example of something that is shared and should be broken out into a module and attached to other terraform projects. We have an org-wide set of tf modules with all the environments defined in a big struct full of key/values: which cost accounting tags to use, what the chef server urls are, what the aws region is, that sort of thing. Just one place to update and then everything else inherits.

By concern is a better description. :)
I have one "global" module that has the region, and all the "environment" variables needed for this specific provider.
I don't really want to over engineer it, we don't use stacks as it were, but I might separate a VMware cluster from the other instances.

I do get a vibe of "over-engineering" Terraform in a lot of the stuff I read.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
We do per solution/implementation... So we have distinct terraform implementations (and by association state files) for things like AWS vpcs, Google host projects, generic k8s clusters, and prescriptive k8s clusters, etc.

These solutions usually consume generic modules. Some of our bigger solutions like k8s take 45 minutes to apply end to end, but we test everything nightly via fixtures, and our dev environments do a clean destroy/apply every night to test reproducibility.

Thanks Mitch.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Almost all software patterns are easy to manage and work fine at small scale. The question is what the sharp edges are and how things can pathologically deteriorate over time. A tiny old PHP application is fine and easy to secure, but in practice the super low barrier to entry made it easy to churn out unmaintainable, insecure slop we’re still trying to rein in decades later.

That’s where all the design patterns and so forth come in. It’s all over-engineered until it isn’t. While the best scenario is to make it easy to iterate and throw stuff away easily, we can’t simply throw entire products and services away on a whim as professionals (insert Google kills products joke here). I can’t simply delete entire VPCs that suck in production without migrating data over elsewhere. This process can take years at the speed of business rather than engineering.

But one thing about Terraform or CloudFormation or whatever else is consistent - it will not fix your organization being Bad at Sysadmin in any way. If monitoring, auditing, ssh key rotation, user management, and secrets management are impediments for your organization, Terraform is potentially just another tool that magnifies the existing gaps and winds up being more technology than your organization can manage effectively. But sometimes a Terraform (or any other IaC) effort helps expose this stuff and gives political ammo to fix the underlying problems swept under many rugs.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
What's the right way to restrict traffic to docker containers so that only certain IP addresses are allowed? Googling around shows various solutions but I'm not sure what may be outdated. I'm using iptables on Debian.

edit: Found https://docs.docker.com/network/iptables/#restrict-connections-to-the-docker-host, which seems pretty straightforward
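For reference, the approach on that page boils down to one rule in the DOCKER-USER chain, which Docker consults before its own forwarding rules. A rough sketch (interface name and allowed address are placeholders):
code:
# Drop anything arriving on the external interface that isn't from the
# allowed source; everything else falls through to Docker's own rules.
iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.10 -j DROP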

fletcher fucked around with this message at 07:58 on Feb 7, 2021

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Gyshall posted:

Imagine wanting to use interpolation in providers/versions.

At some point you have to stop abstraction. Terraform began life as a declarative provisioning tool; it's miles ahead of where it was and is still loads better and more accessible. I'm of the opinion that it doesn't need try/catch stuff, and if you need that or backend interpolation then just write a thin wrapper for it or use something like terragrunt.
try/catch is the thing that enables loose coupling between projects (you can actually use service discovery to drive TF decision-making!) or prevents you from having wacky poo poo like flags that you turn on once for bootstrap and leave off. You definitely shouldn't be trying to design for it in the general case, but it absolutely has use cases that are real and would be implemented in far worse, jankier, and more brittle ways with a wrapper.

You don't have to have actual try/catch to do it, it would have been manageable with a different error handling model like how Ansible treats lookup plugins, for example. But you do need to handle the case of "this data doesn't exist" for a lot of very reasonable cases.
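For context, Terraform's try() (added somewhere in the 0.12/0.13 era, IIRC) is the closest built-in to that "this data doesn't exist" handling today. A minimal sketch, with made-up bucket/key names:
code:
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  # If the network state hasn't published this output yet (e.g. during
  # bootstrap), fall back to a default instead of failing the plan.
  private_subnet_ids = try(data.terraform_remote_state.network.outputs.private_subnet_ids, [])
}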

Vulture Culture fucked around with this message at 15:10 on Feb 7, 2021

Pile Of Garbage
May 28, 2007



try/catch is not a substitute for real service discovery. TF's implementation of try/catch sounds like one giant anti-pattern.

12 rats tied together
Sep 7, 2006

Vulture Culture posted:

try/catch is the thing that enables loose coupling between projects

Yeah, don't get me wrong, I read the documentation for try/catch and immediately started laughing because "returns the first item that does not error" sounds hilarious, but if I was stuck using Terraform at work for some reason it would be a lifesaver. I'd be able to make the tool completely usable with just some light templating, making it at least feature-comparable with CloudFormation.

Mr Shiny Pants posted:

By concern is a better description. :)
I have one "global" module that has the region, and all the "environment" variables needed for this specific provider.
I don't really want to over engineer it, we don't use stacks as it were, but I might separate a VMware cluster from the other instances.

I do get a vibe of "over-engineering" Terraform in a lot of the stuff I read.
To attempt to explicitly describe "by concern": when we build complicated systems, as much as possible we want to build decoupled parts that can be implemented and understood separately.

Your "globals" module is a common emergent pattern in Terraform, I'm not sure if it's an explicitly recommended best practice but it works fine enough. The alternative pattern would be to move the data out of your globals module and into a bare variables file which you keep in a special location, and then to use the --vars-file flag to include them with your plans and applies. There are pros and cons to both approaches but starting with the module is best, IMHO.

12 rats tied together fucked around with this message at 19:15 on Feb 7, 2021

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Man you have a lot of opinions about how to use a tool you admit that you've only used minimally

12 rats tied together
Sep 7, 2006

I've been using terraform since ~0.4.2 or something? The first thing I ever did with it at work was push a recompiled version of it to an internal artifact repository because the official version didn't support IAM roles as an access mechanism yet. I've suffered through like 2-3 years of using it in production across 3 jobs by now.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Nm then, I thought I'd read a post from you where you said you hadn't used terraform in years.

12 rats tied together
Sep 7, 2006

That's still correct, it blew up for me in a bad way at $last-job and I was given permission to "just make things work" for a particular product initiative, so I did, and we switched from it to ansible->cloudformation. Later that quarter it blew up for a bunch of other teams too and I moved a lot of it into ansible->terraform which helped fix a lot of the issues they were having. I have a post in this thread around that time where I went into what it looks like and why I think it's a good idea. This was like mid 2019 or something and I haven't touched Terraform since.

I spent a lot of time using and especially a lot of time getting mad about it before that though, which is where all of my opinions come from.

The Fool
Oct 16, 2003


I’m sure this is fine

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
stupid question but documentation isn't leading me anywhere

My work runs Ansible Vault. I have a JSON file containing SHA256'd password hashes that I need to copy to a server. I don't want to commit that file to our VCS unencrypted. It will live on the server in the clear before being cleaned up by my startup script.

The smart option here is to convert my static json file to a jinja2 template, encrypt the password hashes and store them as variables, reference them in the .j2 and construct the json on the fly.
e.g.
code:
- name: Construct Json on server
  template:
    src: data.json.j2
    dest: /path/to/file.json
I don't really want to do that, because there's 25 of them and it's a bit of a pain.

What I would like to do is encrypt the whole file and decrypt it on the fly as I copy it over. Something like:

code:
- name: Copy json to server
  copy:
    src: /path/to/encrypted/file.json
    dest: /path/to/decrypted/file.json
I'm not entirely sure this is possible at all, judging from the digging I've done so far, probably because it's a stupid-rear end idea. but hey here we are.

obviously it's not great for even the hashes to live on the server in the clear, access is limited, we all make compromises, and this application is dumb.


vvv: welp, that's what I get for not reading the docs carefully first

The Iron Rose fucked around with this message at 21:20 on Feb 11, 2021

12 rats tied together
Sep 7, 2006

Ansible's copy module will accept a decrypt parameter that controls whether or not ansible will automatically decrypt a vaulted file when it is copied to a remote server.

You might need a newer version of ansible if these params aren't supported in yours?

edit: formatting
edit 2: You might also consider, instead of storing the hashes, storing the plaintext passwords in vault. You can copy them as hashes to the server by creating a template, referencing the post-decryption vaulted variable, and throwing it through one of the various password hashing filters.

edit 3: ^^^ no worries :) the ansible docs are laid out really poorly for concerns that cut across multiple domains (modules plus vault, templates plus filters, etc.). This sort of stuff is also very hard to google!
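To make that concrete, a rough sketch of both options (paths and variable names here are placeholders, not your actual files; the exact hashing filter depends on what encoding the application expects):
code:
# Option 1: copy a vault-encrypted file and let Ansible decrypt it in transit
- name: Copy json to server
  copy:
    src: files/app_users.json    # encrypted with `ansible-vault encrypt`
    dest: /etc/app/users.json
    decrypt: yes
    mode: "0600"

# Option 2: keep plaintext passwords vaulted as variables and hash them at
# template time, e.g. in the .j2:
#   "password_hash": "{{ vaulted_service_password | hash('sha256') }}"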

12 rats tied together fucked around with this message at 21:22 on Feb 11, 2021

Scikar
Nov 20, 2005

5? Seriously?

12 rats tied together posted:

Using variables in a terraform's version or source field is dynamic only in the sense that not manually updating module version across 300 module invocations every time you update a module is "dynamic". Having a "default module version" and a "not default module version" is extremely a best practice, even ignoring writing software in general we've been doing this in config files for decades. Same with provider, the PR to update the AWS provider across your terraform repository should be a one liner.

Do you have an example of where you've run into this? The versioning stuff changed in 0.12 and 0.13, and while it still has its own problems, I'm not sure this is one. The advice for module versions now is:

quote:

If you are writing a shared Terraform module, constrain only the minimum required provider version using a >= constraint. This should specify the minimum version containing the features your module relies on, and thus allow a user of your module to potentially select a newer provider version if other features are needed by other parts of their overall configuration.
So their example is:
code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}
The one place you are supposed to be more specific is in the root module, so yeah that should be a one liner. 0.14 has lock files now on top of this so even if your version constraints allow newer, if your last apply was successful and your configuration hasn't changed, a new plan should use the same versions of everything. I haven't moved anything to 0.14 yet though so I dunno how much that helps.

12 rats tied together posted:

Your "globals" module is a common emergent pattern in Terraform, I'm not sure if it's an explicitly recommended best practice but it works fine enough.
It is.

12 rats tied together posted:

I did this because the official terraform example for "a vpc" is a literal nightmare

I think this is probably the root of the "how to terraform" problem, the only examples from Hashicorp themselves are kept really simple, and as soon as you try to do something non-trivial you get problems. You then google those problems and you get 8 wildly different solutions with various explanations along the lines of "this broke everything for me when I imported some ancient stuff across that was originally written in 0.11, but here's a hack that made it work eventually". The only common reference people seem to agree on is the aforementioned Babenko's modules, but they are the opposite of the Hashicorp examples - a fuckton of boolean vars to conditionally create everything under the sun, and they pretty much all violate this part of the advice:

https://www.terraform.io/docs/language/modules/develop/index.html#when-to-write-a-module posted:

We do not recommend writing modules that are just thin wrappers around single other resource types. If you have trouble finding a name for your module that isn't the same as the main resource type inside it, that may be a sign that your module is not creating any new abstraction and so the module is adding unnecessary complexity. Just use the resource type directly in the calling module instead.

The person who introduced me to TF used Babenko's modules a lot and suggested starting with them whenever I had questions about how to use a resource. They are useful for the latter, but the majority of the time I would have one, maybe two ways I needed to use something, so all of the overlapping conditional stuff is just unnecessary. For example, every S3 bucket we use fits one of three cases, so we can be opinionated: there's one S3 bucket submodule for each of those cases and that's it. They each have at most one conditional, only because the S3 bucket stuff is kinda weird in how inventories and bucket policies are separate resources (so you can define those elsewhere and attach them), but lifecycle rules are inherent to the bucket itself.
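As a rough illustration of that shape (the resource layout below is generic, not my actual module), one of those opinionated bucket submodules with its single conditional might look like:
code:
variable "name" {
  type = string
}

variable "policy" {
  type    = string
  default = ""
}

resource "aws_s3_bucket" "this" {
  bucket = var.name

  # Lifecycle rules live on the bucket itself, so they're baked in here.
  lifecycle_rule {
    enabled = true
    expiration {
      days = 90
    }
  }
}

# The one conditional: only attach a policy if the caller supplied one.
resource "aws_s3_bucket_policy" "this" {
  count  = var.policy == "" ? 0 : 1
  bucket = aws_s3_bucket.this.id
  policy = var.policy
}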

In case you couldn't tell, yes I have spent most of the last 6 months gradually reworking a bunch of flat TF configs originally written in 0.11, into what I hope is a vague approximation of module best practice. It's a big improvement for now but I won't know if I actually succeeded until I have to make a substantial change, and it's either a couple of lines or a total rewrite I guess.

Scikar fucked around with this message at 05:49 on Feb 13, 2021

Methanar
Sep 26, 2013

by the sex ghost
spinnaker sux

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Methanar posted:

spinnaker sux

+1

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Methanar posted:

spinnaker sux

It's overengineered nonsense but it shouldn't surprise you given that it's Asgard's successor.

12 rats tied together
Sep 7, 2006

Scikar posted:

Do you have an example of where you've run into this? The versioning stuff changed in 0.12 and 0.13, and while it still has its own problems, I'm not sure this is one. The advice for module versions now is:

This is still not good, take a look at TheFool's image from a little bit upthread, do you think it makes a significant difference if all 21 (?) instances of the same exact config had a ">=" in front of them? I suppose that this makes the extremely common scenario of "forward-compatible change introduced to module" only require a stupid amount of plan -> apply loops, and not also require a stupid amount of manual version bumps. What about a change that isn't immediately runnable on every consumer, though? This advice essentially reduces to "always run latest, don't introduce breaking changes" which is not a useful recommendation. I would rather just literally configure "latest" and just roll with it, since any scenario where my terraform plan apply loop actually catches an error because I've said "version 0.4 or later" is unthinkable: why would I even pull version 0.4 in the first place? Am I using an artifact cache that was frozen in 2015?

It's also not super clear to me how I would even benefit from this if I'm using git to host my modules, since I don't get a "version" parameter to put comparison operators in -- I have to use the ?ref parameter inside a string that cannot be interpolated, maybe this was fixed in 0.13 too though?
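For reference, this is the shape in question (repo URL and tag are placeholders) -- the ?ref has to live inside a literal source string, which is exactly the limitation being complained about:
code:
module "networking" {
  source = "git::https://example.com/org/terraform-modules.git//networking?ref=v1.4.0"

  vpc_cidr = "10.0.0.0/16"
}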

Methanar posted:

spinnaker sux
Yes.

Methanar
Sep 26, 2013

by the sex ghost
I've spent 13 hours over this weekend migrating spinnaker around from halyard to kleat-based configuration management, migrating it across k8s clusters, and tracking down all the lovely little details of spinnaker itself and our implementation of it trying to get everything working again.

long weekend? more like short weekend.

It's unreal how over engineered this thing is.

Scikar
Nov 20, 2005

5? Seriously?

12 rats tied together posted:

This is still not good, take a look at TheFool's image from a little bit upthread, do you think it makes a significant difference if all 21 (?) instances of the same exact config had a ">=" in front of them? I suppose that this makes the extremely common scenario of "forward-compatible change introduced to module" only require a stupid amount of plan -> apply loops, and not also require a stupid amount of manual version bumps. What about a change that isn't immediately runnable on every consumer, though? This advice essentially reduces to "always run latest, don't introduce breaking changes" which is not a useful recommendation. I would rather just literally configure "latest" and just roll with it, since any scenario where my terraform plan apply loop actually catches an error because I've said "version 0.4 or later" is unthinkable: why would I even pull version 0.4 in the first place? Am I using an artifact cache that was frozen in 2015?

It's also not super clear to me how I would even benefit from this if I'm using git to host my modules, since I don't get a "version" parameter to put comparison operators in -- I have to use the ?ref parameter inside a string that cannot be interpolated, maybe this was fixed in 0.13 too though?

Ehh, that picture is (I think) off a single plan. The problem there (as far as Hashicorp's recs go, at least) is that the modules the config depends on declare the provider with ~>, so they only allow patch version increments. If child modules have ">= 2.0, <= 3.0" instead then you change the version once in your root module (assuming you do have it pinned in the root anyway), any modules pick up the same one via the proxy provider, and you should be good. I will say this is specifically with providers rather than modules so far, Hashicorp are mostly good about not introducing breaking changes on minor version bumps as far as I can tell (last case I can find where they did break AWS provider was Sep 2019 which they hotfixed the same day). Thing is, you're right - this doesn't feel like the right way to go, especially on your own internal modules that are less likely to have comprehensive tests in place. It's the pattern Hashicorp seem to want people to run with though, so I'm trying to find people putting it into practice at scale to see if it actually works out.

If you do want to apply in multiple places as a result of a child module version bump, that's something I'm struggling to picture (again because I've not used it at that scale yet). Like do you have a bunch of workspaces all running off the same root config? Then it should be one commit to upgrade them all. If you mean you have a child module that is being referenced by multiple root configs, then I'm struggling to picture the use case where updating a child module and it not actually being applied everywhere straight away is a problem. If it was a problem on my current scale I would probably set up an auto plan on a daily/weekly schedule or something, that doesn't feel great either but I've no need to do it currently anyway.

It is built around the module registry workflow yeah, so it assumes you're either using TF Cloud or have your own private module registry spun up. I really didn't like Atlantis so I'm gonna take a wild guess that the open source module registry options are not great either.

The Fool
Oct 16, 2003


Yeah, my picture was from a single plan.

Specifically, that plan was using 3 modules, but those modules had dependencies on other modules, resulting in needing to dig through 21 different modules to find the one where someone set the provider version requirement to ~> 2.30.0 while other modules in the plan required higher than 2.30.x.

Basically, if you have a choice don’t do what my employer is doing

If you don’t have a choice, have a team to maintain and support the house of cards you are building

12 rats tied together
Sep 7, 2006

Scikar posted:

I will say this is specifically with providers rather than modules so far, Hashicorp are mostly good about not introducing breaking changes on minor version bumps as far as I can tell (last case I can find where they did break AWS provider was Sep 2019 which they hotfixed the same day).
Agreed, it's fine with providers and the providers are generally "stable" enough for this type of workflow.

Scikar posted:

If you do want to apply in multiple places as a result of a child module version bump, that's something I'm struggling to picture (again because I've not used it at that scale yet). Like do you have a bunch of workspaces all running off the same root config?
I could have been more clear here -- in my experience having specifically a "child module" is not a thing, because you always have more than 1 root state. The state <-> module relationship is more composition than inheritance: you have some defined abstraction ("networking") and you either need to invoke that abstraction in every AWS account, or you need an "every AWS account" state which is, IIRC, strictly against recommended practices.

In that situation, when you update your networking module you do need to plan -> apply in every root state that consumes it, which is where the "stupid amount of plan -> apply loops" comes from. Networking is probably not the best example here since it is relatively stable, but it shouldn't be too hard to come up with an applicable "new requirement for existing feature" example in your managed environments.

e: As far as I can tell the workspace is a wholly pointless and vestigial feature. I have no idea how anybody actually uses them.

12 rats tied together fucked around with this message at 20:50 on Feb 16, 2021

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

12 rats tied together posted:

Ansible's copy module will accept a decrypt parameter that controls whether or not ansible will automatically decrypt a vaulted file when it is copied to a remote server.

You might need a newer version of ansible if these params aren't supported in yours?

edit: formatting
edit 2: You might also consider, instead of storing the hashes, storing the plaintext passwords in vault. You can copy them as hashes to the server by creating a template, referencing the post-decryption vaulted variable, and throwing it through one of the various password hashing filters.

edit 3: ^^^ no worries :) the ansible docs are laid out really poorly for concerns that cut across multiple domains (modules plus vault, templates plus filters, etc.). This sort of stuff is also very hard to google!

so follow up to this;

I'm doing this the smart(er) way, and I'm generating my config file with a jinja template. I'm having some trouble working with my dictionary though.

So, I have a dictionary of dictionaries in vars.yml as follows:
code:
users_dictionary:
  service:
    username: serviceUsername
    password_hash: encryptedString
  otherService:
    username: otherServiceUsername
    password_hash: otherEncryptedString
    tags: administrator
I also have my jinja template, which looks something like this:
code:
{% for user in users_dictionary.values() %}
{
  "hashing_algorithm": "application_password_hashing_sha256",
  "name": "{{ user.username }}",
  "password_hash": "{{ user.password_hash }}",
  "tags": "{{ user.tags|default("") }}"
{% if not loop.last %}
},
{% else %}
}
{% endif %}
{% endfor %}

I want to construct my template on the fly, and I've achieved some success with:
code:
---
- hosts: localhost
  vars_files:
    - vars.yml
  tasks:
    - name: test jinja2
      template: src=template.j2 dest=test.conf
However, while the usernames and tags work fine (for users with tags defined and users without), the password hashes do not. Specifically, I get "fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'password_hash'"}"

This is confusing me quite a bit, since user.username works fine, and user.tags works fine, and all three key:value pairs are in the same dictionary. Do you have any advice here? I'm pretty sure I'm messing something up but honestly at this point I'm at a loss.

edit: moreover, when printing {{ user }} I can see the password_hash key/value pair just fine!
e.g.
{'username': 'serviceUsername', 'password_hash': 'serviceEncryptedString'}

The Iron Rose fucked around with this message at 00:52 on Feb 17, 2021

12 rats tied together
Sep 7, 2006

This is not a great answer but I can't reproduce, mocking out your stuff as best I'm able. The only way I can get an AnsibleUndefined here is to add a user to the dictionary who does not have a password_hash :(

What I would suggest though is:
code:
---
- hosts: localhost
  vars_files:
    - vars.yml
  tasks:
    - name: test jinja2
      template: src=template.j2 dest=test.conf
      debugger: on_failed   # <- the appended line
Append a debugger statement to your task, and run your playbook. It should fail like normal and spit out something that looks something like this:
code:
[localhost] TASK: test jinja2 (debug)>
You can poke at your variables like this, and you might consider running something like this to make sure that the data structure your task is running with is also the structure that you expect:
code:
[localhost] TASK: test jinja2 (debug)> task_vars['vars']['users_dictionary']
[localhost] TASK: test jinja2 (debug)> from pprint import pprint; pprint(task_vars['vars']['users_dictionary'])
[localhost] TASK: test jinja2 (debug)> list( x['password_hash'] for x in task_vars['vars']['users_dictionary'].values() )
These aren't copy pasted directly because I don't have ansible installed on this laptop, so there might be a syntax error lurking in here.

e: vvv Glad to hear it. :) I've done that many, many times myself.

12 rats tied together fucked around with this message at 01:08 on Feb 17, 2021

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
I figured it out!


One "password_hash" was actually "passsword_hash" :v:


Appreciate the assist very much.

The Iron Rose fucked around with this message at 01:03 on Feb 17, 2021

12 rats tied together
Sep 7, 2006

The Iron Rose posted:

I also have my jinja template, which looks something like this:

<snip>

I meant to include this previously, but I forgot. To make these templates easier to read, you can do:
code:
{% for user in users_dictionary.values() %}
{
  "hashing_algorithm": "application_password_hashing_sha256",
  "name": "{{ user.username }}",
  "password_hash": "{{ user.password_hash }}",
  "tags": "{{ user.tags|default("") }}"
{%   if not loop.last %}
},
{%   else %}
}
{%   endif %}
{% endfor %}
Basically, indent (I like two spaces) each time you go deeper into a loop. You can't indent outside of jinja2's brackets, since that would render extra whitespace in your document, but you can indent inside of them. It's also of course much easier to template yaml than json, but I assume you probably have no control over that. :)

Methanar
Sep 26, 2013

by the sex ghost
What should I do for thinkweek?

12 rats tied together
Sep 7, 2006

Hook up alerts to auto-remediation automation through something event-sourced: implement an event pipeline by adding a new topic to one of your kafka clusters, or by using something like eventstore db or messagedb. You could also use cloudwatch event triggers but that's kind of boring imho.

I've been meaning to experiment with faust (kafka) for this purpose. A simple example here would be "when a new member of an autoscaling group launches, run an ansible-playbook on it". You can just run the setup module, or invoke a role, whatever sounds fun.
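I haven't built this, but a faust agent for that ASG example might look roughly like this (topic name, event shape, and playbook path are all invented):
code:
import subprocess

import faust

app = faust.App("auto-remediation", broker="kafka://localhost:9092")
asg_events = app.topic("asg-launch-events", value_type=bytes)


@app.agent(asg_events)
async def handle_launch(events):
    # For every "instance launched" event, run a playbook against the new host.
    async for event in events:
        host = event.decode().strip()  # assume the event body is just the new IP
        # Blocking call; fine for a sketch, you'd offload this in real life.
        subprocess.run(["ansible-playbook", "-i", f"{host},", "bootstrap.yml"], check=False)


if __name__ == "__main__":
    app.main()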

Methanar
Sep 26, 2013

by the sex ghost
I was thinking of writing a keda plugin to query Burrow for metrics on kafka lag that I can scale on.

https://github.com/kedacore/keda/blob/main/pkg/scalers/kafka_scaler.go#L355

I'm currently using the kafka plugin but it's got some design flaws on it that are causing me issues. Particularly that it queries every partition in series, so if I have a large topic with 240/360 partitions, and every call takes 20-25ms I'm looking at it taking 5-10 seconds to actually return a value to me. Which is pretty bad.

I could probably just submit a PR to make this bit concurrent, but writing a keda plugin would be some nice OSS cred.

Writing a graphite plugin, like the prometheus plugin that already exists, would be a good choice too. We have like 150 billion metrics a day worth of application metrics stored in graphite that might be nice to scale on.

Methanar fucked around with this message at 00:01 on Feb 18, 2021

12 rats tied together
Sep 7, 2006

It's silly that it's called an "event-driven" autoscaler when it's based on metric thresholds, imho, but yeah that seems like a good change to make. I would probably end up using it, too, so purely for selfish reasons I would like you to do that.

Personally I'm trying to avoid ever writing production golang.

Methanar
Sep 26, 2013

by the sex ghost

12 rats tied together posted:

Personally I'm trying to avoid ever writing production golang.

I've been trying that for a long time. I'm not a real software engineer, and don't really want to be one either. I read golang to find problems, I don't write in new problems myself.

Trying, what is for me, something new like writing a plugin that will ultimately be owned and maintained by an OSS community might be a low barrier of entry way of trying it. And again, I might get some nice cred as well as something useful out of it.

The Fool
Oct 16, 2003


so far my only professional golang has been writing terratest tests

I think that’s ok

The Fool
Oct 16, 2003


Looking for a way to run terraform init against a tfe workspace without installing terraform cli, any suggestions?

Might be an x-y problem so I’ll post more info if you need it

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The Fool posted:

Looking for a way to run terraform init against a tfe workspace without installing terraform cli, any suggestions?

Might be an x-y problem so I’ll post more info if you need it

Always post more information, especially if you want to do something weird. The Terraform CLI can be "installed" by downloading it and unzipping it, which is two shell commands on both Linux and Windows.

The Fool
Oct 16, 2003


Yeah fair enough

The target audience for this is our app teams. We build and maintain a bunch of modules for them to use, so that they can manage their applications' infrastructure by just setting some variables for the modules they need and pushing their repo up to azure devops; the build pipeline handles the rest.

The actual terraform deployment is done as an api-driven run through Terraform Enterprise, and the application's terraform state is stored there in a workspace as well.

Right now we are working on migrating from 12.29 to 13.5, all of our modules are updated, with the idea that the app team should be able to make sure they are using the right module versions, then set their workspace to 13.5 and deploy.

This works with the exception of an issue with provider namespaces. The issue is resolved by either running terraform init or terraform state replace-provider against the remote backend.

We want to eliminate the need for the app teams to have to install terraform and configure the remote backend just for this one task
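For anyone following along, the state surgery in question is (roughly) the standard 0.13 upgrade command, run once per workspace with the backend configured:
code:
terraform state replace-provider \
  registry.terraform.io/-/aws \
  registry.terraform.io/hashicorp/aws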

Hadlock
Nov 9, 2004

Google meet chat has a 500 character limit and will truncate public keys

That is all

spiritual bypass
Feb 19, 2008

Grimey Drawer
Time to switch to ed25519
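(For context, assuming these are SSH keys: an ed25519 public key line is only around 80-100 characters with a short comment, versus several hundred for RSA, so it fits under that 500-character limit.)
code:
ssh-keygen -t ed25519 -C "you@example.com"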
