12 rats tied together
Sep 7, 2006

i think we do something like that at my current job and the problems with it seem to be that the team that provides the S tier experience is also on the hook for litigating and justifying why people don't get the S tier experience and, for those people, providing essentially help desk support over slack anyway

it's not really my impression that it is good or saves any time or effort, but i also don't work on that team (i'm currently a customer), and it's possible we are simply holding it wrong and just need a few policy adjustments before it's good


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Platform engineering efforts fail when nobody is in the business of actually identifying and surfacing patterns. Most of the work is the grueling job of building observability into people's day-to-day work to figure out where they're getting stuck, while doing it in such a way that people believe you're on their side, not secret agents of management sent to spy on them. You need this observability because what gets people stuck are unmanaged de facto workflows that emerge throughout a company at all the integration points between different systems.

If you aren't paying attention to all the tickets a team is logging with other teams in a given week, all the self-service pipelines being committed against, all the merge reviews needing security or other SME approval, you'll miss that someone's process of getting an API token for Confluence actually took seven teams two weeks to do, and while each of those individual tickets got done just fine, you accidentally pushed an entire IT workflow orchestration problem onto every end-user in the company.

The good news is that most teams are poo poo at tracking their work and rarely think to write things down. Once you give the company a couple examples of these kinds of problems, and show them the workflow to arrive there, they tend to get pretty good at showing you more cases.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Iron Rose posted:

Missed this when writing my reply, but I actually agree with this to an extent, which is that I almost think a business is better served by having no central devops/infra team at all. I’m not sure an internal support org is worth it either, but I’m also not sure the business is served by infrastructure existing at all.

Maybe just fold it + networking into security. Shared services feels like a bitch to manage in general.
I like the definition of "infrastructure" as "delivery dependencies". It's all the things you need to go from code to production service other than the code itself that your team wrote. Sure, maybe it's a server or a VPC or something. It's also your code repositories, your build/release pipelines, your feature flags and run-time configuration, and the release train processes you create to scale your continuous integration across a hundred teams.

You're never going to make infrastructure go away. There are certain irreducible problems that need attention from subject matter experts or the company won't scale. Least-privilege access control is a great example; every company has some story about getting this wrong and spending years fixing it. Most companies put this kind of infrastructure attention into wasteful places instead of productive ones, and come to the mistaken conclusion that infrastructure doesn't serve the business. It's the decisions and investments that the company has made that aren't serving the business.

Docjowles
Apr 9, 2009

I would say we currently do a light version of that. There's a premade, opinionated platform that's meant to be the default for spinning up new services. This gets reasonable support from a dedicated team. You're also welcome to opt out and YOLO all of your own poo poo because you know better, but you pretty much forego support beyond asking questions in a Slack channel and hoping someone replies.

Services do have to go through a design review before launching to production, and if you use the standard platform this is largely a rubber stamp vs having to prove everything you built really does meet various requirements. I don't want to overstate the maturity of this, we've been at this model for less than a year and are still figuring out if it even works well. The platform option is definitely not where it needs to be yet.

We have very vocal factions for both the "I don't give a poo poo about anything but shipping features, literally do not care at all just give me a way to run my code" and "I, a JavaScript developer, care very deeply about what IP addresses my VPC uses for some reason and I DEMAND full control of every detail of my infrastructure, which I will hand build in the AWS console" camps. Trying to satisfy both has certainly been a thing.

"Management can’t help too much beyond really broad diktats like 'use kubernetes'" certainly hits home. I would love to have some more firm and opinionated guidance from the people with the organizational clout to actually make things happen. Instead the CTO just drops "we should move to AWS, figure it out". Then as poo poo starts rolling downhill, VPs and directors jump out of the way, until it hits IC's and their line managers to build something and hope everyone comes to Jesus. It has not been ideal.

Vulture Culture posted:

Platform engineering efforts fail when nobody is in the business of actually identifying and surfacing patterns

People are currently spending a lot of time on this, so, hopefully something good comes out of it!

Docjowles fucked around with this message at 22:07 on Feb 14, 2023

i am a moron
Nov 12, 2020

"I think if there’s one thing we can all agree on it’s that Penn State and Michigan both suck and are garbage and it’s hilarious Michigan fans are freaking out thinking this is their natty window when they can’t even beat a B12 team in the playoffs lmao"
I helped a platform team and company get onto the cloud the past two years. Their former architect/now platform director came up with a three-tiered product offering: self-managed within guardrails defined by platform, running on infra deployed by the platform team, and a full-on ‘I can’t do any of this poo poo’ product offering. There is also a devops org that BUs can use in exchange for the standard chargeback scheme that is entirely separate from platform. Some teams like their e-commerce people have their own ‘SREs’ or devops people. I foresee some centralization happening due to budgets and k8s density at some point, but for now it’s working really, really well. Rarely do people bitch about being blocked, security is happy, business is happy.

Platform is also responsible for understanding patterns, socializing those with the teams, and hoping to generate buy in. So far everything is deployed, monitored, etc. in a nearly identical fashion.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Man, I really wish I got to work with smart people like all of you and solve these kinds of interesting problems. I guess my frustration over the state of my org currently is one of the reasons I'm getting pushed out of it! We're nowhere near the organizational maturity to try any of this, and leadership is incredibly allergic to any actual improvements, so things keep getting worse and worse, and they all get more and more stubborn.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
It helps if the platform team runs their team like a business vs a government agency. If they run it like a business, they care about internal customers and want to meet their needs and add useful features. But if it's run like a government agency, and seen as a cost sink, it stagnates as a monopoly that makes the rules and has a captive (and unhappy) customer base.

In reality I suspect most platform teams are somewhere in the middle; or at least start out with ambitions of being the former, before legal / security / technical realities are imposed upon them and they end up being the bureaucratic monster they once professed to hate.

i am a moron
Nov 12, 2020

"I think if there’s one thing we can all agree on it’s that Penn State and Michigan both suck and are garbage and it’s hilarious Michigan fans are freaking out thinking this is their natty window when they can’t even beat a B12 team in the playoffs lmao"

minato posted:

It helps if the platform team runs their team like a business vs a government agency. If they run it like a business, they care about internal customers and want to meet their needs and add useful features. But if it's run like a government agency, and seen as a cost sink, it stagnates as a monopoly that makes the rules and has a captive (and unhappy) customer base.

In reality I suspect most platform teams are somewhere in the middle; or at least start out with ambitions of being the former, before legal / security / technical realities are imposed upon them and they end up being the bureaucratic monster they once professed to hate.

I mean, yea. If security and legal want you to enforce their rules there is no wiggling out of it unless you want to get slapped. The platform team I work with hasn’t solved this exactly, but they do facilitate the conversations and make people aware that when, say, deploying a service with public endpoints enabled is blocked by the guardrails they need to talk to whichever team to get an exception. And the org I work with basically grants them all, but starts off by not just letting people do that so they can record and accept the risk intentionally.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Platform adoption under a product mindset is always opt-in until you hit the point where you've got critical mass but new adoption has stalled. "Surprise and delight our users" is a carrot, but there's always a stick behind someone's back.

i am a moron
Nov 12, 2020

"I think if there’s one thing we can all agree on it’s that Penn State and Michigan both suck and are garbage and it’s hilarious Michigan fans are freaking out thinking this is their natty window when they can’t even beat a B12 team in the playoffs lmao"
IME there is a stick behind everyone’s back of ‘the chief whatever person said get your rear end on the loving cloud’. It’s been useful for sorting out the various teams though and providing a more tailored approach to adoption for each business and app

12 rats tied together
Sep 7, 2006

don't take poo poo from security, ask them what the control is and how you can satisfy it.

it's part of your job as infrastructure to not let the shortcuts infosec takes propagate into usability issues for the rest of the org.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
There definitely need to be different approaches for different levels of app. For example, I have a half-page script that reads data from an internal API and squirts it into a human-readable Google Sheet. I'd run it off my laptop, but it needs to run when I'm on vacation or if I left the company, so I need to find a proper home for it.

If I apply to the IT platform department, then they will ask:
- security: what ports does it expose? What's your process for handling CVEs? Do you do static analysis & container scanning? When and how are you rotating credentials?
- GDPR: What personal data are you processing? Where is it stored? How long will you retain it? Is that policy published anywhere?
- ops: what CPU / network / storage requirements? Where's your testing/staging environment? How critical is this app? Who do we escalate PagerDuty to? What health checks can we apply?
- legal: Is the source code open-sourced? If so, what license is it using? What's our exposure?
- procurement/finance: Is it using any 3rd party SaaS services? Are they onboarded into our vendor management system? Do we have an enterprise agreement with them?

All of these are very reasonable questions... but this is just a non-critical script, and there are thousands of similar things all over the company. If the bar to entry into a platform is this high, no-one is going to wade through all that red tape. They'll either hobble along running it on their laptop, or go rogue and procure an off-the-books cloud account to run it where the eagle eyes of infosec et al won't see it.

If the platform team is made up of "cover my own rear end" types, they don't care: they only care that they asked the right questions and got their boxes ticked. But if they actually care about internal customers, they'll recognize this friction for what it is, and make an easy path for them.

12 rats tied together
Sep 7, 2006

most of those questions shouldn't even arrive at you because they shouldn't even be asked because you shouldn't be running your script with internal auth that has access to pci data, personal info, you don't own google sheets so there are no listening ports to ask about, etc

step 1 of running a competent infosec org is to know your scope and limit it as much as possible. the same thing applies to ops, even in my ideal dream world where you have to open a ticket for this, the ticket is same day and the only piece of info ops needs from you is a name for whatever this is

this is the org version of the null problem. nothing is nothing, it's not "i forgot" or "i didn't do it yet"

12 rats tied together fucked around with this message at 00:33 on Feb 15, 2023

Hadlock
Nov 9, 2004

The Iron Rose posted:


The problem is developers still aren’t good enough at doing this to be as effective as we want them to be. Logging, monitoring, scaling, HA, o11y, database administration, effective use of compute, secure design, and so on.



I agree with this

I think you can get 80% of the way away from DevOps with managed services, but you still need, wild guessing here, about 1.5 DevOps headcount per 50 engineering headcount

Due to ramp up time and bus factor I think you end up with a minimum team headcount of about 3

When production goes down you really want a guy who has a high level overview of what's going on and remembers what happened the last time Joe's service died since he's skiing today

Someone will now chime in that if production goes down you're doing it wrong

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

minato posted:

All of these are very reasonable questions... but this is just a non-critical script, and there are thousands of similar things all over the company. If the bar to entry into a platform is this high, no-one is going to wade through all that red tape.
At my previous job we'd also spend months arguing with security about things like the patching process and they'd routinely block deployments until everything was approved.

I understood where they were coming from, but at the same time there were glaring security issues across the platform that Security weren't addressing, since that would have required actual effort instead of creating terabytes of Google Forms. When this was pointed out they wouldn't budge at all.

This eventually soured a lot of people on the Security team and the relationship became super adversarial.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Sagacity posted:

At my previous job we'd also spend months arguing with security about things like the patching process and they'd routinely block deployments until everything was approved.

I understood where they were coming from, but at the same time there were glaring security issues across the platform that Security weren't addressing, since that would have required actual effort instead of creating terabytes of Google Forms. When this was pointed out they wouldn't budge at all.

This eventually soured a lot of people on the Security team and the relationship became super adversarial.

There is a huge difference between “security” people who just generate reports and the security people who actually understand real risks to an environment. The former are a dime a dozen and often cause more problems than they solve. The latter are worth their weight in gold.

Chopstick Dystopia
Jun 16, 2010


lowest high and highest low loser of: WEED WEEK

deedee megadoodoo posted:

There is a huge difference between “security” people who just generate reports and the security people who actually understand real risks to an environment. The former are a dime a dozen and often cause more problems than they solve. The latter are worth their weight in gold.

If only they would stick to generating reports. I once worked at a place where outages were caused multiple times by security changing AWS permissions to match their new policies without any knowledge of how that would impact running services, and of course no discussion with the engineering teams that maintained them.

Trapick
Apr 17, 2006

Chopstick Dystopia posted:

If only they would stick to generating reports. I once worked at a place where outages were caused multiple times by security changing AWS permissions to match their new policies without any knowledge of how that would impact running services, and of course no discussion with the engineering teams that maintained them.
We recently had a *major* outage because some security folks decided to add some additional logging in a container but nothing to forward or rotate those logs anywhere, and it filled up the volume and crashed everything.

Of course they didn't tell anyone ahead of time, get change approval, or anything else. Was great.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Security by bureaucracy, the least effective policy of all

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
It's all Goodhart's Law stuff, right? Infosec imposes onerous policies because bad number go down brrrrr.

Since security incident quantity/cost is easy to track, but the negative effects of onerous policies are intangible and second-order (e.g. people spinning up services on their credit card because it's easier than going through InfoSec), it's really easy for InfoSec to slip into being a bureaucratic friction bottleneck that primarily benefits themselves.

Hughmoris
Apr 21, 2007
Let's go to the abyss!
If anyone is working in DevSecOps, or familiar with it, can you weigh in on that kind of work? Exciting, or boring and soul draining? I might have a line on a junior DevSecOps position. My background is data analytics but am looking for a change and an opportunity to make $$$ down the road.

I don't have any professional experience with DevOps but I have some hands-on time with AWS and Azure personal projects.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hughmoris posted:

If anyone is working in DevSecOps, or familiar with it, can you weigh in on that kind of work? Exciting, or boring and soul draining? I might have a line on a junior DevSecOps position. My background is data analytics but am looking for a change and an opportunity to make $$$ down the road.

I don't have any professional experience with DevOps but I have some hands-on time with AWS and Azure personal projects.
It could be really cool work, or it could be impotently asking teams to please put Snyk in the Jenkins pipelines that neither they nor you know how to edit.

Either way, if you have good analyst skills, and you're good at digging at different projects throughout a company to understand Where Things Stand in relation to some across-the-board improvement someone is looking to make, you should have a relatively easy time of it.

Hadlock
Nov 9, 2004

Junior devsecops is going to be closing out 20 terraform security tickets a week generated by an automatic scan system managed by a 3rd party contractor and some ancillary stuff

Hughmoris
Apr 21, 2007
Let's go to the abyss!

Vulture Culture posted:

It could be really cool work, or it could be impotently asking teams to please put Snyk in the Jenkins pipelines that neither they nor you know how to edit.

Either way, if you have good analyst skills, and you're good at digging at different projects throughout a company to understand Where Things Stand in relation to some across-the-board improvement someone is looking to make, you should have a relatively easy time of it.

Hadlock posted:

Junior devsecops is going to be closing out 20 terraform security tickets a week generated by an automatic scan system managed by a 3rd party contractor and some ancillary stuff

Thanks for the info. After watching some videos and reading some articles, I'm not sure if DevSecOps will be my bag.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So I'm doing some Terraform for the first time. What I'm managing is Azure DevOps pipelines (there's a provider for that!). For each customer that we onboard into this particular service, we setup a pipeline for them, and grant some access to it. I've got a "proof of concept" in a single main.tf file, with 2 data sources and 4 resources needed to create a new pipeline. Right now all the pipelines would share those two data sources and 1 of the resources, and the other 3 resources are unique to the pipeline. So for each new pipeline I'd need 3 new resources.

I know I could just copy/paste but also I know that that's a really bad idea. But I'm not sure what the right approach to this actually would be. My guess is a local module where the pipeline-specific resources are defined, then I define all my pipelines in variables.tf in a map, and use for-each to iterate through them. Am I on the right track here? Is there another way to be doing this?

The Fool
Oct 16, 2003


I would set it up without using a module first then move the pieces that deploy together into the module once you have them all set up the way you want.

Also, variables are for taking input that can change from an external source. If you want to just create a value inside your config you can use a local block.

Otherwise, map + for_each is the right idea.

12 rats tied together
Sep 7, 2006

2nding that you should make the module last, or ideally never. Hashicorp documentation in this area has improved drastically in the past year-ish and the key insight from the documentation here would be: (phrasing mine) if you have trouble coming up with a name for your module that isn't just the name of the resources inside it, it's not raising the level of abstraction, and probably shouldn't exist

resource for_each with a map argument is best way to get started on this type of thing

edit: a map argument or a for expression that results in an appropriate data type and shape for your resources. the expression is usually a better pattern imo but they are substitutable for each other at any point so it doesn't matter which one you start with

12 rats tied together fucked around with this message at 19:36 on Feb 21, 2023

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
The "resource inside plus a bunch of sensible defaults or governance poo poo" pattern is much less problematic in Terraform 1.3+, where you not only have the moved block, but the usage restriction on cross-package moves is lifted. I used to advocate strongly against it, but nowadays the biggest problem is that it invariably adds a bunch of provider version constraints that are probably all artificial. It's mostly fine, but if what you really want is to provide default owners for a KMS key policy or something, you can also consider a data-only module.
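For illustration, a moved block for that kind of refactor looks roughly like this (the resource and module names here are invented):

code:
# hypothetical refactor: a bare KMS key that got wrapped into a governance module
moved {
  from = aws_kms_key.data
  to   = module.encrypted_volume.aws_kms_key.data
}
Terraform then treats the old address as renamed instead of planning a destroy/create pair.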

The Fool
Oct 16, 2003


12 rats tied together posted:

2nding that you should make the module last, or ideally never. Hashicorp documentation in this area has improved drastically in the past year-ish and the key insight from the documentation here would be: (phrasing mine) if you have trouble coming up with a name for your module that isn't just the name of the resources inside it, it's not raising the level of abstraction, and probably shouldn't exist

that's a good litmus test, but I'd argue that if you're doing a local only submodule to consolidate connected resources you don't need to be as strict about it.

quote:

edit: a map argument or a for expression that results in an appropriate data type and shape for your resources. the expression is usually a better pattern imo but they are substitutable for each other at any point so it doesn't matter which one you start with

this is absolutely the best way to handle config information coming from sources you don't control and need to shape, but if you are generating the config yourself you're better off making it look right from the beginning

12 rats tied together
Sep 7, 2006

i think it's still strictly worse unless you're embedding provider blocks into the modules, which i've never done, but i assume is "fine"

the useful governance poo poo applies to the provider (e.g. default tags), providers are nasty, module blocks are also nasty (still can't use variables in a module path im pretty sure?), you should minimize the total amount of them that exist

i can't think of any other type of governance poo poo that doesn't better live in root state, or a different state entirely, at least in an AWS API world

it's not my opinion that moved{} meaningfully improves management here, we also really need imported{} from pulumi's ResourceOptions block, otherwise we're just doing the ansible thing where you have a series of merge-appends to a "list of stuff that shouldn't exist" which is not very declarative

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

12 rats tied together posted:

resource for_each with a map argument is best way to get started on this type of thing

This would make sense (to me) if I was just creating multiple of the same resource, but what I'm doing is creating multiple sets of a set of resources. Each thing comprises an azuredevops_build_definition, an azuredevops_group, and an azuredevops_build_definition_permissions. And they're dependent on outputs of each other. The permissions resource needs the id of the group and the id of the build definition, so I'm not sure how I could effectively pull that off in a for_each without grouping those resources into a single kind of entity, and (to my very limited knowledge) the module is the only way to do that.

It doesn't look like the for expression can define resources, so I'm not sure how else to group these other than a module.

12 rats tied together
Sep 7, 2006

The Fool posted:

this is absolutely the best way to handle config information coming from sources you don't control and need to shape, but if you are generating the config yourself you're better off making it look right from the beginning

i don't think this is true, a for expression is a filter or a lens through which you can view a data structure, it's not incorrect to have a structure that is not valid as-is in every resource that consumes it

an example of this would be the ec2 route resource, which requires a different param depending on the target of the route. you can have 5 maps for each type of route, or you can have a route for_each where every parameter is conditionally toggled off sometimes, or you can have a route resource for each type of destination that uses an expression to filter "routes of my type" and has no conditionals otherwise

the last one is the best one, pretty objectively i think
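to sketch that last pattern (route data and names made up):

code:
locals {
  routes = {
    "10.0.1.0/24" = { type = "peering", target = "pcx-aaaa1111" }
    "0.0.0.0/0"   = { type = "nat",     target = "nat-bbbb2222" }
  }
}

resource "aws_route" "nat" {
  # filter down to "routes of my type", so no conditionals in the body
  for_each = { for cidr, r in local.routes : cidr => r if r.type == "nat" }

  route_table_id         = aws_route_table.main.id
  destination_cidr_block = each.key
  nat_gateway_id         = each.value.target
}
each destination type gets its own resource block with its own filter, and none of them need toggled-off params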

The Fool
Oct 16, 2003


12 rats tied together posted:

i think it's still strictly worse unless you're embedding provider blocks into the modules, which i've never done, but i assume is "fine"

provider blocks outside of the root cause a ton of problems and actually entirely break count/for_each

The Fool
Oct 16, 2003


FISHMANPET posted:

This would make sense (to me) if I was just creating multiple of the same resource, but what I'm doing is creating multiple sets of a set of resources. Each thing comprises an azuredevops_build_definition, an azuredevops_group, and an azuredevops_build_definition_permissions. And they're dependent on outputs of each other. The permissions resource needs the id of the group and the id of the build definition, so I'm not sure how I could effectively pull that off in a for_each without grouping those resources into a single kind of entity, and (to my very limited knowledge) the module is the only way to do that.

It doesn't look like the for expression can define resources, so I'm not sure how else to group these other than a module.

for_each creates an index of resources it deploys based on the key in your map.

So, given:
code:
locals {
  config_map = {
    item1 = {
      value1 = "foo"
      value2 = "bar"
    }
  }
}
You deploy resource1:
code:
resource "base_resource" "resource1" {
  for_each = local.config_map
  name     = format("%s-BASE", each.key)
  value1   = each.value.value1
}
Then deploy resource2, with a reference to resource1:
code:
resource "dependent_resource" "resource2" {
  for_each         = local.config_map
  name             = format("%s-DEPENDENT", each.key)
  base_resource_id = base_resource.resource1[each.key].id
  value2           = each.value.value2
}
e: you can also derive names or other values from the key, updated example to reflect

The Fool fucked around with this message at 20:13 on Feb 21, 2023

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Ooooook, that makes sense. I'll give that a try.

Though, shouldn't it be each.value1.value and each.value2.value? Looks like you transposed the order of the values.

E: and I was planning on taking advantage of some "default" values in the module variables, but I can just move to using conditional expressions in the actual resource block it looks like. If value is set, use that, otherwise use this default value

FISHMANPET fucked around with this message at 20:15 on Feb 21, 2023

The Fool
Oct 16, 2003


each.value is how you access the value of the iterated item in for_each; value1/value2 are keys within that value.

The Fool
Oct 16, 2003


FISHMANPET posted:

E: and I was planning on taking advantage of some "default" values in the module variables, but I can just move to using conditional expressions in the actual resource block it looks like. If value is set, use that, otherwise use this default value

I really dislike doing logic inside of resource blocks if it can be at all helped.

Do the logic in locals, then the resource blocks are just a list of assignments.
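Something like this, where the defaults and keys are made up for the example:

code:
locals {
  pipeline_defaults = {
    agent_pool = "Default"
    retention  = 30
  }
  # merge() applies the defaults, and each pipeline overrides only what it cares about
  pipelines = {
    for name, p in var.pipelines : name => merge(local.pipeline_defaults, p)
  }
}
Then the resource block just assigns each.value.agent_pool and so on, with no conditionals.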

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

12 rats tied together posted:

i think it's still strictly worse unless you're embedding provider blocks into the modules, which i've never done, but i assume is "fine"
No, this is bad, it breaks the entire contract that your module is in any sense modular. It also prevents you from ever using the module in a count/for_each context if that's the kind of thing that's important to you (and it should be).

12 rats tied together posted:

the useful governance poo poo applies to the provider (e.g. default tags), providers are nasty, module blocks are also nasty (still can't use variables in a module path im pretty sure?), you should minimize the total amount of them that exist
Yeah, that's one trivial case. Most of the ones in the real world rely on a multitude of resources, like "we don't permit use of the account-default encryption key, so create a new CMK for this resource if none was provided", or "ensure this S3 bucket writes access logs to the standard location for the account", or "create a new backup vault for this file system instead of using the account-default one that every tenant shares".

12 rats tied together posted:

i can't think of any other type of governance poo poo that doesn't better live in root state, or a different state entirely, at least in an AWS API world
It depends totally on your answer to who writes the Terraform. Root state is fine if you have few Terraform authors and few important governance rules to implement consistently. It doesn't necessarily scale when you have many different authors trying to satisfy compliance guardrails they had nothing to do with writing.

12 rats tied together posted:

it's not my opinion that moved{} meaningfully improves management here, we also really need imported{} from pulumi's ResourceOptions block, otherwise we're just doing the ansible thing where you have a series of merge-appends to a "list of stuff that shouldn't exist" which is not very declarative
Import-as-code is something Terraform definitely needs in order to work in the kinds of GitOps workflows people imagine Terraform would actually be good at, but I'm not connecting it to this problem. Say more?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The Fool posted:

I really dislike doing logic inside of resource blocks if it can be at all helped.

Do the logic in locals, then the resource blocks are just a list of assignments.
Better yet, if you have logic that is so complex that it isn't straightforwardly correct, break the transformation into a data-only module. That's the only way you're going to get to unit test it.
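As a sketch, a data-only module is nothing but variables and outputs (paths and names invented here):

code:
# modules/bucket-naming/main.tf -- no resources, just one testable transformation
variable "app" { type = string }
variable "env" { type = string }

output "name" {
  value = lower(format("%s-%s-logs", var.app, var.env))
}
Since it plans instantly with no providers, you can exercise it from a throwaway root module, or from terraform test if you're on a new enough version.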


FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Well, success. That was, all-in-all, much easier than I thought it would be. Each "pipeline" is defined via 8 values in a local map, and my actual resource definition is only about 60 lines of code.

I'll have to do a little work because of an edge case I just discovered with provider weirdness, but this is all pretty slick.

Is there a way to force terraform to verify that its current stored state actually aligns with the state of the actual objects? I know it should be doing that, but because of provider weirdness things got out of sync.

Basically, I used the same group for two different items, and then removed one of the items. So it removed the group definition entirely, but unfortunately, it doesn't know that, and so it removed the access permissions I set, and a group membership I set. I'm going to workaround this in a way that should prevent it from happening entirely, but still kind of curious if there's a way to force terraform to sync its state.
