The Fool
Oct 16, 2003


Hadlock posted:

1) compile a giant mono container with both Django and "yarn dev" running, identical to how it runs on the developer laptop

I maintain an internal application where I do something like this with nginx and fastapi. And the same container gets deployed the same way through all environments.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
Is the goal to provide a dev environment in a box? If so, who's it targeted at? If it's UI devs then your best bet is probably a docker-compose-based backend stack with the dev running the UI dev server on their machine.

If you're looking for something a bit closer to prod for smoke testing, etc then bundle assets to S3 in the same way as prod.

Hadlock
Nov 9, 2004

Yeah both basically :downsgun:

1) seamless developer experience. I think they'll want the option to deploy both the front end and back end so they can test features with end users agile style once or twice a week

The developers have the front and back end pretty tightly coupled; basically they have a bunch of data and the business users want to mash it up in different ways, so they modify the Django view to expose xyz models and perform an extract, and then they add six lines of Vue to display it in the right place and label it. So it's a lot of back and forth and, presumably, trial and error

2) yeah I guess I want to deploy to S3 as well in the staging/qa environment/s to smoke test everything and provide consistent deployment between environments
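For illustration, the Django-to-Vue round trip described above might look something like this minimal sketch. Everything here is invented (the `widget_summary` name, the field names, the fake rows standing in for a queryset), and it's shown framework-free with a JSON string in place of Django's `JsonResponse`:

```python
import json

# Invented rows standing in for something like Widget.objects.values(...)
# in a real Django view; every field name here is made up.
FAKE_QUERYSET = [
    {"name": "foo", "region": "us-east", "total": 12},
    {"name": "bar", "region": "eu-west", "total": 7},
]

def widget_summary(rows):
    """Extract only the fields the Vue side needs and serialize them.

    In a real Django view this would return JsonResponse(payload);
    returning the JSON string keeps the sketch framework-free.
    """
    payload = {"widgets": [{"label": r["name"], "value": r["total"]} for r in rows]}
    return json.dumps(payload)

print(widget_summary(FAKE_QUERYSET))
```

On the Vue side, the six lines would just fetch this endpoint and render `widgets`.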

Hadlock
Nov 9, 2004

Current status: unwinding someone's idea of local vs dev config hardwired into settings.py that overrides all sorts of environmental config, with five years' worth of BeSt InTeNtIoNs stacked on top of that
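One way to unwind that kind of hardwiring is to make settings read everything from the environment with explicit defaults, so nothing in settings.py cares whether it's "local" or "dev". A minimal sketch, with invented variable names:

```python
import os

def env_setting(name, default, cast=str):
    """Read one setting from the environment, with an explicit default.

    The point is that settings.py stops caring which environment it is
    running in: the environment supplies values, the code supplies defaults.
    """
    raw = os.environ.get(name)
    return default if raw is None else cast(raw)

def as_bool(value):
    return value.strip().lower() in ("1", "true", "yes")

# In a real settings.py these would just be module-level constants.
DEBUG = env_setting("APP_DEBUG", False, cast=as_bool)
DATABASE_URL = env_setting("APP_DATABASE_URL", "sqlite:///local.db")
```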

Hadlock
Nov 9, 2004

My org is basically a newborn baby when it comes to operational maturity. We had an issue today: one guy went to log on to the server to look at the problem and couldn't get on, because the other guy was rebooting the box to fix it that way. I haven't looked at the timestamps yet, so I'm not sure if the box was locked up or if he couldn't get on the machine because it was rebooting. Too many cooks in the kitchen

Basically we're taking a giant messy poo poo while eating hot fudge on the toilet and can't tell where one ends and the other begins

I'm just going to copy the Atlassian post mortem report template. I've been using that in various forms since forever

What about outage policy and procedure docs? Like, I know roughly what they're gonna say, but I can't find anything good to use as a baseline template. Thoughts?

Gonna go over the training plan with my boss Monday and then do a 30 minute training Tuesday with the post mortem from today. Am I missing anything

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Without trying to be mean: a realistic look at your org's willingness to adopt any of those policies, at your personal resiliency (how hard and how long you're willing to tilt at a windmill), and at how you might take those docs and that training getting entirely ignored or actively resisted.

Bhodi fucked around with this message at 23:01 on Jan 19, 2024

Hadlock
Nov 9, 2004

I guess if Tuesday's meeting goes poorly I'll start looking for another job, because this is one of the main reasons they hired me

Docjowles
Apr 9, 2009

You don’t have to go 0 to 60 in a day in terms of policy and procedure around incidents. But some baseline level of communication around who is responding and providing status out to the rest of the org is a good start. Cause yeah I’ve been there where you’re knee deep in troubleshooting / remediation and then some other idiot helpful coworker Kool Aid Mans in to save the day.

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug

Hadlock posted:

My org is basically a newborn baby when it comes to operational maturity. We had an issue today: one guy went to log on to the server to look at the problem and couldn't get on, because the other guy was rebooting the box to fix it that way. I haven't looked at the timestamps yet, so I'm not sure if the box was locked up or if he couldn't get on the machine because it was rebooting. Too many cooks in the kitchen

Basically we're taking a giant messy poo poo while eating hot fudge on the toilet and can't tell where one ends and the other begins

I'm just going to copy the Atlassian post mortem report template. I've been using that in various forms since forever

What about outage policy and procedure docs? Like, I know roughly what they're gonna say, but I can't find anything good to use as a baseline template. Thoughts?

Gonna go over the training plan with my boss Monday and then do a 30 minute training Tuesday with the post mortem from today. Am I missing anything

So I'll start by just saying up front I don't have any convenient docs I can link you, but I can at least probably confirm your suspicions about process; I managed high-impact outages for a major tech company for close to ten years so I've got a lot of experience in the area. I've written some blog posts around oncall that might be relevant so let me know if you're interested.

Docjowles posted:

You don’t have to go 0 to 60 in a day in terms of policy and procedure around incidents. But some baseline level of communication around who is responding and providing status out to the rest of the org is a good start. Cause yeah I’ve been there where you’re knee deep in troubleshooting / remediation and then some other idiot helpful coworker Kool Aid Mans in to save the day.

This is a solid bit of starting advice, all of it - especially the 'don't try and jump straight to perfect'. Start here for sure, but I'll add in a bit more detail.


Bhodi posted:

Without trying to be mean: a realistic look at your org's willingness to adopt any of those policies, at your personal resiliency (how hard and how long you're willing to tilt at a windmill), and at how you might take those docs and that training getting entirely ignored or actively resisted.

This is also good advice and definitely ties into the 'don't jump straight to perfect' comment; crisis / incident management stuff tends to be bad news delivery and people are going to resent it if you go too hard, but you can generally convince people of basic poo poo, especially team members/etc. Pick your battles, basically.

So going into more details:

When something happens, you should have a clear idea of who's working on it. Generally, I'd say this should mean 'there's a ticket in our ticketing system and we track that' - this should be assigned to a specific person or at least a specific role, and when someone starts working on it they should put literally anything into the ticket to indicate they've begun investigating. "ack", "looking", etc are all perfectly fine, because the point is that you have a way to say 'oh X is looking at this.' The goal of stuff like this is to put this information where people are looking, so ideally people aren't just hopping around trying to fix things without checking for tickets/etc first.
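The "put literally anything into the ticket" idea can be sketched in a few lines. This is purely illustrative: an in-memory dict stands in for a real ticketing system, and all names are made up:

```python
from datetime import datetime, timezone

# In-memory stand-in for a real ticketing system; any real tracker
# (Jira, PagerDuty, etc.) would replace this dict.
TICKETS = {}

def open_ticket(ticket_id, summary):
    TICKETS[ticket_id] = {"summary": summary, "owner": None, "updates": []}

def ack(ticket_id, who, note="ack"):
    """Record that someone has started looking, so nobody else piles on."""
    ticket = TICKETS[ticket_id]
    ticket["owner"] = who
    ticket["updates"].append((datetime.now(timezone.utc).isoformat(), who, note))

def current_owner(ticket_id):
    return TICKETS[ticket_id]["owner"]

open_ticket("INC-1", "box unreachable")
ack("INC-1", "hadlock", "looking")
```

The mechanism barely matters; what matters is that "oh, X is looking at this" is written down where people actually check.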

When something happens that your company would consider an 'incident' and it didn't generate a ticket, then you have a monitoring gap - so go figure out how to make sure there's an automated ticket next time. Ideally, try and fix it that way instead of going down the path of 'you have to create a ticket by hand for every xyz bullshit', because nobody likes that. It's not bad advice, but nobody likes it, and at your current maturity state it's not worth fighting about.

I don't actually know if you ever need a standards doc for this, but if you feel like writing one, keep it brief and then get buy in from your coworkers/etc.

The Atlassian Postmortem Template seems extremely reasonable, although honestly longer than necessary. I'd review the rest of their related docs because I assume it's all probably decent and might get you something you can throw at managers as 'sources'.

Falcon2001 fucked around with this message at 02:42 on Jan 20, 2024

Hadlock
Nov 9, 2004

Thanks very much. I kind of skimmed over these posts initially, so I'm re-reading them now. Thanks for the wise words. This has given me a lot to think about; I'm still pondering on it

Kind of contemplating putting the family guy oh yeah kool-aid man clip in Tuesday's slideshow

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug
Whenever trying to improve operational excellence, I'd start from 'how do I get people on board with this?' - in this case I'd try and lean on the fact that nobody likes wasting their time or playing a blame game; so trying to get people in the habit of either talking about what they're doing openly, or tracking it in a ticket is something you can help people see the benefits of.

Certain operational things (security patching, etc) kind of fall into the 'nobody really likes doing it but we have to', but a lot of things can be argued for from a position of self-interest, or at least a position of 'here's how it helps people you actually give a poo poo about like your coworkers'.

Edit: this reminded me of another option. If a ticketing/monitoring system is out of your team's reach or simply impossible to implement, getting people in the habit of declaring what they're doing when they're investigating in a slack channel or something like that could probably be a workable alternative; the point is letting folks know what's going on. Even this is an improvement over 'eight cooks blindly wandering around a kitchen'.

Falcon2001 fucked around with this message at 21:34 on Jan 21, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Weaponizing people's innate curiosity is the only tool you have in your toolbox for knowledge workers. Don't pretend you have other ones. You can't force the issue. You can't short-circuit people onto the aha moment where they realize their lack of coordination prolonged a systems outage, and that there are other ways to do it.

If people are accountable for an outcome, either they'll find a way or the system will replace them with people who will get things done for the business. If they aren't accountable for results, they'll do whatever causes them the least stress (and this is actually a good thing).

If the business knows what it wants, your one job is to deconstruct and rebuild the system around people's patterns of curiosity. If the business doesn't know what it wants, what are you doing arguing with technicians?

Vulture Culture fucked around with this message at 19:29 on Jan 22, 2024

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Does anyone have a really good answer for why secrets management, as a whole problem space with products in it, is actually its own distinct problem? The best I'm able to do is something like "it's half an identity broker" or "it's what you use at the boundaries between systems when you kinda don't actually want a client to have its own identity"

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
To me it's a sanitation problem. It transforms unsafe poop (say, plaintext passwords or root or admin access keys) into something that's safe to touch and use. Which would be awesome, except you still need to get the poop in there, which requires touching the poop, updating the poop, backing up the poop, and since it happily contaminates and gets into everything and is overall unpleasant to deal with, you can't use the same tools and automation that you use to handle the safe stuff, so it gets to be manual or have its own separate PPE systems which themselves have to be managed, monitored, secured and kept up to date. Oh, and you can't isolate it, because by its very nature it needs to be globally accessible.

This ecosystem is by its nature isolated from the rest of your automation, is a self-contained system with well-defined boundaries (did the poop touch?), doesn’t really need to interoperate, and so is a natural separate and isolated "problem space" ripe for a random product to come in and claim to “handle it” soup to nuts.

Bhodi fucked around with this message at 21:02 on Jan 22, 2024

12 rats tied together
Sep 7, 2006

I don't have anything nice to say. It's a distinct problem because that means people can sell a solution for it.

Hadlock
Nov 9, 2004

Secrets management is just config management, but with an unnecessary extra step of two-tiered access control

I've never come across a situation where someone who had access to change config wasn't also someone I'd trust with secrets

At one company we had the unencrypted configmap in the monorepo but I doubt most developers knew it existed or what it was for

My buddy works at a startup in this space. They focus on secret management and rotation but he uses their internal product primarily for config management of ephemeral environments

JehovahsWetness
Dec 9, 2005

bang that shit retarded

Vulture Culture posted:

Does anyone have a really good answer for why secrets management, as a whole problem space with products in it, is actually its own distinct problem? The best I'm able to do is something like "it's half an identity broker" or "it's what you use at the boundaries between systems when you kinda don't actually want a client to have its own identity"

I think "it's half an identity broker" is the crux, but on the ground it's companies trying to pack features so people just don't normalize on whatever their primary CSP provides. Assuming all they're actually interested in is just secrets management. We use Hashi Vault and secret storage is the _least_ interesting thing we do with it, all the cool poo poo is identity brokering shenanigans to bridge cross-system identity gaps.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
It's a special case of config management. It needs controls & auditing far beyond what's needed for "regular" config. And as someone mentioned above, it's also about getting that data where it's used in a secure way. Most config management software is not overly concerned with the security of the config itself, because the config is often not sensitive info. But secrets are sensitive, and need to be distributed to the same areas as regular config, so either config management tooling clumsily integrates with secrets managers to do that, or the system is architected to bypass the config management tooling entirely and talk directly to the secrets managers.

(We had a system where our config management + deployment scripts were horribly integrated with our secrets manager, because it was the config management tool that was ultimately responsible for putting those secrets onto the hosts in a way that our programs could use them. We inverted the model and instead get our programs to reach out to the secrets manager on startup to retrieve whatever secrets they need, and this has vastly simplified (and secured) our deployment scripts, at the cost of coupling our programs with a secrets manager. Seems worth it, so far.)
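The inverted model described above (programs fetch their own secrets at startup) might be sketched like this; `FakeSecretsManager` is a stand-in for a real client such as hvac for Vault, and the secret paths are invented:

```python
# Sketch of the "fetch your own secrets at startup" model. In real code
# the client would talk to Vault / AWS Secrets Manager / etc.; here an
# in-memory dict keeps the sketch self-contained.

class FakeSecretsManager:
    def __init__(self, store):
        self._store = store

    def read(self, path):
        return self._store[path]

class App:
    REQUIRED = ("db/password", "api/key")

    def __init__(self, secrets_client):
        # Pull everything we need once, at startup, instead of having
        # the deploy tooling push secrets onto the host.
        self.secrets = {p: secrets_client.read(p) for p in self.REQUIRED}

client = FakeSecretsManager({"db/password": "hunter2", "api/key": "abc123"})
app = App(client)
```

The coupling cost mentioned in the post lives in that constructor: the app now needs a reachable secrets manager to boot at all.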

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

It's a special case of config management. It needs controls & auditing far beyond what's needed for "regular" config. And as someone mentioned above, it's also about getting that data where it's used in a secure way. Most config management software is not overly concerned with the security of the config itself, because the config is often not sensitive info. But secrets are sensitive, and need to be distributed to the same areas as regular config, so either config management tooling clumsily integrates with secrets managers to do that, or the system is architected to bypass the config management tooling entirely and talk directly to the secrets managers.

(We had a system where our config management + deployment scripts were horribly integrated with our secrets manager, because it was the config management tool that was ultimately responsible for putting those secrets onto the hosts in a way that our programs could use them. We inverted the model and instead get our programs to reach out to the secrets manager on startup to retrieve whatever secrets they need, and this has vastly simplified (and secured) our deployment scripts, at the cost of coupling our programs with a secrets manager. Seems worth it, so far.)
Sure, the sensitive data is stored in something that calls itself a vault and it could be used in a secure way, maybe. The software itself meets the requirements for FIPS something-or-other. You have some greater degree of confidence that the data is encrypted at rest someplace, but not everywhere. It's encrypted in transit from the source of truth, but honestly, any client system could be doing god-knows-what with the data on the other end, intentionally or unintentionally. You have a terrific audit log of what accessed the system directly from the source of truth, but not from any of the thousand places the data has gotten copied to, each of which is hopefully a hundred times easier to exfiltrate data from than your secrets store.

The only way to responsibly handle this is to use short-lived credentials, at which point you become a full identity broker. That's a great endgame! I'm unconvinced that in the space between "literally nothing" and "full identity broker", these products provide any value at all. They don't in any way touch the hard part of the problem.


Bhodi posted:

To me it's a sanitation problem. It transforms unsafe poop (say, plaintext passwords or root or admin access keys) into something that's safe to touch and use. Which would be awesome, except you still need to get the poop in there, which requires touching the poop, updating the poop, backing up the poop, and since it happily contaminates and gets into everything and is overall unpleasant to deal with, you can't use the same tools and automation that you use to handle the safe stuff, so it gets to be manual or have its own separate PPE systems which themselves have to be managed, monitored, secured and kept up to date. Oh, and you can't isolate it, because by its very nature it needs to be globally accessible.

This ecosystem is by its nature isolated from the rest of your automation, is a self-contained system with well-defined boundaries (did the poop touch?), doesn’t really need to interoperate, and so is a natural separate and isolated "problem space" ripe for a random product to come in and claim to “handle it” soup to nuts.
Right, an application has some amount of bits where you don't want people with access to the source code to also have access to those bits. There's good use cases for sticking those things into data stores on some kind of need-to-know basis. What defines "secrets management" as distinct from handling the secrecy of any other production data (ex. PHI) enough to warrant the usage of separate products and platforms?

Vulture Culture fucked around with this message at 22:10 on Jan 22, 2024

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Bhodi posted:

To me it's a sanitation problem. It transforms unsafe poop (say, plaintext passwords or root or admin access keys) into something that's safe to touch and use. Which would be awesome, except you still need to get the poop in there, which requires touching the poop, updating the poop, backing up the poop, and since it happily contaminates and gets into everything and is overall unpleasant to deal with, you can't use the same tools and automation that you use to handle the safe stuff, so it gets to be manual or have its own separate PPE systems which themselves have to be managed, monitored, secured and kept up to date. Oh, and you can't isolate it, because by its very nature it needs to be globally accessible.

This ecosystem is by its nature isolated from the rest of your automation, is a self-contained system with well-defined boundaries (did the poop touch?), doesn’t really need to interoperate, and so is a natural separate and isolated "problem space" ripe for a random product to come in and claim to “handle it” soup to nuts.

This is the best analogy of it I've ever heard. Love it.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

fletcher posted:

This is the best analogy of it I've ever heard. Love it.
:yeshaha:

Vulture Culture posted:

Right, an application has some amount of bits where you don't want people with access to the source code to also have access to those bits. There's good use cases for sticking those things into data stores on some kind of need-to-know basis. What defines "secrets management" as distinct from handling the secrecy of any other production data (ex. PHI) enough to warrant the usage of separate products and platforms?
Scope, and the ability to fully encapsulate it. You don't do provisioning with PII/PHI, and I've never seen a case where architectural data like tags or network names or instance names was considered "in scope" for PII/PHI. PII/PHI can be encapsulated or isolated in a variety of ways up and down your stack, to specific systems/repos/APIs considered "in-boundary", so there's typically no chicken-and-egg or bootstrap problem with using those credentials: there is always greater access to encapsulate them with. But "secrets management" is used at a much more fundamental level, often the most fundamental level within your infra (most often provisioning of cross-boundary items such as VPCs, datastores, and various platform services and deployment tools), which fundamentally cannot be encapsulated.

Bhodi fucked around with this message at 22:41 on Jan 22, 2024

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Vulture Culture posted:

You have a terrific audit log of what accessed the system directly from the source of truth, but not from any of the thousand places the data has gotten copied to, each of which is hopefully a hundred times easier to exfiltrate data from than your secrets store.

Generally the way you're supposed to work it is that systems outside of your secret store should not be copying these secrets willy-nilly. For secrets that are rarely used (e.g. a sign-in secret for a request that gives you a machine-specific token) you might read from the secrets store, attach the secret to the request that needs it, and then immediately discard it.

In no situation should your service be reading the secret and then handing it off to a different one of your services - if that other service legitimately needs the secret, it should be reading that secret from the secrets store itself. You enforce this through policy and security reviews of the code that interacts with the secrets store.

Ideally your secrets store should come with a client library that can be embedded in your other services and handles that integration in a best-practices way, rather than each service needing to hand-write their own implementation.

If your org has the opinion that "we have a secrets store so we don't need to care about how the code that interacts with the secrets store actually uses those secrets" then yes, your secrets store isn't buying you anything - but that's not the sort of fuckup that can be fixed by any technical measures (not even by short-lived secrets!).
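The read-attach-discard pattern described above, as a hedged sketch; the store and the outbound call are both fakes, and every name is invented:

```python
# Sketch of read-attach-discard: read the secret, attach it to the one
# request that needs it, and let it go out of scope immediately. In real
# code the reader would hit your secrets store and `send` would be an
# HTTP request.

SECRET_STORE = {"signin-secret": "s3cr3t"}

def sign_in(username, secret_reader, send):
    """Use the secret for exactly one request; never store or return it."""
    secret = secret_reader("signin-secret")
    response = send({"user": username, "secret": secret})
    del secret  # explicit, though scope exit would discard it anyway
    return response  # a machine-specific token, not the secret

def fake_send(request):
    # Stands in for the remote sign-in endpoint.
    return {"token": "machine-token-for-" + request["user"]}

token = sign_in("alice", SECRET_STORE.__getitem__, fake_send)
```

The important property is that the secret never crosses a service boundary: callers get back the short-lived token only.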

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Vulture Culture posted:

The only way to responsibly handle this is to use short-lived credentials, at which point you become a full identity broker. That's a great endgame! I'm unconvinced that in the space between "literally nothing" and "full identity broker", these products provide any value at all. They don't in any way touch the hard part of the problem.
We mostly use HashiCorp Vault as a KV store, and to be honest it doesn't provide significant value beyond what a generic KV store like Redis would provide if it had some authnz + HA + backup controls slapped around it. But it was nice that all of that came out of the box.

Using Vault as a credentials minter for short-term creds is convenient because Vault is also a secure place to store the permanent "bootstrap" creds required by the minter. AFAICT even if you arranged ephemeral creds for all apps, there'll always be a need for perma-creds to bootstrap the creds minter. So the value proposition of a secrets manager is partly as a way to handle those necessary perma-creds.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Jabor posted:

Generally the way you're supposed to work it is that systems outside of your secret store should not be copying these secrets willy-nilly. For secrets that are rarely used (e.g. a sign-in secret for a request that gives you a machine-specific token) you might read from the secrets store, attach the secret to the request that needs it, and then immediately discard it.

In no situation should your service be reading the secret and then handing it off to a different one of your services - if that other service legitimately needs the secret, it should be reading that secret from the secrets store itself. You enforce this through policy and security reviews of the code that interacts with the secrets store.

Ideally your secrets store should come with a client library that can be embedded in your other services and handles that integration in a best-practices way, rather than each service needing to hand-write their own implementation.

If your org has the opinion that "we have a secrets store so we don't need to care about how the code that interacts with the secrets store actually uses those secrets" then yes, your secrets store isn't buying you anything - but that's not the sort of fuckup that can be fixed by any technical measures (not even by short-lived secrets!).
I'm not saying (and, certainly, neither is my org) that apps shouldn't care about handling secrets securely, I'm asserting that secrets storage products don't meaningfully contribute to whether or not that happens. The marketers selling the audit log features sure pretend that audit log is a reflection of reality, though. It's not even a bronze bullet for detecting misuse of a secret by a bad actor.

12 rats tied together
Sep 7, 2006

Audit logging is usually useless by itself; the challenge is making sure you have audit logs that can be correlated across your various internal tools in such a way that they end up at a change request with an author and an approver. Misuse of credentials is a human problem, so it needs a human solution (a write-up).

The old tools are usually the best tools for this and they're usually free as well, auditd probably comes included on your OS for example. Hopefully you have a security team for this.

If I were tasked with evaluating a system that cannot meaningfully satisfy our compliance requirements by, for example, including a reference to a Jira issue or a pull request with its administrative actions, I would say that the system is dog poo poo and should not be used in production.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
I don't know how useful it is for secrets managers to have comprehensive audit trails. Practically speaking, whenever I've seen a leaked credential they would not have helped, because it's been either:
- a dipshit engineer treating a perma-secret as regular data (e.g. adding a credential to some dev / CI script that they mistakenly upload to a public Git repo)
- leaked in logs. CI is a common problem here because everyone dials their debug levels up to the max so more stuff gets logged, including "dump the environment to aid debugging". And some CI systems log to public buckets because "it's just build results, it's not sensitive, we don't want to make life harder for people debugging stuff by adding an authnz layer over it".
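One partial mitigation for the "leaked in logs" case is scrubbing known secret values at the logging layer. A blunt, illustrative sketch using Python's stdlib `logging` (the secret value is obviously fake):

```python
import logging

class RedactSecrets(logging.Filter):
    """Replace known secret values with *** before a record is emitted.

    A blunt instrument, but it catches the "dump the environment to aid
    debugging" class of leak.
    """
    def __init__(self, secrets):
        super().__init__()
        self._secrets = list(secrets)

    def filter(self, record):
        msg = record.getMessage()
        for secret in self._secrets:
            msg = msg.replace(secret, "***")
        record.msg, record.args = msg, None
        return True

logger = logging.getLogger("ci")
logger.addFilter(RedactSecrets(["AKIAEXAMPLEKEY"]))
```

This doesn't help with secrets you didn't know to register, which is why it's a mitigation and not a fix.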

Docjowles
Apr 9, 2009

minato posted:

I don't know how useful it is for secrets managers to have comprehensive audit trails. Practically speaking, whenever I've seen a leaked credential they would not have helped, because it's been either:
- a dipshit engineer treating a perma-secret as regular data (e.g. adding a credential to some dev / CI script that they mistakenly upload to a public Git repo)
- leaked in logs. CI is a common problem here because everyone dials their debug levels up to the max so more stuff gets logged, including "dump the environment to aid debugging". And some CI systems log to public buckets because "it's just build results, it's not sensitive, we don't want to make life harder for people debugging stuff by adding an authnz layer over it".

Going off on a tangent but at a past job we totally had a careless dev commit admin IAM keys to public GitHub. It was impressive how quickly they were found and used to spin up crypto miners in every region. Also the dev team had previously requested limit increases to 1000 instances of every family and size in every region “just in case” lmao. The breach was immediately caught by billing alerts of all things saying we were going to blow past the monthly budget by like 500k. They had access for less than half an hour and racked up an insane bill which Amazon kindly refunded after giving us a stern talking to

I don’t know why I keep falling for jobs where devs with zero operational or security knowledge are handed the keys to the Ferrari. In this case it was an acquired startup that negotiated strong autonomy as part of the deal. Turns out this was a bad idea!!!

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Docjowles posted:

Going off on a tangent but at a past job we totally had a careless dev commit admin IAM keys to public GitHub. It was impressive how quickly they were found and used to spin up crypto miners in every region.
This happens because GitHub (for reasons I'm not clear on) publishes a public event log of every public change that happens in every single GitHub repo, and hackers listen to it and scan for leaked credentials. Consequently, any leaked credentials get used almost instantly. AWS is thankfully aware of this, and now listens to the same event log. They will quickly quarantine any AWS credentials they find by applying an IAM policy that effectively makes the key read-only, and then inform the account owner.

When we do forensics on these kinds of incidents, it's common to find that the culprit was trying to cobble together some CI pipeline and had no idea how to inject credentials into it, so they just hardcoded them into the script.
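The scanners described above mostly boil down to pattern matching on well-known key formats. For illustration, long-term AWS access key IDs are 20 characters starting with the `AKIA` prefix, so a toy scanner might look like:

```python
import re

# Long-term IAM user access key IDs start with AKIA followed by 16
# uppercase alphanumerics; this is the kind of pattern that public
# event-feed scanners watch for.
AWS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_leaked_keys(text):
    """Return any strings in `text` that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)
```

A real scanner would also look for the paired secret key nearby, but the key ID alone is enough to raise an alarm.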

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

Scope and the ability to fully encapsulate it. You don't do provisioning with PII/PHI and I've never seen something where architectural data like tags or network names or instance names was considered "in scope" for PII/PHI. PII/PHI can be encapsulated or isolated in a variety of ways up and down your stack, to specific systems/repos/APIs considered "in-boundary" and so there's typically no chicken-egg or bootstrap problem with using those credentials because there is always greater access to encapsulate it with, but "secrets management" is used at a much more fundamental, often the most fundamental level within your infra (most often provisioning of cross-boundary items such as VPCs, datastores, and various platform services and deployment tools) that fundamentally cannot be encapsulated.
There's always a chicken-egg bootstrap problem. The secrets manager itself is authenticated and always needs to be bootstrapped, as do all of its clients, whether you're talking about Vault on-prem or in the cloud, AWS Secrets Manager, or the org administrator account for LastPass. Considering the bootstrap dependency chain a little more holistically, it does seem useful from a DR perspective to say that all the identity information you need to facilitate cross-service access all lives in one place and can be pretty much guaranteed not to result in a circular dependency situation.

I had been thinking of a secrets store principally as some type of security tool, but hadn't considered its utility as a DR/BC or reliability tool. So I'll notch a point for secrets being centrally-managed rather than distributed, but I'm still unclear what, beyond just conceptual separation of duties, makes any of these tools better than files in object storage.
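The bootstrap-chain point can be made concrete: model each component as depending on whatever it needs credentials from, and the chain is only sound if it bottoms out at some root credential with no cycle back into itself. A toy sketch (not any real Vault or Secrets Manager API; the component names are made up):

```python
from collections import deque

def bootstrap_order(deps: dict[str, set[str]]) -> list[str]:
    """Topologically sort components by 'needs credentials from' edges.

    Raises ValueError if the chain is circular, i.e. nothing can be
    bootstrapped first.
    """
    indegree: dict[str, int] = {}
    dependents: dict[str, set[str]] = {}
    for comp, needs in deps.items():
        indegree[comp] = len(needs)
        for d in needs:
            indegree.setdefault(d, 0)  # deps with no entry need nothing
            dependents.setdefault(d, set()).add(comp)
    # roots: things needing nothing, e.g. a master credential kept offline
    ready = deque(sorted(c for c, n in indegree.items() if n == 0))
    order: list[str] = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for m in sorted(dependents.get(c, ())):
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(indegree):
        raise ValueError("circular bootstrap dependency")
    return order

# hypothetical chain: the app pulls from the secrets store, which is
# unsealed with a root credential that lives outside the system entirely
chain = {"app": {"secrets-store"}, "secrets-store": {"root-cred"}}
```

The DR argument above is exactly this property: if all cross-service identity lives in one store, you can verify the whole graph has a single root and no cycle, instead of discovering a circular dependency mid-recovery.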

Hadlock
Nov 9, 2004

Where are you putting your shared secrets (AWS master password, master admin account logins and passwords, etc.)? In the past this was typically 1Password or LastPass, but those tend to be obvious targets for bragging rights, and expensive. Using a shared Google doc seems janky

LochNessMonster
Feb 3, 2005

I need about three fitty


Hadlock posted:

Where are you putting your shared secrets (AWS master password, master admin account logins and passwords, etc.)? In the past this was typically 1Password or LastPass, but those tend to be obvious targets for bragging rights, and expensive. Using a shared Google doc seems janky

Current gig: on prem cyberark

Previous gig: sealed envelope inside a physical vault that needed to be opened by 2 keys belonging to people from different parts of the org (so nobody in IT could single-handedly obtain both keys). Access to the vault and its contents was administered and checked by a security admin weekly, and audited by a 3rd party (randomly picked from the Big 4) each quarter.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Vulture Culture posted:

I had been thinking of a secrets store principally as some type of security tool, but hadn't considered its utility as a DR/BC or reliability tool. So I'll notch a point for secrets being centrally-managed rather than distributed, but I'm still unclear what, beyond just conceptual separation of duties, makes any of these tools better than files in object storage.
I won't say better, but like 12 rats says, it's a product space with a logical boundary that is similar if not identical across various companies, which makes it easy to develop and sell a product that fits that niche. If it's popular enough, it becomes a known quantity that auditors will expect (when properly configured) to have a baseline level of security competence, and it comes with its own inheritable controls/certifications/compliance for FIPS/FedRAMP/PCI. You can "roll your own" solution, but I know you just winced at that phrase, because everyone here knows how homegrown poo poo is inconsistent and gets hosed up in some subtle way, and now the onus is on you to convince your auditors that your process is just as good as an "industry established" one.

Or you can just buy cyberark, no one gets blasted for buying cyberark and you won't have to write a book of justification for it for external auditors who immediately break out the fine-toothed combs, magnifying glasses and guarded expressions.

If you don't have external auditors though the pressing business need to just buy a solution is a lot less pressing.

Bhodi fucked around with this message at 16:48 on Jan 24, 2024

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Secrets are a particular class of data with both regulatory requirements and well-defined access patterns, such that reporting, auditability, and potentially stronger cryptographic guarantees must be enforced. Object storage can be left unencrypted, whereas we can't let that be a possibility for secrets, or someone needs to be held legally liable for not encrypting them, like uh... LastPass (seriously, why they haven't been sued into oblivion is beyond me).

In the end I think that there's a warm fuzzy feeling factor more than objective facts that guides security products way more than a lot of other product categories in enterprise space, which is a bit alarming and frustrating as someone that works in security.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
We know we're secure because we checked that "encrypt storage" checkbox in AWS

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Cunningham's Law hard at work, thanks everyone

12 rats tied together
Sep 7, 2006

This is a misapplication of cunningham because you were right, actually.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

12 rats tied together posted:

This is a misapplication of cunningham because you were right, actually.

But Cunningham's law is actually bullshit. In my experience, instead of posting the actual right answer, some pedantic rear end in a top hat says something like "Cunningham's law is bullshit" and declares the provided answer wrong without giving the right solution.

12 rats tied together
Sep 7, 2006

Well at least this time I know the rear end in a top hat isn't me, because I did the exact opposite :)

There's a fine line between posting a hot take on purpose to spur discussion and just threadshitting. I assume at this point that everyone ITT knows I will field any question with "actually ansible solved this problem in 2015 and [...]", or whatever, and if you wanted the explanation you would ask for it directly. :cheers:

But, I think other than me, people should feel free to post hot takes or half-formed thoughts ITT. I think the posters in here do a pretty good job of not making GBS threads on each other or being unnecessarily rude. You shouldn't have to cunningham anybody.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
I'm a little confused but I hope this wasn't about my obvious single sentence shitpost/joke


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

12 rats tied together posted:

This is a misapplication of cunningham because you were right, actually.
There's a fine line between "secrets management products are snake oil and barely help you achieve the things you buy one for" (true) and "you should just keep all your secrets in S3 and production databases" (false, but I wouldn't tut-tut you if you did)
