|
Per-release AMI baking is how Netflix does it: they don't modify running instances, they just phase out their running fleets w/ new AMIs. They've got a couple of posts about their baking / release pipeline: http://techblog.netflix.com/2016/03/how-we-build-code-at-netflix.html http://techblog.netflix.com/2013/03/ami-creation-with-aminator.html "The bakery reduced AMI creation time to under 5 minutes. This improvement led to further automation by engineers around Netflix who began scripting bakery calls in their Jenkins builds. Coupled with Asgard deployment scripts, by committing code to SCM, developers can have the latest build of their application running on an EC2 instance in as little as 15 minutes."
|
# ¿ Mar 10, 2017 16:09 |
|
|
A single instance in a personal account (and I'm assuming not a big monthly spend)? Probably not much of a chance for a refund. I've been at places where AWS waived the charges because someone leaked their keys and got a fuckload of coin miners spun up, but not for a single DBA dipshitting a big RDS instance in the wrong region and forgetting about it. I think AWS makes a rough distinction between maliciousness and mistake, weighed against how much you pay them a month.
|
# ¿ May 10, 2018 20:03 |
|
Votlook posted:What is a good way to manage ssh access to ec2 servers? I've seen this used in the wild: https://github.com/widdix/aws-ec2-ssh Assumes you're using individual IAM users, etc, and doesn't gently caress around w/ userdata and other stuff like OpsWorks does.
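The core trick in that tool is an sshd AuthorizedKeysCommand that pulls a user's public keys out of IAM instead of a local authorized_keys file. A minimal sketch of that idea, assuming boto3 and instance-role creds (the script path and filenames are illustrative, not the tool's own):

```python
"""Hypothetical AuthorizedKeysCommand: print an IAM user's active SSH keys.
Sketch of the approach aws-ec2-ssh takes; names and paths are made up."""
import sys


def format_authorized_keys(key_bodies):
    """Join public-key bodies into authorized_keys format (one per line)."""
    return "\n".join(body.strip() for body in key_bodies)


def fetch_iam_ssh_keys(username):
    """Fetch the active SSH public keys for an IAM user (needs AWS creds)."""
    import boto3  # imported here so the pure helper above works without it
    iam = boto3.client("iam")
    bodies = []
    for meta in iam.list_ssh_public_keys(UserName=username)["SSHPublicKeys"]:
        if meta["Status"] != "Active":
            continue
        key = iam.get_ssh_public_key(
            UserName=username,
            SSHPublicKeyId=meta["SSHPublicKeyId"],
            Encoding="SSH",
        )
        bodies.append(key["SSHPublicKey"]["SSHPublicKeyBody"])
    return bodies


if __name__ == "__main__" and len(sys.argv) > 1:
    # sshd invokes this as: AuthorizedKeysCommand /usr/local/bin/iam_keys.py %u
    print(format_authorized_keys(fetch_iam_ssh_keys(sys.argv[1])))
```

You'd point `AuthorizedKeysCommand` (and `AuthorizedKeysCommandUser`) in sshd_config at something like this; key revocation then becomes "deactivate the key in IAM".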
|
# ¿ Jun 21, 2018 11:31 |
|
GCP will also give you a $300 credit (for 1yr) on signup and their free-tier is reasonable, although no managed DBMS is in the free-tier: https://cloud.google.com/free/
|
# ¿ Jul 11, 2018 15:05 |
|
My team has/had that problem because SSM secrets were shared between ECS and Jupyter notebooks and were JSON strings, too. (Previous devs were really big on having single-sources of connection/credential info because we deal with a lot of external data sources.) I ended up writing this https://github.com/ian-d/ecs-template for use in our ECS containers as a lightweight entrypoint to pull / parse / templatize poo poo from SSM instead of baking it into the apps themselves. Keeps the apps more 12 Factor-ish and makes local testing easier since I could just rely on ENV vars and not SSM locally.
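The gist of what an entrypoint like that does, as a rough sketch (parameter name and the `DB_` prefix are placeholders, not what we actually use): fetch the JSON blob from SSM, flatten it into env vars, then exec the real command.

```python
"""Sketch of a container entrypoint that pulls a JSON-string parameter from
SSM Parameter Store and exports its keys as env vars before handing off to
the app. Parameter names and the DB_ prefix are illustrative."""
import json
import os


def json_to_env(json_blob, prefix=""):
    """Flatten a JSON object of scalars into an env-var dict."""
    return {f"{prefix}{k.upper()}": str(v) for k, v in json.loads(json_blob).items()}


def fetch_parameter(name):
    """Fetch a (possibly SecureString) SSM parameter value (needs AWS creds)."""
    import boto3  # local import so json_to_env stays usable without boto3
    ssm = boto3.client("ssm")
    return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]


if __name__ == "__main__":
    param = os.environ.get("APP_SSM_PARAM")  # e.g. /myapp/db
    if param:
        os.environ.update(json_to_env(fetch_parameter(param), prefix="DB_"))
    # then exec the real container command, e.g.:
    # os.execvp(sys.argv[1], sys.argv[1:])
```

Locally you just set the `DB_*` vars yourself and skip SSM entirely, which is the 12 Factor-ish part.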
|
# ¿ Jan 24, 2020 13:08 |
|
I use Aurora Serverless for a couple of temporary, rarely-used reporting instances. The only problem we've had is timeouts on scale-from-zero / resume operations. Most clients have a default timeout that's shorter than Aurora's resume spin-up time. Make sure client timeouts are set to 30s and it's usually fine. We also run into an occasional resume error from the server when it takes too long: "Database was unable to resume within timeout period", so you may need to bake in a connection retry in your client. Pooled connections would probably "just work", but we're using SQLAlchemy engines in one-shot ETLs.
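The retry we bake in is roughly this shape (a driver-agnostic sketch; the error string matched below is the one we saw, and your driver may wrap it differently, so treat the match as an assumption):

```python
"""Generic connect-with-retry wrapper for Aurora Serverless resume timeouts.
Sketch only: narrow the except clause to your driver's OperationalError."""
import time


def connect_with_retry(connect, retries=3, delay=5.0, sleep=time.sleep):
    """Call connect(); on a resume-timeout style error, wait and try again."""
    last_err = None
    for attempt in range(retries):
        try:
            return connect()
        except Exception as err:
            last_err = err
            if "unable to resume" not in str(err).lower():
                raise  # not a scale-from-zero timeout; don't mask real failures
            if attempt < retries - 1:
                sleep(delay)  # give the cluster time to finish resuming
    raise last_err
```

With SQLAlchemy you'd wrap `engine.connect` in this and also pass a generous `connect_timeout` through `connect_args` so the first attempt has a chance to succeed at all.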
|
# ¿ Feb 8, 2020 02:06 |
|
Just-In-Timeberlake posted:The NAT gateway associated with that VPC has a static IP address assigned to it Why? Inbound traffic to the NAT Gateway isn't somehow going to trigger your lambda or get routed via API Gateway. Running a Lambda function inside a VPC is so it can be part of your private network because it needs ip/network-level access to something that it couldn't otherwise reach. If you just want a custom domain for your API Gateway: https://aws.amazon.com/premiumsupport/knowledge-center/custom-domain-name-amazon-api-gateway/ https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html
|
# ¿ Aug 21, 2020 16:42 |
|
Hed posted:I'd like to run a corporate Django site on Fargate, does AWS have anything like Azure App Proxy? ALBs can do OIDC "directly" or other options (SAML, LDAP, etc) by bouncing through Cognito: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html. It also signs the resulting headers so you can validate it on the app side to ensure the request actually passed through the ALB auth flow.
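For the header-validation part: the ALB puts the user claims in an `x-amzn-oidc-data` header as an ES256-signed JWT, with the public key served from a per-region AWS endpoint. A rough sketch of both halves, assuming PyJWT is available for the actual signature check:

```python
"""Sketch of validating the x-amzn-oidc-data header an ALB adds after its
auth step. Decoding the claims is pure stdlib; verification (the part that
proves the request really came through the ALB auth flow) needs the ES256
public key AWS serves per-region, plus a JWT library such as PyJWT."""
import base64
import json


def _b64url_decode(segment):
    """Base64url-decode a JWT segment, restoring any stripped padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def decode_claims_unverified(token):
    """Parse the JWT claims WITHOUT signature verification (inspection only)."""
    header_b64, payload_b64, _sig = token.split(".")
    return json.loads(_b64url_decode(payload_b64))


def verify_alb_token(token, region):
    """Verify the signature against the ALB's regional public-key endpoint."""
    import urllib.request
    import jwt  # PyJWT; assumed installed in the app image
    kid = json.loads(_b64url_decode(token.split(".")[0]))["kid"]
    url = f"https://public-keys.auth.elb.{region}.amazonaws.com/{kid}"
    pub_key = urllib.request.urlopen(url).read().decode()
    return jwt.decode(token, pub_key, algorithms=["ES256"])
```

In the app you'd call `verify_alb_token` per request (caching the fetched key) and reject anything that doesn't verify, since headers alone are spoofable if something other than the ALB can reach the target.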
|
# ¿ Jun 9, 2022 13:41 |
|
Is the bucket using a custom KMS key for encryption? If it is, then that key also needs a resource policy that grants access to the other account's principal. You won't get a KMS-specific error either, just the regular forbidden error.
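The missing piece is usually an extra statement in the key policy itself. A sketch of what that grant looks like and how you'd append it with boto3 (account ID and role name are placeholders):

```python
"""Sketch of the extra key-policy statement a customer-managed KMS key
needs so another account's principal can decrypt objects in the bucket.
The principal ARN here is a placeholder."""
import json


def cross_account_kms_statement(principal_arn):
    """Key-policy statement letting another account's role use the key."""
    return {
        "Sid": "AllowCrossAccountDecrypt",
        "Effect": "Allow",
        "Principal": {"AWS": principal_arn},
        "Action": ["kms:Decrypt", "kms:DescribeKey"],
        "Resource": "*",  # in a key policy, "*" means this key itself
    }


def grant_cross_account_access(key_id, principal_arn):
    """Append the statement to the key's existing policy (needs AWS creds)."""
    import boto3
    kms = boto3.client("kms")
    policy = json.loads(
        kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
    )
    policy["Statement"].append(cross_account_kms_statement(principal_arn))
    kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```

The other account's principal still needs `kms:Decrypt` in its own IAM policy too; cross-account KMS access requires both sides to allow it.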
|
# ¿ Sep 20, 2022 23:14 |
|
|
BaseballPCHiker posted:There are a lot of tools out there (names escaping me ) that can basically look back at what API calls a principal has made and then give you a recommendation as well. The AWS-provided one is Access Analyzer: https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html. I think Netflix's repokid was the precursor, but Access Analyzer's pretty good. I wish I could find it again, but I think Netflix also promoted an IAM policy pattern that was basically "make a policy per principal": trying to make "shared" cross-cutting policies to attach to multiple principals inevitably diverges, and since you're managing these policies in some declarative/IaC fashion (right?!), having a policy per principal actually makes change tracking / control easier. (This advice is primarily targeted at workload roles.)
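For a cheap version of the "look back at what a principal actually used" idea, plain IAM also has the service-last-accessed APIs (Access Analyzer's policy generation is the fancier CloudTrail-based take). A hedged sketch with boto3; the role ARN is a placeholder:

```python
"""Sketch: find services a role has permissions for but has never used,
via IAM's service-last-accessed report. The summarizer is pure; the fetch
needs AWS creds and an iam:GenerateServiceLastAccessedDetails allow."""


def unused_services(services_last_accessed):
    """Return service namespaces the principal has never authenticated to."""
    return [
        s["ServiceNamespace"]
        for s in services_last_accessed
        if "LastAuthenticated" not in s  # absent when never used
    ]


def fetch_last_accessed(role_arn):
    """Kick off and poll a service-last-accessed report for a role."""
    import time
    import boto3
    iam = boto3.client("iam")
    job_id = iam.generate_service_last_accessed_details(Arn=role_arn)["JobId"]
    while True:
        report = iam.get_service_last_accessed_details(JobId=job_id)
        if report["JobStatus"] != "IN_PROGRESS":
            return report["ServicesLastAccessed"]
        time.sleep(2)
```

The output is per-service rather than per-action, so it's a starting point for trimming a role, not a finished least-privilege policy.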
|
# ¿ Feb 15, 2024 18:54 |