|
I'm looking to transition from storing some application-required API keys as environment variables (and having each developer manually add them to a global, non-source-controlled config file) to storing them in AWS Secrets Manager. My question is: what are the best practices for allowing the application to retrieve these keys when it's running locally on a developer machine? I'm using the AWS SDK for .NET, which grabs AWS credentials from the ~/.aws folder, and I have my application configured to look for a profile named "my app" there. Should I be passing the same AWS access key and secret access key to each developer? Or should I create a separate IAM user for each developer, attach a policy granting Secrets Manager access, and have them add their own access key and secret key? Or am I missing some third, easier option?
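To make the second option concrete, this is roughly what each developer would end up with; the profile name and secret ID below are placeholders, not actual names from my setup:

```shell
# Per-developer setup (never shared, never committed):
# each dev generates their own access key on their own IAM user and puts it
# in ~/.aws/credentials under the profile name the app looks for, e.g.:
#   [my-app]
#   aws_access_key_id     = <their own key>
#   aws_secret_access_key = <their own secret>

# Sanity checks that the profile resolves and can actually read the secret
# ("my-app/api-keys" is a made-up secret name):
aws sts get-caller-identity --profile my-app
aws secretsmanager get-secret-value \
    --secret-id my-app/api-keys \
    --query SecretString \
    --output text \
    --profile my-app
```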
|
# Apr 15, 2021 15:15 |
|
|
Is there a way to default to a different credentials profile for a specific subdirectory when using the CLI, besides passing the --profile flag on each command or setting an environment variable at the beginning of the terminal session? Like, can I have a local .aws config that overrides the one in my home directory?
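As far as I can tell the CLI has no built-in per-directory config lookup, but a tool like direnv can export the environment variable automatically whenever you cd into the project, which amounts to the same thing ("my-app" is a placeholder profile name):

```shell
# Contents of a .envrc in the project root (direnv runs this on cd):
export AWS_PROFILE=my-app
# or point at an entirely separate credentials/config file pair:
# export AWS_SHARED_CREDENTIALS_FILE="$PWD/.aws/credentials"
# export AWS_CONFIG_FILE="$PWD/.aws/config"

# After `direnv allow`, every aws command run in this directory picks up the
# profile automatically, e.g. `aws sts get-caller-identity` would resolve
# credentials via the my-app profile.
```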
|
# Dec 23, 2021 21:13 |
|
I'm working on migrating some services from being manually provisioned via the AWS console to using CDK instead. The application architecture is a web-facing service running on ECS that puts jobs into an SQS queue, and a backend service running on ECS that retrieves jobs from the queue and processes them. So far I'm implementing this in 3 tiers of stacks: one top-level stack for resources shared company-wide across multiple applications (VPC, an S3 scratch bucket, etc.), one application-level "shared" stack that sets up the SQS queues, permissions, and ECR repositories for both halves of the application code, and finally a stack each for the web API and backend-processing ECS deployments.

The ECS stack requires a task definition that points to the image in ECR, so when the application code changes, we build and tag a new Docker image and push it to ECR. But afterwards, what is the "correct" way to update the running tasks? Should the ECS task definition be updated by running cdk deploy, or by running the aws ecs update-service CLI command? We had a consultant help set this up initially, but they left us with a deployment stage using both methods, which seems like overkill. Plus, deploying via the ECS stack resets the number of desired instances, so I feel like going CLI-only for application version updates is the correct way.

Regardless of the deployment method, I've found that I also need to store the latest version tag in SSM so that if we do update anything in the CDK stack (things like instance type, scaling parameters, etc.), the task definition can find the correct latest version. But I guess my main question is: how close is this setup to "standard", and is it supposed to feel this convoluted?
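For reference, the CLI-only flow I'm leaning toward would look roughly like this; the cluster, service, repo, and parameter names are placeholders:

```shell
# Tag the new image, record the tag in SSM so a later `cdk deploy` resolves
# the same version, then roll the service.
TAG=$(git rev-parse --short HEAD)

docker build -t "$ECR_REPO:$TAG" .   # assumes ECR_REPO is set and docker is logged in to ECR
docker push "$ECR_REPO:$TAG"

aws ssm put-parameter --name /my-app/image-tag --value "$TAG" \
    --type String --overwrite

# If the task definition references a floating tag, forcing a new deployment
# is enough to pull the new image without touching CloudFormation:
aws ecs update-service --cluster my-cluster --service my-web-service \
    --force-new-deployment
```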
|
# Mar 15, 2022 13:45 |
|
Working on setting up an ECS service with an Auto Scaling Group. Both the ASG and the service require a security group, and the application will need to send and receive traffic to and from EFS and SQS. Should the Auto Scaling Group and the ECS service be in the same security group? I'm coming from a rewrite of a bunch of CDK code that was given to us by a consultant who might have been doing this for the first time, so I have no idea what's correct and what's not. The current setup is that EFS, SQS, the ECS service, and the Auto Scaling Group are each in their own security group, with a web of inbound/outbound permissions on each.
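For context, one strand of that web looks like this as a plain CLI call (the SG IDs are placeholders for ours):

```shell
# Let tasks on the instances reach EFS over NFS: the rule lives on the EFS
# security group and references the service's group as the source.
aws ec2 authorize-security-group-ingress \
    --group-id sg-EFS \
    --protocol tcp --port 2049 \
    --source-group sg-SERVICE

# SQS is a regional HTTPS API rather than a VPC resource, so there's no
# SQS-side security group; the service SG just needs outbound 443 (or a VPC
# endpoint for SQS).
```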
|
# Apr 8, 2022 14:39 |
|
This might be a dumb question. Configuring some ECS services and trying to figure out the security group permissions I need. I have an ALB which has its own security group, then an Auto Scaling Group, which has its own, and finally a service with its own. Who needs to talk to whom to get this sorted out? I have the ALB set up to allow inbound traffic on port 80, the ASG to allow inbound traffic from the ALB SG on port 80, and the Service to also allow inbound traffic from the ALB on port 80. But I'm unable to hit any API endpoints running on the service, so I'm pretty sure something is misconfigured, but I don't really understand how the 3 components talk to each other.
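A sketch of what I have so far, plus the variant I suspect might be the missing piece (assuming the EC2 launch type with bridge networking and dynamic host ports; SG IDs are placeholders):

```shell
# What I have: port 80 from the ALB's group into the instances' group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-ASG_INSTANCES \
    --protocol tcp --port 80 \
    --source-group sg-ALB

# The suspected gotcha: with dynamic host port mapping, the ALB doesn't hit
# the instances on port 80 at all, it hits whatever ephemeral host port ECS
# assigned (typically 32768-65535), so that whole range has to be open:
aws ec2 authorize-security-group-ingress \
    --group-id sg-ASG_INSTANCES \
    --protocol tcp --port 32768-65535 \
    --source-group sg-ALB
```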
|
# Jul 12, 2022 21:02 |
|
Plank Walker posted: "This might be a dumb question. Configuring some ECS services and trying to figure out the security group permissions I need. I have an ALB which has its own security group, then an Auto Scaling Group, which has its own, and finally a service with its own. Who needs to talk to whom to get this sorted out?"

I figured this out (maybe? idk, but I can hit the service, so it works). Maybe someone more knowledgeable can confirm that my understanding is correct: the Application Load Balancer is in Security Group A, which is open to outside traffic for requests. The Auto Scaling Group has Security Group B, which can receive traffic from A. This security group is applied to every EC2 instance that gets brought online for the application (I think?). The ECS service is in Security Group C, which is set up to receive traffic from B and allows the EC2 instance to pass requests to the service running within it (also I think?). Now let's say I have some other resource that I want to talk to the service: which security group do I allow traffic from this resource in?
|
# Jul 16, 2022 15:05 |
|
I'm running a container on ECS and would like to get the EC2 instance ID for logging. The container is running .NET Core, and the AWS SDK for .NET exposes an Amazon.Util.EC2InstanceMetadata.InstanceId property, but this appears to return null. I'm assuming this is because it's running in a container and not directly on the instance. Any idea what methods I can use to get this instance ID? Some Stack Overflow answers mention querying http://169.254.169.254/latest/meta-data/instance-id, but 1) I'm not sure whether this will work from inside the container, and 2) testing this requires trial and error on another deployment, so I'd rather have some idea upfront whether it will work or not.
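For reference, the Stack Overflow approach would look like this; with IMDSv2 (the current default) you need a session token first, and I'm not sure the extra hop works from inside the container:

```shell
# Request an IMDSv2 session token, then use it to read the instance ID.
TOKEN=$(curl -sS -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -sS -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/instance-id
# Note: a container adds a network hop, so the instance's metadata hop limit
# (HttpPutResponseHopLimit) may need to be >1 for this to succeed.
```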
|
# Feb 29, 2024 16:03 |
|
Happiness Commando posted: "Dumb question but are you sure you're running on EC2 vs Fargate?"

Yep, definitely EC2
|
# Feb 29, 2024 17:02 |
|
Plank Walker posted: "I'm running a container on ECS and would like to get the EC2 instance ID for logging. The container is running .NET Core, and the AWS SDK for .NET exposes an Amazon.Util.EC2InstanceMetadata.InstanceId property, but this appears to return null. I'm assuming this is because it's running in a container and not directly on the instance."

So I figured out my issue: I needed to set ECS_ENABLE_CONTAINER_METADATA=true in the file /etc/ecs/ecs.config on the EC2 instances that were hosting my containers. The only way I could find to do that was to add commands to the user data section of the Auto Scaling Group configuration in CloudFormation/CDK. This ended up populating the field Amazon.Util.EC2InstanceMetadata.InstanceId in my .NET code, so no need to mess around with reading and parsing JSON from the internal metadata URLs.
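The user data addition ended up looking roughly like this (cluster name changed to a placeholder):

```shell
#!/bin/bash
# Runs at instance boot via the ASG launch template, before the ECS agent
# starts, so the agent picks these settings up from /etc/ecs/ecs.config.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
echo "ECS_ENABLE_CONTAINER_METADATA=true" >> /etc/ecs/ecs.config
```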
|
# Mar 7, 2024 18:10 |
|
Been migrating a bunch of AWS resource creation to CDK/CloudFormation vs. manual provisioning, and noticed we are getting hit with a big bill for S3 access, so I'm trying to add a VPC gateway endpoint. Working off a Stack Overflow answer here: https://stackoverflow.com/a/72040360/2483451. Is it sufficient to just add the gateway endpoint to the VPC configuration, or do I have to add some reference to the VPC to the S3 construct as well? The comment on Stack Overflow says the VPC configuration is all that's necessary, but without much more detail.
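For reference, the CLI equivalent of what I think the CDK change amounts to (region and IDs are placeholders); the bucket itself isn't referenced anywhere:

```shell
# Create an S3 gateway endpoint in the VPC and attach it to the route tables
# the tasks use; routing to S3 then goes through the endpoint automatically.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
```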
|
# Apr 5, 2024 14:06 |
|
|
Docjowles posted: "There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There's a free endpoint for DynamoDB too if you use that service."

Yeah, we had a similar issue with S3 KMS key caching; it's a one-liner to add, but miss it and oops, now you're getting charged for a KMS key retrieval every time you do anything with S3.
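Assuming the one-liner in question is S3 Bucket Keys (my guess, since that's the bucket-level KMS data key cache), the CLI equivalent would be something like this (bucket and key alias are placeholders):

```shell
# Enable a bucket key alongside default SSE-KMS encryption, so S3 caches the
# KMS data key at the bucket level instead of calling KMS per object request.
aws s3api put-bucket-encryption \
    --bucket my-bucket \
    --server-side-encryption-configuration '{
      "Rules": [{
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "alias/my-key"
        },
        "BucketKeyEnabled": true
      }]
    }'
```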
|
# Apr 5, 2024 16:11 |