Plank Walker
Aug 11, 2005
I'm looking to move some application-required API keys into AWS Secrets Manager, replacing our previous method of storing them as environment variables and having each developer manually add them to a global, non-source-controlled config file. My question is: what are the best practices for allowing the application to retrieve these keys when it's running locally on a developer machine?

I'm using the AWS SDK for .NET, which grabs AWS credentials from the ~/.aws folder, and I have my application configured to look for a profile named "my app" there. Should I be passing the same AWS access key and secret access key to each developer? Or should I create a separate IAM user for each developer, attach the Secrets Manager access policy, and have them add their own access key and secret key? Or am I missing some third, easier option?
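For reference, the separate-IAM-user option would mean each developer keeps their own keys in their own credentials file under a named profile. A sketch with the key values elided (profile name shortened to avoid the space, and the region is an assumption):

```ini
# ~/.aws/credentials (each developer's own keys, never shared or committed)
[my-app]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[profile my-app]
region = us-east-1
```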

Plank Walker
Aug 11, 2005
Is there a way to default to a different credentials profile for a specific subdirectory for the CLI besides using the --profile flag on each command or setting an environment variable at the beginning of the terminal session? Like, can I have a local .aws file that overrides the one in my home directory?
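For what it's worth, as far as I know the CLI only resolves --profile, the AWS_PROFILE environment variable, and the AWS_CONFIG_FILE / AWS_SHARED_CREDENTIALS_FILE overrides; there's no built-in per-directory config file. A per-directory default is usually faked with something like direnv exporting the variable whenever you cd into the directory. A minimal sketch (profile name is invented):

```shell
# Contents you'd put in .envrc at the repo root (requires direnv; run `direnv allow` once).
export AWS_PROFILE=my-app

# Every aws invocation in this directory tree now defaults to that profile,
# e.g. `aws sts get-caller-identity` without any --profile flag.
echo "AWS_PROFILE is now: ${AWS_PROFILE}"
```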

Plank Walker
Aug 11, 2005
I'm working on migrating some services from being manually provisioned via the AWS console to using CDK instead. The application architecture is a web-facing service running on ECS that puts jobs into an SQS queue, and a backend service running on ECS that retrieves jobs from the queue and processes them. So far, I'm implementing this in 3 tiers of stacks: one top-level stack for resources shared company-wide across multiple applications (VPC, an S3 scratch bucket, etc.), one application-level "shared" stack that sets up the SQS queues, permissions, and ECR repositories for both halves of the application, and finally a stack each for the web API and backend processing ECS deployments.

The stack that deploys ECS requires a task definition pointing to the image in ECR, so when the application code changes, we create and tag a new Docker image and push it to ECR. But afterwards, what is the "correct" way to update the running tasks: should the ECS task definition be updated by running cdk deploy, or by running the aws ecs update-service CLI command? We had a consultant help set this up initially, but they left us with a deployment stage using both methods, which seems like overkill. Plus, deploying via the ECS stack resets the number of desired instances, so I feel like going CLI-only for application version updates is the correct way.

Regardless of the deployment method, I've found that I also need to store the latest version tag in SSM so that if we do update anything in the CDK stack (instance type, scaling parameters, etc.), the task definition can find the correct latest version. But I guess my main question is: how close is this setup to "standard", and is it supposed to feel this convoluted?
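Not sure this is "standard" either, but the CLI-only app-update path I'm describing would look roughly like this dry-run sketch (cluster, service, repo, and parameter names are all invented; the commands are printed rather than executed). One caveat: update-service --force-new-deployment only picks up a new image if the task definition references a mutable tag; with immutable version tags you'd register a new task definition revision first.

```shell
# Dry-run sketch: print the commands instead of executing them.
run() { echo "would run: $*"; }

TAG="1.2.3"
REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app"

# 1) push the newly built image
run docker push "${REPO}:${TAG}"

# 2) record the tag in SSM so a later `cdk deploy` builds the task
#    definition against the right version
run aws ssm put-parameter --name /my-app/image-tag --value "${TAG}" --type String --overwrite

# 3) bounce the running tasks without touching infrastructure
run aws ecs update-service --cluster my-cluster --service my-app-web --force-new-deployment
```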

Plank Walker
Aug 11, 2005
Working on setting up an ECS service with an auto scaling group. Both the ASG and the service require a security group, and the application will require sending and receiving traffic to and from EFS and SQS. Should the autoscaling group and ECS service be in the same security group?

This comes from rewriting a bunch of CDK code that was given to us by a consultant who might have been doing this for the first time, so I have no idea what's correct and what's not. Current setup: EFS, SQS, the ECS service, and the Auto Scaling Group are all in their own security groups, with a web of inbound/outbound permissions on each.

Plank Walker
Aug 11, 2005
This might be a dumb question. Configuring some ECS services and trying to figure out the security group permissions I need. I have an ALB which has its own security group, then an Auto Scaling Group, which has its own, and finally a service with its own. Who needs to talk to whom to get this sorted out?

I have the ALB set up to allow inbound traffic on port 80, the ASG to allow inbound traffic from the ALB SG on port 80, and the Service to also allow inbound traffic from the ALB on port 80. But I'm unable to hit any API endpoints running on the service, so I'm pretty sure something is misconfigured; I don't really understand how the 3 components talk to each other.

Plank Walker
Aug 11, 2005

Plank Walker posted:

This might be a dumb question. Configuring some ECS services and trying to figure out the security group permissions I need. I have an ALB which has its own security group, then an Auto Scaling Group, which has its own, and finally a service with its own. Who needs to talk to whom to get this sorted out?

I have the ALB set up to allow inbound traffic on port 80, the ASG to allow inbound traffic from the ALB SG on port 80, and the Service to also allow inbound traffic from the ALB on port 80. But I'm unable to hit any API endpoints running on the service, so I'm pretty sure something is misconfigured, but I don't really understand how the 3 components talk to each other.

I figured this out (maybe? idk, but I can hit the service, so it works). Maybe someone more knowledgeable can confirm that my understanding is correct:
The Application Load Balancer is in Security Group A, which is open to outside traffic for requests. The Auto Scaling Group has Security Group B, which can receive traffic from A; this security group is applied to every EC2 instance that gets brought online for the application (I think?). The ECS service is in Security Group C, which is set up to receive traffic from B and allows the EC2 instance to pass requests to the service running within it (also I think?).

Now let's say I have some other resource that I want to talk to the service: in which security group do I allow traffic from that resource?
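If that chain is right, the wiring corresponds to roughly these CLI calls (shown as a dry run; the group IDs are invented). My hedged guess on the follow-up question: a new caller gets allowed into whichever group actually terminates the connection, the instance group (B) for bridge/host networking, or the task group (C) in awsvpc mode, but I'd love confirmation:

```shell
# Dry-run sketch of the A -> B -> C chain; print instead of execute.
run() { echo "would run: $*"; }

ALB_SG=sg-aaaa1111   # A: the load balancer, open to the world on 80
ASG_SG=sg-bbbb2222   # B: the EC2 instances in the Auto Scaling Group
SVC_SG=sg-cccc3333   # C: the service tasks (applies in awsvpc networking mode)

run aws ec2 authorize-security-group-ingress --group-id "$ALB_SG" --protocol tcp --port 80 --cidr 0.0.0.0/0
run aws ec2 authorize-security-group-ingress --group-id "$ASG_SG" --protocol tcp --port 80 --source-group "$ALB_SG"
run aws ec2 authorize-security-group-ingress --group-id "$SVC_SG" --protocol tcp --port 80 --source-group "$ASG_SG"
```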

Plank Walker
Aug 11, 2005
I'm running a container on ECS and would like to get the EC2 instance ID for logging. The container is running .NET Core, and the AWS SDK for .NET exposes an Amazon.Util.EC2InstanceMetadata.InstanceId property, but this appears to return null. I'm assuming this is because it's running in a container and not directly on the instance.

Any idea what methods I can use to get this instance ID? Some Stack Overflow answers mention querying http://169.254.169.254/latest/meta-data/instance-id, but 1) I'm not sure whether this will work from inside the container, and 2) testing this requires trial and error on another deployment, so I'd rather have some idea upfront whether it will work or not.
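For reference, the raw query is the IMDSv2 two-step below, shown as a dry run since it only resolves on an actual EC2 instance. One caveat I've read: IMDSv2's default hop limit of 1 can block requests that traverse the container network, so the instance's metadata options may need a hop limit of 2.

```shell
# Dry-run sketch of an IMDSv2 instance-id lookup; print instead of execute.
run() { echo "would run: $*"; }

# 1) get a session token
run curl -s -X PUT http://169.254.169.254/latest/api/token -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"

# 2) use the token from step 1 (call it $TOKEN) to read the instance id
run curl -s -H "X-aws-ec2-metadata-token: \$TOKEN" http://169.254.169.254/latest/meta-data/instance-id
```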

Plank Walker
Aug 11, 2005

Happiness Commando posted:

Dumb question but are you sure you're running on EC2 vs Fargate?

Yep definitely EC2

Plank Walker
Aug 11, 2005

Plank Walker posted:

I'm running a container on ECS and would like to get the EC2 instance ID for logging. Container is running .NET core and the AWS SDK for .NET exposes a Amazon.Util.EC2InstanceMetadata.InstanceId but this appears to return null. I'm assuming this is because it's running in a container and not directly on the instance.

Any idea what methods I can use to get this instance ID? Some stack overflow answers mention querying http://169.254.169.254/latest/meta-data/instance-id but 1) I'm not sure whether this will work from inside the container and 2) testing this requires trial and error on another deployment so I'd rather have some idea upfront if it will work or not

So I figured out my issue: I needed to set ECS_ENABLE_CONTAINER_METADATA=true in the file /etc/ecs/ecs.config on the EC2 instances that were hosting my containers. The only way to do that I could find was to add commands to the user data section of the Auto Scaling Group configuration in CloudFormation/CDK.

This ended up populating the field Amazon.Util.EC2InstanceMetadata.InstanceId in my .NET code, so no need to mess around with reading and parsing JSON from the internal metadata URLs.
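For anyone else hitting this, the user data addition is just an append to the ECS agent config. A sketch, assuming the ECS-optimized AMI (where the agent reads /etc/ecs/ecs.config); the script is printed here rather than executed:

```shell
# The user data script the ASG runs at instance boot (printed, not executed here).
USER_DATA='#!/bin/bash
echo "ECS_ENABLE_CONTAINER_METADATA=true" >> /etc/ecs/ecs.config'
echo "$USER_DATA"
```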

Plank Walker
Aug 11, 2005
Been migrating a bunch of AWS resource creation to CDK/CloudFormation from manual setup, and noticed we are getting hit with a big bill for S3 access, so I'm trying to add a VPC gateway endpoint. Working off a Stack Overflow answer here: https://stackoverflow.com/a/72040360/2483451. Is it sufficient to just add the gateway endpoint to the VPC configuration, or do I have to add some reference to the VPC to the S3 construct as well? The comment on Stack Overflow says the VPC configuration is all that's necessary, but without much more detail.
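For context, outside of CDK the gateway endpoint is a single call against the VPC and its route tables; nothing on the bucket itself changes. Dry-run sketch with invented IDs (the region in the service name is an assumption):

```shell
# Dry-run sketch: print instead of execute.
run() { echo "would run: $*"; }

run aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0
```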

Plank Walker
Aug 11, 2005

Docjowles posted:

There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There’s a free endpoint for dynamodb too if you use that service.

It annoys the poo poo out of me that there is this no downside thing you can easily drop in your VPC to improve costs and the network path and AWS doesn’t just do it for you. I’m curious what their public justification would be, cause it sure feels like the real motivation is “rip off suckers with totally pointless NAT fees and hope they don’t notice”.

Yeah, we had a similar issue with S3 KMS key caching: it's a one-liner to add, but miss it and oops, now you're getting charged for a KMS key retrieval every time you do anything with S3.
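If the one-liner in question is the S3 Bucket Key setting (my assumption; it's the feature that caches the KMS data key at the bucket level), the CLI form is roughly this dry run (bucket name invented):

```shell
# Dry-run sketch: print instead of execute.
run() { echo "would run: $*"; }

run aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"},"BucketKeyEnabled":true}]}'
```

In CDK terms I believe this corresponds to the bucketKeyEnabled flag on the Bucket construct.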
