whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

PierreTheMime posted:

What would be the best option for S3 object movement automation to sort incoming files from an AWS Transfer-connected bucket to other buckets based on key "location"? I'm looking at Java and Python code that both seem fine using an S3 event to trigger it, but I wanted to see what the general opinion of the best route would be before I dive in.

Lambda triggered by S3 event seems ideal to me.
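
If it helps, the routing Lambda itself is only a handful of lines. Something like this is the rough shape of it; the bucket names and the prefix-to-bucket mapping are obviously made up:

code:
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")

# Hypothetical mapping from the first segment of the object key to a destination bucket
DESTINATIONS = {
    "clientA": "clientA-target-bucket",
    "clientB": "clientB-target-bucket",
}

def handler(event, context):
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        prefix = key.split("/", 1)[0]
        target_bucket = DESTINATIONS.get(prefix)
        if target_bucket is None:
            continue  # no routing rule for this prefix, leave the object where it is
        # Copy to the destination bucket, then clean up the original in the landing bucket
        s3.copy_object(
            Bucket=target_bucket,
            Key=key,
            CopySource={"Bucket": source_bucket, "Key": key},
        )
        s3.delete_object(Bucket=source_bucket, Key=key)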

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Ah, yeah, sorry, didn't properly understand your question. That's a pretty neat way of doing it that I hadn't thought of! My only thought from a maintenance perspective is that for each client S3 folder you set up, you'll need some way of automating the creation of your target S3 bucket and then adding the user-defined metadata to the client folder in the SFTP bucket.
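
The provisioning step per client is pretty small at least. Totally hypothetical names and metadata key, but something along these lines:

code:
import boto3

s3 = boto3.client("s3")

def provision_client(client_name, region="us-east-1"):
    target_bucket = f"{client_name}-target-bucket"  # hypothetical naming scheme

    # us-east-1 is the one region where you must omit the LocationConstraint
    if region == "us-east-1":
        s3.create_bucket(Bucket=target_bucket)
    else:
        s3.create_bucket(
            Bucket=target_bucket,
            CreateBucketConfiguration={"LocationConstraint": region},
        )

    # Zero-byte "folder" object in the Transfer Family landing bucket carrying the
    # user-defined metadata that tells the routing code where incoming files should go
    s3.put_object(
        Bucket="sftp-landing-bucket",  # hypothetical
        Key=f"{client_name}/",
        Metadata={"target-bucket": target_bucket},
    )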

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

thotsky posted:

What helped me the most by far was to look for options that are clearly wrong and can be dismissed, instead of looking for the right answer. Even where you feel like you have no clue, you might have enough of a clue to reduce the question to a 50-50 guess.

I haven't done the Solution Architect exams but this was definitely true for the SysOps and DevOps ones. The other thing I noticed was that, generally, the solution with the fewest moving parts tended to be the most cost-effective and performant, which helped a lot on questions where I wasn't intimately familiar with the services.

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

CarForumPoster posted:

I have a DB hosted in RDS that I want to put an API in front of to serve content via a flask app served by elastic beanstalk. I may potentially give ~10 users access to the API as well.

Does AWS have some hilariously easy way to make a REST or similar JSON-serving API before I get started on a new django (DRF) or maybe FastAPI project? This won't get a ton of traffic; I'd mostly be using it to generate reports server-side that are served by a small flask app.

API Gateway's not bad, but it might be a bit heavy for what you're after. You can supply an OpenAPI spec and it'll fill out all the resources and object representations for you, and then you just configure it to forward requests to the HTTP endpoint of your flask app after validation. You can also set it up to require API keys that are managed by AWS, if you don't want to handle that functionality yourself.
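
To give a sense of how much of it AWS handles for you, here's a rough sketch of the import-plus-API-key flow in boto3; the backend URL, stage name, and key/plan names are all placeholders:

code:
import json
import boto3

apigw = boto3.client("apigateway")

# Minimal OpenAPI doc with API Gateway's integration extension pointing at the flask app
spec = {
    "openapi": "3.0.1",
    "info": {"title": "reports-api", "version": "1.0"},
    "paths": {
        "/reports": {
            "get": {
                "security": [{"api_key": []}],
                "x-amazon-apigateway-integration": {
                    "type": "http_proxy",
                    "httpMethod": "GET",
                    "uri": "http://my-flask-env.example.com/reports",  # hypothetical EB endpoint
                },
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
    "components": {
        "securitySchemes": {
            "api_key": {"type": "apiKey", "name": "x-api-key", "in": "header"}
        }
    },
}

api = apigw.import_rest_api(failOnWarnings=True, body=json.dumps(spec))
apigw.create_deployment(restApiId=api["id"], stageName="prod")

# AWS-managed API key plus a usage plan to tie it to the stage
key = apigw.create_api_key(name="report-user-1", enabled=True)
plan = apigw.create_usage_plan(
    name="reports-plan",
    apiStages=[{"apiId": api["id"], "stage": "prod"}],
)
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")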

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Twerk from Home posted:

How do you guys successfully handle IAM roles for whatever process is doing your deployments?

I'm having a hard time striking a balance between permissiveness and actual practical ability to deploy applications that are actively changing and evolving. Any type of least-privilege role for deployment has to be constantly updated whenever we integrate a new AWS feature, and nobody's going to prioritize removing unused permissions from the role when we stop using something so it doesn't stay a least-privilege role at all.

My employer has 20+ AWS accounts to separate lines of business. I've been implementing automated deployments in an AWS account that's never had them before, so I'm the one designing all of the roles and such. I've learned that internally, every single role that's been used for automated deployments by the 3+ groups I've talked to is wildly over-permissioned and hated by security, and everyone intends to clean them up at some point in the future, but that cleanup has never happened.

Am I missing some better way to determine what sort of access is necessary to run a Cloudformation-based deployment? We're using the Cloud Development Kit to create our Cloudformation stacks. Applications are all sorts of things involving a wide array of AWS products, which is what would make figuring out actual least privilege to touch all of them tough.

To be clear, I'm only struggling with the roles used for the deployment process itself. CDK is creating least-privilege roles for each application to run as.

We have an Ansible playbook that we run locally against a newly created AWS account, which provisions (among other things) our infrastructure deployment role. When we change the playbook to change the IAM role, we make sure we apply it against all of our accounts. I'm sure it could be handled better with Control Tower.
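
In boto3 terms the role provisioning boils down to roughly this (not our actual playbook; the names, trusted principal, and permissions are placeholders):

code:
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the deployment pipeline's principal can assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/cd-pipeline"},  # hypothetical
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="infra-deployment-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Role assumed by the deployment pipeline to run CloudFormation",
)

# Permissions policy; this is where the least-privilege argument actually happens
iam.put_role_policy(
    RoleName="infra-deployment-role",
    PolicyName="deploy-permissions",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["cloudformation:*", "s3:*"],  # illustrative only, not a recommendation
            "Resource": "*",
        }],
    }),
)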

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

crazypenguin posted:

It's probably pointless to try to restrict a CD role much, unless I'm mistaken. Don't forget to threat model.

Your attacker gets access to your CD role. What are their goals? What's the worst-case? Are they thwarted because the role diligently doesn't allow creating an AppStream fleet? Or do they have the keys to the whole account anyway, because they can just iam:CreateRole whatever they please and use that?

I'm not sure there's any way to make a CD role less valuable (wait, no: SCPs), so you have to make it really well protected instead.

This is a pretty good point. Especially when I consider that the CD role is pretty solidly protected such that the only people that could gain access to it are people with the administrator role in the first place. We're about to start building out a bunch more stuff so it's something I might pursue on Monday. Good shout, thanks!
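
For anyone else following along, the SCP angle is applied from the org's management account and looks roughly like this; the policy content and target OU are purely illustrative:

code:
import json
import boto3

orgs = boto3.client("organizations")

# Example guardrail: deny IAM user/access key creation in the member accounts,
# which closes off one escalation path even for an over-permissioned CD role
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyIamUserAndKeyCreation",
        "Effect": "Deny",
        "Action": ["iam:CreateUser", "iam:CreateAccessKey"],
        "Resource": "*",
    }],
}

policy = orgs.create_policy(
    Content=json.dumps(scp),
    Description="Guardrail for workload accounts",
    Name="deny-iam-user-creation",
    Type="SERVICE_CONTROL_POLICY",
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # hypothetical OU id
)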

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

We run a distributed file store for our application and we've finally gotten the go-ahead to pursue replacing the existing GlusterFS setup with EFS. Fairly sure the way we're gonna handle the migration is to just rsync gluster to EFS, unmount gluster, and mount EFS. Has anyone run into issues with migrating to EFS before? We plan on testing the poo poo out of it in terms of performance and cutover, but I wanted to see if there are any pitfalls we should avoid early on.

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

freeasinbeer posted:

If it’s small enough I’d actually lean into EC2 over ECS; it’s a lot more straightforward to manage bare linux hosts and doesn’t make you learn a whole new ecosystem.

To be honest I’m not sure there is a _good_ reason to use ECS at all anymore. Maybe more options to scale fargate tasks?

And if you’re looking for just running containers in the cloud, heroku or something like fly.io are wayyyy simpler to get started on and don’t require learning a bunch of useless aws semantics.

ECS on Fargate is pretty good if you want container orchestration without a lot of the extra complication that EKS and Kubernetes bring. You use AWS building blocks for everything, you don't need to manage IAM role <-> ServiceAccount mappings, and it feels like there are a lot fewer pitfalls (e.g. secret management, ConfigMap management, etc.). But this might be a grass-is-greener thing from managing so many workloads in Kube.
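
The moving parts are basically just a task definition and a service. A rough sketch, with the cluster name, image, role ARN, subnets, and security group all made up:

code:
import boto3

ecs = boto3.client("ecs")

task_def = ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[{
        "name": "web",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",  # hypothetical
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
)

ecs.create_service(
    cluster="app-cluster",
    serviceName="web-app",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa", "subnet-bbbb"],
            "securityGroups": ["sg-cccc"],
            "assignPublicIp": "DISABLED",
        }
    },
)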

Definitely agree on running things in EC2 though. Auto Scaling Group + ALB/NLB is pretty rock solid. Either pre-package AMIs with something like Packer or do it all via user-data when the instance starts. The only two real things I'd watch out for are that management gets difficult once you're dealing with many different applications, and that it's on you to make sure logs and metrics are getting shipped properly.
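
For reference, the building blocks for that setup are only a couple of API calls; everything below (AMI id, subnets, target group ARN) is a placeholder:

code:
import base64
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Boot-time config; alternatively bake all of this into the AMI with Packer
user_data = """#!/bin/bash
systemctl start my-app
"""

lt = ec2.create_launch_template(
    LaunchTemplateName="web-app",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical Packer-built AMI
        "InstanceType": "t3.small",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchTemplate={"LaunchTemplateId": lt["LaunchTemplate"]["LaunchTemplateId"]},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web-app/0123456789abcdef"],  # ALB target group
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)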

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Happiness Commando posted:

You don't need a WAF to limit ingress IP - you can do that with regular security groups. It's a fine thing to add if you want, though

If he's going via CloudFront he will, because you can't attach security groups to a CloudFront distribution.


lazerwolf posted:

I was exploring cloud front serving the private S3 files and putting a WAF on top limiting IP ranges.

We use basically this setup on our end, all in Terraform as well. But because we all work remotely and none of us have static IPs, it means routing requests to CloudFront's public IP addresses through our VPN. So if that's something you have to worry about, you might end up with a lot more traffic on your VPN than you bargained for.
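
The Terraform boils down to roughly these two WAFv2 resources; sketched in boto3 here just to show the shape. The CIDR and names are placeholders, and CLOUDFRONT-scoped WAF resources have to be created in us-east-1:

code:
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="vpn-egress",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],  # hypothetical VPN egress range
)

wafv2.create_web_acl(
    Name="private-files-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Block": {}},  # block everything that doesn't match an allow rule
    Rules=[{
        "Name": "allow-known-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-known-ips",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "private-files-acl",
    },
)
# The resulting web ACL ARN then goes on the CloudFront distribution
# (web_acl_id on aws_cloudfront_distribution in Terraform)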

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

I always get really excited when Honeycomb announces job openings, but they're still US/Canada only for their tech roles, so I don't even get past the recruiter screen :smith:

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Hed posted:

We have an Application Load Balancer that is ingress for a k8s application. Most of the time it works really well but occasionally it just times out. Running curl -v https://app.com doesn’t even negotiate TLS and then times out. Once it works it seems to be sticky. Intermittent so it’s hard to debug.
Looking at the health checks for the app it seems fine.

What should I be looking at to debug this? Shouldn't the ALB negotiate TLS with a client first, or is it "smart" and making sure the app is in a good state?

I'd start with the CloudWatch metrics, comparing load balancer 504s vs. backend 504s. That'll tell you pretty quickly where AWS thinks the problem is, at least. If you're not even seeing the failed requests hit the load balancer, then have a look at VPC flow logs to see if you get spikes of rejected packets (which generally points toward something like a security group rule being deleted and recreated, with requests coming in while the rule is absent).
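
A quick way to eyeball that split if you don't want to click around the console; the load balancer dimension value (the app/<name>/<id> chunk of the ALB ARN) is a placeholder:

code:
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")

LB_DIMENSION = "app/my-alb/0123456789abcdef"  # hypothetical

def count_5xx(metric_name):
    resp = cw.get_metric_statistics(
        Namespace="AWS/ApplicationELB",
        MetricName=metric_name,
        Dimensions=[{"Name": "LoadBalancer", "Value": LB_DIMENSION}],
        StartTime=datetime.now(timezone.utc) - timedelta(days=1),
        EndTime=datetime.now(timezone.utc),
        Period=3600,
        Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp["Datapoints"])

# 5XX generated by the ALB itself vs. 5XX returned by the targets behind it
print("ALB 5XX:   ", count_5xx("HTTPCode_ELB_5XX_Count"))
print("Target 5XX:", count_5xx("HTTPCode_Target_5XX_Count"))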

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

My new job has me janitoring some EB stuff, and I think it's because the stuff that runs from .ebextensions is run via cfn-init, which has different default creds that are more tightly scoped to the CloudFormation stack (which is what Elastic Beanstalk is under the hood). It's kinda obliquely mentioned in the docs here, and they show what you're supposed to do for S3 in the EB docs. Looks like you've got to create an auth method for cfn-init to use (in this case, the instance profile for the instance) and then tell it to use that when downloading the file from S3.
