Docjowles
Apr 9, 2009

Pile Of Garbage posted:

To explain my shameful situation, we had a handful of domains to buy and set up zones for, and R53 really is the easiest thing for that. Literally just click click done. Five domains registered, of which only one we're using (the rest were just claimed for brand protection). We'll probably port them to Azure to reduce our surface area some day idk.

The number of domains my company owns for brand protection is loving staggering. Like whatever number you're imagining you probably need to add zeroes. This comes of being pretty old as internet companies go, being a global brand, and having done a lot of acquisitions, I guess. Not a concern that ever occurred to me before coming here heh.

We could probably jettison a ton of them with zero harm, but we're comically risk averse, so instead we just pay zillions of dollars to park weird typo domains in every possible TLD.

Docjowles fucked around with this message at 20:54 on Apr 28, 2023

Docjowles
Apr 9, 2009

Godspeed. Dealing with email is to operations as dealing with printers is to IT support. At every company I've worked for that decided to send its own bulk email instead of using a SaaS, it consumed an insane percentage of my time. A lot of it just comes down to all the reputation management stuff you need to do to ensure your IPs aren't getting blacklisted, despite your users' best efforts to send mega spammy poo poo (and then come yell at you to ask why their mega spammy poo poo is being flagged as spam). Also, the configuration files for whatever underlying MTA you select (Exim, Postfix, etc.) are abominations that are basically their own general-purpose programming languages. I am extremely glad I don't have to work with bulk email much in my current role.

edit:

SMTP itself is a very simple protocol. Which is actually the problem; it was built for a time when everyone on the internet knew each other personally because there were like 100 users and they were all at universities and government labs. Similar to BGP. So more and more protocols and standards have been tacked on over time (SPF, DKIM, DMARC, and friends), all of which you also need to deeply understand to run a modern email infrastructure that isn't firing every message directly into spam folders or accidentally left open as a relay for spammers to abuse.
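For a taste of the bolted-on standards: they mostly live in DNS, so you can poke at any sending domain's setup with dig. example.com and the selector name below are just placeholders:
code:
# SPF: a TXT record on the apex listing which hosts may send mail for the domain
dig +short TXT example.com | grep 'v=spf1'

# DMARC: a TXT record telling receivers what to do when SPF/DKIM alignment fails
dig +short TXT _dmarc.example.com

# DKIM: per-selector public keys used to verify message signatures
# (the selector name varies by sender; "selector1" is made up here)
dig +short TXT selector1._domainkey.example.com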

Docjowles fucked around with this message at 17:12 on May 18, 2023

Docjowles
Apr 9, 2009

necrobobsledder posted:

You can send e-mails at volume (more than n thousand / month I think?) with AWS if you sign an agreement with AWS that you're an actual e-mail vendor of some sort and that you'll be vigilant about spam reports at a legal level.

Oh yeah this reminds me that you will start getting actual death threats from unhinged lunatics to your abuse@ address if they ever receive something they perceive as spam, lol

Docjowles
Apr 9, 2009

I think either approach is OK. Personally I would put it in a Lambda layer if you like Lambda for this use case and intentionally want to use it. Otherwise, put it in a container and use a more traditional container environment like ECS or k8s that doesn't subject you to the various restrictions and limits of Lambda.
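If you go the layer route, it's roughly this (assuming a Python function; the names, runtime, and ARN are placeholders):
code:
# Package dependencies into the python/ layout that Lambda expects for a Python layer
mkdir -p layer/python
pip install -r requirements.txt --target layer/python
(cd layer && zip -r ../deps-layer.zip python)

# Publish the layer; note the LayerVersionArn in the output
aws lambda publish-layer-version \
  --layer-name my-deps \
  --zip-file fileb://deps-layer.zip \
  --compatible-runtimes python3.12

# Attach the published layer version to the function
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:my-deps:1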

Docjowles
Apr 9, 2009

I am in no way advocating this but a lot of companies just give the bare minimum lip service to security and privacy. Because the cost of actually doing it right is higher than just paying a fine and eating some bad PR if/when a major breach occurs. I mean it’s not like Equifax or Target went down when they had massive incidents a while back.

Thankfully legislation like GDPR actually has some teeth and isn’t just comically fining a company the size of Google like $100k.

Docjowles fucked around with this message at 00:58 on Jun 18, 2023

Docjowles
Apr 9, 2009

Arzakon posted:

Lambda to trigger a forward isn't Rube Goldbergian. My team's project to take an incoming SES e-mail and pass it through SNS to SQS to Lambda to parse the subject line and run that as a query against an RDS database and drop the output into an S3 object was Rube Goldbergian. Mostly because it was part of a joke team-building exercise to build a Rube Goldberg machine using as many AWS services as possible (back when there weren’t very many services).

And then those jokes get published as official AWS Solutions. I swear some of those architecture diagrams have 50 unique shapes on them

Docjowles
Apr 9, 2009

Are you talking about like building a new environment on AL2023 and what pitfalls you will encounter deploying your app that has been running on AL1 to it? Or trying to upgrade in place via yum or something? AL1 is ancient (in cloud terms), I would be surprised if the latter is even possible. I think AWS would tell you to build new AMIs and new hosts and cut over. AWS is not pet friendly.
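If you do build new, AWS publishes the current AL2023 AMI IDs as public SSM parameters, so new hosts can always launch from the latest image. I believe the x86_64 parameter path is the one below (region is a placeholder):
code:
# Look up the latest Amazon Linux 2023 AMI ID for a region
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64 \
  --region us-east-1 \
  --query 'Parameters[0].Value' \
  --output text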

Docjowles
Apr 9, 2009

This is a total shot in the dark and probably not your issue, but we were running a very high traffic service behind ALB and dealt with the same thing where requests would randomly be slow or time out entirely at the load balancer even if the backing service was totally healthy. A couple of findings:

1. If the ALB has to scale up or down there can be brief periods of refusing connections. This was confirmed by AWS support. Not really anything you can do here. Their recommendation was "make sure clients implement retries" which doesn't really help if your client is Safari on some dude's iPhone.

2. Switching from round robin to Least Outstanding Requests was a massive performance win for our specific application. So try that maybe?
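For reference, that switch is just a target group attribute, something like this (the ARN is a placeholder):
code:
# Change the routing algorithm from round robin to least outstanding requests
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123 \
  --attributes Key=load_balancing.algorithm.type,Value=least_outstanding_requests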

Docjowles
Apr 9, 2009

Vanadium posted:

I got laid off from my AWS heavy job a few months ago and do not see a lot of AWS in my future but this stuff is still swirling around my head send help.

Today I was at a park with my kids and started chatting with another dad. Asked what he does for work and he loving works for AWS. Even on the weekend there is no escaping the shadow of the cloud.

Sorry to hear about the layoff, that sucks.

Docjowles
Apr 9, 2009

If I had to guess, the instance doesn’t have permission to read the object. If you run the script manually as the same user it normally runs under, does it work? Are there logs you could inspect, or anything in CloudTrail?
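A couple of quick checks from the instance itself might narrow it down (bucket and key are placeholders):
code:
# Confirm which role/credentials the instance is actually using
aws sts get-caller-identity

# Try to read the object metadata directly; an AccessDenied here points at IAM or the bucket policy
aws s3api head-object --bucket my-bucket --key path/to/object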

Docjowles
Apr 9, 2009

Scrapez posted:

I'm looking to use Amazon EventBridge to kick off a monthly script that gathers data about an RDS instance data transfer via resource metrics. Everything I'm seeing says to have EventBridge kick off SSM to execute the shell script on a specific instance. I can do it this way but I don't like the idea of it being tied to a single instance.

Is there a different way to do this where I can have a monthly job that executes
code:
aws pi get-resource-metrics --service-type RDS --identifier db-XXXXXXXXXX --start-time $weekagodate --end-time $todaydate --period-in-seconds 86400 --metric-queries '[{"Metric": "os.network.tx.sum"  }]'
and then passes the output to SNS to be emailed?

I would put this into a Lambda or an ECS task and have EventBridge run that, rather than having a whole EC2 instance sitting around just to run one command once a month.
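The EventBridge side is roughly this, assuming the Lambda route (names, ARNs, and the cron expression are placeholders):
code:
# Monthly schedule: 08:00 UTC on the 1st of each month
aws events put-rule \
  --name monthly-rds-metrics \
  --schedule-expression "cron(0 8 1 * ? *)"

# Point the rule at the Lambda that runs the get-resource-metrics call and publishes to SNS
aws events put-targets \
  --rule monthly-rds-metrics \
  --targets '[{"Id":"1","Arn":"arn:aws:lambda:us-east-1:123456789012:function:rds-metrics-report"}]'

# Allow EventBridge to invoke the function
aws lambda add-permission \
  --function-name rds-metrics-report \
  --statement-id eventbridge-monthly \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/monthly-rds-metrics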

Docjowles
Apr 9, 2009

Thanks Ants posted:

I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), subject line of the email, destination etc. or is that all logging that you have to configure yourself if you want access to the data?

Assuming the answer to the above is "you're out of luck, build it yourself", is there a handy best practises guide that is worth paying attention to in terms of getting a good balance between what is being sent to CloudWatch and the cost of doing so?

If your senders are using the SES API you might be able to find something useful in CloudTrail. If it's just raw SMTP, yeah, "you're out of luck, build it yourself". AWS loves to offer these "solutions" that aren't so much a feature as they are lovely Rube Goldberg machines of 5 different AWS services smashed together. Take a look at Event Publishing. You can set it up to publish detailed logs to various destinations. You will need control over the senders, though, since you need to pass in custom headers to make it happen. Hopefully you aren't just offering an open relay anyway, though; otherwise I think I know how you got in trouble :v:
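The Event Publishing setup is roughly a configuration set plus an event destination; senders then have to reference the set name on every message (via the custom header or the API parameter) for events to flow. The set name and topic ARN here are placeholders:
code:
# Configuration set that senders reference on each message
aws sesv2 create-configuration-set \
  --configuration-set-name outbound-logging

# Publish send/bounce/complaint/delivery events for that set to an SNS topic
aws sesv2 create-configuration-set-event-destination \
  --configuration-set-name outbound-logging \
  --event-destination-name to-sns \
  --event-destination '{"Enabled":true,"MatchingEventTypes":["SEND","BOUNCE","COMPLAINT","DELIVERY"],"SnsDestination":{"TopicArn":"arn:aws:sns:us-east-1:123456789012:ses-events"}}'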

If the account has any level of AWS paid support, give that a shot, too. Maybe they can pull logs or something. It's in their interest to not have people making GBS threads up their IP reputation.

edit: I see PCHiker and I are on the same page lol

Docjowles
Apr 9, 2009

If they’re being charged for it, it must exist somewhere. Maybe you’re looking in the wrong region? If you go into the billing console it should have pretty detailed info about exactly what you are being billed for and why.

If you can’t figure it out, sure, a support case can’t hurt.

Spinning something expensive up as an experiment or mistake and then never tearing it back down is unfortunately very common.
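If clicking around the billing console doesn't narrow it down, Cost Explorer's API can slice the bill by service, which usually points at the culprit pretty quickly (the dates here are placeholders):
code:
# Last month's spend broken out by service
aws ce get-cost-and-usage \
  --time-period Start=2024-03-01,End=2024-04-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE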

Docjowles
Apr 9, 2009

On top of the other good advice I would suggest specifically focusing on IAM and VPC/networking basics. Cause those are building blocks you will need no matter what other AWS services your company uses on top of them. The idea of IAM roles in particular took way too long to click for me when I first started using AWS. Hopefully coming from another cloud provider it’s familiar for you.

AWS networking isn’t bad if you have any network background at all. Unfortunately a lot of people do not. So they end up shooting themselves in the foot with architectures that are extremely cost inefficient or insecure or incompatible with the company’s IP address plan. Then you have to start over which means tearing everything down, which sucks. Planning out your VPCs should very much be “measure twice, cut once”.

Docjowles
Apr 9, 2009

FISHMANPET posted:

My read on those is that you have to "switch" into a role and that it isn't really meant to be a user's regular level of access. And it still has a policy attachment limit.

It was like 5 years ago now so my memory is hazy. But I stood up an AWS org with a few dozen accounts without SSO/Identity Center (I kind of forget why; I think, like you, that company's IT SSO story was just nonexistent at that point). Amazon's guidance at the time was to have one account that held all the IAM users, whether that was the organization root or just a dedicated "identity" account. Those users don't have permission to do poo poo except assume roles in other accounts. If you wanted to do something in ProductionServiceA, you'd auth as your IAM user, then assume the Operator or ViewOnly or whatever role was appropriate in the ProductionServiceA account and do your thing. You'd use trust policies to control who could assume what role.

If this sounds like a lovely, hacked-together reimplementation of SSO, it totally was. We wrote some simple tooling to make the flow a little less horrific for devs, but Identity Center really smooths this all out and is worth the time to stand up. You don't need to run things like it's 2018 anymore; the AWS auth experience has gotten orders of magnitude better.
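For posterity, the assume-role dance from the identity account looks roughly like this on the CLI (the account ID, role name, and profile are placeholders; a role_arn/source_profile pair in ~/.aws/config does the same thing automatically):
code:
# Assume the Operator role in the workload account, starting from the identity-account profile
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/Operator \
  --role-session-name my-session \
  --profile identity \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Subsequent calls now run as the assumed role in the target account
aws sts get-caller-identity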

Also buyer beware on those managed IAM policies. Amazon absolutely does not keep them up to date with new services and features. At some point you're going to need to tack on your own custom policies anyway that cover random gaps where AWS released new things and then just...never went back and added them to the policies. Not saying you shouldn't use them, they're still helpful, they're just not a silver bullet.

Docjowles
Apr 9, 2009

It looks like both users (I would assume this also applies to groups) and roles have an initial limit of 10 policies and a hard cap of 20. So yeah you could request an increase and relieve the immediate pressure. https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html

Having 20 policies on a single object feels a bit nuts though and at some point you do need to take the time to just craft your own policy that does exactly what you want. IAM janitoring is basically the Eating Your Vegetables of using AWS, in that it's not a lot of fun but pays dividends in terms of the health and safety of your cloud environment.
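When you do craft your own, it's mostly just consolidating the handful of actions you actually use into one customer-managed policy and attaching that instead of a pile of AWS-managed ones. Everything below (names, actions, ARNs) is a made-up example:
code:
# One consolidated customer-managed policy, with Sids doubling as documentation
aws iam create-policy \
  --policy-name app-runtime-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "ReadAppConfigBucket",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::my-app-config",
          "arn:aws:s3:::my-app-config/*"
        ]
      },
      {
        "Sid": "WriteAppLogs",
        "Effect": "Allow",
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:*:123456789012:log-group:/my-app/*"
      }
    ]
  }'

# Attach the single policy to the role instead of stacking ten managed ones
aws iam attach-role-policy \
  --role-name my-app-role \
  --policy-arn arn:aws:iam::123456789012:policy/app-runtime-access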

Docjowles
Apr 9, 2009

I’m guessing a lot of weird limits boil down to either 1) a decision made like 15 years ago that is a total bitch to change now, or 2) something that seems reasonable in your one account but multiplied by 10 billion IAM objects or whatever becomes A Problem. Or a combination of both. Not trying to carry water for Amazon, though, since they certainly have the resources to do pretty much anything if they want to.

You can at least kind of break up your policy into statements with a Sid describing the purpose but that’s not great.

Docjowles
Apr 9, 2009

lol my coworker asked Amazon's Q AI assistant thing if Compute Savings Plans work for RDS and it said "yes because under the hood they run on EC2". That's... definitely not true, right? I wonder if they would issue a refund if you bought a useless savings plan on the advice of their dogshit hallucinating AI.

of course not

Docjowles
Apr 9, 2009

I couldn't find it in 30 seconds skimming the docs but https://cloudcustodian.io/ MIGHT be able to do this? It can filter on and react to a whole lot of things. Stuff like this ("I want to find resources matching or not matching a criteria and do something to them") is why the tool exists although with AWS being so vast it of course doesn't cover everything.
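To give a flavor of what a Custodian policy looks like (not necessarily the exact filters you'd need for this), something along these lines finds EC2 instances with no Owner tag and stops them. The tag, policy name, and output dir are all made up:
code:
# Illustrative Cloud Custodian policy: stop EC2 instances that have no Owner tag
cat > untagged-ec2.yml <<'EOF'
policies:
  - name: stop-untagged-ec2
    resource: aws.ec2
    filters:
      - "tag:Owner": absent
    actions:
      - stop
EOF

custodian run --output-dir ./c7n-output untagged-ec2.yml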

Docjowles
Apr 9, 2009

There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There’s a free endpoint for DynamoDB too if you use that service.
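The whole thing is roughly one call (VPC ID, route table ID, and region are placeholders):
code:
# Gateway endpoint for S3, associated with the route tables your private subnets use
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0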

It annoys the poo poo out of me that there is this no-downside thing you can easily drop in your VPC to improve costs and the network path, and AWS doesn’t just do it for you. I’m curious what their public justification would be, cause it sure feels like the real motivation is “rip off suckers with totally pointless NAT fees and hope they don’t notice”.

Docjowles fucked around with this message at 16:02 on Apr 5, 2024

Docjowles
Apr 9, 2009

Hed posted:

I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60.

I shouldn't have let it get to this point, but is there a graceful way to increase IOPS with no downtime? My options appear to be:

  • convert the disk to gp3 (no way this can be done online right?)
  • make the disk bigger to scale the "IOPS = Volume size [GB] * 3", but I don't need it larger

Is there a good way to spin up another instance into the cluster with enough IOPS and gracefully transition to it? I know I could pgdump/restore but would rather not have downtime if possible.

I'm reasonably sure you can convert from gp2 to gp3 online. You certainly can with EBS volumes so I don't know why RDS would be different. There might be some performance degradation during the move but 20GB should be very fast.
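If RDS lets you do it for your engine/instance, the conversion itself is just a modify call, something like this (the identifier is a placeholder; --apply-immediately skips waiting for the maintenance window):
code:
# Convert the RDS instance's storage from gp2 to gp3
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --storage-type gp3 \
  --apply-immediately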
