|
No creds or attached role. Googling around I do see some other people mention that account number in some GitHub pages here and there, so I wonder now if it's some Amazon-owned account number?
|
# ? Dec 28, 2023 21:19 |
|
I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), subject line of the email, destination etc. or is that all logging that you have to configure yourself if you want access to the data? Assuming the answer to the above is "you're out of luck, build it yourself", is there a handy best practices guide that is worth paying attention to in terms of getting a good balance between what is being sent to CloudWatch and the cost of doing so?
|
# ? Dec 28, 2023 21:33 |
|
Found my answer - https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html#architecture WorkSpace workloads have two ENIs and two VPCs. The compute part of the WorkSpace actually resides in another AWS-managed account. I had no clue, but at least that clears up the oddball issue. Thanks Ants posted:I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), subject line of the email, destination etc. or is that all logging that you have to configure yourself if you want access to the data? Oooft. CloudTrail will only capture the management events, which may have some useful info for you. Moving forward this may be helpful for you - https://docs.aws.amazon.com/ses/latest/dg/monitor-using-event-publishing.html
|
# ? Dec 28, 2023 22:15 |
|
Thanks Ants posted:I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), subject line of the email, destination etc. or is that all logging that you have to configure yourself if you want access to the data? If your senders are using the SES API you might be able to find something useful in CloudTrail. If it's just raw SMTP, yeah, "you're out of luck, build it yourself". AWS loves to offer these "solutions" that aren't so much a feature as they are lovely Rube Goldberg machines of 5 different AWS services smashed together. Take a look at Event Publishing. You can set it up to publish detailed logs to various destinations. You will need control over the senders, though, since you need to pass in custom headers to make it happen. Hopefully you aren't just offering an open relay anyway though, otherwise, I think I know how you got in trouble. If the account has any level of AWS paid support, give that a shot, too. Maybe they can pull logs or something. It's in their interest to not have people making GBS threads up their IP reputation. edit: I see PCHiker and I are on the same page lol
|
# ? Dec 28, 2023 22:20 |
|
Event publishing looks like it will do most of the work, the auto tags are pretty much what we're after and then it's just a case of having the information present itself back out again in a way that makes sense. Knowing what I know about the people using this service I think I'm going to push for them to move to a more managed platform that will do all this for them even if it costs a bit more than SES, they aren't really at the level where they can be consuming raw AWS services.
|
# ? Dec 28, 2023 23:33 |
|
Maybe every place I've been was holding it wrong but SES is just hot garbage, I feel. Thanks Ants posted:Knowing what I know about the people using this service I think I'm going to push for them to move to a more managed platform that will do all this for them Very yes.
|
# ? Dec 29, 2023 01:05 |
|
Startyde posted:SES is just hot garbage It works fine for just sending email from AWS services, and there are email marketing platforms out there that use it. But you have to build everything on top of it and just treat it like an SMTP server with terrible logging. I don't think you'd ever want to give end users the ability to send via it directly.
|
# ? Dec 29, 2023 02:16 |
|
BaseballPCHiker posted:Got a weird one today I've never seen. You are probably seeing the AWS service account. Some services run stuff in service accounts managed by AWS and you have to do some digging through logs to find your account that actually spun up the resources. E: well hello there. It’s a completely new page that I needed to read to catch up on the thread. Agrikk fucked around with this message at 03:33 on Dec 29, 2023 |
# ? Dec 29, 2023 03:31 |
|
I recently 'inherited' an AWS account at work. For some reason they're paying $400/month for a private certificate authority, but there's no Private CA listed as configured in their account. Is this likely something that was added at some point along the way but never disabled after they didn't need it any more? Should I just contact AWS support and see if they can confirm it's not being used?
|
# ? Jan 3, 2024 01:36 |
|
If they’re being charged for it, it must exist somewhere. Maybe you’re looking in the wrong region? If you go into the billing console it should have pretty detailed info about exactly what you are being billed for and why. If you can’t figure it out, sure, a support case can’t hurt. Spinning something expensive up as an experiment or mistake and then never tearing it back down is unfortunately very common.
|
# ? Jan 3, 2024 02:56 |
|
Docjowles posted:If they’re being charged for it, it must exist somewhere. Maybe you’re looking in the wrong region? If you go into the billing console it should have pretty detailed info about exactly what you are being billed for and why. You were right on, the Private CA was set up in a different region than all their other stuff. It looks like all the certificates they have in there are expired and were for domains that were never set up. It makes me think they could just delete the certs and the CA and save a good amount of money. Edit: actually it looks like certs from a Private CA might not be listed in certificate manager, and I have to generate an audit report…which only exports to an S3 bucket? Who comes up with this stuff?! frogbs fucked around with this message at 17:59 on Jan 3, 2024 |
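For anyone hitting the same wall: the audit report really does only land in S3. A rough boto3 sketch of the request — the CA ARN and bucket name below are placeholders, and the actual call is left commented since it needs credentials:

```python
# Sketch: request a Private CA audit report (list of every cert the CA
# has issued or revoked). ACM PCA delivers it to an S3 bucket only.
# ARN and bucket name are made-up placeholders.
REPORT_REQUEST = {
    "CertificateAuthorityArn": (
        "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/EXAMPLE"
    ),
    "S3BucketName": "my-pca-audit-reports",
    "AuditReportResponseFormat": "JSON",  # or "CSV"
}

# With credentials configured, the call would be:
# import boto3
# pca = boto3.client("acm-pca")
# pca.create_certificate_authority_audit_report(**REPORT_REQUEST)
```

Remember to run it in the region the CA actually lives in, given the cross-region confusion above.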
# ? Jan 3, 2024 17:42 |
|
FYI it's quite unlikely that AWS support would help you figure out if your CA was being used by anything; that's at least a Business tier support offering, and more likely they'd put you in touch with a consultant. If it's just looking at the list of issued certs then they might help with that, but I would expect them to avoid saying anything like "those certs aren't being used and you can delete the CA".
|
# ? Jan 3, 2024 17:59 |
|
This is correct. Support won’t help you determine whether a cert is in use or not.
|
# ? Jan 3, 2024 19:29 |
|
Hey guys, I'm looking for a recommendation here. I'm about to deploy my first full stack portfolio piece and I'd like to do it on AWS to get some experience with it. The project is a small (fake) ecommerce site using Django, React and Postgres. I've already got my Postgres running on AWS RDS, my catalog images are in an S3 bucket and I purchased my domain through Route 53 (same price as Namecheap). I wanted to make this easy and AWS seems to offer everything in one place. There seem to be a few different options available but I'm not sure which is best. What's the easiest, free, most streamlined way to deploy my Django/React project on AWS? I would also appreciate any links or video walkthroughs that can help guide me through the process.
|
# ? Jan 8, 2024 21:54 |
|
Hmmm, AWS Amplify has a free tier component, that's probably the quickest/easiest way, but I won't pretend to be an expert and others with more experience can chime in. You could also go the full painful way and set up an EC2 Auto Scaling group, build out your own VPC with it, set up CloudFront, etc. That would be a lot more work obviously, but if you haven't had any hands-on AWS experience it would be a good intro project to just touch a lot of common services.
|
# ? Jan 8, 2024 22:18 |
|
Unless you're going out of your way to build as much stuff as possible to demonstrate how full stack it is, I would probably use beanstalk.
|
# ? Jan 8, 2024 23:05 |
|
I used Elastic Beanstalk for the backend of my project.
|
# ? Jan 8, 2024 23:21 |
|
Awesome, I'll try beanstalk then. I'm just trying to keep it simple for the first one. I'll try something more adventurous after I get this one squared away. I appreciate the advice.
davey4283 fucked around with this message at 02:45 on Jan 9, 2024 |
# ? Jan 9, 2024 02:38 |
|
For both learning and professionally I would recommend that you use ECS rather than Beanstalk, because fundamentally Elastic Beanstalk is just opinionated ECS and those opinions may not match up to your desires or needs. That being said, since your objective is to learn, I implore you to set up both options (with CI/CD and your infra as code tool of choice of course), see which you prefer and why, and then you have both in your portfolio. ECS is also much more commonly used at enterprise scale. The Iron Rose fucked around with this message at 03:28 on Jan 9, 2024 |
# ? Jan 9, 2024 03:04 |
|
I start a new job in 2 weeks at a place that uses AWS, whereas all my experience is in Azure (they know this, I don't have to fake anything). What would be a good primer on AWS? Is there an AWS equivalent to the AZ-900 exam/cert (which is a totally free and online cert from Microsoft) that I can use to at least build a foundation of skills?
|
# ? Jan 9, 2024 14:58 |
|
The solutions architect cert is the baseline go-to in my opinion. It'll give you a good introduction to the most commonly used services. ACloudGuru is worth the cost for the sandbox environments if you're just starting out. It'll give you a chance to get some hands-on experience without having to worry about leaving something on and getting a big bill later. Their other course offerings have gone downhill since they got bought by Pluralsight, but the solutions architect associate course is still fine.
|
# ? Jan 9, 2024 15:57 |
|
Seconding the SAA cert, but try to get your new job to pay for any video lectures. In the meantime, literally read every page of existing documentation on IAM, EC2, S3, and VPC. If your new job is heavy in EKS or ECS then do the same with, again, literally every word of documentation they have on the subject. AWS’ docs are lightyears better organized than Azure’s and you’d need to do this for your exam anyways
|
# ? Jan 9, 2024 16:58 |
|
On top of the other good advice I would suggest specifically focusing on IAM and VPC/networking basics. Cause those are building blocks you will need no matter what other AWS services your company uses on top of them. The idea of IAM roles in particular took way too long to click for me when I first started using AWS. Hopefully coming from another cloud provider it’s familiar for you. AWS networking isn’t bad if you have any network background at all. Unfortunately a lot of people do not. So they end up shooting themselves in the foot with architectures that are extremely cost inefficient or insecure or incompatible with the company’s IP address plan. Then you have to start over which means tearing everything down, which sucks. Planning out your VPCs should very much be “measure twice, cut once”.
|
# ? Jan 9, 2024 17:51 |
|
Speaking of IAM.... Anyone ever set up IAM Roles Anywhere? I sort of want to take on a project myself to get it and a CA set up to put the final nail in the coffin for my remaining access key IAM users. But I've got no experience with it, and I'm about to have baby brain and be out on paternity leave for a while, so I'm a bit hesitant to start it now.
|
# ? Jan 9, 2024 17:55 |
|
BaseballPCHiker posted:Speaking of IAM.... I’ve been on-and-off nagging my org to use it, since we already have an existing internal PKI. I’ve played around with it a bit and it more-or-less does what it says on the tin. One thing I haven’t looked into deeply is root / intermediate cert rollovers and how you’d handle that. One thing to be aware of is your IAM Roles’ trust policies need to be constructed with some care to avoid being overly permissive with which certs allow assuming the role. In my (admittedly limited) experience, if you have a lot of on-prem workloads it’s a pain in the rear end to manage their AWS access no matter what. So it’s really a matter of picking whichever solution sucks least.
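To make the "constructed with some care" point concrete, here's a sketch of a scoped-down trust policy for a Roles Anywhere role. The trust anchor ARN and CN value are placeholders; the idea is that without the Condition block, any cert the anchored CA has ever issued could assume the role:

```python
import json

# Sketch of an IAM Roles Anywhere trust policy. The ArnEquals condition
# pins it to one trust anchor and the StringEquals condition to one cert
# subject CN -- both values below are hypothetical placeholders.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "rolesanywhere.amazonaws.com"},
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession",
                "sts:SetSourceIdentity",
            ],
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": (
                        "arn:aws:rolesanywhere:us-east-1:123456789012:"
                        "trust-anchor/EXAMPLE"
                    )
                },
                "StringEquals": {
                    "aws:PrincipalTag/x509Subject/CN": "build-host.example.com"
                },
            },
        }
    ],
}

print(json.dumps(TRUST_POLICY, indent=2))
```

Roles Anywhere maps cert attributes to principal tags (like `x509Subject/CN` above), which is what lets you condition on them — worth double-checking the exact tag paths against the docs for your cert layout.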
|
# ? Jan 9, 2024 21:41 |
|
quote:Using Beanstalk to deploy Django/React project, keep getting Boto errors and failures: https://old.reddit.com/r/aws/comments/193f9vp/using_beanstalk_to_deploy_djangoreact_project/ I'm pulling my hair out with Beanstalk over here, if anyone has any ideas I would love to hear them
|
# ? Jan 10, 2024 19:45 |
|
Just my two cents, but Beanstalk has so many weird gotchas and limitations behind the scenes that it's not worth using.
|
# ? Jan 10, 2024 20:09 |
|
I'm honestly just looking for the easiest, most seamless way to deploy my first portfolio project (django/react). I thought aws would be a good route since I can deploy my site, host my postgres, buy a domain, and host static files all in one place. I don't care at all which service is used as long as it works. You mentioned Amplify earlier so maybe I'll give that a shot.
|
# ? Jan 10, 2024 20:39 |
|
davey4283 posted:https://old.reddit.com/r/aws/comments/193f9vp/using_beanstalk_to_deploy_djangoreact_project/ Remove the version constraints from requirements.txt, try again, then pip freeze. You already have a version conflict between awsebcli and botocore: awsebcli requires botocore>1.23.41,<1.32.0, and you're installing 1.34.15. Welcome to Python dependency hell. (Also don't install boto, it's not 2018 anymore, everything is in boto3 / botocore.)
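To make the conflict concrete, a naive version-range check (this is a toy helper, not pip's real resolver, and it ignores pre-release/epoch rules) shows the installed botocore falling outside awsebcli's pin:

```python
# Toy illustration of the pin conflict: awsebcli requires
# botocore >1.23.41,<1.32.0 while 1.34.15 is being installed.
def parse(version):
    """Split a plain x.y.z version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower_exclusive, upper_exclusive):
    """True if version sits strictly inside the (lower, upper) range."""
    return parse(lower_exclusive) < parse(version) < parse(upper_exclusive)

print(satisfies("1.34.15", "1.23.41", "1.32.0"))  # the installed pin: False
print(satisfies("1.31.85", "1.23.41", "1.32.0"))  # a compatible pin: True
```

In practice, letting pip pick the botocore version (by dropping the explicit pin and re-freezing, as suggested above) is the easy way out.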
|
# ? Jan 10, 2024 20:41 |
|
I'm trying to set up a CloudWatch dashboard to monitor the health of my SQS queues. Looking into the metrics available, I'm having a few issues finding ones that are helpful. So far on my dashboard I have: 1) A line graph for NumberOfMessagesSent. To this I added an anomaly detection band so I'll see if the number - high or low - is outside the normal range. 2) A gauge for my dead letter queue with ApproximateNumberOfMessagesVisible with a Sum stat, with the idea that anything > 0 is a problem. 3) A line graph for ApproximateAgeOfOldestMessage with the Maximum stat. If this goes up, there is a problem. I was wondering if anyone has any other useful graphs they use for queue health and if these make sense? I'm new to this so wondering what other people use.
|
# ? Feb 6, 2024 19:13 |
|
2 and 3 are the standard two you would use at a minimum and very widely used. Depending on how you are receiving from the queue, you could additionally look at the number of empty receives. If that goes up you are probably overscaled. Ideally your receiving side is autoscaled though (i.e., use Lambda). Generally though, for alerting, the ones you already have are great. Beyond that I just look at the metric page in the console for the queue as needed.
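If you do wire (2) up as an alarm rather than just a gauge, a boto3 sketch of the parameters might look like this — queue and alarm names are placeholders, and the API call is left commented since it needs credentials:

```python
# Sketch: alarm on the dead letter queue -- any visible message is a
# problem, so alert on Sum > 0 over a 5-minute period.
# Queue name and alarm name are hypothetical.
DLQ_ALARM = {
    "AlarmName": "orders-dlq-not-empty",
    "Namespace": "AWS/SQS",
    "MetricName": "ApproximateNumberOfMessagesVisible",
    "Dimensions": [{"Name": "QueueName", "Value": "orders-dlq"}],
    "Statistic": "Sum",
    "Period": 300,
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
}

# With credentials configured:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**DLQ_ALARM)
```

The same shape works for (3) by swapping the metric name to ApproximateAgeOfOldestMessage, the statistic to Maximum, and the threshold to whatever message age your consumers should never fall behind.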
|
# ? Feb 6, 2024 19:37 |
|
It's been too long since I touched SQS, and those look good, but you missed at least one important one. *After* an incident explodes the queue, you want to be able to easily say "how long until it's back to normal?" It looks like you have size of queue and incoming (I think?) rate, but not the outgoing rate. That's going to be important to compute "size / (out - in)" to answer "queue back to normal in X hours". (And as a very generalized bit of advice for anyone, don't just think about alarmable metrics for dashboards. Informing operators during incidents, or providing data to make choices for runbooks, are also important considerations. And for any SQS queue an important runbook entry to have is "the queue is too big, it won't drain in an acceptable timeframe (e.g. days/weeks!! months?!), and scaling up the fleet consuming it just ran into other scaling bottlenecks, so now what do we do?!" SQS is a deliberately unbounded queue, so this is something that definitely needs answers thought through before you're in the middle of a stressful incident.)
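That back-of-the-envelope estimate is worth sketching out, since the degenerate case matters: if the outgoing rate doesn't exceed the incoming rate, the answer isn't a big number, it's "never". Rates below are per minute and purely illustrative:

```python
def hours_to_drain(depth, out_per_min, in_per_min):
    """Rough ETA for a backlog to clear, assuming steady in/out rates."""
    net = out_per_min - in_per_min
    if net <= 0:
        # Queue is holding steady or growing: it will never drain at
        # these rates, which is the runbook scenario described above.
        return float("inf")
    return depth / net / 60.0

print(hours_to_drain(36_000, 20, 10))  # 60.0 hours at a net 10 msg/min
print(hours_to_drain(36_000, 10, 20))  # inf -- time to scale consumers
```

NumberOfMessagesSent and NumberOfMessagesDeleted (or NumberOfMessagesReceived, depending on how you define "out") are the natural CloudWatch inputs for the two rates.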
|
# ? Feb 6, 2024 20:57 |
|
I'm trying to put some of the final nails in the coffin of IMDSv1 here. I've got good reporting and metrics on which instances are set to IMDSv2 only, and have config rules ready to go to enforce that as well as an SCP. Looking at CloudWatch though I do see a ton of instances using IMDSv1, all on a regular cadence across instances in an account. My guess is that it's something AWS is using. Is there an easy way to see what's actually using IMDSv1 without installing the packet analyzer on these hosts?
|
# ? Feb 7, 2024 16:21 |
|
Just install it on one and see if it's really AWS as you suspect or something else, and go from there. I'm curious to know if it's AWS or not though
|
# ? Feb 7, 2024 17:01 |
|
It's either AWS or our CSPM. I see IMDSv1 usage for thousands of instances, across multiple accounts, all at the same time. Was hoping for an easy button, but I guess it's just time to use the packet analyzer.
|
# ? Feb 7, 2024 17:20 |
|
Well I ended up running the IMDS packet analyzer and it's all basically instances assuming their roles. 99% of them in this case were set to support IMDSv1 or v2, and I guess by default they just opt to use v1. So it should be pretty easy for me to get this switched over. For anyone else following along, the docs for the metabadger tool are more useful than what's provided by AWS, as they show a few different options for running the tool - https://github.com/salesforce/metabadger
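For the switchover itself, the per-instance knob is the instance metadata options — setting HttpTokens to required makes the instance IMDSv2-only. A boto3 sketch (instance ID is a placeholder, call left commented since it needs credentials):

```python
# Sketch: require IMDSv2 on an instance. Once HttpTokens is "required",
# tokenless IMDSv1 requests are rejected. Instance ID is a placeholder.
IMDS_OPTIONS = {
    "InstanceId": "i-0123456789abcdef0",
    "HttpTokens": "required",          # IMDSv2 only
    "HttpPutResponseHopLimit": 2,      # raise if containers need IMDS
    "HttpEndpoint": "enabled",
}

# With credentials configured:
# import boto3
# boto3.client("ec2").modify_instance_metadata_options(**IMDS_OPTIONS)
```

Worth doing a canary instance or two first in case some old SDK on the box genuinely can't do v2, though anything remotely recent handles the token flow transparently.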
|
# ? Feb 12, 2024 18:05 |
|
Today I learned that you can only attach a maximum of 10 managed IAM policies to an IAM group (that's the default quota; it can be raised to 20). If you want more attached, you need multiple groups or you need to copy the settings into your own policy. Which... feels like it defeats the purpose of having AWS managed policies for common roles. Maybe the purpose is just to push you into writing your own policies, but those AWS managed policies are so helpful! Particularly the ReadOnly ones, which we're utilizing heavily.
|
# ? Feb 14, 2024 19:48 |
|
Is there any reason you can't just use roles and assign users those roles instead?
|
# ? Feb 14, 2024 19:58 |
|
My read on those is that you have to "switch" into a role and isn't really meant to be a user's level of regular access. And it still has a policy attachment limit.
|
# ? Feb 14, 2024 20:24 |
|
A lot of this depends on how your AWS org is set up. Are you using Identity Center or just regular IAM users signing into the console or using keys?
|
# ? Feb 14, 2024 20:31 |