BaseballPCHiker
Jan 16, 2006

No creds or attached role.

Googling around, I do see some other people mention that account number in GitHub pages here and there, so I wonder now if it's some Amazon-owned account number?


Thanks Ants
May 21, 2004

#essereFerrari


I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), the subject line of the email, destination, etc., or is that all logging that you have to configure yourself if you want access to the data?

Assuming the answer to the above is "you're out of luck, build it yourself", is there a handy best practices guide that is worth paying attention to in terms of getting a good balance between what is being sent to CloudWatch and the cost of doing so?

BaseballPCHiker
Jan 16, 2006

Found my answer - https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html#architecture

WorkSpaces have two ENIs across two VPCs. The compute part of the WorkSpace actually resides in another AWS-managed account. I had no clue, but at least that clears up the oddball issue.
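If anyone else wants to sanity-check this on their own fleet, the quickest way I found is to ask IMDS for the instance identity document from inside the WorkSpace and compare the account ID against what you expect. Rough sketch only, and there's nothing WorkSpaces-specific about it:

code:

import requests  # run this on the WorkSpace/instance itself

IMDS = "http://169.254.169.254"

# IMDSv2: grab a session token first
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    timeout=2,
).text

# The instance identity document carries the owning account ID
doc = requests.get(
    f"{IMDS}/latest/dynamic/instance-identity/document",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
).json()

print("IMDS says account:", doc["accountId"])   # the AWS-managed account, not your org account
print("IMDS says region :", doc["region"])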


Thanks Ants posted:

I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), the subject line of the email, destination, etc., or is that all logging that you have to configure yourself if you want access to the data?

Assuming the answer to the above is "you're out of luck, build it yourself", is there a handy best practices guide that is worth paying attention to in terms of getting a good balance between what is being sent to CloudWatch and the cost of doing so?

Oooft. CloudTrail will only capture the management events, which may have some helpful info for you. Going forward, this may help: https://docs.aws.amazon.com/ses/latest/dg/monitor-using-event-publishing.html

Docjowles
Apr 9, 2009

Thanks Ants posted:

I've been dropped into a situation where someone has had their SES service suspended after AWS detected potential misuse. I'm 99% sure this is a "the horse is out the barn door" scenario and I am aware that email sucks, but is there any way to access a log of sent messages, the IP address making the request, how it was authenticated (e.g. access key used), the subject line of the email, destination, etc., or is that all logging that you have to configure yourself if you want access to the data?

Assuming the answer to the above is "you're out of luck, build it yourself", is there a handy best practices guide that is worth paying attention to in terms of getting a good balance between what is being sent to CloudWatch and the cost of doing so?

If your senders are using the SES API you might be able to find something useful in CloudTrail. If it's just raw SMTP, yeah, "you're out of luck, build it yourself". AWS loves to offer these "solutions" that aren't so much a feature as they are lovely Rube Goldberg machines of 5 different AWS services smashed together. Take a look at Event Publishing. You can set it up to publish detailed logs to various destinations. You will need control over the senders, though, since you need to pass in custom headers to make it happen. Hopefully you aren't just offering an open relay anyway, though; otherwise, I think I know how you got in trouble :v:
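To make the event publishing bit concrete, the setup is roughly this with boto3. The configuration set, topic ARN and addresses are all made up, and this assumes the classic (v1) SES API, so double-check it against the docs linked above:

code:

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# 1. Create a configuration set to hang the event destination off of
ses.create_configuration_set(ConfigurationSet={"Name": "outbound-logging"})

# 2. Publish send/delivery/bounce/complaint/reject events to an SNS topic
#    (Kinesis Firehose and CloudWatch are the other destination options)
ses.create_configuration_set_event_destination(
    ConfigurationSetName="outbound-logging",
    EventDestination={
        "Name": "sns-logs",
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "bounce", "complaint", "reject"],
        "SNSDestination": {
            "TopicARN": "arn:aws:sns:us-east-1:111122223333:ses-events",  # hypothetical topic
        },
    },
)

# 3. API senders opt in per message; SMTP senders instead add the
#    X-SES-CONFIGURATION-SET header, which is the "custom headers" bit above
ses.send_email(
    Source="noreply@example.com",
    Destination={"ToAddresses": ["someone@example.com"]},
    Message={
        "Subject": {"Data": "test"},
        "Body": {"Text": {"Data": "hello"}},
    },
    ConfigurationSetName="outbound-logging",
)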

If the account has any level of AWS paid support, give that a shot, too. Maybe they can pull logs or something. It's in their interest to not have people making GBS threads up their IP reputation.

edit: I see PCHiker and I are on the same page lol

Thanks Ants
May 21, 2004

#essereFerrari


Event publishing looks like it will do most of the work; the auto tags are pretty much what we're after, and then it's just a case of surfacing that information again in a way that makes sense.

Knowing what I know about the people using this service, I think I'm going to push for them to move to a more managed platform that will do all this for them, even if it costs a bit more than SES. They aren't really at the level where they can be consuming raw AWS services.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
Maybe every place I've been was holding it wrong, but SES is just hot garbage, I feel.

Thanks Ants posted:

Knowing what I know about the people using this service, I think I'm going to push for them to move to a more managed platform that will do all this for them

Very yes.

ledge
Jun 10, 2003

Startyde posted:

SES is just hot garbage

It works fine for just sending email from AWS services, and there are email marketing platforms out there that use it. But you have to build everything on top of it and just treat it like an SMTP server with terrible logging.

I don't think you'd ever want to give end users the ability to send via it directly.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

BaseballPCHiker posted:

Got a weird one today I've never seen.

A bunch of WorkSpaces instances are showing an account number via IMDS that is nowhere to be found in my AWS org. Like, I can view their VPC IDs and confirm they're in the right account, but the host itself reports a different account. Really throwing off inventory for me.

You are probably seeing the AWS service account. Some services run stuff in service accounts managed by AWS, and you have to do some digging through logs to find the account that actually spun up the resources.

E: well hello there. It’s a completely new page that I needed to read to catch up on the thread.

Agrikk fucked around with this message at 03:33 on Dec 29, 2023

frogbs
May 5, 2004
Well well well
I recently 'inherited' an AWS account at work. For some reason they're paying $400/month for a private certificate authority, but there's no Private CA listed as configured in their account. Is this likely something that was added at some point along the way but never disabled after they didn't need it any more? Should I just contact AWS support and see if they can confirm it's not being used?

Docjowles
Apr 9, 2009

If they’re being charged for it, it must exist somewhere. Maybe you’re looking in the wrong region? If you go into the billing console it should have pretty detailed info about exactly what you are being billed for and why.

If you can’t figure it out, sure, a support case can’t hurt.

Spinning something expensive up as an experiment or mistake and then never tearing it back down is unfortunately very common.
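If the billing console isn't making it obvious, the Cost Explorer API can also break the charge down by region. A rough sketch; the "Certificate" substring filter is just my guess at how the line item is named, so check the exact SERVICE value in your bill:

code:

import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # Cost Explorer is served out of us-east-1
end = date.today()
start = end - timedelta(days=30)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[
        {"Type": "DIMENSION", "Key": "SERVICE"},
        {"Type": "DIMENSION", "Key": "REGION"},
    ],
)

for period in resp["ResultsByTime"]:
    for group in period["Groups"]:
        service, region = group["Keys"]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 1 and "Certificate" in service:  # crude filter for the CA charge
            print(f"{service:45s} {region:15s} ${amount:,.2f}")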

frogbs
May 5, 2004
Well well well

Docjowles posted:

If they’re being charged for it, it must exist somewhere. Maybe you’re looking in the wrong region? If you go into the billing console it should have pretty detailed info about exactly what you are being billed for and why.

If you can’t figure it out, sure, a support case can’t hurt.

Spinning something expensive up as an experiment or mistake and then never tearing it back down is unfortunately very common.

You were right on, the Private CA was set up in a different region than all their other stuff. It looks like all the certificates they have in there are expired and were for domains that were never set up. It makes me think they could just delete the certs and the CA and save a good amount of money.

Edit: actually it looks like certs from a Private CA might not be listed in Certificate Manager, and I have to generate an audit report…which only exports to an S3 bucket? Who comes up with this stuff?!
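In case anyone ends up in the same spot, here's roughly how I poked at it from boto3 instead of clicking through every region. The region list, bucket name and CA ARN are placeholders:

code:

import boto3

regions = ["us-east-1", "us-east-2", "us-west-2", "eu-west-1"]  # hypothetical list to check

for region in regions:
    pca = boto3.client("acm-pca", region_name=region)
    for ca in pca.list_certificate_authorities()["CertificateAuthorities"]:
        print(region, ca["Arn"], ca["Status"])

# Once you've found it, the audit report really does only go to S3
pca = boto3.client("acm-pca", region_name="us-west-2")  # region where the CA turned up
report = pca.create_certificate_authority_audit_report(
    CertificateAuthorityArn="arn:aws:acm-pca:us-west-2:111122223333:certificate-authority/EXAMPLE",
    S3BucketName="my-pca-audit-reports",  # bucket needs a policy allowing ACM PCA to write
    AuditReportResponseFormat="JSON",
)
print(report["AuditReportId"], report["S3Key"])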

frogbs fucked around with this message at 17:59 on Jan 3, 2024

Thanks Ants
May 21, 2004

#essereFerrari


FYI, it's quite unlikely that AWS support would help you figure out if your CA was being used by anything; that's at least a Business-tier support offering, and more likely they'd put you in touch with a consultant. If it's just looking at the list of issued certs then they might help with that, but I would expect them to avoid saying anything like "those certs aren't being used and you can delete the CA".

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
This is correct. Support won’t help you determine whether a cert is in use or not.

davey4283
Aug 14, 2006
Fallen Rib
Hey guys, I'm looking for a recommendation here. I'm about to deploy my first full-stack portfolio piece and I'd like to do it on AWS to get some experience with it. The project is a small (fake) ecommerce site using Django, React and Postgres. I've already got my Postgres running on AWS RDS, my catalog images are in an S3 bucket, and I purchased my domain through Route 53 (same price as Namecheap). I wanted to make this easy and AWS seems to offer everything in one place. There seem to be a few different options available but I'm not sure which is best. What's the easiest, free, most streamlined way to deploy my Django/React project on AWS? I would also appreciate any links or video walkthroughs that can help guide me through the process.

BaseballPCHiker
Jan 16, 2006

Hmmm,

AWS Amplify has a free tier component; that's probably the quickest/easiest way, but I won't pretend to be an expert, and others with more experience can chime in.

You could also go the full painful way and set up an EC2 Auto Scaling group, build out your own VPC with it, set up CloudFront, etc. That would obviously be a lot more work, but if you haven't had any hands-on AWS experience it would be a good intro project to touch a lot of common services.

12 rats tied together
Sep 7, 2006

Unless you're going out of your way to build as much stuff as possible to demonstrate how full stack it is, I would probably use beanstalk.

Xerxes17
Feb 17, 2011

I used Elastic Beanstalk for the backend of my project.

davey4283
Aug 14, 2006
Fallen Rib
Awesome, I'll try beanstalk then. I'm just trying to keep it simple for the first one. I'll try something more adventurous after I get this one squared away. I appreciate the advice.

davey4283 fucked around with this message at 02:45 on Jan 9, 2024

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
For both learning and professionally I would recommend that you use ECS rather than Beanstalk, because fundamentally Elastic Beanstalk is just opinionated ECS and those opinions may not match up to your desires or needs.

That being said, since your objective is to learn, I implore you to set up both options (with CI/CD and your infra as code tool of choice of course), see which you prefer and why, and then you have both in your portfolio. ECS is also much more commonly used at enterprise scale.
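To give a feel for what "raw" ECS looks like next to Beanstalk, here's a minimal Fargate sketch with boto3. Cluster name, image, subnet and security group are placeholders, and in practice you'd express this in your CI/CD and IaC tooling rather than one-off API calls:

code:

import boto3

ecs = boto3.client("ecs")

ecs.create_cluster(clusterName="portfolio")

# Describe the container: image, ports, sizing
ecs.register_task_definition(
    family="portfolio-web",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "django",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/portfolio:latest",  # placeholder
        "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Run it as a long-lived service in your VPC
ecs.create_service(
    cluster="portfolio",
    serviceName="django-web",
    taskDefinition="portfolio-web",
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],      # placeholder
        "securityGroups": ["sg-0123456789abcdef0"],   # placeholder
        "assignPublicIp": "ENABLED",
    }},
)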

The Iron Rose fucked around with this message at 03:28 on Jan 9, 2024

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I start a new job in 2 weeks at a place that uses AWS, whereas all my experience is in Azure (they know this, I don't have to fake anything). What would be a good primer on AWS? Is there an AWS equivalent to the AZ-900 exam/cert (which is a totally free and online cert from Microsoft) that I can use to at least build a foundation of skills?

BaseballPCHiker
Jan 16, 2006

The solutions architect cert is the baseline go-to in my opinion. It'll give you a good introduction to the most commonly used services.

ACloudGuru is worth the cost for the sandbox environments if you're just starting out. It'll give you a chance to get some hands-on experience without having to worry about leaving something on and getting a big bill later. Their other course offerings have gone downhill since they got bought by Pluralsight, but the Solutions Architect Associate course is still fine.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
Seconding the SAA cert, but try to get your new job to pay for any video lectures.

In the meantime, literally read every page of existing documentation on IAM, EC2, S3, and VPC. If your new job is heavy in EKS or ECS then do the same with, again, literally every word of documentation they have on the subject.

AWS' docs are light-years better organized than Azure's, and you'd need to do this for your exam anyway.

Docjowles
Apr 9, 2009

On top of the other good advice I would suggest specifically focusing on IAM and VPC/networking basics. Cause those are building blocks you will need no matter what other AWS services your company uses on top of them. The idea of IAM roles in particular took way too long to click for me when I first started using AWS. Hopefully coming from another cloud provider it’s familiar for you.

AWS networking isn’t bad if you have any network background at all. Unfortunately a lot of people do not. So they end up shooting themselves in the foot with architectures that are extremely cost inefficient or insecure or incompatible with the company’s IP address plan. Then you have to start over which means tearing everything down, which sucks. Planning out your VPCs should very much be “measure twice, cut once”.
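On the roles thing, what finally made it click for me is that a role is just something you call sts:AssumeRole on and get temporary credentials back, which you then use like any other keys. Tiny sketch, role ARN made up:

code:

import boto3

sts = boto3.client("sts")
print("starting identity:", sts.get_caller_identity()["Arn"])

# Assume a role (the role's trust policy has to allow your current identity)
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ReadOnlyAuditor",  # hypothetical role
    RoleSessionName="docs-reading-session",
)["Credentials"]

# Build a new client from the temporary credentials
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])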

BaseballPCHiker
Jan 16, 2006

Speaking of IAM....

Anyone ever set up IAM Roles Anywhere? I sort of want to take on a project myself to get it and a CA set up, to put the final nail in the coffin for my remaining access-key IAM users. But I've got no experience with it, and I'm about to have baby brain and be out on paternity leave for a while, so I'm a bit hesitant to start it now.

dads friend steve
Dec 24, 2004

BaseballPCHiker posted:

Speaking of IAM....

Anyone ever set up IAM Roles Anywhere? I sort of want to take on a project myself to get it and a CA set up, to put the final nail in the coffin for my remaining access-key IAM users. But I've got no experience with it, and I'm about to have baby brain and be out on paternity leave for a while, so I'm a bit hesitant to start it now.

I’ve been on-and-off nagging my org to use it, since we already have an existing internal PKI. I’ve played around with it a bit and it more-or-less does what it says on the tin. One thing I haven’t looked into deeply is root / intermediate cert rollovers and how you’d handle that.

One thing to be aware of is your IAM Roles’ trust policies need to be constructed with some care to avoid being overly permissive with which certs allow assuming the role.
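For illustration, the shape is roughly this (trust anchor ARN and CN are placeholders, and double-check the exact condition keys against the Roles Anywhere docs, I'm going from memory here):

code:

import boto3
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "rolesanywhere.amazonaws.com"},
        "Action": ["sts:AssumeRole", "sts:TagSession", "sts:SetSourceIdentity"],
        "Condition": {
            # Pin to one specific trust anchor...
            "ArnEquals": {
                "aws:SourceArn": "arn:aws:rolesanywhere:us-east-1:111122223333:trust-anchor/EXAMPLE"
            },
            # ...and to a specific certificate subject, not "anything our CA ever signed"
            "StringEquals": {
                "aws:PrincipalTag/x509Subject/CN": "build-agent.internal.example.com"
            },
        },
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="onprem-build-agent",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)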

In my (admittedly limited) experience, if you have a lot of on-prem workloads it’s a pain in the rear end to manage their AWS access no matter what. So it’s really a matter of picking whichever solution sucks least.

davey4283
Aug 14, 2006
Fallen Rib

quote:

Title: Using Beanstalk to deploy Django/React project, Keep getting Boto errors and fails:

I have the most up-to-date version of boto3 and botocore, and I've done pip freeze and added it to my requirements.txt, but when I try to deploy my site with eb deploy, it always fails. The eb logs error says I'm not satisfying the boto requirements, but it's in my pip list/requirements.txt, which doesn't make any sense.
Log: ============= i-0256677979eb7fc50 ==============
/var/log/eb-engine.log

WARNING: You are using pip version 22.0.4; however, version 23.3.2 is available. You should consider upgrading via the '/var/app/venv/staging-LQM1lest/bin/python3.7 -m pip install --upgrade pip' command.

2024/01/10 18:18:35.673148 [ERROR] An error occurred during execution of command [app-deploy] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr:ERROR: Could not find a version that satisfies the requirement boto3==1.34.14 (from versions:

Requirements.txt:

asgiref==3.7.2
awsebcli==3.20.10
blessed==1.20.0
boto==2.49.0
boto3==1.34.15
botocore==1.34.15
cement==2.8.2
certifi==2023.11.17
charset-normalizer==3.3.2
colorama==0.4.3
Django==4.2.4
(etc)

ChatGPT is running me in circles... Any ideas on what's going on with this?

edit: I'm running Windows 10, Ubuntu WSL, bash, VS Code, with the AWS CLI through the VS Code terminal.

I've updated pip and boto multiple times and updated the requirements.txt over and over again, but it still fails when I attempt eb deploy.

https://old.reddit.com/r/aws/comments/193f9vp/using_beanstalk_to_deploy_djangoreact_project/

I'm pulling my hair out with Beanstalk over here; if anyone has any ideas I would love to hear them.

BaseballPCHiker
Jan 16, 2006

Just my two cents, but Beanstalk has so many weird gotchas and limitations behind the scenes that it's not worth using.

davey4283
Aug 14, 2006
Fallen Rib
I'm honestly just looking for the easiest, most seamless way to deploy my first portfolio project (django/react).

I thought AWS would be a good route since I can deploy my site, host my Postgres, buy a domain, and host static files all in one place.

I don't care at all which service is used as long as it works.

You mentioned Amplify earlier so maybe I'll give that a shot.

vanity slug
Jul 20, 2010

davey4283 posted:

https://old.reddit.com/r/aws/comments/193f9vp/using_beanstalk_to_deploy_djangoreact_project/

I'm pulling my hair out with Beanstalk over here; if anyone has any ideas I would love to hear them.

Remove the version constraints from requirements.txt, try again, then pip freeze.

You already have a version conflict between awsebcli and botocore: awsebcli requires botocore>1.23.41,<1.32.0, and you're installing 1.34.15.

Welcome to Python dependency hell.

(also don't install boto, it's not 2018 anymore, everything is in boto3 / botocore)
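If you want to see the conflict locally instead of waiting on another eb deploy round trip, pip check will flag it, or something quick like this:

code:

from importlib.metadata import requires, version

# What's actually installed
print("botocore installed:", version("botocore"))
print("boto3 installed:  ", version("boto3"))

# What awsebcli claims to need (the >1.23.41,<1.32.0 pin shows up here)
for req in requires("awsebcli") or []:
    if req.startswith(("botocore", "boto3", "boto")):
        print("awsebcli requires:", req)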

sausage king of Chicago
Jun 13, 2001
I'm trying to set up a Cloudwatch dashboard to monitor the health of my SQS queues. Looking into the metrics available, I'm having a few issues finding ones that are helpful. So far on my dashboard I have:

1) A line graph for NumberOfMessagesSent. To this I added an anomaly detection band so I'll see if the number - high or low - is outside the normal range.
2) A gauge for my dead letter queue with ApproximateNumberOfMessagesVisible with a Sum stat, with the idea that anything > 0 is a problem.
3) A line graph for ApproximateAgeOfOldestMessage with the Maximum stat. If this goes up, there is a problem.

I was wondering if anyone has any other useful graphs they use for queue health, and whether these make sense. I'm new to this, so I'm curious what other people use.

Adhemar
Jan 21, 2004

Kellner, da ist ein scheussliches Biest in meiner Suppe.
2 and 3 are the standard ones you would use at a minimum, and they're very widely used. Depending on how you are receiving from the queue, you could additionally look at the number of empty receives. If that goes up you are probably overscaled. Ideally your receiving side is autoscaled, though (i.e., use Lambda). Generally though, for alerting, the ones you already have are great. Beyond that I just look at the metric page in the console for the queue as needed.
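If it helps, the alarm versions of 2 and 3 are only a few lines with boto3. Queue names and thresholds here are made up, tune to taste:

code:

import boto3

cw = boto3.client("cloudwatch")
queue_name = "my-work-queue"       # hypothetical
dlq_name = "my-work-queue-dlq"     # hypothetical

# 2) Anything landing in the DLQ is a problem
cw.put_metric_alarm(
    AlarmName=f"{dlq_name}-has-messages",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": dlq_name}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)

# 3) Oldest message sitting too long means consumers are falling behind
cw.put_metric_alarm(
    AlarmName=f"{queue_name}-oldest-message-age",
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": queue_name}],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=3,
    Threshold=900,  # 15 minutes; pick what "too old" means for you
    ComparisonOperator="GreaterThanThreshold",
)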

crazypenguin
Mar 9, 2005
nothing witty here, move along
It's been too long since I touched SQS, and those look good, but you missed at least one important one. *After* an incident explodes the queue, you want to be able to easily say "how long until it's back to normal?"

It looks like you have size of queue and incoming (I think?) rate, but not the outgoing rate. That's going to be important to compute "size / (out - in)" to answer "queue back to normal in X hours".

(And as a very generalized bit of advice for anyone, don't just think about alarmable metrics for dashboards. Informing operators during incidents, or providing data to make choices for runbooks, are also important considerations. And for any SQS queue an important runbook entry to have is "the queue is too big, it won't drain in an acceptable timeframe (e.g. days/weeks!! months?!), and scaling up the fleet consuming it just ran into other scaling bottlenecks, so now what do we do?!" SQS is a deliberately unbounded queue, so this is something that definitely needs answers thought through before you're in the middle of a stressful incident.)
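A rough sketch of that "back to normal in X hours" math against SQS and CloudWatch, since it's exactly the kind of thing you want worked out in a runbook before the incident. Queue name is made up:

code:

import boto3
from datetime import datetime, timedelta, timezone

queue_name = "my-work-queue"   # hypothetical

sqs = boto3.client("sqs")
cw = boto3.client("cloudwatch")

# Current backlog straight from the queue
queue_url = sqs.get_queue_url(QueueName=queue_name)["QueueUrl"]
backlog = int(sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages"],
)["Attributes"]["ApproximateNumberOfMessages"])

# Incoming/outgoing rates over the last hour from CloudWatch
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

def hourly_sum(metric):
    """Sum of an AWS/SQS metric for this queue over the last hour."""
    datapoints = cw.get_metric_statistics(
        Namespace="AWS/SQS",
        MetricName=metric,
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Sum"],
    )["Datapoints"]
    return sum(dp["Sum"] for dp in datapoints)

incoming = hourly_sum("NumberOfMessagesSent")      # producers -> queue
outgoing = hourly_sum("NumberOfMessagesDeleted")   # consumers finishing work

drain_rate = outgoing - incoming                   # messages per hour
if drain_rate > 0:
    print(f"~{backlog / drain_rate:.1f} hours until the queue is back to normal")
else:
    print("Queue isn't draining; scale consumers out or shed load")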

BaseballPCHiker
Jan 16, 2006

I'm trying to put some of the final nails in the coffin of IMDSv1 here. I've got good reporting and metrics on which instances are set to IMDSv2 only, and have config rules ready to go to enforce that as well as an SCP.

Looking at CloudWatch though, I do see a ton of instances using IMDSv1, all on a regular cadence across instances in an account. My guess is that it's something AWS is using. Is there an easy way to see what's actually using IMDSv1 without installing the packet analyzer on these hosts?

Resdfru
Jun 4, 2004

I'm a freak on a leash.
Just install it on one and see if it's really AWS as you suspect, or something else, and go from there. I'm curious to know if it's AWS or not, though.

BaseballPCHiker
Jan 16, 2006

It's either AWS or our CSPM. I see IMDSv1 usage for thousands of instances, across multiple accounts, all at the same time.

Was hoping for an easy button, but I guess it's just time to use the packet analyzer.

BaseballPCHiker
Jan 16, 2006

Well, I ended up running the IMDS packet analyzer, and it's all basically instances assuming their roles.

99% of them in this case were set to support both IMDSv1 and v2, and I guess by default they just opt to use v1. So it should be pretty easy for me to get this switched over. For anyone else following along, the docs for the metabadger tool are more useful than what's provided by AWS, as they show you a few different options for running the tool - https://github.com/salesforce/metabadger
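And for the actual switch-over, it's one API call per instance. A sketch that finds everything still allowing v1 and flips it to require tokens (test on a handful first, obviously):

code:

import boto3

ec2 = boto3.client("ec2")

# Find instances that still allow IMDSv1 (HttpTokens is "optional")
paginator = ec2.get_paginator("describe_instances")
to_fix = []
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance.get("MetadataOptions", {}).get("HttpTokens") == "optional":
                to_fix.append(instance["InstanceId"])

print(f"{len(to_fix)} instances still allow IMDSv1")

# Flip them to require a session token (IMDSv2 only)
for instance_id in to_fix:
    ec2.modify_instance_metadata_options(
        InstanceId=instance_id,
        HttpTokens="required",
        HttpEndpoint="enabled",
    )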

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Today I learned that you can only attach a maximum of 10 managed IAM policies (AWS managed or otherwise) to an IAM group. If you want more attached, you need multiple groups or you need to copy the settings into your own policy. Which... feels like it defeats the purpose of having AWS managed policies for common roles. Maybe the purpose is just to push you into writing your own policies, but those AWS managed policies are so helpful! Particularly the ReadOnly policies, which we're utilizing heavily.

BaseballPCHiker
Jan 16, 2006

Is there any reason you can't just use roles and assign users those roles instead?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
My read on those is that you have to "switch" into a role, and it isn't really meant to be a user's regular level of access. And roles still have a policy attachment limit.


BaseballPCHiker
Jan 16, 2006

A lot of this depends on how your AWS org is set up. Are you using Identity Center, or just regular IAM users signing into the console or using keys?
