12 rats tied together
Sep 7, 2006

I was completely ready to say mean things about it but it looks fine actually. It's not that different from many take-home technical interviews I've been presented with. Some of the tool choices are bad, is the worst thing I can muster.


ledge
Jun 10, 2003

BaseballPCHiker posted:

That AI assist thing is so loving dumb and annoying.

I feel bad for anyone at AWS who gets stuck working on that thing.

I've asked it three different things and the answer has been wrong every time.

Just like CodeWhisperer, which generates broken code even when calling AWS APIs. I mean it adds incorrect arguments to functions that look right but are wrong.

AI is such a loving disaster zone and the sooner it and the companies surrounding it collapse the better. And they will because it is already hitting the limits of what it can do, and it isn't very good at doing it.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I don't know why I hate AI so much but I do.

Maybe that it’s been adopted by so many as the perfect cure-all for everything from procedure manuals to customer emails to doing our thinking for us.

Maybe people who have no actual idea of its capabilities use it like a general purpose “we’ll throw AI at it to fix it!” rallying cry.

Maybe because people use “AI” like they used “paradigm” in the 90s, “devops” in the 00s or “agile” in the teens.

But I hate it and it sucks.

LochNessMonster
Feb 3, 2005

I need about three fitty


Blurb3947 posted:

Curious if Forrest Brazeal’s cloud resume challenge holds any weight in the industry? I’m almost done with it and have learned quite a bit about various services, but I’m skeptical whether it actually helps people during their job hunts.

If I’m interviewing a junior and they can explain what they built and why they used specific services I’d treat it the same as hands on professional experience.

I don’t think it’s really “a thing” like a certification or something.

MrMoo
Sep 14, 2000

Agrikk posted:

I don't know why I hate AI so much but I do.

It attracts dot-com-style investment levels, so people go crazy. It’s also an elusive tech term that will be reduced to the lowest common denominator. Much like the cloud is someone else’s computer, AI may be called a decision made by a device. There has been more effort in trying to use ML for things, but then again the market is heavy with wrappers for ChatGPT.

In my industry people are demanding AI anything; it means buzzwords to stand out from the crowd, even if the implementation is weak.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Agrikk posted:

I don't know why I hate AI so much but I do.

Maybe that it’s been adopted by so many as the perfect cure-all for everything from procedure manuals to customer emails to doing our thinking for us.

Maybe people who have no actual idea of its capabilities use it like a general purpose “we’ll throw AI at it to fix it!” rallying cry.

Maybe because people use “AI” like they used “paradigm” in the 90s, “devops” in the 00s or “agile” in the teens.

But I hate it and it sucks.

it's because ai is championed by the same insufferable assholes who constantly sell the sizzle and refuse to acknowledge how lovely the steak is

Blurb3947
Sep 30, 2022

LochNessMonster posted:

If I’m interviewing a junior and they can explain what they built and why they used specific services I’d treat it the same as hands on professional experience.

I don’t think it’s really “a thing” like a certification or something.

Part of the challenge is to actually go out and get a cert. I did the CCP as part of my degree and the SAA on my own. I'm just trying to get some experience that I can show off and talk about for interviews or at least some sort of leg up for job searches.

BaseballPCHiker
Jan 16, 2006

BaseballPCHiker posted:

Once again I am struggling with cross-account permissions.

I'm trying to create a CloudFormation template that can be deployed in all of our accounts, detects root user logins via EventBridge, and targets a central SNS topic in another account.

The central SNS topic has an access policy allowing principal AWS:* to perform sns:Publish, on the condition that aws:PrincipalOrgID matches our AWS organization ID. No problems there as far as I can tell.

The CFT I'm writing keeps failing with this error:
code:
Access to the resource blahblahXYZ is denied. Reason: Adding cross-account target is not permitted. (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: AccessDeniedException; Request ID: Whatever; Proxy: Null)
So then I tell myself OK, I need to define a policy in my CFT to give EventBridge rights to publish. But if I do that I get:
code:
"User:" arn whatever is not authorized to perform SNS:SetTopicAttributes on resource blahblahXYZ because no resource based policy allows the SNS:SetTopicAttributes action. (Service: Sns, Status Code: 400. Request ID: whatever REquestToken: whatever. AccessDenied)
Except that I have another SID within the SNS access policy that says allow principal AWS * to make SNS:GetTopic, SetTopic, AddPermission, RemovePermission, DeleteTopic, Subscribe, ListSubsByTopic, AND Publish.

I had thought this would be relatively straightforward. The idea was that I could use this as a template and just update the events we wanted to alert on and publish to a central org topic. But once again I am banging my head against the wall when it comes to cross-account access.

Am I missing something obvious or is there a better way to go about this?

So this is from 2+ months ago but I finally figured it out!!!!!

When you go cross-account from a service like SNS, EventBridge, CloudWatch, etc., they don't pass the orgID attribute. So even though my SNS policy on the other side said accept aws:* with my orgID as a condition, I couldn't get poo poo to work because those services weren't passing that attribute!

But if I have them go through a Lambda as an intermediary with an execution role, then the orgID attribute does get passed.

Ugh so frustrating but a hard-earned lesson.
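
For anyone hitting the same wall, a minimal sketch of what that Lambda intermediary could look like; the CENTRAL_TOPIC_ARN environment variable and the subject line are made-up names for illustration:
code:
import json
import os

import boto3

sns = boto3.client("sns")

def handler(event, context):
    # EventBridge invokes this function with the matched event. Because the
    # Lambda execution role is a normal principal inside the organization,
    # the aws:PrincipalOrgID condition on the central topic can now evaluate.
    sns.publish(
        TopicArn=os.environ["CENTRAL_TOPIC_ARN"],
        Subject="Root user login detected",
        Message=json.dumps(event),
    )
EventBridge then targets the function instead of the remote topic, and the central topic policy can keep its aws:PrincipalOrgID condition as-is.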

BaseballPCHiker
Jan 16, 2006

What would be the best way to add a lifecycle rule to existing buckets in an account that don't already have one? I'm basically looking to add a rule to delete aborted multipart uploads in buckets.

My first thought was a lambda that would fire and add the lifecycle rule to buckets, but I don't necessarily know what would trigger that, or how I'd put in logic to check for existing lifecycle policies. This is where being lovely with python really backfires for me.

Org-wide this isn't really a huge issue for us, but somehow it's caught the attention of my bosses. Never mind the thousands we waste in orphaned EBS volumes....

Docjowles
Apr 9, 2009

I couldn't find it in 30 seconds skimming the docs but https://cloudcustodian.io/ MIGHT be able to do this? It can filter on and react to a whole lot of things. Stuff like this ("I want to find resources matching or not matching a criteria and do something to them") is why the tool exists although with AWS being so vast it of course doesn't cover everything.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

BaseballPCHiker posted:

What would be the best way to add a lifecycle rule to existing buckets in an account that don't already have one? I'm basically looking to add a rule to delete aborted multipart uploads in buckets.

My first thought was a lambda that would fire and add the lifecycle rule to buckets, but I don't necessarily know what would trigger that, or how I'd put in logic to check for existing lifecycle policies. This is where being lovely with python really backfires for me.

Org-wide this isn't really a huge issue for us, but somehow it's caught the attention of my bosses. Never mind the thousands we waste in orphaned EBS volumes....

Lambda to enable across the board once (check for existing rules). Then use an eventbridge rule to trigger a lambda that will enable the lifecycle rule on new buckets that don’t already have one.

Or just run a cron and check for a lifecycle rule before applying your own, but that’s a bit inefficient compared to the eventbridge route.
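
A minimal sketch of the one-time backfill half of that, assuming you only want to touch buckets with no lifecycle config at all; the rule ID and the 7-day window are arbitrary choices:
code:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

ABORT_RULE = {
    "ID": "abort-incomplete-mpu",  # arbitrary rule name
    "Status": "Enabled",
    "Filter": {"Prefix": ""},  # apply to the whole bucket
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
}

def ensure_lifecycle_rule(bucket: str) -> None:
    # Leave buckets alone if they already have any lifecycle configuration.
    try:
        s3.get_bucket_lifecycle_configuration(Bucket=bucket)
        return
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchLifecycleConfiguration":
            raise
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [ABORT_RULE]},
    )

for b in s3.list_buckets()["Buckets"]:
    ensure_lifecycle_rule(b["Name"])
The EventBridge half would call the same ensure_lifecycle_rule on CreateBucket events.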

vanity slug
Jul 20, 2010

BaseballPCHiker posted:

What would be the best way to add a lifecycle rule to existing buckets in an account that don't already have one? I'm basically looking to add a rule to delete aborted multipart uploads in buckets.

My first thought was a lambda that would fire and add the lifecycle rule to buckets, but I don't necessarily know what would trigger that, or how I'd put in logic to check for existing lifecycle policies. This is where being lovely with python really backfires for me.

Org-wide this isn't really a huge issue for us, but somehow it's caught the attention of my bosses. Never mind the thousands we waste in orphaned EBS volumes....

How are you currently deploying your infrastructure?

kalel
Jun 19, 2012

is there a way to send bucket notifications from an s3 in one account to an sqs queue in a different account? I don't know why I shouldn't be able to do it without the use of lambda or eventbridge, but I can't find an example that doesn't use one of those

LochNessMonster
Feb 3, 2005

I need about three fitty


kalel posted:

is there a way to send bucket notifications from an s3 in one account to an sqs queue in a different account? I don't know why I shouldn't be able to do it without the use of lambda or eventbridge, but I can't find an example that doesn't use one of those

Haven’t tried it myself, but SQS is a valid destination for S3 event notifications, so sending it to a different account should be possible.

If you can’t, send it to an SNS topic in your account and subscribe the queue in the other account to it.
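
If the direct route works, the cross-account piece would just be a queue policy on the destination queue letting S3 in the bucket's account send messages. A minimal boto3 sketch; the account IDs, queue URL/ARN, and bucket name are all made up:
code:
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical identifiers for illustration only.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/222222222222/my-queue"
QUEUE_ARN = "arn:aws:sqs:us-east-1:222222222222:my-queue"
BUCKET_ARN = "arn:aws:s3:::source-bucket-name"
BUCKET_ACCOUNT = "111111111111"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": QUEUE_ARN,
        # Scope it down so only that bucket in that account can publish.
        "Condition": {
            "ArnLike": {"aws:SourceArn": BUCKET_ARN},
            "StringEquals": {"aws:SourceAccount": BUCKET_ACCOUNT},
        },
    }],
}

sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"Policy": json.dumps(policy)},
)
Then the bucket's event notification config points at the queue ARN as usual.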

lazerwolf
Dec 22, 2009

Orange and Black
Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

lazerwolf posted:

Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides?

I think it is pretty standard. It's gonna be a container image either way, just a matter of whether it's Amazon's or your own. I think the biggest downside would be that you need a way to build & deploy your images, which probably ranges from trivial to minimal effort. Years ago we were doing a simple lambda, one of the first with no other use on the horizon, so rather than building a custom image just to use the requests library, we inlined a simple HTTP request function using the Python standard library to avoid that bit of overhead.
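
For reference, the stdlib-only version of that is tiny. A minimal sketch (the helper name is made up):
code:
import json
import urllib.request

def http_get_json(url: str, timeout: float = 10.0) -> dict:
    # GET a URL and decode a JSON body using only the standard library,
    # so the function runs on the stock Lambda Python runtime as-is.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))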

ledge
Jun 10, 2003

lazerwolf posted:

Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides?

Is there a reason to not use layers? It's what they are there for.

lazerwolf
Dec 22, 2009

Orange and Black

ledge posted:

Is there a reason to not use layers? It's what they are there for.

We don’t really have the same reusable requirements among different use cases. I’d have to build a layer per stack, which is fine, I guess? I’m not sure which direction is better, hence the question.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
For most Go or Java programs we’ve either used layers if we needed something, or the stock images. I was excited for containers for more elaborate deploys, giant runtimes, or data. Would love to hear other use cases as well.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

lazerwolf posted:

Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides?

I use the AWS SAM CLI to build and deploy exactly this. I end up having to yum install a bunch in the container to get Chrome and ChromeDriver running on Amazon Linux 2. It’d be hard or impossible to do this with zip files and layers.

CarForumPoster fucked around with this message at 02:41 on Mar 24, 2024

Plank Walker
Aug 11, 2005
Been migrating a bunch of AWS resource creation to CDK/CloudFormation instead of manual setup, and noticed we are getting hit with a big bill for S3 access, so I'm trying to add a VPC gateway endpoint. Working off a Stack Overflow answer here: https://stackoverflow.com/a/72040360/2483451. Is it sufficient to just add the gateway endpoint to the VPC configuration, or do I have to add some reference to the VPC to the S3 construct as well? The comment on Stack Overflow says the VPC configuration is all that's necessary, but without much more detail.

Docjowles
Apr 9, 2009

There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There’s a free endpoint for DynamoDB too if you use that service.

It annoys the poo poo out of me that there is this no-downside thing you can easily drop in your VPC to improve costs and the network path, and AWS doesn’t just do it for you. I’m curious what their public justification would be, cause it sure feels like the real motivation is “rip off suckers with totally pointless NAT fees and hope they don’t notice”.

Docjowles fucked around with this message at 16:02 on Apr 5, 2024
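
For anyone doing this by hand rather than through CDK constructs, creating the endpoint is one call. A minimal boto3 sketch; the VPC ID, region, and route table ID are made up:
code:
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",  # match your region
    VpcEndpointType="Gateway",
    # The gateway endpoint works by adding prefix-list routes to these tables.
    RouteTableIds=["rtb-0123456789abcdef0"],   # hypothetical
)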

Plank Walker
Aug 11, 2005

Docjowles posted:

There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There’s a free endpoint for DynamoDB too if you use that service.

It annoys the poo poo out of me that there is this no-downside thing you can easily drop in your VPC to improve costs and the network path, and AWS doesn’t just do it for you. I’m curious what their public justification would be, cause it sure feels like the real motivation is “rip off suckers with totally pointless NAT fees and hope they don’t notice”.

Yeah, we had a similar issue with S3 KMS key caching. It's a one-liner to add, but miss it and oops, now you're getting charged for a KMS key retrieval every time you do anything with S3.

BaseballPCHiker
Jan 16, 2006

Plank Walker posted:

Yeah, we had a similar issue with S3 KMS key caching. It's a one-liner to add, but miss it and oops, now you're getting charged for a KMS key retrieval every time you do anything with S3.

This has burned my org in the past. Like a lot of things with AWS, it feels like it should be the default but isn't.

Looking at you, HTTPS S3 enforcement.

Ajaxify
May 6, 2009

Plank Walker posted:

Been migrating a bunch of AWS resource creation to CDK/CloudFormation instead of manual setup, and noticed we are getting hit with a big bill for S3 access, so I'm trying to add a VPC gateway endpoint. Working off a Stack Overflow answer here: https://stackoverflow.com/a/72040360/2483451. Is it sufficient to just add the gateway endpoint to the VPC configuration, or do I have to add some reference to the VPC to the S3 construct as well? The comment on Stack Overflow says the VPC configuration is all that's necessary, but without much more detail.

That's correct: for Gateway endpoints (as opposed to Interface endpoints), you don't need any extra configuration to get them to work. Your machines will still resolve the S3/DynamoDB hostnames from public DNS and address packets to the public IPs of those services, but the gateway endpoint's route in your VPC route table will now shunt those packets directly to the AWS service instead of letting them traverse the public internet.

For Interface endpoints, you'll need to set up DNS to resolve those service hostnames to VPC-local endpoints. This can be done using private hosted zones in Route 53. Security groups and network ACLs may also need to be updated to allow traffic from the services to the new internal endpoints.

I agree though, these endpoints really should come out of the box with your VPC. When I coded up a VPC CDK construct for my team, I added all the endpoints to the VPC by default, as there's no situation I can think of where you wouldn't want them enabled. It was a pretty cool implementation too; I wrote an Aspect that searched the CDK application for security groups and magically allowed access from any SG in the VPC to the AWS service VPC endpoints.
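
Not the actual Aspect described above, but the default-endpoints half looks roughly like this in CDK v2 Python. A minimal sketch; the factory name and endpoint IDs are made up:
code:
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

def make_default_vpc(scope: Construct, construct_id: str, **kwargs) -> ec2.Vpc:
    # Create a VPC with the free gateway endpoints always attached,
    # since there is rarely a reason to leave them out.
    vpc = ec2.Vpc(scope, construct_id, **kwargs)
    vpc.add_gateway_endpoint(
        "S3Endpoint", service=ec2.GatewayVpcEndpointAwsService.S3
    )
    vpc.add_gateway_endpoint(
        "DynamoEndpoint", service=ec2.GatewayVpcEndpointAwsService.DYNAMODB
    )
    return vpc
The Vpc construct also accepts a gateway_endpoints prop at construction time, which amounts to the same thing.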

Lysandus
Jun 21, 2010
Can anyone recommend a good app for studying/practice tests for certs, starting with Cloud Practitioner? There are like 8 million on the app store and it's impossible to find which ones are actually good.

Rapner
May 7, 2013


I got all the certs with a subscription to acloud.guru. Honestly worth it if you want to churn them out.

I can't speak to apps tho

repsnake
Sep 1, 2002

Post: the cereals you love the most

Lysandus posted:

Can anyone recommend a good app for studying/practice tests for certs, starting with Cloud Practitioner? There are like 8 million on the app store and it's impossible to find which ones are actually good.

I used the O'Reilly app and read this book in a weekend on my tablet, then took the test on Monday; about 20 hours of study time total:
https://learning.oreilly.com/library/view/-/9781801075930/

I also watched the last part of this video series that covers all their stupidly-named services and what they do:
https://learning.oreilly.com/course/aws-cloud-practitioner/0636920864295/

Practitioner is mostly a buzzword quiz. I didn't learn much about AWS through it, but knowing the services and what they do is probably the most helpful part.
Coming from an Azure background, I basically just substituted the equivalent Azure service for each AWS one in the questions and got the answer.


Alright, need some help here with Route 53. I migrated a domain off of Google Domains two days ago, since it was near expiry and I didn't want to get moved to Squarespace.
None of the DNS record information transferred over, which in hindsight should have been expected. However, I updated the DNS records in the hosted zone for the domain with the Office 365 MX record and DKIM records from Microsoft, and I still can't receive email on the domain.
Is it because it takes 48 hours to propagate out, or am I missing something?
code:
Name                             Type   Routing  Alias  TTL     Value
domain.com                       MX     Simple   No     3600    0 domain.mail.protection.outlook.com.
domain.com                       NS     Simple   No     172800  ns-712.awsdns-25.net. ns-1514.awsdns-61.org. ns-111.awsdns-13.com. ns-1782.awsdns-30.co.uk.
domain.com                       SOA    Simple   No     900     ns-712.awsdns-25.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
domain.com                       TXT    Simple   No     3600    "v=spf1 include:spf.protection.outlook.com -all"
selector1._domainkey.domain.com  CNAME  Simple   No     3600    selector1-domain-com._domainkey.domain.onmicrosoft.com.
selector2._domainkey.domain.com  CNAME  Simple   No     3600    selector2-domain-com._domainkey.domain.onmicrosoft.com.
autodiscover.domain.com          CNAME  Simple   No     3600    autodiscover.outlook.com.

vanity slug
Jul 20, 2010

Do the nameservers point to AWS?

Thanks Ants
May 21, 2004

#essereFerrari


Did you update the name servers with the domain registrar?

Edit: That's what the above says, I just read it too quickly.

repsnake
Sep 1, 2002

Post: the cereals you love the most
Isn't Amazon the registrar now that I've transferred the domain to AWS? That's what the NS record above is reflecting, correct?
I went through the AWS documentation and it said I should have lowered the NS TTL to 3600 or less to troubleshoot issues like these. Lesson learned. I might have to wait longer to see, but I did lower it, so when it does refresh I'll see if it corrected itself.

Thanks Ants
May 21, 2004

#essereFerrari


There’s an NS record in the zone but also name servers set in the domain registration itself. If you do “whois domain.com” in a terminal it should show what the name servers are currently.

Hed
Mar 31, 2004

Fun Shoe
I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS, and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60.

I shouldn't have let it get to this point, but is there a graceful way to increase IOPS with no downtime? My options appear to be:

  • convert the disk to gp3 (no way this can be done online right?)
  • make the disk bigger to scale the "IOPS = Volume size [GB] * 3", but I don't need it larger

Is there a good way to spin up another instance into the cluster with enough IOPS and gracefully transition to it? I know I could pgdump/restore but would rather not have downtime if possible.

Docjowles
Apr 9, 2009

Hed posted:

I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS, and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60.

I shouldn't have let it get to this point, but is there a graceful way to increase IOPS with no downtime? My options appear to be:

  • convert the disk to gp3 (no way this can be done online right?)
  • make the disk bigger to scale the "IOPS = Volume size [GB] * 3", but I don't need it larger

Is there a good way to spin up another instance into the cluster with enough IOPS and gracefully transition to it? I know I could pgdump/restore but would rather not have downtime if possible.

I'm reasonably sure you can convert from gp2 to gp3 online. You certainly can with EBS volumes so I don't know why RDS would be different. There might be some performance degradation during the move but 20GB should be very fast.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

Hed posted:

I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS, and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60.

I shouldn't have let it get to this point, but is there a graceful way to increase IOPS with no downtime? My options appear to be:

  • convert the disk to gp3 (no way this can be done online right?)
  • make the disk bigger to scale the "IOPS = Volume size [GB] * 3", but I don't need it larger

Is there a good way to spin up another instance into the cluster with enough IOPS and gracefully transition to it? I know I could pgdump/restore but would rather not have downtime if possible.

gp2 to gp3 should be totally safe and without downtime. Also you should have a backup, because that's a good practice. Documentation for aws cli here
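
If you end up scripting it, the conversion is a single ModifyDBInstance call. A minimal boto3 sketch with a made-up instance identifier:
code:
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",  # hypothetical
    StorageType="gp3",
    ApplyImmediately=True,  # otherwise it waits for the maintenance window
)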

kalel
Jun 19, 2012

For the past few weeks, my prod postgresql RDS instance's CPUUtilization metric rises steadily throughout the day to a max of ~8% and then drops instantly to ~2% at 00:00 UTC, every day, like clockwork. Any reason why that would be the case? Google is giving me nothing.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

kalel posted:

For the past few weeks, my prod postgresql RDS instance's CPUUtilization metric rises steadily throughout the day to a max of ~8% and then drops instantly to ~2% at 00:00 UTC, every day, like clockwork. Any reason why that would be the case? Google is giving me nothing.

Autovacuuming?

Hed
Mar 31, 2004

Fun Shoe

Happiness Commando posted:

gp2 to gp3 should be totally safe and without downtime. Also you should have a backup, because that's a good practice. Documentation for aws cli here

Docjowles posted:

I'm reasonably sure you can convert from gp2 to gp3 online. You certainly can with EBS volumes so I don't know why RDS would be different. There might be some performance degradation during the move but 20GB should be very fast.

Thank you both, I backed up and found where I could change it and it all happened online. Now getting 10x the IOPS and the RDS dashboard wait times look MUCH better.


kalel
Jun 19, 2012

Blinkz0rz posted:

Autovacuuming?

that's what I first thought, but I believe autovacuuming happens non-periodically, whereas CPU plummets always at exactly midnight

I opened an AWS support case and supposedly it's due to a daily log switchover: system monitoring processes have less data to read from the log, so utilization drops, then creeps back up as more data is written to the new log file. It's weird that CPU would scale with the size of the log file, but it doesn't seem to affect performance; it was an oddity more than anything :shrug:
