|
I was completely ready to say mean things about it but it looks fine actually. It's not that different from many take-home technical interviews I've been presented with. Some of the tool choices are bad, is the worst thing I can muster.
|
# ? Mar 8, 2024 02:26 |
|
BaseballPCHiker posted:That AI assist thing is so loving dumb and annoying. I've asked it three different things and the answer has been wrong every time. Just like CodeWhisperer, which creates broken code even when calling AWS APIs. I mean it adds incorrect arguments to functions that look right but are wrong. AI is such a loving disaster zone and the sooner it and the companies surrounding it collapse the better. And they will, because it is already hitting the limits of what it can do, and it isn't very good at doing it.
|
# ? Mar 8, 2024 04:30 |
|
I don't know why I hate AI so much but I do. Maybe it's that it's been adopted by so many as the perfect cure-all for everything from procedure manuals to customer emails to doing our thinking for us. Maybe people who have no actual idea of its capabilities use it like a general purpose “we’ll throw AI at it to fix it!” rallying cry. Maybe because people use “AI” like they used “paradigm” in the 90s, “devops” in the 00s or “agile” in the teens. But I hate it and it sucks.
|
# ? Mar 8, 2024 07:44 |
|
Blurb3947 posted:Curious if Forrest Brazeal’s cloud resume challenge holds any weight in the industry? I’m almost done with it and have learned quite a bit with various services, but I’m skeptical whether it actually helps people during their job hunts. If I’m interviewing a junior and they can explain what they built and why they used specific services I’d treat it the same as hands on professional experience. I don’t think it’s really “a thing” like a certification or something.
|
# ? Mar 8, 2024 07:49 |
|
Agrikk posted:I don't know why I hate AI so much but I do. It attracts dot-com style investment levels, so people go crazy. It’s also an elusive tech term that will be reduced to the lowest common denominator. Much like the cloud is someone else’s computer, AI may be called “a decision made by a device.” There has been more effort in trying to use ML for things, but then again the market is heavy with wrappers for ChatGPT. In my industry people are demanding AI anything; that means buzzwords to stand out from the crowd, even if the implementation is weak.
|
# ? Mar 8, 2024 15:52 |
|
Agrikk posted:I don't know why I hate AI so much but I do. It's because AI is championed by the same insufferable assholes who constantly sell the sizzle and refuse to acknowledge how lovely the steak is.
|
# ? Mar 8, 2024 16:12 |
|
LochNessMonster posted:If I’m interviewing a junior and they can explain what they built and why they used specific services I’d treat it the same as hands on professional experience. Part of the challenge is to actually go out and get a cert. I did the CCP as part of my degree and the SAA on my own. I'm just trying to get some experience that I can show off and talk about for interviews or at least some sort of leg up for job searches.
|
# ? Mar 8, 2024 16:22 |
|
BaseballPCHiker posted:Once again I am struggling with cross account permissions. So this is from 2+ months ago but I finally figured it out!!!!! When you go cross account from a service like SNS, EventBridge, CloudWatch, etc they don't pass the orgId attribute. So even though my SNS policy on the other side said accept aws:* with my orgId as a condition I couldn't get poo poo to work because those services weren't passing that attribute! But if I have them go to lambda as an intermediary with an execution policy then the orgId attribute will get passed. Ugh so frustrating but a hard earned lesson.
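For reference, a minimal sketch of the kind of topic policy being described here, with a hypothetical org ID, topic ARN and statement ID; the actual condition key AWS uses for this is `aws:PrincipalOrgID`, and as the post says, some service-to-service calls never carry it, so the condition silently denies them:

```python
import json

ORG_ID = "o-exampleorgid"  # placeholder organization ID

# Hypothetical SNS topic policy: allow publishing from any principal in the
# org, gated on aws:PrincipalOrgID. Service principals (SNS, EventBridge,
# CloudWatch) that don't pass the org ID will fail this condition.
topic_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgPublish",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "SNS:Publish",
            "Resource": "arn:aws:sns:us-east-1:111111111111:example-topic",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

print(json.dumps(topic_policy, indent=2))
```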
|
# ? Mar 8, 2024 17:27 |
|
What would be the best way to add a lifecycle rule to existing buckets in an account that don't already have one? I'm basically looking to add a rule to delete aborted multipart uploads in buckets. My first thought was a lambda that would fire and add in the lifecycle rule to buckets, but I don't necessarily know what would trigger that and how I'd put in logic to check for existing lifecycle policies. This is where being lovely with python really backfires for me. Org-wide this isn't really a huge issue for us but somehow it's caught the attention of my bosses. Nevermind the thousands we waste in orphaned EBS volumes....
|
# ? Mar 14, 2024 16:18 |
|
I couldn't find it in 30 seconds skimming the docs but https://cloudcustodian.io/ MIGHT be able to do this? It can filter on and react to a whole lot of things. Stuff like this ("I want to find resources matching or not matching a criteria and do something to them") is why the tool exists although with AWS being so vast it of course doesn't cover everything.
|
# ? Mar 14, 2024 17:15 |
|
BaseballPCHiker posted:What would be the best way to add a lifecycle rule to existing buckets in an account that don't already have one? I'm basically looking to add a rule to delete aborted multipart uploads in buckets. Lambda to enable it across the board once (checking for existing rules). Then use an EventBridge rule to trigger a lambda that will enable the lifecycle rule on new buckets that don’t already have one. Or just run a cron and check for a lifecycle rule before applying your own, but that’s a bit inefficient compared to the EventBridge route.
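A rough sketch of that "enable across the board once, skipping buckets with existing rules" approach in boto3 (rule ID and day count are made up; the actual AWS calls only run under the `__main__` guard with credentials):

```python
def abort_mpu_rule(days=7):
    """Lifecycle rule deleting incomplete multipart uploads after `days` days."""
    return {
        "ID": "abort-incomplete-mpu",  # hypothetical rule ID
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply bucket-wide
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": days},
    }


def add_rule_if_missing(s3, bucket):
    """Apply the rule only to buckets with no existing lifecycle configuration."""
    from botocore.exceptions import ClientError

    try:
        s3.get_bucket_lifecycle_configuration(Bucket=bucket)
        return False  # bucket already has a lifecycle config; leave it alone
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchLifecycleConfiguration":
            raise
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket,
            LifecycleConfiguration={"Rules": [abort_mpu_rule()]},
        )
        return True


if __name__ == "__main__":
    import boto3  # requires AWS credentials; not executed as part of this sketch

    s3 = boto3.client("s3")
    for b in s3.list_buckets()["Buckets"]:
        print(b["Name"], add_rule_if_missing(s3, b["Name"]))
```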
|
# ? Mar 14, 2024 17:16 |
|
BaseballPCHiker posted:What would be the best way to add a lifecycle rule to existing buckets in an account that don't already have one? I'm basically looking to add a rule to delete aborted multipart uploads in buckets. How are you currently deploying your infrastructure?
|
# ? Mar 14, 2024 22:25 |
|
is there a way to send bucket notifications from an s3 in one account to an sqs queue in a different account? I don't know why I shouldn't be able to do it without the use of lambda or eventbridge, but I can't find an example that doesn't use one of those
|
# ? Mar 20, 2024 21:04 |
|
kalel posted:is there a way to send bucket notifications from an s3 in one account to an sqs queue in a different account? I don't know why I shouldn't be able to do it without the use of lambda or eventbridge, but I can't find an example that doesn't use one of those Haven’t tried it myself but SQS is a valid destination for S3 event notifications, so sending to a different account should be possible. If you can’t, send it to an SNS topic in your account and subscribe the queue in the other account to it.
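An untested sketch of what the queue side would need for the direct cross-account route: the queue policy in the receiving account has to let the S3 service principal send, scoped to the source bucket via `aws:SourceArn` (all ARNs and account IDs below are hypothetical):

```python
# Hypothetical ARNs: bucket in account A (111111111111), queue in account B (222222222222).
BUCKET_ARN = "arn:aws:s3:::example-bucket"
QUEUE_ARN = "arn:aws:sqs:us-east-1:222222222222:example-queue"
BUCKET_OWNER = "111111111111"

# Queue policy (account B): allow S3 to SendMessage, but only from this bucket/account.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3Notifications",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            "Condition": {
                "ArnEquals": {"aws:SourceArn": BUCKET_ARN},
                "StringEquals": {"aws:SourceAccount": BUCKET_OWNER},
            },
        }
    ],
}

# Bucket-side notification config (account A) then points straight at the queue ARN.
notification_config = {
    "QueueConfigurations": [
        {"QueueArn": QUEUE_ARN, "Events": ["s3:ObjectCreated:*"]}
    ]
}
```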
|
# ? Mar 20, 2024 21:46 |
|
Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides?
|
# ? Mar 22, 2024 18:11 |
lazerwolf posted:Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides? I think it is pretty standard. It's gonna be a container image either way, just a matter of whether it's Amazon's or your own. I think the biggest downside would be that you need a way to build & deploy your images, which probably ranges from trivial to minimal effort. Years ago we were doing a simple lambda, one of the first ones with no other use on the horizon, so rather than building a custom image just to use the requests library we inlined a simple HTTP request function using the Python standard library and avoided that bit of overhead.
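That inlined-stdlib approach looks roughly like this (a sketch, not the original code; helper names are made up):

```python
import json
import urllib.request


def build_request(url, payload=None, headers=None):
    """Build a GET (no payload) or JSON POST request without the requests library."""
    data = None if payload is None else json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json", **(headers or {})},
        method="GET" if data is None else "POST",
    )


def http_json(url, payload=None, timeout=10):
    """Send the request and decode a JSON response body."""
    with urllib.request.urlopen(build_request(url, payload), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```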
|
|
# ? Mar 22, 2024 20:25 |
|
lazerwolf posted:Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides? Is there a reason to not use layers? It's what they are there for.
|
# ? Mar 22, 2024 20:59 |
|
ledge posted:Is there a reason to not use layers? It's what they are there for. We don’t really have the same reusable requirements among different use cases. I’d have to build a layer per stack which is fine I guess? I’m not sure which direction is better hence the question.
|
# ? Mar 23, 2024 03:03 |
|
For most Go or Java programs we’ve either used layers, if we needed something, or the stock images. I was excited for containers for more elaborate deploys, giant runtimes or data. Would love to hear other use cases as well.
|
# ? Mar 23, 2024 14:39 |
|
lazerwolf posted:Is it a good practice to use container images for lambda functions? Seems to be the easiest way to handle dependencies. Are there any obvious downsides? I use the AWS SAM CLI to build and deploy exactly this. I end up having to yum install a bunch in the container to get chrome and chrome driver running on Amazon Linux 2. It’d be harder or impossible to do this with zip files and layers. CarForumPoster fucked around with this message at 02:41 on Mar 24, 2024 |
# ? Mar 24, 2024 02:30 |
|
Been migrating a bunch of AWS resource creation to CDK/Cloudformation vs manual, noticed we are getting hit with a big bill for S3 access so trying to add a VPC gateway. Working off a stack overflow answer here: https://stackoverflow.com/a/72040360/2483451, is it sufficient to just add the gateway endpoint to the VPC configuration or do I have to add some reference to the VPC to the S3 construct as well? The comment on stack overflow says VPC configuration is all that's necessary, but not much more detail.
|
# ? Apr 5, 2024 14:06 |
|
There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There’s a free endpoint for dynamodb too if you use that service. It annoys the poo poo out of me that there is this no downside thing you can easily drop in your VPC to improve costs and the network path and AWS doesn’t just do it for you. I’m curious what their public justification would be, cause it sure feels like the real motivation is “rip off suckers with totally pointless NAT fees and hope they don’t notice”. Docjowles fucked around with this message at 16:02 on Apr 5, 2024 |
# ? Apr 5, 2024 15:59 |
|
Docjowles posted:There is nothing to do on the S3 side. Just make the gateway endpoint and put it in your VPC route tables. There’s a free endpoint for dynamodb too if you use that service. Yeah we had a similar issue with S3 KMS key caching, it's a one liner to add but miss it and oops now you're getting charged for a KMS key retrieval every time you do anything with S3
|
# ? Apr 5, 2024 16:11 |
|
Plank Walker posted:Yeah we had a similar issue with S3 KMS key caching, it's a one liner to add but miss it and oops now you're getting charged for a KMS key retrieval every time you do anything with S3 This has burned my org in the past. Like a lot of things with AWS it feels like it should be the default but isn't. Looking at you, HTTPS S3 enforcement.
|
# ? Apr 5, 2024 16:42 |
|
Plank Walker posted:Been migrating a bunch of AWS resource creation to CDK/Cloudformation vs manual, noticed we are getting hit with a big bill for S3 access so trying to add a VPC gateway. Working off a stack overflow answer here: https://stackoverflow.com/a/72040360/2483451, is it sufficient to just add the gateway endpoint to the VPC configuration or do I have to add some reference to the VPC to the S3 construct as well? The comment on stack overflow says VPC configuration is all that's necessary, but not much more detail. That's correct, for Gateway endpoints (as opposed to Interface endpoints), you don't need any extra configuration to get them to work. Your machines will still resolve the S3/DDB IPs from public DNS and send packets to the public IPs of those services, but the gateway endpoint route in your VPC route tables will now shunt those packets directly to the AWS service instead of sending them out through your internet gateway or NAT. For Interface endpoints, you'll need to set up DNS to resolve those service hostnames to VPC-local endpoints. This can be done using private hosted zones in Route 53. Security groups and network ACLs may also need to be updated to allow traffic from the services to the new internal endpoints. I agree though, these endpoints really should come out of the box with your VPC. When I coded up a VPC CDK construct for my team, I added all endpoints to the VPC by default as there's no situation I can think of where you wouldn't want these enabled. It was a pretty cool implementation too; I wrote an Aspect that searched the CDK application for Security Groups and magically allowed access from any SG in the VPC to the AWS service VPC endpoints.
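A minimal boto3 sketch of creating the two free gateway endpoints (region, VPC ID and route-table IDs are placeholders; the boto3 import is deferred so the helper stays usable without it installed):

```python
def endpoint_service_name(service, region="us-east-1"):
    """VPC endpoint service name string, e.g. com.amazonaws.us-east-1.s3."""
    return f"com.amazonaws.{region}.{service}"


def create_gateway_endpoints(vpc_id, route_table_ids, region="us-east-1"):
    """Create the free Gateway endpoints for S3 and DynamoDB (needs credentials)."""
    import boto3  # deferred so the sketch stays importable without boto3 installed

    ec2 = boto3.client("ec2", region_name=region)
    for svc in ("s3", "dynamodb"):
        ec2.create_vpc_endpoint(
            VpcId=vpc_id,
            ServiceName=endpoint_service_name(svc, region),
            VpcEndpointType="Gateway",
            RouteTableIds=route_table_ids,  # routes get added to these tables
        )
```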
|
# ? Apr 5, 2024 16:58 |
|
Anyone recommend a good app for studying/practice tests for certs, starting with Cloud Practitioner? There are like 8 million on the app store and it's impossible to find which is actually good.
|
# ? Apr 12, 2024 03:23 |
|
I got all the certs with a subscription to acloud.guru. Honestly worth it if you want to churn them out. I can't speak to apps tho
|
# ? Apr 12, 2024 12:30 |
|
Lysandus posted:Anyone recommend a good app for studying/practice tests for certs, starting with Cloud Practitioner? There are like 8 million on the app store and it's impossible to find which is actually good. I used the O'Reilly app and read this book in a weekend on my tablet and took the test on Monday, about 20 hours of study time total: https://learning.oreilly.com/library/view/-/9781801075930/ I also watched the last part of this video series that covers all their stupidly-named services and what they do: https://learning.oreilly.com/course/aws-cloud-practitioner/0636920864295/ Practitioner is mostly a buzzwords quiz. I didn't learn much about AWS through it, but knowing the services and what they do is probably the most helpful. Coming from an Azure background, I basically just substituted Azure for AWS in the question and got the answer.

Alright, need some help here with Route 53. I migrated a domain off of Google Domains two days ago since it was near expiry and I didn't want to get moved to Squarespace. None of the DNS record information transferred over, which in hindsight should have been expected; however, I updated the hosted zone for the domain with the Office 365 record and DKIM record from Microsoft and I still can't receive email on the domain. Is it because it takes 48 hours to propagate out or am I missing something? code:
|
# ? Apr 12, 2024 15:06 |
|
Do the nameservers point to AWS?
|
# ? Apr 12, 2024 17:29 |
|
Did you update the name servers with the domain registrar? Edit: That's what the above says, I just read it too quickly.
|
# ? Apr 12, 2024 17:47 |
|
Isn't Amazon the registrar now that I've transferred the domain to AWS? That's what the NS record above is now reflecting, correct? I went through the AWS documentation and it said that I should have lowered the NS ttl to 3600 or less to troubleshoot issues like these. Lesson learned, I might have to wait longer to see but I did lower it so when it does refresh I'll see if it corrected itself then.
|
# ? Apr 12, 2024 18:03 |
|
There’s an NS record in the zone but also name servers set in the domain registration itself. If you do “whois domain.com” in a terminal it should show what the name servers are currently.
|
# ? Apr 12, 2024 18:23 |
|
I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS, and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60. I shouldn't have let it get to this point, but is there a graceful way to increase IOPS with no downtime? My options appear to be:
Is there a good way to spin up another instance into the cluster with enough IOPS and gracefully transition to it? I know I could pgdump/restore but would rather not have downtime if possible.
|
# ? Apr 15, 2024 20:36 |
|
Hed posted:I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60. I'm reasonably sure you can convert from gp2 to gp3 online. You certainly can with EBS volumes so I don't know why RDS would be different. There might be some performance degradation during the move but 20GB should be very fast.
|
# ? Apr 15, 2024 20:46 |
|
Hed posted:I have a pretty small Postgres RDS instance on a db.t4g.medium that is capping out on IOPS and as a result there's a whole lot of WAL wait. We have a 20GB disk on gp2 so expected IOPS is 60. gp2 to gp3 should be totally safe and without downtime. Also you should have a backup, because that's a good practice. Documentation for aws cli here
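For reference, the boto3 version of that conversion looks roughly like this (instance identifier is hypothetical; `modify_db_instance` accepts `StorageType`, and the actual call only runs under the `__main__` guard with credentials):

```python
def gp3_modify_params(instance_id, apply_immediately=True):
    """Parameters for an in-place gp2 -> gp3 storage conversion on an RDS instance."""
    return {
        "DBInstanceIdentifier": instance_id,
        "StorageType": "gp3",
        "ApplyImmediately": apply_immediately,
    }


if __name__ == "__main__":
    import boto3  # needs credentials; per the posts above the conversion happens online

    rds = boto3.client("rds")
    rds.modify_db_instance(**gp3_modify_params("example-db"))  # hypothetical identifier
```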
|
# ? Apr 15, 2024 21:16 |
|
For the past few weeks, my prod postgresql RDS instance's CPUUtilization metric rises steadily throughout the day to a max of ~8% and then drops instantly to ~2% at 00:00 UTC, every day, like clockwork. Any reason why that would be the case? Google is giving me nothing.
|
# ? Apr 16, 2024 19:38 |
|
kalel posted:For the past few weeks, my prod postgresql RDS instance's CPUUtilization metric rises steadily throughout the day to a max of ~8% and then drops instantly to ~2% at 00:00 UTC, every day, like clockwork. Any reason why that would be the case? Google is giving me nothing. Autovacuuming?
|
# ? Apr 17, 2024 12:11 |
|
Happiness Commando posted:gp2 to gp3 should be totally safe and without downtime. Also you should have a backup, because that's a good practice. Documentation for aws cli here Docjowles posted:I'm reasonably sure you can convert from gp2 to gp3 online. You certainly can with EBS volumes so I don't know why RDS would be different. There might be some performance degradation during the move but 20GB should be very fast. Thank you both, I backed up and found where I could change it and it all happened online. Now getting 10x the IOPS and the RDS dashboard wait times look MUCH better.
|
# ? Apr 17, 2024 16:26 |
|
Blinkz0rz posted:Autovacuuming? That's what I first thought, but I believe autovacuuming happens non-periodically, whereas CPU always plummets at midnight. I opened an AWS support case and supposedly it's due to a daily log switchover: system monitoring processes have less data to read from the log, utilization drops, then creeps back up as more data is written to the new log file. It's weird that CPU would go up based on the size of the log file, but it doesn't seem to affect performance; it was an oddity more than anything.
|
# ? Apr 17, 2024 17:32 |