|
It's probably pointless to try to restrict a CD role much, unless I'm mistaken. Don't forget to threat model. Your attacker gets access to your CD role. What are their goals? What's the worst-case? Are they thwarted because the role diligently doesn't allow creating an AppStream fleet? Or do they have the keys to the whole account anyway, because they can just iam:CreateRole whatever they please and use that? I'm not sure there's any way to make a CD role less valuable (wait, no: SCPs), so you have to make it really well protected instead.
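(For the record, the SCP angle is the usual answer to exactly the iam:CreateRole escalation described above: a deny-unless-boundary statement. A minimal sketch, where the account ID and boundary policy name are placeholders:)

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyRoleCreationWithoutBoundary",
    "Effect": "Deny",
    "Action": ["iam:CreateRole", "iam:PutRolePermissionsBoundary"],
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {
        "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/cd-boundary"
      }
    }
  }]
}
```

With that attached, a stolen CD role can still create roles, but only ones capped by the boundary policy, which blunts the "keys to the whole account" problem.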
|
# ? Sep 11, 2020 06:09 |
|
|
# ? May 21, 2024 19:11 |
|
crazypenguin posted:It's probably pointless to try to restrict a CD role much, unless I'm mistaken. Don't forget to threat model. This is a pretty good point. Especially when I consider that the CD role is pretty solidly protected such that the only people that could gain access to it are people with the administrator role in the first place. We're about to start building out a bunch more stuff so it's something I might pursue on Monday. Good shout, thanks!
|
# ? Sep 11, 2020 10:24 |
|
Maybe this is an obvious thing or even an anti-pattern (I’ve only been heavily using AWS CF on a project I inherited in the last 12 months), but I’ve found close proximity has worked well for keeping deployment roles in sync and tightly coupled to the actual deployments. In our system the template defining the roles lives in the same repo as the actual deployment template, and they are close by in the file system. Thinking about it now, maybe that’s bad, since anyone with repo access could escalate the role's permissions and do whatever they want... not sure how you’d achieve this otherwise, though. I did make it my job to start tightening things down more. It hasn’t been easy, but I’ve just been doing trial and error. The most annoying part is avoiding cyclic dependencies on resources that haven’t been created yet.
|
# ? Sep 11, 2020 10:45 |
|
Twerk from Home posted:How do you guys successfully handle IAM roles for whatever process is doing your deployments? This is always some form of whack-a-mole but experience and muscle memory can help a lot. The useful google incantation here is usually "actions context conditions <service name>", which will pull up the IAM documentation that fully enumerates all of the things available to build a policy for a service. You can use this plus your CDK output to vet any least privilege policy, and then integrate a permissions audit process at whatever cadence your security team works at to make sure that the permissions are actually being used. It's relatively simple to take, for example, 90 days of cloudtrail event data and parse it for "all actions granted to principal x that do not appear as the action in any of these cloudtrail events". I've done this with SumoLogic, but you can probably work something out with Athena or CloudWatch Log Insights or whatever they're calling it these days. The tricky part here is when you have to start evaluating "did this principal exercise this very specific s3 bucket + prefix permission?", which gets complicated because s3 permissions can have wildcards. I've used python's glob library for this in the past which was easy enough, but, in general it's good to avoid complicated s3 permissions that need to be audited in this way.
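The cloudtrail-diff idea above can be sketched without any SIEM at all. This is a minimal illustration, not the SumoLogic query described in the post: it assumes raw CloudTrail events with `eventSource`/`eventName` fields, the sample events and grants are invented, and it uses `fnmatch` to honor wildcarded grants (the same glob trick the post mentions for s3 key patterns).

```python
from fnmatch import fnmatchcase

def unused_actions(granted, events):
    """Return granted IAM actions that no observed CloudTrail event
    exercised. Grants may be wildcarded, e.g. "s3:Get*"."""
    # Observed ("s3", "GetObject")-style pairs from the trail
    used = {(e["eventSource"].split(".")[0], e["eventName"]) for e in events}
    unused = []
    for action in granted:
        service, name = action.split(":", 1)
        # A wildcard grant counts as "used" if any observed event
        # from that service matches the pattern.
        exercised = any(svc == service and fnmatchcase(evt, name)
                        for svc, evt in used)
        if not exercised:
            unused.append(action)
    return unused

events = [
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
    {"eventSource": "ec2.amazonaws.com", "eventName": "DescribeInstances"},
]
granted = ["s3:Get*", "s3:PutObject", "ec2:DescribeInstances"]
print(unused_actions(granted, events))  # ['s3:PutObject']
```

In practice you'd feed this from 90 days of exported CloudTrail rather than a hardcoded list, but the set-difference logic is the whole trick.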
|
# ? Sep 11, 2020 17:04 |
|
Usually at that point you've implemented some kind of SIEM solution that simply ingests the logs and does the needful. On the subject though, and I didn't see this mentioned, but it's best to have a central security logging AWS account which all your CloudTrail, Config, VPC Flow Logs, etc. from your AWS accounts are pushed into. This simplifies eventual ingestion of the logs/whatever as you only have to pull stuff from a single account. Furthermore, it limits the risk of tampering (someone going in and fabricating logs/events). CloudTrail is still free for the first trail in each region regardless of where you point it.
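The central-account setup hinges on the log bucket's policy letting CloudTrail from the other accounts deliver into it. A sketch of the usual shape, with the bucket name as a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::central-trail-logs"
    },
    {
      "Sid": "CloudTrailWrite",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::central-trail-logs/AWSLogs/*",
      "Condition": {
        "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
      }
    }
  ]
}
```

The `bucket-owner-full-control` condition is what keeps the security account owning the delivered objects, which is also what makes the tampering story work.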
|
# ? Sep 11, 2020 17:34 |
|
We run a distributed file store for our application and we've finally gotten the go-ahead to pursue replacing the existing GlusterFS setup with EFS. Fairly sure the way we're gonna handle the migration is to just rsync gluster to EFS, unmount gluster and mount EFS. Has anyone run into any issues with migrating to EFS before? We plan on testing the poo poo out of it in terms of performance and cutover, but wanted to see if there were some pitfalls we need to avoid early.
|
# ? Sep 12, 2020 03:41 |
|
whats for dinner posted:We run a distributed file store for our application and we've finally gotten the go-ahead to pursue replacing the existing GlusterFS setup with EFS. Fairly sure the way we're gonna handle the migration is to just rsync gluster to EFS, unmount gluster and mount EFS. Has anyone run into any issues with migrating to EFS before? We plan on testing the poo poo out of it in terms of performance and cutover, but wanted to see if there were some pitfalls we need to avoid early. It pretty much works like regular NFS if you follow the instructions. Just when you load up data you may hit some IO quotas and have to wait for those to build back up. You might want to look at FSx too if you are evaluating things. I haven't used it, but it might be more performant.
|
# ? Sep 14, 2020 16:03 |
|
I got a silly question in the wrong thread. Anyone print from their AWS deployed code to a local printer? E.g. to run reports
|
# ? Sep 14, 2020 16:30 |
|
I posted this in the printers thread, not sure if you saw it https://www.printnode.com/en
|
# ? Sep 14, 2020 20:46 |
|
Thanks Ants posted:I posted this in the printers thread, not sure if you saw it I missed it somehow (I posted this same ? a while back) and this looks to be exactly what I want. Thanks so much!
|
# ? Sep 14, 2020 21:11 |
|
Probably a silly question here. I have an instance with an ENI attached to it. So it has 2 private IPs and 2 private DNS names. I'm trying to use AWS CLI to return the PrivateIpAddress and PrivateDnsName of the ENI rather than those of the primary interface. When I execute this, it returns both private IPs: code:
code:
|
# ? Sep 15, 2020 20:07 |
|
In general this is where I switch to python/boto3, but the thing you're looking for here is the --query param, where you can do a filter plus starts_with. Googling "awscli query starts with" will get you some examples. boto3 is really, really good though, and this type of work is much more intuitive in it. e: sorry, misread; you probably want "contains" instead of "starts_with". The thing in use here is called JMESPath if you want to read the full specification.
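For the ENI question above, the shape of the thing is roughly this. The field names match the real `describe-instances` response, but the sample data and the "secondary" description filter are made up for illustration; the pure-Python version of the same `contains` filter follows the commented CLI form:

```python
# Hypothetical CLI equivalent, filtering ENIs whose Description
# contains "secondary":
#   aws ec2 describe-instances --query "Reservations[].Instances[]
#     .NetworkInterfaces[?contains(Description, 'secondary')]
#     .[PrivateIpAddress, PrivateDnsName]"

response = {  # trimmed sample shaped like a describe-instances response
    "Reservations": [{"Instances": [{"NetworkInterfaces": [
        {"Description": "Primary network interface",
         "PrivateIpAddress": "10.0.0.10",
         "PrivateDnsName": "ip-10-0-0-10.ec2.internal"},
        {"Description": "secondary eni",
         "PrivateIpAddress": "10.0.0.20",
         "PrivateDnsName": "ip-10-0-0-20.ec2.internal"},
    ]}]}]
}

# The same filter in plain Python: walk the nesting, keep ENIs whose
# Description contains the marker string.
matches = [
    (eni["PrivateIpAddress"], eni["PrivateDnsName"])
    for r in response["Reservations"]
    for inst in r["Instances"]
    for eni in inst["NetworkInterfaces"]
    if "secondary" in eni["Description"]
]
print(matches)  # [('10.0.0.20', 'ip-10-0-0-20.ec2.internal')]
```

If your ENI descriptions aren't distinctive, filtering on `Attachment.DeviceIndex` instead (0 is the built-in interface) is another option.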
|
# ? Sep 15, 2020 20:48 |
|
deedee megadoodoo posted:I've got a quick question about CloudWatch. We currently have 11 accounts and we're using the CloudWatch agent on our ec2 instances to ship system logs. The problem is we want a central location where we can view all of our logs. I was looking at doing this: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html and setting up a central logging account but I don't know how much of a pain that is to work with and maintain. Any thoughts? Just want to quote myself here and write some thoughts about this AWS solution for cross-account CloudWatch. My initial impression: it sucks. I mean, it does what it says, and provides a way to access CloudWatch in other accounts, but it really works like poo poo. First, the configuration for which accounts you can view is client side. I had to write up a wiki page to explain to our developers how to set up the list and I just know that some of them are going to gently caress it up and then I am going to have to waste my time troubleshooting something that they shouldn't have to configure in the first place. Second, only the CloudWatch alarms and dashboard are shared across accounts. To view the actual log groups you need to assume a role using the poorly marked "switch to profile" link which will allow a user to assume the preconfigured role in the remote account. I guess it's convenient to have an easy-to-use link that's already filled in and ready to go, but the user experience is loving atrocious. Did you click on Log Groups while having a different account selected? Here's a tiny box containing an obtuse error message on top of a big blank screen. The whole thing is such a terrible clusterfuck. Whoever designed that UX should be fired immediately. Into the sun I mean. It's such a terrible user experience that I considered scrapping the entire thing but then I remembered that I hate myself so I continued on with it.
So to anyone else looking to share cloudwatch logs, I'll spare you some trouble: here be dragons.
|
# ? Sep 18, 2020 01:51 |
|
Like anything else AWS, going cross account is a real PITA. In the early years, no one anticipated customers needing to open more than one account and now, decades later, it shows. My constant refrain is to pull all the things into a central place and point your thing at that place. I recommend pulling logs from all your accounts into a single bucket and pointing Athena or elasticsearch at it, or push all logs into a database and add triggers to it, pull your trusted advisor checks into a database instead of checking them for each account, etc. And yeah, cloudwatch can be really rough.
|
# ? Sep 18, 2020 06:41 |
|
https://github.com/boto/boto3/issues/2596 loving horse poo poo. gently caress this dumbass company. All of our deploys are broken.
|
# ? Sep 18, 2020 15:13 |
|
deedee megadoodoo posted:https://github.com/boto/boto3/issues/2596 I mean it's not going to help your frustrations now, but I guess you don't pin versions of things in production? That's pretty much step 0, you want to know what version you're running in production. And otherwise yah something leaked out from how their requirements.txt differs from yours, but from the look of it, it was detected within 2 hours of release and a workaround with-in 3. That's pretty good turnaround time for when a bug escapes to the wild.
|
# ? Sep 18, 2020 15:34 |
|
Hughlander posted:I mean it's not going to help your frustrations now, but I guess you don't pin versions of things in production? That's pretty much step 0, you want to know what version you're running in production. And otherwise yah something leaked out from how their requirements.txt differs from yours, but from the look of it, it was detected within 2 hours of release and a workaround with-in 3. That's pretty good turnaround time for when a bug escapes to the wild. I am aware of both the fix and the crappiness of our infra code. My frustration lies in the fact that there is no way this was even tested before it was pushed. You can't even run "aws --version". It's not like this is some hidden error. It was just completely non-functional code.
|
# ? Sep 18, 2020 15:39 |
|
deedee megadoodoo posted:I am aware of both the fix and the crappiness of our infra code. My frustration lies in the fact that there is no way this was even tested before it was pushed. You can't even run "aws --version". It's not like this is some hidden error. It was just completely non-functional code. Unless their requirements.txt pinned the version of awscli...
|
# ? Sep 18, 2020 15:58 |
|
Hughlander posted:Unless their requirements.txt pinned the version of awscli... You are talking about boto. I am talking about the new version of the awscli code being pushed to the ec2 yum repo without being tested. We had a fleet of ec2 instances start up this morning that were all broken. deedee megadoodoo fucked around with this message at 16:29 on Sep 18, 2020 |
# ? Sep 18, 2020 16:26 |
|
deedee megadoodoo posted:You are talking about boto. I am talking about the new version of the awscli code being pushed to the ec2 yum repo without being tested. Got it! Yep I'm talking the wrong thing, is it too early in the day to drink?
|
# ? Sep 18, 2020 16:32 |
|
Hughlander posted:Got it! Yep I'm talking the wrong thing, is it too early in the day to drink? It's never too early to start. And I am the one who created the confusion by not being clear about what exactly was causing my frustration. This change not only broke a lot of app code that wasn't pinned to a specific boto3 version, but it also broke a lot of infrastructure. Our startup scripts rely on being able to run the aws command to copy artifacts from s3. It was a fairly simple fix, but I am just flabbergasted that this made it out into the yum repo to begin with.
|
# ? Sep 18, 2020 16:41 |
|
deedee megadoodoo posted:It's never too early to start. And I am the one who created the confusion by not being clear about what exactly was causing my frustration. This change not only broke a lot of app code that wasn't pinned to a specific boto3 version, but it also broke a lot of infrastructure. Our startup scripts rely on being able to run the aws command to copy artifacts from s3. It was a fairly simple fix, but I am just flabbergasted that this made it out into the yum repo to begin with. That's ok. Jeff will send his ? email and everything will be taken care of.
|
# ? Sep 18, 2020 17:02 |
|
Volguus posted:That's ok. Jeff will send his ? email and everything will be taken care of. Getting an e-mail nested 6 deep where it's ?s all the way down and you don't have anyone to forward with another ? is the least fun game of hot potato I've ever played.
|
# ? Sep 18, 2020 17:32 |
|
Volguus posted:That's ok. Jeff will send his ? email and everything will be taken care of. Legit want to send him a ? email to tell him to stop being a dickhead.
|
# ? Sep 18, 2020 17:46 |
|
When I quit I’m issuing a sev1 trouble ticket (a sev 1 TT pages pretty much everyone pageable at the executive level) in return for my one ? email. One ? Email = one Sev1 TT. My boss’ boss’ boss told me over drinks that it is expressly forbidden. Which makes it all the more fun, huh?
|
# ? Sep 19, 2020 05:14 |
|
Agrikk posted:When I quit I’m issuing a sev1 trouble ticket (a sev 1 TT pages pretty much everyone pageable at the executive level) in return for my one ? email. I take it you've seen the "notable TT" page? Including the paging level ticket where someone did a real number on a toilet?
|
# ? Sep 28, 2020 13:19 |
|
Question for the AWS experts: A company that I work with has the following scaling policy: Essentially, if the average CPU usage reaches 80%, increase the capacity by 1. All good, it looks like it works, everyone's happy. I wanted to reduce that number to 65%. Editing it gives me this: And I cannot understand what it wants and why it does that. Reading the documentation left me even more confused. Adding more steps with various numbers in there helps even less. What's a negative lower bound and why do I need it? Actually, no, I don't really care what that is; how can I shove that 65 number down AWS' throat and make it leave me alone? Thank you.
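For what it's worth, the bounds in a step scaling policy are offsets from the alarm threshold, not absolute CPU values: with the alarm set at 65%, a lower bound of 0 means "from 65% upward". A negative lower bound only shows up if you define a step that starts below the threshold. A minimal single-step fragment (illustrative sketch, not the exact console form):

```json
{
  "AdjustmentType": "ChangeInCapacity",
  "StepAdjustments": [
    { "MetricIntervalLowerBound": 0, "ScalingAdjustment": 1 }
  ]
}
```

So to get the "65% → add 1" behavior, you change the CloudWatch alarm's threshold to 65 and keep a single step with lower bound 0.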
|
# ? Oct 2, 2020 21:09 |
|
Has anyone had any luck copying objects in bulk from an FTP server (or any server, really) into S3, ideally using the sync command but not required, and keeping the source file's attributes, such as created/updated times, tags, etc., and populating that data into the S3 object's custom metadata? Really, my only requirement is that I want to know the source file's creation date/time on the FTP server and have that value stuck into a custom metadata tag on S3. This sounds like an easy thing, but I'm just not seeing any obvious solution. I thought maybe something like S3Browser might have that built in, but I'm just not seeing it.
|
# ? Oct 5, 2020 14:13 |
|
I think you’ll have to write a custom script. Either upload the files one at a time and set the tag on each object or run a sync and then have a script set the date tag on every object in s3.
|
# ? Oct 5, 2020 15:34 |
|
deedee megadoodoo posted:I think you’ll have to write a custom script. Either upload the files one at a time and set the tag on each object or run a sync and then have a script set the date tag on every object in s3. Yeah, I think that's the conclusion I've come to. Just sketched out a basic bash script like ya: code:
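For the record, the per-file helper can be plain stdlib; only the upload call itself needs boto3. A sketch, assuming the FTP mirror is already on local disk and that mtime is the best "created" proxy you'll get (FTP rarely preserves true creation time):

```python
import os
from datetime import datetime, timezone

def source_metadata(path):
    """Build the custom metadata dict recording the source file's
    modification time as an ISO-8601 UTC timestamp."""
    mtime = os.path.getmtime(path)
    stamp = datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()
    return {"source-modified": stamp}

# Then per file, roughly (boto3 client/bucket/key are assumptions):
#   s3.upload_file(path, bucket, key,
#                  ExtraArgs={"Metadata": source_metadata(path)})
```

Note that user metadata is immutable on an object, so if you sync first and tag later you end up copying each object onto itself, which is why doing it at upload time is less painful.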
|
# ? Oct 5, 2020 15:37 |
|
Had a CodeDeploy fail this morning at the Download bundle step with no error. It took 70 minutes to fail and caused our ASG to choke in the meantime. Never seen this before. Opened a ticket with support, but has anyone seen anything like this before? Event Details is literally totally blank too.
|
# ? Oct 6, 2020 20:42 |
|
my bitter bi rival posted:Had a CodeDeploy fail this morning at the Download bundle step with no error. It look 70 minutes to fail and caused our ASG to choke in the meantime. Never seen this before. Opened a ticket with support, but has anyone seen anything like this before? Event Details is likely totally blank too. Is it coming from bit bucket? They had an outage today.
|
# ? Oct 7, 2020 04:03 |
|
SnatchRabbit posted:Has anyone had any luck copying objects in bulk from a FTP server (or any server really) into s3, ideally using sync command but not required, and keeping the source file's attributes, such as file created/updated, tags etc, and populating that data into the s3 object's custom metadata? Really, my only requirement is I just want to know the source files creation date/time on the FTP and just have that value stuck into a custom metadata tag on S3. This sounds like an easy thing I'm just not seeing any obvious solution. I thought maybe something like S3Browser might have that built in but I'm just not seeing it. Would https://aws.amazon.com/aws-transfer-family/ do the job?
|
# ? Oct 7, 2020 06:54 |
|
So I'm on my way to my loop on Monday for the Enterprise Architect role... somehow all the tips from the recruiter got me more confused, as they seem to elicit cookie-cutter responses and profiles, while I tend to be at my best when I'm sort of free-form-ing... I know I fit the profile (I've done this for a while now) but I'm getting nervous about how I'd actually show that to the interviewers. I'd ask for more tips but I guess I've had enough. Just needed a space to vent a bit I guess
|
# ? Oct 10, 2020 15:48 |
|
Good luck in your loop! Map all of your anecdotes to a leadership principle and you should do fine...
|
# ? Oct 11, 2020 06:22 |
|
Agrikk posted:Good luck in your loop! Exactly this, be prepared to go deeper on your answers; impact and influence. I’m a TAM who does loops so I have to be careful about what I share. Map it all to the LPs.
|
# ? Oct 11, 2020 17:36 |
|
Cancelbot posted:Exactly this, be prepared to go deeper on your answers; impact and influence. I’m a TAM who does loops so I have to be careful about what I share. Map it all to the LPs. also, don't bullshit and say what you would do in an ideal situation but say what you did and how it fit that particular situation. nothing is ever perfect.
|
# ? Oct 11, 2020 21:44 |
The latest Humble Book Bundle has some AWS books and cert study guides/practice exams. I can't speak to the quality but it's pretty cheap and contributes to charity: https://www.humblebundle.com/books/aws-azure-google-and-cloud-security-books
|
|
# ? Oct 12, 2020 21:27 |
|
Wasted 2 days troubleshooting why some mounts wouldn't show up in the CloudWatch console, only to find it's a bug in a CloudWatch agent dependency that was fixed in June of last year but somehow hasn't made it into production yet. quote:The CloudWatch Service team has identified that newer versions of the Cloudwatch Agent are using a dependency that has been recently updated. The dependency's behaviour when processing "/dev/mapper/XXX" devices has changed, which causes it to perform an additional step I'm pinning our TAM to the wall on this one. Oh man, my jimmies are rustled. This is on top of the 10-year-old bug I ran into where S3 silently converts "+" to spaces in static website hosting, which got me the most polite "this is a known issue and we've +1'd the internal tracking ticket, pound sand" reply I've ever gotten. Amazon is so polite with the apologetic replies: "We're sorry you're experiencing an issue, we can't resolve it or tell you when it's going to be fixed, we're helpless as newborn babes dropped into the middle of a datacenter! Bless this mess!" Bhodi fucked around with this message at 18:36 on Oct 16, 2020 |
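If anyone else hits the "+" issue: the website endpoint applies form-style decoding, so a "+" in a key gets served as a space. The practical workaround is to percent-encode "+" as %2B in any links you generate. A pure-stdlib sketch (the key name is made up):

```python
from urllib.parse import quote, unquote_plus

key = "reports/a+b.txt"

# Form-style decoding is what the website endpoint effectively does,
# turning "+" into a space:
assert unquote_plus("a+b.txt") == "a b.txt"

# Percent-encoding the key sidesteps it: "+" becomes %2B, which
# form-style decoding maps back to a literal "+".
safe = quote(key, safe="/")
print(safe)  # reports/a%2Bb.txt
assert unquote_plus(safe) == key
```

Renaming objects to avoid "+" entirely is the other (blunter) fix.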
# ? Oct 16, 2020 18:26 |
|
|
# ? May 21, 2024 19:11 |
|
Why yell at the TAM? They didn't cause the issue, write the patch, or cause the delay in deployment. If the TAM gave you bad advice or information that's one thing, but if it's services that are failing, you should have the TAM escalate on your behalf. Try this: “Hey TAM- We are really pissed/upset/angry/irritated at our experience with these issues. Please tell me what you are going to do to make sure service leadership understands how angry we are.” It doesn't blame the TAM for the issues, but calls the TAM to task for owning your issues and escalating on your behalf. It also uses the words “angry” and “upset”, which are trigger words that will engage the account manager and SA as well, bringing your whole account team to bear on your behalf.
|
# ? Oct 17, 2020 02:32 |