crazypenguin
Mar 9, 2005
nothing witty here, move along
It's probably pointless to try to restrict a CD role much, unless I'm mistaken. Don't forget to threat model.

Your attacker gets access to your CD role. What are their goals? What's the worst-case? Are they thwarted because the role diligently doesn't allow creating an AppStream fleet? Or do they have the keys to the whole account anyway, because they can just iam:CreateRole whatever they please and use that?

I'm not sure there's any way to make a CD role less valuable (wait, no: SCPs), so you have to make it really well protected instead.
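To sketch the SCP angle: something like the following denies the classic escalation path to everything except a designated admin role. The action list is illustrative rather than exhaustive, and the `admin-*` role pattern is a made-up placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BlockIamEscalationOutsideAdmin",
      "Effect": "Deny",
      "Action": [
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "iam:AttachRolePolicy"
      ],
      "Resource": "*",
      "Condition": {
        "ArnNotLike": { "aws:PrincipalArn": "arn:aws:iam::*:role/admin-*" }
      }
    }
  ]
}
```

With a guardrail like that in place, a stolen CD role can't just mint itself a new admin role, which makes restricting the role itself worth more.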


whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

crazypenguin posted:

It's probably pointless to try to restrict a CD role much, unless I'm mistaken. Don't forget to threat model.

Your attacker gets access to your CD role. What are their goals? What's the worst-case? Are they thwarted because the role diligently doesn't allow creating an AppStream fleet? Or do they have the keys to the whole account anyway, because they can just iam:CreateRole whatever they please and use that?

I'm not sure there's any way to make a CD role less valuable (wait, no: SCPs), so you have to make it really well protected instead.

This is a pretty good point. Especially when I consider that the CD role is pretty solidly protected such that the only people that could gain access to it are people with the administrator role in the first place. We're about to start building out a bunch more stuff so it's something I might pursue on Monday. Good shout, thanks!

Granite Octopus
Jun 24, 2008

Maybe this is an obvious thing or even an anti-pattern (I’ve only been heavily using AWS CF in a project I inherited in the last 12 months) but I’ve found close proximity has worked well for making sure deployment roles are kept in sync and tightly coupled to the actual deployments. In our system the template defining the roles lives in the same repo as the actual deployment template, and they are close by in the file system.

Thinking about it now maybe that’s bad since anyone with repo access could escalate permissions of the role and do whatever they want... not sure how you’d achieve this otherwise tho. I did make it my job to start tightening things down more. Hasn’t been easy but I’ve just been doing trial and error. Most annoying part is avoiding cyclic dependencies on resources that haven’t been created yet.

12 rats tied together
Sep 7, 2006

Twerk from Home posted:

How do you guys successfully handle IAM roles for whatever process is doing your deployments?

I'm having a hard time striking a balance between permissiveness and actual practical ability to deploy applications that are actively changing and evolving. Any type of least-privilege role for deployment has to be constantly updated whenever we integrate a new AWS feature, and nobody's going to prioritize removing unused permissions from the role when we stop using something so it doesn't stay a least-privilege role at all.

This is always some form of whack-a-mole but experience and muscle memory can help a lot. The useful google incantation here is usually "actions context conditions <service name>", which will pull up the IAM documentation that fully enumerates all of the things available to build a policy for a service. You can use this plus your CDK output to vet any least privilege policy, and then integrate a permissions audit process at whatever cadence your security team works at to make sure that the permissions are actually being used.

It's relatively simple to take, for example, 90 days of cloudtrail event data and parse it for "all actions granted to principal x that do not appear as the action in any of these cloudtrail events". I've done this with SumoLogic, but you can probably work something out with Athena or CloudWatch Log Insights or whatever they're calling it these days.

The tricky part here is when you have to start evaluating "did this principal exercise this very specific s3 bucket + prefix permission?", which gets complicated because s3 permissions can have wildcards. I've used python's glob library for this in the past which was easy enough, but, in general it's good to avoid complicated s3 permissions that need to be audited in this way.
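The audit step above can be sketched with nothing but the standard library. The event and policy shapes here are simplified stand-ins, not real CloudTrail schema, and CloudTrail's eventName only approximates the IAM action name, so treat the output as a review list rather than gospel:

```python
import fnmatch

def unused_actions(granted, events):
    """Granted actions that never appear in the CloudTrail sample."""
    exercised = {
        f"{e['eventSource'].split('.')[0]}:{e['eventName']}" for e in events
    }
    return sorted(set(granted) - exercised)

def arn_matches(pattern, arn):
    """Glob-style check of a wildcarded resource ARN against a concrete one."""
    return fnmatch.fnmatchcase(arn, pattern)

granted = ["s3:GetObject", "s3:PutObject", "ec2:DescribeInstances"]
events = [
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
    {"eventSource": "ec2.amazonaws.com", "eventName": "DescribeInstances"},
]
print(unused_actions(granted, events))  # ['s3:PutObject']
print(arn_matches("arn:aws:s3:::logs-*/2020/*", "arn:aws:s3:::logs-prod/2020/x.gz"))  # True
```

The `fnmatch` check is the same trick as the glob-library approach for the wildcarded s3 prefix case, minus any filesystem involvement.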

Pile Of Garbage
May 28, 2007



Usually at that point you've implemented some kind of SIEM solution that simply ingests the logs and does the needful. On the subject though and I didn't see this mentioned but it's best to have a central security logging AWS account which all your CloudTrail, Config, VPC Flow Logs, etc. from your AWS accounts are pushed into. This simplifies eventual ingestion of the logs/whatever as you only have to pull stuff from a single account. Furthermore it limits the risk of tampering (Someone going in and fabricating logs/events). CloudTrail is still free for the first trail in each region regardless of where you point it.
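For reference, the bucket policy on the central trail bucket usually looks something like this — account IDs and the bucket name are placeholders, and the exact statement set depends on your org setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::central-logs-bucket"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::central-logs-bucket/AWSLogs/111111111111/*",
        "arn:aws:s3:::central-logs-bucket/AWSLogs/222222222222/*"
      ],
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```

Each source account's trail then points at `s3://central-logs-bucket`, and nothing in the source accounts can rewrite history in the bucket.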

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

We run a distributed file store for our application and we've finally gotten the go-ahead to pursue replacing the existing GlusterFS setup with EFS. Fairly sure the way we're gonna handle the migration is to just rsync gluster to EFS, unmount gluster and mount EFS. Has anyone run into any issues with migrating to EFS before? We plan on testing the poo poo out of it in terms of performance and cutover, but wanted to see if there were some pitfalls we need to avoid early.

JHVH-1
Jun 28, 2002

whats for dinner posted:

We run a distributed file store for our application and we've finally gotten the go-ahead to pursue replacing the existing GlusterFS setup with EFS. Fairly sure the way we're gonna handle the migration is to just rsync gluster to EFS, unmount gluster and mount EFS. Has anyone run into any issues with migrating to EFS before? We plan on testing the poo poo out of it in terms of performance and cutover, but wanted to see if there were some pitfalls we need to avoid early.

It pretty much works like regular NFS if you follow the instructions. Just be aware that when you load up data you may hit some IO quotas (burst credits) and have to wait for them to build back up.

You might want to look at FSx too if you are evaluating things. I haven't used it, but it might be more performant.

CarForumPoster
Jun 26, 2013

⚡POWER⚡
I got a silly question in the wrong thread. Anyone print from their AWS deployed code to a local printer? E.g. to run reports

Thanks Ants
May 21, 2004

#essereFerrari


I posted this in the printers thread, not sure if you saw it

https://www.printnode.com/en

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Thanks Ants posted:

I posted this in the printers thread, not sure if you saw it

https://www.printnode.com/en

I missed it somehow (I posted this same ? a while back) and this looks to be exactly what I want. Thanks so much!

Scrapez
Feb 27, 2004

Probably a silly question here.

I have an instance with an extra ENI attached to it, so it has 2 private IPs and 2 private DNS names. I'm trying to use the AWS CLI to return the PrivateIpAddress and PrivateDnsName of the attached ENI rather than the built-in one.

When I execute this, it returns both private IPs:
code:
aws ec2 describe-instances --instance-ids i-0f9490fe543544680 --region us-west-2 --query 'Reservations[*].Instances[*].NetworkInterfaces[*].PrivateIpAddresses[*].PrivateIpAddress'
[
    [
        [
            [
                "10.5.157.245"
            ],
            [
                "10.5.144.10"
            ]
        ]
    ]
]
The ENI has a description that I could use to do the query but I'm just not sure how to do it. The ENI info looks like this:
code:
                "Description": "us-west-2b BVR",
                "NetworkInterfaceId": "eni-0e96b142405ac24f6",
                "PrivateIpAddresses": [
                    {
                        "PrivateDnsName": "ip-10-5-144-10.us-west-2.compute.internal",
                        "Primary": true,
                        "PrivateIpAddress": "10.5.144.10"
                    }
                ],
So I'd like to have my query basically say "Look at Description and match *BVR*" and return the PrivateIpAddress associated with that. Can someone point me in the right direction? Thanks!

12 rats tied together
Sep 7, 2006

In general this is where I switch to python/boto3 but the thing you're looking for here is the --query param, where you can do a filter plus starts_with. Googling "awscli query starts with" will get you some examples.

boto3 is really really good though and this type of work is much more intuitive in it

e: sorry, misread, you probably want "contains" instead of "starts with". The thing in use here is called JMESPath if you want to read the full specification.
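Roughly what that looks like in practice — the CLI invocation here is an untested sketch to adapt, and the Python below is the same filter reimplemented over a trimmed-down sample of the describe-instances output so you can sanity-check the logic locally:

```python
# Rough shape of the CLI version (untested, adjust to taste):
#   aws ec2 describe-instances --instance-ids i-0f9490fe543544680 \
#     --query "Reservations[].Instances[].NetworkInterfaces[?contains(Description, 'BVR')].PrivateIpAddresses[].PrivateIpAddress"

sample = {
    "Reservations": [{"Instances": [{"NetworkInterfaces": [
        {"Description": "primary",
         "PrivateIpAddresses": [{"PrivateIpAddress": "10.5.157.245"}]},
        {"Description": "us-west-2b BVR",
         "PrivateIpAddresses": [{"PrivateIpAddress": "10.5.144.10"}]},
    ]}]}]
}

def ips_for_description(doc, needle):
    """Collect private IPs from ENIs whose Description contains `needle`."""
    return [
        addr["PrivateIpAddress"]
        for res in doc["Reservations"]
        for inst in res["Instances"]
        for eni in inst["NetworkInterfaces"]
        if needle in eni.get("Description", "")
        for addr in eni["PrivateIpAddresses"]
    ]

print(ips_for_description(sample, "BVR"))  # ['10.5.144.10']
```

The boto3 version is the same loop over `describe_instances()` output, which is why it tends to feel more intuitive than JMESPath once the filters get hairy.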

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


deedee megadoodoo posted:

I've got a quick question about CloudWatch. We currently have 11 accounts and we're using the CloudWatch agent on our ec2 instances to ship system logs. The problem is we want a central location where we can view all of our logs. I was looking at doing this: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html and setting up a central logging account but I don't know how much of a pain that is to work with and maintain. Any thoughts?

Just want to quote myself here and write some thoughts about this AWS solution for cross-account CloudWatch.

My initial impression: it sucks. I mean, it does what it says, and provides a way to access CloudWatch in other accounts, but it really works like poo poo.

First, the configuration for which accounts you can view is client side. I had to write up a wiki page to explain to our developers how to set up the list and I just know that some of them are going to gently caress it up and then I am going to have to waste my time troubleshooting something that they shouldn't have to configure in the first place.

Second, only the CloudWatch alarms and dashboard are shared across accounts. To view the actual log groups you need to assume a role using the poorly marked "switch to profile" link which will allow a user to assume the preconfigured role in the remote account. I guess it's convenient to have an easy to use link that's already filled in and ready to go, but the user experience is loving atrocious. Did you click on Log Groups while having a different account selected? Here's a tiny box containing an obtuse error message on top of a big blank screen.

The whole thing is such a terrible clusterfuck. Whoever designed that UX should be fired immediately. Into the sun I mean. It's such a terrible user experience that I considered scrapping the entire thing but then I remembered that I hate myself so I continued on with it.

So to anyone else looking to share cloudwatch logs, I'll spare you some trouble: here be dragons.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Like anything else AWS, going cross account is a real PITA. In the early years, no one anticipated customers needing to open more than one account and now, decades later, it shows.

My constant refrain is to pull all the things into a central place and point your thing at that place. I recommend pulling logs from all your accounts into a single bucket and pointing Athena or elasticsearch at it, or push all logs into a database and add triggers to it, pull your trusted advisor checks into a database instead of checking them for each account, etc.

And yeah, cloudwatch can be really rough.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


https://github.com/boto/boto3/issues/2596

loving horse poo poo. gently caress this dumbass company. All of our deploys are broken.

Hughlander
May 11, 2005

deedee megadoodoo posted:

https://github.com/boto/boto3/issues/2596

loving horse poo poo. gently caress this dumbass company. All of our deploys are broken.

I mean, it's not going to help your frustrations now, but I guess you don't pin versions of things in production? That's pretty much step 0: you want to know what version you're running in production. Otherwise, yeah, something leaked out from how their requirements.txt differs from yours, but from the look of it, it was detected within 2 hours of release and a workaround was up within 3. That's pretty good turnaround time for when a bug escapes to the wild.
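For anyone who hasn't done it, pinning is just exact versions in your requirements.txt (the version numbers here are illustrative, not a recommendation):

```text
# requirements.txt -- pin exact versions so a bad upstream release can't sneak in
boto3==1.14.63
botocore==1.17.63
awscli==1.18.140
```

Then upgrades happen when you bump the pin and run your tests, not when upstream ships.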

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Hughlander posted:

I mean, it's not going to help your frustrations now, but I guess you don't pin versions of things in production? That's pretty much step 0: you want to know what version you're running in production. Otherwise, yeah, something leaked out from how their requirements.txt differs from yours, but from the look of it, it was detected within 2 hours of release and a workaround was up within 3. That's pretty good turnaround time for when a bug escapes to the wild.

I am aware of both the fix and the crappiness of our infra code. My frustration lies in the fact that there is no way this was even tested before it was pushed. You can't even run "aws --version". It's not like this is some hidden error. It was just completely non-functional code.

Hughlander
May 11, 2005

deedee megadoodoo posted:

I am aware of both the fix and the crappiness of our infra code. My frustration lies in the fact that there is no way this was even tested before it was pushed. You can't even run "aws --version". It's not like this is some hidden error. It was just completely non-functional code.

Unless their requirements.txt pinned the version of awscli...

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Hughlander posted:

Unless their requirements.txt pinned the version of awscli...

You are talking about boto. I am talking about the new version of the awscli code being pushed to the ec2 yum repo without being tested.

We had a fleet of ec2 instances start up this morning that were all broken.

deedee megadoodoo fucked around with this message at 16:29 on Sep 18, 2020

Hughlander
May 11, 2005

deedee megadoodoo posted:

You are talking about boto. I am talking about the new version of the awscli code being pushed to the ec2 yum repo without being tested.

We had a fleet of ec2 instances start up this morning that were all broken.

Got it! Yep, I was talking about the wrong thing. Is it too early in the day to drink?

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Hughlander posted:

Got it! Yep I'm talking the wrong thing, is it too early in the day to drink?

It's never too early to start. And I am the one who created the confusion by not being clear about what exactly was causing my frustration. This change not only broke a lot of app code that wasn't pinned to a specific boto3 version, but it also broke a lot of infrastructure. Our startup scripts rely on being able to run the aws command to copy artifacts from s3. It was a fairly simple fix, but I am just flabbergasted that this made it out into the yum repo to begin with.

Volguus
Mar 3, 2009

deedee megadoodoo posted:

It's never too early to start. And I am the one who created the confusion by not being clear about what exactly was causing my frustration. This change not only broke a lot of app code that wasn't pinned to a specific boto3 version, but it also broke a lot of infrastructure. Our startup scripts rely on being able to run the aws command to copy artifacts from s3. It was a fairly simple fix, but I am just flabbergasted that this made it out into the yum repo to begin with.

That's ok. Jeff will send his ? email and everything will be taken care of.

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.

Volguus posted:

That's ok. Jeff will send his ? email and everything will be taken care of.

Getting an e-mail nested 6 deep where it's ?s all the way down and you don't have anyone to forward with another ? is the least fun game of hot potato I've ever played.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Volguus posted:

That's ok. Jeff will send his ? email and everything will be taken care of.

Legit want to send him a ? email to tell him to stop being a dickhead.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
When I quit I’m issuing a sev1 trouble ticket (a sev 1 TT pages pretty much everyone pageable at the executive level) in return for my one ? email.

One ? Email = one Sev1 TT.

My boss’ boss’ boss told me over drinks that it is expressly forbidden. Which makes it all the more fun, huh?

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Agrikk posted:

When I quit I’m issuing a sev1 trouble ticket (a sev 1 TT pages pretty much everyone pageable at the executive level) in return for my one ? email.

One ? Email = one Sev1 TT.

My boss’ boss’ boss told me over drinks that it is expressly forbidden. Which makes it all the more fun, huh?

I take it you've seen the "notable TT" page? Including the paging level ticket where someone did a real number on a toilet?

Volguus
Mar 3, 2009
Question for the AWS experts:

A company that I work with has the following scaling policy:


Essentially, if the average CPU usage reaches 80%, increase the capacity by 1. All good, it looks like it works, everyone's happy.

I wanted to reduce that number to 65%. Editing it gives me this:

And I cannot understand what it wants and why it does that. Reading the documentation left me even more confused. Adding more steps with various numbers in there helps even less. What's a negative lower bound and why do I need it? Actually, no, I don't really care what that is. How can I shove that 65 number down AWS' throat and make it leave me alone?

Thank you.
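For what it's worth, the bounds the console is prompting for are offsets relative to the alarm threshold, not absolute CPU numbers. A single-step policy that adds one instance whenever CPU is at or above the threshold (you'd set the CloudWatch alarm itself to 65) looks roughly like this when fed to put-scaling-policy — an untested sketch, field names per the step-scaling API:

```json
{
  "AdjustmentType": "ChangeInCapacity",
  "StepAdjustments": [
    {
      "MetricIntervalLowerBound": 0,
      "ScalingAdjustment": 1
    }
  ]
}
```

With a single step, a lower bound of 0 just means "from the threshold on up". Negative lower bounds only come into play if you define multiple steps that cover ranges below the threshold, which is why the editor keeps asking about them.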

SnatchRabbit
Feb 23, 2006

by sebmojo
Has anyone had any luck copying objects in bulk from a FTP server (or any server really) into s3, ideally using the sync command but not required, and keeping the source file's attributes, such as file created/updated, tags etc, and populating that data into the s3 object's custom metadata? Really, my only requirement is I just want to know the source file's creation date/time on the FTP and have that value stuck into a custom metadata tag on S3. This sounds like an easy thing, but I'm just not seeing any obvious solution. I thought maybe something like S3Browser might have that built in, but I'm just not seeing it.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


I think you’ll have to write a custom script. Either upload the files one at a time and set the tag on each object or run a sync and then have a script set the date tag on every object in s3.

SnatchRabbit
Feb 23, 2006

by sebmojo

deedee megadoodoo posted:

I think you’ll have to write a custom script. Either upload the files one at a time and set the tag on each object or run a sync and then have a script set the date tag on every object in s3.

Yeah, I think that's the conclusion I've come to. Just sketched out a basic bash script like so:

code:
#!/bin/bash
for filename in ./*; do
  mtime=$(stat -c "%y" "$filename")
  echo "$filename" "$mtime"
  # double quotes so $mtime actually expands; the CLI adds the x-amz-meta- prefix itself
  aws s3 cp "$filename" "s3://$bucket/${filename#./}" --metadata mtime="$mtime"
done

post hole digger
Mar 21, 2011

Had a CodeDeploy fail this morning at the Download bundle step with no error. It took 70 minutes to fail and caused our ASG to choke in the meantime. Never seen this before. Opened a ticket with support, but has anyone seen anything like this before? Event Details is totally blank too.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

my bitter bi rival posted:

Had a CodeDeploy fail this morning at the Download bundle step with no error. It took 70 minutes to fail and caused our ASG to choke in the meantime. Never seen this before. Opened a ticket with support, but has anyone seen anything like this before? Event Details is totally blank too.



Is it coming from Bitbucket? They had an outage today.

fluppet
Feb 10, 2009

SnatchRabbit posted:

Has anyone had any luck copying objects in bulk from a FTP server (or any server really) into s3, ideally using sync command but not required, and keeping the source file's attributes, such as file created/updated, tags etc, and populating that data into the s3 object's custom metadata? Really, my only requirement is I just want to know the source files creation date/time on the FTP and just have that value stuck into a custom metadata tag on S3. This sounds like an easy thing I'm just not seeing any obvious solution. I thought maybe something like S3Browser might have that built in but I'm just not seeing it.


Would https://aws.amazon.com/aws-transfer-family/ do the job?

cosmin
Aug 29, 2008
So I'm on my way to my loop on Monday for the Enterprise Architect role... somehow all the tips from the recruiter got me more confused, as they seem to elicit cookie-cutter responses and profiles, while I tend to be at my best when I'm sort of free-forming... I know I fit the profile (I've done this for a while now) but I'm getting nervous about how I'd actually show that to the interviewers.

I'd ask for more tips but I guess I've had enough. Just needed a space to vent a bit I guess :D

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Good luck in your loop!

Map all of your anecdotes to a leadership principle and you should do fine...

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Agrikk posted:

Good luck in your loop!

Map all of your anecdotes to a leadership principle and you should do fine...

Exactly this, be prepared to go deeper on your answers; impact and influence. I’m a TAM who does loops so I have to be careful about what I share. Map it all to the LPs.

FamDav
Mar 29, 2008

Cancelbot posted:

Exactly this, be prepared to go deeper on your answers; impact and influence. I’m a TAM who does loops so I have to be careful about what I share. Map it all to the LPs.

also, don't bullshit and say what you would do in an ideal situation; say what you did and how it fit that particular situation. nothing is ever perfect.

ObsidianBeast
Jan 17, 2008

SKA SUCKS
The latest Humble Book Bundle has some AWS books and cert study guides/practice exams. I can't speak to the quality but it's pretty cheap and contributes to charity: https://www.humblebundle.com/books/aws-azure-google-and-cloud-security-books

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Wasted 2 days troubleshooting why some mounts wouldn't show up in the CloudWatch console, only to find it's a bug in a CloudWatch agent dependency that was fixed in June of last year but somehow hasn't made it into production yet:

quote:

The CloudWatch Service team has identified that newer versions of the CloudWatch Agent are using a dependency that has been recently updated. The dependency's behaviour when processing "/dev/mapper/XXX" devices has changed, which causes it to perform an additional step.

Recently updated my rear end. This was June of last year.

I'm pinning our TAM to the wall on this one. Oh man, my jimmies are rustled. This is on top of the 10 year old bug I ran into where S3 is silently converting "+" to spaces in static website hosting which got me the most polite "this is a known issue and we've +1'd the internal tracking ticket, pound sand" reply I've ever gotten.

Amazon is so polite with the apologetic replies "We're sorry you're experiencing an issue, we can't resolve it or tell you when it's going to be fixed, we're helpless as newborn babes dropped into the middle of a datacenter! Bless this mess!"

Bhodi fucked around with this message at 18:36 on Oct 16, 2020


Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Why yell at the TAM? They didn't cause the issue, didn't write the patch, and didn't cause the delay in deployment.

If the TAM gave you bad advice or information, that's one thing, but if it's the services that are failing, you should have the TAM escalate on your behalf. Try this:

“Hey TAM-

We are really pissed/upset/angry/irritated at our experience with these issues. Please tell me what you are going to do to make sure service leadership understands how angry we are.”

It doesn’t blame the TAM for the issues, but it calls the TAM to task for owning your issues and escalating on your behalf. It also uses the words “angry” and “upset”, which are trigger words that will engage the account manager and SA as well, bringing your whole account team to bear on your behalf.
