JHVH-1
Jun 28, 2002

PierreTheMime posted:

Coming in knowing almost nothing, what’s the best method to invoke a Java executable against a file that appears in S3 and keep it as ephemeral as possible? I’d prefer if it could be a service like Elastic Beanstalk, but I’m not sure how friendly that is with executables that take arguments or properties.

I could just invoke it from an OS running in an EC2 but I thought there must be a more AWS-friendly method.

There are a bunch of ways. You could use ECS with Fargate and a Docker image, put the Java code into a Lambda and go serverless, or get it working on OpsWorks (which I think has pre-made recipes for Java) to manage the EC2 side. Or just use EC2, maybe with an Auto Scaling group you can set to 0 when you aren’t using it.

Don’t think there’s a best way, it just depends on your needs and how you want to manage it.
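If you go the Lambda route, the trigger side is just parsing the S3 event notification before you hand the object off to your processing logic. A minimal sketch (the bucket/key layout is the standard S3 event shape; the actual processing is left as a stub, and the names here are made up):

```python
import urllib.parse


def parse_s3_event(event):
    """Pull (bucket, key) pairs out of an S3 event delivered to Lambda.

    Keys arrive URL-encoded in the event payload, so decode them before use.
    """
    records = []
    for rec in event.get("Records", []):
        s3 = rec.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = urllib.parse.unquote_plus(s3.get("object", {}).get("key", ""))
        if bucket and key:
            records.append((bucket, key))
    return records


def handler(event, context):
    # In a real function you'd download each object here and run your
    # Java-equivalent processing on it; this sketch just reports what arrived.
    return parse_s3_event(event)
```

The same parsing applies whether the function body shells out to something or does the work natively.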


JHVH-1
Jun 28, 2002
We wanted to set up signed URLs for an app recently, and I still can't get over that you have to use the root account to create them for Cloudfront.

One account we have is Rackspace managed so we don't even have access to root, and the other we basically never log in as root and someone outside our team holds the device with the MFA for it making it inconvenient.

JHVH-1
Jun 28, 2002

PierreTheMime posted:

Is there a way to set folder-level metadata in AWS Management Console? I can do it via API and Java, but if I try to apply a metadata change to a folder in the console it only applies it downstream to non-folder objects in the folder and it's driving me nuts.

Are you talking about S3? The concept of folders doesn’t really exist there; it’s really just a bucket and key paths. It only looks like directories, so an action on a “folder” applies to the keys that match the prefix. The directory itself isn’t an object.
If you want to control access or something you have to use IAM roles (or a bucket policy, but that’s more the old way), or have your code set an ACL when files get added.

It might be easier in some cases to write code using an AWS sdk or use the cli to do something more specific on bulk files.
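The prefix semantics can be sketched like this (a pure-Python simulation of a bucket as a key→bytes map; no real S3 calls, and the key names are made up):

```python
def list_keys(bucket, prefix):
    """S3 'folders' are just shared key prefixes: listing a folder is
    really listing every object whose key starts with the prefix."""
    return sorted(k for k in bucket if k.startswith(prefix))


def is_folder_marker(key, bucket):
    """The console's 'create folder' just writes a zero-byte object whose
    key ends with '/'; nothing else makes a directory exist."""
    return key.endswith("/") and bucket.get(key) == b""


bucket = {
    "logs/": b"",                  # zero-byte "folder" marker
    "logs/2020/app.log": b"...",
    "logs/2020/err.log": b"...",
    "readme.txt": b"hi",
}
```

This is also why console actions on a “folder” fan out to the matching objects: there is nothing else for them to apply to.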

JHVH-1
Jun 28, 2002

PierreTheMime posted:

Yes I meant S3, sorry. You can apply metadata to “folders” in paths, though? If you can do it by putting values in the header via s3api, setObjectMetadata(), and presumably a number of other ways. The metadata is key-specific and doesn’t require an object to exist.

Per my above posts, I was able to use this to create a new bucket name and key from metadata values via a Lambda-invoked jar, so it’s not like a theoretical thing. It’s just weird that it’s possible (and useful) but not an option I can see in the default interface.

Edit: Hrm, apparently placing a metadata tag on a key silently converts it to a 0-byte file. I'll have to do some digging, but I'm thinking that still works, it's just... weird. Especially since they still show as folders in the console.

Yeah the console is pretty much designed to make it easier to navigate.

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html

If you use the trailing slash it creates an empty object that will show up as a folder. I don’t know if it helps with anything except presentation though, it doesn’t really serve much purpose.

JHVH-1
Jun 28, 2002

PierreTheMime posted:

What’s the best method to accept new host keys for SFTP connections during Lambda functions? Just set the hostkey file to an S3 object as a function variable?

That would work, but I think using parameter store or secrets manager might be the preferred/modern way of doing it.

JHVH-1
Jun 28, 2002

SnatchRabbit posted:

Can someone answer a simple S3 encryption question for me? If I set default bucket encryption to S3 using AES256, that will encrypt objects in the bucket at rest, correct? Now what about in transit? Currently, I have an off-site QRadar server which I have configured to ingest Cloudtrail and GuardDuty logs with log sources. These log sources each have their own IAM user with a policy that allows them to access the S3 bucket. The Cloudtrail encrypts objects with a KMS key in addition to the default bucket encryption. The Cloudtrail QRadar IAM user has access to this KMS key as well as the bucket and can fetch the logs no problem via Access Key and Secret Access Key using the Amazon AWS S3 REST API Protocol. GuardDuty only has the bucket-level encryption, so its IAM user policy only has access to the encrypted bucket. Now, my question is: will either of these scenarios encrypt the data in transit to the off-site QRadar? In either case, is there a relevant AWS Docs page explaining why or why not?

You would use SSL/HTTPS so it's encrypted in transit. You can enforce it by adding a deny to the bucket policy with the condition "aws:SecureTransport": "false"

https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-ssl-requests-only.html
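A minimal sketch of that deny statement, built as a Python dict so the shape is easy to see (the bucket name is a placeholder):

```python
import json


def ssl_only_policy(bucket_name):
    """Bucket policy that denies any S3 request not made over TLS.

    The Deny effect combined with the aws:SecureTransport:false condition
    is the pattern the s3-bucket-ssl-requests-only Config rule checks for.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",      # the bucket itself
                    f"arn:aws:s3:::{bucket_name}/*",    # every object in it
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }


print(json.dumps(ssl_only_policy("my-log-bucket"), indent=2))
```

Note this only enforces transport encryption at the bucket edge; the client (QRadar in this case) still has to actually use the HTTPS endpoint.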

JHVH-1
Jun 28, 2002
Ive heard of some orgs that just give each team their own account so they are isolated. It also has the benefit that they get to pay the bill so if they waste resources it comes out of their own budget.

JHVH-1
Jun 28, 2002

nexxai posted:

I know this question isn't specifically AWS related, but I wasn't really sure where it would fit better.

We have an API Gateway that basically fronts a bunch of customer connections to a vendor's service we purchase and offer. We're seeing a average failure rate of around 0.1% (e.g. ~11,000 failures on ~11,000,000 API requests in a month) and since this is the first time I've ever worked on a "real" API, I don't know if that's good, bad, about average, or what. We're well within our contractual rights to our customers (guaranteed 1.5% or less failure rate) but our contracts were written by people who have even less knowledge in this space than I do.

I've tried searching for "typical failure rates" or "acceptable failure rates" but nothing really comes up. Is anyone able to give some insight here?

If it's important enough you probably want a retry or queuing system, but it's still good to break down the errors and look for patterns: similar types of request, type of error, time of day, request volume, whether the errors occur grouped together, what the app is doing leading up to the error, etc.

Then at least if you go to the vendor they might have some insight, or you could be finding an issue on their end they aren't aware of. Even if you aren't breaching the SLA they'd probably want to avoid it if it's a good service.
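As a rough sketch of both ideas, here's the failure-rate math from the post plus a generic retry-with-backoff wrapper (the jitterless backoff schedule is illustrative, not a recommendation):

```python
def failure_rate(failures, total):
    """Fraction of requests that failed; ~11,000 / 11,000,000 is ~0.1%."""
    return failures / total


def retry(fn, attempts=3, base_delay=1.0, sleep=lambda s: None):
    """Call fn, retrying on exception with exponential backoff
    (1s, 2s, 4s, ...). `sleep` is injectable so tests don't wait."""
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # real code should catch narrower errors
            last_err = err
            if i < attempts - 1:
                sleep(base_delay * 2 ** i)
    raise last_err
```

A retry layer like this mostly helps with transient failures; systematic errors still need the pattern analysis described above.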

JHVH-1
Jun 28, 2002

a hot gujju bhabhi posted:

Not AWS but hoping someone can help. We have a Varnish server configured to cache requests and behind that we have an Azure load balancer that balances between 3-4 VMs depending on requirements. The problem is that something about the Varnish server being there is causing the load balancer to go stupid and it seems to be confusing the traffic as one visitor and sending it all to the one VM. In other words, it doesn't seem to know or care about the X-Forwarded-For header when determining where to send requests.

Am I right in this assumption? Is there any way to configure the load balancer to ignore the client IP and use the X-Forwarded-For header instead?

I don’t know anything about Azure’s load balancer, but you might want to see if it has some session stickiness and is treating the Varnish server as a single requesting user, which it would then always send to the same backend.

JHVH-1
Jun 28, 2002

a hot gujju bhabhi posted:

After some further investigation it seems like actually the load balancer itself is doing okay, but one of the servers just seems to have a much harder time processing requests. In other words its CPU spikes regularly even though it's handling the same volume as the others. Unfortunately this was all set up long before my arrival so these VMs are far from immutable, in fact they're significantly mutated, so it wouldn't surprise me if there's some configuration defect on that specific VM.

Thanks for the help anyway guys, I definitely learned a lot.

Automate the whole server config and then burn them all to the ground! DevOps anarchy!

JHVH-1
Jun 28, 2002

Scrapez posted:

Is Cloudformer development halted? I know it shows (Beta) when you use it but elsewhere I've seen people saying they have stopped developing it.

It seems like it would be in high demand. I'm currently trying to replicate all objects we have in us-east-2 over to us-west-2. Cloudformer was helpful with generating the initial cloudformation template but it has some pretty serious flaws. One of which is that it didn't grab any of the user-data from the auto-scaling launch configurations.

I'm augmenting the cloudformation template that cloudformer created to be used to build the objects in us-west-2 but if we change things in us-east-2 in the future, I'll have to repeat all of this manual work.

Is there a better way of replicating everything from one region to another?

They added this last month, which is probably nicer, but I haven’t tested it: https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/

JHVH-1
Jun 28, 2002
https://twitter.com/jeffbarr/status/1212425207933095936?s=21

JHVH-1
Jun 28, 2002

sinequanon01 posted:

Do we have an Aussie goontam?

I hope there is an Australian tam named Tim

JHVH-1
Jun 28, 2002
Started writing some stuff up in CDK and plan on making my ECS stacks in it. It’s kinda weird that it generates CloudFormation, but you get the benefits of having your stacks deployed that way without having to write hundreds of lines of YAML or JSON and template or map out everything.
It’s been kinda confusing to debug with it being so new and functions changing, but there’s a good discussion area on a Gitter site someone pointed me to. There was someone from AWS on there answering questions.

I wrote up something yesterday that I plan on using to create cloudfront distros for some sites so I don't have to do it by hand.

JHVH-1
Jun 28, 2002

fletcher posted:

Anybody try to do anything meaningful with Aurora Serverless yet? Wondering what sort of unknown horrors I'm about to encounter...

I used it for a couple moodle server instances and a couple functions the installer used to create the tables weren’t quite accessible. I had to do an install on MySQL and then import it to get going. No problems afterwards.

Haven’t yet moved any production workloads to it but that might happen as we migrate to more serverless and containers.

JHVH-1
Jun 28, 2002
Amazon’s MFA uses the tokens, but if you hook up a role to your own auth you can do whatever you want.

I’ve started doing that for groups in my company and it’s a benefit since they just self manage one login and 2factor process.

You still need the code for the root account though.

JHVH-1
Jun 28, 2002
You can go to https://calculator.aws and price out most stuff to get an idea.

Though I was trying to check Fargate container prices earlier and couldn’t find where they were in this new estimator page, if they’re even there.

JHVH-1
Jun 28, 2002
Haven’t used that feature personally but you might need to create a lambda function to do it for you.

JHVH-1
Jun 28, 2002

Pile Of Garbage posted:

Yeah certainly looks that way. Package creation is supported by AWS CLI and the API, the only gap appears to be CFN. From poking around it looks like there's some stuff Amazon does on the back-end where they publish some metadata which associates the SSM Document with your package manifest and content.

It’s one of the things I like about CDK: it will create the Lambda functions in certain cases to fill in those gaps. Plus it writes the CloudFormation for you.

JHVH-1
Jun 28, 2002
You could also put stuff in Lambda@Edge functions in CloudFront and use S3 for static files. I think it would be cheaper than an ALB, but it depends on what you want to do.

JHVH-1
Jun 28, 2002
I think they want you to import resources into a stack now instead. They haven’t touched that cloudformer in a while.

I think you just add your stuff to an empty stack and then you can see the template and add to/modify it in designer.

JHVH-1
Jun 28, 2002

22 Eargesplitten posted:

I, uh, may have hosed up.

I was trying to copy a zip file over using scp and was getting permission denied. I tried a few things, but got impatient so I just used chmod to change the ~ directory to 777 (not recursive). Now whenever I try to scp or ssh to the device I'm getting an error message saying "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)". All the instances of this that I'm finding on stack overflow and whatnot seem to involve people not using the correct username. Digital Ocean has a tutorial but it requires logging directly into the machine, which I don't believe is possible in AWS.

Is there a way to fix this, or do I need to scrap the instance and start over? I was dumb and didn't take a snapshot of it.

I would check if you have systems manager running and you might be able to repair or get access to it that way.

The other option I guess is spinning up a new instance and attaching the volume to mount and fix the permissions (or just get the data off and use the new instance). Probably a good idea to make a snapshot first just in case, if it’s important data or losing it would set you back.

JHVH-1
Jun 28, 2002
Looks like they have an SSM automation just for fixing ssh as well https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-awssupport-troubleshootssh.html

If everything is 777 it won't work correctly. The permissions need to be something like what's shown here or else the ssh daemon rejects them:
https://gist.github.com/grenade/6318301
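A rough sketch of the check sshd effectively performs (sshd's StrictModes check rejects key auth when the home dir, ~/.ssh, or authorized_keys are group/other-writable; the 700/600 modes below are the conventional safe settings):

```python
import os
import stat
import tempfile


def ssh_perms_ok(home):
    """Return False if the home dir, ~/.ssh, or authorized_keys is
    writable by group or other -- the condition that makes sshd
    (with StrictModes, the default) refuse public-key auth."""
    paths = [
        home,
        os.path.join(home, ".ssh"),
        os.path.join(home, ".ssh", "authorized_keys"),
    ]
    for p in paths:
        if os.path.exists(p):
            mode = stat.S_IMODE(os.stat(p).st_mode)
            if mode & (stat.S_IWGRP | stat.S_IWOTH):
                return False
    return True


# Demo layout: 755 home, 700 .ssh, 600 authorized_keys is fine;
# a chmod 777 on $HOME is exactly what breaks key auth.
home = tempfile.mkdtemp()
os.makedirs(os.path.join(home, ".ssh"), mode=0o700)
auth = os.path.join(home, ".ssh", "authorized_keys")
open(auth, "w").close()
os.chmod(auth, 0o600)
os.chmod(home, 0o755)
```

So a non-recursive `chmod 777 ~` is enough to lock you out, even though the key files themselves are untouched.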

JHVH-1
Jun 28, 2002

PierreTheMime posted:

What’s the simplest way to variabilize your account ID for SAM template ARNs? I got it working by setting a parameter in SSM prior to deploying and referencing that but there’s got to be a less awkward way to do that.

I think it supports some kind of templating similar to CloudFormation (not surprising since it generates it) with pseudo parameters.

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-template-list.html
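For the account-ID case specifically, CloudFormation (and therefore SAM) exposes it as the `AWS::AccountId` pseudo parameter, so you can build ARNs without hard-coding anything or staging a value in SSM. A sketch of what that looks like in a SAM template (the function and queue names here are made up):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      Policies:
        - Statement:
            - Effect: Allow
              Action: sqs:SendMessage
              # AWS::Region and AWS::AccountId are resolved at deploy time,
              # so the same template works in any account or region.
              Resource: !Sub arn:aws:sqs:${AWS::Region}:${AWS::AccountId}:my-queue
```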

JHVH-1
Jun 28, 2002

whats for dinner posted:

We run a distributed file store for our application and we've finally gotten the go-ahead to pursue replacing the existing GlusterFS setup with EFS. Fairly sure the way we're gonna handle the migration is to just rsync gluster to EFS, unmount gluster and mount EFS. Has anyone run into any issues with migrating to EFS before? We plan on testing the poo poo out of it in terms of performance and cutover, but wanted to see if there were some pitfalls we need to avoid early.

It pretty much works like regular NFS if you follow the instructions. Just note that when you first load up data you may burn through the burst IO credits and have to wait for them to build back up.

You might want to look at FSx too if you are evaluating things. I haven't used it, but it might be more performant.

JHVH-1
Jun 28, 2002

Fcdts26 posted:

Anyone have experience with getting help with something like the CDK from business support?

Never tried before. Have you hit up the Gitter thing yet? https://gitter.im/awslabs/aws-cdk

Probably not business level support but the people working on the project post there and I’ve gotten help before.

JHVH-1
Jun 28, 2002

thotsky posted:

Maybe this is a python issue rather than AWS centric, but I'm having some trouble when working locally with Lambda Layers on my SAM/Python Lambda projects.
I put my layer code in "/layers/python/example_layer.py" while my lambas live in "/lambdas/example_lambda/example_lambda.py" and import the layer using "import example_layer" in my lambda.
SAM deploys both lambdas and the layer just fine, and it seems to work fine in the cloud, but my local IDE (VS Code) and pylint does not like the import statement (unable to import example_layer).

I figure that maybe the layer code needs to be below the lambda code in my folder structure for it to be found, but that would sort of defeat the point of a shared layer, especially since SAM appears to require that I keep the various lambdas in separate folders if I don't want them bundled together.

Never done it in SAM, so maybe it has some way to manage it, but when I was making a basic Twitter bot I had to do ‘pip install twython -t .’ to install the dependency into the same directory as my index.py.

They just added a way to use Docker container images though, which will get rid of a lot of these annoyances. I need to test that out.

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
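For the local-import side of the layers question, one common workaround (an assumption on my part, not something SAM mandates) is to add the layer's `python/` directory to `sys.path` for local runs and tests, since Lambda itself mounts layer contents onto the import path at runtime:

```python
import os
import sys
import tempfile


def add_layer_to_path(repo_root, layer_dir="layers/python"):
    """Mimic Lambda's layer mounting for local dev: Lambda unpacks a
    layer's python/ folder onto the import path, so we do the same."""
    path = os.path.join(repo_root, layer_dir)
    if path not in sys.path:
        sys.path.insert(0, path)
    return path


# Demo against a throwaway repo layout: layers/python/example_layer.py
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "layers", "python"))
with open(os.path.join(root, "layers", "python", "example_layer.py"), "w") as f:
    f.write("VALUE = 42\n")

add_layer_to_path(root)
import example_layer  # now resolvable, as it would be inside Lambda
```

Pointing pylint/VS Code at the same directory (e.g. via an extra-paths setting) quiets the unresolved-import warning without moving the layer code.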

JHVH-1
Jun 28, 2002
I just had to do that a few weeks ago. Had an old access key for SES that they were going to shut off because of updates. The console said when the key was last used, but usage isn’t something they surface in CloudTrail like an API call. I just shut it off after replacing the key in as many places as I could remember it might be used.

JHVH-1
Jun 28, 2002

deedee megadoodoo posted:

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-using-event-publishing.html

You can set up an smtp configuration set and specify it when you send an email.

I should probably set this up for the future. Every 6 months or so we get some unwarranted spam complaints that fall through the cracks and push our reputation level over the edge.

I manage the external public websites but the domain is shared with our internal IT team. They have it cluttered up with really dumb stuff like a service for managing email signatures, and this pointless phishing test service which sends out spoofed phishing mails which wouldn’t even pass through our o365 if they were real.

Also I hate email.

JHVH-1
Jun 28, 2002
I’ve been using ECS/Fargate for lots of stuff. I have a service spun up with copilot using it.

The new App Runner thing they added lets you skip the ALB, but the minimum size is bigger. Wanted to try it but that makes it kind of a wash.

With Fargate I can just spin up the smallest task possible, increase it if needed, and let it auto scale. No hosts to manage; just push images to ECR and refresh the service. It’s not too complicated once it’s set up (and Copilot saves a lot of that work).


JHVH-1
Jun 28, 2002

Agrikk posted:

Create a lambda function to pull the files into S3

Then either

Point Athena at the bucket

Or

Data Pipeline (or your own ETL script on a t3.micro instance) to load the CSV from S3 to RDS

And:

A lambda function to turn on/off the EC2 instance when not processing the CSV

If you can run it from ECS, they have tasks that just run and exit. I am using it like that to do the reverse: dump a database to an S3 location so there is always a file with the latest data for a 3rd party to analyze. It just took creating a Dockerfile and setting up the task.
