|
Arzakon posted:Send me your contact details because I can't nominate a community hero with just an SA username The last thing I want is people getting the impression that helpful is something I do on purpose. It raises expectations dramatically
|
# ? Mar 17, 2020 05:34 |
|
|
Nomnom Cookie posted:The last thing I want is people getting the impression that helpful is something I do on purpose. It raises expectations dramatically do you want a job i am hiring and you've already aced team fit
|
# ? Mar 17, 2020 05:56 |
|
OK but how do I delete those tags?
|
# ? Mar 17, 2020 11:42 |
|
Pile Of Garbage posted:OK but how do I delete those tags? You can try tossing it at support but it is unlikely there is anything to be done other than replace the resources completely.
|
# ? Mar 17, 2020 15:22 |
|
Arzakon posted:do you want a job i am hiring and you've already aced team fit sure. I already have one job so it's gonna be hard for me to do work for you atm but I'm not gonna turn down another paycheck. Might be easier to think of it as a retainer on my future services when my current employer folds in Q3 and my daytime is freed up
|
# ? Mar 17, 2020 21:22 |
|
Nomnom Cookie posted:when my current employer folds in Q3 and my daytime is freed up true feel
|
# ? Mar 19, 2020 05:42 |
|
How did you get into this situation anyway? Like I know the literal sequence of events that you posted, but why did that happen? Why are you still using resources that were created with a CF template that no longer exists? I'm not trying to override your question or invalidate it, sorry. Just curious what happened?
|
# ? Mar 21, 2020 09:43 |
|
I know it's very small potatoes but I just wrote a Lambda migration tool to swap configuration settings and script values across accounts on the fly and it surprisingly worked on the first try. It's also the first decent work I've done all week, since having the children at home 24/7 means almost nothing gets done except surface-level meetings and ticketing.
|
# ? Mar 21, 2020 17:49 |
|
a hot gujju bhabhi posted:How did you get into this situation anyway? Like I know the literal sequence of events that you posted, but why did that happen? Why are you still using resources that were created with a CF template that no longer exists? (Edit: this post is bad because you were really asking why.) dividertabs fucked around with this message at 19:54 on Mar 22, 2020 |
# ? Mar 21, 2020 20:46 |
|
a hot gujju bhabhi posted:How did you get into this situation anyway? Like I know the literal sequence of events that you posted, but why did that happen? Why are you still using resources that were created with a CF template that no longer exists? Basically what dividertabs said: dividertabs posted:The DeleteStack operation has a RetainResources parameter. The CF console will give you the option to use it when you delete a stack that was already in delete_failed status (which is not uncommon at all; lots of AWS resources have conditions on when you can delete them. e.g. S3 buckets must be empty.) I had a bunch of CFN stacks deploying EC2 instances and I really needed the CFN to gently caress-off because it was bad and a risk so I enabled API Termination Protection on the EC2s, deleted the stacks which failed and then deleted a second time selecting to retain the EC2 instances. Janky af but it worked. It was a situation born out of a rushed project with insufficient time to design or prototype things.
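That two-step delete can be sketched with boto3. This is a hypothetical illustration rather than the actual code used; the stack name and instance IDs are placeholders. The key detail is that RetainResources only accepts logical IDs of resources already in DELETE_FAILED state from the previous attempt:

```python
def delete_failed_logical_ids(stack_resources):
    """RetainResources only accepts resources whose previous delete
    attempt failed, so filter for DELETE_FAILED."""
    return [r["LogicalResourceId"] for r in stack_resources
            if r["ResourceStatus"] == "DELETE_FAILED"]


def protect_and_retain(stack_name, instance_ids):
    import boto3  # lazy import so the helper above works without the SDK

    ec2 = boto3.client("ec2")
    cfn = boto3.client("cloudformation")

    # 1. Termination protection makes the instance deletions fail.
    for iid in instance_ids:
        ec2.modify_instance_attribute(
            InstanceId=iid, DisableApiTermination={"Value": True})

    # 2. First delete attempt fails, leaving the instances DELETE_FAILED.
    #    (In real code, poll the stack status until it hits DELETE_FAILED
    #    before moving on to step 3.)
    cfn.delete_stack(StackName=stack_name)

    # 3. Second delete retains whatever failed to delete.
    resources = cfn.describe_stack_resources(
        StackName=stack_name)["StackResources"]
    cfn.delete_stack(StackName=stack_name,
                     RetainResources=delete_failed_logical_ids(resources))
```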
|
# ? Mar 22, 2020 09:20 |
|
I'm porting code from a Lambda over to a container on ECS. The part where I access a key on SecretsManager now returns a None object (I'm on Python). The code is the same (a copy paste from the get_secret method provided by aws), permissions, roles, vpc and endpoints settings are the same. I've been testing on a minimalist container that only tries to fetch the key. I'd appreciate help.
|
# ? Mar 22, 2020 15:03 |
|
unpacked robinhood posted:I'm porting code from a Lambda over to a container on ECS. There’s a lot it might be. Try printing your Secrets Manager client session config and see if it reflects what you expect.
|
# ? Mar 22, 2020 15:44 |
|
Pile Of Garbage posted:Basically what dividertabs said: This would be a variety of yellow flags for me in my employer's prod account. The first is that there should be no situation where an ec2 instance can't be arbitrarily deleted and recreated. Any application running in aws needs to be able to tolerate the random and immediate loss of at least one ec2 instance -- it doesn't have to be automatic recovery or even anything more fancy than 'restore snapshot or move volume to new ec2 instance'. Enabling a configuration option to intentionally cause another service to fail is probably the least operationally sane way to get something out of a stack. The best way to do this, IMO, assuming you can't just reprovision the ec2 instance for some reason, would be to add a DeletionPolicy to your cfn resource(s) that specifies retain and then delete the stack. Most importantly this gets you git history, but it also makes your aws logs cleaner and easier to audit if necessary, because you have a single successful operation against a resource in a known state instead of a variety of failed api calls and overrides made through a proxy service. While reading the documentation for DeletionPolicy you would also probably come across the UpdatePolicy specification, which could address any concerns you have over the cloudformation stack being a risk. CloudFormation is an extremely safe and reliable service if you take the time to read about all of the functionality it provides.
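For reference, a minimal sketch of what that DeletionPolicy approach looks like in a template fragment (resource name and AMI are made up):

```yaml
# Hypothetical template fragment: with DeletionPolicy: Retain, deleting
# the stack leaves the instance behind in a single clean operation --
# no failed deletes or termination-protection tricks required.
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    DeletionPolicy: Retain
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder
      InstanceType: t3.micro
```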
|
# ? Mar 22, 2020 17:32 |
|
This isn't my first rodeo. My remit was X number of workloads with Y capabilities. X and Y were never defined and to this day remain as such. I took over a quarter into the thing, by which point a lovely pattern had already been adopted. None of these workloads were designed to be shoot+respawn and we didn't have the time to make them so. By the time I was in, multiple EC2 instances had been deployed straight from CFN stacks, which was bad. I tore that down and transitioned to deploying Launch Templates via CFN, as the workloads had common OS, SG, Instance Profile and User Data. That way further systems can be deployed abstracted from CFN. I made the best of what I was given. I was worried about the tags because maybe someone with more time on their hands than me will get confused.
|
# ? Mar 22, 2020 18:27 |
|
PierreTheMime posted:There’s a lot it might be. Try printing your Secrets Manager client session config and see if it reflects what you expect. Is it something readily accessible, or do I need to poke around inside boto3? There's no obvious way looking at the docs but I haven't dug a lot.
|
# ? Mar 23, 2020 08:57 |
|
unpacked robinhood posted:Is it something readily accessible, or do I need to poke around inside boto3 ? There's no obvious way looking at the docs but I haven't dug a lot. post the code.
|
# ? Mar 23, 2020 13:02 |
|
unpacked robinhood posted:Is it something readily accessible, or do I need to poke around inside boto3 ? There's no obvious way looking at the docs but I haven't dug a lot. Try grabbing some default session information like so: Python code:
The following should use the same credentials to connect to Secrets Manager and access a secret. In this case, I just grabbed an example for an SMTP account: Python code:
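The original code blocks didn't survive archiving, so as a stand-in here is a minimal boto3 sketch of the same idea. The secret name and keys are placeholders, not the original example:

```python
"""Plausible reconstruction of the suggestion above, not the original
code: print where the session's credentials actually come from, then
fetch a secret with the same session."""
import json


def secret_value(secret_string, key):
    """SecretString is a JSON document; pull one key out of it. Returns
    None if the key is absent -- one way to get a None where you
    expected a secret."""
    return json.loads(secret_string).get(key)


def debug_session_and_fetch(secret_id="smtp-account"):  # placeholder name
    import boto3  # lazy import so the helper above works without the SDK

    session = boto3.session.Session()
    # On ECS this should be the task role, not a stray env var or
    # instance profile.
    print("region:", session.region_name)
    creds = session.get_credentials()
    print("credential method:", creds.method if creds else None)

    sm = session.client("secretsmanager")
    resp = sm.get_secret_value(SecretId=secret_id)
    return secret_value(resp["SecretString"], "password")
```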
|
# ? Mar 23, 2020 14:20 |
|
PierreTheMime posted:Try grabbing some default session information like so: Thanks, I was stuck looking at the SecretsManager api only. We eventually fixed it, it was a typo in a policy, which in our case didn't trigger an explicit error.
|
# ? Mar 23, 2020 18:03 |
|
What’s the preferred method to migrate Glue jobs and their underlying components to another account? Right now people in my office are still getting their heads around working with AWS and it makes duplicating things across accounts take forever. If anyone has any documentation about it, that would be great.
|
# ? Mar 23, 2020 22:28 |
|
Pile Of Garbage posted:Did some Googling of this earlier today but couldn't find an answer: is it possible to delete AWS reserved tags from resources? I've got a bunch of EC2 instances that were spun-up from CFN templates that have since been deleted however the instances all still have the AWS reserved tags for CFN on them. If you try to delete the tags via the console it just spits an error about how you can't delete AWS reserved tags. I hit this situation in the past year. We had some legacy resources managed by CFN and wanted them to be managed by Terraform to conform with everything else we do.
- Create an IAM role that only has CFN access and nothing else.
- Use that role to delete the stack. It will fail since it doesn’t have permission to modify the resources managed by the stack.
- Delete again. This time there will be an option to delete the stack but leave the underlying resources behind.
This will remove the CFN tags. This can be done through the CLI as well.
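Step one can be sketched as an IAM policy (hypothetical; the point is that the role can drive CloudFormation but has no permissions on the underlying resources, so the first delete fails):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudFormationOnly",
      "Effect": "Allow",
      "Action": "cloudformation:*",
      "Resource": "*"
    }
  ]
}
```

From the CLI the same dance would be `aws cloudformation delete-stack --stack-name <stack> --role-arn <cfn-only-role-arn>`, then a second `delete-stack` with `--retain-resources` once the first one fails.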
|
# ? Mar 25, 2020 16:23 |
|
Cerberus911 posted:I hit this situation in the past year. We had some legacy resources managed by CFN and wanted them to be managed by Terraform to conform with everything else we do. That won't delete the CFN tags and doesn't really help as I've already nuked the stacks: Pile Of Garbage posted:I had a bunch of CFN stacks deploying EC2 instances and I really needed the CFN to gently caress-off because it was bad and a risk so I enabled API Termination Protection on the EC2s, deleted the stacks which failed and then deleted a second time selecting to retain the EC2 instances. Janky af but it worked. How did it even delete the tags when you did it? If there's one consistent thing about AWS it's permissions. I suspect that only the CFN principal can delete the AWS tags, which means your role would need tag-deletion permissions as well.
|
# ? Mar 25, 2020 17:44 |
|
PierreTheMime posted:What’s the preferred method to migrate Glue jobs and their underlying components to another account? Right now people in my office are still getting their head around working with AWS and it makes duplicating things across accounts take forever. For those curious, the official answer from our TAM is that there is no way to migrate Glue Jobs normally, but you can do it yourself using the API if you want to play around with it enough. I've created the code to migrate Jobs and perform updates prior to creating them in the destination account (update buckets, script content, roles, etc.). My plan is to lay the foundation in the new account with CloudFormation to set up the roles, Redshift, S3 and connection info, and then use the code to perform migration actions against a defined list of Jobs with whatever changes are necessary to match the new account. Is this sane? It seems useful and the people in my office seem to want it, but I don't know why this functionality doesn't just exist already.
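A sketch of that get_job → scrub → create_job flow with boto3's Glue API. Bucket names are placeholders, and the COPYABLE field list is an approximation to verify against the docs -- get_job returns read-only fields (CreatedOn, LastModifiedOn, ...) that create_job rejects:

```python
"""Hypothetical sketch of migrating a Glue job between accounts via the
API, in the spirit of the approach described above."""

# Fields get_job returns that create_job will also accept; everything
# else is read-only metadata and must be dropped before re-creating.
COPYABLE = {"Name", "Description", "Role", "ExecutionProperty", "Command",
            "DefaultArguments", "Connections", "MaxRetries", "Timeout",
            "GlueVersion", "NumberOfWorkers", "WorkerType"}


def portable_job(job, old_bucket, new_bucket):
    """Strip read-only fields and repoint the script at the new
    account's bucket (placeholder names)."""
    out = {k: v for k, v in job.items() if k in COPYABLE}
    loc = out.get("Command", {}).get("ScriptLocation", "")
    if loc:
        out["Command"] = dict(out["Command"],
                              ScriptLocation=loc.replace(old_bucket, new_bucket))
    return out


def copy_job(job_name, old_bucket, new_bucket, src_session, dst_session):
    """src_session/dst_session are boto3 Sessions for each account."""
    src = src_session.client("glue")
    dst = dst_session.client("glue")
    job = src.get_job(JobName=job_name)["Job"]
    dst.create_job(**portable_job(job, old_bucket, new_bucket))
```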
|
# ? Mar 25, 2020 19:58 |
|
Does anyone know if there's an MFA for the aws console that can be configured to allow push notifications? I'm getting *really* tired of typing in numbers from my phone. Microsoft Authenticator has kinda spoiled me.
|
# ? Apr 8, 2020 15:43 |
|
I think you need to go down the sso route for that
|
# ? Apr 8, 2020 17:59 |
|
Amazon's MFA uses the tokens, but if you hook up a role to your own auth you can do whatever you want. I’ve started doing that for groups in my company and it’s a benefit since they just self-manage one login and 2-factor process. You still need the code for the root account though.
|
# ? Apr 8, 2020 18:49 |
|
I'm trying to convince our developers to move to AWS Organisations and put SSO in, authing against Azure AD. They seem to think that SSO is less secure because if you're already logged into Azure AD then you don't have to put a password in again to use AWS (I've already done a conditional access demo), so actually having all the accounts separate is better. The other push back I'm getting is that because they are only a small team, their manager is happy to create and turn off accounts as required. If we were big enough to have a security team I'd get them to have a word. I'm also a bit confused at a software developer actively pushing back against automating a process (new starter when in the right group automatically pops up in AWS ready to be assigned permissions) but I think a part of it is empire building.
|
# ? Apr 8, 2020 22:52 |
|
Bhodi posted:Does anyone know if there's an MFA for the aws console that can be configured to allow push notifications? I'm getting *really* tired of typing in numbers from my phone. Microsoft Authenticator has kinda spoiled me. Not exactly, but Yubikeys are USB MFA tokens where you just press a button in the browser and it does the magic. They have an added benefit of spitting out long character strings if you hold them down so you'll get to spam Slack whenever you walk around holding your laptop unlocked while touching it. https://aws.amazon.com/blogs/security/use-yubikey-security-key-sign-into-aws-management-console/ They work for multiple accounts and even non-AWS stuff so you don't need a hundred of them. Thanks Ants posted:I'm trying to convince our developers to move to AWS Organisations and put SSO in, authing against Azure AD. Go look through all of your AWS accounts and find a former employee who still has an IAM user or maybe even some access keys, then shame them. Or just find them doing something incredibly dumb like using root, root access keys, or just something else stupid that increases your risk and use that as an excuse to own policy for all of your org's AWS accounts. Alternatively I will physically fight them for you.
|
# ? Apr 8, 2020 23:41 |
|
Trying to price out some cloud options for a project I'm on. Basically, we will be generating tons of images and storing them. It'll be a responsive web interface, nothing fancy there. There will likely be a need for predicting neural network stuff but it's nothing super compute heavy. So, I'm thinking: EC2 for web servers/NN predictions, S3 for storing the image files, and CloudFront as a CDN. Is that pretty standard, or am I missing something with AWS? It seems like they have like a million different products compared to someone like DigitalOcean so I'm kind of confused. Not reinventing the wheel here by any measure. Is there some kind of guide out there for AWS dummies?
|
# ? Apr 9, 2020 00:44 |
|
Protocol7 posted:Trying to price out some cloud options for a project I'm on. What is the “neural network stuff”? Because I’d use lambda for compute, s3 for image storage AND web hosting and CloudFront for your CDN.
|
# ? Apr 9, 2020 01:26 |
|
Agrikk posted:What is the “neural network stuff”? Tensorflow backed API for image segmentation. Image goes in, artifacts of predictions go out into a DB of some kind. In hindsight I spaced the DB part. Not really sure if I need an RDBMS for this project, but I know either way AWS still has an option, so I’m not too worried about that. Haven’t really dug into lambda though, I’ll definitely take a look!
|
# ? Apr 9, 2020 03:40 |
|
You can go to https://calculator.aws and price out most stuff to get an idea. Though I was trying to check fargate docker container prices earlier and didn’t know where they were in this new estimator page if they are even there.
|
# ? Apr 9, 2020 05:16 |
|
Thanks Ants posted:I'm trying to convince our developers to move to AWS Organisations and put SSO in, authing against Azure AD. They seem to think that SSO is less secure because if you're already logged into Azure AD then you don't have to put a password in again to use AWS (I've already done a conditional access demo), so actually having all the accounts separate is better. The other push back I'm getting is that because they are only a small team that their manager is happy to create and turn off accounts as required. Speaking as a developer who's repeatedly been suffering through petty annoyances because of process improvements, it's probably just something like the azure ad login form loading slightly more slowly than the aws signin.
|
# ? Apr 9, 2020 08:42 |
|
Roughly daily, I get 10 million rows (as parquet files) in S3, provided by another team, all at once. They need to be accessible by key in a service (such that looking up a single row by ID is <100ms). So our ETL is extremely bursty, but our client is a typical service with relatively low usage and low variance in usage. Currently my team does this by triggering an AWS EMR Spark job to read the parquet files and write each row to a DynamoDB table with autoscaling. The job also temporarily sets the table's write capacity higher when the job triggers. It works reliably, but I don't like how long it takes to create the EMR cluster and scale up the table. We don't know ahead of time when the parquet files will be ready, so we can't schedule the scaling ahead of time; and I don't want to leave the resources around all the time. Is there a different service that solves this problem better? Other things we've tried:
One last minor detail. It's not really one batch job providing 10 million rows. It's several batch jobs, each providing between 10 rows and 10 million, at different times, but not enough jobs to smooth out the burstiness of our ETL needs. *To rant, this kind of marketing- instead of technical- focused documentation is the main reason I roll my eyes every time I hear someone in AWS mention "customer obsession" dividertabs fucked around with this message at 18:29 on Apr 14, 2020 |
# ? Apr 14, 2020 18:18 |
|
For Dynamo scaling the best thing would probably be to have your upstream team post a "we're getting started now" event and use that to preemptively scale your table up. You would be paying extra to prewarm, but not 24/7 extra. You could also drop EMR and manage your own spark cluster that you hibernate. I don't know enough about spark cluster internals to determine if this is better than EMR or just different bad. It kind of seems like you want all of the benefits of unlimited scaling available to you instantly with no prewarm duration and no extra costs, which isn't super realistic in today's AWS IMO. Have you tried any of the redshift features in this area? Redshift Spectrum your stuff from S3 directly into a table or a view or something and then convert your clients from Dynamo calls to Redshift queries? I'm not sure if the redshift parquet feature is faster or slower than EMR though, I've never used it.
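The event-driven prewarm step can be sketched like this -- table name and numbers are made up, and this assumes provisioned (not on-demand) capacity:

```python
"""Hypothetical prewarm handler: when the upstream "we're starting" event
arrives, bump the table's write capacity to fit the batch in a target
window."""


def writes_needed(row_count, target_seconds, ceiling=40000):
    """Provisioned WCU to land row_count single-row writes within
    target_seconds, capped at the default per-table soft limit."""
    return min(ceiling, max(1, -(-row_count // target_seconds)))


def prewarm(table_name, row_count, target_seconds=600):
    import boto3  # lazy import so the helper above works without the SDK

    ddb = boto3.client("dynamodb")
    # update_table requires both units; real code should read the
    # table's current read capacity first instead of hardcoding it.
    ddb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": writes_needed(row_count, target_seconds),
        },
    )
```

After the batch lands, a symmetric scale-down call (or the existing autoscaling policy) brings the table back to baseline so you're not paying the prewarmed rate 24/7.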
|
# ? Apr 14, 2020 18:31 |
|
12 rats tied together posted:It kind of seems like you want all of the benefits of unlimited scaling available to you instantly with no prewarm duration and no extra costs which isn't super realistic in today's AWS IMO. Pretty much, yes. At least, I want to simplify managing the capacity, while still having reasonable scaling (to do ETL in under a minute on small inputs, and to scale up eventually to handle really large inputs). We will build a service to do this if we can't buy it. But I think this solution is already for sale, I just don't know about it. quote:Have you tried any of the redshift features in this area? Redshift Spectrum your stuff from S3 directly into a table or a view or something and then convert your clients from Dynamo calls to Redshift queries? I'm not sure if the redshift parquet feature is faster or slower than EMR though, I've never used it. dividertabs fucked around with this message at 18:55 on Apr 14, 2020 |
# ? Apr 14, 2020 18:50 |
|
Redshift is basically a specialized form of clustered postgres, so you have most of what you can do in postgres available to you and most of what you would expect from SQL like basic math, time and date stuff, you can find a better list here. It also supports user defined functions which you can write in python or sql. Redshift clusters also run 24/7 and have significant operational overhead compared to a dynamo table, you need to pick a dist and sort key, if you pick a bad one your queries are going to be significantly slower and might exceed that <100ms threshold. I would suspect that you could probably get your reads to be "about as fast, maybe faster/better" if you can do any sort of batching and caching instead of needing to pull single keys truly at random, redshift does do some pretty decent query response caching though and that is enabled by default so repeated calls to the same key would be reasonably quick.
|
# ? Apr 14, 2020 19:13 |
|
You could also try Athena for S3 and skip all the processing. Athena is simply a database-like wrapper for ordered files in S3.
|
# ? Apr 14, 2020 20:04 |
|
Thanks. I will need to ask co-workers more about Redshift, or go through the documentation. I just don't understand what it is or how we would integrate with it. 12 rats tied together posted:if you can do any sort of batching and caching instead of needing to pull single keys truly at random Agrikk posted:Athena
|
# ? Apr 14, 2020 20:29 |
|
Athena is managed presto and totally unsuitable for point queries. Some more information would be helpful, like read volume, how often you get data drops, how long you can wait for data to be queryable, how large your query store is in total, etc.
|
# ? Apr 14, 2020 23:07 |
|
|
you could try Glue, it's basically serverless spark
|
# ? Apr 15, 2020 04:54 |