|
StabbinHobo posted:thank you. I guess I don't even mean as much from a cost perspective as a "sprawling long tail mess" perspective. aws config exists, but it's mostly centered around resources that you actively pay for. there are also services like cloudcheckr that will do a more detailed inventory.
|
# ¿ Apr 3, 2017 03:25 |
|
Virigoth posted:You forgot Lambda! "You can use services such as AWS Lambda, AWS OpsWorks, and Amazon EC2 Container Service (Amazon ECS) to orchestrate and schedule EC2 instances as long as the actual PHI is processed on EC2 and stored in S3 (or other eligible services). You must still ensure that EC2 instances processing, storing, or transmitting PHI are launched in dedicated tenancy and that PHI is encrypted at rest and in transit. Any application metadata stored in Lambda functions, Chef scripts, or task metadata must not contain PHI." Tl;dr don't put PHI in your task definition and you're a-okay.
|
# ¿ Apr 13, 2017 15:03 |
|
your capacity is measured per-second, but you can burst up to 5 minutes worth of capacity if you've banked it. the analogy is a bucket, where the bucket is as a big as 5 minutes worth of capacity, but only is refilled at your per-second capacity. every request pulls some stuff out of the bucket, and if the bucket is empty your request is throttled.
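the bucket analogy maps directly onto a classic token bucket. here's a minimal sketch in Python (the numbers and class are made up for illustration; DynamoDB's real internals aren't public):

```python
# Token-bucket sketch of DynamoDB-style burst capacity.
# Hypothetical numbers: rate is your provisioned capacity per second,
# and the bucket holds at most 300 seconds (5 minutes) worth of it.

class TokenBucket:
    def __init__(self, rate_per_sec, burst_seconds=300):
        self.rate = rate_per_sec
        self.capacity = rate_per_sec * burst_seconds  # bucket size
        self.tokens = self.capacity                   # start full
        self.last = 0.0

    def request(self, cost, now):
        # refill at the provisioned per-second rate, capped at bucket size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True   # request allowed
        return False      # bucket empty: throttled

bucket = TokenBucket(rate_per_sec=10)   # 10 units/sec, 3000 banked
print(bucket.request(3000, now=0.0))    # drains the whole 5-minute bank: True
print(bucket.request(1, now=0.0))       # nothing left, throttled: False
print(bucket.request(1, now=1.0))       # one second later, refilled enough: True
```

the key property is that a long quiet period lets you absorb a burst up to 5 minutes of capacity, but a sustained overload throttles you back down to the per-second rate.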
|
# ¿ Jun 19, 2017 16:19 |
|
Vanadium posted:If banking unused capacity happens on a 1:1 basis, I shouldn't have any issues. Averaged over 30 seconds I'm consistently under the provisioned capacity. Getting throttled at that point makes sense to me if the banked capacity is only provided on a best-effort basis, if the underlying hardware has capacity to spare or whatever.

so consumed capacity (i don't believe you can get remaining capacity from any API call) is externalizing your throttles. you don't go negative. a few options:

1. reconfigure your backoff to spread across multiple seconds (if you're throttled at time x, you're probably going to be throttled again at x+5ms)
2. use DAX as a write-through cache
3. contact support and ask for a dynamodb heatmap. it will show how your reads/writes are being distributed across partitions.

speaking of, do you have an idea what the minimum number of partitions you have is? you can look at the data at http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Partitions to determine how many you could have, but it's not guaranteed if you have a lopsided data distribution. also, iirc local secondary indices are interleaved with your data and will increase partition size, whereas global secondary indices are effectively yet another ddb table that is kept in sync.

EDIT: and be mindful of call volume by individuals. just because you have a good partition scheme doesn't mean you don't have one caller banging on that partition, or that your data is split across more partitions than you expected because you have one partition with a lopsided distribution of partition+sort data

FamDav fucked around with this message at 04:44 on Jun 21, 2017 |
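option 1 above, spreading retries out instead of hammering the same per-second refill, could look roughly like this (a sketch; the base/cap values are arbitrary choices, not anything DynamoDB-specific):

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=8.0):
    """Exponential backoff with full jitter, in seconds.

    Because banked capacity refills per second, retrying within the same
    few milliseconds (x+5ms) will almost certainly be throttled again;
    these delays push retries out across multiple seconds instead.
    """
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))  # 0.5, 1, 2, 4, 8 ...
        delays.append(random.uniform(0, ceiling))  # full jitter
    return delays
```

in real code you'd `time.sleep(delay)` before re-issuing the throttled call; the jitter also keeps a fleet of clients from retrying in lockstep against the same partition.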
# ¿ Jun 21, 2017 04:41 |
|
Schneider Heim posted:Does AWS have a service that lets you perform geospatial queries? I'm using Cloudsearch right now, tied to DynamoDB, but I don't really relish the setup as I will have to perform the geospatial query in Cloudsearch, then use the result set to query in DynamoDB to get the full data (and to do updates/deletes). The other non-option is DynamoDB's outdated Java-only geospatial add-on that's unsuitable for any kind of non-trivial work.

what makes the geospatial library unsuitable beyond "holy poo poo this thing is like 6 or so SDK revisions out of date"?
|
# ¿ Jul 3, 2017 04:03 |
|
a hot gujju bhabhi posted:I'm fairly new to AWS so I apologise for the super basic question, but what service(s) would I use if I wanted to make a website that could compile less into CSS for a user to download? I figure that I should do this in a Node.js Lambda and then send the result to S3 and publish to an SNS to say that the download is ready, which my webpage can then react to. Am I on the right track?

so first off with lambda i heartily suggest you take a look at https://serverless.com/ , as it will simplify a lot of your development and make it easy enough to write lambda-based services on aws.

not really dealing with less and css all that much, how long does it take for the transformation to occur? depending on how reactive this is (single digit seconds?) you could perform this operation purely as request/reply, such that you either reply when the transformation has finished/failed or you just timeout.

if not, you want to introduce some durability guarantees around the async operation. what does it mean when you return a 200 OK? what if the lambda holding all the state dies before completing? when you return a 200 response, you should really be committing to the customer "this is going to happen, or I'm going to be able to confidently tell you it didn't at some point in the future". to rectify this, I would suggest having your request lambda persist the input less to s3, kick off a step function that will process and persist the css to a different s3 object, and then return that the transformation is "in progress" along with some identifier for the operation. then your page can call another lambda to poll the status of that particular operation.

I was going to say that serverless doesn't have step functions integration, but it turns out there are plugins for that: https://serverless.com/blog/how-to-manage-your-aws-step-functions-with-serverless/

FamDav fucked around with this message at 03:29 on Oct 13, 2017 |
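the "persist, kick off, return an operation id" pattern could be sketched like this (all names, ARNs, and the bucket are hypothetical; the AWS clients are passed in rather than created with boto3 at import so the logic stays testable):

```python
import json
import uuid

# Hypothetical resources for the sketch.
INPUT_BUCKET = "my-less-input-bucket"
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:LessToCss"

def handle_transform_request(less_source, s3, sfn):
    """Request lambda: accept less source, durably persist it, start the
    async workflow, and hand back an id the page can poll on."""
    op_id = str(uuid.uuid4())
    # 1. persist the input *before* acknowledging anything, so a dying
    #    lambda can't lose work we already promised to do
    s3.put_object(Bucket=INPUT_BUCKET, Key=f"input/{op_id}.less",
                  Body=less_source.encode())
    # 2. kick off the step function that compiles and persists the css
    sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN, name=op_id,
                        input=json.dumps({"operation_id": op_id}))
    # 3. reply "in progress" with the operation id for the status poller
    return {"statusCode": 202,
            "body": json.dumps({"operation_id": op_id,
                                "status": "in_progress"})}
```

a second, trivial lambda would then look up the status of `op_id` (e.g. by checking whether the output object exists) when the page polls.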
# ¿ Oct 13, 2017 03:18 |
|
fluppet posted:Just found out I need to deploy a couple of environments on alibaba cloud. Given that we're only using rds, ec2 and s3 and they look to have equivalents on alibaba are there any major gotchas that I'm likely to run into? just out of curiosity but why doesn't the mainland china region for aws work here?
|
# ¿ Nov 1, 2017 04:07 |
|
jiffypop45 posted:You have to be a Chinese citizen, a business in China or a multinational company with interests in China to be able to get a Chinese AWS account.

I’m aware, which is exactly what their use case sounded like.

fluppet posted:It's not China we need to be in otherwise we would still be on aws

Ah. Looking at the region list on their site it looks like only Hong Kong and Kuala Lumpur aren’t covered by aws. This is why we just need a region in every country (and whatever the heck you want to define Hong Kong as).
|
# ¿ Nov 1, 2017 19:47 |
|
Volguus posted:Why ECS doesn't work? Because, as I said before, it takes 5 minutes for the thing to start up and launch the application that is in the container. I manually executed the lambda that started the task. So I believe you said you were using fargate for launching your task? i think you're hitting some issues around network interface connectivity when using public ips. if you set up a nat gateway in your vpc and don't enable public ip support for your fargate task, it should start up much faster and much more consistently (dependent on image size and application warmup).
|
# ¿ Apr 21, 2018 23:08 |
|
Arzakon posted:I would have probably said the same thing last week but I logged into the BJS console for the first time yesterday and a sense of calm overcame me as I remembered what AWS was like 5 years ago. Retro AWS is good! I like how the Chinese regions are often more up-to-date than Govcloud, especially if the dev team for a service is outside the US. ITAR is a real trade off between security and velocity/consistency, and the only reason it kind of works is that most cloud providers are based in the US. I think it’d be effectively impossible for pretty much any other country to require the same residency requirements as the US does for a govcloud without also jumpstarting a local competitor. Also imagine I wrote a lot more about how trashy it is that the government is rebuilding itself on top of the work of some incredibly smart immigrants who might get citizenship after well over a decade of constant uncertainty about their position in this country.
|
# ¿ May 3, 2018 17:51 |
|
Vanadium posted:This past page is like the most positive I've ever seen someone be about CloudFormation. I'm used to people complaining about CloudFormation getting your stack into a weird state that you can't get out of without petitioning AWS support and about not knowing what exactly applying a change is going to do when there's multiple stacks involved.

that was a very dumb constraint, but they've since gotten to the point of "can't fix stuff, what resources do you want me to just ignore from now on?" and then you can get rid of the stack and do whatever manual cleanup you need. also the cross-stack/cross-region stuff is choice, and about the only thing I think it's ridiculous to be missing is the ability to map over a list of things (like available AZs into subnets) so you don't have to wrestle with a bunch of if-conditionals for 2/3/4/5 az regions.

more philosophically, i think cloudformation is really great for infrastructure but not so great for application definitions. it always feels like as an app developer you really want all of the implementation details of load-balancers, dns, etc. abstracted away from you and just want to focus on what your application is and what it needs.
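the missing "map over a list" is easy enough to emulate by generating the template instead; a rough Python sketch of what mapping AZs into subnet resources would look like (the resource names and the /24-per-AZ CIDR scheme are invented for illustration):

```python
# Sketch: generate one AWS::EC2::Subnet resource per AZ -- the mapping
# CloudFormation can't express natively without per-region if-conditionals.

def subnets_for_azs(azs, vpc_ref="MyVpc"):
    resources = {}
    for i, az in enumerate(azs):
        resources[f"Subnet{i}"] = {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": vpc_ref},
                "AvailabilityZone": az,
                "CidrBlock": f"10.0.{i}.0/24",  # one /24 per AZ, made up
            },
        }
    return resources

# works for a 2-az or a 5-az region without any conditionals
print(sorted(subnets_for_azs(["us-east-1a", "us-east-1b", "us-east-1c"])))
```

the generated dict would then be merged into the `Resources` section of the template before deploying.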
|
# ¿ May 31, 2018 14:55 |
|
deedee megadoodoo posted:I am currently working on limiting access to resources (namely kms, ssm, and iam) by team. We're basically limiting users to only interact with resources where the arn matches a specific path. One of the problems I'm running into is that developers need to be able to create policies but there's nothing limiting what they can put in their policy. So it ends up being a security issue due to privilege escalation. Is there any way to mitigate this by limiting what a user can put into a policy or are we going to need to insert ourselves into the policy creation process? You want permissions boundaries. Specifically, you can require create-user/role to include a permissions boundary.
|
# ¿ Aug 18, 2019 04:42 |
|
deedee megadoodoo posted:Just an FYI: neither of these options does what I'm looking for.

And you can additionally refer to permission boundaries via condition context keys in your own policies. If you read through that, you can write a policy that ensures:

1. Every new user or role has a specific permission boundary
2. Every policy attached to a user or role must only be attached to a user or role that has a particular permission boundary set

So your scenario of a user “creating a role, attaching it to an ec2 instance/lambda/ecs task” is no longer an issue, as you can ensure that any role, user, or policy they created has to have a permission boundary on it, and so can’t exceed the privileges you wanted to give out in the first place.
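point 1 above could be enforced with a policy along these lines (sketched as a Python dict; the account id and boundary policy name are hypothetical, and `iam:PermissionsBoundary` is the condition context key that carries the boundary ARN on the create call):

```python
import json

# Hypothetical boundary policy every team user/role must carry.
BOUNDARY_ARN = "arn:aws:iam::123456789012:policy/TeamBoundary"

def boundary_enforcement_policy(boundary_arn):
    """Deny iam:CreateUser/iam:CreateRole unless the request attaches the
    required permissions boundary (a missing boundary also fails the
    StringNotEquals check, so creates without one are denied too)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyCreateWithoutBoundary",
            "Effect": "Deny",
            "Action": ["iam:CreateUser", "iam:CreateRole"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"iam:PermissionsBoundary": boundary_arn}
            },
        }],
    }

print(json.dumps(boundary_enforcement_policy(BOUNDARY_ARN), indent=2))
```

attach this (plus a matching guard on attach/put-policy actions) to the developers' own role, and the boundary does the rest.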
|
# ¿ Aug 19, 2019 19:16 |
|
The important bit is you can define the permission boundary condition in the permission boundary policy itself. That’s what forces it to be infectious. They also don’t have to be explicit denies; when permissions are evaluated, both the entity policies and the permission boundary policies have to evaluate to an explicit allow. The first doc I linked also has a good walkthrough with examples of doing all of this.
|
# ¿ Aug 19, 2019 20:27 |
|
We’re working on being better about this, at least in the groups I work with like containers and app mesh. IMO while re:invent and other conferences are a great way to broadcast major features/services to customers, the best thing we can do is get our plans and designs in front of as many potential users as possible, as early as possible.
|
# ¿ Nov 18, 2019 02:33 |
|
mobby_6kl posted:I then started implementing V4 anyway because it seemed to be the only option to sign API gateway requests. First of all, what a pain in the rear end. I really don't see what's the point of the turducken hashing and signing, if the attacker knows the secret key they can sign all the pieces just like they can sign the whole request.

The point is when an attacker doesn’t know the secret but has access to other information. Off the top of my head:

* canonical request signing ensures that the request can’t be meaningfully modified
* as part of that, the credential scope mitigates against replay attacks at different times and to different regions/services
* the signing key derivation process protects the secret key, and therefore the account, if a signing key is leaked
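that last bullet is easy to see in code; the SigV4 derivation chain looks like this, with each HMAC layer narrowing what a leaked intermediate key could ever sign (the key material below is AWS's published example value, not a real secret):

```python
import hmac
import hashlib

def derive_signing_key(secret_key, date, region, service):
    """SigV4 signing-key derivation: each HMAC step scopes the key to a
    date, region, and service, so a leaked signing key can't sign requests
    outside that scope and never exposes the long-term secret."""
    def _hmac(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date)  # scoped to the day
    k_region = _hmac(k_date, region)                      # ...then the region
    k_service = _hmac(k_region, service)                  # ...then the service
    return _hmac(k_service, "aws4_request")               # final signing key

key = derive_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                         "20150830", "us-east-1", "iam")
```

notice that the long-term secret only ever feeds the first HMAC; everything downstream is a one-way derivative, and a key for `us-east-1`/`iam` on one date is useless anywhere else.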
|
# ¿ May 2, 2020 21:30 |
|
Cancelbot posted:Exactly this, be prepared to go deeper on your answers; impact and influence. I’m a TAM who does loops so I have to be careful about what I share. Map it all to the LPs. also, don't bullshit and say what you would do in an ideal situation but say what you did and how it fit that particular situation. nothing is ever perfect.
|
# ¿ Oct 11, 2020 21:44 |
|
ledge posted:Even obvious errors like this one from s3 "Insufficient permissions You need s3:CopyObject permission to perform this action" but s3:CopyObject does not exist as a permission. Don't know if this dialog has been fixed yet.

i suspect this is an issue with the console more than anything else. iam policies are fundamentally just json text, so there is nothing stopping you from adding "s3:CopyObject" to your policy.
|
# ¿ Sep 17, 2021 03:59 |
|
i guess, as always with s3, the reality is way more confusing than it should be
|
# ¿ Sep 17, 2021 04:52 |
|
|
you’re not going to get any details on another customer
|
# ¿ Sep 29, 2021 04:28 |