|
Startyde posted:Is the aurora postgresql mode still built on the maria fork underneath the hood or are they just reusing the aurora name? I'd rather not talk in too much detail about this for obvious reasons, but Aurora Postgres is absolutely a Postgres fork and not a MariaDB one (https://www.youtube.com/watch?v=nd_BT_H-vsM). What Aurora Postgres and Aurora MySQL do share a lot of is the storage layer, which is why they share the same name.
|
# ? May 6, 2018 07:10 |
|
When are we getting aurora serverless? I know you can't answer, but I can't wait.
|
# ? May 6, 2018 14:07 |
|
Orkiec posted:I'd rather not talk in too much detail about this for obvious reasons, but Aurora Postgres is absolutely a Postgres fork and not a MariaDB one (https://www.youtube.com/watch?v=nd_BT_H-vsM). What Aurora Postgres and Aurora MySQL do share a lot of is the storage layer, which is why they share the same name. I was excited when I found Aurora MySQL had added smaller instance types, so I could give it a try. I had set up a dev environment for a project and got it working. Then the devs came back to me and decided they wanted Postgres. I thought I would just spin up a replacement, only to find out those instance types didn't exist there yet. Quite a bummer; I ended up just installing it on EC2 for them.
|
# ? May 6, 2018 14:38 |
|
Just stopping by to say AWS enterprise support blows, never purchase it for your company.
|
# ? May 8, 2018 12:42 |
|
Sylink posted:Just stopping by to say AWS enterprise support blows, never purchase it for your company. This but the exact opposite.
|
# ? May 8, 2018 12:44 |
|
Nah it blows. Love me some gcloud. Mostly.
|
# ? May 8, 2018 13:13 |
|
Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge.
|
# ? May 8, 2018 14:38 |
|
Sylink posted:Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge. You need both.
|
# ? May 8, 2018 14:43 |
|
Sylink posted:Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge. Lol. What’s your lead time for hiring people with AWS knowledge? Quickest we ever did was 90 days.
|
# ? May 8, 2018 16:40 |
|
Just complain until you get a new TAM and be clear on what you want out of them. Some people just want a dude to read a QBR to them and help triage tickets but if you want someone with some technical specialty just ask.
|
# ? May 8, 2018 19:11 |
|
We currently have a DokuWiki site up and running on EC2. It's essentially just a web server for various internal documentation. Some of the pages, however, contain iframe links to documents we have hosted on S3. The S3 policy just makes those documents public. I'd like to get this a bit more secure, but I'm trying to figure out the best solution. I thought about granting the instance a role with S3 privileges, but I think the iframe links for the documents would technically be requested by the end user over the web, so giving the EC2 instance an S3 role wouldn't solve that problem, would it? I've also thought about having the docs stored locally on the instance and having it do an s3 sync periodically, but that seems kind of overwrought. Is there a simpler solution I'm overlooking? edit: would this be something I could set up with a CORS configuration or presigned URLs?
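If presigned URLs turn out to be the route, the wiki could mint a short-lived GET URL per iframe at page-render time, so the bucket itself stays private. A minimal boto3 sketch (the bucket and key names here are made up):

```python
from datetime import timedelta


def doc_key(folder: str, filename: str) -> str:
    """Build the S3 object key for a hosted document (hypothetical layout)."""
    return f"{folder.strip('/')}/{filename}"


def presigned_doc_url(bucket: str, key: str, expires_minutes: int = 15) -> str:
    """Return a time-limited GET URL so the object can stay non-public."""
    import boto3  # imported lazily so doc_key stays dependency-free

    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=int(timedelta(minutes=expires_minutes).total_seconds()),
    )


if __name__ == "__main__":
    # The wiki template would call this while rendering each iframe src.
    print(presigned_doc_url("internal-docs-bucket", doc_key("manuals", "setup.pdf")))
```

The expiry just needs to outlive a page view; anyone sharing the raw URL only leaks it until it expires.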
SnatchRabbit fucked around with this message at 21:27 on May 9, 2018 |
# ? May 9, 2018 20:26 |
|
You can stream the file from S3 and serve it as if it were local. I don’t recall if there’s an easy way to do an IMS on that in the SDK but it wouldn’t be tough to do yourself. To act as a cache, I mean. Don’t forget to add an S3 endpoint to your VPC or you’ll be making unneeded trips out.
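A sketch of that stream-and-cache idea (bucket and key are hypothetical; the If-Modified-Since check is done by hand against the object's LastModified, since it has to be a secondary process):

```python
from datetime import datetime, timezone
from typing import Optional


def cache_is_fresh(local_mtime: datetime, remote_last_modified: datetime) -> bool:
    """If-Modified-Since style check: the local copy is usable when it is
    at least as new as the S3 object's LastModified timestamp."""
    return local_mtime >= remote_last_modified


def fetch_if_modified(bucket: str, key: str, local_mtime: datetime) -> Optional[bytes]:
    """Return the object body only when S3 reports a newer version (sketch)."""
    import boto3  # lazy import so cache_is_fresh stays dependency-free

    s3 = boto3.client("s3")
    head = s3.head_object(Bucket=bucket, Key=key)
    if cache_is_fresh(local_mtime, head["LastModified"]):
        return None  # serve the cached local copy instead
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```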
|
# ? May 9, 2018 21:42 |
|
Have a read of https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
|
# ? May 9, 2018 21:53 |
|
Sylink posted:Jist stopping by to say aws enterprise support blows, never pruchase it for your company. As an AWS TAM, I’d love to hear more about your situation. PM me if you want but it sounds like you have a crummy account team.
|
# ? May 9, 2018 21:55 |
|
Thanks Ants posted:Have a read of https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html So, I thought about using cloudfront but I don't know if management will go for the added cost given that this is just an internal wiki/documentation site. I was thinking about adding CORS to the S3 bucket, and allowing GET method from the instance's domain. Would that work while keeping the S3 bucket private, or invalidate the entire purpose of having the bucket private in the first place?
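One caveat worth flagging: a CORS rule only tells browsers which cross-origin pages may read the responses; it is not authentication, so on its own it won't keep a private bucket's objects reachable. Still, for reference, a sketch of what the rule would look like (origin and bucket names are hypothetical):

```python
def cors_rules_for(origin: str) -> dict:
    """CORS configuration allowing GETs only from the wiki's own origin."""
    return {
        "CORSRules": [
            {
                "AllowedMethods": ["GET"],
                "AllowedOrigins": [origin],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3600,  # let browsers cache the preflight
            }
        ]
    }


def apply_cors(bucket: str, origin: str) -> None:
    """Attach the rule to the bucket (sketch)."""
    import boto3  # lazy import so cors_rules_for stays dependency-free

    boto3.client("s3").put_bucket_cors(
        Bucket=bucket, CORSConfiguration=cors_rules_for(origin)
    )
```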
|
# ? May 9, 2018 22:01 |
|
Your CloudFront costs are going to be minuscule - I host the images in our email signatures out of public S3 buckets with CloudFront in front of them to be able to use a custom domain, and the cost is under £5 a month.
|
# ? May 9, 2018 22:03 |
|
Agrikk posted:As an AWS TAM, I’d love to hear more about your situation. PM me if you want but it sounds like you have a crummy account team. Ours isn’t super helpful either tbh. But we also don’t reach out a lot.
|
# ? May 9, 2018 23:59 |
|
Arzakon posted:Just complain until you get a new TAM and be clear on what you want out of them. Some people just want a dude to read a QBR to them and help triage tickets but if you want someone with some technical specialty just ask. This is the right answer. We have some duds in the public services sector in Australia, otherwise they're all amazing.
|
# ? May 10, 2018 04:59 |
|
So...I left a c5.18xlarge instance running over the weekend. I remember trying to stop it, but for whatever reason it didn't stop - could be I had the wrong instance selected. What are the chances of getting any of that refunded? I had some problems with my corporate account and used my personal account for what was supposed to be a quick test, and my group is seriously tight on budget so to be honest I'd rather just eat the cost than expense it. I'm working with some SAs but I don't want to bring it up with them yet because I have other coworkers involved and don't want to get pressured into expensing it and then getting yelled at. And yes I did set up a budget alert - I didn't before because I typically only run instances long enough to test a particular thing (1 hour max).
|
# ? May 10, 2018 19:10 |
|
A single instance in a personal account (and I'm assuming not a big monthly spend)? Probably not much of a chance for a refund. I've been at places where AWS waived the charges because someone leaked their keys and got a fuckload of coin miners spun up, but not for a single DBA dipshitting a big RDS instance in the wrong region and forgetting about it. I think AWS makes a rough distinction between maliciousness and mistake, weighed against how much you pay them a month.
|
# ? May 10, 2018 20:03 |
|
It is 100% worth a shot opening a CS ticket to see if they can do a one-time "I was a dumbass" refund, or even half. If you don't want your work to know about it I'd avoid mentioning it to your SA. Definitely mention the steps you have taken to prevent it from happening again (turning on billing alerts, setting up a CloudWatch Events rule to auto-stop instances each night unless you add a specific tag, etc.).
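That nightly auto-stop idea can be sketched as a small scheduled Lambda body. The opt-out tag name (`KeepRunning`) and region are made up:

```python
from typing import List


def stoppable(instance: dict, keep_tag: str = "KeepRunning") -> bool:
    """True when a described instance is running and lacks the opt-out tag."""
    tags = {t["Key"] for t in instance.get("Tags", [])}
    return instance["State"]["Name"] == "running" and keep_tag not in tags


def stop_untagged_instances(region: str = "us-east-1") -> List[str]:
    """Scheduled-Lambda-style body: stop every running, untagged instance."""
    import boto3  # lazy import so stoppable stays dependency-free

    ec2 = boto3.client("ec2", region_name=region)
    ids = [
        i["InstanceId"]
        for page in ec2.get_paginator("describe_instances").paginate()
        for r in page["Reservations"]
        for i in r["Instances"]
        if stoppable(i)
    ]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

Wired to a nightly CloudWatch Events (now EventBridge) schedule, this turns a forgotten c5.18xlarge into at most one day of charges.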
|
# ? May 10, 2018 21:10 |
|
Thanks. The work I'm doing is actually for the benefit of the SAs I'm working with, just that the costs generally aren't enough to matter much when I'm not an idiot. And I'm not supposed to use my personal account but I got tired of dealing with IT issues. I'll probably start asking for credit especially when the bare metal instances become public. Can't wait to test those.
|
# ? May 10, 2018 22:16 |
|
Arzakon posted:It is 100% worth a shot opening up a CS ticket to see if they can do a one time I was a dumbass refund or even half. If you don't want your work to know about it I'd avoid mentioning it to your SA. This. People doing dumb things are fine. Especially if they have learned from their mistakes and won’t make them the same way again. What we don’t want is a bill from a mistake souring the relationship a customer has with us.
|
# ? May 11, 2018 07:50 |
|
Does anyone know if it is possible to get the output from Systems Manager's Run Command all in a single file? I have a bunch of instances running a script but the outputs are all separated in S3 according to the commands and the instance IDs. I'd prefer to have all the outputs appended to a single file. Anyone know how?
|
# ? May 15, 2018 22:55 |
|
SnatchRabbit posted:Does anyone know if it is possible to get the output from Systems Manager's Run Command all in a single file? I have a bunch of instances running a script but the outputs are all separated in S3 according to the commands and the instance IDs. I'd prefer to have all the outputs appended to a single file. Anyone know how? I don’t think so. The ways to get it all into a single file are legion but they are secondary processes. What are you trying to accomplish?
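One such secondary process: a small script that walks the S3 prefix Run Command wrote to and stitches the stdout files together. A sketch, assuming the usual `<prefix>/<command-id>/<instance-id>/.../stdout` key layout (bucket and prefix names are invented):

```python
def merged_report(outputs: dict) -> str:
    """Join per-instance outputs into one annotated blob, sorted by instance ID."""
    return "\n".join(f"=== {iid} ===\n{text}" for iid, text in sorted(outputs.items()))


def collect_run_command_output(bucket: str, prefix: str) -> str:
    """Walk the Run Command output prefix and merge the stdout files (sketch)."""
    import boto3  # lazy import so merged_report stays dependency-free

    s3 = boto3.client("s3")
    outputs = {}
    pages = s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix)
    for page in pages:
        for obj in page.get("Contents", []):
            if not obj["Key"].endswith("stdout"):
                continue
            # Relative to the prefix, the segment after the command ID is the
            # instance ID (layout assumption noted above).
            rel = obj["Key"][len(prefix):].strip("/").split("/")
            instance_id = rel[1]
            outputs[instance_id] = (
                s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read().decode()
            )
    return merged_report(outputs)
```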
|
# ? May 15, 2018 22:57 |
|
I filed a ticket and my support guy gave me a credit just over the charges, so thanks! Not quite sure if that means I have to pay the bill and then get a credit, but either way is cool. I deal with all the cloud providers and tbh AWS was my favorite anyway, even if you use dumb hostnames.
|
# ? May 17, 2018 00:06 |
|
opie posted:dumb hostnames. Herd not pets. Why reference them as anything at all?
|
# ? May 17, 2018 21:35 |
|
Agrikk posted:Herd not pets. Thank you for your cultural sensitivity but vegans are still mad at you.
|
# ? May 17, 2018 22:32 |
|
Agrikk posted:Herd not pets. Why reference them as anything at all?
|
# ? May 18, 2018 01:07 |
|
Is there a best-practices guide anywhere for using SAML with AWS Cognito as well as the AWS control panel? Presumably I just create one app for Cognito and one for the other stuff, or is there a more elegant way to deal with this?
|
# ? May 20, 2018 20:32 |
|
Thanks Ants posted:Is there a best-practices guide anywhere for using SAML with AWS Cognito as well as the AWS control panel? Presumably I just create one app for Cognito and one for the other stuff, or is there a more elegant way to deal with this? I was just starting to set this up on Friday. Added our Azure AD as an identity provider. It basically uses assumed roles to give access to the dashboard or programmatic access, so you can map groups to the role to give access to whatever. Currently logging in is initiated from our Microsoft stuff; I haven't gotten around to initiating it from AWS. It doesn't actually create the users either, as it's assuming a role. The login gets tracked in CloudTrail though. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html I started going through this on my end, and the admin of the Azure side had the typical Microsoft instructions that tell you what to click where and what to type blah blah blah.
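A sketch of the plumbing underneath that flow: the IdP sends a `Role` attribute pairing a role ARN with the SAML provider ARN, and STS exchanges the assertion for temporary credentials. The account ID, role, and provider names below are invented:

```python
def saml_role_claim(account_id: str, role_name: str, provider_name: str) -> str:
    """Build the 'Role' attribute value the IdP must send: the role ARN and
    the SAML provider ARN joined by a comma."""
    return (
        f"arn:aws:iam::{account_id}:role/{role_name},"
        f"arn:aws:iam::{account_id}:saml-provider/{provider_name}"
    )


def credentials_from_assertion(role_claim: str, assertion_b64: str) -> dict:
    """Exchange a base64-encoded SAML assertion for temporary credentials (sketch)."""
    import boto3  # lazy import so saml_role_claim stays dependency-free

    role_arn, provider_arn = role_claim.split(",")
    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn=role_arn, PrincipalArn=provider_arn, SAMLAssertion=assertion_b64
    )
    return resp["Credentials"]
```

This is also why no IAM users get created and why CloudTrail shows role-assumption events rather than logins.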
|
# ? May 20, 2018 21:52 |
|
I was excited to see that i3.metal was available in my ec2 list. Works great, although I was kind of surprised to see that it was Broadwell and not Skylake. I don’t know why I was expecting Skylake, since I imagine they’re in high demand and short supply right now, especially for dedicated access. Still awesome.
|
# ? May 22, 2018 17:01 |
|
Arzakon posted:Thank you for your cultural sensitivity but vegans are still mad at you. I apologize for my carno-centric posting. I humbly submit “weeds not herbs” as a substitute.
|
# ? May 22, 2018 21:31 |
|
Crops, not bonsai.
|
# ? May 22, 2018 22:49 |
|
I'm working on a thing running in AWS that runs a Redshift UNLOAD query to dump a bunch of stuff to some S3 location. The app runs under an instance profile granting it the required S3 permissions, plus a bunch of other permissions; the Redshift cluster doesn't have any relevant permissions. My app can pass AWS credentials as part of the UNLOAD query, so it can just pass its own temporary credentials and everything will work, because Redshift can now do everything the app can. I'm hesitant to do that because inserting unnecessarily powerful credentials into the query text seems like a super bad idea in general. Is there a way to take my app's temporary credentials and turn them into a new set of temporary credentials with further restrictions on allowed actions (and maybe duration), so that my app can delegate just a subset of its own S3 permissions? The STS AssumeRole action allows specifying a policy to further restrict what the resulting credentials can do, but that requires me to configure a role and teach my app to assume that particular role in addition to its own role, and I'd like to avoid that somehow. I'd also like to avoid attaching a role to the whole Redshift cluster, since then I'd have to think about how to make sure only my app can take advantage of it and not other users of the cluster. Since AssumeRole and other STS actions exist, this seems tantalizingly possible, but it's apparently not actually supported by any STS call for some reason. Am I misunderstanding the permissions model? Should I just get over it and configure a bunch of super narrow extra roles? Or should I just let the Redshift cluster do whatever it wants on my AWS account and not worry about the query texts with the credentials showing up in the wrong place at some point?
|
# ? May 25, 2018 17:34 |
|
You caught everything in the docs that I'm aware of (esp. sts:assumerole with restricted permissions). I would probably just attach the role to the cluster, run the unload, and then remove the role if you absolutely cannot leave it on the cluster for some reason. Clusters can have multiple roles attached, so it's not like you're potentially clobbering someone else's credentials. I also don't think you actually have to put the credentials in the query text - just the role ARN (which isn't powerful to have or know). I think the only permissions leak is that you would be allowing the cluster to indefinitely write to a location in S3 that you care about? You can probably work out something sufficient with S3 permissions to prevent other users from being able to do the same. This is pretty awful, but you could copy your data from /ingested into /approved: only allow your credentials to write to /approved, and only give the cluster role permission to write to /ingested. Generally, though, things are a lot easier when you can trust most other entities in your account.
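For the record, UNLOAD accepts an IAM_ROLE clause, which is how the role-ARN approach keeps access keys out of the query text entirely. A sketch of building such a statement (the query, S3 path, and ARN below are made up):

```python
def unload_sql(query: str, s3_path: str, role_arn: str) -> str:
    """Build a Redshift UNLOAD statement that authorizes via an attached
    cluster role ARN, so no access keys ever appear in the query text."""
    # UNLOAD takes the inner query as a string literal, so single quotes
    # inside it must be doubled.
    escaped = query.replace("'", "''")
    return (
        f"UNLOAD ('{escaped}')\n"
        f"TO '{s3_path}'\n"
        f"IAM_ROLE '{role_arn}'\n"
        f"GZIP ALLOWOVERWRITE"
    )


if __name__ == "__main__":
    print(unload_sql(
        "select * from events",
        "s3://acme-exports/run1/",
        "arn:aws:iam::123456789012:role/RedshiftUnload",
    ))
```

Even if the statement lands in a query log somewhere, the only thing exposed is the role ARN, which is useless without the ability to attach it to a cluster.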
|
# ? May 26, 2018 01:00 |
|
I'm not sure why you don't want to use role-based access control, but if you do use key-based access control, the credentials are not recorded in the query log.
|
# ? May 26, 2018 19:07 |
|
Hi everyone! Total AWS newbie here. I'm a consultant working for a niche NoSQL database company and we're in the middle of pivoting our customers to AWS rather than running their own hosting, or at least providing them the option. I'm currently responsible for bird-dogging the AWS training program to find out which courses will be most relevant for our consultants to get them up to speed, as well as what order they should be taken in. We have our own standalone application which has its own built-in web server, REST API, load balancing, etc. Our primary concern is mostly being able to spin up servers, allocate resources to them, deploy our application, and have the servers communicate with each other. I'm looking at the Practitioner Essentials today, but was wondering what you guys feel will be the best courses for us to take.
|
# ? May 29, 2018 16:03 |
|
Start here: https://acloud.guru/learn/aws-certified-cloud-practitioner After that they should still probably do one of the 'associate' level courses that come next, but for $100 everyone can just start with 'practitioner' and try to plow through it before the end of the quarter.
|
# ? May 29, 2018 16:11 |
|
I've done Cloud Practitioner and it's pretty much the Sales guy's introduction to AWS.
|
# ? May 29, 2018 16:51 |