Orkiec
Dec 28, 2008

My gut, huh?

Startyde posted:

Is the aurora postgresql mode still built on the maria fork underneath the hood or are they just reusing the aurora name?



Ah, yea, makes sense I guess.

I'd rather not talk in too much detail about this for obvious reasons :), but Aurora Postgres is absolutely a Postgres fork and not a MariaDB one (https://www.youtube.com/watch?v=nd_BT_H-vsM). What Aurora Postgres and Aurora MySQL do share a lot of is the storage layer, which is why they are sharing the same name.

Rapner
May 7, 2013


When are we getting aurora serverless? :D

I know you can't answer, but I can't wait.

JHVH-1
Jun 28, 2002

Orkiec posted:

I'd rather not talk in too much detail about this for obvious reasons :), but Aurora Postgres is absolutely a Postgres fork and not a MariaDB one (https://www.youtube.com/watch?v=nd_BT_H-vsM). What Aurora Postgres and Aurora MySQL do share a lot of is the storage layer, which is why they are sharing the same name.

I was excited when I found Aurora MySQL added smaller instance types, so I could give it a try. I had set up a dev environment for a project and got it working. Then the devs came back to me and decided they wanted Postgres. So I thought I would just spin up a replacement, but found out the instance types didn't exist there yet. It was quite a bummer, and I ended up just installing it on EC2 for them.

Sylink
Apr 17, 2004

Just stopping by to say AWS enterprise support blows, never purchase it for your company.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Sylink posted:

Just stopping by to say AWS enterprise support blows, never purchase it for your company.

This but the exact opposite.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Nah it blows.

Love me some gcloud. Mostly.

Sylink
Apr 17, 2004

Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Sylink posted:

Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge.

You need both.

Hughlander
May 11, 2005

Sylink posted:

Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge.

Lol. What's your lead time for hiring people with aws knowledge? Quickest we ever did was 90 days.

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.
Just complain until you get a new TAM and be clear on what you want out of them. Some people just want a dude to read a QBR to them and help triage tickets but if you want someone with some technical specialty just ask.

SnatchRabbit
Feb 23, 2006

by sebmojo
We currently have a docuwiki site up and running on EC2. It's essentially just a web server for various internal documentation. Some of the pages, however, contain iframe links to documents we have hosted on S3. The S3 policy just has those documents made public. I'd like to make this a bit more secure, but I'm trying to figure out the best solution. I thought about granting the instance a role with S3 privileges, but I think the iframe links for the documents would technically be requested by the end user over the web, so giving the EC2 instance an S3 role wouldn't solve that problem, would it? I've also thought about having the docs stored locally on the instance and having it do an s3 sync periodically, but that seems kind of overwrought. Is there a simpler solution I'm overlooking?

edit: would this be something I could set up with a CORS configuration or presigned URLs?

SnatchRabbit fucked around with this message at 21:27 on May 9, 2018
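For what it's worth, the presigned-URL idea from the edit could look something like this. It's only a rough sketch assuming boto3, with the bucket and key names made up: the wiki generates a short-lived URL per document and drops that into the iframe, so the bucket itself stays private.

```python
# Hypothetical sketch: generate a short-lived S3 URL for each iframed document.
# Bucket/key names are placeholders; credentials come from the instance role.
import boto3

s3 = boto3.client("s3")

def doc_url(key: str) -> str:
    """Return a presigned GET URL that expires after 15 minutes."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "internal-docs-example", "Key": key},
        ExpiresIn=900,
    )
```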

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
You can stream the file from S3 and serve it as if it were local. I don't recall if there's an easy way to do an IMS (If-Modified-Since) check on that in the SDK, but it wouldn't be tough to do yourself. To act as a cache, I mean.
Don’t forget to add an S3 endpoint to your VPC or you’ll be making unneeded trips out.
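A rough sketch of that streaming approach, assuming boto3 and Flask (the bucket name is made up). The instance role's credentials are used server-side, so the browser never talks to S3 and the bucket can stay private:

```python
# Hypothetical sketch: proxy/stream private S3 objects through the wiki host.
import boto3
from flask import Flask, Response

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "internal-docs-example"  # placeholder bucket name

@app.route("/docs/<path:key>")
def serve_doc(key):
    # Fetched with the instance role's credentials; streamed back to the browser.
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return Response(
        obj["Body"].iter_chunks(),
        content_type=obj.get("ContentType", "application/octet-stream"),
    )
```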

Thanks Ants
May 21, 2004

#essereFerrari


Have a read of https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Sylink posted:

Just stopping by to say AWS enterprise support blows, never purchase it for your company.

As an AWS TAM, I’d love to hear more about your situation. PM me if you want but it sounds like you have a crummy account team.

SnatchRabbit
Feb 23, 2006

by sebmojo

So, I thought about using CloudFront, but I don't know if management will go for the added cost given that this is just an internal wiki/documentation site. I was thinking about adding CORS to the S3 bucket and allowing the GET method from the instance's domain. Would that work while keeping the S3 bucket private, or would it invalidate the entire purpose of having the bucket private in the first place?
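For reference, a CORS rule on the bucket would look roughly like the sketch below (assuming boto3, with the bucket name and wiki origin made up). Keep in mind that CORS only tells browsers which cross-origin reads to allow; it isn't an access control by itself, so the objects still need to be readable through something like presigned URLs or CloudFront.

```python
# Hypothetical sketch: restrict cross-origin GETs to the wiki's origin.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="internal-docs-example",  # placeholder bucket name
    CORSConfiguration={
        "CORSRules": [{
            "AllowedMethods": ["GET"],
            "AllowedOrigins": ["https://wiki.example.internal"],  # placeholder wiki host
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```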

Thanks Ants
May 21, 2004

#essereFerrari


Your CloudFront costs are going to be minuscule - I host the images in our email signatures out of public S3 buckets with CloudFront in front of them to be able to use a custom domain, and the cost is under £5 a month.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Agrikk posted:

As an AWS TAM, I’d love to hear more about your situation. PM me if you want but it sounds like you have a crummy account team.

Ours isn’t super helpful either tbh. But we also don’t reach out a lot.

Rapner
May 7, 2013


Arzakon posted:

Just complain until you get a new TAM and be clear on what you want out of them. Some people just want a dude to read a QBR to them and help triage tickets but if you want someone with some technical specialty just ask.

This is the right answer. We have some duds in the public services sector in Australia, otherwise they're all amazing.

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
So...I left a c5.18xlarge instance running over the weekend. I remember trying to stop it, but for whatever reason it didn't stop - could be I had the wrong instance selected. What are the chances of getting any of that refunded? I had some problems with my corporate account and used my personal account for what was supposed to be a quick test, and my group is seriously tight on budget so to be honest I'd rather just eat the cost than expense it. I'm working with some SAs but I don't want to bring it up with them yet because I have other coworkers involved and don't want to get pressured into expensing it and then getting yelled at.

And yes I did set up a budget alert - I didn't before because I typically only run instances long enough to test a particular thing (1 hour max).

JehovahsWetness
Dec 9, 2005

bang that shit retarded
A single instance in a personal account (and I'm assuming not a big monthly spend)? Probably not much of a chance for a refund. I've been at places where AWS waived the charges because someone leaked their keys and got a fuckload of coin miners spun up, but not for a single DBA dipshitting a big RDS instance in the wrong region and forgetting about it.

I think AWS makes a rough distinction between maliciousness and mistake weighed against how much you pay them a month.

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.
It is 100% worth a shot opening up a CS ticket to see if they can do a one-time "I was a dumbass" refund, or even half. If you don't want your work to know about it, I'd avoid mentioning it to your SA.

Definitely mention the steps you have taken to prevent it from happening again (turning on billing alerts, setting up a cloudwatch event to auto-turn off instances each night unless you add a specific tag, etc)
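For reference, the nightly auto-stop idea can be a tiny scheduled Lambda along these lines. This is just a sketch assuming boto3, and the opt-out tag name is made up:

```python
# Hypothetical sketch: scheduled Lambda that stops running instances
# unless they carry a specific "keep me up" tag.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    to_stop = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if tags.get("KeepRunning") != "true":  # placeholder opt-out tag
                    to_stop.append(instance["InstanceId"])
    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return {"stopped": to_stop}
```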

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
Thanks. The work I'm doing is actually for the benefit of the SAs I'm working with, just that the costs generally aren't enough to matter much when I'm not an idiot. And I'm not supposed to use my personal account but I got tired of dealing with IT issues. I'll probably start asking for credit especially when the bare metal instances become public. Can't wait to test those.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Arzakon posted:

It is 100% worth a shot opening up a CS ticket to see if they can do a one-time "I was a dumbass" refund, or even half. If you don't want your work to know about it, I'd avoid mentioning it to your SA.

Definitely mention the steps you have taken to prevent it from happening again (turning on billing alerts, setting up a cloudwatch event to auto-turn off instances each night unless you add a specific tag, etc)

This.

People doing dumb things are fine. Especially if they have learned from their mistakes and won’t make them the same way again.

What we don’t want is a bill from a mistake souring the relationship a customer has with us.

SnatchRabbit
Feb 23, 2006

by sebmojo
Does anyone know if it is possible to get the output from Systems Manager's Run Command all in a single file? I have a bunch of instances running a script, but the outputs are all separated in S3 according to the commands and the instance IDs. I'd prefer to have all the outputs appended to a single file. Anyone know how?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

SnatchRabbit posted:

Does anyone know if it is possible to get the output from Systems Manager's Run Command all in a single file? I have a bunch of instances running a script, but the outputs are all separated in S3 according to the commands and the instance IDs. I'd prefer to have all the outputs appended to a single file. Anyone know how?

I don’t think so.

The ways to get it all into a single file are legion but they are secondary processes.

What are you trying to accomplish?
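For what it's worth, one of those secondary processes can be as simple as pulling the per-instance output objects back out of S3 and concatenating them. A rough sketch assuming boto3, with the bucket and prefix made up (Run Command nests its output under the command ID and instance ID):

```python
# Hypothetical sketch: concatenate all Run Command output objects into one file.
import boto3

s3 = boto3.client("s3")
BUCKET = "ssm-output-example"         # placeholder output bucket
PREFIX = "run-command/<command-id>/"  # placeholder prefix covering all instance IDs

with open("combined-output.txt", "wb") as combined:
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            combined.write(f"===== {obj['Key']} =====\n".encode())
            combined.write(s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read())
            combined.write(b"\n")
```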

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
I filed a ticket and my support guy gave me a credit just over the charges, so thanks! Not quite sure if that means I have to pay the bill and then get a credit, but either way is cool. I deal with all the cloud providers and tbh AWS was my favorite anyway, even if you use dumb hostnames.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

opie posted:

dumb hostnames.

Herd not pets. Why reference them as anything at all?

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.

Agrikk posted:

Herd not pets.

Thank you for your cultural sensitivity but vegans are still mad at you.

opie
Nov 28, 2000
Check out my TFLC Excuse Log!

Agrikk posted:

Herd not pets. Why reference them as anything at all?

It's actually just Windows, and it has to do with one of our tools using the hostname, which isn't useful. Linux is great though and everyone should just use it.

Thanks Ants
May 21, 2004

#essereFerrari


Is there a best-practices guide anywhere for using SAML with AWS Cognito as well as the AWS control panel? Presumably I just create one app for Cognito and one for the other stuff, or is there a more elegant way to deal with this?

JHVH-1
Jun 28, 2002

Thanks Ants posted:

Is there a best-practices guide anywhere for using SAML with AWS Cognito as well as the AWS control panel? Presumably I just create one app for Cognito and one for the other stuff, or is there a more elegant way to deal with this?

I was just starting to set this up on Friday. Added our Azure AD as an identity provider. It basically uses assumed roles to give access to the dashboard or programmatic access. So you can map groups to roles to give access to whatever.

Currently, logging in is initiated from our Microsoft stuff; I haven't gotten around to initiating it from AWS. It doesn't actually create the users either, as it's assuming a role. The login gets tracked in CloudTrail though.

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html I started going through this on my end and the admin of the Azure side had the typical microsoft instructions that tell you what to click where and what to type blah blah blah.
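For reference, the role side of that setup looks roughly like the sketch below (assuming boto3; the account ID, provider name, and role name are placeholders, and the SAML identity provider is assumed to already be registered in IAM). The group-to-role mapping on the Azure AD side then decides which of these roles a given user's assertion can assume.

```python
# Hypothetical sketch: an IAM role that trusts the SAML identity provider.
import json
import boto3

iam = boto3.client("iam")

ACCOUNT_ID = "123456789012"  # placeholder
SAML_PROVIDER_ARN = f"arn:aws:iam::{ACCOUNT_ID}:saml-provider/AzureAD"  # placeholder name

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": SAML_PROVIDER_ARN},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {
            "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
        },
    }],
}

iam.create_role(
    RoleName="azure-ad-admins",  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```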

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
I was excited to see that i3.metal was available in my ec2 list. Works great, although I was kind of surprised to see that it was broadwell and not skylake. I don’t know why I was expecting skylake since I imagine they’re in high demand and short supply right now especially for dedicated access. Still awesome.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Arzakon posted:

Thank you for your cultural sensitivity but vegans are still mad at you.

I apologize for my carno-centric posting.


I humbly submit “weeds not herbs” as a substitute.

Erwin
Feb 17, 2006

Crops, not bonsai.

Vanadium
Jan 8, 2005

I'm working on a thing running in AWS that runs a Redshift UNLOAD query to dump a bunch of stuff to some S3 location. The app runs under an instance profile or whatever granting it the required S3 permissions but also a bunch of other permissions, the Redshift cluster doesn't have any relevant permissions.

My app can pass AWS credentials as part of the UNLOAD query, so it can just pass its own temporary credentials and everything will work, because Redshift can then do everything the app can. I'm a bit hesitant to do that because inserting unnecessarily powerful credentials into the query text seems like a super bad idea in general.

Is there a way to take my app's temporary credentials and turn them into a new set of temporary credentials with further restrictions on allowed actions (and maybe duration), so that my app can delegate just a subset of its own S3 permissions? The STS AssumeRole action allows specifying a policy to further restrict what the resulting set of credentials can do, but that requires me to configure a role and teach my app to assume that particular role in addition to its own role, and I'd like to get out of that somehow. I'd also like to avoid attaching a role to the whole Redshift cluster, since then I'd have to think about how to make it so only my app can take advantage of that and not other users of that Redshift cluster.

Since AssumeRole and other STS actions exist, this seems tantalizingly possible, but it's apparently not actually supported by any STS calls for some reason. Am I misunderstanding the permissions model? Should I just get over it and configure a bunch of super narrow extra roles? Or should I just let the Redshift cluster do whatever it wants on my AWS account and not worry about the query texts with the credentials showing up in the wrong place at some point?
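For reference, the scoping mechanism mentioned above is the optional Policy parameter on AssumeRole: the resulting credentials are the intersection of the role's permissions and the inline policy you pass at call time. It does still mean configuring a role your app is allowed to assume, which is the part you're hoping to avoid. A rough sketch assuming boto3, with the role ARN and bucket made up:

```python
# Hypothetical sketch: scope down temporary credentials via AssumeRole's
# inline session policy; the result can only do what BOTH policies allow.
import json
import boto3

sts = boto3.client("sts")

scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::unload-bucket-example/unload-prefix/*",  # placeholder
    }],
}

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/unload-example",  # placeholder role
    RoleSessionName="redshift-unload",
    Policy=json.dumps(scoped_policy),
    DurationSeconds=900,
)["Credentials"]
```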

12 rats tied together
Sep 7, 2006

You caught everything in the docs that I'm aware of (esp. sts:assumerole with restricted permissions). I would probably just attach the role to the cluster, run the unload, and then remove the role if you absolutely cannot leave the role on the cluster for some reason. Clusters can have multiple roles attached so it's not like you're potentially clobbering someone else's credentials. I also don't think you actually have to put the credentials in the query text - just the role ARN (which isn't powerful to have or know).

I think the only permissions leak is that you would be allowing the cluster to indefinitely write to a location in s3 that you care about? You can probably work out something sufficient with s3 permissions to prevent other users from being able to do the same. This is pretty awful but you could copy your data from /ingested into /approved. Only allow your credentials to write to /approved, and only give the cluster role permission to write to /ingested.

Generally things are a lot easier when you can trust most other entities in your account, though.
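For what it's worth, the "just pass the role ARN" version of the UNLOAD looks roughly like this. It's a sketch assuming psycopg2 for the connection; the cluster endpoint, credentials, role ARN, and bucket are all placeholders, and nothing sensitive ends up in the query text:

```python
# Hypothetical sketch: UNLOAD using a role attached to the cluster,
# referenced only by ARN in the query text.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="dev",
    user="app_user",         # placeholder
    password="placeholder",
)

unload_sql = """
    UNLOAD ('SELECT * FROM some_table')
    TO 's3://unload-bucket-example/ingested/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-example'
    GZIP ALLOWOVERWRITE;
"""

with conn, conn.cursor() as cur:
    cur.execute(unload_sql)
```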

KernelSlanders
May 27, 2013

Rogue operating systems on occasion spread lies and rumors about me.
I'm not sure why you don't want to use role-based access control, but if you do use key-based access control, the credentials are not recorded in the query log.

PIZZA.BAT
Nov 12, 2016


:cheers:


Hi everyone! Total AWS newbie here. I'm a consultant working for a niche NoSQL database company and we're in the middle of pivoting our customers into using AWS rather than running their own hosting, or at least providing them the option. I'm currently responsible for bird-dogging the AWS training program to find out which courses will be most relevant for our consultants to get them up to speed, as well as what order they should be taken in.

We have our own standalone application which has its own built-in web server, REST API, load balancing, etc. Our primary concern is mostly being able to spin up servers, allocate resources to them, deploy our application, and have the servers communicate with each other. I'm looking at the Practitioner Essentials course for today but was wondering what you guys feel will be the best courses for us to take.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
start here: https://acloud.guru/learn/aws-certified-cloud-practitioner

after that they should still probably do one of the 'associate' level courses that come next, but for $100 everyone can just start with 'practitioner' and try to plow through it before the end of the quarter.

vanity slug
Jul 20, 2010

I've done Cloud Practitioner and it's pretty much the Sales guy's introduction to AWS.
