Does anybody on here log in to their AWS console through SAML? I'm looking to sort out our sprawl of independent accounts as the company grows. I have AWS linked to G Suite so everybody picks a role when they log in, based on strings stored in the directory schema, and this works well.

However, a legit issue that has been raised relates to generating access keys for services - since SAML just grants access rather than actually creating an account, there's no user object to attach access keys to. If I make a SAML role that lets people create users that they can then add access keys to, it defeats the purpose of using SAML in the first place, since it creates extra workload to audit those accounts and the permissions attached to them.

Has anyone solved this, or are people just using something like Spinnaker / their own internal tools which use internal directory details, and not letting people touch the AWS console?

Thanks Ants fucked around with this message at 21:40 on Apr 30, 2017
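For what it's worth, the usual workaround is temporary credentials from STS instead of long-lived access keys: a federated login calls AssumeRoleWithSAML and drops the short-lived keys into the CLI credentials file, so no IAM user ever has to exist. A minimal sketch of the credential-file half, with the STS response stubbed out (the profile name and key values are made up; a real flow would get them from sts.assume_role_with_saml):

```python
import configparser
import io

def render_credentials_profile(profile, creds):
    """Render an AWS credentials-file section for a set of temporary
    STS credentials (the shape AssumeRoleWithSAML returns)."""
    config = configparser.ConfigParser()
    config[profile] = {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],  # marks these as temporary
    }
    buf = io.StringIO()
    config.write(buf)
    return buf.getvalue()

# Stubbed response; a real flow would call
# sts.assume_role_with_saml(RoleArn=..., PrincipalArn=..., SAMLAssertion=...)
stub = {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secret",
    "SessionToken": "token",
}
section = render_credentials_profile("saml", stub)
```

Tools like saml2aws wrap this kind of flow end-to-end, which is usually easier than rolling your own.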
# ¿ Apr 30, 2017 21:37 |
|
|
|
Thanks. I've had a look at the documentation for instance profiles, and that seems to be applicable for accessing AWS resources from EC2. The dev team are using access keys for things like connecting their desktop applications to the service - sorry I have to be light on details, I need to have a catch-up with the team lead to figure out what they are actually doing. Is the answer here just to use tools that can authenticate using a SAML workflow?

I've asked them to put me in touch with our account manager as well, to see if we can get on a chat and figure something out, or at least get a clearer idea of what we're trying to do.

Thanks Ants fucked around with this message at 11:09 on May 2, 2017
# ¿ May 2, 2017 11:00 |
|
Am I correct that I can't control IAM permissions on a per-Route 53 domain basis, just on a per-DNS-zone basis? E.g. I can deny access to changing the transfer lock status, but I can't only have it apply to specific domains.
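If I'm reading the docs right, that's correct: hosted-zone actions take a per-zone ARN, but the route53domains actions (transfer lock included) don't appear to support resource-level permissions, so Resource has to be "*" and any deny is all-or-nothing across domains. A sketch of the two policy shapes as plain dicts (the zone ID is a placeholder):

```python
import json

# Zone-level actions can be scoped to a specific hosted zone ARN...
zone_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["route53:ChangeResourceRecordSets"],
        "Resource": "arn:aws:route53:::hostedzone/Z3EXAMPLE",
    }],
}

# ...but route53domains actions seem to only accept Resource "*",
# so the transfer-lock deny can't be limited to specific domains.
domain_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "route53domains:EnableDomainTransferLock",
            "route53domains:DisableDomainTransferLock",
        ],
        "Resource": "*",
    }],
}

policy_json = json.dumps(domain_policy, indent=2)
```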
|
# ¿ Jun 20, 2017 22:12 |
|
Has anybody managed to successfully decipher Azure VM sizing? Looking at their price list: https://azure.microsoft.com/en-gb/pricing/details/virtual-machines/windows/ an A2 instance has 3.5GB RAM and 60GB disk (the disk being temporary scratch rather than the persistent storage provided by a Managed Disk, which the OS runs from). Looking at their documentation: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general an A2 instance has 3.5GB RAM and 135GB disk. Is this a documentation fuckup, or have I missed something?

Edit: I have missed something - A2 on the price list is Basic tier, while A2 Standard is a previous-generation VM and listed at https://azure.microsoft.com/en-gb/pricing/details/virtual-machines/windows-previous/. They could really do with coming up with a better way of naming them.

Thanks Ants fucked around with this message at 18:21 on Jul 25, 2017
# ¿ Jul 25, 2017 18:08 |
|
I'm not understanding the economics of brute-forcing something on AWS to save $5.
|
# ¿ Aug 29, 2017 21:07 |
|
Isn't that also what SQS is designed to manage?
|
# ¿ Sep 9, 2017 17:12 |
|
describe-instances returns the private-dns-name if that is what you meant?
|
# ¿ Sep 11, 2017 17:59 |
|
Is CloudWatch not going to help you here? Either you want to know about all activity, or in the case of EC2 I guess network in/out to instances above a baseline value would help you figure out if a service is in use or not.
|
# ¿ Oct 31, 2017 23:28 |
|
Buy the courses from Udemy and then transfer them into ACloudGuru by sending a copy of your receipt.
|
# ¿ Dec 12, 2017 18:21 |
|
I get massively intimidated by the prospect of taking any of the AWS exams due to the ridiculously quick pace that it's developing at. Should really put some effort in and give it a shot.
|
# ¿ Feb 7, 2018 23:57 |
|
The region thing in AWS being a global setting is fairly annoying - I'd like to see all instances returned, with a column for region that can be filtered. I assume there's a good technical reason for this, presumably it ensures that each region is isolated from the others, so an issue in your local region doesn't also mean losing management of the other regions, but I've not read anything that explains why it's the way it is.
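Since every API call is scoped to one region, the all-regions view has to be stitched together client-side: call each regional endpoint and tag the rows with a region column. A sketch with the per-region lookup stubbed out (a real version would build a boto3 EC2 client per region and call describe_instances; the inventory here is fake):

```python
def list_all_instances(regions, describe_for_region):
    """Flatten per-region instance listings into rows carrying a
    'region' column, since each API endpoint only sees its own region."""
    rows = []
    for region in regions:
        for instance_id in describe_for_region(region):
            rows.append({"region": region, "instance_id": instance_id})
    return rows

# Stub standing in for a per-region describe_instances call
fake_inventory = {
    "eu-west-1": ["i-aaa", "i-bbb"],
    "us-east-1": ["i-ccc"],
}
rows = list_all_instances(sorted(fake_inventory), fake_inventory.__getitem__)
```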
|
# ¿ Apr 16, 2018 16:38 |
|
Can you build it in Lambda?
|
# ¿ Apr 17, 2018 19:41 |
|
Agrikk posted:
It's a blast radius thing. When you flip between regions, you are literally flipping to a new instance of AWS that lives somewhere else. As services are launched they start with a single target region and then a new block of service infrastructure is spun up in a new region, and things progress from there.

Thanks for the explainer
|
# ¿ Apr 21, 2018 21:14 |
|
I don't see the security argument against cloud, because it seems to be based on the assumption that just because someone doesn't know their private infrastructure is a compromised mess, there's never been any issue - and hey, look at all these advisories that Azure are publishing!
|
# ¿ May 2, 2018 17:57 |
|
Have a read of https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
|
# ¿ May 9, 2018 21:53 |
|
Your CloudFront costs are going to be minuscule - I host the images in our email signatures out of public S3 buckets with CloudFront in front of them to be able to use a custom domain, and the cost is under £5 a month.
|
# ¿ May 9, 2018 22:03 |
|
Is there a best-practices guide anywhere for using SAML with AWS Cognito as well as the AWS control panel? Presumably I just create one app for Cognito and one for the other stuff, or is there a more elegant way to deal with this?
|
# ¿ May 20, 2018 20:32 |
|
Jesus, I feel dirty asking this question but here goes. If someone has come to me with a requirement to run a particular legacy application in a few locations (likely west coast, east coast, and London) that is qualified on bare metal, Hyper-V or VMware, am I insane for thinking that nested virt in Azure might tick this box nicely? I'm already familiar with Azure networking and if it needs ExpressRoute then loads of people offer it. The only alternative I can think of is managed VMware with someone like Rackspace, and gently caress that. AWS VMware would also do it but the pricing is way out of budget for this.
|
# ¿ Jun 5, 2018 21:27 |
|
The application is supplied as an ISO or an OVA, and upgrades involve mounting the ISO and using the (virtual) console. While I could deploy the thing into Hyper-V and then migrate it into Azure it basically paints me into a corner as far as future upgrades go.
|
# ¿ Jun 5, 2018 21:36 |
|
Yeah it's more the maintenance and upgrades that have to be done through the console with an ISO mounted. Exporting back to do the upgrade and then importing it again is a bit too much of a PITA, and also we'd completely gently caress over any chance of getting vendor support. This is like the only service that runs this badly and I'd rather just ditch it, but it's not my call.
|
# ¿ Jun 7, 2018 20:03 |
|
Have you looked at something like https://gravitational.com/teleport/ ? I'm sure it can be configured in a way that ensures your tooling can work.
|
# ¿ Jun 21, 2018 10:43 |
|
If anyone has 20 minutes to kill I'd appreciate some input on this presentation, because the point being made seems less "cloud isn't the right choice" and more "we built a legacy service and treated every client like a bespoke deployment, and were surprised when it didn't translate well to AWS". https://www.youtube.com/watch?v=6iOYtH1Ya1E

Just from my non-expert eye it seems like deploying a load of VPNs (and having to configure something on the far end) is a batshit insane way to achieve this vs. just using HTTP/websockets and maybe some sort of push for real-time updates, because then deploying a screen becomes "connect to your wifi or plug into a cheap broadband service" and not "work with us to get a VPN tunnel set up, make sure the LAN address range doesn't overlap with what we're already doing, wow this is all expensive!"

Agrikk, I assume you've had clients that bring a turd like this to you and assume it's just a place to run VMs for cheap?
|
# ¿ Jul 31, 2018 14:24 |
|
It's good how he's sort of lost track of the original aims of moving out of AWS (hitting scalability issues, 'needing' VRF for the dogshit mess that is the networking) and then has to shift the goalposts when the first question comes in.
|
# ¿ Jul 31, 2018 19:54 |
|
It's resolving the external IP, but is the traffic actually going externally? I know in Azure when you present a service endpoint into a vnet you still reference the 'public' DNS name but that traffic never actually leaves the private network.
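One quick sanity check is whether the name resolves to a private address from inside the VPC - if it comes back with RFC 1918 space, the traffic is staying internal no matter how public the hostname looks. The classification itself is a one-liner with the stdlib (the two addresses below are just illustrative):

```python
import ipaddress

def stays_internal(resolved_ip):
    """True if the resolved address is private (RFC 1918 etc.), i.e. the
    'public' hostname is being answered with an internal IP."""
    return ipaddress.ip_address(resolved_ip).is_private

# Inside the network the service name might resolve to an internal address...
internal = stays_internal("10.0.1.25")    # True
# ...while from outside the same name returns a public one.
external = stays_internal("52.16.100.7")  # False
```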
|
# ¿ Sep 21, 2018 16:35 |
|
Can you have one private DNS zone and then add multiple VPC IDs to the configuration? Then just enable DNS resolution on each zone. No need for DNS traffic to transit your VPC peer.

Edit: Ah, I see what you mean. A different IP gets returned depending on where the query has come from, and RDS doesn't see a query from a peered VPC as a private source.

Thanks Ants fucked around with this message at 17:04 on Sep 21, 2018
# ¿ Sep 21, 2018 17:01 |
|
freeasinbeer posted:
I most definitely have RDS servers that have cross region connectivity. It resolves the RDS domain name to a private IP and that is routed over the inter region peer. What AWS won't let you do is chain VPCs, the two VPCs have to be explicitly peered.

Are you running your own DNS servers (in the same region as the RDS server) and setting each instance inside a VPC to use those servers to resolve? That's the only way I can see that working, unless the documentation is different to the reality.
|
# ¿ Sep 21, 2018 21:20 |
|
It seems like you get a choice of having an RDS instance publicly accessible when you create it, and this changes how DNS behaves - if you don't have it publicly accessible then the DNS name will always resolve to a private address. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Hiding

Edit: You wrote that above, I missed it. I think this is the setting that you want, Volguus.

Thanks Ants fucked around with this message at 22:09 on Sep 21, 2018
# ¿ Sep 21, 2018 22:03 |
|
Volguus posted:
Oooh, yes, I guess, maybe. But I would like to be able to access the db from work from time to time (I update the security group to allow my ip to access it, do my thing, then remove it). If I set it to private (assuming i'll be able to update it even) I presume that then this will be it. I have to go through an EC2 machine. Which may be fine I guess.

Yeah, you'll have to hop through another host or deploy a VPN appliance into your VPC. Or build a VPN tunnel back to your office. Or Direct Connect into your existing WAN. I'd push for AWS training (as well as hiring someone), because then that also benefits you.
|
# ¿ Sep 21, 2018 22:39 |
|
Would Athena be along the right lines?
|
# ¿ Nov 2, 2018 00:02 |
|
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
|
# ¿ Nov 2, 2018 11:56 |
|
Not specifically AWS related, but I've had a lot of luck at my current job just keeping a vague eye on a problem that needs solving and waiting for a vendor we already work with to roll a fix into a product we already use, rather than migrating between different services for every little problem that comes along, or running billions of different subscriptions with a load of feature overlap.
|
# ¿ Dec 12, 2018 01:17 |
|
For what it's worth I took it to mean something along the lines of "buy Gmail rather than some EC2 instances that you then put your own mail server on top of". A small dev team that I work with on-and-off really needed to be kicked hard to stop seeing AWS as an empty VMware cluster that you just put Linux boxes on.
|
# ¿ Dec 18, 2018 23:37 |
|
If you've used AWS for anything then you don't really get much out of it. It's a very high level overview of the platform, emphasis on how you still need to do security yourself, and a couple of quick demos.
|
# ¿ Feb 19, 2019 22:30 |
|
Scaramouche posted:
Apologies if this has come up before; I asked the above because some coworkers had asked my opinion of the AWSome Day event and I didn't really have one, so I passed along the info I got here.

AWS partners are pushing something called the Well-Architected Framework which might be worth having a look at. From what I can tell these are days delivered by consulting partners and don't cost anything; you get an opportunity to talk 1:1 about the design of your application, which will probably help with deciding what areas to cover.
|
# ¿ Feb 27, 2019 00:14 |
|
Or look at doing a VPN tunnel between the two virtual networks
|
# ¿ Mar 21, 2019 22:03 |
|
Cloud networking is magic. We needed to move some Azure services into a different region so I just built the vnet, moved the VPN tunnels from the old region to the new, then peered the two vnets, allowing the old one to use the gateway of the new one. Total downtime was about 30 mins which included the time to redo the VPN tunnels on our firewalls. Everything works as it did before, except the packets are going via our local region and I can bring new things up gradually without any disruption. I have no idea how that all works in the backend in a way that can maintain segregation but it’s impressive.
|
# ¿ Apr 14, 2019 01:40 |
|
Yeah, I figured something along those lines, but it's the scale of it that's the impressive bit for me.
|
# ¿ Apr 14, 2019 18:24 |
|
I might suck at searching documentation, but does anybody know if Azure route tables support defining a route for a /16 and then a higher-priority route for a /24 that falls within that /16, or will it just error out? I'm going to try to avoid doing this, but might need it as a fallback option.
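Assuming Azure applies plain longest-prefix matching here (which is what I'd expect from a route table), the /24 should win for addresses inside it and the /16 should catch everything else. The selection logic is easy to sanity-check with the stdlib before touching the portal:

```python
import ipaddress

def pick_route(dest, routes):
    """Longest-prefix match: among the routes containing dest,
    prefer the most specific (largest prefix length)."""
    addr = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(r) for r in routes
               if addr in ipaddress.ip_network(r)]
    return str(max(matches, key=lambda n: n.prefixlen))

routes = ["10.1.0.0/16", "10.1.250.0/24"]
specific = pick_route("10.1.250.9", routes)  # inside the /24, so it wins
general = pick_route("10.1.3.7", routes)     # only the /16 matches
```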
|
# ¿ Jun 19, 2019 15:24 |
|
Yeah I just don't want to hit some weird validation issue in the API/portal and need to push it through support to get fixed. Though I've just realised I can find out pretty quickly by just adding a /8 route to my test tenant and seeing what happens.
|
# ¿ Jun 19, 2019 18:08 |
|
|
|
Thanks. If it needed confirming (having thought about this, it was a question with an obvious answer): I've tested this in an Azure VNet with two IPsec tunnels, one to a site addressed as 10.1.0.0/16 and the other as 10.2.0.0/16, and I could add a 10.1.250.0/24 route via the second tunnel without issue - the route was listed in the effective routes for an interface in the VNet.
|
# ¿ Jun 19, 2019 20:11 |