|
https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html

quote: UPSERT: If a resource record set does not already exist, AWS creates it. If a resource record set does exist, Route 53 updates it with the values in the request.

Upsert doesn't mean append. It means create the record if it doesn't exist at all, or overwrite it with the specified values if it does. So yes, you need to read the current record into a variable, append the string you want added, and then make an API call to set the record to that new value.
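To make the append concrete, here's a rough Python sketch of building the UPSERT change batch. `build_upsert_batch` is a made-up helper, and the surrounding boto3 calls to list the existing record and submit the change are left out:

```python
# Sketch of the read-modify-write flow for appending a value to a record set.
# UPSERT overwrites the whole record set, so every value you want to keep
# must be included in the request, not just the new one.

def build_upsert_batch(name, record_type, ttl, existing_values, new_value):
    """Combine the existing values with the new one into an UPSERT change batch."""
    values = list(existing_values) + [new_value]
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": record_type,
                "TTL": ttl,
                "ResourceRecords": [{"Value": v} for v in values],
            },
        }]
    }
```

You'd pass the result as `ChangeBatch` to `change_resource_record_sets`, after reading the current values out of `list_resource_record_sets`.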
|
# ¿ Jan 24, 2019 17:43 |
|
There isn’t a simple way to have it Shut Down Everything when you exceed a billing threshold. What you can do is set a CloudWatch alarm that will email you when your estimated monthly spend goes over $X. Then set that to like $1 and you should be able to catch whatever the problem is before it amounts to anything significant. You can google up plenty of guides for this. If you make a one-time mistake you can also usually talk support into giving you an account credit or something, in the worst case.

I think you’d have to open a new AWS account to unlink it from your personal Amazon.com account. I actually didn’t even know you could have both services on the same login like that. Docjowles fucked around with this message at 14:27 on Jan 31, 2019 |
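If you want to see roughly what the alarm looks like, here's a sketch of the parameters (the SNS topic ARN is a placeholder; billing metrics only exist in us-east-1 and you have to enable "Receive Billing Alerts" in the account's billing preferences first):

```python
# Rough sketch of CloudWatch billing-alarm parameters. The SNS topic ARN is
# a placeholder; you'd pass this dict to put_metric_alarm on a cloudwatch
# client in us-east-1, where the AWS/Billing metrics live.

def billing_alarm_params(threshold_usd, sns_topic_arn):
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # billing metrics only update every few hours
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }
```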
# ¿ Jan 31, 2019 14:24 |
|
Forgall posted:Could that alert execute lambda function that would shut things down automatically? That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data?
|
# ¿ Jan 31, 2019 15:20 |
|
This isn't really anything related to AWS. The requirements.txt file doesn't just magically do anything on its own. You need to do something like "pip install -r requirements.txt" first to actually download and install the dependencies. Then your app should work.
|
# ¿ Feb 4, 2019 21:25 |
|
Sorry, I misunderstood. Didn’t realize you meant Elastic Beanstalk rather than Elastic Block Store by EBS. Beanstalk should be installing your requirements when the app is deployed, yes.
|
# ¿ Feb 4, 2019 22:23 |
|
Agrikk posted:Note that enterprise support starts at $15,000 per month and goes up from there. You or some other Amazon goon gave me the advice that you always want to do your support interactions over the phone, and that has held extremely true for me. You'll need to block out some time to talk to someone, but it's worth it. My phone cases are resolved in like an hour. The asynchronous "web" option or whatever they call it where you post a message will take days to weeks unless it's something dead simple like "increase my EC2 instance limit in this region by 100".
|
# ¿ Feb 11, 2019 19:28 |
|
You may be overthinking it. I think you just need to do this? https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs.html
|
# ¿ Feb 12, 2019 22:09 |
|
Scrapez posted:I associated both VPCs with the private hosted zone production.instance but the following query still fails from the instances in the source VPC: nslookup -type=SRV _sip._udp.pool.production.instance Just to ask the super dummy question, that same query works fine within the other VPC?
|
# ¿ Feb 13, 2019 04:34 |
|
Scrapez posted:Not dumb. I appreciate the feedback. It does work within the VPC. Oh cool, I missed this post and was wondering what the heck was still wrong with the setup. Glad you got it working!
|
# ¿ Feb 20, 2019 21:55 |
|
Ok I have my own Route53 question. We were hoping to switch from managing our own internal resolvers to using Route53. We created a new private hosted zone with like 1000 records in it using Terraform. It took ages to complete, which I kind of expected. But it also takes ages to do a subsequent plan/apply even if there are no changes. Like 15 minutes per no-op run. Which is uh not going to fly for a frequently changing zone.

Anyone found a way to reasonably manage large Route53 zones with Terraform? We can come up with other solutions, including just keeping our own resolvers. Or writing a smarter script that calls the API directly and only handles records that actually need to change. It's just super nice to have everything in Terraform for a variety of reasons. But if it's the wrong tool for this job, then oh well.
|
# ¿ Feb 26, 2019 19:46 |
|
This was a real cool networking talk from the 2018 re:Invent. One of those sessions I was glad I went to even though it had no immediate value, because it was just Amazon nerds talking in depth about the kickass poo poo they get to do behind the scenes. Opened my eyes to things I never would have thought of. https://www.youtube.com/watch?v=tPUl96EEFps
|
# ¿ Apr 15, 2019 05:53 |
|
"Ephemeral as possible" kind of cries out for lambda, imo. You can configure an S3 bucket to invoke your function every time an object is uploaded, receiving info about the object as an argument. When it's done processing, it shuts off until the next invocation. Here are some random docs. The code sample is nodejs but java works fine, too. https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
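For flavor, the Python version of that docs sample boils down to something like this (the event shape follows the S3 notification format in the linked pages; the actual processing is left as a comment):

```python
# Minimal sketch of a Lambda handler for S3 object-created events.
import urllib.parse

def handler(event, context):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (spaces become '+', etc.)
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # ...process the object here, e.g. s3.get_object(Bucket=bucket, Key=key)...
        results.append((bucket, key))
    return results
```

The function only exists while it's running an invocation, which is about as ephemeral as it gets.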
|
# ¿ Apr 18, 2019 02:53 |
|
Prediction: it is, somehow, the fault of DNS.
|
# ¿ Apr 19, 2019 17:34 |
|
I can't speak to how Azure handles it, but that's just how routing works. A more specific route is perfectly acceptable and will win out over a more general route toward a larger subnet that happens to contain your /24. I would be surprised if it throws an error.
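Tiny illustration of the longest-prefix rule (generic, nothing Azure-specific; the networks and next-hop names are made up):

```python
# Longest-prefix match: the most specific route containing the destination wins.
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "default-gateway",
    ipaddress.ip_network("10.1.2.0/24"): "peering-connection",
}

def next_hop(dest):
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    # Both routes may contain the address; the longest prefix (/24) wins.
    return routes[max(matches, key=lambda net: net.prefixlen)]
```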
|
# ¿ Jun 19, 2019 15:28 |
|
I am all in on TF, for better or worse, but there are takes on both sides. This came up a few months back in the CI/CD thread here and people had feelings for and against both tools. It’s worth reading back through that thread if you’re undecided. For a trivial single-tenant, single-account use case it probably doesn’t matter much; just pick whichever you like more. (Also: think about whether you should be using a single account. AWS support for multi-account is light years past where it was even a year ago.)
|
# ¿ Aug 1, 2019 04:28 |
|
The subnets are in different VPCs? I'm not sure you can do what you're talking about in that scenario, even if the VPCs are peered.
|
# ¿ Aug 15, 2019 19:13 |
|
Agrikk posted:Or you can do what a customer of mine did: holy poo poo I'm starting to see why you tout TAM as a fun and cool job so much.

For anyone struggling with 2FA, I strongly recommend ditching individual IAM accounts and just using your corporate SSO solution. Because yeah, dealing with 2FA loving sucks. If you are at a company of any size you hopefully already have some sort of SSO backed by 2FA, and you can just reuse that instead of making every AWS user set up a second solution. And not hate your life twice as much every time someone drops their phone in the toilet.

This has the added benefit that engineers do not have permanent access keys. Can't upload your god-mode key to GitHub if you don't have a key. You can request temporary keys once you authenticate via SSO, and we make users do this. I wrote a lovely script that makes it very easy to authenticate to our SSO, pick which AWS account you want to work in (filtered to the set this user can access based on their Active Directory groups), and then dump the temp creds to their local environment. Some of the SSO vendors even provide this out of the box.

Doing this has already paid un(?)expected dividends, like devs coming to us saying "hey I run this production critical job from my laptop every day under my user, and now that's not possible, what gives?" and we can gently repoint them toward not loving running critical jobs from their laptops with admin access. Apps running on EC2 instances should use IAM instance profiles to assume a role that can do what they need. There will always be service accounts that need an actual IAM user with a long-lived key. But that should be the last resort choice, IMO.

Actual human using AWS? Access via SSO with 2FA, get temp API keys if needed.
Application running in AWS? Use IAM roles.
App running elsewhere that needs to access AWS resources? OK fine, you get a key, but it's restricted to the minimal set of features said app requires. And it's expiring on a set schedule.
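The tail end of a creds-dumping script like that is basically just formatting the STS response. A sketch, assuming the creds dict has the shape `sts.assume_role(...)` returns (actually fetching it is left out):

```python
# Turn STS temporary credentials into shell export lines the user can eval.
# The dict shape matches the "Credentials" key of an sts.assume_role response.

def to_env_exports(creds):
    return "\n".join([
        f'export AWS_ACCESS_KEY_ID={creds["AccessKeyId"]}',
        f'export AWS_SECRET_ACCESS_KEY={creds["SecretAccessKey"]}',
        f'export AWS_SESSION_TOKEN={creds["SessionToken"]}',
    ])
```

The keys expire on their own (an hour by default), which is the whole point.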
|
# ¿ Aug 19, 2019 05:25 |
|
We run 1password for our root MFAs for the same reason. It seems like the least terrible choice. Plus we already have a corporate subscription anyway.
|
# ¿ Aug 19, 2019 16:48 |
|
Riffing on your corrupt file theory, openssh is (rightly) very paranoid about file permissions. So maybe the .ssh dir or authorized_keys file is being created with inappropriate ownership or permissions? It should be 700 / 600 respectively, and owned by the same user as the parent home directory. It's easy for these to be set overly broad in provisioning scripts because the defaults are usually like 755 / 644.

If those didn't exist at all and were created as root during cloud-init, they probably have the wrong ownership and permissions unless you are actually logging in directly as root. Which is a bad idea, and also you're probably getting blocked by your sshd_config denying root login.

Also a simple thing, but make sure you are using the right username for your AMI. Docjowles fucked around with this message at 08:27 on Aug 30, 2019 |
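If you want your provisioning to enforce this, a minimal sketch (chown is omitted here; assume it runs as, or chowns to, the login user):

```python
# Enforce the permissions sshd expects: 700 on ~/.ssh, 600 on authorized_keys.
import os

def fix_ssh_perms(home):
    ssh_dir = os.path.join(home, ".ssh")
    os.makedirs(ssh_dir, exist_ok=True)
    os.chmod(ssh_dir, 0o700)
    auth_keys = os.path.join(ssh_dir, "authorized_keys")
    if os.path.exists(auth_keys):
        os.chmod(auth_keys, 0o600)
```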
# ¿ Aug 30, 2019 08:23 |
|
Yeah exactly regarding why use an EIP. If the public IP matters and needs to survive your instance being rebuilt for whatever reason (external white lists being the biggie) you’re gonna want that EIP.
|
# ¿ Sep 6, 2019 04:17 |
|
It's beautiful
|
# ¿ Sep 12, 2019 19:09 |
|
No you gotta pay the big bucks for Enterprise Support to get a TAM.
|
# ¿ Sep 25, 2019 00:33 |
|
I also don’t know Azure. But from the docs, it seems like what you are trying to do should work fine? Maybe check that you haven’t somehow specified “source IP affinity mode”? Because if each Varnish request comes from a different source port, it should then be load balancing across the back ends, according to this: https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
|
# ¿ Oct 17, 2019 14:36 |
|
I like all of the above suggestions. Either configure SSH proxy or some kind of S3 + Lambda setup that copies files to the destination every time a new object shows up in the bucket.
|
# ¿ Nov 23, 2019 00:17 |
|
Grats on passing! At this point, getting actual experience with real, non-toy work projects will probably be the most valuable thing, if you can in your current role. The cert is just a starting point, but a very good one, since AWS is so sprawling and complex that it's very easy to design a system that totally sucks or costs 10x what it should. So having that foundation of "here is how not to immediately shoot your foot off in the cloud" is awesome.

There's also the "DevOps" certification track, which as I understand it is more day-two operational tasks: how to keep the thing the solutions architect handed you running, monitored, secured, etc. If that sounds interesting, that's another area you could explore.

There are TONS of jobs out there looking for people with AWS expertise. If you can get some real projects under your belt you should definitely be able to command a raise and/or a new job if you want it. I've even come across multiple networking-specific jobs, and I wasn't even directly looking for them. Usually wanting someone with a traditional networking/security background plus some cloud chops to build a hybrid on-prem/cloud solution. So if you can prove you know your way around both a router CLI and VPCs, Direct Connect, Transit Gateways, etc, there are opportunities out there for you.
|
# ¿ Dec 3, 2019 02:17 |
|
Have you checked that the output file contains the correct content, and isn't just that Access Denied XML?
|
# ¿ Feb 6, 2020 20:38 |
|
I'm glad it was just that because any other option would be worrisome, lol. You probably want the private IP of the instance in your condition, not the public one (if that's the real IP you posted), since you are accessing it over a VPC endpoint. I think that should do the trick.
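For reference, a hedged sketch of what the bucket policy condition might look like (bucket name and IP are placeholders; the `aws:VpcSourceIp` condition key matches the private source IP of requests that arrive through a VPC endpoint):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowFromInstancePrivateIp",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": {
      "IpAddress": {"aws:VpcSourceIp": "10.0.1.25/32"}
    }
  }]
}
```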
|
# ¿ Feb 6, 2020 20:59 |
|
This is probably dumb, but the first idea I had (without totally rethinking the process or adding more scaffolding) was something like:

Ditch the ASGs; they aren’t adding anything here.
Whatever kicks off these jobs (Jenkins or w/e) uses the EC2 API directly to boot up the proper number of instances for the job and gives each Windows/Linux pair a matching tag.
Each server can query the EC2 API to find its mate based on the tags. Also configure them to terminate on instance shutdown.
App does its thing.
When done, have the last step of the job shut down the operating system. It’ll go down and delete itself.
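A sketch of the tag-pairing bit. "JobPair" is a made-up tag key, and these dicts are just the params you'd hand to `ec2.run_instances` / `ec2.describe_instances`:

```python
# Boot params for one instance of a pair, plus the filter its mate would use
# to find it. Terminate-on-shutdown makes the OS shutdown delete the instance.

def run_params(image_id, instance_type, pair_id):
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        # Terminate (not stop) when the OS shuts itself down at job end
        "InstanceInitiatedShutdownBehavior": "terminate",
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "JobPair", "Value": pair_id}],
        }],
    }

def find_mate_filter(pair_id):
    return {"Filters": [{"Name": "tag:JobPair", "Values": [pair_id]}]}
```

The job runner would generate `pair_id` once per job (a UUID, say) and pass it to both instances.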
|
# ¿ Feb 19, 2020 02:26 |
|
I feel better about my harebrained design now that a TAM
|
# ¿ Feb 19, 2020 14:14 |
|
There's also a lot of people with
|
# ¿ Feb 25, 2020 20:21 |
|
terragrunt is just a wrapper around terraform that gives you some convenience features and syntactic sugar. And yeah I think module outputs are the solution to the original question. Make the id of the resource you need an output and then refer to that in the resource that needs to use it as a parameter.
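A minimal sketch of the module-output pattern (all names here are made up for illustration):

```hcl
# modules/network/outputs.tf — expose the id from inside the module
output "vpc_id" {
  value = aws_vpc.main.id
}

# root module — consume the output as a parameter elsewhere
module "network" {
  source = "./modules/network"
}

resource "aws_security_group" "app" {
  vpc_id = module.network.vpc_id
}
```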
|
# ¿ Feb 26, 2020 01:05 |
|
Sorry for the terse phone reply, but yes, all of what you posted is a good idea and the services are solid. Including making a new account to serve as the organizational master that does nothing except authentication and billing. AWS promotes this pattern all over the place in white papers, re:Invent talks, the guidance we got from our account team when starting out, etc.
|
# ¿ Feb 27, 2020 23:19 |
|
Will the storage be used primarily by Windows hosts? Because they also have managed SMB/CIFS if that’s a better fit for your workload. https://aws.amazon.com/fsx/windows/
|
# ¿ May 15, 2020 13:04 |
|
Yeah cloud formation or terraform springs to mind. Are people just clicking around in the console to create those dependencies today? Get that into an IaC tool and version control.
|
# ¿ Jun 2, 2020 12:47 |
|
I think the advice I got when I had to do something similar was to attach a lifecycle policy that deletes anything older than 1 day. This processed pretty fast (like within a day) and I was able to delete the empty bucket.
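For reference, the rule looks something like this. A sketch: this dict is the LifecycleConfiguration you'd pass to `s3.put_bucket_lifecycle_configuration` (versioned buckets would additionally need noncurrent-version expiration to fully empty out):

```python
# Lifecycle rule that expires every object after N days; S3 then deletes
# expired objects in the background, much faster than client-side deletes.

def expire_everything_rule(days=1):
    return {
        "Rules": [{
            "ID": f"expire-after-{days}-day",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = applies to all objects
            "Expiration": {"Days": days},
        }]
    }
```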
|
# ¿ Jul 8, 2020 22:55 |
|
freeasinbeer posted:use a janky third party tool that costs $10k a month. This is really an excellent database_admin.txt summary. Third party database tools have to have one of the worst price:quality ratios of code in existence. They must be written by the same people who develop applications for banks or healthcare that look like MS-DOS ASCII art and cost 8 figures.
|
# ¿ Sep 21, 2021 16:28 |
|
Yeah welding seems pretty cool tbh although maybe not as a full time job. I used to be heavy into homebrewing beer and I was super jealous of the guys who whipped up these sick “brew sculptures” to hold all their equipment. You could buy them premade from a vendor for thousands of dollars, but it seemed like if you knew what you were doing it was like a day or two of work and some scrap metal.
|
# ¿ Dec 4, 2021 18:24 |
|
I’m mostly familiar with terraform rather than CF, but it looks like importing resources is supported? Does this doc help? https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
|
# ¿ Apr 15, 2022 22:28 |
|
You can add cloudwatch as a data source in Grafana, that’s the first thing that comes to mind.
|
# ¿ Apr 24, 2022 04:38 |
|
Pile Of Garbage posted:For your consideration, some absolute fuckin insanity: My guess was that the IPs were in an allow-list somewhere and this was their idiotic scheme to ensure the app could only "dynamically" choose from 1 or 2 IPs in the subnet. Reading the comments, I wasn't that far off.
|
# ¿ May 11, 2022 21:19 |