Docjowles
Apr 9, 2009

https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html

quote:

UPSERT: If a resource record set does not already exist, AWS creates it. If a resource set does exist, Route 53 updates it with the values in the request.

Upsert doesn't mean append. It means create if the record doesn't exist at all, or overwrite with the specified value if it does. So yes, you need to read it into a variable, append the string you want added, and then make an API call to set it to that new value.
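
In other words, something like this boto3 sketch (hypothetical zone ID and record name, using a TXT record as the example):

code:

import boto3

route53 = boto3.client("route53")

# Hypothetical zone/record names for illustration.
ZONE_ID = "Z0000000EXAMPLE"
RECORD_NAME = "example.internal."

# 1. Read the current TXT record (if any) into a variable.
resp = route53.list_resource_record_sets(
    HostedZoneId=ZONE_ID,
    StartRecordName=RECORD_NAME,
    StartRecordType="TXT",
    MaxItems="1",
)
existing = []
for rrset in resp["ResourceRecordSets"]:
    if rrset["Name"] == RECORD_NAME and rrset["Type"] == "TXT":
        existing = rrset["ResourceRecords"]

# 2. Append the new value to whatever is already there.
new_records = existing + [{"Value": '"my-new-value"'}]

# 3. UPSERT the full set back; this overwrites the record with the combined list.
route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "TXT",
                "TTL": 300,
                "ResourceRecords": new_records,
            },
        }]
    },
)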


Docjowles
Apr 9, 2009

There isn’t a simple way to have it Shut Down Everything when you exceed a billing threshold. What you can do is set a CloudWatch billing alarm that will email you when your estimated monthly spend goes over $X. Then set that to like $1 and you should be able to catch whatever the problem is before it amounts to anything significant. You can google up plenty of guides for this. If you make a one-time mistake you can also usually talk support into giving you an account credit or something, in the worst case.
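
For reference, the alarm is basically one boto3 call, something like this sketch (hypothetical SNS topic ARN; billing metrics only live in us-east-1 and you have to enable "receive billing alerts" in the account first):

code:

import boto3

# Billing metrics only exist in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-1-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,             # 6 hours; billing data only updates a few times a day
    EvaluationPeriods=1,
    Threshold=1.0,            # alarm when the estimated monthly bill passes $1
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical SNS topic
)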

I think you’d have to open a new AWS account to unlink it from your personal Amazon.com account. I actually didn’t even know you could have both services on the same login like that.

Docjowles fucked around with this message at 14:27 on Jan 31, 2019

Docjowles
Apr 9, 2009

Forgall posted:

Could that alert execute lambda function that would shut things down automatically?

That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data?

Docjowles
Apr 9, 2009

This isn't really anything related to AWS. The requirements.txt file doesn't just magically do anything on its own. You need to do something like "pip install -r requirements.txt" first to actually download and install the dependencies. Then your app should work.

Docjowles
Apr 9, 2009

Sorry I misunderstood. Didn’t realize you meant Elastic Beanstalk instead of Elastic Block Storage by EBS. Beanstalk should be installing your requirements when the app is deployed, yes.

Docjowles
Apr 9, 2009

Agrikk posted:

Note that enterprise support starts at $15,000 per month and goes up from there.

I’m not sure your monthly spend makes having a TAM and the other perks worth it yet.

WRT your cases getting handled poorly, there is absolutely nothing wrong with copy/pasting the following text into your case:

“Dear [Blank]- I feel frustrated with how this case has been handled thus far. Please engage with me more closely so we can resolve this case quickly and to our satisfaction. When can we schedule a call to talk about this?”

You or some other Amazon goon gave me the advice that you always want to do your support interactions over the phone, and that has held extremely true for me. You'll need to block out some time to talk to someone, but it's worth it. My phone cases are resolved in like an hour. The asynchronous "web" option or whatever they call it where you post a message will take days to weeks unless it's something dead simple like "increase my EC2 instance limit in this region by 100".

Docjowles
Apr 9, 2009

You may be overthinking it. I think you just need to do this?

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs.html
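
It's basically one API call per VPC, something like this boto3 sketch with hypothetical IDs (if the VPC lives in a different account, I believe you need an extra create_vpc_association_authorization step first):

code:

import boto3

route53 = boto3.client("route53")

# Hypothetical IDs: the private hosted zone and the second VPC to associate with it.
route53.associate_vpc_with_hosted_zone(
    HostedZoneId="Z0000000EXAMPLE",
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234"},
)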

Docjowles
Apr 9, 2009

Scrapez posted:

I associated both VPCs with the private hosted zone production.instance but the following query still fails from the instances in the source VPC: nslookup -type=SRV _sip._udp.pool.production.instance

Server: 10.0.0.2
Address: 10.0.0.2#53

** server can't find _sip._udp.bvr.production.instance: NXDOMAIN

Just to ask the super dummy question, that same query works fine within the other VPC?

Docjowles
Apr 9, 2009

Scrapez posted:

Not dumb. I appreciate the feedback. It does work within the VPC.

Edit: Follow-up associating the VPC with the private hosted zone DID resolve the issue. I just still had inbound and outbound endpoints setup that were breaking things. :negative:

Thanks, Docjowles!

Oh cool, I missed this post and was wondering what the heck was still wrong with the setup. Glad you got it working!

Docjowles
Apr 9, 2009

Ok I have my own Route53 question. We were hoping to switch from managing our own internal resolvers to using route53. We created a new private hosted zone with like 1000 records in it using Terraform. It took ages to complete which I kind of expected. But it also takes ages to do a subsequent plan/apply even if there are no changes. Like 15 minutes per no-op run. Which is uh not going to fly for a frequently changing zone.

Anyone found a way to reasonably manage large route53 zones with terraform?

We can come up with other solutions, including just keeping our own resolvers. Or writing a smarter script that calls the API directly and only handles records that actually need to change. It's just super nice to have everything in Terraform for a variety of reasons. But if it's the wrong tool for this job, then oh well.
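
For the "smarter script" option, I'm picturing something roughly like this boto3 sketch (hypothetical zone ID, untested): pull the whole zone, diff it against what you want, and only submit the records that actually changed.

code:

import boto3

route53 = boto3.client("route53")
ZONE_ID = "Z0000000EXAMPLE"  # hypothetical private zone

def current_records(zone_id):
    """Pull every record set in the zone into a dict keyed by (name, type)."""
    records = {}
    paginator = route53.get_paginator("list_resource_record_sets")
    for page in paginator.paginate(HostedZoneId=zone_id):
        for rrset in page["ResourceRecordSets"]:
            records[(rrset["Name"], rrset["Type"])] = rrset
    return records

def plan_changes(desired, existing):
    """Only UPSERT records that are new or different; skip everything else."""
    changes = []
    for key, rrset in desired.items():
        if existing.get(key) != rrset:
            changes.append({"Action": "UPSERT", "ResourceRecordSet": rrset})
    return changes

def apply_changes(zone_id, changes, batch_size=500):
    # Route 53 caps the size of a single ChangeBatch, so submit in chunks.
    for i in range(0, len(changes), batch_size):
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": changes[i:i + batch_size]},
        )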

Docjowles
Apr 9, 2009

This was a real cool networking talk from the 2018 re:Invent. One of those sessions I was glad I went to even though it had no immediate value, because it was just Amazon nerds talking in depth about the kickass poo poo they get to do behind the scenes. Opened my eyes to things I never would have thought of.

https://www.youtube.com/watch?v=tPUl96EEFps

Docjowles
Apr 9, 2009

"Ephemeral as possible" kind of cries out for lambda, imo. You can configure an S3 bucket to invoke your function every time an object is uploaded, receiving info about the object as an argument. When it's done processing, it shuts off until the next invocation.

Here are some random docs. The code sample is nodejs but java works fine, too.

https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
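
The handler side is pretty tiny. A minimal Python sketch of what the S3-triggered function looks like (the event shape is the same whether you write it in nodejs, Java, or Python):

code:

import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Each invocation receives one or more S3 records describing the uploaded objects.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read()
        # ... process the object here ...
        print(f"processed s3://{bucket}/{key} ({len(body)} bytes)")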

Docjowles
Apr 9, 2009

Prediction: it is, somehow, the fault of DNS.

Docjowles
Apr 9, 2009

I can't speak to how Azure handles it, but that's just how routing works. A more specific route is perfectly acceptable and will win out over a more general route toward a larger subnet that happens to contain your /24. I would be surprised if it throws an error.
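
I could be off on the Azure specifics, but the longest-prefix-match behavior itself is easy to demo with a toy route table in Python's ipaddress module (no cloud API involved, just the routing logic):

code:

import ipaddress

# Toy route table: a broad route plus a more specific /24 that sits inside it.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "default-path",
    ipaddress.ip_network("10.1.2.0/24"): "specific-path",
}

def pick_route(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    # The longest prefix (most specific network) wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(pick_route("10.1.2.50"))   # -> specific-path, even though 10.0.0.0/8 also matches
print(pick_route("10.9.9.9"))    # -> default-path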

Docjowles
Apr 9, 2009

I am all in on TF for better or worse, but there are takes on both sides. This came up a few months back in the CI/CD thread here and people had feelings for and against both tools. It’s worth reading back through that thread if you’re undecided.

For a trivial single-tenant, single-account use case (also: think about whether you should be using a single account. AWS support for multi-account is light years past where it was even a year ago) it probably doesn’t matter much; just pick whichever you like more.

Docjowles
Apr 9, 2009

The subnets are in different VPCs? I'm not sure you can do what you're talking about in that scenario, even if the VPCs are peered.

Docjowles
Apr 9, 2009

Agrikk posted:

Or you can do what a customer of mine did:

Be really clever and buy an iPhone and put all eighty of their accounts’ root 2FA on an instance of google Authenticator and keep the iPhone in a bombproof safe.


They were all kinds of :smug: until someone dropped the phone.


I had to fly down there and get on a video call with our legal department and me sitting next to their leadership and vouch that their leadership was actually their leadership and we all had to present IDs and say who we were and that we were authorized to remove MFA from the account.

We got to do this eighty times.

:lol: holy poo poo :lol: I'm starting to see why you tout TAM as a fun and cool job so much.

For anyone struggling with 2FA, I strongly recommend ditching individual IAM accounts and just using your corporate SSO solution. Because yeah, dealing with 2FA loving sucks. If you are at a company of any size you hopefully already have some sort of SSO backed by 2FA and you can just reuse that instead of making every AWS user set up a second solution. And not hate your life twice as much every time someone drops their phone in the toilet.

This has the added benefit that engineers do not have permanent access keys. Can't upload your god-mode key to GitHub if you don't have a key :thunk: You can request temporary keys once you authenticate via SSO, and we make users do this. I wrote a lovely script that makes it very easy to authenticate to our SSO, pick which AWS account you want to work in (filtered to the set this user can access based on their Active Directory groups), and then dump the temp creds to their local environment. Some of the SSO vendors even provide this out of the box. Doing this has already paid un(?)expected dividends like devs coming to us saying "hey I run this production critical job from my laptop every day under my user, and now that's not possible, what gives?" and we can gently repoint them toward not loving running critical jobs from their laptops with admin access.

Apps running on EC2 instances should use IAM instance profiles to assume a role that can do what they need. There will always be service accounts that need an actual IAM user with a long-lived key. But that should be the last resort choice, IMO.

Actual human using AWS? Access via SSO with 2FA, get temp API keys if needed
Application running in AWS? Use IAM roles
App running elsewhere that needs to access AWS resources? OK fine, you get a key but it's restricted to the minimal set of features said app requires. And it's expiring on a set schedule.
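
If it helps, the temp-creds part of that script boils down to STS. Here's a stripped-down sketch with a hypothetical role ARN; the real thing authenticates against the SSO provider first and picks the role from your AD groups, which I'm skipping here:

code:

import os
import boto3

sts = boto3.client("sts")

# Hypothetical role ARN; in practice this comes from the account/role
# the user picked after authenticating to the SSO provider.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/engineer",
    RoleSessionName="temp-session",
    DurationSeconds=3600,
)

creds = resp["Credentials"]
# Dump the temporary keys into the environment for anything this process launches.
os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
print("Temporary credentials expire at", creds["Expiration"])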

Docjowles
Apr 9, 2009

We run 1password for our root MFAs for the same reason. It seems like the least terrible choice. Plus we already have a corporate subscription anyway.

Docjowles
Apr 9, 2009

Riffing on your corrupt file theory, openssh is (rightly) very paranoid about file permissions. So maybe the .ssh dir or authorized_keys file is being created with inappropriate ownership or permissions? It should be 700 / 600 respectively and owned by the same user as the parent home directory. It's easy for these to be set overly broad in provisioning scripts because the defaults are usually like 755 / 644. If those didn't exist at all and were created as root during cloud-init, they probably have the wrong ownership and permissions unless you are actually logging in directly as root. Which is a bad idea, and also you're probably getting blocked by your sshd_config denying root login.

Also a simple thing, but make sure you are using the right username for your AMI.

Docjowles fucked around with this message at 08:27 on Aug 30, 2019

Docjowles
Apr 9, 2009

Yeah, exactly, regarding why you'd use an EIP. If the public IP matters and needs to survive your instance being rebuilt for whatever reason (external whitelists being the biggie), you're gonna want that EIP.

Docjowles
Apr 9, 2009

It's beautiful :allears:

Docjowles
Apr 9, 2009

No you gotta pay the big bucks for Enterprise Support to get a TAM.

Docjowles
Apr 9, 2009

I also don’t know Azure. But from the docs, it seems like what you are trying to do should work fine? Maybe check that you haven’t somehow specified “source IP affinity mode”? Because if each Varnish request comes from a different source port, it should then be load balancing across the back ends according to this:


https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode

Docjowles
Apr 9, 2009

I like all of the above suggestions. Either configure an SSH proxy or some kind of S3 + Lambda setup that copies files to the destination every time a new object shows up in the bucket.

Docjowles
Apr 9, 2009

Grats on passing! At this point, getting actual experience with real, non-toy work projects will probably be the most valuable thing. If you can in your current role. The cert is just a starting point, but a very good one, since AWS is so sprawling and complex that it's very easy to design a system that totally sucks or costs 10x what it should. So having that foundation of "here is how not to immediately shoot your foot off in the cloud" is awesome :)

There's also the "DevOps" certification track, which as I understand it is more about day-two operational tasks. How to keep the thing the solutions architect handed you running, monitored, secured, etc. If that sounds interesting, that's another area you could explore.

There are TONS of jobs out there looking for people with AWS expertise. If you can get some real projects under your belt you should definitely be able to command a raise and/or a new job if you want it. I've even come across multiple networking specific jobs, and I wasn't even directly looking for them. Usually wanting someone with a traditional networking/security background plus some cloud chops to build a hybrid on-prem/cloud solution. So if you can prove you know your way around both a router CLI and VPCs, Direct Connect, Transit Gateways, etc there are opportunities for you out there.

Docjowles
Apr 9, 2009

Have you checked that the output file contains the correct content, and isn't just that Access Denied XML? :v:

Docjowles
Apr 9, 2009

I'm glad it was just that because any other option would be worrisome, lol

You probably want the private IP of the instance in your condition, not the public one (if that's the real IP you posted), since you are accessing it over a VPC endpoint. I think that should do the trick.

Docjowles
Apr 9, 2009

This is probably dumb but the first idea I had (without totally rethinking the process or adding more scaffolding) was something like:

Ditch the ASGs, they aren’t adding anything here. Whatever kicks off these jobs (Jenkins or w/e) uses the EC2 API directly to boot up the proper number of instances for the job and give each Windows/Linux pair a matching tag. Each server can query the EC2 API to find its mate based on the tags. Also configure them to terminate on instance shutdown.

App does its thing. When done, have the last step of the job shut down the operating system. It’ll go down and delete itself.
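
Roughly what I'm picturing with boto3 (hypothetical AMI/subnet IDs and tag name, untested sketch):

code:

import boto3

ec2 = boto3.client("ec2")

def launch_pair_member(ami_id, pair_id):
    # Hypothetical instance type and subnet ID.
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0abc1234",
        # Instance deletes itself when the OS shuts down at the end of the job.
        InstanceInitiatedShutdownBehavior="terminate",
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "JobPairId", "Value": pair_id}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

def find_mate(pair_id, my_instance_id):
    # Each server looks up its partner by the shared tag.
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:JobPairId", "Values": [pair_id]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            if inst["InstanceId"] != my_instance_id:
                return inst
    return None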

Docjowles
Apr 9, 2009


I feel better about my harebrained design now that a TAM stole my sweet idea came up with the same thing :v:

Docjowles
Apr 9, 2009

There's also a lot of people with STRONG OPINIONS Terraform experience in the CI/CD thread if you don't get the answers you need here. It's kind of the de facto containers and infrastructure-as-code thread.

Docjowles
Apr 9, 2009

terragrunt is just a wrapper around terraform that gives you some convenience features and syntactic sugar.

And yeah I think module outputs are the solution to the original question. Make the ID of the resource you need an output of the module, then refer to that output as a parameter in the resource that needs it.

Docjowles
Apr 9, 2009

Sorry for the terse phone reply, but yes, all of what you posted is a good idea and the services are solid. Including making a new account to serve as the organizational master that does nothing except authentication and billing. AWS promotes this pattern all over the place in white papers, re:Invent talks, the guidance we got from our account team when starting out, etc.

Docjowles
Apr 9, 2009

Will the storage be used primarily by Windows hosts? Because they also have managed SMB/CIFS if that’s a better fit for your workload.


https://aws.amazon.com/fsx/windows/

Docjowles
Apr 9, 2009

Yeah, CloudFormation or Terraform spring to mind. Are people just clicking around in the console to create those dependencies today? Get that into an IaC tool and version control.

Docjowles
Apr 9, 2009

I think the advice I got when I had to do something similar was to attach a lifecycle policy that deletes anything older than 1 day. This processed pretty fast (like within a day) and I was able to delete the empty bucket.
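
The lifecycle rule itself is a quick boto3 call, something like this sketch (hypothetical bucket name; if the bucket is versioned I believe you also need a noncurrent-version expiration rule):

code:

import boto3

s3 = boto3.client("s3")

# Expire every object after 1 day so the bucket drains itself, then it can be deleted.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-doomed-bucket",   # hypothetical name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-everything",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix = apply to all objects
            "Expiration": {"Days": 1},
        }]
    },
)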

Docjowles
Apr 9, 2009

freeasinbeer posted:

use a janky third party tool that costs $10k a month.

This is really an excellent database_admin.txt summary. Third party database tools have to have one of the worst price:quality ratios of code in existence. They must be written by the same people who develop applications for banks or healthcare that look like MS-DOS ASCII art and cost 8 figures.

Docjowles
Apr 9, 2009

Yeah welding seems pretty cool tbh although maybe not as a full time job. I used to be heavy into homebrewing beer and I was super jealous of the guys who whipped up these sick “brew sculptures” to hold all their equipment. You could buy them premade from a vendor for thousands of dollars but it seemed like if you knew what you were doing it was like a day or two of work and some scrap metal.

Docjowles
Apr 9, 2009

I’m mostly familiar with terraform rather than CF, but it looks like importing resources is supported? Does this doc help? https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html

Docjowles
Apr 9, 2009

You can add cloudwatch as a data source in Grafana, that’s the first thing that comes to mind.


Docjowles
Apr 9, 2009

Pile Of Garbage posted:

For your consideration, some absolute fuckin insanity:

https://twitter.com/xssfox/status/1524228883259994112

My guess was that the IPs were in an allow-list somewhere and this was their idiotic scheme to ensure the app could only "dynamically" choose from 1 or 2 IPs in the subnet. Reading the comments I wasn't that far off.
