|
I'm trying to get my head around whether I need a Route 53 outbound endpoint, an inbound endpoint, or both. I have two VPCs, each with its own DHCP option set associated and DNS hostnames and resolution enabled. VPC 1 has option set production.instance and VPC 2 has option set scrapez.com. I have a VPC peering connection set up between them.

I want to be able to resolve the records in my production.instance. hosted zone from my instance in the scrapez.com VPC. i.e.: I need instance 1 (ip-10-0-0-200.scrapez.com) to be able to resolve SRV record _sip._udp.pool.production.instance., which has underlying values of ip-10-100-73-19.production.instance. and ip-10-100-96-92.production.instance.

Is a Route 53 outbound endpoint from the originating VPC the way to accomplish this? Or an inbound endpoint to the target VPC? Or something else?
|
# ? Feb 12, 2019 20:32 |
|
You may be overthinking it. I think you just need to do this? https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs.html
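For reference, the association itself is one API call per VPC. A boto3 sketch of the request shape (the zone and VPC IDs here are placeholders, and cross-account associations additionally need a `create_vpc_association_authorization` step first):

```python
def association_params(zone_id, vpc_id, region):
    """Build the kwargs for route53.associate_vpc_with_hosted_zone."""
    return {"HostedZoneId": zone_id,
            "VPC": {"VPCRegion": region, "VPCId": vpc_id}}

params = association_params("Z2EXAMPLE", "vpc-0abc123", "us-east-1")
print(params["VPC"])  # {'VPCRegion': 'us-east-1', 'VPCId': 'vpc-0abc123'}

# With credentials configured, this would be:
#   import boto3
#   boto3.client("route53").associate_vpc_with_hosted_zone(**params)
```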
|
# ? Feb 12, 2019 22:09 |
|
Docjowles posted:You may be overthinking it. I think you just need to do this?

I associated both VPCs with the private hosted zone production.instance but the following query still fails from the instances in the source VPC:

nslookup -type=SRV _sip._udp.pool.production.instance
Server: 10.0.0.2
Address: 10.0.0.2#53
** server can't find _sip._udp.bvr.production.instance: NXDOMAIN
|
# ? Feb 12, 2019 22:28 |
|
freeasinbeer posted:Stupid question: do I need to sign up my sub accounts for enterprise? It's the first time I've set one up in a while and as far as I know our dedicated spend contract should just have that roll down, right?

If your payer is on enterprise, your subs are on enterprise as well. But your subs also contribute to the total spend attributed to your payer, so your enterprise support costs might go up when you link them. It takes a support case to make it so: "Hey, please flip the bits that turn on ES for all accounts linked to our payer. Thank you."
|
# ? Feb 13, 2019 00:57 |
|
Scrapez posted:I associated both VPCs with the private hosted zone production.instance but the following query still fails from the instances in the source VPC: nslookup -type=SRV _sip._udp.pool.production.instance

Just to ask the super dummy question: that same query works fine within the other VPC?
|
# ? Feb 13, 2019 04:34 |
|
Docjowles posted:Just to ask the super dummy question, that same query works fine within the other VPC?

Not dumb. I appreciate the feedback. It does work within the VPC.

Edit: Follow-up: associating the VPC with the private hosted zone DID resolve the issue. I just still had inbound and outbound endpoints set up that were breaking things. Thanks, Docjowles!

Scrapez fucked around with this message at 15:43 on Feb 13, 2019 |
# ? Feb 13, 2019 05:12 |
|
When performing a describe-network-interfaces, is there a way to do wildcards in the description filter to return all matching ENIs? For example, I have two ENIs with descriptions of TestAdapter0 and TestAdapter1. Is there a way to do something like `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter*"`?

Edit: Gosh I'm dumb... that does work. I just wasn't putting the double quotes around the Value.
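As noted in the edit, the `description` filter does accept `*` wildcards server-side. If you ever need to replicate that matching client-side (say, to post-filter results you've already fetched), Python's stdlib `fnmatch` uses the same glob syntax. The `enis` list below is made-up sample data shaped like DescribeNetworkInterfaces output:

```python
from fnmatch import fnmatchcase  # case-sensitive glob matching

# Made-up sample data shaped like DescribeNetworkInterfaces output.
enis = [
    {"NetworkInterfaceId": "eni-0aaa", "Description": "TestAdapter0"},
    {"NetworkInterfaceId": "eni-0bbb", "Description": "TestAdapter1"},
    {"NetworkInterfaceId": "eni-0ccc", "Description": "ProdAdapter0"},
]

def match_description(interfaces, pattern):
    """Return interfaces whose Description matches a glob pattern,
    mirroring the server-side Name=description,Values=... filter."""
    return [e for e in interfaces if fnmatchcase(e["Description"], pattern)]

matching = match_description(enis, "TestAdapter*")
print([e["NetworkInterfaceId"] for e in matching])  # ['eni-0aaa', 'eni-0bbb']
```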
|
# ? Feb 15, 2019 17:35 |
|
Agrikk what’s the deal with these “Senior DevOps Consultant” jobs I’ve had land in my inbox. 2 so far this week. Is this a new professional services offering spinning up to help people do the DevOps? It’s almost a perfect match to the DevOps Enablement initiative I’ve been working on at my company for 6 months but I’m guessing the pay and perks are better.
|
# ? Feb 16, 2019 03:27 |
|
Is there a cost associated with Elastic Network Interfaces? I can't find anything that talks about pricing so I think they're free to use but I can't find anything definitive.
|
# ? Feb 18, 2019 17:05 |
|
Scrapez posted:Is there a cost associated with Elastic Network Interfaces? I can't find anything that talks about pricing so I think they're free to use but I can't find anything definitive. You pay for the EC2 instance that has the ENI but there's no additional charge unless you attach an EIP. Even then, it's still free as long as you use it and don't park it for a month.
|
# ? Feb 19, 2019 07:09 |
|
Is anyone going to one of the AWSome Day events? Is it worth going to? I've been using AWS off and on for the last couple years, mostly as a repository for image CDN and/or EDI-type file storage (mostly through hand-cribbed .NET code and automating CloudBerry backups), but have not really interacted with Amazon formally about it.
|
# ? Feb 19, 2019 22:26 |
|
If you've used AWS for anything then you don't really get much out of it. It's a very high level overview of the platform, emphasis on how you still need to do security yourself, and a couple of quick demos.
|
# ? Feb 19, 2019 22:30 |
|
Virigoth posted:Agrikk what's the deal with these "Senior DevOps Consultant" jobs I've had land in my inbox. 2 so far this week. Is this a new professional services offering spinning up to help people do the DevOps?

That is correct. The DevOps Consultant is part of ProServe and is a combination of hands-on-keyboard and instructor.

Scrapez posted:When performing a describe-network-interfaces, is there a way to do wildcards in the description filter to return all matching ENIs?

FYI: Using a wildcard for the filter may result in multiple API calls being made in quick succession, which may result in RequestLimitExceeded errors depending on the number of entries returned, other filters, and other API activity in your account. I'm not saying that it will happen, but it could happen depending on your use case.

Agrikk fucked around with this message at 07:16 on Feb 20, 2019 |
# ? Feb 20, 2019 07:10 |
|
Agrikk posted:FYI- So would it be better to set the description of all the ENIs to the same string (TestAdapter) and then instead do the query as: `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter"` There's really no reason I need to do it as a wildcard. I had just planned to set descriptions as TestAdapter1, TestAdapter2, etc but it isn't really a requirement to do that.
|
# ? Feb 20, 2019 15:34 |
|
Scrapez posted:So would it be better to set the description of all the ENIs to the same string (TestAdapter) and then instead do the query as: `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter"` That is correct. If this process is going to be anything other than a one-off you should probably build a tagging scheme and do your search based on tags.
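If you go the tag route, the filter shape stays the same; you just use the `tag:<key>` filter name instead of `description`. A small sketch of building the `Filters` parameter boto3 expects (the `Role`/`TestAdapter` tag names here are hypothetical):

```python
def tag_filter(key, *values):
    """Build one Filters entry matching resources tagged key=value,
    in the shape boto3's describe_network_interfaces expects."""
    return {"Name": f"tag:{key}", "Values": list(values)}

filters = [tag_filter("Role", "TestAdapter")]
print(filters)  # [{'Name': 'tag:Role', 'Values': ['TestAdapter']}]

# With credentials configured, this would be used as:
#   import boto3
#   ec2 = boto3.client("ec2")
#   resp = ec2.describe_network_interfaces(Filters=filters)
```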
|
# ? Feb 20, 2019 18:18 |
|
Agrikk posted:That is correct. If this process is going to be anything other than a one-off you should probably build a tagging scheme and do your search based on tags. Right. Searching for them via tags does make much more sense. Then I can add unique descriptions. Thank you!
|
# ? Feb 20, 2019 18:20 |
|
Does LB WAF consider X-Forwarded-For or true-client-ip for its rate limiting or is it just remote_addr and a counter threshold?
|
# ? Feb 20, 2019 21:45 |
|
Scrapez posted:Not dumb. I appreciate the feedback. It does work within the VPC. Oh cool, I missed this post and was wondering what the heck was still wrong with the setup. Glad you got it working!
|
# ? Feb 20, 2019 21:55 |
|
Architectural one - we had an argument today about whether or not we should have environment-based transit.

For context: the developers have to provision "mainline", "staging" and "live" environments by having a VPC for each per region they want to be in (so more often than not teams end up with 6 VPCs for 2-region redundancy). This adds headaches, as theoretically it also means a pair of VPN tunnels per VPC-region if they want to hit our on-premise infrastructure, and a hell of a lot of NAT gateways. We're partway to a solution by having transit VPCs span everywhere so everyone can share the NAT, internet, and VPN tunnels through one account, but would you go a step further and split the transit into "mainline", "staging", and "live" sets of transit gateways?

In the end it's all the same address space, and due to how hosed our on-premise network already is, QA is already visible to live and vice versa, save for security groups locking this down; plus there's the risk of someone just smashing the transits all together in their account and giving a giant middle finger to routing. However, it could mean we get a bit closer to some network sanity by actually segregating poo poo and allowing the networking team to try things out without bringing every conceivable environment down. One argument against this was that our switching & routing on premises isn't segregated physically, so why would we do it in AWS?
|
# ? Feb 25, 2019 21:47 |
|
Cancelbot posted:Architectural one - We had an argument today of whether or not we should have environment based transit. For context: the developers have to provision a "mainline", "staging" and "live" environment by having a VPC for each per region they want to be in (so more often than not teams end up with 6 VPCs for 2 region redundancy).

Like everything else AWS, "it depends." What need does segregation solve? If your company got burned somehow by unsegregated networking, then yeah, culturally you might want to go the three-transit-networks route. But other than that, you have to ask yourself what gains you achieve by adding triple the complexity for all of your interconnects.
|
# ? Feb 25, 2019 22:47 |
|
We haven't been burned by it at all. I think there's a desire to start from a clean slate as our on premises network doesn't have that. But in reality there's very little risk of this messing up unless someone did some lovely Terraform that was missed by the review processes we already have in place.
|
# ? Feb 26, 2019 09:33 |
|
Ok, I have my own Route 53 question. We were hoping to switch from managing our own internal resolvers to using Route 53. We created a new private hosted zone with like 1000 records in it using Terraform. It took ages to complete, which I kind of expected. But it also takes ages to do a subsequent plan/apply even if there are no changes. Like 15 minutes per no-op run. Which is, uh, not going to fly for a frequently changing zone.

Anyone found a way to reasonably manage large Route 53 zones with Terraform? We can come up with other solutions, including just keeping our own resolvers. Or writing a smarter script that calls the API directly and only handles records that actually need to change. It's just super nice to have everything in Terraform for a variety of reasons. But if it's the wrong tool for this job, then oh well.
|
# ? Feb 26, 2019 19:46 |
|
For S3, is there a preferred standard for object storage while keeping original location information? I'm planning on flattening my full path names into keys (replacing the system name with a shortened reference name), which will make regenerating the files from objects as simple as performing the reverse. Am I going to run into any trouble this way? I'm sure I could store the old filepath within metadata but that seems just as messy.

Example: \\Client1Server\Test\ExtraFolder\File1.csv becomes s3://Bucket/Client1/Test/ExtraFolder/File1.csv
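That mapping and its inverse are simple enough to sketch in Python, assuming the substitution is purely mechanical (server name swapped for a short client name, backslashes for slashes). `Client1Server`/`Client1` are just the example names from above:

```python
PREFIX_MAP = {r"\\Client1Server": "Client1"}  # server name -> short reference name

def to_key(unc_path):
    """Flatten a UNC path into an S3 object key."""
    for server, short in PREFIX_MAP.items():
        if unc_path.startswith(server):
            tail = unc_path[len(server):].replace("\\", "/").lstrip("/")
            return f"{short}/{tail}"
    raise ValueError(f"no mapping for {unc_path!r}")

def to_unc(key):
    """Reverse: rebuild the original UNC path from an object key."""
    short, _, tail = key.partition("/")
    servers = {v: k for k, v in PREFIX_MAP.items()}
    return servers[short] + "\\" + tail.replace("/", "\\")

key = to_key(r"\\Client1Server\Test\ExtraFolder\File1.csv")
print(key)          # Client1/Test/ExtraFolder/File1.csv
print(to_unc(key))  # \\Client1Server\Test\ExtraFolder\File1.csv
```

One caveat: S3 keys are case-sensitive while Windows paths aren't, so round-tripping relies on consistent casing in the source paths.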
|
# ? Feb 26, 2019 19:46 |
|
PierreTheMime posted:For S3, is there a preferred standard for object storage while keeping original location information? I'm planning on flattening my fullpath names into keys (removing the system name with a shortened reference name), which will make regenerating the files from objects as simple as performing the reverse. Am I going to run into any trouble this way? I'm sure I could store the old filepath within metadata but that seems just as messy.

This is fine. I have clients using PowerShell scripts and robocopy to do a shake-n-bake nightly backup job:

- Robocopy generates a list of files with the archive bit set
- Another command splits the list into 8 lists (one for each core on the server)
- Another command launches 8 `aws s3 cp` commands to push the files to the bucket and reset the archive bit

Note that it doesn't reflect any deletes or renames or folder moves, so your bucket will end up with a lot of leftover artifacts.

Agrikk fucked around with this message at 21:35 on Feb 26, 2019 |
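The split-into-8-lists step can be sketched as a round-robin partition; the worker count is whatever matches your cores, and the filenames below are made up:

```python
def split_lists(files, n=8):
    """Round-robin a file list into n roughly equal sublists,
    one per parallel `aws s3 cp` worker."""
    return [files[i::n] for i in range(n)]

files = [f"file{i:03}.dat" for i in range(10)]
chunks = split_lists(files, n=4)
print(chunks[0])  # ['file000.dat', 'file004.dat', 'file008.dat']
```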
# ? Feb 26, 2019 21:32 |
|
Agrikk posted:Note that it doesn't reflect any deletes or renames or folder moves, so your bucket will end up with a lot of leftover artifacts.

Using `s3 sync --delete` will remove files in the bucket that no longer exist in the source.
|
# ? Feb 26, 2019 22:24 |
|
Scaramouche posted:Is anyone going to one of the AWSome day events? Is it worth going to?

Apologies if this has come up before; I asked the above because some coworkers had asked my opinion of the AWSome Day event and I didn't really have one, so I passed along the info I got here. However, one of them has come back with another question, which is about courses and training. I don't think his interest is in expanding his certificate collection, but rather getting some real hands-on training. Based on what I've gathered by reading the OP and some randomly selected pages of the thread, there are a couple of options:

- Amazon directly (do they do this, or just list certified training providers?)
- Third-party training providers like the Cloud Guru guys mentioned in the OP
- Local providers (there are many; we're in a big city with an Amazon satellite office)

The developer in question is mostly concerned with DevOps questions rather than developing actual applications; we're already using AWS Lambda services (which this guy has set up) but we're probably going to do some S3 CDN style work in combo with that. So I guess the question is, how to identify the courses/training with the most bang for time spent? Find a cert that describes his DevOps concerns and work backwards from that? (e.g. AWS Certified DevOps Engineer)
|
# ? Feb 26, 2019 23:41 |
|
Docjowles posted:Ok I have my own Route53 question. We were hoping to switch from managing our own internal resolvers to using route53. We created a new private hosted zone with like 1000 records in it using Terraform. It took ages to complete which I kind of expected. But it also takes ages to do a subsequent plan/apply even if there are no changes. Like 15 minutes per no-op run.

Without a rewrite of the provider to be smarter, it's going to be bound by TF refreshing the state of all of those records every time you do a plan. There's also a risk of hitting API call limits with AWS itself. You're probably better off doing something that takes the git diff on push/merge and translates that into R53 API calls, plus something to reconcile the whole thing against a master file should that process inevitably fail.
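A minimal sketch of the diff-and-translate idea: compare the old and new zone contents and emit only the records that actually changed, as a Route 53 ChangeBatch (the shape `change_resource_record_sets` expects). The flat `name -> (type, ttl, values)` zone representation here is an assumption; adapt it to however your zone file is parsed:

```python
def zone_diff(old, new):
    """Compare two zone dicts {name: (rtype, ttl, [values])} and build
    a Route 53 ChangeBatch containing only the records that changed."""
    def rrset(name, rtype, ttl, values):
        return {"Name": name, "Type": rtype, "TTL": ttl,
                "ResourceRecords": [{"Value": v} for v in values]}

    changes = []
    for name, (rtype, ttl, values) in sorted(new.items()):
        if old.get(name) != (rtype, ttl, values):  # new or modified record
            changes.append({"Action": "UPSERT",
                            "ResourceRecordSet": rrset(name, rtype, ttl, values)})
    for name, (rtype, ttl, values) in sorted(old.items()):
        if name not in new:  # record removed from the zone file
            changes.append({"Action": "DELETE",
                            "ResourceRecordSet": rrset(name, rtype, ttl, values)})
    return {"Changes": changes}

old = {"a.example.internal.": ("A", 300, ["10.0.0.1"]),
       "b.example.internal.": ("A", 300, ["10.0.0.2"])}
new = {"a.example.internal.": ("A", 300, ["10.0.0.9"])}
batch = zone_diff(old, new)
print([c["Action"] for c in batch["Changes"]])  # ['UPSERT', 'DELETE']
```

With boto3 the batch would go to `route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch)`, which applies the whole batch atomically.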
|
# ? Feb 26, 2019 23:48 |
|
`terraform plan -refresh=false` if you're sure the state is up to date.

No idea how your zone looks or why you need more than a thousand records in it, but you could always split up management of the zone into multiple state files and only apply changes to that subset. That's how we went from 20m planning times to ~2-3m.

vanity slug fucked around with this message at 00:11 on Feb 27, 2019 |
# ? Feb 27, 2019 00:09 |
|
Scaramouche posted:Apologies if this has come up before; I asked the above because some coworkers had asked my opinion of the AWSome day event and I didn't really have one so I passed along the info I got here.

AWS partners are pushing something called the Well-Architected Framework, which might be worth having a look at. From what I can tell, these are day sessions delivered by consulting partners that don't cost anything; you get an opportunity to talk 1:1 about the design of your application, which will probably help with the decision on what areas to cover.
|
# ? Feb 27, 2019 00:14 |
|
What are people's current experiences with workload automation across AWS and in mixed environments? My workplace is leaning more into AWS but we have a number of non-AWS licensed products, servers, and other services and I'd like to know how others control these. All I can really see for AWS is Batch or some vendor products, but maybe I'm missing something?
|
# ? Feb 27, 2019 21:46 |
|
PierreTheMime posted:What are people's current experiences with workload automation across AWS and in mixed environments? My workplace is leaning more into AWS but we have a number of non-AWS licensed products, servers, and other services and I'd like to know how others control these. All I can really see for AWS is Batch or some vendor products, but maybe I'm missing something? We're .NET based where I'm at right now, and in the past I've done similar. I call the S3/AWS "last mile" in that sense, where a lot of the heavy lifting occurs outside of AWS. So I might generate a big XML file of product updates, but only host it on an S3 bucket for subscribers to pick up, the images for those products in the same. Some of the selenium/build tests live there now, and we've moved our REST API logs to AWS as well. In our circumstance the "mix" is basically as a destination for an end product created elsewhere.
|
# ? Feb 27, 2019 22:45 |
|
Cancelbot posted:Architectural one - We had an argument today of whether or not we should have environment based transit. For context: the developers have to provision a "mainline", "staging" and "live" environment by having a VPC for each per region they want to be in (so more often than not teams end up with 6 VPCs for 2 region redundancy) this adds headaches as theoretically it also means a pair of VPN tunnels per VPC-Region if they want to hit our on-premise infrastructure and a hell of a lot of NAT gateways. if you do this please tag the vpc "area: 0"
|
# ? Mar 1, 2019 02:16 |
|
i'm not the most experienced with aws, so forgive me if this is simple, but a friend is having a weird issue. basically, they have two chains set up that go like:

dev chain: route 53 -> api gateway (cname alias) -> api gateway (custom domain) -> api gateway -> api gateway stage (dev) -> api gateway "api" (dev) -> elastic beanstalk (node.js) -> snowflake

prod chain: route 53 -> api gateway (cname alias) -> api gateway (custom domain) -> api gateway -> api gateway stage (prod) -> api gateway "api" (prod) -> elastic beanstalk (node.js) -> snowflake

but for some reason, the prod chain still ends up with requests calling dev. yep, he sees those requests in the elastic beanstalk logs, as well as in snowflake. he did some research in route 53 and the api gateway, and he believes the issue is one of the bolded links in the chains. he even redid those believed-to-be-wrong parts, and yet the prod chain still ends up with requests calling dev.

like i said, i'm not an aws expert, or even all that good at networking, so i'm curious: what do you guys think? where should i look first? ever seen anything like this? my instinct is there's something overriding those portions of the api gateway and directing that one flow into dev, but i'm not sure what that'd be. or even if that's a smart instinct. in any case, my friend is worse than i am at all this stuff, so it's possible he got something very fundamental wrong somewhere. i'm not sure where to start, except for browsing around the api gateway. what he did show me there did look sensible and correct to my intermediary eyes, though. thanks for your input! and i can provide further details if that'd help.
|
# ? Mar 1, 2019 04:21 |
|
found it. looks like dude’s proxy endpoint got hosed
|
# ? Mar 1, 2019 22:48 |
|
CloudFormation's 200-resource limit is a real bummer. I wanted to use CloudFormer to create a CloudFormation template with everything in my us-east-1 region to replicate it in us-west-2, but I have 337 resources. It would be nice if CloudFormer could recognize this and break the resources into multiple nested templates.
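Nested stacks are the standard workaround: a parent template whose resources are `AWS::CloudFormation::Stack` children, each staying under the limit. The partitioning step could be sketched like this, treating the template's Resources section as a plain dict; note this naive chunker ignores Ref/GetAtt cross-references, which in a real template you'd want grouped into the same child stack:

```python
def chunk_resources(resources, limit=200):
    """Partition a template's Resources dict into groups that each fit
    under the per-stack resource limit. Naive: ignores dependencies
    between resources, which should really stay in the same group."""
    names = sorted(resources)
    groups = [names[i:i + limit] for i in range(0, len(names), limit)]
    return [{n: resources[n] for n in group} for group in groups]

# 337 dummy resources, mirroring the count mentioned above.
resources = {f"Resource{i:03}": {"Type": "AWS::SNS::Topic"} for i in range(337)}
nested = chunk_resources(resources)
print([len(g) for g in nested])  # [200, 137]
```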
|
# ? Mar 12, 2019 21:29 |
|
Use Terraform to create the CloudFormation stacks
|
# ? Mar 12, 2019 23:22 |
|
For anyone who has gone through the AWS certification process, about how much studying did it take you? I see that it basically goes Foundational, Associate, Professional, with subcategories along the way. Would it be unrealistic to try to get an Associate cert in a month? Did you do the live trainings or were the self-paced materials enough? The recommended experience is 6 months for Foundational, 1 year for Associate, 2 years for Professional. Did you find that to be accurate or is it something you can pick up with only a surface level of AWS experience?
|
# ? Mar 12, 2019 23:38 |
|
Internet Explorer posted:For anyone who has gone through the AWS certification process, about how much studying did it take you? I see that it basically goes Foundational, Associate, Professional, with subcategories along the way. Would it be unrealistic to try to get an Associate cert in a month?

I just took and passed the Certified Solutions Architect - Associate. I used a combination of A Cloud Guru video courses and Whizlabs practice exams to study. It's hard to say how many days I studied, as life and work were so busy it was hard to dedicate a month continuously. I think if you focused every day for a month on studying with those two, along with reading the white papers and doing the practice exercises, you have a decent chance of passing. It really just depends how quickly you learn and how well you retain knowledge.

The exam was hard, and it isn't the type of exam where you can just memorize certain things and be good. You actually have to know what each AWS service does and how it can be used in conjunction with other services to solve an issue in the best and most cost-effective manner.
|
# ? Mar 13, 2019 04:23 |
|
That's awesome info, thank you. I may be in between jobs for a bit while doing some consulting on the side and may take the opportunity to pick up a cert or two. I always end up too busy with work and it'd be nice to do some learning for a bit.
|
# ? Mar 13, 2019 05:16 |
|
I sat for and passed the SA Pro cert without studying at all, but I basically teach this stuff to others all the time. That said, anyone with a rough overview of the AWS core services should be able to pass the associate after no more than a month of study from online materials and some practice tests. The associate is meant to be more of an initiation to the subject matter than a hurdle to cross. The Pro certs require more in-depth knowledge.

Protip: If the question mentions "real time" anywhere in the paragraph of words, stop reading any further. The answer is Kinesis. It is always Kinesis.
|
# ? Mar 13, 2019 07:19 |