Scrapez
Feb 27, 2004

I'm trying to get my head around whether I need a Route 53 outbound endpoint, an inbound endpoint, or both.

I have two VPCs. Each has its own DHCP option set associated, with DNS hostnames and DNS resolution enabled.
VPC 1 has option set production.instance and VPC 2 has option set scrapez.com
I have a VPC peering connection setup between them.

I want to be able to resolve the records in my production.instance. hosted zone from my instance in the scrapez.com VPC.

i.e.:
I need Instance 1 (ip-10-0-0-200.scrapez.com) to be able to resolve the SRV record _sip._udp.pool.production.instance., which has underlying values of ip-10-100-73-19.production.instance. and ip-10-100-96-92.production.instance.

Is a Route 53 outbound endpoint from the originating VPC the way to accomplish this? Or an inbound endpoint to the target VPC? Or other?

Docjowles
Apr 9, 2009

You may be overthinking it. I think you just need to do this?

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs.html
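
If you want to script it, I believe the association is one CLI call per VPC (zone ID and VPC ID below are placeholders):

```bash
# associate the second VPC with the existing private hosted zone;
# repeat for each VPC that should be able to resolve the zone
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z1EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc123example
```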

Scrapez
Feb 27, 2004


I associated both VPCs with the private hosted zone production.instance but the following query still fails from the instances in the source VPC: nslookup -type=SRV _sip._udp.pool.production.instance

Server: 10.0.0.2
Address: 10.0.0.2#53

** server can't find _sip._udp.bvr.production.instance: NXDOMAIN

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

freeasinbeer posted:

Stupid question: do I need to sign up my sub accounts for enterprise? It’s the first time I’ve set one up in a while, and as far as I know our dedicated spend contract should just have that roll down, right?


I guess I could bug our TAM, but :effort:

If your payer is on enterprise, your subs are on enterprise as well. But your subs also contribute to the total spend attributed to your payer, so your enterprise support costs might go up when you link them.

It takes a support case to make it so: “hey. Please flip the bits that turn on ES for all accounts linked to our payer. Thank you.”

Docjowles
Apr 9, 2009

Scrapez posted:

I associated both VPCs with the private hosted zone production.instance but the following query still fails from the instances in the source VPC: nslookup -type=SRV _sip._udp.pool.production.instance

Server: 10.0.0.2
Address: 10.0.0.2#53

** server can't find _sip._udp.bvr.production.instance: NXDOMAIN

Just to ask the super dummy question, that same query works fine within the other VPC?

Scrapez
Feb 27, 2004

Docjowles posted:

Just to ask the super dummy question, that same query works fine within the other VPC?

Not dumb. I appreciate the feedback. It does work within the VPC.

Edit: Follow-up: associating the VPC with the private hosted zone DID resolve the issue. I just still had inbound and outbound endpoints set up that were breaking things. :negative:

Thanks, Docjowles!

Scrapez fucked around with this message at 15:43 on Feb 13, 2019

Scrapez
Feb 27, 2004

When performing a describe-network-interfaces, is there a way to do wildcards in the description filter to return all matching ENIs?

For example, I have two ENIs with descriptions of: TestAdapter0 and TestAdapter1

Is there a way to do something like `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter*"`

Edit: Gosh I'm dumb...that does work. I just wasn't putting the double quotes around the Values.
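
For anyone else who trips on this: without the quotes, the shell can glob-expand the `*` before the CLI ever sees it. Roughly:

```bash
# quoted: the literal wildcard reaches the EC2 API, which does the matching
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=TestAdapter*"

# unquoted, the shell may expand TestAdapter* against local filenames
# before aws even runs, so the filter silently stops matching what you expect
```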

Virigoth
Apr 28, 2009

Corona rules everything around me
C.R.E.A.M. get the virus
In the ICU y'all......



Agrikk, what’s the deal with these “Senior DevOps Consultant” jobs I’ve had land in my inbox? Two so far this week. Is this a new professional services offering spinning up to help people do the DevOps? It’s almost a perfect match to the DevOps Enablement initiative I’ve been working on at my company for 6 months, but I’m guessing the pay and perks are better.

Scrapez
Feb 27, 2004

Is there a cost associated with Elastic Network Interfaces? I can't find anything that talks about pricing, so I assume they're free to use, but I can't find anything definitive.

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost

Scrapez posted:

Is there a cost associated with Elastic Network Interfaces? I can't find anything that talks about pricing, so I assume they're free to use, but I can't find anything definitive.

You pay for the EC2 instance that has the ENI, but there's no additional charge unless you attach an EIP. Even then, the EIP is free as long as it stays attached to a running instance; you only get billed while it sits unassociated.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Is anyone going to one of the AWSome Day events? Is it worth going to? I've been using AWS off and on for the last couple of years, mostly as a repository for image CDN and/or EDI-type file storage (mostly through hand-cribbed .NET code and automating CloudBerry backups), but have not really interacted with Amazon formally about it.

Thanks Ants
May 21, 2004

#essereFerrari


If you've used AWS for anything then you don't really get much out of it. It's a very high-level overview of the platform, with emphasis on how you still need to do security yourself, and a couple of quick demos.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Virigoth posted:

Agrikk, what’s the deal with these “Senior DevOps Consultant” jobs I’ve had land in my inbox? Two so far this week. Is this a new professional services offering spinning up to help people do the DevOps? It’s almost a perfect match to the DevOps Enablement initiative I’ve been working on at my company for 6 months, but I’m guessing the pay and perks are better.

That is correct. The DevOps Consultant is part of ProServe and is a combination of hands-on-keyboard work and instruction.

Scrapez posted:

When performing a describe-network-interfaces, is there a way to do wildcards in the description filter to return all matching ENIs?

For example, I have two ENIs with descriptions of: TestAdapter0 and TestAdapter1

Is there a way to do something like `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter*"`

Edit: Gosh I'm dumb...that does work. I just wasn't putting the double quotes around the Values.

FYI-

Using a wildcard for the filter may result in multiple API calls being made in quick succession, which can trigger RequestLimitExceeded errors depending on the number of entries returned, other filters, and other API activity in your account.

I'm not saying that it will happen, but it could happen depending on your use case.

Agrikk fucked around with this message at 07:16 on Feb 20, 2019

Scrapez
Feb 27, 2004

Agrikk posted:

FYI-

Using a wildcard for the filter may result in multiple API calls being made in quick succession, which can trigger RequestLimitExceeded errors depending on the number of entries returned, other filters, and other API activity in your account.

I'm not saying that it will happen, but it could happen depending on your use case.

So would it be better to set the description of all the ENIs to the same string (TestAdapter) and then instead do the query as `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter"`?

There's really no reason I need to do it as a wildcard. I had just planned to set descriptions as TestAdapter1, TestAdapter2, etc., but it isn't really a requirement to do that.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Scrapez posted:

So would it be better to set the description of all the ENIs to the same string (TestAdapter) and then instead do the query as `aws ec2 describe-network-interfaces --filters Name=description,Values="TestAdapter"`?

There's really no reason I need to do it as a wildcard. I had just planned to set descriptions as TestAdapter1, TestAdapter2, etc., but it isn't really a requirement to do that.

That is correct. If this process is going to be anything other than a one-off, you should probably build a tagging scheme and do your search based on tags.
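
As a sketch (the Role tag key and value here are just examples):

```bash
# tag each ENI once at creation time...
aws ec2 create-tags --resources eni-0abc123example \
  --tags Key=Role,Value=TestAdapter

# ...then pull the whole group back by tag, leaving descriptions
# free to stay unique per adapter
aws ec2 describe-network-interfaces \
  --filters "Name=tag:Role,Values=TestAdapter"
```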

Scrapez
Feb 27, 2004

Agrikk posted:

That is correct. If this process is going to be anything other than a one-off, you should probably build a tagging scheme and do your search based on tags.

Right. Searching for them via tags does make much more sense. Then I can add unique descriptions. Thank you!

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
Does LB WAF consider X-Forwarded-For or true-client-ip for its rate limiting, or is it just remote_addr and a counter threshold?

Docjowles
Apr 9, 2009

Scrapez posted:

Not dumb. I appreciate the feedback. It does work within the VPC.

Edit: Follow-up: associating the VPC with the private hosted zone DID resolve the issue. I just still had inbound and outbound endpoints set up that were breaking things. :negative:

Thanks, Docjowles!

Oh cool, I missed this post and was wondering what the heck was still wrong with the setup. Glad you got it working!

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Architectural one - We had an argument today about whether or not we should have environment-based transit. For context: the developers have to provision "mainline", "staging" and "live" environments by having a VPC for each, per region they want to be in (so more often than not teams end up with 6 VPCs for 2-region redundancy). This adds headaches, as it theoretically also means a pair of VPN tunnels per VPC per region if they want to hit our on-premise infrastructure, and a hell of a lot of NAT gateways.

We're partway to a solution by having Transit VPCs span everywhere so everyone can share the NAT, Internet, and VPN tunnels through one account, but would you go a step further and split the transit into a "mainline", "staging", "live" set of transit gateways? In the end it's all the same address space, and due to how hosed our on-premise network already is, QA is already visible to Live and vice versa, save for security groups locking this down; plus there's the risk of someone just smashing the transits all together in their account and giving a giant middle finger to routing.

However, it could mean we get a bit closer to some network sanity by actually segregating poo poo and allowing the networking team to try things out without bringing every conceivable environment down. One argument against this was that our switching & routing on premises isn't segregated physically, so why would we do it in AWS?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Cancelbot posted:

Architectural one - We had an argument today about whether or not we should have environment-based transit. For context: the developers have to provision "mainline", "staging" and "live" environments by having a VPC for each, per region they want to be in (so more often than not teams end up with 6 VPCs for 2-region redundancy). This adds headaches, as it theoretically also means a pair of VPN tunnels per VPC per region if they want to hit our on-premise infrastructure, and a hell of a lot of NAT gateways.

We're partway to a solution by having Transit VPCs span everywhere so everyone can share the NAT, Internet, and VPN tunnels through one account, but would you go a step further and split the transit into a "mainline", "staging", "live" set of transit gateways? In the end it's all the same address space, and due to how hosed our on-premise network already is, QA is already visible to Live and vice versa, save for security groups locking this down; plus there's the risk of someone just smashing the transits all together in their account and giving a giant middle finger to routing.

However, it could mean we get a bit closer to some network sanity by actually segregating poo poo and allowing the networking team to try things out without bringing every conceivable environment down. One argument against this was that our switching & routing on premises isn't segregated physically, so why would we do it in AWS?

Like everything else in AWS, "it depends."

What need does segregation solve? If your company got burned somehow by unsegregated networking, then yeah, culturally you might want to go the three transit networks route.

But other than that, you have to ask yourself what you gain by adding triple the complexity to all of your interconnects.

Cancelbot
Nov 22, 2006

Canceling spam since 1928

We haven't been burned by it at all. I think there's a desire to start from a clean slate, as our on-premises network doesn't have that. But in reality there's very little risk of this messing up unless someone did some lovely Terraform that was missed by the review processes we already have in place.

Docjowles
Apr 9, 2009

Ok, I have my own Route53 question. We were hoping to switch from managing our own internal resolvers to using Route53. We created a new private hosted zone with like 1000 records in it using Terraform. It took ages to complete, which I kind of expected. But it also takes ages to do a subsequent plan/apply even if there are no changes. Like 15 minutes per no-op run. Which is, uh, not going to fly for a frequently changing zone.

Anyone found a way to reasonably manage large route53 zones with terraform?

We can come up with other solutions, including just keeping our own resolvers. Or writing a smarter script that calls the API directly and only handles records that actually need to change. It's just super nice to have everything in Terraform for a variety of reasons. But if it's the wrong tool for this job, then oh well.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
For S3, is there a preferred standard for object storage while keeping original location information? I'm planning on flattening my full path names into keys (replacing the system name with a shortened reference name), which will make regenerating the files from objects as simple as performing the reverse. Am I going to run into any trouble this way? I'm sure I could store the old filepath within metadata, but that seems just as messy.

Example:
\\Client1Server\Test\ExtraFolder\File1.csv becomes s3://Bucket/Client1/Test/ExtraFolder/File1.csv
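
Concretely, the round-trip I have in mind is something like this (Client1Server -> Client1 being the shortened reference name; one-direction sketch in shell, and the reverse just inverts both steps):

```bash
# UNC path in, S3 key out: swap the server name for its short alias,
# then flip the path separators
unc='\\Client1Server\Test\ExtraFolder\File1.csv'
key=$(printf '%s' "$unc" | sed -e 's/^\\\\Client1Server/Client1/' -e 's/\\/\//g')
echo "$key"   # -> Client1/Test/ExtraFolder/File1.csv
```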

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

PierreTheMime posted:

For S3, is there a preferred standard for object storage while keeping original location information? I'm planning on flattening my full path names into keys (replacing the system name with a shortened reference name), which will make regenerating the files from objects as simple as performing the reverse. Am I going to run into any trouble this way? I'm sure I could store the old filepath within metadata, but that seems just as messy.

Example:
\\Client1Server\Test\ExtraFolder\File1.csv becomes s3://Bucket/Client1/Test/ExtraFolder/File1.csv

This is fine.

I have clients using PowerShell scripts and robocopy to do a shake-n-bake nightly backup job:

- Robocopy generates a list of files with the archive bit set.
- Another command splits the list into 8 lists (one for each core on the server).
- Another command launches 8 `aws s3 cp` commands to push the files to the bucket and reset the archive bit.
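
In rough shell terms, the shape is something like this (bucket and list-file names invented; the real thing is PowerShell-driven):

```bash
# split the robocopy-generated file list into 8 chunks, one per core
split -n l/8 filelist.txt chunk_

# push each chunk to S3 as its own background job, then wait for all 8
for f in chunk_*; do
  while IFS= read -r path; do
    aws s3 cp "$path" "s3://my-backup-bucket/${path#./}"
  done < "$f" &
done
wait
```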

Note that it doesn’t reflect any deletes, renames, or folder moves, so your bucket will end up with a lot of leftover artifacts.

Agrikk fucked around with this message at 21:35 on Feb 26, 2019

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Agrikk posted:

Note that it doesn’t reflect any deletes, renames, or folder moves, so your bucket will end up with a lot of leftover artifacts.

Using "s3 sync --delete" will remove files in the bucket that no longer exist in the source.
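
e.g. (local path and bucket name are placeholders):

```bash
# mirror the local tree into the bucket and prune remote objects
# whose source files no longer exist locally
aws s3 sync ./backup "s3://my-backup-bucket/backup" --delete
```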

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Scaramouche posted:

Is anyone going to one of the AWSome Day events? Is it worth going to? I've been using AWS off and on for the last couple of years, mostly as a repository for image CDN and/or EDI-type file storage (mostly through hand-cribbed .NET code and automating CloudBerry backups), but have not really interacted with Amazon formally about it.

Apologies if this has come up before; I asked the above because some coworkers had asked my opinion of the AWSome Day event, and I didn't really have one, so I passed along the info I got here.

However, one of them has come back with another question, which is about courses and training. I don't think his interest is in expanding his certificate collection, but rather in getting some real hands-on training. Based on what I've gathered by reading the OP and some randomly selected pages of the thread, there are a couple of options:
- Amazon directly (do they do this, or just list certified training providers?)
- Third party training providers like the Cloud Guru guys mentioned in the OP
- Local providers (there are many, we're in a big city with an Amazon satellite office)

The developer in question is mostly concerned with DevOps questions rather than developing actual applications; we're already using AWS Lambda services (which this guy has set up), but we're probably going to do some S3 CDN-style work in combo with that. So I guess the question is, how do we identify the courses/training with the most bang for time spent? Find a cert that describes his DevOps concerns and work backwards from that (e.g. AWS Certified DevOps Engineer)?

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Docjowles posted:

Ok, I have my own Route53 question. We were hoping to switch from managing our own internal resolvers to using Route53. We created a new private hosted zone with like 1000 records in it using Terraform. It took ages to complete, which I kind of expected. But it also takes ages to do a subsequent plan/apply even if there are no changes. Like 15 minutes per no-op run. Which is, uh, not going to fly for a frequently changing zone.

Anyone found a way to reasonably manage large route53 zones with terraform?

We can come up with other solutions, including just keeping our own resolvers. Or writing a smarter script that calls the API directly and only handles records that actually need to change. It's just super nice to have everything in Terraform for a variety of reasons. But if it's the wrong tool for this job, then oh well.

Without a rewrite of the provider to be smarter, it's going to be bound by TF refreshing the state of all of those records every time you do a plan. There's also a risk of hitting API call limits with AWS itself. You're probably better off doing something that takes the git diff on push/merge and translates that into R53 API calls. And something to reconcile the whole thing against a master file should that process inevitably fail :v:
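
As a sketch of that shape (zone ID, repo layout, and the diff-to-change-batch script are all invented):

```bash
# figure out which record files changed on this push...
git diff --name-only HEAD~1 HEAD -- records/ > changed.txt

# ...have your own script turn that into a change batch of
# UPSERT/DELETE entries, then submit the whole thing in one call
./build-change-batch.sh changed.txt > change-batch.json
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch file://change-batch.json
```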

vanity slug
Jul 20, 2010

`terraform plan -refresh=false`

If you're sure the state is up-to-date :D

No idea how your zone looks or why you need more than a thousand records in it, but you could always split up management of the zone into multiple state files and only apply changes to that subset. That's how we went from 20m planning times to ~2-3m.
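
i.e. something like this per slice, once the state is split (directory names made up):

```bash
# one Terraform root (and state file) per slice of the zone, so a
# plan only refreshes the records that slice owns
for slice in records-app records-infra records-legacy; do
  (cd "$slice" && terraform plan -refresh=false)
done
```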

vanity slug fucked around with this message at 00:11 on Feb 27, 2019

Thanks Ants
May 21, 2004

#essereFerrari


Scaramouche posted:

Apologies if this has come up before; I asked the above because some coworkers had asked my opinion of the AWSome Day event, and I didn't really have one, so I passed along the info I got here.

However, one of them has come back with another question, which is about courses and training. I don't think his interest is in expanding his certificate collection, but rather in getting some real hands-on training. Based on what I've gathered by reading the OP and some randomly selected pages of the thread, there are a couple of options:
- Amazon directly (do they do this, or just list certified training providers?)
- Third party training providers like the Cloud Guru guys mentioned in the OP
- Local providers (there are many, we're in a big city with an Amazon satellite office)

The developer in question is mostly concerned with DevOps questions rather than developing actual applications; we're already using AWS Lambda services (which this guy has set up), but we're probably going to do some S3 CDN-style work in combo with that. So I guess the question is, how do we identify the courses/training with the most bang for time spent? Find a cert that describes his DevOps concerns and work backwards from that (e.g. AWS Certified DevOps Engineer)?

AWS partners are pushing something called the Well-Architected Framework, which might be worth having a look at. From what I can tell they are days delivered by consulting partners and don't cost anything; you get an opportunity to talk 1:1 about the design of your application, which will probably help with the decision on what areas to cover.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
What are people's current experiences with workload automation across AWS and in mixed environments? My workplace is leaning more into AWS, but we have a number of non-AWS licensed products, servers, and other services, and I'd like to know how others control these. All I can really see for AWS is Batch or some vendor products, but maybe I'm missing something?

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

PierreTheMime posted:

What are people's current experiences with workload automation across AWS and in mixed environments? My workplace is leaning more into AWS, but we have a number of non-AWS licensed products, servers, and other services, and I'd like to know how others control these. All I can really see for AWS is Batch or some vendor products, but maybe I'm missing something?

We're .NET-based where I'm at right now, and in the past I've done similar. I call S3/AWS the "last mile" in that sense, where a lot of the heavy lifting occurs outside of AWS. So I might generate a big XML file of product updates but only host it on an S3 bucket for subscribers to pick up, and the images for those products the same way. Some of the Selenium/build tests live there now, and we've moved our REST API logs to AWS as well. In our circumstance the "mix" is basically as a destination for an end product created elsewhere.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

Cancelbot posted:

Architectural one - We had an argument today about whether or not we should have environment-based transit. For context: the developers have to provision "mainline", "staging" and "live" environments by having a VPC for each, per region they want to be in (so more often than not teams end up with 6 VPCs for 2-region redundancy). This adds headaches, as it theoretically also means a pair of VPN tunnels per VPC per region if they want to hit our on-premise infrastructure, and a hell of a lot of NAT gateways.

We're partway to a solution by having Transit VPCs span everywhere so everyone can share the NAT, Internet, and VPN tunnels through one account, but would you go a step further and split the transit into a "mainline", "staging", "live" set of transit gateways? In the end it's all the same address space, and due to how hosed our on-premise network already is, QA is already visible to Live and vice versa, save for security groups locking this down; plus there's the risk of someone just smashing the transits all together in their account and giving a giant middle finger to routing.

However, it could mean we get a bit closer to some network sanity by actually segregating poo poo and allowing the networking team to try things out without bringing every conceivable environment down. One argument against this was that our switching & routing on premises isn't segregated physically, so why would we do it in AWS?

if you do this please tag the vpc "area: 0"

abelwingnut
Dec 23, 2002


i'm not the most experienced with aws, so forgive me if this is simple, but a friend is having a weird issue.

basically, they have two chains set up that go like:

dev chain:

route 53 -> api gateway (cname alias) -> api gateway (custom domain) -> api gateway -> api gateway stage (dev) -> api gateway "api" (dev) -> elastic beanstalk (node.js) -> snowflake

prod chain:

route 53 -> api gateway (cname alias) -> api gateway (custom domain) -> api gateway -> api gateway stage (prod) -> api gateway "api" (prod) -> elastic beanstalk (node.js) -> snowflake

but for some reason, the prod chain still ends up with requests calling dev. yep, he sees those requests in the elastic beanstalk logs, as well as in snowflake. he did some research in route 53 and the api gateway, and he believes the issue is one of the bolded links in the chains. he even redid those believed-to-be-wrong parts, and yet the prod chain still ends up with requests calling dev.

like i said, i'm not an aws expert, or even all that good at networking, so i'm curious: what do you guys think? where should i look first? ever seen anything like this? my instinct is there's something overriding those portions of the api gateway and directing that one flow into dev, but i'm not sure what that'd be. or even if that's a smart instinct.

in any case, my friend is worse than i am at all this stuff, so it's possible he got something very fundamental wrong somewhere. i'm not sure where to start, except for browsing around the api gateway. what he did show me there did look sensible and correct to my intermediate eyes, though.

thanks for your input! and i can provide further details if that'd help.

abelwingnut
Dec 23, 2002


found it. looks like dude’s proxy endpoint got hosed
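
for anyone chasing something similar: dumping the custom domain's base path mappings is a quick way to see which api + stage a hostname actually routes to (domain below is a placeholder):

```bash
# list every base path on the custom domain and the API/stage it targets
aws apigateway get-base-path-mappings --domain-name api.example.com
```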

Scrapez
Feb 27, 2004

CloudFormation's 200-resource limit is a real bummer. I wanted to use CloudFormer to create a CloudFormation template with everything in my us-east-1 region to replicate it in us-west-2, but I have 337 resources. It would be nice if CloudFormer could recognize this and break the resources into multiple nested templates.

vanity slug
Jul 20, 2010

Use Terraform to create the CloudFormation stacks :v:

Internet Explorer
Jun 1, 2005

For anyone who has gone through the AWS certification process, about how much studying did it take you? I see that it basically goes Foundational, Associate, Professional, with subcategories along the way. Would it be unrealistic to try to get an Associate cert in a month? Did you do the live trainings or were the self-paced materials enough? The recommended experience is 6 months for Foundational, 1 year for Associate, 2 years for Professional. Did you find that to be accurate or is it something you can pick up with only a surface level of AWS experience?

Scrapez
Feb 27, 2004

Internet Explorer posted:

For anyone who has gone through the AWS certification process, about how much studying did it take you? I see that it basically goes Foundational, Associate, Professional, with subcategories along the way. Would it be unrealistic to try to get an Associate cert in a month? Did you do the live trainings or were the self-paced materials enough? The recommended experience is 6 months for Foundational, 1 year for Associate, 2 years for Professional. Did you find that to be accurate or is it something you can pick up with only a surface level of AWS experience?

I just took and passed the Certified Solutions Architect Associate.

I used a combination of A Cloud Guru video courses and Whizlabs practice exams to study. It's hard to say how many days I studied, as life and work were so busy it was hard to dedicate a month continuously.

I think if you focused on studying every day for a month with those two, along with reading the white papers and doing the practice exercises, you'd have a decent chance of passing. It really just depends how quickly you learn and how well you retain knowledge.

The exam was hard, and it isn't the type of exam where you can just memorize certain things and be good. You actually have to know what each AWS service does and how it can be used in conjunction with other services to solve an issue in the best and most cost-effective manner.

Internet Explorer
Jun 1, 2005

That's awesome info, thank you. I may be in between jobs for a bit while doing some consulting on the side and may take the opportunity to pick up a cert or two. I always end up too busy with work and it'd be nice to do some learning for a bit.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I sat for and passed the SA Pro cert without studying at all, but I basically teach this stuff to others all the time.

That said, anyone with a rough overview of the AWS core services should be able to pass the associate after no more than a month of study from online materials and some practice tests. The associate is meant to be more of an introduction to the subject matter than a hurdle to cross.

The Pro certs require a more in-depth knowledge.


Protip: If the question mentions "real time" anywhere in the paragraph of words, stop reading any further. The answer is Kinesis. It is always Kinesis. :D
