Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.
Include whatever library in the zip file for your Lambda and you can use it in your function.

Instructions and randomly googled libraries...
Node: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-create-deployment-pkg.html
https://www.npmjs.com/package/tiff2pdf
Python: https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
https://pypi.org/project/img2pdf/

You'll be passed the S3 location of the new object in the event data; then you pull the object, convert it, and put it back. I haven't used those libraries, but if whatever you use wants to write the file locally, you get 512 MB in /tmp.
Example of Lambda/S3 image processing: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
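A minimal sketch of that flow, assuming the Python runtime with img2pdf bundled in the deployment zip (the bucket layout, output prefix, and helper names here are invented):

```python
import os

def output_key(key):
    """Map an input .tif key to a .pdf key (pure helper, easy to test)."""
    base, _ext = os.path.splitext(key)
    return base + ".pdf"

def handler(event, context):
    # boto3 and img2pdf are imported lazily so the helper above
    # can be exercised without the AWS SDK installed
    import boto3
    import img2pdf

    # S3 put events carry the bucket and key in the Records list
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # /tmp is the only writable disk in Lambda
    src = "/tmp/input.tif"
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, src)

    with open("/tmp/output.pdf", "wb") as f:
        f.write(img2pdf.convert(src))

    # Write under a different prefix so the put doesn't re-trigger this function
    s3.upload_file("/tmp/output.pdf", bucket, "converted/" + output_key(key))
```

The "converted/" prefix matters: if the trigger watches the whole bucket and you write the PDF back to the same place, you get an infinite invocation loop.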


Mind_Taker
May 7, 2007



Arzakon posted:

Include whatever library in the zip file for your Lambda and you can use it in your function.

Instructions and randomly googled libraries...
Node: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-create-deployment-pkg.html
https://www.npmjs.com/package/tiff2pdf
Python: https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
https://pypi.org/project/img2pdf/

You'll be passed the S3 location of the new object in the event data; then you pull the object, convert it, and put it back. I haven't used those libraries, but if whatever you use wants to write the file locally, you get 512 MB in /tmp.
Example of Lambda/S3 image processing: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html


very stable genius posted:

If you're familiar with python it has a library called img2pdf that you can download with pip and package into a lambda function. It's fairly straightforward.

Thank you both for these answers. This looks pretty straightforward!

SurgicalOntologist
Jun 17, 2004

SurgicalOntologist posted:

I've got a hobbyist script I'd like to put in the cloud and not sure what the cheapest option will be. Pretty similar to mearn's stock market backtesting actually.

I have a PSQL database in RDS (free tier). I currently manage it with an SQL-Alchemy based client library I run on my laptop.

The operation I'm doing is essentially specified by one row in a table in the database. So, the function loads an instance from that row, then from that instance runs a computation and stores the result in another table. I want to run this as many times as possible for each row. The function has ~30s startup time then it can run about 10 per second. Each computation is completely independent, so it seems perfect for a spot instance. I would just run db.commit() every so often, doesn't matter if I get preempted.

Ideally I'd have some kind of manager so when a new worker comes on it chooses which row to start working on. Or I could predetermine a task queue based on a set target of computations per row.

Not sure if Lambda works because I need an apt package (glpk-utils). In that case, what are my options, given:
  • I want to spend as little as possible...
  • ...although assuming my spend is per computation, I'm willing to spend faster to get it done faster (i.e. more instances or nodes)
  • I need 2GB instances (on my machine it's under 1GB, but I tried it on a free tier t2.small and ran out of memory)
  • AWS or GCP
  • I'm willing to try something overkill for a hobbyist project (e.g. Kubernetes??) as a learning opportunity, assuming it fits the project well

I mean I can easily put this on a spot EC2 or GCE instance but is there something in either ecosystem that's well suited for the job management aspect?

Probably no one cares but I got this working on GCE pre-emptible instances and it's pretty sweet! Had to learn docker and some other concepts, but now I get to watch my results roll in. Instead of using some sort of master node, I'm just keeping track in the database of what's been worked on most recently and having each worker choose its own row at startup.

GCP is now recommending I increase the memory size, but I'm not seeing any memory usage beyond 400MB in docker stats, and I provisioned 2 GB. Everything seems to be running fine, so I assume I'm safe to ignore the recommendation.
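That claim-a-row-at-startup pattern, sketched with stdlib sqlite3 so it runs anywhere (the real setup was Postgres via SQLAlchemy; the table and column names here are invented):

```python
import sqlite3
import time

def claim_next_row(conn):
    """Stamp and return the least-recently-worked row.

    On Postgres you would add FOR UPDATE SKIP LOCKED to the SELECT so two
    workers starting at once can't claim the same row; sqlite serializes
    writers anyway, which is enough for a sketch.
    """
    with conn:  # one transaction for the select-and-stamp
        row = conn.execute(
            "SELECT id FROM settings ORDER BY last_worked_at LIMIT 1"
        ).fetchone()
        conn.execute(
            "UPDATE settings SET last_worked_at = ? WHERE id = ?",
            (time.time(), row[0]),
        )
    return row[0]
```

Because the "queue" is just a timestamp column, a preempted worker loses nothing: the row it was on simply becomes eligible again once it ages past the others.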

Mind_Taker
May 7, 2007



Mind_Taker posted:

Thank you both for these answers. This looks pretty straightforward!

Update: probably SHOULD have been straightforward, but I was an idiot and it took me like 3 hours to figure out that I was using the wrong version of Python as my Lambda runtime environment. After I "debugged" that, everything went smoothly.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

SurgicalOntologist posted:

Probably no one cares but I got this working on GCE pre-emptible instances and it's pretty sweet! Had to learn docker and some other concepts, but now I get to watch my results roll in. Instead of using some sort of master node, I'm just keeping track in the database of what's been worked on most recently and having each worker choose its own row at startup.

GCP is now recommending I increase the memory size, but I'm not seeing any memory usage beyond 400MB in docker stats, and I provisioned 2 GB. Everything seems to be running fine, so I assume I'm safe to ignore the recommendation.

This is good work.

I know I am late to the party here, but your master node idea would have introduced a bottleneck, a potential single point of failure, and a risk of data loss.

You have built a perfectly stateless application environment that is scalable and resilient to failure. You, sir, understand how to cloud, and I hope you are working for a company that appreciates that.

Otherwise AWS is always hiring!

jiffypop45
Dec 30, 2011

Do we have positions that need native AWS experience outside of SAs? Most engineering jobs I'm familiar with need Linux/programming/networking skills. I've actually had the misfortune of interviewing people who thought they needed to study AWS for my team, only to not hire them because they didn't study what they should have. To be fair, I don't have that much sympathy for them because the skills are clearly outlined in the job listing, but still.

I'm also probably skewed somewhat since I work C2S/SC2S.

Disclaimer: my own opinion not Amazon's.

Arzakon
Nov 24, 2002

SAs don’t even need prior AWS experience although it doesn’t hurt, especially ramping up. TAM is similar.

Agrikk
Oct 17, 2003

When I started at AWS I had literally zero cloud experience, but I had a ton of virtualization and infra experience. During the interview process I was able to demonstrate an ability to quickly learn new technology, adapt to the changing needs of an organization, dive deep into a problem/technology as needed and not do dumb things more than once.

Tech can be taught. Processes can be taught. Common sense, insight, and a willingness to grow and learn cannot.

jiffypop45
Dec 30, 2011

Agrikk posted:

When I started at AWS I had literally zero cloud experience, but I had a ton of virtualization and infra experience. During the interview process I was able to demonstrate an ability to quickly learn new technology, adapt to the changing needs of an organization, dive deep into a problem/technology as needed and not do dumb things more than once.

Tech can be taught. Processes can be taught. Common sense, insight, and a willingness to grow and learn cannot.

I tell candidates regularly not to be discouraged if they didn't totally ace the tech, as LPs are much more important. However, my program is also different from most and has a very extensive onboarding meant to fill gaps.

SurgicalOntologist
Jun 17, 2004

Agrikk posted:

This is good work.

I know I am late to the party here, but your master node idea would have introduced a bottleneck, a potential single point of failure, and a risk of data loss.

You have built a perfectly stateless application environment that is scalable and resilient to failure. You, sir, understand how to cloud, and I hope you are working for a company that appreciates that.

Otherwise AWS is always hiring!

Wow, thanks! :blush: I'm actually transitioning from academia to being the technical founder of a startup, so I really appreciate the encouragement.

I guess to redirect your hiring comment, any advice for how/when to start thinking about hiring cloud architects/engineers? I.e. if we only have 3 developers we're not getting a dedicated cloud architect, but rather hoping at least one of them has cloud experience or is smart enough to cobble something together for the MVP. On the other hand if we grow to a tech team of 30, I'd hope we'd have a cloud specialist by then.

Of course it depends on the company; for us it will be dealing with incoming data and processing it as it comes in, and serving videos and timeseries data to our frontends. A small number of clients, so massive scaling isn't a major concern, but we'll be receiving large quantities of data from them all at once, so some automation of scaling up and down will be needed.

Part of me thinks it's a situation like "we'll know when we need it" but some people (more business-y types) like to see more specific plans. So I'm trying to think about how to prioritize these more peripheral (no offense) positions... like, would we hire a UX designer before a cloud specialist? Of course that's impossible to answer from the outside, so maybe I can salvage this post into an easier question, like what qualifications should I look for if hiring someone to design our cloud systems? Or, in the smaller-team scenario, how do I assess if someone has meaningful cloud knowledge/experience when it's not their primary role?

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
this is out of my rear end but i base it on the 80/20 rule, aka 4:1

you hire your first four devs and then one person who's a 'backend type' that's also handling the infra poo poo

then you hire your next four devs and your first "devops" person

then you hire your next 8 devs and a pair of SREs

then your next 16 devs and an architect, a security person, a manager, and a jr sre/devops type.

after that... well gently caress if I know, usually you just dive straight into fulltime middle management meeting fest stalemates by then.

StabbinHobo fucked around with this message at 06:11 on Dec 11, 2018

SurgicalOntologist
Jun 17, 2004

Thanks, wasn't expecting such a specific breakdown. 4:1 seems like a reasonable rule of thumb.

StabbinHobo
Oct 18, 2002

keep in mind that only works if the dev side are the first tier for the oncall rotation and they're reasonably self disciplined about "don't add a new loving toy for every project" engineering.

edit: vv 100% agree, aggregate saas and paas solutions as much as possible before you touch iaas

StabbinHobo fucked around with this message at 21:45 on Dec 11, 2018

Doh004
Apr 22, 2007

Mmmmm Donuts...
Let me caveat this first by saying that every situation is different and every business' needs are different. This is by no means gospel, but based off of my experiences:

You might *not* need to go straight into AWS and cloud/devops hires for your startup. People are expensive, and building teams/hiring is incredibly hard - harder than most other things. While your startup/company/product is quickly ideating and discovering itself, you might be better served by a PaaS (like Heroku). I know this is the AWS thread, and I really like AWS, but I figured I'd say it.

Doh004 fucked around with this message at 21:13 on Dec 11, 2018

12 rats tied together
Sep 7, 2006

Definitely recommend not using AWS if you can avoid it.

If you must use AWS I'd also really suggest you start with the managed services like beanstalk, emr, athena, redshift, etc. I've joined a few orgs now where several years of effort have gone into reinventing "basically _____ but worse" and it's always a nightmare mountain of technical debt and team silos.

If you feel like you can't use whatever the managed service is for your use case it's always worth engaging your TAM / support team and confirming your suspicions. Generally I've had good experiences with account management staff being upfront about "yes, x service will not work for your use case at this time, but we have y,z feature requests open and we will keep you updated".

Thanks Ants
May 21, 2004

#essereFerrari


Not specifically AWS related, but I have had a lot of luck at my current job by just keeping a vague eye on a problem that needs to be solved and having a current vendor we work with pop up to roll the fix into a product we already use - rather than migrating between different services for every little problem that comes around, or having billions of different subscriptions with a load of feature overlap.

ChromaticLlama
Sep 2, 2011

I passed my Solutions Architect and SysOps Administrator Associate certs about 6 months back. I used a combination of ACloudGuru and ITPro TV for video learning. The practice tests I got from Whizlabs.com were amazing and I highly recommend them. Ironically, all my work related AWS opportunities fell through so now I'm looking for interesting personal projects to work on.

Agrikk
Oct 17, 2003


12 rats tied together posted:

Definitely recommend not using AWS if you can avoid it.

Yes, please go encourage others to spend business capital on non-differentiated work in a dev shop. Stand up your VMs and your email and storage, and then hire people to manage that stuff in that office of 3-4 developers.

quote:

If you must use AWS I'd also really suggest you start with the managed services like beanstalk, emr, athena, redshift, etc. I've joined a few orgs now where several years of effort have gone into reinventing "basically _____ but worse" and it's always a nightmare mountain of technical debt and team silos.

This is better advice.

quote:

If you feel like you can't use whatever the managed service is for your use case it's always worth engaging your TAM / support team and confirming your suspicions. Generally I've had good experiences with account management staff being upfront about "yes, x service will not work for your use case at this time, but we have y,z feature requests open and we will keep you updated".

This is solid. Never not engage with your AWS account team.

Get yourself a cloud specialist as soon as you can and set the tone of your operations and developers early. You only get one shot to set the tone and pace of your shop, and many a startup has failed simply because it stumbled out of the gate trying to figure out how to implement and execute on its projects.

12 rats tied together
Sep 7, 2006

Agrikk posted:

Yes, please go encourage others to spend business capital on non differentiated work in a dev-shop. Stand up your VMs and your email and storage and then hire people to manage that stuff in that office of 3-4 developers.

To clarify, I'm coming from the mindset that in an ideal situation your company pays for an internet connection and then magically makes money from the internet. All tech is essentially tech debt in some way or another; if you can run your entire business profitably on top of zendesk cloud, g suite email, and google docs/sheets, you should absolutely 100% do that instead of spending a single dime in AWS.

That being said, we are in the cavern of cobol, so this might not have been as obvious an assertion as I'd have liked. If your business must write code that is executed by compute, AWS is easily the best place for it to run unless you have some very specific needs. Just to be clear. :shobon:

Thanks Ants
May 21, 2004



For what it's worth, I took it to mean something along the lines of "buy Gmail rather than some EC2 instances that you then put your own mail server on top of".

A small dev team that I work with on-and-off really needed to be kicked hard to stop seeing AWS as an empty VMware cluster that you just put Linux boxes on.

StabbinHobo
Oct 18, 2002

you were clear rats, agrikk just misfired his hot take pistol

Doh004
Apr 22, 2007


StabbinHobo posted:

you were clear rats, agrikk just misfired his hot take pistol

Okay cool, I thought I was just having trouble understanding his response.

SnatchRabbit
Feb 23, 2006

by sebmojo
Does anyone have a recommendation for a good tutorial for AppSync? I'm trying to put together a management tool for our clients to execute simple tasks on their Oracle enterprise environments. The idea is to have a simple webpage that displays the CloudFormation stacks associated with a given environment and then buttons to, say, reboot all the EC2 instances for said environment, refresh a database, launch a new environment, etc. I realize all this can be done in the AWS console, but we'd like to simplify it for the client. At first I was messing around with CSS, API Gateway and Lambda functions, but I kind of suck at JavaScript programming, so I was peeking around at AppSync. Wondering if there's a good overview video and/or tutorial to see if it will do what I need with the least amount of friction.

Forgall
Oct 16, 2012

by Azathoth
Question: do I have to use CloudFront to set up an S3-hosted website with a custom domain and SSL?

I started with the template posted here: https://rgfindl.github.io/2017/08/07/static-website-cloudformation-template/ but I figured the CloudFront part was optional and complicated things, so I tried removing it and pointing the DNS record at the S3 endpoint instead. Unfortunately that didn't work and the domain name wouldn't resolve. I ended up manually creating a CloudFront distribution, pointing it at S3, and pointing DNS at CloudFront. Then it worked. But I'm still not sure if I messed something up initially, or if the second method is the only one that works.

fluppet
Feb 10, 2009
You can have a bucket called my.domain and then set a CNAME for my.domain to point at the S3 URL, which will work, but you need CloudFront to provide SSL - and your bucket name will have to match your URL.

Forgall
Oct 16, 2012

by Azathoth

fluppet posted:

You can have a bucket called my.domain and then set a CNAME for my.domain to point at the S3 URL, which will work, but you need CloudFront to provide SSL - and your bucket name will have to match your URL.
Alright, I assume the connection between CloudFront and the bucket is encrypted in some other way. I've managed to make a template that works; creating that CloudFront distribution takes half an hour though, that's quite something.

JHVH-1
Jun 28, 2002
Maybe I'm wrong, but I think the bucket might not need to have the same name as the domain if CloudFront is there. For static sites over plain HTTP it does, because the S3 website endpoint does the forwarding, but CloudFront just uses the bucket as the origin.

Well, it can't hurt to keep them matching just for organization's sake.

Also for the cloudfront distro you can often start using it right away while it deploys to all their endpoints, but every time you tweak settings it will update again anyway. Might as well get it all right and then wait till it no longer says InProgress.

Agrikk
Oct 17, 2003


StabbinHobo posted:

you were clear rats, agrikk just misfired his hot take pistol

This.

Sorry all. I misread the OP as advocating standing up your own infrastructure rather than using managed services, instead of using services in place of infra at all.

SurgicalOntologist
Jun 17, 2004

GCP question but this seems the best place.

I posted a month or so ago about my hobbyist project that I set up on GCE to run some computations. I'm backtesting a stochastic Daily Fantasy Sports strategy and I want to run it over a million times per setting to see which setting works best (these are contests with up to 1:100,000 odds, so it takes a lot of trials to approximate an EV).

Well, it turns out Google thinks I'm mining crypto. Probably because these instances just use 100% CPU continuously rather than having a varying workload based on volume of requests or whatever. I've gotten my account suspended three times so far, without warning. Each time I submit an appeal saying "actually I'm not mining crypto" and they reverse the suspension a couple days later. Then another week or so and I'm suspended again.

Obviously I'm reaching out to support to stop this from happening. But in the meantime I thought I'd ask for a sanity check: am I doing something wrong? Am I not supposed to be using 100% of every instance I get; should I be throttling somehow?


Ninja edit: also just wanted to add, thanks to those of you who chimed in with startup advice. Definitely going to use as many turnkey/managed services as possible and not over-engineer the infrastructure from the start.

Arzakon
Nov 24, 2002

You mentioned hobbyist project so I'm assuming you aren't paying for their developer support level where you can open a ticket with them. In AWS, I'd turn on Developer level support, open a ticket, then turn it off when I'm done. Looks like that might cost $100 on GCP and it might not be pro-rated if you turn it on/off in the same month.

I can't imagine them detecting crypto mining off of CPU alone, and 100% CPU wouldn't be a problem in AWS. Are you still in a free trial period, or are you giving them money? I guess I could see them being heavy-handed about free-trial accounts running at 100% and assuming they're mining crypto, but that seems a bit lazy when they could be looking at network rules on the instance, or the traffic itself, to get the false positives down.

SurgicalOntologist
Jun 17, 2004

Yeah not sure how helpful support will be without paying for it. We'll see. I am paying them for the instances though, past the free tier.

Hmm, so you don't think it's the CPU usage pattern - what else could it be? The first time, I thought maybe my account was compromised, but now I'm 100% certain that's not the case. Network-wise, my instances are connecting to Google Cloud SQL via Google's Cloud SQL Proxy docker image and reading and writing to my DB. The work is done from a docker image command which I arbitrarily set to an amount of work that takes a couple of hours. But it's run with --restart=always so it keeps going until the instance gets preempted.

It's definitely the instances and not the database because the suspension only blocks Compute Engine. The suspension and unsuspension messages don't contain any useful information.

Arzakon
Nov 24, 2002

It could be CPU usage alone, but that seems like a really dumb way for Google to generate lots of support tickets like yours, and I'd like to think they are smarter than that. Speculating on ways they could/should be doing it: are the network rules on your instances allowing all traffic in/out? If you restrict them to only what you need for SQL and remote access, that might help if they are checking what the instances could be talking to.

A better method would be inspecting DNS requests or the IP addresses your instances are actually talking to, and only suspending instances talking to known mining-related destinations, like AWS GuardDuty does. But if you aren't mining and are certain you don't have malware or an account compromise, that wouldn't be it.

Scrapez
Feb 27, 2004

Pretty new to AWS and trying to understand autoscaling: how can I trigger an increase in group size based on a log file on one of my instances, rather than on a CloudWatch metric like CPU utilization?

For instance, I would have an instance receiving calls with a finite telephony capacity. Once simultaneous calls reached a certain value, I would like to spin up another EC2 instance. I have log files on the EC2 instance that I could monitor and use to trigger the event, but I'm not sure exactly how to do that.

The only thing I can think of would be to write a script that runs on the EC2 instance itself; once the max simultaneous calls were reached, it would spin up another EC2 instance via the AWS CLI.

Anyone doing something similar to this?

Edit: It looks like CloudWatch Agent may have the ability to do this...

Scrapez fucked around with this message at 21:42 on Jan 3, 2019

Arzakon
Nov 24, 2002


Scrapez posted:

Anyone doing something similar to this?

Assuming each server has X slots available for calls, and slots are recycled when a call ends: create a custom CloudWatch metric and have a script on each server report "Free Slots". Trigger your scale-up off the sum of Free Slots going below some number, and scale down off some other number. The latter gets a bit complicated because you don't want to terminate an instance that still has active calls, so you need to write a lifecycle hook and respond to it when active calls fall to zero - or skip automated scale-down termination entirely and manage draining and terminating the instances yourself.

Here is a blog post of someone triggering it based off of a Lambda function he has that queries active connections on his database.
https://blog.powerupcloud.com/aws-autoscaling-based-on-database-query-custom-metrics-f396c16e5e6a
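The per-server reporting half of that might look like the following sketch (the namespace and metric names are invented; the function takes the CloudWatch client as a parameter so it can be exercised with a stub):

```python
def report_free_slots(cloudwatch, instance_id, free_slots):
    """Publish free call slots as a custom CloudWatch metric.

    A CloudWatch alarm on SUM(FreeSlots) dropping below a threshold
    then drives the Auto Scaling scale-up policy.
    """
    cloudwatch.put_metric_data(
        Namespace="Telephony",  # invented namespace
        MetricData=[{
            "MetricName": "FreeSlots",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Value": float(free_slots),
            "Unit": "Count",
        }],
    )

# On each instance (cron or a small daemon), assuming boto3 is available
# and count_free_slots() is your own log-parsing function:
#   import boto3
#   report_free_slots(boto3.client("cloudwatch"), "i-0123...", count_free_slots())
```

Publishing per-instance dimensions keeps the scale-down side honest too: you can check a single instance's FreeSlots before deciding it is safe to drain.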

Scrapez
Feb 27, 2004

Arzakon posted:

Assuming each server has X slots available for calls, and slots are recycled when a call ends: create a custom CloudWatch metric and have a script on each server report "Free Slots". Trigger your scale-up off the sum of Free Slots going below some number, and scale down off some other number. The latter gets a bit complicated because you don't want to terminate an instance that still has active calls, so you need to write a lifecycle hook and respond to it when active calls fall to zero - or skip automated scale-down termination entirely and manage draining and terminating the instances yourself.

Here is a blog post of someone triggering it based off of a Lambda function he has that queries active connections on his database.
https://blog.powerupcloud.com/aws-autoscaling-based-on-database-query-custom-metrics-f396c16e5e6a

That is very helpful. Thank you. I think scaling up is all I will need so managing the scale down termination manually is perfectly fine. I expect the scaling to be slow and predictable and likely will never need to scale down as the platform will continue to grow and become busier over time.

Scrapez
Feb 27, 2004

Is there a way from the command line on an EC2 instance to retrieve just the public IP address associated with that instance based on the private IP?

I can do `aws ec2 --region us-east-1 describe-addresses` which returns a list of all addresses and I could parse out the PublicIP of the instance I'm looking for with a combination of grep and awk but is there a better way of doing this?

I would be putting the private IP in a variable as I can obtain that via ifconfig and then I'd like to return the public IP based on the private IP.

I'm writing a bootstrap script that will update a config file on the instance with the public IP of that instance. Thoughts?

2nd Rate Poster
Mar 25, 2004

i started a joke

Scrapez posted:

Is there a way from the command line on an EC2 instance to retrieve just the public IP address associated with that instance based on the private IP?

I can do `aws ec2 --region us-east-1 describe-addresses` which returns a list of all addresses and I could parse out the PublicIP of the instance I'm looking for with a combination of grep and awk but is there a better way of doing this?

I would be putting the private IP in a variable as I can obtain that via ifconfig and then I'd like to return the public IP based on the private IP.

I'm writing a bootstrap script that will update a config file on the instance with the public IP of that instance. Thoughts?

From within the instance you can use the metadata service to find the public ip.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval


code:
curl http://169.254.169.254/latest/meta-data/public-ipv4
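The same lookup from Python using only the stdlib (the metadata address is real; the injectable `urlopen` parameter is just a sketch convenience so the function can be exercised off-instance):

```python
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data"

def public_ipv4(urlopen=urllib.request.urlopen):
    """Return this instance's public IPv4 from the metadata service.

    The metadata endpoint is only reachable from inside an EC2 instance;
    pass a stand-in for `urlopen` to test the function anywhere else.
    """
    with urlopen(METADATA + "/public-ipv4", timeout=2) as resp:
        return resp.read().decode().strip()
```

Handy in a bootstrap script: call it once at startup and substitute the result into the config file, instead of parsing describe-addresses output.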

JHVH-1
Jun 28, 2002

Scrapez posted:

Is there a way from the command line on an EC2 instance to retrieve just the public IP address associated with that instance based on the private IP?

I can do `aws ec2 --region us-east-1 describe-addresses` which returns a list of all addresses and I could parse out the PublicIP of the instance I'm looking for with a combination of grep and awk but is there a better way of doing this?

I would be putting the private IP in a variable as I can obtain that via ifconfig and then I'd like to return the public IP based on the private IP.

I'm writing a bootstrap script that will update a config file on the instance with the public IP of that instance. Thoughts?

Also be aware if you use public-hostname it will resolve to the public IP or private IP depending on where it is resolved. That way you can do things like route the traffic internally for some systems with the same hostname.

Scrapez
Feb 27, 2004

2nd Rate Poster posted:

From within the instance you can use the metadata service to find the public ip.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval


code:
curl http://169.254.169.254/latest/meta-data/public-ipv4

JHVH-1 posted:

Also be aware if you use public-hostname it will resolve to the public IP or private IP depending on where it is resolved. That way you can do things like route the traffic internally for some systems with the same hostname.

Much easier way of doing it. Thank you!


Agrikk
Oct 17, 2003

FYI the internal metadata service has all kinds of information available. You should take a moment to poke around in it and see what's there.
