Thanks Ants
May 21, 2004

#essereFerrari


freeasinbeer posted:

I most definitely have RDS servers that have cross-region connectivity. It resolves the RDS domain name to a private IP, and that is routed over the inter-region peering. What AWS won't let you do is chain VPCs; the two VPCs have to be explicitly peered.



The AWS docs call this transitive peering and show you what it will and won't do.

Are you running your own DNS servers (in the same region as the RDS server) and setting each instance inside a VPC to use those servers to resolve? That's the only way I can see that working, unless the documentation differs from reality.


freeasinbeer
Mar 26, 2015

by Fluffdaddy
All my RDS instances are set to be private only, so the name just resolves to the private IP.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
There is also PrivateLink, which would theoretically work.

SnatchRabbit
Feb 23, 2006

by sebmojo
Does anyone have any links, tips, or tricks for managing patches in Systems Manager? We have a bunch of environments running Oracle apps on RHEL, so I'm just throwing out the bat signal for anything people have found that works.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
code:
dig random-rds.us-east-2.rds.amazonaws.com @8.8.8.8

; <<>> DiG 9.10.6 <<>> random-rds.us-east-2.rds.amazonaws.com @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7648
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;random-rds.us-east-2.rds.amazonaws.com. IN A

;; ANSWER SECTION:
random-rds.us-east-2.rds.amazonaws.com. 4 IN	A 10.110.208.241

;; Query time: 170 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Sep 21 16:50:37 EDT 2018
;; MSG SIZE  rcvd: 102
It just resolves to the private IP no matter where I query DNS from.

Thanks Ants
May 21, 2004

#essereFerrari


It seems like you get a choice of having an RDS instance publicly accessible when you create it, and this changes how DNS behaves - if you don't have it publicly accessible then the DNS name will always resolve to a private address.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Hiding

Edit: You wrote that above, I missed it. I think this is the setting that you want, Volguus.

Thanks Ants fucked around with this message at 22:09 on Sep 21, 2018
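
For reference, that switch can be flipped on an existing instance via the API as well as the console. A minimal boto3 sketch, assuming a hypothetical DB identifier of mydb and the us-east-2 region:

code:
# Make an existing RDS instance private so its DNS name resolves to a
# private IP everywhere. ApplyImmediately avoids waiting for the next
# maintenance window.
import boto3

rds = boto3.client("rds", region_name="us-east-2")
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",
    PubliclyAccessible=False,
    ApplyImmediately=True,
)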

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Let me see if I understand this:

- You have b1b1b1b1b.1a1a1a1a1a.us-east-1.rds.amazonaws.com that resolves to 10.1.0.23 in VPC1

- You have EC2 instance myinstance.us-west-2.ec2.amazon.com that resolves to 10.99.0.87 in VPC99

- When you try to ping b1b1b1b1b.1a1a1a1a1a.us-east-1.rds.amazonaws.com from myinstance.us-west-2.ec2.amazon.com it does not resolve because VPC99 in us-west-2 does not know about what is in VPC1 in us-east-1.

Is this what you are saying?


If so: a workaround is to stand up a DNS server in each VPC with conditional forwarders to 10.1.0.2 and 10.99.0.2, and point all of your resources at your internal DNS servers. Each VPC has a set of AWS DNS servers that get queried by any object local to that VPC. The point is to collect all of these disparate VPC namespaces into a single place that knows about all of them.


For your situation, though, the AWS-recommended solution is to set up a read replica in the destination region, so that the resources local to that region query the local instance.

Agrikk fucked around with this message at 22:16 on Sep 21, 2018
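
To illustrate Agrikk's conditional-forwarder idea, a minimal BIND sketch. Assumptions: the remote VPC's AmazonProvidedDNS resolver sits at its CIDR base + 2 (here 10.1.0.2), and only the RDS namespace needs forwarding:

code:
// Hypothetical named.conf fragment on the VPC99 DNS server: forward
// lookups for the us-east-1 RDS namespace to VPC1's resolver.
zone "us-east-1.rds.amazonaws.com" {
    type forward;
    forward only;
    forwarders { 10.1.0.2; };
};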

Volguus
Mar 3, 2009

Thanks Ants posted:

It seems like you get a choice of having an RDS instance publicly accessible when you create it, and this changes how DNS behaves - if you don't have it publicly accessible then the DNS name will always resolve to a private address.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Hiding

Edit: You wrote that above, I missed it. I think this is the setting that you want, Volguus.

Oooh, yes, I guess, maybe. But I would like to be able to access the db from work from time to time (I update the security group to allow my IP to access it, do my thing, then remove it). If I set it to private (assuming I'll even be able to update the setting), I presume that's it: I'll have to go through an EC2 machine. Which may be fine, I guess.

Agrikk posted:

Let me see if I understand this:

- You have b1b1b1b1b.1a1a1a1a1a.us-east-1.rds.amazonaws.com that resolves to 10.1.0.23 in VPC1

- You have EC2 instance myinstance.us-west-2.ec2.amazon.com that resolves to 10.99.0.87 in VPC99

- When you try to ping b1b1b1b1b.1a1a1a1a1a.us-east-1.rds.amazonaws.com from myinstance.us-west-2.ec2.amazon.com it does not resolve because VPC99 in us-west-2 does not know about what is in VPC1 in us-east-1.

Is this what you are saying?


If so: a workaround is to stand up a DNS server in each VPC with conditional forwarders to 10.1.0.2 and 10.99.0.2, and point all of your resources at your internal DNS servers. Each VPC has a set of AWS DNS servers that get queried by any object local to that VPC. The point is to collect all of these disparate VPC namespaces into a single place that knows about all of them.


For your situation, though, the AWS-recommended solution is to set up a read replica in the destination region, so that the resources local to that region query the local instance.

Yes, that's what I'm saying. From region-15, when I ping db.aaa.long.name.RDS.amazon.com I get the public IP (18.x.y.117) instead of the internal IP (172.31.x.y) that I get when I ping from region-4. And peering the two VPCs across the two regions apparently doesn't resolve the name to the internal IP.
Is that AWS DNS a service that they provide?
About the replica: can PostgreSQL do that? Or is it an AWS service? What's the latency (synchronization time)? How would such a thing work? Who would know about this, a DBA?

I'm just trying to gather as much information as I can to hopefully push the drat CEO to hire the right people for the job, as I have enough on my plate without having to worry about crap like AWS (as important as it is).

Thanks Ants
May 21, 2004

#essereFerrari


Volguus posted:

Oooh, yes, I guess, maybe. But I would like to be able to access the db from work from time to time (I update the security group to allow my IP to access it, do my thing, then remove it). If I set it to private (assuming I'll even be able to update the setting), I presume that's it: I'll have to go through an EC2 machine. Which may be fine, I guess.

Yeah you'll have to hop through another host or deploy a VPN appliance into your VPC. Or build a VPN tunnel back to your office. Or Direct Connect into your existing WAN.

I'd push for AWS training (as well as hiring someone) because then that also benefits you.

JHVH-1
Jun 28, 2002
Anyone mentioned VPC peering yet? https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
FYI-

The decision to go multi-region should not be taken lightly. As you have already discovered, the architectural decisions that must be made, and made correctly, determine the success of your workload.

Ideally you would have an AWS architect on staff as well as a DBA, and together these two will explore the options that AWS provides as well as the requirements of your application. There isn't a Right Way of doing Multi-Region. There is only the Right Way For You.


That said, RDS Postgres can do Multi-Region, but only with one write master and the rest read replicas. That might change at re:Invent, so you might want to wait until then before committing to an architecture.

Also, consider the Postgres flavor of Aurora for your database. It’s more performant and less expensive to run at scale.

You could always do Postgres on EC2 and turn on and configure replication yourself, but please don't do this.

Volguus
Mar 3, 2009

Agrikk posted:

The decision to go multi-region should not be taken lightly. As you have already discovered, the architectural decisions that must be made, and made correctly, determine the success of your workload.

Ideally you would have an AWS architect on staff as well as a DBA, and together these two will explore the options that AWS provides as well as the requirements of your application. There isn't a Right Way of doing Multi-Region. There is only the Right Way For You.

This, this 100 times. I have to shove these sentences down the throats of the "powers that be" until they "get it". My entire hope in all this is that the beers I'm having right now will knock me out and make me forget that AWS exists by tomorrow morning. The less I know about it, the happier I am.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
I have to run, but my suggestion would be to make it private and use a VPN appliance running on an EC2 instance. Pritunl and OpenVPN-AS are decent options that run OpenVPN and have a web UI for management.

If you just want SSH forwarding, Teleport and Pritunl Zero are also decent.

I'd stay away from any of the branded stuff unless you have folks on hand who insist on paying out the rear end for it because you already run it on site. Even then it's often just a reskinned Linux box, and to make matters worse it's most likely running an ancient version of CentOS, totally separate from their on-premises codebase.


This is out of left field, but if you want to be able to connect from anywhere: Google Cloud SQL has a magical proxy that uses client certificates and works from anywhere, which blows RDS connectivity out of the water.



Also, yeah, hire someone who knows AWS (I only charge $250 an hour, so feel free to call me).

Volguus
Mar 3, 2009

freeasinbeer posted:

Also, yeah, hire someone who knows AWS (I only charge $250 an hour, so feel free to call me).

I'd pay $2500/hour (not my money) to not have to deal with this. But yeah, thanks for the good advice. If I don't forget about AWS by Monday, I'll relay my findings and suggestions (thanks to you all) to the higher-ups who hold the purse (too tightly, in my opinion).

Edit: tested the availability setting in RDS and I can confirm that setting the database to "private" makes it work as expected (that is, from Region B it gets properly resolved to the internal IP in Region A, which makes it work fine over VPC peering). I will need to set up either a VPN or some SSH tunnel to access it from outside, but that's perfectly fine.
In the next few days we also have a meeting with an AWS expert; curious what advice they will give regarding our AWS setup so far and in the future.

Volguus fucked around with this message at 16:58 on Sep 24, 2018

an skeleton
Apr 23, 2012

scowls @ u
Long shot but uh

Anyone work at Amazon AWS who'd be willing to let me pick your brain about a possible opportunity I have there?

The Fool
Oct 16, 2003


Agrikk is your man

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Somebody called?

SnatchRabbit
Feb 23, 2006

by sebmojo
I want to use IAM to control my users' access to Session Manager and restrict access to only certain instances in my AWS account. I found the example policies here, which should give me most of what I need. I'd like this to be completely automated, expiring Session Manager access after, say, 24 hours. What I'm thinking is using Lambda to create the policy and attach it to a user, which is simple enough. The tricky part is going to be detaching/deleting the policy after the expiration period. I don't think I can use a single Lambda to do everything since the timeout is like 5 minutes. I guess I could use that same Lambda to invoke another Lambda, but that feels like an overwrought solution. Is there a way to either set a policy with an expiration, or some other way to achieve this that I'm not thinking of?

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


You would want a second job running on a timer to handle the cleanup. You could either write the policy info into a DynamoDB table and reference that, or, since the policy version contains the date it was last modified, just inspect that.
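
A sketch of that second job as a scheduled Lambda. The ssm-temp- name prefix is a hypothetical convention for marking the temporary policies, and pagination is omitted for brevity:

code:
# Timer-driven cleanup: detach and delete temporary Session Manager
# policies older than 24 hours. Assumes they were created with an
# "ssm-temp-" name prefix (hypothetical convention).
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
MAX_AGE = timedelta(hours=24)

def handler(event, context):
    now = datetime.now(timezone.utc)
    for user in iam.list_users()["Users"]:
        attached = iam.list_attached_user_policies(UserName=user["UserName"])
        for pol in attached["AttachedPolicies"]:
            if not pol["PolicyName"].startswith("ssm-temp-"):
                continue
            created = iam.get_policy(PolicyArn=pol["PolicyArn"])["Policy"]["CreateDate"]
            if now - created > MAX_AGE:
                iam.detach_user_policy(UserName=user["UserName"],
                                       PolicyArn=pol["PolicyArn"])
                iam.delete_policy(PolicyArn=pol["PolicyArn"])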

mearn
Aug 2, 2011

Kevin Harvick's #1 Fan!

I’ve got a problem I’m looking to AWS for a solution to but other than EC2 and S3 I don’t really know much about the services offered.

Essentially I have a PostgreSQL database of historical stock market data. I have a Python script that tests different trading strategies historically based on various parameters and spits out a CSV of the results (eventually I want the results written to a database). Running this on my home machine, testing a single strategy over the timeframe I'm using takes 15 to 30 minutes to fully process. This isn't too bad, but I'm testing each strategy with numerous parameters, and sometimes that means somewhere in the range of 200 tests.

I’d like to have a system to speed this process up by running multiple instances and then export the results either to a database or an S3 bucket. I just don’t know where to start or what the best options are.

Thanks Ants
May 21, 2004

#essereFerrari


Would Athena be along the right lines?

SnatchRabbit
Feb 23, 2006

by sebmojo
Athena sounds like it might be a good fit, but alternatively you could run a managed Postgres database in RDS and query it with, say, Lambda using Python. The timeout on Lambda is five minutes, though, so you might need to break up the operations you're doing. Lambda might be a nice fit because, assuming the queries run in a reasonable timeframe, you could write the results directly to S3 or DynamoDB using the boto3 library in Python.

SnatchRabbit fucked around with this message at 00:11 on Nov 2, 2018

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
move your postgres db to an rds instance with 2x the power of your home machine
then put your python code on an ec2 instance with 2x the power of your home machine

run it as-is, see how you like it. did it get 4x faster? probably not, but at least now you can look at some graphs and see which side was bottlenecking.

there's a hojillion things you can do from there but anything beyond the above is getting ahead of yourself

oh and make sure to turn the instances off when you're done.

12 rats tied together
Sep 7, 2006

If you're comfortable splitting up your script into containers that can run simultaneously, I would recommend Fargate/ECS as a good middle ground between something super heavy like Glue/DataPipeline -> S3 -> Athena and something super lightweight like running a bigger EC2 instance.

Another alternative would be writing a Lambda function that recursively calls itself until it's done, but you should be careful with those if the AWS account is on your own dime.
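
A sketch of that self-invoking pattern, with the guard being alluded to; run_backtest and the payload shape are hypothetical:

code:
# Process one chunk of work, then asynchronously re-invoke this same
# function with the remainder. The depth cap keeps a bug from
# recursing forever on your own dime.
import json
import boto3

lam = boto3.client("lambda")

def handler(event, context):
    pending = event["pending"]            # hypothetical: list of parameter sets
    depth = event.get("depth", 0)
    run_backtest(pending.pop(0))          # hypothetical worker function
    if pending and depth < 500:
        lam.invoke(
            FunctionName=context.function_name,
            InvocationType="Event",       # async fire-and-forget
            Payload=json.dumps({"pending": pending, "depth": depth + 1}),
        )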

JHVH-1
Jun 28, 2002

SnatchRabbit posted:

Athena sounds like it might be a good fit, but alternatively you could run a managed Postgres database in RDS and query it with, say, Lambda using Python. The timeout on Lambda is five minutes, though, so you might need to break up the operations you're doing. Lambda might be a nice fit because, assuming the queries run in a reasonable timeframe, you could write the results directly to S3 or DynamoDB using the boto3 library in Python.

FYI they increased the lambda timeouts not too long ago, and it’s 30 minutes now.

I haven't used it yet, but Step Functions is kinda cool for some of this stuff. It lets you mix different services, even human interaction, into a process flow.
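
For a flavor of what that looks like, a minimal two-step state machine definition; the function names and ARNs are hypothetical placeholders:

code:
{
  "Comment": "Sketch: run a query, then write the results",
  "StartAt": "RunQuery",
  "States": {
    "RunQuery": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:run-query",
      "Next": "WriteResults"
    },
    "WriteResults": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:write-results",
      "End": true
    }
  }
}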

Vanadium
Jan 8, 2005

I thought they only just increased lambda from 5 to 15 minutes max?

Thanks Ants
May 21, 2004

#essereFerrari


https://docs.aws.amazon.com/lambda/latest/dg/limits.html

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
The new timeout is great; we'd been kicking off one Lambda from another for some operations and now don't have to be as careful.

For the SQL-crunching guy: if you already have a process for it, I'd probably just use Batch and RDS. It's the easiest service to deal with if you don't care about the mix of instance types or the occasional run lag. It's not the best option; Step Functions, or Athena if it's a fit for the data, are smarter but would probably take longer to implement.

cheque_some
Dec 6, 2006
The Wizard of Menlo Park
I actually saw a session at an AWS Summit where a hedge fund talked about how they used AWS for backtesting models; it was pretty interesting.

May not be that relevant to what you're doing at a much smaller scale, but if you're interested I was able to track down the slides.

https://www.slideshare.net/AmazonWebServices/how-aqr-capital-uses-aws-to-research-new-investment-signals

mearn
Aug 2, 2011

Kevin Harvick's #1 Fan!

Thanks for the suggestions everyone. Athena's looking super useful. I've had to make some adjustments to the size of my jobs to optimize costs, but between this and Batch I should be in good shape!

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Does Amazon publish a list of changes it makes to the API anywhere? It seems like something changed behind the scenes and I just want validation.

I have a piece of code that runs ec2.describe_images(ImageIds=big_list_of_image_ids), where big_list_of_image_ids is a list of every image ID referenced by an EC2 instance or launch configuration in our account. On November 1st the job that runs this code started failing in one of our accounts, and I tracked it down to the fact that an old launch configuration in our account references an AMI that no longer exists. However, that piece of code also includes this comment from back when I originally wrote the function:

code:
        # the describe_images function will strip bad image ids out of the response so we end up with a list of good image ids
After reading this I was reminded that a launch configuration or instance can reference an image that has been cleaned up. So in order to verify that an image ID is valid, you have to confirm with the API. Since I was trying to limit the number of calls to the API, I wanted to do this in a single request. Making a call to describe_images with no arguments is slow as poo poo and returns a huge list that's long as gently caress. So I found this fun little workaround: if you call describe_images and pass in a list of image IDs, the response will strip out any images that don't exist.

That no longer seems to be working. I've since fixed the code by looping through the list and doing individual calls, but I'm just wondering if these changes are documented anywhere.
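
The loop fix, roughly, as a sketch; InvalidAMIID.NotFound and InvalidAMIID.Malformed are the error codes for missing and malformed IDs:

code:
# One call per ID: a bad image ID now throws instead of being silently
# dropped, so catch it and treat it as invalid.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

def valid_image_ids(image_ids):
    good = []
    for image_id in image_ids:
        try:
            if ec2.describe_images(ImageIds=[image_id])["Images"]:
                good.append(image_id)
        except ClientError as e:
            if e.response["Error"]["Code"] not in ("InvalidAMIID.NotFound",
                                                   "InvalidAMIID.Malformed"):
                raise
    return good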

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.
If this did change behind the scenes, it is possible it did not make it into any release announcement. If you have support, you can ask them to check with the EC2 team to confirm whether this was intended, and berate them about changing things without notification if they did.

Just testing with the CLI, I can see when I put in an invalid image ID anywhere in the list it throws:
An error occurred (InvalidAMIID.Malformed) when calling the DescribeImages operation: Invalid id: "ami-poopbutt"

So I assume that is what you are running into. As a workaround, if you feed it the list of images with a filter (Name=image-ids,Values=ami-11111111,ami-22222222) instead of the ImageIds argument (ami-11111111,ami-22222222), it looks like it acts correctly whether or not there are bad values in the list, and it just returns an empty list if you feed it only bad values.
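
In boto3 terms, that workaround looks something like this (a sketch; image-id is the filter name in the DescribeImages documentation, and the AMI IDs are placeholders):

code:
# Filtering tolerates nonexistent IDs where the ImageIds argument
# now throws; bad IDs are simply absent from the response.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_images(
    Filters=[{"Name": "image-id",
              "Values": ["ami-11111111", "ami-22222222"]}]
)
images = resp["Images"]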

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Arzakon posted:

If this did change behind the scenes, it is possible it did not make it into any release announcement. If you have support, you can ask them to check with the EC2 team to confirm whether this was intended, and berate them about changing things without notification if they did.

Just testing with the CLI, I can see when I put in an invalid image ID anywhere in the list it throws:
An error occurred (InvalidAMIID.Malformed) when calling the DescribeImages operation: Invalid id: "ami-poopbutt"

So I assume that is what you are running into. As a workaround, if you feed it the list of images with a filter (Name=image-ids,Values=ami-11111111,ami-22222222) instead of the ImageIds argument (ami-11111111,ami-22222222), it looks like it acts correctly whether or not there are bad values in the list, and it just returns an empty list if you feed it only bad values.

That's interesting. It looks like the filter behavior acts exactly like the old behavior of the ImageIds argument. That makes my life a little easier. Still really strange that they changed this out of the blue. Or maybe not out of the blue; I just don't know where it would be announced, which was really what I was trying to determine here.

Thanks for the tip about the filter behavior though.

Doh004
Apr 22, 2007

Mmmmm Donuts...
Hey thread, I host MY FIANCE'S WordPress blog right now on an EC2 instance that I configured myself using CentOS and NGINX (what I was already familiar with). I did this primarily to start teaching myself more about AWS, and I'm liking it so far. I'm still on my 12-month free tier for a lot of things, but I had to upgrade the EC2 instance to something with more memory (it kept running out of memory when dealing with a lot of her assets and plugins).

Right now, it's about ~$18 a month in charges to run her site, which is absolutely A-OK with me and well worth it, money isn't the issue. But, I just saw Lightsail and it looks like I could be running her site on a Lightsail instance for ~$10 a month.

Would it be worth migrating it over? I have the domain hosted on Google Domains (before that I was on AWS and had done it with Route 53), it's using a Let's Encrypt SSL cert, and I'm hosting all of her assets on S3 behind a CloudFront distribution. Would it end up costing about the same for all of that added up?

I of course could just do this myself now, but I'd rather not have to migrate her site if I don't need to.

JHVH-1
Jun 28, 2002

Doh004 posted:

Hey thread, I host MY FIANCE'S WordPress blog right now on an EC2 instance that I configured myself using CentOS and NGINX (what I was already familiar with). I did this primarily to start teaching myself more about AWS, and I'm liking it so far. I'm still on my 12-month free tier for a lot of things, but I had to upgrade the EC2 instance to something with more memory (it kept running out of memory when dealing with a lot of her assets and plugins).

Right now, it's about ~$18 a month in charges to run her site, which is absolutely A-OK with me and well worth it, money isn't the issue. But, I just saw Lightsail and it looks like I could be running her site on a Lightsail instance for ~$10 a month.

Would it be worth migrating it over? I have the domain hosted on Google Domains (before that I was on AWS and had done it with Route 53), it's using a Let's Encrypt SSL cert, and I'm hosting all of her assets on S3 behind a CloudFront distribution. Would it end up costing about the same for all of that added up?

I of course could just do this myself now, but I'd rather not have to migrate her site if I don't need to.

WordPress's main issue is the database and its memory usage. I used to run my personal site on EC2, then come back a week later and realize it had run out of memory and killed MySQL. Then I added a swap file and it seemed OK for the most part, but it was still slow and occasionally had issues.

I ended up switching to a $5/mo DigitalOcean instance, which worked out better. Lightsail didn't exist then, but it's the same idea... more of a dedicated virtual server than an elastic one. WordPress runs fine in AWS if you just use RDS for the DB.

Thinking about it, another option now is Aurora Serverless. If that cost is low enough you could stick with a small t3 instance in EC2.

Doh004
Apr 22, 2007

Mmmmm Donuts...

JHVH-1 posted:

WordPress's main issue is the database and its memory usage. I used to run my personal site on EC2, then come back a week later and realize it had run out of memory and killed MySQL. Then I added a swap file and it seemed OK for the most part, but it was still slow and occasionally had issues.

I ended up switching to a $5/mo DigitalOcean instance, which worked out better. Lightsail didn't exist then, but it's the same idea... more of a dedicated virtual server than an elastic one. WordPress runs fine in AWS if you just use RDS for the DB.

Thinking about it, another option now is Aurora Serverless. If that cost is low enough you could stick with a small t3 instance in EC2.

Hmm, that's a good point: I could move over to RDS instead of running MariaDB on the EC2 instance. I blame old-school habits dying hard, as I just moved off having my own dedicated box. I will say I got the memory errors primarily around image uploading and plugin management, so I'm not sure how much of an impact it'll actually have.

I will also say I haven't been having any issues with it whatsoever, so this task is more educational than anything (which makes it harder to justify).

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Doh004 posted:

Hmm, that's a good point: I could move over to RDS instead of running MariaDB on the EC2 instance. I blame old-school habits dying hard, as I just moved off having my own dedicated box. I will say I got the memory errors primarily around image uploading and plugin management, so I'm not sure how much of an impact it'll actually have.

I will also say I haven't been having any issues with it whatsoever, so this task is more educational than anything (which makes it harder to justify).

Yeah, you're going to see WP pin in those instances, because things like paging or chunking requests are alien concepts to most plugin developers. I remember once on WooCommerce (a WP-based ecommerce plugin) I was trying to import about 300,000 products and had to manually break it into 10,000-product pieces, because there was no memory management involved; it would just allocate more and more as the product list grew. Images are a pain too, because I believe by default WP will do a crush and resize on them (depending on your thumbnail settings). AWS is entirely doable for it, but more out-of-band scenarios pop up around memory usage and database utilisation that make it kind of a pain, especially on initial setup/expansion, which makes a more dedicated service like DO et al. attractive.

SurgicalOntologist
Jun 17, 2004

I've got a hobbyist script I'd like to put in the cloud, and I'm not sure what the cheapest option will be. It's pretty similar to mearn's stock market backtesting, actually.

I have a PostgreSQL database in RDS (free tier). I currently manage it with an SQLAlchemy-based client library I run on my laptop.

The operation I'm doing is essentially specified by one row in a table in the database. The function loads an instance from that row, then runs a computation from that instance and stores the result in another table. I want to run this as many times as possible for each row. The function has ~30s of startup time, then it can run about 10 computations per second. Each computation is completely independent, so it seems perfect for a spot instance; I would just run db.commit() every so often, and it doesn't matter if I get preempted.

Ideally I'd have some kind of manager so when a new worker comes on it chooses which row to start working on. Or I could predetermine a task queue based on a set target of computations per row.

Not sure if Lambda works because I need an apt package (glpk-utils). In that case, what are my options, given:
  • I want to spend as little as possible...
  • ...although assuming my spend is per computation, I'm willing to spend faster to get it done faster (i.e. more instances or nodes)
  • I need 2GB instances (on my machine it's under 1GB, but I tried it on a free tier t2.small and ran out of memory)
  • AWS or GCP
  • I'm willing to try something overkill for a hobbyist project (e.g. Kubernetes??) as a learning opportunity, assuming it fits the project well

I mean, I can easily put this on a spot EC2 or GCE instance, but is there something in either ecosystem that's well suited to the job-management aspect?
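
One lightweight way to get the "predetermine a task queue" version is an SQS queue that spot workers drain. A sketch; the queue name, message shape, and run_computation are hypothetical:

code:
# Spot-friendly worker loop: pull one task at a time, commit the result,
# delete the message. A preempted worker just leaves its message to be
# redelivered after the visibility timeout.
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="backtest-tasks")["QueueUrl"]

def worker():
    while True:
        msgs = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)
        if "Messages" not in msgs:
            break                              # queue drained
        msg = msgs["Messages"][0]
        task = json.loads(msg["Body"])
        run_computation(task["row_id"])        # hypothetical: your per-row function
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])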

Mind_Taker
May 7, 2007



I assume this is the best place for this, and sorry if the question seems simplistic.

I need to be able to convert a .tif image to .pdf immediately after the .tif image is uploaded to an S3 bucket, and then I need to upload this .pdf file back to the same S3 bucket. This seems like a perfect use for Lambda (triggered when a .tif object is created in the bucket); however, I am unsure how to actually do the conversion of .tif to .pdf in any of the Lambda frameworks. Can anyone point me in the right direction?


deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Mind_Taker posted:

I assume this is the best place for this, and sorry if the question seems simplistic.

I need to be able to convert a .tif image to .pdf immediately after the .tif image is uploaded to an S3 bucket, and then I need to upload this .pdf file back to the same S3 bucket. This seems like a perfect use for Lambda (triggered when a .tif object is created in the bucket); however, I am unsure how to actually do the conversion of .tif to .pdf in any of the Lambda frameworks. Can anyone point me in the right direction?

If you're familiar with Python, there's a library called img2pdf that you can install with pip and package into a Lambda function. It's fairly straightforward.
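
A sketch of the whole handler under that assumption (img2pdf packaged with the function); the .tif suffix check also keeps the generated .pdf from re-triggering the function:

code:
# S3-triggered Lambda: convert an uploaded .tif to .pdf and write the
# PDF back to the same bucket.
import os
import boto3
import img2pdf

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.lower().endswith(".tif"):
            continue            # ignore non-TIFF objects (and our own PDFs)
        tif_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        pdf_bytes = img2pdf.convert(tif_bytes)
        s3.put_object(Bucket=bucket,
                      Key=os.path.splitext(key)[0] + ".pdf",
                      Body=pdf_bytes)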
