Soaring Kestrel
Nov 7, 2009

For Whiterock.
Fun Shoe

fluppet posted:

Am I misremembering, or is there a way to trigger an SSM Run Command action on a failing ELB health check?

We have instances running multiple services and aren't allowed to set the ASG to use the ELB health check to trigger a termination.

This might not be the best way, but if you have the health check trigger a CloudWatch alarm, you can use that to push a notification into SNS and trigger a Lambda function that issues the Run Command. Details are at this AWS blog, although you'd have to extend the function they provide to add that functionality.

I am wondering if a better option might be to leverage CloudWatch event processing rules to directly trigger SSM Run Command, but I straight up cannot find sufficient documentation on what event you'd trigger it on, or even whether that event exists. Sorry!
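For illustration, a minimal sketch of that alarm-to-Run-Command Lambda glue, assuming the alarm's dimensions carry the instance ID (the document is the stock AWS-RunShellScript; the command itself is a placeholder):
code:
import json
import boto3

ssm = boto3.client('ssm')

def handler(event, context):
    # SNS delivers the CloudWatch alarm payload as a JSON string
    alarm = json.loads(event['Records'][0]['Sns']['Message'])
    dims = {d['name']: d['value'] for d in alarm['Trigger']['Dimensions']}
    ssm.send_command(
        InstanceIds=[dims['InstanceId']],  # assumes a per-instance metric
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['systemctl restart my-service']},  # placeholder
    )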


SnatchRabbit
Feb 23, 2006

by sebmojo
I think this might be what you are looking for:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EC2_Run_Command.html

quote:

You can use Amazon CloudWatch Events to invoke AWS Systems Manager Run Command and perform actions on Amazon EC2 instances when certain events happen. In this tutorial, set up Run Command to run shell commands and configure each new instance that is launched in an Amazon EC2 Auto Scaling group. This tutorial assumes that you have already assigned a tag to the Amazon EC2 Auto Scaling group, with environment as the key and production as the value.
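A hedged sketch of what that tutorial wires up, done through boto3 rather than the console (the rule name, region, role ARN, and bootstrap command are all placeholders):
code:
import json
import boto3

events = boto3.client('events')

# Fire on successful instance launches from Auto Scaling
events.put_rule(
    Name='run-command-on-scale-out',
    EventPattern=json.dumps({
        'source': ['aws.autoscaling'],
        'detail-type': ['EC2 Instance Launch Successful'],
    }),
    State='ENABLED',
)

# Target the stock AWS-RunShellScript document against tagged instances
events.put_targets(
    Rule='run-command-on-scale-out',
    Targets=[{
        'Id': 'configure-new-instance',
        'Arn': 'arn:aws:ssm:us-east-1::document/AWS-RunShellScript',
        'RoleArn': 'arn:aws:iam::123456789012:role/cwe-run-command',  # placeholder
        'RunCommandParameters': {
            'RunCommandTargets': [
                {'Key': 'tag:environment', 'Values': ['production']},
            ],
        },
        'Input': json.dumps({'commands': ['/opt/bootstrap.sh']}),  # placeholder
    }],
)
The role referenced in RoleArn needs permission for CloudWatch Events to invoke Run Command on your behalf.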

Shrimpy
May 18, 2004

Sir, I'm going to need to see your ticket.
It was suggested I crosspost this here:

The company I work for uses the edge compute resources now built into CDNs to manage a security solution: in CloudFront that's Lambda@Edge, in Cloudflare it's Workers, etc.

We've now got a customer that wants to do a similar integration, but with Alicloud/Aliyun. I've been diving through their offerings, and while they have a serverless Lambda analogue, there doesn't seem to be anything that's a Lambda@Edge equivalent.

Does anyone know if there is something that I'm missing?

Thanks Ants
May 21, 2004

#essereFerrari


Is ENS the equivalent?

https://www.alibabacloud.com/help/doc-detail/63837.htm?spm=a2c63.l28256.a3.1.4bc51c82bt5NKi

Shrimpy
May 18, 2004

Sir, I'm going to need to see your ticket.

So from what I can make out of the limited docs there (and the Google Translate-d ones from the Aliyun site), ENS lets you run serverless functions at the edge, which is a little different from the other use cases, which tie into the CDN, run the function, and then either terminate the request or serve content from origin/cache.

I may be misunderstanding though. The content is sparse and most of what does exist is in Chinese. ENS is actually missing from the English Alibaba Cloud portal itself.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
Right now I have an AWS Java library I've created with a number of different handlers for Lambda to use, so I can reference one common .jar for all functions. I've hit the point where we have client data that's going to take longer than 15 minutes to process, so I'm going to create a Batch queue and create jobs that the Lambda functions will now call, passing in the necessary variables, which will spin up an EC2 instance to do the work. My plan is to continue using the library and just create an executable class that parses the arguments from the Docker script and then executes the same code the handler would have after connecting to the appropriate AWS services.

Is this a sane way to do this? It seems fine, but I wanted to bounce it off people to make sure I'm not doing something incredibly dumb.
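A minimal sketch of the Lambda-to-Batch handoff being described, in Python for brevity (the post's actual handlers are Java; the queue and job definition names are hypothetical):
code:
import boto3

batch = boto3.client('batch')

def handler(event, context):
    # Hand long-running work off to Batch instead of processing in Lambda
    batch.submit_job(
        jobName='client-data-processing',
        jobQueue='long-running-jobs',          # hypothetical queue
        jobDefinition='shared-jar-runner:1',   # hypothetical job definition
        parameters={
            'bucket': event['bucket'],         # substituted into the container
            'key': event['key'],               # command as Ref::bucket / Ref::key
        },
    )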

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





i'm working with a client too cheap to pay for aws support, but i have some questions about the service level guarantees of cloudwatch events, sns and sqs. what's my best bet for finding these?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

the talent deficit posted:

i'm working with a client too cheap to pay for aws support, but i have some questions about the service level guarantees of cloudwatch events, sns and sqs. what's my best bet for finding these?

You can always try the AWS forums. They are moderated by AWS employees who are SMEs, but YMMV.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever

PierreTheMime posted:

Is this a sane way to do this? It seems fine, but I wanted to bounce it off people to make sure I'm not doing something incredibly dumb.

There’s a lot of ways to do this, but I think Batch is an alluring solution for doing it without much work. The caveat is that sometimes Batch is slow to start a job, especially if it has to start a new instance. Not a big deal, just something to keep in mind. What I usually do is have a Lambda consume the SNS notifications from job state changes to give impatient people a dashboard or REST call.
If you need it just as durable but more hands-on/faster, I’d do it as a Step Function instead. More glue to write, but it’s a good service to learn anyway.
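A hedged sketch of that pattern: a CloudWatch Events rule on Batch job state changes publishes to SNS, and a Lambda like this records the status somewhere queryable (the DynamoDB table name is a placeholder):
code:
import json
import boto3

table = boto3.resource('dynamodb').Table('job-status')  # placeholder table

def handler(event, context):
    # SNS wraps the "Batch Job State Change" event as a JSON string
    detail = json.loads(event['Records'][0]['Sns']['Message'])['detail']
    table.put_item(Item={
        'jobId': detail['jobId'],
        'status': detail['status'],  # e.g. SUBMITTED/RUNNING/SUCCEEDED/FAILED
    })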

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord

Startyde posted:

There’s a lot of ways to do this, but I think Batch is an alluring solution for doing it without much work. The caveat is that sometimes Batch is slow to start a job, especially if it has to start a new instance. Not a big deal, just something to keep in mind. What I usually do is have a Lambda consume the SNS notifications from job state changes to give impatient people a dashboard or REST call.
If you need it just as durable but more hands-on/faster, I’d do it as a Step Function instead. More glue to write, but it’s a good service to learn anyway.

It’s intended for automated file transfers and other hands-off S3 file management stuff that’s not especially time-sensitive. I’ve got the handler set up for Lambda and a main application starting point to pass in arguments if it’s called from a job or manually run. I’m keen to use it as a single library so we can just push a general update to everything if we need to change the code, but I wanted to make sure I’m not falling into some kind of trap doing it this way.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
No, that’s fine. Unless it’s something that gets big/disparate enough in purpose to warrant multiple binaries, whether it’s in Lambda or not is the first thing I check when arg parsing.
E: Pardon any misuse of terms of art; I’m coming from sh/Go, not Java.

SnatchRabbit
Feb 23, 2006

by sebmojo
Does anyone have experience with using Private API Gateways? I have a client that has their datacenter's public IPs hitting an API Gateway, which they've protected with an AWS WAF, a list of whitelisted IPs, and Cognito for authentication. They're asking for a Private API Gateway that's built "the right way", but their setup doesn't seem to be "private" by definition in AWS terms. I'm struggling to see how setting the APIGW to private would have much of a benefit. They already have a ton of VPN connections to the datacenters, and we're currently troubleshooting latency issues on an unrelated project, so my gut feeling is that routing all the API calls through those VPNs, which are already causing issues, is a recipe for pulling my hair out. I guess my question is this: is there a relatively straightforward way to route the datacenter traffic through the VPNs and into the APIGW while keeping it relatively secure? They're not doing any VPC peering and their traffic is already a mess, so I have my doubts.

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe

SnatchRabbit posted:

Does anyone have experience with using Private API Gateways? I have a client that has their datacenter's public IPs hitting an API Gateway, which they've protected with an AWS WAF, a list of whitelisted IPs, and Cognito for authentication. They're asking for a Private API Gateway that's built "the right way", but their setup doesn't seem to be "private" by definition in AWS terms. I'm struggling to see how setting the APIGW to private would have much of a benefit. They already have a ton of VPN connections to the datacenters, and we're currently troubleshooting latency issues on an unrelated project, so my gut feeling is that routing all the API calls through those VPNs, which are already causing issues, is a recipe for pulling my hair out. I guess my question is this: is there a relatively straightforward way to route the datacenter traffic through the VPNs and into the APIGW while keeping it relatively secure? They're not doing any VPC peering and their traffic is already a mess, so I have my doubts.

To use a private AWS API Gateway, you need to set up a VPC endpoint in each account you wish to use it from. I believe that is the only way to access private APIs. I *think* the resource policy within the API lets you whitelist IPs, but I'm not sure whether that would let you jump to it.
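For reference, a sketch of the resource policy a private API typically carries, denying any invoke that didn't arrive through the designated VPC endpoint (shown as a Python dict; the endpoint ID is a placeholder):
code:
# Attach as the API's resource policy via the console or CLI
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        },
    ],
}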

SnatchRabbit
Feb 23, 2006

by sebmojo

Twlight posted:

To use a private AWS API Gateway, you need to set up a VPC endpoint in each account you wish to use it from. I believe that is the only way to access private APIs. I *think* the resource policy within the API lets you whitelist IPs, but I'm not sure whether that would let you jump to it.

Right that's my understanding as well. So in that VPC with the VPC endpoint would we need an EC2 instance to pass the information from the VPN to the APIGW?

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe

SnatchRabbit posted:

Right that's my understanding as well. So in that VPC with the VPC endpoint would we need an EC2 instance to pass the information from the VPN to the APIGW?

I believe so; this of course makes the entire setup less than ideal, with that EC2 being the weak link in the chain. We use private APIGWs to provide a metadata service for customers within our different accounts, interesting data like proxy information. The other rub with private APIs is they don't accept any sort of CNAME, so you're stuck with the long AWS name provided.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
Is there any kind of general consensus on what a decent speed for S3 operations is? My "transfer SFTP to S3" and "unzipping/untarring files" jobs run at ~60MB/s and 80MB/s respectively, and I'm not sure whether to be satisfied with this or not.

SnatchRabbit
Feb 23, 2006

by sebmojo

Twlight posted:

I believe so; this of course makes the entire setup less than ideal, with that EC2 being the weak link in the chain. We use private APIGWs to provide a metadata service for customers within our different accounts, interesting data like proxy information. The other rub with private APIs is they don't accept any sort of CNAME, so you're stuck with the long AWS name provided.

Gotcha, that's kind of where I'm headed: get them to give me a good explanation of why this is even necessary, given the amount of effort it would take and the complexity of their network. Thanks!

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

PierreTheMime posted:

Is there any kind of general consensus on what a decent speed for S3 operations is? My "transfer SFTP to S3" and "unzipping/untarring files" jobs run at ~60MB/s and 80MB/s respectively, and I'm not sure whether to be satisfied with this or not.
You can get better performance in a VPC with a PrivateLink/gateway endpoint set up for the S3 service, and through better multithreading of the various parts of your object transfer. Also, make sure the network interface for the instance it’s running on has 2.5 Gbps or higher advertised network bandwidth. On a c4.8xlarge-ish instance two years ago I was getting 200+ MB/s for roughly 10 GB objects without even bothering with the PrivateLink and gateway interface setup.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
The boto3-based AWS CLI has some env vars you can tweak to increase the worker pool. We ended up rolling our own client to maximize speed. If your needs are somewhere in the middle, ‘s5cmd’ is much faster than I could tweak awscli into being. Just mind the differences in its verbs.
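If you're in Python rather than shelling out to the CLI, the same worker-pool knob is exposed directly; a minimal sketch with illustrative values (bucket and file names are placeholders):
code:
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

config = TransferConfig(
    max_concurrency=20,                    # parallel threads per transfer
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB parts
)
s3.upload_file('bigfile.tar', 'my-bucket', 'bigfile.tar', Config=config)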

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord

necrobobsledder posted:

You can get better performance in a VPC with a PrivateLink/gateway endpoint set up for the S3 service, and through better multithreading of the various parts of your object transfer. Also, make sure the network interface for the instance it’s running on has 2.5 Gbps or higher advertised network bandwidth. On a c4.8xlarge-ish instance two years ago I was getting 200+ MB/s for roughly 10 GB objects without even bothering with the PrivateLink and gateway interface setup.

Thanks, I'll check it out. Right now I'm running it from a Workspace, but it's eventually going to be run from its own EC2 with better controls, so I'll make sure to check those settings when we set that up.

Thanks Ants
May 21, 2004

#essereFerrari


PierreTheMime posted:

Is there any kind of general consensus on what a decent speed for S3 operations is? My "transfer SFTP to S3" and "unzipping/untarring files" jobs run at ~60MB/s and 80MB/s respectively, and I'm not sure whether to be satisfied with this or not.

Are you doing the untarring in Lambda or on an EC2 instance?

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord

Thanks Ants posted:

Are you doing the untarring in Lambda or on an EC2 instance?

Depends on the size of the file. Right now if it’s <10GB it’s from Lambda; otherwise it’ll be from EC2 via a Batch job (once I get that working). A test I ran recently used a Workspace as an intermediary to manually target an 86GB file, which it untarred/ungzipped in ~55 minutes.

This is in a land where people expect things in hours, so a matter of minutes doesn’t really matter, but there’s always an opportunity to learn how to do it better.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The instance size that Batch jobs run on can be of significant help if you’re willing to pay a bit more for the speed (paying 16x more for EC2 won’t necessarily get you 16x faster S3 transfers), but if you’re using Batch you should be able to run the jobs on spot fleets, saving a good deal of cash over even reserved instances. Most larger instance sizes seem to have a glut of spare capacity, so it’s worth a try.

Hughlander
May 11, 2005

This may be the wrong thread for it, but I figured I'd try it...

Some people at work are doing some really low-level optimizations in a C++ app: custom memory allocators to keep memory contiguous, and cache localization where it needs to be. The catch is that the app in question will be run in AWS. However, my understanding is that to prevent side-channel attacks, the kernel, the VM, and an LXC in a k8s pod (I guess that's just the kernel again) will all work against you and invalidate those optimizations. Is that correct? Are there any white papers I can float around about why this is a bad idea? Ignoring the k8s part for a moment, would using one of the new AWS native instances alleviate this?

Cancelbot
Nov 22, 2006

Canceling spam since 1928

This might help: http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtualization-2017.html

With the Nitro hypervisor you're probably not going to reap the full benefits of those optimisations, but a bare-metal instance makes it more likely: "Also announced at AWS re:Invent: the Amazon EC2 Bare Metal instances, which are just that – bare metal servers. 0% performance overhead. Run whatever you like: Xen, KVM, containers. Access all PMCs, and other processor features."

But bare metal is hella expensive, and I imagine you want to run this on a couple of much smaller servers; if so, just go for the C5 if you're CPU-bound, or the R5 if you're memory-bound. What does the app do?

crazypenguin
Mar 9, 2005
nothing witty here, move along

Hughlander posted:

However, my understanding is that to prevent side-channel attacks, the kernel, the VM, and an LXC in a k8s pod (I guess that's just the kernel again) will all work against you and invalidate those optimizations. Is that correct?

I don’t think so, no. Why do you think effort to reduce cache misses wouldn’t be helpful?

Everything about side channel attacks involves context switches across permissions boundaries.

And at any rate this is something a simple benchmark would answer.

Thanks Ants
May 21, 2004

#essereFerrari


Thanks Ants posted:

Thanks. If it needed confirming (having thought about this it was a question with an obvious answer) I've tested this in an Azure VNet with two IPsec tunnels, one to a site addressed as 10.1.0.0/16 and another 10.2.0.0/16 and I could add 10.1.250.0/24 to the second route without issue, and the route was listed in the effective routes for an interface in the VNet.

As a follow-up to this, there's a bit of a caveat. You can't have your VNet gateway subnet sit within a subnet defined as a local network - e.g. you cannot use 10.1.180.0/29 as a gateway subnet and then have one of your local networks defined as 10.1.0.0/16. The exception to this is where you are using BGP with your VPN tunnels, in which case the only restriction on addressing is that you cannot advertise a route that exactly matches one of your VNets, but you can advertise a much larger route.

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-bgp-overview#prefix

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
if you're starting greenfield on a single-aws-account setup, what's the right ci/cd pipeline setup for IaC/cloudformation work?

i'm amazed at how many half-baked blog posts from 2017 there are and then... nothing.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Don’t use cloudformation. You will just be sad. I would recommend terraform.

Cue someone coming in to comment about how cloudformation is actually good. But do not believe their lies.

12 rats tied together
Sep 7, 2006

CloudFormation is frankly awful if you're using it by itself without any kind of helper or document templating. There are a bunch of open source projects that can help, though: troposphere, sparkleformation, ansible, etc. For a single-account, AWS-only environment I would really recommend going with CloudFormation + a helper of your choice.

If you only want to learn one tool, Terraform is not so bad now that they've released 0.12. You will undoubtedly run into some problems with it, but so did the rest of the internet, so it's not too hard to find information or advice. The documentation now is also a bit better than it was last year.

Erwin
Feb 17, 2006

StabbinHobo posted:

if you're starting greenfield on a single-aws-account setup, what's the right ci/cd pipeline setup for IaC/cloudformation work?

i'm amazed at how many half-baked blog posts from 2017 there are and then... nothing.

Terraform + kitchen-terraform. The pipeline runs tests, then runs plans on any state-generating deployment to see if a PR would result in changes. With the -detailed-exitcode flag, terraform plan exits with code 2 if the plan shows changes, so you can key off that (Slack notification, prompt for approval, etc).
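A minimal sketch of keying off that exit code in a pipeline script (Python here; the notification step is a placeholder):
code:
import subprocess
import sys

# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes present
result = subprocess.run(['terraform', 'plan', '-detailed-exitcode', '-out=plan.out'])

if result.returncode == 2:
    print('Plan shows changes; notify Slack / prompt for approval here')  # placeholder
elif result.returncode == 1:
    sys.exit('terraform plan failed')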

Atlantis, I guess, if you don't want to use a general CI server.

Erwin fucked around with this message at 23:28 on Jul 31, 2019

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
ClownFormation and Errorform are both awful in different ways and have different limitations, making them meh at their respective goals. There is absolutely nothing like Terratest out there for Troposphere code, though, and Terraform will not support rolling back changes for years, if ever. It is much, much easier (read: possible) to migrate from CloudFormation to Terraform than the other way. There are newer AWS-centric options that avoid some of the CF warts, like the CDK, but this is all a bit late when people probably want something like Pulumi.

Also, the CodePipeline-to-CF workflow hasn’t changed for like 3 years, pretty much.

I’ve written thousands of lines of code for using and generating and testing Terraform and CloudFormation code - you will probably hate whatever you pick anyway, just like any other programming system or framework.

Docjowles
Apr 9, 2009

I am all in on TF for better or worse, but there are takes on both sides. This came up a few months back in the CI/CD thread here, and people had feelings for and against both tools. It’s worth reading back through that thread if you’re undecided.

For a trivial single-tenant, single-account use case (also: think about whether you should be using a single account; AWS support for multi-account is light-years past where it was even a year ago), it probably doesn’t matter much; just pick whichever you like more.

12 rats tied together
Sep 7, 2006

necrobobsledder posted:

There is absolutely nothing like Terratest out there for Troposphere code though

This is an interesting point and, in my experience anyway, it seems like it's not something people really consider when discussing IaC tooling: If your tool is a language, a library, or some attempted combination of both, it's going to be really hard to write tests for it. If your tool is a markup language, it's comparatively trivial.

This is why I generally do not recommend Terraform, IMO it's a "worst of both worlds" approach where you get almost none of the benefits of an actual programming language, but the markup language itself is also worst-in-class by just about any measurement you might care to take. When you use something like Ansible + CloudFormation you have complete control over every stage of your abstraction -> markup rendering phase. It's absolutely trivial to test anything about it, even using something as simple as assert.

You can use assert to perform preflight checks on your input data like "assert that the various name slugs, combined, do not exceed 64 characters", rendering to CloudFormation template just creates a yaml document which you can yaml.load and do your thing, and then you have all of the normal ways of testing AWS infrastructure: test accounts, environments, alarms, etc.
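To make that concrete, a minimal example of such a preflight check, assuming the rendered template is plain YAML without short-form intrinsic tags (the file name and the 64-character rule are stand-ins from the post):
code:
import yaml

with open('rendered-template.yaml') as f:
    template = yaml.safe_load(f)

# Assert an invariant over the rendered document before deploying it
for name, resource in template['Resources'].items():
    slug = resource.get('Properties', {}).get('Name', '')
    assert len(slug) <= 64, f'{name}: name slug exceeds 64 characters'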

By comparison, you couldn't even get plan output as json from terraform until earlier this year (you had to parse shell command return codes which is comedy gold for IaC tooling), and the json plan output is frankly insane compared to CloudFormation change sets.

Basically: the more you try to abstract away the part where you have to actually transform your intent into serialized resources, the harder it becomes to do simple stuff like "hey make sure this alarm doesn't fire after you roll those new ec2 instances". To me the absolutely gigantic terratest readme looks more like parody than actual tooling.

I agree that for OP's use case it really doesn't matter which one they pick though.

12 rats tied together fucked around with this message at 17:03 on Aug 1, 2019

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

12 rats tied together posted:

By comparison, you couldn't even get plan output as json from terraform until earlier this year (you had to parse shell command return codes which is comedy gold for IaC tooling), and the json plan output is frankly insane compared to CloudFormation change sets.
https://github.com/palantir/tfjson <- 3 years? There's another project I saw from a few years ago that literally imported the Terraform code as a Go module (just like this one), and it's worked for me as well between different Terraform releases, as long as you vendored your Go modules consistently.

I've had to parse both CF change set JSON and TF output, and I don't think there's anything terribly convenient about either: they're both different forms of cancer to me, and they show how we don't really have standardized change management as an industry. Change sets didn't even exist for CF until a few years ago, while with Terraform you could at least hack away at the state file and generate plans from basically day one. Having to run application deploys to production without previews years ago with CF resulted in horrific release parties and too many "OMG, ABORT ABORT" moments that caused worse problems than if we had just rolled servers out by hand or with Ansible.

Terraform testing, for me, is a combination of testing the input HCL or JSON files to your modules and making sure that each module is at least internally quite consistent, which is about the same thing as testing CloudFormation templates. But both are still meh when it comes to testing our application deployments, because both tools are bulldozers and backhoes while application testing is much more surgical and cross-functional (cfn-signal and friends are something Terraform desperately needs to be reliable for app changes, because null resources and post-provisioners are not very easily testable hooks into other black boxes). It's also possible, with some limitations, to emit Terraform code as JSON and thereby have it in a more compact form, as with YAML.

I will reiterate though that both tools are pretty awful and not for lack of trying or for design reasons either - infrastructure is super messy, IaaS is wonky for every cloud provider (see: Azure VM templates taking 30+ minutes to deploy causing Terraform time-outs that were hard-capped), and most IaC efforts in my experience tend to turn into the worst of code and the worst of operational reproducibility. The only successes I've seen have been to use containers and let both CF and Terraform stick to just bare bones infrastructure exclusively.

12 rats tied together
Sep 7, 2006

Fair point about tfjson (and friends) -- if you don't mind integrating multiple third-party tools, you can get around some of the awfulness of using Terraform in production. I'm a big fan of landscape, personally (especially for the OP, who is still considering tools).

Re: horrific release parties, I can't say I've ever experienced that, but like I said previously, I've never been a huge fan of CloudFormation and only CloudFormation. What you describe with containers, relying on cloudformation/terraform only for base infrastructure, matches my own experience, but I don't think it's specific to containers; you can get all of the modern niceties using only ec2/cloudwatch/route53 by using ansible -> cloudformation -> ansible.

I've talked a lot about it without giving a concrete example, so I'll try to briefly illustrate without creating a post longer than a page:
code:
ansible/
  playbooks/
    aws-$account-$region-base.yaml
    roles/
      aws/vpc-setup/
        tasks/, vars/, templates/, etc
      $app/$component/
        tasks/, vars/, templates/, etc
Each account-region's base.yaml is a playbook that calls a series of roles: vpc setup, per-application setup, ancillary service or config setup, etc. They also usually contain a series of preflight checks, sanity tests, etc that are implemented as pre_tasks so we can ensure they always run before pushing changes. Big one here is usually making sure you're targeting the correct account, since that is handled outside of ansible.

Items in roles/aws/ configure aws primitives. The expectation is that they are more or less like a terraform module -- if you need 3 vpcs, you call "vpc-setup" 3 times and feed it 3 sets of parameters. These use cloudformation to provision themselves and track state using "register". A service that runs on aws infrastructure will provision resources through a role in this subfolder. Often these fire a number of post-assertions to make sure we didn't blow something up in a really obvious manner.

Items in roles/$app/$component configure services and servers, or perform other ad-hoc config that is not supported by cloudformation (example: permanently disabling an autoscaling process).

Inside the playbook we track the result of cloudformation calls into a series of vars which are available inside the application stack roles. The overall play strategy, ordering, chunking, etc is handled in the playbook, so the scope of working on a role is intentionally kept extremely limited. The playbook is also where all of the control logic lives -- your typical "canary deploy -> if metrics stabilize do a blue/green in chunks of 20% -> otherwise rollback the canary and fire a grafana annotation/slack message", typical deployment logic which ansible supports extremely well.

The result is much better to work with than anything you can do in Terraform -- the entrypoint to infinitely many terraform state folders and module dependencies is a single playbook. Ansible role dependencies are well engineered, simple to understand and debug, and the concept of an "ansible role" is well geared towards managing _any_ type of multi-resource/multi-dependency environment (not just servers as the name would have you believe). This is where you put all of your (well phrased) surgical and cross-functional concerns and logic.

You get all of the benefits of working with modules with all of the benefits of working on a monostate, you can trivially debug, inspect, and short circuit any stage of any process including the ability to drop into an interactive debugger in the middle of a playbook run. Lastly, you also get Terraform Enterprise for $completely free through the open source version of Ansible Tower, which runs the exact same playbooks and automation in the exact same way. Working like this you get the scalpel and the bulldozer, which is awesome because you really do need both.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Like 5 posts in, we're at Defcon-1 CI/DevOps/K8s land rather than anything AWS-specific, but the take-away for anyone keeping score at home is that this is why every other company winds up hiring an engineer just to do all this stuff instead of ... normal developers.

My philosophy is to build what your organization can understand and support. With like 3 people, outsource as many things as possible unless you're a bootstrapped, cash-poor start-up, but that won't work with an engineering team of like 80 experienced engineers.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Okay so I'm having a hard time understanding what I'm doing wrong here.

I have a Python function that I set up as a Lambda function. When I test it with a "Test Event" it works perfectly fine each and every time. I set up API Gateway so that I can call it with a POST request. This again works perfectly fine when I go to the API Gateway website and press the "Test [Lightning Bolt]" button and paste in the JSON body.

I can use curl, Python requests, Postman, and even iOS Shortcuts to call the API and it works… but only on the very first invocation after I "save" the Lambda function. Like, I can call it with curl and get my expected output. Then I press the up arrow and hit enter to re-run the exact same command, and I get a 502 "Internal server error". If I tab over to Postman and try to call it there, I still get the 502 error code. It doesn't matter if I wait a few seconds or a few minutes before trying to call it again.

But if I go to the Lambda function editor and add a blank line and save the function, changing absolutely nothing else, I can call the API again, but only one time, until I save the function again.


Can someone tell me where to look for information because this makes no sense to me.

e: Figured it out. I was creating a directory in /tmp, and this raised an error because I wasn't expecting /tmp to be populated.

Boris Galerkin fucked around with this message at 15:09 on Aug 6, 2019

Adhemar
Jan 21, 2004

Kellner, da ist ein scheussliches Biest in meiner Suppe.
For future reference, you should turn on logging for both APIG and Lambda and check the appropriate CloudWatch logs.

The first invocation after saving the function will always be a cold start, so the symptoms you were seeing definitely pointed towards something not being quite stateless.
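For anyone hitting the same thing: Lambda reuses warm containers, so /tmp persists between invocations and directory creation has to be idempotent. A minimal sketch of the fix (the path is hypothetical):
code:
import os

WORK_DIR = '/tmp/scratch'  # hypothetical path

def handler(event, context):
    # exist_ok avoids the error on warm invocations where /tmp persists
    os.makedirs(WORK_DIR, exist_ok=True)
    ...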


StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
the only useful info in those answers was "hasn’t changed for like 3 years" (and therefore I should probably just follow the old blog posts).
