|
Hey can anybody help me with some syntax in my CloudFormation template? I'm making a secure Pi-hole stack just for fun, and when I use conditions and parameters with AWS::CloudFormation::Init: things go to hell. I have been searching around but haven't been able to find any code examples for this scenario. This works fine: code:
This says YAML is malformed and won't run the template. code:
code:
RVWinkle fucked around with this message at 05:45 on Jan 16, 2019 |
# ? Jan 16, 2019 05:39 |
|
|
RVWinkle posted:Hey can anybody help me with some syntax in my CloudFormation template? I'm making a secure Pi-hole stack just for fun, and when I use conditions and parameters with AWS::CloudFormation::Init: things go to hell. I have been searching around but haven't been able to find any code examples for this scenario. i don't know if your examples are just misformatted or what, but the first is equivalent to: code:
code:
you probably want: code:
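For what it's worth, a hypothetical sketch of the usual shape of this problem: a `|` literal block scalar turns everything beneath it into one plain string, so an `!If` written inside it is never evaluated as a function. The condition has to sit at a real YAML node, e.g. (resource, file path, and condition name here are all made up):

```yaml
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /etc/example.conf:        # hypothetical file path
              content: !If
                - IsProd              # assumed condition name
                - |
                  setting=production
                - |
                  setting=development
              mode: "000644"
```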
|
# ? Jan 16, 2019 09:40 |
|
I'm writing a management web page for some AWS resources and it's pretty daunting. I'm basically having to rewrite portions of the AWS console so that clients can mash buttons to interact with the environments we've built. What I'm currently doing is sending commands and pulling data using API Gateway and Lambda, then displaying it on the webpage. It's a ton of work to write all the buttons just to get a stripped-down version of Amazon's web GUI, so I'm wondering if I'm going about this all wrong. Is there a simple way to use, say, CloudWatch dashboards or something and pipe that over to another webpage somehow? I know you can make widgets to check on EC2 stats and such, but it seems like you can only pull data out. Anyone tried something like this? edit: to be clear, manipulating the AWS resources isn't the hard part. It's getting status information back that's really proving to be a pain. I'm having to do multiple describe_instances and describe_instance_status calls and loop through everything to get information about the status of whatever it is I executed with the buttons. edit2: I guess I could try to pull the events from the stacks in CloudFormation as well, but that might be as much of a pain. We'll also be doing a lot of orchestration through CodeDeploy so I might be able to get something out of there.... SnatchRabbit fucked around with this message at 23:06 on Jan 16, 2019 |
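One way to tame the describe_instances looping (a sketch; the helper name and summary fields are made up) is to flatten the nested Reservations/Instances response into one flat row per instance, so the page code never has to touch the raw shape:

```python
# Sketch: flatten the nested response of ec2.describe_instances() into
# one flat summary per instance. The response layout below is the
# documented EC2 API shape; summarize() itself is a hypothetical helper.

def summarize(describe_response):
    """Return [{'id', 'state', 'name'}, ...] from a describe_instances response."""
    out = []
    for reservation in describe_response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            out.append({
                "id": inst["InstanceId"],
                "state": inst["State"]["Name"],
                "name": tags.get("Name", ""),
            })
    return out

# In the Lambda behind API Gateway you would feed it real pages, e.g.:
#   ec2 = boto3.client("ec2")
#   pages = ec2.get_paginator("describe_instances").paginate()
#   rows = [row for page in pages for row in summarize(page)]
```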
# ? Jan 16, 2019 22:51 |
|
the talent deficit posted:
Thanks for helping with that! I looked up what the pipe actually does and it all makes sense now. I'm pretty new to YAML but one look and I realized that I never want to touch JSON again. Edit: Now my stack is fully automated and can be redeployed while maintaining persistence! I know this is 'baby's first stack' but I love how powerful CloudFormation is. RVWinkle fucked around with this message at 23:54 on Jan 16, 2019 |
# ? Jan 16, 2019 23:43 |
|
Question on methodology. I want to create a CloudWatch event that will kick off when Auto Scaling launches a new instance successfully. Additionally, I want a script or a bunch of commands to be run on the EC2 instance that is launched. I've created the CloudWatch event with the correct service, event and group name as the source. I've set the Target as SSM Run Command with Document AWS-RunShellScript (Linux). I have my Target key set to "tag:Server Type" and a target value of "kamailio". (I have the launch configuration of the autoscaling group set to tag new instances with tag Server Type and value kamailio.) Is the above the proper way to say "execute the following commands on new instances with the tag Server Type and value kamailio"? Additionally, is there a way to have it just execute a whole script rather than putting each command in separately as a Constant Configure Parameter? I hope the above makes sense. Ultimately, if an instance crashes, I want the autoscaling group to launch a replacement; I then want the CloudWatch event to be triggered and run a script that will basically grab the local and public IP address of the instance, put them into variables, write them out to application config files, and start the applications.
|
# ? Jan 21, 2019 21:26 |
|
Scrapez posted:Is the above the proper way to say "execute the following commands on new instances with the tag Server Type and value kamailio"? Have you looked into using user data to execute the script on launch? You could bake the script into your AMI and just use the user data to run the command, or put the whole script into the user data. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html There is a UserData field in the Launch Configuration you are defining for your auto-scaling group, so you don't have to use a CWE or apply it to specific tags. It will just run on anything launched by that ASG.
|
# ? Jan 21, 2019 21:45 |
|
Arzakon posted:Have you looked into using user-data to execute the script on launch? You could bake the script into your AMI and just use the user-data to run the command, or put the whole script into the user-data. I have successfully done it this way but was hoping to move it to a CloudWatch event as I'll have a subsequent Event that will need to happen when a new instance is launched as well. I thought it'd be better to have all the items together there for easier management.
|
# ? Jan 21, 2019 21:51 |
|
Scrapez posted:I have successfully done it this way but was hoping to move it to a CloudWatch event as I'll have a subsequent Event that will need to happen when a new instance is launched as well. I thought it'd be better to have all the items together there for easier management. So you have a related action you also want to fire on the event, so you need the CloudWatch Event for another target anyway? Seems reasonable to do it through SSM then. On to your question about SSM: isn't the entire script specified as a document, with only your variables in the parameters? Not in a place where I can get hands-on right now, but I think that is the way I remember it. https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-doc-syntax.html
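For reference, a sketch of what such a custom document might look like (schemaVersion 2.2 is the run-command format; the script body, parameter name, and file path here are all made up for illustration): the whole script lives in the document, and only variables come in as parameters.

```yaml
# Hypothetical custom SSM document for the kamailio use case.
schemaVersion: "2.2"
description: Configure kamailio on launch
parameters:
  configPath:
    type: String
    default: /etc/kamailio/kamailio.cfg
mainSteps:
  - action: aws:runShellScript
    name: configureKamailio
    inputs:
      runCommand:
        - LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
        - PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
        - sed -i "s/LOCAL_IP_PLACEHOLDER/$LOCAL_IP/" {{ configPath }}
        - sed -i "s/PUBLIC_IP_PLACEHOLDER/$PUBLIC_IP/" {{ configPath }}
        - systemctl start kamailio
```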
|
# ? Jan 22, 2019 06:38 |
|
My understanding was that I could select AWS-RunShellScript (Linux) in the Document type and then, in the Commands section, just add commands to be run on the command line. Below is how I have it set up currently for testing. My Auto Scaling group called kamailio successfully launches a new EC2 instance when I terminate one, but that either is not triggering this event, or once triggered, the event just isn't executing the commands. https://imgur.com/a/T7ZoqNN Edit: I'm working from this tutorial: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EC2_Run_Command.html Double Edit: I set up a CloudWatch alarm for invocation of my autoscaling group kamailio. Deleted my instance, which triggered the autoscale function, and I did get an alarm in CloudWatch. I'm stumped. Scrapez fucked around with this message at 15:38 on Jan 22, 2019 |
# ? Jan 22, 2019 15:05 |
|
Can you manually run the SSM command to make sure it works? You have the instances set up with the agent and everything, right? (Depending on what your base image is, I think there's a chance it's not installed already.)
|
# ? Jan 22, 2019 16:27 |
|
JHVH-1 posted:Can you manually run the SSM command to make sure it works? You have the instances set up with the agent and everything, right? (Depending on what your base image is, I think there's a chance it's not installed already.) That could be the problem. I did not manually set up SSM at all on the image. I'll look into that. Thank you. Edit: I made sure the SSM agent was running and took a new image. Confirmed that when it launches a new instance, the SSM agent is running on startup. Made sure the IAM role for the CloudWatch event has all permissions for SSM. No clue why it isn't working. 2nd Edit: This is everything related to ssm I see in /var/log/messages on the launched EC2 instance: code:
Scrapez fucked around with this message at 17:44 on Jan 22, 2019 |
# ? Jan 22, 2019 16:43 |
|
You shouldn't have to do any setup on the instance if you are launching a Linux that comes with it installed and you don't have wonky requirements like a proxy to reach the API. It does need an IAM role attached to it via instance profile to be able to poll the SSM service for commands to run. If the instance isn't appearing as a managed instance in the console then it's likely the instance doesn't have permission to access the Systems Manager API. https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-configuring-access-role.html
|
# ? Jan 23, 2019 05:52 |
|
Try loosening your security groups (even if they look ok) and see if that helps.
|
# ? Jan 23, 2019 08:38 |
|
Bit of an obscure question. I'm trying to update a DNS SRV record when a new instance is launched. I have a Lambda/Python function that is performing a ChangeResourceRecordSets UPSERT and inserting the IP of the newly launched instance into the Value portion of the SRV record. The problem is that when I launch an additional instance, it replaces the value in the SRV record instead of appending the info for the new instance. I thought that with UPSERT it was supposed to just update a record if it already exists. I'm assuming this somehow doesn't apply to SRV records or the value section specifically. Is my only recourse to list-resource-record-sets for the record, throw the current value in a variable, and then perform my ChangeResourceRecordSets with both the existing value and my new value? Just trying to understand if UPSERT should be overwriting the value as I'm seeing or if it's something I'm doing incorrectly.
|
# ? Jan 24, 2019 17:36 |
|
https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html quote:UPSERT: If a resource record set does not already exist, AWS creates it. If a resource record set does exist, Route 53 updates it with the values in the request. Upsert doesn't mean append. It means create if the record doesn't exist at all, or overwrite with the specified value if it does. So yes, you need to read it into a variable, append the string you want added, and then make an API call to set it to that new value.
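A sketch of that read-append-write flow (the merge helper and the record values are made up; the boto3 calls are left commented since they need a real hosted zone):

```python
# Sketch of the read-modify-write needed because UPSERT overwrites:
# fetch the current SRV values, append the new one, upsert the union.

def merge_values(existing_records, new_value):
    """Append new_value to a list of {'Value': ...} records, skipping duplicates."""
    values = [r["Value"] for r in existing_records]
    if new_value not in values:
        values.append(new_value)
    return [{"Value": v} for v in values]

# r53 = boto3.client("route53")
# current = r53.list_resource_record_sets(
#     HostedZoneId=zone_id, StartRecordName=name,
#     StartRecordType="SRV", MaxItems="1")["ResourceRecordSets"][0]
# r53.change_resource_record_sets(
#     HostedZoneId=zone_id,
#     ChangeBatch={"Changes": [{
#         "Action": "UPSERT",
#         "ResourceRecordSet": {
#             "Name": name, "Type": "SRV", "TTL": 60,
#             "ResourceRecords": merge_values(
#                 current["ResourceRecords"], new_srv_value),
#         },
#     }]})
```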
|
# ? Jan 24, 2019 17:43 |
|
Docjowles posted:https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html Thanks. Makes sense. I wish they had something that could just append. Seems like a useful function that people would use.
|
# ? Jan 24, 2019 20:08 |
|
I have put together a CloudFormation template for a job interview. Basically they want a simple web app that returns the current datetime. I'm thinking of having an HTML page hosted in S3 with some JavaScript to hit API Gateway, which will then hit a Lambda to return the date. Maybe I throw in a Route 53 entry. I want this to be as push-button as possible, but how do I get the HTML page into the CloudFormation? Is there a way to code it in, or reference it from a git repo or something? Would I need to use CodeCommit/CodeDeploy?
|
# ? Jan 25, 2019 05:29 |
|
There isn't a great way to get an object into S3 from within CFN. One option would be to use a Lambda Custom Resource to drop the object in the S3 Bucket created in the CFN template. You essentially create another Lambda Function in the template, then create the custom resource, which fires the Lambda Function to perform the put-object. If you are trying to look PRODUCTION READY, you need to handle what the Custom Resource does on UPDATE (replace the file?) and DELETE (delete the file, important for deleting the bucket). The custom resource code is probably more than all your other code, but it's what you do when you want to make AWS API calls that CFN can't do for you. If you can do it in 4096 characters you can put it inline in the CFN template; otherwise you have to stage it in S3. I'd love it, but I could see someone whining about it being overly complex. No matter what you do, the first thing I'm looking at when I review your work is that your IAM and S3 Bucket policies are tight; really lock those things down with resource-level controls to show attention to detail.
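A sketch of what that handler might look like (the function and property names are made up; the response body shape is the standard custom-resource contract, which CloudFormation waits on before continuing the stack operation):

```python
# Sketch of a Lambda behind a custom resource that drops index.html
# into a bucket from the same template. apply_change, the HTML body,
# and the "Bucket"/"Key" property names are hypothetical.
import json
import urllib.request

HTML = "<html><body><h1>Hello</h1></body></html>"  # placeholder page

def apply_change(request_type, bucket, key, s3):
    """Do the S3 side effect for a Create/Update/Delete event."""
    if request_type in ("Create", "Update"):
        s3.put_object(Bucket=bucket, Key=key, Body=HTML,
                      ContentType="text/html")
    elif request_type == "Delete":
        s3.delete_object(Bucket=bucket, Key=key)
    return "SUCCESS"

def handler(event, context):
    import boto3  # available in the Lambda runtime
    props = event["ResourceProperties"]
    try:
        status = apply_change(event["RequestType"], props["Bucket"],
                              props["Key"], boto3.client("s3"))
    except Exception:
        status = "FAILED"
    # Signal CloudFormation so the stack doesn't hang waiting on us.
    body = json.dumps({
        "Status": status,
        "PhysicalResourceId": props["Bucket"] + "/" + props["Key"],
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    })
    req = urllib.request.Request(event["ResponseURL"], data=body.encode(),
                                 method="PUT")
    urllib.request.urlopen(req)
```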
|
# ? Jan 26, 2019 07:05 |
|
Arzakon posted:There isn't a great way to get an object into S3 from within CFN. One option would be to use a Lambda Custom Resource to drop the object in the S3 Bucket created in the CFN template. You essentially have to create another Lambda Function in the template, create the custom resource, which fires the Lambda Function to perform the put-object. If you are trying to look PRODUCTION READY you need to handle what the Custom Resource does on UPDATE (replace the file?), DELETE (delete the file, important for deleting the bucket). The custom resource code is probably more than all your other code but its what you do when you want to make AWS API calls that CFN can't do for you. If you can do it in 4096 characters you can put it inline in the CFN template, otherwise you have to stage it in S3. Thanks, that's pretty much the conclusion I came to myself after a while. Writing the Lambda to put the HTML into S3 wasn't all that bad; I just have to write the custom resource now. I'll have to manage the cleanup a bit and empty the bucket, but I think it'll be pretty slick if I get it working properly.
|
# ? Jan 26, 2019 07:55 |
|
I did something similar recently where it's easy enough to use cfn-init to create a file and then run aws s3 cp. Of course you need a Lambda function to delete the contents of a bucket. I haven't really wrapped my head around update since it seems easy enough to delete and redeploy. You can also run all your services with Docker and use the Watchtower container to pull updated images from your repo. Arzakon posted:No matter what you do the first thing I'm looking at when I review your work is that your IAM and S3 Bucket policies are tight, really lock those things down with resource level controls to show attention to detail. I'm glad you brought this up because it's something I have been thinking about. I'm hoping that in AWS::IAM::Policy I can just use something like Resource: !Ref S3Bucket.
|
# ? Jan 26, 2019 23:16 |
|
RVWinkle posted:I'm glad you brought this up because it's something I have been thinking about. I'm hoping that in AWS::IAM::Policy I can just use something like Resource: !Ref S3Bucket. Yup, that's exactly what I did. I had the bucket set to public read, but I might remove that since I have my Lambda using ExtraArgs to set index.html to public when it uploads the file, so I don't think I really need a bucket policy, right?
|
# ? Jan 27, 2019 01:03 |
|
Oh drat, I just spent two days in AWS::IAM::Policy hell. ListBucket only wants the ARN and DeleteObject wants /*, so I came up with this. code:
|
# ? Jan 30, 2019 02:39 |
|
So... Why not use two statements?
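e.g. something like this (a sketch assuming the bucket's logical ID is `S3Bucket`): bucket-level actions get the bucket ARN, object-level actions get the ARN with `/*` appended.

```yaml
# Hypothetical two-statement policy splitting bucket-level and
# object-level actions over their respective resource ARNs.
Policies:
  - PolicyName: bucket-access
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - s3:ListBucket
          Resource: !GetAtt S3Bucket.Arn
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
            - s3:DeleteObject
          Resource: !Sub "${S3Bucket.Arn}/*"
```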
|
# ? Jan 30, 2019 10:35 |
|
I have an AWS account that I barely use except for learning purposes and it keeps getting restricted in various ways. First it wouldn't allow me to create a certificate for a website I was trying to host on S3; now it won't allow me to create a CloudFront distribution even though it worked before. The support ticket for this is a week old already. Is this normal?
|
# ? Jan 30, 2019 15:18 |
|
Forgall posted:I have aws account that I barely use except for learning purposes and it keeps getting restricted in various ways. First it wouldn't allow me to create certificate for website I was trying to host at s3, now it won't allow me to create cloudfront distribution even though it worked before. Support ticket for this is a week old already. Is this normal? PM me your account number and I’ll have a look when I have a moment.
|
# ? Jan 31, 2019 01:30 |
|
Jeoh posted:So... Why not use two statements? Yeah I get what you're saying. I thought I was being clever but it's probably better to be explicit.
|
# ? Jan 31, 2019 03:39 |
|
I'm just trying out several AWS services to learn how to use them for fun, not for a job or anything. I signed up for AWS ages ago, so the free 12-month thing no longer applies to me. Most of the services I'm playing with (e.g. Lambda and DynamoDB) seem to offer some free usage per month, and realistically I'm never going to hit the threshold to start paying. Nonetheless, I'd rather not be in a position where I wake up to a large bill due to my incompetence, so is there a way I could make the service fail to work altogether whenever I do something that will charge me? The management console is literally the most confusing thing ever and there are options saved all over the place. Also, my AWS account is also my regular Amazon (Prime) account, same login and everything. Is there a way I can split out my AWS account into its own thing? I think I'd feel better if I could, because my Amazon account has my billing details saved for easy purchases and I don't want that to be accessible by AWS.
|
# ? Jan 31, 2019 10:26 |
|
There isn’t a simple way to have it Shut Down Everything when you exceed a billing threshold. What you can do is set a CloudWatch billing alarm that will email you when your estimated monthly spend goes over $X. Then set that to like $1 and you should be able to catch whatever the problem is before it amounts to anything significant. You can google up plenty of guides for this. If you make a one-time mistake you can also usually talk support into giving you an account credit or something, in the worst case. I think you’d have to open a new AWS account to unlink it from your personal Amazon.com account. I actually didn’t even know you could have both services on the same login like that. Docjowles fucked around with this message at 14:27 on Jan 31, 2019 |
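The alarm itself is small; a sketch of the CloudFormation for it (the SNS topic is assumed to exist elsewhere in the template, and note the AWS/Billing metric only lives in us-east-1):

```yaml
# Hypothetical billing alert: notifies an SNS topic once estimated
# monthly charges exceed $1. Requires billing alerts to be enabled,
# and must be created in us-east-1.
BillingAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Estimated monthly spend over $1
    Namespace: AWS/Billing
    MetricName: EstimatedCharges
    Dimensions:
      - Name: Currency
        Value: USD
    Statistic: Maximum
    Period: 21600
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref BillingAlertTopic   # assumed SNS topic resource
```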
# ? Jan 31, 2019 14:24 |
|
Agrikk posted:PM me your account number and I’ll have a look when I have a moment. Docjowles posted:There isn’t a simple way to have it Shut Down Everything when you exceed a billing threshold. What you can do is set a CloudWatch alert that will email you when your estimated monthly spend goes over $X. Could that alert execute a lambda function that would shut things down automatically? Boris Galerkin posted:Also, my AWS account is also my regular Amazon (Prime) account, same login and everything. Is there a way I can split out my AWS account into its own thing? I think I'd feel better if I could because my Amazon account has my billing details saved for easy purchases but I don't want that to be accessible by AWS. Forgall fucked around with this message at 15:01 on Jan 31, 2019 |
# ? Jan 31, 2019 14:58 |
|
Forgall posted:Could that alert execute a lambda function that would shut things down automatically? That's why I said "no simple way", heh. Yes, you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean, for example? Delete all your data?
|
# ? Jan 31, 2019 15:20 |
|
Docjowles posted:That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data?
|
# ? Jan 31, 2019 15:24 |
|
Docjowles posted:That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data? Well, in my case shutting down would mean returning an "error: resource not accessible" error or something like that.
|
# ? Jan 31, 2019 15:51 |
|
Boris Galerkin posted:Well in my case shutting down would mean returning a "error resource not accessible" or something like that error.
|
# ? Jan 31, 2019 16:16 |
|
I'm positive that this has been covered at some point, but what's considered the best S3 GUI for Windows? I'm perfectly fine working in the CLI/API, but I have a few users asking and I honestly don't know. Someone mentioned wanting to have the S3 bucket as a mounted drive, which I'm sure is doable. I figure I'd trust the working knowledge here more than just a quick Googling.
|
# ? Jan 31, 2019 23:16 |
|
CloudBerry works.
|
# ? Jan 31, 2019 23:34 |
|
What is the best method for triggering an autoscaling event based on output in a log file on the EC2 instances in the group? The use case is a SIP platform, and I'd like to be able to trigger a scale-out event when the number of calls on any given instance reaches X.
|
# ? Feb 1, 2019 15:04 |
|
Scrapez posted:What is the best method for triggering an autoscaling event based on output into a log file on the EC2 instances in the group? Use case is a SIP platform and I'd like to be able to trigger a scale out event when number of calls on any given instance reaches X. You can create a custom cloudwatch metric and then use it as your scaling criteria https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
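A sketch of the publishing side (the namespace, metric, and dimension names are made up): something on each instance counts active calls and pushes the number as a custom metric, and the scale-out alarm watches that metric.

```python
# Sketch: build the MetricData payload for a custom "ActiveCalls"
# metric. The boto3 call is left commented since it needs credentials.

def call_count_metric(asg_name, active_calls):
    """One MetricData entry reporting active SIP calls for an ASG."""
    return {
        "MetricName": "ActiveCalls",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Value": float(active_calls),
        "Unit": "Count",
    }

# On the instance (cron or agent), after counting calls from the log:
#   cw = boto3.client("cloudwatch")
#   cw.put_metric_data(Namespace="Kamailio",
#                      MetricData=[call_count_metric("kamailio", calls)])
```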
|
# ? Feb 1, 2019 15:09 |
|
Put the logs into CloudWatch, trigger CloudWatch Event, get Lambda to scale ASG? e: ^ is even better
|
# ? Feb 1, 2019 15:11 |
|
JHVH-1 posted:You can create a custom cloudwatch metric and then use it as your scaling criteria Thank you. That is exactly what I was looking for but google searches had not gotten me to that.
|
# ? Feb 1, 2019 16:02 |
|
|
Scrapez posted:Thank you. That is exactly what I was looking for but google searches had not gotten me to that. Probably have to play around with the alarms and getting the right metrics so you have both scale-up and scale-down criteria based on something that covers the whole cluster. At my last company we had a developer who was populating a metric in their code and never thought about creating a scale-down one, so the thing would get busy, or a bug would scale it out like crazy, and then it would never reduce.
|
# ? Feb 1, 2019 16:35 |