RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost
Hey, can anybody help me with some syntax in my CloudFormation template? I'm making a secure Pi-hole stack just for fun, and when I use conditions and parameters with AWS::CloudFormation::Init things go to hell. I've been searching around but haven't been able to find any code examples for this scenario.


This works fine:
code:
command: !If [NewEBS, "mkfs -t ext4 /dev/sdh", "echo not formatting"]
But if I have to pass parameters with !Sub, it has various issues.


This says YAML is malformed and won't run the template.
code:
command: !If [NewEBS, !Sub |
	docker run -e foo=${bar}, "echo skip"]
This validates but sends !Sub to the shell and will fail to create.
code:
command: !If [NewEBS, "!Sub |
	docker run -e foo=${bar}", "echo skip"]
Edit: I've tried all kinds of variations but basically the condition won't validate unless there's a quote after the comma and cfn-init passes along everything after the quote.

RVWinkle fucked around with this message at 05:45 on Jan 16, 2019


the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





RVWinkle posted:

Hey, can anybody help me with some syntax in my CloudFormation template? I'm making a secure Pi-hole stack just for fun, and when I use conditions and parameters with AWS::CloudFormation::Init things go to hell. I've been searching around but haven't been able to find any code examples for this scenario.


This works fine:
code:
command: !If [NewEBS, "mkfs -t ext4 /dev/sdh", "echo not formatting"]
But if I have to pass parameters with !Sub, it has various issues.


This says YAML is malformed and won't run the template.
code:
command: !If [NewEBS, !Sub |
	docker run -e foo=${bar}, "echo skip"]
This validates but sends !Sub to the shell and will fail to create.
code:
command: !If [NewEBS, "!Sub |
	docker run -e foo=${bar}", "echo skip"]
Edit: I've tried all kinds of variations but basically the condition won't validate unless there's a quote after the comma and cfn-init passes along everything after the quote.

i don't know if your examples are just misformatted or what, but the first is equivalent to:
code:
command: !If [ NewEBS, !Sub "docker run -e foo=${bar}, \"echo skip\" ]"
and the second:
code:
command: !If [ NewEBS, "!Sub |
    docker run -e foo=${bar}", "echo skip"]
neither of which are valid yaml

you probably want:

code:
command: !If [ NewEBS, !Sub "docker run -e foo=${bar}", "echo skip" ]
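for reference, you can sidestep the shorthand nesting weirdness entirely by using the long form of the functions; something like:

```yaml
command:
  Fn::If:
    - NewEBS
    - Fn::Sub: "docker run -e foo=${bar}"
    - "echo skip"
```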

SnatchRabbit
Feb 23, 2006

by sebmojo
I'm writing a management web page for some AWS resources and it's pretty daunting. I'm basically having to rewrite portions of the AWS console so that clients can mash buttons to interact with the environments we've built. What I'm currently doing is sending commands and pulling data using API Gateway and Lambda, then displaying it on the webpage. It's a ton of work to write all the buttons just to get a stripped-down version of Amazon's web GUI, so I'm wondering if I'm going about this all wrong. Is there a simple way to use, say, CloudWatch dashboards or something and pipe that over to another webpage somehow? I know you can make widgets to check on EC2 stats and such, but it seems like you can only pull data out. Anyone tried something like this?

edit: to be clear, manipulating the AWS resources isn't the hard part. It's getting status information back that's really proving to be a pain. I'm having to do multiple describe_instances and describe_instance_status calls and loop through everything to get information about the status of whatever it is I executed with the buttons.

edit2: I guess I could try to pull the events from the stacks in CloudFormation as well, but that might be as much of a pain. We'll also be doing a lot of orchestration through CodeDeploy, so I might be able to get something out of there....

SnatchRabbit fucked around with this message at 23:06 on Jan 16, 2019

RVWinkle
Aug 24, 2004


the talent deficit posted:



you probably want:

code:
command: !If [ NewEBS, !Sub "docker run -e foo=${bar}", "echo skip" ]

Thanks for helping with that! I looked up what the pipe actually does and it all makes sense now.

I'm pretty new to YAML but one look and I realized that I never want to touch JSON again.

Edit: Now my stack is fully automated and can be redeployed while maintaining persistence! I know this is 'baby's first stack' but I love how powerful CloudFormation is.

RVWinkle fucked around with this message at 23:54 on Jan 16, 2019

Scrapez
Feb 27, 2004

Question on methodology. I want to create a cloudwatch event that will kick off when auto scaling launches a new instance successfully. Additionally, I want a script or a bunch of commands to be run on the ec2 instance that is launched.

I've created the CloudWatch event with the correct service, event, and group name as the source. I've set the target as SSM Run Command with document AWS-RunShellScript (Linux). I have my target key set to "tag:Server Type" and target value of "kamailio". (I have the launch configuration of the autoscaling group set to tag new instances with tag Server Type and value kamailio.)

Is the above the proper way to say "execute the following commands on new instances with tag Server Type and value kamailio"?

Additionally, is there a way to have it just execute a whole script rather than putting each command in separately as a Constant Configure Parameter?

I hope the above makes sense. Ultimately, if an instance crashes, I want the autoscaling group to launch a replacement, I then want the cloudwatch event to be triggered and run a script that will basically grab the local and public IP address of the instance, put them into variables and then write them out to application config files and start the applications.

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.

Scrapez posted:

Is the above the proper way to say "execute the following commands on new instances with tag Server Type and value kamailio"?

Have you looked into using user-data to execute the script on launch? You could bake the script into your AMI and just use the user-data to run the command, or put the whole script into the user-data.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html

There is a UserData field in the Launch Configuration you are defining for your auto-scaling group so you don't have to use a CWE or apply it to specific tags. It will just run on anything launched by that ASG.
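In the template that looks roughly like this (just a sketch; the AMI parameter and script path are made up):

```yaml
KamailioLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !Ref KamailioAmiId        # AMI with the script already baked in
    InstanceType: t3.small
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # runs once at first boot on every instance this ASG launches
        /opt/kamailio/bootstrap.sh
```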

Scrapez
Feb 27, 2004

Arzakon posted:

Have you looked into using user-data to execute the script on launch? You could bake the script into your AMI and just use the user-data to run the command, or put the whole script into the user-data.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html

There is a UserData field in the Launch Configuration you are defining for your auto-scaling group so you don't have to use a CWE or apply it to specific tags. It will just run on anything launched by that ASG.

I have successfully done it this way but was hoping to move it to a CloudWatch event as I'll have a subsequent Event that will need to happen when a new instance is launched as well. I thought it'd be better to have all the items together there for easier management.

Arzakon
Nov 24, 2002


Scrapez posted:

I have successfully done it this way but was hoping to move it to a CloudWatch event as I'll have a subsequent Event that will need to happen when a new instance is launched as well. I thought it'd be better to have all the items together there for easier management.

So you have a related action you also want to fire on the event, so you need the CloudWatch Event for another target anyway? Seems reasonable to do it through SSM then. On to your question about SSM: isn't the entire script specified in the document, with only your variables in the parameters? Not in a place where I can get hands-on right now, but I think that's the way I remember it.

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-doc-syntax.html
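Something like this, if I'm remembering the document syntax right (all names and paths made up):

```yaml
schemaVersion: "2.2"
description: Configure kamailio on a freshly launched instance
parameters:
  configPath:
    type: String
    default: /etc/kamailio/kamailio.cfg
mainSteps:
  - action: aws:runShellScript
    name: configureKamailio
    inputs:
      runCommand:
        - LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
        - PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
        - sed -i "s/__LOCAL_IP__/$LOCAL_IP/; s/__PUBLIC_IP__/$PUBLIC_IP/" {{ configPath }}
        - systemctl start kamailio
```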

Scrapez
Feb 27, 2004

My understanding was that I could select AWS-RunShellScript (Linux) as the document type and then, in the Commands section, just add commands to be run on the command line. Below is how I have it set up currently for testing. My auto scaling group, kamailio, successfully launches a new EC2 instance when I terminate one, but either that's not triggering this event, or once triggered the event just isn't executing the commands.

https://imgur.com/a/T7ZoqNN

Edit: I'm working from this tutorial: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EC2_Run_Command.html

Double Edit: I set up a CloudWatch alarm for invocation of my autoscaling group kamailio. Deleted my instance, which triggered the autoscale function, and I did get an alarm in CloudWatch. I'm stumped.

Scrapez fucked around with this message at 15:38 on Jan 22, 2019

JHVH-1
Jun 28, 2002
Can you manually run the SSM command to make sure it works? You have the instances set up with the agent and everything, right? (Depending on what your base image is, there's a chance it's not installed already.)

Scrapez
Feb 27, 2004

JHVH-1 posted:

Can you manually run the SSM command to make sure it works? You have the instances set up with the agent and everything, right? (Depending on what your base image is, there's a chance it's not installed already.)

That could be the problem. I didn't manually set up SSM on the image at all. I'll look into that. Thank you.

Edit: I made sure the SSM agent was running and took a new image. Confirmed that when it launches a new instance, the SSM agent is running on startup. Made sure the IAM role for the CloudWatch event has all permissions for SSM. No clue why it isn't working.

2nd Edit: This is everything related to ssm I see in /var/log/messages on the launched EC2 instance:
code:
Jan 22 16:38:09 ip-10-100-10-55 systemd: Started amazon-ssm-agent.
Jan 22 16:38:09 ip-10-100-10-55 amazon-ssm-agent: 2019/01/22 16:38:09 Failed to load instance info from vault. RegistrationKey does not exist.
Jan 22 16:38:09 ip-10-100-10-55 amazon-ssm-agent: Error occurred fetching the seelog config file path:  open /etc/amazon/ssm/seelog.xml: no such file or directory
Jan 22 16:38:09 ip-10-100-10-55 amazon-ssm-agent: Initializing new seelog logger
Jan 22 16:38:09 ip-10-100-10-55 amazon-ssm-agent: New Seelog Logger Creation Complete
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Create new startup processor
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [LongRunningPluginsManager] registered plugins: {}
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing bookkeeping folders
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO removing the completed state files
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing bookkeeping folders for long running plugins
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing healthcheck folders for long running plugins
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing locations for inventory plugin
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing default location for custom inventory
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing default location for file inventory
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Initializing default location for role inventory
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Init the cloudwatchlogs publisher
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO Starting Agent: amazon-ssm-agent - v2.3.372.0
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO OS: linux, Arch: amd64
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: datastore file /var/lib/amazon/ssm/i-030c68d4bb30a2241/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] Starting document processing engine...
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] Starting message polling
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] Starting send replies to MDS
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [instanceID=i-030c68d4bb30a2241] Starting association polling
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [Association] Launching response handler
Jan 22 16:38:10 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] Starting session document processing engine...
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] [EngineProcessor] Starting
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-030c68d4bb30a2241, requestId: e5ac230f-b6f4-43b5-a269-e0aac69a4076
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [OfflineService] Starting document processing engine...
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [OfflineService] [EngineProcessor] Starting
Jan 22 16:38:11 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [OfflineService] [EngineProcessor] Initial processing
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [OfflineService] Starting message polling
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [OfflineService] Starting send replies to MDS
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [LongRunningPluginsManager] starting long running plugin manager
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [HealthCheck] HealthCheck reporting agent health.
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] listening reply.
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [StartupProcessor] Executing startup processor tasks
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.372.0 is running
Jan 22 16:38:12 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [StartupProcessor] Write to serial port: OsProductName: CentOS Linux
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [StartupProcessor] Write to serial port: OsVersion: 7
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] Opening websocket connection to: %!(EXTRA string=wss://ssmmessages.us-east-1.amazonaws.com/v1/control-channel/i-030c68d4bb30a2241?role=subscribe&stream=input)
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] Successfully opened websocket connection to: %!(EXTRA string=wss://ssmmessages.us-east-1.amazonaws.com/v1/control-channel/i-030c68d4bb30a2241?role=subscribe&stream=input)
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] Starting receiving message from control channel
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] ssm-user already exists.
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] File /etc/sudoers.d/ssm-agent-users already exists
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:10 INFO [MessageGatewayService] Successfully changed mode of /etc/sudoers.d/ssm-agent-users to 288
Jan 22 16:38:13 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:11 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Jan 22 16:38:41 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:38:41 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Jan 22 16:40:51 ip-10-100-10-55 amazon-ssm-agent: 2019-01-22 16:40:51 INFO [HealthCheck] HealthCheck reporting agent health.

Scrapez fucked around with this message at 17:44 on Jan 22, 2019

Arzakon
Nov 24, 2002

You shouldn't have to do any setup on the instance if you're launching a Linux that comes with it installed and you don't have wonky requirements like a proxy to reach the API. It does need an IAM role attached via an instance profile to be able to poll the SSM service for commands to run. If the instance isn't appearing as a managed instance in the console, then it's likely the instance doesn't have permission to access the Systems Manager API.

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-configuring-access-role.html
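The role plumbing is minimal; a sketch (using the AWS-managed policy for SSM):

```yaml
SsmInstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
SsmInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - !Ref SsmInstanceRole
```

Then reference the instance profile from the launch configuration.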

Ajaxify
May 6, 2009
Try loosening your security groups (even if they look ok) and see if that helps.

Scrapez
Feb 27, 2004

Bit of an obscure question.

I'm trying to update a DNS SRV record when a new instance is launched. I have a Lambda/Python function that performs a ChangeResourceRecordSets upsert, inserting the IP of the newly launched instance into the value portion of the SRV record. The problem is that when I launch an additional instance, it replaces the value in the SRV record instead of appending the info for the new instance.

I thought that with UPSERT it was supposed to just update a record if it already exists. I'm assuming this somehow doesn't apply to SRV records, or to the value section specifically.

Is my only recourse to list-resource-record-sets for the record, throw the current value in a variable, and then perform my ChangeResourceRecordSets with both the existing value and my new value?

Just trying to understand if UPSERT should be overwriting the value as I'm seeing, or if it's something I'm doing incorrectly.

Docjowles
Apr 9, 2009

https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html

quote:

UPSERT: If a resource record set does not already exist, AWS creates it. If a resource set does exist, Route 53 updates it with the values in the request.

Upsert doesn't mean append. It means create if the record doesn't exist at all, or overwrite with the specified value if it does. So yes, you need to read it into a variable, append the string you want added, and then make an API call to set it to that new value.

Scrapez
Feb 27, 2004

Docjowles posted:

https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html


Upsert doesn't mean append. It means create if the record doesn't exist at all, or overwrite with the specified value if it does. So yes, you need to read it into a variable, append the string you want added, and then make an API call to set it to that new value.

Thanks. Makes sense. I wish they had something that could just append. Seems like a useful function that people would use.

SnatchRabbit
Feb 23, 2006

I have put together a CloudFormation template for a job interview. Basically they want a simple web app that returns the current datetime. I'm thinking of having an HTML page hosted in S3 with some JavaScript to hit API Gateway, which will then hit a Lambda to return the date. Maybe I throw in a Route 53 entry. I want this to be as push-button as possible, but how do I get the HTML page into the CloudFormation? Is there a way to code it in, or reference it from a git repo or something? Would I need to use CodeCommit/CodeDeploy?

Arzakon
Nov 24, 2002

There isn't a great way to get an object into S3 from within CFN. One option is to use a Lambda custom resource to drop the object into the S3 bucket created in the CFN template. You essentially create another Lambda function in the template, then create the custom resource, which fires the Lambda function to perform the put-object. If you're trying to look PRODUCTION READY you need to handle what the custom resource does on UPDATE (replace the file?) and DELETE (delete the file, which matters for deleting the bucket). The custom resource code is probably more than all your other code, but it's what you do when you want to make AWS API calls that CFN can't do for you. If you can do it in 4096 characters you can put it inline in the CFN template; otherwise you have to stage it in S3.
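A heavily trimmed sketch of the shape (resource names made up, and real code needs to signal FAILED on errors too):

```yaml
IndexObject:
  Type: Custom::S3Object
  Properties:
    ServiceToken: !GetAtt PutObjectFunction.Arn
    Bucket: !Ref SiteBucket
PutObjectFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: python3.7
    Role: !GetAtt PutObjectRole.Arn
    Code:
      ZipFile: |
        import boto3, cfnresponse
        def handler(event, context):
            s3 = boto3.client('s3')
            bucket = event['ResourceProperties']['Bucket']
            if event['RequestType'] in ('Create', 'Update'):
                s3.put_object(Bucket=bucket, Key='index.html',
                              Body=b'<html>...</html>',
                              ContentType='text/html')
            elif event['RequestType'] == 'Delete':
                s3.delete_object(Bucket=bucket, Key='index.html')
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
```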

I'd love it, but I could see someone whining about it being overly complex.

No matter what you do, the first thing I'm looking at when I review your work is whether your IAM and S3 bucket policies are tight. Really lock those things down with resource-level controls to show attention to detail.

SnatchRabbit
Feb 23, 2006


Arzakon posted:

There isn't a great way to get an object into S3 from within CFN. One option is to use a Lambda custom resource to drop the object into the S3 bucket created in the CFN template. You essentially create another Lambda function in the template, then create the custom resource, which fires the Lambda function to perform the put-object. If you're trying to look PRODUCTION READY you need to handle what the custom resource does on UPDATE (replace the file?) and DELETE (delete the file, which matters for deleting the bucket). The custom resource code is probably more than all your other code, but it's what you do when you want to make AWS API calls that CFN can't do for you. If you can do it in 4096 characters you can put it inline in the CFN template; otherwise you have to stage it in S3.

I'd love it, but I could see someone whining about it being overly complex.

No matter what you do, the first thing I'm looking at when I review your work is whether your IAM and S3 bucket policies are tight. Really lock those things down with resource-level controls to show attention to detail.

Thanks, that's pretty much the conclusion I came to myself after a while. Writing the Lambda to put the HTML into S3 wasn't all that bad; I just have to write the custom resource now. I'll have to manage the cleanup a bit and empty the bucket, but I think it'll be pretty slick if I get it working properly.

RVWinkle
Aug 24, 2004

I did something similar recently where it's easy enough to use cfn-init to create a file and then run aws s3 cp. Of course you need a Lambda function to delete the contents of a bucket. I haven't really wrapped my head around update since it seems easy enough to delete and redeploy. You can also run all your services with Docker and use the Watchtower container to pull updated images from your repo.

Arzakon posted:

No matter what you do, the first thing I'm looking at when I review your work is whether your IAM and S3 bucket policies are tight. Really lock those things down with resource-level controls to show attention to detail.

I'm glad you brought this up because it's something I have been thinking about. I'm hoping that in AWS::IAM::Policy I can just use something like Resource: !Ref S3Bucket.

SnatchRabbit
Feb 23, 2006


RVWinkle posted:

I'm glad you brought this up because it's something I have been thinking about. I'm hoping that in AWS::IAM::Policy I can just use something like Resource: !Ref S3Bucket.

Yup, that's exactly what I did. I had the bucket set to public read, but I might remove that since I have my Lambda using ExtraArgs to set index.html to public when it uploads the file, so I don't think I really need a bucket policy, right?

RVWinkle
Aug 24, 2004

Oh drat, I just spent two days in AWS::IAM::Policy hell. ListBucket only wants the ARN and DeleteObject wants /*, so I came up with this.

code:
          - Effect: Allow
            Action:
              - s3:DeleteObject
              - s3:ListBucket
            Resource:
                !Join
                  - ''
                  - - !GetAtt myS3Bucket.Arn
                    - '*'

vanity slug
Jul 20, 2010

So... Why not use two statements?

Forgall
Oct 16, 2012

by Azathoth
I have aws account that I barely use except for learning purposes and it keeps getting restricted in various ways. First it wouldn't allow me to create certificate for website I was trying to host at s3, now it won't allow me to create cloudfront distribution even though it worked before. Support ticket for this is a week old already. Is this normal?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Forgall posted:

I have aws account that I barely use except for learning purposes and it keeps getting restricted in various ways. First it wouldn't allow me to create certificate for website I was trying to host at s3, now it won't allow me to create cloudfront distribution even though it worked before. Support ticket for this is a week old already. Is this normal?

PM me your account number and I’ll have a look when I have a moment.

RVWinkle
Aug 24, 2004


Jeoh posted:

So... Why not use two statements?

Yeah I get what you're saying. I thought I was being clever but it's probably better to be explicit.
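Spelled out as two statements it would be something like:

```yaml
- Effect: Allow
  Action: s3:ListBucket
  Resource: !GetAtt myS3Bucket.Arn
- Effect: Allow
  Action: s3:DeleteObject
  Resource: !Sub "${myS3Bucket.Arn}/*"
```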

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I'm just trying out several AWS services to learn how to use them for fun, not for a job or anything. I signed up for AWS ages ago, so the free 12-month tier no longer applies to me. Most of the services I'm playing with (e.g. Lambda and DynamoDB) seem to offer some free usage per month, and realistically I'm never going to hit the threshold to start paying. Nonetheless, I'd rather not wake up to a large bill due to my incompetence, so is there a way I could make a service just fail to work altogether whenever I do something that would charge me? The management console is literally the most confusing thing ever and there are options saved all over the place.

Also, my AWS account is also my regular Amazon (Prime) account, same login and everything. Is there a way I can split my AWS account out into its own thing? I think I'd feel better if I could, because my Amazon account has my billing details saved for easy purchases and I don't want that to be accessible by AWS.

Docjowles
Apr 9, 2009

There isn’t a simple way to have it Shut Down Everything when you exceed a billing threshold. What you can do is set a CloudWatch alert that will email you when your estimated monthly spend goes over $X. Then set that to like $1 and you should be able to catch whatever the problem is before it amounts to anything significant. You can google up plenty of guides for this. If you make a one time mistake you can also usually talk support into giving you an account credit or something, in the worst case.
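If you'd rather define it in a template than click around, the alarm is something like this (a sketch; the SNS topic is whatever your email is subscribed to, and the billing metric only lives in us-east-1 after you enable billing alerts):

```yaml
SpendAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Estimated monthly charges exceeded the limit
    Namespace: AWS/Billing
    MetricName: EstimatedCharges
    Dimensions:
      - Name: Currency
        Value: USD
    Statistic: Maximum
    Period: 21600
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref SpendAlertTopic
```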

I think you’d have to open a new AWS account to unlink it from your personal Amazon.com account. I actually didn’t even know you could have both services on the same login like that.

Docjowles fucked around with this message at 14:27 on Jan 31, 2019

Forgall
Oct 16, 2012


Agrikk posted:

PM me your account number and I’ll have a look when I have a moment.
I don't have PMs here, and it's honestly not urgent right now. I was just wondering if it's a common issue for low-activity accounts.

Docjowles posted:

There isn’t a simple way to have it Shut Down Everything when you exceed a billing threshold. What you can do is set a CloudWatch alert that will email you when your estimated monthly spend goes over $X.
Could that alert execute a Lambda function that would shut things down automatically?

Boris Galerkin posted:

Also, my AWS account is also my regular Amazon (Prime) account, same login and everything. Is there a way I can split out my AWS account into its own thing? I think I'd feel better if I could because my Amazon account has my billing details saved for easy purchases but I don't want that to be accessible by AWS.
I'm in the same boat. I wanted to create a separate AWS account and they asked me to fax them my ID documents, and where do you even find a fax machine in 2018? So I just gave up on that.

Forgall fucked around with this message at 15:01 on Jan 31, 2019

Docjowles
Apr 9, 2009

Forgall posted:

Could that alert execute a Lambda function that would shut things down automatically?

That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data?
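For what it's worth, for stateless stuff like EC2 the plumbing is roughly alarm → SNS → Lambda. A CloudFormation sketch of just that wiring (the IAM role and the function body here are placeholders, not a drop-in solution):
code:
# Hypothetical wiring: an alarm publishes to SNS, which invokes a Lambda
# that stops all running EC2 instances. ShutdownRole (not shown) needs
# ec2:DescribeInstances and ec2:StopInstances.
ShutdownTopic:
  Type: AWS::SNS::Topic
ShutdownFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.9
    Handler: index.handler
    Role: !GetAtt ShutdownRole.Arn
    Code:
      ZipFile: |
        import boto3
        def handler(event, context):
            ec2 = boto3.client("ec2")
            ids = [i["InstanceId"]
                   for r in ec2.describe_instances(
                       Filters=[{"Name": "instance-state-name",
                                 "Values": ["running"]}])["Reservations"]
                   for i in r["Instances"]]
            if ids:
                ec2.stop_instances(InstanceIds=ids)
ShutdownSubscription:
  Type: AWS::SNS::Subscription
  Properties:
    TopicArn: !Ref ShutdownTopic
    Protocol: lambda
    Endpoint: !GetAtt ShutdownFunction.Arn
ShutdownInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref ShutdownFunction
    Action: lambda:InvokeFunction
    Principal: sns.amazonaws.com
    SourceArn: !Ref ShutdownTopic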

Forgall
Oct 16, 2012

by Azathoth

Docjowles posted:

That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data?
If the cost overrun is because tons of people are suddenly using your service, you could at least take it private.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Docjowles posted:

That's why I said "no simple way" heh. Yes you could do this, but it seems like a lot more hassle than an email alert with a super low threshold. What does "shutting down" DynamoDB or S3 even mean for example? Delete all your data?

Well, in my case shutting down would mean returning an "error: resource not accessible" response or something like that.

Forgall
Oct 16, 2012

by Azathoth

Boris Galerkin posted:

Well, in my case shutting down would mean returning an "error: resource not accessible" response or something like that.
If the resource in question is API Gateway, you can create a usage plan for your API deployment and limit the total number of calls available per day/week/month. Sorry, I only know how to do it through CloudFormation and not the console.
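Roughly like this (the `MyApi`/`MyStage` names are made up, and the plan has to point at an already-deployed stage):
code:
# Hypothetical sketch: cap a stage at 10k requests/day and throttle bursts.
MyUsagePlan:
  Type: AWS::ApiGateway::UsagePlan
  Properties:
    ApiStages:
      - ApiId: !Ref MyApi
        Stage: !Ref MyStage
    Quota:
      Limit: 10000
      Period: DAY        # DAY | WEEK | MONTH
    Throttle:
      RateLimit: 50      # steady-state requests per second
      BurstLimit: 100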

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
I'm positive this has been covered at some point, but what's considered the best S3 GUI for Windows? I'm perfectly fine working in the CLI/API, but I have a few users asking and I honestly don't know. Someone mentioned wanting to have an S3 bucket as a mounted drive, which I'm sure can be done. I figure I'd trust the working knowledge here more than a quick Googling.

vanity slug
Jul 20, 2010

CloudBerry works.

Scrapez
Feb 27, 2004

What is the best method for triggering an autoscaling event based on output to a log file on the EC2 instances in the group? The use case is a SIP platform, and I'd like to be able to trigger a scale-out event when the number of calls on any given instance reaches X.

JHVH-1
Jun 28, 2002

Scrapez posted:

What is the best method for triggering an autoscaling event based on output to a log file on the EC2 instances in the group? The use case is a SIP platform, and I'd like to be able to trigger a scale-out event when the number of calls on any given instance reaches X.

You can create a custom CloudWatch metric and then use it as your scaling criterion.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
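Wiring a custom metric like that into a scale-out policy might look like this in CloudFormation (namespace, metric name, and thresholds are all made up; it assumes your instances publish an ActiveCalls metric, e.g. via put-metric-data, and that the ASG is defined elsewhere in the stack):
code:
# Hypothetical sketch: add an instance when average active calls pass 100.
ScaleOutPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref SipAsg
    AdjustmentType: ChangeInCapacity
    ScalingAdjustment: 1
    Cooldown: 300
HighCallsAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: Custom/SIP
    MetricName: ActiveCalls
    Statistic: Average
    Period: 60
    EvaluationPeriods: 3
    Threshold: 100
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref ScaleOutPolicy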

vanity slug
Jul 20, 2010

Put the logs into CloudWatch, trigger a CloudWatch Event, and get Lambda to scale the ASG?

e: ^ is even better
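A sketch of the Lambda half of that, with the capacity math pulled out into a plain function so it's easy to test; the ASG name and step size are assumptions:
```python
def next_capacity(current, maximum, step=1):
    """Pure helper: bump desired capacity by `step`, capped at `maximum`."""
    return min(current + step, maximum)

def handler(event, context):
    # boto3 imported inside the handler so the module loads without AWS deps
    import boto3
    asg = boto3.client("autoscaling")
    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=["sip-platform-asg"]  # hypothetical ASG name
    )["AutoScalingGroups"][0]
    desired = next_capacity(group["DesiredCapacity"], group["MaxSize"])
    if desired != group["DesiredCapacity"]:
        asg.set_desired_capacity(
            AutoScalingGroupName="sip-platform-asg",
            DesiredCapacity=desired,
        )
```
You'd want a matching scale-in path too, or you end up with the never-scales-down problem mentioned below.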

Scrapez
Feb 27, 2004

JHVH-1 posted:

You can create a custom CloudWatch metric and then use it as your scaling criterion.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html

Thank you. That is exactly what I was looking for, but Google searches hadn't gotten me there.


JHVH-1
Jun 28, 2002

Scrapez posted:

Thank you. That is exactly what I was looking for, but Google searches hadn't gotten me there.

You'll probably have to play around with the alarms to get the right metrics, so you have both scale-up and scale-down criteria based on something that covers the whole cluster.
At my last company we had a developer who was populating a metric in their code and never thought about creating a scale-down one, so the thing would get busy or a bug would scale it out like crazy and then never reduce it.
