|
Quick question here. I'm trying to use EBS to set up a Moodle platform. I've set up the environment and uploaded the Moodle zip package, which deploys correctly. I run through the web installation and connect to the db, but I get stuck at the prerequisite checks with the following error: code:
|
# ¿ Apr 20, 2017 21:13 |
|
xpander posted:I'm not sure where EBS comes in here - were you given a disk image to use somehow? EBS is Amazon's "hard drive in the cloud" offering, so it shouldn't have much to do with Moodle. But I don't know Moodle at all. Sorry, I'm referring to Elastic Beanstalk. Essentially, Moodle is just a PHP application that I can download in zip form, and Elastic Beanstalk will accept it. The trouble is getting the environment I set up to play nice with Moodle vis-à-vis PHP extensions.
|
# ¿ Apr 20, 2017 22:55 |
|
I have an older EC2 instance from around 2014, I think in EC2-Classic. I built a complete upgraded server but I can't seem to associate my Elastic IP with my new server. I'm sure there's something I'm missing regarding EC2-Classic, but does anyone have a quick tutorial on how to associate an existing Elastic IP with a new server? I tried associating it in AWS, but it only allows me to associate with the old server. edit: figured it out, had to migrate the Elastic IP to the VPC scope of the new instance. SnatchRabbit fucked around with this message at 20:56 on Aug 4, 2017 |
# ¿ Aug 4, 2017 20:34 |
|
Speaking of certs is there a good resource for free practice exams? I just finished a course for associate solutions architect so I'm looking for some more exams to make sure I'm on my A game.
|
# ¿ Dec 11, 2017 17:29 |
|
Rapner posted:Not free but A Cloud Guru is very cheap. Do they sell the practice exams separately? All I see are the $99 courses.
|
# ¿ Dec 12, 2017 18:03 |
|
Seventh Arrow posted:I booked my Solutions Architect - Associate exam for Feb 12 so I'm going to try and do as many labs and practice exams as I can until then. I've heard that there are a lot of scenario questions, so it seems best to have a well-rounded knowledge of the material instead of just mastering AWS trivia questions. Looking at the A Cloud Guru forums, however, it seems that the exams take a keen interest in subjects that one would never think to focus on initially - like bastion hosts, SWF use cases, and so on. I scraped by on my AWS Solutions Architect exam. I took courses and practice exams. My experience is that I got way more questions about newer stuff like Lambda than I was led to believe would be there. They seem to favor newer technologies even though the courses say to focus on core services like EC2, S3, EBS, etc. So definitely don't skimp on the obscure stuff.
|
# ¿ Feb 7, 2018 21:48 |
|
Has anyone messed around with Lambda functions? My boss asked me to come up with a function to do some penetration testing on our instances - essentially, check to see whether a certain port on a certain instance is open. I'm thinking about using API Gateway and maybe a simple webpage front end that will run the Lambda function, but I'm open to ideas. I kind of suck at coding in Python but this should be pretty simple. Anyone have any suggestions?
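If it helps, here's a minimal sketch of what that Lambda could look like using nothing but the stdlib socket module - the event fields and default host/port are illustrative, and it assumes the function runs somewhere with network reach to the target instance:

```python
# Hedged sketch of a port-check Lambda. Event keys "host"/"port" are
# assumptions for this example, not an AWS-defined event shape.
import json
import socket

def lambda_handler(event, context):
    host = event.get("host", "10.0.0.10")   # target IP or DNS name
    port = int(event.get("port", 22))
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)                       # keep well under the Lambda timeout
    try:
        # connect_ex returns 0 when the TCP connect succeeds, i.e. port is open
        is_open = sock.connect_ex((host, port)) == 0
    finally:
        sock.close()
    return {"statusCode": 200,
            "body": json.dumps({"host": host, "port": port, "open": is_open})}
```

Wiring that behind API Gateway with a small front end would work fine; the handler itself stays the same either way.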
|
# ¿ Feb 21, 2018 16:34 |
|
JHVH-1 posted:Sounds pretty complicated, could just run nc or something. I had a script like this I wrote in python though a couple jobs ago to make sure a port was open so the app was running just using sockets I think. Thanks, those are very useful links. Re: Lambda, I wasn't sure how feasible it was; it was more an idea for us to dip our toe into serverless. Yeah, I'll bring up that we are supposed to notify AWS. The problem with flow logs is that this is a build/test environment, so I don't think there's much traffic flowing through as of yet.
|
# ¿ Feb 21, 2018 21:26 |
|
Thanks for the replies. Another question: can anyone recommend a good S3 viewer for Mac OS?
|
# ¿ Mar 2, 2018 20:01 |
|
Thanks again, yet another question: Is there a tool or command I can use to get a list of all the AMIs that are currently being used with EC2 in my environment? Essentially, we want to be able to prove for regulatory reasons we are only using base AMIs with all services disabled by default.
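One way to pull that list is just paginating describe_instances and collecting the distinct ImageIds - a sketch, assuming boto3 credentials are already configured for the account in question:

```python
# Sketch: gather every AMI currently referenced by an instance in one region.
def extract_image_ids(pages):
    """Pull the distinct ImageIds out of describe_instances response pages."""
    image_ids = set()
    for page in pages:
        for reservation in page.get("Reservations", []):
            for instance in reservation.get("Instances", []):
                image_ids.add(instance["ImageId"])
    return sorted(image_ids)

def amis_in_use(region="us-east-1"):
    import boto3  # imported here so the helper above works without AWS installed
    ec2 = boto3.client("ec2", region_name=region)
    return extract_image_ids(ec2.get_paginator("describe_instances").paginate())
```

From there you could describe_images on the result and diff it against your approved base-AMI list for the audit evidence.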
|
# ¿ Mar 14, 2018 16:10 |
|
Has anyone used Lambda to parse SNS events? I've been trying to parse an AWS Config rule's SNS events and use the data to do various things. I'm able to parse the data up to a point, but I can't seem to get at the message JSON data that I want. I'm using Python 2.7 with the json and boto3 imports. I think my issue is the JSON is being read as a single key. Anyone know how to do this? code:
output: code:
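For what it's worth, the usual gotcha here is that the SNS "Message" field is delivered as a JSON *string*, so it needs its own second json.loads pass - a sketch (shown in Python 3 syntax, but json.loads behaves the same on 2.7; the keys below Message are illustrative of Config's notification shape):

```python
import json

def lambda_handler(event, context):
    record = event["Records"][0]["Sns"]
    # The outer event is already a dict, but Message is a JSON string:
    message = json.loads(record["Message"])
    # Now the Config payload is a real dict; e.g. pull the resource out of a
    # configuration-item notification (key names are assumptions for this sketch):
    detail = message.get("configurationItem", {})
    return detail.get("resourceId")
```

Without that second parse, the whole message body looks like one giant key, which matches the symptom described above.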
SnatchRabbit fucked around with this message at 23:30 on Mar 29, 2018 |
# ¿ Mar 29, 2018 23:13 |
|
Does anyone know if it's possible in CloudFormation to do a GetAtt on a resource that's already been created manually? Like, not something from another stack, just an ARN from, say, an SNS topic you turned on by hand in the console? Yeah, I could make a parameter and have the user enter it at runtime, but what's the fun in that? It doesn't look like this is possible, but figured I would ask.
|
# ¿ Apr 19, 2018 20:41 |
|
We currently have a docuwiki site up and running on EC2. It's essentially just a web server for various internal documentation. Some of the pages, however, contain iframe links to documents we have hosted on S3. The S3 policy just has those documents made public. I'd like to get this a bit more secure, but I'm trying to figure out the best solution. I thought about granting the instance a role with S3 privileges, but I think the iframe links for the documents would technically be requested by the end user on the web, so giving the EC2 instance an S3 role wouldn't solve that problem, would it? I've also thought about having the docs stored locally on the instance and having it do an s3 sync periodically, but that seems kind of overwrought. Is there a simpler solution I'm overlooking? edit: would this be something I could set up with CORS configuration or presigned urls?
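Presigned URLs would likely be the lighter-weight route: the wiki's server-side code (which can use the instance role) signs each document link before rendering the page, so the bucket stays private but the browser can still fetch the iframe content. A sketch - bucket/key names are illustrative:

```python
# Hedged sketch: mint a short-lived GET URL for a private S3 object.
def presigned_doc_url(bucket, key, expires=900, s3=None):
    if s3 is None:
        import boto3  # lazy import; any credentials with read access will do
        s3 = boto3.client("s3")
    # Anyone holding the URL can GET the object until it expires,
    # so the bucket itself stays private.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
```

CORS by itself wouldn't help here - it controls which origins a browser will let read a response, not who is allowed to fetch the object in the first place.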
SnatchRabbit fucked around with this message at 21:27 on May 9, 2018 |
# ¿ May 9, 2018 20:26 |
|
Thanks Ants posted:Have a read of https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html So, I thought about using cloudfront but I don't know if management will go for the added cost given that this is just an internal wiki/documentation site. I was thinking about adding CORS to the S3 bucket, and allowing GET method from the instance's domain. Would that work while keeping the S3 bucket private, or invalidate the entire purpose of having the bucket private in the first place?
|
# ¿ May 9, 2018 22:01 |
|
Does anyone know if it is possible to get the output from Systems Manager's Run Command all in a single file? I have a bunch of instances running a script, but the outputs are all separated in S3 according to the commands and the instance IDs. I'd prefer to have all the outputs appended to a single file. Anyone know how?
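There's no built-in "one file" option that I know of, but since the outputs all land under one prefix per command, a small script can stitch them together - a sketch, assuming the output bucket/prefix names are passed in:

```python
# Hedged sketch: concatenate every Run Command output object under a prefix,
# with a header per object so the per-instance outputs stay distinguishable.
def format_sections(sections):
    """sections: iterable of (s3_key, text) -> one concatenated report."""
    return "\n".join("===== %s =====\n%s" % (key, text) for key, text in sections)

def merge_run_command_output(bucket, prefix, dest="combined-output.txt"):
    import boto3  # lazy import so format_sections works without AWS installed
    s3 = boto3.client("s3")
    sections = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            sections.append((obj["Key"], body.decode("utf-8")))
    with open(dest, "w") as out:
        out.write(format_sections(sections))
    return dest
```

The object keys already encode command ID and instance ID, so keeping them in the headers preserves which instance produced what.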
|
# ¿ May 15, 2018 22:55 |
|
Does anyone have any links, tips, or tricks for managing patches in Systems Manager? We have a bunch of environments running Oracle apps on RHEL, so I'm just throwing out the bat signal for anything people have found that works.
|
# ¿ Sep 21, 2018 21:49 |
|
I want to use IAM to control my users' access to Session Manager and restrict access to only certain instances in my AWS account. I found the example policies here which should give me most of what I need. I'd like this to be completely automated and expire session manager access after, say 24 hours, or whatever. What I'm thinking is using lambda to create the policy, and attach it to a user which is simple enough. The tricky part is going to be detaching/deleting the policy after the expiration period. I don't think I can use a single lambda to do everything since the timeout is like 5 minutes. I guess I could use that same lambda to invoke another lambda but I feel like that's an overwrought solution. Is there a way to either set a policy with an expiration, or some other way to achieve this that I'm not thinking of?
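One alternative that avoids the clean-up Lambda entirely: bake the expiry into the policy itself with a DateLessThan condition on aws:CurrentTime, so the attachment simply stops granting access after the deadline. A sketch - the resource ARN and helper name are illustrative:

```python
# Hedged sketch: build a Session Manager policy document that self-expires.
# A scheduled job can still garbage-collect the stale policy objects later,
# but access cuts off at the deadline either way.
import datetime

def session_manager_policy(instance_arn, hours=24):
    expires = (datetime.datetime.utcnow() +
               datetime.timedelta(hours=hours)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": instance_arn,
            "Condition": {"DateLessThan": {"aws:CurrentTime": expires}},
        }],
    }
```

json.dumps the result into iam.create_policy and attach as usual; the detach step becomes housekeeping rather than a security deadline.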
|
# ¿ Oct 29, 2018 15:51 |
|
Athena sounds like it might be a good fit, but alternatively you could run a managed Postgres database in RDS and query that with, say, Lambda using Python, although the timeout on Lambda is five minutes, so you might need to break up the operations you're doing. Lambda might be a nice fit because, assuming the queries run in a reasonable timeframe, you could write the results directly to S3 or DynamoDB using the boto3 library in Python.
SnatchRabbit fucked around with this message at 00:11 on Nov 2, 2018 |
# ¿ Nov 2, 2018 00:09 |
|
Does anyone have a recommendation for a good tutorial for AppSync? I'm trying to put together a management tool for our clients to execute simple tasks on their Oracle enterprise environments. The idea is to have a simple webpage that displays the CloudFormation stacks associated with a given environment and then buttons to, say, reboot all the EC2 instances for said environment, refresh a database, launch a new environment, etc. I realize all this can be done in the AWS console but we'd like to simplify it for the client. At first I was messing around with CSS, API Gateway and Lambda functions, but I kind of suck at javascript programming so I was peeking around at AppSync. Wondering if there's a good overview video and/or tutorial to see if it will do what I need with the least amount of friction.
|
# ¿ Dec 28, 2018 20:29 |
|
I'm writing a management web page for some AWS resources and it's pretty daunting. I'm basically having to rewrite portions of the AWS console so that clients can mash buttons to interact with the environments we've built. What I'm currently doing is sending commands and pulling data using API Gateway and Lambda, then displaying it on the webpage. It's a ton of work to write all the buttons just to get a stripped down version of Amazon's web GUI, so I'm wondering if I'm going about this all wrong. Is there a simple way to use, say, cloudwatch dashboards or something and pipe that over to another webpage somehow? I know you can make widgets to check on EC2 stats and such, but it seems like you can only pull data out. Anyone tried something like this? edit: to be clear, manipulating the aws resources isn't the hard part. It's getting status information back that's really proving to be a pain. I'm having to do multiple describe_instances and describe_instance_status calls and loop through everything to get information about the status of whatever it is I executed with the buttons. edit2: I guess I could try to pull the events from the stacks in cloudformation as well, but that might be as much of a pain. We'll also be doing a lot of orchestration through codedeploy so I might be able to get something out of there.... SnatchRabbit fucked around with this message at 23:06 on Jan 16, 2019 |
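One way to tame the describe_* churn is to collapse the paginated responses into a single status map the page renders in one go - a sketch over the describe_instances response shape (the summary fields chosen are just an example):

```python
# Hedged sketch: flatten describe_instances pages into {instance_id: status}.
def summarize_instances(pages):
    summary = {}
    for page in pages:
        for reservation in page.get("Reservations", []):
            for inst in reservation.get("Instances", []):
                summary[inst["InstanceId"]] = {
                    "state": inst["State"]["Name"],
                    "type": inst["InstanceType"],
                    # grab the Name tag if present, for display
                    "name": next((t["Value"] for t in inst.get("Tags", [])
                                  if t["Key"] == "Name"), ""),
                }
    return summary
```

Feed it `boto3.client("ec2").get_paginator("describe_instances").paginate(Filters=...)` and the front end only ever sees one dict per refresh instead of N API calls' worth of nesting.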
# ¿ Jan 16, 2019 22:51 |
|
I have put together a cloudformation template for a job interview. Basically they want a simple web app to return the current datetime. I'm thinking of having an html page hosted in s3 with some javascript to hit api gateway, which will then hit a lambda to return the date. Maybe I throw in a Rt53 entry. I want this to be as push-button as possible, but how do I get the html page into the cloudformation? Is there a way to code it in, or reference it from a git repo or something? Would I need to use codecommit/codedeploy?
|
# ¿ Jan 25, 2019 05:29 |
|
Arzakon posted:There isn't a great way to get an object into S3 from within CFN. One option would be to use a Lambda Custom Resource to drop the object in the S3 Bucket created in the CFN template. You essentially have to create another Lambda Function in the template, create the custom resource, which fires the Lambda Function to perform the put-object. If you are trying to look PRODUCTION READY you need to handle what the Custom Resource does on UPDATE (replace the file?), DELETE (delete the file, important for deleting the bucket). The custom resource code is probably more than all your other code but it's what you do when you want to make AWS API calls that CFN can't do for you. If you can do it in 4096 characters you can put it inline in the CFN template, otherwise you have to stage it in S3. Thanks, that's pretty much the conclusion I came to myself after a while. Writing the lambda to put the html into s3 wasn't all that bad, I just have to write the custom resource now. I'll have to manage the cleanup a bit and empty the bucket, but I think it'll be pretty slick if I get it working properly.
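Roughly what that custom-resource Lambda ends up looking like - a sketch that speaks the custom-resource protocol directly rather than via the cfnresponse helper; the property names (Bucket, Html) are assumptions for this example:

```python
# Hedged sketch of a Custom Resource handler: put index.html on Create/Update,
# delete it on Delete, and always answer CloudFormation at the ResponseURL.
import json
import urllib.request

def build_cfn_response(event, status, reason=""):
    """Assemble the JSON body CloudFormation expects at the ResponseURL."""
    return {
        "Status": status,                     # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId", "index-html-object"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }

def handler(event, context):
    import boto3  # lazy import so the helper above is testable anywhere
    s3 = boto3.client("s3")
    props = event["ResourceProperties"]
    if event["RequestType"] in ("Create", "Update"):
        s3.put_object(Bucket=props["Bucket"], Key="index.html",
                      Body=props["Html"], ContentType="text/html")
    else:  # Delete: remove the object so the bucket itself can be deleted
        s3.delete_object(Bucket=props["Bucket"], Key="index.html")
    body = json.dumps(build_cfn_response(event, "SUCCESS")).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT",
                                 headers={"Content-Type": ""})
    urllib.request.urlopen(req)
```

The signed PUT back to ResponseURL is the part people forget - miss it and the stack hangs until CloudFormation times out.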
|
# ¿ Jan 26, 2019 07:55 |
|
RVWinkle posted:I'm glad you brought this up because it's something I have been thinking about. I'm hoping that in AWS::IAM::Policy I can just use something like Resource: !Ref S3Bucket. Yup that's exactly what I did. I had the bucket set to public read but I might remove that since I have my lambda using extraargs to set the index.html to public when it uploads the file, so I don't think I really need a bucket policy, right?
|
# ¿ Jan 27, 2019 01:03 |
|
Can someone answer a simple S3 encryption question for me? If I set default bucket encryption to S3 using AES256, that will encrypt objects in the bucket at rest, correct? Now what about in transit? Currently, I have an off-site QRadar server which I have configured to ingest Cloudtrail and GuardDuty logs with log sources. These log sources each have their own IAM user with a policy that allows them to access the S3 bucket. Cloudtrail encrypts objects with a KMS key in addition to the default bucket encryption. The Cloudtrail QRadar IAM user has access to this KMS key as well as the bucket and can fetch the logs no problem via Access Key and Secret Access Key using the Amazon AWS S3 REST API protocol. GuardDuty only has the bucket-level encryption, so its IAM user policy only has access to the encrypted bucket. Now, my question is: will either of these scenarios encrypt the data in transit to the off-site QRadar? In either case, is there a relevant AWS Docs page explaining why or why not?
|
# ¿ Jul 2, 2019 18:48 |
|
JHVH-1 posted:You would use SSL/HTTPS so it's encrypted in transit. You can enforce it by adding a deny to the policy with a condition "aws:SecureTransport": "false" Awesome, thanks!
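For reference, a sketch of the deny statement JHVH-1 describes, built as a Python dict ready for json.dumps into put_bucket_policy - the bucket name is illustrative:

```python
# Hedged sketch: bucket policy that denies any request not made over TLS,
# so plain-HTTP fetches of the logs fail outright.
def secure_transport_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::%s" % bucket,
                         "arn:aws:s3:::%s/*" % bucket],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }
```

Note the two Resource ARNs: one for bucket-level operations, one for the objects themselves - a deny on only one of them leaves a gap.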
|
# ¿ Jul 2, 2019 19:27 |
|
I think this might be what you are looking for: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EC2_Run_Command.html quote:You can use Amazon CloudWatch Events to invoke AWS Systems Manager Run Command and perform actions on Amazon EC2 instances when certain events happen. In this tutorial, set up Run Command to run shell commands and configure each new instance that is launched in an Amazon EC2 Auto Scaling group. This tutorial assumes that you have already assigned a tag to the Amazon EC2 Auto Scaling group, with environment as the key and production as the value.
|
# ¿ Jul 3, 2019 20:09 |
|
Does anyone have experience with using Private API Gateways? I have a client that has their datacenter's public IPs hitting an API Gateway, which they've protected with AWS WAF, a list of whitelisted IPs, and Cognito for authentication. They're asking for a Private API Gateway that's built "the right way", but their setup doesn't seem to be "private" by definition in AWS terms. I'm struggling to see how setting the APIGW to private would have much of a benefit. They already have a ton of VPN connections to the datacenters and we're currently troubleshooting latency issues on an unrelated project, so my gut feeling is that routing all the API calls through those VPNs, which are already causing issues, is a recipe for pulling my hair out. I guess my question is this: is there a relatively straightforward way to route the datacenter traffic through the VPNs and into the APIGW while keeping it relatively secure? They're not doing any VPC peering and their traffic is already a mess, so I have my doubts.
|
# ¿ Jul 15, 2019 18:56 |
|
Twlight posted:to use a private aws api gateway, you need to set up a vpc endpoint in accounts you wish to use it in. I believe that is the only way to access private APIs. I *think* with the resource policy within the API you can whitelist IPs, but I'm not sure if that would allow you to jump to it. Right, that's my understanding as well. So in that VPC with the VPC endpoint, would we need an EC2 instance to pass the information from the VPN to the APIGW?
|
# ¿ Jul 15, 2019 22:29 |
|
Twlight posted:I believe so, this of course makes the entire setup less than ideal with that ec2 being the weak link in the chain. We use private APIGWs to provide a metadata service for customers within our different accounts, interesting data like proxy information. The other rub with private APIs is they don't accept any sort of CNAME so you're stuck with the long aws name provided. Gotcha, that's kind of where I'm headed: get them to give me a good explanation of why this is even necessary given the amount of effort it would take and the complexity of their network. Thanks!
|
# ¿ Jul 16, 2019 17:57 |
|
Does anyone know how to set up SNS notifications for Config on a per-rule basis? I know that I can remediate rules with the PublishSNSTopic SSM document, but the remediation seems like a manual step unless I'm missing something. Basically I need some way to either query the config rule on a schedule and then publish to SNS, or kick off the remediation on a schedule. edit: maybe a python lambda for start_remediation_execution() on the particular rule running on a cloudwatch schedule? Or am I overthinking? SnatchRabbit fucked around with this message at 19:26 on Aug 21, 2019 |
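The edit's idea is a reasonable pattern, not overthinking - a scheduled Lambda pulls the rule's noncompliant resources and kicks off the remediation (which can be the PublishSNSTopic document). A sketch; the rule name is hypothetical:

```python
# Hedged sketch of a scheduled remediation trigger for one Config rule.
def noncompliant_resource_keys(results):
    """Map get_compliance_details_by_config_rule results to ResourceKeys."""
    keys = []
    for r in results:
        q = r["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        keys.append({"resourceType": q["ResourceType"],
                     "resourceId": q["ResourceId"]})
    return keys

def lambda_handler(event, context):
    import boto3  # lazy import so the helper above is testable anywhere
    config = boto3.client("config")
    rule = "required-tags"  # hypothetical rule name
    details = config.get_compliance_details_by_config_rule(
        ConfigRuleName=rule, ComplianceTypes=["NON_COMPLIANT"])
    keys = noncompliant_resource_keys(details["EvaluationResults"])
    if keys:
        config.start_remediation_execution(ConfigRuleName=rule, ResourceKeys=keys)
    return {"remediated": len(keys)}
```

Pointed at by a CloudWatch Events schedule rule, this makes the "manual" remediation step fire itself.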
# ¿ Aug 21, 2019 19:23 |
|
Cross posting from the SQL thread: I have a client that wants to migrate two MSSQL database servers, with 200+ db objects between them, to AWS. Up until this point we've been fine using Database Migration Service to move the table data from their on-prem servers into AWS EC2 (RDS was ruled out due to cost). The problem is that DMS doesn't migrate indexes, users, privileges, stored procedures, and other database changes not directly related to table data. So now we have to generate scripts by hand for these 200+ objects, at minimum. What I'm asking is: is there some hacky way to automate this migration, or are we just stuck having to do it all by hand over and over again? Is there some option in DMS to make this happen?
|
# ¿ Sep 24, 2019 21:01 |
|
Agrikk posted:Always this. Thanks, this is for a client account. Would there be a TAM assigned even on a basic support plan? SnatchRabbit fucked around with this message at 23:46 on Sep 24, 2019 |
# ¿ Sep 24, 2019 23:38 |
|
Has anyone had any luck copying objects in bulk from an FTP server (or any server, really) into S3, ideally using the sync command but not required, keeping the source files' attributes (created/updated times, tags, etc.) and populating that data into the S3 objects' custom metadata? Really, my only requirement is that I get each source file's creation date/time on the FTP server and have that value stuck into a custom metadata tag on S3. This sounds like an easy thing; I'm just not seeing any obvious solution. I thought maybe something like S3Browser might have it built in, but I'm just not seeing it.
|
# ¿ Oct 5, 2020 14:13 |
|
deedee megadoodoo posted:I think you’ll have to write a custom script. Either upload the files one at a time and set the tag on each object or run a sync and then have a script set the date tag on every object in s3. Yeah, I think that's the conclusion I've come to. Just sketched out a basic bash script like so: code:
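A Python take on the same sketch, assuming the FTP tree has already been mirrored locally: stat each file and stamp its mtime into the object's custom metadata on upload (S3 surfaces user metadata as x-amz-meta-* headers; the metadata key name here is just an example):

```python
# Hedged sketch: upload a local tree to S3, carrying each file's mtime along
# as custom object metadata.
import os
import time

def source_metadata(path):
    """Build the custom metadata dict from the source file's timestamp."""
    st = os.stat(path)
    return {"source-mtime": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                          time.gmtime(st.st_mtime))}

def upload_tree(local_dir, bucket):
    import boto3  # lazy import so source_metadata works without AWS installed
    s3 = boto3.client("s3")
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, local_dir)
            s3.upload_file(path, bucket, key,
                           ExtraArgs={"Metadata": source_metadata(path)})
```

One caveat: most FTP servers only expose modification time, not true creation time, so mtime is usually the best you can carry over.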
|
# ¿ Oct 5, 2020 15:37 |
|
|
Has anyone taken the AWS Networking Specialty Exam? I've been working with AWS for almost 10 years and I can build VPCs, TGWs, routes in my sleep at this point. I'm sure there are some service level gotchas but for anyone that has passed the exam how did you find it? I aced the Security Specialty and it was mostly rehashed SA Pro questions...
|
# ¿ Jun 29, 2021 15:09 |