|
it has elastic beanstalk, which is kinda like heroku if it was made by the government. if you want that kind of experience you should stick with google imo
|
# ? Apr 3, 2017 14:16 |
|
|
the talent deficit posted:it has elastic beanstalk which is kinda like heroku if it was made by the government Oh man that's a great line. I can't wait to steal it.
|
# ? Apr 3, 2017 14:25 |
|
oliveoil posted:Does Amazon have anything like Google's App Engine? I don't want to know how to set up a system out of multiple components. I just want to write some code and then upload it and magically have a working application and never worry about virtual machines or how many instances of my code are running etc Lambda is probably what you want
|
# ? Apr 3, 2017 14:50 |
|
Wouldn't lambda be more similar to Google Functions (which is still beta)?
|
# ? Apr 3, 2017 15:03 |
|
cosmin posted:Wouldn't lambda be more similar to Google Functions (which is still beta)? Maybe, but given the requirements: quote:Does Amazon have anything like Google's App Engine? I don't want to know how to set up a system out of multiple components. I just want to write some code and then upload it and magically have a working application and never worry about virtual machines or how many instances of my code are running etc And the requirement for it to be AWS, I was thinking: static content on S3 with AJAX hitting Lambda is the closest match. It's pretty much their textbook mobile backend example on the Lambda website. Basically it's the fusion of IaaS and PaaS.
|
# ? Apr 3, 2017 15:21 |
For managing sprawl of EC2 instances, check out http://www.parkmycloud.com/ . The main purpose is scheduling instances to start/stop to save money, but it pulls all instances from all regions into one view so you can more easily see if something is still running in some region you rarely look at. RDS is getting added soon, and other services will come later, but it's great for keeping EC2 in check.
|
|
# ? Apr 3, 2017 16:49 |
|
if you were launching a node app today: ec2 instances, elastic beanstalk, or ecs?
|
# ? Apr 13, 2017 03:25 |
|
StabbinHobo posted:if you were launching a node app today: ec2 instances, elastic beanstalk, or ecs? You forgot Lambda! Is it a production node app >>> ec2 Is it a dev node app >>>> ecs maybe There are still some HIPAA and PCI issues with ECS, I think. Since someone referred to Beanstalk as government-built, that alone makes me not want to recommend it.
|
# ? Apr 13, 2017 04:01 |
|
Beanstalk is just CloudFormation and a bunch of shell scripts. It's not terrible if you just want to deploy an app. We'll need a little more info about what you're trying to do to be able to give you a good recommendation. Will you need to scale? What sort of volume of traffic are you looking at? Are there any persistence requirements or is the app stateless? How much time/money/energy do you have to throw at solving this?
|
# ? Apr 13, 2017 12:51 |
|
Virigoth posted:You forgot Lambda! "You can use services such as AWS Lambda, AWS OpsWorks, and Amazon EC2 Container Service (Amazon ECS) to orchestrate and schedule EC2 instances as long as the actual PHI is processed on EC2 and stored in S3 (or other eligible services). You must still ensure that EC2 instances processing, storing, or transmitting PHI are launched in dedicated tenancy and that PHI is encrypted at rest and in transit. Any application metadata stored in Lambda functions, Chef scripts, or task metadata must not contain PHI." Tl;dr don't put PHI in your task definition and you're a-okay.
|
# ? Apr 13, 2017 15:03 |
|
Lambda functions can be annoying to call over HTTP, you have to set up an API Gateway instance to make it possible and then if you want to put that behind CloudFront there's even more weird tweaks to get it to work. I wouldn't recommend them for HTTP consumers until they sort that out but they seem okay for responding to SNS/SQS/etc type events.
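For the SNS/SQS-driven case the post mentions, a Lambda handler is just a function that receives the event payload, so there's no API Gateway involved at all. A minimal sketch (event shape abbreviated, function name mine):

```python
# Hypothetical Lambda handler for SNS-triggered invocations.
# SNS delivers one or more records, each with the payload under Sns.Message.

def handler(event, context):
    """Collect the message body from every SNS record in the event."""
    messages = []
    for record in event.get("Records", []):
        messages.append(record["Sns"]["Message"])
    # Whatever you return is only visible to the invoker (e.g. in logs),
    # since SNS invokes Lambda asynchronously.
    return {"processed": len(messages), "messages": messages}
```

You can exercise this locally by passing it a hand-built event dict, which is one of the nicer properties of event-driven Lambdas compared to the HTTP path.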
|
# ? Apr 16, 2017 13:29 |
|
Is anyone using the Storage Gateway product in production? It seems great on paper but my early experiences are very mixed and I'm wondering if I should stick with it or just give up already.
|
# ? Apr 19, 2017 18:52 |
|
Any thoughts on CodeStar? https://aws.amazon.com/blogs/aws/new-aws-codestar/ Seems like the price, and the fact that everything is confusing and I feel like I need to know how all the different products work in order to know which products I need and how to configure and deploy to each one to make a basic CRUD app, are all that keep me from tinkering with AWS. Seems like this would help with that? I write code and push buttons and magically have all the pieces needed for a web app set up for me?
|
# ? Apr 19, 2017 22:41 |
|
oliveoil posted:Any thoughts on CodeStar? https://aws.amazon.com/blogs/aws/new-aws-codestar/ It looks pretty cool - you could manage a lot of those pieces with CloudFormation templates, or Terraform, or any number of other solutions. But for a lone developer or small shop/department, this sounds like a great first step into bringing all of that stuff under one roof. At the very least, it creates most of the infrastructure you'd use in a modern dev stack and gives you a nice dashboard for a bird's eye view. And it will likely continue being developed, so it should get better. I'm going to try it out for my next project!
|
# ? Apr 19, 2017 23:23 |
|
Quick question here. I'm trying to use EBS to set up a Moodle platform. I've set up the environment, uploaded the moodle zip package which deploys correctly. I run through the web installation, connect to the db, but I get stuck at the pre-requisite checks with the following error:code:
|
# ? Apr 20, 2017 21:13 |
|
SnatchRabbit posted:Quick question here. I'm trying to use EBS to set up a Moodle platform. I've set up the environment, uploaded the moodle zip package which deploys correctly. I run through the web installation, connect to the db, but I get stuck at the pre-requisite checks with the following error: When you spun up the EC2 instance, it should have asked you to either generate or specify a key, depending on if you've done so before. Assuming everything was on default and you chose to give it a public IP, you should be able to ssh into it with the aforementioned key. Also check the security group(s) associated with the instance to make sure they are allowing port 22 either to the world, or your IP address. I'm not sure where EBS comes in here - were you given a disk image to use somehow? EBS is Amazon's "hard drive in the cloud" offering, so it shouldn't have much to do with Moodle. But I don't know Moodle at all.
|
# ? Apr 20, 2017 22:42 |
|
xpander posted:I'm not sure where EBS comes in here - were you given a disk image to use somehow? EBS is Amazon's "hard drive in the cloud" offering, so it shouldn't have much to do with Moodle. But I don't know Moodle at all. Sorry, I'm referring to Elastic Beanstalk. Essentially, Moodle is just a php application that I can download in zip form and elastic beanstalk will accept it. The trouble is getting the environment I set up to play nice with moodle vis a vis php extensions.
|
# ? Apr 20, 2017 22:55 |
|
SnatchRabbit posted:Sorry, I'm referring to Elastic Beanstalk. Essentially, Moodle is just a php application that I can download in zip form and elastic beanstalk will accept it. The trouble is getting the environment I set up to play nice with moodle vis a vis php extensions. My bad, I don't encounter Beanstalk that much so I don't equate it to the acronym. Looks like it's possible to install PHP extensions with an EB command: http://stackoverflow.com/questions/38730483/how-to-install-a-php-extension-witn-amazon-aws-elastic-beanstalk More general info on using EB configuration: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP.container.html
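For reference, the usual shape of that Stack Overflow answer is an `.ebextensions` config file inside the zip that installs packages via yum when the environment deploys. The package names below are examples for the 2017-era Amazon Linux PHP platform and may differ on yours:

```yaml
# .ebextensions/php-extensions.config  (filename is arbitrary)
packages:
  yum:
    php70-mbstring: []   # example extension packages; check what's
    php70-intl: []       # actually available with `yum search php70`
```

Drop the file in an `.ebextensions/` directory at the root of the zip you upload, and Beanstalk applies it on every deploy.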
|
# ? Apr 20, 2017 23:35 |
|
Our AMI bake times have ballooned to 40-50 minutes, and I really want to dive in and debug why that's happening. I've tried reaching out to my coworker that's in charge of the pipeline/AWS stuff, but he's unhelpful and reluctant to walk me through the process, and what we're doing and why. I want to just bypass him and do some of my own digging to figure out how to reduce the amount of time it takes to bake. What're the common reasons why baking an AMI might take so long? Something about the files involved to do so? Is there a way to debug/step through the process? I never got an answer re: why we're baking an AMI for each new commit to a branch, besides "that's the commonly accepted pattern". I get that it's technically correct, but it's also bullshit slow, and I question whether or not it's worth it given that we commit early and often and therefore deploy to ticket-specific servers early and often, and this time fuckin' adds up man. We're behind schedule as-is and this process is making it so much worse.
|
# ? Apr 26, 2017 17:32 |
|
Pollyanna posted:Our AMI bake times have ballooned to 40-50 minutes, and I really want to dive and debug why that's happening. I've tried reaching out to my coworker that's in charge of the pipeline/AWS stuff, but he's unhelpful and reluctant to walk me through the process, and what we're doing and why. I want to just bypass him and do some of my own digging to figure out how to reduce the amount of time it takes to bake. What're the common reasons why baking an AMI might take so long? Something about the files involved to do so? Is there a way to debug/step through the process? If you go into the dashboard you can see the actual process of volumes being snapshotted to create the AMI. That is usually what takes the time up. Volume snapshots are supposed to be deltas, so if you snapshot the same volume it shouldn't take as long after repeated imaging assuming you are doing it from the same instance each time. If a new instance is spun up each time and you keep adding more data then that would add to the time. Oh also the type of volumes you are attaching could impact it as well, so if you aren't using EBS backed instances it would take longer.
|
# ? Apr 26, 2017 19:37 |
|
It seems like most of the time spent during the Bake AMI step is when running this command:code:
code:
I know literally nothing about AWS, so maybe I'm missing something... Edit: How big are Docker images supposed to be, generally? Ours ends up at like 315 MB or so. Is that normal? Pollyanna fucked around with this message at 20:06 on Apr 26, 2017 |
# ? Apr 26, 2017 20:02 |
|
Huh, so creating an AMI is more complicated than taking a tarball with a root fs in it, adding a dir with your app to it and uploading that to S3?
|
# ? Apr 26, 2017 20:16 |
|
Vanadium posted:Huh, so creating an AMI is more complicated than taking a tarball with a root fs in it, adding a dir with your app to it and uploading that to S3? Don't ask me. I'm just trying to debug our deployment pipeline built by our ops team (who recently quit en masse) cause it's slow as gently caress.
|
# ? Apr 26, 2017 20:20 |
|
So the AMI baking before the Docker container is deployed is what takes a long time? What sort of provisioning tools are you using to provision the AMI? If Chef, you should look at the recipes being run and see if you can figure out where it's spending most of its time.
|
# ? Apr 27, 2017 04:31 |
|
Is something running a yum update or equivalent on an increasingly out of date base AMI?
|
# ? Apr 27, 2017 06:19 |
|
oliveoil posted:Any thoughts on CodeStar? https://aws.amazon.com/blogs/aws/new-aws-codestar/
|
# ? Apr 27, 2017 06:55 |
|
Does anybody on here log in to their AWS console through SAML? I'm looking to sort out our sprawl of independent accounts as the company grows. I have AWS linked to G Suite so everybody picks a role when they log in based on strings stored in the directory schema, and this works well. However, a legit issue that has been raised relates to generating access keys for services - since SAML just grants access rather than actually creating an account, there's no user object to add access keys to. If I make a SAML role that enables people to create users that they can then add access keys to, it defeats the purpose of using SAML in the first place, since there's extra workload created to audit these accounts and the permissions attached to them. Is this a thing that anyone has solved, or are people just using something like Spinnaker / their own internal tools which use internal directory details, and not letting people touch the AWS console? Thanks Ants fucked around with this message at 21:40 on Apr 30, 2017 |
# ? Apr 30, 2017 21:37 |
|
Does anyone know anything about kubernetes, specifically kube-aws? I'm trying to set up a pipeline to go from a Jenkinsfile/Dockerfile on github, to Jenkins, to AWS, but I'm getting hung up by the fact that despite following the tutorial, I get this when I run kube-aws validate (with the proper s3 URI): code:
code:
|
# ? May 1, 2017 02:19 |
|
That just sounds like your kubeconfig doesn't properly set credentials and other stuff like tls in order to use the cluster API
|
# ? May 1, 2017 03:12 |
|
Thanks Ants posted:Does anybody on here login to their AWS console through SAML? I'm looking to sort out our sprawl of independent accounts as the company grows. I have AWS linked to G Suite so everybody pick a role when they log in based on strings stored in the directory schema, and this works well. However, a legit issue that has been raised relates to generating access keys for services - since SAML just grants access rather than actually creating an account, there's no user object to add access keys to. If I make a SAML role that enables people to create users that they can then add access keys to it defeats the purpose of using SAML in the first place, since there's extra workload created to audit these accounts and the permissions attached to them. Don't generate access keys for services at all. Instead, use IAM roles and instance profiles and the aws-sdk to authenticate your service on-instance.
|
# ? May 1, 2017 04:26 |
|
Thanks. I've had a look at the documentation for instance profiles and that seems to be applicable for accessing AWS resources from EC2. The dev team are using access keys for things like connecting their desktop applications to the service - sorry I have to be light on details, I need to have a catch-up with the team lead to figure out what they are actually doing. Is the answer here to just use tools that can authenticate using a SAML workflow? I've asked them to put me in touch with our account manager as well to see if we can get on a chat and figure something out, or at least get a clearer idea of what we're trying to do. Thanks Ants fucked around with this message at 11:09 on May 2, 2017 |
# ? May 2, 2017 11:00 |
|
Ok, I understand. We've implemented this but I don't know what was done on the SAML side to get it working. Basically you use whatever SSO provider you're using to authenticate to AWS via IAM::AssumeRoleWithSAML and then have some sort of service running locally that rotates your credentials accordingly. The project is open source but I really don't want to doxx myself so if you're interested please PM me.
|
# ? May 2, 2017 11:55 |
This is technically a Google Compute Engine question, but I think the answer would also apply for AWS and I can't find a GCE thread. I have a Linux instance that I use for: 1) Small personal projects 2) Hosting example REST services for my students to hit when working on their assignments I needed a GUI for something I was working on, so I installed Gnome and I access it using VNC Server. This works reasonably well, but after a while when I attempt to sign in using a VNC viewer, I get back the message "ERROR: Too many security failures". I can get back up and running by SSHing into the instance, killing my VNC Server process, and restarting. I'd rather not need to do that, though, and I'm not sure what best practices are. Is there a way to restrict VNC logins to just known IPs? Did I configure something wrong? I also noticed that this problem started after I set up my students' sample REST app. I temporarily exposed 8080 to get that working, but I closed it thereafter. Is it possible I did something wrong there? Any guidance would be appreciated!
|
|
# ? May 3, 2017 10:32 |
|
EC2: security groups are your friend. Trivial to limit IPs it can be accessed from. If those aren't available, iptables/firewalld. No experience with GCE, so I dunno. On AWS I always set up rngd (for more available randomness) and fail2ban with a SSH ban rule, at the very least. There's probably some way to do fail2ban with VNC, but to be honest you should not be running VNC unencrypted on a standard port (people are gonna portscan the ever-loving poo poo out of it), and instead consider making it only accessible from localhost and requiring a SSH tunnel. If that isn't an option...dunno.
|
# ? May 3, 2017 10:51 |
|
I'm having a really hard time with DynamoDB + throttling. I'm continuously updating a couple thousand rows, with a couple dozen distinct partition keys. This works fine most of the time, but I get sporadically throttled bad enough that my writes back up and I need to drop updates. I'm not sure why that happens and I'd like it to stop. Is this just a capacity thing? According to the write capacity graph in the DynamoDB console, I have over twice as much provisioned capacity as I'm using when averaged over a minute, though I tend to spend five seconds writing a lot and then like twenty seconds not writing anything, so in those brief few seconds I consume more write capacity units than I have provisioned. So I use x capacity units averaged over a minute, I have x*2.5 provisioned, the actual usage pattern is a burst of consuming x*7 - x*8 over a few seconds, and then do nothing for the rest of the time. During the update cycles where I get throttled, I end up consuming like x*1.5 capacity units per second during the burst period, no change in usage averaged over a minute unless it's bad enough that I end up dropping writes. My assumption was that the provisioned capacity isn't literally "per second", so I can unevenly consume it across a short timespan like a minute or five, is that wrong? Even if it is I don't get my provisioned per-second capacity during the throttled periods so I'm really confused what I'm actually provisioning. (Edit: I think http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Bursting is what I was thinking of.) Maybe it's just my partitions being very uneven, so the capacity is partially provisioned to partitions I'm not actually using very much? It's a small set of partition keys so I wouldn't be very surprised. 
On the other hand they never change and the periods in which I'm getting throttled always seem to be fairly short, with much longer periods (hours, maybe days) of things working smoothly, so I dunno what is changing. Is there a way to actually look at how many partitions exist and how load is distributed among them, or are they just an explanatory device and not how things actually work? But according to the illustrative numbers in the DynamoDB docs, my data is way too small and my writes are way too few on average to even get close to filling up a single partition, so I don't have much hope in this direction. The throttling doesn't seem to coincide with, like, peak daily traffic or anything either. I'm just super mystified and basically resigned to magical thinking at this point. So far I've been looking at the CloudWatch metrics for provisioned capacity, capacity usage and write throttle events, is there anything else I should be looking at to figure out what I'm doing wrong? Vanadium fucked around with this message at 13:48 on Jun 19, 2017 |
# ? Jun 19, 2017 13:30 |
|
your capacity is measured per-second, but you can burst up to 5 minutes' worth of capacity if you've banked it. the analogy is a bucket, where the bucket is as big as 5 minutes worth of capacity, but is only refilled at your per-second capacity. every request pulls some stuff out of the bucket, and if the bucket is empty your request is throttled.
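the bucket analogy can be sketched as a toy token bucket (numbers and class name are mine, this is a model, not an actual DynamoDB API):

```python
class TokenBucket:
    """Toy model of DynamoDB burst capacity: a bucket holding up to
    5 minutes of provisioned write capacity, refilled per second."""

    def __init__(self, capacity_per_sec):
        self.rate = capacity_per_sec
        self.max_tokens = capacity_per_sec * 300  # 5 minutes of burst
        self.tokens = self.max_tokens             # start with a full bank

    def tick(self, seconds=1):
        # Refill at the provisioned rate, never beyond the bucket size.
        self.tokens = min(self.max_tokens, self.tokens + self.rate * seconds)

    def consume(self, units):
        # True if the write goes through, False if it would be throttled.
        if units <= self.tokens:
            self.tokens -= units
            return True
        return False
```

so a bursty writer that stays under the provisioned rate on average keeps the bucket topped up, but a burst bigger than the banked balance gets throttled even though the minute-averaged graph looks fine.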
|
# ? Jun 19, 2017 16:19 |
|
most dynamo clients should be able to return the consumed capacity per call (via ReturnConsumedCapacity), so you just need to throttle on your end
|
# ? Jun 19, 2017 17:11 |
|
If banking unused capacity happens on a 1:1 basis, I shouldn't have any issues. Averaged over 30 seconds I'm consistently under the provisioned capacity. Getting throttled at that point makes sense to me if the banked capacity is only provided on a best-effort basis, if the underlying hardware has capacity to spare or whatever. What I can't figure out is why I seem to get throttled below my provisioned capacity on a per-second basis, but maybe I'm actually measuring that wrong and averaging too much there. I hadn't realized I can get numbers for my consumed capacity per call, that sounds a lot more useful than the aggregate metrics I've looked at. I'm not sure what I get out of throttling myself, though--does getting throttled on the AWS side punish me by consuming even more capacity? Right now I just back off exponentially whenever at least one write in my batch gets throttled, but with how quickly a little bit of capacity refills that doesn't really do much, maybe I need to be more aggressive about backing off. Vanadium fucked around with this message at 22:50 on Jun 19, 2017 |
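On the backing-off part: the usual recommendation (and roughly what the AWS SDKs' built-in retry does) is exponential backoff with full jitter, i.e. sleep a random amount up to the exponentially growing ceiling rather than exactly the ceiling, so a batch of throttled writers doesn't retry in lockstep. A sketch with made-up default numbers:

```python
import random

def backoff_delay(attempt, base=0.05, cap=20.0):
    """Full-jitter exponential backoff: sleep a uniformly random
    duration in [0, min(cap, base * 2**attempt)] seconds.
    base and cap here are illustrative, tune them to your workload."""
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)
```

In a write loop you'd call `time.sleep(backoff_delay(attempt))` after each throttled batch and reset `attempt` to 0 on success; the cap keeps the worst-case pause bounded.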
# ? Jun 19, 2017 22:40 |
|
Do you have indexes that might be consuming "extra" write capacity?
|
# ? Jun 19, 2017 22:53 |
|
|
Nah, I have one local secondary index and I'm pretty sure it gets included in the ConsumedCapacity CapacityUnits total.
|
# ? Jun 19, 2017 23:22 |