|
Pollyanna posted:I have a question about EBS and baking AMIs. We're currently baking a new AMI for every new version of our app we want to deploy, and I'm wondering if there's a way around that? It takes 15~20 minutes to bake one, which means that every commit I push to Bitbucket takes half an hour to show up on its staging server. I have to debug some pipeline related poo poo and waiting that long to run into yet another bug is driving me crazy. What can I do to mitigate this? spinnaker or something like it can help here by building the ami state on a locally attached ebs volume instead of spinning up a new instance and building from scratch each time, but i think most organizations create amis that include everything but the application code/configuration and then use a minimal deploy tool + launch configurations
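roughly what the "bake everything but the app" approach looks like: the base ami has the os, runtime, and a service unit baked in, and user data pulls the latest build at boot. all the bucket/key/ami names here are made up, and this is a sketch, not a drop-in deploy tool:

```python
def build_user_data(artifact_bucket: str, artifact_key: str) -> str:
    """render a cloud-init user data script that fetches and starts the app."""
    return "\n".join([
        "#!/bin/bash",
        f"aws s3 cp s3://{artifact_bucket}/{artifact_key} /opt/app/app.tar.gz",
        "tar -xzf /opt/app/app.tar.gz -C /opt/app",
        "systemctl restart app",  # assumes a unit file already baked into the base ami
    ])

def launch(ami_id: str, artifact_bucket: str, artifact_key: str):
    """launch one instance from the pre-baked base ami with the deploy script."""
    import boto3  # imported here so the pure helper above runs anywhere
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=build_user_data(artifact_bucket, artifact_key),
    )
```

since only user data changes per deploy, pushing a commit means uploading a tarball and cycling instances instead of a 15-minute bake.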
|
# ¿ Mar 10, 2017 03:04 |
|
i use this to track what exists where: https://github.com/Netflix/edda/wiki
|
# ¿ Apr 3, 2017 03:56 |
|
it has elastic beanstalk, which is kinda like heroku if it was made by the government. if you want that kind of experience you should stick with google imo
|
# ¿ Apr 3, 2017 14:16 |
|
most dynamo clients should be able to return your remaining capacity per call, so you just need to throttle on your end
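with boto3 that's `ReturnConsumedCapacity="TOTAL"` on each call; the pacing logic is yours. a sketch with an injectable clock so the math is checkable (the table/key names are placeholders):

```python
import time

class CapacityThrottle:
    """pace dynamo calls so average consumed capacity stays under `rate`
    units/second; the clock is injectable for testing."""
    def __init__(self, rate: float, now=time.monotonic):
        self.rate = rate
        self.now = now
        self.ready_at = now()  # earliest moment the next call may start

    def record(self, consumed_units: float) -> float:
        """account for a call's consumed capacity; return seconds to sleep."""
        t = self.now()
        # each consumed unit costs 1/rate seconds of budget
        self.ready_at = max(self.ready_at, t) + consumed_units / self.rate
        return max(0.0, self.ready_at - t)

def throttled_get(table: str, key: dict, throttle: CapacityThrottle):
    """hypothetical wrapper; any dynamo call works the same way."""
    import boto3
    ddb = boto3.client("dynamodb")
    resp = ddb.get_item(TableName=table, Key=key, ReturnConsumedCapacity="TOTAL")
    time.sleep(throttle.record(resp["ConsumedCapacity"]["CapacityUnits"]))
    return resp.get("Item")
```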
|
# ¿ Jun 19, 2017 17:11 |
|
Vanadium posted:If banking unused capacity happens on a 1:1 basis, I shouldn't have any issues. Averaged over 30 seconds I'm consistently under the provisioned capacity. Getting throttled at that point makes sense to me if the banked capacity is only provided on a best-effort basis, if the underlying hardware has capacity to spare or whatever. switch from exponential backoff to a codel queue. you'll get better throughput and lower average latency at the cost of p99 latency. (does not apply if you need ordered writes)
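a very stripped-down sketch of the codel idea applied to a client-side work queue. this is not rfc 8289-complete codel, just the shape of it: once queueing delay stays above a target for a full interval, start shedding stale work at an accelerating rate. the clock is injectable so the behavior is testable:

```python
import math
from collections import deque

class CoDelQueue:
    """simplified codel: shed items once queueing delay has stayed above
    `target` seconds for a full `interval`."""
    def __init__(self, target=0.005, interval=0.100, now=None):
        import time
        self.target, self.interval = target, interval
        self.now = now or time.monotonic
        self.items = deque()       # (enqueue_time, item)
        self.first_above = None    # deadline after which shedding starts
        self.dropping = False
        self.count = 0             # sheds in the current episode
        self.drop_next = 0.0

    def push(self, item):
        self.items.append((self.now(), item))

    def pop(self):
        """return the next item, shedding stale ones; None if nothing is left."""
        while self.items:
            enq, item = self.items.popleft()
            t = self.now()
            if t - enq < self.target:
                self.first_above, self.dropping = None, False
                return item
            if self.first_above is None:
                self.first_above = t + self.interval
            elif not self.dropping and t >= self.first_above:
                self.dropping, self.count = True, 1
                self.drop_next = t + self.interval / math.sqrt(self.count)
                continue  # shed this item
            elif self.dropping and t >= self.drop_next:
                self.count += 1
                self.drop_next = t + self.interval / math.sqrt(self.count)
                continue  # shed, at an accelerating rate
            return item
        return None
```

the shed items are where you'd retry or dead-letter; the point is fresh writes get through instead of queueing behind a growing backlog.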
|
# ¿ Jun 20, 2017 03:17 |
|
Blinkz0rz posted:Anyone have suggestions for the best way to analyze CloudTrail logs? We're getting rate limited on some of our EC2 API calls and it's unclear why at a glance. Happens most in eu-central-1 fwiw. athena if it's a one time thing, redshift if you want to do it frequently
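for the athena route, assuming the cloudtrail logs are already exposed as an athena table (the table and database names here are placeholders), the query that usually answers "who is getting throttled and on what" looks like:

```python
# count throttled ec2 api calls by caller and operation
THROTTLE_QUERY = """
SELECT useridentity.arn AS caller,
       eventname,
       count(*) AS throttled_calls
FROM cloudtrail_logs
WHERE errorcode = 'Client.RequestLimitExceeded'
  AND awsregion = 'eu-central-1'
GROUP BY useridentity.arn, eventname
ORDER BY throttled_calls DESC
"""

def run_query(database: str, output_s3: str) -> str:
    """kick off the query; athena is async, so poll get_query_execution after."""
    import boto3
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=THROTTLE_QUERY,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]
```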
|
# ¿ Jun 20, 2017 18:48 |
|
UnfurledSails posted:I have a Redshift table that needs to be migrated over to DynamoDB. I've found a lot of resources regarding moving from Dynamo to Redshift, but not much for the opposite. Any ideas on how I can go about this? you could run a spark job on emr that just reads straight from the db and inserts into dynamo
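for modest table sizes you don't even need spark; once the rows are unloaded/fetched, the dynamo side is just batched writes. a sketch (table name is a placeholder, rows assumed already shaped for dynamo):

```python
def chunks(items, size=25):
    """dynamodb BatchWriteItem accepts at most 25 put requests per call."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def load(table_name: str, rows):
    """write rows to dynamo; batch_writer does the 25-item batching
    and retries unprocessed items for you."""
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as writer:
        for row in rows:
            writer.put_item(Item=row)
```

the spark-on-emr route is the same idea at scale: each partition does these batched writes in parallel.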
|
# ¿ Jul 2, 2017 06:35 |
|
i worked with redshift every day: you can't do queries async, and you can't hold open a lambda for more than five minutes. athena is basically redshift but slower; however, you can query it async if your queries don't need to be fast. otherwise you probably want something like emr: use lambda to kick off a cluster that runs a spark job with redshift or s3 as the backing store, have it write out results to a bucket, and trigger sns on that write
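the lambda-kicks-off-emr step might look roughly like this. release label, instance type, role names and s3 paths are all placeholders, and a real setup needs error handling and sizing:

```python
def spark_step(script_s3_path: str, output_s3: str) -> dict:
    """build the emr step definition for a spark job."""
    return {
        "Name": "query-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", script_s3_path, "--output", output_s3],
        },
    }

def handler(event, context):
    """lambda entry point: start a transient cluster that runs the job and exits."""
    import boto3
    emr = boto3.client("emr")
    return emr.run_job_flow(
        Name="async-query",
        ReleaseLabel="emr-5.36.0",
        Instances={
            "InstanceGroups": [{
                "InstanceRole": "MASTER",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 1,
            }],
            # terminate the cluster as soon as the step finishes
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        Steps=[spark_step("s3://my-bucket/job.py", "s3://my-bucket/results/")],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
```

the job itself writes to the results bucket, and an s3 event notification on that prefix fires the sns topic.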
|
# ¿ Sep 11, 2017 03:19 |
|
he wants the buckets removed or the contents of the buckets? either way I'd do something with boto probably
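if it's both, the boto version is "empty it, then delete it" (this sketch assumes non-versioned buckets; versioned ones also need the object versions deleted):

```python
def batches(keys, size=1000):
    """s3 delete_objects accepts at most 1000 keys per request."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def nuke_bucket(name: str):
    """delete every object in a bucket, then the bucket itself."""
    import boto3
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=name):
        keys = [{"Key": o["Key"]} for o in page.get("Contents", [])]
        for batch in batches(keys):
            s3.delete_objects(Bucket=name, Delete={"Objects": batch})
    s3.delete_bucket(Bucket=name)
```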
|
# ¿ Feb 6, 2018 20:39 |
|
AWWNAW posted:Are there any better options for creating IAM users with access keys via cloud formation than outputting the access key as part of the template? don't create access keys in cfn. grant users the ability to create and manage their own keys if they need them or use federation to allow them to assume roles instead
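the self-service version is one iam policy attached to a group all the human users sit in; `${aws:username}` scopes the grant to the caller's own keys:

```python
import json

# iam policy letting a user manage only their own access keys
SELF_SERVICE_KEYS = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:CreateAccessKey",
            "iam:DeleteAccessKey",
            "iam:ListAccessKeys",
            "iam:UpdateAccessKey",
        ],
        # the policy variable resolves to the calling user's own name
        "Resource": "arn:aws:iam::*:user/${aws:username}",
    }],
}

print(json.dumps(SELF_SERVICE_KEYS, indent=2))
```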
|
# ¿ Apr 16, 2018 16:43 |
|
AWWNAW posted:Yeah I’d rather not create them in cloud formation but how can I federate Kubernetes pods? I guess I can google it. i thought this was for actual people users, my bad i've only ever used https://github.com/jtblin/kube2iam for iam with k8s pods but i didn't set it up. as far as i know it doesn't require access keys or users at all
|
# ¿ Apr 17, 2018 20:18 |
|
Volguus posted:What is the best way to run jobs/executable on demand in AWS? ECS/Fargate? AWS Batch? Some other mechanism? i solved a similar problem to yours with sqs + cloudwatch alarms + lambda. we'd post new jobs to an sqs queue. an alarm on the queue would fire when the queue was nonempty and that would trigger a lambda that read the sqs message and started the job (in our case an emr cluster).
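the lambda side of that pattern, sketched. the queue url and message schema are made up, and `start_job` is a stub for whatever actually launches the work (an emr cluster in our case):

```python
import json

def parse_job(body: str) -> dict:
    """extract job parameters from the queue message; schema is hypothetical."""
    msg = json.loads(body)
    return {"job_id": msg["job_id"], "input": msg["input"]}

def start_job(job: dict):
    ...  # cluster/task launch goes here

def handler(event, context):
    """fired by the cloudwatch alarm on queue depth; drains and starts each job."""
    import boto3
    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder
    while True:
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for m in messages:
            start_job(parse_job(m["Body"]))
            # only delete after the job is started, so failures get redelivered
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])
```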
|
# ¿ Apr 17, 2018 20:22 |
|
devops isn't a person, it's a methodology. you wouldn't hire an agile person to agile up your software. altho i guess tons of places do this too
|
# ¿ Apr 22, 2018 18:30 |
|
Volguus posted:Wait, devops is not a job title, a particular job description? "Devops" guys is totally a thing that I heard. And people do hire agile consultants, although I don't think many know what to actually expect of them. you are wrong, absolutely (but it's probably not your fault)

what devops is supposed to mean is that developers take responsibility for the operation of the software they write. this doesn't necessarily mean they need to carry a pager (but they probably should) and learn the difference between `/etc/some_garbage_i_wrote` and `/opt/some_garbage_i_wrote/config`, but it does mean operational concerns like logging, telemetry, configuration, packaging and deployment need to be accounted for and handled during development. there's a bunch of ways you can accomplish this (embed operations people on development teams, include operations in planning, hire developers with operational experience, whatever) but it doesn't really matter how you do it. what matters is that the operational aspects are surfaced and dealt with during development.

on the other side, operations don't just ssh into a machine, run `apt install some_garbage_someone_else_wrote`, put some db settings in a conf file and write an init script. when there's a production incident it's not sufficient for them to just restart the service or reboot the machine. they need to know enough about the software to do more than open an issue and assign it to the developers. they should be actively involved in helping the developers discover and address the issue.

so the key thing is that development and operations share a goal, are evaluated in the same way, and collaborate to deliver better results. if you have a 'devops' role or person and they are your jenkins admin or they write terraform modules to set up your infra, then whatever, that's fine, but what you really have is an infrastructure developer or site reliability engineer or just an operations person who knows how to code. if you're not doing the shared responsibility thing you're not doing devops even if you call it that
|
# ¿ Apr 22, 2018 23:20 |
|
cloudformation was pretty mediocre until late 2016. since then they've added a ton of great features
|
# ¿ May 31, 2018 02:59 |
|
RVWinkle posted:Hey can anybody help me with some syntax in my CloudFormation template? I'm making a secure pihole stack just for fun and when I use conditions and parameters with AWS::CloudFormation::Init: things go to hell. I have been searching around but haven't been able to find any code examples for this scenario. i don't know if your examples are just misformatted or what, but the first is equivalent to: code:
code:
you probably want: code:
|
# ¿ Jan 16, 2019 09:40 |
|
what's my best option for pushing records to kinesis from languages with poor support for the kinesis producer library? what i've considered so far:

a: pushing directly to the stream using a native client
b: writing a basic http wrapper around the kpl, pushing events to a pair of fargate containers running it, and letting them batch and push to the stream
c: using cloudwatch events instead and taking advantage of its ability to persist events to a kinesis stream
d: using the kinesis log agent and just writing json lines to a file

i don't like a because we're using some sketch programming languages and they have iffy quality clients. i don't like b because i hate operating things and also latency is a concern. i don't like c because i can't find out if order is preserved and also latency is a concern. i really don't like d because i can't afford to lose events and orchestrating things so all logs are written and shipped seems hard.

have i missed something obvious?
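for option a, most of what the kpl buys you is batching, and the `PutRecords` side of that is simple enough to do by hand: a call takes at most 500 records and 5 MB, 1 MB per record. a sketch (stream name is a placeholder, and a real client must retry the per-record failures):

```python
def batch_records(records, max_count=500, max_bytes=5 * 1024 * 1024):
    """split (partition_key, data_bytes) pairs into PutRecords-sized batches."""
    batch, size = [], 0
    for key, data in records:
        entry_size = len(data) + len(key.encode())
        if batch and (len(batch) >= max_count or size + entry_size > max_bytes):
            yield batch
            batch, size = [], 0
        batch.append({"PartitionKey": key, "Data": data})
        size += entry_size
    if batch:
        yield batch

def push(stream: str, records):
    import boto3
    kinesis = boto3.client("kinesis")
    for batch in batch_records(records):
        resp = kinesis.put_records(StreamName=stream, Records=batch)
        # entries in resp["Records"] with an ErrorCode must be retried;
        # omitted here to keep the sketch short
        assert resp["FailedRecordCount"] == 0
```

note `PutRecords` doesn't guarantee ordering across the batch, so this has the same caveat as option c if order matters.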
|
# ¿ Apr 19, 2019 04:07 |
|
i'm working with a client too cheap to pay for aws support, but i have some questions about the service level guarantees of cloudwatch events, sns and sqs. what's my best bet for finding these?
|
# ¿ Jul 14, 2019 00:14 |
|
a hot gujju bhabhi posted:Curious to hear some feedback from anyone who has tried LocalStack for developing AWS stuff without actual resource spin up and pull down. It looks super promising to me, but I've never used it in practice, I'm keen to hear some thoughts from you much more experienced gurus? localstack is ok for when you just need to satisfy a dependency and can't be bothered to swap out sqs/sns/whatever for something locally runnable but it's not a very accurate recreation of the services it replaces so you can't rely on it if you're testing the thing itself
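wiring boto3 up to it is just an endpoint override; localstack's single edge port and dummy credentials look like this (port and region are localstack defaults, adjust to your setup):

```python
def localstack_kwargs(endpoint="http://localhost:4566"):
    """client kwargs pointing at a local localstack; credentials are dummies."""
    return {
        "endpoint_url": endpoint,
        "region_name": "us-east-1",
        "aws_access_key_id": "test",
        "aws_secret_access_key": "test",
    }

def local_client(service: str):
    """e.g. local_client('sqs') when localstack is running."""
    import boto3
    return boto3.client(service, **localstack_kwargs())
```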
|
# ¿ Nov 30, 2019 04:24 |
|
|
you could try Glue, it's basically serverless spark
|
# ¿ Apr 15, 2020 04:54 |