|
I have a question about EBS and baking AMIs. We're currently baking a new AMI for every new version of our app we want to deploy, and I'm wondering if there's a way around that? It takes 15~20 minutes to bake one, which means that every commit I push to Bitbucket takes half an hour to show up on its staging server. I have to debug some pipeline related poo poo, and waiting that long just to run into yet another bug is driving me crazy. What can I do to mitigate this?
|
# ¿ Mar 9, 2017 22:06 |
|
|
Yeah, I'm really confused about why these are being built from scratch every time. I'll have to confirm that's actually happening, but since the only thing changing is pulling a different commit of the master branch at any point in time, there's no reason to bake entire AMIs.
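One common pattern for this (sketched below with placeholder paths and service names, not our actual pipeline) is to bake the base AMI once and have each instance check out the desired commit at boot via a user-data script, so a new commit only needs an instance launch, not a full bake:

```shell
# Hypothetical user-data script for a pre-baked base AMI. Paths, the service
# name, and the APP_COMMIT variable are placeholders -- the idea is just
# "fetch the commit at boot instead of re-baking the image per commit".
cat > user-data.sh <<'EOF'
#!/bin/bash
set -euo pipefail
cd /opt/app                               # app checkout baked into the base AMI
git fetch origin                          # grab the latest refs
git checkout "${APP_COMMIT:-origin/master}"  # pin to a commit, or track master
systemctl restart app                     # pick up the new code
EOF
chmod +x user-data.sh
```

The trade-off is that instances are no longer immutable artifacts, so you'd want to keep the base AMI bake around for dependency/OS changes and only fast-path the app code itself.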
|
# ¿ Mar 10, 2017 15:30 |
|
Our AMI bake times have ballooned to 40-50 minutes, and I really want to dive in and debug why that's happening. I've tried reaching out to my coworker who's in charge of the pipeline/AWS stuff, but he's unhelpful and reluctant to walk me through the process and what we're doing and why. I want to just bypass him and do some of my own digging to figure out how to reduce the amount of time it takes to bake.

What're the common reasons why baking an AMI might take so long? Something about the files involved? Is there a way to debug/step through the process?

I never got an answer re: why we're baking an AMI for each new commit to a branch, besides "that's the commonly accepted pattern". I get that it's technically correct, but it's also bullshit slow, and I question whether it's worth it given that we commit early and often and therefore deploy to ticket-specific servers early and often, and this time fuckin' adds up man. We're behind schedule as-is and this process is making it so much worse.
|
# ¿ Apr 26, 2017 17:32 |
|
It seems like most of the time spent during the Bake AMI step is in running this command:
code:
I know literally nothing about AWS, so maybe I'm missing something...

Edit: How big are Docker images supposed to be, generally? Ours ends up at like 315 MB or so. Is that normal?

Pollyanna fucked around with this message at 20:06 on Apr 26, 2017 |
# ¿ Apr 26, 2017 20:02 |
|
Vanadium posted:Huh, so creating an AMI is more complicated than taking a tarball with a root fs in it, adding a dir with your app to it and uploading that to S3?

Don't ask me. I'm just trying to debug our deployment pipeline built by our ops team (who recently quit en masse) cause it's slow as gently caress.
|
# ¿ Apr 26, 2017 20:20 |
|
Has anyone here used DMS to migrate their Mongo database to DocumentDB? One of my migration tasks is horribly slow (which makes sense, the table is 190 GB) and it has a nasty tendency to silently fail partway through the multihour process. I tried changing the task settings with the CLI to enable debug logs, but it just gives me an “invalid task settings JSON” message when I try. Any way to get details on why it’s a bad input?
|
# ¿ Aug 26, 2019 22:57 |
|
Pollyanna posted:Has anyone here used DMS to migrate their Mongo database to DocumentDB? One of my migration tasks is horribly slow (which makes sense, the table is 190 GB) and it has a nasty tendency to silently fail partway through the multihour process. I tried changing the task settings with the CLI to enable debug logs, but it just gives me an “invalid task settings JSON” message when I try. Any way to get details on why it’s a bad input?

Figured this out: AWS expects you to append file:// to reference a local file for input, unlike literally every other program I’ve used. gently caress off, AWS.
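For anyone who hits the same thing, the shape of the fix looks roughly like this (the task ARN is a placeholder, and this settings JSON is just a minimal logging-only example, not our full config):

```shell
# Minimal sketch: write the DMS task settings to a local file, then pass it
# to the CLI with the file:// prefix -- a bare path gets rejected as
# "invalid task settings JSON".
cat > task-settings.json <<'EOF'
{
  "Logging": {
    "EnableLogging": true
  }
}
EOF

# The actual call (ARN is a placeholder), shown commented out:
# aws dms modify-replication-task \
#   --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
#   --replication-task-settings file://task-settings.json
```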
|
# ¿ Aug 27, 2019 18:01 |
|
s/ap/pre
|
# ¿ Aug 27, 2019 18:06 |
|
Jeoh posted:Contact your AWS TAM. We've been working intensively with the DMS team and they're really eager to change things based on customer feedback. drat, really? I’ll have to get ours involved too.
|
# ¿ Sep 24, 2019 22:11 |
|
Agrikk posted:Always this. This is good to know, thanks. Would have helped when doing our migration.
|
# ¿ Sep 25, 2019 12:58 |
|
Throwing in my two cents on DocumentDB - it has performed far worse than MongoDB in each of our performance tests, to the point where we went from spending 20% of our time in the database with Mongo to >80% with DocDB. I can’t in all honesty recommend it if latency/performance is critical.
|
# ¿ Oct 18, 2019 00:58 |
|
I have a CloudWatch log group (made of multiple log streams) and a feed of CloudTrail events (basically a change log/access log for an S3 bucket) that I want to compose into a single log stream for ease of viewing, searching, and auditing. I was under the impression that I could accomplish this with CloudWatch itself, but on further investigation, I’m not sure if that’s actually the case. What’s my best option for composing a CloudWatch log group and a series of CloudTrail events into one single log stream?
|
# ¿ Nov 15, 2019 21:16 |
|
PierreTheMime posted:For AWS Transfer, is there any way to set the "Restricted" flag for a user account programmatically? Using CLI I'm not seeing any value in the "describe-user" JSON that matches that setting and don't see it as an option in the "create/update-user" commands. I have a client that wants to use the same bucket for a ton of clients, and generating and locking their user accounts to their S3 key location without clicking through dozens of accounts would be nice.

I talked to my company’s TAM recently asking if it was possible, and their response was basically “we’ll keep it in mind”. Prolly not happening anytime soon. I feel your pain, friend. We’ve been using scope-down policies in lieu of that option. The details escape me right now, but I can ask our SREs to explain tomorrow. There might even be docs on it somewhere, but I don’t know for sure.
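For reference, a scope-down policy looks roughly like this (bucket name is a placeholder, and this is a generic sketch rather than our exact policy) - it uses the ${transfer:UserName} policy variable so every user shares one policy but only sees their own prefix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::example-bucket"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:UserName}/*", "${transfer:UserName}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::example-bucket/${transfer:UserName}/*"]
    }
  ]
}
```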
|
# ¿ Dec 5, 2019 23:40 |
|
|
Where did my post go? EDIT: Oh there it is.
|
# ¿ Dec 6, 2019 00:36 |