Pollyanna
Mar 5, 2005

Milk's on them.


I have a question about EBS and baking AMIs. We're currently baking a new AMI for every new version of our app we want to deploy, and I'm wondering if there's a way around that? It takes 15~20 minutes to bake one, which means that every commit I push to Bitbucket takes half an hour to show up on its staging server. I have to debug some pipeline-related poo poo, and waiting that long just to run into yet another bug is driving me crazy. What can I do to mitigate this?
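What I'd love is for the box to just pull a fresh image instead of us rebaking, something like this sketch (every name in it is made up, not our actual setup):

code:
#!/bin/bash
# Sketch of a deploy script: swap the running container for a freshly
# pulled image tag instead of baking a whole new AMI. Assumes Docker is
# already on the instance; the registry, image name, and ports are all
# placeholders.
set -euo pipefail
TAG="${1:?usage: deploy.sh <git-sha>}"
docker pull registry.example.com/app-name:"$TAG"
docker rm -f app 2>/dev/null || true
docker run -d --name app -p 80:3000 registry.example.com/app-name:"$TAG"
No idea if something like that would play nice with the rest of our pipeline, though.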


Pollyanna
Mar 5, 2005

Milk's on them.


Yeah, I'm really confused about why things are being made from scratch every time. I'll have to confirm that's actually happening, but since the only thing that changes is which commit of the master branch gets pulled at any point in time, there's no reason to bake entire AMIs.

Pollyanna
Mar 5, 2005

Milk's on them.


Our AMI bake times have ballooned to 40-50 minutes, and I really want to dive in and debug why that's happening. I've tried reaching out to my coworker who's in charge of the pipeline/AWS stuff, but he's unhelpful and reluctant to walk me through the process, what we're doing, and why. I want to just bypass him and do some digging of my own to figure out how to cut the bake time down. What're the common reasons baking an AMI might take this long? Something about the files involved? Is there a way to debug/step through the process?

I never got an answer re: why we're baking an AMI for each new commit to a branch, besides "that's the commonly accepted pattern". I get that it's technically correct, but it's also bullshit slow, and I question whether it's worth it given that we commit early and often and therefore deploy to ticket-specific servers early and often, and this time fuckin' adds up, man. We're behind schedule as-is and this process is making it so much worse.
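The closest thing to a step-through I've found so far is Packer's own debug knobs. Assuming our wrapper scripts pass env vars through (and the -var here is a stand-in), something like this should at least show which step is eating the time:

code:
# PACKER_LOG=1 turns on Packer's verbose logging (PACKER_LOG_PATH sends it
# to a file instead of stderr); -debug pauses before each step so you can
# watch exactly where it stalls.
PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build \
    -debug \
    -var "name=app-name" \
    ./packer.json
Haven't actually confirmed this against our setup yet.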

Pollyanna
Mar 5, 2005

Milk's on them.


It seems like most of the time spent during the Bake AMI step is when running this command:

code:
packer build \
    -var "name=app-name" \
    <more vars> \
    ./packer.json
and right after that, when it pulls the Docker image for our app. So it does seem to be leveraging Docker somehow. The problem is that it tends to go completely silent and not output anything after printing a bunch of this:

code:
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
amazon-ebs:
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
amazon-ebs: 
and it just stays there for easily 30-40 minutes. There's little to nothing shown about what exactly is taking that long, and I have no idea where to debug from there.

I know literally nothing about AWS, so maybe I'm missing something...

Edit: How big are Docker images supposed to be, generally? Ours ends up at like 315 MB or so. Is that normal?
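For what it's worth, this is how I've been poking at the size, layer by layer (the image name is a stand-in for our real tag):

code:
# Show the image's overall size, then break it down per layer to see
# which Dockerfile step contributes the most.
docker images app-name
docker history --no-trunc app-name:latest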

Pollyanna fucked around with this message at 20:06 on Apr 26, 2017

Pollyanna
Mar 5, 2005

Milk's on them.


Vanadium posted:

Huh, so creating an AMI is more complicated than taking a tarball with a root fs in it, adding a dir with your app to it and uploading that to S3?

Don't ask me. I'm just trying to debug our deployment pipeline, which was built by our ops team (who recently quit en masse), 'cause it's slow as gently caress.

Pollyanna
Mar 5, 2005

Milk's on them.


Has anyone here used DMS to migrate their Mongo database to DocumentDB? One of my migration tasks is horribly slow (which makes sense, the table is 190 GB) and it has a nasty tendency to silently fail partway through the multi-hour process. I tried changing the task settings with the CLI to enable debug logs, but it just gives me an “invalid task settings JSON” message when I try. Any way to get details on why it’s a bad input?
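For reference, the settings fragment I'm trying to push looks roughly like this (component IDs and severities lifted from the DMS docs, trimmed way down):

code:
# Minimal task-settings file that turns on detailed debug logging for the
# source-unload and target-load phases of a DMS task.
cat > task-settings.json <<'EOF'
{
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      { "Id": "SOURCE_UNLOAD", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG" },
      { "Id": "TARGET_LOAD",   "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG" }
    ]
  }
}
EOF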

Pollyanna
Mar 5, 2005

Milk's on them.


Pollyanna posted:

Has anyone here used DMS to migrate their Mongo database to DocumentDB? One of my migration tasks is horribly slow (which makes sense, the table is 190 GB) and it has a nasty tendency to silently fail partway through the multi-hour process. I tried changing the task settings with the CLI to enable debug logs, but it just gives me an “invalid task settings JSON” message when I try. Any way to get details on why it’s a bad input?

Figured this out: AWS expects you to prepend file:// to reference a local file for input, unlike literally every other program I’ve used. gently caress off, AWS.
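So the working invocation ends up looking something like this (task ARN elided):

code:
# The CLI wants file:// in front of local paths; a bare path just gets
# parsed as a literal string, hence the useless "invalid task settings
# JSON" error.
aws dms modify-replication-task \
    --replication-task-arn <task-arn> \
    --replication-task-settings file://task-settings.json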

Pollyanna
Mar 5, 2005

Milk's on them.


Jeoh posted:

Contact your AWS TAM. We've been working intensively with the DMS team and they're really eager to change things based on customer feedback.

drat, really? I’ll have to get ours involved too.

Pollyanna
Mar 5, 2005

Milk's on them.


Agrikk posted:

Always this.

For every project, you should be engaging your TAM (or your entire account team) before you start. This way you don’t have to reinvent the wheel, and you’ll be given best practices for your project, ensuring you get it right straight from the beginning.

This is good to know, thanks. Would have helped when doing our migration.

Pollyanna
Mar 5, 2005

Milk's on them.


Throwing in my two cents on DocumentDB - it has performed far worse than MongoDB in every one of our performance tests, to the point where we went from spending 20% of our time in Mongo to more than 80% of our time in DocDB. I can’t in all honesty recommend it if latency/performance is critical.

Pollyanna
Mar 5, 2005

Milk's on them.


I have a CloudWatch log group (made up of multiple log streams) and a feed of CloudTrail events (basically a change log/access log for an S3 bucket) that I want to compose into a single log stream for ease of viewing, searching, and auditing. I was under the impression that I could accomplish this with CloudWatch itself, but on further investigation, I’m not sure that’s actually the case. What’s my best option for combining a CloudWatch log group and a series of CloudTrail events into one stream?
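Best idea I've come up with so far, assuming I point the trail at its own log group (CloudTrail can deliver to CloudWatch Logs): Logs Insights can apparently query several log groups at once, which would at least give one merged, searchable view, even if it's not literally a single stream. Something like this, with placeholder group names:

code:
# Query the app log group and a CloudTrail-backed log group together;
# the @log field shows which group each line came from.
aws logs start-query \
    --log-group-names "app-log-group" "cloudtrail-log-group" \
    --start-time "$(date -d '-1 hour' +%s)" \
    --end-time "$(date +%s)" \
    --query-string 'fields @timestamp, @log, @message | sort @timestamp desc'
# then fetch results with: aws logs get-query-results --query-id <id>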

Pollyanna
Mar 5, 2005

Milk's on them.


PierreTheMime posted:

For AWS Transfer, is there any way to set the "Restricted" flag for a user account programmatically? Using the CLI I'm not seeing any value in the "describe-user" JSON that matches that setting, and I don't see it as an option in the "create-user"/"update-user" commands. I have a client that wants to use the same bucket for a ton of clients, and generating and locking their user accounts to their S3 key location without clicking through dozens of accounts would be nice.

I talked to my company’s TAM recently asking if it was possible, and their response was basically “we’ll keep it in mind”. Prolly not happening anytime soon. I feel your pain, friend :(

We’ve been using scope-down policies in lieu of that option. The details escape me right now, but I can ask our SREs to explain tomorrow. There might even be docs on it somewhere, but I don’t know for sure.
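From a quick skim of the AWS docs (so double-check the variable names), the rough shape is a session policy that pins each user to their own prefix in the shared bucket:

code:
# Sketch of a Transfer scope-down policy; the ${transfer:...} variables
# are filled in per-user at session time, per the AWS Transfer docs.
cat > scope-down.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::${transfer:HomeBucket}"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
EOF
aws transfer update-user \
    --server-id <server-id> \
    --user-name <user> \
    --policy file://scope-down.json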


Pollyanna
Mar 5, 2005

Milk's on them.


Where did my post go? EDIT: Oh there it is.
