Hughlander
May 11, 2005

Pollyanna posted:

Yeah, I'm really confused on why things are being made from scratch every time. I'll have to confirm that's actually happening, but since the only thing changing is pulling a different commit of the master branch at any point in time, then there's no reason to bake entire AMIs.

It guarantees availability of the deployment at any point in the future: as long as the instance can launch, the app can be brought up. If you launch an instance and then deploy to it (for instance, as part of an autoscaling group), you now need to ensure that the second-step bootstrap is available as well. If you're pulling from git, you also need to ensure that your git host can sustain the pull load of a mass deployment. If you have a large autoscale event and need to bootstrap 3000 instances, will your git server fall over?

I've been at a place that did something similar, and even just doing an instance launch of 400 servers had a non-zero failure rate. We eventually replaced that system with EFS.
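The scale argument is just probability: give each bootstrap pull any independent failure rate at all and a mass launch makes at least one failure near-certain. A quick illustration (the failure rates here are made up for the example):

```python
# Probability that a mass launch sees at least one bootstrap failure,
# given an independent per-instance failure rate (illustrative numbers).
def p_any_failure(n_instances, p_fail):
    return 1 - (1 - p_fail) ** n_instances

# Even a 0.1% per-pull failure rate bites hard at scale:
print(round(p_any_failure(400, 0.001), 2))   # 0.33
print(round(p_any_failure(3000, 0.001), 2))  # 0.95
```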

As above, though, Docker with a private container registry is probably the better approach, but from a cost perspective I have a small issue with Docker in AWS.


Hughlander
May 11, 2005

oliveoil posted:

Does Amazon have anything like Google's App Engine? I don't want to know how to set up a system out of multiple components. I just want to write some code and then upload it and magically have a working application and never worry about virtual machines or how many instances of my code are running etc

Lambda is probably what you want

Hughlander
May 11, 2005

cosmin posted:

Wouldn't lambda be more similar to Google Functions (which is still beta)?

Maybe but given the requirements:

quote:

Does Amazon have anything like Google's App Engine? I don't want to know how to set up a system out of multiple components. I just want to write some code and then upload it and magically have a working application and never worry about virtual machines or how many instances of my code are running etc

And the requirement for it to be AWS, I was thinking static content on S3 plus AJAX hitting Lambda is the closest match. It's pretty much their textbook mobile backend example on the Lambda website.
Basically it's the fusion of IaaS and PaaS.
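For flavor, the Lambda side of that pattern can be tiny. A hand-rolled sketch, not Amazon's example code; the event shape assumed here is the API Gateway proxy-integration format:

```python
import json

# Minimal Lambda handler for an AJAX backend behind API Gateway.
# Static assets would live on S3; this only answers the API calls.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Upload that, wire it to an API Gateway route, and there are no instances to think about.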

Hughlander
May 11, 2005

UnfurledSails posted:

I have an application that reads a small amount (less than a dozen) of key value pairs as input, and the values need some frequent tuning in the next few weeks. Currently they are read from a configuration file, but I want to be able to change them without having to deploy every time. My first instinct is to just create a DynamoDB table and put the key value pairs there, but I know that's because I use DynamoDB heavily so of course I'd think that. Is there a better option?

Pub/sub Redis on ElastiCache?
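Whatever store wins, it's worth keeping the app indifferent to it. A sketch with the client injected (names are hypothetical), so swapping DynamoDB for Redis later doesn't touch the app code:

```python
# Load a handful of tunable key-value pairs from an injected client.
# Any object with a get(key) method works: a redis.Redis, a thin
# DynamoDB wrapper, or a plain dict-backed stub in tests.
def load_config(client, keys, defaults=None):
    defaults = defaults or {}
    out = {}
    for k in keys:
        val = client.get(k)
        out[k] = val if val is not None else defaults.get(k)
    return out

class DictClient:
    """Dict-backed stand-in for a real store (handy in tests)."""
    def __init__(self, data):
        self.data = data
    def get(self, key):
        return self.data.get(key)
```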

Hughlander
May 11, 2005

Sylink posted:

Our aws guys refused to do anything. For the money just hire full time staff with aws knowledge.

Lol. What’s your lead time for hiring people with AWS knowledge? Quickest we ever did was 90 days.

Hughlander
May 11, 2005

Docjowles posted:

I can't speak to how Azure handles it, but that's just how routing works. A more specific route is perfectly acceptable and will win out over a more general route toward a larger subnet that happens to contain your /24. I would be surprised if it throws an error.

Yep. A common use case used to be a low-bandwidth backup route to a data center for use when the main route was down. It wouldn’t cover the entire route, just a few cabinets.
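The longest-prefix rule is easy to demonstrate with the stdlib `ipaddress` module (a toy route table, not how a real router stores routes):

```python
import ipaddress

# Longest-prefix match: among all routes containing the destination,
# the most specific one (largest prefix length) wins. This is why a
# /24 route overrides a /16 that happens to contain it.
def best_route(dest, routes):
    d = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(r) for r in routes
               if d in ipaddress.ip_network(r)]
    if not matches:
        return None
    return str(max(matches, key=lambda n: n.prefixlen))

print(best_route("10.0.5.7", ["10.0.0.0/16", "10.0.5.0/24"]))  # 10.0.5.0/24
```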

Hughlander
May 11, 2005

This may be the wrong thread for it, but I figured I'd try it...

Some people at work are doing some really low-level optimizations in a C++ app: custom memory allocators to keep memory contiguous, and cache localization where it needs to be. The catch is that the app in question will be run in AWS. However, my understanding is that, to prevent side-channel attacks, the kernel, the VM, and an LXC in a k8s pod (I guess that's just the kernel again) will all work against you to invalidate those optimizations. Is that correct? Are there any white papers I can float around about why this is a bad idea? Ignoring the k8s part for a moment, would using one of the new AWS Native instances be able to alleviate this?

Hughlander
May 11, 2005

Not sure what thread this should go to, but I want to get an Elastic IP and VPN it to a set of containers on my NAS. Is that just going to be a VPC, an Elastic IP, and a VPN endpoint? Or is there more to it than that?

Rationale: I need to upgrade my EC2 instance to a higher machine class, or I could just use my home NAS, but I don’t want people knowing my home IP / I want a stable IP when my home one changes.

Hughlander
May 11, 2005

Agrikk posted:

I'm not really sure what you are asking here, but I'll take a swing at it:

You will want to create a VPC, set up a virtual private gateway (that in itself will have public IP addresses - you don't have to create them) and then create your VPN tunnel to it. Then you can route in/out traffic through a NAT gateway, which in itself will have a public IP address. This IP address changes, so you'll use the AWS DNS name or point an alias to whatever DNS hostname you prefer. You can specify an elastic IP address upon creation.



FYI: bumping an EC2 machine to a different class (T3 to M5) or size (large to 2xlarge) is trivial and requires only a reboot.

Thanks I’ll look at it more.

What I want to do: I have a service on a port in DigitalOcean now that is barely worth the cost. It’s outgrown the instance there and I’d need to pay $30 more/mo if I stay. I have the resources on my home network to run it, I just don’t want to have my IP published for it. I’m looking to have an AWS public IP and port be routed to a Docker container on a node here.

I know upping the machine size is trivial I just don’t want the expense for a hobby project.

Hughlander
May 11, 2005

deedee megadoodoo posted:

https://github.com/boto/boto3/issues/2596

loving horse poo poo. gently caress this dumbass company. All of our deploys are broken.

I mean, it's not going to help your frustrations now, but I guess you don't pin versions of things in production? That's pretty much step 0: you want to know what version you're running in production. And otherwise, yeah, something leaked out from how their requirements.txt differs from yours, but from the look of it, it was detected within 2 hours of release and a workaround within 3. That's pretty good turnaround time for when a bug escapes to the wild.
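For reference, pinning just means exact versions in the requirements file, so an upstream release can't silently change what lands in production (the version numbers below are illustrative, not a recommendation):

```
# requirements.txt -- exact pins, not ranges
boto3==1.14.63
botocore==1.17.63
awscli==1.18.140
```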

Hughlander
May 11, 2005

deedee megadoodoo posted:

I am aware of both the fix and the crappiness of our infra code. My frustration lies in the fact that there is no way this was even tested before it was pushed. You can't even run "aws --version". It's not like this is some hidden error. It was just completely non-functional code.

Unless their requirements.txt pinned the version of awscli...

Hughlander
May 11, 2005

deedee megadoodoo posted:

You are talking about boto. I am talking about the new version of the awscli code being pushed to the ec2 yum repo without being tested.

We had a fleet of ec2 instances start up this morning that were all broken.

Got it! Yep, I'm talking about the wrong thing; is it too early in the day to drink?

Hughlander
May 11, 2005

With Google's change to Google Apps for domains, where you now need to pay $$$, I have some 15-year-old domains with Gmail accounts routing to other Gmail accounts that I now need to get rid of. So I plan on following this blog post https://aws.amazon.com/blogs/messaging-and-targeting/forward-incoming-email-to-an-external-destination/ about being able to:

- Set up Route 53 MX records
- Use SES to save incoming mail to an S3 bucket
- Use a Lambda function that triggers on the file write to S3
- Resend the mail via SES to the permanent email address

And set it up for about 5 domains. (IE anything sent to *@hughlander.com goes to hughlander@gmail.com)

Since there's a reasonable number of domains, I figure I'll also do it with some infrastructure as code and make it repeatable; maybe get my own blog post or at least a GitHub link out of it. So my question is: what's the appropriate infrastructure-as-code system for this? I've used Puppet and Ansible in the past and neither seems appropriate. Since the tech is all AWS-specific, CloudFormation sounds like a possibility, though I have some interest in learning Terraform, and I'm not sure how Terraform would work with R53, SES, S3, and Lambda.
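From what I've read, those pieces all map onto first-class Terraform resources. A hypothetical fragment (the resource names and region are made up, and this is nowhere near a complete config):

```
resource "aws_route53_record" "mx" {
  zone_id = aws_route53_zone.domain.zone_id
  name    = "hughlander.com"
  type    = "MX"
  ttl     = 300
  records = ["10 inbound-smtp.us-east-1.amazonaws.com"]
}

# SES receipt rule: write the message to S3, then invoke the forwarder Lambda.
resource "aws_ses_receipt_rule" "forward" {
  name          = "store-and-forward"
  rule_set_name = aws_ses_receipt_rule_set.main.rule_set_name
  enabled       = true

  s3_action {
    bucket_name = aws_s3_bucket.mail.bucket
    position    = 1
  }

  lambda_action {
    function_arn = aws_lambda_function.forwarder.arn
    position     = 2
  }
}
```

Wrapping that in a module with the domain as a variable would make the 5-domain repetition a one-liner each.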

Anything I'm missing / Any thoughts?


Hughlander
May 11, 2005

Docjowles posted:

+1 if you don't actually need a traditional database, don't use one. DynamoDB or S3 + Athena could end up costing pennies compared to Aurora.

At work we have a large NoSQL database in MongoDB, with no cross-session writes. I'm going to ask for a research project next year to do a proof of concept of replacing the whole thing with EFS.
