JHVH-1
Jun 28, 2002

a hot gujju bhabhi posted:

I'm fairly new to AWS so I apologise for the super basic question, but what service(s) would I use if I wanted to make a website that could compile LESS into CSS for a user to download? I figure that I should do this in a Node.js Lambda and then send the result to S3 and publish to an SNS topic to say that the download is ready, which my webpage can then react to. Am I on the right track?

I think you could do something with the new code pipeline type services, but lambda could be fine from the looks of it. I am thinking about using something similar myself (using our existing bamboo server to build/deploy instead).

A whole bunch of options here using different languages https://www.staticgen.com
Like here is a blog post example using hexo https://medium.com/@TedYav/using-hexo-and-aws-to-build-a-fast-massively-scalable-website-for-pennies-ea3c0f1115a
It has its own built in s3 publisher.
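A minimal Python sketch of that Lambda flow. The bucket name, topic ARN, and event shape are all invented here, and the LESS compiler and AWS clients are passed in so the logic stands alone; in a real Lambda you'd hand it `boto3` clients and an actual compiler:

```python
import json

# hypothetical defaults; swap in your real bucket and SNS topic
DEFAULT_BUCKET = "compiled-css"
DEFAULT_TOPIC = "arn:aws:sns:us-east-1:123456789012:css-ready"

def handler(event, s3, sns, compile_less,
            bucket=DEFAULT_BUCKET, topic_arn=DEFAULT_TOPIC):
    # event is assumed to look like {"key": "site.less", "source": "<less text>"}
    css = compile_less(event["source"])
    out_key = event["key"].rsplit(".", 1)[0] + ".css"
    # store the compiled CSS, then announce it on SNS so the page can react
    s3.put_object(Bucket=bucket, Key=out_key, Body=css.encode())
    sns.publish(TopicArn=topic_arn, Message=json.dumps({"key": out_key}))
    return out_key
```

The webpage would subscribe (directly or via a queue) to the topic and fetch the object once the message lands.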

jiffypop45
Dec 30, 2011

Correct me if I'm wrong, but from my understanding CodePipeline sends packaged code to an EC2 host or autoscaling group. I don't think you can send it to an S3 bucket from there.

JHVH-1
Jun 28, 2002

jiffypop45 posted:

Correct me if I'm wrong, but from my understanding CodePipeline sends packaged code to an EC2 host or autoscaling group. I don't think you can send it to an S3 bucket from there.

Yeah for the most part, but it sounds like you could use the pipeline to do the build and put the zip on S3, and then trigger deploying that to a public bucket with Lambda. Kind of a roundabout way to replace having a dedicated build server, or just pushing from a machine that has S3 access to the bucket where the site lives.
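That roundabout deploy could look something like this sketch: a Lambda fired by the S3 put notification for the build zip unpacks it into the public site bucket. The bucket names are made up, and the client is injected rather than created with boto3 so the sketch runs on its own:

```python
import io
import zipfile

def deploy_handler(event, s3, site_bucket="my-site-bucket"):
    # triggered by an S3 put notification for the build zip:
    # unpack its contents into the public bucket that serves the site
    deployed = []
    for rec in event["Records"]:
        src = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=src, Key=key)["Body"].read()
        with zipfile.ZipFile(io.BytesIO(body)) as zf:
            for name in zf.namelist():
                s3.put_object(Bucket=site_bucket, Key=name, Body=zf.read(name))
                deployed.append(name)
    return deployed
```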

putin is a cunt
Apr 5, 2007

You guys have both given me plenty to go on, thanks! In a way I'm fine with a little bit of over engineering too, since I'm also trying to learn how everything plays together.

jiffypop45
Dec 30, 2011

I'm trying to do a dynamodb backup to an s3 bucket without using data pipelines (which I realize is designed for this specific purpose). What's the easiest way to go about this? I didn't see anything that stuck out to me as doing this in the AWS cli under the dynamodb docs.

Edit: I ended up stealing some code off of GitHub that uses Python boto to back up from DynamoDB to S3, as EMR would be extreme overkill for a simple ETL job.

jiffypop45 fucked around with this message at 07:03 on Oct 28, 2017
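For anyone searching later, the boto3 version of that backup is just a paginated Scan plus a put. A sketch, with the clients injected and the table/bucket names made up:

```python
import json

def backup_table(dynamodb, s3, table_name, bucket, key):
    # paginate a full Scan (following LastEvaluatedKey) and
    # write the items to S3 as newline-delimited JSON
    items = []
    resp = dynamodb.scan(TableName=table_name)
    items.extend(resp["Items"])
    while "LastEvaluatedKey" in resp:
        resp = dynamodb.scan(TableName=table_name,
                             ExclusiveStartKey=resp["LastEvaluatedKey"])
        items.extend(resp["Items"])
    body = "\n".join(json.dumps(i) for i in items)
    s3.put_object(Bucket=bucket, Key=key, Body=body.encode())
    return len(items)
```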

fluppet
Feb 10, 2009
Just found out I need to deploy a couple of environments on alibaba cloud. Given that we're only using rds, ec2 and s3 and they look to have equivalents on alibaba are there any major gotchas that I'm likely to run into?

Startyde
Apr 19, 2007

If it's a mainland China deploy, you'll need to get an ICP license for yourself, and if you're hosting clients/reselling, one for each of them too.

Heisenberg1276
Apr 13, 2007
6 months ago I joined a team where we have a few quite messy AWS environments. All of our current stuff is created through CloudFormation (with templates generated through Troposphere). This is pretty neat, and it makes it not too hard to keep track of what we're using.

The environments also contain a number of resources that were created ad-hoc at some point, and many of these resources I don't know if they're used at all.

Are there any tools that will help me figure out what resources are no longer needed? So far I've seen Janitor Monkey from Netflix which seems like it might help.

For S3 resources I'm thinking of just setting up access logging on all buckets then writing a script to parse the access logs over some time and see what isn't accessed at all.
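The parsing half of that plan can stay simple, since S3 server access logs are space-delimited with the bucket in the second field and the operation in the seventh. A sketch of tallying which buckets actually see traffic:

```python
import re
from collections import Counter

# S3 server access log lines start with:
#   owner bucket [dd/Mon/yyyy:HH:MM:SS +0000] remote_ip requester request_id operation ...
LOG_RE = re.compile(r'^\S+ (\S+) \[[^\]]+\] \S+ \S+ \S+ (\S+)')

def count_requests(lines):
    # tally (bucket, operation) pairs, e.g. ("my-bucket", "REST.GET.OBJECT");
    # buckets that never show up here are candidates for cleanup
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            hits[(m.group(1), m.group(2))] += 1
    return hits
```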

Thanks Ants
May 21, 2004

#essereFerrari


Is CloudWatch not going to help you here? Either you want to know about all activity, or in the case of EC2 I guess network in/out to instances above a baseline value would help you figure out if a service is in use or not.

FamDav
Mar 29, 2008

fluppet posted:

Just found out I need to deploy a couple of environments on alibaba cloud. Given that we're only using rds, ec2 and s3 and they look to have equivalents on alibaba are there any major gotchas that I'm likely to run into?

just out of curiosity but why doesn't the mainland china region for aws work here?

jiffypop45
Dec 30, 2011

You have to be a Chinese citizen, a business in China or a multinational company with interests in China to be able to get a Chinese AWS account.

fluppet
Feb 10, 2009

FamDav posted:

just out of curiosity but why doesn't the mainland china region for aws work here?

It's not China we need to be in, otherwise we would still be on AWS.

Walked
Apr 14, 2003

Anyone have documentation on installing an 'unsupported' operating system in EC2? I've heard of an unsupported process where you import an image from the source, mount it to an EC2 instance, and then dd it over to the root disk, but I'm unable to find a good document on this.

Anyone happen to know about this?

fluppet
Feb 10, 2009
Does the vm import service not cover this for you? How exotic an os are we talking about?

Walked
Apr 14, 2003

fluppet posted:

Does the vm import service not cover this for you? How exotic an os are we talking about?

This is an R&D project and the development team wants to run a custom kernel developed in-house. We just had a conference call with Amazon, and their SA was adamant that it's possible to do an import the way I described above to get around the limitations of the AWS image-import service, but that Amazon can't support it or provide docs outside of the image import service (and the kernel that's being developed certainly isn't on the supported list).

I know I've come across documentation on this, but as luck would have it, I can't find the documents now.

FamDav
Mar 29, 2008

jiffypop45 posted:

You have to be a Chinese citizen, a business in China or a multinational company with interests in China to be able to get a Chinese AWS account.

I’m aware, which is exactly what their use case sounded like.

fluppet posted:

It's not China we need to be in, otherwise we would still be on AWS.

Ah. Looking at the region list on their site it looks like only Hong Kong and Kuala Lumpur aren’t covered by aws. This is why we just need a region in every country (and whatever the heck you want to define Hong Kong as).

MrMoo
Sep 14, 2000

Singapore usually works for HK. I know Google created a data centre in HK, and Amazon may have started something small, but that may be CloudFront only.

https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/

Destroyenator
Dec 27, 2004

If you're able to boot whatever it is from grub you might be able to copy the steps from here: https://mirage.io/wiki/xen-boot on how to do it for unikernels and just substitute whatever disk image you're putting on there.

It looks like the pv-grub image number may be outdated (and it varies per region anyway) so check that if you do want to try it that way.

Walked
Apr 14, 2003

Destroyenator posted:

If you're able to boot whatever it is from grub you might be able to copy the steps from here: https://mirage.io/wiki/xen-boot on how to do it for unikernels and just substitute whatever disk image you're putting on there.

It looks like the pv-grub image number may be outdated (and it varies per region anyway) so check that if you do want to try it that way.

Thank you. I think this is what I was looking for!

UnfurledSails
Sep 1, 2011

I have an application that reads a small number (less than a dozen) of key-value pairs as input, and the values need some frequent tuning in the next few weeks. Currently they are read from a configuration file, but I want to be able to change them without having to deploy every time. My first instinct is to just create a DynamoDB table and put the key-value pairs there, but I know that's because I use DynamoDB heavily, so of course I'd think that. Is there a better option?

Vanadium
Jan 8, 2005

People here have been polling a json config file on S3, ymmv.
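A sketch of that pattern, using `IfNoneMatch` with the last ETag so an unchanged config only costs a 304 (the bucket/key are invented, the client is injected, and the broad `except` stands in for botocore's `ClientError` so the sketch has no dependencies):

```python
import json

class S3Config:
    # poll a JSON config file on S3; cache by ETag so unchanged
    # configs come back as a 304 instead of a full download
    def __init__(self, s3, bucket, key):
        self.s3, self.bucket, self.key = s3, bucket, key
        self.etag, self.values = None, {}

    def refresh(self):
        kwargs = {"Bucket": self.bucket, "Key": self.key}
        if self.etag:
            kwargs["IfNoneMatch"] = self.etag
        try:
            resp = self.s3.get_object(**kwargs)
        except Exception as e:  # botocore raises ClientError carrying the 304
            status = getattr(e, "response", {}).get(
                "ResponseMetadata", {}).get("HTTPStatusCode")
            if status == 304:
                return self.values  # nothing changed since last poll
            raise
        self.etag = resp["ETag"]
        self.values = json.loads(resp["Body"].read())
        return self.values
```

Run `refresh()` on a timer (or per request) and read `values` everywhere else; pushing a new object to S3 is the whole deploy.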

Hughlander
May 11, 2005

UnfurledSails posted:

I have an application that reads a small number (less than a dozen) of key-value pairs as input, and the values need some frequent tuning in the next few weeks. Currently they are read from a configuration file, but I want to be able to change them without having to deploy every time. My first instinct is to just create a DynamoDB table and put the key-value pairs there, but I know that's because I use DynamoDB heavily, so of course I'd think that. Is there a better option?

PubSub redis elasticache?

Steve French
Sep 8, 2003

This is the sort of thing we use ZooKeeper for, though probably a bit much of a hassle to set up for just this one thing.

fluppet
Feb 10, 2009
SimpleDB still exists, even if it is a little unloved.

Blinkz0rz
May 27, 2001

We wrote an on-instance sidecar app that provides tuneable properties for a service based on application and instance attributes and metadata. For example, we can provide a set of global defaults but then overrides for properties in a specific account, region, auto-scaling group, or for a specific instance, with user-defined precedence levels. It's backed by S3 and Consul, although Consul is purely optional. We have an extension system for it that handles decrypting properties encrypted either with KMS or Vault's transit encryption.

It's been almost perfect for us because we're still using immutable images but the team is starting to think about what it looks like when we migrate to Kubernetes and it's looking like we're going to have to do a near complete rewrite.
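The precedence part of a scheme like that reduces to merging scope maps from least to most specific. A toy sketch (the scope names are assumptions, not that app's actual API):

```python
# precedence from least to most specific: more specific scopes win
PRECEDENCE = ["global", "account", "region", "asg", "instance"]

def resolve(layers):
    # layers maps scope name -> {property: value};
    # later (more specific) scopes override earlier ones
    merged = {}
    for scope in PRECEDENCE:
        merged.update(layers.get(scope, {}))
    return merged
```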

Vanadium
Jan 8, 2005

Y'all are a lot fancier than we are I guess.

Virigoth
Apr 28, 2009




Blinkz0rz posted:

It's been almost perfect for us because we're still using immutable images but the team is starting to think about what it looks like when we migrate to Kubernetes and it's looking like we're going to have to do a near complete rewrite.

If you're in an immutable environment already with AMIs, what (and why) is pushing you into looking at Kubernetes? We've got a fully immutable environment and have had a few meetings, but we can't come up with enough solid points to add it into our deployment pipeline for a PoC, and it just seems like adding another layer of complexity to the environment. Most of our AMIs that are baked spin up super fast, with the exception being our Jenkins executor slaves, coming in at around 3 minutes right now.

Blinkz0rz
May 27, 2001

Virigoth posted:

If you're in an immutable environment already with AMIs, what (and why) is pushing you into looking at Kubernetes? We've got a fully immutable environment and have had a few meetings, but we can't come up with enough solid points to add it into our deployment pipeline for a PoC, and it just seems like adding another layer of complexity to the environment. Most of our AMIs that are baked spin up super fast, with the exception being our Jenkins executor slaves, coming in at around 3 minutes right now.

We have a few problems to solve that are simplified by moving to k8s:

1. Our AMIs take too long to bake. A lot of that is down to how we assemble our base AMI and the number of things we provision. We have an engineering team in LA with some political cachet that has been clamoring for anything to speed up deployments, and baking is a pretty big chunk of that time.

2. Instances take too long to boot. We have chef doing a boot-time run to adjust settings based on region, ASG, chef environment, etc., and that takes a not insignificant amount of time. Coupled with the length of time it takes services and some of our bundled software, like Consul, to start, that makes the average time between creating a new ASG and having a deployed service on the order of 5+ minutes.

3. Multi-region deployments get gross when we have to copy an AMI across the world. Docker images are much easier in that regard.

4. Our chef recipes are an unmaintainable mess. There's 3+ years of bad decisions in there and while we've tried to make it better, a lot of the improvements are trying to shine poo poo. Moving to a different delivery method lets us sweep a lot of that away and makes deploying and maintaining k8s the big problem to focus on.

5. Resume driven development. Unfortunately. Part of it is promoted by that team in LA but a bunch of it is the general desire not to be stuck maintaining legacy software.

A lot of these problems came up pretty organically, so I don't think there's really a specific thing to point to and try to resolve. It's just a whole mess that we need to wipe clean and start as close to fresh as we can.

AWWNAW
Dec 30, 2008

My experiences with Kubernetes on AWS have been positive, save for some random DNS issues. They’re also about to announce fully managed Kubernetes soon?

Methanar
Sep 26, 2013

Idiot question:

What is the difference between https://bucket.s3-us-east-1.amazonaws.com and https://bucket.s3.amazonaws.com

https://github.com/jie123108/lua-resty-s3 This library for accessing S3 over Lua has helpfully hardcoded the string s3-us-east-1 into all requests. Aside from the fact that hardcoding a region into a library is stupid, what is the difference between s3 and s3-region in the subdomain? Why does my bucket only respond to .s3.?

➜ Scripts ping bucket.s3-us-east-1.amazonaws.com
ping: cannot resolve bucket.s3-us-east-1.amazonaws.com: Unknown host
➜ Scripts ping bucket.s3.amazonaws.com
PING s3-1-w.amazonaws.com (52.216.225.176): 56 data bytes
64 bytes from 52.216.225.176: icmp_seq=0 ttl=44 time=68.532 ms

Blinkz0rz
May 27, 2001

Vanadium posted:

Y'all are a lot fancier than we are I guess.

It's actually open source if you're interested. I'm reluctant to doxx myself so if you'd like more info PM me.

Virigoth
Apr 28, 2009



Blinkz0rz posted:

We have a few problems to solve that are simplified by moving to k8s:

1. Our AMIs take too long to bake. A lot of that is down to how we assemble our base AMI and the number of things we provision. We have an engineering team in LA with some political cachet that has been clamoring for anything to speed up deployments, and baking is a pretty big chunk of that time.

2. Instances take too long to boot. We have chef doing a boot-time run to adjust settings based on region, ASG, chef environment, etc., and that takes a not insignificant amount of time. Coupled with the length of time it takes services and some of our bundled software, like Consul, to start, that makes the average time between creating a new ASG and having a deployed service on the order of 5+ minutes.

3. Multi-region deployments get gross when we have to copy an AMI across the world. Docker images are much easier in that regard.

4. Our chef recipes are an unmaintainable mess. There's 3+ years of bad decisions in there and while we've tried to make it better, a lot of the improvements are trying to shine poo poo. Moving to a different delivery method lets us sweep a lot of that away and makes deploying and maintaining k8s the big problem to focus on.

5. Resume driven development. Unfortunately. Part of it is promoted by that team in LA but a bunch of it is the general desire not to be stuck maintaining legacy software.

A lot of these problems came up pretty organically, so I don't think there's really a specific thing to point to and try to resolve. It's just a whole mess that we need to wipe clean and start as close to fresh as we can.

Ah OK, I can see that. I'm having the political fight with Docker right now. We're getting ready to fix a problem with #3 that has been a big security bug for a while, so we'll see what that does to our multi-region deployments and time. For #2, I'm not a Chef guy, but is there no way to set up your playbooks (we use Ansible) so that when your service runs the "configure" playbook you can just run a quick set of scripts or invoke something you baked on there? I'm looking at this from the perspective of an Amazon Linux AMI we bake on top of.

If I were you, I wouldn't put any major cycles into Kubernetes until after re:Invent. Like, I'd go full stop if you were thinking of starting right now. It just seems like Kubernetes is ripe enough that AWS might pick it up for some sort of support.

Skier
Apr 24, 2003


AWWNAW posted:

My experiences with Kubernetes on AWS have been positive, save for some random DNS issues. They’re also about to announce fully managed Kubernetes soon?

I'd put money on it being announced at re:Invent at the end of the month, but everyone I know has been tight-lipped about it.

Methanar posted:

Idiot question:

What is the difference between https://bucket.s3-us-east-1.amazonaws.com and https://bucket.s3.amazonaws.com

https://github.com/jie123108/lua-resty-s3 This library for accessing S3 over Lua has helpfully hardcoded the string s3-us-east-1 into all requests. Aside from the fact that hardcoding a region into a library is stupid, what is the difference between s3 and s3-region in the subdomain? Why does my bucket only respond to .s3.?

➜ Scripts ping bucket.s3-us-east-1.amazonaws.com
ping: cannot resolve bucket.s3-us-east-1.amazonaws.com: Unknown host
➜ Scripts ping bucket.s3.amazonaws.com
PING s3-1-w.amazonaws.com (52.216.225.176): 56 data bytes
64 bytes from 52.216.225.176: icmp_seq=0 ttl=44 time=68.532 ms

S3 in us-east-1 is a bit of a special snowflake: it doesn't have the region prefix on it. See http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for more info. There's also a few different ways of accessing buckets such as virtual hosted style or path style: http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html .

I don't know any Lua so I can't get too deep into the linked codebase, but hopefully this will help.
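In code, the 2017-era endpoint naming comes down to a special case for us-east-1. A sketch covering both virtual-hosted and path style (note S3 endpoint naming has changed since, so treat this as illustrating the legacy scheme the thread is about):

```python
def bucket_url(bucket, region=None, path_style=False):
    # us-east-1 is the original region and has no region suffix in the
    # legacy endpoints; every other region uses "s3-<region>"
    if region in (None, "us-east-1"):
        host = "s3.amazonaws.com"
    else:
        host = "s3-" + region + ".amazonaws.com"
    if path_style:
        return "https://" + host + "/" + bucket
    return "https://" + bucket + "." + host  # virtual-hosted style
```

This is why `bucket.s3-us-east-1.amazonaws.com` doesn't resolve while `bucket.s3.amazonaws.com` does, and why hardcoding `s3-us-east-1` breaks for every bucket in that region.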

Steve French
Sep 8, 2003

We started our migration from baking AMIs to docker/mesos/marathon deployment a few years ago, kicking and screaming. It has been hugely beneficial: among other things, building and pushing a docker image is miles upon miles faster than bringing up an EC2 instance and imaging it to bake an AMI, especially because many steps, like installing infrequently updated base packages, are performed less redundantly and are easier to break out into individual, clearly defined steps.

Blinkz0rz
May 27, 2001

Virigoth posted:

Ah OK, I can see that. I'm having the political fight with Docker right now. We're getting ready to fix a problem with #3 that has been a big security bug for a while, so we'll see what that does to our multi-region deployments and time. For #2, I'm not a Chef guy, but is there no way to set up your playbooks (we use Ansible) so that when your service runs the "configure" playbook you can just run a quick set of scripts or invoke something you baked on there? I'm looking at this from the perspective of an Amazon Linux AMI we bake on top of.

This is what we're doing with chef. We have a bake cycle, which does the initial provisioning, and then a boot cycle, which effectively does runtime provisioning. Most everything that's configured at runtime is already on the instance; it's just that actually running chef and going through the runlist takes time. The problem is that everything needs to be fully set up and configured before the actual application is started, so while there aren't a lot of things done on boot, it still takes time. So does starting up most of the Spring Boot apps. Their boot time is insane, and they don't pass ELB health checks until the Spring Boot health check endpoint returns a value.

quote:

If I was you I wouldn't put any major cycles into Kubernetes until after reInvent. Like I'd go full stop if you were thinking of starting right now. It just seems like Kubernetes is ripe enough that AWS might pick it up for some sort of support.

Oh for sure, everything is pointing to a managed solution this year so most of the team is just waiting. People did say the same thing about managed service discovery last year and nothing happened there so who knows. Either way this isn't a problem to solve in the short term.

Blinkz0rz fucked around with this message at 04:15 on Nov 11, 2017

JHVH-1
Jun 28, 2002

Virigoth posted:

Ah OK, I can see that. I'm having the political fight with Docker right now. We're getting ready to fix a problem with #3 that has been a big security bug for a while, so we'll see what that does to our multi-region deployments and time. For #2, I'm not a Chef guy, but is there no way to set up your playbooks (we use Ansible) so that when your service runs the "configure" playbook you can just run a quick set of scripts or invoke something you baked on there? I'm looking at this from the perspective of an Amazon Linux AMI we bake on top of.

If I were you, I wouldn't put any major cycles into Kubernetes until after re:Invent. Like, I'd go full stop if you were thinking of starting right now. It just seems like Kubernetes is ripe enough that AWS might pick it up for some sort of support.

Guess what, they announced it. Also containers without having to manage the hosts.

Also re:invent was pretty nuts. Next time I’m going to try and plan everything out earlier.

Rapner
May 7, 2013


Anyone done the advanced networking cert? I'm 6/7 and just failed it - need another good course resource other than acloud.guru. Has anyone used linuxacademy?

SnatchRabbit
Feb 23, 2006

Speaking of certs is there a good resource for free practice exams? I just finished a course for associate solutions architect so I'm looking for some more exams to make sure I'm on my A game.

Rapner
May 7, 2013


Not free but a cloud guru is very cheap.

SnatchRabbit
Feb 23, 2006


Rapner posted:

Not free but a cloud guru is very cheap.

Do they sell the practice exams separately? All I see are the $99 courses.
