|
Lutha Mahtin posted:The Lambda service might work for me, but again I don't really know. When I was just trying to learn about EC2, I had the vague outline of an idea for a setup where basically a server is spawned that has my code on it already, or it automatically loads it somehow, and if the server gets destroyed or shut down or whatever, a trigger is in place to spawn it again when possible. Data storage would be taken care of by moving my output (XML from APIs and maybe some text files) to a storage thing that doesn't care if my server gets nuked. The code on the server would also maybe read some from the data store to determine a few things, such as when the last queries were made, or which queries I am interested in at the moment; this would allow me to not be hard-coding state into the code or the server image. Is this sounding like something that makes some sort of sense, in terms of ~~the cloud~~?

There are a number of ways to set it up so that you can instantly spawn a server that has/gets your code and keeps at least one instance up. Here are two options:
Now once you've done this, go to Auto Scaling and set up an autoscaling group with a min of 1 and a max of 1, using the launch configuration you created. If the instance goes down for whatever reason, autoscaling will kick in and create a new one. If you ever need to re-deploy that instance, temporarily set max to 2 and desired to 2, then once the second instance is live, set both back down to 1. That means that if you have a load balancer attached to this autoscaling group, you have literally no downtime for the re-deploy. Regarding data storage:
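If you prefer the CLI over the console, the same setup looks roughly like this (the group name, launch configuration name, and availability zone are placeholders):

```shell
# Keep-one-alive group: min 1, max 1. If the instance dies,
# Auto Scaling replaces it.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-app-asg \
    --launch-configuration-name my-app-lc \
    --min-size 1 --max-size 1 --desired-capacity 1 \
    --availability-zones eu-west-1a

# Zero-downtime re-deploy: bump to 2, wait for the new instance
# to go InService behind the load balancer, then drop back to 1.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-app-asg \
    --max-size 2 --desired-capacity 2

aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-app-asg \
    --max-size 1 --desired-capacity 1
```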
Regarding potential trip-ups with EC2:
Summary: If you're trying to learn ~~the cloud~~ then look into Lambda/ElasticBeanstalk/etc, but be aware your knowledge will literally only ever be useful for AWS. If you're just trying to learn server development at small scale, maybe don't use AWS. If you're trying to learn mid/large-scale development, then use AWS, but also pretend that you have a large number of servers everywhere and that you can't afford downtime; otherwise everything you learn will have to be changed or re-learned when you actually have such a system in production.

e: And if you just want to make your app in peace, go for a VPS provider, preferably a very very cheap one. There are providers that offer year-long plans with 1 modest CPU, 256 MB RAM and enough storage, all for £5/year or similar. You can also try DigitalOcean, which constantly has 'promotions' where you can sign up with a promo code and get more than one month's credit when you start. Working with a VPS provider will also teach you all you need about small-scale development, and the knowledge transfers to some degree (depending on provider) over to EC2 once you learn the lingo.

Red Mike fucked around with this message at 12:07 on Oct 8, 2016 |
# ¿ Oct 8, 2016 12:05 |
|
|
xenilk posted:- Is running a micro instance sounds like a viable option for a set of 5 sites that get probably <10,000 views daily total and probably less than a gb of database storage.

t2.micro should be more than enough, assuming you don't do incredibly expensive queries. If you run the CPU up to 100% constantly because you're running a query with 3 dependent subqueries for every one of those daily views, then no, it's not going to work.

xenilk posted:- I noticed that the network is marked as "LOW" for t2.micro instances, does that mean that it will be slow as hell?

That "network" thing just relates to your bandwidth limits (how many bytes you can push into/out of the instance in a single instant). It's also misleading across different classes (t2, m4, m3, etc), since t2.micro actually has slightly higher bandwidth limits than the cheapest m4.

xenilk posted:- For EC2, instances have CPU Credits for bursts, is it the same for RDS? Does it mean my instance can go down for any reason? That would blow.

CPU credits don't mean instances going down, for either EC2 or RDS. You get baseline performance unless you have CPU credits to spend in order to burst up to higher performance. When you're not using up all your resources, you recover CPU credits slowly. This means that if you only need to burst for part of the day and sit below baseline the rest of the time, you should be able to keep that up indefinitely. Ideally, you should be ignoring CPU credits entirely and making sure that your baseline performance is enough to handle whatever you're throwing at the machine. CPU credits are dumb and will cause you to lock up your machine at 100% CPU in the middle of the night, because oops, suddenly your machine became quite a bit slower at peak times.

Unrelated to your questions: if you're not also moving to EC2, double-check your traffic costs. Data transfer costs money unless your database is talking to an EC2 instance in the same availability zone using private IPs.
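The credit mechanics are easy to model. Here's a minimal sketch of how a t2.micro balance evolves, using the documented numbers (roughly 10% baseline, 6 credits earned per hour, a 144-credit cap; one credit is one vCPU-minute at 100%):

```python
# Rough model of T2 CPU credits, numbers for t2.micro.
BASELINE = 0.10        # baseline: 10% of one vCPU
EARN_PER_HOUR = 6.0    # credits accrued per hour
CREDIT_CAP = 144.0     # maximum banked credits

def simulate(hourly_cpu, start_credits=CREDIT_CAP):
    """Return the credit balance after running at the given per-hour CPU
    utilisations (fractions of one vCPU). Returns 0.0 if the balance is
    exhausted, i.e. the instance gets throttled down to baseline."""
    credits = start_credits
    for cpu in hourly_cpu:
        spend = max(cpu - BASELINE, 0.0) * 60.0  # credits burned above baseline
        credits = min(credits - spend + EARN_PER_HOUR, CREDIT_CAP)
        if credits < 0:
            return 0.0  # out of credits: locked to baseline performance
    return credits

# Bursting to 100% for one hour a day, idle otherwise: the balance recovers.
day_light = [1.0] + [0.0] * 23
# Pinned at 100% all day: the balance drains within a few hours.
day_pinned = [1.0] * 24
```

The two sample days show the failure mode described above: a short daily burst is fine, while a pinned CPU empties the bank in a few hours and leaves you stuck at baseline.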
|
# ¿ Dec 9, 2016 09:45 |
|
zerofunk posted:Red Mike brings up a good point about additional costs due to data transfer as well. If you kept it all within AWS, you wouldn't have that issue. Amazon just announced a new VPS product called Lightsail that is supposed to be more akin to Digital Ocean's offering (I haven't read too much about it myself) than EC2. It may be worth looking into that if you did want to move everything over, but keep a similar setup aside from database hosting.

Lightsail looks competitively priced, although setting up a bridge between it and the rest of AWS (so you can access your RDS instance) will mean you'll have to learn about AWS-specific concepts (which is a good thing if you're trying to learn; annoying if not). It does, however, seem to be US-only for now. Otherwise you'll need EC2 instances, which end up costing more money.
|
# ¿ Dec 9, 2016 14:29 |
|
Assuming I'm understanding correctly, the setup I've generally seen (especially with Windows servers, which take ages to start up from a custom AMI) is that the AMI never changes (and preferably is an Amazon one, so that instances are brought up from the waiting pool of pre-instanced servers), and Launch Configurations are used instead, with a userdata script passed in that downloads and sets up everything as needed. Launch Configurations are created instantly and are immediately available, and there's no overhead to bringing instances online from them. Only downside: you're limited to 300 or so configurations at any one time, and I don't believe you can increase that limit.
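As a concrete example, the userdata script in a Launch Configuration can be as small as fetch-and-run; the bucket name and paths here are hypothetical placeholders:

```shell
#!/bin/bash
# Userdata sketch: stock Amazon AMI, app pulled at boot.
# Requires an instance role with read access to the deploy bucket.
aws s3 cp s3://my-deploy-bucket/app-latest.tar.gz /opt/app.tar.gz
mkdir -p /opt/app
tar -xzf /opt/app.tar.gz -C /opt/app
/opt/app/start.sh
```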
|
# ¿ Mar 9, 2017 22:27 |
|
I can cover the SSL part, although be aware that a load balancer for this might be a bit overkill. It sounds more like you've got an issue/not enough resources on your instance. If scaling up is an option, consider that first.

Importing a third-party certificate into Certificate Manager will not let you renew it, and renewal won't happen automatically. It's solely a way to get third-party certs used on load balancers the same way Amazon certificates are.

You can use Certificate Manager to get Amazon-issued certificates (which you can't download to use on your own) that you can use in stuff like the load balancer, their CDN, etc. Just go into Certificate Manager and go through the process of getting a new certificate. You'll need to prove you 'own' the domain via one of a couple of methods (DNS validation, email validation, etc; they're all explained there), or if you want to make it simpler in future, just transfer the domain to Route 53. These certificates renew on their own behind the scenes: you set it up once and just leave it (like Let's Encrypt when set up properly).

When you get that certificate, you can bind it to a load balancer, a CloudFront zone, etc. The load balancer then serves HTTPS under an SSL certificate issued by Amazon, but you can set it up to talk to your instances over HTTP. That means the traffic between the load balancer and your instances is not encrypted, but it's otherwise the same to an end-user.

Caution though: depending on how your app works, you may have to wrangle it into functioning properly, because it'll be serving requests off of port 80 over HTTP while, as far as the user's concerned, it's running on HTTPS. I know plenty of web software that breaks down horribly under that setup (forum software, CMSes, etc), ending up with infinite redirects or refusing requests because they're not secure.
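On that last point, the usual fix is to make the app trust the X-Forwarded-Proto header the load balancer sets. A minimal pure-Python WSGI sketch of both the redirect-loop problem and the fix (framework middleware like werkzeug's ProxyFix does the same job for you):

```python
# A typical "force HTTPS" app: behind an HTTP-only load balancer it
# sees every request as plain HTTP, so it redirects forever.
def app(environ, start_response):
    if environ["wsgi.url_scheme"] != "https":
        start_response("301 Moved Permanently",
                       [("Location", "https://example.com/")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# The fix: middleware that rewrites the scheme from the header the
# balancer sets, so the app "knows" the outside world saw HTTPS.
def trust_forwarded_proto(wrapped):
    def middleware(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto:
            environ["wsgi.url_scheme"] = proto
        return wrapped(environ, start_response)
    return middleware
```

Only trust this header when the app actually sits behind your own load balancer; a client talking to the instance directly can spoof it.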
|
# ¿ Jul 16, 2018 18:56 |
|
That's a good point, although I would argue that you should weigh it against the ease of load balancing. If it ends up being one of those bits of software that is difficult to load balance (or to SSL-terminate at the load balancer), and the cost is an extra $10/mo or something, you might consider it worth it not to spend weeks fixing things. Time is as much a resource as money.
|
# ¿ Jul 17, 2018 08:20 |
|
Lumpy posted:So if I do this and get a cert / SSL on the load balancer, if my front-end makes a call to https://myservice.blah.com the request from *and* response to the client will be encrypted, but the traffic from the load balancer to the ec2 instance will not? (this is fine, I just don't want to have any CORS issues because the front-end is https and doing AJAX to http will cause issues)

Correct. From the end-user's perspective (so from the browser), it's as if everything is HTTPS. It's just the app server itself that knows it's not HTTPS, so it'll have to be configured to serve on port 80, non-secure (unless you want to get another certificate to encrypt app server <-> load balancer traffic, which defeats the point).
|
# ¿ Jul 17, 2018 20:19 |
|
|
You can leave whatever options you already have alone, unless it breaks later (in which case you'll need to figure out why it's breaking, and that won't be easy to explain in a forum post). The only thing you need to make sure of is that the instance is listening on port 80, with no SSL certificate. You then tie the load balancer to the instance's port 80.
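For reference, the classic-ELB version of that setup from the CLI looks roughly like this (the load balancer name, zone, and certificate ARN are placeholders): HTTPS towards clients, plain HTTP to the instances on port 80.

```shell
# HTTPS listener on 443 using an ACM certificate, forwarding
# unencrypted HTTP to the instances on port 80.
aws elb create-load-balancer \
    --load-balancer-name my-app-elb \
    --availability-zones us-east-1a \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/example-id"
```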
|
# ¿ Jul 18, 2018 07:24 |