|
Even if he needed IPsec tunnels, running strongSwan in the cloud isn't that bad.
|
# ? Jul 31, 2018 21:31 |
|
There's this AWS thread, but do we have an Azure thread? I got stuck in their ecosystem on a project and need help deploying a Rails app.
|
# ? Aug 3, 2018 16:17 |
|
There are a handful of Azure folks that read the thread.
|
# ? Aug 3, 2018 16:58 |
|
I'm trying to deploy my Rails app to Azure, using the Linux Ruby 2.3.3 image. My app won't come up, though, and my current issue is that it's not running bundle install on deployment. It did run it a few times, but I have no idea how I triggered it. Does anyone with Azure experience know how to get the bundle install running when I push to Azure? I also can't SSH into the server; I can only Kudu with a bash that doesn't let me run the commands I normally would. Azure is bad, I hate it.

I found out that the startup script should mostly be running bundle install all the time, so I don't know what the hell I'm doing wrong. The issue is that the initial bundle install that ran crapped out due to a gem that was having an issue, so I changed my Gemfile, but it still isn't running. It just finishes and says "missing dependencies, try redeploying". After 5 deployments I think I'm missing something.

necrotic posted:It should always run it on start. Here's the script that handles everything in their image: https://github.com/Azure-App-Service/ruby/blob/master/2.3.3/startup.sh
|
# ? Aug 3, 2018 17:02 |
|
AWS Batch: as far as I can tell, a queue's attached managed compute environment being disabled doesn't prevent the service from launching a machine to sit and do nothing if something enters the queue. Am I missing a config somewhere, or is that just how it goes?
|
# ? Aug 4, 2018 00:22 |
|
I'd love for someone to help me understand this: https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/ Specifically:

quote:This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

I've got a legacy app that, unfortunately, uses a sequential naming pattern that we've known is not ideal but for various reasons is very difficult to fix. This wording really seems to imply that it's not an issue anymore. However, since that announcement was made we've started seeing failures writing to that bucket: we'll get nearly 100% internal server errors and/or Slow Down 503s for some period of time. There are errors on reads too, but at a much lower rate. There have been several instances of this since the announcement, and it had never been a problem before. We're not at peak traffic, so the fact that things have gotten worse right at the same time that they were supposedly made better is highly suspect.
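For anyone who missed the old guidance being retired here: "randomize object prefixes" meant prepending a short hash to each key so lexicographically sequential names spread across S3's keyspace partitions. A minimal sketch of that technique (the orders/... key names are made up for illustration):

```python
import hashlib

def randomized_key(key: str, prefix_len: int = 4) -> str:
    """Prepend a short hash of the key so lexicographically sequential
    names are spread across S3's keyspace partitions."""
    prefix = hashlib.md5(key.encode()).hexdigest()[:prefix_len]
    return f"{prefix}/{key}"

# Sequential names like these usually land under different hash prefixes:
print(randomized_key("orders/2018/08/05/000001.json"))
print(randomized_key("orders/2018/08/05/000002.json"))
```

The announcement is saying this trick should no longer be necessary, which is exactly why the poster's sequential-key bucket misbehaving afterwards is so odd.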
|
# ? Aug 5, 2018 18:13 |
|
If you're getting errors you should probably open a support ticket with AWS; my org works with thousands of buckets at crazy levels of usage and I've never seen any of them exhibit that behavior.
|
# ? Aug 5, 2018 19:07 |
|
This. This S3 feature launch should have had no impact on your existing workload. If you are seeing issues with S3 I always suggest opening a support case first.
|
# ? Aug 5, 2018 19:33 |
|
Less Fat Luke posted:If you're getting errors you should probably open a support ticket with AWS; my org works with thousands of buckets at crazy levels of usage and I've never seen any of them exhibit that behavior. Oh of course, I've already done that and will continue to do so. But they haven't been of much help yet
|
# ? Aug 5, 2018 19:34 |
|
Steve French posted:Oh of course, I've already done that and will continue to do so. But they haven't been of much help yet PM me with your case number and I’ll see what I can do.
|
# ? Aug 5, 2018 19:35 |
|
And the response I've gotten so far is not consistent with that quoted statement. What support has said is that performance is improved overall, but that randomizing keys is still a good idea. (I totally believe that to be the case, but it's not consistent with my reading of that statement.) Maybe it's a coincidence that we started having these issues after the announcement, but our access patterns haven't changed, and we're not hitting previous peak request rates where we had no problem before the supposed performance improvement, so that's why I'm skeptical.
|
# ? Aug 5, 2018 19:37 |
|
https://aws.amazon.com/blogs/aws/aurora-serverless-ga/ Orkiec fucked around with this message at 03:53 on Aug 10, 2018 |
# ? Aug 10, 2018 03:50 |
|
Wish this was out for PostgreSQL. Those workloads end up being more expensive, and the Aurora version has a higher minimum instance type. Might try it out for something else, though.
|
# ? Aug 14, 2018 17:44 |
|
Alright, I'm back! How this convo started a bit ago:

Rex-Goliath posted:Hi everyone! Total AWS newbie here. I'm a consultant working for a niche NoSQL database company and we're in the middle of pivoting our customers into using AWS rather than running their own hosting, or at least providing them the option. I'm currently responsible for bird-dogging the AWS training program to find out which courses will be most relevant for our consultants to get them up to speed, as well as what order they should be taken in.

StabbinHobo posted:start here: https://acloud.guru/learn/aws-certified-cloud-practitioner

Bossman is now asking me for options on getting resources trained up enough that they can be minimally sufficient in administering one of these instances. I'm essentially looking for two different plans: the first being the correct, Amazon-recommended way to get someone technically certified in AWS, and the second being whether there's any way we can half-rear end it. Essentially we've realized that our major weakness right now is that we have very few, if any, technical resources that are proficient in Azure/AWS, and we want to make a strong push in that direction. If there are any online resources that a handful to a dozen technical guys could all sit down with and teach themselves AWS, it'd be greatly appreciated. Alternatively, if we have to go with the expensive training, what courses would you guys consider absolutely essential to becoming proficient? Thanks a bunch
|
# ? Sep 11, 2018 19:19 |
|
AWS offers trainings where instructors come on site to train your peeps up. That's the expensive option. You might want to look at Qwiklabs. Their AWS stuff is fairly straightforward and good for getting your feet wet.
|
# ? Sep 11, 2018 20:24 |
|
They have online training, some of which is free as well. Also a ton of their seminars on different tech are archived on youtube.
|
# ? Sep 11, 2018 21:43 |
|
When I looked around on their site a few months ago, the only free stuff that seemed relevant to us was the introductory 'cloud practitioner' classes, which, while broad and helpful, didn't go into a lot of detail. I think that broad course plus the Qwiklabs stuff might be what my boss is looking for. I'll see what he thinks.
|
# ? Sep 11, 2018 22:09 |
|
Rex-Goliath posted:When I looked around on their site a few months ago the only free stuff that seemed relevant to us was introductory 'cloud practitioner' classes that while broad and helpful didn't go into a lot of details. I think that broad course plus the quiklabs stuff might be what my boss is looking for. I'll see what he thinks. On the off chance you are on AWS Enterprise Support you get free Qwiklabs credits every year.
|
# ? Sep 11, 2018 22:54 |
|
Agrikk posted:On the off chance you are on AWS Enterprise Support you get free Qwiklabs credits every year. Ooh this is very good to know- thanks
|
# ? Sep 11, 2018 23:11 |
|
the halfassed way is you take the acloudguru video courses and you just don't go back through them a second/third time when you score lovely on the mock tests, and then you don't take the real test. it's the same videos/labs either way.
|
# ? Sep 12, 2018 01:55 |
|
Is there really no way to find your current Redshift limits for nodes total/per cluster short of trying to exceed them?
|
# ? Sep 12, 2018 15:39 |
|
This is my personal opinion: we get free access to acloudguru at AWS, so I think that's a pretty good indicator of how great it is. Alternatively, you can reach out to your TAM and buy training through them; I don't know what the cost is like on that, but it's an option I know they give to people who need to meet budget in academia/government in lieu of buying RIs. We used to use Qwiklabs as part of our onboarding, but we scrapped them as the instructions were poor. It was basically 1. Read this. 2. Now design a Kubernetes-like platform using AWS.
|
# ? Sep 12, 2018 18:18 |
|
Vanadium posted:Is there really no way to find your current Redshift limits for nodes total/per cluster short of trying to exceed them? You could always put in a support ticket.
|
# ? Sep 13, 2018 11:55 |
|
Or AWS could just implement describe-limits for Redshift like, I dunno, half their other poo poo with Service Limits.
|
# ? Sep 13, 2018 16:44 |
|
Jeoh posted:Or AWS could just implement describe-limits for Redshift like, I dunno, half their other poo poo with Service Limits.

Imagine each service is its own company acting as a subsidiary to a parent shell corporation. Imagine each company implementing things its own way, with its own requirements, APIs, and so forth. Now imagine trying to put all hundred-forty-plus companies under a single pane of glass for management, billing, and limits. That is AWS in a nutshell. Some teams do a better job of playing with others than others. There is an ongoing effort to make that single pane of glass more transparent, but there are still gaps, as you have found. Agrikk fucked around with this message at 20:18 on Sep 13, 2018 |
# ? Sep 13, 2018 20:16 |
|
Is anybody friendly with the Batch team? I'd use it way more at work if you could give a job fractions of a CPU, even if it were just one option at x0.5. As is, it's OK, but we're not throwing more and varied workloads at it because of that. Our needs are memory-bound, not CPU-bound.
|
# ? Sep 13, 2018 21:06 |
|
Startyde posted:Our needs are memory bound not cpu.

I hear ya. I hear of lots of customers running into memory constraints and having to upgrade to a larger instance/tier/etc. because they need more memory while the CPU sits mostly idle. It's a cost and logistics function: an underlying fleet has x amount of CPU and y amount of memory, therefore everything comes in a fixed ratio of x to y. But it certainly would be nice to have a Tomcat machine running on one-half x and ten times y.
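The fixed-ratio pain is easy to make concrete with a little arithmetic. This sketch assumes a hypothetical 1 vCPU : 8 GiB fleet shape (the exact ratio varies by instance family; 1:8 is roughly a memory-optimized shape):

```python
import math

# Illustrative assumption: a fleet with a fixed 1 vCPU : 8 GiB ratio.
GIB_PER_VCPU = 8

def vcpus_forced_by_memory(mem_gib: float) -> int:
    """vCPUs you end up provisioning when only memory drives the sizing."""
    return math.ceil(mem_gib / GIB_PER_VCPU)

# A memory-bound job needing 64 GiB but only ~1 vCPU of real work:
print(vcpus_forced_by_memory(64))  # -> 8 vCPUs provisioned, ~7 of them idle
```

That idle CPU is exactly the cost Startyde is complaining about paying for.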
|
# ? Sep 14, 2018 01:23 |
|
Agrikk posted:It’s a cost and logistics function: an underlying fleet has x amount of cpu and y amount of memory. Therefore all things come in a fixed ratio of x to y. Maybe they're just taking a bath on it, but Google seems to have demonstrated pretty conclusively that the quoted bit need not be so. I generally prefer AWS as a platform, because I've spent long enough on it to have a level of comfort with its APIs and how it operates, but stuff like this creates an awkward conversation around costs when evaluating AWS for new deployments. (I guess "it's not Azure, thank god" is still worth something.)
|
# ? Sep 15, 2018 02:48 |
|
Agrikk posted:I hear ya. I hear of lots of customers running into memory constraints and having to upgrade to a larger instance/tier/etc because they need more memory and the cpu stats mostly idle.

It's not even that; it's that a job won't be dispatched to an instance regardless of actual CPU utilization. There's plenty of RAM and compute, but it's looking at cores only. GCP's ultimate flexibility flies in the face of your point, though. Yes, I understand Google lit money on fire until their clusters became self-aware, and their offering is unique in this regard, but it exists, so... I stick around because of familiarity too, but AWS has got to rein in the BS like wacky fee schedules, especially with regard to bandwidth. Echoing the above: pitching AWS sure hasn't gotten easier.
|
# ? Sep 15, 2018 03:09 |
|
Got another question for you AWS gurus: how does one make an RDS db (PostgreSQL) available across regions? I tried the VPC peering option, but it does not have DNS name resolution across regions. I would like to not have the database open to the internet (even if restricted to a single IP or range of IPs). Or is there a better option for this? Database replication, maybe? My goal is to have a web application available in multiple regions (for latency purposes), currently looking at US-EAST (where we are now) and EU-WEST (Paris). The DB would be relatively fine across the ocean, as it isn't used that heavily for the latency-sensitive operations. But the web application itself should be in the region closest to the user.
|
# ? Sep 21, 2018 14:56 |
|
If just using CloudFront for the web app in its original region isn't enough, RDS Postgresql supports cross-region read replicas. But then the app needs to be redesigned to send read-only queries to X, and read-write queries to Y. RDS Aurora was supposed to offer multi-region multi-master (i.e. full read/write nodes) capabilities by the end of this year. You could also look into whether adding an Elasticache layer would be a good fit for the workload.
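The read/write split mentioned above can be sketched as a tiny statement router. The endpoint hostnames here are made-up placeholders, and a real router would also need to keep reads inside a write transaction on the primary and account for replica lag:

```python
# Hypothetical endpoint names -- substitute your real RDS hostnames.
PRIMARY = "mydb.us-east-1.rds.amazonaws.com"          # read-write primary
REPLICA = "mydb-replica.eu-west-3.rds.amazonaws.com"  # cross-region read replica

READ_ONLY_VERBS = {"SELECT", "SHOW", "EXPLAIN"}

def endpoint_for(sql: str) -> str:
    """Send read-only statements to the nearby replica, writes to the primary."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    return REPLICA if verb in READ_ONLY_VERBS else PRIMARY

print(endpoint_for("SELECT * FROM users"))        # replica
print(endpoint_for("UPDATE users SET name='x'"))  # primary
```

Some ORMs and connection poolers can do this routing for you, but the app still has to be designed with the split in mind.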
|
# ? Sep 21, 2018 15:46 |
|
Extremely Penetrated posted:If just using CloudFront for the web app in its original region isn't enough, RDS Postgresql supports cross-region read replicas. But then the app needs to be redesigned to send read-only queries to X, and read-write queries to Y.

Interesting, I didn't know about cross-region replicas. But the same issue stands: at the moment, the only way I know of that allows a web application (we're talking about an API here, not plain pages, so CloudFront doesn't really help) to talk to an RDS db in a different region is over the public internet, since VPC peering doesn't allow name resolution across regions (it works fine if I access the database using the internal 172.31.x.x IP). And I would very much like to not send db traffic over the internet. Is there a VPN service that AWS provides and manages (one that just works, that I don't have to worry about) that I could use to connect two (or more) VPCs?
|
# ? Sep 21, 2018 15:56 |
|
I haven't tried it, but I thought that Route53's Private Hosted Zones would let requests resolve to their internal IPs across any VPCs you associate them with.
|
# ? Sep 21, 2018 16:07 |
|
Extremely Penetrated posted:I haven't tried it, but I thought that Route53's Private Hosted Zones would let requests resolve to their internal IPs across any VPCs you associate them with.

I tried that, but it doesn't resolve the IP, it resolves the name. Essentially, for RDS you configure Route 53 private zones to resolve a pretty name (db.example.com) to the uglier and longer RDS-provided name. And with VPC peering, that works fine, except that the remote site gets the ugly name, which it again tries to resolve, and then it only gets the external IP for it. Which takes me back to square one. I want the database available to the remote zone via a private IP, and it should communicate with it over a private network.
|
# ? Sep 21, 2018 16:19 |
|
It's resolving the external IP, but is the traffic actually going externally? I know in Azure, when you present a service endpoint into a vnet, you still reference the 'public' DNS name, but that traffic never actually leaves the private network.
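One quick way to check which kind of address the resolver is handing back is to classify it; Python's stdlib can do that directly (resolves_private is just a helper name for this sketch):

```python
import ipaddress
import socket

def resolves_private(hostname: str) -> bool:
    """True if the first A record returned for hostname is a private
    (RFC 1918) address, i.e. one the VPC-internal route could carry."""
    ip = socket.gethostbyname(hostname)
    return ipaddress.ip_address(ip).is_private

# The classification itself, on addresses like the ones in this thread:
print(ipaddress.ip_address("172.31.4.10").is_private)  # True  -- VPC-internal
print(ipaddress.ip_address("52.20.1.2").is_private)    # False -- public
```

Note this only tells you what DNS returned from where you ran it; whether the packets then stay on a private path is a routing question, which is the Azure behavior being described.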
|
# ? Sep 21, 2018 16:35 |
|
That sounded like maybe DNS resolution isn't enabled in the VPC peering settings. Which led me to discover that "You cannot enable DNS resolution support for an inter-region VPC peering connection." lol. So now you're looking at doing something like a TCP proxy in EC2 to forward stuff to RDS, or running your own DNS. Hopefully someone else has less lovely ideas.
|
# ? Sep 21, 2018 16:45 |
|
Thanks Ants posted:It's resolving the external IP, but is the traffic actually going externally? I know in Azure when you present a service endpoint into a vnet you still reference the 'public' DNS name but that traffic never actually leaves the private network.

Hmm, I don't know. I guess it tries to, since the web application cannot connect to the database when it's presented with the name (since it resolves to the external IP), but it can connect fine when configured with the internal (172) IP.

Extremely Penetrated posted:That sounded like maybe DNS resolution isn't enabled in the VPC Peering settings. Which led me to discover that "You cannot enable DNS resolution support for an inter-region VPC peering connection." lol so now you're looking at doing something like a TCP proxy in EC2 to forward stuff to RDS, or your own DNS. Hopefully someone else has less lovely ideas.

Yes, I saw that you can't. I was hoping that people have dealt with this (it should be a solved problem, right?) and have ideas. But with a TCP proxy to ... you kinda lost me.

Edit again: I was struggling when I set up this poo poo for the first time, in one region. I hate the drat CEO for not wanting to hire someone who knows this crap. He was happy, though, that I was able to put together a lovely (but one-button) solution for building and deploying the application automatically, and scaling it if the traffic gets too high. But now that we need to expand, he still pulls that poo poo, that I can come up with something. It doesn't look like I will this time. Volguus fucked around with this message at 16:51 on Sep 21, 2018 |
# ? Sep 21, 2018 16:45 |
|
Can you have one private DNS zone and then add multiple VPC IDs to the configuration? Then just enable DNS resolution on each zone. No need for DNS traffic to transit your VPC peer. Edit: Ah, I see what you mean. A different IP gets returned depending on where the query has come from, and RDS doesn't see a query from a peered VPC as a private source. Thanks Ants fucked around with this message at 17:04 on Sep 21, 2018 |
# ? Sep 21, 2018 17:01 |
|
I most definitely have RDS servers that have cross region connectivity. It resolves the RDS domain name to a private IP and that is routed over the inter region peer. What AWS won’t let you do is chain VPCs, the two VPCs have to be explicitly peered. In the AWS docs it’s called transitive peering and shows you what it will and won’t do.
|
# ? Sep 21, 2018 20:44 |
|
I'm looking at https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#transitive-peering and it shows 3 VPCs. I don't have 3, I only have 2 VPCs, and according to https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-basics.html#vpc-peering-limitations I have these limitations:

quote:An inter-region VPC peering connection has additional limitations:

And indeed, I cannot enable DNS on the peer. So, what am I missing here that would allow me, from Region B, to resolve "mydb.rds.amazon.com" to an internal address (172.xxx) instead of the public one?
|
# ? Sep 21, 2018 20:59 |