freeasinbeer
Mar 26, 2015

by Fluffdaddy
Even if he needed IPsec tunnels, running strongSwan in the cloud isn't that bad.


KoRMaK
Jul 31, 2012



There's this AWS thread, but do we have an Azure thread? I got stuck in their ecosystem on a project and need help deploying a Rails app :(

The Fool
Oct 16, 2003


:justpost:

There are a handful of azure folks that read the thread.

KoRMaK
Jul 31, 2012



:yeah:

I'm trying to deploy my Rails app to Azure, using the Linux Ruby 2.3.3 image. My app won't come up, though, and my current issue is that it's not running bundle install on deployment. It did run a few times, but I have no idea how I triggered it.

Does anyone with Azure experience know how to get bundle install running when I push to Azure? I also can't SSH into the server; I can only use Kudu with a bash shell, which doesn't let me run the commands I normally would.



Azure is bad, I hate it.

I found out that the startup script should basically run bundle install every time, so I don't know what the hell I'm doing wrong. The initial bundle install crapped out on a gem that was having an issue, so I changed my Gemfile, but it still isn't running: the deployment just finishes and says "missing dependencies, try redeploying". After 5 deployments I think I'm missing something.

necrotic posted:

It should always run it on start. Here's the script that handles everything in their image: https://github.com/Azure-App-Service/ruby/blob/master/2.3.3/startup.sh

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
AWS Batch: as far as I can tell, disabling a queue's attached managed compute environment doesn't prevent the service from launching a machine to sit and do nothing when something enters the queue. Am I missing a config somewhere, or is that just how it goes?

Steve French
Sep 8, 2003

I'd love for someone to help me understand this:

https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/

Specifically:

quote:

This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

I've got a legacy app that, unfortunately, uses a sequential naming pattern that we've known is not ideal but for various reasons is very difficult to fix. This wording reaaallly seems to imply that it's not an issue anymore. However, since that announcement was made we've started seeing failures writing to that bucket: we'll get nearly 100% internal server errors and/or 503 Slow Down responses for some period of time. There are errors on reads too, but at a much lower rate. There have been several instances of this since the announcement, and it had never been a problem before. We're not even at peak traffic, so the fact that things have gotten worse right when they were supposedly made better is highly suspect.
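For reference, the key randomization that guidance used to call for is just a short hash prefix, so sequential names spread across S3's internal partitions. A minimal sketch (the key layout and bucket name are made up; the boto3 call is commented out since it needs credentials and a real bucket):

```python
import hashlib

def randomized_key(sequential_key: str, prefix_len: int = 4) -> str:
    """Prepend a short hex hash so sequentially-named objects spread
    across S3 partitions (the pre-2018 guidance)."""
    digest = hashlib.md5(sequential_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{sequential_key}"

key = randomized_key("logs/2018/08/10/event-000001.json")
print(key)  # something like "3f2a/logs/2018/08/10/event-000001.json"

# With boto3 (commented out -- hypothetical bucket name):
# import boto3
# s3 = boto3.client("s3")
# s3.put_object(Bucket="my-legacy-bucket", Key=key, Body=b"...")
```

The catch, of course, is that every reader then needs the same hash to find the object, which is exactly why this is hard to retrofit onto a legacy app.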

Less Fat Luke
May 23, 2003

Exciting Lemon
If you're getting errors you should probably open a support ticket with AWS; my org works with thousands of buckets at crazy levels of usage and I've never seen any of them exhibit that behavior.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
This.

This S3 feature launch should have had no impact on your existing workload. If you are seeing issues with S3 I always suggest opening a support case first.

Steve French
Sep 8, 2003

Less Fat Luke posted:

If you're getting errors you should probably open a support ticket with AWS; my org works with thousands of buckets at crazy levels of usage and I've never seen any of them exhibit that behavior.

Oh of course, I've already done that and will continue to do so. But they haven't been of much help yet

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Steve French posted:

Oh of course, I've already done that and will continue to do so. But they haven't been of much help yet

PM me with your case number and I’ll see what I can do.

Steve French
Sep 8, 2003

And the response I've gotten so far is not consistent with that quoted statement. What support has said is that performance is improved overall, but that randomizing keys is still a good idea (I totally believe that to be the case, but it's not consistent with my reading of that statement).

And maybe it's a coincidence that we started having these issues after the announcement, but our access patterns haven't changed, and we are not hitting the previous peak request rates at which we had no problems before the supposedly improved performance, so that's why I'm skeptical.

Orkiec
Dec 28, 2008

My gut, huh?
https://aws.amazon.com/blogs/aws/aurora-serverless-ga/
:toot:

Orkiec fucked around with this message at 03:53 on Aug 10, 2018

JHVH-1
Jun 28, 2002

Wish this was out for PostgreSQL. Those workloads end up being more expensive, and the Aurora version has a higher minimum instance type.

Might try it out for something else though.

PIZZA.BAT
Nov 12, 2016


:cheers:


Alright I'm back! How this convo started a bit ago:

Rex-Goliath posted:

Hi everyone! Total AWS newbie here. I'm a consultant working for a niche NoSQL database company, and we're in the middle of pivoting our customers into using AWS rather than running their own hosting, or at least providing them the option. I'm currently responsible for bird-dogging the AWS training program to find out which courses will be most relevant for our consultants to get them up to speed, as well as what order they should be taken in.

We have our own standalone application which has its own built-in web server, REST API, load balancing, etc. Our primary concern is mostly being able to spin up servers, allocate resources to them, deploy our application, and have the servers communicate with each other. I'm looking at the practitioner essentials for today but was wondering what you guys feel will be the best courses for us to take.

StabbinHobo posted:

start here: https://acloud.guru/learn/aws-certified-cloud-practitioner

after that they should still probably do one of the 'associate' level courses that come next, but for $100 everyone can just start with 'practitioner' and try to plow through it before the end of the quarter.

Bossman is now asking me for options on getting resources trained up enough that they can be minimally sufficient in administering one of these instances. I'm essentially looking for two different plans: the first being the correct Amazon-recommended way to get someone technically certified in AWS, and the second being if there's any way we can half-rear end it.

Essentially we've realized that our major weakness right now is that we have very few, if any, technical resources that are proficient in Azure/AWS, and we want to make a strong push in that direction. If there are any online resources that a handful to a dozen technical guys could all sit down with and teach themselves AWS, it'd be greatly appreciated. Alternatively, if we have to go with the expensive training, what courses would you guys consider absolutely essential to becoming proficient?

Thanks a bunch

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
AWS offers trainings where instructors come on site to train your peeps up. That’s the expensive option.

You might want to look at Qwiklabs. Their AWS stuff is fairly straightforward and good for getting your feet wet.

JHVH-1
Jun 28, 2002
They have online training, some of which is free as well. Also a ton of their seminars on different tech are archived on youtube.

PIZZA.BAT
Nov 12, 2016


:cheers:


When I looked around on their site a few months ago the only free stuff that seemed relevant to us was introductory 'cloud practitioner' classes that while broad and helpful didn't go into a lot of details. I think that broad course plus the quiklabs stuff might be what my boss is looking for. I'll see what he thinks.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Rex-Goliath posted:

When I looked around on their site a few months ago the only free stuff that seemed relevant to us was introductory 'cloud practitioner' classes that while broad and helpful didn't go into a lot of details. I think that broad course plus the quiklabs stuff might be what my boss is looking for. I'll see what he thinks.

On the off chance you are on AWS Enterprise Support you get free Qwiklabs credits every year.

PIZZA.BAT
Nov 12, 2016


:cheers:


Agrikk posted:

On the off chance you are on AWS Enterprise Support you get free Qwiklabs credits every year.

Ooh this is very good to know- thanks

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
the halfassed way is you take the acloudguru video courses and you just don't go back through them a second/third time when you score lovely on the mock tests, and then you don't take the real test. It's the same videos/labs either way.

Vanadium
Jan 8, 2005

Is there really no way to find your current Redshift limits for nodes total/per cluster short of trying to exceed them?

jiffypop45
Dec 30, 2011

This is my personal opinion

We get free access to acloudguru at AWS, so I think that's a pretty good indicator of how great it is. Alternatively you can reach out to your TAM and buy training through them; I don't know what the cost is like on that, but it's an option I know they give to people who need to meet budget in academia/government in lieu of buying RIs.

We used to use Qwiklabs as part of our onboarding, but we scrapped them because the instructions were poor. It was basically: 1. Read this. 2. Now design a Kubernetes-like platform using AWS.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Vanadium posted:

Is there really no way to find your current Redshift limits for nodes total/per cluster short of trying to exceed them?

You could always put in a support ticket.

vanity slug
Jul 20, 2010

Or AWS could just implement describe-limits for Redshift like, I dunno, half their other poo poo with service limits.
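For the record, AWS did eventually ship a generic limits API (Service Quotas, mid-2019, well after this exchange) that covers this sort of thing. A sketch of what the asked-for call looks like there; the service code is an assumption, and the call itself is commented out since it needs credentials:

```python
# Sketch: querying Redshift account limits via the later Service Quotas API.
# The service code is an assumption -- confirm it against list_services().
request = {"ServiceCode": "redshift", "MaxResults": 100}

# import boto3
# sq = boto3.client("service-quotas")
# for quota in sq.list_service_quotas(**request)["Quotas"]:
#     print(f'{quota["QuotaName"]}: {quota["Value"]}')
```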

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Jeoh posted:

Or AWS could just implement describe-limits for Redshift like, I dunno, half their other poo poo with Service Limits.

Imagine each service is its own company acting as a subsidiary of a parent shell corporation. Imagine each company implementing things their own way, with their own requirements, APIs, and so forth.

Now imagine trying to put all 140-plus companies under a single pane of glass for management, billing, and limits.

That is AWS in a nutshell. Some teams do a better job of playing with others than others. There is an ongoing effort to make that single pane of glass more transparent, but there are still gaps as you have found.

Agrikk fucked around with this message at 20:18 on Sep 13, 2018

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
Is anybody friendly with the Batch team? :v:
I'd use it way more at work if you could give a job fractions of a CPU, even if it were just one option at x0.5. As is, it's OK, but we're not throwing more and varied workloads at it because of that. Our needs are memory bound, not CPU.
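To make the complaint concrete, here's roughly what a Batch container job definition looks like: vCPUs only come in whole integers, so a memory-bound job still reserves a full core. All identifiers below are made up, and the registration call is commented out:

```python
# Batch job definitions take whole vCPUs -- no x0.5 option -- so a
# memory-heavy job still pins an entire core. Names/image are hypothetical.
job_definition = {
    "jobDefinitionName": "memory-heavy-example",
    "type": "container",
    "containerProperties": {
        "image": "example/worker:latest",
        "vcpus": 1,          # integer minimum; fractions are rejected
        "memory": 15000,     # MiB -- the resource this workload actually needs
        "command": ["run-job"],
    },
}

# import boto3
# batch = boto3.client("batch")
# batch.register_job_definition(**job_definition)
```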

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Startyde posted:

Our needs are memory bound, not CPU.

I hear ya. I hear of lots of customers running into memory constraints and having to upgrade to a larger instance/tier/etc. because they need more memory while the CPU sits mostly idle.

It’s a cost and logistics function: an underlying fleet has x amount of cpu and y amount of memory. Therefore all things come in a fixed ratio of x to y.

But it certainly would be nice to have a tomcat machine running on one half x and ten times y.

tracecomplete
Feb 26, 2017

Agrikk posted:

It’s a cost and logistics function: an underlying fleet has x amount of cpu and y amount of memory. Therefore all things come in a fixed ratio of x to y.

Maybe they're just taking a bath on it, but Google seems to have demonstrated pretty conclusively that the quoted bit need not be so.

I generally prefer AWS as a platform, because I've spent long enough on it to have a level of comfort with its APIs and how it operates, but stuff like this creates an awkward conversation around costs when evaluating AWS for new deployments.

(I guess "it's not Azure, thank god" is still worth something.)

Startyde
Apr 19, 2007

come post with us, forever and ever and ever

Agrikk posted:

I hear ya. I hear of lots of customers running into memory constraints and having to upgrade to a larger instance/tier/etc because they need more memory and the cpu stats mostly idle.

It’s a cost and logistics function: an underlying fleet has x amount of cpu and y amount of memory. Therefore all things come in a fixed ratio of x to y.

But it certainly would be nice to have a tomcat machine running on one half x and ten times y.

It's not even that, it's that a job won't be dispatched to an instance regardless of actual CPU utilization. There's plenty of RAM and compute, but it's looking at cores only.
GCP's ultimate flexibility flies in the face of your point, though. Yes, I understand Google lit money on fire until their clusters became self-aware, and their offering is unique in this regard, but it exists, so... I stick around because of familiarity too, but AWS has got to rein in the BS like wacky fee schedules, especially with regard to bandwidth (echo above). Pitching AWS sure hasn't gotten easier.

Volguus
Mar 3, 2009
Got another question for you AWS gurus: how does one make an RDS db (PostgreSQL) available across regions?
I tried the VPC peering option, but it doesn't have DNS name resolution across regions. I would like to not have the database open to the internet (even if restricted to a single IP or range of IPs). Or is there a better option for this? Database replication, maybe?
My goal is to have a web application available in multiple regions (for latency purposes), currently looking at US-EAST (where we are now) and EU-WEST (Paris). The DB itself would be relatively fine across the ocean, since the latency-sensitive operations don't use it that heavily. But the web application itself should be in the region closest to the user.

Extremely Penetrated
Aug 8, 2004
Hail Spwwttag.
If just using CloudFront for the web app in its original region isn't enough, RDS PostgreSQL supports cross-region read replicas. But then the app needs to be redesigned to send read-only queries to X and read-write queries to Y.

RDS Aurora was supposed to offer multi-region multi-master (i.e. full read/write nodes) capabilities by the end of this year.

You could also look into whether adding an Elasticache layer would be a good fit for the workload.
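A sketch of the read-replica option, for what it's worth: the call goes to the destination region and points at the source instance by ARN. All identifiers below are placeholders, and the call is commented out since it needs real resources:

```python
# Cross-region read replica: issued in the *destination* region, naming the
# source instance by ARN. All identifiers below are placeholders.
replica_params = {
    "DBInstanceIdentifier": "myapp-replica-eu",
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:123456789012:db:myapp-primary",
    "DBInstanceClass": "db.t2.medium",
}

# import boto3
# rds = boto3.client("rds", region_name="eu-west-3")  # Paris
# rds.create_db_instance_read_replica(**replica_params)
```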

Volguus
Mar 3, 2009

Extremely Penetrated posted:

If just using CloudFront for the web app in its original region isn't enough, RDS Postgresql supports cross-region read replicas. But then the app needs to be redesigned to send read-only queries to X, and read-write queries to Y.

RDS Aurora was supposed to offer multi-region multi-master (i.e. full read/write nodes) capabilities by the end of this year.

You could also look into whether adding an Elasticache layer would be a good fit for the workload.

Interesting, I didn't know about cross-region replicas. But the same issue stands: at the moment, the only way I know of for a web application (we're talking about an API here, not plain pages, so CloudFront doesn't really help) to talk to an RDS db in a different region is over the public internet, since VPC peering doesn't allow name resolution across regions (it works fine if I access the database using the internal 172.31.x.x IP). And I would very much like to not send db traffic over the internet. Does AWS provide a managed VPN service, one that just works without me having to worry about it, that I could use to connect two (or more) VPCs?

Extremely Penetrated
Aug 8, 2004
Hail Spwwttag.
I haven't tried it, but I thought that Route53's Private Hosted Zones would let requests resolve to their internal IPs across any VPCs you associate them with.
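If you try it, the setup would be roughly: create the private zone attached to the first VPC, then associate the second (peered) VPC with the same zone. IDs below are placeholders and the calls are commented out, so treat this as a sketch:

```python
# Private hosted zone shared across two VPCs. Zone/VPC IDs are placeholders.
association = {
    "HostedZoneId": "Z0000000EXAMPLE",
    "VPC": {"VPCRegion": "eu-west-3", "VPCId": "vpc-0bbbbbbbbbbbbbbbb"},
}

# import boto3
# r53 = boto3.client("route53")
# r53.create_hosted_zone(
#     Name="internal.example.com",
#     VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0aaaaaaaaaaaaaaaa"},
#     CallerReference="zone-2018-09-21",
#     HostedZoneConfig={"PrivateZone": True},
# )
# r53.associate_vpc_with_hosted_zone(**association)
```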

Volguus
Mar 3, 2009

Extremely Penetrated posted:

I haven't tried it, but I thought that Route53's Private Hosted Zones would let requests resolve to their internal IPs across any VPCs you associate them with.

I tried that, but it doesn't resolve to the IP, it resolves to another name. Essentially, for RDS you configure a Route 53 private zone to resolve a pretty name (db.example.com) to the uglier and longer RDS-provided name. With VPC peering that works fine, except that the remote site gets the ugly name, which it again tries to resolve, and then it only gets the external IP for it. Which takes me back to square one: I want the database available to the remote region via a private IP, communicating over a private network.

Thanks Ants
May 21, 2004

#essereFerrari


It's resolving the external IP, but is the traffic actually going externally? I know in Azure when you present a service endpoint into a vnet you still reference the 'public' DNS name but that traffic never actually leaves the private network.

Extremely Penetrated
Aug 8, 2004
Hail Spwwttag.
That sounded like maybe DNS resolution isn't enabled in the VPC Peering settings. Which led me to discover that "You cannot enable DNS resolution support for an inter-region VPC peering connection." lol so now you're looking at doing something like a TCP proxy in EC2 to forward stuff to RDS, or your own DNS. Hopefully someone else has less lovely ideas.
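The peering setting in question, for reference: per-side DNS resolution flags on the connection. These are what make a public hostname resolve to a private IP from the peer on a same-region connection, and they're exactly what the API rejects for inter-region peerings. The connection ID is a placeholder and the call is commented out:

```python
# Per-side DNS resolution flags on a VPC peering connection. Works for
# same-region peering; rejected for inter-region at the time of this thread.
options = {
    "VpcPeeringConnectionId": "pcx-0123456789abcdef0",  # placeholder
    "RequesterPeeringConnectionOptions": {
        "AllowDnsResolutionFromRemoteVpc": True,
    },
    "AccepterPeeringConnectionOptions": {
        "AllowDnsResolutionFromRemoteVpc": True,
    },
}

# import boto3
# ec2 = boto3.client("ec2")
# ec2.modify_vpc_peering_connection_options(**options)
```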

Volguus
Mar 3, 2009

Thanks Ants posted:

It's resolving the external IP, but is the traffic actually going externally? I know in Azure when you present a service endpoint into a vnet you still reference the 'public' DNS name but that traffic never actually leaves the private network.

Hmm, I don't know. I guess it tries to, since the web application cannot connect to the database when it's given the name (which resolves to the external IP), but it can connect fine when configured with the internal (172) IP.

Extremely Penetrated posted:

That sounded like maybe DNS resolution isn't enabled in the VPC Peering settings. Which led me to discover that "You cannot enable DNS resolution support for an inter-region VPC peering connection." lol so now you're looking at doing something like a TCP proxy in EC2 to forward stuff to RDS, or your own DNS. Hopefully someone else has less lovely ideas.

Yes, I saw that you can't; I was hoping people had already done this (it should be a solved problem, right?) and had ideas. But with the TCP proxy you kinda lost me.

Edit again: I was struggling when I set up this poo poo for the first time, in one region. I hate the drat CEO for not wanting to hire someone who knows this crap. He was happy, though, that I was able to put together some lovely (but one-button) solution for building and deploying the application automatically, and scaling it if the traffic gets too high. But now that we need to expand, he still pulls that poo poo, that I can come up with something. It doesn't look like I will this time.

Volguus fucked around with this message at 16:51 on Sep 21, 2018

Thanks Ants
May 21, 2004

#essereFerrari


Can you have one private DNS zone and then add multiple VPC IDs to the configuration? Then just enable DNS resolution on each zone. No need for DNS traffic to transit your VPC peer.

Edit: Ah, I see what you mean. A different IP gets returned depending on where the query has come from, and RDS doesn't see a query from a peered VPC as a private source.

Thanks Ants fucked around with this message at 17:04 on Sep 21, 2018

freeasinbeer
Mar 26, 2015

by Fluffdaddy
I most definitely have RDS servers that have cross region connectivity. It resolves the RDS domain name to a private IP and that is routed over the inter region peer. What AWS won’t let you do is chain VPCs, the two VPCs have to be explicitly peered.



In the AWS docs it’s called transitive peering and shows you what it will and won’t do.


Volguus
Mar 3, 2009
I'm looking at https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#transitive-peering and it shows 3 VPCs.
I don't have 3, I only have 2 VPCs, and according to https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-basics.html#vpc-peering-limitations I have these limitations:

quote:

An inter-region VPC peering connection has additional limitations:
You cannot create a security group rule that references a peer VPC security group.
You cannot enable support for an EC2-Classic instance that's linked to a VPC via ClassicLink to communicate with the peer VPC.
You cannot enable DNS resolution support (a VPC cannot resolve public IPv4 DNS hostnames to private IPv4 addresses when queried from instances in the peer VPC).
Communication over IPv6 is not supported.
The Maximum Transmission Unit (MTU) across the VPC peering connection is 1500 bytes (jumbo frames are not supported).

And indeed, I cannot enable DNS resolution on the peering connection.

So, what am I missing here that would allow me, from Region B, to resolve "mydb.rds.amazon.com" to an internal address (172.x.x.x) instead of the public one?
