Cancelbot
Nov 22, 2006

Canceling spam since 1928

Scrapez posted:

This is a VoIP telephony application running on the EC2 instances, and our outbound carrier has to whitelist IPs to allow them to make calls.

Based on this: https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-addresses.html

Could you apply a filter where association-id is null or an empty string? Or pipe the JSON into jq and select entries where that attribute is missing?


Scrapez
Feb 27, 2004

Cancelbot posted:

Based on this: https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-addresses.html

Could you apply a filter where association-id is null or an empty string? Or pipe the JSON into jq and select entries where that attribute is missing?

This is what I ended up using:

aws ec2 describe-addresses --region us-east-2 --query 'Addresses[?AssociationId==null]' | jq -r '.[] | .PublicIp' | head -n 1
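For reference, the null check that pipeline relies on can be sketched in a few lines of Python against an invented two-address payload (the IPs and association ID below are made up):

```python
import json

# Invented sample shaped like `aws ec2 describe-addresses` output.
payload = json.loads("""
{"Addresses": [
  {"PublicIp": "3.3.3.3", "AssociationId": "eipassoc-123"},
  {"PublicIp": "4.4.4.4"}
]}
""")

# An Elastic IP with no AssociationId isn't attached to anything,
# so it's free to hand to a new instance.
free_ips = [a["PublicIp"] for a in payload["Addresses"]
            if a.get("AssociationId") is None]
print(free_ips[0])  # 4.4.4.4
```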

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
TAMs are always in Enterprise support since TAMs are only assigned to customers with enterprise-tier support contracts. What office will you be working out of? I’m curious if I’ll be your trainer.

And don’t feel bad. I don’t return anyone’s calls. It’s what makes me such an effective TAM.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
Out of curiosity, what kind of support turnaround time should we expect from TAMs? We've had some monthly calls with ours, and a few people have asked for meetings with various SMEs and have been waiting weeks with no follow-up. This has not exactly thrilled our teams, and I was wondering if it's something we should raise with someone or if it's usual.

TheCog
Jul 30, 2012

I AM ZEPA AND I CLAIM THESE LANDS BY RIGHT OF CONQUEST

Startyde posted:

The first rule of AWS is Amazon hates you
The second rule is never forget Rule Number One
The cloudwatch logs interface alone is proof enough that Bezos hates you and wants you to suffer.

I was catching up with the thread, and well:

[image]

Thinking of getting it printed.

Docjowles
Apr 9, 2009

It's beautiful :allears:

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

TheCog posted:

I was catching up with the thread, and well:

[image]

Thinking of getting it printed.

:perfect:


And asking your TAM for an SME should not usually take three weeks. Most importantly, your TAM should not leave you wondering what is going on.

Pinging your TAM with “uh, we haven’t had an update in a while. What’s up?” is completely warranted any time you are wondering about something.

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Agrikk posted:

What office will you be working out of? I’m curious if I’ll be your trainer.

London, or to use what I've heard from other AWS people: LHR14. I've been told I'm going to Dublin for a couple of weeks to do the CSE training and then another week of TAM training. There's a new office opening much closer to me, but I'm not sure when it opens.

For TAM engagement: we had some issues with CodeDeploy and got to a service team within two weeks via a joint call with the SA, TAM, and product owner. We got an MS-SQL SME in a week.

Cancelbot fucked around with this message at 08:24 on Sep 13, 2019

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
Thanks all, I wanted to make sure it wasn't the norm before bugging people about it.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
At a previous gig our poor TAMs would get called during practically half our random outages (they were frequent). I think they were our secondary on-call team. But given how much money the company was paying and their visibility they were probably some of their top ones. We had engineer access and got responses usually within hours to get an engineer on the phone (I first wrangled our network guys for dumps and getting cleared with general counsel sometimes depending upon the impact to the corporate network, so it’s not like I used AWS as a surrogate sysadmin team).

Thanks Ants
May 21, 2004

#essereFerrari


Cancelbot posted:

London, or to use what I've heard from other AWS people: LHR14.

Holborn Viaduct?

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Yeah, that's the spot. Couldn't remember the name.

nexxai
Jul 17, 2002

quack quack bjork
Fun Shoe
I know this question isn't specifically AWS related, but I wasn't really sure where it would fit better.

We have an API Gateway that basically fronts a bunch of customer connections to a vendor's service we purchase and offer. We're seeing an average failure rate of around 0.1% (e.g. ~11,000 failures on ~11,000,000 API requests in a month), and since this is the first time I've ever worked on a "real" API, I don't know if that's good, bad, about average, or what. We're well within our contractual obligations to our customers (guaranteed 1.5% or less failure rate), but our contracts were written by people who have even less knowledge in this space than I do.

I've tried searching for "typical failure rates" or "acceptable failure rates" but nothing really comes up. Is anyone able to give some insight here?

vanity slug
Jul 20, 2010

Depends on why they're failing, I guess?

nexxai
Jul 17, 2002

quack quack bjork
Fun Shoe

Jeoh posted:

Depends on why they're failing, I guess?
Assume it's because we're a bunch of monkeys and have no idea what we're doing. Basically, assume it's all our fault and we alone should be responsible for fixing everything (this is definitely not the case, but I want to play Devil's advocate). Is a 0.1% failure rate too high?

JHVH-1
Jun 28, 2002

nexxai posted:

I know this question isn't specifically AWS related, but I wasn't really sure where it would fit better.

We have an API Gateway that basically fronts a bunch of customer connections to a vendor's service we purchase and offer. We're seeing an average failure rate of around 0.1% (e.g. ~11,000 failures on ~11,000,000 API requests in a month), and since this is the first time I've ever worked on a "real" API, I don't know if that's good, bad, about average, or what. We're well within our contractual obligations to our customers (guaranteed 1.5% or less failure rate), but our contracts were written by people who have even less knowledge in this space than I do.

I've tried searching for "typical failure rates" or "acceptable failure rates" but nothing really comes up. Is anyone able to give some insight here?

If it's important enough you'd probably want a retry or queuing system, but it's still good to break down the errors and look for patterns: similar types of request, type of error, time of day, volume of requests being sent, whether the errors occur grouped together, what the app is doing leading up to the error, etc.

Then at least if you go to them they might have some insight, or you could be finding an issue on their end they aren't aware of. Even if you aren't breaching the SLA, they'd probably want to avoid errors if it's a good service.

nexxai
Jul 17, 2002

quack quack bjork
Fun Shoe

JHVH-1 posted:

If it's important enough you'd probably want a retry or queuing system
Sorry, I should have mentioned: there is already a retry system, and there is a ~100% success rate on all retries. My question is more about whether 0.1% of errors is acceptable at any level, even if the request eventually succeeds.

quote:

[...] still good to break down the errors and look for patterns: similar types of request, type of error, time of day, volume of requests being sent, whether the errors occur grouped together, what the app is doing leading up to the error, etc.
We've done this, and we're fairly certain of the cause (and it's not us), but if it's something I should be pushing on, I want to be able to go to our vendor liaison and tell them that the 0.1% failure rate is something they need to be working on.

Adhemar
Jan 21, 2004

Kellner, da ist ein scheussliches Biest in meiner Suppe.
You want to Google for availability SLAs (not failure rates). Here are AWS' own SLAs for availability: https://aws.amazon.com/legal/service-level-agreements/.

Your availability is currently 99.9% (100 - 0.1), which is not great, not terrible. At AWS, that's where we start paying our customers credits for most services. Your advertised SLA is apparently 98.5% (100 - 1.5), which is pretty terrible. I wouldn't want to use such a service personally. Seems like you could make a stronger promise there.
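For anyone following along, the arithmetic (numbers taken from nexxai's post above):

```python
# Availability implied by the observed failure rate.
requests = 11_000_000
failures = 11_000
availability_pct = 100 * (1 - failures / requests)
print(f"{availability_pct:.1f}%")  # 99.9%

# The advertised ceiling of a 1.5% failure rate is equivalent to:
sla_pct = 100 - 1.5
print(f"{sla_pct:.1f}%")  # 98.5%
```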

SnatchRabbit
Feb 23, 2006

by sebmojo
Cross posting from the SQL thread:

I have a client that wants to migrate two MSSQL database servers, with 200+ db objects between them, to AWS. Up until this point we've been fine using Data Migration Service to move the table data from their on-prem servers into AWS EC2 (RDS was ruled out due to cost). The problem is that DMS doesn't migrate indexes, users, privileges, stored procedures, or other database objects not directly related to table data. So now we have to generate scripts by hand for these 200+ objects, at minimum. What I'm asking is: is there some hacky way to automate this migration, or are we just stuck doing it all by hand over and over again? Is there some option in DMS to make this happen?

vanity slug
Jul 20, 2010

Contact your AWS TAM. We've been working intensively with the DMS team and they're really eager to change things based on customer feedback.

Pollyanna
Mar 5, 2005

Milk's on them.


Jeoh posted:

Contact your AWS TAM. We've been working intensively with the DMS team and they're really eager to change things based on customer feedback.

drat, really? I’ll have to get ours involved too.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Jeoh posted:

Contact your AWS TAM. We've been working intensively with the DMS team and they're really eager to change things based on customer feedback.

Always this.

For every project, you should be engaging your TAM (or entire account team) before you start. This way you don't have to reinvent the wheel and you'll be given best practices for your project, ensuring you get it right straight from the beginning.

SnatchRabbit
Feb 23, 2006

by sebmojo

Agrikk posted:

Always this.

For every project, you should be engaging your TAM (or entire account team) before you start. This way you don't have to reinvent the wheel and you'll be given best practices for your project, ensuring you get it right straight from the beginning.

Thanks, this is for a client account. Would there be a TAM assigned even on a basic support plan?

SnatchRabbit fucked around with this message at 23:46 on Sep 24, 2019

Docjowles
Apr 9, 2009

No you gotta pay the big bucks for Enterprise Support to get a TAM.

Internet Explorer
Jun 1, 2005

What if you have a Solutions Architect and not a TAM?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Internet Explorer posted:

What if you have a Solutions Architect and not a TAM?

Then reach out to them. Solutions architects exist to help architect solutions. See?

A TAM gets assigned to a customer only when the customer signs up for enterprise support (a minimum of $15,000 per year), but technically there is an SA and an Account Manager assigned to every account. That said, territory account managers can have hundreds of customers so access to the SA associated with your account might be limited. YMMV.

Pollyanna
Mar 5, 2005

Milk's on them.


Agrikk posted:

Always this.

For every project, you should be engaging your TAM (or entire account team) before you start. This way you don't have to reinvent the wheel and you'll be given best practices for your project, ensuring you get it right straight from the beginning.

This is good to know, thanks. Would have helped when doing our migration.

Internet Explorer
Jun 1, 2005

Agrikk posted:

Then reach out to them. Solutions architects exist to help architect solutions. See?

A TAM gets assigned to a customer only when the customer signs up for enterprise support (a minimum of $15,000 per year), but technically there is an SA and an Account Manager assigned to every account. That said, territory account managers can have hundreds of customers so access to the SA associated with your account might be limited. YMMV.

That's basically what I was getting at, the difference between a TAM and an SA for the customer. You got me the info I needed. Thank you.

Scrapez
Feb 27, 2004

Is there a way to move files into an EFS directly from my desktop machine?

Right now, I have to SCP the files up to an EC2 instance and then copy them over to the mounted EFS.

Cancelbot
Nov 22, 2006

Canceling spam since 1928

This is the only guide I found and requires a VPN between you and your VPC: https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html

I'm not great with Linux, but you could potentially have something detect the uploaded/changed files and auto-rsync them to EFS? It cuts out a step, and you might be able to go further and build a Rube Goldberg machine of rsync transactions :v:
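A bare-bones sketch of that detect-and-sync idea, with invented paths and plain file copies standing in for rsync (a real setup would more likely use inotify or just a cron'd `rsync -a`):

```python
import os
import shutil

def snapshot(path):
    # Map filename -> mtime so new or changed files stand out.
    return {f: os.path.getmtime(os.path.join(path, f)) for f in os.listdir(path)}

def sync_if_changed(src, dst, seen):
    # Copy everything over when the snapshot differs (stand-in for
    # `rsync -a src/ dst/`); return the new snapshot for the next poll.
    now = snapshot(src)
    if now != seen:
        for name in now:
            shutil.copy2(os.path.join(src, name), os.path.join(dst, name))
    return now

# Hypothetical watch loop: poll every few seconds and sync on change.
# seen = snapshot("/srv/uploads")
# while True:
#     time.sleep(5)
#     seen = sync_if_changed("/srv/uploads", "/mnt/efs/uploads", seen)
```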

Cancelbot fucked around with this message at 08:32 on Sep 27, 2019

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Let’s Rube Goldberg it up!

You could spin up the SFTP-to-S3 adapter and then invoke Lambdas to move it over to EFS.

As to the actual question: I’d need to double-check, but I want to say I’ve connected to EFS across VPCs, maybe even over a NATed client VPN.

Edit: this exists https://aws.amazon.com/datasync/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc

Scrapez
Feb 27, 2004

I did see DataSync. That seems like the route I'll have to go.

To expand on my reason for needing this: I've set up our environment in the secure way AWS suggests, with a bastion EC2 host in a public subnet and all of our EC2 machines in private subnets. The EFS storage is mounted on all the EC2 machines in the private subnets. So, if I want to transfer something up, I have to scp the files to the bastion host and then scp them from there over to the EC2 instance in the private subnets.

I don't want to attach the EFS to the bastion as the EFS may contain sensitive data that I wouldn't want accessible from a machine that's in a public subnet.

I'll give DataSync a try and see how that does. The crap part is that you have to pay for the service, but it does seem very cheap ($0.04 per GB transferred).

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION
Not AWS but hoping someone can help. We have a Varnish server configured to cache requests and behind that we have an Azure load balancer that balances between 3-4 VMs depending on requirements. The problem is that something about the Varnish server being there is causing the load balancer to go stupid and it seems to be confusing the traffic as one visitor and sending it all to the one VM. In other words, it doesn't seem to know or care about the X-Forwarded-For header when determining where to send requests.

Am I right in this assumption? Is there any way to configure the load balancer to ignore the client IP and use the X-Forwarded-For header instead?

JHVH-1
Jun 28, 2002

a hot gujju bhabhi posted:

Not AWS but hoping someone can help. We have a Varnish server configured to cache requests and behind that we have an Azure load balancer that balances between 3-4 VMs depending on requirements. The problem is that something about the Varnish server being there is causing the load balancer to go stupid and it seems to be confusing the traffic as one visitor and sending it all to the one VM. In other words, it doesn't seem to know or care about the X-Forwarded-For header when determining where to send requests.

Am I right in this assumption? Is there any way to configure the load balancer to ignore the client IP and use the X-Forwarded-For header instead?

I don’t know anything about Azure’s load balancer, but you might want to see if it has some stickiness configured and is treating the Varnish server as the same requesting user, which it would then keep sending to the same server.

12 rats tied together
Sep 7, 2006

a hot gujju bhabhi posted:

Not AWS but hoping someone can help. We have a Varnish server configured to cache requests and behind that we have an Azure load balancer that balances between 3-4 VMs depending on requirements. The problem is that something about the Varnish server being there is causing the load balancer to go stupid and it seems to be confusing the traffic as one visitor and sending it all to the one VM. In other words, it doesn't seem to know or care about the X-Forwarded-For header when determining where to send requests.

Am I right in this assumption? Is there any way to configure the load balancer to ignore the client IP and use the X-Forwarded-For header instead?

There are a lot of different ways to load balance traffic but commonly you'll see a load balancer perform some kind of source NAT on incoming traffic, replace the destination IP on it with a selection from its available targets, and then forward it along.

The target receives the traffic and perceives it as originating from the load balancer on an IP address level -- almost all of the time this is a good thing. Your target will respond back to the load balancer which usually implements some kind of connection-level tracking and caching and the load balancer does the same thing again: switches the source IP on the traffic to itself, replaces the destination IP, and forwards it back to whoever sent the original request.

If you're load balancing traffic like this you actually want the client IP regardless of the X-Forwarded-For header, those headers are usually application specific and outside of some specialized use cases you generally don't want your load balancers inspecting them.

If you're seeing your requests through your load balancer not actually being balanced, and you've confirmed that you aren't intentionally doing this by setting sticky sessions or similar, you should probably start by answering two questions: are all of your targets healthy, and what algorithm is the load balancer using to balance traffic? It looks like Azure load balancers default to a 5-tuple hash-based algorithm? The linked page has better documentation, but the short version is that any time any attribute of your traffic changes, you should get a new backend host.

For something like varnish initiating requests to backend servers through a load balancer, each individual request should have a different source port, the source port changing is what should get you a new backend host. You should be able to find out whether or not this is happening pretty easily by tcpdumping from your varnish host and looking at the outbound traffic.
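To illustrate the 5-tuple idea (a toy model with made-up backend names, not Azure's actual algorithm):

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, backends):
    # Hash the whole 5-tuple: the same flow always lands on the same
    # backend, while any change in the tuple (e.g. a new source port)
    # re-hashes and may select a different one.
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest(), "big")
    return backends[digest % len(backends)]

backends = ["vm-1", "vm-2", "vm-3"]
a = pick_backend("10.0.0.5", 50001, "10.0.0.100", 80, "tcp", backends)
b = pick_backend("10.0.0.5", 50001, "10.0.0.100", 80, "tcp", backends)
c = pick_backend("10.0.0.5", 50002, "10.0.0.100", 80, "tcp", backends)
print(a == b)  # True: identical tuples always map to the same VM
```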

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


OK, I have hit a brick wall with IAM policies and I am looking for help. We are trying to limit access to resources based on application groups. Essentially every application team will have their own groups, one for read-only access and one for read-write. These groups will allow general access to pretty much everything, but a few actions (ssm, kms, iam) will be restricted to specific resource paths. Most users will be members of only a single application group, but some of our leads will have to be members of multiple groups. In practice this is working perfectly fine for the read-write groups, since we've written a policy from scratch that works. The problem I'm running into is with the read-only groups. I'm leveraging the AWS-managed ReadOnlyAccess policy, which grants List/Describe/Get access to everything, and then dropping another policy on top to restrict access to only the resources allowed in the path. Like this, which restricts anyone in the group to reading ssm parameters in the /app1 path and prevents them from decrypting any secrets in that path:

code:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter*"
            ],
            "Effect": "Deny",
            "NotResource": [
                "arn:aws:ssm:us-east-1:111111111111:parameter/app1/*"
            ]
        },
        {
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:PARAMETER_ARN": "arn:aws:ssm:us-east-1:111111111111:parameter/app1/*"
                }
            },
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111111111111:key/1234abcd-56ab-89cd-01ab-234567cdefabc"
            ],
            "Effect": "Deny"
        }
    ]
}
But when I do this it has the side effect of making it so that the user can only be in a single read-only group due to the way multiple Deny/NotResource statements on the same actions will be resolved. So the way around this I thought would be to duplicate the ReadOnlyAccess policy and strip out the restricted actions and do an Allow/Resource for the specific resource paths. However, I can't create that policy because it's over the size limit.

So my question is how would I go about making these groups work. Do I just split the policy up into smaller pieces? Is there some other way to write a policy that will handle this scenario?
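The multi-group lockout can be seen with a toy model of the explicit-deny interaction (paths are illustrative, and this ignores everything else about IAM policy evaluation):

```python
def denied(resource, deny_not_resources):
    # Each "statement" denies the action on every resource NOT under its
    # NotResource prefix; an explicit deny from any attached policy wins.
    return any(not resource.startswith(prefix) for prefix in deny_not_resources)

# One read-only group attached: /app1 parameters are still readable.
assert not denied("parameter/app1/db-pass", ["parameter/app1/"])

# Two groups attached: app2's statement denies /app1 and vice versa,
# so the user can read nothing at all.
both = ["parameter/app1/", "parameter/app2/"]
assert denied("parameter/app1/db-pass", both)
assert denied("parameter/app2/db-pass", both)
print("stacked Deny/NotResource statements deny everything")
```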

Cerberus911
Dec 26, 2005
Guarding the damned since '05
Any reason you can’t take the read-write policy, drop the write actions, and use that for the read-only groups?

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


That’s essentially the solution I came up with. I copied the ReadOnlyAccess policy and broke it up into several pieces to get under the policy size limit.

It’s ugly as hell but it works.

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION

12 rats tied together posted:

There are a lot of different ways to load balance traffic but commonly you'll see a load balancer perform some kind of source NAT on incoming traffic, replace the destination IP on it with a selection from its available targets, and then forward it along.

The target receives the traffic and perceives it as originating from the load balancer on an IP address level -- almost all of the time this is a good thing. Your target will respond back to the load balancer which usually implements some kind of connection-level tracking and caching and the load balancer does the same thing again: switches the source IP on the traffic to itself, replaces the destination IP, and forwards it back to whoever sent the original request.

If you're load balancing traffic like this you actually want the client IP regardless of the X-Forwarded-For header, those headers are usually application specific and outside of some specialized use cases you generally don't want your load balancers inspecting them.

If you're seeing your requests through your load balancer not actually being balanced, and you've confirmed that you aren't intentionally doing this by setting sticky sessions or similar, you should probably start by answering two questions: are all of your targets healthy, and what algorithm is the load balancer using to balance traffic? It looks like Azure load balancers default to a 5-tuple hash-based algorithm? The linked page has better documentation, but the short version is that any time any attribute of your traffic changes, you should get a new backend host.

For something like varnish initiating requests to backend servers through a load balancer, each individual request should have a different source port, the source port changing is what should get you a new backend host. You should be able to find out whether or not this is happening pretty easily by tcpdumping from your varnish host and looking at the outbound traffic.

Thanks for the info, super helpful. I looked at the traffic using tcpdump as suggested, and it definitely initiates using a different port each time but always requests port 80 on the LB. Is this a problem? Sorry for the potentially stupid question; I'm far more Dev than Ops, unfortunately.


Docjowles
Apr 9, 2009

I also don’t know Azure. But from the docs, it seems like what you are trying to do should work fine? Maybe check that you haven’t somehow specified “source IP affinity mode”. Because if each Varnish request comes from a different source port, it should be load balancing across the back ends according to this:


https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
