Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.

Jeoh posted:

What's up with the X1-series' lovely disk performance? I know it's supposed to be for in-memory applications, but it's also an awesome budget SQL server (R-series has too many cores so the licensing cost fucks ya). I mean, you're supposed to use RDS, but some of us are still stuck in medieval times.

I assume you mean the EBS throughput on the small X1e series? Yeah that is pretty typical of any low-CPU count instance, and the ones that do have higher networking capacity probably don't meet your memory requirements?

Unless you are saying you want more instance store SSDs for your SQL server, in which case I salute how dedicated you are to bad ideas.


vanity slug
Jul 20, 2010

Arzakon posted:

I assume you mean the EBS throughput on the small X1e series? Yeah that is pretty typical of any low-CPU count instance, and the ones that do have higher networking capacity probably don't meet your memory requirements?

Unless you are saying you want more instance store SSDs for your SQL server, in which case I salute how dedicated you are to bad ideas.

Yeah the x1e.4xlarge would be a fantastic alternative to our r4.8xlarge instances if they could sustain the same IOPS and throughput. Our company is pretty much based on throwing hardware at legacy software problems though :v:

nolbishop
Sep 4, 2002

I wish I were this hip.
I hope this is the right place to ask. Quick background: Running a Laravel app on AWS with a MySQL RDS instance. Currently, my program is only used by one branch of a company, but it looks like the whole company is interested in using it. They want to keep the branches separate with regard to viewing the data. Is it possible to set up an RDS instance that holds the user table and, based on the user's branch, route them to another RDS instance that holds that branch's data?

Woodsy Owl
Oct 27, 2004

nolbishop posted:

I hope this is the right place to ask. Quick background: Running a Laravel app on AWS with a MySQL RDS instance. Currently, my program is only used by one branch of a company, but it looks like the whole company is interested in using it. They want to keep the branches separate with regard to viewing the data. Is it possible to set up an RDS instance that holds the user table and, based on the user's branch, route them to another RDS instance that holds that branch's data?

How are you currently managing user authn/authz for the app? Custom user table? IAM? And for DB access, are you using multiple SQL users with various privs or a single user?

Easiest way is just to manage all this in the code. If you had to do it at the database layer, you could create SQL views for each branch: create a SQL user for each branch and GRANT it access to the corresponding view. That’s gonna get really messy real quick though. If you have to segregate the data then you could use separate RDS instances, but that’s gonna get expensive.
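To make the view-per-branch idea concrete, here is a minimal sketch, assuming PyMySQL and made-up table, column, and user names (orders, branch_id, and so on); adapt it to the actual schema:

```python
# Hedged sketch of the per-branch view/GRANT approach described above.
# Host, table, column, and user names are all hypothetical.
import pymysql

conn = pymysql.connect(host="mydb.example.rds.amazonaws.com",
                       user="admin", password="...", database="app")
branches = {"north": 1, "south": 2}  # hypothetical branch ids

with conn.cursor() as cur:
    for name, branch_id in branches.items():
        # One filtered view per branch over the shared table.
        cur.execute(f"CREATE OR REPLACE VIEW orders_{name} AS "
                    f"SELECT * FROM orders WHERE branch_id = {branch_id}")
        # One SQL user per branch, allowed to read only its own view.
        cur.execute(f"CREATE USER IF NOT EXISTS '{name}_app'@'%' "
                    f"IDENTIFIED BY 'change-me'")
        cur.execute(f"GRANT SELECT ON app.orders_{name} TO '{name}_app'@'%'")
```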

Are there any additional constraints or requirements you could share?

Votlook
Aug 20, 2005
What is a good way to manage ssh access to ec2 servers?
We currently have the public keys of the dev team baked into all our AMIs, but this creates a lot of work whenever a team member joins or leaves, as we have to rebuild all the AMIs and update all our services.
We have tried using a bastion server, but it complicates some of our tooling.
I'm looking for a solution where it is easy to add and remove access to each machine on the fly; preferably something really simple and robust.

Thanks Ants
May 21, 2004

#essereFerrari


Have you looked at something like https://gravitational.com/teleport/ ? I'm sure it can be configured in a way that ensures your tooling can work.

JehovahsWetness
Dec 9, 2005

bang that shit retarded

Votlook posted:

What is a good way to manage ssh access to ec2 servers?
We currently have the public keys of the dev team baked into all our AMIs, but this creates a lot of work whenever a team member joins or leaves, as we have to rebuild all the AMIs and update all our services.
We have tried using a bastion server, but it complicates some of our tooling.
I'm looking for a solution where it is easy to add and remove access to each machine on the fly; preferably something really simple and robust.

I've seen this used in the wild: https://github.com/widdix/aws-ec2-ssh

Assumes you're using individual IAM users, etc, and doesn't gently caress around w/ userdata and other stuff like OpsWorks does.
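For the curious, the core of that approach looks roughly like this: pull each IAM group member's active SSH public keys and write them into the matching local user's authorized_keys. A hedged boto3 sketch; the group name and paths are assumptions, and creating the local accounts (useradd, chown) is elided:

```python
# Hedged boto3 sketch of the idea behind the linked tool: sync active IAM
# SSH public keys for members of an IAM group into each matching local
# user's authorized_keys. Group name and paths are assumptions.
import os
import boto3

iam = boto3.client("iam")

for user in iam.get_group(GroupName="ssh-users")["Users"]:
    name = user["UserName"]
    keys = []
    for meta in iam.list_ssh_public_keys(UserName=name)["SSHPublicKeys"]:
        if meta["Status"] != "Active":
            continue
        key = iam.get_ssh_public_key(
            UserName=name,
            SSHPublicKeyId=meta["SSHPublicKeyId"],
            Encoding="SSH",
        )["SSHPublicKey"]["SSHPublicKeyBody"]
        keys.append(key)
    ssh_dir = f"/home/{name}/.ssh"
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    with open(f"{ssh_dir}/authorized_keys", "w") as f:
        f.write("\n".join(keys) + "\n")
```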

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
We deploy users during every chef run. The users are created from databags containing our devs' public keys. If a databag is deleted (someone leaves), all of our instances remove that user the next time chef runs. Same if a new user is onboarded.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Part of our deploy process installs a cron job that re-runs the ansible deploy playbook every half hour. We do this so we can make app configuration changes on the fly without needing to redeploy the whole app. This playbook also contains a role which downloads a user list from SSM parameter store, installs the associated public keys, and updates the user list in the sudoers file.

It’s not a great solution but it works for the most part. It’s something I inherited and cleaned up a bit, but I’m looking to migrate to something better as soon as I have time.
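As a rough illustration of the SSM-backed part, the role boils down to something like this (the parameter name and its user:key line format are assumptions, and the sudoers update is elided):

```python
# Rough sketch of the SSM-backed role described above. The parameter name
# and its "user:ssh-key" line format are assumptions for illustration.
import boto3

ssm = boto3.client("ssm")
value = ssm.get_parameter(
    Name="/deploy/ssh-users", WithDecryption=True
)["Parameter"]["Value"]

for line in value.strip().splitlines():
    user, pubkey = line.split(":", 1)
    with open(f"/home/{user}/.ssh/authorized_keys", "w") as f:
        f.write(pubkey + "\n")
```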

JHVH-1
Jun 28, 2002



Votlook posted:

What is a good way to manage ssh access to ec2 servers?
We currently have the public keys of the dev team baked into all our AMIs, but this creates a lot of work whenever a team member joins or leaves, as we have to rebuild all the AMIs and update all our services.
We have tried using a bastion server, but it complicates some of our tooling.
I'm looking for a solution where it is easy to add and remove access to each machine on the fly; preferably something really simple and robust.


I don’t even like having devs on machines if I can help it. It kind of encourages the whole keeping-servers-as-pets-instead-of-cattle thing, and makes things harder later on when you have to figure out what random config change some dev made that isn’t in your automation.

I’ve contemplated setting up a directory service and using that. For now we’ve started using ansible as a quick and dirty approach. You could probably do something with the ec2 config service as well.

One of the things I liked about OpsWorks was that it could generate temporary keys for you to get in and do some work. The privs were per AWS user and on a per-stack basis, so they were easy to dole out.

12 rats tied together
Sep 7, 2006

Erwin posted:

Unit tests is inaccurate, but that's not important. Again, kitchen-terraform is geared towards testing your reusable modules, which if you have a Terraform configuration "past a certain point of complexity" you should be doing. You can certainly also test your base state-producing config with kitchen-terraform, but it's easier if the tests mainly focus around each module in their own repo and you're mainly testing your base configuration for successful applies.

I do actually appreciate your point-by-point breakdown, thank you -- however, reusable modules in terraform are kind of a running joke at my 9-5 right now. It's not that they're impossible, it's just that "past a certain point of complexity" the tooling literally falls apart.

Here's my favorite since the last time I posted in this thread: https://github.com/hashicorp/terraform/issues/9858#issuecomment-263654145

The situation: a subnet module that stamps out boilerplate subnets. Sometimes we want to create between 1 and 6 nat gateways, other times we want to pass in a default nat gateway, and other times we don't want to create any nat gateways at all. A couple of problems we've run into so far:

Both sides of a ternary get executed regardless of which one is actually used, so in a situation where you are creating 0 nat gateways, your aws_nat_gateway.whatever.*.id list is empty, and element() and friends will fail on it because you can't pull a list index out of something that is null.

Coercing something that is null into something that is a list by appending a dummy string to it and incrementing your index grabber by 1 every time doesn't work if you need to wrap this list in any way (since you would increment past the list boundary and then end up grabbing the dummy value). Explicitly casting it to a list might work, but both sides of a ternary _must_ be the same type, so you can't be like "this value is either (string) the default nat gateway, or index 0 of (list) the actual list of nat gateways, or (list) the fake empty list I made so you wouldn't error for no reason".

Basically we have like 50 instances of "join("", list(""))" and "element(join(split(concat())))" in all of our "reusable modules", and the project has gone from "hey wow, this syntax is kind of messy sometimes" straight to "this is unreadable garbage that is impossible to maintain and we're not doing it anymore". For a CloudFormation comparison: you would just use AWS::NoValue when necessary and then be able to actually do your job without spending a full 1/3rd of your day combing through github issues from 2016.

12 rats tied together
Sep 7, 2006

Votlook posted:

What is a good way to manage ssh access to ec2 servers?

The way I've seen this done in the past is cloud-init at launch to get a base set of keys on the instance, and then ansible playbooks take over and ensure that authorized_keys lists are up to date for everyone or everything that should be using the machines.

In an ideal world you would just put whatever key your closest compliant AWX server uses and call it a day: if you have a new hire on a team that should have admin access to a server, that new hire can just trigger an AWX playbook run from master after merging in their public key. If you aren't using AWX, you just have whoever is helping onboard them run the playbook from master after merging in the new public keys.

IMHO this particular situation is like one of the textbook reasons not to go overboard on baking amis for every type of change. I actually hate baking amis a lot.

Votlook
Aug 20, 2005

JHVH-1 posted:

I don’t even like having devs on machines if I can help it. It kind of encourages the whole keeping-servers-as-pets-instead-of-cattle thing, and makes things harder later on when you have to figure out what random config change some dev made that isn’t in your automation.

I’ve contemplated setting up a directory service and using that. For now we’ve started using ansible as a quick and dirty approach. You could probably do something with the ec2 config service as well.

One of the things I liked about OpsWorks was that it could generate temporary keys for you to get in and do some work. The privs were per AWS user and on a per-stack basis, so they were easy to dole out.

Yeah we discourage devs from messing around with config on servers, but every now and then they need to ssh in to debug stuff when poo poo hits the fan.

I'm going to use sshd's AuthorizedKeysCommand to pull the keys from S3 every time someone wants to log in; that should do it for now.
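For reference, an AuthorizedKeysCommand helper along those lines could look like this. sshd runs the command with the username and reads public keys from its stdout; the bucket name and key layout here are assumptions:

```python
#!/usr/bin/env python3
# Hypothetical AuthorizedKeysCommand helper: sshd invokes it with the
# username and reads public keys from stdout. Wired up in sshd_config as:
#   AuthorizedKeysCommand /usr/local/bin/s3-keys %u
#   AuthorizedKeysCommandUser nobody
# The bucket name and key layout are assumptions.
import sys
import boto3

def main() -> None:
    user = sys.argv[1]
    s3 = boto3.client("s3")
    try:
        obj = s3.get_object(Bucket="example-ssh-keys", Key=f"keys/{user}.pub")
    except s3.exceptions.NoSuchKey:
        return  # no keys in the bucket -> sshd denies pubkey auth
    sys.stdout.write(obj["Body"].read().decode())

if __name__ == "__main__":
    main()
```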

Votlook
Aug 20, 2005

12 rats tied together posted:

IMHO this particular situation is like one of the textbook reasons not to go overboard on baking amis for every type of change. I actually hate baking amis a lot.

Yeah, I am finding this out now; looks like I will be baking at least 40 AMIs to remove a stupid ssh key.

fluppet posted:

And don't forget the classic

Hey CF, you've failed to update a stack, failed to rollback, and I can't delete that stack as it's running production workloads. How long will it take for support to reset that state?

You should use *immutable stacks*

(I am not kidding, this is how I update critical stuff ATM: create a new stack, slowly switch traffic to the new stack using weighted DNS records, then delete the old stack)
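A sketch of that weighted-DNS cutover with boto3, assuming one weighted CNAME per stack in a Route 53 hosted zone (the zone id, record name, and targets are placeholders):

```python
# Sketch of a weighted-DNS cutover between stacks, assuming one weighted
# CNAME per stack in a Route 53 hosted zone.
import boto3

r53 = boto3.client("route53")

def set_weight(identifier: str, target: str, weight: int) -> None:
    r53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # placeholder
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "SetIdentifier": identifier,  # one record per stack
                "Weight": weight,             # relative traffic share
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }]},
    )

# Shift traffic gradually, then delete the old stack once it drains.
set_weight("old-stack", "old-stack-elb.example.com", 90)
set_weight("new-stack", "new-stack-elb.example.com", 10)
```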

12 rats tied together
Sep 7, 2006

https://www.hashicorp.com/blog/terraform-0-1-2-preview

4 years later we get an announcement that there will be a for loop, later this summer. Nice.

vanity slug
Jul 20, 2010

Squealed like a little girl at the announcement. Finally.

Cancelbot
Nov 22, 2006

Canceling spam since 1928

AND lazy evaluation of conditional results. The amount of voodoo shite we've had to pull because it evaluates both the true & false case regardless is infuriating.

Walked
Apr 14, 2003

Cancelbot posted:

AND lazy evaluation of conditional results. The amount of voodoo shite we've had to pull because it evaluates both the true & false case regardless is infuriating.

Yes. This is such a poo poo show now. I'm excited

Ape Fist
Feb 23, 2007

Nowadays, you can do anything that you want; anal, oral, fisting, but you need to be wearing gloves, condoms, protection.
How is AWS going to suit me as a small solo developer who's just baby-stepping into full-stack stuff? I've just cancelled my Azure account because I had the AUDACITY to be running 4 CosmosDB collections with 400 RU/s assigned to each collection, and my little baby webApp was going to cost £70 a month for 500kb of data sitting in a Db.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
GCP has the friendliest pricing model of the big three imo. AWS’ is Byzantine and will gently caress you if you’re not mindful about it.

Ape Fist
Feb 23, 2007

Nowadays, you can do anything that you want; anal, oral, fisting, but you need to be wearing gloves, condoms, protection.
I might look at that then.

JehovahsWetness
Dec 9, 2005

bang that shit retarded
GCP will also give you a $300 credit (for 1yr) on signup and their free-tier is reasonable, although no managed DBMS is in the free-tier: https://cloud.google.com/free/

JHVH-1
Jun 28, 2002
If you attend one of the AWS Summits you can often snag some credits to add to an account and combine it with free tier for 1 year.

I don't think DynamoDB costs as much as that Azure offering. Lambda (the serverless option) has quite a high threshold before it costs anything but you still pay for things like data storage if you need it.
Mainly you just have to check the pricing per service, or use their pricing calculator to add things up. I think you can also set up billing alarms to let you know when you might hit a personal limit.
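The billing alarm bit is real: CloudWatch exposes an EstimatedCharges metric once billing alerts are enabled on the account. A sketch, with a made-up threshold and SNS topic (billing metrics only live in us-east-1):

```python
# Sketch of a billing alarm like the one mentioned above. Billing metrics
# only exist in us-east-1 and must be enabled in the account first; the
# threshold and SNS topic ARN are made up.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
cw.put_metric_alarm(
    AlarmName="monthly-spend-over-20-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # 6 hours; billing data only updates a few times a day
    EvaluationPeriods=1,
    Threshold=20.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```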

For my personal site I originally had EC2 with a local mysql db, but it kept running out of memory. It was double the cost to move to the managed service, so I just ended up getting a dedicated VPS from DigitalOcean for a flat $5/month.

tracecomplete
Feb 26, 2017

Yeah - for my own projects I mostly use DigitalOcean and Terraform (which is a gong show, but Terraform always is) because I can't leverage what makes AWS useful as a solo developer.

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice
Preface: I went to Art School so I am an idiot when it comes to this stuff, so forgive my probably basic question.

I currently have a Linux EC2 instance running. It has nginx and gunicorn running a flask application on it. The app is a Tensorflow machine learning categorizer: you pass it a URL in a GET request, it does its thing and returns some JSON. No DB, no external resources or anything. It has a Let's Encrypt certificate on it, so it listens on https://myservice.mydomain.com

Once it gets to a sustained ~20 requests per second, it slows down a great deal, which I would like to not happen. So I am hoping to do some sort of auto-scaling where, when that machine gets busy, a copy can be made and traffic split between them. This seems to be exactly what Elastic Load Balancing is for, so I think that is the correct path to take.

I found this tutorial: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html

And I think I can probably bungle my way through it, but I want to clarify a couple things and ask about something not covered in the tutorial:

  • Am I on the right track here, or do I want to do something completely different?
  • Since my server is self-contained, I just make an AMI out of it and if / when instances of it are created, it will "just work"?
  • How do I get SSL set up on the load balancer? I found an article about using the Let's Encrypt cert, but I would have to manually re-upload it. In the comments, somebody links to this: https://aws.amazon.com/certificate-manager/?nc1=f_ls and it seems I can basically give Amazon my current SSL cert and they will magically replace it with one they manage somehow? If so, is this free / cheap?


Thanks in advance!

Red Mike
Jul 11, 2011
I can cover the SSL part, although be aware that a load balancer for this might be a bit overkill. It sounds more like you've got an issue/not enough resources on your instance. If scaling up is an option, consider that first.


Importing a third-party certificate into the certificate manager won't get it renewed for you; renewal won't happen automatically. It's solely a way to get third-party certs usable on load balancers in the same way Amazon certificates are used.

You can use the certificate manager to get Amazon-issued certificates (that you can't download to use on your own), which you can use in stuff like the load balancer, their CDN, etc. Just go into the certificate manager and go through the process of getting a new certificate. You'll need to prove you 'own' the domain via one of a couple of methods (DNS validation, email validation, etc; they're all explained there), or if you want to make it simpler in the future, just transfer the domain to Route 53. These certificates will renew on their own behind the scenes. You set it up once and just leave it (like Let's Encrypt when set up properly).

When you get that certificate, you can bind it to a load balancer, or a cloudfront zone, etc. The load balancer then serves HTTPS under an SSL certificate issued by Amazon, but you can set it up to talk to your instances over HTTP. That means the traffic between the load balancer and your instances is not encrypted, but is otherwise the same to an end-user.
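Roughly, that flow in boto3 looks like this: request a DNS-validated certificate, then attach it to an HTTPS listener on the load balancer. The ARNs and domain are placeholders, and the validation CNAME has to be created before the cert is issued:

```python
# Hedged sketch of the flow just described: request a DNS-validated ACM
# certificate, then attach it to an HTTPS listener. ARNs and the domain
# are placeholders.
import boto3

acm = boto3.client("acm")
cert_arn = acm.request_certificate(
    DomainName="myservice.mydomain.com",
    ValidationMethod="DNS",
)["CertificateArn"]

# ... add the validation record in DNS and wait for the cert to be ISSUED ...

elbv2 = boto3.client("elbv2")
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/app/my-alb/0123456789abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                          "123456789012:targetgroup/my-tg/0123456789abcdef",
    }],
)
```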

Caution though: depending on how your app works, you may have to wrangle it into functioning properly. This is because it'll be serving requests off port 80 over HTTP; as far as the user's concerned, though, it's running on HTTPS. I know plenty of web software that breaks down horribly under that setup (forum software, CMSes, etc), ending up with infinite redirects or refusing requests because they're not secure.

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.

Lumpy posted:

  • Am I on the right track here, or do I want to do something completely different?
  • Since my server is self-contained, I just make an AMI out of it and if / when instances of it are created, it will "just work"?
  • How do I get SSL set up on the load balancer? I found an article about using the Let's Encrypt cert, but I would have to manually re-upload it. In the comments, somebody links to this: https://aws.amazon.com/certificate-manager/?nc1=f_ls and it seems I can basically give Amazon my current SSL cert and they will magically replace it with one they manage somehow? If so, is this free / cheap?

You probably do want an ALB; you just need to figure out which CloudWatch metric to monitor and scale based on that.
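For example, a target-tracking policy on an Auto Scaling group behind the ALB; a sketch only, with a made-up group name, and 60% average CPU standing in for whichever metric actually tracks the bottleneck:

```python
# Hedged sketch: a target-tracking scaling policy on the Auto Scaling
# group behind the ALB. Group name and the 60% CPU target are assumptions.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="tensorflow-api-asg",  # hypothetical
    PolicyName="keep-cpu-near-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)
```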

If you make an AMI that will launch and respond on your ports without needing you to touch it, yes. You can bake all your stuff into the AMI, and/or use the instance user-data to bootstrap any actions that need to happen on launch.

If you want to use your own certificate you need to import it, but then you will need to manage it yourself (i.e. get a new cert when it expires). You could also do some old bullshit using a classic load balancer and terminating SSL on your instances, but there is no benefit to this; it just means you get to deal with updating the cert in your AMI instead of the load balancer. Ignore the renewing section of that blog you posted; you'll need to do this manually and upload the cert when it expires. You should probably just ditch the Let's Encrypt cert and use Certificate Manager so you don't ever have to do any of this.

Or just stop being a scrub "EC2 User" and go serverless: https://medium.com/tooso/serving-tensorflow-predictions-with-python-and-aws-lambda-facb4ab87ddd

Red Mike posted:

I can cover the SSL part, although be aware that a load balancer for this might be a bit overkill. It sounds more like you've got an issue/not enough resources on your instance. If scaling up is an option, consider that first.

If you just scale up, you can't have it automatically drop back down, and you end up paying for peak performance all the time.

Red Mike
Jul 11, 2011
That's a good point, although I would argue that you should weigh it against the ease of load balancing.

If it ends up being one of those bits of software that is difficult to load balance (or load balancer SSL terminate), and the cost is an extra $10/mo or something, you might consider it worth it to not spend weeks fixing things. Time is as much a resource as money.

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

Arzakon posted:

You probably do want an ALB; you just need to figure out which CloudWatch metric to monitor and scale based on that.

If you make an AMI that will launch and respond on your ports without needing you to touch it, yes. You can bake all your stuff into the AMI, and/or use the instance user-data to bootstrap any actions that need to happen on launch.

If you want to use your own certificate you need to import it, but then you will need to manage it yourself (i.e. get a new cert when it expires). You could also do some old bullshit using a classic load balancer and terminating SSL on your instances, but there is no benefit to this; it just means you get to deal with updating the cert in your AMI instead of the load balancer. Ignore the renewing section of that blog you posted; you'll need to do this manually and upload the cert when it expires. You should probably just ditch the Let's Encrypt cert and use Certificate Manager so you don't ever have to do any of this.

Or just stop being a scrub "EC2 User" and go serverless: https://medium.com/tooso/serving-tensorflow-predictions-with-python-and-aws-lambda-facb4ab87ddd


If you just scale up, you can't have it automatically drop back down, and you end up paying for peak performance all the time.

I was looking into Lambda and even read that blog post. Unfortunately, our model is 17 megs, so it seems you have to jump through all these hoops to get it working, and then in the comments somebody says they are doing image classification with a large model and it takes 52s per request. Maybe I'll give it a try and see how it performs, but I was turned off by those comments.

EDIT: also because he uses a totally different way of doing predictions than we do and I suspect it will take me 8237124 hours to figure out how to do what I need to do =)

EDIT Part 2: Yeah, just tried and got: An error occurred: PredictLambdaFunction - Unzipped size must be smaller than 262144000 bytes (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: XXXXXXXXX-XXXX-XXXXXXXXXX). So I will make an AMI and a bigger instance and load test that!

Lumpy fucked around with this message at 15:49 on Jul 17, 2018

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.
Yeah it looks like the newer versions of TensorFlow are too big to package for Lambda and you have to spend some time trimming the fat. And if you want GPUs you are out of luck. If you want to be in with the cool kids you need to figure out EKS I guess.

Or just launch EC2 because it does what you want it to.

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

Arzakon posted:

Yeah it looks like the newer versions of TensorFlow are too big to package for Lambda and you have to spend some time trimming the fat. And if you want GPUs you are out of luck. If you want to be in with the cool kids you need to figure out EKS I guess.

Or just launch EC2 because it does what you want it to.

I don't want to be a cool kid, I just want this to not suck so I can get back to doing the other 97104 things I have to get done... :smith:

Thank you and Red Mike for the info though. Very helpful.

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

Red Mike posted:

I can cover the SSL part....

So if I do this and get a cert / SSL on the load balancer, then when my front-end makes a call to https://myservice.blah.com, the request from *and* response to the client will be encrypted, but the traffic from the load balancer to the ec2 instance will not? (This is fine, I just don't want any CORS issues, because the front-end is https and doing AJAX to http will cause issues.)

Red Mike
Jul 11, 2011

Lumpy posted:

So if I do this and get a cert / SSL on the load balancer, then when my front-end makes a call to https://myservice.blah.com, the request from *and* response to the client will be encrypted, but the traffic from the load balancer to the ec2 instance will not? (This is fine, I just don't want any CORS issues, because the front-end is https and doing AJAX to http will cause issues.)

Correct. From the end-user (so from the browser) it's as if everything is HTTPS.

It's just on the app server itself that it knows it's not HTTPS, so it'll have to be configured to serve on port 80, non-secure (unless you want to have to get another certificate to encrypt app server <-> load balancer traffic which defeats the point).

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

Red Mike posted:

Correct. From the end-user (so from the browser) it's as if everything is HTTPS.

It's just on the app server itself that it knows it's not HTTPS, so it'll have to be configured to serve on port 80, non-secure (unless you want to have to get another certificate to encrypt app server <-> load balancer traffic which defeats the point).

Awesome. The traffic for this service is nothing worth encrypting... the need for https on this is just to make the client's ajax requests not complain, so reconfiguring the service to be non-ssl behind the balancer is fine. I'm guessing that I need to now modify my server to just be a default_server instead of listening for a domain? Or do I leave that as-is and it will be okay?

Red Mike
Jul 11, 2011
You can leave whatever options you already have alone, unless it breaks later (in which case you'll need to figure out why it's breaking, and that won't be easy to explain in a forum post).

The only thing you need to make sure of is that you're listening over port 80, with no SSL certificate. You then tie the load balancer to the instance's port 80.
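In ALB terms that tie-in is a target group: HTTP on port 80 with the instance registered. A hedged sketch with placeholder VPC and instance ids:

```python
# Minimal sketch of tying the load balancer to the instance's port 80:
# an HTTP target group with the instance registered. Ids are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
tg = elbv2.create_target_group(
    Name="flask-app-http",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/",  # point this at something cheap to serve
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 80}],
)
```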

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

Red Mike posted:

You can leave whatever options you already have alone, unless it breaks later (in which case you'll need to figure out why it's breaking, and that won't be easy to explain in a forum post).

The only thing you need to make sure of is that you're listening over port 80, with no SSL certificate. You then tie the load balancer to the instance's port 80.

Thank you again for all the help!

Thanks Ants
May 21, 2004

#essereFerrari


If anyone has 20 minutes to kill I'd appreciate some input on this presentation, because the point being made seems less "cloud isn't the right choice" and more "we built a legacy service and treated every client like a bespoke deployment, and were surprised when it didn't translate well for AWS".

https://www.youtube.com/watch?v=6iOYtH1Ya1E

Just from my non-expert eye, it seems like deploying a load of VPNs (and having to configure something on the far end) is a batshit insane way to achieve this vs. just using http/websocket and maybe some sort of push for real-time updates. Then deploying a screen becomes "connect to your wifi or plug into a cheap broadband service" and not "work with us to get a VPN tunnel set up, make sure the LAN address range doesn't overlap with what we're already doing, wow this is all expensive!"

Agrikk I assume you've had clients that bring a turd like this to you and assume :yaycloud: is just a place to run VMs for cheap?

The Fool
Oct 16, 2003


I'm just 2 minutes in and he already has the wrong definition of cloud.

e: he's obsessed with the cloud being just VMs on other people's hardware, ignoring literally every other service provided.
e2: "everything's working fine except we have hardware reliability problems"
e3: He does have a point that IPSEC endpoints in a cloud provider are a PITA
e4: I agree; websockets with TLS and DNS autodiscovery, and don't give a gently caress about IPSEC and IP addresses

The Fool fucked around with this message at 18:44 on Jul 31, 2018

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Thanks Ants posted:

Agrikk I assume you've had clients that bring a turd like this to you and assume :yaycloud: is just a place to run VMs for cheap?

Cut to Rutger Hauer's "I've seen things..." Bladerunner monologue.


Typically customers say "Hold my beer, I got this" about their enterprise migration, and our response is lukewarm at best. We've seen customers go the lift-and-shift route countless times and then burn their budget to the ground. It's one thing for customers to do lift and shift because datacenter contracts are coming due and then subsequently convert to cloud native. We like working with folks like this because they have a solid plan and are open to suggestions.

The PITA customers do a 1-for-1 migration, then come crying that "AWS isn't cheaper!" and our response is always "You are doing it wrong."


We literally beg customers to listen to us. It is so hard having the aggregate experience of millions of account-years at our fingertips and having know-it-all customers ignore all of our cloud best practices because they deem themselves unique snowflakes who "do IT different".


Thanks Ants
May 21, 2004

#essereFerrari


It's good how he's sort of lost track of the original aims of moving out of AWS (hitting scalability issues, 'needing' VRF for the dogshit mess that is the networking) and then has to shift the goalposts when the first question comes in.
