|
Rex-Goliath posted:Hi everyone! Total AWS newbie here. I'm a consultant working for a niche NoSQL database company and we're in the middle of pivoting our customers into using AWS rather than running their own hosting, or at least providing them the option. I'm currently responsible for bird-dogging the AWS training program to find out which courses will be most relevant for our consultants to get them up to speed, as well as what order they should be taken in.

Learn Terraform alongside it, unless you are going into k8s. It's the de facto platform for spinning up resources repeatedly in AWS. AWS will default to pushing CloudFormation, but TF is better (although CF is catching up). It'll probably behoove you to get a skeleton deployment in Terraform at some point anyway; it's something I look for when I'm trying new things.
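For reference, a "skeleton deployment" in Terraform can be tiny. A minimal sketch in modern HCL; the region, bucket name, and tags here are all made-up placeholders:

```hcl
# Minimal Terraform skeleton -- everything here is a placeholder.
provider "aws" {
  region = "us-east-1"
}

# One trivial resource so `terraform plan`/`apply` has something to manage.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"

  tags = {
    ManagedBy = "terraform"
  }
}
```

Run `terraform init` then `terraform plan` in the directory containing this file to see the workflow end to end.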
|
# ? May 29, 2018 21:27 |
|
|
# ? May 21, 2024 13:52 |
|
I disagree that Terraform is better and in almost every situation I prefer CloudFormation. CloudFormation makes a point of being mind-numbingly simple; any sort of complexity is offloaded to other tools. And this is probably the right choice, given the situation they've got.

On the other hand, we have Terraform, the result of Hashicorp running away from their (previously smart) code-as-configuration tooling around Vagrant, managing to be both complex and complicated while not giving you the tools to solve your problems effectively. Terraform creates new problems around state management (yeah, even with a S3 bucket) and the language and tooling are way too stupid. I don't trust the Terraform team to write code that acts correctly--multiple clients of mine independently coined the term "terrafucked" for when something goes wrong and shreds your TF state, I've seen cases where they (incorrectly) change code in a validation so existing states no longer work, and I've seen resource creation failures cascade into data being lost from the tfstate because of race conditions.

I am biased; I wrote a DSL and framework around CloudFormation...but that's because I tried to do it for Terraform first and doing so sucked. (Ha, I thanked Hashicorp in the README. I was naive.) But I have clients who use CFN and clients who use Terraform, and the clients who have the most problems, and the ones for whom working is markedly more frustrating, are the ones using Terraform.

There are situations where you have to use Terraform--if you are using multiple cloud providers (meaning more than one of GCE, AWS, and Azure, I don't exactly consider DigitalOcean a meaningful cloud provider) the pain of cross-cloud orchestration is greater than the pain of using Terraform. But if you're using a single cloud provider, the native solutions (CloudFormation, Google Cloud Deployment Manager, Azure Resource Manager) are, IMO, the way to go.
tracecomplete fucked around with this message at 00:16 on May 30, 2018 |
# ? May 30, 2018 00:12 |
|
I pushed terraform in my current role after a brief argument where I tried to convince everyone that cloudformation + ansible was better. It's really awful. I wouldn't even say that it's catching up to cloudformation; it's actually getting worse IMO. For context, we're using it for basically every AWS region, AWS China, Alibaba cloud, and a few other ancillary services.

I'm sure if you're just using it for a relatively simple stack, or (better yet), if you don't have to manage the entire AWS account and you just get a state path from your operations team, it's wonderful. If you're in more of a traditional role where you manage the entire account alongside a security team, definitely go with cloudformation.

If you are stuck using Terraform you can just use cloudformation anyway: https://www.terraform.io/docs/providers/aws/r/cloudformation_stack.html and you should totally do this every chance you get.

edit: if you spend any serious amount of time with Terraform you'll find that:
These problems are in addition to the fact that Terraform often lags behind cloudformation in terms of what resources it supports. It's really, really hard to recommend terraform and I sincerely hope you do not use it if you can avoid it. So much so that I wrote this huge effort post. 12 rats tied together fucked around with this message at 00:54 on May 30, 2018 |
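The escape hatch linked above looks roughly like this; a sketch only, with the stack name, template path, and parameter invented for illustration:

```hcl
# Sketch: driving a CloudFormation stack from inside Terraform.
# Stack name, template file, and parameters are placeholders.
resource "aws_cloudformation_stack" "network" {
  name          = "networking-stack"
  template_body = file("${path.module}/network.yaml")

  parameters = {
    VPCCidr = "10.0.0.0/16"
  }
}
```

CloudFormation then owns the actual resource lifecycle; Terraform only tracks the stack itself in its state.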
# ? May 30, 2018 00:21 |
|
If you are going to use Terraform, the only long-term-tenable option I've seen is to do everything in a single monolithic system. It's a double-edged sword, in that it's easier to gently caress your state, but it also keeps that state in one place and usable for analysis and figuring out what the hell is what and who owns what. Terraform modules are kind of crap, but you should use them to constrain individual bits of awful to something a little better-scoped and you can separate those out as you prefer (separate repos, etc.). But have a single source of state.
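In practice "a single source of state" mostly comes down to one shared backend configuration. A sketch, with bucket, key, and table names invented:

```hcl
# Sketch: one shared S3 backend so all state lives in a single place.
# Bucket, key, and table names are placeholders.
terraform {
  backend "s3" {
    bucket         = "acme-terraform-state"
    key            = "infra/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # state locking, so two applies can't race
    encrypt        = true
  }
}
```

The DynamoDB lock table is what stops two people from applying against the same state at once, which is one of the easier ways to get terrafucked.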
|
# ? May 30, 2018 00:32 |
|
Totally agreed. However, the hashicorp enterprise team that got in touch with us was really confused when we told them we had a state monolith, and repeatedly referred us to the documentation and best practices guides where they say to create many small states. We even had some questions specific to our state monolith (such as: how are workspaces as a feature actually supposed to function?) that they would not answer until we changed our repo to be something they could actually comprehend.

It's really easy to gently caress your state up, really easy to squash changes someone else made, and if you work in an environment where terraform is optional (god help you if so), you get to see every instance of a manual change someone ever makes to something that was ever in TF. It's still better than using modules the way they tell you to, though.
|
# ? May 30, 2018 00:58 |
|
I'm surprised so many folks have had trouble with Terraform. We went the Cloudformation route with our own custom DSL and it's been a nightmare to build out and maintain. We began migrating to Terraform and more than 1/2 of our accounts are switched over and man, it's been like night and day.

I understand some of the issues with Terraform but imo they kind of miss the point. HCL isn't really code. It's a config language like yaml or ini. You wouldn't make your yaml DRY or have a monolithic repo of all your config files because that gets awful and cumbersome.

We've got a great system where we have a repo of company-approved modules that implement best practices and patterns as well as a repo per account or grouping of accounts for infrastructure. Mostly we isolate workspaces by region although we have a "global" region for account configuration and non-region scoped IAM stuff. I've never had a problem with the remote state loving itself and while some of the resources have weirdly inconsistent inputs and outputs that's a small price to pay to get the gently caress away from the awful dumpster fire that has been Cloudformation.
|
# ? May 30, 2018 03:34 |
|
Also why in God's name would you stand up individual EC2 instances with Terraform? Use ASGs and either control the deployment via a tool like Spinnaker or manage the group and not the individual instances.
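A minimal sketch of the group-not-instances approach; the AMI ID, subnets, and sizes below are placeholders:

```hcl
# Sketch: manage the ASG, not individual EC2 instances.
# AMI ID, instance type, subnets, and sizes are placeholders.
resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = "ami-12345678"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true # new LC comes up before the old one is destroyed
  }
}

resource "aws_autoscaling_group" "app" {
  name_prefix          = "app-"
  launch_configuration = aws_launch_configuration.app.name
  min_size             = 2
  max_size             = 6
  vpc_zone_identifier  = ["subnet-aaaa", "subnet-bbbb"]
}
```

Rolling out a new AMI then becomes "change `image_id`, apply, let the group cycle" instead of hand-managing instances.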
|
# ? May 30, 2018 03:36 |
|
Blinkz0rz posted:I've never had a problem with the remote state loving itself
|
# ? May 30, 2018 04:07 |
|
Terraform is a pain, but its biggest sin is that it doesn't abide by semantic versioning. Pin your state-producing base configs (vs reusable modules) to a specific version so people don't bump your state version after they upgraded Terraform on their machine, and you're less likely to break things.

I'll shill kitchen-terraform here: https://github.com/newcontext-oss/kitchen-terraform . This is a set of Test Kitchen plugins that lets you test Terraform changes by converging them in a separate workspace and testing the results with any RSpec-based tests (Inspec, Serverspec, AWSpec, etc.)

A decent CI pipeline can be set up for Terraform. Run kitchen-terraform on PRs, then post a plan against production back to Github. The terraform plan command (run with -detailed-exitcode) exits with code 2 if there are changes to be made to the infrastructure, so you can build some approval rules around that. Once the PR is merged, run kitchen-terraform -> plan -> wait for approval -> apply on master. Or skip the approval if you're the yolo type. Make sure state changes only occur via the pipeline and most of your Terraform headaches go away.
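The exit-code gating described above can be sketched as a small shell function. This is only an illustration of the approval logic; the actual `terraform plan` invocation at the bottom is commented out since it needs real infrastructure:

```shell
# Sketch: gate CI applies on `terraform plan -detailed-exitcode`.
# With -detailed-exitcode, plan exits 0 (no changes), 1 (error), 2 (changes pending).
plan_gate() {
  case "$1" in
    0) echo "no changes; skipping apply" ;;
    2) echo "changes detected; waiting for approval before apply" ;;
    *) echo "plan failed" >&2; return 1 ;;
  esac
}

# In a real pipeline (paths and plan-file name are hypothetical):
#   terraform plan -detailed-exitcode -out=tfplan
#   plan_gate $?
```

The approval step itself (posting the plan to the PR, waiting for a human) is whatever your CI system supports; the function only decides which branch of the pipeline to take.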
|
# ? May 30, 2018 04:34 |
|
Idk maybe I've been lucky? I'm not sure what a remote state getting hosed up looks like.
|
# ? May 30, 2018 04:48 |
|
Blinkz0rz posted:I'm surprised so many folks have had trouble with Terraform. We went the Cloudformation route with our own custom DSL and it's been a nightmare to build out and maintain. We began migrating to Terraform and more than 1/2 of our accounts are switched over and man, it's been like night and day.

YAML anchors exist, so you could, and totally should, make your YAML DRY. There are tons of really good reasons to make your infrastructure-as-code DRY. Agree that Terraform is not code, but it not being code is basically a regression from other systems (including Cloudformation) that either are functionally pseudocode or are generally augmented with an orchestration tool that makes them into literally code. For example: Ansible, SparkleFormation, an actual templating engine, or any of the number of DSLs that exist for Cloudformation.

I've spoken with a number of people about the perceived limitations of Cloudformation, and have been involved in protracted discussions at two employers now about the relative merits of both TF and Cloudformation, and my sincere belief is that a struggle with Cloudformation is genuinely a lack of knowledge or poor tooling more than it is a problem with the service itself. Working with purely Terraform in a reasonably complex environment for the past 14 months or so has only cemented that belief.

Deciding to write your own DSL is a huge, huge red flag for me, so unfortunately I'm not terribly surprised it didn't go well for you. Are you sure it was Cloudformation and not just poor implementation? Do you have any details? If you're logging into the web interface and clicking Create New Stack From Designer, I agree it's a pretty terrible service. If you're taking it any further than that, though, it's seriously best in class by a fair margin. Especially compared to Terraform. IMO anyway.

edit: quote:I'll shill kitchen-terraform here: https://github.com/newcontext-oss/kitchen-terraform . This is a set of Test Kitchen plugins that lets you test Terraform changes by converging them in a separate workspace and testing the results with any RSpec-based tests (Inspec, Serverspec, AWSpec, etc.)

What I actually need to test for is stuff like:
Does our us-east-1e have capacity for this instance type?
Does this route table change break connectivity from service A to service B?
Will this PR make us exceed our VPN connection account limit?
Are we out of available addresses in this subnet?
Is merging this PR going to make secops really really mad because we broke a compliance control?

More broadly, I just don't think unit tests are useful for infrastructure as code. IMHO canary environments, rolling deploys, and really robust monitoring and alerting give you way higher ROI past a certain point of complexity, and unit tests generally just become a waste of time. I'm really interested in being proven wrong here though. 12 rats tied together fucked around with this message at 05:36 on May 30, 2018 |
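To make the YAML-anchors point concrete: this is plain YAML, with all keys invented for illustration, using an anchor plus the widely supported merge-key extension:

```yaml
# Sketch: DRY YAML via an anchor (&) and the merge key (<<).
# All keys and values here are invented.
defaults: &instance_defaults
  instance_type: t2.micro
  monitoring: true

web:
  <<: *instance_defaults   # pulls in both default keys
  count: 4

worker:
  <<: *instance_defaults
  instance_type: c5.large  # override just one key
```

Most YAML parsers (including Ruby's and PyYAML) resolve the merge key, though it is technically an extension rather than core YAML.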
# ? May 30, 2018 05:17 |
|
This conversation prompted me to double-check and discover that google cloud actually does have a cloudformation analogue, so ta very much y'all.
|
# ? May 30, 2018 06:00 |
|
12 rats tied together posted:Totally agreed, however the hashicorp enterprise team that got in touch with us was really confused when we told them we had a state monolith and repeatedly referred us to the documentation and best practices guides where they say to create many small states. We even had some questions specific to our state monolith (such as: how are workspaces as a feature actually supposed to function?) that they would not answer until we changed our repo to be something they could actually comprehend.

Vagrant is good. Packer is good. The rest of their stack ranges from bad (Terraform) to mediocre (Consul) to superfluous (Nomad), and the desperate way they try to drag you into their mess is really annoying.

Blinkz0rz posted:I understand some of the issues with Terraform but imo they kind of miss the point. HCL isn't really code. It's a config language like yaml or ini. You wouldn't make your yaml DRY or have a monolithic repo of all your config files because that gets awful and cumbersome.

Configuration via arbitrarily specified static files creates a less composable, more fragile system for competent administrators. It's marginally harder for keyboard-pounding systems monkeys to work with, but we're here to render those folks unemployed, and it creates a less complex system over time because you aren't forced to duct-tape together solutions from something like Chef (my CM tool of choice, because it's all code, but not having to use it is always better) or Ansible or ENTRYPOINTs in a Dockerfile which splat env vars into some random JSON file because That's What The Tool Allows. And, at the end of the day, if you have to support the aforementioned keyboard-pounding systems monkeys, you can just do a `YAML.safe_load` or whatever. HCL is a bad programming language that tries to be a configuration language and isn't good at that, either.
Doc Hawkins posted:This conversation prompted me to double-check and discover that google cloud actually does have a cloudformation analogue, so ta very much y'all. GCDM is not as good as CFN, but again--it's better than Terraform by dint of having better tooling around it (SparkleFormation started as a CFN tool but does GCDM now), by being directly supported by the provider, by not making you manage state by yourself, and by not having Hashicorp in it. tracecomplete fucked around with this message at 06:16 on May 30, 2018 |
# ? May 30, 2018 06:13 |
|
12 rats tied together posted:It's come up a few times for me at work, I'm not wild about it. AWSpec is interesting in theory but kind of a waste of time IMHO. I'm sure it's useful in some contexts but in my experience anything I can verify with kitchen-terraform I'm already inheriting or reading from an environment.

It's true that if you're just writing specs to say "x y and z exist" you're just describing your infrastructure in two different DSLs. It's more of a way of defining your intent. For instance, you can say "security group foo has these 3 rules and exactly 3 rules" and that spec will break if a rule is changed OR if one is accidentally added. You can also reference most AWS resources by name with AWSpec, allowing you to make sure resources defined in your Terraform config are unique, since Terraform references them by ID and will not be aware of pre-existing resources.

It's also more geared towards reusable modules. Given a call to foo module with variables x, y, and z set this way, I expect this resulting infrastructure. Given another call with them set another way, I expect this different resulting infrastructure. When someone else changes your module, your spec can enforce your original intent.

quote:What I actually need to test for is stuff like: Does our us-east-1e have capacity for this instance type?
quote:Does this route table change break connectivity from service A to service B?
quote:Will this PR make us exceed our VPN connection account limit?
quote:Are we out of available addresses in this subnet?
quote:Is merging this PR going to make secops really really mad because we broke a compliance control?
quote:More broadly I think I would say that I just don't think unit tests are useful for infrastructure as code. IMHO canary environments, rolling deploys, and really robust monitoring and alerting give you way higher ROI past a certain point of complexity, and unit tests generally just become a waste of time.
I'm really interested in being proven wrong here though.
|
# ? May 30, 2018 21:55 |
|
This past page is like the most positive I've ever seen someone be about CloudFormation. I'm used to people complaining about CloudFormation getting your stack into a weird state that you can't get out of without petitioning AWS support and about not knowing what exactly applying a change is going to do when there's multiple stacks involved. Does having a loosely connected set of terraform configs referencing each other's outputs really get so unwieldy compared to CloudFormation? I thought that was kind of comparable to how you manage multiple stacks there, and generally a good idea to ~limit blast radius~. I guess it's hard for me to conceptualize what a comprehensive, nontrivial AWS account configuration looks like, and I've had a hard time getting into CloudFormation because it seems like everybody who talks about using it also has their own DSL they recommend, which doesn't make things less confusing.
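The loosely-connected-configs approach described here is usually wired together with the `terraform_remote_state` data source. A sketch in 0.12+ syntax, where outputs live under `.outputs`; the bucket, key, AMI, and output name are all invented:

```hcl
# Sketch: one config reading another config's outputs via remote state.
# Bucket, key, AMI, and output names are placeholders.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "acme-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_id
}
```

This is roughly equivalent to CloudFormation's cross-stack exports: the "network" config publishes outputs, and downstream configs consume them without sharing a state file.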
|
# ? May 30, 2018 23:23 |
|
I don’t mind CF; I just wish there was an editor that did linting beyond just ‘is valid JSON’.
|
# ? May 30, 2018 23:43 |
|
Vanadium posted:This past page is like the most positive I've ever seen someone be about CloudFormation. I'm used to people complaining about CloudFormation getting your stack into a weird state that you can't get out of without petitioning AWS support and about not knowing what exactly applying a change is going to do when there's multiple stacks involved. Fwiw our production load for one product is about 1000 instances across 80 or so microservices with backing persistence layers (rds, elasticache, etc), load balancers, IAM roles, etc. We're most of the way through a migration from CloudFormation to Terraform and it's night and day with how much easier it is to manage now. Echoing your sentiments; I've never seen people defend CloudFormation so vigorously as on this page.
|
# ? May 31, 2018 02:06 |
|
cloudformation was pretty mediocre until late 2016. since then they've added a ton of great features
|
# ? May 31, 2018 02:59 |
|
Vanadium posted:This past page is like the most positive I've ever seen someone be about CloudFormation. I'm used to people complaining about CloudFormation getting your stack into a weird state that you can't get out of without petitioning AWS support and about not knowing what exactly applying a change is going to do when there's multiple stacks involved.

that was a very dumb constraint, but they've since gotten to the point of "can't fix stuff, what resources do you want me to just ignore from now on?" and then you can get rid of the stack and do whatever manual cleanup you need. also the cross-stack/cross-region stuff is choice, and about the only thing I think it's ridiculous to be missing is the ability to map over a list of things (like available AZs into subnets) so you don't have to wrestle with a bunch of if-conditionals for 2/3/4/5 az regions.

more philosophically, i think cloudformation is really great for infrastructure but not so great for application definitions. it always feels like as an app developer you really want all of the implementation details of load-balancers, dns, etc. abstracted away from you and just want to focus on what your application is and what it needs.
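For what it's worth, the AZ pain can at least be reduced without conditionals using the Fn::Select/Fn::GetAZs/Fn::Cidr intrinsics, assuming Fn::Cidr is available in your template version. A sketch, with the `Vpc` resource name invented; you still enumerate subnets by hand, since CloudFormation still can't loop:

```yaml
# Sketch: pick AZs and carve CIDRs by index instead of hardcoding them.
# "Vpc" is a hypothetical resource elsewhere in the template.
SubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref Vpc
    AvailabilityZone: !Select [0, !GetAZs '']
    CidrBlock: !Select [0, !Cidr [!GetAtt Vpc.CidrBlock, 4, 8]]
SubnetB:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref Vpc
    AvailabilityZone: !Select [1, !GetAZs '']
    CidrBlock: !Select [1, !Cidr [!GetAtt Vpc.CidrBlock, 4, 8]]
```

`!Cidr [block, 4, 8]` splits the VPC block into four subnets with 8 host bits each; selecting by index keeps the two definitions from drifting apart.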
|
# ? May 31, 2018 14:55 |
|
CloudFormation is actually garbage and the last page of this thread has been hilarious watching people defend it.
|
# ? Jun 5, 2018 11:56 |
|
very stable genius posted:CloudFormation is actually garbage and the last page of this thread has been hilarious watching people defend it. Wow you really convinced me not to use CloudFormation by saying "its garbage"
|
# ? Jun 5, 2018 18:38 |
|
I like Terraform and find CloudFormation a bit verbose, but both are fine. Fight me
|
# ? Jun 5, 2018 20:38 |
|
Jesus, I feel dirty asking this question but here goes. If someone has come to me with a requirement to run a particular legacy application in a few locations (likely west coast, east coast, and London) that is qualified on bare metal, Hyper-V or VMware, am I insane for thinking that nested virt in Azure might tick this box nicely? I'm already familiar with Azure networking and if it needs ExpressRoute then loads of people offer it. The only alternative I can think of is managed VMware with someone like Rackspace, and gently caress that. AWS VMware would also do it but the pricing is way out of budget for this.
|
# ? Jun 5, 2018 21:27 |
|
I mean, it should work, but why not just install it in an azure vm directly? Is there a technical reason, or a "we need to make sure it is in a supported configuration" reason?
|
# ? Jun 5, 2018 21:30 |
|
The application is supplied as an ISO or an OVA, and upgrades involve mounting the ISO and using the (virtual) console. While I could deploy the thing into Hyper-V and then migrate it into Azure it basically paints me into a corner as far as future upgrades go.
|
# ? Jun 5, 2018 21:36 |
|
Thanks Ants posted:The application is supplied as an ISO or an OVA, and upgrades involve mounting the ISO and using the (virtual) console. While I could deploy the thing into Hyper-V and then migrate it into Azure it basically paints me into a corner as far as future upgrades go. This process doesn't seem too convoluted in AWS if you have an OVA: https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-image-prereqs
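Assuming the `vmimport` service role from that prereqs doc is already set up, the linked flow is roughly this; bucket and file names are placeholders:

```shell
# Sketch of the VM Import flow from the linked doc.
# Bucket and file names are placeholders; assumes the vmimport IAM role exists.

# 1. Upload the OVA to S3
aws s3 cp appliance.ova s3://my-import-bucket/appliance.ova

# 2. Kick off the import task
aws ec2 import-image \
  --description "legacy appliance" \
  --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=appliance.ova}"

# 3. Poll until the task completes and an AMI is produced
aws ec2 describe-import-image-tasks
```

The catch raised in the follow-up post still applies: this gets the appliance in once, but ISO-driven console upgrades afterwards are another story.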
|
# ? Jun 7, 2018 19:23 |
|
Yeah it's more the maintenance and upgrades that have to be done through the console with an ISO mounted. Exporting back to do the upgrade and then importing it again is a bit too much of a PITA, and also we'd completely gently caress over any chance of getting vendor support. This is like the only service that runs this badly and I'd rather just ditch it, but it's not my call.
|
# ? Jun 7, 2018 20:03 |
How useful is the AWS Certified Cloud Practitioner, and in the future, AWS Certified Sysops Administrator, to someone whose career focus is Windows infrastructure support/administration? My present position is secure but I'd like to at least be prepared for a meteor destroying the entire industry where I work (terrestrial radio D: ). It seems like I'd be better served with Office 365 but a lot of job listings mention that AWS familiarity is a good thing. I have never touched AWS, I am not a developer nor do I wish to join them, and I'd only want to be able to understand how an AWS environment would work alongside a Windows one.
|
|
# ? Jun 14, 2018 13:35 |
|
If you are looking for how AWS fits into your world, it is essentially just another place for virtual machines to go, so learning about AWS is equivalent to learning about VMware. SysOps will help you understand basic AWS functions from networking up to instances, how to build that using CloudFormation, and best practices for designing fault tolerant things.

MJP posted:like to at least be prepared for a meteor destroying the entire industry where I work
MJP posted:(terrestrial radio D: )
MJP posted:I am not a developer nor do I wish to join them

Consider that you may be speaking to us from inside the meteor. Jokes aside, everything is code over here. We are all "developers" because everything is an API. All your infrastructure is a block of JSON (or YAML if you are a communist). No one will hire you because you can go into the console and start a Windows image and keep it online. If they will, they won't pay you well and it will be a terrible place to work. Dozens of technologies like containers and serverless are reducing the need for people who care about operating systems. You can probably go down the Office 365 route and survive as a computer janitor forever but that sounds gross.
|
# ? Jun 14, 2018 15:35 |
|
MJP posted:How useful is the AWS Certified Cloud Practitioner, and in the future, AWS Certified Sysops Administrator, to someone whose career focus is Windows infrastructure support/administration? My present position is secure but I'd like to at least be prepared for a meteor destroying the entire industry where I work (terrestrial radio D: ). It seems like I'd be better served with Office 365 but a lot of job listings mention that AWS familiarity is a good thing. I have never touched AWS, I am not a developer nor do I wish to join them, and I'd only want to be able to understand how an AWS environment would work alongside a Windows one.

AWS has a special lounge you can hang out in at their events if you are certified. At last year's re:Invent they had arcade machines set up in it to match the RPG role-based certification theme.
|
# ? Jun 14, 2018 15:46 |
|
If your company is paying for that. Apparently tickets are in the thousands. I had no idea it was so expensive.
|
# ? Jun 14, 2018 21:52 |
|
jiffypop45 posted:If your company is paying for that. Apparently tickets are in the thousands. I had no idea it was so expensive.

Yeah, it seems kinda pricey, but it does include the whole week of the event, free meals if you stick to their schedule, and multiple free parties with booze. Plus any extra vendor parties you get into. Honestly, Vegas is tiring after 3 days though, so I don't plan on going again this year. We are going to try and do AnsibleFest in Austin instead because I'd probably learn more and it's only 2 days.

There are one-day expos at various cities that are completely free. It only costs money if you do the certification readiness courses or the other various boot camp stuff they have the day before. If you are new to AWS, though, the sessions at those events will help you learn a fair bit. The hands-on ones are fun too, like where you use Lambda to make an Alexa skill and whatnot.
|
# ? Jun 15, 2018 04:30 |
|
Can anyone help me understand my options for shutting down an EMR cluster and how billing works in that situation? We have a couple set up that we don't need currently but will need again in the future (4-6 months from now). They are fairly large. Right now, the underlying EC2 instances have been turned off, but with EMR's per-second billing, does EMR billing continue to accumulate even if the instances are turned off? I know I'll still be paying for the EBS volumes for the cluster's EC2 instances, but we probably won't convert those volumes to snapshots because that would require completely disassembling the EMR clusters. My question mainly is: do EMR charges still apply if all of the EC2 instances in the cluster are turned off and the EMR cluster shows as Terminated? The billing page doesn't make it clear.
|
# ? Jun 15, 2018 17:53 |
|
my bitter bi rival posted:Can anyone help me understand my options for shutting down an EMR cluster and how billing works in that situation? We have a couple set up that we don't need currently but will need again in the future (4-6 months from now). They are fairly large. Right now, the underlying EC2 instances have been turned off, but with EMR's per-second billing, does EMR billing continue to accumulate even if the instances are turned off? I know I'll still be paying for the EBS volumes for the cluster's EC2 instances, but we probably won't convert those volumes to snapshots because that would require completely disassembling the EMR clusters. My question mainly is: do EMR charges still apply if all of the EC2 instances in the cluster are turned off and the EMR cluster shows as Terminated? The billing page doesn't make it clear.

EMR is based on EC2 charges, as you've noted, so if the EMR cluster shows as terminated and all instances are stopped or terminated then there are no charges. Any EBS volumes that exist will still incur a charge, also as you have noted. While an EMR cluster is on or active, you are charged per second for each active node in the cluster. The power of EMR is that you turn it on, run a monster batch job on a zillion instances in a few minutes, then shut it all down when you are done, costing you (minutes x nodes) of compute time rather than having a bunch of servers sitting around idle.

Also note: you shouldn't be keeping data on EBS for an EMR cluster after it is shut down. When the cluster is done doing what it needs to do, you should run a final export of all data and state to an S3 bucket, then kill everything related to the cluster. In 4-6 months when you need it again, launch the cluster and import the data from S3 as an initial step. Storing data on S3 is orders of magnitude cheaper than storing it in EBS.
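The transient-cluster pattern described above looks roughly like this with the AWS CLI; the release label, instance sizes, and S3 paths are all placeholders:

```shell
# Sketch: launch a transient EMR cluster that tears itself down after its steps.
# Release label, sizes, job, and S3 paths are placeholders.
aws emr create-cluster \
  --name "batch-job" \
  --release-label emr-5.13.0 \
  --use-default-roles \
  --applications Name=Spark \
  --instance-type m4.xlarge \
  --instance-count 20 \
  --steps 'Type=Spark,Name=job,Args=[s3://my-bucket/job.py]' \
  --log-uri s3://my-bucket/emr-logs/ \
  --auto-terminate   # shut everything down when the last step finishes
```

With `--auto-terminate` the cluster exists only while the steps run, so there is nothing left accruing EC2 or EBS charges afterwards; inputs and outputs live in S3 between runs.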
very stable genius posted:CloudFormation is actually garbage and the last page of this thread has been hilarious watching people defend it. Care to tell me why? I have many customers using CF to entirely automate their environments with great success. I'm not saying you are wrong, I'd just like to know how CF failed for you. Agrikk fucked around with this message at 18:34 on Jun 15, 2018 |
# ? Jun 15, 2018 18:24 |
|
Agrikk posted:Care to tell me why? I have many customers using CF to entirely automate their environments with great success. I'm not saying you are wrong, I'd just like to know how CF failed for you. Hey CF, can you tell me what's going to change if I run this template? lol nope Hey CF, why is this job running for so long? Well, you forgot to define something but instead of failing immediately on this missing piece of required data I'm gonna spin for a long time and fail after 10 minutes. lol Hey CF, why is it that if a stack creation fails I need to go manually delete the stack before I can run it again? Because I loving suck.
|
# ? Jun 15, 2018 19:37 |
|
very stable genius posted:Hey CF, can you tell me what's going to change if I run this template? lol nope And don't forget the classic Hey CF, you've failed to update a stack, failed to rollback, and I can't delete that stack as it's running production workloads. How long will it take for support to reset that state?
|
# ? Jun 15, 2018 21:00 |
|
Agrikk posted:EMR is based on EC2 charges as you've noted. So if the EMR cluster shows as terminated and all instances are stopped or terminated then there are no charges. Any EBS volumes that exist will still incur a charge, also as you have noted. While an EMR cluster is on or active, you are charged per the minute for each active node in the cluster. thank you for your advice
|
# ? Jun 15, 2018 22:54 |
|
very stable genius posted:Hey CF, can you tell me what's going to change if I run this template? lol nope I thought it had this now.
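It does: change sets cover the "what will this template do" question. A sketch of the flow, with the stack and file names invented:

```shell
# Sketch: preview a template's effect with a change set before executing it.
# Stack name and template file are placeholders.
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name preview \
  --template-body file://template.yaml

aws cloudformation wait change-set-create-complete \
  --stack-name my-stack --change-set-name preview

# Lists exactly which resources would be added, modified, or replaced
aws cloudformation describe-change-set \
  --stack-name my-stack --change-set-name preview
```

If the diff looks right, `aws cloudformation execute-change-set` applies it; otherwise delete the change set and nothing has touched the stack.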
|
# ? Jun 16, 2018 04:12 |
|
very stable genius posted:Hey CF, can you tell me what's going to change if I run this template? lol nope +1’d on a couple of feature requests. Thank you!
|
# ? Jun 18, 2018 04:55 |
|
|
|
What's up with the X1-series' lovely disk performance? I know it's supposed to be for in-memory applications, but it's also an awesome budget SQL server (R-series has too many cores so the licensing cost fucks ya). I mean, you're supposed to use RDS, but some of us are still stuck in medieval times.
|
# ? Jun 18, 2018 18:58 |