Methanar
Sep 26, 2013

by the sex ghost
I'm mostly talking out of my rear end here, but I know of at least one company that saves like 7 figures a year because they moved their baseload 1000+ node, many, many petabyte Cassandra cluster out of AWS and onto their own hardware.

EBS is expensive, disks are cheap :shrug:

quote:

- You're saving a huge amount of per-port costs, data center networking is expensive especially at 25/40/100G and modern hyperconverged systems use a lot of ports

Datacenter networking is like 5% of the price of cloud WAN access.

My company would have been absolutely bankrupt a long time ago if we paid AWS internet egress rates for everything

Methanar fucked around with this message at 02:16 on Oct 19, 2018


CLAM DOWN
Feb 13, 2007




Methanar posted:

I'm mostly talking out of my rear end here

I mean, yeah.

The Fool
Oct 16, 2003


Methanar posted:

I'm mostly talking out of my rear end here, but I know of at least one company that saves like 7 figures a year because they moved their baseload 1000+ node, many, many petabyte Cassandra cluster out of AWS and onto their own hardware.

EBS is expensive, disks are cheap :shrug:

Economy of scale is a real thing.

xsf421
Feb 17, 2011

H110Hawk posted:

Look sir/ma'am the spreadsheet aws made for us assuming we pay full retail for Dell servers with the top of the line warranty, raid cards, and redundant power supplies says we save money hand over fist if we do all up front reserved instances. Barely.

AWS sold my company on this, and when our architects did the math they forgot to include having things in multiple AZs/regions, so we're already at 1.5-2x what was projected.

H110Hawk
Dec 28, 2006

abigserve posted:

In my experience there is a lot of creative accounting that goes into the "cloud is too expensive" calculations. They generally omit the facts that:

- You're saving a huge amount of per-port costs, data center networking is expensive especially at 25/40/100G and modern hyperconverged systems use a lot of ports

In my experience, when you do a full TCO the cloud is handily twice as expensive beyond 100kW. And Cumulus ("whitebox") switches can get you 32*4 25G ports for like $15-20k depending on how Mellanox you want to go. If you need (or want) a full line-rate 25G Clos setup it's going to add up, but not as much as you might think.

Now if you have a highly cyclical workload and are actually willing to let autoscaling do its thing you can save a mountain of money. Or do your map reduce jobs using spot instances of like 20 different types.
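Back-of-the-envelope on the 32*4 figure above, assuming it means 32 100G ports broken out into 4x25G each (which is how these Mellanox-based whitebox switches are usually counted — an assumption, not something the post spells out):

```python
# Rough per-port cost for a 32x100G whitebox switch broken out to 4x25G
# per port, using the $15-20k price range quoted in the post.
PORTS_100G = 32
BREAKOUT = 4                        # 4x25G per 100G port via breakout cables
ports_25g = PORTS_100G * BREAKOUT   # 128 logical 25G ports

for switch_cost in (15_000, 20_000):
    print(f"${switch_cost:,}: ${switch_cost / ports_25g:.2f} per 25G port")
```

Even at the high end that works out to roughly $150 per 25G port of switch hardware, which is the kind of per-port cost being weighed against cloud networking charges.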

abigserve
Sep 13, 2009

this is a better avatar than what I had before

H110Hawk posted:

In my experience, when you do a full TCO the cloud is handily twice as expensive beyond 100kW. And Cumulus ("whitebox") switches can get you 32*4 25G ports for like $15-20k depending on how Mellanox you want to go. If you need (or want) a full line-rate 25G Clos setup it's going to add up, but not as much as you might think.

Now if you have a highly cyclical workload and are actually willing to let autoscaling do its thing you can save a mountain of money. Or do your map reduce jobs using spot instances of like 20 different types.

True, but it's still a cost that is often not factored in.

Obviously some stuff is impossible to move to the cloud due to architectural limitations. However, it has been my experience that this is a tiny, tiny subset of applications, like Methanar's massive video streaming service, and it gets used as an excuse for not moving bullshit that could be entirely abstracted into Lambda functions or run as SaaS, like Elasticsearch databases, while the time and usability savings are never considered.

Methanar
Sep 26, 2013

by the sex ghost
I just did some math. I can buy 10gbps to my datacenter for $1500/month, including things like IX port fees, Equinix line maintenance BS, switch port costs, plus a bit more for round numbers. If I run it 24/7 at 10gbps to cover my baseload, I'm paying

0.01388888888889 per gb

AWS used to charge you between .12 and .08 per GB egressed as your volume increased. I just checked and they've actually since dropped it to .05 per GB past 150TB/month

So what used to be 10-8x more expensive is now only 9-5x more expensive. This doesn't even include what they used to charge for ingress traffic. This is just for egress.

Over the last 2 years the amount of traffic I've pushed to the internet is measured in hundreds of petabytes. So it's still bank-breaking on bandwidth costs alone. That's a lot of money you need to make per user to even come close to breaking even. Scaling up doesn't do you much good if being larger just means you hemorrhage money faster.
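For anyone who wants to redo the arithmetic: a saturated 10 Gbps link moves about 3.24 PB in a 30-day month, so the per-GB figure depends heavily on the amortization window (the $0.0139 number above corresponds to a single day's transfer). A quick sketch, assuming the $1,500/month covers the link running flat out all month:

```python
# Flat-rate transit vs. AWS tiered egress, per GB, using the numbers
# from the post above. Assumes a 10 Gbps link saturated 24/7 for a
# 30-day month and decimal (10^9-byte) gigabytes.
GBPS = 10
LINK_COST_PER_MONTH = 1_500
SECONDS_PER_MONTH = 86_400 * 30

gb_per_month = GBPS / 8 * SECONDS_PER_MONTH   # bytes moved, in GB
cost_per_gb = LINK_COST_PER_MONTH / gb_per_month

print(f"{gb_per_month:,.0f} GB/month -> ${cost_per_gb:.5f}/GB")
for aws_rate in (0.12, 0.08, 0.05):
    print(f"AWS at ${aws_rate}/GB is {aws_rate / cost_per_gb:.0f}x the flat-rate link")
```

However you slice the amortization, AWS's per-GB egress rates come out at large multiples of flat-rate transit.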

Methanar fucked around with this message at 03:15 on Oct 19, 2018

H110Hawk
Dec 28, 2006

Methanar posted:

I just did some math. I can buy 10gbps to my datacenter for $1500/month, including things like IX port fees, Equinix line maintenance BS, switch port costs, plus a bit more for round numbers. If I run it 24/7 at 10gbps to cover my baseload, I'm paying

0.01388888888889 per gb

AWS used to charge you between .12 and .08 per GB egressed as your volume increased. I just checked and they've actually since dropped it to .05 per GB past 150TB/month

So what used to be 10-8x more expensive is now only 9-5x more expensive. This doesn't even include what they used to charge for ingress traffic. This is just for egress.

Over the last 2 years the amount of traffic I've pushed to the internet is measured in hundreds of petabytes. So it's still bank-breaking on bandwidth costs alone. That's a lot of money you need to make per user to even come close to breaking even. Scaling up doesn't do you much good if being larger just means you hemorrhage money faster.

:stare: cloudfront pricing is actually a lot better than their ec2/vpc egress for what sounds like a highly cachable workload.

Not that it's not still comically overpriced. $1500 all in for 10gbps is pretty good, especially inside equinix. (Though being on the DC ix is basically required on the east coast.)

Methanar
Sep 26, 2013

by the sex ghost

H110Hawk posted:

:stare: cloudfront pricing is actually a lot better than their ec2/vpc egress for what sounds like a highly cachable workload.

Not that it's not still comically overpriced. $1500 all in for 10gbps is pretty good, especially inside equinix. (Though being on the DC ix is basically required on the east coast.)

Hurricane electric and 2nd hand arista switches baby.

Unfortunately, what we do hasn't really been cacheable up until this point

H110Hawk
Dec 28, 2006

Methanar posted:

Hurricane electric and 2nd hand arista switches baby.

Unfortunately, what we do hasn't really been cacheable up until this point

Hah when only the worst will do and you already have cogent. Live a little, get oversubscribed GTT! How is video not cachable? Do you do video chat?

Methanar
Sep 26, 2013

by the sex ghost

H110Hawk posted:

Hah when only the worst will do and you already have cogent. Live a little, get oversubscribed GTT! How is video not cachable? Do you do video chat?

Video chat, plus some funky many-unique-streams, small-audience, live WebRTC restreaming.

Methanar fucked around with this message at 03:10 on Oct 19, 2018

H110Hawk
Dec 28, 2006

Methanar posted:

Video chat, plus some funky many-unique-streams, small-audience, live WebRTC restreaming.

So it's porn cams?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

CLAM DOWN posted:

You don't move to the cloud to save money.
I see that. I am just not seeing the other reason(s) for moving to the cloud if I have not developed my own application(s) that can scale up and down.

H110Hawk posted:

Look sir/ma'am the spreadsheet aws made for us assuming we pay full retail for Dell servers with the top of the line warranty, raid cards, and redundant power supplies says we save money hand over fist if we do all up front reserved instances. Barely.
right, I calculated based on my actual price, within a few percent.

abigserve posted:

In my experience there is a lot of creative accounting that goes into the "cloud is too expensive" calculations. They generally omit the facts that:

- You won't need anyone managing on-prem data storage OR SAN networks anymore, depending on scale this can be an entire teams worth of people
- you won't need anyone babysitting VMWare and the associated server components
- you won't need anybody looking after UPS, cooling, technical floorspace
- You won't need to be paying for a separate backup solution
- Entire apps and their associated gatekeepers can be either dramatically simplified or delegated entirely (O365, SCM, etc.)
- You're saving a huge amount of per-port costs, data center networking is expensive especially at 25/40/100G and modern hyperconverged systems use a lot of ports
- Simple stuff can be migrated to severless components which only run when explicitly required and are super cheap
- No maintenance or licensing contracts for hardware/on-prem software components

Sometimes people also forget that on-prem hardware has a lifespan as well, it's not like a DC refresh lasts forever.
- How much time do you spend managing on-prem data storage? At my scale, we spend less than an hour a week on storage management.
- No one has to "babysit" VMware. Occasionally updates, sometimes a host fails. 2 hours per week on average.
- We have ~8 10U UPS, change the battery in each one every 3-5 years.
- Are you saying you don't backup your data in the cloud? LOL.
- Definitely not cheaper though
- No way. The per gig transfer costs eat up any networking savings, nexus 9ks are pretty cheap.
- I agree on this, if we did any in house development. Basically everything we run is COTS and based on my experience with support of most of our apps, they would not be able to handle a sql server that was not a sql server.
- I factor this into my calculations. My $1m sticker price includes 5 years on everything.

I may be entering old man yells at the cloud territory, but unless I am automatically scaling, developing an application that can take advantage of the serverless services, or have trouble filling talent gaps, I don't see my benefit to moving to the cloud. With proper planning I can provide a similar or better level of service to my business for less money. I'm not here to start a street fight, just sharing my opinion. This opinion is shared by almost every other IT manager I network with in my local area. Local government is the one exception I regularly run into, they love the cloud. Probably because it's impossible for them to get capital and everything has to be opex. The one exception here is SaaS, that's fine by me and what I recognize as the future for many apps. I assume that most of the cloud as infra benefit is recognized by those SaaS providers.

Methanar
Sep 26, 2013

by the sex ghost

H110Hawk posted:

So it's porn cams?

Yes*

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


adorai posted:

Sales people consistently remind me I am a moron for not being "all in" on the cloud like their other customers, but it just makes zero sense to me.

I think there’s a healthy amount of skepticism but what’s your actual “hosting” budget per year with hardware, electric, a/c, real estate and all the other items? And how did they come up with the 1M figure? Did they actually run Movere for a month for statistics?

I think it’s hilarious how cloud computing solves procurement and other political issues such as getting hardware or licensing.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Like anything with user-generated streaming video content, it isn't explicitly not porn cams

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Thread never changes.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Y’all should ignore clam down. He’s probably high on the weed in California

abigserve
Sep 13, 2009

this is a better avatar than what I had before

adorai posted:

I see that. I am just not seeing the other reason(s) for moving to the cloud if I have not developed my own application(s) that can scale up and down.
right, I calculated based on my actual price, within a few percent.
- How much time do you spend managing on-prem data storage? At my scale, we spend less than an hour a week on storage management.
- No one has to "babysit" VMware. Occasionally updates, sometimes a host fails. 2 hours per week on average.
- We have ~8 10U UPS, change the battery in each one every 3-5 years.
- Are you saying you don't backup your data in the cloud? LOL.
- Definitely not cheaper though
- No way. The per gig transfer costs eat up any networking savings, nexus 9ks are pretty cheap.
- I agree on this, if we did any in house development. Basically everything we run is COTS and based on my experience with support of most of our apps, they would not be able to handle a sql server that was not a sql server.
- I factor this into my calculations. My $1m sticker price includes 5 years on everything.

I may be entering old man yells at the cloud territory, but unless I am automatically scaling, developing an application that can take advantage of the serverless services, or have trouble filling talent gaps, I don't see my benefit to moving to the cloud. With proper planning I can provide a similar or better level of service to my business for less money. I'm not here to start a street fight, just sharing my opinion. This opinion is shared by almost every other IT manager I network with in my local area. Local government is the one exception I regularly run into, they love the cloud. Probably because it's impossible for them to get capital and everything has to be opex. The one exception here is SaaS, that's fine by me and what I recognize as the future for many apps. I assume that most of the cloud as infra benefit is recognized by those SaaS providers.

Essentially your argument is the same one I hear a lot. I'm a network engineer/architect, so I don't have a bias either way, but I think you're probably too small of a shop to run into the issues that drive people towards the cloud: multiple millions of dollars of data storage not performing up to spec due to SAN issues that take weeks to resolve, or infrastructure teams deciding to buy a bunch of their own network hardware because "it was part of the deal!", or a room full of UPSes catching fire. All of those anecdotes are from different workplaces, mind you, and all the teams involved were very highly skilled, but managing on-prem infrastructure at scale in a fashion that is reliable and good takes either a LOT of time or a LOT of money, and sometimes both.

Otherwise you get Methanar's situation (no offence my dude), who in one breath says how much cheaper running on-prem is but then a post later says this:

Methanar posted:

Hurricane electric and 2nd hand arista switches baby.

like ehhh yeeah it works but how well is the question

edit: also I'm not trying to argue that it's cheaper, it's absolutely demonstrably not cheaper, just that the cost differential is often overstated and more importantly the non-dollar value benefits ignored or not considered.

abigserve fucked around with this message at 05:35 on Oct 19, 2018

Methanar
Sep 26, 2013

by the sex ghost

abigserve posted:

Otherwise you get Methanar's situation (no offence my dude), who in one breath says how much cheaper running on-prem is but then a post later says this:


like ehhh yeeah it works but how well is the question

The $2k-apiece 48-port 10G SFP+ switches actually own. You'd have no idea they're even used, compared to the $10k-apiece new ones.

With HE's peering policy the way it is, if you've ever got an HE link you kind of need to keep going with them, otherwise you can't balance your ingress. One /24 of public addressing for me generates a lot more than 10gbps, so there's only a certain level of granularity to work with.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


I'm just waiting on this whole SDN thing to take off.

Methanar
Sep 26, 2013

by the sex ghost

jaegerx posted:

I'm just waiting on this whole SDN thing to take off.

Do I still need to buy fiber with sdn

Methanar
Sep 26, 2013

by the sex ghost
I'm okay with HE because for what they cost I can just buy like 40gbps for what 10gbps would cost out of NTT.

I am kind of mad, though, that they wouldn't give me a proper separate failure/maintenance domain for the latest fiber we bought from them.

abigserve
Sep 13, 2009

this is a better avatar than what I had before

jaegerx posted:

I'm just waiting on this whole SDN thing to take off.

you'll be waiting for a long time. if you are interested I can do an effort post on it but SDN (network in general tbh) is an absolute shitshow right now.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

abigserve posted:

you'll be waiting for a long time. if you are interested I can do an effort post on it but SDN (network in general tbh) is an absolute shitshow right now.

:justpost:

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!



Empty quote

abigserve
Sep 13, 2009

this is a better avatar than what I had before
okies

It is worth mentioning, before my rant kicks off, that software-defined networking is an extremely good idea. Network programming is a million times more complicated than it needs to be, and development of new features is incredibly slow. We are still not only using but relying on protocols developed decades ago.

The original promise of SDN was to make the network programmable by implementing a standard interface to the dataplane, that is, to switches and routers. This standard interface was, and still is, known as OpenFlow. Essentially, OpenFlow breaks down network forwarding into 5-tuple rules, defines standards for how OpenFlow switches should store and process these rules, and finally defines the format of OpenFlow messages.
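To make the 5-tuple idea concrete, here's a toy model of an OpenFlow-style flow table. Purely illustrative: real OpenFlow matches on many more header fields and the lookup runs in switch ASICs, not a Python loop.

```python
# Toy OpenFlow-style flow table: rules match on packet header fields
# (omitted fields are wildcards), the highest-priority matching rule
# wins, and a table miss punts the packet to the controller.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    priority: int
    match: dict    # e.g. {"ip_dst": "10.0.0.5", "tp_dst": 443}
    actions: list  # e.g. ["output:2"] or ["drop"]

@dataclass
class FlowTable:
    rules: list = field(default_factory=list)

    def lookup(self, packet: dict) -> list:
        for rule in sorted(self.rules, key=lambda r: -r.priority):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["controller"]  # table miss

table = FlowTable([
    FlowRule(200, {"ip_dst": "10.0.0.5", "tp_dst": 443}, ["output:2"]),
    FlowRule(100, {"ip_proto": "tcp"}, ["output:1"]),
])
print(table.lookup({"ip_dst": "10.0.0.5", "ip_proto": "tcp", "tp_dst": 443}))
# -> ['output:2']
```

The "controller on table miss" default is the part that made people excited: the switch only forwards, and all the decision-making logic lives in software.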

When the spec first came out, there was a ton of hype about it. "Imagine all the stuff we could do with THIS!" thought everybody. People came up with all sorts of fancy use cases like per-packet load balancing at layer 2, SD-WAN style routing, fabrics without spanning-tree, etc. The big vendors were making GBS threads themselves. Of course, it's not so simple.

First of all, OpenFlow switching does not translate to traditional network ASICs at all. This meant new ASICs, and in turn new switches, needed to be produced to handle the spec. ASIC development was, and still is, an incredibly slow, arduous, and expensive process. Further, the OpenFlow spec does not encompass hardware architecture. As you can imagine, this created chaos:

- Everybody that actually wanted to build OpenFlow switches all went in different directions with their hardware architecture
- OpenFlow spec changes would often require ASIC changes which meant that hardware only supported certain compatible openflow versions
- The vast majority of openflow capable hardware was hot garbage, often supporting a tiny amount of flows or not meeting the given version specifications
- The complete lack of standards hosed OpenFlow programmers, as it was impossible to validate code against all the different OpenFlow versions and available hardware.

To date, none of these problems have been solved. The only exception is that, generally, the world has standardized on OpenFlow version 1.5, and most modern "OpenFlow-compatible" switches are feature-complete with that version (usually with a list of caveats a mile long).

So, OpenFlow kinda sucks. Oh well. But, you say, I've heard the term "SDN" a lot, but I've only ever heard OpenFlow mentioned once or twice, what's up with that?

Well, naturally, when a non-vendor releases some promising new technology, all the big names scramble to produce something that they can sell to stupid people. Back before the numerous problems with OpenFlow were discovered, there was a real sense in the networking community that Cisco/Juniper/Nortel/etc. were all on their way to obsolescence. So these big titans of industry came up with a delightful idea. They figured that, well, what people want is a centralized control plane decoupled from the hardware, right? Now, we don't want to spend any time or money actually building that because of reasons (???), so what we'll do instead is take our existing, traditional kit, slap a controller in front of it, and call it SDN.

Around 6-ish years ago (maybe slightly later) I went to a talk from Cisco about ACI. This was the early days, obviously, and it was their flagship "SDN" solution. They were selling it to customers, though they were very tight-lipped about adoption, and assuring everyone that it was indeed a next-gen SDN solution. What it actually was, was full-mesh iBGP, MPLS and VXLAN, with some proprietary and very secretive interface telling everything how it should work (my theory, to this day, is that in its first incarnation all the controller did was push configuration commands). This, naturally, did not even come close to the original spec for software-defined networking; the networking hardware was literally as smart as it'd ever been, and you also needed a controller sitting like a tumor on the side keeping it all working.

ACI is still a real product that gets sold to people. By all accounts it now works, and has a very narrow use case around network segmentation; some engineers even like not having to worry about traditional routing/switching in the DC. However, it's completely divorced from the original spec and hope for what SDN would be.

There are other examples as well, the most obvious being VMware's NSX, where a totally different technology got labelled as "SDN" because that was the buzzword of the day.

So, "SDN" is kind of a meaningless term. Whatever. Buzzwords get cooked up and thrown out practically every day in this industry, so who cares. The concept of software-defined networking is still good though, so who is running with that?

Well, the answer is - loving nobody. Not really. The vendor with the most skin in the game - that is, both hardware AND software development - is Bigswitch, which is seeing a fair bit of deployment (we run it as well). Bigswitch is an entirely OpenFlow-based architecture. Guess what? They decided the spec wasn't good enough, so they mangled it for their use case. Unless you ask specifically what protocol they use - and a reminder, it forms the basis of some pretty expensive software - they will never mention OpenFlow. Worse, they aren't authoring or contributing to any of the OpenFlow standards that might let it gain some ground.

There is a lot of interesting work coming out of a company called Barefoot Networks. They are attacking it from the reverse perspective: instead of standardizing the protocol first, they are building a standard hardware architecture for network packet processing ("x86, but for networking") and providing an interface for developers to program that hardware (known as the "P4" programming language). The idea is that you define what your data plane looks like in terms of tables, flow attributes, and processing pipelines, then compile it like you would any other software. Then you can program those tables using a standard interface. Sounds nice. The last time I attempted to do some development on that platform using their reference architecture, all the control-plane stuff was firmly work-in-progress with no definite release date. By all accounts the hardware is prohibitively expensive and has issues with simple forwarding tasks such as interface buffering.
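The split described here - pipeline shape fixed up front, table entries installed later by the control plane - can be sketched in plain Python. All table and field names below are made up for illustration, and real P4 supports longest-prefix and ternary matches, not just the exact match shown.

```python
# Sketch of the P4 idea: the *shape* of the pipeline (which tables
# exist, what each matches on) is declared at "compile time", and a
# controller populates the entries at runtime. Exact-match only here.

class Table:
    def __init__(self, name, match_fields):
        self.name = name
        self.match_fields = match_fields
        self.entries = {}            # match-key tuple -> action callable

    def add_entry(self, key, action):
        self.entries[tuple(key)] = action

    def apply(self, pkt):
        key = tuple(pkt.get(f) for f in self.match_fields)
        action = self.entries.get(key)
        if action:
            action(pkt)

# "Compile time": declare the pipeline structure.
ipv4_lpm = Table("ipv4_lpm", ["ip_dst"])
acl = Table("acl", ["tp_dst"])
pipeline = [ipv4_lpm, acl]

# "Runtime": the control plane installs entries over a standard interface.
ipv4_lpm.add_entry(["10.0.0.5"], lambda p: p.update(egress_port=2))
acl.add_entry([23], lambda p: p.update(egress_port=None))  # drop telnet

pkt = {"ip_dst": "10.0.0.5", "tp_dst": 443}
for t in pipeline:
    t.apply(pkt)
print(pkt["egress_port"])  # -> 2
```

The point of the design is that the same runtime interface works for any pipeline you compile, which is what "x86, but for networking" is gesturing at.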

Well, R.I.P SDN, thank goodness we've got regular networking.

Not so fast there, champ. See, because Cisco (and everyone else) collectively poo poo a brick when OpenFlow 1.1 was released, they started in a new direction: they wanted all of their switch ASICs to have some space for programmability to support all this fancy SDN-related stuff. Many, many years of ASIC development hell later, this was integrated into the modern product lines, such as all of the IOS-XE switches (3650, etc.). Naturally, this change of architecture has introduced an almost unbelievable number of bugs into the platform across a huge range of features.

Not to mention, while everyone was farting around with SDN for several years, we've had an absolute stagnation of features in the rest of the networking space. Networking literally looks identical today to the way it did when I started my career over a decade ago. We're getting better at the way we design networks now that we're not shackled to layer 2, and we're getting more efficient at standing them up through automation (thank you Cumulus!), but the topologies and protocols in use are still largely the same. VXLAN is making inroads, with a lot of caveats, but it's not super interesting.

References:
https://etherealmind.com/musing-first-thoughts-cisco-aci-works/
https://www.opennetworking.org/software-defined-standards/specifications/
https://www.networkcomputing.com/networking/openflow-faces-interoperability-challenges/473162012

Sprechensiesexy
Dec 26, 2010

by Jeffrey of YOSPOS
I have to support Cisco ACI and so far I'm not impressed.

Methanar
Sep 26, 2013

by the sex ghost


Thank you for your service


Now do Neutron and Calico

Thanks Ants
May 21, 2004

#essereFerrari


I think my favourite use of ‘SDN’ is when Ubiquiti and Meraki use it to describe their management interface.

Kashuno
Oct 9, 2012

Where the hell is my SWORD?
Grimey Drawer
I am pretty sure the only reason I’m saving any money moving to cloud is because the previous IT manager was definitely buying way too much poo poo. Our current infrastructure is just under 3 years old, and I don’t think we actually utilize even half of it. We certainly don’t when it comes to storage.

Thanks Ants
May 21, 2004

#essereFerrari


Yeah I've definitely seen that - VMware clusters for maybe 8 VMs and it's all just basic Windows LOB stuff that Azure/whatever can do for maybe a grand a month, with the saving spent on having not-poo poo connectivity. Right tool for the job and all that.

FlapYoJacks
Feb 12, 2009
I see a lot of companies also have crazy misconfigured AWS setups, like Jenkins or GitLab on extra-large instances running 24/7, with engineers who have no idea about spinning instances up from AMIs only when needed.

tortilla_chip
Jun 13, 2007

k-partite
There are Tofino chipsets out in the wild now.

abigserve
Sep 13, 2009

this is a better avatar than what I had before

tortilla_chip posted:

There are Tofino chipsets out in the wild now.

The Tofino stuff is SDN's best possible chance and the design makes a lot of sense, but based on my experience I'm not convinced they can deliver a product. I've looked at what's available, and certain aspects were not impressive even with an "if I bought this, I recognise I am buying a pre-alpha of an actual product" mindset.

Vargatron
Apr 19, 2008

MRAZZLE DAZZLE


abigserve posted:

The Tofino stuff is SDN's best possible chance and the design makes a lot of sense, but based on my experience I'm not convinced they can deliver a product. I've looked at what's available, and certain aspects were not impressive even with an "if I bought this, I recognise I am buying a pre-alpha of an actual product" mindset.

Ah so it's the Star Citizen equivalent of networking then.

tortilla_chip
Jun 13, 2007

k-partite
It makes sense where you have a specific network need, and are willing to engage closely with a vendor. AWS Blackfoot is an application where the P4 concept is well suited. I doubt most orgs have that much of a niche need though.

Danith
May 20, 2006
I've lurked here for years
This is a long shot, but does anyone else work for a pharmacy or health care company that submits controlled substance reports to pmpclearinghouse.net? I'm trying to generate a zero-report file to send and they just won't take it.

For example, the AK pdmp guide (https://www.commerce.alaska.gov/web/portals/5/pub/pha_awarxe_dispenserguide.pdf) says the format should be:
code:
TH*4.1*123456*01**20150101*223000*P**\\
IS*9075555555*PHARMACY NAME*\
PHA*** ZZ1234567\
PAT*******REPORT*ZERO************\
DSP*****20150101******\
PRE*\
CDI*\
AIR*\
TP*7\
TT*123456*10\ 
I tried that, but it just says it can't parse... Also I checked our files and the segment separator seems like it should be "~" instead of "\", but I tried that as well and nothing. So basically, if you are sending zero-report files that are being accepted, could I see an example (with the pharmacy name and DEA scrubbed of course)? 'Cause this is really annoying.
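In case it helps anyone compare, here's a hypothetical generator for the zero-report layout quoted from the AK guide above, with the segment terminator as a parameter so "\" vs "~" can be tried without retyping the file. The field positions and the 123456 control numbers are copied straight from the guide's example; everything else (function name, parameters) is an assumption, not a verified implementation of the spec the TH*4.1 header refers to.

```python
# Hypothetical zero-report builder matching the AK PDMP guide excerpt.
# Only the segment layout from the quoted example is reproduced; real
# submissions presumably need your own control numbers and identifiers.
from datetime import datetime

def zero_report(dea: str, pharmacy: str, phone: str,
                when: datetime, terminator: str = "\\") -> str:
    date = when.strftime("%Y%m%d")
    time_ = when.strftime("%H%M%S")
    segments = [
        # TH ends with a doubled terminator in the guide's example.
        f"TH*4.1*123456*01**{date}*{time_}*P**{terminator}",
        f"IS*{phone}*{pharmacy}*",
        f"PHA*** {dea}",
        "PAT*******REPORT*ZERO************",
        f"DSP*****{date}******",
        "PRE*",
        "CDI*",
        "AIR*",
        "TP*7",
        "TT*123456*10",
    ]
    return "\n".join(seg + terminator for seg in segments)

print(zero_report("ZZ1234567", "PHARMACY NAME", "9075555555",
                  datetime(2015, 1, 1, 22, 30)))
```

Calling it with `terminator="~"` emits the same records with tilde separators, which makes A/B testing against the clearinghouse parser less tedious.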

stevewm
May 10, 2005

CLAM DOWN posted:

You don't move to my butt to save money.

You definitely don't...

We recently ran the numbers during a server hardware refresh... Assuming a 5-year period, moving our main application workload to any cloud service was easily 4x the cost of just buying new hardware and continuing to do it in-house. And this is factoring in power, all new network gear, cost to install, a brand new automatic generator we installed, etc...

Our application provider also has their own official cloud-hosted version that they push hard, but again it was more expensive than doing it ourselves, AND over the past 7 years we have maintained better uptime than they have.

I guess the cloud makes sense for some businesses, but in our case it simply doesn't.


AlternateAccount
Apr 25, 2005
FYGM

MF_James posted:

Powershell is pretty intuitive but whatever.

I am not sure Microsoft has EVER put out something that is so universally useful. It's so good.

H110Hawk posted:

Look sir/ma'am the spreadsheet aws made for us assuming we pay full retail for Dell servers with the top of the line warranty, raid cards, and redundant power supplies says we save money hand over fist if we do all up front reserved instances. Barely.

Microsoft/AWS/vendors sell the SAVINGGGSS!!! very, very hard and will routinely drop numbers like 30-60% without really even getting into what people are running. They always assume you'll move every damned thing to the CLOUD, when really those mostly imaginary savings only kick in when you're abandoning entire gigantic swaths of on-prem infrastructure, which most businesses cannot or will not do. So now you've got one foot in each ecosystem and are paying heavily for both.
