adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Dilbert As FUCK posted:

Just because you achieve fast IO doesn't mean you've done everything you can with the data.
That's like saying my turbo Dart is better than a naturally aspirated Ferrari because I did everything I could with the air intake.

If you have never done any daily administration with a NetApp, using the tools they provide, you will never understand the value. Without the tools, you are correct: it's just a brick of storage that provides X capacity at Y IOPS.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

The thing is you have to consider the fact that compression actually sucks for most workloads on NetApp. Compared to Nimble or Oracle, it's not even in the same ballpark. Dedupe, however, is pretty nice and makes up for the compression thing.

I honestly don't get the DAF comment about what you do with the data. There is far more shit that a NetApp does with your data than Oracle or Nimble. The tools are light years ahead too.

Inline compression has gotten better with 8.2 and improves more in 8.3, to the point that you will be able to run it on workloads that require performance. It's not as good as Nimble's, though, because it wasn't designed as an integral part of write processing; it was fitted in later. One big problem with compression is that until the 8000 series we simply didn't have enough CPU to offload it. Nimble and Oracle were putting twice as many cores in their boxes so they could dedicate a couple of cores purely to inline data processing. We're getting better about this, but I still wish we would throw more hardware in our boxes, because it's relatively cheap compared to all of the other costs and is the easiest thing to fix. But Oracle is a server company now, so they will always be able to offer more CPU and RAM at lower prices.

I'm also puzzled about the data processing aspect, because WAFL is basically a big data path optimization engine. There are a lot of very complicated algorithms that determine where data is written, how it is read, whether and where it is cached, and whether it should be moved for better performance. It's not just writing to a fixed sector based on an LBN in an iSCSI command and reading that block off of that disk later. And the iSCSI and NFS data paths are exactly the same, so saying that NetApp does NFS well but not block is weird. It's all the exact same on the back end.

Also, DAF, when you talk about the HTTP console do you mean FilerView, the on-box web GUI, or System Manager, the client executable that runs in a browser window?

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Backups and replication are a great example of where NetApp excels.

SnapDrive - application-aware backups for database apps, tons of tooling built in for automatic cloning, application-level maintenance, etc

SnapVault - point-in-time immutable filer-to-filer backups (archival - even "deswizzles" for 100% data integrity)

SnapMirror - mirrored filer-to-filer data (duh) for high availability

NDMP - backing up to alternative media (tape, disk, etc)

Operations Manager - automating backups, retention, configuration, replication, monitoring, alerting, capacity planning, etc

The NetApp suite gives you the ability to back up and replicate your data any way you like. Nimble? You've got filer-to-filer replication... and that's it. You don't get any other options.

patrain
May 20, 2014
New to working with my company's product line of choice, HP storage. I had to talk to a few HP systems architects to find a way to hook up an HP MSA2000 directly to an RMS (also HP - a DL380 G6, to be exact), with the solution being a PCI 4Gb Fibre Channel HBA (2 ports) to the A and B MSA2012fc controllers (then JBOD CX4 connections). For some reason this was an odd connection for their architect team and they hadn't seen it before. My question is: how long should the initialization of 48TB (24 spindles) into 2 vdisks take? It was started last Monday and I think it just finished over the weekend.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

madsushi posted:

NetApp stuff

This is a good list, but I must be pedantic here and note that deswizzling is for performance, not data integrity, and is used for volume SnapMirror, not SnapVault or qtree SnapMirror (at least on 7-mode; on cDOT all replication kicks off the deswizzler).

madsushi
Apr 19, 2009

Baller.
#essereFerrari

NippleFloss posted:

This is a good list, but I must be pedantic here and note that deswizzling is for performance, not data integrity, and is used for volume SnapMirror, not SnapVault or qtree SnapMirror (at least on 7-mode; on cDOT all replication kicks off the deswizzler).

My understanding is that since SnapVault works at the qtree level, it has to deswizzle, because it has to rebuild the dedupe thumbprint on the other end. Is that not the case?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

madsushi posted:

My understanding is that since SnapVault works at the qtree level, it has to deswizzle, because it has to rebuild the dedupe thumbprint on the other end. Is that not the case?

The deswizzler isn't related to dedupe. WAFL has two layers of virtualization (three if you count LUNs). Each block has an identifier for its location in the FlexVol, its VVBN (virtual volume block number), and an identifier for its location in the aggregate, its PVBN (physical volume block number). When a read or write request comes in it will contain either an LBN or an FBN that tells you which block within the file or LUN the application is attempting to access. WAFL translates these requests into VVBNs because files and LUNs (which are just files) live on FlexVols.

But while a block's VVBN won't change when it is overwritten, its PVBN, its physical location in the aggregate, will change, because we don't do any writes in place. So we write the VVBN to a new location in the aggregate and update the metadata for that block to point to that physical location for that VVBN. This requires maintaining a table of VVBN-to-PVBN mappings. The problem is that when a request comes in to read a block we need to translate FBN or LBN to VVBN, and then VVBN to PVBN, to find the actual block on disk and read it. The VVBN-to-PVBN translation adds time to the IO path if you have to consult a lookup table, so while we maintain a table of those mappings, we also write the PVBN into the block metadata when we write it to disk. Then when we need to find a physical block, instead of reading the table to find its actual location we just consult the metadata, which is generally already cached in memory anyway. It makes the read path much shorter, so latency is reduced.
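
If a toy model helps, here's roughly what that translation path looks like in Python. To be clear, this is just my illustration with invented names, not actual ONTAP code:

code:
class Aggregate:
    def __init__(self):
        self.blocks = {}        # PVBN -> data
        self.next_pvbn = 0

    def allocate_block(self, data):
        # Writes always land somewhere new; never in place.
        pvbn = self.next_pvbn
        self.next_pvbn += 1
        self.blocks[pvbn] = data
        return pvbn

    def read_block(self, pvbn):
        return self.blocks[pvbn]

class FlexVol:
    def __init__(self, aggregate):
        self.aggregate = aggregate
        self.vvbn_to_pvbn = {}  # the container map (lookup table)
        self.pvbn_hint = {}     # PVBN stashed in per-block metadata

    def write(self, vvbn, data):
        pvbn = self.aggregate.allocate_block(data)
        self.vvbn_to_pvbn[vvbn] = pvbn
        self.pvbn_hint[vvbn] = pvbn   # cache the PVBN in block metadata

    def read(self, vvbn):
        # Fast path: use the metadata hint and skip the table lookup.
        pvbn = self.pvbn_hint.get(vvbn)
        if pvbn is None:
            pvbn = self.vvbn_to_pvbn[vvbn]  # slow path: container map
        return self.aggregate.read_block(pvbn)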

When we perform a SnapMirror update of a destination, we maintain the VVBN layout of the original volume, but the PVBNs that back them change because the destination aggregate can have a completely different physical layout. So we flag the PVBN field in the metadata with a dummy value and let the WAFL write allocator figure out the best place to put the blocks, which makes the write process much faster. But this leads to slow access on these destination volumes, because all of those reads have to go through the PVBN lookup process. The deswizzler is a background scanner that kicks off after a volume SnapMirror and goes through each snapshot on the destination, replacing all of the dummy PVBN values in the metadata with the correct PVBN for where the physical block landed. It's basically there so that read access isn't significantly slower on SnapMirror destinations (which includes vol copy operations).
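
Continuing the toy model from above, the deswizzler is conceptually just this (again, my invention, not real code):

code:
DUMMY_PVBN = None   # reads that hit this fall back to the container map

def snapmirror_receive(dest, replicated):
    # Keep the source's VVBN layout; let the destination's write
    # allocator place blocks wherever is fastest, and flag the
    # metadata hint as unknown.
    for vvbn, data in replicated:
        dest.vvbn_to_pvbn[vvbn] = dest.aggregate.allocate_block(data)
        dest.pvbn_hint[vvbn] = DUMMY_PVBN

def deswizzle(dest):
    # Background scan after the transfer: swap the dummy hints for
    # the real PVBNs so reads stop paying for the table lookup.
    for vvbn, hint in dest.pvbn_hint.items():
        if hint is DUMMY_PVBN:
            dest.pvbn_hint[vvbn] = dest.vvbn_to_pvbn[vvbn]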

SnapVault doesn't need deswizzling because it doesn't replicate the volume structure, meaning the VVBN and PVBN get rewritten completely at the destination. It only sends data, not metadata, and the WAFL metadata is all generated at the destination. As a consequence it tends to be slower and generate higher load on the system than SnapMirror, in addition to inflating de-duplicated data, which is why a background scanner kicks off a SIS job on the destination volume after a SnapVault update is completed if it detects that the source was de-duplicated.

I know that it's likely that no one in here cares about the technical details behind how these technologies work, but I find it really interesting.

YOLOsubmarine fucked around with this message at 18:37 on Aug 12, 2014

Mr Shiny Pants
Nov 12, 2012
Me too. Could you shed some light on why snapshots can't be deduped? That would be awesome.

Pile Of Garbage
May 28, 2007



That's some awesome info, NippleFloss. Perhaps you could answer a NetApp question that our storage guys ummed and ahhed about when I asked them: if you have scheduled dedupe configured on a volume and you delete a large amount of data from said volume, do the deduped portions get returned as free space straight away, or do I have to wait until the scheduled dedupe process runs again?

Apologies if that's a very vague question. I've only really worked with IBM SVC in the past and NetApp is new to me. Also, in my current role storage is technically outside of my purview; however, I'm still on the consumption end, as it were. If it helps, we're running Cluster-Mode ONTAP 8.2.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Mr Shiny Pants posted:

Me too. Could you shed some light on why snapshots can't be deduped? That would be awesome.

I'm not a developer, so I could be wrong about some of this, but my understanding is that it was a cost-benefit decision that fell on the side of not being worth doing. Snapshots are embedded pretty deeply within the WAFL code, and the major principle behind them is that once we take a snapshot all of those blocks are locked and cannot be modified until the snapshot is deleted. We can do things like move the PVBNs underneath to re-arrange the data at the physical layer, but we can't change the VVBN layout at all. When a data block is deduplicated, the metadata that points to that data block is updated with the new location of the reference block. Doing this on snapshots won't work because those metadata blocks are locked by WAFL.

There are some cases where you will get deduplication between the active filesystem and a block locked in a snapshot, though, just not between blocks already locked in snapshots.
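
Using the same toy model as my earlier sketch (made-up names, purely to show why a locked metadata block stops the pointer rewrite):

code:
def try_dedupe_share(vol, dup_vvbn, ref_vvbn, snapshot_locked):
    # Dedupe frees dup_vvbn's block by pointing its metadata at
    # ref_vvbn's block, but only if that metadata isn't frozen
    # in a snapshot.
    if dup_vvbn in snapshot_locked:
        return False   # locked until the snapshot is deleted
    vol.vvbn_to_pvbn[dup_vvbn] = vol.vvbn_to_pvbn[ref_vvbn]
    vol.pvbn_hint[dup_vvbn] = vol.pvbn_hint[ref_vvbn]
    return True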

There is some interesting work going on along those lines, though, that I hope eventually makes it into a product. A fully reference-counted version of WAFL would allow for some pretty cool stuff. If you're curious about WAFL internals you can find Dave Hitz's (NetApp co-founder) original paper on WAFL from 1995 here and an updated paper describing how FlexVols are implemented here.


cheese-cube posted:

That's some awesome info, NippleFloss. Perhaps you could answer a NetApp question that our storage guys ummed and ahhed about when I asked them: if you have scheduled dedupe configured on a volume and you delete a large amount of data from said volume, do the deduped portions get returned as free space straight away, or do I have to wait until the scheduled dedupe process runs again?

Apologies if that's a very vague question. I've only really worked with IBM SVC in the past and NetApp is new to me. Also, in my current role storage is technically outside of my purview; however, I'm still on the consumption end, as it were. If it helps, we're running Cluster-Mode ONTAP 8.2.

We maintain a reference count file that is basically a list of how many times a block (VVBN) in the filesystem is referenced. So if dedupe has run and found 7 identical instances of a block, it will remove 6 of those copies and point them at the one remaining copy of the block. In this case you have 6 references to that block. If you delete a file, we break that file down into its component blocks, and for each block we check to see if that block is locked by a snapshot, and then whether its reference count is zero. If it does not exist in a snapshot and it does not have anything else referencing it, then it gets freed up and added back into the available storage pool. If it does still have references, then we will still free up the metadata blocks for the deleted file, but not the actual data blocks. A new dedupe run is not required for any of this.
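
In toy form (same caveats as my earlier sketches: invented names, not ONTAP code), the per-block decision on a delete looks like:

code:
def delete_file(vol, file_vvbns, refcount, snapshot_locked, free_pool):
    # refcount[vvbn] = number of extra pointers created by dedupe
    for vvbn in file_vvbns:
        if vvbn in snapshot_locked:
            continue                  # a snapshot still pins the block
        if refcount.get(vvbn, 0) > 0:
            refcount[vvbn] -= 1       # others still reference it; only
            continue                  # this file's pointer goes away
        # no snapshot, no references: back to the free pool it goes
        free_pool.add(vol.vvbn_to_pvbn.pop(vvbn))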

However, large deletes on NetApp, whether deduped or not, are not processed synchronously. We batch up frees and feed them in slowly over time so you'll see free space steadily increase for a little while before finally stopping, rather than a large jump in available space. I can do an effort post on why it's done this way, but it's for performance reasons.
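
The shape of it, as another made-up sketch (not how ONTAP actually schedules it), is basically a work queue drained in small batches:

code:
from collections import deque

class DeferredFreeQueue:
    # Big deletes enqueue their block frees; a background task drains
    # them in small batches so the delete doesn't hog the controller.
    def __init__(self, free_pool, batch_size=1000):
        self.pending = deque()
        self.free_pool = free_pool
        self.batch_size = batch_size

    def enqueue(self, pvbns):
        self.pending.extend(pvbns)    # the delete itself returns fast

    def drain_batch(self):
        # Run periodically: free space trickles back over time rather
        # than jumping all at once, which matches what you see on the box.
        for _ in range(min(self.batch_size, len(self.pending))):
            self.free_pool.add(self.pending.popleft())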

Pile Of Garbage
May 28, 2007



NippleFloss posted:

We maintain a reference count file that is basically a list of how many times a block (VVBN) in the filesystem is referenced. So if dedupe has run and found 7 identical instances of a block, it will remove 6 of those copies and point them at the one remaining copy of the block. In this case you have 6 references to that block. If you delete a file, we break that file down into its component blocks, and for each block we check to see if that block is locked by a snapshot, and then whether its reference count is zero. If it does not exist in a snapshot and it does not have anything else referencing it, then it gets freed up and added back into the available storage pool. If it does still have references, then we will still free up the metadata blocks for the deleted file, but not the actual data blocks. A new dedupe run is not required for any of this.

However, large deletes on NetApp, whether deduped or not, are not processed synchronously. We batch up frees and feed them in slowly over time so you'll see free space steadily increase for a little while before finally stopping, rather than a large jump in available space. I can do an effort post on why it's done this way, but it's for performance reasons.

That's awesome information, thank you NippleFloss! Is there any way to guesstimate the amount of time required for space to be freed when X bytes are deleted?

KennyTheFish
Jan 13, 2004

NippleFloss posted:


However, large deletes on NetApp, whether deduped or not, are not processed synchronously. We batch up frees and feed them in slowly over time so you'll see free space steadily increase for a little while before finally stopping, rather than a large jump in available space. I can do an effort post on why it's done this way, but it's for performance reasons.

Please do. I don't use NetApp at the moment, but I find the theory and architecture stuff fascinating.

Mr Shiny Pants
Nov 12, 2012

NippleFloss posted:

I'm not a developer, so I could be wrong about some of this, but my understanding is that it was a cost-benefit decision that fell on the side of not being worth doing. Snapshots are embedded pretty deeply within the WAFL code, and the major principle behind them is that once we take a snapshot all of those blocks are locked and cannot be modified until the snapshot is deleted. We can do things like move the PVBNs underneath to re-arrange the data at the physical layer, but we can't change the VVBN layout at all. When a data block is deduplicated, the metadata that points to that data block is updated with the new location of the reference block. Doing this on snapshots won't work because those metadata blocks are locked by WAFL.

There are some cases where you will get deduplication between the active filesystem and a block locked in a snapshot, though, just not between blocks already locked in snapshots.

There is some interesting work going on along those lines, though, that I hope eventually makes it into a product. A fully reference-counted version of WAFL would allow for some pretty cool stuff. If you're curious about WAFL internals you can find Dave Hitz's (NetApp co-founder) original paper on WAFL from 1995 here and an updated paper describing how FlexVols are implemented here.


I was under the impression that it already had a reference count on each block; you learn something new every day. Thanks.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
I guess I was using the wrong term: instead of "deswizzled", I should have said "rehydrated". That is, the original (non-deduped) data is sent across and then re-deduped on the other side, so nothing you do on the source (deleting a snapshot, deleting dedupe thumbprints, etc.) will ruin your data on the destination.

some kinda jackal
Feb 25, 2003

 
 
What would you guys do if you were given an HP server with 8 empty 2.5" bays and asked to create a Veeam server on a tight budget? I'm thinking of going with 8 of these:

http://www.canadacomputers.com/product_info.php?cPath=15_1086_215_217&item_id=QR1669

I'm hoping that spread across eight spindles the 7200rpm drives won't be too much of a bog, and all this server will do is run Veeam, so it'll obviously be write-heavy.

I'm going to be backing up about 1-2 TB worth of VMs (raw; likely much less after Veeam dedupe), so I'd like to give myself extra room to work with for extra retention. RAID-5 would give me 7TB to work with, though I'd likely use a different setup.
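
For reference, here's the capacity math on 8x 1TB under the different RAID levels:

code:
drives, size_tb = 8, 1.0              # 8 bays of 1TB 2.5" 7200rpm drives
print((drives - 1) * size_tb)         # RAID-5:  7.0 TB usable
print((drives - 2) * size_tb)         # RAID-6:  6.0 TB usable
print((drives / 2) * size_tb)         # RAID-10: 4.0 TB usable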

I had to fight to get just this server and a Veeam license with a very limited budget, so this client is obviously not terribly focused on backups (though I am adamant that they at least get something in place) and cost is likely to be a factor (I'll likely have to fight to just get the above link x8). I'll sacrifice performance for having a backup though.

some kinda jackal fucked around with this message at 15:58 on Aug 14, 2014

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Nimble has some "new" product lines: the CS300 and CS500. Looks like they are dropping most of the 1GbE connections, with the option of 10GbE copper or fiber instead.

I wonder if the actual hardware has changed much, or just the controllers.

Maneki Neko
Oct 27, 2000

Moey posted:

Nimble has some "new" product lines: the CS300 and CS500. Looks like they are dropping most of the 1GbE connections, with the option of 10GbE copper or fiber instead.

I wonder if the actual hardware has changed much, or just the controllers.

Looks like the CS220 got "gimped" a bit and rebranded as the CS215. Amusingly, we had a conversation with Nimble sales like 2 weeks ago about "WHEN ARE YOU GUYS GOING TO LAUNCH NEW SHIT" and they played super dumb, even after we offered to sign an NDA.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

cheese-cube posted:

That's awesome information, thank you NippleFloss! Is there any way to guesstimate the amount of time required for space to be freed when X bytes are deleted?

There is not. It's dependent on the ONTAP version you're running, the user load on the system, and the actual resources available to the controller. We usually try to do it fast enough so as not to be a problem for the customer, but it's not immediate.

KennyTheFish posted:

Please do. I don't use NetApp at the moment, but I find the theory and architecture stuff fascinating.

I'll get to this when I have a little more free time.

Mr Shiny Pants posted:

I was under the impression that it already had a reference count on each block; you learn something new every day. Thanks.

Only blocks in volumes with SIS enabled have reference counts. Most operations that you might expect to use reference counts (like snapshots) use bitmaps to track free and used blocks.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Moey posted:

I wonder if the actual hardware has changed much, or just the controllers.
Just new CPUs basically. My sales rep told me last week not to make any decisions until Tuesday of this week.

Richard Noggin
Jun 6, 2005
Redneck By Default

Martytoof posted:

What would you guys do if you were given an HP server with 8 empty 2.5" bays and asked to create a Veeam server on a tight budget? I'm thinking of going with 8 of these:

http://www.canadacomputers.com/product_info.php?cPath=15_1086_215_217&item_id=QR1669

I'm hoping that spread across eight spindles the 7200rpm drives won't be too much of a bog, and all this server will do is run Veeam, so it'll obviously be write-heavy.

I'm going to be backing up about 1-2 TB worth of VMs (raw; likely much less after Veeam dedupe), so I'd like to give myself extra room to work with for extra retention. RAID-5 would give me 7TB to work with, though I'd likely use a different setup.

I had to fight to get just this server and a Veeam license with a very limited budget, so this client is obviously not terribly focused on backups (though I am adamant that they at least get something in place) and cost is likely to be a factor (I'll likely have to fight to just get the above link x8). I'll sacrifice performance for having a backup though.

We have Veeam repos on slower storage than that. Your bottleneck will most likely be the 1Gb ethernet connection, not disk. Other factors aside, I'd say you could expect 70-90 MB/s.
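
Back-of-the-envelope, assuming the job rides that 1Gb link:

code:
wire_max = 1000 / 8                    # 1GbE: 125 MB/s theoretical ceiling
rate = 80                              # MB/s, middle of the 70-90 range
full_pass_hours = 2 * 1024 * 1024 / rate / 3600   # 2TB initial full backup
print(wire_max, round(full_pass_hours, 1))        # 125.0 MB/s, ~7.3 hours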

Moey
Oct 22, 2010

I LIKE TO MOVE IT

adorai posted:

Just new CPUs basically. My sales rep told me last week not to make any decisions until Tuesday of this week.

Hopefully I can shove those new controllers into my existing CS240s. Going from 12x copper cables to 4x fibers would be nice (and give more bandwidth).

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
You don't need to upgrade your controllers to get 10Gb. It's just a NIC that goes into your existing controller.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Nitr0 posted:

You don't need to upgrade your controllers to get 10Gb. It's just a NIC that goes into your existing controller.

I knew the controllers are just little blade-type computers, but didn't know throwing my own NIC in there would be supported. I know they whitelist disk serial numbers so you can't upgrade your own cache.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Moey posted:

Hopefully I can shove those new controllers into my existing CS240s. Going from 12x copper cables to 4x fibers would be nice (and give more bandwidth).
According to my sales engineer, you can do it cold. Obviously, since it's a big change, you can't do it hot, so there will be some downtime.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE

Moey posted:

I knew the controllers are just little blade-type computers, but didn't know throwing my own NIC in there would be supported. I know they whitelist disk serial numbers so you can't upgrade your own cache.

Sorry, I meant you can pull out a controller and install the Nimble-supplied 10Gb NICs. I have never tried installing our own NICs.

So according to our Nimble reps we are currently the largest customer in BC. I don't know how I feel about that.

Ours are still going strong though. We added in a couple of those new all-flash shelves recently and immediately found a bug that brought down our SQL cluster. However, their support is extremely responsive and found the issue; it was a half-and-half split between their software and our antivirus.

If I can say one thing about Nimble, it's that I fucking love phoning the support number and having someone immediately answer the phone who can actually answer questions and fix problems. I told them to never get rid of it.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Nitr0 posted:

If I can say one thing about Nimble, it's that I fucking love phoning the support number and having someone immediately answer the phone who can actually answer questions and fix problems. I told them to never get rid of it.

They can do this now because they have a really small install base and they are running at a loss. If they keep growing, and want to turn profitable eventually, tiered support will be required.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
I would be very surprised if they go to tiered phone support anytime soon, no matter how big they get.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Nitr0 posted:

I would be very surprised if they go to tiered phone support anytime soon, no matter how big they get.

I mean, they could go out of business before that, but if they stick around and keep growing they will eventually hit a tipping point where paying a bunch of engineers $150k a year to answer trivial questions isn't going to be economically feasible. There's a reason that every major player in the industry has tiered support, and it's not because they are run by dummies.

Nimble's cost of revenue for service and support, under which support personnel would be subsumed, was about $5.5 million through the first 3 quarters of 2013, which means they might employ 40 or so people in their entire support organization. Now, that's not a bad ratio for a company of fewer than 700 people with a customer base of probably 2,500 or so, but it's also a really small number when you're talking about hundreds of thousands of deployments, which is what the major vendors have. Nimble's major advantages right now are that their product is narrowly focused and their customer base is small. The narrow focus of the product means they can keep QA costs down, and the small customer base means they are less likely to hit the one-in-a-million bugs that only get found when you have huge numbers of deployments running every workload under the sun. But if they want to become EMC in the long run, they can't do that without adding features and adding customers, and both of those things make support much harder, which means you need to employ more people, and all of those people can't be rockstars because that's unaffordable. They will also have to add customers who are more demanding than your average SMB, which will be problematic.

And, again, they've actually increased their losses year over year since their inception. At some point they have to either get bought or turn a profit.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nitr0 posted:

If I can say one thing about nimble it's that I loving love phoning the number for support and having someone immediately answer the phone who can actually answer questions and fix problems. I told them to never get rid of it.
Welcome to NetApp 10 years ago.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Welp, looks like I will be working onsite with a NetApp engineer on the 16th because "the engineer who installed our FAS didn't install it right"; this comes after 1.5 years of prod service... I really don't want to go through this, or deal with the "firmware update time!" BS.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dilbert As FUCK posted:

Welp, looks like I will be working onsite with a NetApp engineer on the 16th because "the engineer who installed our FAS didn't install it right"; this comes after 1.5 years of prod service... I really don't want to go through this, or deal with the "firmware update time!" BS.

It's pretty hard to fuck up a FAS install short of mis-cabling or really suboptimal aggregate layouts. I'd be curious to know what exactly is wrong with it if it's been in production for years. Feel free to PM me if you'd rather respond privately.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

It's pretty hard to fuck up a FAS install short of mis-cabling or really suboptimal aggregate layouts. I'd be curious to know what exactly is wrong with it if it's been in production for years. Feel free to PM me if you'd rather respond privately.

I can send you the tech case number if you want to take a look.

It's fucking stupid.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dilbert As FUCK posted:

I can send you the tech case number if you want to take a look.

It's fucking stupid.

Sure, go for it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

Sure, go for it.


I think it was a random kernel panic brought on by THINGS, but yeah. I'd feel more comfortable talking about this via PM; you have the case number.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE

NippleFloss posted:

I mean, they could go out of business before that, but if they stick around and keep growing they will eventually hit a tipping point where paying a bunch of engineers $150k a year to answer trivial questions isn't going to be economically feasible. There's a reason that every major player in the industry has tiered support, and it's not because they are run by dummies.

Nimble's cost of revenue for service and support, under which support personnel would be subsumed, was about $5.5 million through the first 3 quarters of 2013, which means they might employ 40 or so people in their entire support organization. Now, that's not a bad ratio for a company of fewer than 700 people with a customer base of probably 2,500 or so, but it's also a really small number when you're talking about hundreds of thousands of deployments, which is what the major vendors have. Nimble's major advantages right now are that their product is narrowly focused and their customer base is small. The narrow focus of the product means they can keep QA costs down, and the small customer base means they are less likely to hit the one-in-a-million bugs that only get found when you have huge numbers of deployments running every workload under the sun. But if they want to become EMC in the long run, they can't do that without adding features and adding customers, and both of those things make support much harder, which means you need to employ more people, and all of those people can't be rockstars because that's unaffordable. They will also have to add customers who are more demanding than your average SMB, which will be problematic.

And, again, they've actually increased their losses year over year since their inception. At some point they have to either get bought or turn a profit.


I think it's a lot less dire than you think.

http://www.nimblestorage.com/news-events/press-releases/nimble-storage-reports-fourth-quarter-and-fiscal-year-2014-financial-results

All their profit is going straight into marketing. It's not the people answering the phones that will make a damn bit of difference to the bottom line. If they stopped marketing they'd be profitable. I don't think they want to become EMC. Look at how ridiculous that company has become; why would you want to strive to be that?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Nitr0 posted:

I think it's a lot less dire than you think.

http://www.nimblestorage.com/news-events/press-releases/nimble-storage-reports-fourth-quarter-and-fiscal-year-2014-financial-results

All their profit is going straight into marketing. It's not the people answering the phones that will make a damn bit of difference to the bottom line. If they stopped marketing they'd be profitable. I don't think they want to become EMC. Look at how ridiculous that company has become; why would you want to strive to be that?

More than half of their revenue (they aren't making profit because they are losing money) is going into sales and marketing, which means paying their sales force and all sales enablement, not just running ads. If they stop doing that they are not profitable because they have no one out there selling anything and also no one knows who they are.

But that gets to the heart of my point. The support center doesn't drive revenue, sales and marketing does, and R&D to a lesser extent. That's why they are dumping all of their revenue and VC into those departments, because those are their priorities for growing the customer base and getting profitable. Support personnel are a cost and don't have an obvious effect on the bottom line so that is where they will look to save costs when it becomes necessary.

This isn't doomsaying or anything, it's just what happens when companies get big. The NetApp example mentioned above is instructive. Nimble will either stagnate or they will grow and get less customer friendly in some respects, but they can't both grow substantially and also continue to function like a small startup by burning through venture capital and all of their revenue to buy positive word of mouth.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Every small storage company has great support because it's 100% engineers working the desk. Wait until it gets scaled up or outsourced (or they get acquired).

Compellent's Copilot support is the only one I know of that still maintains a high bar of quality after an acquisition. Nimble support is awesome; they did a ton of diag testing, etc., all very quickly when fixing problems. But who knows where they'll be if they make it "big" in a few years.

e: NetApp support is great for hardware issues and such, but trying to get information about their software products has always been impossible. Questions about syntax or best practices for SMSQL PowerShell scripts? Good luck!!!

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

madsushi posted:

but trying to get information about their software products has always been impossible. Questions about syntax or best practices for SMSQL PowerShell scripts? Good luck!!!
What do you mean? The support staff always direct me to where I can find that information: "Did you check the NOW?"

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

madsushi posted:

Questions about syntax or best practices for SMSQL PowerShell scripts? Good luck!!!

These are almost always best directed to your SE. That sort of knowledge isn't really in the purview of support. Professional Services would be a better resource for that, and your SE can usually track someone down on the PS side who can answer those questions. I've done a reasonable amount with SMSQL and PowerShell, so if you have any questions in the future I may be able to help.

Syano
Jul 13, 2005
What is the smartest way to trend and forecast thin pool use in a VMAX? We are dumping some basic data from symcfg to a spreadsheet, but our projections would be much more accurate if we could somehow correlate that growth to actual client usage. I feel sort of confident we are circling in on it and will get it soon enough, but I thought there may be enough of you who have done something similar that could offer some suggestions.
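
For context, here's roughly what our spreadsheet math amounts to in Python: a simple least-squares projection over the periodic symcfg samples (the numbers and pool size here are made-up placeholders):

code:
import numpy as np

# (day, pool_used_GB) samples pulled from periodic symcfg dumps
samples = [(0, 41200), (7, 41950), (14, 42730), (21, 43390)]
days = np.array([d for d, _ in samples], dtype=float)
used = np.array([u for _, u in samples], dtype=float)

slope, intercept = np.polyfit(days, used, 1)   # GB/day growth rate
pool_gb = 50000                                # hypothetical pool size
days_left = (pool_gb - used[-1]) / slope
print(f"~{slope:.0f} GB/day, pool full in ~{days_left:.0f} days")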
