A Flaming Chicken
Feb 4, 2007
I haven't been able to read the whole thread, but I'm looking for some advice.

I'm now in charge of sorting out a (relative) cluster gently caress of letting 15 designers loose on Photoshop and saving the results to Google Drive. This "worked great" until someone hit the local drive limits and started to delete everyone else's files.

Someone is suggesting a Nasuni filer to handle this. However, given the volumes, I'm wondering if it could be standard NFS with an accelerator or if it needs to be a dedicated *vendor here* solution. I think we're only talking about a few terabytes with two/three offices distributed geographically. It doesn't feel insurmountable.

Any thoughts / advice anyone can share?


tehfeer
Jan 15, 2004
Do they speak english in WHAT?
I personally love our set of Nimble CS220s. It's a bummer their stock tanked so hard. I hope they are still around when these units come up for replacement.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
What is the current best practice for LUN sizes for VMware? I was under the impression that a 2TB LUN was preferred for manageability, and I know that VMware had a 2TB limit until not so long ago.

I ask because I found a 9TB LUN today and I am hoping it's not a bad thing.

Potato Salad
Oct 23, 2014

nobody cares


9TB is cool. You're fine -- so long as your backup system is handling it fine (assuming it interacts with the LUN itself at all).

Edit: I mean, in the interest of thoroughness, what's the 9TB LUN hosted on, and what are you using for backups?

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Potato Salad posted:

9TB is cool. You're fine -- so long as your backup system is handling it fine (assuming it interacts with the LUN itself at all).

Edit: I mean, in the interest of thoroughness, what's the 9TB LUN hosted on, and what are you using for backups?

It is on an HP 3PAR system. I am not familiar with it yet.

Backups? looool.

:negative:

Potato Salad
Oct 23, 2014

nobody cares


Consider Veeam for backups of VMware systems. It's cheap as far as backup products go, and it lets you throw backup data pretty much anywhere. No need for a backup destination that costs more than an SUV -- an inexpensive, low-end filer device can suit almost any small business' needs.

Not that you're asking for backup recommendations, but hey.

Edit: I know nothing about 3PAR appliances. Someone else may have familiarity.

Potato Salad fucked around with this message at 02:17 on Dec 8, 2015

Moey
Oct 22, 2010

I LIKE TO MOVE IT

mayodreams posted:

It is on an HP 3PAR system. I am not familiar with it yet.

Backups? looool.

:negative:

I have a few datastores around that size, but my storage is tierless. How is the underlying storage configured?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

mayodreams posted:

What is the current best practice for LUN sizes for VMware? I was under the impression that a 2TB LUN was preferred for manageability, and I know that VMware had a 2TB limit until not so long ago.

I ask because I found a 9TB LUN today and I am hoping it's not a bad thing.
There's no one-size-fits-all answer; it depends on your I/O workloads and your storage configuration. If you're running over FC or hardware iSCSI, your adapter has a queue per LUN with an assigned queue depth; if you're on software iSCSI, this is likely configurable and thus not as much of a concern. Your total maximum number of queued I/Os will be your LUN queue depth times the number of LUNs you're running I/O against, so spreading your workloads across multiple LUNs increases your aggregate I/O throughput and decreases your chance of delayed I/O on a single LUN stalling the rest of the queue.
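
A rough sketch of how to see those numbers on a host, assuming a 5.x-era ESXi box (field names and default depths vary by build and HBA driver):

# Per-LUN queue depth as the host sees it ("Device Max Queue Depth";
# 32 and 64 are common defaults depending on the HBA driver):
esxcli storage core device list | grep -E "Display Name|Device Max Queue Depth"

# The aggregate ceiling is queue depth x LUN count, so e.g. four LUNs
# at a depth of 32 give you 128 outstanding I/Os instead of 32:
echo $((4 * 32))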

Your other concern is metadata updates, which is a function of the number of VMFSes rather than the number of LUNs (a VMFS filesystem can have multiple extents). When you issue a metadata update, like creating/manipulating a filesystem snapshot or growing a thin-provisioned disk, you issue a metadata lock to the filesystem until that operation completes. This implies that if you have multiple operations fighting over that lock, they'll queue up and all your VMs in that queue will hang on I/O until that lock is clear and they get to do whatever operation it was they were waiting on. Backup systems like Veeam or PHD Virtual can generate a huge number of these snapshot operations, and lots of thin-provisioned VMs on the same volume will cause lots of contention if they're being written at the same time.

On the other hand, if you're running a single huge file server off of VMFS, or a bunch of VMs with thick-provisioned volumes, your 9 TB LUN probably isn't much cause for concern.

Lastly, VAAI makes the metadata piece much, much less of a problem than it was four or five years ago because the operations are now very, very fast, but you should still be aware of the performance considerations.
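
If you want to check whether your array is actually doing VAAI, a quick sketch (5.x-era esxcli syntax):

# ATS ("hardware assisted locking") is the primitive that matters here:
# with it, VMFS locks become atomic test-and-set operations on single
# blocks instead of SCSI reservations that briefly lock the whole LUN.
esxcli storage core device vaai status get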



e: as one other consideration, I generally tried to stay away from LUNs that size because I was using synchronous replication and that much data in an NLSAS tier might take a week or more to re-replicate depending on your foreground I/O.

Vulture Culture fucked around with this message at 03:49 on Dec 8, 2015

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
I appreciate all of the input.

I know these systems are very much a balance of needs and workflows. In this case, there are two FC LUNs on SAS, the 9TB and a 1.5TB. The rest are SATA via iSCSI.

With our configuration, we are talking a lot of VM storage, so I am definitely concerned about the queue depth issues.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

mayodreams posted:

I appreciate all of the input.

I know these systems are very much a balance of needs and workflows. In this case, there are two FC LUNs on SAS, the 9TB and a 1.5TB. The rest are SATA via iSCSI.

With our configuration, we are talking a lot of VM storage, so I am definitely concerned about the queue depth issues.

Large LUNs aren't necessarily a bad thing, depending on how many hosts you have, the size of your VMs, and most importantly whether your array supports VAAI. I have a number of customers on VMAX today that just use something like a 10TB datastore size.

You can track the queue via 'esxtop' during live troubleshooting and historically via vCenter performance charts.

The big metrics to watch out for though are going to be read and write latency.
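
For reference, a sketch of the esxtop fields worth watching (from the 5.x-era disk views; exact columns vary by version):

esxtop   # then press 'd' for the adapter view or 'u' for the per-device (LUN) view
# DQLEN      device queue depth
# ACTV/QUED  active vs. queued I/Os; QUED consistently > 0 means you're hitting the queue depth limit
# DAVG/cmd   latency from the array
# KAVG/cmd   latency added by the VMkernel (mostly queueing)
# GAVG/cmd   total latency the guest sees (roughly DAVG + KAVG)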

tehfeer
Jan 15, 2004
Do they speak english in WHAT?

Potato Salad posted:

Consider Veeam for backups of VMware systems. It's cheap as far as backup products go, and it lets you throw backup data pretty much anywhere. No need for a backup destination that costs more than an SUV -- an inexpensive, low-end filer device can suit almost any small business' needs.


I second the Veeam plug. We have tried a wide array of backup solutions from Backup Exec to AppAssure. Veeam surpasses all of them on cost, usability, and reliability.

Internet Explorer
Jun 1, 2005





I somehow managed to avoid Veeam up until recently. I like it a lot but I have a few major gripes. One, you can't really do disk exclusions. They say you can, but it requires listing all the disks you do want to back up rather than the disks you don't. The other is more significant. They have no real good solution for archival backup storage. They have GFS implemented, but it basically works out to needing to store a full backup for every restore point. If you want to keep 4 weeks, 12 months, and 5 years (which I think is entirely reasonable), you need to keep something like 31 + a few full backup sets. And that's just insane. Dedupe can help, but it's a kludge at best.

BonoMan
Feb 20, 2002

Jade Ear Joe
So in our ongoing storage woes/upgrades, we ended up purchasing 2 QNAP rack-mounted systems (main + expansion) and have about 120TB connected through a 10GbE switch to everyone, and all is good.

Now we have 3 mini-SAS RAID enclosures (1 x 10 TB, 2 x 4 TB, for 18 TB total) and drives that we want to turn into a sort of nearline backup that we'll use to dump to tape (also a mini-SAS LTO-6 drive).

We're looking for the cheapest solution to try to consolidate these into a single volume (for all intents and purposes).

Getting another rackmount enclosure might make my boss balk at the cost. But for half that we could get two 8-bay enclosures and daisy-chain them?

Something like this?
http://www.amazon.com/Sans-Digital-Enclosure-Mini-SAS-TR8X/dp/B004WNLQ5E

Don't know about that brand, but just using it as an example.

The 10TB enclosure is the only one with two ports, so we can't chain the other two to it.

GrandMaster
Aug 15, 2004
laidback
Do we have any 3PAR nerds here? I'm coming from EMC, and trying to make sense of the 3PAR LUN presentation - it looks really weird.

I would expect each host to have 4 paths to each LUN based on the zoning, but there appear to be 15 duplicates of each path - it's showing 60 paths per host per LUN. WAT? VMware isn't showing duplicate paths, just the 4 that should be there.


Walked
Apr 14, 2003

I have an EqualLogic I want to benchmark for an SQL environment. Any tips for tests to run with iometer? It's tough to pull from environment history as this is a new pilot under development, so I can only guess (mostly reporting, writes aren't terribly intensive, at least that's the plan).

Just trying to get a feel for what the array can handle - I'm not a storage guy by any means, so whatever tests I can concoct with iometer that'd make sense would be very helpful!
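
Edit: as a starting point, and purely my guesses from standard SQL I/O patterns rather than anything environment-specific, the access specs I'd sketch in iometer:

# 8K transfer,  100% random,     70% read / 30% write   (OLTP-ish data files)
# 64K transfer, 100% sequential, 100% read              (reporting / table scans)
# 64K transfer, 100% sequential, 100% write, 1 worker   (log-style writes)

Run each at a few queue depths (say 1, 8, 32) and watch how latency climbs against IOPS.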

BonoMan
Feb 20, 2002

Jade Ear Joe

BonoMan posted:

So in our ongoing storage woes/upgrades, we ended up purchasing 2 QNAP rack-mounted systems (main + expansion) and have about 120TB connected through a 10GbE switch to everyone, and all is good.

Now we have 3 mini-SAS RAID enclosures (1 x 10 TB, 2 x 4 TB, for 18 TB total) and drives that we want to turn into a sort of nearline backup that we'll use to dump to tape (also a mini-SAS LTO-6 drive).

We're looking for the cheapest solution to try to consolidate these into a single volume (for all intents and purposes).

Getting another rackmount enclosure might make my boss balk at the cost. But for half that we could get two 8-bay enclosures and daisy-chain them?

Something like this?
http://www.amazon.com/Sans-Digital-Enclosure-Mini-SAS-TR8X/dp/B004WNLQ5E

Don't know about that brand, but just using it as an example.

The 10TB enclosure is the only one with two ports, so we can't chain the other two to it.

nm with this by the way... someone walks in and goes "why not just sell the two small enclosures plus all the hard drives and just fill the large single SAS bay with 6 TB drives?" Problem solved.

Forest for the trees and all that.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

NetApp has purchased scale-out flash storage vendor SolidFire. Interesting choice for a company that generally doesn't make too many acquisitions.

KennyG
Oct 22, 2002
Here to blow my own horn.
Let's talk Isilon scale limitations. I have an app that relies heavily on SMB. https://en.m.wikipedia.org/wiki/Electronic_discovery. We have a file tree on a single Isilon cluster that contains more than 2 billion files. Each file is in a folder that's a subpath based on the fileId. The vendor's app runs this way and I can't reasonably expect them to redo it. Thankfully I get a path like [project]\[filecategory]\[00-99]\[00-99]\[00-99]\[00-99]\[0000000000-9999999999]\*.ext or SMB would cry even harder, but this is causing anything on the Isilon that walks the file tree to squeal like Ned Beatty in a swamp.

I really really really don't want to put the files in a Windows or Linux VM as this is a massive pain in the rear end for VM management. The benefits of quotas, hardware/metadata snaps, replication, scale-out, management of things, APIs, reporting, etc. that the Isilon does well are what drove us to make this decision. Unfortunately it's crumbling under the weight, and the app does not natively speak object. It requires Windows SMB, and every object gateway I've seen falls over far before we hit the billion-file mark.

Anyone got another idea for a scale-out, clustered file system that gives me substantial insight and management help the way OneFS should? I can't believe I'm looking for something that scales SMB better than Isilon for billions of files, but you don't know until you ask.
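
For scale, some back-of-the-envelope arithmetic (mine, assuming the bulk of the 2 billion files sits under that [00-99]x4 scheme):

echo $((100 ** 4))              # => 100000000 leaf directories
echo $((2000000000 / 100 ** 4)) # => ~20 files per leaf

The per-directory file counts are fine; it's the hundred million directory stats that make any tree walk brutal.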

some kinda jackal
Feb 25, 2003

 
 
I guess this is as close to the right thread as I'll find -- Anyone consider themselves good at Brocade 300 FC switches? I got a boatload of these when we decomm'd one of our production environments and I want to throw them up on the 'bay, but I'm not sure what the best way to factory reset these is. I've reset the passwords and that seems to let me in, but I'm not seeing anything specific in configShow. I've set the management ethernet to DHCP using ipaddrset, but beyond that is there anything I want to do?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Martytoof posted:

I guess this is as close to the right thread as I'll find -- Anyone consider themselves good at Brocade 300 FC switches? I got a boatload of these when we decomm'd one of our production environments and I want to throw them up on the 'bay, but I'm not sure what the best way to factory reset these is. I've reset the passwords and that seems to let me in, but I'm not seeing anything specific in configShow. I've set the management ethernet to DHCP using ipaddrset, but beyond that is there anything I want to do?

http://community.brocade.com/t5/Fibre-Channel-SAN/Reset-brocade-300-san/td-p/25426

You run a series of commands and it will do a factory reset. The only way to do this on Brocade switches is through the following sequence:
configdefault -all (resets switch configuration parameters to factory defaults)
passwddefault (resets account passwords to the defaults)
ipaddrset (Set to 10.77.77.77)
configure (step through the prompts and accept the defaults)
defzone --allaccess (sets the default zone policy back to all-access)
cfgclear;cfgsave (clears the zoning configuration and saves it)
switchname (resets the switch name)
chassisname (resets the chassis name)
snmpconfig (resets the SNMP configuration)

Do NOT run the licenseremove command from the forum link; the license is per switch and non-transferable.

some kinda jackal
Feb 25, 2003

 
 
Thanks for this. I also dug up that I needed to run:

aaaconfig --authspec local
aaaconfig --remove x.x.x.x -conf radius

Since there were RADIUS secrets in the auth config.

If I do a licenseshow, do you happen to know if the "hashes" before each license are the actual license key? Wondering if I should blur them out if I post license info on eBay.

Going to hang onto a pair of these to play with though. Messing with this stuff has kind of whetted my appetite.

Thanks Ants
May 21, 2004

#essereFerrari


If the license is per-switch then presumably it won't work on a different switch, so posting them up should be OK. I don't see much need to post anything other than the output from the switch listing what it's licensed for though.

some kinda jackal
Feb 25, 2003

 
 

Thanks Ants posted:

If the license is per-switch then presumably it won't work on a different switch, so posting them up should be OK. I don't see much need to post anything other than the output from the switch listing what it's licensed for though.

Good point. Thanks for the info again!

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Thanks Ants posted:

If the license is per-switch then presumably it won't work on a different switch, so posting them up should be OK. I don't see much need to post anything other than the output from the switch listing what it's licensed for though.

Pretty much this; the 'license' is really more of a per-device 'authenticity' certificate embedded in the device. It won't compromise anything to show it to others.

Amandyke
Nov 27, 2004

A wha?

KennyG posted:

Let's talk Isilon scale limitations. I have an app that relies heavily on SMB. https://en.m.wikipedia.org/wiki/Electronic_discovery. We have a file tree on a single Isilon cluster that contains more than 2 billion files. Each file is in a folder that's a subpath based on the fileId. The vendor's app runs this way and I can't reasonably expect them to redo it. Thankfully I get a path like [project]\[filecategory]\[00-99]\[00-99]\[00-99]\[00-99]\[0000000000-9999999999]\*.ext or SMB would cry even harder, but this is causing anything on the Isilon that walks the file tree to squeal like Ned Beatty in a swamp.

I really really really don't want to put the files in a Windows or Linux VM as this is a massive pain in the rear end for VM management. The benefits of quotas, hardware/metadata snaps, replication, scale-out, management of things, APIs, reporting, etc. that the Isilon does well are what drove us to make this decision. Unfortunately it's crumbling under the weight, and the app does not natively speak object. It requires Windows SMB, and every object gateway I've seen falls over far before we hit the billion-file mark.

Anyone got another idea for a scale-out, clustered file system that gives me substantial insight and management help the way OneFS should? I can't believe I'm looking for something that scales SMB better than Isilon for billions of files, but you don't know until you ask.

What version of OneFS? What nodes are you running? Are you using GNA?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

KennyG posted:

Let's talk Isilon scale limitations. I have an app that relies heavily on SMB. https://en.m.wikipedia.org/wiki/Electronic_discovery. We have a file tree on a single Isilon cluster that contains more than 2 billion files. Each file is in a folder that's a subpath based on the fileId. The vendor's app runs this way and I can't reasonably expect them to redo it. Thankfully I get a path like [project]\[filecategory]\[00-99]\[00-99]\[00-99]\[00-99]\[0000000000-9999999999]\*.ext or SMB would cry even harder, but this is causing anything on the Isilon that walks the file tree to squeal like Ned Beatty in a swamp.

I really really really don't want to put the files in a Windows or Linux VM as this is a massive pain in the rear end for VM management. The benefits of quotas, hardware/metadata snaps, replication, scale-out, management of things, APIs, reporting, etc. that the Isilon does well are what drove us to make this decision. Unfortunately it's crumbling under the weight, and the app does not natively speak object. It requires Windows SMB, and every object gateway I've seen falls over far before we hit the billion-file mark.

Anyone got another idea for a scale-out, clustered file system that gives me substantial insight and management help the way OneFS should? I can't believe I'm looking for something that scales SMB better than Isilon for billions of files, but you don't know until you ask.

It's been a long time since I looked at Isilon but I recall you could add SSD drives and it would use these for metadata - making such things as you describe faster. Are you using these?

(been a long time so I may have it mixed up or be flat out inaccurate with my thoughts)

Amandyke
Nov 27, 2004

A wha?

Vanilla posted:

It's been a long time since I looked at Isilon but I recall you could add SSD drives and it would use these for metadata - making such things as you describe faster. Are you using these?

(been a long time so I may have it mixed up or be flat out inaccurate with my thoughts)

That's what GNA does.

Thanks Ants
May 21, 2004

#essereFerrari


Would something built around an Isilon X210 be complete overkill for ~30TB of general design agency style files (anything from Office documents up to 400GB ProRes stuff) accessed from primarily Mac clients? I'm looking for tiering so that cold data can stay online but on slower disk, snapshots for easy recovery of user gently caress-ups, and the ability to mirror to another box across town in the event of a fire/flood/whatever.

I'm a bit out of the loop on NAS really, would appreciate any opinions on it. For what it's worth I wouldn't see it hitting the issues mentioned up there ^ as there isn't that sort of usage happening.

Amandyke
Nov 27, 2004

A wha?

Thanks Ants posted:

Would something built around an Isilon X210 be complete overkill for ~30TB of general design agency style files (anything from Office documents up to 400GB ProRes stuff) accessed from primarily Mac clients? I'm looking for tiering so that cold data can stay online but on slower disk, snapshots for easy recovery of user gently caress-ups, and the ability to mirror to another box across town in the event of a fire/flood/whatever.

I'm a bit out of the loop on NAS really, would appreciate any opinions on it. For what it's worth I wouldn't see it hitting the issues mentioned up there ^ as there isn't that sort of usage happening.

What kind of performance are you looking at needing? What kind of growth? How much of that data is cold?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Isilon is overkill for 30TB and tiering is basically dead technology in storage. Even the smallest config will end up with a good deal more storage than you need. Just buy a hybrid array that does NAS, like Tegile or NetApp; you'll likely come in a good bit cheaper.

Thanks Ants
May 21, 2004

#essereFerrari


Yeah, apologies, I brought no information into that question; it's a favour for someone who gave me very little to go on.

Can you go into what you mean about tiering being dead? Are people just going all in on flash or have I misunderstood?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Thanks Ants posted:

Yeah, apologies, I brought no information into that question; it's a favour for someone who gave me very little to go on.

Can you go into what you mean about tiering being dead? Are people just going all in on flash or have I misunderstood?

Pretty much everyone outside of Compellent uses flash as a journal or a cache, not a storage tier where data persists for long periods. Tiering is complicated and slow to react to changes. Caching is faster and easier to manage. Everyone does caching with flash these days, though they don't all do it the same way.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

tiering is basically dead technology in storage
Tiering to faster-than-nearline, anyway. Tiering to archival storage is alive and well, to the chagrin of everyone who has to deal with the idea :(

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Vulture Culture posted:

Tiering to faster-than-nearline, anyway. Tiering to archival storage is alive and well, to the chagrin of everyone who has to deal with the idea :(

Yea, but that's generally handled by backup software or some sort of object layer. Outside of some niche offerings that's not generally a storage array function. And even then, like you said, it's something that often sounds better on paper than it works in practice.

Thanks Ants
May 21, 2004

#essereFerrari


Goddam I am so out of touch on storage. I think I'll try and push this off to a VAR to solve and see what I can learn from the process.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Thanks Ants posted:

Goddam I am so out of touch on storage. I think I'll try and push this off to a VAR to solve and see what I can learn from the process.

There is a wide range of knowledge here. People working for and having experience with big vendors, small vendors, all flash vendors, backup vendors, cloud, etc.

Just give as much detail as possible (remembering it's a public forum) and I'm sure a lot of people here will be able to point you in the right direction and give you a list of a few products to pursue. Your requirements don't fit anywhere near the company I work for (all flash), but I used to work for a large vendor so I'm happy to give my 2c.

It may just be my experience, but I often find that VARs really do add little value but love to charge 30%+ margin. When customers work with the vendor direct and insist on pass-through, the VAR basically gets forced into very little margin - better for you.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Vanilla posted:

It may just be my experience, but I often find that VARs really do add little value but love to charge 30%+ margin. When customers work with the vendor direct and insist on pass-through, the VAR basically gets forced into very little margin - better for you.

This is highly variable. The VAR, in theory, owns the relationship with the customer and has a good understanding of their needs. Often they will be providing additional services to that customer beyond the ones for this specific vendor so they have a better understanding of what's required for integration.

I've been on both sides as vendor and VAR and it's really dependent on the people involved. As a VAR we've had vendors bypass us and deal direct with the customer and just use us as order fulfillment, and end up screwing things up terribly because they undersize the solution or don't understand the requirements. They just walk in and make a pitch and get a sale, and then we have to clean up the mess.

Most vendors will only work through the channel anyway and bypassing the VAR isn't possible. It's also not a great idea if you want them to pitch you to new accounts. And, of course, it's hard to expect great service when you start out the engagement by basically trying to cut them out of the deal.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

NippleFloss posted:

This is highly variable. The VAR, in theory, owns the relationship with the customer and has a good understanding of their needs. Often they will be providing additional services to that customer beyond the ones for this specific vendor so they have a better understanding of what's required for integration.

I've been on both sides as vendor and VAR and it's really dependent on the people involved. As a VAR we've had vendors bypass us and deal direct with the customer and just use us as order fulfillment, and end up screwing things up terribly because they undersize the solution or don't understand the requirements. They just walk in and make a pitch and get a sale, and then we have to clean up the mess.

Most vendors will only work through the channel anyway and bypassing the VAR isn't possible. It's also not a great idea if you want them to pitch you to new accounts. And, of course, it's hard to expect great service when you start out the engagement by basically trying to cut them out of the deal.

All valid points.

Maybe my experiences have been bitter :)

You're right in that the vast majority of vendors will only push through channel, but it's harder for a VAR to add high margin when they're essentially pass through. Valid if you're introducing a VAR to an account, not so if they are incumbent!

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Vanilla posted:

All valid points.

Maybe my experiences have been bitter :)

You're right in that the vast majority of vendors will only push through channel, but it's harder for a VAR to add high margin when they're essentially pass through. Valid if you're introducing a VAR to an account, not so if they are incumbent!

The problem from a vendor perspective is if you do that to your VAR you dramatically lower the chances that they are going to bring you into new deals or places where they are the incumbent. Especially in the storage market, where there's no shortage of other things to sell. Vendor/VAR politics are really weird and it's one of my least favorite parts of the job, but it seems like whenever they are working at cross purposes the customer ends up being the one that ultimately suffers.

The local Pure team is really good. They bring us ops and will occasionally even tip us off on ops that they know they aren't a fit for so we can pitch something else. So we really like them and try to bring them in wherever we can, do events with them, and help talk them up in the market. The technology is good, but so is the relationship.

On the other hand, the Nimble team can be really difficult to work with and they've screwed us over a few times, so if we don't have good technical reasons to pitch Nimble over something else then we will usually go with something else.


KennyG
Oct 22, 2002
Here to blow my own horn.

Amandyke posted:

What version of OneFS? What nodes are you running? Are you using GNA?

7.2.0.4 with six X410 nodes with GNA.

Our nodes are based around 3TB SEDs with 128GB RAM and 2 SSD drives for GNA.
