Internet Explorer
Jun 1, 2005





skipdogg posted:

Trying to decide right now between the VNXe 3300 and an EqualLogic 4100 series setup. 1 shelf of fast, 1 of slow for each.

Assuming pricing is pretty close to each other, any reason I should run away from either solution? The NAS functionality of the VNXe currently has us leaning that way.

If you think you are going to use the NAS portion then the VNXe would probably be the way to go. I will say that I went from EqualLogic to a VNX5300 and I loving hate it. It's a hacked-together piece of poo poo and EMC support sucks. That being said, I think the VNXe is significantly different from the VNX. Built off a different code base or something.


complex
Sep 16, 2003

madsushi posted:

iSCSI-only with awful VAAI support means it's not well-suited for VMWare
You need to replace your SSDs every few years

Not sure why "iSCSI-only" makes it bad for VMware. As for "awful" VAAI support, which primitives are you missing that you wish you had? (Note: This is a test) I found VAAI support to be great.

Every SSD will need to be replaced every few years, perhaps the ones in the Nimble sooner than some other array where they are not a pass-through cache. But the failure of an SSD in the Nimble is a non-event. The cache is simply temporarily smaller and there is no interruption in service. Want to test it? Pull an SSD live.
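The failure mode described here is easy to model. A toy sketch in Python (names made up for illustration, nothing to do with Nimble's actual firmware) of a pass-through read cache: losing an SSD just shrinks the cache, and every read still succeeds because disk remains the source of truth.

```python
# Toy model of a pass-through read cache: losing an SSD shrinks the
# cache but never interrupts service, because disk holds the only
# authoritative copy of every block. Illustrative only.

class PassThroughCache:
    def __init__(self, ssd_count):
        self.ssds = [dict() for _ in range(ssd_count)]  # block -> data

    def _shard(self, block):
        return self.ssds[block % len(self.ssds)] if self.ssds else None

    def read(self, block, disk):
        shard = self._shard(block)
        if shard is not None and block in shard:
            return shard[block]              # cache hit
        data = disk[block]                   # always recoverable from disk
        if shard is not None:
            shard[block] = data              # warm the (now smaller) cache
        return data

    def fail_ssd(self, index):
        del self.ssds[index]                 # "pull an SSD live": no data loss


disk = {n: f"data-{n}" for n in range(8)}
cache = PassThroughCache(ssd_count=2)
assert cache.read(3, disk) == "data-3"       # miss, then cached
cache.fail_ssd(0)                            # cache shrinks
assert cache.read(3, disk) == "data-3"       # still served, no interruption
```

Contrast this with an array where the SSDs hold primary data: there, an SSD failure means a rebuild, not just a smaller cache.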

Pile Of Garbage
May 28, 2007



complex posted:

As for "awful" VAAI support, which primitives are you missing that you wish you had? (Note: This is a test) I found VAAI support to be great.

According to the VMware Compatibility Guide the Nimble CS210 doesn't support any VAAI features in ESXi 5.0 or 5.0u1: http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=san&productid=21009 (Of course that info could be out of date).

complex
Sep 16, 2003

It is out of date. See the bottom of http://www.nimblestorage.com/products/software/ I can't point to anything more definitive that isn't behind a support login.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

complex posted:

Not sure why "iSCSI-only" makes it bad for VMware. As for "awful" VAAI support, which primitives are you missing that you wish you had? (Note: This is a test) I found VAAI support to be great.

Every SSD will need to be replaced every few years, perhaps the ones in the Nimble sooner than some other array where they are not a pass-through cache. But the failure of an SSD in the Nimble is a non-event. The cache is simply temporarily smaller and there is no interruption in service. Want to test it? Pull an SSD live.

I will definitely test pulling an SSD today. :parrot:

iSCSI-only isn't a problem with VMWare, if VAAI is present and supported. From my reading, Nimble's only VAAI primitive is Write-Same, which isn't what I was looking for.

What I do want to see, at a minimum, is Atomic Test & Set (ATS) which is critical for ensuring good iSCSI/LUN performance when you have multiple hosts/multiple VMs using the same LUN. I am specifically trying to avoid the LUN locking overload that plagues iSCSI deployments.
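For anyone unfamiliar with what ATS buys you, here's a toy model in Python (purely illustrative, not any vendor's implementation): without ATS, a VMFS metadata update takes a reservation on the whole LUN, serializing every host behind one lock; with ATS, hosts do an atomic compare-and-swap on just the sector they're touching.

```python
import threading

# Toy contrast between whole-LUN locking (SCSI reservations) and
# per-sector atomic test-and-set (VAAI ATS). Illustrative only.

class Lun:
    def __init__(self, sectors):
        self.data = [0] * sectors
        self.lun_lock = threading.Lock()          # legacy: one lock for everything
        self.sector_locks = [threading.Lock() for _ in range(sectors)]

    def legacy_update(self, sector, value):
        with self.lun_lock:                       # every host serializes here
            self.data[sector] = value

    def ats_update(self, sector, expected, value):
        with self.sector_locks[sector]:           # contention only per sector
            if self.data[sector] != expected:
                return False                      # lost the race; caller retries
            self.data[sector] = value
            return True

lun = Lun(sectors=16)
assert lun.ats_update(5, expected=0, value=42)    # compare-and-swap succeeds
assert not lun.ats_update(5, expected=0, value=7) # stale compare fails cleanly
assert lun.data[5] == 42
```

With many VMs per LUN, the legacy path means unrelated hosts stall on each other's metadata updates; the ATS path only ever contends on the exact region being changed.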

complex
Sep 16, 2003

If the array is only for a proof of concept or similar, ask your Nimble representative for the latest beta firmware.

Edit: concept for concert

complex fucked around with this message at 19:29 on Jun 14, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

madsushi posted:

I will definitely test pulling an SSD today. :parrot:

iSCSI-only isn't a problem with VMWare, if VAAI is present and supported. From my reading, Nimble's only VAAI primitive is Write-Same, which isn't what I was looking for.

What I do want to see, at a minimum, is Atomic Test & Set (ATS) which is critical for ensuring good iSCSI/LUN performance when you have multiple hosts/multiple VMs using the same LUN. I am specifically trying to avoid the LUN locking overload that plagues iSCSI deployments.

In a pinch, you always have the option of using multiple smaller LUNs like in the olden days.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
If you're going to make a SAN for use with virtual environments, it's pretty silly to not fully support VAAI.

Daddyo
Nov 3, 2000
Here's a question from left field. I've got an old CX300 lying around that I decided to use for VM testing, etc. Now I also have a new CX4 240 with a Celerra. My question is: can I hook the old SAN into the new NAS (and not screw with the existing 240)? Will it flat out refuse to talk to it?

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Is this your question: you want to put the CX300 behind your Celerra and use its block disk space as another NAS share? You can do that as long as interoperability between the CX300 and Celerra firmwares is supported.

Don't do this unless you really know what you're doing though. Celerras are FRAGILE.

Internet Explorer
Jun 1, 2005





paperchaseguy posted:

Is this your question: you want to put the CX300 behind your Celerra and use its block disk space as another NAS share? You can do that as long as interoperability between the CX300 and Celerra firmwares is supported.

Don't do this unless you really know what you're doing though. Celerras are FRAGILE.

As someone who is freshly learning our new VNX 5300s, whose NAS portion is handled by what is basically a Celerra... I can agree. What a loving headache.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

VMWare licenses are going to cost more than my host hardware and my SAN. Not combined, but drat.

Internet Explorer
Jun 1, 2005





skipdogg posted:

VMWare licenses are going to cost more than my host hardware and my SAN. Not combined, but drat.

If that is the case then you may want to look at Hyper-V or XenServer. Just a suggestion. VMware is absolutely the king of the hill, but it is not an insignificant amount of money. If you only have the money for VMware and a decent SAN, it may be worth it to get a better SAN and settle for Hyper-V or XenServer. Just my opinion.

Mierdaan
Sep 14, 2004

Pillbug

skipdogg posted:

VMWare licenses are going to cost more than my host hardware and my SAN. Not combined, but drat.

VMware licensing only gets expensive when you're going for lots of CPU licenses with all the bells and whistles turned on. If you're doing that, you've probably already spent a bundle on expensive shared storage and powerful cluster hosts. Can you detail your setup a little bit?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Oh I'm not complaining about the price, just found it funny the licenses will cost more than the 3 servers or the SAN.

We're putting a smaller cluster in a remote office. Right now we're looking at:

3 x DL360 G8's w/ 2x E5-2640's and 128GB RAM

EMC VNXe 3300 with 9TB raw of 15K and 12TB of 7.2K. The 12TB will just be a giant NFS/CIFS share; no VMs will be put on there.

VMware vSphere Enterprise Acceleration Kit w/ 3-year 24/7 support. (Basically this is 6x vSphere 5 Enterprise licenses with a vCenter license.)

I could definitely Hyper-V it up though if it came down to it. We're a VMWare shop in the larger sites though so I think they want to stay consistent.

Mierdaan
Sep 14, 2004

Pillbug
That licensing should run you about $22k or so. It's a good jump up in functionality over Essentials though. We just did the same thing.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Mierdaan posted:

That licensing should run you about $22k or so. It's a good jump up in functionality over Essentials though. We just did the same thing.

I must not be getting much of a discount then, because my licensing costs before tax are over $29k.

Getting quoted for

VMware vSphere Enterprise Acceleration Kit
( v. 5 ) - license - 6 processors

and

VMware Support and Subscription Production
Technical support - emergency phone consulting - 3 years - 24x7 - 30 min - for
VMware vSphere Enterprise Acceleration Kit ( v. 5 ) - 6 processors

Mierdaan
Sep 14, 2004

Pillbug
That's exactly what we just bought.

They had a deal going until 2012-06-15 that was 30% off the licensing (not the support), so we ended up paying $22-23k + tax.

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice
I'm about to start a new job and they just bought EMC VNX 5500s. As a hardcore NetApp guy I'm getting ready for fun. Quick questions: What NetApp model(s) does the 5500 compare to? Is MPFS really that much better than NFS? The folks at the new place were gushing about it, but they know very little about storage, so I take their enthusiasm with a grain of salt.

Internet Explorer
Jun 1, 2005





spoon daddy posted:

I'm about to start a new job and they just bought EMC VNX 5500s. As a hardcore NetApp guy I'm getting ready for fun. Quick questions: What NetApp model(s) does the 5500 compare to? Is MPFS really that much better than NFS? The folks at the new place were gushing about it, but they know very little about storage, so I take their enthusiasm with a grain of salt.

We just bought 2 VNX 5300 and I've never even heard of MPFS. So... Ah.. Yeah.

the spyder
Feb 18, 2011
Is anyone using or have used TrueNAS?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Internet Explorer posted:

We just bought 2 VNX 5300 and I've never even heard of MPFS. So... Ah.. Yeah.

MPFS is an EMC thing: you have an agent that talks to your EMC SAN and requests a file, the SAN gives you the blocks where that file lives, and your agent goes and gets those blocks via iSCSI or FC. The idea is that you're serving up files (like with NFS/CIFS) but that they're retrieved block-by-block via a block protocol, which has less overhead than NFS/CIFS.
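A rough sketch of that split in Python (all names made up for illustration): the "NAS head" hands back a block map over the file protocol, and the client then pulls the blocks itself over the block channel.

```python
# Toy sketch of the MPFS split: metadata (the block map) travels over
# the NAS protocol, but the payload is fetched block-by-block over a
# block channel (an iSCSI/FC stand-in). Names invented for illustration.

BLOCK_SIZE = 4

block_store = {}            # LBA -> bytes (the "SAN")
file_map = {}               # path -> list of LBAs (the "NAS head")

def write_file(path, payload, start_lba):
    lbas = []
    for i in range(0, len(payload), BLOCK_SIZE):
        lba = start_lba + i // BLOCK_SIZE
        block_store[lba] = payload[i:i + BLOCK_SIZE]
        lbas.append(lba)
    file_map[path] = lbas

def mpfs_read(path):
    lbas = file_map[path]                          # 1) ask NAS head for the map
    return b"".join(block_store[l] for l in lbas)  # 2) fetch blocks directly

write_file("/share/report.doc", b"hello mpfs!", start_lba=100)
assert mpfs_read("/share/report.doc") == b"hello mpfs!"
```

The win is that step 2 bypasses the file server's data path entirely; the cost is that every client needs the agent doing step 1.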

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

madsushi posted:

MPFS is an EMC thing: you have an agent that talks to your EMC SAN and requests a file, the SAN gives you the blocks where that file lives, and your agent goes and gets those blocks via iSCSI or FC. The idea is that you're serving up files (like with NFS/CIFS) but that they're retrieved block-by-block via a block protocol, which has less overhead than NFS/CIFS.

Is it really that compelling? My concern is running an agent. CIFS and NFS clients are (for the most part) stable and well known.

My experience with traditional NAS is that with 10Gb you could get great performance. I've seen NetApps serve 700 MB/s to individual database clients. I see the value of MPFS in an HPC environment, but I'm just running in a traditional enterprise environment. Though honestly, in an HPC environment my first instinct is InfiniBand, but my experience there is limited.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

spoon daddy posted:

Is it really that compelling? My concern is running an agent. CIFS and NFS clients are (for the most part) stable and well known.

My experience with traditional NAS is that with 10Gb you could get great performance. I've seen NetApps serve 700 MB/s to individual database clients. I see the value of MPFS in an HPC environment, but I'm just running in a traditional enterprise environment. Though honestly, in an HPC environment my first instinct is InfiniBand, but my experience there is limited.

As someone mentioned above, the chatter goes across traditional CIFS/NFS but the file itself is provided over Fibre Channel. I've seen it used for HPC, video editing, and even where there are millions of small files.

It's been around for what... 10 years? So while it was really drat cool, it's nothing special anymore and probably more of a headache. I think pNFS can now provide the same functionality without the need for an agent, and platforms like Isilon can scale out and provide GB/s of throughput with ease.

Why is this company using it now? There may well be some history behind it or simply it fits a particular use case where it works well.

thebigcow
Jan 3, 2001

Bully!

the spyder posted:

Is anyone using or have used TrueNAS?

No, but isn't it just FreeNAS updated to a current version of ZFS?

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

Vanilla posted:

As someone mentioned above, the chatter goes across traditional CIFS/NFS but the file itself is provided over Fibre Channel. I've seen it used for HPC, video editing, and even where there are millions of small files.

It's been around for what... 10 years? So while it was really drat cool, it's nothing special anymore and probably more of a headache. I think pNFS can now provide the same functionality without the need for an agent, and platforms like Isilon can scale out and provide GB/s of throughput with ease.

Why is this company using it now? There may well be some history behind it or simply it fits a particular use case where it works well.

The hiring manager is my old boss. He just got there 2 weeks ago and had little input on the decision. From what I can gather from him, the most likely reason they went with it is because they know very little about storage and it sounded cool.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

spoon daddy posted:

The hiring manager is my old boss. He just got there 2 weeks ago and had little input on the decision. From what I can gather from him, the most likely reason they went with it is because they know very little about storage and it sounded cool.

Hmmm, MPFS isn't really the kind of thing that is sold offhand. It takes some setting up, has a limited support matrix (agents), and a set number of use cases. The MPFS may just be for one or two systems; the rest of the NAS may well be just vanilla NFS.

What kind of company is it? What industry?

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

Vanilla posted:

Hmmm, MPFS isn't really the kind of thing that is sold offhand. It takes some setting up, has a limited support matrix (agents), and a set number of use cases. The MPFS may just be for one or two systems; the rest of the NAS may well be just vanilla NFS.

What kind of company is it? What industry?

Entertainment (not doing video), but it's just standard enterprise stuff such as Exchange, databases (MSSQL and Oracle), etc.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

spoon daddy posted:

Entertainment (not doing video), but it's just standard enterprise stuff such as Exchange, databases (MSSQL and Oracle), etc.

Unless you're doing big data analytics, fluid modeling, or some other type of work that is extremely IO intensive with high concurrency I truly can't see the point. Sure, it might make your CIFS access faster, but who cares if it takes 2ms to open a document instead of 15ms?

MC Fruit Stripe
Nov 26, 2002

around and around we go
Feh, just lobbing a softball out there, but I'm looking at the Unisphere website and it says that it's for low and mid range storage. Guh? What's above Unisphere in the EMC line?

Related to that, and as a result of realizing I know gently caress all about management software, could someone give me a quick rundown of what software you use for what SAN? 99% of my exposure is Equallogic, so I don't even know how you'd manage a NetApp SAN, for example.

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

NippleFloss posted:

Unless you're doing big data analytics, fluid modeling, or some other type of work that is extremely IO intensive with high concurrency I truly can't see the point. Sure, it might make your CIFS access faster, but who cares if it takes 2ms to open a document instead of 15ms?

Yeah, it didn't make sense. Got more info: apparently it was a hosed up decision made exclusively by the database team with no input from syseng. :wtf: Oh well, I'll just spend time learning it, and if it's a horrible time suck they can hire a dedicated engineer for it or get something else.

Serfer
Mar 10, 2003

The piss tape is real



Anyone have any experience with clustered file systems like GlusterFS or Swift? I had set up a test Gluster system a year or so ago, but it seems like it's gotten much better. Swift seems nice because of how it integrates with OpenStack. I probably wouldn't be doing anything but storing virtual machines on it, so some things like remote object access aren't terribly important, but being incredibly scalable with consumer hardware, asynchronous geo-replication, etc. seem like they would be very nice features to have.

Serfer fucked around with this message at 06:58 on Jun 19, 2012

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

MC Fruit Stripe posted:

Feh, just lobbing a softball out there, but I'm looking at the Unisphere website and it says that it's for low and mid range storage. Guh? What's above Unisphere in the EMC line?

Related to that, and as a result of realizing I know gently caress all about management software, could someone give me a quick rundown of what software you use for what SAN? 99% of my exposure is Equallogic, so I don't even know how you'd manage a NetApp SAN, for example.

EMC uses a different management console for Symmetrix. They've also got some special purpose storage like Avamar and Data Domain that run their own management software. And probably some other non-integrated storage that they've purchased from other vendors that isn't covered by Unisphere.

The NetApp management GUI is called System Manager, but outside of that you can manage the boxes directly via an SSH console, via a PowerShell toolkit, or through one of their applications that provides enterprise management workflows, like Provisioning and Protection Manager.

Hitachi uses an application called HiCommand on newer storage like the AMS and USP. Older arrays used a web app called Storage Navigator, which was not very good.

StorageTek uses Sun Common Array Management, which is also pretty bad. CAM also includes a CLI that is pretty goofy, but still better than CAM's GUI.

That covers all of the equipment I've worked with except for an old IBM DS4000 series FAStT, years ago, which used the generically named Storage Manager and which I remember very little about except that it was easy to understand and pretty unremarkable.

Most modern-day SANs are pretty easy to administer when it comes to basic activities like provisioning new storage, LUN masking, etc. The differentiators for large enterprises are how effectively you can automate common workflows or orchestrate operations between multiple arrays.

MC Fruit Stripe
Nov 26, 2002

around and around we go
I literally love you.

I've never had any problem getting around, whether it's Equallogic, Openfiler, or Unisphere, and the end goal is always the same, but I don't even know what the alternatives are, so a little bit of knowledge goes a long way.

Mierdaan
Sep 14, 2004

Pillbug

NippleFloss posted:

The NetApp management GUI is called System Manager, but outside of that you can manage the boxes directly via an SSH console,

Assuming it isn't a day ending in 'y' and your BMC has died a fiery death, requiring a BMC reboot. :argh:

Vanilla
Feb 24, 2002

Hay guys what's going on in th

NippleFloss posted:

Unless you're doing big data analytics, fluid modeling, or some other type of work that is extremely IO intensive with high concurrency I truly can't see the point. Sure, it might make your CIFS access faster, but who cares if it takes 2ms to open a document instead of 15ms?

It’s more about bandwidth than latency. I don’t even think latency would be improved.

Getting a large file or a number of large files over multiple FC connections would be a lot faster than a single 10/100 port. It's not so much of a bonus today, now that we have 1Gb ports, but it may be something implemented after the fact, once it was realised they'd really like more bandwidth for this one app.
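Back-of-the-envelope numbers (nominal line rates only, ignoring protocol overhead) show why that mattered:

```python
# Rough line-rate comparison: moving a 10 GB file over a single
# 100 Mb/s port vs. two 4 Gb/s FC links. Nominal rates only -- real
# throughput is lower once protocol overhead is counted.

file_bytes = 10 * 10**9

fast_ethernet = 100 * 10**6 / 8          # 12.5 MB/s
two_fc_4g     = 2 * 4 * 10**9 / 8        # 1000 MB/s aggregate

t_fe = file_bytes / fast_ethernet        # seconds over 100 Mb Ethernet
t_fc = file_bytes / two_fc_4g            # seconds over 2x 4Gb FC

assert round(t_fe) == 800                # ~13 minutes
assert round(t_fc) == 10                 # ~10 seconds
```

Even a single 1Gb port only closes a tenth of that gap, which is why the multi-path FC data plane was attractive back then.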

Vanilla
Feb 24, 2002

Hay guys what's going on in th

spoon daddy posted:

Yeah, it didn't make sense. Got more info: apparently it was a hosed up decision made exclusively by the database team with no input from syseng. :wtf: Oh well, I'll just spend time learning it, and if it's a horrible time suck they can hire a dedicated engineer for it or get something else.

Message me your email addy and i'll send you over some old training material if you want.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

For the NetApp admins in here, 8.1.1RC1 has been released. The big things to note in this release are:

- Support for VAAI copy offload and space reservation on NFS datastores. Previously, VAAI integration was limited to block protocols.
- Support for FlashPools. FlashPools are aggregates that mix HDDs and SSDs, with the SSDs acting as a cache layer. The cache can be configured as read-only cache (similar to FlashCache), as random-overwrite cache, as straight write cache, or some combination of the three. FlashPools can be created from existing aggregates by adding SSD drives to them.
- Continuous free space reallocation on aggregates. This is a background scanner that works in conjunction with the write allocator to identify fragmented free space in an aggregate and re-write it in a contiguous fashion. This provides more chunks of contiguous free space, allowing incoming writes to be full stripe writes, which improves performance on aged and nearly full aggregates. It is an aggregate-level option that needs to be enabled. It will consume CPU resources, but at a steady rate, so its performance impact is measurable and predictable. If you're not CPU-limited the impact will be nil.
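The free space reallocation idea is easy to picture with a toy compactor (conceptual sketch only, not WAFL internals): live blocks are rewritten so the free space forms one contiguous run, big enough for full stripe writes.

```python
# Toy sketch of free-space reallocation: live blocks are compacted so
# the free space behind them becomes one contiguous run, ready for
# full-stripe writes. Conceptual only -- not how ONTAP actually does it.

def reallocate(blocks):
    """blocks: list where None = free, anything else = live data."""
    live = [b for b in blocks if b is not None]
    free = len(blocks) - len(live)
    return live + [None] * free          # contiguous free tail

def longest_free_run(blocks):
    best = run = 0
    for b in blocks:
        run = run + 1 if b is None else 0
        best = max(best, run)
    return best

aged = ["a", None, "b", None, None, "c", None, "d"]
assert longest_free_run(aged) == 2               # fragmented free space
compacted = reallocate(aged)
assert compacted == ["a", "b", "c", "d", None, None, None, None]
assert longest_free_run(compacted) == 4          # full-stripe friendly
```

The real implementation works incrementally in the background rather than rewriting everything at once, which is why the CPU cost is steady and predictable.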

If you're running Data ONTAP Cluster-Mode then 8.1.1 provides support for Infinite Volumes, which allows volumes (currently on NFS) to scale to petabytes in size. Infinite Volumes are striped over multiple aggregates but accessed via a common namespace, so the segmentation is transparent to the user.


Vanilla posted:

It’s more about bandwidth than latency. I don’t even think latency would be improved.

Getting a large file or a number of large files over multiple FC connections would be a lot faster than a single 10/100 port. It’s not so much of a bonus today when we have 1Gb ports but may be something implemented after the fact when it was realised they’d really like more bandwidth on this one app.

SMB1 is a grossly inefficient protocol, to the point that much of the latency you see when accessing files over CIFS is protocol-induced, so anything that trims that down will help.

Riverbed does this with their WAN OP appliances. Sure, they dedupe the traffic stream but they also cull a lot of the redundant network traffic that CIFS generates at each endpoint so they only end up sending useful data.

Like I said, though, the difference would be negligible at best on anything other than a multi-node farm doing some serious I/O.
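Rough numbers (made up but plausible) on why round trips, not payload, dominate for a chatty protocol:

```python
# Why a chatty protocol hurts: total time is dominated by round trips,
# not payload. Illustrative figures only, not a measurement.

rtt = 0.5e-3                 # 0.5 ms LAN round trip
payload_time = 2e-3          # time to actually move the file data

chatty_round_trips = 30      # SMB1-style open/stat/read chatter
lean_round_trips = 3         # pipelined / culled by a WAN optimizer

t_chatty = chatty_round_trips * rtt + payload_time
t_lean = lean_round_trips * rtt + payload_time

assert round(t_chatty * 1000, 1) == 17.0   # 17 ms, mostly protocol overhead
assert round(t_lean * 1000, 1) == 3.5      # 3.5 ms
```

On a WAN, where the RTT is tens of milliseconds instead of fractions of one, the same arithmetic is why CIFS over a WAN is nearly unusable without an optimizer culling those round trips.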

Mierdaan posted:

Assuming it isn't a day ending in 'y' and your BMC has died a fiery death, requiring a BMC reboot. :argh:

You don't need your BMC running to access the filer via SSH. You can enable it using secureadmin and SSH directly to the filer's production IP (or any other IP it shares). The BMC provides console-level access over the network, which is another way to get in, but ONTAP runs an SSH daemon as well. We've got about 30 2040 HA pairs, and while the BMCs occasionally have problems, it's not all that common. Have you updated the BMC firmware recently?

Mierdaan
Sep 14, 2004

Pillbug

NippleFloss posted:

You don't need your BMC running to access the filer via SSH. You can enable it using secureadmin and SSH directly to the filer's production IP (or any other IP it shares). The BMC provides console-level access over the network, which is another way to get in, but ONTAP runs an SSH daemon as well. We've got about 30 2040 HA pairs, and while the BMCs occasionally have problems, it's not all that common. Have you updated the BMC firmware recently?

It's a 2020 we're phasing out anyways, so it's stuck on 7.x (7.3.3 specifically). The BMC is the only port not on the segregated storage network, so it's easiest just to use that even if the damned thing crashes occasionally.


Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!
So...how hosed are we?

director of IT posted:

HAY GUIZE IMMA MOVE ALL OUR FILES TO THIS UNUSED NAS BECAUSE WE NEED MORE SPACE WHAT COULD POSSIBLY GO WRONG HAHAHAHA

...we really did need more space. I expressed concerns about whether it was advisable to make this move without extensive testing. I was laughed off.

So, he did it anyway. Our entire file storage (except for databases and some other stuff like Exchange) got moved over the weekend to a Reldata appliance. This included shared network folders used by many people, as well as every user profile in the entire company (about 4,000 employees). To my great lack of surprise, the NTFS permissions on all of those folders and files (millions of them) essentially got put through a wood chipper/meat grinder/Blendtec blender/insert appropriate metaphor of destruction here. No one can get into their stuff. We can't fix it, because we don't have permissions to modify the folders. Admittedly I don't know much about how the appliance works but I'm guessing that it has its own filesystem and provides NTFS emulation of some sort. Poking around ACLs on folders I noticed "NODE-C\Administrators" which seems mighty suspicious to me. They're on the phone with Starboard right now trying to unfuck this.

We are highly reliant on centralized user profiles (everyone's path is \\fileserver\profiles\username) because the vast majority of our users are Citrix users, which means NONE OF THEIR loving APPS WORK. This has been going on for days and it still isn't fixed. I want to die.
