|
NetApp PS guy reporting in; I'd love to see a cross-comparison from an EMC customer who happens to be a NetApp user as well; I know there is a LONG love/hate relationship between the two... Also, to the dudes above me doing a POC of NetApp: ask questions, ask LOTS of questions... heck, PM me some if you want help. It's an amazing product (I'm a smidge of a fanboy...), but it isn't your traditional SAN at all...
|
# ¿ Oct 14, 2012 05:58 |
|
|
I've got 12 Pure arrays in production. I can validate a good amount of dedup results if you want; we get some VERY aggressive deduplication numbers across the board, but we have nearly dedicated arrays for specific purposes, so take that with a grain of salt. Note: I don't have a Nimble with deduplication, but I do have three Nimbles in the works at the moment.
|
# ¿ May 23, 2018 19:44 |
|
YOLOsubmarine posted:Just anecdotally my experience with Pure vs Tegile across my customer base is that the Pure stuff works as advertised and the Tegile stuff often does not. I agree with all of this. Pure makes a very good point of being matter-of-fact and point-blank about their offerings; more often than not, if you're on the fence, get a guarantee in writing and they'll honor it to a fault. When we POC'd our first two M20s, we were promised one of them could fit our entire virtual environment without choking (1,000+ VMs, very mixed workload) with 8.0:1+ deduplication, and the guarantee was that if they didn't meet it, they'd ship another disk pack to make up for it, for free. TL;DR: Pure's good stuff, I'm a fan.
|
# ¿ May 23, 2018 19:52 |
|
bull3964 posted:One of the crazier ratios I saw with our Pure was a 2TB file server we had. It was mixed content: images, docs, zip files, PDFs. For a brief time I had it on the same volume as its redundant partner. So, two 2TB VMDK files, both 90% full, with the same data on each. We've got a VERY poorly architected Oracle database that nets us 9.1:1 on average; they refuse to do any form of whitespace reclamation or data pruning within it, but that's whatever... 90% reduction on a mixed-workload file server is pretty dope.
|
# ¿ May 23, 2018 20:23 |
|
H110Hawk posted:Is your data encrypted before it's written to the device? Pre-deduplicated data will net returns this poor as well.
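The mechanics behind that question can be sketched with a toy hash-based block deduper (hypothetical code; real arrays fingerprint differently, but the principle holds): identical blocks collapse to a single stored copy, while data that was encrypted or already deduplicated upstream presents no repeated blocks for the array to find.

```python
import hashlib
import os

BLOCK = 4096  # toy 4 KiB block size; real arrays use other/variable sizes

def dedup_ratio(data: bytes) -> float:
    """Logical size divided by the size of the unique blocks actually kept."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Two "VMDKs" holding the same file server data: the duplicate copy is free.
base = os.urandom(100 * BLOCK)       # 100 distinct plaintext blocks
print(dedup_ratio(base + base))      # 2.0

# Each copy encrypted under its own key/IV (modeled here as fresh random
# bytes): no two blocks match, so the array has nothing to collapse.
print(dedup_ratio(os.urandom(100 * BLOCK) + os.urandom(100 * BLOCK)))  # 1.0
```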
|
# ¿ May 24, 2018 14:44 |
|
As for Isilon not being good for general purpose, imagine this: you have 15,000 active connections, most of a hospital, right? User data, appdata, data streaming constantly, video data writing all hours of the day. OneFS is all file-based protection, and each time you take a snapshot and replicate, it locks the filesystem for a moment, even a split second. You've got nearly 100 of these kicking off every 5 minutes because your dumb RPO requires it, and then a disk fails... If you don't have this piece of poo poo loaded to the gills with cache drives for metadata acceleration, the entire cluster is not only going to gag on its own lunch, but vomit all over the place as well.

No department share gives two fucks if you can do PB scale; they don't care how many thousands of SyncIQ jobs you claim to be able to choke down, they just want reliable data access. Generic application shares? They just want the poo poo accessible. Your finicky genomics processor, though? They give a poo poo about scaling to petabytes of data. Your weird Cisco video recorders? Same. Security footage? Same. 200TB of highly compressed and deduplicated Commvault data? I'd pass, actually; this poo poo bag appliance doesn't support sparse files. PMR systems that have billions of small files inside of 20TB? They may care, but most poo poo would run out of inodes before you hit the allocated space, depending on the mood of the electrons that day...

TL;DR: Isilon is finicky. Buy something else unless you need a huge rear end time sink and are looking at well over a petabyte in use, OR need a single contiguous filesystem that scales something dumb. For everything else, there are niche cheap SANs fronted by Windows Storage Server, Cohesity, Rubrik, and NetApp.

P.S. I'm not bitter... I swear.
|
# ¿ Jun 26, 2018 06:16 |
|
evil_bunnY posted:Thanks for the input regardless! More data on what to look for when acceptance testing is good. Happy to. A well-built one will run well; a poorly built one will run like a slug on a blisteringly hot driveway lined with salt. The difference between the two designs? How much flash you throw at it for caching and how
|
# ¿ Jun 27, 2018 15:23 |
|
Vulture Culture posted:"Their own subnet" sounds wasteful. You generally want to avoid L3 routing between your storage consumers and any high-performance storage volumes, as it adds a lot of latency, and it may dramatically complicate your efforts to use jumbo frames (if those would improve your deployment). But it's not gospel. There are lots of reasons not to do it this way, especially if the network isn't a performance bottleneck or if performance isn't really a concern in the first place. I agree with this until you start getting into the horrifically large enterprise space; at that point your latency is mitigated by overkill of equipment and breakneck processor speeds.
|
# ¿ Jul 13, 2018 14:16 |
|
Vulture Culture posted:I worked in academia supporting researchers and clinicians for a number of years; this is definitely not the circumstance of someone beginning an out-of-the-wheelhouse NAS question with "in my lab". Sure, not saying it is; your mileage may vary highly depending on situation and equipment. I'm in medical clinical/research currently, and can say I don't experience latency generated by VLAN segmentation; it's more commonly due to piss-poor applications.
|
# ¿ Jul 13, 2018 16:00 |
|
I agree that you don't want to let egregious layer 3 routing occur across the environment, especially in high-workload environments. But your general-purpose NAS serving up profiles, department shares, etc. won't notice a damned bit of difference. Your highly transactional workload, a la VMware, genomics, Oracle on NFS, etc., will suffer; I agree, not going to debate that.
|
# ¿ Jul 13, 2018 19:06 |
|
So, I'm having some MASSIVE issues prying data from a VNX5300 to a modern NetApp; no tool I've utilized so far will actually carry over permissions. Robocopy using /copyall spits error 31; using all of the /copy switches except S will net success, but the moment I try to actually *copy* security descriptors, it errors. NetApp's XCP errors on *some* permissions, citing that ACE type 170 is not supported yet, and fails to move the file it failed on. EMCOPY bombs out citing that it can't set the security descriptor, and doesn't even migrate the file it failed on. Icacls is the only thing I can find that will scrape and successfully apply permissions, but if you've ever used icacls on a path that contains 100,000+ tiny files, it's *very* time consuming, and for a hospital, I don't have hours to spare sometimes. Any suggestions? I'm not against looking at third-party tools such as Datadobi, but I don't have time to wine-and-dine for a tool. Five months to move 600TB of data, oof.
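If icacls is the only thing that applies cleanly, one way to claw back wall-clock time is to split the tree at the top level and run the slow save/restore passes in parallel. A rough sketch only: the command shapes assume icacls's documented /save, /restore, /t (recurse) and /c (continue on error) switches, and the share paths and directory names are made up.

```python
def icacls_jobs(src_root: str, dst_root: str, tops: list[str]) -> list[tuple[str, str]]:
    """Build one (save_cmd, restore_cmd) pair per top-level directory,
    so several ACL passes can run concurrently instead of one giant
    serial crawl over 100,000+ files."""
    jobs = []
    for top in tops:
        acl_file = f"{top}.acl"
        # Capture ACLs recursively from the source side...
        save = f'icacls "{src_root}\\{top}" /save "{acl_file}" /t /c'
        # ...and replay them under the destination parent.
        restore = f'icacls "{dst_root}" /restore "{acl_file}" /c'
        jobs.append((save, restore))
    return jobs

for save, restore in icacls_jobs(r"\\vnx5300\share", r"\\netapp\share",
                                 ["Radiology", "HR", "Finance"]):
    print(save)
    print(restore)
```

Each pair can then be fanned out across a handful of worker shells or a scheduler while robocopy moves the data itself with the non-security /copy flags that do succeed.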
|
# ¿ Jul 25, 2018 23:55 |
|
Thanks Ants posted:Can you not just restore your backups onto the new storage? Considered it; we're trying to fix some fucky groups and bad file structure :/
|
# ¿ Jul 26, 2018 03:04 |
|
YOLOsubmarine posted:I’ve used SecureCopy in the past to migrate CIFS data onto NetApp and had better luck with permissions handling than, for instance, robocopy. I'll give it a look. I think it's more likely something cocked up with the VNX; we had immense problems getting stuff off of the same appliance to an Isilon a while back.
|
# ¿ Jul 26, 2018 03:25 |
|
evil_bunnY posted:Robocopy constantly lost ACLs for us until we ran it elevated. Since then it’s been great. That's the only way I run it, but you're right; with other runs in the past, this was it.
|
# ¿ Jul 26, 2018 16:00 |
|
7K for that isn't necessarily unreasonable. I've paid much more for a 1.1TB HGST PCIe SSD card before, and definitely more for a 0+1 SSD stripe in a bunch of C240s.
|
# ¿ Nov 12, 2018 15:38 |
|
adorai posted:I am looking at buying some new arrays. I am currently using Nimble and Oracle ZFS storage, and I want to consolidate into a single array at each site. The two contenders are Pure and Nimble. I am pretty sure I will get Nimble for less money, but I like a few things about Pure. One really cool feature is snap to NFS. Anyway, I was wondering if anyone has experience with both arrays and could tell me if it really is worth paying a price premium for Pure. Any takers?

I have both, actually. I'm a fan of both, but some workloads are not great for Pure, where others run like greased lightning. We use Nimble as our performance-minded "doesn't deduplicate" block storage offering, commonly for video applications or very large VMDKs that don't require greased lightning but have large disks. If this says anything, though, we have nine separate Pure arrays and three Nimble:

2 running just Epic/Caché DBs
2 prod VMware/VDI, split for failure domains
1 dedicated SQL/other-DBs array
1 Oracle/AIX dedicated array for EDW nonsense
1 DR VDI
1 BC/DR VMware with fan-in from the two above
1 BC/DR Epic/Caché DB

I'd be more than happy to share honest deduplication/reduction numbers on Pure for our M20s, M50s, and M20R2s if you want.
|
# ¿ Nov 15, 2018 17:08 |
|
YOLOsubmarine posted:I prefer Pure to Nimble all flash because I think they handle deduplication better. I also think they’re more likely to exist and still be innovating in 5 years, versus another HP storage acquisition, and one that wasn’t even purchased principally for their storage product. True with ActiveCluster; we're not using it yet in prod, but it's super loving simple. Otherwise, Pure deduplication is pretty spot on; dynamic fingerprinting really can't be beat in most scenarios.
|
# ¿ Nov 15, 2018 21:34 |
|
I had a much better response typed up, but it boils down to this: Pure will do well with just about any workload you toss at it, deduplicated or not, but cramming a bunch of fat kids into a Cadillac isn't the most efficient use of space, which is why Nimble was a song and a dance cheaper to throw space hogs at versus Pure... but if your people want to fund it, absolutely do it. That said, it does well for our EDW workloads because our DBAs are just shy of useless: they refuse to use any capabilities of Oracle to compact, reclaim whitespace, clean, consolidate, you name it... they also refuse to use ASM to migrate any data whatsofuckingever, so the last time they did an export/import of the data to migrate disks, it was nearly 9TB in size... as we see attached, it's barely 800GB of unique data.
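To put numbers like that in context, the arithmetic is simple. A quick back-of-the-envelope (helper names and the 20 TB flash figure are mine, purely illustrative):

```python
def reduction_ratio(logical_gb: float, stored_gb: float) -> float:
    """Logical (host-written) data divided by what the array physically stores."""
    return logical_gb / stored_gb

def effective_capacity(raw_tb: float, ratio: float) -> float:
    """Usable logical capacity implied by a given reduction ratio."""
    return raw_tb * ratio

# The Oracle export above: ~9 TB logical boiled down to ~800 GB unique.
print(round(reduction_ratio(9000, 800), 2))   # 11.25, i.e. better than 11:1

# The 8.0:1 written guarantee from earlier in the thread: 20 TB of
# physical flash would need to swallow 160 TB of logical data to meet it.
print(effective_capacity(20, 8.0))            # 160.0
```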
|
# ¿ Nov 16, 2018 16:43 |
|
#4: I hate DCNM with a passion, but we're running a dated version due to purchasing our MDSes through EMC (predecessor's call). If you have the bandwidth to learn the CLI, do it; if you don't, stick to the GUI and call it a day.
|
# ¿ Jan 21, 2019 18:53 |
|
Kaddish posted:I mean, it's a meme in IT and also absolutely true. The same could really be said for other vendors, like EMC etc. I dunno; I'm pretty sure my management at the end of our Isilon days would have been liquidated if they'd bought into the OneFS 8 bullshit EMC was shilling. I'll never use another one unless it's dedicated S-tier for a niche, borderline turnkey use. Anything else is begging for disaster.
|
# ¿ Jul 26, 2021 00:47 |
|
I work with something like 12 Pure boxes and one FlashBlade; I swear, if I could set fire to the FlashBlade and not get sent straight to the poorhouse, I'd consider it.
|
# ¿ Dec 14, 2022 13:10 |
|
Langolas posted:I've had a number of friends go to work for Pure. All the ones that went FlashBlade left the company already to a competitor. All the ones in the other product groups are still there. Did they jump to VAST? Honestly, I wouldn't wish FlashBlade on my enemies. It's trash; in one calendar year of use, we've easily replaced 3/4 of the blades from failures. Their reasoning? Excessive overwrites. We're using it as a backup target with object lock; if you can't handle the waves, stay out of the backup space.
|
# ¿ Dec 17, 2022 21:49 |
|
Kaddish posted:Welp, just bought a NetApp C250. I haven't used ONTAP in like... 10 years. Looks like there's been a few changes! Learning about LUN configuration and what an SVM even is, like a little baby. Been doing ONTAP off and on for the last 12 years or so; I currently support 4.5 PB of it.
|
# ¿ Nov 28, 2023 06:01 |
|
|
Maneki Neko posted:We had a very boring/reliable (now owned by Quantum) ActiveScale cluster that we have been quite happy with. Can't go into details (lol, thanks legal agreements) but we did not have a good time with Cloudian. NetApp StorageGRID. It's pricey to start, but it is very reliable.
|
# ¿ Nov 28, 2023 06:01 |