|
Thanks Ants posted:Isilon on a casual glance seems to be the only scale-out NAS platform that doesn't try and downplay performance expectations. Is that EMC overreaching or is that a fair assessment? Pure’s FlashBlade is extremely fast. Also stuff like Panasas that no one’s ever heard of that is meant to be high performance. An HPC group used it at a previous job so I guess it did alright. But Isilon is fine. Good, even, if your main criterion is throughput, particularly on ingest. It’s definitely not general purpose storage though.
|
# ? Jun 20, 2018 00:22 |
|
For scale out performance, Isilon is fine, if a bit pricey. If you have petabyte-scale needs it’d be on my shopping list. Their primary engineers left and founded Qumulo. Similar arch. It’s a startup, so ymmv. Panasas is fine, also kinda pricey. See them in enterprise HPC usually. It might be worthwhile to talk to IBM about Spectrum Scale; they’ve been surprisingly competitive on some projects. DDN has some options that aren’t bad. People are doing interesting stuff with Ceph but that’s still pretty green. RH’s support licensing is kinda high. Nearly all of these are built around large block IO. Most parallel file systems do poorly on metadata performance.
|
# ? Jun 20, 2018 02:27 |
|
PCjr sidecar posted:Nearly all of these are built around large block IO. Most parallel file systems do poorly on metadata performance. The interesting thing about FlashBlade is that it’s not terrible on small block/object random IO. You wouldn’t necessarily run a bunch of VMs off of it but you can use it to store an OLTP database and be happy with the results. That’s definitely not true with Isilon.
|
# ? Jun 20, 2018 03:40 |
|
YOLOsubmarine posted:But Isilon is fine. Good, even, if your main criterion is throughput, particularly on ingest. It’s definitely not general purpose storage though. YOLOsubmarine posted:You wouldn’t necessarily run a bunch of VMs off of it but you can use it to store an OLTP database and be happy with the results. That’s definitely not true with Isilon.
|
# ? Jun 20, 2018 12:28 |
|
YOLOsubmarine posted:The interesting thing about Flashblade is that it’s not terrible on small block/object random IO. You wouldn’t necessarily run a bunch of VMs off of it but you can use it to store an OLTP database and be happy with the results. That’s definitely not true with Isilon. That's actually depressing. I would have expected FlashBlade to be good for VMs, just based on my experience with Pure as a vendor. Do you know what slows it down too much for VM storage?
|
# ? Jun 20, 2018 16:15 |
|
CDW has been loving great, y'all are completely correct that it comes down to your account manager. The only times I can get poo poo cheaper from Dell are when I want something skirting supported configuration to do something a little fucky but interesting to the regional sales manager.
|
# ? Jun 20, 2018 16:47 |
|
Zorak of Michigan posted:That's actually depressing. I would have expected FlashBlade to be good for VMs, just based on my experience with Pure as a vendor. Do you know what slows it down too much for VM storage? You *could* run VMs off of it, that’s just not its best use. It’s not tuned for extremely low latency the way FlashArrays are. It’s tuned for great throughput during big block sequential work and reasonable latency during random IO, and for handling very large datasets in a single namespace. Generally, for things like general purpose virtual infrastructure, latency is the critical metric and you don’t need hundreds of TB or even petabytes in a single namespace, so FlashArray is going to be a better fit.
|
# ? Jun 20, 2018 16:57 |
|
Also, Tintri’s new CEO quit after just three months. The local SE is telling us that they can’t get any answers from management about what to tell customers about the future of their support contracts. We’re gonna have a few real unhappy customers, including some pretty big IT companies.
|
# ? Jun 20, 2018 21:04 |
|
YOLOsubmarine posted:Also, Tintri’s new CEO quit after just three months. The local SE is telling us that they can’t get any answers from management about what to tell customers about the future of their support contracts. We’re gonna have a few real unhappy customers, including some pretty big IT companies. They're about to be delisted on NASDAQ. They will be out of cash by the end of the month if they don't get help. Don't hold your breath.
|
# ? Jun 21, 2018 00:33 |
|
Richard Noggin posted:They're about to be delisted on NASDAQ. They will be out of cash by the end of the month if they don't get help. Don't hold your breath. They’ve got no chance of getting acquired until they go through bankruptcy. The question is whether they can manage a bankruptcy, restructuring, and acquisition in a way that allows them to maintain their support organization to honor support contracts. They aren’t getting any more capital anywhere other than acquisition so that’s really the only option. Hopefully they manage to do something to make it right for existing customers. Violin has been a penny stock for years but they still exist somehow, so Tintri still has a shot at bare subsistence.
|
# ? Jun 21, 2018 00:45 |
|
I totally forgot about Violin, yikes.
|
# ? Jun 21, 2018 00:59 |
|
YOLOsubmarine posted:They’ve got no chance of getting acquired until they go through bankruptcy. The question is can manage a bankruptcy, restructure and acquisition in a way that allows them to maintain their support organization to honor support contracts. I don't see any value for the investor. They weren't able to keep the product afloat and this has essentially poisoned the entire line. Would anyone really buy Tintri after this? I sure as hell wouldn't.
|
# ? Jun 21, 2018 13:43 |
|
Richard Noggin posted:I don't see any value for the investor. They weren't able to keep the product afloat and this has essentially poisoned the entire line. Would anyone really buy Tintri after this? I sure as hell wouldn't. Why would anyone buy Violin? And yet someone did. If it’s cheap enough it’s probably worth it to scrounge for IP.
|
# ? Jun 21, 2018 20:23 |
|
underlig posted:The problem is that the CSVs were all set with sofs1 as owner, and that server shut down two days ago. The problems with ownership continued, and I wish I had contacted Microsoft right away since this has been a headache and major stress factor for me. When trying to find more information about the Fibre Channel cards in the fileservers I found references to software for controlling them called OneCommand Manager. This was not installed on any of the servers; in fact I could not find any software to configure the cards. Once I installed OneCommand I saw that the SOFS01 FC card was set to talk iSCSI and the SOFS02 FC card was set for RoCE. The only internal documentation for the cluster is: quote:SOFS are created through MDT from deploy01. TS for SOFS exists, just choose SOFS TS and enter the name (sofs01,sofs02). The thing that fixed my problems was to reboot the controllers on the SAN, first the management controllers and then the storage controller. When the SC rebooted the primary MC also reset/crashed; this time it wrote an event in the logfiles that I'll get HPE to check out tomorrow, but once it all came online again the volume lock was gone, I could set any SOFS as owner, I could move ownership, and everything is now running perfectly. I suspect the iSCSI/RoCE mode on sofs01 was because HPE changed out the motherboard and that setting is handled somewhere on the motherboard, setting iSCSI as default.
|
# ? Jun 24, 2018 11:09 |
|
As for Isilon not being good for general purpose: imagine this. You have 15,000 active connections, most of a hospital, right? User data, appdata, data streaming constantly, video data writing all hours of the day, right? The fact that OneFS is all file-based protection, the fact that each time you take a snapshot and replicate it locks the filesystem for a moment, even for a split second, and you've got nearly 100 of these kicking off every 5 minutes because your dumb RPO requires it; and a disk fails... If you don't have this piece of poo poo loaded to the gills with cache drives for metadata acceleration, the entire cluster is not only going to gag on its own lunch, but vomit all over the place as well. No department share gives two fucks if you can do PB scale, they don't care how many thousands of SyncIQ jobs you claim to be able to choke down, they just want reliable data access. Generic application shares? They just want the poo poo accessible. Your finicky genomics processor though? They give a poo poo about scaling to petabytes of data. Your weird Cisco video recorders? Same. Security footage? Same. 200TB of highly compressed and deduplicated Commvault data? I'd pass actually, this poo poo bag appliance doesn't support sparse files. PMR systems that have billions of small files inside of 20TB? They may care, but most poo poo would run out of inodes before you hit the allocated space, depending on queer electrons that day... TL;DR: Isilon is finicky, buy something else unless you need a huge rear end time sink and are looking at well over a petabyte in use, OR need a single contiguous filesystem that scales something dumb. For everything else, there are niche cheap SANs fronted by Windows Storage Server, Cohesity, Rubrik, and NetApp. P.S. I'm not bitter... I swear.
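The "run out of inodes before you hit the allocated space" point is easy to sanity-check with back-of-envelope arithmetic. A rough sketch; the inode count and average file sizes below are made-up illustrative numbers, not any real filesystem's limits:

```python
# Rough sketch: does the inode table fill before raw capacity does?
# Assumes a fixed inode budget set at format time (hypothetical numbers).

def inodes_exhaust_first(volume_bytes, inode_count, avg_file_bytes):
    """True if you can create more files than you have inodes for."""
    files_until_space_full = volume_bytes // avg_file_bytes
    return inode_count < files_until_space_full

TB = 10**12
# 20 TB volume with a 1-billion-inode budget:
print(inodes_exhaust_first(20 * TB, 10**9, 4_096))      # 4 KB files: inodes run out first
print(inodes_exhaust_first(20 * TB, 10**9, 1_000_000))  # 1 MB files: space runs out first
```

With billions of 4 KB records, the volume could hold ~4.9 billion files by capacity but only has a billion inodes, so the inode table is the wall you hit first.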
|
# ? Jun 26, 2018 06:16 |
|
Thanks for the input regardless! More data on what to look for when acceptance testing is good.
|
# ? Jun 26, 2018 09:46 |
|
PCjr sidecar posted:It might be worthwhile to talk to IBM about Spectrum Scale; they’ve been surprisingly competitive on some projects.
|
# ? Jun 26, 2018 12:42 |
|
evil_bunnY posted:Thanks for the input regardless! More data on what to look for when acceptance testing is good. Happy to. A well built one will run well, a poorly built one will run like a slug on a blisteringly hot driveway lined with salt. The difference between the two designs? How much flash you throw at it for caching and how
|
# ? Jun 27, 2018 15:23 |
|
The train I’m getting on is basically an extension of a system that’s been running well for years already. I really appreciate the additional input.
|
# ? Jun 28, 2018 11:11 |
|
https://www.ddn.com/press-releases/ddn-storage-acquires-tintri/
|
# ? Jul 10, 2018 18:32 |
|
I've always heard that it's best to keep storage arrays on their own subnet. In my lab we have multiple NASes using NFS. Is there any benefit to putting the NASes on their own subnet, and is it even possible here? Also, if I want to put them on a different physical switch, wouldn't I need a router so the clients can talk to the storage? That doesn't seem beneficial, but I want to try to follow best practices.
|
# ? Jul 10, 2018 23:08 |
|
Raere posted:I've always heard that it's best to keep storage arrays on their own subnet. In my lab we have multiple NASes using NFS. Does it benefit, or is it even possible to put the NASes on their own subnet? Also, if I want to put them on a different physical switch, wouldn't I need a router so the clients can talk to the storage? That doesn't seem beneficial, but I want to try to follow best practices. If you have to ask, it sounds like you don't have a clear goal in mind. That's often a good signal to leave it alone.
|
# ? Jul 11, 2018 04:10 |
|
Vulture Culture posted:"Their own subnet" sounds wasteful. You generally want to avoid L3 routing between your storage consumers and any high-performance storage volumes, as it does add latency, and it may dramatically complicate your efforts to use jumbo frames (if those would improve your deployment). But it's not gospel. There are lots of reasons not to do it this way, especially if the network isn't a performance bottleneck or if performance isn't really a concern in the first place. I agree with this until you start getting into the horrifically large enterprise space, at that point your latency is mitigated by overkill of equipment and breakneck speed of processors.
|
# ? Jul 13, 2018 14:16 |
|
kzersatz posted:I agree with this until you start getting into the horrifically large enterprise space, at that point your latency is mitigated by overkill of equipment and breakneck speed of processors.
|
# ? Jul 13, 2018 14:33 |
|
Vulture Culture posted:I worked in academia supporting researchers and clinicians for a number of years; this is definitely not the circumstance of someone beginning an out-of-the-wheelhouse NAS question with "in my lab". Sure, not saying it is, your mileage may vary highly depending on situation and equipment. I'm in medical Clinical/Research currently, and can say I don't experience latency generated by vlan segmentation, more commonly due to piss poor applications.
|
# ? Jul 13, 2018 16:00 |
|
kzersatz posted:Sure, not saying it is, your mileage may vary highly depending on situation and equipment. Adding unnecessary latency to your storage path is always a good thing to avoid, even if it’s only very recently that NVMe has pushed access latency to the point where interconnect latency isn’t a rounding error. Plus hardware is less important than network design. It’s easy to end up with oversubscribed router ports if you’re hairpinning a bunch of traffic from your ToR switches to a core router and back and expecting to run high performance storage over that. And, of course, a layer 3 boundary often implies that firewall inspection is also happening, which adds further latency. If you’re trying to build a high performance storage network it’s nice to not have to worry about routing table exhaustion or tcam exhaustion or whatever causing issues. Outside of those with pitifully small port buffers modern switches are generally very consistent performers in a way that firewalls and even routers may not be.
|
# ? Jul 13, 2018 17:04 |
|
YOLOsubmarine posted:Adding unnecessary latency to your storage path is always a good thing to avoid, even if it’s only very recently that NVMe has pushed access latency to the point where interconnect latency isn’t a rounding error. Plus hardware is less important than network design. It’s easy to end up with oversubscribed router ports if you’re hairpinning a bunch of traffic from your ToR switches to a core router and back and expecting to run high performance storage over that. These days everything is a layer3 boundary, which is making stateful inspection less of a default on layer3 interconnections on the "trust" side. (You can poorly setup a network in any configuration.) Pitifully small port buffers on modern switches are going to be a bigger issue in general. You're not going through a "router" in the traditional sense just becayse it's a layer3 interconnection. In Juniper land you're highly likely to layer3 interconnect all of your QFX devices, but only when you start getting into full tables or inter-site (etc) connections would you hit a MX router. NVMe is a game changer no matter where you put it, like going from 15k SAS to normal SSDs 7-10 years ago.
|
# ? Jul 13, 2018 18:12 |
|
I agree that you don't want to let egregious layer 3 routing occur across the environment, especially in high workload environments. But your general purpose NAS serving up profiles, department shares, etc. won't notice a damned bit of difference. Your highly transactional workload, a la VMware, genomics, Oracle on NFS, etc., will suffer, I agree, not going to debate that.
|
# ? Jul 13, 2018 19:06 |
|
we have multiple unrouted layer 2 subnets for storage. No reason to introduce any extra latency when you can avoid it.
|
# ? Jul 13, 2018 22:52 |
|
H110Hawk posted:These days everything is a layer 3 boundary, which is making stateful inspection less of a default on layer 3 interconnections on the "trust" side. (You can poorly set up a network in any configuration.) Pitifully small port buffers on modern switches are going to be a bigger issue in general. You're not going through a "router" in the traditional sense just because it's a layer 3 interconnection. In Juniper land you're highly likely to layer 3 interconnect all of your QFX devices, but only when you start getting into full tables or inter-site (etc.) connections would you hit an MX router. There are still a *lot* of places that hairpin traffic north-south to a routing core or for firewall inspection (or both on the same device). Like yeah, QFabric or FabricPath or ACI or just plain ole SVIs and layer 3 switches means you see layer 3 boundaries in more places than you might have previously, but there are still a lot of networks out there that are very traditional in design. And as a storage dude you don’t generally have any control over that, so often simply sticking to layer 2, or even a dedicated physically separate storage network, is the safer bet if your workloads are very latency sensitive. And yes, NVMe will be a big deal no matter where it sits, but it’s going to force people to think much more carefully about storage networking than they traditionally have, because up until NVMe just about any functional 10GbE network (i.e. not exhausting port buffers or constantly suffering spanning tree events) was good enough that the added latency of even an excessively long network path was a few orders of magnitude lower than the storage response time.
|
# ? Jul 13, 2018 22:57 |
|
NVMEoF is pretty cool, and there’s some fun things being built on top of it.
|
# ? Jul 13, 2018 23:16 |
|
adorai posted:we have multiple unrouted layer 2 subnets for storage. No reason to introduce any extra latency when you can avoid it. Same here.
|
# ? Jul 15, 2018 05:52 |
|
Trip report: RDMA is
|
# ? Jul 17, 2018 04:12 |
|
adorai posted:we have multiple unrouted layer 2 subnets for storage. No reason to introduce any extra latency when you can avoid it.
|
# ? Jul 17, 2018 04:12 |
|
Hoping this might be the right place to ask this question. I'm currently doing some research on upgrading our storage at work for a subset of workers. Now, my work doesn't and won't employ a regular IT or sysadmin person, so that's why me, a 3D/VFX guy, is here asking this question (it's an uphill battle I've fought for years). We have about 8 designers at an ad agency that, at the moment, are accessing an old NAS storage system that's proved a bit slow for them. So we'd like to upgrade it. Currently our video team is accessing a nice 10GbE system and we all work from that 100TB of centralized storage. It's fast enough for us to do 4K video work off of. Love it. The designers want the same ability to be able to work off their own central storage like we do. At the moment, for large files (like their 1 gig Photoshop files or large InDesign projects), they usually just copy it over, work on it and move it back. They claim they get bogged down or crash when they try to work off the network. So my question is kind of two parts. First: is the "style" of file access different for these two departments? I feel like the video projects read from the raw files "on the fly" whereas for larger print design projects, the computer usually tries to load the whole file into memory, and that may be causing the slowness they experience. That it doesn't really read Photoshop files on the fly. And second: should an upgraded NAS with trunked 1 gig ports help alleviate this problem? Right now they're just connected to an old NAS through a single 1 gig port. Sorry if that's kind of a vague spaghetti question! Right now I'm just looking at upgrading to this (https://www.qnap.com/en-us/product/ts-873u-rp) and just trunking the ports and calling it a day.
|
# ? Jul 18, 2018 16:05 |
|
I think you are on the right track with that QNAP product with the 10GbE ports. 2x1Gb ports would yield double the speed at best, and since LACP balances per-flow, a single client's session still tops out at 1Gb. You might be running into a situation where your older NAS can't saturate gigabit, but it's hard to say. I don't know off hand how Photoshop handles caching huge files, but do the workstations involved have enough RAM to load these 1GB files completely?
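To put rough numbers on the link-speed question, here's a quick transfer-time estimate. The ~80% wire-efficiency figure is an assumption (protocol overhead and disk speed eat into line rate), so treat these as order-of-magnitude values:

```python
# Estimate wall-clock time to pull a large Photoshop file over the network.
# The 0.8 efficiency factor is a guess at achievable fraction of line rate.

def transfer_seconds(file_bytes, link_gbps, efficiency=0.8):
    usable_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return file_bytes / usable_bytes_per_sec

GB = 10**9
for gbps in (1, 10):
    t = transfer_seconds(1 * GB, gbps)
    print(f"1 GB file over {gbps:2d} GbE: ~{t:.1f} s")  # ~10 s vs ~1 s
```

Note that with a 2x1Gb trunk the single-user number stays at the 1GbE figure, since one SMB/NFS session rides one link; the aggregate only helps when several designers hit the NAS at once.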
|
# ? Jul 18, 2018 16:41 |
|
redeyes posted:I think you are on the right track with that Qnap product with the 10Gbe ports. 2x1Gb ports would yield double the speed at best. You might be running into a situation where your older NAS can't saturate gigabit but it's hard to say. I don't know off hand how photoshop handles caching huge files but do the workstations involved have enough RAM to load these 1GB files completely? They should. They're all fairly new iMacs and I've upgraded the RAM in all of them to 24 gigs or better. With that note, we won't be using the 10GbE like we do for the video side of things. The only way to do that, that I can tell, with the iMacs is to get expensive adapters and then run new cable for each one. Management would just lol in my face about that one!
|
# ? Jul 18, 2018 16:43 |
|
I have no idea if this knowledge is still current or not, but in general Adobe products seem to have a weird aversion to network storage. If they are saying they get bogged down or crash when working "off the network" I would first work with Adobe to see if what they are doing is supported on network shares before dropping a bunch of money.
|
# ? Jul 18, 2018 21:59 |
|
Adobe will not support working from network shares under any circumstances. Yes it's archaic and nobody actually works that way, but the only Adobe-approved workflow is to copy the file locally, work on it, and then move it back onto the share.
|
# ? Jul 18, 2018 22:23 |
|
Yeesh. I'm guessing y'all are talking about non-video products right? Most companies that use Adobe video software (premiere and after effects) work pretty exclusively from network shares (in some form or another).
|
# ? Jul 18, 2018 22:26 |