|
Maneki Neko posted:Nope we actually had a pretty good Dell team pre-merger
|
# ? Dec 15, 2016 13:15 |
|
|
|
evil_bunnY posted:EMC sales folks i've encountered have by and large been overpaid over-aggro sociopaths so that's no surprise.
|
# ? Dec 15, 2016 16:12 |
|
Can confirm that EMC sales people are the worst. It's a company that has multiple in-house products that compete with each other, sold by different teams, who try to screw one another out of deals. You'll get Avamar people trying to steal deals from Data Domain people, or Isilon people trying to steal deals from VNX people. It's guaranteed to create and foster only the most lovely and amoral sales practices and people.
|
# ? Dec 15, 2016 21:02 |
big money big clit posted:Can confirm that EMC sales people are the worst. It's a company that has multiple in house products that compete with each other, sold by different teams, who try to screw one another out of deals. You'll get Avamar people trying to steal deals from Data Domain people, or Isilon people trying to steal deals from VNX people. They just merged and made a few of my sales guys disappear. With the news that they merged two of the big tech departments together, hopefully that competition poo poo starts to go away. There are use cases for everything; you just have to get someone to look at your environment and tell the sales shits to gently caress off.
|
|
# ? Dec 15, 2016 21:28 |
|
HP just picked up Simplivity for $650m. It will be interesting to see where they take the line, as I believe Simplivity's stuff is built on top of both Dell and Cisco hardware.
|
# ? Jan 18, 2017 16:30 |
|
Can I just get Windows Storage Server in a 2U box with a bunch of disks up front and a couple of nodes in the rear with a shared SAS backplane, thanks.
|
# ? Jan 18, 2017 17:37 |
|
Richard Noggin posted:HP just picked up Simplivity for $650m. It will be interesting to see where they take the line, as I believe Simplivity's stuff is built on top of both Dell and Cisco hardware. Simplivity was built to be true software-defined hyper-converged, so it doesn't really matter what it runs on. They had a deal with Cisco for a while to resell their software with UCS, but it never went anywhere and Cisco has been working with Springpath instead for the past year or so. This seems like one of those acquisitions that is destined to never pick up any momentum, and slowly wither on the vine until you hear about layoffs three years from now where 90 percent of the team is let go.
|
# ? Jan 18, 2017 17:42 |
|
I'm still waiting for LeftHand to make its triumphant return!
|
# ? Jan 18, 2017 17:50 |
|
big money big clit posted:This seems like one of those acquisitions that is destined to never pick up any momentum, and slowly wither on the vine until you hear about layoffs three years from now where 90 percent of the team is let go.
|
# ? Jan 18, 2017 18:08 |
|
Vulture Culture posted:Every HP acquisition post-Compaq, in other words. More like every big company purchase of a startup since 2000. Has this strategy ever worked?
|
# ? Jan 18, 2017 21:28 |
|
EoRaptor posted:More like every big company purchase of a startup since 2000. Has this strategy ever worked?
|
# ? Jan 18, 2017 21:40 |
|
EoRaptor posted:More like every big company purchase of a startup since 2000. Has this strategy ever worked? lol NetApp/Everyone they've ever bought
|
# ? Jan 18, 2017 22:36 |
|
Thanks Ants posted:Can I just get Windows Storage Server in a 2U box with a bunch of disks up front and a couple of nodes in the rear with a shared SAS backplane, thanks. You could. Nimble is just a Supermicro chassis as you described with the two "controllers" (blades) in the back sharing the backplane.
|
# ? Jan 18, 2017 23:47 |
|
Maneki Neko posted:lol NetApp/Everyone they've ever bought Has Solidfire turned into a debacle yet? I haven't really followed that one.
|
# ? Jan 19, 2017 00:25 |
Docjowles posted:Has Solidfire turned into a debacle yet? I haven't really followed that one. Was it Solidfire that was built off of Dell chassis? To the comment about just buying a chassis with disks, vendors are starting to put software out to allow that. I expect to see more of that in the future honestly
|
|
# ? Jan 19, 2017 03:16 |
|
Maneki Neko posted:lol NetApp/Everyone they've ever bought NetApp has actually done pretty well with the Engenio acquisition. They haven't really made many acquisitions though. The Riverbed tech they picked up was pretty cheap and they're still doing engineering work on it to integrate it with the FAS line, so it's probably not going away, though the long term goal is to integrate the functionality directly into the arrays, I hear. Docjowles posted:Has Solidfire turned into a debacle yet? I haven't really followed that one. Not a debacle, they're still pretty aggressively trying to sell it and they actually added staff to that side of the house, but it's not a huge seller for them. It does get them into certain strategic accounts that they otherwise wouldn't get into though.
|
# ? Jan 19, 2017 09:12 |
|
GitLab.com melts down after wrong directory deleted, backups fail
|
# ? Feb 1, 2017 20:50 |
|
Looks like they didn't adhere to The Tao of Backup
|
# ? Feb 1, 2017 21:01 |
|
I use NetApp's Data ONTAP with the PowerShell Toolkit for some Windows automation. Does NetApp have a bash orchestration plugin / module, and if so what is it called?
|
# ? Feb 8, 2017 14:43 |
|
Hmmm, I can create workflows then call them by REST apis.
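If you go the workflow route, the call itself is simple. A minimal sketch of kicking off a stored workflow over REST, assuming a WFA-style endpoint; the hostname, URL path, workflow UUID, and JSON body shape here are hypothetical placeholders, so check your server's actual API docs:

```python
import json
import urllib.request

def build_workflow_request(host, workflow_uuid, inputs):
    """Build a POST request that executes a stored workflow.

    The /rest/workflows/{uuid}/jobs path mimics a WFA-style REST layout,
    but treat it as a placeholder -- verify against your server's docs.
    """
    url = f"https://{host}/rest/workflows/{workflow_uuid}/jobs"
    body = json.dumps({"userInputValues": inputs}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    return req

# Example (not executed here -- needs a live server and auth):
# req = build_workflow_request("wfa.example.com", "abc-123",
#                              [{"key": "volName", "value": "vol_test01"}])
# urllib.request.urlopen(req)
```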
|
# ? Feb 8, 2017 14:51 |
|
Potato Salad posted:I use Netapp's Data ONTAPP with Powershell Toolkit for some win automation. The bash orchestration plugin is called SSH.
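Tongue in cheek, but it works: the ONTAP CLI is reachable over SSH, so bash (or anything that can shell out) can drive it. A minimal sketch, assuming key-based auth and a hypothetical cluster management hostname; the `volume show` example is just an illustrative read-only command:

```python
import subprocess

def ontap_ssh_cmd(host, user, command):
    """Build the argv for running a CLI command over SSH.

    BatchMode=yes makes ssh fail fast instead of hanging on a
    password prompt, so use key-based auth.
    """
    return ["ssh", "-o", "BatchMode=yes", f"{user}@{host}", command]

def run_ontap(host, user, command):
    # Not executed here -- needs a reachable cluster and SSH keys in place.
    out = subprocess.run(ontap_ssh_cmd(host, user, command),
                         capture_output=True, text=True, check=True)
    return out.stdout

# e.g. run_ontap("cluster-mgmt.example.com", "admin", "volume show -fields size")
```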
|
# ? Feb 8, 2017 15:54 |
|
Anyone worked with FusionIO cards? I have a 1.2tb one that's not behaving as anticipated. It's for a lab and out of warranty but very very low use. edit: nevermind; it looks like I needed to reinstall the chipset drivers for the host server. Walked fucked around with this message at 18:38 on Feb 8, 2017 |
# ? Feb 8, 2017 17:20 |
|
Walked posted:Anyone worked with FusionIO cards? Is it a regular PCIe card? How much do you want for it?
|
# ? Feb 8, 2017 18:27 |
|
Walked posted:Anyone worked with FusionIO cards? We are playing these games right now as well. Can you describe how it's performing, kernel version, benchmark, and copy/paste your partitioning + FS creation commands? I will see if I can find mine. We're playing with "LIQID" NVMe cards. And yes, it's pronounced liquid; it is a dumb name and I pronounce it lick-id.
|
# ? Feb 8, 2017 18:36 |
|
Mr Shiny Pants posted:Is it a regular PCIe card? How much do you want for it? It is! And it's going to find a home in my desktop if I can't get what I want out of it for a storage server. H110Hawk posted:We are playing these games right now as well. Can you describe how it's performing, kernel version, benchmark, and copy/paste your partitioning + FS creation commands? I will see if I can find mine. We're playing with "LIQID" NVMe cards. And yes it's pronounced liquid it is a dumb name and I pronounce it lick-id. I'm actually running inside Windows Server 2016, as I'm hoping to use it as a StarWind VSAN cache disk. Although I'm ready to dump it and move to a ZFS build if this keeps up. I've tried it all; local benchmarks are good: ~1.5gbps read / ~1.2gbps write, IOPS within range (and insane on small block sizes). As soon as I throw it on a network of ANY kind, or even just a Hyper-V VHD stored on there, performance drops to ~200mbps read / ~400mbps write; IOPS are still fine on lower block sizes though. I've tried setting 512 and 4096 sector sizes, NTFS / ReFS, MBR and GPT. That said, I just updated all my chipset drivers and I'm seeing much closer to spec performance inside my test system. I'm going to re-build this server today and see where I end up. EDIT: nevermind, performance is terrible again Walked fucked around with this message at 18:44 on Feb 8, 2017 |
# ? Feb 8, 2017 18:41 |
|
Walked posted:It is! And it's going to find a home in my desktop if I cant get what I want out of it for a storage server. Nice.
|
# ? Feb 8, 2017 22:59 |
|
Walked posted:It is! And it's going to find a home in my desktop if I can't get what I want out of it for a storage server. What do you mean by "throw it on a network"? Also, are the partitions aligned with the underlying storage?
|
# ? Feb 9, 2017 00:19 |
|
Any time I expose the storage to anything other than the native bare-metal host, performance tanks by 80% or so. Hyper-V VHD on there? Performance in the VM is garbage time. And this was my local test to rule out the network somehow being related. Present the drive or a folder on the drive via SMB and the performance tanks. Present the drive or a VHD on the drive via iSCSI and performance tanks. It's quite strange. But I think I'm getting somewhere. I've updated the drive firmware and reinstalled my chipset drivers again. A brief iometer run shows MUCH better numbers (close to theoretical max) but I want to run that a tad longer to be sure before calling it a day.
|
# ? Feb 9, 2017 01:09 |
|
Walked posted:Any time I expose the storage to anything other than the native host bare metal, and performance tanks by 80% or so. I would focus on this one, as it likely has the fewest variables. I am 100% unfamiliar with Windows but: Is there a way to map through the FusionIO drive to your Hyper-V instance with its VHD somewhere else? Or is that exactly what you're doing? Basically put the root disk for your windows instance elsewhere and expose through localhost a smb share or similar.
|
# ? Feb 9, 2017 01:52 |
|
Are you doing read or write IO tests? Are you using it as a raw device in IO meter?
YOLOsubmarine fucked around with this message at 01:58 on Feb 9, 2017 |
# ? Feb 9, 2017 01:55 |
|
H110Hawk posted:I would focus on this one, as it likely has the fewest variables. I am 100% unfamiliar with Windows but: Is there a way to map through the FusionIO drive to your Hyper-V instance with its VHD somewhere else? Or is that exactly what you're doing? Basically put the root disk for your windows instance elsewhere and expose through localhost a smb share or similar. I've run both routes. However, after the firmware update and chipset driver reinstall, all is still benchmarking as it should be, and for sustained periods now. Saturating 10GbE in style. edit: nm; performance tanked again over the network. Just going to use this for my VMware Workstation, as I'm tired of fighting with it for what amounts to a cache disk I don't really need for pure lab stuff Walked fucked around with this message at 04:25 on Feb 9, 2017 |
# ? Feb 9, 2017 03:13 |
FusionIO is a game of "which drivers will mostly work for me". I never had it working anywhere near the specs their sales guys' "benchmarking" laid out. And that was us getting lucky it didn't BSOD one of the boxes. My buddy used to work for them and said it's a massive poo poo show, especially since they got bought out. Want to play a game of outsourced devs that don't know what they're doing? That's 95% of the driver coding for FusionIO. My favorite is they were trying to charge us 5 figures for one card years ago when we were testing it. We spent $1000 or so on a POS OCZ card that was more compatible with our Dell servers. The FusionIO guys never did come get that proof-of-concept card, however. We basically put it in static wrap sealed away in the datacenter and used the carrying case (a nice cut-foam handgun case) to lock up specific backup tapes for transit.
|
|
# ? Feb 9, 2017 04:29 |
|
Walked posted:I've run both routes. The performance difference is likely due to some filesystem caching behavior that is present when you do local tests but not remote tests or tests on a VHD, and not driver changes. Irrespective of how you're accessing the device it's still ultimately getting translated down to scsi device calls, which is the point at which the driver and firmware come in. The driver is completely unaware of how those calls are being generated. The only effect it might have is if the different tests are generating different IO profiles (iSCSI reads/writes may get broken up into smaller disk reads/writes on the destination, same with local access through a VHD) but probably not enough to change results that much.
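The IO-splitting point is easy to picture: a transport with a smaller maximum transfer size turns one large IO into many smaller ones, each paying per-op overhead. A toy sketch; the block sizes are illustrative, not iSCSI's actual negotiated limits:

```python
def split_io(offset, length, max_transfer):
    """Split one logical IO into transport-sized chunks, the way an
    initiator with a max transfer limit would issue it downstream."""
    chunks = []
    while length > 0:
        n = min(length, max_transfer)
        chunks.append((offset, n))
        offset += n
        length -= n
    return chunks

# A single 1 MiB write issued locally stays one op...
local = split_io(0, 1 << 20, 1 << 20)
# ...but over a transport capped at 64 KiB it becomes 16 separate ops,
# so any fixed per-op latency is paid 16 times instead of once.
remote = split_io(0, 1 << 20, 64 * 1024)
```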
|
# ? Feb 9, 2017 05:40 |
|
Welp https://www.hpe.com/us/en/newsroom/...sh-Storage.html
|
# ? Mar 7, 2017 14:10 |
|
Storage market consolidation is overdue and Nimble was struggling, though HP being the suitor is surprising.
|
# ? Mar 7, 2017 17:11 |
|
HP can't engineer a storage solution worth a poo poo, and they can't manage an acquisition worth a poo poo either.
|
# ? Mar 7, 2017 17:12 |
|
big money big clit posted:HP being the suitor is surprising. This buys them nothing though? evil_bunnY fucked around with this message at 17:16 on Mar 7, 2017 |
# ? Mar 7, 2017 17:14 |
|
I had never touched 3Par before this stop, and I was skeptical at first, but HPE and 3Par engineers and support won me over.
|
# ? Mar 7, 2017 17:21 |
|
3Par is pretty good for what it is, they just haven't done much development on it recently. The EVA line was pretty good too, but they killed that for some reason. HP has had good storage products, they just don't seem to have any desire or ability to sell them in large enough volumes to keep them around. Which is great news for Nimble employees and customers.
|
# ? Mar 7, 2017 17:31 |
|
|
|
How's LeftHand doing these days? I feel like with Dell buying EMC, HP felt they needed to do something. I didn't know Nimble was struggling. I've got a hardware refresh coming in the next year or two and I'm not sure I'm ready to hop on the hyperconvergence train. What are people doing in that realm these days? vSAN? Nutanix? Scale? Cisco UCS?
|
# ? Mar 7, 2017 18:08 |