Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Aunt Beth posted:

As I understood it, it was never really meant to compete with SVC; it was a high-performance block storage device made of off-the-shelf x86 parts. SVC could frontend storage from anyone and everyone to smartly handle tiering, migrations, etc.
As a former IBMer I drank the kool-aid and am incapable of seeing the value in any EMC storage product, so I have no experience with Isilon other than seeing them in some data centers.
I must have brain-farted writing that. I meant SVC+DS8000, which is the architecture that a lot more clients in need of top-end performance ended up on.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Pile Of Garbage posted:

See that poo poo is why you pay for a hardware storage appliance from a reputable vendor that provides support.
I've dealt with way worse problems, and had way higher ownership in them, with IBM/Hitachi/DDN enterprise storage than anything that's going on in this post.

There was a stretch of nearly half a year where IBM shipped firmware on their DS4000/DS5000 LSI SANs that would silently forget to replicate anything past the first 2 TB of a LUN. That was fun to find during a DR test.

gently caress, don't even get me started on SONAS/GPFS.


VostokProgram posted:

If they were dead set on using proxmox, they could have just dumped those disks into a zfs pool and called it a day. Really weird to be using a clustered filesystem when your disks would easily fit in a shelf imo.
It feels premature/overengineered, but it's hard to say without knowing what the max size was that they envisioned this setup scaling to. It's weird buying a server with a pile of empty drive bays in it if you have no plans for them, right?
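
For reference, the "just dump those disks into a ZFS pool" route on a Proxmox node is roughly the sketch below. The pool name, disk paths, and storage ID are made-up placeholders, so read it as an illustration of the approach rather than a recipe.

code:
import subprocess

POOL = "tank"                                              # hypothetical pool name
DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]   # hypothetical member disks

def run(cmd):
    # Echo the command before running it so the steps are visible.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build a raidz2 pool out of the local disks...
run(["zpool", "create", "-o", "ashift=12", POOL, "raidz2", *DISKS])

# ...then register it with Proxmox as a storage backend so VM disks land on it.
run(["pvesm", "add", "zfspool", f"local-{POOL}", "--pool", POOL])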

Vulture Culture fucked around with this message at 18:14 on Apr 17, 2021

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Pile Of Garbage posted:

Yeah big whoop so what? I've seen things you'd never believe as well. It's an indisputable fact that all software and hardware is trash. That's why you, or rather your employer/customer, pays for enterprise kit with support so that the liability is shifted up to the vendor. Imagine where you'd be if you had no vendor support in the incidents you described?
Maybe slightly worse off? We were the drivers of all those resolutions. My team and I spotted the SONAS regression by ingesting 1 TB of time-series metrics on NFS mounts from a compute cluster until we found a correlation between a running job and nfslock latency, then chased it with the upstream developers directly. We found the 2 TB LUN regression doing a DR test. I once reported a different storage bug to VMware with the specific patch release where it was introduced, a description of the precise area of the code where it lived, and an explanation of what the logic error was, and no fix was made available for eight months.
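
For the curious, the correlation hunt looked roughly like the pandas sketch below. The CSV files and column names are made up for illustration.

code:
import pandas as pd

# Timestamped nfslock latency samples scraped from the NFS clients
# (hypothetical layout: ts, host, latency_ms).
latency = pd.read_csv("nfslock_latency.csv", parse_dates=["ts"])

# Scheduler log of job runs (hypothetical layout: job_name, start, end).
jobs = pd.read_csv("job_runs.csv", parse_dates=["start", "end"])

def job_at(ts):
    # Return whichever job (if any) was running when this sample was taken.
    hit = jobs[(jobs["start"] <= ts) & (jobs["end"] >= ts)]
    return hit["job_name"].iloc[0] if not hit.empty else "idle"

latency["job"] = latency["ts"].map(job_at)

# Which jobs line up with the worst lock latency?
summary = latency.groupby("job")["latency_ms"].describe()
print(summary.sort_values("mean", ascending=False))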

Vendor support is amazing for dealing with hardware logistics (I'm grey enough to remember trying to source hard drives after the 2011 floods wrecked all the HDD manufacturing in Thailand), and there's a lot of value in smaller sets of certified releases. But it's the farthest thing from a panacea. The vendor will still introduce software bugs they can't help you with. There will still be times when you have to do the legwork and find the fix yourself. With a vendor there might be relationships and money on the table, but here's the catch: you're always losing more money from the thing being down than what you paid for it, or you wouldn't have spent the money. The pressure will always be higher on you than on the company you bought the kit from.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

CommieGIR posted:

We're 6 figures in our cloud bill already and not even doing anything in it yet.
Got sold on the reserved instance prepayments, I see

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Pile Of Garbage posted:

Brocade got acquired by Broadcom yeah? I haven't worked with them in over a decade but I'm guessing they're now probably much worse.
Vaunnies Brocbroadadecom

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Moey posted:

As someone who has used a handful of arrays from only a few different vendors (Dell/EMC, Nimble, Pure), how much manual fuckery is needed nowadays setting up a new app for block storage?
Depends how manual the rest of your process is. If you're zoning out manually provisioned LUNs to initiators/HBAs, the experience isn't going to differ a ton between vendors. If you're doing dynamic provisioning from a Kubernetes or OpenStack cluster, there's a lot of variation in CSI/Cinder drivers, their quality, and how tightly coupled they are to the underlying storage topology.
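
To make the dynamic-provisioning side concrete: instead of zoning a LUN by hand, you hand the cluster a claim and the CSI driver carves the volume out of the array for you. The sketch below uses the Python kubernetes client; the "block-gold" storage class is a made-up example.

code:
from kubernetes import client, config

config.load_kube_config()

# Claim a 100 GiB block volume from a (hypothetical) CSI-backed storage class.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "block-gold",            # hypothetical class name
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

# The CSI driver behind the class handles the LUN creation and host mapping.
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)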

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

Because this is suddenly relevant here, which ones are best?
My personal experience is limited on the K8s front. I found that the Trident operator for NetApp ONTAP had a great feature set, the performance was off the charts for our use cases in early testing, and there's a huge user base compared to basically every other storage vendor, but the CSI drivers presented some stability problems I didn't see using AWS's native storage options. Gluster is basically dead. Ceph's integrations work well, but you're stuck managing Ceph, and I'm a little anxious about the recent move out of Red Hat into IBM to be closer to the other IBM storage teams. Traditional vendors' (Dell, HPE, IBM) CSI drivers have very few GitHub stars, though the selection bias vs. open-source or cloud offerings might make this a bad proxy metric. Unsure about Pure, but Portworx feels like a well-thought-out product with engineers who are active in the broader sysadmin community.
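
For a rough picture of the Trident side, wiring a StorageClass to its CSI provisioner looks something like the sketch below. The class name and backendType value are assumptions for illustration; check the Trident docs for whatever backend you actually run.

code:
from kubernetes import client, config

config.load_kube_config()

# StorageClass pointing at Trident's CSI provisioner; the name and backendType
# here are illustrative assumptions, not a recommendation.
sc = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "ontap-nas-gold"},
    "provisioner": "csi.trident.netapp.io",
    "parameters": {"backendType": "ontap-nas"},
    "allowVolumeExpansion": True,
}

client.StorageV1Api().create_storage_class(sc)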

evil_bunnY posted:

LMAO it takes a 50% consulting FTE to keep ours from eating itself. And there's like 12 people in the (admittedly small) country who can competently manage it. When they hired another tech they had to get a fully remote Icelandic guy :eng99:
That's impressive. I've seen TSM take a .5 FTE just to keep up with licensing.

Vulture Culture fucked around with this message at 17:35 on Nov 28, 2023
