Wooooo
|
# ? Jan 11, 2018 06:34 |
|
What fixed it?
|
# ? Jan 11, 2018 15:52 |
|
Enough posts to make it tick past the cursed page. Edit: Methanar posted:im gay but thankfully nobody will ever read this Think again, rear end in a top hat!
|
# ? Jan 11, 2018 15:58 |
|
It verks!!
|
# ? Jan 11, 2018 16:05 |
|
We have a Dell MD3200 DAS. It's out of warranty, and it's only 6TB raw (10x600GB 15K SAS). I'd like to buy a PowerVault MD3400 with something like 2x1.92TB SSD and 10x4TB spinners. How do you get all the data from one of those to the other? We have 2 VMware servers connected to our current one.
|
# ? Mar 21, 2018 19:20 |
|
Bob Morales posted:We have a Dell MD3200 DAS. It's been out of warranty and it's only 6TB RAW (10x600GB 15K SAS) Is the first one being used as a datastore within VMware? If so, just SvMotion the VMs from one to the other (or power off/migrate if you are running free/essentials).
|
# ? Mar 21, 2018 19:39 |
|
There's a Dell R610 and a Dell R430 (?) connected to the existing MD3200 via mini-SAS cables. Both servers have one connection to each of the two controllers in the MD3200. Here's the back of the servers, and the back of the MD3200: [images]. Can I just connect the new MD3400 to the servers (instead of the second controller in the MD3200) and then copy the datastores over?
|
# ? Mar 21, 2018 20:02 |
|
That's not technically a DAS - it's a SAN that connects via SAS. With a DAS you'd see all the disks presented to a PERC in the server and then build your logical volumes from that, whereas I assume you can access a management interface through some horrible Java applet and export volumes from the SAN, and then the HBA just sees these logical volumes. You probably can connect the new MD3400 to the second port on your servers, assuming there's no configuration in there for multipathing already. Thanks Ants fucked around with this message at 20:31 on Mar 21, 2018 |
# ? Mar 21, 2018 20:28 |
|
Thanks Ants posted:That's not technically a DAS - it's a SAN that connects via SAS. With a DAS you'd see all the disks presented to a PERC in the server and then build your logical volumes from that, whereas I assume you can access a management interface through some horrible Java applet and export volumes from the SAN, and then the HBA just sees these logical volumes. I thought DAS meant direct-attached storage, as in it's directly connected using SAS. Is that just a Dell term or something? Yeah, I was also wondering if I could connect the MD3200 and MD3400 together and copy everything over using the management interface/Dell program. The goal is to retire the MD3200 when it's all said and done.
|
# ? Mar 21, 2018 20:33 |
|
I took that bit out because it looks like you can't just plug them together like that. DAS is usually a shelf of disks presented to a controller inside one server, this is shared storage that happens to be connected via SAS to avoid the switch costs and configuration involved with iSCSI. It's not a bad idea but obviously you can't have more hosts than you have SAS ports.
|
# ? Mar 21, 2018 21:10 |
|
You’ll need to break redundancy, move one set of cables to the new SAN, sVmotion to the new datastores, and re-establish multipathing once everything is migrated. Hope you can find some easy maintenance windows!
|
# ? Mar 21, 2018 21:14 |
|
devmd01 posted:You’ll need to break redundancy, move one set of cables to the new SAN, sVmotion to the new datastores, and re-establish multipathing once everything is migrated. Hope you can find some easy maintenance windows! It should cause no downtime, but probably a good idea to do it during a window anyway.
|
# ? Mar 22, 2018 02:07 |
|
I have a question. A friend told me that I need to be unmapping or doing something in VMware to free up storage on our SAN. I'm not really sure what he's talking about. We have a Nimble CS300. Do I need to be doing any maintenance tasks on this thing like he mentioned? edit: some more info: the CS300 is just one array with two volumes/datastores, using iSCSI, and both datastores are formatted with VMFS using the entire space. A very straightforward, simple setup. kiwid fucked around with this message at 19:09 on Mar 27, 2018 |
# ? Mar 27, 2018 19:06 |
|
SCSI unmapping. Whether you need to do anything or not depends on your environment.
|
# ? Mar 27, 2018 19:12 |
|
How do I know if I have to do that?
|
# ? Mar 27, 2018 19:49 |
|
kiwid posted:How do I know if I have to do that? What version of ESXi are you running?
|
# ? Mar 27, 2018 20:41 |
|
Also, are the LUNs that your datastores sit on thin provisioned? If not, then you don't need to worry about this.
|
# ? Mar 27, 2018 22:36 |
|
Also worth checking you're using VAAI if you're doing general storage maintenance
|
# ? Mar 27, 2018 22:40 |
|
YOLOsubmarine posted:What version of ESXi are you running? 6.0 U2 Internet Explorer posted:Also are the LUNs that your datastore sit on thin provisioned? If not, then you don't need to worry about this. Yes
|
# ? Mar 28, 2018 01:56 |
|
kiwid posted:6.0 U2 Then you’ll need to run the scsi unmap command manually to reclaim thin provisioned blocks on the datastore. https://kb.vmware.com/s/article/2057513. Starting in 6.5 it’s automated (again). Thanks Ants posted:Also worth checking you're using VAAI if you're doing general storage maintenance It’s enabled by default, so I’d presume so.
|
# ? Mar 28, 2018 02:46 |
|
YOLOsubmarine posted:Then you’ll need to run the scsi unmap command manually to reclaim thin provisioned blocks on the datastore. Well, we have a planned upgrade soon. If I upgrade, it'll just start automating it and I won't have to worry about this? edit: nvm, found this: quote:However, due to the changes done in VMFS 6 metadata structures to make it 4K aligned, you cannot inline/offline upgrade from VMFS5 to VMFS6. kiwid fucked around with this message at 03:40 on Mar 28, 2018 |
# ? Mar 28, 2018 03:37 |
|
Just create a new datastore and SvMotion or power down and migrate.
|
# ? Mar 28, 2018 04:40 |
|
So my company is probably going to venture into the realm of all-flash arrays soon. Our current setup is an IBM v7000 with 17TB usable space among a couple of datastores with 10K and 15K disks. I know a lot of the all-flash arrays rely on dedupe and compression, but how reliable are their numbers in this regard? I’m getting quoted flash setups with anywhere from 10-20TB usable, and then they’ll say 28-48TB ‘effective’ space. I feel like a doofus potentially buying a new SAN with less physical disk space than our current one, though I know it really isn’t the case. Help calm my nerves?
|
# ? May 22, 2018 01:33 |
|
We reliably get 3:1 on our Pure on average. Some datasets are closer to 2.4:1, some are 5:1 or more. So those quoted numbers seem about right.
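The "effective" figures in those quotes are just usable capacity times an assumed data reduction ratio, so you can sanity-check them yourself. A quick sketch (the usable/effective numbers are the ones quoted in the post above; the ratios are derived from them, nothing vendor-specific):

```python
# Sanity-check vendor "effective capacity" claims: effective = usable * reduction ratio.
# Usable/effective figures are the ones quoted above; nothing here is vendor data.

def effective_tb(usable_tb, ratio):
    """Effective capacity given usable capacity and a reduction ratio."""
    return usable_tb * ratio

def implied_ratio(usable_tb, eff_tb):
    """The reduction ratio a quote is silently assuming."""
    return eff_tb / usable_tb

# 10-20TB usable quoted as 28-48TB effective implies:
print(f"{implied_ratio(10, 28):.1f}:1")  # low end of the quote
print(f"{implied_ratio(20, 48):.1f}:1")  # high end of the quote

# And at the ~3:1 we see on our Pure, 17TB of logical data needs about:
print(f"{17 / 3:.1f} TB physical")
```

So the quotes are baking in roughly 2.4-2.8:1, which is on the conservative side of what people report here.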
|
# ? May 22, 2018 01:55 |
|
Spring Heeled Jack posted:So my company is probably going to venture into the realm of all-flash arrays soon. You’re gonna need to list some specific vendors and tell us what kind of data you’ve got. I’ve seen everything from 6:1 to 1.3:1 ratios. It can vary significantly. To size this stuff the vendors or VARs should at minimum be providing a data reduction rate based on your specific data accounting. Some vendors will back this up with a guarantee (Pure and NetApp at least) that if you don’t at least match their stated data reduction ratio then they will give you more drives. Just buy Pure though. It’s the most likely to give you the best ratios and it’ll be one of the easiest to work with.
|
# ? May 22, 2018 02:43 |
|
I see vSAN get 2-3:1 in a mixed *nix/Windows environment. Pure is loving fantastic if your budget can do it. S2D is working pretty drat well for a customer too. I may or may not like building my own storage infra though.
|
# ? May 22, 2018 02:57 |
|
YOLOsubmarine posted:You’re gonna need to list some specific vendors and tell us what kind of data you’ve got. I’ve seen everything from 6:1 to 1.3:1 ratios. It can vary significantly. To size this stuff the vendors or VARs should at minimum be providing a data reduction rate based on your specific data accounting. Some vendors will back this up with a guarantee (Pure and NetApp at least) that if you don’t at least match their stated data reduction ratio then they will give you more drives. We’re getting quotes from Tegile, Pure, Nimble, and both Compellent and Unity from Dell. So pretty much all of the big players; still waiting on final quotes from all of them aside from Tegile, who quoted us the T4700 with (I think) about 20TB raw disk. I’ll have to check the quotes tomorrow. This is a strictly VMware env with mostly smaller Windows servers, and a big (5TB) MSSQL DB that we are planning to move from a failover cluster to an AG, doubling the needed space on our SAN. We’re pretty much at capacity on our v7000. Spring Heeled Jack fucked around with this message at 03:15 on May 22, 2018 |
# ? May 22, 2018 03:12 |
|
Pure and Nimble are the winners for an esxi environment there, all things being equal.
|
# ? May 22, 2018 03:29 |
|
Potato Salad posted:Pure and Nimble are the winners for an esxi environment there, all things being equal. That seems to be the overall feeling I’m getting! I’ve heard Pure can be pricy but I’ve yet to see numbers from either yet (thanks CDW)!
|
# ? May 22, 2018 03:36 |
|
Spring Heeled Jack posted:We’re getting quotes from Tegile, Pure, Nimble, and both Compellent and Unity from Dell. So pretty much all of the big players, still waiting on final quotes from all of them aside from Tegile, who quoted us the T4700 with (I think) about 20TB raw disk. I’ll have to check the quotes tomorrow. Of those, Pure and Nimble are the only ones I’d consider. Tegile’s data reduction is strictly worse than Pure’s (requires setting block size to 32k which is much larger than Pure’s dedupe chunk size of 512b, no global dedupe across a/b pools, incredibly memory hungry), Compellent is still built on a pointless tiering architecture, and Unity is just VNX with SSDs. Nimble is good, but their dedupe is a little funky and it’s now an HP storage product, which means it will slowly wither on the vine. I’d guess you’ll probably see at least 3:1 on that data. Database data doesn’t reduce as well, but server OS does. Pretty similar footprint to one of our new Pure customers, and they’re seeing right at 3:1. Also worth noting that your two AG replicas will deduplicate against each other, resulting in significantly less than double the utilization.
|
# ? May 22, 2018 03:37 |
|
I've been super happy with our Pure arrays. Our data reduction ratios are pretty good (though they're thrown off by weird poo poo like huge swap partitions created to provide DISM backing store for Solaris nodes running Oracle 11, which are guaranteed never to see a byte of use) and more importantly, support has been great.
|
# ? May 22, 2018 03:54 |
|
YOLOsubmarine posted:Of those Pure and Nimble are the only ones I’d consider. Tegile’s data reduction is strictly worse than Pure’s (requires setting block size to 32k which is much larger than Pure’s dedupe chunk size of 512b, no global dedupe across a/b pools, incredibly memory hungry). How does pure manage to not be memory hungry when it's a tiny block size?
|
# ? May 22, 2018 03:57 |
|
H110Hawk posted:How does pure manage to not be memory hungry when it's a tiny block size? Pure actually does some memory management. Tegile (I *think* this has changed in very new versions of the code) never pruned the fingerprint database, so it would just grow and grow and grow unless you ran an undocumented command to clean it. On hybrid systems this was catastrophically bad, since it would overflow to the SSD tier, which would in turn push cached data out, and suddenly everything would get 10-1000 times slower. Their all-flash stuff is better simply because memory exhaustion is less catastrophic when it gets pushed off to flash instead of spinning media. But in general their grasp of the technology they borrowed to make their arrays doesn’t seem all that solid. ZFS dedupe was not built to be lightweight, and Tegile didn’t do anything to change that. Just anecdotally, my experience with Pure vs Tegile across my customer base is that the Pure stuff works as advertised and the Tegile stuff often does not. Also, my original statement was incomplete. Pure can check disk segments at 512B offsets for pattern matching and dedupe, but still operates on blocks of 4K and up for dedupe. YOLOsubmarine fucked around with this message at 04:27 on May 22, 2018 |
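To see why chunk size matters so much here: a dedupe fingerprint table needs one entry per chunk, so shrinking the chunk size inflates the table proportionally. A back-of-envelope sketch — the 64 bytes-per-entry figure is an assumption for illustration, not Pure's or Tegile's actual metadata layout:

```python
# Rough estimate of dedupe fingerprint-table size as a function of chunk size.
# entry_bytes (hash + pointer + overhead per chunk) is an assumed figure, not
# any vendor's real number; the point is the linear scaling with chunk count.

def fingerprint_table_gib(capacity_tib, chunk_bytes, entry_bytes=64):
    chunks = capacity_tib * 2**40 / chunk_bytes   # number of chunks to track
    return chunks * entry_bytes / 2**30           # table size in GiB

for chunk in (512, 4096, 32 * 1024):
    gib = fingerprint_table_gib(20, chunk)
    print(f"{chunk:>6} B chunks over 20 TiB -> {gib:,.0f} GiB of fingerprints")
```

Under those assumptions, 512B chunks over 20 TiB need a table in the terabyte range while 32K chunks need tens of GiB — which is why a 512B-granularity array has to be smart about what it actually keeps in memory, and why an unpruned fingerprint database overflows so badly.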
# ? May 22, 2018 04:24 |
|
YOLOsubmarine posted:Of those Pure and Nimble are the only ones I’d consider
|
# ? May 22, 2018 22:11 |
|
I've got 12 Pure arrays in production. I can validate a good amount of dedupe results if you want; we get some VERY aggressive deduplication numbers across the board, but we have almost-dedicated arrays for specific purposes, so. Note: I don't have a Nimble with deduplication, but I do have three Nimbles in the works atm.
|
# ? May 23, 2018 19:44 |
|
YOLOsubmarine posted:Just anecdotally my experience with Pure vs Tegile across my customer base is that the Pure stuff works as advertised and the Tegile stuff often does not. I agree with you/all of this. Pure makes a very good point of being "matter of fact" and point-blank about their offerings; more often than not, if you're on the fence, get a guarantee in writing and they'll honor it to a fault. When we POC'd our first two M20s, we were promised one of them could fit our entire virtual environment without choking (1,000+ VMs, very mixed workload), with 8.0:1+ deduplication and the guarantee that if they didn't meet it, they'd ship another disk pack to make up for it, for free. TL;DR Pure's good stuff, I'm a fan.
|
# ? May 23, 2018 19:52 |
|
I'm leaning towards Pure, I need to get a proper demo scheduled with them. I know there's probably a ton of dedupe savings waiting to be had on our 17TB array, as most of it is smaller IIS webservers and other things.
|
# ? May 23, 2018 19:57 |
|
One of the crazier ratios I saw with our Pure was a 2TB file server we had. It was mixed content: images, docs, zip files, PDFs. For a brief time I had it on the same volume as its redundant partner. So, two 2TB VMDK files, both 90% full, but the same data on each. Actual volume size on storage was about 210GB.
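The implied ratio there is wild but checks out arithmetically (figures are the ones from this post; decimal TB/GB assumed):

```python
# Implied reduction ratio for two identical 2 TB VMDKs, each ~90% full,
# landing at ~210 GB on the array. Figures are from the anecdote above;
# TB/GB are taken as decimal units for simplicity.
logical_gb = 2 * 2000 * 0.9   # two copies of the same ~1.8 TB of data
physical_gb = 210
ratio = logical_gb / physical_gb
print(f"~{ratio:.0f}:1 effective reduction")
```

Roughly 17:1 — and since the second copy is a byte-for-byte duplicate, a clean 2:1 of that comes from dedupe of the mirror alone, with the rest from compression and intra-copy dedupe.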
|
# ? May 23, 2018 20:06 |
|
Pure just announced new hardware at Accelerate as well, so if you buy now you’ll get an NVMe-ready X array instead of the older M series.
|
# ? May 23, 2018 20:10 |