|
Moey posted:Just racked a Dell/EMC Unity XT 480F to demo. You are aware they have just announced their new omgPowerStore line to replace all the things? Including Unity, AFAIK.
|
# ? May 12, 2020 01:43 |
|
|
# ? May 13, 2024 06:33 |
|
Vanilla posted:You are aware they have just announced their new omgPowerStore line to replace all the things? Including unity afaik. I was not. Unity XT was just launched in mid-2019, to my knowledge, to phase out the original Unity line. Edit: Hmmm. Sure looks very similar, but more powerful. I'll have to see how pricing compares. Moey fucked around with this message at 04:25 on May 12, 2020 |
# ? May 12, 2020 04:20 |
Powerstore and UnityXT use mostly the same hardware, with just some slight differences: the Powerstore hardware has a few extra bells and whistles for options. UnityXT comes down the line of succession from the Clariion/VNX days. Powerstore is Dell taking its various midrange products and making something new, eventually converging the product lines. From what I am told, the underlying block storage functions were built by the XIO guys to plan for NVMe and storage-class memory usage. Powerstore's storage operating system will eventually go software-defined for integration all over the place - in the cloud, on whitebox hardware, etc. - according to the sales documents (if we believe them). Powerstore can also be HCI, scale out, or scale up, but it seems like they have features coming down the line that aren't fully implemented yet. Essentially, it's a new product that will have new-product pains. Might be wise to sit and watch what the industry says about it.
|
|
# ? May 12, 2020 15:31 |
|
I am building a new computer using an SFF case. My current computer has two 4TB drives in RAID 1 for data storage. They won't fit inside my new computer, but I still want access to that storage. I was initially thinking of an external enclosure, but after thinking about it some more, I might have better results using a cheap personal-use NAS. Does anyone have experience with this who would know if tossing those two HDDs into this would be reasonable for file storage needs? https://www.amazon.com/Synology-DiskStation-DS220j-Diskless-2-Bay/dp/B0855LMP81/ref=sr_1_4 The two drives I currently have are a pair of these: https://www.amazon.com/gp/product/B013HNYV8I/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1 They aren't specified to be NAS drives, and they are about a year and a half old. Could I expect them to last for a while in a NAS enclosure running 24/7? Since they are in a RAID, if one fails, I won't lose my data, so even if I only get a couple of years out of them, that should be plenty.
|
# ? May 17, 2020 04:21 |
|
Mistikman posted:I am building a new computer using an SFF case. That's a question for the NAS thread, but I'll answer it anyway. WD Blues are poo poo for RAID purposes (they're what used to be called Greens, and a lot of NAS appliances will throw issues when the disks go to sleep); I would not use them for 24/7 setups like a NAS. There are standalone disk chassis like the QNAP TR-004/TR-002 that might fit your disks better.
|
# ? May 17, 2020 08:18 |
|
This question sort of straddles the line between storage and backups, but has anyone used HubStor, or know anyone who does? Veeam is loving around with their licensing just as we're starting to need to back up our O365 data, so I'm looking at options. They seem almost too good to be true, since they can handle VMware backups, cloud backups, and O365, and do some nifty archival/retention/compliance stuff, plus storage tiering.
|
# ? May 20, 2020 20:30 |
|
So many of the backup SaaS companies are just attempting to resell someone else's storage service with a nice markup on each gig. I deal with cloudberrylab.com / msp360.com (they renamed). They license their software yearly for the functions you mention, and then you point them at some cloud storage provider (AWS/Azure/Google/Backblaze/etc.) account to dump the backups to, which gives you nice transparent pricing and storage compliance assurance. You can sign up as an "MSP" with just 1 license, btw, and everything has a 15-day trial, I believe.
|
# ? Jul 9, 2020 18:25 |
|
Anyone worked with HPE Primera yet? Pros? Cons? We got one for our lab, trying to figure out the future of our storage systems.
|
# ? Jul 9, 2020 18:56 |
|
skipdogg posted:Anyone worked with HPE Primera yet? Pros? Cons? We got one for our lab, trying to figure out the future of our storage systems. I have; we have two 2-node Primera A650s with a bunch of 7.68TB SSDs. At the end of the day there's not a lot of difference from 3Par, which we also have and I've worked with (two 7200s, two 7400s, all 2-node). Hell, even the HPE support people still refer to Primera as 3Par, and SSMC reports the Primera as a 3Par. If you've ever worked on a 3Par before, you'll be right at home.

As far as I can tell, there are a few internal upgrades, and the caching system has been reworked. One of the nice things they've done is build the service processor right into the controller, so you don't need to deploy a separate appliance or buy the physical service processor, and the updating process has been dramatically simplified. They've also locked options out in SSMC (I imagine you can probably coax Primera into doing anything you want using the command line) to encourage best practices, so you can't choose RAID 5 any longer, to give one example. They've clearly taken that approach from Nimble, which is really incredibly restrictive compared to the flexibility of 3Par's InForm OS.

If I'm completely honest, I haven't been blown away by the performance (although we could possibly be experiencing some latency due to running synchronous replication, but I'm not convinced), and some of the old limitations are showing - for example, deduped+compressed volumes still have a size limit of 16TiB (although supposedly that's changing soon... and don't start talking to me about VVols; in my opinion there are still too many issues surrounding them). I'm also not seeing dedupe and compression rates that amaze me, about 1.3x or so. I think they could do with stealing some of Nimble's code in that area (which we also run).

Is there any specific information you're looking for? Have you worked with 3Par before, or with HPE?
HalloKitty fucked around with this message at 09:05 on Jul 10, 2020 |
# ? Jul 9, 2020 19:01 |
|
Just curious is all. We went through an acquisition a year and a half ago, and we're trying to figure out what new platform to standardize on. The old company was NetApp for everything, with a couple of EMC VNXs around from a prior acquisition, so that's what I'm familiar with. The company that bought us has a mix of many different things: 3Par being one, I think there's a VPLEX somewhere, XtremIO, and some other stuff. They just bought us a 3Par 8200 and a Primera 630 to play with in our lab. Might be getting a Dell Powerstore or Unity to compare as well. I've never used 3Par or any HP storage other than a base MSA unit. Really happy with NetApp, but that's a no-go for ~reasons~
|
# ? Jul 9, 2020 22:31 |
|
skipdogg posted:Just curious is all. Powerstore is just Unity with a coat of paint, much like Primera is just 3Par with a coat of paint. I'm innately skeptical of any of these solutions that have soldiered on from the spinning-disk age into the age of NVMe and storage-class memory without a ground-up rewrite. Not that they're bad, but they are encumbered with technical debt, and sometimes the seams show pretty clearly.
|
# ? Jul 9, 2020 23:29 |
YOLOsubmarine posted:Powerstore is just Unity with a coat of paint Read the Pure article on Powerstore, I see? I may like the simplicity of Pure's product when I'm selling it, but they have been heavy-handed with their marketing. Edit: removed my comments on Powerstore because I guess I had already posted about it before Langolas fucked around with this message at 06:25 on Jul 11, 2020 |
|
# ? Jul 11, 2020 04:11 |
|
Langolas posted:Read the Pure article on Powerstore I see? I may like the simplicity of Pure's product when I'm selling it selling it , but they have been heavy handed on their marketing No, I just work for a Dell partner and know how Dell functions as an organization.
|
# ? Jul 11, 2020 17:20 |
YOLOsubmarine posted:No, I just work for a Dell partner and know how Dell functions as an organization. You do realize Powerstore's block stack is brand new, built from the ground up by the XIO guys? That's why I'm keeping it at arm's length: it hasn't proven itself to me yet. Now, Powerstore's file implementation definitely comes from the Celerra/eNAS/Unity code train. Hard pass on file-type implementations on that product.
|
|
# ? Jul 12, 2020 05:15 |
|
I'm having zero issues with the Unity XT 480F units. We are only doing block via iSCSI, and probably got around 3x the space for the same price as Pure.
|
# ? Jul 12, 2020 23:38 |
|
I'm curious, what's happening with the Compellent gear, then? I'm guessing they'll have to pack up their poo poo and go home. Has anyone used and compared the newest Unity with the newest SC kit? I guess it's a pointless question now.
|
# ? Jul 13, 2020 16:08 |
|
Moey posted:I'm having zero issues with the Unity XT 480F units. We are only doing block via iSCSI, and probably got around 3x the space for the same price as Pure. For the price, they aren't bad arrays. I managed 10 of them in my previous role, and the biggest complaint I had was EMC loving up serial numbers and site IDs. Once I got that fixed up, CloudIQ was really handy. I can't remember which software release came out last year, but the advanced deduplication and compression improvements were nice too, from what I remember.
|
# ? Jul 13, 2020 18:35 |
|
Langolas posted:You do realize Powerstore block stack is brand new built from the ground up by XIO guys? That's why I'm keeping it at arms length because its not proven itself to me yet. Now Powerstore's file implementation definitely comes from the celerra/enas/unity code train. Hard pass on file type implementations on that product. I have no doubt that there's some new code in there, but I'm going to bet that it's mostly still a Frankenstein's monster of existing IP. I remember when Unity was announced as a "purpose-built all-flash array, designed from the ground up for the flash data center." That sure wasn't true. qutius posted:For the price, they aren't bad arrays. I managed 10 of them in my previous role, and the biggest complaint I had was EMC loving up serial numbers and site IDs. Once I got that fixed up, CloudIQ was really handy. For the most part you have to try pretty hard to buy a bad array. But if you have to install or work with a bunch of these products from different vendors, you start to learn which ones are a little more well designed and polished. That extra polish may not be worth the extra cost, which is also fine; each customer has their own set of drivers.
|
# ? Jul 13, 2020 21:55 |
|
I've been running Nimble hybrid arrays for like 8 years now; those things were amazing for what they were. I agree, it's probably tough to find an enterprise-class array that is a dumpster fire nowadays.
|
# ? Jul 14, 2020 05:43 |
|
hopefully this is the right thread. starting to think i should upgrade the hard drives in my 4-bay synology nas. would like to have each drive be at least 4tb, but am willing to go more if it's worth it. what are the best drives these days? i think most of these are wd reds, except for one i had to replace in an emergency.
|
# ? Jul 17, 2020 07:18 |
|
abelwingnut posted:hopefully this is the right thread. If you go WD Red, check the part number VERY carefully; the small sizes are now shingled (which means performance is poo poo). I have fitted my QNAP with Seagate IronWolf Pros and (knock on wood) they work fine.
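For what it's worth, the shingled WD Reds can usually be told apart by the model-number suffix. A quick sketch of a lookup, based on WD's 2020 SMR disclosure as I remember it - treat the model table below as an assumption and verify against WD's own published list before buying:

```python
# Distinguish SMR vs CMR WD Red models by part number.
# NOTE: this table is illustrative, drawn from WD's 2020 SMR disclosure
# (EFAX suffix in the 2-6TB range = SMR, EFRX = CMR); double-check it
# against WD's current documentation before relying on it.
SMR_MODELS = {"WD20EFAX", "WD30EFAX", "WD40EFAX", "WD60EFAX"}
CMR_MODELS = {"WD20EFRX", "WD30EFRX", "WD40EFRX", "WD60EFRX"}

def recording_tech(model: str) -> str:
    """Return 'SMR', 'CMR', or 'unknown' for a drive model string."""
    model = model.strip().upper()
    if model in SMR_MODELS:
        return "SMR"
    if model in CMR_MODELS:
        return "CMR"
    return "unknown"

print(recording_tech("WD40EFAX"))  # shingled - avoid for RAID/NAS duty
print(recording_tech("WD40EFRX"))  # conventional - fine for a NAS
```

The model number is what retail listings show, so you can check it before ordering rather than after the drive arrives.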
|
# ? Jul 17, 2020 07:22 |
|
i'd be buying online, from amazon or newegg or someone reputable, so i'm not sure how i'd check the part number. is that the same as the model number? and what counts as a small size? is there a thread somewhere about this?
|
# ? Jul 17, 2020 15:10 |
|
abelwingnut posted:hopefully this is the right thread. https://forums.somethingawful.com/showthread.php?threadid=2801557
|
# ? Jul 17, 2020 15:23 |
|
thanks.
|
# ? Jul 17, 2020 15:40 |
|
Hmm, I seem to have started losing a LUN on a cluster that's been up for like a year. This is the error message. Any ideas? Cluster Shared Volume 'Volume1' has entered a paused state because of 'STATUS_LOGON_FAILURE(c000006d)'. All I/O will temporarily be queued until a path to the volume is reestablished.
|
# ? Aug 2, 2020 02:33 |
|
Does anyone have Pure Storage and VMware Site Recovery Manager? At work it takes (literal) hours for a failover, when I know that on a Hitachi G400 with SRM it can take as little as 7 minutes to have the VM(s) up and running on the other side.
|
# ? Aug 6, 2020 12:30 |
|
Pikehead posted:Does anyone have Pure storage and VMware Site Recovery Manager? I've seen many people with it; call support. Something funky is going on and someone will have a look.
|
# ? Sep 30, 2020 23:56 |
|
Vanilla posted:Seen many people with it, call support. Something funky is going on and someone will have a look Pure Support did end up getting involved. The word I got back is that Pure think they might know what the problem is, but it's a lot of work to fix (at both ends). There's apparently something funky in our environment that works okay except when SRM tries to do its thing. Yeah, I know that's vague, but that's all I got back from the team involved. Pikehead fucked around with this message at 10:08 on Oct 1, 2020 |
# ? Oct 1, 2020 10:06 |
|
SRM is pretty lovely, so that’s not too surprising.
|
# ? Oct 1, 2020 17:29 |
SRM is pretty lovely I've seen VMware sit and point fingers at storage vendors over it when it was actually a rescan issue from VMware taking longer than expected and the timeout values weren't tuned right in SRM. If Pure has a plan to address it, I would follow what they say before trusting anything VMware says.
|
|
# ? Oct 2, 2020 19:20 |
|
I worked on a quote last year for a complete pie-in-the-sky scenario to replace all of our IBM V7000s with new storage. I went with IBM again because I love SVC (not that I needed to, I guess, but I do like Storwize). My FlashSystem 5100 for prod and 5030 for DR were delivered, out of the blue, a couple of weeks ago. I'm going all DRAID, bitches! Kaddish fucked around with this message at 22:17 on Mar 3, 2021 |
# ? Mar 3, 2021 22:09 |
|
Langolas posted:SRM is pretty lovely It's because SRM is dependent on storage vendor API usage, and both VMware AND Pure are working with barely functioning code. Edit - I love Pure Storage; they are solid dudes with a solid product. Kaddish fucked around with this message at 22:18 on Mar 3, 2021 |
# ? Mar 3, 2021 22:14 |
|
lol internet. posted:Hmm, I seem too of started to lose a LUN on a cluster that's been up for like a year. Did you resolve this? It sounds like a fabric/hardware-layer issue. If this is FC, and I assume it is, you might want to look at C3 discards on your target/initiator ports. A slow-drain device could cause this, which is also the bane of my existence. Kaddish fucked around with this message at 22:36 on Mar 3, 2021 |
# ? Mar 3, 2021 22:29 |
|
Langolas posted:SRM is pretty lovely More news: I've kept out of this for a while, but I got to talking to the people who deal with this. Apparently my organisation uses very descriptive names for the LUNs at the Pure level, and they went over some magical 40-character-or-so limit. This meant that every time SRM got used, a full rescan was required, and it took hours. Options are:

1. Fix it somehow
2. Wait for tags

Tags are now apparently a thing in the latest SRA, so once everyone lines everything up, tests it first, and then puts it into production, SRM will finally work the way it's supposed to.
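If the culprit really is name length, it's cheap to audit before the next failover test. A minimal sketch - the 40-character threshold is taken from the post above rather than any documented Pure/SRA limit, and the volume names are made up for illustration:

```python
# Flag volume names longer than the limit that reportedly forces SRM
# into a full rescan. The ~40-char threshold is an assumption from the
# anecdote above, not a documented limit; adjust once confirmed.
MAX_NAME_LEN = 40

def oversized_luns(names, limit=MAX_NAME_LEN):
    """Return (name, length) pairs for names exceeding the limit."""
    return [(n, len(n)) for n in names if len(n) > limit]

# Hypothetical descriptively-named LUNs like the ones described above,
# e.g. as dumped from the array's CLI or API into a plain list.
volumes = [
    "prod-lun-01",
    "customer-acme-sydney-cluster02-vmware-datastore-gold-replicated",
]
for name, length in oversized_luns(volumes):
    print(f"{length:3d}  {name}")
```

Feeding it the full volume list from the array would give you a rename worklist up front instead of discovering the slow rescan mid-DR-test.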
|
# ? Mar 4, 2021 01:37 |
|
Pikehead posted:More News: Um. Why do you have 40+ Pure LUNs is the pertinent question? I can't fathom a reason for this. Like, you're trying to fix something that seems to be fundamentally broken, and fixing the fundamentals will help with whatever you're trying to do going forward. Edit - I see, it's the LUN naming convention that is the problem, which gives me a headache even thinking about it. Pure LUNs are just storage buckets, and unless you need specific compression/dedupe statistics per LUN, the names don't mean anything. And even if you need those stats, just create a 'test' LUN for compression/dedupe potential. Or, there's always vVols (lol). Kaddish fucked around with this message at 01:59 on Mar 4, 2021 |
# ? Mar 4, 2021 01:50 |
|
Kaddish posted:I worked on a quote last year for a complete pie in the sky scenario to replace all of our IBM v7000's with new storage last year. I went with IBM again because I love SVC. (not that I needed to, I guess, but I do like Storwize) I miss working with IBM storage, and doing FC fabrics. Don't get to work on that kind of stuff these days... Edit: those IBM FC switches/MPRs were just re-branded Brocade devices, so nice to work with! Pile Of Garbage fucked around with this message at 03:29 on Mar 4, 2021 |
# ? Mar 4, 2021 03:27 |
|
And in true IBM style, each port was licensed
|
# ? Mar 4, 2021 09:52 |
|
Pile Of Garbage posted:I miss working with IBM storage, also doing FC fabrics. Don't get to work on that kinda stuff these days... Ha, yep, I still have some 8Gb IBM switches at our DR location. Edit - The new Storwize code has the option of setting up a storage array, including pools, mdisks, etc., per SVC best practices; I'm tempted to try it. I'll need to recable/rezone my DR SVC cluster first to get NPIV working, though. Kaddish fucked around with this message at 14:39 on Mar 4, 2021 |
# ? Mar 4, 2021 14:35 |
|
Kaddish posted:Um. Why do you have 40+ Pure LUNS is the pertinent question? I can't fathom a reason for this. Nothing to do with statistics at all. We're a decently sized managed service provider with multiple sites, each site having multiple VMware clusters. Additionally, there are customers that have dedicated LUNs (both SRM and non-SRM) for various reasons, and there are also customers who directly use the storage arrays. I looked at vVols and they'd be awesome, except our backup solution doesn't allow for direct storage access as a transport mechanism on a vVol.
|
# ? Mar 5, 2021 03:18 |
|
|
|
Thanks Ants posted:And in true IBM style, each port was licensed Not just the ports, also the port types and features. Years ago I did a dual-site V7000 deployment with an FC fabric spanned between the sites using FCIP. Each site had two FC switches (SAN24B-4) and an MPR (SAN06B-R), so to deploy the solution we had to get the following licenses:
IIRC, for the four FC switches, two MPRs, and all the licensing, it was around $85k AUD, which was over a quarter of the total BOM cost. poo poo, those MPRs alone were like $30k AUD each. Was a really fun build though! Edit: hah, I've still got the design/as-built I did for it, good times. A Pile Of Garbage fucked around with this message at 10:09 on Mar 5, 2021 |
# ? Mar 5, 2021 09:58 |