|
Isilon seems like an obvious fit for Dell's product portfolio. I'm not so sure about EMC's other offerings.
|
# ? Oct 12, 2015 18:28 |
|
I really like EMC's stuff right now and I'm terrified of what Dell is going to do to it. Since Dell went private, support has gone to poo poo, production times have gone up, and basically every interaction I have with them is worse. We're not one of Dell's largest customers or anything, but we do over a million a year through them, and everything has been worse the last couple of years, from the account team to support and so on. Dell picked up EqualLogic and Compellent; have they really done anything with either of those companies? Dell's cheap MD stuff is a big hit in the SMB market, but I feel like they let their acquired storage companies just sort of atrophy. We've got over 2 million worth of VNX deployed across our sites, and man, I'm going to be pissed if support and service go downhill.
|
# ? Oct 12, 2015 18:48 |
|
At least on the quotes we've done lately, Dell isn't able to get close to IBM V3700 pricing with their MD stuff; they're 30-40% more expensive.
|
# ? Oct 12, 2015 18:58 |
|
Technically, the MD line is rebranded NetApp, so it doesn't surprise me that it's more expensive. They would much rather you buy EqualLogic.
|
# ? Oct 12, 2015 19:05 |
|
Yeah, if you get a quote for MD and for EqualLogic from Dell the EqualLogic quote will be pretty drat close. Dell doesn't want you buying MD.
|
# ? Oct 12, 2015 19:11 |
|
Tried that; EqualLogic was a good 30% more again. Maybe Dell just doesn't want to go to the trouble of shipping anything?
|
# ? Oct 12, 2015 19:16 |
|
NippleFloss posted: Well, it just happened. I'd guess Compellent goes away, some EMC product lines get trimmed, and there's a bigger push towards hyperconverged. Also guessing that ScaleIO becomes Dell only.

Any reason why you think Compellent will go away? We have a few units and I'm wondering if I missed the writing on the wall somewhere?
|
# ? Oct 13, 2015 15:11 |
|
I am having to set up a new configuration for a client that's never used virtualisation before. My plan is to have two physical servers running a failover cluster of VMs, with storage over an iSCSI connection. I have a couple of questions relating to the storage.

Firstly, I am used to dealing with a QNAP NAS for iSCSI, which is fine, but this client can't have a single point of failure, and I know they wouldn't like their QNAP to fail and leave them out of action. Is there a real-time block-level iSCSI replicator? Or am I approaching this the wrong way? Their current setup uses DFS and it's frequently annoying for all concerned.

Secondly (and this may be a question for the VM thread), I would assume that the NAS/SAN setup would have a few iSCSI targets: one for the VMs, and one for the storage of the client's data. My only headache with this is that a file access means Client -> Server -> VM target -> Server -> File store target -> Server -> Client, which seems a little long-winded. Would there simply be one big target that contains big VHD files with the client data in, in the single VM storage cluster target? Thanks!
|
# ? Oct 13, 2015 17:16 |
|
Fruit Smoothies posted: Firstly, I am used to dealing with a QNAP NAS for iscsi which is fine, but this client can't have a single point of failure, and I know they wouldn't really like their QNAP to fail and have them out of action. Is there a real-time blocklevel iscsi copier? Or am I approaching this the wrong way? Their current setup uses DFS and it's frequently annoying for all concerned.

But if you're trying to do a high-availability configuration with a consumer-grade NAS, you're going about it the wrong way, and you're never going to get a reliably working system out of it. On the VMware side you have some of their software-defined storage offerings that you can look at, but I don't believe Microsoft has anything similar in the Hyper-V ecosystem. Unless you're hosting things that absolutely need to be on-premises, please consider Azure or AWS with a VPN to a server in the cloud, and let them handle the storage availability. They can do it a lot cheaper than you can.

Fruit Smoothies posted: Secondly - and this may be a question in the VM thread - but I would assume that the NAS / SAN setup would have a few iscsi targets, one for the VMs, and one for the storage of the client's data. My only headache about this, is that a file access means
|
# ? Oct 13, 2015 17:32 |
|
Vulture Culture posted: What you're describing is LUN mirroring. It's a very common feature in enterprise-grade storage.

From all I've seen and tested with Hyper-V, failover clustering seems to be what I want, and as long as the LUN mirroring is handled by the device, not the servers, I don't imagine needing to investigate VMware. Cloud seems a reasonable suggestion, but they'd need a leased line put in, and it seems silly to recommend that expenditure when it would be just for that purpose and when storage is cheap to buy.

Regarding the storage of the client data, you're right to assume I was asking whether it should be in the VHD or separated out. Your answer of "yes" seemed to imply that both were feasible.
|
# ? Oct 13, 2015 17:44 |
|
Fruit Smoothies posted: From all I've seen & tested with Hyper-V, failover clustering seems what I want, and as long as the LUN mirroring is handled by the device, and not the servers, I don't imagine needing to investigate VMware.

I'm not sure why you would need a leased line unless they're on rural dial-up and have no broadband options. Any decent business broadband router will support site-to-site VPN connections.

Fruit Smoothies posted: Regarding the storage of the client data, you're right to assume I was asking whether it should be in the VHD or separated out. Your answer of "yes" seemed to imply that both were feasible.
|
# ? Oct 13, 2015 17:54 |
|
Fruit Smoothies posted: but this client can't have a single point of failure

Fruit Smoothies posted: storage is cheap to buy

These two statements are mutually exclusive. Like Vulture Culture said, they either actually need high availability, or it's okay to use the QNAP storage. Not both.

Fruit Smoothies posted: Client -> Server -> VM target -> Server -> File store target -> server -> client
|
# ? Oct 13, 2015 18:04 |
|
Vulture Culture posted: The LUN mirroring isn't handled by consumer-level devices, that's my point; HA on entry-level NAS devices is a shitshow in the best of cases. QNAP isn't going to give you what you want in any way that's sane to manage. Storage isn't cheap to buy at all if you actually need it to work quickly and reliably. Expect a decent storage system to easily run you more than your two servers, unless your servers have a terabyte of RAM each. Providing reliable storage at minimal marginal cost is something the cloud providers have worked out. You've got to explain this to your client or manage their expectations re: single points of failure on the budget they have to work with. They probably don't need actual HA as long as their data is safe and you have contingencies in place that will allow their business to continue working if the storage shits itself.

Performance and storage volume aren't the issue. Their business runs an old application powered by CSV files. It's incredibly sensitive, such that it won't work over WiFi; however, after extensive testing with the QNAP and clustered VMs, it doesn't corrupt during these scenarios. (It's so sensitive, the remote sites have to use terminal services to access the application locally.) They're working on developing a new product, but it's going to take time. In the meantime, I am lumbered with ensuring as close to 100% uptime as possible. DFS never worked with these files, and there's about 40GB of data and 10GB of archive. If you can come up with a better idea for me to test, I am happy to accept suggestions.
|
# ? Oct 13, 2015 18:19 |
|
Fruit Smoothies posted:Performance or storage volume aren't the issue. Their business is running an old application powered by CSV files. It's incredibly sensitive, such that it won't work over WiFi. However, after extensive testing with the QNAP and clustering VMs, it doesn't corrupt during these scenarios. Fruit Smoothies posted:They're working on developing a new product, but it's going to take time. In the mean while, I am lumbered with ensuring as close to 100% uptime as possible. DFS never worked with these files, and there's about 40GB of data, and 10 GB of archive.
|
# ? Oct 13, 2015 18:43 |
|
Vulture Culture posted: If it works fine over terminal services, it might be worthwhile to investigate setting up a terminal server in the cloud to handle it. I'm not sure how many user licenses you're talking about, but I can't imagine mass concurrency in an application driven by CSVs.

My plan B is to buy two identical QNAP devices and test the ability to simply swap the drives across in case of a unit failure. From my testing, I get good enough performance from the QNAP to handle the load, but will it have problems scaling in the real world?
|
# ? Oct 13, 2015 18:58 |
|
Just deploy the app on Azure RemoteApp. QNAP and Synology stuff is great in your house, in a lab, or as a backup target. It's only a matter of time until it bites you in the arse if you're putting VMs on top of it.
|
# ? Oct 13, 2015 19:33 |
|
Ignoring the bigger looming catastrophe, look into SAS DAS units that allow multiple hosts to connect, like the Dell PowerVault MD series (non-iSCSI). They are basically OEM hardware, and most major storage companies sell an equivalent. You can get them with dual controllers and dual power supplies, and they will let you sidestep the switching side of things.
|
# ? Oct 13, 2015 19:34 |
|
Thanks Ants posted: Just deploy the app on Azure RemoteApp.

The application has MAPI connectors. I considered RDP for everyone before, but the Exchange server is on site, and no one wanted to keep swapping between desktop and RDP.
|
# ? Oct 13, 2015 20:09 |
|
Internet Explorer posted: Ignoring the bigger looming catastrophe, look into SAS DAS units that can allow multiple hosts to connect. Like the Dell PowerVault MD series (non-iSCSI). They are basically OEM hardware and most major storage companies sell an equivalent.

Ding ding. This is the only realistic solution for a two-host VMware cluster that won't be expanded. Just make drat sure you follow the VMware HCL for round-robin pathing policies; I had a nightmare of a VMware host upgrade a few months back at our sister company.
|
# ? Oct 13, 2015 20:25 |
|
Fruit Smoothies posted: when storage is cheap to buy

Oh, it is now? That's good to know. I sure am glad it went from being the most expensive thing to do right to one of the least, with no major paradigm shifts. Uh, my vendor just got back to me, poo poo's still expensive. You lied to me, Fruit Smoothies. You lied to me.
|
# ? Oct 14, 2015 15:05 |
|
Hard drives are cheap. It's the stuff you plug them into and the software that runs on that stuff that's expensive.
|
# ? Oct 14, 2015 15:57 |
|
I definitely feel I'm getting a lot more for my money over the last 18 months than previously, tape and enterprise SSD in particular. poo poo's still not cheap though.
|
# ? Oct 14, 2015 16:04 |
|
Some interesting things from NetApp Insight:

- Software-defined ONTAP with HA and clustering coming in the 8.4 timeframe. Basically cDOT on white-box hardware or a hypervisor.
- Inline deduplication coming in 8.3.2.
- 3.8TB SSD drives shipping by the end of the year.
- FlashRay AFA shipping around February of next year. Will include HA, which was the hold-up.
|
# ? Oct 19, 2015 04:06 |
|
Cloud ONTAP for Azure soon as well. Also, SnapMirror support for AltaVault. Death to all traditional backup software. parid fucked around with this message at 06:37 on Oct 19, 2015 |
# ? Oct 19, 2015 06:34 |
|
Do any of you have interesting experiences to report with Oracle storage? I'm looking at a hardware refresh on our data warehouse platform, which uses Oracle. If the claims they make for the efficiency of their hybrid columnar compression are remotely accurate, we're going to want to at least look at their storage options, but I can't help wondering if that's something we'd regret in a year.
Zorak of Michigan fucked around with this message at 02:29 on Oct 20, 2015 |
# ? Oct 19, 2015 15:11 |
|
Zorak of Michigan posted: Do any of you have interesting experiences to report with Oracle storage? I'm looking at a hardware refresh on our data warehouse platform, which uses Oracle. If the claims they make for the efficiency of their hybrid columnar compression are remotely accurate, we're going to want to at least look at their storage options, but I can't help wondering if that's something we'd regret in a year.

My only experience with Oracle-related storage was during a production-down incident affecting our ODAs, which utilise SAS DAS disk shelves managed via ASM. Our DBA worked with Oracle support for 8+ hours and was handed over between techs three times due to shifts ending. Each time he was handed to a new tech there was a 30-minute delay in callback and then a further 15-minute delay whilst the new tech read up on the case notes. I was brought in at around 9 hours (5AM by then) as I was the only person with Oracle/Linux experience, and the DBA was literally dying, having already worked 11 hours prior to picking up the incident. At 12 hours we finally had the databases back up and application services restored. Early into the incident we attempted to engage our Oracle account manager to accelerate things, but he didn't do much. So yeah, not technically storage-related, but a pretty piss-poor performance from Oracle support during a legit production-down incident.
|
# ? Oct 19, 2015 16:00 |
|
Zorak of Michigan posted: Do any of you have interesting experiences to report with Oracle storage? I'm looking at a hardware refresh on our data warehouse platform, which uses Oracle. If the claims they make for the efficiency of their hybrid columnar compression are remotely accurate, we're going to want to at least look at their storage options, but I can't help wondering if that's something we'd regret in a year.
|
# ? Oct 20, 2015 01:31 |
|
I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020. Any suggestions for a good, free storage benchmarking tool that runs on Windows? IOmeter looks decent, but it's also old as gently caress (the user guide refers to NT 4.0). I don't typically support Windows, so I'm out of my element here.
Docjowles fucked around with this message at 17:11 on Oct 27, 2015 |
# ? Oct 27, 2015 17:08 |
|
Docjowles posted: I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020. Any suggestions for a good, free storage benchmarking tool that runs on Windows?

CrystalDiskMark isn't a bad place to start. The tests are of the more elementary kind (it doesn't look at the full range of queue depths available, doesn't do 80% read / 20% write loads, etc.), but it'll point you in the right direction if there's a clear performance problem. http://crystalmark.info/software/CrystalDiskMark/index-e.html

Naturally, it follows that you'd couple the storage benchmarking with an analysis of network traffic.
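If you just want a quick sanity check before reaching for a proper tool, a few lines of Python can time a sequential write the same way these benchmarks do at their simplest. This is only a sketch (the scratch-file name, block size, and count are arbitrary), and it deliberately ignores queue depth and read/write mix, which is exactly what the real tools exist to cover:

```python
import os
import time

def bench_seq_write(path="bench.tmp", block=1024 * 1024, count=64):
    """Time sequential writes of `count` blocks of `block` bytes; return MB/s."""
    buf = os.urandom(block)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # push data to the device, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return block * count / elapsed / 1e6

print(f"sequential write: {bench_seq_write():.1f} MB/s")
```

Run it against a file on the iSCSI LUN; a grossly underperforming volume will show up even in a crude test like this.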
|
# ? Oct 27, 2015 17:58 |
|
I found Microsoft's SQLIO tool which seems pretty solid. Will try that one, too.
|
# ? Oct 27, 2015 18:21 |
|
Docjowles posted: I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020.

Just this host having the problem?
|
# ? Oct 27, 2015 21:48 |
|
Docjowles posted: I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020.

I had an open issue with Microsoft for this, and they had me run Storport traces (per the instructions at the top of the link below) and then used these instructions to decipher them. This specifically gathers timing statistics for requests made to the storage device. To be honest, it's been about six months since I did this, so I don't remember exactly why it was useful, but it helped us eliminate our storage device as a problem (turns out there was no problem other than the backup guy being dumb, but that's another story). Anyway, the link: http://blogs.technet.com/b/askcore/archive/2014/08/19/deciphering-storport-traces-101.aspx
|
# ? Oct 27, 2015 21:59 |
|
Docjowles posted: I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020. Any suggestions for a good, free storage benchmarking tool that runs on Windows?

NetApp has a very simple CLI-based tool called sio_ntap that you can download if you have a support login. It's just an exe that takes a few parameters, so it's less work than IOmeter.
|
# ? Oct 28, 2015 00:42 |
|
Docjowles posted:I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020. Any suggestions for a good, free storage benchmarking tool that runs on Windows? IOmeter looks decent but it's also old as gently caress (user guide refers to NT 4.0 ). I don't typically support Windows so I'm out of my element here.
|
# ? Oct 28, 2015 01:14 |
|
adorai posted: I bet it's a misaligned LUN. Windows tried to auto-align it and NetApp also did.

Doesn't seem like it, based on NetApp's guidance. Although that article seems designed to be deliberately confusing, so I could be misinterpreting.

OS: 2008R2
Partition type: GPT
Hyper-V role: No
LUN type: windows_2008
Partition starting offset: 135266304

135266304 is evenly divisible by 4096, and so should be fine? Honestly, I don't think there's a significant problem here, if any. I'm kind of being asked to prove a negative. The DBA is saying performance is "not as fast as he wants". So far my benchmarking shows it's behaving fine, so I'm just trying to cover my rear end and show some hard data that it's performing as designed. If that's not good enough, OK, but it's not because of something really trivial.

Rhymenoserous posted: Just this host having the problem?

FISHMANPET posted: To be honest it's been about six months since I did this so I don't remember why exactly it was useful; it helped us eliminate our storage device as a problem (turns out there was no problem other than the backup guy being dumb, but that's another story)

There's only one host accessing the NetApp device. If spending a shitload of money on an iSCSI SAN which only one server is going to use sounds retarded, I won't disagree. The DBA team basically gets to do whatever they want for political reasons, despite not having the expertise to make informed decisions. But we still have to support it when those questionable decisions backfire. Docjowles fucked around with this message at 14:59 on Oct 28, 2015 |
# ? Oct 28, 2015 14:52 |
|
I've never met a DBA that hasn't complained about performance.
|
# ? Oct 28, 2015 15:01 |
|
I am a total storage noob, so I apologize in advance if this is too basic to post here. I'm working with an old Dell PS4000 that has an existing volume on it. The volume mounts fine to our virtual environment (this was set up before me). I need to copy some large files to this volume, and rather than go through the vSphere datastore browser, I was hoping it would be faster to mount the volume directly to my workstation so I can use robocopy.

I tried to point to one of the IP addresses listed under "network" for the member SAN. I can ping this IP address, but when I try to add it using iSCSI Initiator, I get the error "Connection Failed." I am limiting iSCSI access to the volume to 172.*.*.*, and just for kicks I removed all restrictions, and I still get connection failed. I am assuming I am either using a wrong IP address or this IP address is not allowed to be used for iSCSI connections... but what do I know! Thanks in advance.
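One quick thing worth ruling out, sketched here with a hypothetical helper: check whether the address you're pointing the initiator at actually accepts connections on the iSCSI port (TCP 3260). On EqualLogic arrays like the PS4000, iSCSI logins generally go to the group IP rather than a per-member interface IP, so an address that answers pings isn't necessarily a valid portal:

```python
import socket

def iscsi_portal_open(host, port=3260, timeout=3.0):
    """Return True if `host` accepts TCP connections on the iSCSI port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False
```

For example, `iscsi_portal_open("172.16.0.10")` returning False would suggest you're hitting a management or replication interface rather than the portal the initiator should use (the IP here is made up for illustration).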
|
# ? Oct 28, 2015 20:47 |
|
NevergirlsOFFICIAL posted: I am a total storage noob so I apologize in advance if this is too basic to post here.

Whoa, you've got a lot of things going on here. So it has an existing volume on there. This is formatted as VMFS; you cannot natively browse that on your workstation. There are tools to allow this: use WinSCP to connect to the ESXi host, then browse to your datastore.
|
# ? Oct 28, 2015 21:12 |
|
Disregard me I solved one problem and have 5 more (like you mentioned)
|
# ? Oct 28, 2015 21:18 |
|
NevergirlsOFFICIAL posted: Disregard me I solved one problem and have 5 more (like you mentioned)

Feel free to post them! That way everyone can learn.
|
# ? Oct 28, 2015 22:42 |