|
I have had similar experiences with NetApp really bending over backwards to fix mistakes, something we haven't seen from the likes of, say, Commvault. I wonder if this is a byproduct of the heavy competition (and spending) in the storage space.
|
# ? Jun 19, 2014 04:18 |
|
|
Bitch Stewie posted:Still leading with the HUS 110. Hitachi seem deathly honest but it would be useful to know if you consider there to be any "must have" license options? Well, given my weirdo latency problems, I wish we'd bought the Tuning Manager. Bitch Stewie posted:We're planning on doing FC direct connect so other than tiering and the performance analyser license I don't see much else that jumps off the page as something we'd need? In retrospect, I honestly wish we'd done FC direct to begin with. It would have been cheaper. Not even considering the extra set of switches I had to buy because the ones the VAR recommended were wholly inadequate. Bitch Stewie posted:Incidentally do you have VAAI? I'm still a little hazy on how the zero reclaim works depending if you have it enabled or not (we're cheap scum so only have vSphere Standard licenses). I didn't install their vSphere integration stuff, so I can't really speak for them.
|
# ? Jun 19, 2014 14:36 |
|
Bitch Stewie posted:Incidentally do you have VAAI? I'm still a little hazy on how the zero reclaim works depending if you have it enabled or not (we're cheap scum so only have vSphere Standard licenses). Zero page reclaim just consolidates the "thick" pages within the pool and then releases any pages that contain all zeros. So if you provision a 100gb eagerzeroedthick vmdk in a DP pool it will take up 100gb of space in the pool, but if you run zero page reclaim on it then it will shrink back to 0gb used in the pool (or however much data you've actually written to it). It will still benefit from the improved first-write latency that you get from eagerzeroing, but will act as if it is thin provisioned on the storage. Not hugely useful outside of that scenario, but better than nothing. Basically dedupe that only works on zeroed pages.
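To make the mechanics concrete, here's a toy sketch in Python. The function name and the dict-of-pages model are mine, not Hitachi's; real DP pools work on fixed-size pool pages, and the array does the scan internally.

```python
def zero_page_reclaim(pool):
    """Toy model of zero page reclaim on a dynamic provisioning pool.

    `pool` maps page index -> page contents (bytes). Any allocated page
    whose contents are all zeros is released back to the pool; pages
    holding real data are kept. Returns (surviving_pages, pages_released).
    """
    survivors = {idx: data for idx, data in pool.items() if any(data)}
    return survivors, len(pool) - len(survivors)

# An eagerzeroedthick vmdk initially allocates nothing but zero pages,
# so a reclaim pass releases almost everything except what was written:
pool = {0: bytes(16), 1: b"written" + bytes(9), 2: bytes(16)}
survivors, released = zero_page_reclaim(pool)
# survivors keeps only page 1; pages 0 and 2 go back to the pool
```

The real array keeps the LUN's logical size intact; only the physical pages backing all-zero regions are returned to the shared pool, which is why the vmdk still behaves as eagerzeroed from the host's point of view.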
|
# ? Jun 19, 2014 17:11 |
|
Out of curiosity, what kind of capacities are you running in your VNX2/VPLEX setup?
|
# ? Jun 19, 2014 22:21 |
|
You guys complaining about IBM support, have any of you experienced Premium Support with an Account Advocate? It's pretty boss.
|
# ? Jun 19, 2014 22:34 |
|
Kaddish posted:You guys complaining about IBM support, have any of you experienced Premium Support with an Account Advocate? It's pretty boss. Can he add contra-rotating cabling to your V7000?
|
# ? Jun 20, 2014 01:14 |
|
NippleFloss posted:Can he add contra-rotating cabling to your V7000? No sir. Are there any SAS systems that have that type of cabling? I've only seen it with fiber.
|
# ? Jun 20, 2014 04:50 |
|
I'm sure it's great, meanwhile us scum with normal 24x7, 4HR onsite response time support get the shaft by IBM support. Today at 3:45 PM EST, a controller failed in one of my V7Ks. I'm still onsite now at 12:16 AM EST, a replacement hasn't even been dispatched yet and all they've had me do is reseat the loving thing. Awaiting callback from the National Duty Manager now, and it's already been a half hour since I escalated this to him for the second time. gently caress IBM
|
# ? Jun 20, 2014 05:18 |
|
mattisacomputer posted:gently caress IBM
|
# ? Jun 20, 2014 06:04 |
|
Kaddish posted:No sir. Are there any SAS systems that have that type of cabling? I've only seen it with fiber. Sure, NetApp does it with their SAS expansion shelves. I'm sure other vendors do too. It's really baffling why they wouldn't, it's a sound design for resiliency.
|
# ? Jun 20, 2014 07:22 |
|
Kaddish posted:No sir. Are there any SAS systems that have that type of cabling? I've only seen it with fiber.
|
# ? Jun 20, 2014 10:07 |
|
Oh. Well then. That's pretty dumb.
|
# ? Jun 20, 2014 14:17 |
|
Anyone know of any major reasons why I shouldn't pull the trigger on an EMC VNX 5200 for a small (3 host) VMware environment? This is a severely time-constrained project and I'm already familiar with its little brother (have a VNXe 3300 already), so this is looking like an attractive option I could get up and running quickly.
|
# ? Jun 20, 2014 14:28 |
|
Cavepimp posted:Anyone know of any major reasons why I shouldn't pull the trigger on an EMC VNX 5200 for a small (3 host) VMware environment? This is a severely time-constrained project and I'm already familiar with its little brother (have a VNXe 3300 already), so this is looking like an attractive option I could get up and running quickly. I just set up the same machine 2 weeks ago. Are you just going with block?
|
# ? Jun 20, 2014 14:35 |
|
Sickening posted:I just setup the same machine 2 weeks ago. Are you just going with block? Yep, just going with iSCSI using the 1gb ports (4 onboard, 4 on card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things.
|
# ? Jun 20, 2014 14:48 |
|
Cavepimp posted:Yep, just going with iSCSI using the 1gb ports (4 onboard, 4 on card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things. There really isn't much to block. I found it pretty painless and fast. We used FC though.
|
# ? Jun 20, 2014 15:42 |
|
Sickening posted:There really isn't much to block. I found it pretty painless and fast. We used FC though. Should be easy to set up to match his VNXe, especially with iSCSI. Just pull up the best practices from EMC's support site and go to town.
|
# ? Jun 20, 2014 15:45 |
|
Cavepimp posted:Yep, just going with iSCSI using the 1gb ports (4 onboard, 4 on card) and MPIO. The idea is to fairly closely mirror my existing VNXe's setup for simplicity. If I do that, I should be able to get it set up in a day or two max and move on to other things. Out of curiosity, what kind of space and performance do you need, and what is that running you, ballpark?
|
# ? Jun 20, 2014 15:57 |
|
Moey posted:Out of curiosity, what kind of space and performance do you need, as well as what is that running you ballpark? Not much space or performance at the time we're building it. There will only be 3-4 VMs on the cluster initially, and it's somewhat undefined exactly how we're projecting to use it. It's a little bit of an odd project, but it made more sense to build the VMware environment now than it did to buy physical servers/appliances for everything we need to implement. The config I was quoted was the 5200 + DAE, 2+1 100gb FAST cache, 25x600gb 10k 2.5", 2 4x1gb IO cards, 3yr 24x7x4h support, FAST suite, local protection suite, block suite for right about $22k.
|
# ? Jun 20, 2014 16:21 |
|
Look at Nimble if all you care about is iSCSI. It will be very easy to get up and running within a day as there's very little to configure.
|
# ? Jun 20, 2014 16:53 |
|
Was about to say the same thing. Could probably get a Nimble CS220 for around the same price. Dead simple to work with and good performance. You will spend more time racking the thing (their rails suck) than you will deploying it.
|
# ? Jun 20, 2014 17:28 |
|
So I've had this exact box (HP N54L running FreeNAS 9.2.1.2) connected to various ESXi boxes of different versions (4.0, 5.0 and 5.1) over the past several months. Each time it seemed rather finicky to get it connected, and I stupidly never kept track of what exactly I did to get it working, mainly because I wasn't working with production data. I basically changed iSCSI settings here and there and rescanned from VMware until it connected. Can't seem to get it connected to a host at the moment. Anyway, I did a factory reset of FreeNAS, configured an IP address, DNS and gateway, and configured iSCSI by... 1) Creating a Portal to the IP of the box, 192.168.0.32:3260. 2) Setting up an initiator. I left it default to allow all initiators and authorized networks; I later set the authorized network to 192.168.0.0/24. 3) Creating a file extent at /mnt/ZFS36/extent with a size of 3686GB (browsed to this directory and the file exists and is 3.6tb). 4) Creating a target, then a target/extent association. I created a software iSCSI adapter, added a NIC and IP, pointed it to the portal address, and VMware picks up the target name but doesn't connect. There's got to be something simple here I'm overlooking...
# ? Jun 20, 2014 21:43 |
|
Moey posted:Was about to say the same thing. Could probably get a Nimble CS220 for around the same price. Dead simple to work with and good performance. How well does their replication work? Do they support any form of active fail over? I just got a tentative approval from my boss to quote out a secondary SAN for our current planned MSSQL Billing environment, with a budget of 80k. Right now we were thinking we want to purchase a second copy of our Equallogic SAN to act as a backup in case of a primary array failure, but I'm fairly certain that Equallogic can't seamlessly failover in any way. It also doesn't support a lot of advanced features such as compression or dedup, and has absolutely no flash to speak of.
|
# ? Jun 20, 2014 21:57 |
|
Wicaeed posted:How well does their replication work? Do they support any form of active fail over? Replication is a snap to set up and works fast. We are going over a 50meg connection to our other site. Controllers run active-passive and you can fail over live without issues. I'm able to run firmware updates without any outage. I am currently running 2xCS240 with expansion shelves and a CS240.
|
# ? Jun 20, 2014 22:28 |
|
zen death robot posted:Technically speaking the VNX uses post-process deduplication, so newly written data isn't deduplicated until later on. They might be using some bits of Data Domain's IP for the hashing and all, but since DD is mostly for backup and not live data they probably couldn't use it as is without some rather severe performance penalties. For some reason I had it in my head that VNX2 did inline dedupe. Whoops! Yes, post-process is a different beast entirely, but it should still be run as a low priority background process that gives way to user IO and doesn't cause the system to fall over. As far as just not using it, you can get away with it using linked clones for VDI (though I'd argue that you're not really getting the same benefits you would from deduplication since you can still end up with many duplicate blocks existing across linked VMs if they are simply written at different times) but VDI isn't going to be the only thing running on your SAN at most places. I'm just naturally skeptical of heavily using linked clones anyway due to the potential for performance issues. I'd much rather leverage VAAI copy offload on NAS to create thin clones on the storage.
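For contrast, here's a minimal Python sketch of what a post-process pass does. The function name and in-memory model are mine for illustration; a real array hashes fixed-size blocks already sitting on disk, long after the writes were acknowledged.

```python
import hashlib

def post_process_dedupe(blocks):
    """Collapse duplicate blocks after they have already been written.

    `blocks` is the logical sequence of written blocks (bytes). Returns
    (store, refs): `store` holds one physical copy per unique fingerprint,
    and refs[i] is the fingerprint that logical block i now points at.
    """
    store, refs = {}, []
    for blk in blocks:
        fp = hashlib.sha256(blk).hexdigest()  # fingerprint the block
        store.setdefault(fp, blk)             # keep only the first physical copy
        refs.append(fp)
    return store, refs

# Three logical blocks, two unique: the background pass shrinks the
# physical store while every logical block still resolves to its data.
store, refs = post_process_dedupe([b"A" * 8, b"B" * 8, b"A" * 8])
```

This is also why it matters that the pass runs at low priority: the hashing and remapping compete with user IO for the same spindles and CPU, which is the complaint about the VNX implementation above.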
|
# ? Jun 26, 2014 22:20 |
|
So I've had a few meetings with Dell, Nimble and VMware (still waiting on NetApp to get back to us). And some of my colleagues like the idea of VMware vSAN. Based on our workload I think this is a bad idea, but they don't seem to think so. Also from what I understand management and scaling out/up vSAN is a pain in the rear end . How can I convince them that vSAN is a bad idea? I'm having a hard time articulating why.
|
# ? Jun 27, 2014 15:54 |
|
bigmandan posted:So I've had a few meetings with Dell, Nimble and VMware (still waiting on NetApp to get back to us). And some of my colleagues like the idea of VMware vSAN. Based on our workload I think this is a bad idea, but they don't seem to think so. Also from what I understand management and scaling out/up vSAN is a pain in the rear end . How can I convince them that vSAN is a bad idea? I'm having a hard time articulating why. It's really not a bad idea by default. As with everything, we would need to know your use case... and RAID controllers (pray they aren't PERC H310s).
|
# ? Jun 27, 2014 17:35 |
|
bigmandan posted:So I've had a few meetings with Dell, Nimble and VMware (still waiting on NetApp to get back to us). And some of my colleagues like the idea of VMware vSAN. Based on our workload I think this is a bad idea, but they don't seem to think so. Also from what I understand management and scaling out/up vSAN is a pain in the rear end . How can I convince them that vSAN is a bad idea? I'm having a hard time articulating why. Because you're trying to fit enterprise requirements into consumer hardware. Don't do it.
|
# ? Jun 27, 2014 17:37 |
|
Nitr0 posted:Because you're trying to fit enterprise requirements into consumer hardware. How did you get consumer hardware out of that?
|
# ? Jun 27, 2014 17:53 |
|
Because most people will deploy a vsan with 7200RPM drives, a lovely raid controller and then wonder why their VDI infrastructure doesn't work. For the cost of buying proper components (15k drives, ssd, dual raid controllers, etc) you may as well just buy a proper storage system.
|
# ? Jun 27, 2014 17:56 |
|
Nitr0 posted:Because most people will deploy a vsan with 7200RPM drives, a lovely raid controller and then wonder why their VDI infrastructure doesn't work. Yeah, I guess I assumed if they are doing proper legwork looking at options, they would spec their servers properly as well. I also read consumer as home stuff. I think vSAN is neat for real small loads for the SMB, but probably still has room for improvement. Also that story of the poo poo hardware on the HCL and everything locking up due to the load of expanding a node is hilarious. Nonetheless, I agree with you that a real SAN with redundant everything is the way to go.
|
# ? Jun 27, 2014 18:03 |
|
VSAN is probably great if you don't use cheap hardware. And then if you do use non-cheap hardware, your costs are more than just buying a cheaper array. You lose both ways.
|
# ? Jun 27, 2014 18:18 |
|
I think the main problem is that going the vSAN route would fit our needs right now, but some of my colleagues don't seem to realize that we would very quickly outgrow what vSAN provides. Going with a "normal" SAN makes sense long term. Additionally, our read/write ratio is pretty drat close to 1:1. I'm fairly new to SANs in general but, based on my research and dealings with vendors, I think that alone would justify a SAN array. So far I think two Nimble CS220s (one for replication to satisfy DR) would fit our needs now and for the next few years based on our growth. Equallogic arrays would work as well, but I like the flexibility Nimble provides (on paper it seems that way). Am I on the right track here or am I way off base? Just to make sure what I'm thinking is sane I'll provide a few details of our environment: We're an ISP. We're looking to consolidate the majority of our physical servers with virtualization. Currently we have no unified storage solution. Replication to offsite is going to be a must have. Current performance across all servers, both physical and virtual, is about 50 MB/s avg, 100 peak. 1:1 read/write ratio, averaging 1k IOPS, peaking at around 2k. Performance is limited due to directly attached storage, either mirrored or RAID5. A lot of our production hardware is older than 7 years. Most services are the usual things an ISP has: DNS, mail, web servers, radius, etc. Mail accounts for half our IO. After we finish consolidation we'll end up with about 45-50 VM's. 6 DNS, 2 Mail, 1 MySQL (20 schemas or so), 2 RADIUS, 4-6 virtual desktops and the rest being Web servers serving various functions (customer vhosts, internal sites, etc..). Most servers are Debian, with a few Win2k8 servers that we needed for specific applications.
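For what it's worth, the classic back-of-envelope on those numbers looks like this (Python; the per-disk IOPS and RAID write-penalty figures are rough rules of thumb, not vendor specs):

```python
import math

def backend_iops(frontend_iops, write_fraction, raid_write_penalty):
    """Translate host-facing IOPS into disk IOPS.

    Reads cost one backend IO each; each write costs `raid_write_penalty`
    backend IOs (e.g. ~4 for RAID5 read-modify-write, ~2 for RAID10).
    """
    reads = frontend_iops * (1 - write_fraction)
    writes = frontend_iops * write_fraction
    return reads + writes * raid_write_penalty

def spindles_needed(frontend_iops, write_fraction, raid_write_penalty, iops_per_disk):
    """Minimum spindle count to absorb the backend load."""
    load = backend_iops(frontend_iops, write_fraction, raid_write_penalty)
    return math.ceil(load / iops_per_disk)

# 2k IOPS peak at a 1:1 read/write ratio on RAID5 (penalty ~4),
# assuming ~175 IOPS per 10k SAS spindle:
# backend_iops(2000, 0.5, 4)        -> 5000.0 backend IOPS
# spindles_needed(2000, 0.5, 4, 175) -> 29 spindles
```

The point of the exercise: a 1:1 write mix multiplies the backend load, which is why a hybrid array with write coalescing (or NVRAM-acknowledged writes, as discussed below) handles this workload with far fewer disks than raw spindle math suggests.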
|
# ? Jun 27, 2014 19:46 |
|
bigmandan posted:I think the main problem is that going the vSAN route would fit our needs right now but some of my colleagues don't seem to realize is that we would very quickly outgrow what vSAN provides. Going with a "normal" SAN makes sense long term. Additionally, our read/write ratio is pretty drat close to 1:1. I'm fairly new to SANs in general but, based on my research and dealings with vendors, I think that alone would justify a SAN array. So far I think two Nimble cs220's (one for replication to satisfy DR) would fit our needs now and for the next few years based on our growth. Equallogic arrays would work as well, but I like the flexibility Nimble provides (on paper it seems that way). Your IO requirements are really really low. You could probably run that on just about anything. Even VSAN would work just fine, though if you're concerned about growth it might be more problematic long term. Things like replacing a failed drive will require putting the host in maintenance mode and evacuating all VMs, which is a lot of hassle for something that would be handled very easily by a dedicated storage array with hot spares and hot swappable drives. Data also isn't guaranteed to be local to the node hosting the VM either, which adds latency. And the requirement for write mirroring to SSD on another node adds still more latency which can definitely be felt in VDI environments. VDI is fairly write intensive and very latency sensitive so all things being equal I would choose the lowest latency solution possible, which is going to be an array that does not have to distribute IO over a backplane and which acknowledges writes when they hit NVRAM, rather than SSD (both are fast, but NVRAM will be an order of magnitude faster). Like everyone has said, by the time you spec out hardware for a proper VSAN deployment you're in dedicated SAN territory anyway and you might as well get one and accrue the other benefits that come with it.
|
# ? Jun 27, 2014 20:05 |
|
I suspect the best use case of vSAN is extending the value of previous capex by repurposing hardware that already meets the requirements, not buying new.
|
# ? Jun 27, 2014 20:08 |
|
I ended up going with the VNX, mostly because of the familiarity and lack of time to research the Nimble. Are the EMC VNX associate/specialist certs worth pursuing? After sitting through the training we had bundled and doing this implementation I'd probably be pretty close, I just don't know how much value that holds.
|
# ? Jun 27, 2014 21:06 |
|
NippleFloss posted:Your IO requirements are really really low. You could probably run that on just about anything. Even VSAN would work just fine, though if you're concerned about growth it might be more problematic long term. Things like replacing a failed drive will require putting the host in maintenance mode and evacuating all VMs, which is a lot of hassle for something that would be handled very easily by a dedicated storage array with hot spares and hot swappable drives. Data also isn't guaranteed to be local to the node hosting the VM either, which adds latency. And the requirement for write mirroring to SSD on another node adds still more latency which can definitely be felt in VDI environments. VDI is fairly write intensive and very latency sensitive so all things being equal I would choose the lowest latency solution possible, which is going to be an array that does not have to distribute IO over a backplane and which acknowledges writes when then hit NVRAM, rather than SSD (both are fast, but NVRAM will be an order of magnitude faster). Thanks for the info. Our VDI is pretty minimal at the moment but it's good to know about the write intensity and latency.
|
# ? Jun 27, 2014 21:37 |
|
I've got a budget of ~10k. Need to get an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GE, as it would hook into a couple of N5Ks. Dilbert steered me away from hacking together a solution with some discount UCS 240s a while back and pointed me in the direction of Dell's MD series. I'm currently looking at the MD3800i with 8 2TB 7.2k SAS drives, does that sound alright? I was looking at the configuration options, wasn't really sure what this referred to: I can't tell if that's needed or not.
|
# ? Jun 27, 2014 22:04 |
|
sudo rm -rf posted:I've got a budget of ~10k. Need to get an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GE, as it would hook into a couple of N5Ks. That looks like an HBA for doing DAS (direct attach), which you wouldn't need if your goal is to use iSCSI. More generally, do you have the option to go through a VAR or at least work directly with a Dell sales rep? They'll have access to discounts, no one should ever be paying list price for IT gear. Also their job is to help you ensure you're buying the right thing and answer questions like these.
|
# ? Jun 27, 2014 22:13 |
|
So I got to sit down for an hour with Nimble and go through a webex presentation about their product. If half of what they are claiming is true, this should be a pretty simple sell to Management, as long as it doesn't break the bank ($80k).
|
# ? Jun 27, 2014 22:32 |