|
Yay! I can reuse these cables, then. Now just to find a decent (yet cheap) switch.
|
# ? Mar 14, 2013 16:40 |
|
Corvettefisher posted:Yeah you should be getting 2 dual port 10gb Nics, 2 10Gb switches going to your 10g ports on your netapp device.

The software vendor had us run the SQLIO test and claimed we needed a 7,000+ IOPS result from it, which is what they got in their controlled environment on RAID10. They didn't want us running RAID-DP at all initially because they claimed it was inferior. We ran the SQLIO test in a development environment and ended up beating that IO requirement, but only when we allocated more than 4Gb of bandwidth to the NFS storage (it was tested on 10GbE). I'm 100% sure their requirements are bullshit, but if we don't play by their rules they will claim our storage is the problem the first time we run into an issue.
|
# ? Mar 14, 2013 16:50 |
|
whaam posted:The software vendor had us run the sqlio test and claimed we needed 7,000+ iops result from that, which is what they got in their controlled environment on RAID10, they didn't want us running RAID-DP at all initially because they claimed it was inferior. We ran the SQLIO test in a development environment and ended up breaking that IO requirement but only when we allocated more than 4Gb of bandwidth to the NFS storage (it was tested on 10GbE). I'm 100% sure their requirements are bullshit but if we don't play by their rules they will claim our storage is the problem the first time we run into an issue.

IOPS and link speed are not intrinsically linked. The bandwidth of 7000 IOPS of 4KB mixed random reads and writes is a lot different than 7000 1MB writes. The first one needs a maximum of 218mb/s (plus overhead, either way well under 1gbps), probably less since it's a mix of reads and writes. The second needs 55Gbps. I'm not sure you should rely solely on your lab test.
|
# ? Mar 14, 2013 23:11 |
|
It seems my networking issues with Hyper-V, which I've mentioned earlier in this thread, have gone away now that I've updated the kernel of the Linux VM to 3.8. I suppose it was indeed an issue with the integration driver.
|
# ? Mar 15, 2013 00:09 |
|
adorai posted:IOPS and link speed are not intrinsically linked. The bandwidth of 7000 IOPS of 4KB mixed random reads and writes is a lot different than 7000 1MB writes. The first one needs a maximum of 218mb/s (plus overhead, either way well under 1gbps), probably less since it's a mix of reads and writes. The second needs 55Gbps. I'm not sure you should rely solely on your lab test.

I see what you mean. We had a specific write size for the test; it was just to show that on a similar-sized write, RAID-DP could pull the same IOPS and MB/s as the RAID10 they benchmarked with. The size of the test had no practical link to the size of the writes the actual program will do.

You mentioned iSCSI earlier. We, along with our vendor, have been researching whether it's possible to bond 4 1GbE links together in iSCSI and have traffic actually travel over more than one link at a time. Even with iSCSI, is this a case of doing unconventional things like VMDK software RAID and using different subnets for each link? Or does iSCSI support a more straightforward method?
|
# ? Mar 15, 2013 16:28 |
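To make adorai's point above concrete, here's the back-of-the-envelope arithmetic as a quick sketch (figures are mine; the post's 218mb/s number presumably assumes a slightly different block size or decimal units):

```python
def iops_to_mbit(iops: int, block_bytes: int) -> float:
    """Convert an IOPS figure at a given I/O size into megabits per second."""
    return iops * block_bytes * 8 / 1_000_000

# 7,000 IOPS of 4 KiB random I/O: modest bandwidth, well under 1 Gbps
small = iops_to_mbit(7000, 4096)        # ~229 Mbit/s
# 7,000 IOPS of 1 MB writes: enormous bandwidth
large = iops_to_mbit(7000, 1_000_000)   # 56,000 Mbit/s = 56 Gbps
print(f"{small:.0f} Mbit/s vs {large / 1000:.0f} Gbit/s")
```

Same IOPS number, roughly a 250x difference in required link speed, which is why an IOPS target alone says nothing about whether 4x 1GbE is enough.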
|
I guess I spoke too soon about the issue being solved. However I did find out that the offload functionality of a NIC may mess with Hyper-V, so I've disabled it. Anyone heard about this one before? I mean, it's an Intel NIC, so I don't expect it to be broken in the driver/chip.
|
# ? Mar 15, 2013 19:49 |
|
If this belongs in the Home Networking thread, I'll move it there, but it's VMware! I have a pfSense VMware image running inside Workstation 9.0.2 on an older OptiPlex 755 running WHS2011 (Server 2008 based). I'm using a dual-port PCI-X Intel Pro Gigabit card (in a PCI slot) for the LAN/WAN interfaces, and pfSense is intermittent or unwilling to move traffic across either interface.

I have disabled all of the TCP/IP and other Windows bindings on the interfaces except for VMware Bridge Protocol (as told in the how-to), used the Virtual Network Editor to configure VMnet1 and VMnet2 to bridge the ports on the card directly, and those are mapped to two network adapters in the vmx itself. pfSense reports them as Intel PRO/1000 Legacy Network Connection 1.0.4, reports correct MAC addresses, and correctly reports whether the links are up or down, though without updating the status past what it was at boot.

Network map:
WAN: Nanobridge (192.168.0.1/24) to Intel card port 1 (192.168.0.2/24) to VMnet1/em0
LAN: Intel card port 2 (bottom) to VMnet2/em1 (192.168.1.1/24) to GbE switches and a dd-wrt WRT54G

This bugs me, and I must be missing something with the configuration. Windows Firewall is totally disabled (ruling things out), there's no antivirus, and unless WHS2011 has something running on the port that isn't able to be unchecked in the card settings (VMware Bridge Protocol is the only enabled option), I'm confused. The card is tested, the ports light up correctly, and the cables are good. Ideas?

edit: I just noticed that the hardware connected icon is greyed out; setting "Connect at Power on" doesn't work, and neither will checking the Connect box. This is regardless of whatever actual physical port I assign VMnet1 to (Realtek 8139D card, onboard Intel GbE, either PCI-X card port). Is it a corrupted or bad vmx?

DJ Commie fucked around with this message at 22:53 on Mar 15, 2013 |
# ? Mar 15, 2013 22:23 |
|
whaam posted:I see what you mean, we had a specific size of write for the test, it was just to show that on a similar size write that RAID DP could pull the same IOPS and MB/s as RAID10 that they benchmarked with, the size of the test had no practical link to the size of the writes that the actual program will do.
|
# ? Mar 15, 2013 22:36 |
|
Combat Pretzel posted:I guess I spoke too soon about the issue being solved. However I did find out that the offload functionality of a NIC may mess with Hyper-V, so I've disabled it. Anyone heard about this one before? I mean, it's an Intel NIC, so I don't expect it to be broken in the driver/chip.

Offload has always caused problems for us. Really weird stuff to troubleshoot, too, e.g. I can ping node 1 and node 2, both on the same subnet, but they can't ping each other. Or both nodes can ping each other, yet trying to join them to a cluster fails unless you disable offload. It's usually been Broadcom cards in my experience, not sure about Intel.
|
# ? Mar 15, 2013 22:44 |
|
For fellow VCPs out there: what would you want to see in the ICM class that you felt was under-taught or unaddressed? The 5.1 class is this fall and I want to be prepared. I can honestly think of a lot, but I want some second opinions.
|
# ? Mar 16, 2013 00:08 |
|
Cool. I just got the go-ahead to build out a new Hyper-V environment the other day. Every other one I've come into has been inherited, and poorly designed at best. So I've got about 300k for hosts and storage (and any networking additions needed). I'm meeting with the software architect next week to discuss his "needs/wants" for hardware power, and I have a pretty good idea for storage requirements based on running the infrastructure for the previous generation of the same software (relatively low speed/IOPS requirements on storage, lots and lots of space needed). Right now, this is a pretty vague question, but: What SAN solutions [ideally under 50k] have you had good experiences with in Hyper-V environments? Just trying to see what has worked for others, and worked well.
|
# ? Mar 16, 2013 14:51 |
|
As long as you realize it is a vague question, I would personally suggest Equallogic. Especially if you don't have anyone with a ton of storage experience.
|
# ? Mar 16, 2013 14:53 |
|
Internet Explorer posted:As long as you realize it is a vague question, I would personally suggest Equallogic. Especially if you don't have anyone with a ton of storage experience.

Definitely know it's a vague question, and I'm definitely just starting to explore options. We don't have a storage guy; we have me. I'm hiring 1-2 people, but don't have the budget to hire someone for just storage.
|
# ? Mar 16, 2013 14:55 |
|
Then I can't recommend Equallogic enough and I can't say enough bad things about EMC.
|
# ? Mar 16, 2013 15:02 |
|
Make sure to also check out the IBM V7000. It falls easily within your price range, and while the level of knowledge needed to get one going is going to be slightly higher than for a comparable EqualLogic, a decent partner will do all the up-front integration work for you. They're really loving rock-solid pieces of hardware with awesome performance and great support (caveat: see earlier comment in the thread about support escalation processes). They're definitely the best entry-level/midrange SAN product on the market right now, but the EqualLogic experience is also a good choice for IT generalists who just need to make poo poo work.

Walked posted:Definitely know it's a vague question - and definitely just starting the exploration of options. We dont have a storage guy - we have me, and I'm hiring 1-2 people, but dont have the budget to hire someone for just storage

That said, virtualization deployments are tough, and $300k is a decent chunk of change if you're not talking about replication -- I assume you're looking at virtualizing a decent-sized environment here. Make sure you fully understand everything you're doing, because undersizing a virtualized environment is a recipe for complete business disaster.

Vulture Culture fucked around with this message at 15:19 on Mar 16, 2013 |
# ? Mar 16, 2013 15:15 |
|
Misogynist posted:Make sure to also check out IBM V7000. It falls easily within your price range and while the level of knowledge needed to get one going is going to be slightly higher than a comparable EqualLogic, a decent partner will do all the up-front integration work for you. They're really loving rock-solid pieces of hardware with awesome performance and great support (caveat: see earlier comment in the thread about support escalation processes). They're definitely the best entry-level/midrange SAN product on the market right now, but the EqualLogic experience is also a good choice for IT generalists who just need to make poo poo work.

Thanks; I'll give that a look too. As for your last tidbit of commentary: it's a bit pile A / pile B. The actual size of the deployment is relatively small compared to most of the virtualized environments I've come across: three VMs per environment, with three environments (test, training, production), plus a COOP site with relatively lax requirements on data replication (both volume and latency) that, thus far, has been handled quite well via Hyper-V 2012 replicas.

I've found at every stage here that budget has been over-allocated for the actual system needs, and by a large margin at that. Highly preferable to the inverse, I suppose. (I have little say in how much I get to work with for a specific project, but I get to use it pretty freely as deemed fit for that project.)

Walked fucked around with this message at 15:54 on Mar 16, 2013 |
# ? Mar 16, 2013 15:52 |
|
Walked posted:Thanks; I'll give that a look too.

The actual budget for this project should probably be a third of what was allocated, tops, assuming the requisite datacenter infrastructure (networking, etc.) is already in play.
|
# ? Mar 16, 2013 18:22 |
|
Yeah, we're finally moving through with our new environment that's going to be hosting around 50 VMs and we're only in a bit over $100k.
|
# ? Mar 16, 2013 18:29 |
|
Misogynist posted:
Even if the requisite infrastructure wasn't in play, I bet you could still do it for a third of that budget. Basic SMB virtualization is cheap as balls now. A Dell MD3200i fully popped with 1TB disks, a pair of Cisco 2960s, and 3 hosts, along with licensing, should cost about 50 grand these days, give or take. And it should be powerful enough to run any number of servers an SMB should need. Even if your environment pushes some badass IOPS or has some sort of massive memory requirements, you should be able to put together a crème de la crème environment for just around six figures.
|
# ? Mar 16, 2013 19:34 |
|
Syano posted:Even if requisite infrastructure wasn't in play I bet you could still do it for a 3rd of that budget. Basic SMB virtualization is cheap as balls now. A Dell MD3200i fully pop'ed with 1TB disks, a pair of Cisco 2960s and 3 hosts, along with licensing should cost about 50 grand, give or take, these days. And it should be powerful enough to run any number of servers that an SMB should need.

For hardware, sure, but you forgot to include the $200k that they'll be paying a consultant to put it in.
|
# ? Mar 16, 2013 20:31 |
|
madsushi posted:For hardware, sure, but you forgot to include the $200k that they'll be paying a consultant to put it in.

drat I gotta get in this business.
|
# ? Mar 16, 2013 21:05 |
|
Walked posted:Right now, this is a pretty vague question, but: What SAN solutions [ideally under 50k] have you had good experiences with in Hyper-V environments? Just trying to see what has worked for others, and worked well.

6x high-end single-proc Cisco UCS rackmount servers, each with 128GB of RAM: $60k
6 procs of Enterprise Plus VMware licensing (not sure on the exact price; I would guess around $20k including vCenter)
6 procs of Server '08 Datacenter: $20k
2x Nexus 5k switches (layer 2 only): $35k
1x NetApp HA pair: $200k

I put this information here so you can see that in our case, the storage is what we spent the most on, not the least. Nearly every performance issue we have had in our environment has been traced back to the storage in one way or another. It is never the network, and it is definitely not ever the servers. This is probably a consequence of the great visibility we have into CPU, RAM, and network utilization, and the OK-at-best visibility we have into our storage, but either way, storage is the most important thing in any new deployment, in my opinion.
|
# ? Mar 16, 2013 23:33 |
|
adorai posted:6x High end single proc Cisco UCS rack mount servers each with 128GB of RAM: $60k

Hold me or I may die laughing.
|
# ? Mar 17, 2013 00:21 |
|
evil_bunnY posted:Hold me or I may die laughing.

I went back and looked, and they were only actually $52k (total). Either way, not sure why you are laughing. Even using newegg prices for memory, 10GbE NICs, and E5-2690 procs, I don't think we could have done much better.
|
# ? Mar 17, 2013 00:32 |
|
evil_bunnY posted:Hold me or I may die laughing.

Depends which model he got. The M3s are actually decent, unlike the M1s, which were poo poo; the M2s were slightly better after you updated the BIOSes, but the remote management was still flaky. The M3s are the first I haven't actually had a problem with, other than the bullshit about how you are allowed to configure them. Not to mention Cisco is being fairly competitive now with pricing, once they realized people wouldn't flock to Cisco servers just because they had the word Cisco on them.
|
# ? Mar 17, 2013 02:20 |
|
adorai posted:I went back and looked, and they were only actually $52k (total). Either way, not sure why you are laughing. Even using newegg prices for memory, 10Gbe NICs, and e5 2690 procs, I don't think we could have done much better.

I think that while your price wasn't actually so bad, what made me laugh was remembering what we got quoted for UCS when I was last buying machines. They were 3 times the prices of Dell/Fuji/IBM, and two different resellers wouldn't budge much at all.

Corvettefisher posted:Not to mention cisco is being fairly competitive now with pricing, once they realized people wouldn't flock to cisco servers because they had the word cisco on them.
|
# ? Mar 17, 2013 11:35 |
|
Edit: Replied to ancient post without realizing it. Hi thread.
|
# ? Mar 18, 2013 17:30 |
|
I am having trouble getting network connections to work reliably in VirtualBox using CentOS 6.4 VMs. The host machine is Windows 8 Pro with an Intel motherboard and an Intel gig-E adapter. I can install CentOS 6.4 on a Red Hat VM, configure the ethernet adapter during setup, and ping google.com from the command line. But if I clone this VM in the VirtualBox GUI, check the "reconfigure MAC address" box, and then boot it, the cloned machine is not able to ping google.com. Both machines are using NAT with unique MAC addresses.

Is this a VirtualBox issue, or is my DHCP server wigging out? I have Ubuntu 12.04 server and 12.04 desktop VMs running side by side with no network issues on the same machine. Thanks
|
# ? Mar 19, 2013 08:25 |
|
Hadlock posted:I am having trouble getting network connections to work reliably in VirtualBox using CentOS 6.4 VMs. Host machine is Windows8 Pro with an intel motherboard/intel gig-e adapter.

99% odds that it's coming up as a different interface. Red Hat(-alikes) put MAC info in /etc/sysconfig/network-scripts/ifcfg-${interface}, as well as /etc/udev/rules.d/70-persistent-net.rules.

When it reboots, run "ifconfig -a" and check the MACs on your (probably multiple) interfaces, then compare that against /etc/sysconfig/network-scripts/ifcfg-${interface}. If no matching file exists, it won't do anything. If the MAC is wrong, it won't do anything. Etc.

If that's the case, just delete everything from /etc/udev/rules.d/70-persistent-net.rules, remove the old (nonexistent) MAC from /etc/sysconfig/network-scripts/ifcfg-${interface}, and reboot.
|
# ? Mar 19, 2013 14:40 |
|
Hyper-V friends, lend me your ears. Whenever I try to create a fixed disk on a Server 2012 box, I get the following:

quote:The server encountered an error trying to create the virtual hard disk.

Preliminary googling says this is a result of sector-sizing problems and Hyper-V's lack of support for 4K sectors. However, everything I find basically says "this is a problem", not "there is a fix for this", which obviously feels incorrect. What should I be looking into?
|
# ? Mar 20, 2013 22:59 |
|
Since Hyper-V has to go through the host's NTFS driver in case of a VHDX, why would the underlying hardware sector size pose any issues? Especially since NTFS defaults to 4KB cluster size, too?
|
# ? Mar 20, 2013 23:13 |
|
Yeah it makes fuckall sense to me since my other Hyper-V nodes are using 4k sectors - the only difference is they're 2008 R2 and this is the first time I've rolled out on 2012. However, I've replaced all the relevant hardware, so at this point I'm just clutching at anything I can.
MC Fruit Stripe fucked around with this message at 23:21 on Mar 20, 2013 |
# ? Mar 20, 2013 23:18 |
|
Weird, how big's the disk you're trying to create?
|
# ? Mar 21, 2013 00:31 |
|
50gb - it works fine when I create a dynamic disk, I can build a VM, run it, no issues at all. When I try to create a fixed disk, it'll get to say, 20% of 50gb (that's 10gb!), and start slowing, and by 30% it'll fail with the above error. I've changed every moving part except for the OS, so I just put 2008 R2 on that box and am gonna see if it behaves the same way.
|
# ? Mar 21, 2013 00:49 |
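Since creating a fixed disk means the host zero-fills the entire file up front, a failure at 20-30% smells like the underlying volume choking partway through a long sequential write rather than anything sector-size related. A rough way to reproduce that outside Hyper-V (my own sketch, not a Microsoft tool; point the path at the same volume the VHD lives on):

```python
import os
import tempfile
import time

def zero_fill(path: str, total_bytes: int, chunk: int = 4 * 1024 * 1024) -> float:
    """Sequentially zero-fill a file, as fixed-VHD creation does; returns MB/s."""
    buf = b"\x00" * chunk
    written = 0
    start = time.monotonic()
    with open(path, "wb") as f:
        while written < total_bytes:
            n = min(chunk, total_bytes - written)
            f.write(buf[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    return total_bytes / (time.monotonic() - start) / 1_000_000

# Small demo: 64 MB into a temp dir; scale total_bytes up for a real test
with tempfile.TemporaryDirectory() as d:
    rate = zero_fill(os.path.join(d, "fill.bin"), 64 * 1024 * 1024)
    print(f"{rate:.0f} MB/s sustained")
```

If the throughput collapses partway through a multi-GB fill here too, the problem is the disk or controller, not Hyper-V.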
|
Is there any way to mount an FC presented LUN to a Server 2008R2 Hyper-V *Guest*? The key issue here is I need a LUN presented and recognised as a NetApp LUN in the guest OS, as opposed to a generic IDE or SCSI driver. I understand a virtual FC HBA is a feature of 2012 Hyper-V but assuming upgrading the hosts is not an option, do I have anything else I can try?
|
# ? Mar 21, 2013 12:12 |
|
Was having a problem with the snapshot-size alarm triggering falsely in vSphere 5.1 (it would trigger for VMs that didn't have snapshots). I submitted a case, then, just for fun, I deleted the alarm and recreated it, and it triggered for every single VM (and sent a shitload of emails). The tech and I had a bit of a laugh when we did a webex, and then he told me that the alarm doesn't work in 5.1 and there's no workaround.
|
# ? Mar 21, 2013 20:17 |
|
That's like the VDR bug where it will send you emails every 10 seconds until you hard reboot it http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013533 Best bug ever to have in the morning....
|
# ? Mar 21, 2013 20:20 |
|
Muslim Wookie posted:Is there any way to mount an FC presented LUN to a Server 2008R2 Hyper-V *Guest*?

You can definitely do physical disk and LUN passthrough on 2008 R2 Hyper-V (not supported with failover clustering). You don't even need a volume on it. But recognizing it as a LUN on the guest is a tough requirement -- I can't see how it's possible without an HBA.
|
# ? Mar 22, 2013 05:21 |
|
Exclusive posted:You can definitely do physical disk and LUN passthrough on 2008R2 Hyper-V (not supported with failover clustering). You don't even need a volume on it. But recognizing it as a LUN on the guest is a tough requirement -- I can't see how it's possible without an HBA.

Yeah, I definitely knew that you can pass through LUNs, but having it recognised as such in the VM is another story.
|
# ? Mar 22, 2013 06:36 |
|
|
For anyone with an interest and a supported GPU, nVidia VIBs are available: http://www.nvidia.com/object/vmware-vsphere-esxi-5.1-304.76-driver.html. I have doubts it'll work with other GPUs without modification/hackery, say for whitebox users like myself, but I do hope to have some time to try with a Fermi desktop card some time next week.

Edit: I had it working before with an older driver that wasn't locked down. I could just rely on it, but I noticed odd power consumption patterns with the card. Could be a server-side issue, though. I would basically come back in a few days to find the room hot from the GPU, which had been running full-tilt for days, without any VMs running off of it.

Edit: Fixed link!

Kachunkachunk fucked around with this message at 19:36 on Mar 23, 2013 |
# ? Mar 22, 2013 14:06 |