Kachunkachunk
Jun 6, 2011
Yay! I can reuse these cables, then. Now to find a decent (yet cheap) switch.

whaam
Mar 18, 2008

Corvettefisher posted:

Yeah, you should be getting 2 dual-port 10Gb NICs and 2 10Gb switches going to the 10Gb ports on your NetApp device.

Remember to use some VM affinity with that app server and SQL server, as well as VMXNET3 adapters.

Also, just a question: have you done any simulations of the app server and SQL server? Some companies love love love to completely over-spec product requirements, when in production they will never come anywhere close to using them.

The software vendor had us run the SQLIO test and claimed we needed a 7,000+ IOPS result from it, which is what they got in their controlled environment on RAID 10; they didn't want us running RAID-DP at all initially because they claimed it was inferior. We ran the SQLIO test in a development environment and ended up beating that IO requirement, but only when we allocated more than 4Gb of bandwidth to the NFS storage (it was tested on 10GbE). I'm 100% sure their requirements are bullshit, but if we don't play by their rules they will claim our storage is the problem the first time we run into an issue.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

whaam posted:

The software vendor had us run the SQLIO test and claimed we needed a 7,000+ IOPS result from it, which is what they got in their controlled environment on RAID 10; they didn't want us running RAID-DP at all initially because they claimed it was inferior. We ran the SQLIO test in a development environment and ended up beating that IO requirement, but only when we allocated more than 4Gb of bandwidth to the NFS storage (it was tested on 10GbE). I'm 100% sure their requirements are bullshit, but if we don't play by their rules they will claim our storage is the problem the first time we run into an issue.
IOPS and link speed are not intrinsically linked. The bandwidth of 7000 IOPS of 4KB mixed random reads and writes is a lot different than 7000 1MB writes. The first one needs a maximum of 218Mb/s (plus overhead; either way, well under 1Gbps), probably less since it's a mix of reads and writes. The second needs 55Gbps. I'm not sure you should rely solely on your lab test.
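
If you want to sanity-check the numbers yourself, the back-of-the-envelope math is just IOPS times IO size:

code:
# bandwidth in Mbit/s = IOPS * IO size (KB) * 8 bits / 1024
echo $(( 7000 * 4 * 8 / 1024 ))      # 4KB IOs: prints 218 -- well under a single 1GbE link
echo $(( 7000 * 1024 * 8 / 1024 ))   # 1MB IOs: prints 56000, i.e. roughly 55Gbit/s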

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
My networking issues with Hyper-V, which I mentioned earlier in this thread, seem to have gone away now that I've updated the kernel of the Linux VM to 3.8. I suppose it was indeed an issue with the integration driver.

whaam
Mar 18, 2008

adorai posted:

IOPS and link speed are not intrinsically linked. The bandwidth of 7000 IOPS of 4KB mixed random reads and writes is a lot different than 7000 1MB writes. The first one needs a maximum of 218Mb/s (plus overhead; either way, well under 1Gbps), probably less since it's a mix of reads and writes. The second needs 55Gbps. I'm not sure you should rely solely on your lab test.

I see what you mean. We used a specific write size for the test; it was just to show that, at a similar write size, RAID-DP could pull the same IOPS and MB/s as the RAID 10 they benchmarked with. The test's write size had no practical link to the size of the writes the actual program will do.

You mentioned iSCSI earlier. We (and our vendor) have been researching whether it's possible to bond 4x 1GbE links together with iSCSI and have traffic actually travel over more than one link at a time. Even with iSCSI, is this a case of doing unconventional things like VMDK software RAID and using different subnets for each link, or does iSCSI support a more straightforward method?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I guess I spoke too soon about the issue being solved. However I did find out that the offload functionality of a NIC may mess with Hyper-V, so I've disabled it. Anyone heard about this one before? I mean, it's an Intel NIC, so I don't expect it to be broken in the driver/chip.

DJ Commie
Feb 29, 2004

Stupid drivers always breaking car, Gronk fix car...
If this belongs in the Home Networking thread, I'll move it there, but it's VMware!

I have a pfSense VMware image running inside of Workstation 9.0.2 on an older OptiPlex 755 running WHS2011 (Server 2008 based). I'm using a dual-port PCI-X Intel Pro Gigabit card (in a PCI slot) for the LAN/WAN interfaces, and pfSense is intermittent or unwilling to move traffic across either interface. I have disabled all of the TCP/IP and other Windows bindings on the interfaces except for VMware Bridge Protocol (as told in the how-to), used the Virtual Network Editor to configure VMnet1 and VMnet2 to bridge the ports on the card directly, and those are mapped to two network adapters in the vmx itself. pfSense reports them as Intel PRO/1000 Legacy Network Connection 1.0.4, reports correct MAC addresses, and correctly reports whether the links are up or down, though without updating the status past what it was at boot.

Network map:
Nanobridge (192.168.0.1/24) to Intel card Port 1 (192.168.0.2/24), bridged as VMnet1/em0, WAN
Intel card Port 2 (bottom) to VMnet2 to em1 as LAN (192.168.1.1/24), out to GbE switches and a dd-wrt WRT54G

This bugs me, and I must be missing something with the configuration. Windows firewall is totally disabled (ruling things out), there's no antivirus, and unless WHS2011 has something running on the port that isn't able to be unchecked in the card settings (VMware Bridge Protocol is the only enabled option), I'm confused.

The card is tested, the ports light up correctly, cables are good.


Ideas?


Edit: I just noticed that the hardware-connected icon is greyed out; setting "Connect at power on" doesn't work, and neither does checking the Connect box. This is regardless of which actual physical port I assign VMnet1 to (Realtek 8139D card, onboard Intel GbE, either PCI-X card port). Is it a corrupted or bad vmx?

DJ Commie fucked around with this message at 22:53 on Mar 15, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

whaam posted:

I see what you mean. We used a specific write size for the test; it was just to show that, at a similar write size, RAID-DP could pull the same IOPS and MB/s as the RAID 10 they benchmarked with. The test's write size had no practical link to the size of the writes the actual program will do.

You mentioned iSCSI earlier. We (and our vendor) have been researching whether it's possible to bond 4x 1GbE links together with iSCSI and have traffic actually travel over more than one link at a time. Even with iSCSI, is this a case of doing unconventional things like VMDK software RAID and using different subnets for each link, or does iSCSI support a more straightforward method?
iSCSI supports MPIO, which allows you to more or less round-robin your disk operations. You can do it from either your guest or from the VMware host, and which way you choose is probably more of a personal preference than anything else. We run an iSCSI initiator in our guests and use NFS for our VMDKs, but we used to run MPIO iSCSI for our VMDKs as well.
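
If you go the VMware host route on 5.x, the rough shape of it is software iSCSI port binding plus the round-robin path policy; something like this from the CLI (the vmhba/vmk/naa names below are just placeholders for whatever yours actually are):

code:
# bind one iSCSI vmkernel port per 1GbE uplink to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk4

# set the path selection policy on the LUN to round robin, then verify
esxcli storage nmp device set --device naa.60a98000xxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp device list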

GrandMaster
Aug 15, 2004
laidback

Combat Pretzel posted:

I guess I spoke too soon about the issue being solved. However I did find out that the offload functionality of a NIC may mess with Hyper-V, so I've disabled it. Anyone heard about this one before? I mean, it's an Intel NIC, so I don't expect it to be broken in the driver/chip.

Offload has always caused problems for us. Really weird stuff to troubleshoot, too, e.g. I can ping node 1 and node 2, both on the same subnet, but they can't ping each other. Or both nodes can ping each other, yet trying to join them to a cluster fails unless you disable offload.

It's usually been Broadcom cards in my experience, not sure about Intel.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
For fellow VCPs out there, what would you want to see in the ICM class that you felt was under-taught or unaddressed?

5.1 is this fall and I want to be prepared.

I can honestly think of a lot but I want some second opinions.

Walked
Apr 14, 2003

Cool. I just got the go-ahead to build out a new Hyper-V environment the other day. Every other one I've come into has been inherited, and poorly designed at best.

So I've got about 300k for hosts and storage (and any networking additions needed). I'm meeting with the software architect next week to discuss his "needs/wants" for hardware power, and I have a pretty good idea for storage requirements based on running the infrastructure for the previous generation of the same software (relatively low speed/IOPS requirements on storage, lots and lots of space needed).

Right now, this is a pretty vague question, but: What SAN solutions [ideally under 50k] have you had good experiences with in Hyper-V environments? Just trying to see what has worked for others, and worked well.

Internet Explorer
Jun 1, 2005

As long as you realize it is a vague question, I would personally suggest Equallogic. Especially if you don't have anyone with a ton of storage experience.

Walked
Apr 14, 2003

Internet Explorer posted:

As long as you realize it is a vague question, I would personally suggest Equallogic. Especially if you don't have anyone with a ton of storage experience.

Definitely know it's a vague question - and definitely just starting the exploration of options. We don't have a storage guy - we have me, and I'm hiring 1-2 people, but don't have the budget to hire someone for just storage :(

Internet Explorer
Jun 1, 2005

Then I can't recommend Equallogic enough and I can't say enough bad things about EMC.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Make sure to also check out the IBM V7000. It falls easily within your price range, and while the level of knowledge needed to get one going is slightly higher than with a comparable EqualLogic, a decent partner will do all the up-front integration work for you. They're really loving rock-solid pieces of hardware with awesome performance and great support (caveat: see the earlier comment in the thread about support escalation processes). They're definitely the best entry-level/midrange SAN product on the market right now, but the EqualLogic experience is also a good choice for IT generalists who just need to make poo poo work.

Walked posted:

Definitely know it's a vague question - and definitely just starting the exploration of options. We don't have a storage guy - we have me, and I'm hiring 1-2 people, but don't have the budget to hire someone for just storage :(
Most storage stuff from anyone other than EMC is pretty easy to wrap your head around if you don't need to know every little intricate detail about performance and best practices. When you do need that, hire a storage engineer, or at least a decent consultant.

That said, virtualization deployments are tough, and $300k is a decent chunk of change if you're not talking about replication -- I assume you're looking at virtualizing a decent-sized environment here. Make sure you fully understand everything you're doing, because undersizing a virtualized environment is a recipe for complete business disaster.

Vulture Culture fucked around with this message at 15:19 on Mar 16, 2013

Walked
Apr 14, 2003

Misogynist posted:

Make sure to also check out the IBM V7000. It falls easily within your price range, and while the level of knowledge needed to get one going is slightly higher than with a comparable EqualLogic, a decent partner will do all the up-front integration work for you. They're really loving rock-solid pieces of hardware with awesome performance and great support (caveat: see the earlier comment in the thread about support escalation processes). They're definitely the best entry-level/midrange SAN product on the market right now, but the EqualLogic experience is also a good choice for IT generalists who just need to make poo poo work.

Most storage stuff from anyone other than EMC is pretty easy to wrap your head around if you don't need to know every little intricate detail about performance and best practices. When you do need that, hire a storage engineer, or at least a decent consultant.

That said, virtualization deployments are tough, and $300k is a decent chunk of change if you're not talking about replication -- I assume you're looking at virtualizing a decent-sized environment here. Make sure you fully understand everything you're doing, because undersizing a virtualized environment is a recipe for complete business disaster.


Thanks; I'll give that a look too.

As per your last tidbit of commentary: it's a bit pile a / pile b. The actual size of the deployment is relatively small compared to most of the virtualized environments I've come across. Three VMs per environment, with three environments (test, training, production) plus a COOP site with relatively lax requirements on data replication (both volume and latency) that, thus far has been handled quite well via Hyper-V 2012 replicas. I've found at every stage here that budget has been over-allocated for the actual system needs, and by a large margin at that. Highly preferable to the inverse, I suppose. (I have little say in how much I get to work with for a specific project, but I get to use it pretty freely as deemed fit for that project).

Walked fucked around with this message at 15:54 on Mar 16, 2013

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Walked posted:

Thanks; I'll give that a look too.

As per your last tidbit of commentary: it's a bit pile a / pile b. The actual size of the deployment is relatively small compared to most of the virtualized environments I've come across. Three VMs per environment, with three environments (test, training, production) plus a COOP site with relatively lax requirements on data replication (both volume and latency) that, thus far has been handled quite well via Hyper-V 2012 replicas. I've found at every stage here that budget has been over-allocated for the actual system needs, and by a large margin at that. Highly preferable to the inverse, I suppose. (I have little say in how much I get to work with for a specific project, but I get to use it pretty freely as deemed fit for that project).
:psyduck:

The actual budget for this project should probably be a third of what was allocated, tops, assuming the requisite datacenter infrastructure (networking, etc.) is already in play.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Yeah, we're finally moving forward with our new environment that's going to be hosting around 50 VMs, and we're only in a bit over $100k.

Syano
Jul 13, 2005

Misogynist posted:

:psyduck:

The actual budget for this project should probably be a third of what was allocated, tops, assuming the requisite datacenter infrastructure (networking, etc.) is already in play.

Even if the requisite infrastructure wasn't in play, I bet you could still do it for a third of that budget. Basic SMB virtualization is cheap as balls now. A Dell MD3200i fully populated with 1TB disks, a pair of Cisco 2960s, and 3 hosts, along with licensing, should cost about 50 grand, give or take, these days. And it should be powerful enough to run any number of servers that an SMB should need. Even if your environment pushes some badass IOPS or has some sort of massive memory requirements, you should be able to put together a crème de la crème environment for just around 6 figures.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Syano posted:

Even if the requisite infrastructure wasn't in play, I bet you could still do it for a third of that budget. Basic SMB virtualization is cheap as balls now. A Dell MD3200i fully populated with 1TB disks, a pair of Cisco 2960s, and 3 hosts, along with licensing, should cost about 50 grand, give or take, these days. And it should be powerful enough to run any number of servers that an SMB should need. Even if your environment pushes some badass IOPS or has some sort of massive memory requirements, you should be able to put together a crème de la crème environment for just around 6 figures.

For hardware, sure, but you forgot to include the $200k that they'll be paying a consultant to put it in.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

madsushi posted:

For hardware, sure, but you forgot to include the $200k that they'll be paying a consultant to put it in.

drat I gotta get in this business.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Walked posted:

Right now, this is a pretty vague question, but: What SAN solutions [ideally under 50k] have you had good experiences with in Hyper-V environments? Just trying to see what has worked for others, and worked well.
I just wanted to put in my $0.02. Our virtualization environment consists of:

6x High end single proc Cisco UCS rack mount servers each with 128GB of RAM: $60k
6x Procs of enterprise plus VMware licensing (not sure on exact price, I would guess around $20k including vcenter)
6x procs of Server '08 datacenter: $20k
2x Nexus 5k switches (layer 2 only): $35k
1x NetApp HA pair: $200k

I put this information here so you can see that in our case, the storage is what we spent the most on, not the least. Nearly every performance issue we have had in our environment has been traced back to the storage in one way or another. It is never the network, and it is definitely not ever the servers. This is probably a consequence of the great visibility we have into CPU, RAM, and network utilization, and the OK at best visibility we have into our storage, but either way, storage is the most important thing in any new deployment, in my opinion.

evil_bunnY
Apr 2, 2003

adorai posted:

6x High end single proc Cisco UCS rack mount servers each with 128GB of RAM: $60k
Hold me or I may die laughing.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

evil_bunnY posted:

Hold me or I may die laughing.
I went back and looked, and they were only actually $52k (total). Either way, not sure why you are laughing. Even using Newegg prices for memory, 10GbE NICs, and E5-2690 procs, I don't think we could have done much better.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evil_bunnY posted:

Hold me or I may die laughing.

Depends which model he got. The M3s are actually decent, unlike the M1s, which were poo poo; the M2s were slightly better after you updated the BIOSes, but the remote management was still flaky. The M3s are the first I haven't actually had a problem with, other than the bullshit about how you are allowed to configure them.

Not to mention Cisco is being fairly competitive now with pricing, once they realized people wouldn't flock to Cisco servers just because they had the word Cisco on them.

evil_bunnY
Apr 2, 2003

adorai posted:

I went back and looked, and they were only actually $52k (total). Either way, not sure why you are laughing. Even using Newegg prices for memory, 10GbE NICs, and E5-2690 procs, I don't think we could have done much better.
Iunno, we paid quite a bit less than that for dual-proc machines with the same stats (2x dual-port 10GbE too).
I think that while your price wasn't actually so bad, what made me laugh was remembering what we got quoted for UCS when I was last buying machines. They were 3 times the price of Dell/Fujitsu/IBM, and 2 different resellers wouldn't budge much at all.

Corvettefisher posted:

Not to mention Cisco is being fairly competitive now with pricing, once they realized people wouldn't flock to Cisco servers just because they had the word Cisco on them.
This may be it.

Rhymenoserous
May 23, 2008
Edit: Replied to ancient post without realizing it. Hi thread.

Hadlock
Nov 9, 2004

I am having trouble getting network connections to work reliably in VirtualBox using CentOS 6.4 VMs. The host machine is Windows 8 Pro with an Intel motherboard and Intel gig-e adapter.

I can install CentOS 6.4 on a Red Hat-type VM; I configure the ethernet adapter during setup, and I can ping google.com from the command line.

If I clone this VM in the VirtualBox GUI and check the "reconfigure MAC address" box and then boot the machine, the cloned machine is not able to ping google.com. Both machines are using NAT with unique MAC addresses.

Is this a Virtualbox issue or is my DHCP server wigging out? I have Ubuntu 12.04 server and 12.04 desktop VMs running side by side with no network issues on the same machine.

Thanks

evol262
Nov 30, 2010
#!/usr/bin/perl

Hadlock posted:

I am having trouble getting network connections to work reliably in VirtualBox using CentOS 6.4 VMs. The host machine is Windows 8 Pro with an Intel motherboard and Intel gig-e adapter.

I can install CentOS 6.4 on a Red Hat-type VM; I configure the ethernet adapter during setup, and I can ping google.com from the command line.

If I clone this VM in the VirtualBox GUI and check the "reconfigure MAC address" box and then boot the machine, the cloned machine is not able to ping google.com. Both machines are using NAT with unique MAC addresses.

Is this a Virtualbox issue or is my DHCP server wigging out? I have Ubuntu 12.04 server and 12.04 desktop VMs running side by side with no network issues on the same machine.

Thanks

99% odds that it's coming up as a different interface. Red Hat(-alikes) put MAC info in /etc/sysconfig/network-scripts/ifcfg-${interface}, as well as /etc/udev/rules.d/70-persistent-net.rules.

When it reboots, run "ifconfig -a" and check the MACs on your (probably multiple) interfaces, then compare that against /etc/sysconfig/network-scripts/ifcfg-${interface}. If no matching file exists, it won't do anything. If the MAC is wrong, it won't do anything. Etc.

If that's what's happening, just delete everything from /etc/udev/rules.d/70-persistent-net.rules, remove the old (now nonexistent) MAC from /etc/sysconfig/network-scripts/ifcfg-${interface}, and reboot.
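
In practice, something like this from the VM's console (eth0 below is just an example; use whatever interface name you actually have):

code:
# what MACs does the kernel actually see?
ifconfig -a | grep -i hwaddr

# what MAC does the config file expect?
grep -i hwaddr /etc/sysconfig/network-scripts/ifcfg-eth0

# if they don't match, clear the stale mappings and let them regenerate on reboot
rm -f /etc/udev/rules.d/70-persistent-net.rules
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
reboot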

MC Fruit Stripe
Nov 26, 2002

around and around we go
Hyper-V friends, lend me your ears.

Whenever I try to create a fixed disk on a Server 2012 box, I get the following:

quote:

The server encountered an error trying to create the virtual hard disk.

Failed to create the virtual hard disk.

The system failed to create 'E:\Virtual Machines\New Virtual Hard Disk.vhdx': The request could not be performed because of an I/O device error. (0x8007045D).

Preliminary googling says this is a result of sector sizing problems and Hyper-V's lack of support for 4K sectors. However, everything I find basically says "this is a problem," not "there is a fix for this" - which, obviously, feels incorrect. What should I be looking into?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Since Hyper-V has to go through the host's NTFS driver in the case of a VHDX, why would the underlying hardware sector size pose any issues? Especially since NTFS defaults to a 4KB cluster size, too?

MC Fruit Stripe
Nov 26, 2002

around and around we go
Yeah it makes fuckall sense to me since my other Hyper-V nodes are using 4k sectors - the only difference is they're 2008 R2 and this is the first time I've rolled out on 2012. However, I've replaced all the relevant hardware, so at this point I'm just clutching at anything I can.

MC Fruit Stripe fucked around with this message at 23:21 on Mar 20, 2013

Kachunkachunk
Jun 6, 2011
Weird, how big's the disk you're trying to create?

MC Fruit Stripe
Nov 26, 2002

around and around we go
50GB - it works fine when I create a dynamic disk; I can build a VM and run it, no issues at all. When I try to create a fixed disk, it'll get to, say, 20% of 50GB (that's 10GB!), start slowing, and by 30% it'll fail with the above error.

I've changed every moving part except for the OS, so I just put 2008 R2 on that box and am gonna see if it behaves the same way.

Muslim Wookie
Jul 6, 2005
Is there any way to mount an FC presented LUN to a Server 2008R2 Hyper-V *Guest*?

The key issue here is I need a LUN presented and recognised as a NetApp LUN in the guest OS, as opposed to a generic IDE or SCSI driver. I understand a virtual FC HBA is a feature of 2012 Hyper-V but assuming upgrading the hosts is not an option, do I have anything else I can try?

Erwin
Feb 17, 2006

Was having a problem with the snapshot size alarm triggering falsely in vSphere 5.1 (it would trigger for VMs that didn't have snapshots). I submitted a case, then, just for fun, deleted the alarm and recreated it, and it triggered for every single VM (and sent a shitload of emails). The tech and I had a bit of a laugh when we did a WebEx, and then he told me that that alarm doesn't work in 5.1 and there's no workaround. :ughh:

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
That's like the VDR bug where it will send you emails every 10 seconds until you hard reboot it

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013533

Best bug ever to have in the morning....

Exclusive
Jan 1, 2008

Muslim Wookie posted:

Is there any way to mount an FC presented LUN to a Server 2008R2 Hyper-V *Guest*?

The key issue here is I need a LUN presented and recognised as a NetApp LUN in the guest OS, as opposed to a generic IDE or SCSI driver. I understand a virtual FC HBA is a feature of 2012 Hyper-V but assuming upgrading the hosts is not an option, do I have anything else I can try?

You can definitely do physical disk and LUN passthrough on 2008R2 Hyper-V (not supported with failover clustering). You don't even need a volume on it. But recognizing it as a LUN on the guest is a tough requirement -- I can't see how it's possible without an HBA.

Muslim Wookie
Jul 6, 2005

Exclusive posted:

You can definitely do physical disk and LUN passthrough on 2008R2 Hyper-V (not supported with failover clustering). You don't even need a volume on it. But recognizing it as a LUN on the guest is a tough requirement -- I can't see how it's possible without an HBA.

Ya, definitely knew that you can pass through LUNs, but having it recognised as such in the VM is another story :(

Kachunkachunk
Jun 6, 2011
For anyone with an interest and a supported GPU, nVidia VIBs are available: http://www.nvidia.com/object/vmware-vsphere-esxi-5.1-304.76-driver.html.
I have doubts it'll work with other GPUs without modification/hackery, say for whitebox users like myself, but I do hope to have some time to try with a Fermi desktop card some time next week.
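
For reference, installing it should be the usual VIB dance over SSH with the host in maintenance mode first (the path and file name below are placeholders; use wherever you actually upload whatever the download contains):

code:
esxcli system maintenanceMode set --enable true
# use -d instead of -v if the download is an offline-bundle .zip rather than a bare .vib
esxcli software vib install -v /vmfs/volumes/datastore1/nvidia-gpu-driver.vib
esxcli software vib list | grep -i nvidia
reboot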

Edit: I had it working before with an older driver that wasn't locked down. I could just rely on it, but I noticed odd power consumption patterns with the card. Could be a server-side issue, though.
I would basically come back in a few days to find the room hot from the GPU, which had been running full-tilt for days. Without any VMs running off of it. :psyduck:

Edit: Fixed link!

Kachunkachunk fucked around with this message at 19:36 on Mar 23, 2013
