Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Pantology posted:

After installing the Hyper-V role, that instance of Windows is somewhat comparable to the ESX Service Console.
Or the Xen dom0.


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

mpeg4v3 posted:

For some reason I thought GPU passthrough worked well now, but I have no idea where I got that impression from. I guess that means I'll have to do all decoding in software, which is fine- but I'm really not sure if a single 4-core VM would be able to decode and reencode to 1080p with two or three movies at once. If I went with the 6-core, I was actually planning to assign the VM all six cores, and assign one to each of the rest of the VMs, with my reasoning being that the other VMs were not really going to be that CPU-intensive.
If you're not going to be running all the encoder threads near 100% of the time, the extra cores may actually hamstring your VM performance because of the way that SMP co-scheduling works.
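
If you want to see whether co-scheduling is already biting you, CPU ready time is the counter to watch. A rough PowerCLI sketch (untested, VM name made up; cpu.ready.summation is reported in milliseconds per 20-second realtime sample, so ready% is roughly value/20000*100):

code:
  Connect-VIServer vcenter.example.com
  Get-Stat -Entity (Get-VM "encoder-vm") -Stat "cpu.ready.summation" -Realtime -MaxSamples 30 |
      Where-Object { $_.Instance -eq "" } |   # aggregate across all vCPUs
      Select-Object Timestamp, @{N="ReadyPct"; E={ [math]::Round($_.Value / 20000 * 100, 1) }}

Anything consistently in the double digits means the scheduler is struggling to find enough free pCPUs to run all your vCPUs at once.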

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bitch Stewie posted:

What's the recommended option for VMware without having to stand up a dedicated monitoring server with the HP software and the Dell software, etc.?
Dedicated monitoring server with IPMI software.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
And don't forget to set the restart priorities sanely for all your VMs if that wasn't a concern before.
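
If you'd rather script it than click through every VM, something like this works in PowerCLI (a sketch only; the VM names are obviously placeholders):

code:
  # Critical infrastructure comes back first, throwaway stuff last
  Get-VM "vcenter", "dc01", "sql01" | Set-VM -HARestartPriority High -Confirm:$false
  Get-VM "test-*" | Set-VM -HARestartPriority Low -Confirm:$false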

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Corvettefisher posted:

Wait they removed VMware teams in 8?

The gently caress?
Teams are now folders.

http://blogs.vmware.com/workstation/2011/10/what-happened-to-teams.html

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

You're really asking for it if you have 70 hosts and don't pin vCenter.
vCenter will only live on the cluster it lives on (hat tip, Yogi Berra), so you're really at a max of 32 unless you're completely clueless and have no idea which host belongs to which cluster.

Not that this is a significantly better situation, of course.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

cheese-cube posted:

Looking at some recent quotes that have come across my desk, a 2.5" form-factor, 8Gb FC control enclosure with 24 x 600GB 10k 6Gb SAS HDDs will set you back around $120k ($60k for the enclosure and around $2.6k per HDD). Of course that is retail pricing; IBM Special Bid pricing can be much better. Not to sound like an IBM shill or anything, but I can say the V7000 scales up ridiculously and no matter how much I/O I throw at it the thing just keeps on chugging.
As someone who's considering dropping a quarter mil on a V7000/SONAS configuration, this is reassuring.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
We're having no end to our support issues with Veeam, and we're considering moving to a competitor's product. I'd like to solicit this thread's opinions while we do our bake-off: Veeam, PHD Virtual or vRanger? Why?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mierdaan posted:

I've always meant to ask if I just had a bad experience with Veeam or what. We were using it to back up a 1TB Server 2008 R2 file server a few years back, and every single night the backup would take {$hugeInt} of hours. The amount of data written to the Veeam server's drive reflected the actual deltas, but it took a full backup's worth of time every single night. Veeam wasn't ever able to explain it, since according to their sales guy and their tech support the synthetic incrementals should've been fast.
This is one of our issues that we do not have with our eval of PHD Virtual.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

Veeam's first level support sucks. You really need to engage your rep to get escalated.
Their level 2 support is also bad enough that after speaking with the support manager, the tech we had doesn't work there anymore.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mierdaan posted:

I'm also not sure if I need to modify the vmkernel interface directly too or just esxcfg-vswitch -m 9000 the vswitch.
You definitely need to modify the vmknic as well; its MTU defaults to 1500.
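
For reference, here's how I'd do both ends from PowerCLI (a sketch; host, switch, and vmknic names are examples, and on older ESX builds the vmknic sometimes has to be deleted and re-created rather than edited in place):

code:
  $vmhost = Get-VMHost "esx01.example.com"
  Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
  Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk1" |
      Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false

And make sure every physical switch in the path is set for jumbo frames too, or you'll be chasing ghosts for a week.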

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Edit: dumb

Vulture Culture fucked around with this message at 22:40 on Mar 6, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

fatjoint posted:

I've only been in one scenario where automation needed to be disabled, and that was in a stretched cluster - i.e. a cluster whose hosts are separated by miles from each other.
Even then, since 4.1 you're probably better off just setting affinity rules. They were really designed for metro clustering configurations.
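
Plain VM-VM rules are a one-liner in PowerCLI if you want them (sketch only, cluster and VM names invented; the host-group style rules you'd use for a real metro cluster still need the DRS groups UI or the SDK, as far as I know):

code:
  New-DrsRule -Cluster (Get-Cluster "StretchedCluster") -Name "keep-app-with-db" `
      -KeepTogether $true -VM (Get-VM "app01", "db01")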

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Is this... normal?

[CPU usage graph omitted]

This is a machine with 12 physical cores (two hexacore Intels) running ESX 4.1. The dark red line is CPU usage, and you can see the average is only 18%, but some of the cores have these 100% spikes, even though the averages on all the cores are pretty low. Is this something I should worry about? Is there a way to see which VMs are causing the spike?
Assuming you're not seeing application hiccups due to high ready times, having your CPU spike to 100% means that your processing isn't being bound by memory or I/O wait times. It's only a good thing.
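
To answer the "which VMs" part: pull realtime CPU for everything on the host and sort by peak. A quick PowerCLI sketch (the host name is a placeholder):

code:
  $vms = Get-VMHost "esx01.example.com" | Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
  Get-Stat -Entity $vms -Stat "cpu.usage.average" -Realtime |
      Group-Object { $_.Entity.Name } |
      Select-Object Name, @{N="PeakCpuPct"; E={ ($_.Group | Measure-Object Value -Maximum).Maximum }} |
      Sort-Object PeakCpuPct -Descending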

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Wonder_Bread posted:

Apparently my issues with the PHDvirtual software are because I am using CIFS shares, 'cause my Drobo doesn't support multi-target iSCSI or NFS.

Ugh... not really sure where to go from here. Don't have any money for storage, already pushing it with trying to get the five grand to get the backup software upgraded.
Hm? We're currently backing up to CIFS on a Sun x4540 and we're not having any issues with our eval at all.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

luminalflux posted:

Whoever came up with the VMWare SDK needs to seriously reconsider their life choices. Such a huge pile of enterprise garbage. I'm really glad that someone at VMWare came to their senses and wrote a ruby wrapper around it (rbvmomi) to reduce most of the suck that is trying to automate deploying VMs.
Wait, people still use things that aren't PowerCLI for automating vSphere?
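
For the curious, deploying from a template is about three lines of PowerCLI (all names here are made up, and you'd normally tack on an OS customization spec):

code:
  Connect-VIServer vcenter.example.com
  New-VM -Name "web42" -Template (Get-Template "rhel6-base") `
      -VMHost (Get-VMHost "esx01.example.com") -Datastore (Get-Datastore "prod-ds01") |
      Start-VM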

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Martytoof posted:

Ugh, my 60 day vSphere trial ended and I barely made it through a few chapters of Mastering vSphere 5 thanks to some contract work I had to start.

Is this a thing where I email VMware and say "I'm trying to get vSphere experience so pretty please with a cherry on top can I have another 60 days" and they'll be agreeable, or do I basically have to game the system and create a second VMware account, redo my lab with the new keys?

I can't even download things like the vCSA that I was hoping to play with :|

How the hell do people manage setting up home ESXi labs with this limitation? :(
Reinstall with the same key, unless things have changed with V5.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Alctel posted:

Ok, so backing up a vSphere 5.0 environment, 2 ESXi servers, ~20 machines.

Having to choose between Veeam, Commvault, vRanger (and Symantec ahahaaha)

I am leaning heavily towards Veeam but my manager has heard great things about Commvault and wants to go with them

halp
Is there any reason you're not considering PHD Virtual? We're most likely switching over to them from Veeam in the next couple of weeks. (The stories I could tell you about Veeam.)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

I meant to ask you: what happened that made you go from "rargh IBM die in the fires of hell" one year ago to considering buying their stuff again?
The IBM stuff that we were having problems with was the Midrange Storage series (DS4800, DS5100) which is rebranded LSI with some IBM tweaks and parts fulfillment. The new stuff is IBM engineering from the ground up (DS8000 and GPFS), which we've had much better experiences with. I'm really looking forward to our V7000 stuff coming in.

Mierdaan posted:

Definitely take Veeam for a drive before purchasing it.
And log a support ticket on some trivial issue if you want to see the real Veeam.

My big beef with Veeam is that they got too big too fast, and their entire company largely seems to be a mismanaged, organically-grown, sprawling wreck where nobody can coordinate to get anything done. When you call and hit the button to speak with an operator, it directs you to HR for some reason. At least the woman managing support is really nice and gets things done, once you figure out how to escalate to her. (Nobody in support will actually escalate you when you ask them to, and you generally need to move through your sales rep to get any wheels turning.)

For what it's worth, our backups finish in 20% of the time with PHD Virtual versus Veeam, and PHD Virtual's architecture is way easier to manage.

Vulture Culture fucked around with this message at 21:25 on Mar 12, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Fancy_Lad posted:

And here our ESX guys are wanting to drop PHD because their support is apparently horrible as well. Fun!
Did any techs get fired over repeated passive-aggressive comments made in your last support case?

I'll put up with it if the product works. My experience with Veeam has been 2 months of fighting with this product just to get it to back up all our VMs halfway reliably. Now that it does, it takes 26:30:00 to back up 200 GB of nightly changes, which is precisely six times as long as PHD Virtual takes.

Vulture Culture fucked around with this message at 02:33 on Mar 13, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Rhymenoserous posted:

It's not P2V'd, started fresh.
Same advice applies. Start medium-low and scale up/down as necessary -- Windows Server 2003+, in Enterprise Edition or higher, can hot-add CPU and memory as long as you have the option checked off in the VM options. Make sure to actually watch what SQL is doing with the memory, because it will take up most of the available RAM on the host for use as cache whether you need it or not. Only the SQL Server-level statistics will give you the memory usage information that you need to know.
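
If you want a quick read on that from inside the guest, the SQL-side perf counters are the place to look (a hedged sketch; the counter paths assume a default instance):

code:
  # Run inside the guest OS; shows what SQL actually holds vs. how healthy the buffer pool is
  Get-Counter '\SQLServer:Memory Manager\Total Server Memory (KB)',
              '\SQLServer:Buffer Manager\Page life expectancy'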

Keep in mind what co-scheduling does to latency of underutilized SMP systems. Start with no more than two vCPUs unless you're positive you'll be consistently using four cores, or it will actually hurt your application response times.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I haven't seen anything on Sandy Bridge downclocking memory when additional banks are populated like on Nehalem and Westmere, but do be aware that supported DDR3 memory speed ranges from 1066 MHz up to 1600 MHz depending on your CPU model. DDR3-1600 is only available on the 2637, 2643, 2667, and all of the eight-core models.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Kachunkachunk posted:

My guess is that you would want to see what the chipset's specifications say about this stuff.
Sandy Bridge moves the northbridge on-die, FYI.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Alctel posted:

Never even heard of PHD
PHD Virtual is one of the biggest players in this space, and the easiest to set up (the entire product is virtual appliance-based). You can have an eval up and running in less than 10 minutes (download time not included). Management is a little wonky for really big environments, since the appliances don't share settings. However, unlike Veeam, it actually multithreads correctly, so you don't need to break your backups into 10 mini backup jobs in order to get concurrency.

Alctel posted:

what problems did you have with Veeam?
First Veeam tech being evasive about needing to troubleshoot issues with vCenter when all of our backups were timing out on vCenter SOAP calls:

Veeam Support posted:

That's great that the job is successful when backing up directly through the host. Let me know if you have any other questions regarding this issue or if it's ok to close the ticket.

When Veeam actually does, one in a hundred times, back up our 110 VM job correctly:

Veeam posted:

Elapsed Time: 26:34:26

Level 2 Veeam tech on the same set of backup issues (backup failures due to vCenter timeouts). When I was asked for the third time to restart management agents and re-run a 26-hour backup job by a tech who clearly didn't read my case notes and was stalling me until the following Monday, I replied that this was done. I got this reply:

Veeam Support posted:

Wow!
I've never seen someone be able to disable firewalls, check services, then restart a Veeam job (and have it complete) that quickly before!

Most of the support side of the company, from what I've seen of them, is staffed by passive-aggressive work-averse little shits, which I would be more than willing to tolerate if the product actually worked. I've never done business with such an unprofessional vendor before.

Vulture Culture fucked around with this message at 14:28 on Mar 14, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Kachunkachunk posted:

Holy cow! I had no idea that's what people go through when working with Veeam.
Everyone's had their bad support experiences, and I don't want to color the company unfairly with my impressions -- lots of other people have had fairly decent experiences with Veeam, and I don't want to sway anybody's opinions away from what might be a good solution for them. But the amount of time I'm likely to invest in making it work correctly in our environment far outweighs the cost of switching to another vendor right now.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Corvettefisher posted:

It's web-based, and sadly a lot of the advice is incorrect or probably not good advice for anyone experienced in VMware. In one video it says cloud is great because "you don't have to have long meetings with the IT to spec out servers you just give exactly what you want!" That was almost word for word.
VMware wants you to sell things. In this sense, you really don't need to spec out much, just scale up as needed and charge on capacity actually used.

As for whether people actually do this in practice, that's a whole other story. Most people prefer predictable cost models within an organization.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Corvettefisher posted:

WHAT DID I DO!?!?
add a horizontal scrollbar to my browser window

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

madsushi posted:

Trying to flush out all of the posters with small monitors.
24" monitors in portrait orientation are for poors :(

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Is there some way to dd an RDM into a VMDK? I've got two SCCM servers with RDM, and at some point it sounds like I need to move these to VMDK.
You can Storage vMotion them to any datastore if you're using Virtual mode RDMs, which is probably a little bit simpler than Converter.
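
The PowerCLI version, if you want to script it (sketch only, names invented): a plain Move-VM to another datastore kicks off the Storage vMotion, and to force the virtual-mode RDM into a flat VMDK you generally also pick a new disk format during the move (newer PowerCLI builds expose this as -DiskStorageFormat).

code:
  # Relocate the VM's storage; choose a new disk format during the migration
  # to convert the vRDM pointer into a regular VMDK
  Get-VM "sccm01" | Move-VM -Datastore (Get-Datastore "prod-ds02")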

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Moey posted:

Have you done this before, or an offline migration? Is the time about the same as just a normal Storage vMotion or offline migration?
I've done it inadvertently (:shobon:) and I can confirm for you that it works the same as any other Storage vMotion.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

On previous generations of Intel processors (not sure about sandy bridge or whatever is newest) exceeding 48 or 64GB per populated socket caused a memory slowdown. You could mix DIMM sizes but there are guidelines to follow when doing so in regard to ranks, not sure of the specifics.
IBM finally released their technical documentation on the x3550 M4, and this seems to be generally applicable to the Sandy Bridge platform:

[memory speed vs. DIMM population chart omitted]

Unfortunately, the LRDIMMs that actually exist on the market seem to overwhelmingly be 32 GB modules.

Vulture Culture fucked around with this message at 16:42 on Mar 22, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

We just talked to a Dell sales guy yesterday, who admitted he wasn't 100% versed on the technical stuff because he's been selling ops stuff lately, but he said that if you filled 2 of 3 slots in each bank, your RAM would operate at its native speed, but if you filled all 3, it would go down to 800MHz. According to that chart it drops down to 1066MHz.

Though from my naive view it doesn't matter much, since VMWare can't use all that memory license-wise anyway.
Enterprise Plus has a 96 GB per socket entitlement vs. 64 GB for Enterprise. It may provide you a cost incentive to upgrade.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

So either I'm blind or IBM doesn't sell these with 16GB 1.6GHz modules?
If you read between the lines, there are no 16GB dual-rank 1600 MHz parts on the market today that are rated to run at 1.35V.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
If your VMs start swapping at the hypervisor level, you have much bigger fish to fry.
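
A quick way to check whether that's already happening (PowerCLI sketch; mem.swapped.average is the standard per-VM counter, reported in KB):

code:
  Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } | ForEach-Object {
      $swapKB = (Get-Stat -Entity $_ -Stat "mem.swapped.average" -Realtime -MaxSamples 1).Value
      if ($swapKB -gt 0) { "{0}: {1} KB swapped at the hypervisor" -f $_.Name, $swapKB }
  }

Anything nonzero means the host already blew through page sharing, ballooning, and compression to get there.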

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bull3964 posted:

Yeah, that's what I meant, I just worded it a bit weird. The additional cost of VMWare on top of the datacenter license effectively doubles the cost of licensing the host.
Until you take into account how much better VMware is at overcommit and resource management and what that buys you in terms of how many hosts you actually need to pay for.

Hyper-V is getting there faster than I expected, though.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Aniki posted:

Some of VMware's licensing packages make the cost more palatable for small to medium sized businesses, but the cost of licensing does nullify a lot of the benefits in certain situations.
What I've used of Hyper-V in Server 2008 R2 hasn't wowed me, but Windows 8 might make me change my mind. It's been my opinion for quite some time that OS-independent virtualization like VMware probably wouldn't stick around for long, and that it's only a matter of time before the Windows guys and Linux guys each run their own native virtualization stacks in most environments. I'm curious what the virtualization landscape will look like in a few years, but right now I still wouldn't touch Hyper-V with a 2km single-mode pole. The weird little gotchas like lack of promiscuous mode support for a VM and lack of support for link teaming (fixed in Server 8) are painful to most organizations doing serious IT.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

Bulldozer single thread performance is horrible, but I wasn't expecting it to be this bad.
You'd figure they would have learned something from Sun's experiences with the T2/T3 chips.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

So, any insight on what "not supported" means? Does that mean VMware tech support won't support the configuration, even though it works perfectly fine?
Hypothetically, you can get it working "perfectly fine" using VMDKs on a setup with physical bus sharing involved, too. In practice, it dramatically increases the probability that you will experience cluster disk timeouts and failover events under load.

The main concern when using iSCSI is that most people opt to use software HBAs. Even with TOE and other hardware acceleration turned on, the CPU usage of iSCSI is significantly higher than using Fibre Channel. In 99% of production use cases, you'll never notice this CPU usage. However, one impact is that if you are pegging the CPU on your system at 100% utilization, or otherwise jamming up the scheduling queue (i.e. by scheduling a VM that uses every core on the box), you run a much higher risk of introducing random I/O timeouts and errors into your stack than if you used Fibre Channel. If these stack up, and you cause a number of cluster failovers in quick succession, you risk major data corruption on your cluster disks.

Should it be supported in the year 2012 when most new production boxes are shipping with 16-20 cores? Probably. But that's the original rationale.

Do note that Microsoft and VMware do support iSCSI when it's used from the guest OS's initiator. You can't possibly introduce an I/O timeout while the guest OS is descheduled.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

If you are running your host CPU that high you have other things to worry about. The outrageous levels of CPU power you find today over, say, 2008 levels, makes iSCSI CPU overhead a complete non issue in well over 99% of real world production installs. The CPU requirements of iSCSI shouldn't even enter into the mind of someone doing a deployment these days. When a high end VMware server was 2x 2 core Xeons with 8GB of RAM, yes, but no longer.
"Other things to worry about" typically aren't as severe or permanent as catastrophic data corruption, but your point is well taken otherwise. It's a corner case to be sure.


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Pantology posted:

All storage vendors and VMware have always said "It depends," and when pressed for a number have said something low and safe. With VAAI and some help from array-side caching technologies, you can get absurdly high densities under the right conditions. EMC has claimed up to 512 VMs per LUN, NetApp was claiming up to 128 on vSphere 4.1--I'm sure that's higher by now.
Keep in mind that this depends a lot on how you use your storage, as well -- VMs per LUN is tied far more to VMFS metadata operations (which are locking) than VM disk I/O (which does not create any locks). If you're not a big user of thin provisioning, for example, you can get away with way higher densities as well because your only metadata operations occur during VM create/delete/poweron/poweroff and when you swap at the hypervisor level.

Edit: Snapshots, too, especially if you have a tendency to keep them around for awhile, because the delta files grow dynamically.
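
If you want to keep an eye on your own densities, a one-liner like this (PowerCLI sketch) gives you VM count per datastore to compare against whatever your array and VAAI situation will tolerate:

code:
  Get-Datastore | Select-Object Name, @{N="VMCount"; E={ ($_ | Get-VM).Count }} |
      Sort-Object VMCount -Descending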

Vulture Culture fucked around with this message at 18:02 on Apr 9, 2012
