|
Pantology posted:After installing the Hyper-V role, that instance of Windows is somewhat comparable to the ESX Service Console.
|
# ¿ Feb 21, 2012 05:35 |
|
mpeg4v3 posted:For some reason I thought GPU passthrough worked well now, but I have no idea where I got that impression from. I guess that means I'll have to do all decoding in software, which is fine, but I'm really not sure if a single 4-core VM would be able to decode and reencode to 1080p with two or three movies at once. If I went with the 6-core, I was actually planning to assign the VM all six cores, and assign one to each of the rest of the VMs, my reasoning being that the other VMs were not really going to be that CPU-intensive.
|
# ¿ Feb 25, 2012 02:25 |
|
Bitch Stewie posted:What's the recommended option for VMware without having to stand up a dedicated monitoring server with the HP software and the Dell software, etc.?
|
# ¿ Feb 25, 2012 16:20 |
|
And don't forget to set the restart priorities sanely for all your VMs if that wasn't a concern before.
|
# ¿ Feb 26, 2012 20:26 |
|
Corvettefisher posted:Wait, they removed VMware teams in 8?

http://blogs.vmware.com/workstation/2011/10/what-happened-to-teams.html
|
# ¿ Feb 27, 2012 04:30 |
|
evil_bunnY posted:You're really asking for it if you have 70 hosts and don't pin vCenter.

Not that this is a significantly better situation, of course.
|
# ¿ Feb 29, 2012 15:12 |
|
cheese-cube posted:Looking at some recent quotes that have come across my desk a 2.5" form-factor, 8Gb FC control enclosure with 24 x 600GB 10k 6Gb SAS HDDs will set you back around $120k ($60k for the enclosure and around $2.6k per HDD). Of course that is retail pricing, IBM Special Bid pricing can be much better. Not to sound like an IBM shill or anything but I can say the V7000 scales up ridiculously and no matter how much I/O I throw at it the thing just keeps on chugging.
|
# ¿ Feb 29, 2012 19:26 |
|
We're having no end of support issues with Veeam, and we're considering moving to a competitor's product. I'd like to solicit this thread's opinions while we do our bake-off: Veeam, PHD Virtual, or vRanger? Why?
|
# ¿ Mar 4, 2012 00:25 |
|
Mierdaan posted:I've always meant to ask if I just had a bad experience with Veeam or what. We were using it to back up a 1TB Server 2008 R2 file server a few years back, and every single night the backup would take {$hugeInt} of hours. The amount of data written to the Veeam server's drive reflected the actual deltas, but it took a full backup's worth of time every single night. Veeam wasn't ever able to explain it, since according to their sales guy and their tech support the synthetic incrementals should've been fast.
|
# ¿ Mar 4, 2012 00:35 |
|
three posted:Veeam's first level support sucks. You really need to engage your rep to get escalated.
|
# ¿ Mar 4, 2012 02:26 |
|
Mierdaan posted:I'm also not sure if I need to modify the vmkernel interface directly too or just esxcfg-vswitch -m 9000 the vswitch.
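Both need the bump: the vSwitch and the vmkernel interface riding on it. Something like this from the ESXi shell, as a sketch (the vSwitch name, port group name, and addressing here are hypothetical, and on 4.x the vmknic can't be modified in place, so you delete and recreate it):

code:
# Jumbo frames on the vSwitch itself
esxcfg-vswitch -m 9000 vSwitch1
# The existing vmknic keeps its old MTU, so recreate it with -m 9000
esxcfg-vmknic -d "iSCSI-VMkernel"
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 "iSCSI-VMkernel"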
|
# ¿ Mar 6, 2012 21:20 |
|
Edit: dumb
Vulture Culture fucked around with this message at 22:40 on Mar 6, 2012 |
# ¿ Mar 6, 2012 22:36 |
|
fatjoint posted:I've only been in one scenario where automation needed to be disabled, and that was in a stretched cluster, i.e. a cluster whose hosts are separated by miles.
|
# ¿ Mar 7, 2012 19:13 |
|
FISHMANPET posted:Is this... normal?
|
# ¿ Mar 8, 2012 05:20 |
|
Wonder_Bread posted:Apparently my issues with the PHDvirtual software are because I am using CIFS shares, 'cause my Drobo doesn't support multi-target iSCSI or NFS.
|
# ¿ Mar 9, 2012 19:43 |
|
luminalflux posted:Whoever came up with the VMWare SDK needs to seriously reconsider their life choices. Such a huge pile of enterprise garbage. I'm really glad that someone at VMWare came to their senses and wrote a ruby wrapper around it (rbvmomi) to reduce most of the suck that is trying to automate deploying VMs.
|
# ¿ Mar 12, 2012 15:55 |
|
Martytoof posted:Ugh, my 60 day vSphere trial ended and I barely made it through a few chapters of Mastering vSphere 5 thanks to some contract work I had to start.
|
# ¿ Mar 12, 2012 19:00 |
|
Alctel posted:Ok, so backing up a vSphere 5.0 environment: 2 ESXi servers, ~20 machines.
|
# ¿ Mar 12, 2012 20:44 |
|
evil_bunnY posted:I meant to ask you: what happened that made you go from "rargh IBM die in the fires of hell" one year ago to considering buying their stuff again?

Mierdaan posted:Definitely take Veeam for a drive before purchasing it.

My big beef with Veeam is that they got too big too fast, and their entire company largely seems to be a mismanaged, organically-grown, sprawling wreck where nobody can coordinate to get anything done. When you call and hit the button to speak with an operator, it directs you to HR for some reason. At least the woman managing support is really nice and gets things done, once you figure out how to escalate to her. (Nobody in support will actually escalate you when you ask them to, and you generally need to move through your sales rep to get any wheels turning.)

For what it's worth, our backups finish in 20% of the time with PHD Virtual versus Veeam, and PHD Virtual's architecture is way easier to manage.

Vulture Culture fucked around with this message at 21:25 on Mar 12, 2012 |
# ¿ Mar 12, 2012 21:14 |
|
Fancy_Lad posted:And here our ESX guys are wanting to drop PHD because their support is apparently horrible as well. Fun!

I'll put up with it if the product works. My experience with Veeam has been 2 months of fighting with this product just to get it to back up all our VMs halfway reliably. Now that it does, it takes 26:30:00 to back up 200 GB of nightly changes, which is precisely six times as long as PHD Virtual takes.

Vulture Culture fucked around with this message at 02:33 on Mar 13, 2012 |
# ¿ Mar 13, 2012 02:30 |
|
Rhymenoserous posted:It's not P2V'd, started fresh.

Keep in mind what co-scheduling does to the latency of underutilized SMP systems. Start with no more than two vCPUs unless you're positive you'll be consistently using four cores, or it will actually hurt your application response times.
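If you want to see whether an oversized VM is actually paying that co-scheduling tax, esxtop shows it directly. A sketch (the ~3% threshold is a rule of thumb, not a hard limit):

code:
esxtop
# Press 'c' for the CPU screen, then 'e' to expand a VM's worlds.
# %CSTP is time a vCPU spent co-descheduled waiting on its sibling vCPUs;
# sustained values above ~3% suggest the VM has more vCPUs than it can use.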
|
# ¿ Mar 13, 2012 18:35 |
|
I haven't seen anything about Sandy Bridge downclocking memory when additional banks are populated, the way Nehalem and Westmere do, but do be aware that supported DDR3 memory speed ranges from 1066 MHz up to 1600 MHz depending on your CPU model. DDR3-1600 is only available on the 2637, 2643, 2667, and all of the eight-core models.
|
# ¿ Mar 14, 2012 02:40 |
|
Kachunkachunk posted:My guess is that you would want to see what the chipset's specifications say about this stuff.
|
# ¿ Mar 14, 2012 04:27 |
|
Alctel posted:Never even heard of PHD

Alctel posted:what problems did you have with Veeam?

Veeam Support posted:That's great that the job is successful when backing up directly through the host. Let me know if you have any other questions regarding this issue or if it's ok to close the ticket.

When Veeam actually does, one in a hundred times, back up our 110 VM job correctly:

Veeam posted:Elapsed Time: 26:34:26

Level 2 Veeam tech on the same set of backup issues (backup failures due to vCenter timeouts). When I was asked for the third time to restart management agents and re-run a 26-hour backup job by a tech who clearly didn't read my case notes and was stalling me until the following Monday, I replied that this was done. I got this reply:

Veeam Support posted:Wow!

Most of the support side of the company, from what I've seen of them, is staffed by passive-aggressive, work-averse little shits, which I would be more than willing to tolerate if the product actually worked. I've never done business with such an unprofessional vendor before.

Vulture Culture fucked around with this message at 14:28 on Mar 14, 2012 |
# ¿ Mar 14, 2012 14:16 |
|
Kachunkachunk posted:Holy cow! I had no idea that's what people go through when working with Veeam.
|
# ¿ Mar 14, 2012 15:02 |
|
Corvettefisher posted:It's web based, and sadly a lot of the advice is incorrect or probably not good advice for anyone experienced in VMware. In one video it says cloud is great because "you don't have to have long meetings with the IT to spec out servers, you just give exactly what you want!" That was almost word for word.

As for whether people actually do this in practice, that's a whole other story. Most people prefer predictable cost models within an organization.
|
# ¿ Mar 14, 2012 16:03 |
|
Corvettefisher posted:WHAT DID I DO!?!?
|
# ¿ Mar 15, 2012 05:22 |
|
madsushi posted:Trying to flush out all of the posters with small monitors.
|
# ¿ Mar 15, 2012 14:32 |
|
FISHMANPET posted:Is there some way to dd an RDM into a VMDK? I've got two SCCM servers with RDM, and at some point it sounds like I need to move these to VMDK.
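You shouldn't need dd. If the RDMs are in virtual compatibility mode, a Storage vMotion that changes the disk format will convert them to flat VMDKs; with the VM powered off, you can also clone one by hand with vmkfstools. A sketch (paths and filenames are hypothetical, and this assumes virtual-mode RDMs):

code:
# Clone the RDM's contents into a thin-provisioned VMDK, VM powered off
vmkfstools -i /vmfs/volumes/datastore1/sccm01/sccm01_rdm.vmdk \
    /vmfs/volumes/datastore1/sccm01/sccm01_flat.vmdk -d thin
# Then point the VM's disk at the new VMDK and drop the RDM mapping file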
|
# ¿ Mar 20, 2012 14:18 |
|
Moey posted:Have you done this before, or an offline migration? Is the time about the same as just a normal Storage vMotion or offline migration?
|
# ¿ Mar 20, 2012 15:44 |
|
adorai posted:On previous generations of Intel processors (not sure about sandy bridge or whatever is newest) exceeding 48 or 64GB per populated socket caused a memory slowdown. You could mix DIMM sizes but there are guidelines to follow when doing so in regard to ranks, not sure of the specifics.

Unfortunately, the LRDIMMs that actually exist on the market seem to overwhelmingly be 32 GB modules.

Vulture Culture fucked around with this message at 16:42 on Mar 22, 2012 |
# ¿ Mar 22, 2012 16:39 |
|
FISHMANPET posted:We just talked to a Dell sales guy yesterday, who admitted he wasn't 100% versed on the technical stuff because he's been selling ops stuff lately, but he said that if you filled 2 of 3 slots in each bank, your RAM would operate at native speed, but if you filled all 3, it would go down to 800MHz.

According to that chart it drops down to 1066MHz.
|
# ¿ Mar 22, 2012 17:06 |
|
evil_bunnY posted:So either I'm blind or IBM doesn't sell these with 16GB 1.6GHz modules?
|
# ¿ Mar 23, 2012 15:18 |
|
If your VMs start swapping at the hypervisor level, you have much bigger fish to fry.
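For what it's worth, esxtop makes this easy to check. A sketch (counter names as they appear on the 5.x memory screen):

code:
esxtop
# Press 'm' for the memory screen.
# SWCUR = current hypervisor-level swap per VM; should sit at or near 0.
# SWR/s and SWW/s = swap-in/swap-out rates; sustained nonzero values mean
# the host is actively swapping guest memory to disk.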
|
# ¿ Mar 27, 2012 22:35 |
|
bull3964 posted:Yeah, that's what I meant, I just worded it a bit weird. The additional cost of VMWare on top of the datacenter license effectively doubles the cost of licensing the host.

Hyper-V is getting there faster than I expected, though.
|
# ¿ Mar 28, 2012 22:15 |
|
Aniki posted:Some of VMware's licensing packages make the cost more palatable for small to medium sized businesses, but the cost of licensing does nullify a lot of the benefits in certain situations.
|
# ¿ Mar 29, 2012 02:02 |
|
evil_bunnY posted:Bulldozer single thread performance is horrible, but I wasn't expecting it to be this bad.
|
# ¿ Mar 31, 2012 13:13 |
|
FISHMANPET posted:So, any insight on what "not supported" means? Does that mean VMware tech support won't support the configuration, even though it works perfectly fine?

The main concern when using iSCSI is that most people opt to use software HBAs. Even with TOE and other hardware acceleration turned on, the CPU usage of iSCSI is significantly higher than that of Fibre Channel. In 99% of production use cases, you'll never notice this CPU usage. However, if you are pegging the CPU on your system at 100% utilization, or otherwise jamming up the scheduling queue (e.g. by scheduling a VM that uses every core on the box), you run a much higher risk of introducing random I/O timeouts and errors into your stack than you would with Fibre Channel. If those stack up, and you cause a number of cluster failovers in quick succession, you risk major data corruption on your cluster disks.

Should it be supported in the year 2012, when most new production boxes are shipping with 16-20 cores? Probably. But that's the original rationale. Do note that Microsoft and VMware do support iSCSI when it's used from the guest OS's initiator; you can't possibly introduce an I/O timeout while the guest OS is descheduled, because the guest's own timers aren't running then either.
|
# ¿ Apr 1, 2012 21:31 |
|
adorai posted:If you are running your host CPU that high you have other things to worry about.

The outrageous level of CPU power you find today over, say, 2008 levels makes iSCSI CPU overhead a complete non-issue in well over 99% of real-world production installs. The CPU requirements of iSCSI shouldn't even enter the mind of someone doing a deployment these days. When a high-end VMware server was 2x 2-core Xeons with 8GB of RAM, yes, but no longer.
|
# ¿ Apr 2, 2012 04:36 |
|
Pantology posted:All storage vendors and VMware have always said "It depends," and when pressed for a number have said something low and safe.

With VAAI and some help from array-side caching technologies, you can get absurdly high densities under the right conditions. EMC has claimed up to 512 VMs per LUN, and NetApp was claiming up to 128 on vSphere 4.1; I'm sure that's higher by now.

Edit: Snapshots, too, especially if you have a tendency to keep them around for a while, because the delta files grow dynamically.

Vulture Culture fucked around with this message at 18:02 on Apr 9, 2012 |
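Snapshot sprawl is easy to audit from the ESXi shell, for what it's worth. A sketch, assuming shell access to the host:

code:
# List every snapshot delta disk on every visible datastore, with sizes
find /vmfs/volumes -name '*-delta.vmdk' -exec ls -lh {} \;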
# ¿ Apr 9, 2012 17:59 |