incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
Quick Q: Can I use the vSphere Resource Allocation Limit to "give" 2 GB of RAM but present 4 GB? I've got some engineers who complain there isn't enough memory for the VM, but at the host level it historically doesn't use all 2 GB it's allocated (at most 1.5 GB).

For reference, these VMs run an app that spawns a macro event on a browser.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

incoherent posted:

Quick Q: Can I use the vSphere Resource Allocation Limit to "give" 2 GB of RAM but present 4 GB? I've got some engineers who complain there isn't enough memory for the VM, but at the host level it historically doesn't use all 2 GB it's allocated (at most 1.5 GB).

For reference, these VMs run an app that spawns a macro event on a browser.

You can set a limit that's lower than the allocation, but it's very likely going to lead to poor performance. If the VM thinks it has 4GB of memory but only 2GB of those are backed by actual memory and the rest is just swapped to disk on the host, then it can't make intelligent choices about how to use that memory. If your host isn't overcommitted on memory then why do you care? And if it is, you should be using resource pools or reservations or some other mechanism better than per-VM limits.
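
If you want to audit for this, here's a rough PowerCLI sketch (the vCenter name is a placeholder) that lists every VM whose memory limit sits below its configured memory:
code:
# rough PowerCLI sketch; the vCenter name is a placeholder
Connect-VIServer -Server "vcenter.example.com"
Get-VM | Get-VMResourceConfiguration |
    Where-Object { $_.MemLimitMB -ne -1 -and $_.MemLimitMB -lt $_.VM.MemoryMB } |
    Select-Object VM, MemLimitMB, @{N='ConfiguredMB'; E={$_.VM.MemoryMB}}
# clear a limit with: ... | Set-VMResourceConfiguration -MemLimitMB $null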

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
That is the technical explanation I needed! Thanks.

Internet Explorer
Jun 1, 2005





Yeah, definitely don't do that. I had an Exchange server where someone had set a Resource Allocation Limit to the amount of memory they gave the server. When I came in years later and gave it a bit more, it caused massive performance issues until I realized what the hell had happened.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

Internet Explorer posted:

For sure. Good thing to point out. I'm conflating the two because they do similar things, but depending on what you are troubleshooting you might do one or the other.


The reason I'm interested in this discussion is because I have definitely needed to do both, setting it in the BIOS and setting it in VMware, to fix different issues. It has been a while, so I was wondering if my knowledge was out of date, but it certainly fixed the issues and was not placebo.

VMware very much says to set the BIOS to OS-controlled for power management and let ESXi handle the performance/power tradeoff. For workloads like SQL it's no different, except they'd suggest setting the ESXi setting to High Performance. This allows ESXi to control the C-states in software instead of just turning poo poo off like setting High Performance in the BIOS would. (Edit: though it does state that setting it to High Performance in ESXi turns off all power-saving features, so I'm not sure how much better it is than doing it in BIOS?)

I think it's been suggested this way since 6.0

https://www.vmware.com/techpapers/2017/Perf_Best_Practices_vSphere65.html
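
If you'd rather set it from PowerCLI than click through the UI, something like this should work (the host name is a placeholder; the numeric keys come from the vSphere API):
code:
# rough PowerCLI sketch; the host name is a placeholder
$esx = Get-VMHost -Name "esx01.example.com"
$powerSys = Get-View $esx.ExtensionData.ConfigManager.PowerSystem
# API policy keys: 1 = High Performance, 2 = Balanced, 3 = Low Power, 4 = Custom
$powerSys.ConfigurePowerPolicy(1)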

TheFace fucked around with this message at 19:47 on Nov 27, 2018

Internet Explorer
Jun 1, 2005





TheFace posted:

VMware very much says to set the BIOS to OS-controlled for power management and let ESXi handle the performance/power tradeoff. For workloads like SQL it's no different, except they'd suggest setting the ESXi setting to High Performance. This allows ESXi to control the C-states in software instead of just turning poo poo off like setting High Performance in the BIOS would. (Edit: though it does state that setting it to High Performance in ESXi turns off all power-saving features, so I'm not sure how much better it is than doing it in BIOS?)

I think it's been suggested this way since 6.0

https://www.vmware.com/techpapers/2017/Perf_Best_Practices_vSphere65.html

VMware recommends that. Hardware vendors often recommend setting it in BIOS when troubleshooting issues on their side. The recommendations have changed over the years.

I feel like we may be talking in circles here, but the point is that there are legitimate reasons for trying all methods if you are troubleshooting an odd performance problem.

evil_bunnY
Apr 2, 2003

incoherent posted:

Quick Q: Can I use the vSphere Resource Allocation Limit to "give" 2 GB of RAM but present 4 GB? I've got some engineers who complain there isn't enough memory for the VM, but at the host level it historically doesn't use all 2 GB it's allocated (at most 1.5 GB).

For reference, these VMs run an app that spawns a macro event on a browser.
There are two ways to go about this: either you relinquish perf to the devs and never get a say in it ever again, or you work with them, testing both configs back to back.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

YOLOsubmarine posted:

Do you have zones with multiple initiators?

Single initiator zones are the bare minimum recommendation for VMware. If you have Brocade or Cisco switches then you can use peer zoning or smart zoning to easily create what are effectively single initiator/single target zones.

We switched to single initiator zoning and that solved this issue and hopefully helps with the other glitches we've experienced.


Another question: is there any useful alternative to beacon probing if having three NICs isn't an option? We use HPE ProLiant BL460c blade servers for our ESXi hosts, but with the latest Gen10 servers 4 NIC ports is the maximum with an FC card, and we have preferred to keep guest and management VLANs on separate NIC pairs. Is the practical option to just use beacon probing with two NIC ports and accept "shotgunning", or would this cause other issues? Until now we've only used "link status", since the servers only have 4 connections and our c7000 chassis aren't equipped to provide more. But we've had one case where "link status" wasn't enough to detect a malfunction.

I would think it could be possible to connect every port to a monitoring VLAN and randomly "ping" from every port to other hosts' ports to find any malfunctioning link, but VMware doesn't have such functionality.
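
For reference, the policy itself is easy to flip from PowerCLI; it's the two-uplink detection behavior I'm unsure about. A rough sketch, with the host and vSwitch names as placeholders:
code:
# rough sketch; host and vSwitch names are placeholders
$esx = Get-VMHost -Name "esx01.example.com"
$vsw = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0" -Standard
Get-NicTeamingPolicy -VirtualSwitch $vsw |
    Select-Object VirtualSwitch, NetworkFailoverDetectionPolicy
# switch from link status to beacon probing (VMware suggests 3+ uplinks
# before relying on beaconing)
Get-NicTeamingPolicy -VirtualSwitch $vsw |
    Set-NicTeamingPolicy -NetworkFailoverDetectionPolicy BeaconProbing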

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

There are two ways to go about this: either you relinquish perf to the devs and never get a say in it ever again, or you work with them, testing both configs back to back.
or ignore it entirely and just let memory ballooning do its job

who has time to give a poo poo about 2 GB of RAM in nearly the year 2019 anyway

Potato Salad
Oct 23, 2014

nobody cares


don't relinquish performance to the devs, they don't know poo poo

Potato Salad
Oct 23, 2014

nobody cares


"well it's slow because virtualization"

select * from table...

evil_bunnY
Apr 2, 2003

Vulture Culture posted:

or ignore it entirely and just let memory ballooning do its job

who has time to give a poo poo about 2 GB of RAM in nearly the year 2019 anyway
LMAO yes, when the numbers look like that, WGAF. But sometimes the numbers are two more orders of magnitude than that, and devs are screaming about performance "issues" 24/7.

Potato Salad posted:

"well it's slow because virtualization"

select * from table...
get out of my head

Thanks Ants
May 21, 2004

#essereFerrari


Moved a legacy app suite to RDS (pending a preview/launch of Windows Virtual Desktop), got a load of whinging about the speed it was running at, and how it must be under-resourced, they needed a bigger instance etc.

Sat on a session while somebody ran through generating the reports that were slow - RAM usage hovered around 20%, CPU never went above 40%. Reported it back to the app support people to take a look and oh, would you look at that, it was timing out trying to load a module that wasn't installed by the guys that deployed the software.

Volguus
Mar 3, 2009

Potato Salad posted:

"well it's slow because virtualization"

select * from table...

They're not wrong. Running on a dedicated machine would hide bad decisions such as this one. Up to a point.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
Can anyone comment on how good ESXi 6.7 U1's version of the HTML5 configuration client is? Compared to the VCSA's client, how fully featured is it? For example, can the newer web client do vMotion moves from one datastore to another?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Wanted to try Hyper-V DDA on my system, to try to hardware-accelerate graphics in a VM. Turns out it's disabled. You might figure it's about market segmentation (server and poo poo), but some googling turned up a Reddit post from a guy on the Hyper-V team, and the reason mentioned is that it might break some consumer features like hibernation.

I don't know, mention this in the documentation that people like us are very likely to look up, or have the PowerShell cmdlet throw a warning, and expect us to be OK with it. But instead, it's disabled on purpose over some menial issue. :[
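
For reference, on a server SKU the whole flow is only a few cmdlets, which makes the client block extra annoying. A rough sketch, with the device instance ID and VM name as placeholders (GPUs typically also need the MMIO space settings):
code:
# rough DDA sketch for a server SKU; instance ID and VM name are placeholders
$instanceId = "PCI\VEN_10DE&DEV_..."            # find yours via Get-PnpDevice
$loc = (Get-PnpDeviceProperty -InstanceId $instanceId `
        -KeyName "DEVPKEY_Device_LocationPaths").Data[0]
Disable-PnpDevice -InstanceId $instanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -LocationPath $loc -VMName "GpuVM"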

Tev
Aug 13, 2008

bobfather posted:

Can anyone comment on how good ESXi 6.7 U1's version of the HTML5 configuration client is? Compared to the VCSA's client, how fully featured is it? For example, can the newer web client do vMotion moves from one datastore to another?

6.7 U1 is supposed to have feature parity with the Flash web client. I haven't tested that specifically myself, and it's not on this chart, so I'm not 100% sure.

Potato Salad
Oct 23, 2014

nobody cares


I think the poster is referring to ESXi's embedded host client, as opposed to vCenter Flex vs. HTML5, though I'm still uncertain too.

Tev
Aug 13, 2008

Potato Salad posted:

I think the poster is referring to ESXi's embedded host client, as opposed to vCenter Flex vs. HTML5, though I'm still uncertain too.

Whoops, I missed that. Sorry bobfather, not sure if the host client can do that. I didn't think you could execute vMotion directly from a host anyway?

monsterzero
May 12, 2002
-=TOPGUN=-
Boys who love airplanes :respek: Boys who love boys
Lipstick Apathy
We're looking at building a backup/replication/DR site for our small ESXi cluster in another building, and I just watched a Veeam rep's demo. Looks like goddamned magic. What's the catch?

Thanks Ants
May 21, 2004

#essereFerrari


As far as I know the 'catch' with Veeam is it doesn't scale brilliantly, but I've never done anything VMware in large environments so that's fine.

Being free from storage vendor replication is worth the cost alone.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Thanks Ants posted:

As far as I know the 'catch' with Veeam is it doesn't scale brilliantly, but I've never done anything VMware in large environments so that's fine.

Being free from storage vendor replication is worth the cost alone.

It has issues scaling, yeah. Part of that is down to the BYO nature of the product, where you provide the compute via VMs, you provide the storage, and you provide the various forms of connectivity to repositories, and if your infrastructure sucks, Veeam performance will suck. But you also have to understand things like how many concurrent backup jobs are running, how many virtual disks per backup, how that translates into proxy server CPU and memory requirements, and how and when you might use SAN direct vs. network transfer vs. hot-add backups, and so on.

For a small environment it works fine, because mostly those answers don't matter and you can't screw it up too badly. At its worst it's still better than Commvault or NetBackup or Networker.
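
As a ballpark, the rule of thumb I've seen cited is roughly one proxy core and 2 GB of proxy RAM per concurrent task; treat this toy sketch as exactly that and check Veeam's current sizing guidance:
code:
# toy sizing sketch based on a commonly cited rule of thumb
# (~1 core and ~2 GB RAM per concurrent task); verify against Veeam's docs
function Get-ProxySizeEstimate {
    param([int]$ConcurrentTasks)
    [pscustomobject]@{
        ConcurrentTasks = $ConcurrentTasks
        Cores           = $ConcurrentTasks
        MemoryGB        = 2 * $ConcurrentTasks
    }
}
Get-ProxySizeEstimate -ConcurrentTasks 8   # -> 8 cores, 16 GB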

The issue with replication, specifically, is that it's VMware-snapshot-based, so there are still issues with VM stun on snapshot release.

Zerto is a replication product that actually feels like magic.

Internet Explorer
Jun 1, 2005





I've also heard amazing things about Zerto. To the point where some people I spoke with used Veeam for backups and Zerto for replication.

For me, if you're getting into having to go crazy with replication tools like Veeam or Zerto, I'd rather spend the time creating the infrastructure at the application level for replication. Stuff like DFS, Exchange DAGs, SQL AlwaysOn, etc. Veeam works great for replication for me because we're not large enough to have a proper DR setup, and just shoving data offsite is the goal for the foreseeable future.

Thanks Ants
May 21, 2004

#essereFerrari


If you have two data centres then it does seem a bit pointless to have one sat there waiting for a disaster if you can run the software in a resilient fashion (just make sure you don't run either site above 50% I guess). A lot of software is complete trash though and the only answer you get for improving availability is to use fault tolerance.

monsterzero
May 12, 2002
-=TOPGUN=-
Boys who love airplanes :respek: Boys who love boys
Lipstick Apathy
Thanks for the feedback.

Zerto sounds cool, but it's way overkill for us. Our recovery points are daily to weekly at present, and running live off the backups means we could be back up in minutes vs. an hour-plus with our current solution.

Our poo poo is pretty small fry, and nothing is particularly real-time, so I doubt anyone will notice the snaps happening.

My plan is to use our DR host as the backup proxy, so it should have plenty of horsepower to handle Veeam tasks, as there won't be any other production happening on it unless the poo poo has hit the fan.

I'm also really liking that Veeam doesn't charge per TB or add a tariff for transfers to cloud storage.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Internet Explorer posted:

I've also heard amazing things about Zerto. To the point where some people I spoke with used Veeam for backups and Zerto for replication.

For me, if you're getting into having to go crazy with replication tools like Veeam or Zerto, I'd rather spend the time creating the infrastructure at the application level for replication. Stuff like DFS, Exchange DAGs, SQL AlwaysOn, etc. Veeam works great for replication for me because we're not large enough to have a proper DR setup, and just shoving data offsite is the goal for the foreseeable future.

This stuff is great, but it rarely covers every need an organization has for availability. There are still bespoke and legacy apps that need recovery, and if you're not almost exclusively a Microsoft shop, things get much more challenging. You're basically stuck trying to piece together a complete availability and recovery strategy from a number of different technologies and then managing each of them piecemeal.

Additionally, those solutions aren't always as flexible with regard to things like latency and bandwidth requirements, and every app may have slightly different ones.

Also, the things you mentioned are the most likely candidates (outside of DB, maybe) to be XaaS-ified anyway, so they're already kind of low-hanging fruit.

The appeal of something like Zerto or Veeam is that you can overlay them on your existing environment and recover everything irrespective of how it's architected. They're very flexible with regard to networking requirements for replication. And when you need to fail things over, it all happens in a single console, can be done as a single job, and can be structured to bring everything up in the appropriate order. That simplicity can be very appealing compared to a massive runbook that requires notifying different application teams at different times to handle their piece of the puzzle.

Also, Zerto provides near-zero RPO for everything and can do it over impressively large geographic distances. That's basically impossible to get any other way. Synchronous active/active can be done at the storage layer or on a per-application basis but has geographic limitations, and fully asynchronous is easy with a lot of different tools, but sub-one-minute RPO for any workload is unique to Zerto as far as I know.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Double post, but I'm sure most people saw the AWS announcement about Outposts, which is basically their response to Azure Stack, but they're also doing VMware Cloud on AWS Outposts as well. So first AWS let you put VMware in the cloud, and now they're disrupting the industry by letting you... put VMware in your datacenter?

Weird stuff, but I can also sort of see the appeal. Anyone have thoughts on this?

YOLOsubmarine fucked around with this message at 01:59 on Dec 5, 2018

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

Internet Explorer posted:

I've also heard amazing things about Zerto. To the point where some people I spoke with used Veeam for backups and Zerto for replication.

For me, if you're getting into having to go crazy with replication tools like Veeam or Zerto, I'd rather spend the time creating the infrastructure at the application level for replication. Stuff like DFS, Exchange DAGs, SQL AlwaysOn, etc. Veeam works great for replication for me because we're not large enough to have a proper DR setup, and just shoving data offsite is the goal for the foreseeable future.

Application-level replication is awesome if you have a decent amount of bandwidth to work with and low latency, but it can cause some pretty crazy split-brain stuff if your WAN goes to poo poo. Granted, my experience with this is old and anecdotal, so your mileage may vary.

I worked for a company that wanted to stretch every nickel they could out of everything; we ran Citrix XenServer over VMware because they couldn't warrant the cost, even though I begged (this was about 7 years ago now). They wanted a copy of all data in our secondary datacenter (read: server room built in an office 3 states away), but wouldn't actually invest in any technology that would allow for it, so I had to make do with what I had at the application level. DFS for files (actually scrapped for a 3rd-party tool called PeerSync), DAG (Exchange 2010), SQL mirroring Edit: SQL transaction log shipping (it's been a while; I did SQL mirroring locally to be resilient in our primary datacenter). All across a T3.

I actually built it all out and it worked pretty drat well... at first. Then one side of the T3 had issues... it probably wouldn't have been as big a deal if it had dropped completely, but instead of dropping, latency sat pretty consistently at 300+ ms. SQL would require a manual failover, so SQL was OK. DFS just didn't transfer files, but that was OK. Exchange LOST ITS MIND, to the point that both sides thought they were the active databases yet somehow refused to mount the DBs. Even taking the affected side (the side with the WAN issue) down didn't help. The DBs wouldn't mount on the primary (it thought it was dirty). I spent all day working with Microsoft to get the DBs to mount, when finally the WAN issue resolved, the other Exchange server was brought back up, and somehow the DAG sorted out its poo poo.

I know they've made Exchange DAGs a lot smarter since 2010, and a lot more tolerant. Still, it left a bad taste in my mouth, even though in hindsight I should never have built it without the proper networking in place (redundant connections, different carriers; more bandwidth would have been nice, etc.).

TheFace fucked around with this message at 22:20 on Dec 5, 2018

Maneki Neko
Oct 27, 2000

Anyone seeing weird performance/response issues with Windows Server 2016 guests on VMware? In particular, the UI seems laggy both through the local console and RDP. It seems to be consistent regardless of what kind of hosts or storage things are on, resources assigned, hardware versions, VMware Tools versions, virtual storage controllers/NICs, etc.

I see threads about issues with Windows Server 2016 performance on ESXi 6.5 prior to U1, but we've updated past that point (although we're still on 6.5).

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
Anyone here using Kubernetes and tinkering with KubeVirt or Virtlet? I saw a few demos at KubeCon this week, and they're both cool feats of technology, but I'm not sure I'd want to use either in production, even once they're more mature. I guess maybe if you got your entire environment into Kubernetes except for a very small handful of things you couldn't containerize, it would be tempting.

Alfajor
Jun 10, 2005

The delicious snack cake.
ESX 6.5 question.
I'm going to update some settings for NFS tuning based on https://kb.vmware.com/s/article/2239. Mainly, updating Net.TcpipHeapMax and NFS.MaxQueueDepth.
I'm starting in one cluster of 3 hosts, and want to move carefully.

Would it be OK to update these 2 settings on just one host, and have it be different from the other 2 in the cluster, for a few days? Or do these settings need to be consistent across the cluster at all times?
The way I see it, these settings tune how many resources the ESX host reserves for the NFS stack and don't actually touch the NFS mount or data, so I wouldn't expect a conflict... but this is a tough one to google, so y'all are my next best try :)

*edit: I opened a P4 ticket with VMware, we'll see how long until I get an answer

Alfajor fucked around with this message at 00:40 on Jan 18, 2019

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Alfajor posted:

ESX 6.5 question.
I'm going to update some settings for NFS tuning based on https://kb.vmware.com/s/article/2239. Mainly, updating Net.TcpipHeapMax and NFS.MaxQueueDepth.
I'm starting in one cluster of 3 hosts, and want to move carefully.

Would it be OK to update these 2 settings on just one host, and have it be different from the other 2 in the cluster, for a few days? Or do these settings need to be consistent across the cluster at all times?
The way I see it, these settings tune how many resources the ESX host reserves for the NFS stack and don't actually touch the NFS mount or data, so I wouldn't expect a conflict... but this is a tough one to google, so y'all are my next best try :)

*edit: I opened a P4 ticket with VMware, we'll see how long until I get an answer

It's a best practice to keep those settings the same across all hosts, but it's not a requirement. If they don't all match, you may see inconsistent performance depending on which host a VM lives on.

Personally, I'd change them on all three, as the NFS.MaxQueueDepth setting can cause all-paths-down conditions if not set appropriately.
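
If you do all three, keeping them in lockstep is easy from PowerCLI. A rough sketch, with the cluster name and values as placeholders (use your storage vendor's recommended numbers):
code:
# rough sketch; cluster name and values are placeholders
foreach ($esx in Get-Cluster -Name "Prod" | Get-VMHost) {
    Get-AdvancedSetting -Entity $esx -Name "Net.TcpipHeapMax" |
        Set-AdvancedSetting -Value 512 -Confirm:$false
    Get-AdvancedSetting -Entity $esx -Name "NFS.MaxQueueDepth" |
        Set-AdvancedSetting -Value 64 -Confirm:$false
}
# I believe both of these need a host reboot to take effect; check the KB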

Alfajor
Jun 10, 2005

The delicious snack cake.
Interesting, thanks.
And yeah, my goal would of course be to have all settings consistent across the cluster. I've just joined this company and the prior infra engineers are not around, but everything was tuned for ESX 5.0, and my boss is shy about just changing things... so I'm asked to test these settings on just one host first, if possible, and see how things run for a full production day.
This would be on one host, in a DR/dev cluster, during a maintenance window. I was planning on doing it rolling-update fashion, one host at a time but all hosts done in one night, and then came the constraint of a one-host-per-day cadence.

GrandMaster
Aug 15, 2004
laidback
Also, any decent storage vendor should publish tech documents specifying what those settings should be, which are usually pretty safe unless you start loving around against their recommendations.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

GrandMaster posted:

Also, any decent storage vendor should publish tech documents specifying what those settings should be, which are usually pretty safe unless you start loving around against their recommendations.

This. Before you change anything, check with the storage vendor. I know for a fact NetApp publishes all these settings for ESXi; others likely do as well.

As for one host vs. three: I'd push back on your boss or change control board that having a mix of settings is "against best practice and may cause unforeseen issues for the cluster." Throwing around best practice (especially when it is) tends to carry weight with decision-making types, at least in my experience.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

TheFace posted:

This. Before you change anything, check with the storage vendor. I know for a fact NetApp publishes all these settings for ESXi; others likely do as well.

As for one host vs. three: I'd push back on your boss or change control board that having a mix of settings is "against best practice and may cause unforeseen issues for the cluster." Throwing around best practice (especially when it is) tends to carry weight with decision-making types, at least in my experience.

They also have the VAAI plugins for iSCSI and NFS that you should be deploying to your hosts through Update Manager.
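
A quick way to check whether a host actually has the plugin, with the host name and VIB name pattern as placeholders (this assumes NetApp's NAS plugin naming):
code:
# rough sketch; host name and VIB name pattern are placeholders
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com") -V2
$esxcli.software.vib.list.Invoke() |
    Where-Object { $_.Name -like "*NasPlugin*" } |
    Select-Object Name, Version, Vendor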

evil_bunnY
Apr 2, 2003

Yes, there's no reason to gently caress with the settings until you've loaded NetApp's.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

BangersInMyKnickers posted:

They also have the VAAI plugins for iSCSI and NFS that you should be deploying to your hosts through Update Manager.

This too.

Alfajor, who's the storage vendor?

Alfajor
Jun 10, 2005

The delicious snack cake.
NetApp :haw:

As I said, I'm bringing the old "best practices" up to current ones. I'm literally going to apply NetApp's best practices, based off this: https://www.netapp.com/us/media/tr-4597.pdf
Boss-man doesn't want to install the vCenter plugin from NetApp, so I'll just do all their NFS tuning manually. The what is locked in; I'm just working on the how.

Btw, VMware support called me back already! Not bad for a P4!
The guy said I should be fine with one host having different settings, but he recommended disabling DRS to avoid VMs migrating between hosts where the underlying storage settings differ. He couldn't tell me to expect issues, but this is an easy way to avoid finding out.
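
Rather than disabling DRS outright, I'm thinking of just setting it to Manual for the test window, which stops the automatic migrations. A rough sketch, with the cluster name as a placeholder:
code:
# rough sketch; the cluster name is a placeholder
Get-Cluster -Name "DR-Dev" | Set-Cluster -DrsAutomationLevel Manual -Confirm:$false
# ...and back after the test window:
Get-Cluster -Name "DR-Dev" | Set-Cluster -DrsAutomationLevel FullyAutomated -Confirm:$false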

*edit: TheFace, check your PMs man

Alfajor fucked around with this message at 20:50 on Jan 18, 2019

Thanks Ants
May 21, 2004

#essereFerrari


There's no good reason not to install the vCenter plugins that your storage vendor provides
