|
Quick Q: Can I use a vSphere Resource Allocation Limit to "give" 2GB of RAM, but present 4GB? I've got some engineers that complain there isn't enough memory for the VM, but at the host level it historically doesn't use all 2GB it's allocated (at most 1.5GB). For reference, these VMs run an app that spawns a macro event on a browser.
|
# ? Nov 27, 2018 03:31 |
|
|
|
incoherent posted:Quick Q: Can I use a vSphere Resource Allocation Limit to "give" 2GB of RAM, but present 4GB? I've got some engineers that complain there isn't enough memory for the VM, but at the host level it historically doesn't use all 2GB it's allocated (at most 1.5GB). You can set a limit that’s lower than the allocation but it’s very likely going to lead to poor performance. If the VM thinks it has 4GB of memory but only 2GB of those are backed by actual memory and the rest is just swapped to disk on the host, then it can’t make intelligent choices about how to use that memory. If your host isn’t overallocated on memory then why do you care? And if it is, you should be using resource pools or reservations or some other better mechanism than per-VM limits.
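To make the "not backed by actual memory" point concrete, here's a toy model. This is illustration only, not how ESXi actually accounts for memory internally (real behavior also involves ballooning, compression, and page sharing), and the numbers are made up:

```python
def backing_breakdown(configured_mb, limit_mb, active_mb):
    """Toy model of a memory limit: the guest still sees `configured_mb`,
    but the host only backs up to `limit_mb` with machine memory; anything
    the guest actively touches beyond that has to come from ballooning or
    host swap, which is where the performance pain comes from."""
    machine_backed = min(active_mb, limit_mb)
    swapped_or_ballooned = max(0, active_mb - limit_mb)
    return machine_backed, swapped_or_ballooned

# Guest sees 4 GB, limit is 2 GB, app touches 3 GB: 1 GB ends up un-backed.
print(backing_breakdown(4096, 2048, 3072))  # (2048, 1024)
```

The point being: as long as the active working set stays under the limit (the ~1.5GB case above), nothing bad happens, but the guest OS has no idea where that cliff is.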
|
# ? Nov 27, 2018 03:45 |
|
That is the technical explanation I needed! Thanks.
|
# ? Nov 27, 2018 03:53 |
|
Yeah, definitely don't do that. I had an Exchange server where someone had set a Resource Allocation Limit to the amount of memory they gave the server, then when I came in years later and gave it a bit more it caused massive performance issues until I realized what the hell had happened.
|
# ? Nov 27, 2018 04:01 |
|
Internet Explorer posted:For sure. Good thing to point out. I'm conflating the two because they do similar things, but depending on what you are troubleshooting you might do one or the other. VMware very much says to set the BIOS to OS controlled for power management and allow ESXi to handle the performance/power. For workloads like SQL it's no different except they'd suggest to set the ESXi setting to High Performance. This allows ESXi to control the c-states in software instead of just turning poo poo off like setting it to High Performance would in BIOS. (Edit: though it does state that setting it to High Performance in ESXi turns off all power saving features so not sure how much better it is than doing it in BIOS?) I think it's been suggested this way since 6.0 https://www.vmware.com/techpapers/2017/Perf_Best_Practices_vSphere65.html TheFace fucked around with this message at 19:47 on Nov 27, 2018 |
# ? Nov 27, 2018 19:41 |
|
TheFace posted:VMware very much says to set the BIOS to OS controlled for power management and allow ESXi to handle the performance/power. For workloads like SQL it's no different except they'd suggest to set the ESXi setting to High Performance. This allows ESXi to control the c-states in software instead of just turning poo poo off like setting it to High Performance would in BIOS. (Edit: though it does state that setting it to High Performance in ESXi turns off all power saving features so not sure how much better it is than doing it in BIOS?) VMware recommends that. Hardware vendors often recommend setting it in BIOS when troubleshooting issues on their side. The recommendations have changed over the years. I feel like we may be talking in circles here, but the point is that there are legitimate reasons for trying all methods if you are troubleshooting an odd performance problem.
|
# ? Nov 27, 2018 20:01 |
|
incoherent posted:Quick Q: Can I use vsphere Resource Allocation Limit to "give" 2GB of ram, but present 4GB? I've got some engineers that complain there isn't enough memory for the VM, but at the host level it historically doesn't use all 2GB it's allocated (at most 1.5gb).
|
# ? Nov 29, 2018 00:43 |
|
YOLOsubmarine posted:Do you have zones with multiple initiators? We switched to single initiator zoning and that solved this issue and hopefully helps with the other glitches we've experienced. Another question. Is there any useful alternative to beacon probing if having three NICs isn't an option? We use HPE ProLiant BL460c blade servers for our ESXi hosts, but with the latest Gen10 servers 4 NIC ports is the maximum with an FC card, and we have preferred to keep guest and management VLANs on separate NIC pairs. Is the practical option to just use beacon probing with two NIC ports and accept "shotgunning", or would this cause other issues? Until now we've only used "link status", since the servers only have 4 connections and our c7000 chassis aren't equipped to provide more. But we've had one case where "link status" wasn't enough to detect a malfunction. I would think it could be possible to connect every port to a monitoring VLAN and randomly "ping" from every port to other hosts' ports to find any malfunctioning link, but VMware doesn't have such functionality.
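For what it's worth, the three-NIC requirement falls straight out of a counting argument. A toy sketch (not VMware code, and the vmnic names are just placeholders): the host can only blame a specific uplink if different failures produce different beacon observations.

```python
def beacon_view(nics, dead):
    """For each uplink, the set of peer uplinks it still receives beacons
    from, assuming only `dead` has lost connectivity. A dead link hears
    nobody, and nobody hears it."""
    return {n: frozenset() if n == dead else
               frozenset(p for p in nics if p not in (n, dead))
            for n in nics}

two = ["vmnic0", "vmnic1"]
three = ["vmnic0", "vmnic1", "vmnic2"]

# With two uplinks, either failure looks identical (nobody hears anybody),
# so ESXi can't pick a culprit and "shotguns" traffic out both:
print(beacon_view(two, "vmnic0") == beacon_view(two, "vmnic1"))      # True
# With three, each failure leaves a distinct pattern among the survivors:
print(beacon_view(three, "vmnic0") == beacon_view(three, "vmnic1"))  # False
```

Which is why with only two ports "accept shotgunning" really is the whole decision: the mechanism can detect that *something* broke, but never which side.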
|
# ? Dec 1, 2018 14:34 |
|
evil_bunnY posted:There’s 2 ways to go about this: either you relinquish perf to the devs and never get a say in it ever again, or work with them testing both configs back to back. who has time to give a poo poo about 2 GB of RAM in nearly the year 2019 anyway
|
# ? Dec 1, 2018 17:04 |
|
don't relinquish performance to the devs, they don't know poo poo
|
# ? Dec 1, 2018 17:52 |
|
"well it's slow because virtualization" select * from table...
|
# ? Dec 1, 2018 17:58 |
|
Vulture Culture posted:or ignore it entirely and just let memory ballooning do its job Potato Salad posted:"well it's slow because virtualization"
|
# ? Dec 1, 2018 19:45 |
|
Moved a legacy app suite to RDS (pending a preview/launch of Windows Virtual Desktop), got a load of whinging about the speed it was running at, and how it must be under-resourced, they needed a bigger instance etc. Sat on a session while somebody ran through generating the reports that were slow - RAM usage hovered around 20%, CPU never went above 40%. Reported it back to the app support people to take a look and oh, would you look at that, it was timing out trying to load a module that wasn't installed by the guys that deployed the software.
|
# ? Dec 1, 2018 20:30 |
|
Potato Salad posted:"well it's slow because virtualization" They're not wrong. Running on a dedicated machine would hide bad decisions such as this one. Up to a point.
|
# ? Dec 1, 2018 20:36 |
|
Can anyone comment on how good ESXi 6.7 U1’s version of the HTML5 configuration client is? Compared to VCSA, how fully-featured is it? For example, can the newer web client do vMotion moves from one drive to another?
|
# ? Dec 1, 2018 21:27 |
|
Wanted to try Hyper-V DDA on my system, to try to hardware accelerate graphics on a VM. Turns out it's disabled. You might figure it's about market segmentation (server and poo poo), but some googling turned up a Reddit post from a guy on the Hyper-V team, and the reason mentioned is that it might break some consumer features like hibernation. I don't know, mention this in the documentation that people are very likely to look up, or have the PowerShell cmdlet toss a warning, and expect us to be OK with it. But instead, it's disabled on purpose for some menial issue. :[
|
# ? Dec 2, 2018 02:30 |
|
bobfather posted:Can anyone comment on how good ESXi 6.7 U1’s version of the HTML5 configuration client is? Compared to VCSA, how fully-featured is it? For example, can the newer web client do vMotion moves from one drive to another? 6.7 U1 is supposed to have feature parity with the Flash web client. Haven't tested that specifically myself, and it's not on the chart, so not 100% sure.
|
# ? Dec 3, 2018 00:46 |
|
Think the poster is referring to ESXi's embedded host client as opposed to vCenter Flex vs. HTML5, though I am still uncertain too
|
# ? Dec 3, 2018 06:32 |
|
Potato Salad posted:Think the poster is referring to esxi's embedded web client as opposed to vcenter flex vs html5, though I am still uncertain too Whoops, I missed that. Sorry bobfather, not sure if the host client can do that. Didn't think you could execute vMotion directly from a host anyway?
|
# ? Dec 3, 2018 15:22 |
|
We're looking at building a backup/replication/DR site for our small ESXi cluster in another building and I just watched a Veeam rep's demo. Looks like goddamned magic. What's the catch?
|
# ? Dec 4, 2018 21:50 |
|
As far as I know the 'catch' with Veeam is it doesn't scale brilliantly, but I've never done anything VMware in large environments so that's fine. Being free from storage vendor replication is worth the cost alone.
|
# ? Dec 4, 2018 21:53 |
|
Thanks Ants posted:As far as I know the 'catch' with Veeam is it doesn't scale brilliantly, but I've never done anything VMware in large environments so that's fine. It has issues scaling, yeah. Part of that is down to the bring-your-own nature of the product, where you provide compute via VMs and you provide storage and you provide various forms of connectivity to repositories, and if your infrastructure sucks, Veeam performance will suck. But you also have to understand things like how many concurrent backup jobs are running and how many virtual disks per backup and how that translates into proxy server CPU and memory requirements, and how and when you might use SAN direct vs network transfer vs hot-add backups and so on. For a small environment it works fine because mostly those answers don’t matter and you can’t screw it up too badly. At its worst it’s still better than Commvault or NetBackup or Networker. The issue with replication, specifically, is that it’s VMware snapshot based, so there are still issues with VM stun on snapshot release. Zerto is a replication product that actually feels like magic.
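On the "how task count translates into proxy CPU and memory" point, a back-of-the-envelope sketch. The one-core / 2GB-per-concurrent-task figures below are the commonly cited rule of thumb, not something from this thread; check Veeam's current sizing guidance for your version before trusting them:

```python
def proxy_sizing(concurrent_tasks, cores_per_task=1, ram_gb_per_task=2,
                 base_ram_gb=2):
    """Rough backup-proxy sizing. One concurrent task is roughly one
    virtual disk being processed at once, so a job with many disks can
    eat more tasks than you'd guess from the VM count alone."""
    return {"cores": concurrent_tasks * cores_per_task,
            "ram_gb": base_ram_gb + concurrent_tasks * ram_gb_per_task}

# A proxy expected to move 8 disks in parallel:
print(proxy_sizing(8))  # {'cores': 8, 'ram_gb': 18}
```

Undersize this and jobs just queue; it degrades, it doesn't break, which is part of why small shops rarely notice the problem.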
|
# ? Dec 4, 2018 23:27 |
|
I've also heard amazing things about Zerto. To the point where some people I spoke with used Veeam for backups and Zerto for replication. For me, if you're getting to the point of going crazy with replication tools like Veeam or Zerto, I'd rather spend the time creating the infrastructure at the application level for replication. Stuff like DFS, Exchange DAGs, SQL AlwaysOn, etc. Veeam works great for replication for me because we're not large enough to have a proper DR setup and just shoving data offsite is the goal for the foreseeable future.
|
# ? Dec 4, 2018 23:34 |
|
If you have two data centres then it does seem a bit pointless to have one sat there waiting for a disaster if you can run the software in a resilient fashion (just make sure you don't run either site above 50% I guess). A lot of software is complete trash though and the only answer you get for improving availability is to use fault tolerance.
|
# ? Dec 4, 2018 23:56 |
|
Thanks for the feedback. Zerto sounds cool but it’s way overkill for us. Our recovery points are daily to weekly at present, and running live off the backups means we could be back up in minutes vs like an hour+ with our current solution. Our poo poo is pretty small fry, and nothing is particularly real-time, so I doubt anyone will notice the snaps happening. My plan is to use our DR host as the backup proxy, so it should have plenty of horsepower to handle Veeam tasks as there won’t be any other production happening on it unless the poo poo has hit the fan. I’m also really liking that Veeam doesn’t charge per TB or tariff transfers to cloud storage.
|
# ? Dec 5, 2018 00:05 |
|
Internet Explorer posted:I've also heard amazing things about Zerto. To the point where some I spoke with used Veeam for backups and Zerto for replication. This stuff is great but it rarely covers every need an organization has for availability. There are still bespoke and legacy apps that need recovery, and if you’re not almost exclusively a Microsoft shop things get much more challenging. You’re basically stuck trying to piece together a complete availability and recovery strategy from a number of different technologies and then managing each of them piecemeal. Additionally, those solutions aren’t always as flexible with regard to things like latency and bandwidth requirements, and every app may have slightly different ones. Also the things you mentioned are the most likely candidates (outside of DB, maybe) to be XaaSified anyway, so they’re already kind of low-hanging fruit. The appeal of something like Zerto or Veeam is that you can overlay them on your existing environment and recover everything irrespective of how it’s architected. They’re very flexible with regard to networking requirements for replication. And when you need to fail things over it all happens in a single console, can be done as a single job, and can be structured to turn everything up in the appropriate order. That simplicity can be very appealing compared to a massive runbook that requires notifying different application teams at different times to handle their piece of the puzzle. Also Zerto provides near-zero RPO for everything and can do it over impressively large geographic distances. That’s basically impossible to get any other way. Synchronous active/active can be done at the storage layer or on a per-application basis but has geographic limitations, and fully asynchronous is easy with a lot of different tools, but sub-1-minute RPO for any workload is unique to Zerto as far as I know.
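The RPO gap is really just arithmetic on the replication cycle. A quick sketch with made-up numbers, purely to show the shape of it:

```python
def worst_case_rpo_s(cycle_interval_s, in_flight_transfer_s=0):
    """Worst case for interval-based replication: the failure hits just
    before the next cycle completes, so you lose up to a full interval
    plus whatever time the in-flight transfer still needed."""
    return cycle_interval_s + in_flight_transfer_s

# Snapshot-based replication every 15 min with a ~2 min transfer window,
# vs. journal-based CDP that ships writes continuously (seconds at risk):
print(worst_case_rpo_s(15 * 60, 120))  # 1020 (17 minutes)
print(worst_case_rpo_s(5))             # 5 (seconds)
```

Continuous journaling shrinks the "cycle" to near zero, which is the whole trick: there's no interval left to lose.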
|
# ? Dec 5, 2018 01:51 |
|
Double post, but I’m sure most people saw the AWS announcement about Outposts, which is basically their response to Azure Stack, but they’re also doing VMware Cloud on AWS Outposts as well. So first AWS let you put VMware in the cloud and now they’re disrupting the industry by letting you...put VMware in your datacenter? Weird stuff, but I can also sort of see the appeal. Anyone have thoughts on this? YOLOsubmarine fucked around with this message at 01:59 on Dec 5, 2018 |
# ? Dec 5, 2018 01:56 |
|
Internet Explorer posted:I've also heard amazing things about Zerto. To the point where some I spoke with used Veeam for backups and Zerto for replication. Application-level replication is awesome if you have a decent amount of bandwidth to work with and low latency, but it can cause some pretty crazy split-brain stuff if your WAN goes to poo poo. Granted my experience with this is old and anecdotal, so your mileage may vary. I worked for a company that wanted to stretch every nickel they could out of everything; we ran Citrix XenServer over VMware because they couldn't warrant the cost even though I begged (this was about 7 years ago now). They wanted a copy of all data in our secondary datacenter (read: server room built in an office 3 states away), but wouldn't actually invest in any technology that would allow for it, so I had to make do with what I had at the application level. DFS for files (actually scrapped for a 3rd-party tool called PeerSync), DAG (Exchange 2010). I actually built it all out and it worked pretty drat well... at first. Then one side of the T3 had issues... it probably wouldn't have been as big a deal if it had dropped completely, but instead of dropping, latency was pretty consistent at 300+ ms. SQL would require a manual failover, so SQL was OK. DFS just didn't transfer files, but that was OK. Exchange LOST ITS MIND, to the point that both sides thought they were the active databases yet somehow refused to mount the DBs. Even taking the affected side (the side with the WAN issue) down didn't help. DBs wouldn't mount on the primary (it thought it was dirty). I spent all day working with Microsoft to get the DBs to mount, when finally the WAN issue resolved, the other Exchange server was brought up, and somehow the DAG sorted out its poo poo. I know they've made Exchange DAGs a lot smarter since 2010, and a lot more tolerant. 
Still just left a bad taste in my mouth, even though in hindsight I should have never built it without the proper networking in place (redundant connections, different carriers, more bandwidth would have been nice, etc). TheFace fucked around with this message at 22:20 on Dec 5, 2018 |
# ? Dec 5, 2018 22:16 |
|
Anyone seeing weird performance/response issues with Windows Server 2016 guests on VMware? In particular the UI seems laggy both through the local console and RDP. It seems consistent regardless of what kind of hosts or storage things are on, resources assigned, hardware versions, VMware Tools versions, virtual storage controllers/NICs, etc. I see threads about issues with Windows Server 2016 performance on ESXi 6.5 prior to U1, but we've updated past that point (although we're still on 6.5)
|
# ? Dec 13, 2018 21:18 |
|
Anyone here using Kubernetes and tinker with KubeVirt or Virtlet? I saw a few demos at Kubecon this week and they both are cool feats of technology but I'm not sure I'd want to use either in production, even when they're more mature. I guess maybe if you got your entire environments into Kubernetes except a very small handful of things you couldn't containerize it would be tempting.
|
# ? Dec 15, 2018 04:27 |
|
ESX 6.5 question. I'm going to update some settings for NFS tuning based on https://kb.vmware.com/s/article/2239. Mainly, updating Net.TcpipHeapMax and NFS.MaxQueueDepth. I'm starting in one cluster of 3 hosts, and want to move carefully. Would it be OK to update these 2 settings on just one host, and have it be different from the other 2 in the cluster, for a few days? Or do these settings need to be consistent across the cluster at all times? The way I see it, these tune how many resources the ESX host reserves for the NFS stack, and don't actually touch the NFS mount or data, so I wouldn't expect a conflict... but this is a tough one to google, so y'all are my next best try *edit: I opened a P4 ticket with VMware, we'll see how long until I get an answer Alfajor fucked around with this message at 00:40 on Jan 18, 2019 |
# ? Jan 18, 2019 00:29 |
|
Alfajor posted:ESX 6.5 question. It’s a best practice to keep those settings the same across all hosts, but it’s not a requirement. If they don’t all match then you may experience inconsistent performance depending on which host a VM lives on. Personally I’d change them on all three as the NFS MaxQueueDepth setting can cause all path down conditions if not set appropriately.
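If you do end up staging the change host by host, it's easy to script a drift check so the mismatch window doesn't get forgotten. A sketch with made-up hostnames (4294967295 is just the untuned NFS.MaxQueueDepth default used here as example data):

```python
def setting_drift(per_host_settings):
    """Given {host: {advanced setting: value}}, return the settings whose
    value isn't identical on every host: those are the ones that make a
    VM's behavior depend on which host it lands on after a vMotion."""
    hosts = list(per_host_settings)
    drift = {}
    for key in sorted(set().union(*per_host_settings.values())):
        values = {h: per_host_settings[h].get(key) for h in hosts}
        if len(set(values.values())) > 1:
            drift[key] = values
    return drift

cluster = {
    "esx01": {"NFS.MaxQueueDepth": 64, "Net.TcpipHeapMax": 512},
    "esx02": {"NFS.MaxQueueDepth": 64, "Net.TcpipHeapMax": 512},
    "esx03": {"NFS.MaxQueueDepth": 4294967295, "Net.TcpipHeapMax": 512},
}
print(setting_drift(cluster))
# {'NFS.MaxQueueDepth': {'esx01': 64, 'esx02': 64, 'esx03': 4294967295}}
```

In a real environment you'd feed this from PowerCLI or esxcli output rather than hand-typed dicts, but the check itself is the same.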
|
# ? Jan 18, 2019 01:22 |
|
Interesting, thanks. And yeah, my goal would of course be to have all settings consistent across the cluster. I've just joined this company, and the prior infra engineers are not around, but everything was tuned for ESX 5.0, and my boss is shy about just changing things... so I'm asked to test these settings on just one host first if possible, and see how things run for a full production day. This would be on one host, in a DR/Dev cluster, during a maintenance window. I was planning on doing it rolling-update fashion, one host at a time but all hosts done in one night, and then came the constraint of a one-host-a-day cadence.
|
# ? Jan 18, 2019 02:02 |
|
Also, any decent storage vendor should publish tech documents specifying what those settings should be, which are usually pretty safe unless you start loving around against their recommendations.
|
# ? Jan 18, 2019 07:37 |
|
GrandMaster posted:Also, any decent storage vendors should publish tech documents specifying what those settings should be, which are usually pretty safe unless you start loving around against their recommendations.. This. Before you change anything, check the storage vendor. I know for a fact NetApp publishes all these settings for ESXi; others likely do as well. As for one host vs three: I'd push back on your boss or change control board that having a mix of settings is "against best practice and may cause unforeseen issues for the cluster." Throwing around best practice (especially when it is one) tends to carry weight with decision-making types, at least in my experience.
|
# ? Jan 18, 2019 15:22 |
|
TheFace posted:This, before you change anything check the storage vendor. I know for a fact NetApp publishes all these settings for ESXi, others likely do as well. They also have the VAAI plugins for iSCSI and NFS that you should be deploying on your hosts through the update manager.
|
# ? Jan 18, 2019 16:37 |
|
Yes there’s no reason to gently caress with the settings until you’ve loaded Netapp’s
|
# ? Jan 18, 2019 16:38 |
|
BangersInMyKnickers posted:They also have the VAAI plugins for iSCSI and NFS that you should be deploying on your hosts through the update manager. This too. Alfajor who's the storage vendor?
|
# ? Jan 18, 2019 16:52 |
|
NetApp As I said, I'm bringing up the old "best practices" to current ones. I'm literally going to apply NetApp's best practices, based off this: https://www.netapp.com/us/media/tr-4597.pdf Boss-man doesn't want to install the vCenter plugin from NetApp, so I'll just do all their NFS tuning manually. The what is locked in, I'm just working on the how. Btw, VMware support called me back already! Not bad for a P4! Dude said I should be fine with one host with different settings, but recommended disabling DRS to avoid VMs migrating and having the underlying storage settings differ. He couldn't tell me to expect issues, but this is an easy way of avoiding finding out. *edit: TheFace, check your PMs man Alfajor fucked around with this message at 20:50 on Jan 18, 2019 |
# ? Jan 18, 2019 20:46 |
|
|
|
There's no good reason not to install the vCenter plugins that your storage vendor provides
|
# ? Jan 18, 2019 21:15 |