|
Martytoof posted:Ugh is there any way to programmatically rename all the files associated with a VM when you rename the VM itself? Rename them, then SvMotion them to a temp datastore and back. If you can. Or script it, as said. Renaming the files and the lines in the vmx file isn't terrible.
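The manual approach boils down to something like this. A sketch only, simulated in a temp directory with made-up file names; on a real host you'd do it from the ESXi shell with the VM powered off and unregistered, and rename the VMDKs with `vmkfstools -E old.vmdk new.vmdk` rather than `mv` so the descriptor/extent pairs stay consistent.

```shell
# Sketch only: simulated locally with fake files. On ESXi, power off and
# unregister the VM first, and use `vmkfstools -E` for the .vmdk files
# (a plain mv breaks the descriptor/extent pairing).
set -e
rm -rf /tmp/demo-vm && mkdir -p /tmp/demo-vm && cd /tmp/demo-vm
old=oldvm; new=newvm
# Fake VM folder: a .vmx that references the other files by name
printf 'scsi0:0.fileName = "%s.vmdk"\nnvram = "%s.nvram"\n' "$old" "$old" > "$old.vmx"
: > "$old.vmdk"; : > "$old.nvram"
# Rename every file, then rewrite the references inside the .vmx
for f in "$old".*; do mv "$f" "$new${f#"$old"}"; done
sed -i "s/$old/$new/g" "$new.vmx"
grep fileName "$new.vmx"   # → scsi0:0.fileName = "newvm.vmdk"
```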
|
# ? Sep 11, 2015 05:10 |
|
Martytoof posted:Ugh is there any way to programmatically rename all the files associated with a VM when you rename the VM itself? sed, something like this: http://www.yellow-bricks.com/2008/02/10/howto-rename-a-vm/
|
# ? Sep 11, 2015 14:30 |
|
Oh hey this looks cool: https://blogs.vmware.com/vsphere/2015/09/vsphere-update-manager-fully-integrated-interface-with-the-vsphere-web-client.html Let's read... quote:You read it right. As of vSphere 6.0 Update 1, the vSphere Update Manager (VUM) now has it’s interface fully-integrated in the vSphere Web Client! What does this mean for you? Now you truly have no excuse not to ditch the c# client and move directly into the Web Client! Yeah the lack of VUM isn't why people aren't using the web client. Also VUM still requires Windows. How is that still possible after all this time?
|
# ? Sep 11, 2015 20:14 |
|
Potato Salad posted:Local datastore or on a storage device? It's a folder on a local datastore. I think the SSD might just be dying though. It is weird that I can create other folders on the datastore and browse them. It's just this one folder.
|
# ? Sep 11, 2015 20:47 |
|
Number19 posted:Yeah the lack of VUM isn't why people aren't using the web client. I'm trying to use the Web Client for (basically) the first time today. I can't figure out how to do anything. I can't even add the ESX machine that is hosting the VCVA. I'm sorry to anyone who actually has to use this garbage.
|
# ? Sep 11, 2015 21:43 |
|
How does licensing for vRealize Operations work, specifically for the Performance Monitoring and Analytics side? A coworker (mistakenly) bought a 25 seat license for vRealize Operations Advanced (we have something like 500 VMs) thinking it would cover everything (???). It works, but we'd like to get into licensing compliance. We also have various VMs that really don't need much monitoring (Standard level monitoring/analytics would work fine) and critical applications that we're trying to virtualize that absolutely need application level monitoring. Can you mix and match VM licensing within a single vRealize Operations instance?
|
# ? Sep 12, 2015 02:58 |
|
Wicaeed posted:How does licensing for vRealize Operations work, specifically for the Performance Monitoring and Analytics side? You can be compliant with a 25 pack in an environment with 500 VMs; just create a user for vROps to use that only has permission to view the 25 VMs you want to monitor (KB 1036195). Otherwise, yes, you can mix in additional vROps 25 packs to get up to 500.
|
# ? Sep 12, 2015 03:27 |
|
Number19 posted:Yeah the lack of VUM isn't why people aren't using the web client. Also VUM still requires Windows. How is that still possible after all this time? Because they have no clue how anyone uses their product or how they feel about the garbage they are constantly spewing out and forcing on us.
|
# ? Sep 12, 2015 04:26 |
|
I'm actually heading for an all day meeting with VMware Austin to discuss vSphere 6. Anyone want me to drop some questions for them?
|
# ? Sep 12, 2015 05:40 |
|
theperminator posted:Because they have no clue how anyone uses their product or how they feel about the garbage they are constantly spewing out and forcing on us. Look, we want to know how you feel about our Photon container strategy. Virtualization is old legacy crap that no one cares about.
|
# ? Sep 12, 2015 06:33 |
|
DevNull posted:Look, we want to know how you feel about our Photon container strategy. (i still have vendors who don't support virtualization)
|
# ? Sep 12, 2015 15:37 |
|
It's actually happened.. I switched to the VCSA Web GUI full time. I've been using it so long that I had an opportunity to sit down at my oft-neglected Windows laptop and boot the C# GUI and I was like "ugh, what the hell is this, where is everything?" I'm not making any statements about its performance or overhead, but as far as functionality goes I think I can honestly say I'm happy with the WebUI. Just ditch the loving Flash and I'll be content. jaegerx posted:I'm actually heading for an all day meeting with VMware Austin to discuss vSphere 6. Anyone want me to drop some questions for them? No, but can you yell HTML5 WEB GUI at them repeatedly until they ask you to leave the room?
|
# ? Sep 12, 2015 16:23 |
|
Martytoof posted:No, but can you yell HTML5 WEB GUI at them repeatedly until they ask you to leave the room? One of the two guys that did the html5 host client fling is based out of the Austin office. He was working on vmkernel stuff, but does the html5 client because the vmkernel people know it needs to be done.
|
# ? Sep 12, 2015 19:51 |
|
Is there ever any hope of a vCenter-less "new VM from template" functionality addition? At least I think you need a vCenter server to be able to deploy from templates. Please correct me if I'm wrong. That's literally the only reason I have the VCSA deployed in one of our single-host deployments.
|
# ? Sep 13, 2015 00:57 |
|
Martytoof posted:Is there ever any hope of a vCenter-less "new VM from template" functionality addition? At least I think you need a vCenter server to be able to deploy from templates. Please correct me if I'm wrong. https://github.com/lamw/vghetto-scripts/blob/master/perl/ghettoCloneVM.pl
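For anyone curious, the rough idea that script automates looks like the sketch below. Simulated locally with made-up paths; on ESXi the disk copy would be `vmkfstools -i template.vmdk clone.vmdk` and registration `vim-cmd solo/registervm /vmfs/volumes/<ds>/clone/clone.vmx`.

```shell
# Simulated "poor man's clone": copy the disk, generate a fixed-up config.
# Paths and names here are fake; see the comments for the real ESXi commands.
set -e
tmpl=/tmp/tmpl-vm; new=/tmp/clone-vm
rm -rf "$tmpl" "$new" && mkdir -p "$tmpl" "$new"
printf 'displayName = "template"\nscsi0:0.fileName = "template.vmdk"\n' > "$tmpl/template.vmx"
: > "$tmpl/template.vmdk"
# On ESXi: vmkfstools -i "$tmpl/template.vmdk" "$new/clone.vmdk"
cp "$tmpl/template.vmdk" "$new/clone.vmdk"
# Rewrite names in the copied config; on ESXi, then vim-cmd solo/registervm it
sed 's/template/clone/g' "$tmpl/template.vmx" > "$new/clone.vmx"
grep displayName "$new/clone.vmx"   # → displayName = "clone"
```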
|
# ? Sep 13, 2015 04:51 |
|
Cool, I'll give that a try, thanks!
|
# ? Sep 13, 2015 17:59 |
|
Anyone have any recommendations for a cost effective, low profile, quad port, PCI-e NIC for use with ESXi 5.1, 5.5, and 6.0? Any reason not to go with Intel I350-T4s? They're ~$140 new through Amazon. They would be used in my current IBM x3650s of varying types with HP ProCurve switches, if that has any impact. I plan to transfer these cards to whatever servers I end up replacing the IBMs with - probably HP servers. e: There's this HP card (HP NC364T) that's supported by ESXi 6.0U1: http://www.amazon.com/gp/product/B000P0NX3G/ref=pd_luc_rh_sbs_02_01_t_ttl_lh?ie=UTF8&psc=1 goobernoodles fucked around with this message at 18:52 on Sep 14, 2015 |
# ? Sep 14, 2015 18:47 |
|
I'll check to see what my Dell R620s have. We ordered them all with extra quad NICs because apparently money is no object until it comes time to negotiate for salary.
|
# ? Sep 15, 2015 01:31 |
|
Shouldn't the HP card be ok? I'm pretty sure it's an Intel 82571EB or something close to that, which will be well supported by ESXi. I bought some HP NC360T dual-ports for my stuff.
|
# ? Sep 15, 2015 07:24 |
|
I would think so, but I'm not quite sure what to look for with server NICs. There's quite a large price difference between some NICs and I'm not sure if it's additional processing power/cache or features or what. Hoping someone can chime in with a "yeah, that's fine, there's no reason to spend more for features you won't use."
|
# ? Sep 15, 2015 19:44 |
|
On another note - is VSOM any good?
|
# ? Sep 15, 2015 19:47 |
|
goobernoodles posted:I would think so, but I'm not quite sure what to look for with server NICs. There's quite a large price difference between some NICs and I'm not sure if it's additional processing power/cache or features or what. Hoping someone can chime in with a "yeah, that's fine, there's no reason to spend more for features you won't use." Some NICs do iSCSI in hardware, etc. You really just need the basic hardware offload features that have been around for a decade unless you're doing crazy kernel-bypass poo poo like high-frequency traders.
|
# ? Sep 15, 2015 20:34 |
|
Vulture Culture posted:You mostly pay for QA. Ask me about the two weeks I lost to Emulex's dog poo poo handling of SFP+ twinax connections. We spent two weeks fighting with Intel X710s (yay PSODs) before just getting our VAR to replace them all.
|
# ? Sep 16, 2015 06:51 |
|
Vulture Culture posted:You mostly pay for QA. Ask me about the two weeks I lost to Emulex's dog poo poo handling of SFP+ twinax connections. I'll give the HP NICs a shot since they're waaaaaaaay cheaper. The cost of 6 of them is less than one Intel I350 from CDW. I'll test them out on my test hosts first. goobernoodles fucked around with this message at 18:10 on Sep 16, 2015 |
# ? Sep 16, 2015 18:07 |
|
goobernoodles posted:On another note - is VSOM any good? I assume you mean vSphere with Operations Management? If so, then yes, it's getting a lot better and there are a number of additional extensions you can bring in to monitor things like your storage and some of your applications. VSOM can then provide you recommendations that actually make sense instead of just the health badge and a number that's not 100% clear. It also has merged dashboards into one unified dashboard so you can get all the data from one place. If you're good with Operations Manager then you'll get a lot out of it.
|
# ? Sep 17, 2015 04:58 |
|
Ran into a really weird bug on the upgrade to vCenter 6.0u0 where the web gui wouldn't load any inventory, but the thick client worked normally and all the alarms were good, so clearly the backend still saw everything. Support didn't know what was up, and U1 came out two days later and installing that cleared it up. The 6.0 web gui really does work better than 5.5's, but it's still not great. I'd prefer to stick with the thick client if they weren't locking me out of things. On the plus side, it looks like all the update manager functionality has finally been ported to the web client, so I'll just pretend the thick client doesn't exist and suffer in silence.
|
# ? Sep 17, 2015 15:57 |
|
BangersInMyKnickers posted:Ran in to a really weird bug on the upgrade to vCenter 6.0u0 where the web gui wouldn't load any inventory, but the thick client worked normally and all the alarms were good so clearly the backend still saw everything. Support didn't know what was up and u1 came out two days later and installing that cleared it up. This happens if either the web client or the vCenter service isn't properly registered through the lookup service. The thick client connects directly to vCenter and does not use the lookup service.
|
# ? Sep 17, 2015 20:18 |
|
NippleFloss posted:This happens if either the web client or the vCenter service isn't properly registered through the lookup service. The thick client connects directly to vCenter and does not use the lookup service. Weird, I looked through the logs and didn't see any complaints coming out of lookup. The problem was slightly intermittent: it worked immediately after the upgrade to 6.0, then the server rebooted for OS updates and it stayed broken until U1. I'll keep an eye on that in case it comes back again.
|
# ? Sep 18, 2015 18:12 |
|
I got a bit of a puzzler. I think I got caught with an outlier situation that I may not be able to improve upon, but I'm not 100% sure.

Our main switch backbone is a pair of Nexus 5672UP units. 3 out of our 4 hosts are connected via 1gb links to FEXs, whereas our 4th host and our new reference architecture are connected via 10gb links directly to the Nexus. The setup is as follows: 2x10gb links for Management (active/standby), vMotion (standby/active), and VM traffic (active/active). Basically, we make sure that Management and vMotion traffic aren't competing on the same adapter unless there's a failover, and let VM traffic use both adapters normally. Then 2x10gb links for iSCSI traffic. In both pairs, one adapter is plugged into Nexus A and one into Nexus B. The storage has 4x10gb links, which gives us 8 paths. Paths are set to round robin at 1 I/O operation per path based on SAN manufacturer recommendations. Path failover has been validated by pulling network links, same with the vMotion and Management failovers.

Here's where things get odd. Nexus A puked on Friday night and eventually rebooted. I say eventually because things started going south before the reboot happened. Now, I have a physical database server connected to these same two switches, so I was able to compare what I saw on the VMware side against plain Windows 2012 R2 MPIO. For the physical Windows cluster, I saw MPIO start failing out paths around 11:57 pm; however, Windows didn't log a link loss on that adapter until 12:02 am, so the switch must have hung or otherwise gone south 5 minutes before it rebooted. The cluster handled this well. Paths died, latency increased a bit, but the OS didn't lose access to volumes and things recovered once the switch was back online. It was exactly what you would expect.

The picture on the VMware side isn't as clear. I see some events logged that say "Lost access to volume <GUID> due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly." However, I also see other events just saying that a path is degraded, so I don't know for sure that volume access was actually lost for the host with the 10gb adapters. Does it log that "Lost access to volume" message whenever a path fails, even if it hasn't truly lost full access to the volume? I did have some VMs fail ping checks, but that could have been on the network side of things and not IO related. I don't really see any evidence in those VM event logs that they had any severe IO stalls.

Management access to the server died for the 5 minute period between 11:57 and 12:02. Since failover is on link failure, that seems to make sense: if the switch was up but not processing traffic during that period, VMware likely didn't have a reason to fail over the link. Beacon probing wouldn't necessarily have helped either, since with only two adapters it wouldn't have a tiebreaker. As soon as the link went down when the switch rebooted on its own, management failed over to the other interface and was up again. I then saw other failure events about 6 minutes later, when I think the link was failing back.

So, I think for the most part the configuration I have is valid and follows best practices, and I really didn't have any real impact from the switch crash and reboot; I just got some unexpected results from a strange failure mode on the switch. That's a whole other issue, and I look forward to wrangling Cisco over it. So, I guess rant done. The only real question is whether multipathing worked properly in this instance and whether any volumes were really impacted beyond a few stalls during path failovers.
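If anyone hits something similar: grepping the host's vobd.log after the fact can help separate a real loss of volume access from a transient path flap. The sample log lines below are fabricated for illustration, but the `esx.problem.vmfs.heartbeat.*` event names are the ones ESXi logs; on a real host you'd grep `/var/log/vobd.log` instead of this mock file.

```shell
# Mock vobd.log: timestamps and identifiers made up, event names real.
log=/tmp/vobd.log
cat > "$log" <<'EOF'
2015-09-18T23:57:02Z: [scsiCorrelator] vob.scsi.scsipath.pathstate.dead: vmhba33:C0:T1:L0
2015-09-18T23:57:05Z: [vmfsCorrelator] esx.problem.vmfs.heartbeat.timedout: datastore01
2015-09-18T23:57:09Z: [vmfsCorrelator] esx.problem.vmfs.heartbeat.recovered: datastore01
EOF
# A timedout followed quickly by a recovered means access paused but came
# back; guests only notice if the gap exceeds their SCSI timeout (roughly
# 30-180s depending on the OS and whether VMware Tools raised it).
grep -c 'heartbeat.timedout' "$log"    # → 1
grep -c 'heartbeat.recovered' "$log"   # → 1
```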
|
# ? Sep 22, 2015 23:55 |
|
bull3964 posted:I got a bit of a puzzler. I think I got caught with an outlier situation that I may not be able to improve upon, but I'm not 100% sure. Failover to standby paths or controller failover can cause the "lost access to volume" message, since there is a period of time where the device is unavailable while VMware either waits for recovery on the primary path or waits for the storage controller to complete failover and begin serving data again. As long as it reconnects within the SCSI timeout value of the guests it's generally going to be fine, though you may see some latency spikes on guests during the interval. What storage are you using? On the network side, the appeal of LACP over simple active/standby adapters in the port group is that LACP can detect that sort of ghost switch problem where physical link is still active, so link state detection does not cause a failover, but the switch is not actually processing or passing data.
|
# ? Sep 23, 2015 05:33 |
|
Storage was a combination of Pure and NetApp. Yeah, the "loss of access to volume" message is what threw me, but I couldn't find any evidence in the guests that storage was interrupted in any way. In all, the incident was less severe than I initially thought (other than a Cisco Nexus switch hanging and rebooting for no reason), and all the failovers and multipathing worked as intended to the best of their abilities. The brief loss of management access was annoying and set alarms off like crazy, but it really didn't cause any issues for the most part. Though I don't know if this would have been the case if ALL of the hosts had lost management access; I assume that would have caused HA to go into a bit of freakout mode. Still, the VMs should have been ok. LACP isn't an option for me unfortunately; that requires VDS and Enterprise Plus, and we are only on Enterprise. Heh, it would actually be cheaper to get a few 10g FEXs and let the switch handle LACP internally, since the FEXs are all cross-connected with the Nexus switches. We didn't see any evidence of network connectivity being interrupted on the hosts connected via the 1g FEXs, and they only lost half their paths to storage (due to the controllers losing half their connections) rather than the 75% of paths that went down on the 10g connected host (due to both half of the controller connections being down and 1 of the 2 host connections being down). bull3964 fucked around with this message at 05:59 on Sep 23, 2015 |
# ? Sep 23, 2015 05:44 |
|
Weird that you have a Nexus issue; our 5548UP switches have been up since May of 2012 without interruption. We probably aren't taxing them, but we have had zero stability issues.
|
# ? Sep 23, 2015 05:56 |
|
adorai posted:weird that you have a nexus issue, our 5548up switches have been up since may of 2012 without interruption. Threw me too. It was up for 340ish days without incident (they were installed at the end of October 2014). I haven't dug into them too deeply yet since I was trying to get my head around the behavior of the VMware host and storage. Only the A switch rebooted; the B switch is still chugging. 'show system reset-reason' shows unknown and 'show cores' comes up with nothing at all, so no core was created. It's been the month of odd hardware failures. Our 6 month old NetApp SAN had a controller die a few weeks ago. Dead dead. Like, was reset by watchdog and wouldn't power on dead. Only the service processor was functional. Took the tech 3 1/2 hours to replace, since most of the instructions for replacing the controller assume it's up in some shape or form. So, he was on the phone with NetApp pulling info from the AutoSupport check-ins. Meanwhile, the instructions they gave the tech and the instructions they had posted online for the controller replacement were totally different. The guy on the phone was telling the guy on site to run commands with switches that were for other commands. It was a clusterfuck. My first experience with NetApp support was not... glowing or confidence inspiring. bull3964 fucked around with this message at 06:13 on Sep 23, 2015 |
# ? Sep 23, 2015 06:07 |
|
Out of curiosity, how many vmk interfaces do you use for iSCSI? They will have actual IPs assigned to them, not just the physical uplinks. What's your vSwitch config look like?
|
# ? Sep 23, 2015 16:47 |
|
1000101 posted:Out of curiosity, how many vmk interfaces do you use for iSCSI? They will have actual IPs assigned to them, not just the physical uplinks. What's your vSwitch config look like? We have a vSwitch for iSCSI with two 10gb adapters in it. One vmk has the first adapter active and the second adapter unused, and the other vmk has the second adapter active and the first adapter unused. Pretty much bog standard config.
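For reference, that 1:1 vmk-to-uplink binding is set with esxcli like this; all the names here (portgroups, vmnics, vmks, the vmhba) are placeholders for whatever your host actually has.

```shell
# Pin each iSCSI portgroup to a single active uplink (leave the other out of
# both the active and standby lists so it ends up unused, not standby)
esxcli network vswitch standard portgroup policy failover set -p iSCSI-A --active-uplinks vmnic4
esxcli network vswitch standard portgroup policy failover set -p iSCSI-B --active-uplinks vmnic5
# Bind both vmk ports to the software iSCSI adapter so each gets its own session
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```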
|
# ? Sep 23, 2015 22:32 |
|
Finished my update to vCenter Server 6.0 Update 1. Now I'm trying to roll out vRealize/Hyperic for a VM OS/app monitoring POC. Does the Hyperic agent really only run on Ubuntu 10, and not 12/14?
|
# ? Oct 3, 2015 01:09 |
|
Wicaeed posted:Finished my update to vCenter Server 6.0 Update 1 Can always throw it in a Docker container.
|
# ? Oct 3, 2015 04:35 |
|
Vulture Culture posted:Can always throw it in a Docker container. I ended up taking that route with HP's old-generation (G6) management tools, which require 32-bit Ubuntu 10.04. Note for others: privileged (and super-privileged) containers are a clusterfuck.
|
# ? Oct 3, 2015 06:52 |
|
Maybe I missed a thread on this, but does anyone have any recommendations or experience with setting up/running a private cloud with Azure Stack/Azure Pack? I found a few TechNet articles and blog posts online about setting this up, but was curious if there are any shops out there that have deployed this, particularly a hosted VM infrastructure using the Azure Pack frontend. Essentially I am looking at migrating away from Citrix XenServer to Hyper-V on my hosting backend, and trying to leverage a good frontend portal that will allow customer self-service and automated billing, particularly with VM hosting in mind.
|
# ? Oct 4, 2015 20:33 |
|
vCenter 6.0u1 conflicts with AppAssure because VMware helpfully decided to run a Python process on one of the ports Dell officially reserved, and I'm still waiting on resolution to that. Until then, I get to enjoy a million Schannel errors and intermittent vCenter failures as IIS and Python duke it out for control of TCP 8006.
|
# ? Oct 5, 2015 21:45 |