- This topic has 9 replies, 8 voices, and was last updated December 14, 2018 by Amy M.
Using the VMware Update Manager with Zerto
Jan G, May 23, 2017 01:40:34 PM
We are currently trying to update our vSphere infrastructure using the VMware Update Manager.
The expected behavior when we remediate a host or cluster is that the host is put into maintenance mode and then patched.
But Zerto's VRAs running on the ESXi hosts seem to prevent the hosts from entering maintenance mode.
This only occurs when maintenance mode is triggered by the VMware Update Manager. If we trigger maintenance mode on the host manually, the VRA is shut down and the host enters maintenance mode correctly.
Any advice on how to fix this is highly appreciated.
Jan G, May 23, 2017 02:53:56 PM
We have already found a setting in the documentation which says we have to enable
“Allow Zerto to always enter hosts to maintenance mode during remediation.”
under “Policies -> VMware Virtual Update Manager”. But after changing this setting, the situation remains the same. Remediating the hosts raises an error:
“VMware vSphere Update Manager had an unknown error. Check the events and log files for details.”
The Details show:
“Cannot remediate host xxx because it cannot enter maintenance mode.”
Harry S, May 26, 2017 02:03:20 PM
What is the reason the host will not enter maintenance mode? For clarification, are you saying it is due to the VRAs, or are there some VMs preventing maintenance mode because they cannot vMotion? vSphere has a map view that shows vMotion compatibility, so it may be worth checking there. Of course, if the VRA is the last VM on the host when it is trying to enter maintenance mode, then obviously we need to look there. 🙂
Thanks!
Follow me: www.twitter.com/HarrySiii
Jan G, May 31, 2017 12:57:08 PM
We’re talking about the VRAs, which are not shut down when we trigger the remediation.
When remediation is triggered, the first host is processed by the Update Manager and all VMs are moved via vMotion/DRS to the other hosts in the cluster.
Except for the VRA/VRAH.
The process then gets stuck at the point where only the VRA (and VRAHs) remain on the host, still powered on.
When entering maintenance mode directly instead of remediating from the Update Manager, we see two different behaviours:
On Site A the VRA is shut down and the host enters maintenance mode successfully.
On Site B the VRA is not shut down. The process stays at “19% Enter Maintenance Mode” and runs into a timeout.
Jan
Brian S, June 26, 2017 12:59:50 PM
I have seen similar issues in my environment as well. When we attempt to patch via VUM, it is really hit or miss whether the VRAs will power off; most of the time I eventually have to power them off manually before the VUM job fails.
Jason C, January 9, 2018 04:33:39 PM
Is there a solution to this yet? We have similar issues – VUM remediation fails because it cannot put the hosts in maintenance mode. The VUM logs confirm that the VRA is the VM causing the fault because it is pinned to a host.
However, we can manually put a host in maintenance mode and everything works fine. We have also verified the Update Manager box is checked in the Site Settings -> Policies section.
vCenter and ESXi 6.0 U2, ZVM 5.5 U3
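The manual workaround several posters describe (powering off the stuck VRA so the host can finish entering maintenance mode) can be scripted as a stopgap until the root cause is fixed. Below is a minimal pyVmomi sketch, not anything confirmed in this thread: the vCenter address, credentials, host name, and the "Z-VRA" name prefix are all assumptions you would adjust for your environment.

```python
def find_vras(vm_names, prefix="Z-VRA"):
    """Return the VM names that look like Zerto VRAs.

    By default Zerto names its VRAs "Z-VRA-<host>"; adjust the prefix
    if your VRAs are named differently (this prefix is an assumption).
    """
    return [name for name in vm_names if name.startswith(prefix)]


def power_off_vras(vcenter, user, pwd, host_name):
    """Hypothetical helper: hard power-off the VRAs left on one host.

    Requires `pip install pyvmomi`; imports are kept inside the
    function so the name filter above stays usable without it.
    """
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == host_name)
        vra_names = find_vras([vm.name for vm in host.vm])
        for vm in host.vm:
            if vm.name in vra_names and vm.runtime.powerState == "poweredOn":
                # Hard power-off; Zerto will restart the VRA after the
                # host leaves maintenance mode.
                vm.PowerOffVM_Task()
    finally:
        Disconnect(si)


if __name__ == "__main__":
    # Dry-run of the name filter only; no vCenter connection is made.
    print(find_vras(["Z-VRA-esx01.lab.local", "app01", "Z-VRA-esx02.lab.local"]))
```

Keep in mind this bypasses a graceful VRA shutdown, so treat it as a last resort during a patch window rather than a routine step.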