Using the VMware Update Manager with Zerto

  • We are currently trying to update our vSphere infrastructure using VMware Update Manager.

    The expected behavior when we remediate a host or cluster is that the host is put into maintenance mode and then patched.

    But Zerto’s VRAs running on the ESXi hosts seem to prevent the hosts from entering maintenance mode.

    This behavior only occurs when maintenance mode is triggered by VMware Update Manager. If we trigger maintenance mode on the host manually, the VRA is shut down and the host enters maintenance mode correctly.
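    (For reference, “manually” here can also mean via the vSphere API. A minimal pyVmomi sketch of what we do, with placeholder vCenter address, credentials, and host name:)

        import ssl

        from pyVim.connect import SmartConnect, Disconnect
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        # Placeholder vCenter address and credentials (lab use only:
        # certificate verification is disabled).
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local", pwd="***",
                          sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        # Look up the host by name (placeholder).
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in hosts.view if h.name == "esxi01.example.com")

        # timeout=0 means no timeout. In our case this path shuts the VRA
        # down and completes, unlike the VUM-triggered remediation.
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
        Disconnect(si)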


    Any advice on how to fix this is highly appreciated.

    Update:

    We’ve already found a setting in the documentation which says we have to enable

    “Allow Zerto to always enter hosts to maintenance mode during remediation.”

    under “Policies -> VMware vSphere Update Manager”. But after changing this setting, the situation remains the same. Remediating the hosts raises an error:

    “VMware vSphere Update Manager had an unknown error. Check the events and log files for details.”

    The Details show:

    “Cannot remediate host xxx because it cannot enter maintenance mode.”

    Hi Harry,

    We’re talking about the VRAs, which are not shut down when triggering the remediation.

    When we trigger the remediation, Update Manager starts with the first host, and all VMs are moved via vMotion/DRS to the other hosts in the cluster.

    Except for the VRA/VRAH.

    Then the process gets stuck at the point where only the VRA (and VRAHs) remain on the host and stay powered on.
    When entering maintenance mode directly instead of remediating from Update Manager, we see two different behaviours:
    On Site A the VRA is shut down and the host enters maintenance mode successfully.
    On Site B the VRA is not shut down. The process stays at “19% Enter Maintenance Mode” and runs into a timeout.

    Thanks!
    Best regards,
    Jan


    I have seen similar issues in my environment as well. When we attempt to patch via VUM, it is really hit or miss whether the VRAs will power off; most of the time I eventually have to power them off manually before the VUM job fails.

    Is there a solution to this yet?  We have similar issues – VUM remediation fails because it cannot put the hosts in maintenance mode.  The VUM logs confirm that the VRA is the VM causing the fault because it is pinned to a host.

    However, we can manually put a host in maintenance mode and everything works fine.  We have also verified the Update Manager box is checked in the Site Settings -> Policies section.

    vCenter and ESXi 6.0 U2, ZVM 5.5 U3

    I am also having the same issue: if I power off the VRA, Update Manager can put the host into maintenance mode and remediate, but without powering off the VRA, Update Manager cannot remediate.

    Is there any option to fix this?

    Same here. Despite setting the “Allow Zerto to always enter hosts to maintenance mode during remediation” option in Site Settings, Update Manager is still unable to move hosts into maintenance mode.

    On vSphere 6.5, Update Manager remediation already fails in the pre-check phase: it detects a pinned, powered-on VM (the VRA) and then refuses to even try entering maintenance mode on any of the hosts.

    I’m tempted to just power off all VRAs in the cluster and then run Update Manager.
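    Something along these lines would do it. A rough pyVmomi sketch, assuming the VRAs follow Zerto’s default “Z-VRA-” naming; it hard powers them off just like doing it in the UI, and replication for VMs on the affected hosts is paused while the VRAs are down, so treat it as an illustration rather than a supported procedure:

        import ssl

        from pyVim.connect import SmartConnect, Disconnect
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        # Placeholder vCenter address and credentials.
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local", pwd="***",
                          sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        vms = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)

        # Assumption: the VRAs use Zerto's default "Z-VRA-<hostname>" naming.
        for vm in vms.view:
            if (vm.name.startswith("Z-VRA")
                    and vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn):
                print("Powering off", vm.name)
                WaitForTask(vm.PowerOffVM_Task())  # hard power-off, as in the UI

        Disconnect(si)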

    Glad I found this because I’m about to try some patching via VUM (vSphere 6.5U1). I assume I’ll have the same issues and have to power off the VRAs manually. Any update from Zerto on this behavior, and maybe a fix?

    Thanks for letting us know you are going to be using VUM with Zerto. We have an example script on our Tech Alliances GitHub that you can use with VUM: https://github.com/Zerto-TA-Public/Zerto_Scripts . We are investigating better support for VUM on our roadmap.

    Amy Mitchell

    Zerto Product Manager
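    For anyone scripting this against the vSphere API directly, the per-host sequence such a script automates (power off the host’s VRA, enter maintenance mode, remediate, exit maintenance mode, power the VRA back on) might look roughly like this in pyVmomi. This is a sketch under the same “Z-VRA-” naming assumption as above, not the script from the linked repository:

        import ssl

        from pyVim.connect import SmartConnect, Disconnect
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        HOST_NAME = "esxi01.example.com"  # placeholder

        # Placeholder vCenter address and credentials.
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local", pwd="***",
                          sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in hosts.view if h.name == HOST_NAME)

        # Assumption: the VRA on a host is named "Z-VRA-<hostname>".
        vra = next((vm for vm in host.vm if vm.name.startswith("Z-VRA")), None)

        if vra and vra.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            WaitForTask(vra.PowerOffVM_Task())  # hard power-off, as in the UI

        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))

        # ... remediate / patch the host here ...

        WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
        if vra:
            WaitForTask(vra.PowerOnVM_Task())

        Disconnect(si)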
