Hi, I’m hoping this is a basic issue! When we created our VPGs we selected the entire cluster as the recovery host, hoping that during ESX host maintenance the VPG would use another host in the cluster for replication and recovery. Today I placed a host in maintenance mode, and all VPGs associated with that host went into an errored state. Is this normal? We have multiple hosts in the cluster, and I’d ideally like to perform ESX host upgrades and patching without impacting DR replication.
Many thanks in advance for any advice.
James – The experience you had is normal. Every protected VM is pinned to a target VRA on a single host. Selecting the cluster in your configuration simply tells Zerto that it may use any host in that cluster, but it still chooses one specific host/VRA.
That said, there is still an easy way to perform maintenance on your hosts: first move recovery VRA operations to another host. To do this, go to the Setup tab, select the host in question, pull down the “More” menu, and select “Change VM Recovery VRA.” Then select another VRA to move protection to.
Once that completes, those VPGs will perform a quick bitmap sync and remain recoverable from then on. You can then perform maintenance on your host.
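If you have many hosts to cycle through, the same move can also be driven from a script against the Zerto REST API instead of the UI. Here is a minimal sketch that only builds the request; the endpoint path and payload field names are placeholders I’m assuming for illustration, so check the Zerto REST API documentation for the actual resource names before using anything like this:

```python
def build_change_recovery_vra_request(zvm_base_url, source_vra_id, target_vra_id):
    """Build the URL and JSON body for moving recovery operations
    from one VRA to another.

    NOTE: '/v1/vras/.../changerecoveryvra' and 'TargetVraIdentifier'
    are hypothetical names used for illustration -- consult the Zerto
    REST API docs for the real endpoint and schema.
    """
    url = f"{zvm_base_url}/v1/vras/{source_vra_id}/changerecoveryvra"
    body = {"TargetVraIdentifier": target_vra_id}
    return url, body


# Example: prepare the call for moving protection off the host
# being patched (you would then POST this with your HTTP client
# and an authenticated session against the ZVM).
url, body = build_change_recovery_vra_request(
    "https://zvm.example.com:9669", "vra-host1", "vra-host2"
)
```

Scripting it this way lets you loop over every VRA on the host before remediation, rather than clicking through the Setup tab per host.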
Hope this helps.
Thanks for the question, James C, and your answer, Matthew C!
I wanted to add that if you’re running ZVR 5.0, you can also take advantage of ZVR’s support for ESXi host Maintenance Mode: protected VMs can migrate to other available hosts, and the VRA will power down once the VPGs are offloaded.
Hope this helps!
Hi Harry –
We were upgrading ESXi hosts from 6.0 Update 2 to Update 3 last night and noticed that when you remediate a host, the VRAs don’t automatically shut down. If we put the host into maintenance mode, they did.
Is this expected behavior? If so, are there any plans to support the VRAs when a remediate operation is started?
What is the benefit of selecting the cluster if Zerto will not migrate to another VRA in that cluster when there is a failure?