We create VPGs by pointing them at resource pools rather than individual hosts. I’m finding that Zerto does not spread VMs across hosts within a cluster when doing this. I set one up today that placed 3 of 4 large, busy VMs on a single host rather than spreading them across the cluster. Replication traffic is pounding one VRA instead of taking advantage of the others that are idle.
Hi Ron – I think how it works is this: when you set the default to the cluster rather than a specific host (I do the same thing, by the way), you’re just saying that any host can be used. I believe (someone from Zerto can double-check me on this) that a host is then selected at random and written into the actual config. There is no evaluation of load or utilization; it simply selects any host that is online and has a VRA.
You can easily customize this both during VPG creation and afterwards. Just edit the VPG and change the destination host to the one you want.
I agree with your feature request. It would be nice if it did some sort of baseline evaluation of host resources before assigning a host – a simple calculation to evenly spread VPGs, or a check of % used CPU/RAM, something like that.
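To illustrate, the kind of heuristic I have in mind could be as simple as the sketch below. This is purely hypothetical – the host names, the `protected_vms` count, and the `cpu_pct` metric are made-up illustrations, not anything exposed by the Zerto API – but it shows how little logic an initial-placement pass would need:

```python
# Hypothetical placement heuristic -- field names and metrics are
# illustrative only, not part of any real Zerto interface.
def pick_host(hosts):
    """Pick the host whose VRA protects the fewest VMs, breaking
    ties by lowest CPU utilization."""
    return min(hosts, key=lambda h: (h["protected_vms"], h["cpu_pct"]))

hosts = [
    {"name": "esx1", "protected_vms": 3, "cpu_pct": 60},
    {"name": "esx2", "protected_vms": 1, "cpu_pct": 40},
    {"name": "esx3", "protected_vms": 1, "cpu_pct": 25},
]
print(pick_host(hosts)["name"])  # esx3
```

Even a greedy pick like this would avoid stacking 3 of 4 VMs on one VRA, since each placement would update the counts and steer the next VM elsewhere.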
Hope that helps.
Thanks for the reply. Yes, I’m aware that we can move them to other hosts after the fact, but the initial seed of data is painful when one VRA is handling the whole operation and the others are sitting around like a highway road crew leaning on their shovels.
I don’t want to have to modify every VM in the VPG to point it at a specific host prior to replication, but maybe that is what we have to do. We are on version 5.0U1.
You are correct that Zerto does not check the load of a VRA or ESX host before placing a VM’s replicated disks on it.
The other thing to keep in mind is that when you run a failover test, or a live failover (before it is committed), the VRA is also involved in the read/write process. So you will certainly want to manually load-balance those workloads between ESX hosts and VRAs for the best results.
Please note that if it were a real disaster, and you commit to the failover, the VRA is no longer involved and cannot cause any issues with the production workload. (It’s only involved when you are running in a non-committed state.)
This is a matter of scale. Doing a lot of manual VM-by-VM configuration doesn’t work well in larger shops. The ZVMs already know how many VPGs and VMs are currently using the VRAs on a given host, so it doesn’t seem like much of a stretch for them to perform some initial placement distribution.