
How to Perform ZVM Connectivity Tests for Zerto Version 7.5 and Below



This KB provides an overview of how to verify that ZVM-to-ZVM connectivity works as required between two ZVMs on Zerto version 7.5 and below. These checks must be performed bidirectionally.

Note: Zerto versions 8.0 and above use port 9071 for ZVM-to-ZVM communication, as opposed to port 9081 in 7.5 and below.


Note: Each of the tests demonstrated below should be performed from both ZVMs to ensure bidirectional communication is working properly.

Before we begin, we must rule out the possibility that the ZVM NICs are offline. This can be checked in the Network Connections panel (Run > ncpa.cpl), which lists all of the VM's NICs. Make sure the relevant NICs are not offline and are running properly.

Ping & Packet Loss

  1. Validate that the ICMP protocol is enabled. 

  2. Ping the peer ZVM by running the ping command:

    1. ping -n 30 <peer ZVM IP>

    2. If the test fails while ICMP is enabled, there is a communication issue between the ZVMs.

  3. If the test passes and there is a connection, validate that there is no packet loss. If the result shows anything above 0% packet loss, you will need to consult with your network team. The screenshot below shows an example of output with packet loss:

Figure A
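When automating this check, the loss percentage can be parsed straight out of the ping summary. A minimal sketch, assuming the standard English-locale output format of Windows ping (the sample string below is illustrative, not taken from a real ZVM):

```python
import re

def packet_loss_percent(ping_output: str) -> int:
    """Extract the packet-loss percentage from Windows ping statistics."""
    match = re.search(r"\((\d+)% loss\)", ping_output)
    if match is None:
        raise ValueError("no loss statistics found in ping output")
    return int(match.group(1))

# Illustrative statistics line as printed by `ping -n 30 <peer ZVM IP>`
sample = "Packets: Sent = 30, Received = 27, Lost = 3 (10% loss),"
print(packet_loss_percent(sample))  # → 10; anything above 0 needs the network team
```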


MTU

MTU stands for Maximum Transmission Unit. This is the maximum length of a packet that can be sent through the network. If the MTU value is too high or too low, we could experience connection or performance issues. First, we will want to determine the currently configured MTU size for each of the ZVMs, so we can test it later. We will run the following command in CMD:

netsh interface ipv4 show subinterfaces

We will see a result like the figure below:

Figure B

Now that we have the MTU value, we will decrease it by 28 bytes (the 20-byte IP header plus the 8-byte ICMP header).
In this example, we will use a packet size of 1472 (1500 - 28). The next command will be a ping with a custom packet size of 1472:

ping -f -l 1472 <Peer ZVM IP>

If we don’t get a proper response (as in the normal ping attempt), we will need to keep decreasing the packet size until we do. In the figure below, we can see that a packet size of 1472 (1500 - 28) is too high for the environment, and that we will need to decrease it:

Figure C

Once the value is found, we will increase it by 28 and use the following command to set the MTU persistently. The interface name will be the one we found in Figure B:

netsh interface ipv4 set subinterface "<Interface name>" mtu=<MTU> store=persistent

Once the command has been executed, we should see the updated MTU value in the output of the “netsh interface ipv4 show subinterfaces” command, and we should see improvements in ZVR performance within a matter of minutes.
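The decrease-and-retry search described above can also be scripted. A minimal sketch, where ping_ok is a hypothetical stand-in for running ping -f -l <size> <peer ZVM IP> and checking for a proper reply (a binary search is used instead of stepping down one size at a time, which keeps the number of pings small):

```python
def largest_unfragmented_payload(ping_ok, low=0, high=1472):
    """Binary-search the largest `ping -f -l <size>` payload that succeeds.

    ping_ok(size) must return True when a ping with that payload size gets a
    proper reply without fragmentation; here it is a hypothetical stand-in
    for a real call to the Windows ping command.
    """
    best = low
    while low <= high:
        mid = (low + high) // 2
        if ping_ok(mid):
            best = mid       # this size works; try something larger
            low = mid + 1
        else:
            high = mid - 1   # too big; try something smaller
    return best

# Simulated network path that fragments any payload larger than 1372 bytes
simulated = lambda size: size <= 1372
payload = largest_unfragmented_payload(simulated)
print(payload, payload + 28)  # → 1372 1400: payload plus 28 header bytes = path MTU
```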

Please see this KB for more details on MTU testing from the VRA perspective as well.


Telnet

The telnet command checks whether a specific port is open for communication. This test verifies both that the firewall allows the communication and that the target is actively listening on the specified port. We will need to telnet to the peer ZVM from the source ZVM over port 9081 with the following command:

telnet <ZVM IP> 9081

This test should result in a blank black screen under good, bi-directional communication circumstances.

Make sure that we get the blank screen for all telnet attempts from both sites. If the test fails, please contact your networking team to further diagnose and troubleshoot the telnet failures. One suggested next step is to temporarily disable the Windows Firewall on both ZVMs and try again.
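If the Telnet client is not installed (it is an optional Windows feature), the same check can be scripted with a plain TCP connection. A minimal sketch; the peer address below is a placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (the scripted
    equivalent of telnet reaching its blank screen), False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder peer address; 9081 is the ZVM-to-ZVM port on 7.5 and below
print(port_open("192.0.2.10", 9081))
```

Run this from both ZVMs, pointing at the opposite peer, to cover the bidirectional requirement.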


Tracert

The tracert command is a Command Prompt command whose name stands for “trace route”. It will attempt (depending on firewalls) to show every “hop” between the source and the target host. Usually, we can use this to see where the connection drops:

tracert <Peer ZVM IP>

For example, here is a successful attempt to trace the route:

Figure D

We can see that the packet went through a router and then reached the destination.

If we were to disable ICMP on the destination server, this would be the result:

Figure E

We can determine in this case that the issue occurs somewhere after the packet passes the router.
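When sharing tracert results with the networking team, it can help to extract the last hop that answered. A hedged sketch that parses saved output, assuming the standard English-locale Windows tracert format (the sample text is illustrative):

```python
import re

def last_responding_hop(tracert_output: str):
    """Return (hop_number, address) of the last hop that answered,
    based on the standard Windows tracert output format."""
    last = None
    for line in tracert_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if m and "Request timed out" not in m.group(2):
            # final column of a responding hop line is its name or IP
            last = (int(m.group(1)), m.group(2).split()[-1])
    return last

sample = """
  1    <1 ms    <1 ms    <1 ms  10.0.0.1
  2     *        *        *     Request timed out.
  3     *        *        *     Request timed out.
"""
print(last_responding_hop(sample))  # → (1, '10.0.0.1'): the trace died after hop 1
```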