- This topic has 14 replies, 7 voices, and was last updated June 28, 2022 by Adam C.
New Zerto 9 Error Reporting
I just upgraded my Zerto to version 9. It is replicating across a private WAN link to another private site. Both sites are VMware 7.0U2. Before the upgrade I had no error reports of any kind. Now I am getting a warning that disk latency at the target site is 4ms or higher and has occurred 4 times in the last 22 hours. The disk is reporting a latency of 0ms. The WAN connection can sometimes be flaky, so is this an indication of that rather than actual disk latency? Again, no issues with Zerto 8.5U3; this just started with version 9.
James G, July 27, 2021 12:49:16 PM
We are getting the same message on all of our self-protected machines. I assume it is due to the storage being slightly underpowered. I have asked for a tweak to disable this, as I don’t really care about latency on self-protected VPGs. Also, v9 was pulled due to issues, so we may need to wait for an update to be released.
I rebooted my target ZVRA and these errors went away. I have no other errors right now, so I will live with version 9 as I do not want to rebuild my VPG farm.
I installed Zerto 9 with the latest release at a different location and hit the same issue: false disk latency reporting for the target. Anyone else seeing this?
Chris H, August 9, 2021 06:34:02 PM
Same issue here: upgraded Zerto from 8.5U2 to 9.0, running vSphere 7.0U2, last Wednesday (8/4). We have 3 hosts in our DR site and had this warning on 2 of the 3 on Thursday (8/5). On Sunday (8/8) the 3rd host got the warning. We have 2 VPGs that replicate data to another host on the source side, and neither of these has the warning. Only the 3 hosts in our DR site.
I finally got Zerto support to give me a ZVRA tweak process that turns the High Disk Latency error off. I think the new 9.0 reporting can’t distinguish between disk latency and a communications delay, so it just calls it a disk latency issue. My target systems were reporting 0.023ms disk latency but still reported this error, since the target was across a WAN connection.
Greg W, August 13, 2021 01:42:44 PM
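The alert behavior described above (warn once latency at or above a threshold has been observed a certain number of times within a trailing window) can be sketched roughly as follows. This is a hypothetical illustration only, not Zerto's implementation; the class name and all thresholds are assumptions based on the numbers quoted in this thread:

```python
from collections import deque

class LatencyAlert:
    """Hypothetical sketch: alert when latency >= threshold_ms has been
    seen at least min_hits times within the trailing window_hours."""

    def __init__(self, threshold_ms=4.0, min_hits=4, window_hours=22):
        self.threshold_ms = threshold_ms
        self.min_hits = min_hits
        self.window_s = window_hours * 3600
        self.hits = deque()  # timestamps (seconds) of threshold breaches

    def observe(self, now_s, latency_ms):
        # Record a breach, expire entries outside the window,
        # then report whether the alert condition holds.
        if latency_ms >= self.threshold_ms:
            self.hits.append(now_s)
        while self.hits and now_s - self.hits[0] > self.window_s:
            self.hits.popleft()
        return len(self.hits) >= self.min_hits
```

Note that under this kind of logic an alert keeps firing until enough breaches age out of the window, which may explain alerts lingering well after the last observed latency spike, as reported later in the thread.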
Care to share the tweak? These messages are really annoying.
The TweakTool has to be downloaded with a password from Zerto Support. If you stick to your guns and tell them you need it, they finally give it up. When you run the TweakTool and query the ZVRAs, if they don’t show up, you need to reboot the ZVRA (which is not a big deal); after that the ZVRA answers the tool and you can add the tweak.
Marcin M, September 23, 2021 06:12:25 AM
I have the tweak tool already as I had to disable driver false positive errors on our VRAs. Can you share the name of the tweak? I assume the value is “false”.
I hate v9.0U1. If I could, I would go back to 8.5U3 right now.
Exactly as it shows, and False.
Marcin M, September 23, 2021 02:20:52 PM
Thanks, Jay!
Carlos C, October 25, 2021 07:30:27 PM
t_queryDriverAlerts=false

WARNING: Driver I/O cache ran out of memory on host xxxx. This may impair replication performance. Current I/O cache size: 96 MB. Last full I/O cache memory occurrence: 9 hours, 46 minutes and 52 seconds ago. Full I/O cache memory has occurred 1
WARNING: has reduced storage throughput due to high storage latency. This may impair replication performance.
Thanks for the note. I discovered this error once we went to 9.0U1 as well. Both errors occur on 9.0U3, so both tweaks need to be applied on install now.
Adam C, June 28, 2022 06:09:39 PM
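For reference, the two tweaks mentioned in this thread, both set to false (the names are exactly as quoted by posters above; confirm them with Zerto Support before applying, since tweak availability appears to vary by version):

```
t_queryDriverAlerts=false
t_isEnabledHighStorageLatencyAlert=false
```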
We just moved to ZVR 9.0U4 and got this error on most of our VRAs immediately after the VRAs were upgraded (the VPGs were bitmap syncing, i.e., datacomm line congestion, not a storage bottleneck, as others above report).
Which would be fine, except the alerts are still on 2+ hours after the last “latency” observed by ZVR. Of course, the documentation on the error is lacking and doesn’t indicate how long before the alarm disappears or remediates itself.
So we’ll disable it via tweak as well. Thanks for another PITA, Zerto!
Adam C, June 28, 2022 06:19:10 PM
Of course I get an error even after rebooting VRAs:
Tweak ‘t_isEnabledHighStorageLatencyAlert’ does not exist.
Great, another support interaction!