
Offsite clone stuck at 10% – No errors

  • I’m testing this functionality between 2 vCenters in the same physical location, using the same network.

    The 2 ZVMs are running the latest patch version of Zerto 8.0 with an O2M license.

    When I start an offsite clone of the (already successfully synced) VPG it gets to 10% and hangs indefinitely without any errors. There is more than enough space in the destination datastore (~16TB free, the VMs in the VPG are less than 200GB).

    I’m guessing I have something configured incorrectly, but I’m not sure what it is. I performed a test failover of the VPG without issues; it’s just the clone that isn’t working.

    Any insight / advice would be appreciated 😀

    Hi,

    This is Kalsang from Zerto.

    I’m not sure; it could be due to VRA resources, or the VRA being busy at the time. But were you able to clone it from vCenter?

    Could you try forcing a sync of the VPG and then cloning from a different checkpoint? If that doesn’t work, I suggest opening a support ticket.
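    If it helps, here is a rough Python sketch of listing a VPG’s checkpoints through the ZVM REST API so you can pick an older one when you run the clone. The port (9669), the endpoint paths and the field names are what I’d expect from the 8.x API documentation, so treat them as assumptions and verify them against the API docs for your exact ZVM version:

        import requests
        import urllib3

        urllib3.disable_warnings()  # the ZVM certificate is usually self-signed; lab use only

        ZVM = "https://zvm.example.com:9669"                     # placeholder ZVM address
        USER, PWD = "administrator@vsphere.local", "********"    # placeholder credentials

        # POST /v1/session/add with basic auth is expected to return an x-zerto-session header.
        resp = requests.post(f"{ZVM}/v1/session/add", auth=(USER, PWD), verify=False)
        headers = {"x-zerto-session": resp.headers["x-zerto-session"]}

        # Find the VPG by name, then list its checkpoints.
        vpgs = requests.get(f"{ZVM}/v1/vpgs", headers=headers, verify=False).json()
        vpg = next(v for v in vpgs if v["VpgName"] == "MyVPG")   # placeholder VPG name

        cps = requests.get(f"{ZVM}/v1/vpgs/{vpg['VpgIdentifier']}/checkpoints",
                           headers=headers, verify=False).json()
        for cp in cps[-5:]:   # last five entries; verify ordering against your API response
            print(cp.get("TimeStamp"), cp.get("Tag"))

    The offsite clone itself is still started from the GUI, where you can select one of these checkpoints instead of the latest.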

    Best regards,

    I’m also seeing this today. Can the resolution be posted here?

    Please be aware that cloning a VM via Zerto can take a while, since it has to assemble all of the data from the journal and the recovery disk. The time also depends on how old the checkpoint is and on the resources available to the recovery VRA. If it’s getting stuck and making no progress, I suggest doing a force sync first and then trying a different checkpoint. If the issue still persists, submit a request with support.
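    To check whether the recovery VRA is the bottleneck, something like the following pyVmomi sketch can pull its live CPU and memory usage. The vCenter address, credentials and VRA VM name are placeholders; this assumes read access to the recovery vCenter:

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()  # lab use only; use proper certificates in production
        si = SmartConnect(host="recovery-vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="********", sslContext=ctx)
        content = si.RetrieveContent()

        # Find the recovery VRA VM by name (the name here is just an example).
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.VirtualMachine], True)
        vra = next(vm for vm in view.view if vm.name == "Z-VRA-esx01.example.com")

        stats = vra.summary.quickStats
        print(f"CPU usage: {stats.overallCpuUsage} MHz across {vra.config.hardware.numCPU} vCPU(s)")
        print(f"Active guest memory: {stats.guestMemoryUsage} MB of {vra.config.hardware.memoryMB} MB")

        Disconnect(si)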

    Thanks,

    Kal

    Yesterday I was asked by my customer to perform an Offsite Clone of his VPGs to a DR site for testing. The customer has around 60TB of VPGs which he wants cloned. I started the work and managed to clone around 30 servers, but then the rest got stuck in the Zerto GUI at 10%. I checked the DR vCenter and the Tasks list was empty, but there were no errors in vCenter either. Zerto support asked me to restart the ZVM services; once restarted, all my VPGs went into an error state but turned green again after about 15 minutes.

    I’m trying the process again today, but it is very slow: a 100GB VPG took 1 hour to complete. I’m worried about how long the 4TB & 5TB VPGs are going to take.

    Zerto V8.0U4P1
    vCenter 6.7 Build 18010599

    Hello,


    Please open a case with Zerto support to investigate this issue. Thank you.


    Joe

    First issue: slowness of creating the offsite clones.

    When you deploy a VRA, the default is 1 vCPU plus whatever you specify as the RAM (we use 16GB RAM on ours). The recovery VRAs were maxing out their CPU trying to bring up the clones, so I edited each one and set 4 vCPUs, which made the process somewhat faster.
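    For anyone who wants to script that change rather than edit each VRA by hand, here is a minimal pyVmomi sketch of the same reconfiguration. The vCenter details and VRA VM name are placeholders, and changing a VRA VM outside of Zerto’s own workflow is something to confirm with support first:

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()  # lab use only
        si = SmartConnect(host="recovery-vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="********", sslContext=ctx)
        content = si.RetrieveContent()

        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.VirtualMachine], True)
        vra = next(vm for vm in view.view if vm.name == "Z-VRA-esx01.example.com")

        # Bump the VRA to 4 vCPUs. If CPU hot-add is not enabled on the VRA,
        # the reconfigure has to happen while the VM is powered off.
        spec = vim.vm.ConfigSpec(numCPUs=4)
        task = vra.ReconfigVM_Task(spec=spec)
        print("Reconfigure task submitted:", task.info.key)

        Disconnect(si)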

    My next issue was that although the VRAs now had plenty of resources, the clones were still taking a long time to create. For example, here’s what I found from my testing (rough per-hour rates are worked out just after the list):

    4.3TB VPG took 12 hours
    2TB VPG took 10 hours
    100GB VPG took 1 hour
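    Converted to effective throughput, that is just simple arithmetic on the figures above:

        # Rough effective clone throughput from the timings above (GB per hour).
        timings = {"4.3TB VPG": (4300, 12), "2TB VPG": (2000, 10), "100GB VPG": (100, 1)}
        for name, (size_gb, hours) in timings.items():
            print(f"{name}: ~{size_gb / hours:.0f} GB/hour")
        # ~358, ~200 and ~100 GB/hour respectively -- more than a 3x spread.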

    My cloning times were all over the place and depended on what else was going on at the time. If I had smaller VPGs running at the same time, the clones took an OK amount of time, but as soon as the big 2TB, 3TB & 5TB clones were running, timings went out the window. I expected that, as the SAN was doing a LOT of IO. Some of the clones failed with errors like:

    Failed in GetFileInfo
    Return code after CreateClone was RcError

    Not very useful error descriptions, but as Zerto Support would say, this is what was received from vSphere.

    We still don’t know why the clones were stuck at 10% in the Zerto GUI yesterday while the vSphere task just dropped off without an error (as far as I could see). Anyway, I’ve been up over 24 hours straight getting the clones out and am still waiting for 3 to complete.
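    Since completed tasks drop off vCenter’s recent-tasks list fairly quickly, one way to go back and look for those GetFileInfo / CreateClone failures is to query the vCenter event log. A sketch with pyVmomi, assuming read access to the recovery vCenter (connection details are placeholders):

        import ssl
        from datetime import timedelta
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()  # lab use only
        si = SmartConnect(host="recovery-vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="********", sslContext=ctx)
        content = si.RetrieveContent()

        # Pull the last 24 hours of events and print anything mentioning errors or clones.
        # QueryEvents returns a bounded batch; EventManager.CreateCollectorForEvents can
        # page through a longer history if needed.
        time_filter = vim.event.EventFilterSpec.ByTime(
            beginTime=si.CurrentTime() - timedelta(hours=24))
        filter_spec = vim.event.EventFilterSpec(time=time_filter)

        for event in content.eventManager.QueryEvents(filter_spec):
            msg = (event.fullFormattedMessage or "").lower()
            if "error" in msg or "clone" in msg:
                print(event.createdTime, event.fullFormattedMessage)

        Disconnect(si)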

    Hello,


    Unfortunately, this isn’t something we can answer in the forum. The issues experienced would require a case with Zerto support for an investigation.


    Thank you,

    Joe
