
Exclude large volume from replication

  • We have a few VMs with very large volumes attached (in the 2 TB range), and we would like to protect the OS drive but not the cache drive(s). Is there a way to set up a VPG so that it excludes these drives? Setting them as Temp still causes the VPG to want to do an initial sync.

    I don’t think there is currently a way to do that. Your best bet, as it stands now, would be to pre-seed the large volume and mark it as a swap volume. After the initial delta sync (it’s going to have to do that once), it won’t replicate that disk any further. It’s not pretty, but it would get the job done so you can protect that VM. I believe there is a feature request in for exactly what you’re asking, but I’m not 100% sure on that. You should submit one anyway, so it gets another vote.
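    If you end up scripting that swap-volume workaround, here is a rough Python sketch against the ZVM REST API. Fair warning: the endpoint paths and the IsSwap field name are assumptions from my memory of the vpgSettings API and may not match your ZVM version, so check the REST API reference before trying it.

        import requests

        # Endpoint paths and field names below are assumptions; verify them
        # against the Zerto REST API reference for your ZVM version.
        ZVM = "https://zvm.example.com:9669"   # hypothetical ZVM address
        VPG_ID = "<vpg-identifier>"            # placeholder
        VM_ID = "<vm-identifier>"              # placeholder
        VOL_ID = "<volume-identifier>"         # placeholder, e.g. the big cache disk

        s = requests.Session()
        s.verify = False  # lab only; use proper certificates in production
        r = s.post(f"{ZVM}/v1/session/add",
                   auth=("administrator@vsphere.local", "password"))
        r.raise_for_status()
        s.headers["x-zerto-session"] = r.headers["x-zerto-session"]

        # Open an editable copy of the VPG settings, mark the volume as swap, commit.
        settings_id = s.post(f"{ZVM}/v1/vpgSettings",
                             json={"VpgIdentifier": VPG_ID}).json()
        vol_url = f"{ZVM}/v1/vpgSettings/{settings_id}/vms/{VM_ID}/volumes/{VOL_ID}"
        vol = s.get(vol_url).json()
        vol["IsSwap"] = True  # assumed field name for treating the disk as swap
        s.put(vol_url, json=vol)
        s.post(f"{ZVM}/v1/vpgSettings/{settings_id}/commit")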

    Beginning with Zerto 9.5 you are able to completely exclude a volume (even without doing an initial sync).

    See the release notes here, under the “What’s New in 9.5” section: https://help.zerto.com/bundle/Zerto_v9.5_Release_Notes/resource/Zerto_v9.5_Release_Notes.pdf

    When creating your VPG, you can now select, for each disk, which sync type you want.

    What do “Initial Sync Only” and “No Sync” mean?

    “No Sync” just creates a blank .vmdk on the recovery datastore; no data is ever replicated into it.
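    If you want to see how each disk in an existing VPG is configured without clicking through the GUI, a read-only variation of the sketch above is below. Same caveat: the paths and JSON field names are assumptions on my part, so confirm them against the Zerto REST API reference for your version.

        import requests

        # Endpoint paths and field names are assumptions; confirm against the
        # Zerto REST API reference for your ZVM version.
        ZVM = "https://zvm.example.com:9669"  # hypothetical ZVM address

        s = requests.Session()
        s.verify = False  # lab only
        r = s.post(f"{ZVM}/v1/session/add",
                   auth=("administrator@vsphere.local", "password"))
        r.raise_for_status()
        s.headers["x-zerto-session"] = r.headers["x-zerto-session"]

        # Walk every VPG and dump the per-volume settings so you can see which
        # disks are full sync, initial-sync-only, or no-sync.
        for vpg in s.get(f"{ZVM}/v1/vpgs").json():
            print(vpg["VpgName"])
            settings_id = s.post(f"{ZVM}/v1/vpgSettings",
                                 json={"VpgIdentifier": vpg["VpgIdentifier"]}).json()
            for vm in s.get(f"{ZVM}/v1/vpgSettings/{settings_id}/vms").json():
                vols_url = (f"{ZVM}/v1/vpgSettings/{settings_id}"
                            f"/vms/{vm['VmIdentifier']}/volumes")
                for vol in s.get(vols_url).json():
                    # The sync-type field name varies by version; read the raw JSON.
                    print("  ", vol)
            # Discard the temporary settings object instead of committing anything.
            s.delete(f"{ZVM}/v1/vpgSettings/{settings_id}")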

    Hi All,

    Just wanted to add my 2 cents here. On a protected VM with a 35 TB disk attached, I needed to mount a second 35 TB disk for some data maintenance. Zerto detected the second disk, I configured it as “No Sync”, and pointed the blank .vmdk at a datastore. I was unable to complete the configuration because the datastore did not have enough space.

    To get around this, I re-provisioned the disk as 100 GB, mapped it within the VPG configuration, and let Zerto sync successfully. I then extended the disk to the required size, which had no impact on the VPG (a pyVmomi sketch of that extend step is below).

    Cheeky way around it; hopefully useful to someone.
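    For anyone who wants to do that extend step from a script rather than the vSphere client, here is a minimal pyVmomi sketch. The vCenter address, credentials, VM name, and disk label are placeholders I made up for the example; the 35 TB target size just matches the scenario above.

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        # Connection details, VM name, and disk label are placeholders.
        context = ssl._create_unverified_context()  # lab only; validate certs in production
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="password",
                          sslContext=context)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            vm = next(v for v in view.view if v.name == "protected-vm-01")
            view.DestroyView()

            # Find the small disk that Zerto has already synced (placeholder label).
            disk = next(d for d in vm.config.hardware.device
                        if isinstance(d, vim.vm.device.VirtualDisk)
                        and d.deviceInfo.label == "Hard disk 2")

            # Grow it in place to the real size (35 TB expressed in KB).
            disk.capacityInKB = 35 * 1024 * 1024 * 1024

            spec = vim.vm.ConfigSpec(deviceChange=[
                vim.vm.device.VirtualDeviceSpec(
                    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                    device=disk)])
            WaitForTask(vm.ReconfigVM_Task(spec=spec))
        finally:
            Disconnect(si)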

     
