• This topic has 4 replies, 3 voices, and was last updated March 28, 2019 by Daniel S.

Zerto + Veeam

  • Hi All,

    Just a quick one: has anyone had a client use Zerto to also replicate a Veeam proxy server? Veeam proxies attach disks via hot-add to perform the required backup and then remove them a short time later once the backup has completed.

    Since those newly attached disks sit in the replication path, has anyone seen issues with this, especially when the replicated volume is larger than 2 TB?

     

     

    I am testing the setup below and have found an issue.

    An ESXi server hosts several virtual servers. One of them we call a Management server, as it runs Veeam 9.5 doing ‘traditional’ backups and replicas of the other virtual servers. The issue I am having is as you describe: when the Veeam backups start, the disks of the server being backed up are hot-added to the proxy server (in our case that is the Management server).

    Zerto is replicating all the servers as well, and as soon as it sees the new proxy disks attach, it triggers a full sync, which makes the Management server look massive. Normally it is just a C drive of 60 GB, but during backups that server appears to be over 3 TB in size due to all the attached disks from the other servers coming and going.

    I tried ticking the temp disk option in Zerto, but after Veeam automatically removes the hot-added drives, those drives disappear from Zerto as well. I wish you could tell Zerto to simply focus on one disk and ignore all others, rather than this awkward approach of marking them as temp after the fact.
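    In case it helps anyone comparing notes, below is a rough pyVmomi sketch I put together (my own assumption of how you might check, nothing built into Zerto or Veeam) that lists every virtual disk currently attached to the proxy/Management VM, so you can see exactly what Zerto would pick up while a Veeam job is holding its hot-added disks. The vCenter address, credentials and the VM name “MGMT01” are placeholders for your own environment.

```python
# Rough sketch: list the virtual disks currently attached to the Veeam proxy VM,
# so you can see what Zerto would pick up while a backup job is running.
# vCenter address, credentials and the VM name "MGMT01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; use proper certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    proxy = next((vm for vm in view.view if vm.name == "MGMT01"), None)
    if proxy is None:
        raise SystemExit("Proxy VM not found")

    total_gb = 0.0
    for dev in proxy.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            size_gb = dev.capacityInKB / (1024 * 1024)
            total_gb += size_gb
            print(f"{dev.deviceInfo.label}: {size_gb:.0f} GB")
    # Roughly 60 GB when idle, several TB while Veeam holds the hot-added disks
    print(f"Total currently attached: {total_gb:.0f} GB")
finally:
    Disconnect(si)
```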

    Hi,

    This was resolved on our side for our client’s issue. In short, don’t back up your Veeam proxy servers: the hot-adding and removing of disks causes this behaviour and will push the VM’s VPG into Needs Configuration, as described above.

     

    So, from our experience, the two can work happily together, just not when you also attempt to back up your Veeam backup proxies.
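    For anyone wanting to double-check that, here is a hedged sketch against the ZVM REST API that lists the VMs Zerto is protecting and flags any Veeam proxy that is still inside a VPG. I am assuming the classic /v1/session/add and /v1/vms endpoints on port 9669 from the versions we were running, and the field names may differ by ZVM version, so please verify against your own ZVM’s API documentation. The ZVM address, credentials and proxy names are placeholders.

```python
# Hedged sketch: check that no Veeam proxy is still protected by a Zerto VPG.
# Assumes the classic ZVM REST API on port 9669 (/v1/session/add and /v1/vms);
# endpoint paths and field names should be verified against your ZVM's API docs.
import requests

ZVM = "https://zvm.example.local:9669"        # placeholder ZVM address
PROXIES = {"MGMT01", "VEEAM-PROXY02"}         # hypothetical proxy VM names

# Authenticate; the session token comes back in the x-zerto-session header
auth = requests.post(f"{ZVM}/v1/session/add",
                     auth=("administrator@vsphere.local", "password"),
                     verify=False)
auth.raise_for_status()
headers = {"x-zerto-session": auth.headers["x-zerto-session"]}

# List every VM currently protected by a VPG
vms = requests.get(f"{ZVM}/v1/vms", headers=headers, verify=False).json()

for vm in vms:
    if vm.get("VmName") in PROXIES:
        print(f"WARNING: proxy {vm['VmName']} is still in VPG {vm.get('VpgName')}")
```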

     

     

     

    Just to mention, Zerto picks up on changes and acts on them automatically, which I think is what the OP experienced based on the descriptions. This is why, when a new disk is attached to a VM, the solution automatically creates a corresponding recovery volume and begins syncing the data from the new disk to the new recovery volume. Unfortunately, the only way to prevent this from occurring is to mount the volume from within the guest OS itself, since Zerto will not see that, nor will the host or vCenter/VMM.

    Also worth mentioning: the recovery datastore is selected based on the default recovery server settings (the 6th picture in this excerpt from the documentation). However, if that storage does not have the space to accommodate the new volume, the VPG will revert to ‘Needs Configuration’, since it could not be configured automatically and must be configured by the user.

    If I recall correctly, certain vSphere versions have trouble hot-adding disks larger than 2 TB to a running VM, so Zerto is subject to that limitation of the environment. I am not 100% sure on this, though, so my memory could be incorrect, just as a forewarning.
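    To illustrate the space point, here is a small pyVmomi sketch (my own rough check, not a Zerto feature) that prints the free space on the recovery-site datastores, so you can sanity-check whether the default recovery datastore could actually absorb the multi-TB volumes that get hot-added to the proxy before the VPG drops to Needs Configuration. The vCenter name, credentials and the 3 TB figure are placeholders.

```python
# Sketch: free space on recovery-site datastores, to see whether the default
# recovery datastore could absorb the disks Veeam hot-adds to the proxy.
# vCenter address, credentials and the 3 TB figure are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NEEDED_GB = 3000   # rough total of the hot-added disks in the example above

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter-dr.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        free_gb = ds.summary.freeSpace / (1024 ** 3)
        status = "OK" if free_gb >= NEEDED_GB else "too small - VPG would need reconfiguration"
        print(f"{ds.summary.name}: {free_gb:.0f} GB free  [{status}]")
finally:
    Disconnect(si)
```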

    Hi Justin,

    You are correct, it would enter that state if the datastore needed more space or if the disk being attached is greater than 2 TB (on vSphere 6.0 or earlier this is an issue; on 6.5 and above, disks larger than 2 TB can be hot-added to a running VM). That was our specific issue, along with the constant adding and removing of Veeam proxy disks.
