
Journal best practices

  • We have a few VMs that we are protecting in our environment that are about 5TB in size. Data is constantly written and deleted. In these cases we are using Zerto mainly to protect the OS. I am noticing that the journal isn't keeping up with all of these writes. Is there any way to reduce the number of checkpoints Zerto creates? I understand this would increase our RPO, but in these cases that is acceptable. Has anyone else run into this?

    What is causing the changes? You have the option to exclude certain disks from replication by marking them as temp data, which could eliminate some of the replication.

    Reducing the number of checkpoints would most likely not reduce the amount of data in the journal. Zerto replicates at the block level, so the only way it would replicate less is if the same block changed multiple times between checkpoints, which is unlikely given the way most filesystems spread writes across blocks.
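    To put rough numbers on why, here is a quick illustrative sketch in Python (the rates are invented for the example, not measured from any real environment): journal growth is roughly change rate times history window, and checkpoint frequency never enters into that math.

        # Rough journal sizing: change rate x history window, independent of
        # how many checkpoints Zerto drops in between. Numbers are made up.
        change_rate_mbps = 20        # assumed sustained write rate (MB/s)
        journal_history_hours = 24   # assumed configured journal history

        journal_gb = change_rate_mbps * journal_history_hours * 3600 / 1024
        print(f"Expected journal size: ~{journal_gb:,.0f} GB")
        # ~1,688 GB here. Halving the checkpoint count would not shrink this;
        # only a lower change rate or a shorter history window would.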

    Marking the volume as temp data could be an option if the data is not important.

    You can also set the journal limit higher. There are basically two limits on the journal: 1) time and 2) size. If size is the only issue, you could raise the limit or remove it entirely.
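    If you would rather script that than use the GUI, something along these lines against the ZVM REST API should work. Treat it as a sketch: the session/add login and the vpgSettings create/edit/commit flow are real v1 API concepts, but the exact journal field names (Limitation, HardLimitInMB) should be verified against the API documentation for your Zerto version, and the credentials and identifiers below are placeholders.

        import requests

        ZVM = "https://zvm.example.local:9669/v1"   # placeholder ZVM address
        VPG_ID = "your-vpg-identifier"              # find yours via GET /v1/vpgs

        # Log in: POST /v1/session/add returns an x-zerto-session header
        # that authenticates every later call.
        s = requests.Session()
        s.verify = False  # many ZVMs use a self-signed cert; use a CA bundle in production
        r = s.post(f"{ZVM}/session/add", auth=("administrator@vsphere.local", "password"))
        s.headers["x-zerto-session"] = r.headers["x-zerto-session"]

        # 1. Create an editable settings object for the VPG.
        settings_id = s.post(f"{ZVM}/vpgSettings", json={"VpgIdentifier": VPG_ID}).json()

        # 2. Raise or remove the journal size hard limit (assumed field names;
        #    0 is assumed to mean "no limit" -- confirm in your API docs).
        journal = s.get(f"{ZVM}/vpgSettings/{settings_id}/journal").json()
        journal["Limitation"]["HardLimitInMB"] = 0
        s.put(f"{ZVM}/vpgSettings/{settings_id}/journal", json=journal)

        # 3. Commit so the ZVM applies the change to the VPG.
        s.post(f"{ZVM}/vpgSettings/{settings_id}/commit")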
