• This topic has 2 replies, 2 voices, and was last updated May 16, 2018 by Rich J.

Excluding drives from replication

  • So we have a very large SQL Server and we want the server replicated, but we do not require the entire DB drive (it's ~8 TB) at the DR site.  If a DR event ever occurred, we would just add an empty drive to match the drive letter in production, and the staff would FTP the required DBs from a 3rd party, attach the DBs, and they are back in business.

    Anyway, has anyone ever had any success replicating only the OS drives while omitting the swap and DB drives?

    There seems to be a discussion suggesting that large drives, once replicated, can be marked as swap drives, which prevents further replication.  Any assistance / recommendations greatly appreciated.

    thanks

    Rich.

    Rich,

    Your understanding is correct. Zerto can exclude a drive from replication, with one (large) caveat: it can only exclude the drive (by marking it as a swap drive) after the initial replication is complete. From that point on it will ignore any changes to that drive.

    So you either have to replicate the entire drive once (in which case, why not just continue to protect it?), or figure out a way around it.

    You could always “seed” the data volume so the initial sync wouldn’t have to actually replicate the data over the wire. That would speed things up for you: it would just do a delta sync on that volume, then ignore it going forward.

    Hope that helps.

    Thanks Matt, appreciate the response.

    The main reason we want to exclude the drive is cost (we pay a third party to host our environment, and 8+ TB adds up). The data within SQL also changes a lot, so it’s chatty; not a big deal, but we want to be efficient with our bandwidth.
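To put rough numbers on the bandwidth argument, here is a back-of-the-envelope sketch in Python. The daily change rate and compression ratio are illustrative assumptions, not figures from this thread; only the 8 TB drive size comes from the discussion above:

```python
# Rough WAN estimate for continuously replicating a "chatty" SQL volume.
# DAILY_CHANGE_RATE and COMPRESSION_RATIO are assumptions for illustration.

DRIVE_TB = 8                 # size of the DB drive mentioned in the thread
DAILY_CHANGE_RATE = 0.05     # assume 5% of the volume changes per day
COMPRESSION_RATIO = 0.5      # assume WAN compression halves the traffic

daily_change_tb = DRIVE_TB * DAILY_CHANGE_RATE
wan_tb_per_day = daily_change_tb * COMPRESSION_RATIO

# Convert TB/day to an average sustained line rate in Mbit/s
# (1 TB = 8e6 Mbit, decimal units).
SECONDS_PER_DAY = 24 * 60 * 60
wan_mbit_s = wan_tb_per_day * 8 * 1_000_000 / SECONDS_PER_DAY

print(f"Replicated per day: {wan_tb_per_day:.2f} TB")
print(f"Average sustained rate: {wan_mbit_s:.1f} Mbit/s")
```

Even with these fairly conservative assumptions, the excluded drive would otherwise consume roughly 0.2 TB/day of WAN traffic (about 18–19 Mbit/s sustained), which is the kind of ongoing cost the exclusion avoids.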

    I think the answer is to reduce the size of this large drive by removing a bunch of unneeded DBs, then replicate the entire VM, and finally mark this drive as a swap drive.  That ticks most of the boxes for us.
