- This topic has 9 replies, 6 voices, and was last updated June 18, 2015 by Ryan H.
Migrating DR Environment
Sam N - May 20, 2015 09:27:43 PM
We will be migrating our DR environment this coming weekend. Currently DR has been living about 2 feet away from production to seed and test for the last several months.
We will be fork-lifting that to our DR site starting Saturday – stay tuned for updates!
Shannon S - May 20, 2015 11:53:33 PM
Good luck Chuck! I’ll be interested to hear how it goes.
Senior Technical Architect at Zerto
Tagged: Migrations
Mike G - May 21, 2015 12:06:26 PM
We did something similar when we seeded to a new UCS system being shipped off to a new data center. The process worked well and was successful. Things to watch out for are the change rate of your data and bandwidth availability. Some of our SQL servers had such a high daily change rate that we had to adjust the application backup process for a couple of days to allow replication to catch up. Good luck with your move!
Systems Engineer at LTCG, Minneapolis, MN
Sam N - May 21, 2015 04:13:25 PM
Mike, what size pipes were you working with between these sites? We have been seeding at 1G/10G, but I have set the bandwidth throttle to 35Mb/s, and that has been pretty illuminating (DR only has a 40Mb/s pipe, and we still need to run the plant as well). We’re not going to need a bigger boat, just much bigger pipes!
Paul F - May 21, 2015 09:02:25 PM
We did this with our DR environment last year. Worked a treat, but you do need to understand the volume of changes that will occur during the move. This did affect the availability of our DR site for a few days while it caught up.
Sam N - May 21, 2015 09:06:00 PM
Our game plan is to shut down all protected VMs while the transition is happening – the holiday weekend is the only time we could string a few days together in a row.
I’d be interested to hear how many VMs you were protecting and what size pipe you had between them. We have a smaller implementation (17 VMs).
Mike G - May 22, 2015 12:22:03 PM
In our case, this was done using a dedicated 100Mb MPLS link. We synced servers by VPG to allow applications to come up in groups instead of trying for the “all at once” approach.
Systems Engineer at LTCG, Minneapolis, MN
Sam N - May 26, 2015 05:44:26 PM
We had worked out a solid game plan with my home-slice Sean Masters (he has basically held my hand through our whole Zerto implementation). We decided that, to limit the number of config changes, we would temporarily set up the VLANs supporting the DR environment in HQ so that we would not have to change any IPs after moving the equipment to its final happy place.
We shut down the protected VMs and paused the associated VPGs. After this we shut down the DR VRAs, ZVM, vCenter, hosts, fabric, and storage. We removed any network wizardry we had in place and then restored routing for the temporarily configured VLANs, now pointing to DR across the WAN.
All the DR toys were then road-tripped to the DR location, unloaded, and racked up. We started cabling back up, but by the time I had hooked up the wrong cable to the wrong switch for the eleventeenth time, we decided to call it a night.
After getting some rest we were able to get it all wired up the next day. The fabric, SAN, hosts, vCenter, ZVM, & VRAs were started back up (in that order) and once Zerto re-established communication, we un-paused all of the VPGs and started the protected VMs back up.
We were completely surprised by how well everything came back up – again thanks to Sean for his guidance (and patience).
Now if we can get our internet pipes to hulk out, I think we’ll be golden!
Sean M - May 26, 2015 05:58:52 PM
Thanks for the shout-out sir, and most importantly thank you for sharing your story with the community!
Ryan H - June 18, 2015 03:40:22 PM
I synced all my VPGs from Brazil (our most bandwidth-limited site) to our DR site here in the USA WITHOUT pre-seeding anything. The Brazil site has a 20MB MPLS link and is syncing quite a large chunk of data in real time. I should mention that we also have Riverbed appliances deployed end to end. **
I just spun up another VPG this morning from that site replicating ~650 GB, and it is estimating about 13 hours to finish its initial sync. I would bet it will be done replicating in less time than that, though.
**When deploying a third-party compression method, you should disable WAN compression on the VPG and let your third-party tool do the lifting.
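As a rough sanity check on initial-sync estimates like the one above, a back-of-the-envelope transfer-time calculation is useful. This sketch assumes the Brazil link runs at 20 megabits/s (the post’s “20MB” is ambiguous between megabits and megabytes), and the ~5.5x data-reduction factor is a purely hypothetical illustration of WAN optimization – actual Riverbed reduction varies heavily by workload.

```python
# Back-of-the-envelope initial-sync time estimate.
# Assumptions (not from the original thread): decimal GB, a 20 Mb/s link,
# and a hypothetical 5.5x WAN-optimizer data-reduction factor.

def sync_hours(data_gb: float, link_mbps: float, reduction: float = 1.0) -> float:
    """Hours to push data_gb over a link_mbps link, where `reduction`
    is the WAN-optimizer data-reduction factor (1.0 = no optimization)."""
    bits_on_wire = data_gb * 8 * 1000**3 / reduction  # GB -> bits, post-dedup
    rate_bps = link_mbps * 1000**2                    # nominal link rate
    return bits_on_wire / rate_bps / 3600

print(round(sync_hours(650, 20), 1))       # raw 20 Mb/s, no optimization: ~72 hours
print(round(sync_hours(650, 20, 5.5), 1))  # hypothetical 5.5x reduction: ~13 hours
```

Run side by side, the two cases show why a 13-hour estimate for 650 GB is plausible only with something like the Riverbed appliances in the path: the unoptimized transfer would take roughly three days at that link speed.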