Zerto Blog: BC/DR for Virtual IT and Cloud

Scary Disaster Recovery Stories
Mon, 13 Oct 2014 | Zerto

Turn on your lights and lock your doors! It’s time for Tales from the IT Crypt…

We’ve all got a story, an IT disaster story, that spooked us into taking DR seriously: unplanned outages, deletions, natural disasters… This year Zerto is asking for your most frightening IT story. Plus, we’ve got some good spooky ones of our own to share during our “Master of Disaster: The Scariest Disaster Recovery Stories – EVER!” webinar on October 28th!

Share your story and WIN:

  • All stories we add to this post win the submitter a special-edition Zerto t-shirt
  • Win the big prize for the scariest story, which will be announced during the webinar!
  • Share these stories and scare your friends on Twitter, using the hashtag #ScaryDRStories

Make sure to tune into our webinar on October 28th to find out which scary story wins the big prize and to hear some other Tales from the IT Crypt!!

Read below for the stories already shared; they are sure to give you GOOSEBUMPS!


Matt C.'s Story

As the clock chimed 3 times, then 4, then 5, signaling the 5 o’clock hour was upon us, and with the slightest glimpse of sun peeking in through the crack of my 3rd floor window, the beads of sweat running down my brow became more plentiful. “People on the east coast are going to be waking up shortly, and not a single critical system is running,” I said to the other 4 engineers on the conference call...

“Let’s just keep focused and keep working on it” was the response. Any hope that my clock was wrong, and it wasn’t really 5am was shattered by the familiar sound of garbage trucks in the alley behind my downtown apartment. The time was correct, but last I could remember, it was 9pm.

See, it was just 8 hours earlier that a major power outage took out 10 city blocks, including my main office location, where the core of my infrastructure was located. At first glimpse of the situation, things seemed to be breaking my way. The timing was perfect: everyone was done working for the day, and I had all night to fail over the systems to my remote location. Not that I’d need all night, but it was nice to have, right?

As 9pm turned to 10, then 11, then midnight struck, it became painfully obvious that failing over these systems was going to be far more challenging than anyone imagined. With a failover procedure so complex, and with so many intricate details to get right, the odds were that one of the pieces would not work as expected and would take some time to resolve. Nobody expected that every portion of the plan would behave that way, though. “This whole process needs to be evaluated,” I commented as I tried, over and over and unsuccessfully, to get the VMFS volumes to mount.

By 4am, a few non-critical systems started to come online. This was a minor victory that nobody was going to break out the champagne for. The mood of everyone on the call was positive, but at times you could hear a pin drop, the eerie silence of utter focus on the tasks at hand. The elephant in the room was painfully obvious: we were hours away, if not more, from having the systems online, and as the clock continued relentlessly forward, without forgiveness or empathy for sleep deprivation, we all knew phones were going to start ringing and questions were going to be asked. Questions we didn’t have answers to.

As I sit, surely 4-5 pounds lighter from the loss of body fluid in the form of sweat, a break happens. “Guys, the power just came back on,” I say with a crackle in my voice akin to a 13-year-old boy going through puberty. Could this really be happening, or had I fallen asleep and dreamed of this fortuitous power restoration? It was real. After analyzing the situation, I speak up: “We can halt the failover procedure right now and bring everything back up in production.” The response is direct and succinct: “Do it.”

By 6am all systems are back online in production, and for this night, crisis was averted. I couldn’t escape the feeling that this was some type of warning. I knew it, everyone knew it. The entire process had to be evaluated and re-architected from the ground up, and fast.

Travis F.'s Story

After a large storm rolled through the area where I work, it knocked out power to our building, and the generators did not power on when the power went out. That morning I had more than 100 users trying to log into our network and no one could. We had to go up to our server room, manually boot each server, and re-post end of day...

Nothing too dramatic, but we also had more than 20 remote locations trying to log into our terminal server. We told them it would be a few minutes before they could log in, and most of those people never logged in at all that day; they figured that if they couldn’t log in right then, they didn’t have to at all.

Matt M.'s Story

This one time I came into work through roads that had over 3 feet of snow to find out that we were on generator power. There wasn't enough coffee in the building to keep me going that day...

We had to run extension cords to emergency power and duct-tape them down all over the place. We were down for 28 hours. That is why during winter time I carry a bag of clothes and shower supplies in my car. Just in case. I would elaborate more, but winter is coming and this story makes me sad, very sad.

Dino T.'s Story

As a Systems Administrator, most of you can relate to the Boss who doesn't care and probably couldn't understand RTO/RPO. At the time of this story, I was working in a data center where Murphy ruled and the laws of physics didn't exist. One year my Liebert backup unit failed; I only had one at the time...

After the outage and bringing up all the servers, I had funds to purchase a backup Liebert unit. No data loss; downtime of 1 – 2 hours. The next outage was really fun. With the newly configured battery backup systems and a gas generator for redundant power, my data center could last 45 minutes on batteries alone. Plenty of time for the generator to kick in (usually about 48 seconds). Apparently, gas generators have a $4.00 fuse that needs to be replaced annually – guess whose was never replaced? So during a rather unique summer storm, when power was knocked out on a Saturday night, my backup generator did not kick in. I didn’t know about the outage until Sunday morning, when my monitoring software quit talking to me (email). Not such a big issue; again I went in and brought everything up, and all was well come Monday morning.

Then it happened. Something that could not happen. I’m on vacation, driving to family for Christmas, when my phone rings. “Nothing is working, no one can access anything!” I am over 150 miles away and realize that there is a major outage, but cannot get back in time to implement DR procedures. I call a local vendor and ask a favor. One hour later I get a call: “Your primary SAN lost 3 drives and both fibre connections to the secondary SAN have failed.” Remember I talked about how Murphy ruled the data center? Disks 0, 1 and 2 failed at the same time in a RAID 10 configuration!!! In fact, LUNs on both the primary and backup SANs were corrupt. No backup, no virtual machines, no production anything – no DCs, DNS, print or file servers, no email, no SQL servers – NOTHING. All I had was tape: 1.68 TB of data to restore and restart. Again, remember how I said the Boss didn’t understand RTO/RPO? Well, I had everything on tape, but for the next few weeks I kept trying to explain why backups happen (as the Boss put it) so quickly and why it was going to take 3 – 4 weeks to restore. In the end, all was restored except for 4 .pst files, 10 MB out of 1.68 TB of data. But my disaster recovery plan was a complete failure. During the restore operation, a user accidentally deleted a newly created file and needed it restored immediately. I told the user that they would have to wait.

Chin's Story

My scariest story ... an old server room of a very large manufacturing company was under a sewage tank. One inauspicious day, the cleaners accidentally punched a hole in the sewage tank on the floor above...

The sewage literally fell on the NetApp boxes in the server room below. I was called around lunch time. The customer told me, “CF, we are in deep sh*t! Literally sh*t!” I will never forget that moment. I had to scramble to borrow NetApp gear from resellers. I got the stuff together and drove 5 hours up north to deliver it to the manufacturer. I got there at 1am, and the other engineers were cleaning 5 NetApp systems stripped down to their barest parts. By 4am, we had cleaned almost all of them with industrial cleaning fluid and had put the NetApp systems back together again. As the fans turned, the smell of the data center room was something I will never forget. Surprisingly, 4 of the 5 NetApp systems ran without a hitch. The customer decided to use their insurance to replace the machine.

Leo G.'s Story

I was working OT in the data center when suddenly one of the main servers went down for 10 minutes. When it came back up, a message read “year 1999 12am.” Then a light started flashing in the room and it became very, very dark...

I felt a chill down my back. Minutes later the power came back to normal and all servers restarted automatically. I checked the system and everything was O.K. Only one strange thing: the printer had printed, in big letters, “HI”.

Jayme W.'s Story

What seemed to be a normal day soon turned dark and dreary as evidence of a malevolent force began to stack up! A force from the Crypt-o-locker!!!...

Yes! That evil menacing ransom-ware had reached our network and began its attack on kind and gentle file servers. These servers did nothing but good! They stored data for end users, held databases for production floor manufacturing equipment, provided a safe location for archived emails – all the good things you would expect from a file server.

This evil crypt-o-locker took nearly all these kind files and locked them away in its dark underworld, and users cried out for help. Where are my files?? What happened to my archived email?? Why can’t we run the floor equipment?? Many more cries could be heard as we rushed to fight this evil intruder. Soon we discovered that the attack had found its way to our replica file server, and there was no longer a quick fix; hope was quickly being lost. Days ticked by as we searched for answers, and the evil was backed into a corner and forced back to its dark crypt.

Many days passed before the cries for help began to diminish. Slowly the tide turned towards the good, but at a heavy cost. Servers were lost, and data had to be restored from a medium not heard from in these days… something called tape? No matter, this tape served a purpose this time, but would there be another? Would this crypt open once again, and how could we keep such evil at bay??

We found a weapon to combat crypto and quickly deployed it for protection. NOW, when the danger comes knocking again, a simple reset to a point in time using Zerto will instantly give the good people access to their data and vanquish crypto back to the depths from where it came. No more ancient technology and doubts of victory. Next time we KNOW we will be victorious over the evil!

Rik G.'s Story

Before there were really cool things like Zerto and virtualization, DR for the hospital group I worked for was a disaster itself. Our plan consisted of physically taking tape on a 5-hour drive, restoring to rental equipment from SunGard (which took over 2 days just to restore email), and then bringing systems online one at a time...

With over 1500 servers at the time, we really had no DR plan that would work. Fortunately we had built in a lot of HA to begin with, but I can’t think of anything scarier than not having a functional DR plan in a critical environment. If we had had Zerto back then, we would have had a very solid DR capability for the majority of our systems.

Brian K.'s Story

An HR person accidentally sends out the cumulative summary of last year’s performance reviews for several hundred managers to everyone, instead of sending out a blank form, and then leaves for the day at noon...

While recall got a few copies back, panic and horror set in over the coming nightmare. Then a light bulb went off: we tagged the file name as a virus, preventing anyone from opening the file, and were allowed to wake from what could have been a true train wreck.

Greg S.'s Story

I was the hands and feet at a bank in Vancouver doing an upgrade while the guys back East did their job. The SQL server crashed and I had to stay at the bank for 16 hours. I slept on the couch they had in the lunchroom ...

It was fixed eventually. The job was a re-imaging of all the computers, and the SQL server was not getting the new computer names right. I believe it was a Friday, and I left before any employees came in, if they even worked that Saturday. I don’t think it really impacted the business, but it sure impacted my OT hours!

 

 

Tired of Not Being Able to Believe in Your Storage Vendor? Become an Agnostic with Zerto
Wed, 24 Sep 2014 | Zerto

By Shannon Snowden, Zerto’s Sr. Technical Marketing Architect

Recently Nigel Poulton published a great blog post, http://blog.nigelpoulton.com/xtremio-craps-on-emc-badge/, where he eloquently laid out his disenchantment with his storage vendor’s marketing message versus the reality of what they actually sold him.

This came in the same week in which there was more than one Twitter battle between storage vendors. While the ping-pong matches on Twitter are kind of funny to watch as each vendor digs up some dirt on the other, we empathize with Nigel’s situation. He is talking about significant financial impact to the business because he is unable to do non-disruptive upgrades to the storage, even though the product was sold as including those capabilities.

Zerto Virtual Replication is software-defined data protection and mobility, completely agnostic to the storage hardware underneath. If you need to completely evacuate the data from your storage array to some other array, even from a different vendor, or even to a different site, you can do that with Zerto. In fact, Fujitsu did it, and another Fortune 100 company completed a large-scale application migration in 18 minutes.

Unfortunately, with many storage vendors we shouldn’t hold our breath waiting for the message to actually match the product. You can, however, prepare yourself by becoming more agnostic to hardware. There is a reason why software-defined everything is the future of our industry: it is much more flexible and forgiving of change, and it helps you overcome known, or surprise unknown, deficiencies in the hardware.

The future will be even more dynamic and unpredictable, with more surprises ahead. Zerto is continuously helping you prepare for that by adding more hypervisors and more public cloud support. We call it Zerto Cloud Fabric.


A good indicator to us that our message matches our product is what we hear when we co-present with one of our customers at various VMUG events. In fact, yesterday, one of our customers was asked by a prospect if he should do a POC with Zerto. Our customer answered,

“I just want to say that I am increasingly frustrated with IT products. You read their marketing materials and they sound great, so you try them out. You waste time, energy and resources as these products do not deliver. Zerto does deliver. It does what it says it is going to do. Do the POC. You will not be wasting your time.”

Customers consistently confirm that Zerto does exactly what we say it does.

That’s something even an agnostic can believe.

 

Disaster Recovery and Horse Racing with Zerto
Thu, 11 Sep 2014 | Zerto

We are excited to be hosting a series of events at different race tracks throughout England. Join our ‘UK Race Day Roadshow’ and learn more about protecting your data with our award-winning business continuity and disaster recovery (BC/DR) solution.

How are the two related?

During the event we will present a live product demo with an overview of our solution, including real customer stories, and show you how our RPO and RTO times are quicker than your favorite horse!


Afterwards, we will enjoy a fun day playing the ponies.

Remember, gambling can be fun, but not with your data!

Details for the events are below; sign up here and reserve a place now for your desired date.

12th September – Chester – 12noon

25th September – Newmarket – 12noon

1st October – Newcastle – 12noon

20th October – Windsor – 12noon

5th November – Nottingham – 11am

20th November – Chepstow – 11am

We look forward to seeing you there!

 

VMworld 2014 Roundup
Tue, 09 Sep 2014 | Zerto

It has been a week since VMworld 2014 finished, and as everyone wakes up from their VMworld hangover (aside from all the sales guys), we wanted to give a brief roundup of the event.

We’ve got a full recap for you on our “What Happened at VMworld” webinar, but the highlights are here:

It was a great event this year, with lots of exciting news about what is on the horizon for the rest of this year and into 2015. One of the big themes present throughout the conference was the future of virtualization in the cloud.

During our webinar, we will dive deeper into what Zerto displayed at VMworld in regards to our role in the advancement of virtualization in the cloud, including:

  • Zerto Cloud Fabric, the technology that will allow you to connect hybrid clouds together and to your environment.
  • Insight into the forthcoming Hyper-V support and how it is a significant step in making the Zerto Cloud Fabric vision a reality.

We want to thank everyone who came by our booth, it was great meeting the new faces and always a pleasure to see the people we already knew.

Every year we work to perfect the booth to create a welcoming and knowledgeable environment. We want people who spend time at the Zerto booth to enjoy their stay with our friendly team and to learn about what we do and what exciting new areas we are exploring next. We strive to make anyone who spends time with Zerto feel as much a part of the Zerto family as our employees.


For this reason, we were incredibly proud of the shout-out we received in Jason Buffington’s post “Event Marketing Doesn’t Get Enough Credit” and the experience he shared regarding the Zerto booth. We are already working on raising the bar for next year.

This year also brought another fun and successful Zerto Red Wed Event. The sea of Zerto red, including both new and legacy Zerto t-shirts, was fun to watch and a great time for all.

Don’t forget to watch our round-table wrap-up of VMworld 2014 from September 9th. You will hear from:

  • Sean Masters, Solutions Engineering, who will talk about sessions designed for the Enterprise Administrator and key learnings from those sessions.
  • Jason Nolan, Cloud Sales Manager, who will talk about Cloud Services and the hybrid cloud.
  • Shannon Snowden, Senior Technical Marketing Architect, who presented at Tech Field Day and vBrownBag and will provide updates on all things social.
  • Jennifer Gill, Director, Global Product Marketing, will provide an overview of Zerto’s activities at VMworld and feedback from our customers and booth attendees.

 

 

VMworld 2014: Zerto Red Wed
Thu, 28 Aug 2014 | Zerto

Another great VMworld ended with another fun Zerto Red Wed! Some background on Red Wed: it is an annual promotion that Zerto runs at VMworld. We hand out our yearly creative red Zerto t-shirts, and everyone wears them on the Wednesday of VMworld in order to win prizes.

[Photo: a sea of Zerto red at VMworld]

As you can see, VMworld was covered in a sea of Zerto red. This year we added a special twist: people could also wear their Zerto t-shirts from previous VMworld events.

This year’s prizes were:

  • Intergalactic light-saber popsicle makers
  • High-powered marshmallow shooters
  • Beats Pill Speaker with Dude Stand
  • Latest edition of Parrot MiniDrone Rolling Spider
  • Grand Prize: Fully-loaded Zerto box-of-awesome Home Lab!

Enjoy the vines and pics of this year’s events. Thanks to everyone for participating and congratulations to the winners!


Performance Impact of Snapshot-Based Replication
Wed, 27 Aug 2014 | Zerto

By Chris Snell, Zerto Sales Engineer, EMEA

This is the third post in our three-part “blogging from the VMworld show floor” series, covering the most common questions we get at VMworld. The most popular question today is, “Does Zerto use snapshots?”

In a word, “No” – but an explanation is due:

There are many products in the market now which provide protection for virtual environments using snapshots. One of the reasons is that VMware provides a technology, the vStorage APIs for Data Protection (VADP), whose primary aim is to provide a simple mechanism for protecting virtual machines (VMs) via snapshots. VM snapshots certainly present a superior mechanism for backing up VMs compared to legacy technologies, such as backup agents designed for physical machines, but there are some negative impacts of using snapshots.


First of all, snapshots are not backups or replicas. VMware snapshots work by keeping a record of changing data, the delta file, while the original disk does not change. So unless you copy the snapshot to secondary media, any media failure within the production data is likely catastrophic. Moving the snapshot data to a secondary storage area takes time, leaving the snapshot open during the process.

Snapshots can use up a lot of storage, both in terms of space and IOPS. As snapshots work by keeping a record of changing data, it is necessary to store all of the changes that happen to the disk while the snapshot is in use. The size of the delta file has a direct impact on the length of time it takes to delete the snapshot associated with the child disk.

Deleting the snapshot at the end of a snapshot-based backup/replication is an important consideration when looking at implementing new technologies. When a snapshot is in place, the changes that happen to a VM are kept in temporary Consolidate Helper snapshot files. If we imagine a VM hosting SQL Server, there will be a lot of I/O happening during the backup/replication. These changes in the Consolidate Helper snapshot files must be added back to the VM as the snapshot gets deleted, once it has been fully copied. This VMware knowledge-base article helps to explain why this problem occurs.

A quick look at any vendor who uses snapshot-based backup/replication technology will confirm that this problem still happens, despite improvements from VMware. Of course, the severity of the resulting VM “stun” depends on factors such as the type and configuration of storage used. It is possible, with a fair amount of forethought and planning, to iron out such issues.

Something else to consider when using traditional snapshot-based backup/replication is the Recovery Point Objective (RPO). As explained above, converting a snapshot into something that can be considered a backup/replica involves copying the snapshot to another location. Transferring a typically sized 100GB snapshot would take at least 15 minutes, and the full backup/replica process must also allow time for the snapshot to be created and committed, plus other tasks. This all leads to large intervals between recovery points and explains why snapshot-based technologies cannot achieve the near-zero RPO required for business-critical applications and VMs.
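To put rough numbers on that interval, here is a minimal back-of-the-envelope sketch. Only the 100GB snapshot size comes from the example above; the 1 Gbps link, 80% usable bandwidth, and 10-minute create/commit overhead are illustrative assumptions:

```python
# Rough estimate of the recovery-point interval for snapshot-based
# replication. Sizes and speeds below are illustrative assumptions.

def transfer_minutes(size_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Minutes to copy size_gb over a link at link_gbps, where only
    `efficiency` of the raw bandwidth is usable."""
    bits = size_gb * 8e9                             # GB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 60

copy_min = transfer_minutes(100, link_gbps=1.0)      # ~17 minutes for 100GB
create_commit_min = 10                               # assumed snapshot create + commit time

# The gap between recovery points can never be shorter than one full
# create -> copy -> commit cycle.
print(f"copy: {copy_min:.0f} min, "
      f"best-case recovery-point interval: {copy_min + create_commit_min:.0f} min")
```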


An additional consideration to highlight is that many snapshot-based technology vendors, including the household names, are moving away from utilizing VM snapshots and are instead pushing storage array integration to achieve their published RPO/RTO targets. There is a general ecosystem-wide admission that, while VM snapshot technology was a good temporary solution, ultimately there is a better way of doing things: one with less impact on the virtual machines, better recovery points, better recovery times, and so on.

Here at Zerto, we have taken an aggressive and automated approach with no impact on the production VMs whatsoever. As a hypervisor-based solution that copies write data between the hypervisor and storage, Zerto creates no impact on the production VMs being protected. Users are able to work as usual without suffering outages. By offering a near-synchronous, continuous stream of replicated data from the production site to the disaster recovery site, Zerto is able to deliver RPOs of just seconds. The continuous stream of data is used to update the replica, but is also stored in time order within the journal, providing write-order fidelity. Users can then select any point in time in the recent past as a failover point, unlike snapshot-based backup/replication technology, where you are limited to the widely spaced snapshot recovery points.
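To illustrate the journal idea in the abstract, here is a toy sketch of journal-style, write-ordered point-in-time selection. It is a conceptual model with hypothetical names, not Zerto’s actual implementation:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    timestamp: float   # seconds since epoch
    block: int         # disk block the write targeted
    data: bytes        # confirmed write payload

@dataclass
class Journal:
    """Toy write-ordered journal: appends confirmed writes in order,
    and can replay state up to any chosen point in time."""
    entries: list = field(default_factory=list)

    def record(self, entry: JournalEntry) -> None:
        # Writes arrive in commit order, so appending preserves write-order fidelity.
        self.entries.append(entry)

    def state_at(self, t: float) -> dict:
        """Rebuild block -> data as of time t (any point in time, not just snapshots)."""
        cutoff = bisect.bisect_right([e.timestamp for e in self.entries], t)
        disk = {}
        for e in self.entries[:cutoff]:
            disk[e.block] = e.data
        return disk

j = Journal()
j.record(JournalEntry(1.0, block=7, data=b"old"))
j.record(JournalEntry(2.0, block=7, data=b"new"))
print(j.state_at(1.5))   # {7: b'old'} -> recover to any chosen second
```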

To see the benefits of Zerto’s hypervisor-based solution for yourself click here for our free trial.

 

Zerto vs. Site Recovery Manager (SRM): How Do They Compare?
Tue, 26 Aug 2014 | Zerto

This is the second post in our “blogging from the VMworld show floor” series, covering the most common questions we get at VMworld. The most popular question today is, “How does Zerto Virtual Replication (ZVR) compare to VMware’s Site Recovery Manager (SRM)?”


The best answer to this question is actually a deep dive written by VCP and vExpert David Davis, “Comparison: Zerto vs. Site Recovery Manager (SRM) & vSphere Replication,” on his virtualizationsoftware.com site.

David goes through the impact virtualization has had on disaster recovery, and gets into a bit of the history there. Then he reviews the replication options VMware supports: vSphere Replication, which can be used with or without VMware’s Site Recovery Manager (SRM), or SRM with array-based replication products from a storage provider. He provides a clear chart of the differences between the products and their use cases.

He notes that vSphere Replication “replicates individual VMs within or across vSphere clusters. Certainly it is better than recovering from an offsite backup but its recovery time and scalability (even when combined with SRM) may not be enough to satisfy your needs.” He adds that while SRM overcomes many of these issues, it does not fully address the replication limitations that vSphere Replication presents. For this reason, many of the organizations using SRM choose to replicate using array-based replication, which is expensive and often involves configuration and support hurdles.

Zerto provides replication at the hypervisor level, along with automated failover and recovery. Administration is very straightforward with Zerto, since there is no need to coordinate with other components to perform successfully.


Read the full post here

Enjoy VMworld 2014 and make sure to come ask us questions at booth #723. Maybe your question will be the subject of tomorrow’s post!

 

 

Zerto vs vSphere Replication Comparison: How to Choose?
Mon, 25 Aug 2014 | Zerto

By Joshua Stenhouse, Zerto Senior Solutions Engineer

I’m blogging from the VMworld show floor this week, and one question that often comes up when discussing Zerto Virtual Replication (ZVR) is “How does ZVR compare against the built-in free vSphere Replication engine?”

Right off the bat, it should be noted that vSphere Replication on its own provides no central interface and no automation, recovery, or testing workflows; it requires VMware Site Recovery Manager for this functionality.

This blog post will walk you through some specific examples of Zerto and vSphere Replication, to show that Zerto Virtual Replication is a product built for enterprise replication and recovery, and why we feel vSphere Replication is better suited to replicating a small number of VMs from branch offices.

Zerto Virtual Replication provides continuous block-level replication of changes, giving a Recovery Point Objective of seconds that is consistent across a multi-VM application, using Virtual Protection Group technology. With built-in bandwidth optimization and compression, Zerto Virtual Replication automatically utilizes the maximum assigned bandwidth and compresses changes by 50%+ in order to maintain that consistently low RPO.

Here we can see all of the VMs in the CRMApp1 Virtual Protection Group are recoverable to 7 seconds ago from the Current RPO metric:

[Screenshot: the CRMApp1 Virtual Protection Group with a Current RPO of 7 seconds]

vSphere Replication doesn’t replicate continuously, nor does it replicate on a schedule like traditional replication technologies. It replicates using a time slider that can be set between 15 minutes and 24 hours. This controls how often it will attempt to replicate, subject to the data change rate (i.e., effectively at random).


Be careful to ensure that this slider leaves a sufficient gap for the changed data to replicate; otherwise the replication will fail with an RPO violation. To work out how often the replication will occur, check out this official VMware blog:
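As a rough way to sanity-check a slider setting against your environment, you can compare one interval’s worth of changed data with what the link can ship in the same interval. This sketch uses assumed numbers and is not official VMware sizing guidance:

```python
# Rough feasibility check for a replication-interval (RPO slider) setting.
# All inputs are illustrative assumptions.

def can_meet_interval(interval_min: float,
                      change_rate_gb_per_hour: float,
                      bandwidth_mbps: float,
                      efficiency: float = 0.8) -> bool:
    """True if one interval's worth of changed data can be shipped
    within that same interval, with `efficiency` as link headroom."""
    changed_gb = change_rate_gb_per_hour * interval_min / 60
    ship_gb_per_min = bandwidth_mbps * efficiency * 60 / 8000   # Mbps -> GB/min
    return changed_gb <= ship_gb_per_min * interval_min

# Example: 15-minute slider, 20 GB/hour of changes, 100 Mbps WAN
print(can_meet_interval(15, 20, 100))   # True: ~5 GB changed vs ~9 GB shippable
```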

vSphere Replication has no consistency grouping of VMs and no bandwidth optimization or compression. This means vSphere Replication cannot replicate changes any faster, even if the bandwidth is available, and it requires separate bandwidth compression appliances for compressing the replication traffic. Each replication interval takes a random amount of time, on a per-VM basis, depending on the data change rate, meaning vSphere Replication always runs at random intervals with no way to control when it is or isn’t replicating.

So what does this fundamental difference mean in the real world?

I will demonstrate using a small environment in my demo lab consisting of 9 VMs that form 3 multi-VM applications, totaling 90GB of data replicating over a 1 Gbps link. Each application is a relatively basic CRM system with a database VM, fileserver VM and webserver VM that I need replicated to the same point in time. Without a consistent point in time, I could have a database referencing files that do not exist and a webserver containing logs pertaining to activities that are not in the database. Even with just this basic application, you can appreciate this is not a good situation.

Using vSphere Replication, if I protect the 9 VMs that form my 3 applications on a 1-hour RPO and leave it running for a day, what point in time am I able to fail over my applications to?


The answer is: I have no idea. vSphere Replication is giving me completely random points in time on a per-VM basis, thus rendering the recovery of my enterprise application potentially useless. As a VMware admin I would really want to avoid this for anything other than ROBO VMs (which Zerto Virtual Replication is also capable of protecting). In addition, I have no idea when vSphere Replication will attempt to replicate, and therefore I am certainly not going to enable VSS integration when I have no clue as to when it will attempt to quiesce my production applications!

This example is just from my demo lab, running over a LAN with no contention on the network link. What we have seen from customers trying to run vSphere Replication in production over real WAN links is that the lack of built-in compression stretches the point-in-time difference between VMs to many hours, or replication simply fails to complete at all, making the problem significantly worse.

If we take the exact same environment and protect it with Zerto Virtual Replication, leaving it for the same period of time, what point in time can I fail over to across my applications?

[Screenshot: the Actual RPO column showing a few seconds, consistent across all VMs]

Looking at the Actual RPO column above, we see the answer: a few seconds ago, at a consistent point in time across all of the VMs that form each application.

In this, the most basic of environments, which of the two replication engines do I want to rely on as a VMware admin? From my 10 years of IT admin experience, it is an absolute no-brainer: Zerto Virtual Replication. I can also schedule when I take VSS points in time, and I can fail over to any point in time, every few seconds for crash consistency, up to 5 days in the past.

This fundamental difference is the main reason why Zerto Virtual Replication is scalable to thousands of VMs for enterprise datacenter-to-datacenter replication. vSphere Replication is a ROBO feature for a handful of VMs created in order for VMware to check a box against an increasingly threatening Hyper-V feature set.

Don’t just take my word for it; feel free to validate my opinion and results by testing out Zerto Virtual Replication against vSphere Replication yourself with our free trial.

Also, make sure to check out this case study from Leadway Pensure. They bought VMware Site Recovery Manager to use with vSphere Replication and replaced it with Zerto due to the fundamental difference:

The points made above are also worth considering when comparing vCHS-DR against Zerto Virtual Replication for replicating to a cloud provider, given that vCHS-DR runs on the vSphere Replication engine. Thanks for reading – and come meet me at booth #723!

 

VMworld 2014 Social Events Map: Where You Need to Be!
Tue, 19 Aug 2014 | Zerto

We understand your dilemma: so many VMworld events and so little time. To help you out, we created this infographic map showing the CAN’T MISS social events of VMworld 2014 and where they are in San Francisco, so you don’t get lost!

Visit us at booth #723 to pick up your t-shirt and enter to win more great prizes.

Take a look below and enjoy. We look forward to seeing you next week!

[Infographic: VMworld 2014 Social Events Map]

Should you consider vCHS-DR or a Zerto-powered Cloud Service Provider for DRaaS?
Tue, 05 Aug 2014 | Zerto

By Will Lin, Zerto Cloud Solutions Engineer

If you have heard of Zerto’s award-winning Zerto Virtual Replication (ZVR), you know that ZVR is used in hundreds of enterprises as the preferred disaster recovery (DR) and virtual replication solution. Enterprises use ZVR to easily replicate virtual machines from their production data center to an alternate DR data center, achieving recovery point objectives (RPOs) of seconds and recovery time objectives (RTOs) of minutes. ZVR is completely storage agnostic and works with different vSphere versions (including legacy vSphere 4.x).

What you may not have known, is that there are currently over 130 cloud service providers (CSPs) around the world, who are providing DR as a Service (DRaaS) powered by Zerto.  These CSPs (which include well-known names like Colt, Terremark, Kelway, Veristor, Peak10, iLand and Bluelock) are using ZVR to replicate hosted workloads between their own IaaS cloud data centers, as well as providing a DR cloud target for their enterprise customers.


Recently, VMware announced the availability of a DR offering in their multi-tenant public cloud, vCloud Hybrid Service (vCHS-DR).  vCHS-DR allows customers running VMware vSphere to make live replicas of their running VMs to one of VMware’s vCHS data centers.

If you are an organization that’s exploring the possibility of using an external cloud service provider as your DR target, should you consider vCHS-DR or one of the Zerto-powered CSPs?

Well, judge for yourself. Besides pricing (hint: Zerto-powered CSPs are extremely price-competitive, and vCHS has been noted to be pricey, as you can read here), there are some significant technical differences between vCHS-DR and a Zerto-powered CSP. I’ll just list a few obvious ones here:

DR Automation and Orchestration:

  • vCHS-DR uses vSphere Replication (VR) without Site Recovery Manager (SRM) to deliver its DR service. In fact, vCHS-DR isn’t compatible with SRM. Moreover, vCHS-DR uses a version of vSphere Replication that is incompatible with the production version of VR, so any recovery plans you may already have will need to be completely redone. The vCHS-DR VR is just a replication mechanism that creates a VM replica at the target site. Without SRM, there is no automation or orchestration to coordinate your actual DR failover. With VR, you have to individually configure each VM for replication, each with its own schedule and RPO. RPO is unpredictable due to the snapshot approach vCHS-DR uses: the minimum advertised RPO is 15 minutes, which doesn’t match what you will normally see in production. It will usually be much higher unless snapshots are run every 15 minutes, which causes considerable slow-down in the production datacenter. Also, there is no ability to execute a non-disruptive DR test without actually failing over the VM. Without automation, there is no guarantee of a consistent RTO, and recovering just a few VMs requires several labor-intensive hours. Even with the expected new features, industry experts agree that vCHS doesn’t meet the requirements for most organizations.
  • ZVR is both a per-VM replication mechanism and a full-featured DR automation/orchestration solution. RPO is measured in seconds and RTO is usually measured in minutes, even with heavy I/O workloads. With ZVR, you can pre-configure protection groups, protect multiple VMs together in a consistency group, re-IP the failed-over workloads, customize the boot order for failed-over VMs, and execute failback. You can also execute non-disruptive failover tests any time you want, without interrupting production workloads and without a break in replication or RPO. With ZVR, you can recover an entire site within minutes just by pushing the red button.


Coverage:

  • Currently there are six vCHS datacenters, with expected expansion to 10.
  • There are over 130 active cloud service providers offering Cloud DR powered by Zerto Virtual Replication. This encompasses global coverage across hundreds of datacenters.
  • All Zerto-powered cloud service providers offer production-level support and SLAs.

Multiple Point-In-Time Recovery:

  • Even though VR 5.5 has the ability to recover to multiple point-in-time instances, this ability is disabled when using vCHS-DR. This is probably because VR needs to use VMware snapshots to create these multiple PIT recovery points. If you’re a VMware administrator and have seen/used multiple chained VM snapshots, you know it’s something that no one (including vCHS admins) wants to do!
  • ZVR is a CDP-like replication solution, with the ability to fail over to any point in time within the journal. The journal is configurable from 1 hour to 5 days. Recovering to a particular point in time with ZVR is literally as simple as operating a TiVo/DVR.

Failback:

  • You cannot fail back with vCHS-DR! Of course, that’s not what vCHS marketing material will tell you. Imagine that you are a vCHS-DR customer, you actually suffered a disaster, and you failed your workloads over to vCHS. In order to get the workloads back to your original production site, you may actually incur a much longer outage than the original disaster itself!

This is how vCHS-DR “failback” works:

  • Power off all production VMs at the vCHS cloud side.  Outage starts here.  End-users will not have access to the production application service during this outage.
  • Go to the original source site and manually rename or delete the original VMs from vCenter inventory.
  • Manually copy the VMs from vCHS back to the original source site using vCloud Connector (basically a vCD export and then a vSphere import). This is essentially a FULL COPY! [Note: Have you ever tried to copy large VMDKs across the WAN? Both processes are incredibly time-consuming.]
  • Wait for the copy over the WAN.
  • Once the VMs are copied back to the original source site, manually edit the VM network settings to connect to original source site port groups.
  • Manually power on VMs at the original source site.  Outage now ends.
  • Manually reconfigure and restart replication back to cloud.
  • All of the above steps are manual. Remember, there is no automation of any of these steps with vCHS-DR.
  • With ZVR, reverse replication and failing back a protection group literally takes a few mouse clicks and is fully automated. If the VMDKs from the protected VMs are still intact at the original source site, ZVR will intelligently use those VMDKs as pre-seed targets. This means that when ZVR starts the reverse replication, only the changes to those VMDKs are transferred, not the entire data set of the VMDKs. This dramatically reduces the replication time and WAN bandwidth utilization; the sketch after this list puts rough numbers on the difference. Once the two sites are in sync, the actual failback process will only take minutes. Done and done.
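To make the cost of that full copy concrete, here is a minimal back-of-the-envelope sketch comparing a full-VMDK failback copy with a changes-only resync. The total size, change fraction, link speed, and efficiency figures are all illustrative assumptions, not measurements:

```python
# Compare failback transfer time: full copy of every VMDK vs. shipping only
# the blocks that changed while running at the DR site. All numbers are assumptions.

def hours_to_transfer(gb: float, wan_mbps: float, efficiency: float = 0.8) -> float:
    gbits = gb * 8                                   # GB -> gigabits
    return gbits / (wan_mbps / 1000 * efficiency) / 3600

vmdk_total_gb = 2000          # assumed total protected VMDK size
changed_fraction = 0.05       # assumed 5% of blocks changed during the outage
wan_mbps = 200                # assumed WAN link

full_copy = hours_to_transfer(vmdk_total_gb, wan_mbps)
delta_only = hours_to_transfer(vmdk_total_gb * changed_fraction, wan_mbps)

print(f"full copy : {full_copy:.1f} h of WAN transfer")   # ~27.8 h
print(f"delta only: {delta_only:.1f} h")                  # ~1.4 h
```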

If you are considering DRaaS, I encourage you to check out one of the Zerto-powered cloud service provider partners in order to enjoy the industry’s most robust, full-featured disaster recovery and replication solution.

 

vDM Spotlight on Zerto
Wed, 30 Jul 2014 | Zerto

Recently, our own Shannon Snowden got to sit down with Eric Wright from Virtual Design Master (vDM).

It was an incredibly fun conversation. Shannon and Eric talked about Zerto, our history, and many aspects of the Zerto product: disaster recovery, failover testing, working with cloud service providers, and much more (we don’t want to ruin it for you).

Just to fill in for Shannon: during VMworld, we can be found at booth 723.

Eric talked about the current vDM challenge contest and what they are looking for from those competing.

Virtual Design Master is an online reality show challenging virtualization professionals to come up with innovative infrastructure designs. It is an exciting contest that lasts a little over a month, with competitors receiving challenges to complete. These challenges get judged, and the approved competitors move on to the next round. Learn more about the challenges here.

Stay tuned tomorrow, July 31st, when the 3rd challenge will be judged.

Among the prizes that can be won is Zerto’s red box of awesome.

Watch the video below, we hope you enjoy it as much as we did making it.

 

Zerto’s Mariah West Talks Disaster Recovery with CRN
Mon, 21 Jul 2014 | Zerto

Zerto’s own Mariah West, Director of Partner Marketing and recently named to CRN’s Women of the Channel Power 100, sat down with CRN to talk a little bit about Zerto.

The interview covered the past year for Zerto, which has seen significant growth.

Mariah also shared some of her wisdom regarding opportunities for the channel around disaster recovery.

For the full interview and to learn from one of CRN’s Women of the Channel Power 100, watch below!

 

Tips for VMware Disaster Recovery
Tue, 08 Jul 2014 | Zerto

By Samuel Calder, Zerto Pre-Sales Engineer

What is the best way to replicate data in VMware? It is always best to take the direct route and go straight to the source. Avoid detours and get your data from point A to point B by performing hypervisor-based replication.

There are many attractive reasons for moving from a physical environment to a virtual environment. A few advantages are:

  • Resource optimization
  • Business continuity
  • Centralized control and management

If we take a look at traditional methods of replication, we can see the steps required and the roadblocks to overcome to replicate effectively in a virtual environment.


Array-based replication is performed at the storage layer, which limits granularity and replicates the whole LUN

  • Requires frequent coordination between the storage, network and VMware teams
  • Presents architecture design challenges to meet the storage requirements between both sites

Snapshot-based replication often requires significant local storage overhead and impacts the production environment

  • VMs may freeze for long periods, leaving applications inaccessible to users
  • VM snapshots for replication can conflict with backup products that also require snapshots

Guest-based replication goes inside the guest operating system of each VM and requires agents, which use OS and VM resources

  • Performance of the protected VMs is impacted because of the increase in disk I/O
  • Very difficult to manage and scale due to per-guest agents

The methods above all require multiple components that need to be coordinated to get the job done. Each component adds a level of complexity that takes away from at least one of the advantages of virtualizing: resource optimization, business continuity, and/or centralized control and management.

Now let’s take a look at replicating at the source via the hypervisor.


  • No impact on production VMs, and replication can be managed by the virtual team rather than multiple teams needing to coordinate efforts
  • Snapshots are not used; block-level replication is performed in near-real time, creating significantly lower RPOs, down to single-digit seconds
  • The underlying storage becomes a non-factor, so you have the flexibility to make the decisions the business demands

Why not use snapshots? Many snapshot-based technologies require database administration, such as quiescing the database prior to the snapshot being taken, to avoid issues such as database inconsistency and failures. By performing replication at the hypervisor, there is no need to perform database administration to meet the demands of a snapshot; there are no snapshots. Also, snapshots only replicate to a point in time that is some fifteen minutes ago, at best.

With hypervisor-based replication your environment retains crash consistency; continuous replication via the hypervisor layer maintains write-order fidelity, ensuring the disks in the target site are always crash consistent. Only confirmed writes are replicated, meaning data is replicated only when the OS has confirmed the change has been written to the local disk. This also means that you are able to go back to a point in time only seconds ago, minimizing the amount of data loss.
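As a conceptual illustration of “only confirmed writes, replicated in order,” here is a toy sketch. It models the idea with a FIFO queue and hypothetical function names; it is not Zerto’s actual code:

```python
import queue
import threading

# Toy model of confirmed-write replication: writes enter a FIFO only after the
# OS acknowledges them, and the sender drains the FIFO strictly in order, so
# the target disk is always crash consistent. Purely illustrative.

confirmed_writes = queue.Queue()   # FIFO preserves write order

def on_write_acknowledged(block: int, data: bytes) -> None:
    """Called only after the local disk confirms the write."""
    confirmed_writes.put((block, data))

def replicate_to_target(target_disk: dict, stop: threading.Event) -> None:
    """Apply confirmed writes to the target in arrival order."""
    while not stop.is_set():
        try:
            block, data = confirmed_writes.get(timeout=0.5)
        except queue.Empty:
            continue
        target_disk[block] = data   # in-order apply => crash-consistent target
```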


By performing replication at the hypervisor layer, your replication strategy allows you to take full advantage of the benefits of virtualization without sacrificing your business continuity process.

 

Plug and Play Disaster Recovery by Zerto
Thu, 26 Jun 2014 | Zerto

By Samuel Calder, Zerto Pre-Sales Engineer

Starting as a pre-sales engineer at Zerto, the first bit of business is to get Zerto up and running in your lab.

My first thought was: given that Zerto’s specialty is disaster recovery for virtual environments and requires replication, how complex would this be, and how many moving parts would there be? The answer: it is plug and play.

I log into the Zerto support portal, download the latest version of the replication software (at the time, version 3.1 update 2), and begin the installation. The following details the installation process.

Install the software on a Windows Server VM (version 2003 and above)… easy enough. I deployed a Win2k12 VM from an OVF, and fifteen minutes later I am consoled in and running the Zerto installer. Three minutes later the installer completes; Zerto is now installed.

(Side note, you will need one VM per vCenter server with the Zerto software installed. Generally, there is one at the production site and one at the disaster recovery site.)


 

Next, log into vSphere, click on the Zerto tab, and enter the license key. Done.


 

Final installation step: install the replication appliances. Navigate to the Setup tab in the Zerto GUI (prior to the 3.5 release this was the VRA tab) and install a VRA for each host you will be replicating to and/or from. To do this, click the checkbox next to the host, choose Actions, then Install, and complete the network configuration settings. Once complete, a new VM, the Virtual Replication Appliance, automatically deploys in the environment.


Zerto Virtual Replication is now in place and ready to be used. Simply click the New VPG button, add the VMs you want protected in this group, and choose the target site for replication. Save, and the initial sync begins. Once complete, Zerto will be protecting the specified VMs.


The installation is quick and painless because Zerto leverages the existing infrastructure already in place… truly making Zerto a plug and play solution.

But don’t just take my word for it; try it out for yourself by clicking here for our free trial!

 

Companies who are Masters of Disaster Recovery
Thu, 19 Jun 2014 | Zerto

So often we talk about becoming a Master of Disaster, and we are constantly encouraging everyone out there to become masters. For a long time, IT departments and organizations have felt that mastering disaster recovery was out of their reach. We wanted to take this opportunity to highlight some companies who do disaster recovery the right way.

Below you will find some companies who have taken a proactive approach, developed strong BC/DR strategies and are ready for whatever nature or humans have to throw at their mission-critical applications.

Woodforest National Bank: In 2008, when Hurricane Ike passed through Houston and its surrounding areas, Woodforest’s primary datacenter lost power and remained on generator power for 10 days after the storm ended. Fortunately, prior to the hurricane, Woodforest had migrated all customer-facing applications to a disaster recovery site.

After that experience, Woodforest transitioned from disaster recovery to disaster avoidance preparedness. Each June, all production applications are pre-emptively “failed-over” to a secondary site, with a return to the primary site by mid- to late-October, avoiding the hurricane season. If a hurricane strikes the area, all of the systems are already protected.

Kingfisher IT: Kingfisher needed a solution that would meet their aggressive service levels without impacting their production applications. The infrastructure supports almost 1,000 retail outlets across Europe; any impact to the infrastructure can hurt sales. They have several critical applications that need to be available to run the business.

Kingfisher now protects their key applications, while simplifying their migration plans and dramatically reducing operational and capital costs.

EUKOR Car Carriers Inc: EUKOR annually transports around 4 million cars worldwide, utilizing a global network of offices and agents. Their customers include Hyundai Motors, KIA Motors, Mercedes-Benz, BMW, Audi, Volkswagen, Porsche, General Motors, Ford and many others. EUKOR delivers a dynamic service with lots of moving parts across the globe.
EUKOR’s main system is located in an IDC in Korea. However, the service is updated continuously from 10 other offices located in eight countries, and by numerous agents around the world. Therefore, the reliability of the whole system is extremely important, as it directly influences core business processes. EUKOR needed a solution flexible and agile enough to align with its IT strategy.

The key to EUKOR’s BC/DR strategy was the service level agreement they needed to support.  Now with Zerto Virtual Replication (ZVR), EUKOR is meeting their service levels easily and with confidence, exceeding their IT infrastructure requirements while delivering superior customer service.

SGS Turkey: SGS needed to ensure their virtual environment was protected, as unavailability directly impacts their customers. The IT team had a growing concern that it would only be a matter of time before human error, hardware failure, natural disaster, or another adverse condition stopped a perfectly running IT operation. SGS Turkey is highly dependent on technology in its operations; any downtime results in a significant financial impact.

Now they replicate the same amount of data in half the time when compared to a backup-based replication solution. They are protecting Microsoft SQL Server, file servers and other applications. They are delivering aggressive service levels, with recovery point objectives of seconds and recovery time objectives of minutes. They are replicating between multiple sites and are planning to consolidate operations further to ensure data and application availability for their global operations.

Leadway Pensure: Leadway Pensure is a pension fund management company that is 95 percent virtualized. It embarked on its virtualization journey to conserve space and reduce power and cooling costs, but most importantly, to simplify BC/DR. With its previous approach, the initial synchronization took too long, and due to a lack of bandwidth optimization, replication could not maintain a consistent RPO in line with the requirements of the business.

Faced with these issues, they turned to Zerto, which Uzah Chinedu, Leadway Pensure’s Infrastructure Manager, called “a miracle for BC/DR.”

Leadway Pensure now meets their aggressive goals - RPOs of seconds and RTOs of minutes. They are easily able to recover Microsoft SQL Server databases, Exchange, file servers and other applications well within their SLA.

Want to join these organizations as Masters of Disaster Recovery? Sign up for our monthly Master of Disaster webinar and join the ranks!

 

Beers with your Peers, and World Cup Football http://www.zerto.com/blog/general/beers-peers-world-cup-football/ http://www.zerto.com/blog/general/beers-peers-world-cup-football/#comments Sun, 08 Jun 2014 11:37:38 +0000 Zerto http://www.zerto.com/?p=7787 It is that time again: the once-every-four-years event we have all been waiting for. No, not a leap year, the World Cup!

If, like us, you cannot make it to Brazil for the tournament, join the Zerto EMEA team for some food, football and of course, beer. Zerto will be hosting 8 match-day events (four in the UK, two in Germany and two in Italy). Register below to join us.

To sweeten the deal, if you register now, Zerto is offering the chance to win a flat-screen TV to one person who correctly picks the World Cup champions. Register, pick the winning team and you could win big!

 

Details for the events are below (in order of date):

June 16th- Frankfurt, Germany – Germany vs Portugal (game starts at 6pm (CET) but hosting from 5pm).

Register Here

June 19th – London, UK – Uruguay vs England (game starts at 8pm but hosting from 7pm)

Register Here

June 19th – Manchester, UK – Uruguay vs England (game starts at 8pm but hosting from 7pm) – we are sponsoring the Manchester vBeers before the event, which starts at 5pm in the same venue!

Register Here

June 20th – Milan, Italy – Italy vs Costa Rica (game starts at 6pm (CET) but hosting from 5pm)

Register Here

June 24th – London, UK – Costa Rica vs England (game starts at 5pm but hosting from 4pm)

Register Here

June 24th – Manchester, UK – Costa Rica vs England (game starts at 5pm but hosting from 4pm)

Register Here

June 24th – Milan, Italy – Italy vs Uruguay (game starts at 6pm (CET) but hosting from 5pm)

Register Here

June 26th – Neuss, Germany – Germany vs USA (game starts at 6pm (CET) but hosting from 5pm)

Register Here

We look forward to seeing you there, cheering on your team with beer and food in hand!

GO Team [fill in the blank] !


A Brief Introduction to ZVR 3.5 http://www.zerto.com/blog/general/brief-introduction-zvr-3-5/ http://www.zerto.com/blog/general/brief-introduction-zvr-3-5/#comments Tue, 03 Jun 2014 11:50:19 +0000 Zerto http://www.zerto.com/?p=7617 We are excited to announce that Zerto Virtual Replication (ZVR) 3.5 is now available! There is a lot to learn about this version, but we want to give you a brief look at what is new in ZVR 3.5.

The big news in this release is the Offsite Backup paradigm, which provides a brand new way to protect your data. This new feature increases the usefulness of data that is already being replicated for business continuity and disaster recovery (BC/DR). A duplicate backup infrastructure is no longer required, significantly reducing the overhead on the production environment.


With Zerto Offsite Backup you can:

  • Automate retention management, ensuring that the following is available at all times per Virtual Protection Group (VPG): daily copies for the last week, weekly copies for the last month, and monthly copies for the last year
  • Configure backup start times and the target repository
  • Ensure backup or disaster recovery operations are executed as needed
  • Review backups and retention times to ensure business requirements are met
  • Configure reports to be sent on a daily or weekly basis to provide complete backup information

Other 3.5 features include:

  • Clearer status and activity reporting for VPGs
  • The ability to change datastores quickly without impacting replication
  • Enhanced alert handling, including applicable problem and resolution information
  • A rich set of APIs to automate management (a minimal sketch follows below)
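
To give a feel for that last bullet, here is a minimal PowerShell sketch of API-driven automation. The host name is hypothetical, and the endpoint and property names are assumptions based on the ZVM v1 REST API and may differ by ZVR version, so treat this as a starting point rather than a reference:

```powershell
# Minimal sketch: list VPGs and their current RPO via the ZVM REST API.
# The host name is hypothetical; endpoint and property names are
# assumptions based on the v1 API and may differ by ZVR version.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }  # lab only

$baseUrl = "https://zvm.lab.local:9669/v1"   # hypothetical ZVM address
$cred    = Get-Credential                    # vCenter-style credentials

# Open a session; the token comes back in the x-zerto-session header
$pair  = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$resp  = Invoke-WebRequest -Uri "$baseUrl/session/add" -Method Post `
                           -Headers @{ Authorization = "Basic $basic" }
$token = $resp.Headers["x-zerto-session"]

# Pull every VPG and show its name, status and actual RPO
Invoke-RestMethod -Uri "$baseUrl/vpgs" -Headers @{ "x-zerto-session" = $token } |
    Select-Object VpgName, Status, ActualRPO |
    Format-Table -AutoSize
```

A loop over the same output could, for example, raise an alert whenever a VPG’s actual RPO drifts above its target.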

To learn more about what else is new in Zerto 3.5 you can download our data sheet here, or view the video below.

Replicating Raw Device Mappings with Zerto http://www.zerto.com/blog/general/replicating-raw-device-mappings-with-zerto/ http://www.zerto.com/blog/general/replicating-raw-device-mappings-with-zerto/#comments Wed, 28 May 2014 10:26:51 +0000 Zerto http://www.zerto.com/?p=7541 By Joshua Stenhouse – Zerto Senior Solutions Engineer

I’m often asked if Zerto Virtual Replication supports Raw Device Mappings (RDMs) in VMware, and I can usually tell the person asking expects me to say no, but the answer is actually a resounding YES! This is always followed by great responses such as “amazing” or “I don’t believe you”. I then list exactly what we do support, and all becomes clear:

1. Replication of both physical RDMs (pRDM) and virtual RDMs (vRDM)
2. Replication from RDM to thin or thick VMDK, or to a pre-provisioned RDM in the target site
3. Replication from pRDM to vRDM and vice versa
4. Move, failover and failback
5. Failover to VMDK (Virtual Machine Disk), then failback to RDM
6. Failover testing with no shutdown of the protected VMs in production and no break in the replication
7. Failover and failover testing to previous points in time, using a journal of changes in the recovery site
8. Consistency between VMDKs and RDMs in the same VM and across a VPG (Virtual Protection Group)
9. Cloning target RDMs to VMs in the inventory, with the RDMs converted to VMDKs
10. Migration from RDM to VMDK, with failback configured to VMDK, removing the source RDM

This is made possible because Zerto replicates using software in the hypervisor, which makes Zerto Virtual Replication both storage agnostic and agnostic to VMFS, NFS, VMDKs, pRDMs and vRDMs. Here you can see the Zerto selection screen for replication to and from VMware RDMs, within the Configure Volume section for a protected VM:

[Screenshot: Configure Volume]
The default behaviour of Zerto is to replicate an RDM to a thick VMDK, but this is easily changed to a thin VMDK or a pre-provisioned pRDM/vRDM in the target site when protecting the VM, using the screen shown above.

All of the supported RDM configurations in VMware really do allow you to be very granular in how you configure protection for your RDM-based VMs.
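
As a practical aside, before building VPGs around RDM-based VMs you may want to inventory where the RDMs actually live. A quick PowerCLI sketch (the vCenter name is hypothetical) can list every pRDM and vRDM in the environment:

```powershell
# Minimal sketch: list every physical (pRDM) and virtual (vRDM) raw
# device mapping in the environment. Requires VMware PowerCLI; the
# vCenter name is hypothetical.
Connect-VIServer -Server "vcenter.lab.local"

Get-VM | Get-HardDisk -DiskType RawPhysical, RawVirtual |
    Select-Object @{ N = "VM"; E = { $_.Parent.Name } },
                  Name, DiskType, CapacityGB, ScsiCanonicalName |
    Sort-Object VM |
    Format-Table -AutoSize
```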

If you have any unresolved questions regarding replicating Raw Device Mappings with Zerto please feel free to add a comment below.

Cisco Live 2014: Zerto Interview with Cirrity http://www.zerto.com/blog/general/cisco-live-2014-zerto-interview-with-cirrity/ http://www.zerto.com/blog/general/cisco-live-2014-zerto-interview-with-cirrity/#comments Thu, 22 May 2014 07:33:24 +0000 Zerto http://www.zerto.com/?p=7528 During Cisco Live US 2014 in San Francisco, we interviewed Cirrity. Watch below and hear about their recent big news!

Cisco Live 2014: Zerto Interview with Peak 10 http://www.zerto.com/blog/general/cisco-live-2014-zerto-interview-with-peak10/ http://www.zerto.com/blog/general/cisco-live-2014-zerto-interview-with-peak10/#comments Thu, 22 May 2014 07:29:40 +0000 Zerto http://www.zerto.com/?p=7527 At Cisco Live US 2014 in San Francisco, we got to sit down with Peak 10. Hear about their new certification and enjoy!

Change the Game: Solving the Disaster Recovery Challenge http://www.zerto.com/blog/general/change-game-solving-disaster-recovery-challenge/ http://www.zerto.com/blog/general/change-game-solving-disaster-recovery-challenge/#comments Wed, 21 May 2014 08:15:28 +0000 Zerto http://www.zerto.com/?p=7495 By Eric Hannah, Director of Sales – Central U.S. 

“Process and protocol are valid for when things are happening as they should. In the event of a disaster, process goes out the window and we are forced to change our approach to find a solution.”

I recently had the pleasure of listening to Lt. General Russel L. Honoré, USA (Ret.), commander of Joint Task Force Katrina and known as the Global Preparedness Authority, speak at an event about leadership in a disaster. Honoré’s leadership was crucial in the recovery from Hurricane Katrina, and the anecdotes he shared were both inspiring and entertaining. The quote above resonates with me as I think about today’s IT marketplace and the value of changing the game. How so?

For years, IT has faced a lingering challenge: what do we do in the event of a disaster? How do we recover and get our business back online? I’ve heard everything from “we keep our fingers crossed,” to “we update our resumes,” and “we have a disaster recovery (DR) process in place with a few products… but have not proven it works completely.” And there is that word “plan,” which implies things should work. But the nature of a disaster is that things do not go as planned.

Zerto solves the problem of data loss and application downtime that negatively impacts a company. It also gives the IT team tasked with solving this problem confidence in a solution that works. Zerto delivers this confidence by changing the approach to recovering from a disaster.

For years, the IT market has invested in ways to get data off site and has found multiple solutions. However, when disaster strikes, the protocol for delivering IT to the business abruptly changes, and getting applications back up becomes ‘panic time’.

Avoiding this panic is the key to Zerto’s approach. Making disaster recovery testing automated and simple is the only way to ensure that there will be no panic when an actual outage happens. Watch this video to see how simple it is to automate DR testing with Zerto:

Zerto is purpose-built for a virtual environment and represents a new approach to traditional data replication solutions. Our clients’ biggest challenge is getting the VMware team, the storage team, and technology directors to embrace a new approach to disaster recovery together. If you and your team embrace the idea of a single solution protecting the application and its data, with full automation from the top down (instead of the traditional “data replicated one way – machines recovered another way” approach), you’ll find BC/DR does not have to be a hectic situation where all processes go out the window. You’ll see you’ve found a solution to the problem, one built with a tried-and-tested approach for those times when disaster strikes, and your business will thank you for it.

Hybrid Cloud: Challenge Accepted!! http://www.zerto.com/blog/general/zertos-role-hybrid-cloud/ http://www.zerto.com/blog/general/zertos-role-hybrid-cloud/#comments Wed, 14 May 2014 12:00:22 +0000 Zerto http://www.zerto.com/?p=7465

  • Thinking about scaling your IT operations using a hybrid cloud model?
  • Wondering how you can migrate workloads into cloud?
  • Curious about making use of Cisco InterCloud infrastructure?

Hybrid cloud use cases enable IT to utilize on-premise and cloud-based infrastructure seamlessly for cost reduction, bursting and disaster recovery. Zerto will be at Cisco Live! next week, May 19-22 talking about how to use Zerto’s cloud fabric technology to solve the challenges of workload mobility, migration and data protection in hybrid clouds.

Learn more about the cloud fabric, data recovery, and Zerto’s interoperability with Cisco InterCloud at booth 222 – in the meantime watch Gil Levonai, Zerto’s VP marketing and products, explain how Zerto can help solve your hybrid cloud challenges:

Best Practices for Oracle Database Recovery http://www.zerto.com/blog/general/best-practices-oracle-database-recovery/ http://www.zerto.com/blog/general/best-practices-oracle-database-recovery/#comments Tue, 22 Apr 2014 11:56:06 +0000 Zerto http://www.zerto.com/?p=7234

Oracle makes one of the most widely used database management systems in the world, supporting operations for 98 percent of Fortune 500 companies. If a database is unavailable, revenue, productivity and reputation can all be damaged. Traditional replication and recovery for databases is a high-risk endeavor involving manual processes and close supervision by a database administrator (DBA).

So how can companies effectively protect against outages for high-transaction applications?

The Zerto solution offers three advantages over traditional replication methods for Oracle database recovery in virtualized environments:

Application Consistency

  • Zerto Virtual Replication (ZVR) delivers application-consistent checkpoints with support for Oracle’s hot backup mode (a brief sketch follows these lists)
  • Simple, centralized management with a single solution for both the database and application

Simple, Automated Testing

  • Automated disaster recovery testing allows verification that BC/DR processes are in place and will execute properly when needed. During testing, the Oracle database is still processing, users are still working, and changes are replicated without interruption
  • A failover can be executed even during a test, in the off chance that an incident occurs that requires a failover action

Robust Replication

  • Designed to excel under even the most extreme rates of change, ZVR performs continuous, block-level replication, capturing all I/O even during periods with the highest number of transactions. This methodology supports aggressive service levels and minimizes the impact on end-user productivity
  • Provides the lowest possible application data loss with Zerto Virtual Protection Groups (VPGs), which ensure crash consistency and write-order fidelity across all VMs
  • Enables significant cost reduction by replicating only VMs. With more exact replication, a company can reduce the storage and bandwidth required to support the BC/DR process
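
For readers unfamiliar with hot backup mode, the hedged sketch below shows the underlying Oracle commands driven from PowerShell. The connect string is hypothetical, and ZVR’s checkpoint integration performs this kind of orchestration for you; the script is purely illustrative:

```powershell
# Minimal sketch: wrapping an application-consistent point with Oracle's
# hot backup mode by driving sqlplus from PowerShell. The connect string
# is hypothetical; ZVR's own integration handles this for you.
$connect = "sys/password@ORCL as sysdba"   # hypothetical credentials

# BEGIN BACKUP freezes datafile headers so blocks captured mid-write
# remain recoverable; normal transactions continue while it is set.
@"
ALTER DATABASE BEGIN BACKUP;
EXIT;
"@ | & sqlplus -S $connect

# ... mark the application-consistent checkpoint here ...

@"
ALTER DATABASE END BACKUP;
EXIT;
"@ | & sqlplus -S $connect
```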

Since many of the largest businesses in the world depend on Oracle, they cannot afford to have their database become unavailable and impact their revenue. With the benefits of ZVR, these enterprises have a consistent disaster recovery process spanning their entire virtual environment, which ensures reliable application and information availability.

Download the data sheet on Zerto Virtual Replication for Oracle to learn more.

(The image was taken from Don Burleson’s blog “What type of DBA are you” http://www.dba-oracle.com/art_dbazine_what_kind_dba.htm ) 

 

Cloud Disaster Recovery: Getting Started http://www.zerto.com/blog/general/cloud-disaster-recovery-strategy/ http://www.zerto.com/blog/general/cloud-disaster-recovery-strategy/#comments Wed, 16 Apr 2014 13:15:34 +0000 Zerto http://www.zerto.com/?p=7032 Enjoy part two of our “Frequently Unasked Questions” video podcast with Truth in IT‘s David Littman, Zerto Cloud Sales Manager Jason Nolan, and Dante Orsini, iLand’s Senior VP of Business Development. This short video focuses on the biggest obstacles to developing a disaster recovery (DR) strategy and how the cloud can help.

Here are five questions for you to consider:

What are the issues forcing companies to take a closer look at their DR plan?

What should companies know about disaster recovery in the cloud environment?

How do I get started with my cloud disaster recovery strategy?

How does using a cloud provider help my local IT team?

What do I need to look for when choosing a cloud provider for disaster recovery?

Watch below:

How to Choose a Disaster Recovery Provider http://www.zerto.com/blog/general/choosing-a-disaster-recovery-provider/ http://www.zerto.com/blog/general/choosing-a-disaster-recovery-provider/#comments Mon, 14 Apr 2014 11:47:58 +0000 Zerto http://www.zerto.com/?p=7028 Check out the first in our three-part video podcast series with Truth in IT‘s David Littman, Zerto Cloud Sales Manager Jason Nolan, and Dante Orsini, iLand’s Senior VP of Business Development. The first part discusses top disaster recovery (DR) mistakes and how to avoid them.

Answer these five questions to help you choose a cloud service provider with the right DR strategy for your business:

What do I need to take into account for my DR strategy?

What is the level of experience of the cloud provider?

Can the provider guide me and build a disaster recovery solution that is unique to my application requirements?

Will this DR solution satisfy our technical and financial department requirements?

Do I have to replicate everything or can I find a disaster recovery provider that can replicate individual applications?

Watch here:

 

Application Recovery Across Multiple LUNs http://www.zerto.com/blog/general/application-recovery-across-multiple-luns/ http://www.zerto.com/blog/general/application-recovery-across-multiple-luns/#comments Wed, 09 Apr 2014 11:39:05 +0000 Zerto http://www.zerto.com/?p=7051 Guest post by Justin Nordeste, Zerto Cloud Technical Engineer 

Today we’re going to talk about multi-tiered applications and disaster recovery (DR) strategy. You’re probably just like everyone else: you want the lowest Recovery Point Objective (RPO) possible, but you don’t want your SAN design dictated by the needs of your Array-Based Replication (ABR) solution, right? Well, Zerto is here to help!

Traditionally, ABR is all or nothing. All of the VMs for the application sit on one or more LUNs, and these LUNs are added to a consistency group. This gives you consistent recovery points, but everything on those LUNs becomes part of the consistency group. That forces a few trade-offs. You have to architect your storage to align with your BC/DR strategy, which takes a lot of configuring and planning and adds a lot of management complexity for the storage team. Replicating this way can also mean that I/O for VMs that aren’t needed for the application, but happen to sit on those LUNs, gets replicated too; this consumes additional bandwidth and can increase overall recovery time. There’s got to be a better way, right?

With Zerto Virtual Replication (ZVR), your VMs can reside on any datastore connected to the host; ZVR even supports spreading a VM’s individual disks across different datastores. When configuring ZVR, we place the protected VMs in a Virtual Protection Group (VPG – think ‘application consistency group’). When configuring the VPG, you have the flexibility to choose which recovery datastore each disk replicates to at the disaster recovery site. This applies to all disks for any VM in vSphere – even RDMs!

What exactly does this mean? It means that your storage design can focus on what it should: ease of management, high performance, and an efficient design driven by application needs. It also means that any storage visible to the remote hosts can be used – no more vendor lock-in!

Let’s take a look at this in action! Here’s a basic example with a small test VM in my lab. Check out how the VM has 2 disks, each on a different datastore:

Disk 1: [Screenshot]

Disk 2: [Screenshot]
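
If you prefer the command line to screenshots, the same layout can be confirmed with a short PowerCLI sketch (the VM and vCenter names are hypothetical):

```powershell
# Minimal sketch: show which datastore each of a VM's disks lives on.
# Requires VMware PowerCLI; the VM and vCenter names are hypothetical.
Connect-VIServer -Server "vcenter.lab.local"

Get-VM "TestVM" | Get-HardDisk |
    Select-Object Name, CapacityGB,
                  @{ N = "Datastore"; E = { ($_.Filename -split "\]")[0].TrimStart("[") } } |
    Format-Table -AutoSize
```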

 

Now, let’s see how we can protect this VM in Zerto. Here is the VM in a VPG. I’ve selected a few default values for recovery resources, but we will configure this VM individually instead of just accepting the defaults.

[Screenshot: Manage VPG]

Everything looks good, but we want to replicate this VM’s disks from multiple datastores and to multiple datastores, right? This is where things get a bit more interesting. Let’s dive in and configure the individual VM by selecting it in the table and clicking the ‘Configure’ button.

[Screenshot: Configure VM]

The “Volumes” table shows the source location on the two different datastores, as well as the path to each disk on its datastore. Note how we can select each volume and configure it individually. Let’s see what other datastores are available for this volume at the DR site.

[Screenshot: Configure VM – volume selection]

Here, I’ll select a different datastore available to the disaster recovery host, cluster, or resource pool that this VM will replicate to. Once I click save, note how the target is now updated.

[Screenshot: Configure VM – updated target]

Everything looks great! We now have each disk for the VM configured to replicate to different datastores at the recovery site. I hope this basic example illustrates this game-changing feature and how easy it is to set things up.

Regardless of what the source storage is or where the VM’s disks are located, Zerto can protect it and help ease your storage configuration for BC/DR. Still not convinced? Contact our sales team and get your hands on a free trial so you can test it yourself!

 

Disaster Recovery: Five Frequently Unasked Questions http://www.zerto.com/blog/general/disaster-recovery-five-frequently-unasked-questions/ http://www.zerto.com/blog/general/disaster-recovery-five-frequently-unasked-questions/#comments Wed, 02 Apr 2014 13:12:41 +0000 Zerto http://www.zerto.com/?p=6968 In this podcast with Truth in IT‘s David Littman and Sean Masters, a Zerto Sales Engineer, Sean explains some of the lesser-known tips for Disaster Recovery in a virtual environment.

Top 5 Frequently Asked Questions on Data Protection with Zerto http://www.zerto.com/blog/general/top-5-frequently-asked-questions-data-protection-zerto/ http://www.zerto.com/blog/general/top-5-frequently-asked-questions-data-protection-zerto/#comments Tue, 25 Mar 2014 10:24:55 +0000 Zerto http://www.zerto.com/?p=6808 Guest post by Sean Masters, Zerto Sales Engineer


“What do you mean it’s up and running already? It can’t be this easy!” – actual trial customer after we installed his proof-of-concept, completely non-intrusively, in under 30 minutes.

As a Solutions Engineer with Zerto, the majority of my day is spent discussing the benefits of near-real-time replication and orchestrated recovery built into a single, simple platform (ok, I’m sold). Today I want to pull back the curtain a bit and cover the most frequently asked questions about deploying Zerto Virtual Replication.

1. “I don’t have time to upgrade all of my hosts to install your software! Do my vCenter Servers and ESX\ESXi hosts all have to be at the same patch level?”

At Zerto, we don’t believe that you should have to manage your infrastructure just to manage your data protection solution. In fact, many of our customers and providers run mixed environments for a variety of reasons. The only requirement we have with regard to vSphere versions is that each vCenter and ESX\ESXi host be at a minimum version of 4.0 Update 1.

2. “I’m currently running VMware vSphere Enterprise licensing at one site, do I have to upgrade my other sites to match?”

Not at all. Zerto does not concern itself with your VMware vSphere licensing level, as long as you have one. You can even run vSphere Enterprise Plus in one datacenter and vSphere Evaluation in another, and things will work just fine. You could add a third datacenter running vSphere Standard and you’d still be fully supported.

3. “I can’t have any downtime just for testing. Will Zerto need to restart or reboot my vCenter or put my hosts into Maintenance Mode?”

No to all of the above. Zerto installs seamlessly into the environment. The only thing you do is point it at a vCenter, enable the Zerto plugin in your vSphere Client, and then deploy one or more of our Virtual Replication Appliances right from the Zerto GUI. You can even skip the vSphere Client or Web Client and view your Zerto solution right in a browser.

4. “I don’t have any lab resources. Can I trial Zerto in Production or do I have to put this in the lab?”

Zerto installs seamlessly and non-intrusively, and many of our customers install right into their production environment and start protecting live applications right away. Because we allow any number of sites to communicate and replicate, customers are able to test within one site or between all of their sites – yes, even during a POC.

5. “The last DR / data protection solution that I tested took days or weeks to install. Will I need Professional Services to stand up a POC?”

Zerto POCs are delivered in one of two ways: either you can be guided by a Zerto Solutions Engineer or one of hundreds of Zerto Alliance Partners, or you can install the software yourself with a basic level of VMware vSphere knowledge and be up and running in under an hour. What are the benefits of a guided POC? You get a Zerto Solution Engineer or a Zerto Alliance Partner all to yourself for a couple of hours, during which time we will discuss your BC\DR goals, deploy and configure Zerto to meet those goals, and start protecting applications right away.

Now you know how easy it is to get up and running with Zerto Virtual Replication. What are you waiting for? Check out some of our videos or request a free trial today!

 

The Launch of ‘Virtually Sober’ http://www.zerto.com/blog/general/launch-virtually-sober/ http://www.zerto.com/blog/general/launch-virtually-sober/#comments Thu, 13 Mar 2014 11:47:34 +0000 Zerto http://www.zerto.com/?p=6660 Joshua Stenhouse, our Solutions Engineer in the UK, often talks about how he uses PowerShell scripting to extend the functionality of Zerto. When we talked about having him post some of these examples on our blog, he prepared a list of over 25 scripts he had written to extend Zerto functionality for things like recovering files and folders using PowerShell, automating reports on unprotected VMs, and scheduling VPG cloning. It became clear that Joshua needed a blog of his own to cover all of the ins and outs of scripting with Zerto.

We’re excited to announce that his blog, “Virtually Sober,” went live this week! Those of you who have had the fortune to meet Joshua will recognize just how well the title fits! Joshua also tells a bit of the story of his journey to Zerto on the blog.

In his words,

“I will be blogging about common queries I often come across in my role as Solutions Engineer for Zerto, as well as cool Zerto features, but I will mainly cover the many different PowerShell scripts I write in my spare time to extend the functionality of Zerto. Some of the scripts I will be releasing as standalone tools. For these I will create a new page dedicated to each tool to go into more detail.”

In his first scripting post, on best practices when scripting with Zerto, he sets the stage for some of the posts that will follow.

Please visit Joshua’s blog here.

 

Agility, Recovery and Hurricane Ike – Lessons Learned http://www.zerto.com/blog/general/agility-recovery-hurricane-ike-lessons-learned/ http://www.zerto.com/blog/general/agility-recovery-hurricane-ike-lessons-learned/#comments Tue, 18 Feb 2014 14:14:26 +0000 Zerto http://www.zerto.com/?p=6516

How are you preparing your data center for hurricane season? With such a rough winter in the Northeast, this year’s hurricane season is expected to be harsh.

How do your peers handle hurricane season in their datacenters? By using a Disaster Avoidance strategy, like the one Woodforest National Bank uses.

A national bank headquartered in Texas, Woodforest National Bank takes an “always on” approach to relationship banking, offering 24×7 live banking with tellers and personal bankers and only closing its doors one day each year – Christmas Day.  For more than 30 years, Woodforest has delivered unsurpassed quality customer service, now offering both consumer and business products and services at more than 750 branches in 17 states throughout the United States.

In 2008, Woodforest experienced a disaster in the truest sense. When Hurricane Ike passed through Houston and its surrounding areas, Woodforest’s primary datacenter lost power and remained on generator power for 10 days after the storm ended. Fortunately, prior to the hurricane, Woodforest had migrated all customer-facing applications to a disaster recovery site.

After that experience, Woodforest transitioned from disaster recovery to disaster avoidance preparedness. Each June, all production applications are pre-emptively “failed-over” to a secondary site, with a return to the primary site by mid- to late-October, avoiding the hurricane season.  If a hurricane strikes the area, all of the systems are already protected.


After implementing Zerto Virtual Replication, they are now able to migrate their entire datacenter in one Saturday morning. The functionality they once scripted by hand – IP addresses, gateways, masks, DNS entries and other things unique to their environment – is now provided in the application itself and automated by Zerto.

Hear from Rich Corl, Solutions Architect Manager at Woodforest National Bank, as he discusses the bank’s 24/7 consumer banking approach, which requires their applications to be always-on, and how Zerto helps with this approach.

Title: Hear from your Peers: Hurricane Avoidance, is it for you?

Date: Tuesday, March 4, 2014

Time: 1:00 PM – 2:00 PM EST


 

What do other Cloud Service Providers know that you don’t? http://www.zerto.com/blog/general/cloud-service-providers-know-dont/ http://www.zerto.com/blog/general/cloud-service-providers-know-dont/#comments Tue, 11 Feb 2014 10:36:00 +0000 Zerto http://www.zerto.com/?p=6475 If you suddenly became the CIO of a Cloud Service Provider (CSP), you would quickly find that disaster recovery as a service (DRaaS) and in-cloud DR are significant opportunities.

There are so many customers who would like these services, but the challenges of setting up a multi-tenant DR solution have historically been cost-prohibitive for CSPs.

There are several reasons:

DR is unlike any other service offered to customers. It isn’t limited to applications running on simple web servers:

  • It requires the customer network be connected to the CSP
  • Customers need to interact with their data at the server level to ensure failover success
  • Connectivity between sites requires that the CSP have access to the customer site in a way other cloud services do not
  • Storage hardware and replication hardware must match; owning the same hardware as every customer is very costly
  • Even if the hardware matches, the vSphere versions must match between the CSP and each customer for most DR solutions. Customers have their own upgrade schedules, and these rarely align
  • DR solutions that are not hardware based are very difficult to manage and to scale
  • Traditional DR solutions require significant application development time to create portal access for customers
  • Traditional DR solutions are not designed to be deployed to multiple tenants
  • Backup solutions are often used as “DR”. These do not scale in a disaster: real disasters require orchestrating the start-up of multiple servers in a specific order, and backup products are not designed to do this

At Zerto, we work with over 100 Cloud Service Providers. Zerto Virtual Replication (ZVR) is designed to solve these problems.

Zerto Virtual Replication is:

  • Hardware agnostic – customers can have any type of storage from any vendor and any type of server. The only requirement is vSphere
  • Integrated deeply with VMware products like vCloud Director
  • Built to support multiple cloud provider datacenters, centrally managed by the Zerto Cloud Manager (ZCM)
  • Organized so customers can be located across the globe but logically grouped into Zerto Organizations (ZORGs). The ZORG eases administration for CSPs by providing centralized management capability
  • Easily deployed to customers, and centrally licensed and supported
  • Simple for customers to connect to the CSP’s datacenter securely in an isolated environment. ZVR is designed to keep the infrastructure straightforward and relatively easy to support for CSP administrators
  • Multi-tenant by design – the same ZVR infrastructure is used for all customers
  • Built for fast customer acquisition with the Zerto Self Service Portal (ZSSP). The ZSSP is an out-of-the-box product that provides role-based access for the customer, and it can stand alone or be integrated into the existing CSP portal
  • Able to support dissimilar versions of vSphere. Customers can run anything from vSphere 4.1 through the latest version
  • Designed to allow normal vCenter cluster maintenance with no disruption to replication. vSphere hosts can be taken offline for upgrades and support
  • Able to integrate resource usage and billing information into CSP billing systems
  • Built to offer much more aggressive Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs). This allows the CSP to easily meet traditional RPO and RTO expectations and also provides a new service level offering that can generate more revenue from the same ZVR solution
  • The foundation for DRaaS and in-cloud DR service offerings, which lead to increased revenue as more and more customers embrace to-cloud, in-cloud and hybrid solutions

Zerto has over 100 CSPs using the ZVR solution in production. ZVR has proven to be the most capable DRaaS product on the market and is validated daily by our CSP partners and customers. ZVR protects thousands of production VMs for CSPs and Zerto continues to enhance the product to enable even more service offerings and revenue opportunities.

So if you really are a decision maker at a CSP and aren’t using ZVR, it is very likely that your competition is, and they are growing their DRaaS and in-cloud DR business. Find out what they already know: ZVR is a quick download and installs very fast. Try it out!

Talking DR… in the DR http://www.zerto.com/blog/general/talking-dr-dr/ http://www.zerto.com/blog/general/talking-dr-dr/#comments Sun, 02 Feb 2014 20:53:18 +0000 Zerto http://www.zerto.com/?p=6445 This post was penned by Shannon Snowden, Zerto’s Sr. Technical Marketing Architect.

How is a company able to hold its kickoff at a tropical resort in the Dominican Republic in only its third year? The reason is remarkable, but the answer is quite simple: it is a promise fulfilled.

At a frigid 2013 sales kickoff in Israel – during one of the worst snowstorms in the country’s history – Ziv Kedem, Zerto’s CEO, promised that if our aggressive revenue targets were not only met but exceeded by at least 20%, we would be somewhere warm for the next kickoff.

Not only did we meet those targets, we substantially exceeded even the most optimistic goals. This article in Forbes provides a little more detail about how well Zerto is being received in the marketplace.

The remarkable part is how quickly Zerto is becoming the de facto standard for disaster recovery replication and workflow automation. As each regional team reported its experiences and actual numbers over the last year, a common theme emerged:

  • Our reseller partners and cloud service providers are all experiencing tremendous adoption rates.
  • Our customers are sleeping better at night knowing that they have an actual functioning disaster recovery solution in place with Zerto.  (Check out the quote from our new customer, ING!)
  • Zerto is a talented and motivated organization. In fact, we had many new faces at this year’s kickoff, as the company adds new talent every week, including a few from our new UK office who had only recently joined Zerto.

One of the most compelling parts of the kickoff meeting is the roadmap session. Without divulging the actual details yet, it is accurate to say that in 2014, Zerto will be raising the bar several times and redefining what it means to protect data.

Zerto’s momentum is real. One of the biggest challenges for us now: how do you top this year’s kickoff? I think we may need to talk to the SpaceX team.

 

True Hybrid Cloud – Is It Finally Here? http://www.zerto.com/blog/general/true-hybrid-cloud-is-it-finally-here/ http://www.zerto.com/blog/general/true-hybrid-cloud-is-it-finally-here/#comments Tue, 28 Jan 2014 12:45:12 +0000 Zerto http://www.zerto.com/?p=6392 By Gil Levonai, VP Marketing and Products at Zerto

Cisco’s exciting announcement today regarding Cisco InterCloud reflects a shared vision for hybrid cloud that we at Zerto believe is finally ready: hybrid cloud for any workload, on any hypervisor, on any provider’s cloud.

But let’s take a step back. The concept of hybrid cloud, touted by many vendors and service providers as the next big thing, enables IT to utilize on-premise and cloud based infrastructure seamlessly for cost reduction, bursting, disaster recovery and other use cases. The key to hybrid cloud acceptance in the marketplace is providing this “seamless” capability for all applications, including those production applications that are core to the business.

So why are enterprises not yet using hybrid cloud scenarios for their core production workloads? The challenge has been that ERP, healthcare, financial and other similarly complex applications are spread across multiple virtual machines, are database heavy, normally have terabytes of data, and have multiple dependencies. These production workloads are business-critical and cannot afford lengthy downtimes; they have strict SLAs and they need to be managed centrally. All of these constraints mean that these workloads are very hard to mobilize, very dependent on infrastructure and networking for performance, and in need of strict data protection and DR solutions. Bottom line: these workloads cannot realize the benefits of hybrid cloud until more hybrid clouds are “production ready”.

Some vendors are trying to solve this vertically, by creating an integrated stack of on-premise and cloud offerings. But even they don’t yet have the means to mobilize heavy production workloads with minimum downtime. They don’t have a fully orchestrated DR solution for these production workloads.

And even if they did have everything the production workloads need, why would an enterprise want to lock in to a specific vendor and a specific hypervisor? True hybrid cloud means the freedom of choice. Any workload, on any hypervisor, on any cloud. The choice needs to be driven by cost, performance, SLAs, not by vendor lock-in.

The good news is that the first real step toward ‘production-ready’ hybrid cloud adoption is happening! Cisco’s InterCloud announcement today, one that Zerto is proud to be a part of, is a great step for the future of production-ready hybrid cloud.

As a Cisco ecosystem partner, we share the same vision for the hybrid cloud as Cisco: one in which even complex applications span on-premise and cloud datacenters, and can be managed and accessed anywhere. Cisco InterCloud will enable hybrid clouds by allowing organizations to combine and easily move workloads – including data and applications – across different clouds depending on business and IT requirements. This flexibility and workload portability will allow organizations to cost-effectively select their optimal cloud strategy. Specific features of workload portability with InterCloud include moving applications without the need to change routing or firewall configurations between primary and recovery sites. Network flexibility in these complex areas helps to support easier cloud-based recovery.

Being a member of the InterCloud ecosystem means that any Cisco-enabled service provider can offer Disaster Recovery as a Service (DRaaS) and in-cloud disaster recovery powered by Zerto, joining more than 100 cloud service providers who are already working with Zerto. However, the key functionality for customers considering hybrid cloud is not only robust data protection and recovery, but a new infrastructure layer enabling production workloads to be easily mobilized, without any impact on business operations, all while being easily and centrally managed and protected.

The joint solution by Zerto and Cisco delivers key functionality required for production workloads to utilize a hybrid cloud, including:

  • Workload Mobility – the ability to move production workloads, regardless of complexity or size, between remote physical locations without business interruption – for private clouds, managed services clouds and public clouds
  • Network Agility – mobilize workloads without changing routing or firewall settings at the recovery site, even in very dynamic networks
  • Disaster Recovery – protect workloads to the cloud or between clouds, with SLAs as low as seconds of RPO and minutes of RTO
  • Data Protection – long retention of copies in different locations, including cost-effective cloud archiving
  • Cross-Hypervisor Portability – mobilization of workloads between hypervisors while preserving application properties such as boot dependencies, IP configuration and more

We’ve found that when customers talk about ‘hybrid cloud’, they want flexibility. To them, hybrid means open to any cloud, transparent, simple to use – where production workloads can be easily mobilized, centrally managed and protected.

Hybrid Cloud, it appears your time has come.

 

3 Step Failover Testing for Disaster Recovery http://www.zerto.com/blog/general/3-step-failover-test-for-disaster-recovery/ http://www.zerto.com/blog/general/3-step-failover-test-for-disaster-recovery/#comments Sat, 25 Jan 2014 21:58:21 +0000 Zerto http://www.zerto.com/?p=6375 This post was contributed by Joshua Stenhouse, Zerto’s UK Sales Engineer.

In my role as a Sales Engineer at Zerto, I often get asked about testing disaster recovery. Testing is one of the most complex aspects of disaster recovery, so it’s clear why so many companies I speak to test their DR solution once a year or less. With this post, I thought I’d detail the specifics of disaster recovery testing with Zerto, particularly its simplicity and the logic behind it.

With Zerto Virtual Replication, performing failover testing involves three steps:


1. Selecting the Virtual Protection Groups (VPGs) to test.

2. Selecting the point in time to test from, using the journal of changes.

3. Clicking ‘Failover Test’.

Zerto will then automatically perform the following actions:

1. Register the VMs in the recovery site with names in the format “vmname – failover test”.
2. Create a temporary scratch thin VMDK per VM in the target datastore, which stores any changes made in the failover test VMs.
3. Connect the failover test VMs to the port group specified for testing (which is hopefully not routable to production!).
4. Boot the VMs, allowing you to log into the console to check that the data is consistent and the applications work.
5. Leave the protected VMs powered on in production and continue replicating changes.

[Screenshot: failover test in progress]
This is done with no break in replication and no impact on production, meaning you can perform disaster recovery failover testing during working hours, in minutes, with just a few clicks. Once you have finished your failover testing, you click to stop the failover test; Zerto then asks for the result of testing the application and allows you to add notes.

[Screenshot: stopping the failover test]

Zerto will then perform the following actions:

1. Remove the failover test VMs from the inventory.

2. Delete the scratch VMDKs, removing any changes made as part of the failover test.

3. Keep an up-to-date copy of all the changes made in production during the failover testing, so there is no re-sync period or interruption of the replication.

You can then easily build PDF reports detailing the outcome of the failover testing, which you can customize with your own company logo.

This is all pretty cool and easy to do, but hopefully now you are wondering what else you could use this for. After all, if you can bring a temporary copy of your VMs online from previous points in time (increments of every few seconds, with up to 5 days of changes in the journal) and give vSphere web console access to any user you desire, then you don’t have to use these copies only for testing disaster recovery! Some great ideas are:

  • Testing both operating system and application upgrades before applying them to production, which is great for pesky change requests.
  • Recovering files and folders using VMware Tools and PowerCLI, removing the need for network access to the test recovery VM. I will cover this in more depth in a separate blog post coming soon; a minimal sketch follows this list.
  • Giving developers access to a copy of a VM for short-term testing.
  • Performing database consistency checks.
  • Running reports on databases without impacting production.
  • Bringing a copy of up-to-date Active Directory services online in the failover test network, for use with VMs that require Active Directory for a successful failover test.
  • Dynamically building a complete training environment on the fly, with an up-to-date copy of data, no changes made in production, and all changes deleted when the training has finished.
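
As promised in the second idea above, here is a minimal PowerCLI sketch of pulling a file out of a failover test VM over the VMware Tools channel, so the isolated test network never needs to be routable (all names are hypothetical):

```powershell
# Minimal sketch: recover a file from a failover test VM with no network
# access, using the VMware Tools channel via PowerCLI. All names are
# hypothetical; the VM name follows Zerto's "vmname - failover test" format.
Connect-VIServer -Server "recovery-vcenter.lab.local"

$guestCred = Get-Credential   # an account inside the guest OS

Copy-VMGuestFile -Source "C:\Data\report.xlsx" `
                 -Destination "C:\Recovered\" `
                 -VM "FileServer01 - failover test" `
                 -GuestToLocal `
                 -GuestCredential $guestCred
```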

These are only the ideas I can think of; if you can think of your own, please feel free to share them. If you’d like to try out the simplicity of disaster recovery testing in Zerto, click here to request a trial today.

 

SQL Server Log Shipping Does Not Replace Replication http://www.zerto.com/blog/general/sql-server-log-shipping-sql-recovery/ http://www.zerto.com/blog/general/sql-server-log-shipping-sql-recovery/#comments Thu, 16 Jan 2014 22:37:26 +0000 Zerto http://www.zerto.com/?p=6286 Protecting databases is challenging, as transactions are written quickly and there is a high rate of change within the application. Many organizations use log shipping as a recovery method, which typically translates to a manual recovery process and higher overall risk. On top of this complexity, just getting the data to the recovery site is insufficient: the order of the I/O is critical for database recovery. If write-order fidelity is not preserved, the database cannot be recovered without additional manual intervention.

Log shipping is used between two copies of a single database housed on different computers. Updates made to the primary database are shipped to the secondary database: the transaction log from every insertion, update, or deletion made on the primary gets applied to the secondary.
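
To see the gap this leaves in practice, you can query the log shipping monitor tables on the secondary. Here is a hedged PowerShell sketch (the server name is hypothetical; the monitor table lives in msdb on instances configured for log shipping):

```powershell
# Minimal sketch: measure how far a log shipping secondary lags behind.
# Requires the SQL Server PowerShell module (Invoke-Sqlcmd); the server
# name is hypothetical.
$query = @"
SELECT secondary_database,
       last_restored_date,
       DATEDIFF(minute, last_restored_date, GETDATE()) AS minutes_behind
FROM msdb.dbo.log_shipping_monitor_secondary;
"@

Invoke-Sqlcmd -ServerInstance "SQL-DR01" -Query $query
```

Whatever number comes back is, in effect, your best-case RPO with log shipping, before any manual stitching of transaction logs begins.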

So what happens in case of a disaster? According to this article from a popular DBA blog, “In a disaster where the primary server cannot be restarted because of a power grid failure, you may stitch the primary server’s active transaction log and transaction log files together by using a third-party transaction log analyzer to identify transactions that did not make it across and manually apply them.” Basically, piecing your database applications back together after an outage is going to be painstakingly manual. [Read more about this in our Recovery for Microsoft SQL Data Sheet]

Zerto is excited to invite Uzah Chinedu, the Infrastructure Manager at Leadway Pensure PFA Limited, to our “Hear from your Peers” webinar series. Leadway relies heavily on Microsoft SQL Server, and Mr. Chinedu struggled to locate a solution that could keep up with its high rate of change. He evaluated several solutions, but none of them maintained the write-order fidelity of the application. With Zerto Virtual Replication, Mr. Chinedu can maintain the integrity of his database when replicating. Additionally, seamless testing provides a new level of confidence throughout the organization in the ability to recover quickly from an outage.

Join the webinar on February 4, 2014 to hear from Uzah and learn about simple, effective SQL recovery with Zerto.


 

Top 5 Disaster Recovery Planning Mistakes http://www.zerto.com/blog/general/top-5-disaster-recovery-planning-mistakes/ http://www.zerto.com/blog/general/top-5-disaster-recovery-planning-mistakes/#comments Tue, 10 Dec 2013 12:16:33 +0000 Zerto http://www.zerto.com/?p=6217 This post was contributed by Joshua Stenhouse, Zerto’s UK-based Solutions Engineer.

I spend every working day discussing disaster recovery and assisting with trials of Zerto Virtual Replication. I also implemented Zerto as an end user myself, so based on all of this experience I’d like to share the top 5 most common disaster recovery mistakes I see people making in virtual infrastructures:

1. Not performing end user acceptance-based testing

In many virtual environments with complex enterprise applications, simply checking that the services have started and the Virtual Machines (VMs) can communicate isn’t a true disaster recovery test. Yes, it’s important that these are checked, but a real disaster recovery test is an end user (commonly the application owner) actually verifying that the application works in a test failover. In many applications I have seen all of the services start, and yet vital functionality was broken due to interdependent VMs missing from the failover test. When I implemented Zerto, I used a combination of Zerto vCenter permissions to delegate the ability to start failover tests (and not failovers!), selected a vCenter folder to bring the VMs online in (the application owner also had access to the VM console), and then configured alerts if the application was not tested every 3 months. This allowed me to offer failover testing with no impact on production and no break in the replication. Also important: I did this without having to figure out how to actually use any of the 50 applications I was responsible for protecting! A minimal sketch of this kind of vCenter delegation follows below.
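
The delegation itself can be scripted with PowerCLI. In this sketch the role, folder and group names are all illustrative, and Zerto’s own vCenter permission model sits on top of this to control who may start failover tests versus real failovers:

```powershell
# Minimal sketch: a vCenter role that lets application owners open VM
# consoles in the failover test folder without touching anything else.
# Role, folder and group names are illustrative.
Connect-VIServer -Server "vcenter.lab.local"

$priv = Get-VIPrivilege -Id "VirtualMachine.Interact.ConsoleInteract"
New-VIRole -Name "DR-Test-Console" -Privilege $priv

# Grant the role on the folder where failover test VMs come online
New-VIPermission -Entity (Get-Folder "Failover-Test-VMs") `
                 -Principal "LAB\AppOwners" `
                 -Role "DR-Test-Console"
```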

2. Not replicating required VMs

This might seem obvious, but until you have completed a successful disaster recovery test (including end user acceptance) you don’t always know exactly which VMs are required for disaster recovery. There are many commercial tools for assessing connections between servers, but these can be complicated, and I often found many connections were in fact redundant. It was only when we performed user acceptance testing that I truly realised exactly which VMs needed to be protected.

3. Presuming a 24 hour Recovery Point Objective is actually ok

I ask many people what the business expects as a reasonable recovery point objective (RPO: the maximum amount of data you are willing to lose), and I often get answers like “24 hours is good for us”, “1 hour is ok for my key applications” or “15 minutes is fine”. From my own experience, while these can be reasonable SLAs, just because the business is ok with them doesn’t mean it doesn’t actually want significantly better! If the business did lose this amount of work and data, the impact could be many times the cost of implementing any disaster recovery solution. Even at the small, 100-user company where I started in IT in 2004, we had the potential to lose £10,000+ per hour if data was lost and productivity ceased: at the stated 24-hour SLA, that is £240,000+ of exposure from a single disaster. The SLA from the business was 24 hours, but I can guarantee that if we had ever had a disaster and actually lost a full 24 hours, I would either have been out of a job the next day or certainly not up for a promotion next time around! We never had a disaster, but if we did I would have wished for Zerto, providing me with continuous data protection and replication with an RPO of just seconds.

4. Not having a plan to actually use or allow access to the failover VMs

I find this topic is rarely discussed and is therefore often the most overlooked aspect of disaster recovery. If you have lost your primary site and are now running in your recovery site (in minutes, not hours, thanks to Zerto!), how would you give users access to their data and applications? The fact that with Zerto you have only lost seconds of data is great, but the impact on business productivity and revenue will grow the longer nobody can actually do any work! I know this scenario involves many presumptions, but I recommend applying the question to your own environment: how could you give all of your users access to their applications and data? VMware’s Duncan Epping alludes to this in his post, “Prepare for the Worst,” where he explains the need to think “more about the strategy, the processes that will need to be triggered in a particular scenario,” not just about the IT. My preferred solution was a recovery site VPN and some replicated terminal servers, with the instructions for access written into the disaster recovery plan.

5. No written disaster recovery plan

This one builds on #4 above. With the complexity of the old way of doing replication and disaster recovery, it is very easy to forget the most important aspect of disaster recovery: actually writing down a plan. Companies focus solely on trying to get everything replicated between the storage arrays and then mapped to the virtual infrastructure in completely different interfaces. In Zerto, everything from replication, management and protection groups to failover and failover testing is managed from a single interface. When I installed Zerto, I simply specified my SLAs for replication, created my Virtual Protection Groups, selected the VMs to protect, and then Zerto took care of all the replication in the background. How simple is that? This made protecting VMs so easy that it gave me the time to actually write everything down and form a plan!

Hopefully you’ve found this post on the top 5 most common disaster recovery planning mistakes in virtual infrastructures of interest. Please feel free to add a comment, ask a question, or share any disaster recovery mistakes you commonly see.

For additional tips from Joshua, check out his Twitter stream or follow the hashtag #ZertoTechTips.

Is Your Cloud in the San Francisco Bay? http://www.zerto.com/blog/general/google-floating-data-center-is-you-cloud-in-the-san-francisco-bay/ http://www.zerto.com/blog/general/google-floating-data-center-is-you-cloud-in-the-san-francisco-bay/#comments Tue, 29 Oct 2013 08:12:49 +0000 Zerto http://www.zerto.com/?p=6069 It looks like Google is up to something in the San Francisco Bay, and rumor has it, it’s a floating data center – perhaps built to use seawater cooling units to maintain temperature control in the four-story data center. Theoretically the sea could be used as a power source as well. Pretty cool stuff!

According to an article on CNET, the barge has been built over the past year in a hangar on Treasure Island in San Francisco Bay. In 2009, Google was granted a patent for a “Water Based Data Center,” and in 2010 it was granted another patent for its floating data center project. The specific cooling mechanism is detailed in the patent application:

“A method of maintaining a computer data center located on a floating structure, comprising: drawing in cooling water from a sub-surface area of an open natural body of water; pumping the cooling water through a first side of a first liquid-to-liquid heat exchanger; circulating a first portion of liquid from a computer data center through a second side of the first liquid-to-liquid heat exchanger and to one or more cooling units at the computer data center, such that heat is transferred from the first portion of liquid to the cooling water as the first portion of liquid passes through the second side of the first liquid-to-liquid heat exchanger; circulating the cooling water from the first liquid-to-liquid heat exchanger to one or more cooling towers; and evaporating at least a portion of the cooling water in the one or more cooling towers.”
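For a rough sense of scale, the seawater flow such a loop needs follows from the heat-balance relation Q = ṁ·c_p·ΔT. The sketch below assumes 1 MW of IT load and an 8 K allowed temperature rise; both numbers are invented for illustration and do not come from the patent:

```python
# Rough scale of the cooling-water loop: mass flow needed to absorb a
# given heat load, from Q = m_dot * c_p * dT. Inputs are assumptions.

C_P_WATER = 4186.0  # J/(kg*K), specific heat of water

def required_flow_kg_s(heat_load_w: float, delta_t_k: float) -> float:
    """Cooling-water mass flow needed to carry away heat_load_w watts."""
    return heat_load_w / (C_P_WATER * delta_t_k)

load_w = 1_000_000  # assumed 1 MW of IT equipment
rise_k = 8.0        # assumed allowable water temperature rise

flow = required_flow_kg_s(load_w, rise_k)
print(f"~{flow:.0f} kg/s of seawater per MW (about {flow:.0f} L/s)")
```

That works out to roughly 30 kg/s per megawatt, a flow the open sea supplies essentially for free, which is the whole appeal.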

Why would Google build a floating data center? Power and cooling represent two of the biggest costs of running and managing a large data center. In addition, a company like Google can significantly reduce its environmental footprint by cooling and powering its data centers with seawater. We have some experience helping companies manage the cost and size of their data centers by reducing the costs of secondary data centers built for disaster recovery. At Zerto, we help companies to:

a) get their replication and recovery to the Cloud, thereby eliminating the costs of a secondary data center altogether and,

b) significantly reduce the costs of their secondary data center by helping companies reduce their storage footprint. For HAPO Community Credit Union, we helped reduce storage by 43% and eliminated their need to purchase storage for the foreseeable future.

Others have speculated that the floating barge is an “Apple store-like marketing center for Google Glass” but that doesn’t seem nearly as cool.

Hat tip to Wes Schifone for pointing this out!

(Photo: James Martin, CNET/CBS Interactive)

Silver Peak and Zerto – “Services in a Second”
http://www.zerto.com/blog/general/silver-peak-and-zerto/ (Wed, 16 Oct 2013)

Hello from Barcelona!

Zerto is at VMworld Europe this week, showing off our own Zerto Virtual Replication 3.0 and supporting Colt in the launch of their vCloud Replication Service (which is powered by Zerto). One additional technology on display at VMworld that we’re excited about is Silver Peak’s Agility, a virtual network service. Like Zerto, Silver Peak Agility is ‘virtual-aware’, meaning that it works at the workload level. The product tackles the problems of network optimization and management, making network optimization of workloads simple and efficient. Operational managers simply right-click on a VM from within the management console, select “backup” or “optimize”, and that service is instantly enabled.

We’re excited for enterprises and service providers to use the two solutions – Zerto Virtual Replication and Silver Peak Agility – together, for robust disaster recovery and predictable, carrier-grade networking. Silver Peak blogged about the solution here.

Colt Launches vCloud Replication Service Powered by Zerto
http://www.zerto.com/blog/general/colt-launches-vcloud-replication-powered-by-zerto/ (Thu, 10 Oct 2013)

Today we’re happy to announce our partnership with Colt, a Cloud Service Provider and member of the Zerto Cloud Discovery Ecosystem. After working on the implementation of our DR into the Colt cloud for some time, today marks the official launch of their vCloud Replication service.

What is Colt offering?

Colt’s vCloud Replication Service allows replication of production workloads into a Colt vCloud environment in a near-synchronous fashion. Customer production workloads may reside in either the customer’s own vCenter or Colt vCloud capacity. A vCloud Replication Service Portal, based on the Zerto Self-Service Portal (ZSSP), enables Colt to quickly offer DR services by letting the end-consumer of the service manage these actions alone, for maximum flexibility and ease of use.

The portal will allow the following actions to be performed:

  • Managing virtual protection groups
  • Initiating failovers
  • Initiating test failovers
  • Rolling back to the production site after failover

Colt’s replication service offerings are powered by Zerto Virtual Replication 3.0, which incorporates some new functionality for effective disaster recovery including vCloud integration, the Self-Service Portal and Zerto’s Cloud Manager (ZCM). The ZCM is an abstraction layer that enables CSPs to deliver DR services with centralized management of resources across multiple customers and several cloud locations.

Zerto Virtual Replication 3.0 will be demonstrated at VMworld Europe in Barcelona, Spain, on October 14-17. Colt will also demo its vCloud Replication Service and the vCloud Replication Service Portal at VMworld Europe.

Do Disaster Recovery from Your Tablet
http://www.zerto.com/blog/general/disaster-recovery-from-your-tablet/ (Sun, 22 Sep 2013)

This is the first in a series of posts written by Director of Product Marketing, Jennifer Gill, about new features in our just-released Zerto Virtual Replication 3.0.

In our recent customer survey, we discovered that most disasters are not ‘natural disasters’. In fact, the most common cause of a disaster is hardware failure. We also learned that two of the top challenges of disaster recovery (DR) are that it is “difficult to manage” and “complex” to use.

(Chart: top DR challenges from the customer survey)

We see these themes time and again. Recovering from failure can make for a LONG day and night for an IT administrator, unless of course, they have Zerto Virtual Replication 3.0 (ZVR). A key attribute of ZVR is simplicity, and we continue to expand and extend ZVR to further simplify a complex BC/DR process.

ZVR 3.0 introduces robust management from anywhere: from a laptop in the office, a desktop at home or a handheld device, all the functionality of ZVR is available. Issues are not planned, and they may not happen while the IT administrator is in front of the vCenter console. Now the IT administrator has full control of DR operations anywhere they have internet access, extending the flexibility and agility of virtualization to a “manage anywhere” BC/DR process.

With a web interface, the IT administrator can grant access to BC/DR capabilities without giving full VMware vCenter access. This again simplifies daily operations, as someone with web access can’t go into VMware vCenter and power on a bunch of VMs, or delete an application, causing a management nightmare. Many Zerto customers are leveraging the ZVR Web UI by providing a link to the ZVR reports in their corporate intranet, so managers and executives can see the status of testing, failovers or migrations, eliminating the step of the DR administrator creating and distributing the reports.

ZVR 3.0 expands the reporting capabilities to deliver historical resource reports for effective resource planning, as well as reports to satisfy audit requirements. The recovery reports detail each step within the failover process along with the time it takes to complete. These reports are created automatically and accurately, freeing IT resources to focus on revenue-generating projects instead of environment maintenance.

Complexity within the BC/DR process leads to errors and failures at the very moment your business is most vulnerable. By taking a refreshing approach to disaster recovery, making the complexity underneath simple to implement, manage and maintain, ZVR helps ensure production applications stay available, so revenues and the corporate brand are protected.

Top Five Takeaways from VMworld 2013 – DR Edition
http://www.zerto.com/blog/general/top-five-takeaways-from-vmworld-2013-dr-edition/ (Tue, 10 Sep 2013)

Backup and Disaster Recovery continue to be hot topics of discussion at VMworld. What those terms mean in a virtual environment, and how companies are achieving DR, were discussed in many sessions and by many attendees. Here are our top five takeaways from VMworld 2013 on disaster recovery in virtualized environments; two come from statements made by VMware, and three from a DR roundtable session.

1. DRaaS is the number one cloud growth engine. vCloud Hybrid Service announcing its Disaster Recovery as a Service roadmap is yet another indication that the market sees DRaaS as the number one cloud growth engine, something we have believed since 2011. Cloud DR is the logical on-ramp to greater cloud adoption by enterprises both big and small. One nice shout-out to Zerto came at the #VSVC4570 VCDX session: when the panelists were asked about vSphere Replication to vCHS, they recommended Zerto!

2. VMware also made statements about production workload migration as a significant piece of their Hybrid Cloud offering. Workload migration and mobility is being highlighted because it is much more complex than it need be, and also because it’s an obvious use-case for hybrid cloud. VMware, Zerto and other companies are acknowledging and addressing this market need.

At the “DR Roundtable” session, officially, “BCO1003-GD – Disaster Recovery and Replication with Ken Werneburg,” Ken led a group discussion around DR and Replication – how it’s being used, what products are popular…

What did we take from the session?

3. There’s still a lot of confusion about BC/DR vs backup, and about the concept of hypervisor-based replication vs storage-based replication. We’ve seen this confusion in the marketplace, so we’ve addressed the Disaster Recovery vs Backup discussion here and also created a comprehensive comparison of the many DR solutions being used in virtualized data centers.

4. Vendors are asking end-users to make concessions: true enterprise-class DR versus more virtual-aware technologies that reduce complexity. We don’t think that’s a tradeoff any company needs to make. BC/DR can be robust, high-end and scalable while still being flexible and simple to use. Some comments from Ken and the crowd:

“Stop replicating LUNs” – we agree

“VM mobility is compromised when using storage-based replication” – totally

5. Enterprises are trying to grasp where the cloud comes into their data center strategy going forward, and a major consideration is protecting the assets that are being sent to the cloud.

Surprisingly, in one of the polls taken during the session, 45% said they had no DR solution in place today but that they are looking to change that. Companies wishing to try DR for virtualized and cloud environments that is simple to use, enterprise-class and cost-effective can contact us for a free trial.

We created a “storify” list of the Zerto VMworld activities – Have a look!

Cool Company to See at VMworld 2013: PLUMgrid
http://www.zerto.com/blog/general/plumgrid-cool-company-vmworld-2013/ (Mon, 26 Aug 2013)

Walking around the show floor at VMworld 2013, we see companies doing some pretty cool things. One company we’re glad to partner with is PLUMgrid. According to their site, PLUMgrid is “a new kind of Virtual Network Infrastructure (VNI) that allows businesses to completely replicate the functions of a physical network infrastructure (PNI) in a virtual environment, without requiring new hardware investment or changes to existing hardware infrastructure.” Basically, PLUMgrid enables the creation of virtual domains that replicate the physical network infrastructure in a virtual, programmable environment, without changes to existing hardware.

In one of their recent blog posts, they explain what they’ll be showing off at VMworld, and a bit about their strategy for virtual network infrastructure:

“As vendor strategies shift between SDN, OpenFlow, virtual networking and network function virtualization (NFV), PLUMgrid has taken an independent platform approach that easily meets the network agility and extensibility needs of the cloud datacenter. PLUMgrid is out spreading the word about Virtual Network Infrastructure!  Catch us at: VMworld San Francisco Booth #2035″

You can read more about them here: Virtual Domains and The Power of Distributed Network Functions

To see a demo of PLUMgrid integration with Zerto, visit them at booth #2035 and read more here: PLUMgrid at VMworld – See the power of Virtual Network Infrastructure live.

 

5 Ways To Master Disaster Recovery
http://www.zerto.com/blog/general/5-ways-to-master-disaster-recovery/ (Mon, 29 Jul 2013)

Zerto recently completed a survey of IT Managers, VMware Admins, Sys Admins and BC/DR professionals who work in more than 10 different industries. The data collected highlighted two significant trends in DR preparedness:

  1. Data center outages are frequent and companies find themselves ill prepared
  2. Legacy DR solutions are not providing desired levels of protection

We’ve highlighted five helpful ways to get your disaster recovery plan on track and master disaster recovery below:

(Infographic: Master Disaster Recovery)

www.masterdisasterrecovery.com

June is Disaster Recovery Awareness Month
http://www.zerto.com/blog/general/june-is-disaster-recovery-awareness-month/ (Thu, 04 Jul 2013)

It’s that time of year – with hurricanes in the Eastern US and tornadoes in the Southern and Central regions – when executives are reminded of the flaws in their plans for disaster recovery and business continuity. This month, Virtual Strategy Magazine has pulled together a number of articles on DR from virtualization industry experts. New posts are uploaded daily, so it’s worth bookmarking the feed and checking back for new articles.

Here are a few of the posts we’ve liked so far on Virtual Strategy’s Disaster Recovery Awareness Month:

1. Disaster Recovery Wake Up Call – by Michael Hart, vExpert at CSI Louisville

Michael walks readers through a familiar DR scenario in this post:

Imagine waking up to a phone call at 2am from your CIO saying that she isn’t getting email on her smartphone. Then you receive another call from your CEO saying the branch office in California cannot access the shared directory. Now imagine that you arrive at the office and find the fire department coming out of your building as you receive the call from your CIO that she just got a call that there was a fire. You are happy to find out that the FM-200 fire suppression system did its job and reduced the damage to the servers and storage however your wiring closet, where the fire started, was not that lucky. Fiber cables have been melted, switches destroyed and C-Level folks wanting to know when this will be fixed.

Stop. Take a deep breath. Now call your BC/DR partner.

2. Ben Miller at Bluelock on “5 Disaster Recovery Misconceptions”. Ben’s first ‘misconception’ addresses whether you’re covered if you have a backup solution:

Misconception # 1: Backup-as-a-Service and Recovery-as-a-Service are the same.

A good DR plan is not about backups, but rather it’s about getting back up and running as quickly and efficiently as possible. The placement of that one space makes a big difference.

Backups and Backups-as-a-Service traditionally are concerned about the data within an application. The recoverable pieces are much like a jigsaw puzzle, they need to be put back together in a usable way, otherwise they’re just data. If you’re a business unit owner, imagine your excel application crashes. You have the file backed up, but you have no way to access or run that file because you need the application to do so. Backups will be a component of a complete disaster recovery solution, but they’re not the complete answer.

Recovery-as-a-Service is about protecting the entire application and everything that’s inside of it. Cloud-based Recovery-as-a-Service should use protection groups to ensure the safe recovery and protection of every single virtual machine (VM), virtual application (vApp) and piece of data within your application. Each is recovered at the same point in time, ensuring a quick stand-up of your application post-declaration. There is no putting together of the puzzle, the puzzle is saved as one complete picture.

3. Our very own Gil Levonai – VP Products and Marketing here at Zerto, with a post on the right questions to ask on your DR Checklist.

Is the existing solution “virtual ready”?

Can it support mission-critical application protection?

Is it hardware and vendor agnostic?

Read these posts and others at Virtual Strategy. Stay Safe!!

 

Is Synchronous Replication Enough?
http://www.zerto.com/blog/general/is-synchronous-replication-enough/ (Wed, 29 May 2013)

This is a guest post by Joshua Stenhouse, Zerto’s UK-based Solutions Engineer.

Is Synchronous Replication enough? Application consistency and replication to a 3rd site are the keys to true disaster recovery.

One misconception I often come across is that synchronous replication is the holy grail of ensuring zero data loss. Let me be clear: synchronous replication is not application consistent, and you definitely can lose data! Just because a change is immediately written in both sites does not mean that all of the transactions have been committed from memory. All replication solutions are bound by the same rules when it comes to application consistency.

The irony is that any customer whose data is valuable enough to warrant the expense, distance limitations and complexity of a synchronous replication solution often only considers their last application-consistent point in time as their real Recovery Point Objective (RPO); otherwise they cannot be sure their data is 100% consistent.

With Zerto, we replicate virtual machines from the hypervisor to remove the limitations and complexity of synchronous replication solutions. Zerto replicates disk writes as they occur, within seconds, to the recovery site. Zerto is the only replication solution that gives you the ability to fail over to any crash-consistent or application-consistent point in time within the last 5 days, which synchronous replication cannot do. Application-consistent points in time can be scheduled on a per-application basis, on individual schedules, so they can be taken as frequently as possible without impacting production.
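As an illustration of the idea, and emphatically not Zerto’s actual API or journal format, choosing a failover target from a stream of checkpoints might look like this toy model:

```python
# Toy model of a point-in-time journal: a stream of checkpoints, some
# crash-consistent and some application-consistent (e.g. via VSS), from
# which the newest usable failover target is selected.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Checkpoint:
    taken_at: datetime
    app_consistent: bool  # True if the application was quiesced first

def latest_app_consistent(journal: List[Checkpoint],
                          window: timedelta = timedelta(days=5)) -> Checkpoint:
    """Newest application-consistent checkpoint inside the retention window."""
    cutoff = datetime.now() - window
    candidates = [c for c in journal if c.app_consistent and c.taken_at >= cutoff]
    if not candidates:
        raise LookupError("no application-consistent checkpoint in window")
    return max(candidates, key=lambda c: c.taken_at)

# Example: hourly checkpoints, with an app-consistent one every 4 hours.
journal = [Checkpoint(datetime.now() - timedelta(hours=h), h % 4 == 0)
           for h in range(48)]
print(latest_app_consistent(journal).taken_at)
```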

Using synchronous replication, recovering to an application consistent point in time requires recovery from backup or snapshot which can be both complex and slow.

If you already have a synchronous replication solution installed, I’m certainly not recommending that you rip out and replace such a significant investment. However, combining that solution with Zerto to enable true 3rd-site replication gives the ultimate datacentre configuration of both high availability and disaster recovery.

Zerto can be installed and running in just one hour, replicating with an RPO of seconds over a standard IP link, and it is completely storage agnostic. Try that with any other enterprise replication solution! If you are interested in arranging a trial installation, please click here.

Mimecast Outage
http://www.zerto.com/blog/general/mimecast-outage/ (Thu, 16 May 2013)

This just in: reports on Twitter show that there has been a major outage in the Mimecast datacenter. Mimecast recently raised a $62.5 million funding round for their cloud-based email management platform. I was alerted to the Mimecast outage on Twitter by @Jon_G_UK, who appears to be a customer.

We’re not in the business of pointing fingers at others – but in the Disaster Recovery market, it’s incumbent upon us to take stock of the types of failures and outages that happen, and to break them down so that we can be sure that Zerto’s disaster recovery solution will be able to handle outages of any kind.

Would we tell people not to use a cloud-based or SaaS offering? No – the benefits often outweigh the costs, and all datacenters are prone to outages. This one, with what seems like over 125 UK law firms that could be impacted, will certainly teach us many lessons about continuity plans.

It’s all about Change: Data Replication Bandwidth Considerations
http://www.zerto.com/blog/general/change-and-data-replication-bandwidth-considerations/ (Mon, 13 May 2013)

This post was written by Zerto’s Sr. Marketing Architect, Shannon Snowden.

One common question we get from service providers, enterprises and others relates to how Zerto Virtual Replication (ZVR) handles data replication in large environments: how is Zerto able to protect servers with very large storage requirements?

Servers with terabytes of data are very common in a ZVR deployment because we are protecting so many higher-level critical servers. Because we protect those servers so well, we also often get questions about how ZVR protects very large data sets, even up into the petabytes.

It really comes down to ch ch ch..changes.

Changes happen on the VMs in virtual protection groups (VPGs). A ‘VPG’ is the name Zerto gives to an affinity grouping of virtual machines, usually by application or other similar functionality, into a single entity for replication. A VPG allows for boot-order customization and for startup and shutdown scripts if necessary. You usually want to group VMs that have some association with each other into the same VPG.
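To make the concept concrete, here is an illustrative sketch; the data model below is invented for explanation and is not Zerto’s implementation:

```python
# Sketch of a VPG: an affinity set of VMs replicated as one entity,
# with boot order and optional scripts. Names are made up for the example.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProtectedVM:
    name: str
    boot_order: int  # lower numbers boot first at the recovery site

@dataclass
class VirtualProtectionGroup:
    name: str
    vms: List[ProtectedVM] = field(default_factory=list)
    startup_script: Optional[str] = None
    shutdown_script: Optional[str] = None

    def boot_sequence(self) -> List[str]:
        """VM names in the order they should start after failover."""
        return [vm.name for vm in sorted(self.vms, key=lambda v: v.boot_order)]

crm = VirtualProtectionGroup(
    name="CRM-app",
    vms=[ProtectedVM("crm-db", 1), ProtectedVM("crm-app", 2),
         ProtectedVM("crm-web", 3)],
)
print(crm.boot_sequence())  # ['crm-db', 'crm-app', 'crm-web']
```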

Planning for the Rate of Change

Zerto provides planning calculators in our customer portal. One of them, the WAN bandwidth estimator, gives an estimate of the required bandwidth between the source and recovery sites based on your expected data change rate.

The change rate with ZVR is the important factor as opposed to the total amount of data being stored. You can have petabytes of data but if only a fraction of that is changing then it really is inconsequential to the actual daily replication bandwidth requirements once the initial sync has completed because, “some things ain’t ever gonna change”.

For example, if a server in a VPG has 1TB of data and 5% of that data changes per day, 51.2GB of data is actually changing. Spread over 24 hours, that is 621.4 KBps. Using the Zerto WAN sizing calculator, the VPG would use 4.86Mbps uncompressed on the WAN connection. With ZVR’s built-in WAN compression, we often see average bandwidth usage of about 2.5Mbps.
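That arithmetic is easy to reproduce. The sketch below mirrors the calculation above, using binary prefixes throughout as the example does; the roughly 2:1 compression ratio is the average mentioned in the post, treated here as an assumption rather than a guarantee:

```python
# Steady-state replication bandwidth is driven by the daily change rate,
# not total stored data. Binary prefixes, to match the example above.

def replication_bandwidth_mbps(data_gb: float, change_rate: float,
                               compression: float = 1.0) -> float:
    """Average bandwidth needed to ship one day's changes over 24 hours."""
    changed_kb = data_gb * change_rate * 1024 * 1024  # changed GB -> KB
    kbytes_per_sec = changed_kb / 86_400              # ~621.4 KBps in the example
    return kbytes_per_sec * 8 / 1024 / compression    # KBps -> Mbps

print(replication_bandwidth_mbps(1024, 0.05))       # ~4.86 Mbps uncompressed
print(replication_bandwidth_mbps(1024, 0.05, 2.0))  # ~2.4 Mbps with ~2:1 compression
```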

But you wouldn’t want to try to replicate terabytes of data across the wire, so ZVR uses a pre-seed feature to sync the initial data set.

Changes matter more than the total amount of data associated with a server when ZVR is doing the replication and that is one of the reasons ZVR is so efficient with bandwidth usage between sites.

…our next post will cover some details on the pre-seed functionality, stay tuned…

Bluelock Visits the Zerto Blog to Dish on Recovery-as-a-Service
http://www.zerto.com/blog/general/recovery-as-a-service-raas-with-bluelock/ (Tue, 07 May 2013)

Today, Bluelock launched their Recovery-as-a-Service offerings. With us on the blog is Diana Nolting from Bluelock, here to talk about the announcement:

Z: Hi Diana, and thanks for joining us on the Zerto blog.

B: Thanks, Fara.

Z: So tell us a bit about the road leading up to today’s Recovery as a Service announcement.

B: Bluelock has been working closely with Zerto over the past year to develop our cloud-based VMware vCloud Recovery-as-a-Service (RaaS) offering for the public cloud. Today we’re pleased to announce the official launch of our new RaaS offering, which we call the ‘4-Series Virtual Datacenter’, based on Zerto technology. The RaaS solution enables organizations to recover critical IT resources efficiently and effectively in the Bluelock cloud when an adverse situation strikes.

Z: Why would a company be interested in Bluelock’s RaaS offering?

B: With businesses’ increased reliance on technology, the cost of downtime is higher than ever. Whether an organization’s concerns are focused on specific types of incidents such as application corruption, virus, natural disaster, or the need to fulfill a specific service level agreement (SLA) or regulatory requirement, organizations of all sizes are seeking reliable disaster recovery solutions.

Z: Can you explain a bit about cloud disaster recovery and how it works?

B: Sure, Bluelock offers two types of RaaS solutions.  The first is our “To-Cloud” solution which is ideal for customers who want to recover workloads hosted in their own facilities.  The second type is an “In-Cloud” solution which allows current Bluelock cloud customers to recover their hosted workloads to another, geographically separate Bluelock facility.  Both offerings require no re-working of applications, are easily testable and are 100% compatible with VMware.

Bluelock’s To-Cloud RaaS solution offers businesses running VMware-virtualized environments the ability to recover applications to a proven, enterprise-grade Bluelock Virtual Datacenter.

Bluelock’s In-Cloud RaaS solution is the ideal RaaS offering for Bluelock customers with critical workloads that require geographically-separated recovery environments. Using innovative replication technology, Bluelock’s solution provides effective replication of production applications to a second Bluelock datacenter.

Z: So to summarize, what would you list as the top three advantages of RaaS with Bluelock?

B: I’d break out the top three this way:

1. Easier Testing for DR: Testing is encouraged and to make testing even easier and more affordable, resources will be reimbursed for two tests annually when they are scheduled in advance.

2. Visibility: Bluelock RaaS customers have total visibility into their resource allocation and predicted uses through Bluelock Portfolio. 

3. Integration: The tool integrates seamlessly with other Bluelock VDC solutions, with little effort for the end user.  This makes Bluelock’s RaaS an ideal on-ramp for anyone looking to try out a leading vCloud-based public cloud provider.

Z: Thanks, Diana. Find out more about Bluelock’s To-Cloud and In-Cloud RaaS solutions, based on Zerto replication technology, by visiting Bluelock.com.
VMworld 2013 – Top Sessions We’d Like to See on Stage
http://www.zerto.com/blog/general/top-vmworld-2013-sessions/ (Thu, 25 Apr 2013)

It’s that time of year again! Voting has opened for VMworld 2013 sessions, and there are some great choices: lots from the usual suspects, but plenty of breakout sessions from lesser-known presenters too.

Here are our top picks (ours are included in the mix) – please vote for them and get these sessions on the stage!

5189 Reference Architectures for Software Defined Disaster Recovery

  • Shannon Snowden, Zerto and Ben Miller, Bluelock

5250 DRaaS and Compliance: How a Financial Company Leveraged Hybrid Cloud to Cost Effectively Protect Business Critical Applications

  • Ken Seitz, Peak 10 and Shannon Snowden, Zerto

5296 Hypervisor-Based Disaster Recovery: The Missing Link for Virtualizing Mission-Critical Applications

5382 Case Study: Making Money from your Business Continuity Solution

  • Marcus Thimm, Glenbrook High School District 225 and Shannon Snowden, Zerto

5395 vCloud-powered Disaster Recovery for the Software Defined Data Center

  • Shannon Snowden, Zerto and Dante Orsini, iland

5505 Case Study: Protecting Virtualized Healthcare Data and Applications

  • Kevin Crowe, University of Louisville Physicians and Shannon Snowden, Zerto

5779 How LogicsOne Leveraged Cloud Disaster Recovery From Zerto and Cirrity to Meet Customer SLAs and Guarantee Availability

  • Andrew Albrecht, Cirrity

5478 Simplifying Disaster Recovery

  • Bryan Bond, eMeter, a Siemens Company and Jake Robinson, Bluelock

4569 Ask the Expert vBloggers

  • Duncan Epping, William Lam & Scott Lowe of VMware, Vaughn Stewart of NetApp, Rick Scherer, EMC

4818 The Software Defined Datacenter Panel

  • Kamau Wanguhu, William Lam, Duncan Epping, Cormac Hogan & Alan Renouf

5604 A Beginners Guide to Horizon Workspace

5859 Storage and CPU Noisy Neighbor Issues: Troubleshooting and Best Practices


Go ahead and cast your VOTE!!
Software Defined Disaster Recovery – If You Build It, They Will Come
http://www.zerto.com/blog/general/software-defined-disaster-recovery-if-you-build-it-they-will-come/ (Wed, 10 Apr 2013)

Post contributed by Shannon Snowden, Zerto’s Sr. Technical Marketing Architect.

It is springtime and for many it’s time for baseball to begin again. I’m not a baseball fan, but I do like movies, and what we are seeing at Zerto reminds me of the tag line from the movie Field of Dreams.  “If you build it – they will come”.

An interesting thing has developed over the last couple of years at Zerto: we’ve built it, and they are coming. We’ve seen adoption of our ZVR product by enterprises and cloud service providers (CSPs) at an incredible rate.

The driving force behind the CSPs choosing Zerto is the fact that we offer flexibility for disaster recovery, which historically has been so bound to static hardware configurations that it made simple DR difficult and cloud-based DR prohibitively expensive. Frankly, for all the talk of DR being something that could easily move to the cloud, it wasn’t actually feasible.

Zerto’s flexibility allows service providers to host purely in-cloud customers as a managed service, a DR as a service solution, or both. In fact, just about any combination that they can come up with is perfectly acceptable as Zerto can support most solutions as long as they meet a minimum set of requirements which include having VMware vSphere hosts and vCenters and some basic network connectivity.

The reason this is possible is that Zerto works at the hypervisor layer and does not depend on hardware. That being the case, we are augmenting the idea of a Software Defined Data Center (SDDC) to include disaster recovery. We call it Software Defined Disaster Recovery (SDDR).

Comparing the levels of effort for a hardware-based solution for DR versus SDDR makes things much clearer.

Example Deployment #1 – Two Sites

Hardware

For a hardware-based disaster recovery solution, even this simple two-site deployment is difficult enough that on-site configuration by the hardware and software vendors is the norm. The build time is several days to several weeks.

SDDR

For SDDR, this is the simplest deployment; it normally takes about an hour to install, including testing sample failovers and failbacks.

Example #2 – Multiple Sites

Hardware

For a hardware-based solution, this is actually pretty impractical to deploy because of the prohibitive hardware setup and the initial and ongoing administrative costs. If it were attempted, a multi-week on-site professional services engagement would be the quickest way to get it deployed, and the ongoing support would be an administrative challenge.

SDDR

For SDDR, this deployment is not only practical, but also pretty easily implemented. Since SDDR is totally at the software layer, pairing sites is as straightforward as ensuring that they have basic network connectivity between the sites and the correct login credentials to pair the sites. Once that is done, DR and migrations can happen between the site pairs.

CSPs have not only adopted but embraced Zerto Virtual Replication, deploying it into critical production environments with confidence. They are effectively using the same mature product as our enterprise customers, leveraging the multi-tenancy and resource-masking capability already built into the product.

The Software Defined Disaster Recovery model allows CSPs to initiate their first service offering or to expand their existing services. As they build out their capabilities, they have seen their customer base increase and the SDDR server count rise quickly; an attractive SDDR effect is that customers usually want multiple affinity groups of VMs protected at the same time. We’re seeing expansion of service offerings from over 100 CSPs who are using Zerto to power their Software Defined Disaster Recovery. But, just like at the end of the movie, what we’re seeing is just the beginning…

Want to learn more about Software Defined Disaster Recovery? Click here for our “Two Paths to Disaster Recovery” webinar replay.

Is Zerto the iPhone of Business Continuity and Disaster Recovery?
http://www.zerto.com/blog/general/zerto-iphone-business-continuity-disaster-recovery/ (Tue, 12 Mar 2013)

Today’s post is by Eric Hannah, Zerto’s Regional Sales Manager for the North Central US Region.

Remember the days when we used individual devices for every task in day to day life?  It wasn’t long ago that we had home telephones, answering machines with tapes, a clock, a camera and film, a map, compass, a television.  Jump ahead a handful of years to today and with the advancements in technology, you have one device that makes calls, keeps voicemail, takes pictures, captures video, receives mail, provides directions, displays movies and tv shows, does banking, connects you with friends….  (Reminds me of one of my favorite Tom Waits songs “Step Right Up” for you music fans out there.)

I’ve spent my career representing data center solutions including servers, storage, security, data protection, performance monitoring, and so on.  I transitioned from the complete data center solution provider to the smaller single product manufacturer (or IT startup) with the goal of representing the most valuable technologies in the market.  And what I’d looked for (and recently found) is a product with the iPhone’s effect of delivering value – a single, complete solution to the everyday problems in IT.

Zerto has that iPhone effect for the challenges of data protection and application availability or BC/DR.

Before Zerto, there were multiple products I would have to recommend that my clients invest in, train on, manage and piece together to protect data and recover applications in order to deliver DR services to the business – solutions that proved very costly with minimal return, more like an insurance policy. Let’s take a look at some of these:

Local Data Protection:

To protect a local production application and its data, my clients would invest in multiple products, including a backup server, backup software, backup agents (typically included in the software), snapshot software (used like backup software) and backup media that might be disk, de-duplication disk, optical disk, tape or a combination of two or more.

These devices are required to deliver a copy of the application data locally to the IT department.  Typically these processes are run at night (or possibly an extra snapshot job or two during the day) to provide IT with the opportunity to recover to yesterday’s data set.

Local Application Availability:

Before server virtualization, and even in a few IT data centers today, applications would run on physical servers and require a complete second server and application license for high availability. Of course, server virtualization has been a game changer and has become a data center standard. With virtualization, my clients can invest in add-on features to achieve high availability locally in a virtual environment without a completely separate second server, using another host in a server cluster instead – a significant improvement in efficiency over duplicating physical servers, but still an investment in product.

Remote Data Protection:

Most enterprises today require an offsite recoverable copy of data. This can be achieved through a contract with an offsite data management company taking tapes or optical disks off site, or by investing in a second backup disk or de-duplicated disk backup target at a secondary site and replicating between the two. Not only does this require duplicate investments in media, but also an investment in a second site: power, cooling, electricity, management, floor space, etc. This additional capital is spent to ensure that in the event of a full site failure, an enterprise can restore its information at a second site. But how good is that data if we can’t get it online quickly?

Remote Application Availability or Business Continuity:

Most physical replication strategies will get my clients’ data to that second site. It may be yesterday’s or even last week’s data. And when clients suffer a major outage, it’s off to the races to get applications fired up, back online and mapped to the recovered data sets so the business can resume. Clients can invest even further in tools that help automate this process somewhat; even then, a manual process is still required for remapping server and storage devices, network updates and so on.

Zerto Brings Complete DR to Applications:

At the core, all these investments go to giving enterprises the ability to recover data and applications.  With the standardization of server virtualization in the data center today, Zerto was built to deliver all of this in a single solution for virtual applications – data protection, replication and application availability.

What is the iPhone effect for DR?

  • Ability to recover data to previous point in time (backup server, backup software, snapshotting)
  • Continuous data replication (array replication, backup disk replication, offsite tape)
  • Complete and automated application recovery (backup software, local high availability, orchestration, array replication)
  • Offsite clone of application data (backup replication, offsite tape)

When you have a technology that can achieve and deliver on the service levels of four or more technologies, there is typically a significant return on investment.

Like the iPhone, Zerto is something customers don’t quite realize they were missing until they try it. They got accustomed to the fact that juggling many solutions across multiple departments was the way disaster recovery had to be. So if you have multiple data protection and application availability products, I encourage you to get a trial of Zerto and experience the iPhone effect. I promise it will make what you’re doing for DR now feel like walking into a cell phone store 10 years ago and realizing what a single device can do.

Zerto Virtual Replication 3.0 Explained
http://www.zerto.com/blog/general/zerto-virtual-replication-3-0-explained/ (Tue, 26 Feb 2013)

We’re thrilled to announce today’s news: a sneak peek at what will be included in Zerto Virtual Replication 3.0. Everyone’s asking, “What’s new?”, so I created a chart to explain some of the details.

As with all of our releases, Zerto Virtual Replication 3.0 includes features for the enterprise customers we serve, as well as for the cloud providers who use Zerto to power their Disaster Recovery offerings.

Zerto Virtual Replication 1.0:

  • Software-only – deployed quickly and remotely; installs in one hour
  • “Enterprise-class” – consistent and reliable; provides scalable, continuous replication (no snapshots) with RPO of seconds and RTO of minutes
  • Array agnostic – replicates any customer environment to any datacenter or cloud, regardless of storage vendor or architecture
  • Fully virtually aware – ability to protect and recover specific VMs and create VM protection groups with block-level consistency
  • Support for all advanced VMware functionality: vApps, vMotion, DRS, HA

2.0 Additions:

  • Multi-site – ability to replicate between more than one site
  • Multi-tenant architecture – replication to shared infrastructure from multiple customers without network conflicts
  • VMware vCloud Director integration

3.0 Additions:

  • Remote Office / Branch Office (ROBO) protection – extends disaster recovery to branch offices or environments managed by a single vCenter
  • “Test-before-you-commit” – the capability to test a specific failover point before committing it, giving confidence that failover will succeed
  • vCloud 5.1 seamless integration – including native support for vApps, Storage Profiles, Org Networks, Provider VDC and more
  • Recovery reports – document the successful execution of BC/DR processes, for easy auditing and reporting
  • Zerto Self-Service Portal (ZSSP) – a web-based portal that integrates with a CSP’s existing customer portal
  • Zerto Cloud Manager (ZCM) – for managing customer resources and service levels across physical locations
  • VMware vCloud Automation Center (vCAC) integration

March Madness Starts Here at VMware Partner Exchange
http://www.zerto.com/blog/general/march-madness-vmware-partner-exchange-pex/ (Mon, 25 Feb 2013)

Headed to VMware Partner Exchange this week in Las Vegas?

Zerto’s running a fun promotion for partners who close deals in March!

There are two ways to win Beats Headphones:

  • For Partners who are going to be at PEX, get your Zerto swag and details below,
  • Not at PEX? start here to WIN

Top Virtualization Articles to Start Your Week
http://www.zerto.com/blog/general/data-protection-articles-to-start-your-week-2/ (Sun, 03 Feb 2013)

Some light reading for when you need your Virtualization and Data Protection fix! Here are the virtualization articles we found interesting in the last week:

1. For pure awesomeness rolled up into a blog post, check out “What Jaws Taught Me About Information Technology” – well done, @discoposse!

2. TheInfoPro, part of 451 Research, released its ‘Servers and Virtualization Study,’ which was based on live interviews with “server professionals and primary decision-makers at large and midsize enterprises” in North America and Europe. One interesting take-away: “Average virtualization levels have increased 13% from last year to 51%, with a notable increase at the higher levels, roughly doubling the number of organizations virtualizing production applications.”

3. This one’s useful – outside the scope of data protection and more focused on incident management in virtualized environments – but well written and interesting. Here’s an excerpt from the post “5 Steps to Incident Management in a Virtualized Environment,” from the InfoSec Institute:

“Incident Management (IM) is a critical component to a security program. Traditional IM approaches are not always effective in a partially or completely virtualized data center, and this article discusses five steps that lead to a tight integration of VM and existing incident response processes. They examine and help remedy system, network, and process design challenges associated with VM placement, incident detection and containment, and business process recovery unique to virtualization.” Read More…

4. Finally, a topic that can never be discussed too much: when to upgrade to the next VMware version. This time it’s addressed by Trevor Pott at SearchVMware: “Ready for a vSphere 5.1 upgrade? Depends on your current version”.

Happy Super Bowl Sunday and have a great week!

Top 5 Reasons Why Backup is Not Disaster Recovery
http://www.zerto.com/blog/general/top-5-reasons-why-backup-is-not-disaster-recovery/ (Mon, 28 Jan 2013)

Today’s post was written by Jennifer Gill, Zerto’s Director of Product Marketing.

Many organizations have a backup strategy but not a disaster recovery strategy. Why? Because they think that if they have backup, then they have a disaster recovery plan. Not quite. Here are 5 reasons why backup is not disaster recovery.

1. Service levels – low recovery point objectives and recovery time objectives.

Backup products do not deliver recovery point objectives of seconds and recovery time objectives of minutes. Backups typically happen once per day, at night, so your recovery point objective could be 23 hours. If you are protecting a mission-critical application, 23 hours of data loss is not acceptable. Rebuilding a virtual machine, and everything that goes along with it, from tape can take days; rebuilding from disk might be a little faster – a few hours. Again, this is not a service level that a mission-critical application can tolerate.
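A quick sketch of where that 23-hour figure comes from, assuming one nightly backup with a roughly one-hour window; both numbers are illustrative:

```python
# Worst case for scheduled backups: data written just after one backup
# completes is unprotected until the next one finishes.

def worst_case_rpo_hours(backups_per_day: int, backup_window_h: float = 1.0) -> float:
    """Hours of data at risk if disaster strikes just before the next backup."""
    interval_h = 24 / backups_per_day
    return interval_h - backup_window_h

print(worst_case_rpo_hours(1))  # 23.0 -- one nightly backup, 1-hour window
print(worst_case_rpo_hours(4))  # 5.0  -- every 6 hours, at real CPU cost
```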

2. Application impact: Performance and backup window. 

There is a reason why backups occur at night: making a copy of an application and its data drains the CPU of the server. If you need a more aggressive RPO than the 23 hours described above, you have to create copies more frequently. This is possible, but at the expense of CPU, and it significantly impacts end-user productivity. Additionally, the backup window is a fixed period of time; since backups run overnight, most organizations assign 8 hours for a backup to complete. The application must be quiesced and then copied, and as applications grow and grow, quiescing and backing them up can no longer be completed within the backup window.

3. Retention. 

Backups are typically stored for a very long time for compliance and audit purposes; disaster recovery information is stored for hours or days. Additionally, with a backup you will have just one snapshot of the application and data, whereas an enterprise-class disaster recovery solution gives you several points in time to fail over to, in case the most recent point in time is corrupted.

4. Automated recovery. 

Building the environment from a backup, especially a tape backup, is extremely time consuming; this is why the recovery time objectives are so long. With an enterprise-class disaster recovery solution, the entire recovery process can be automated: the VMs on the protected site are automatically shut down, the replicated VMs on the replication site are started, and any re-IPing happens automatically to ensure end-users have fast access to the application and data. For mission-critical applications, this entire process should take just a few minutes – a very different service level from a backup solution. An automated process is also far less error-prone, since every manual step introduced is an opportunity for a mistake; a disaster recovery strategy must eliminate as many opportunities for error as possible, and automation accomplishes this and even verifies it through non-disruptive testing. It is critical that testing can be done without impacting the applications and data, so that end-user productivity is not affected in any way. Once the testing is complete, customers know that failover, recovery and failback will perform as the business requires.
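To show the shape of such an orchestrated failover, here is a skeleton only; every function in it is a placeholder standing in for real platform calls, not an actual API:

```python
# Skeleton of an automated failover: the steps above as one repeatable
# routine. The stubs below stand in for real infrastructure calls.

from typing import List

def shutdown_protected(vm: str) -> None: print(f"shutting down source {vm}")
def power_on_replica(vm: str) -> None:  print(f"powering on replica {vm}")
def reassign_ip(vm: str) -> None:       print(f"re-IPing {vm} for the recovery network")
def health_check(vm: str) -> bool:      print(f"verifying {vm}"); return True

def automated_failover(vpg_vms: List[str]) -> None:
    """Shut down sources, start replicas, re-IP, then verify -- no manual steps."""
    for vm in vpg_vms:
        shutdown_protected(vm)
    for vm in vpg_vms:
        power_on_replica(vm)
        reassign_ip(vm)
    if not all(health_check(vm) for vm in vpg_vms):
        raise RuntimeError("recovery verification failed")

automated_failover(["crm-db", "crm-app", "crm-web"])
```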

5. Reverse replication. 

Once an application is available on the replication site and end-users are working in it, you must make sure that it continues to be protected. A backup solution will not start taking backups and shipping them back to the production site; a disaster recovery solution will ensure the replicated application is protected by replicating back to the source site.

5 Differences Between Zerto and SRM
http://www.zerto.com/blog/general/5-differences-between-zerto-and-srm/ (Mon, 21 Jan 2013)

This post was written by Shannon Snowden, Zerto’s Sr. Technical Marketing Architect. Prior to joining Zerto, Shannon designed and completed many SRM installations.

What are the differences between Zerto and SRM (with Array-Based Replication)?

In a previous blog post we compared Zerto with vSphere Replication. Admittedly, we knew it wasn’t a fair comparison, but it was in response to many questions we received after VMware announced some changes to vSphere Replication at VMworld 2012.

A better comparison is actually Zerto vs. SRM (VMware Site Recovery Manager) together with array-based replication. We frequently get asked to compare these two and although there are many more points on both sides, here are what we believe are five significant differences between the products.

1. SRM was built as an orchestration product, Zerto as a replication product.

SRM

SRM is an orchestration product that ties together a bunch of disparate components and streamlines workflows for moves and failovers. I often refer to it as the glue between vCenter, ESX, storage replication and storage. That’s what it was designed to do and, frankly, it performs that task as well as any orchestration product could.

But you can actually perform manually every step that SRM performs.

Zerto

In contrast, Zerto was designed to provide replication at the hypervisor level, with automation of failovers and recoveries built in – and we do that better than anyone in the business. Here are more details of what Zerto does.

The workflow and automation configuration steps for moves and failovers are integrated seamlessly, and administration is very straightforward, since Zerto doesn’t have to coordinate with other components to perform successfully. So, can you perform all the steps manually with Zerto? No: because Zerto is the replication mechanism itself, those steps can’t be performed manually.

2. Only Zerto can restore from any point in time.

SRM

One of the more frustrating things we hear about SRM is that even though an organization has paid for storage replication with point-in-time capability, SRM does not use points in time in recoveries or moves. If the last bit of replicated data is corrupt, you have to use some other mechanism, like restoring from backups, or bypass SRM altogether and restore directly from the storage.

Zerto

Zerto can be configured to go back as far as 120 hours for point-in-time recovery. We work with VSS, and we can even get application-consistent checkpoints for applications like SQL, Exchange and Oracle on Windows and Linux platforms. You can manually create checkpoints as well.

But we do something else with the points in time that many of our customers love and frequently use: off-site cloning from any point in time, for a single machine or a group of machines in what we call Virtual Protection Groups or VPGs (see here for more information on VPGs). With our VPGs, you can take a whole application comprised of multiple VMs, clone the entire group at the same point in time, and have a new test lab with the VMs ready to test from whatever point in time you want to start.

After you get your disaster recovery solution built, you really just want to start protecting virtual machines.

3. Zerto requires no storage configuration or reorganization; SRM does.

SRM

Even though SRM is ready to use, the first thing you have to do is either configure the existing production LUNs or volumes containing the VMs you want to protect for replication, or create new volumes and move the VMs over to the newly created replicated volumes. You need to move these VMs prior to configuring the Protection Groups in SRM, because if you storage vMotion the VMs to your new replicated LUN you have to go into SRM and reconfigure it; SRM doesn’t play well with storage vMotion.

What we find with customers that move away from array-based replication, either with or without SRM, is that they usually reduce their LUN count significantly, because they no longer have to create extra LUNs specifically for replication. With SRM, VMs that are not protected are put on non-replicated LUNs, and the VMs to be protected have to be moved to replicated LUNs, often matching the spindle count, disk speeds and configuration of the current LUNs or volumes in order to keep performance as expected.

SRM requires a lot of work before you can actually protect the first VM.

Zerto

For Zerto, the underlying storage and LUN configuration is irrelevant, so setup is simple. As soon as you complete the Zerto installation, which usually takes about an hour, all you do is create the above-mentioned VPGs (a couple of clicks each) and start adding VMs into them. Once that is done, replication is happening.

If you have several TBs of data to replicate, you probably don’t want to initiate that replication over the wire. Zerto can leverage pre-seeded VMDKs at the target site: often our customers or our Cloud Partners copy the VMs to a portable disk, take it to the target site, and copy the VMs to the datastores where they want the VMs to run when failed over. During VPG creation or editing, you can granularly map an individual VMDK in a VM to the pre-seeded VMDK you copied at the remote site.

Once that is done, only the changes are replicated across the wire. With the pre-seeding option, you can have a VM with TBs of data fully protected, with just seconds of RPO between the sites, in just a few minutes.
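A quick illustration of why pre-seeding matters; the 100 Mbps link speed and data sizes below are assumptions for the example:

```python
# Initial sync over the WAN vs. shipping only the deltas after a pre-seed.

def transfer_days(data_tb: float, link_mbps: float) -> float:
    """Days to push data_tb terabytes over a link_mbps WAN at full rate."""
    bits = data_tb * 1024**4 * 8          # TB -> bits
    return bits / (link_mbps * 1_000_000) / 86_400

print(f"{transfer_days(5, 100):.1f} days")     # 5 TB initial copy: ~5 days
print(f"{transfer_days(0.05, 100):.2f} days")  # ~50 GB of deltas: about an hour
```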

4. SRM requires the same storage arrays and VMware versions; Zerto does not.

SRM

This is where SRM is challenged, not necessarily because of the SRM product itself but by the nature of what it does. Since it coordinates multiple components like links in a chain (storage, multipathing software, array-based replication, SRAs, SRM, vCenter) at each site, and each link in the chain has to be at a matching firmware, driver and software level, there are many opportunities for incompatibility during configuration.

Moreover, after you get it successfully operational, each of these components requires updates and patches, which can introduce instability into a functioning SRM solution. You will find support articles for each of the links in the chain that have caused SRM to fail. You will also find frustrated admins looking for the right throat to choke while being passed between software and hardware vendors.

Zerto

Zerto is a solution built from the ground up for hypervisor-based replication, with workflows integrated, so we don't really depend on any other components to perform our core functionality. That significantly narrows the window of opportunity for incompatibility, since our components are designed to work together. When issues do arise, you have only one place to call, and our Support team is on it immediately.

But we are very proud of the fact that we often hear the comment, “Your product actually does what you say it does.”

5. Zerto is multi-site, anywhere to anywhere replication; SRM + array based replication is too expensive for many-site replication.

SRM

On top of the additional configuration and ongoing support operations, array-based replication with SRM is not a cheap solution. So the companies that have it usually include only two sites in their DR plan: a primary and a recovery site. It normally isn't practical to have more than two sites with SRM.

Zerto

With Zerto, you are not constrained by what the array can replicate, so you can create an anywhere to anywhere replication scenario. You can go between your own sites or some combination with a cloud service provider. You can easily replicate data to and from a new data center in the environment, for example, new entities from a merger or acquisition. You simply pair the new site by adding the peer site's IP address, create a Virtual Protection Group (VPG) and start replicating; a sketch of that sequence follows the requirements list below.

It really doesn’t matter to Zerto where you are replicating as long as you meet some very basic requirements.

1. You need a vCenter at each site.
2. You need ESX 4.x (and up) hosts.
3. You need network connectivity between the sites.
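Conceptually, the pairing-and-protecting sequence is just two calls. As with the earlier sketch, the endpoint paths and payload fields below are illustrative assumptions, not a documented API; they are here only to show how little is involved.

```python
# Hypothetical sketch: pair a new peer site, then start protecting to it.
# URL paths and payload fields are assumptions for illustration only.
import requests

BASE = "https://zvm-boston.example.com:9669/v1"     # assumed local ZVM
zvm = requests.Session()
zvm.headers["x-zerto-session"] = "<session token>"  # assumed auth scheme

# 1. Pair the new site: all you supply is the peer site's address.
zvm.post(f"{BASE}/peersites",
         json={"hostName": "zvm-acquired.example.com", "port": 9071})

# 2. Create a VPG targeting the new site and add VMs; replication begins.
zvm.post(f"{BASE}/vpgs",
         json={"name": "acquired-entity-apps",
               "recoverySite": "zvm-acquired.example.com",
               "vms": ["billing-01", "billing-db01"]})
```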

Zerto can replicate between vCenters, and Zerto is also the only fully integrated vCloud Director solution on the market today, SRM included.

We hope you find enough reasons here to go get a free trial of Zerto for your Disaster Avoidance, Disaster Recovery and datacenter moves. We also hope you find that this post treated SRM fairly. Our goal is to highlight the differences between the products. Just as we did with the vSphere Replication post, we invite corrections to any technical errors in the points we made in this post.

5 Data Protection Articles to Start your Week http://www.zerto.com/blog/general/data-protection-articles-to-start-your-week/ http://www.zerto.com/blog/general/data-protection-articles-to-start-your-week/#comments Mon, 14 Jan 2013 11:00:31 +0000 Zerto http://zerto.com/?p=4264 This week’s ‘Top 5 articles in Data Protection’ post was compiled by Wes Schifone, Zerto’s Managing Director, Inside Sales. 

If you’ve missed some of the recent articles on data protection and recovery – go ahead and check these out:

1. We’ve all seen statistics on what happens to businesses when they experience server downtime, occasionally rolling our eyes at some of the over-the-top data. However, the folks at VentureBeat put together this awesome infographic on the high cost of server downtime, citing useful statistics such as the most common causes (can you guess the top reason?) and presenting a few ways to avoid these interruptions in service (a main part of many of our jobs).

2. Considering a private cloud for your organization? Have you given any thought to implementing a chargeback structure to allocate resources based on the different SLAs you provide your business units? This article from David Marshall over at InfoWorld describing How Virtualization Managers Can Implement Chargebacks provides a guide for IT leaders to get this right.

3. Think SIS’s Brad Brunner describes his firm’s process for finding the right mix of solutions for their clients when it comes to Disaster Recovery in a cloud infrastructure. He even provides a really useful diagram to identify and measure value and costs based on things like RPOs and RTOs in this blog post, Disaster Recovery and High Availability – with a Cloud Infrastructure Provider.

4. Considering a migration? Whether it’s to a new company-managed data center facility, into a hosted data center, into “the cloud”, or a hybrid of these, Varrow’s Advisory Services team describes why moving these assets is one of the riskiest activities an IT team can undertake. “In no other activity do you intentionally risk so much impact to so many IT assets and services, in a single act of bravado” writes Paul Penny. In this article ‘Data Center Migration – Planning is Essential‘ Penny, Varrow’s VP of Consulting Services, outlines the 5 phase methodology they apply for their clients in supporting data center migrations.

5. What’s the best option for your environment – public cloud or vSphere? TechTarget reviews the top factors for each, from security concerns to cost structures and impacts on the environment. Check out the option that best fits your organization by reading about the top four recommendations for finding the right mix.

What are your must-read articles on data protection this week?

Stretched Server Clustering is Not a DR Plan! http://www.zerto.com/blog/general/stretched-server-cluster-is-not-a-dr-plan/ http://www.zerto.com/blog/general/stretched-server-cluster-is-not-a-dr-plan/#comments Tue, 04 Dec 2012 13:00:13 +0000 Zerto http://zerto.com/?p=3875

Today’s blog post was written by Joshua Stenhouse, Zerto’s UK-based Solutions Engineer.

A stretched server cluster / stretched SAN is a fantastic solution for protecting against the failure of a single SAN. However, in the modern-day datacenter, with redundant power supplies, generators, controllers and networking, a single SAN failure is a most unlikely event. Given the high cost of implementation – requiring high-speed, low-latency links, matched infrastructures, same-vendor storage and relatively close proximity of the datacenters – stretched SAN clusters are not going to deliver the BC/DR the business expects.

There are four key reasons that stretched server clustering is not right for Disaster Recovery:

1. Stretched SAN clusters are not going to protect you from logical failures.

A key issue with stretched SAN clusters is the most likely of DR events: a logical failure. This can be caused by a system-wide virus, human error or database corruption. With a stretched SAN cluster, a logical failure is immediately written to both sites, offering no quick Recovery Time Objective (RTO) for such DR events.

Zerto protects against logical failures with the ability to fail over a single application or a whole site to a previous point in time within minutes, covering the most likely of DR events.

2. Stretched server clustering can’t protect you from site link failures.

The second most likely DR event is a site link failure. With stretched SAN solutions this can lead to a split-brain scenario, with different VMs active in different sites, and an end result of no business continuity.

With Zerto, failover is initiated manually, and then Zerto automatically handles the entire failover process. This ensures that performing a DR failover is a business decision and not an automatic, incorrect reaction to a link failure.

3. Close proximity of stretched clusters won’t protect you from environmental disasters.

Regional power cuts, flooding, earthquakes and hurricanes are all events that are commonly blamed for data center outages and failures. Due to the close proximity that stretched SAN cluster datacenters need in order to ensure adequate performance, they are not likely to protect against these events.

Because Zerto can replicate over much longer distances while maintaining an RPO of seconds, it provides a far better line of defense against such environmental DR events.

4. Finally, stretched server clustering is high cost.

Bandwidth & storage are not cheap in a stretched cluster scenario.

Zerto is storage agnostic and replicates over existing IP networks. Existing storage can be re-used in the DR site irrespective of model, connectivity or vendor; even local disks can be used. Replication can run over links as small as 5 Mbps, with no sensitivity to latency. Couple all of the above with built-in bandwidth compression, and the cost of implementing Zerto is significantly lower than a stretched SAN solution.

What if you have already implemented a stretched SAN solution?

Zerto can be a great complement to an existing environment, offering replication to a true offsite location over an IP link with no maximum latency requirement, and without any performance or operational impact on the existing stretched SAN solution.

If the data and applications are valuable enough to warrant the size of the existing investment required for a stretched SAN cluster, then the added layer of protection offered by Zerto is a no-brainer.

In an enterprise infrastructure with sufficient resources the ideal solution is a stretched SAN cluster plus Zerto replicating to a different part of the world. If I had to choose between a stretched SAN solution or Zerto, I’d select the solution protecting against the most common DR events: Zerto.

Rock the Vote! Zerto is Shortlisted for a Tech Trailblazer Award http://www.zerto.com/blog/general/zerto-shortlisted-tech-trailblazer-award/ http://www.zerto.com/blog/general/zerto-shortlisted-tech-trailblazer-award/#comments Mon, 03 Dec 2012 14:30:51 +0000 Zerto http://zerto.com/?p=3864 Vote for Zerto to Win Virtualization Tech Trailblazer of the Year! 

Today, Zerto announced that it has been shortlisted as a Virtualization Trailblazer in the 2012 Tech Trailblazers Awards.  The shortlist was compiled by a judging panel of industry experts, and the winners will be chosen by a public vote.  Voting ends on Friday, December 14th.  Please cast your vote for Zerto!

Zerto was shortlisted on the strength of its Zerto Virtual Replication (ZVR) solution, the industry’s first hypervisor-based replication product, which is fully aligned with the needs of virtualized mission-critical applications. ZVR is the only solution that provides enterprise-class replication features (such as very short RTOs and RPOs) that easily align with flexible virtual environments, with the added advantages of full automation and cloud readiness.

Check out this video we made to help explain why we deserve the Virtualization Trailblazer of the Year award:

System Downtime and Human Error: Disaster Recovery is More than a “Nice to Have” http://www.zerto.com/blog/general/system-downtime-and-human-error/ http://www.zerto.com/blog/general/system-downtime-and-human-error/#comments Thu, 15 Nov 2012 12:21:19 +0000 Zerto http://zerto.com/?p=3826 Sometimes, when speaking to IT managers at meetings or tradeshows, we learn that Disaster Recovery at many companies is being overlooked. We ask almost everyone we meet, “What are you doing for DR?” and inevitably, some sheepishly look around  hoping to avoid eye contact. Yes, there are many critical issues to address in the data center, but we would argue that none are really more pressing than ensuring that your company can recover all applications quickly after a disaster.  Here’s why:

System downtime is usually not related to a natural disaster.

Typically, outages are “home-grown” and involve someone deleting a datastore or data being corrupted after a software update.  They occur on a regular and even frequent basis.

“That would never happen to us.”  Really?

Human error accounts for 56 percent of all system downtime and data loss, according to this article from Continuity Central.

It’s shockingly easy to delete a datastore. In this (perhaps slightly melodramatic) video, “Pixar’s Oren Jacob and Galyn Susman recount how the files for Toy Story 2 were almost lost due to an accidental Linux command and bad backup.”

According to Robert Crawford, lead systems programmer and mainframe columnist:

“Unfortunately, the biggest cause of system downtime is human error. Many times it’s a procedure someone didn’t follow or think all the way through. Another cause may be a system quirk or obscure design flaw that someone didn’t account for. Sometimes, it’s as simple as a typo.

The two most important strategies for avoiding downtime are planning and automation. Planning, of course, works out the best way to make changes and avoid conflicts. Automated changes greatly reduce the chance for human error.”

Disaster Recovery has become incredibly simple. Failover, failback and testing can be fully automated.

Although companies may think robust Disaster Recovery solutions must be storage-based – expensive, complex, and taking days or even weeks to install – Zerto provides a solution that is a fraction of the cost, requires no environment configuration changes, and installs remotely in one hour.  Join our upcoming webinar to see just how easy Disaster Recovery can be.

Zerto Disaster Recovery Discussion on the SIS Blog http://www.zerto.com/blog/general/zerto-disaster-recovery-discussion-on-the-sis-blog/ http://www.zerto.com/blog/general/zerto-disaster-recovery-discussion-on-the-sis-blog/#comments Sun, 04 Nov 2012 09:40:07 +0000 Zerto http://zerto.com/?p=3703 On the blog of Zerto partner SIS, Michael Voss discusses the impact of DR on one of their customers, the University of Louisville Physicians.

Here’s an excerpt from the timely SIS post, “Treating Disaster Recovery Issues with Zerto and SIS”:

“You only need to take one look at the news to know that natural disasters do happen, which could put your data and business or organization at risk. Even just a one day outage could cost your business dearly. It’s still too early to tell how badly Hurricane Sandy has affected industries on the East Coast, but the effects of the storm are further reminders of the importance of disaster recovery. Disaster recovery gives businesses, health care professionals, government agencies and other organizations the peace of mind that they won’t be in the dark if disaster does occur. Many industries mandate certain regulations when it comes to disaster recovery, and this is certainly the case for healthcare. Let’s take a look at one example in Louisville.”  Read the rest of the post on the SIS blog.

The post references SIS and Zerto customer University of Louisville Physicians, who recently announced that after trying two other popular disaster recovery solutions that did not meet their needs, they chose Zerto Virtual Replication.

Hurricane Sandy and Disaster Recovery: Live Updates http://www.zerto.com/blog/general/hurricane-sandy-and-disaster-recovery-live-updates/ http://www.zerto.com/blog/general/hurricane-sandy-and-disaster-recovery-live-updates/#comments Mon, 29 Oct 2012 08:27:04 +0000 Zerto http://zerto.com/?p=3697 With one of the worst storms in recent history brewing on the East Coast, Zerto will be tracking the business impact of Hurricane Sandy here on our blog.  We certainly hope that the storm proves to be uneventful, and that Hurricane Sandy disaster recovery plans keep everyone, and their data, protected and safe.

On this blog post, we’ll be adding live updates of how our customers and other companies are faring the storm.

Final Update – Friday, 10:00 am ET: Now that the storm has officially passed, the hardest hit areas, particularly the coast of New Jersey and New York City, have started to assess the damage and begin the long process of recovery.  Our thoughts and prayers go out to the many people affected by the storm, those who have suffered loss and who have lost their homes, livelihood and the many who are still are without power.  Those wishing to donate to the relief efforts can do so here and here.

Update – Tuesday, 1:11am ET:  TechCrunch reports that Datagram, the ISP whose Manhattan servers host BuzzFeed, Huffington Post, Gawker, and other sites, has lost power: “Basement flooded, fuel pump off line – we got people working on it now. 5 feet of water now,” according to officials quoted in the article. Reports from TechCrunch a couple of hours later said the sites were back online: “BuzzFeed’s site and story page are back online, thanks to a Content Delivery Network, Akamai, which hosts the content at servers distributed around the world.”

Update – Monday 9:30pm ET: Hurricane Sandy makes landfall, winds reaching 129 mph – gets downgraded to a post-tropical cyclone. An estimated 5.2 million people on the East Coast are without power.  The storm surge in Battery Park in lower Manhattan was 13 feet high.

Update – Monday 5:28pm ET: One of our partners, Integrity Virtual IT, just completed a successful failover of operations for a customer from Virginia (in the path of Sandy) to Chicago.  The company’s employees are fully operational – working online with the Zerto replicated datacenter.

Update – Monday, 3:10pm ET: Hurricane Sandy, with winds of 90 mph, seems to be picking up considerable speed just hours before it is expected to make landfall.

One of our customers in the path of Hurricane Sandy spoke to us today about their plan to complete a preemptive test of their disaster recovery solution prior to the hurricane.  “We are not planning to failover today unless we experience a power outage. We feel confident based on a recently executed planned move of all of our Zerto-protected VMs to our recovery site during the day last Friday, and a move back Saturday morning. The move went well in both directions. Overall, the move we performed as a test enabled us to keep all of our vital systems up with very little, if any, noticeable impact on our users, which is just what we wanted.”

Update – Monday, 7:59am ET: Local weather in Boston (where Zerto is headquartered) shows rain, with temperatures of 56º and wind gusts of 39 mph.

Lucas Mearian at Computerworld writes that those in the financial sector on the East Coast feel pretty confident in their disaster recovery plans for Hurricane Sandy, noting that, “In many cases, banks and brokerages will have two data centers in relatively close proximity — 20 or 30 miles apart — for business continuity where they replicate data in real time between the two to ensure if one goes down, the other can still operate. Then, they’ll have a third disaster recovery site to ensure a regional disaster will still not cripple their operations.” According to the article, “Virtualization has had a lot to do with better RPOs and RTOs with regard to x86 server infrastructures in larger companies, and for small- to medium-sized businesses, SaaS cloud services have allowed for better disaster recovery planning than at any time in history.”

 

Zerto Interviews Partners and Customers at VMworld 2012 http://www.zerto.com/blog/general/zerto-interviews-partners-and-customers-at-vmworld-2012/ http://www.zerto.com/blog/general/zerto-interviews-partners-and-customers-at-vmworld-2012/#comments Mon, 15 Oct 2012 12:09:28 +0000 Zerto http://zerto.com/?p=3577 Zerto was happy to have the opportunity to meet many of our European partners and customers face-to-face in Barcelona at VMworld 2012. Below are some of the videos we took of interviews with customers, including our customer Kingfisher IT Services and partners ACP & ZX Factory.  A few highlights:

  • “The product does exactly what we want it to do.”
  • “Now we can do DR tests that don’t disrupt, it’s revolutionized our DR.”
  • “One and one is three in the case of DR as a Service!”

Barcelona Football Club Tour with Zerto http://www.zerto.com/blog/general/barcelona-football-club-tour/ http://www.zerto.com/blog/general/barcelona-football-club-tour/#comments Tue, 09 Oct 2012 21:44:19 +0000 Zerto http://zerto.com/?p=3543 What a day! Zerto treated partners and customers attending VMworld 2012 to a tour of Barcelona FC stadium, Camp Nou.  We heard about the amazing history of this football club, from a famous hungarian who brought the team to glory, to their tiki-taka “total football” style that made them the first club to ever win the “sextuple” of trophies in a single season.  Enjoy the photos! Thanks to the customers and partners who joined us for a great tour.

Zerto To Take Customers on FC Barcelona Club Tour http://www.zerto.com/blog/general/zerto-takes-customers-o-fc-barcelona-club-tour/ http://www.zerto.com/blog/general/zerto-takes-customers-o-fc-barcelona-club-tour/#comments Tue, 02 Oct 2012 22:08:01 +0000 Zerto http://zerto.com/?p=3518 During VMworld Barcelona 2012, Zerto will be taking customers on an exclusive tour of the world-class FC Barcelona stadium, Camp Nou, to be followed by dinner at one of Barcelona’s great tapas bars.  The event is invite-only, for Zerto customers.  If you are a Zerto customer attending VMworld Barcelona 2012, please contact us to reserve a spot.

Stephen Foskett & Gabrie van Zanten on Zerto @ VMworld http://www.zerto.com/blog/videos/stephen-foskett-gabrie-van-zanten-on-zerto/ http://www.zerto.com/blog/videos/stephen-foskett-gabrie-van-zanten-on-zerto/#comments Mon, 24 Sep 2012 12:58:58 +0000 Zerto http://zerto.com/?p=3465 In this video from VMworld, Gabrie & Stephen discuss VMworld 2012 and hypervisor-based replication, including Zerto. Hypervisor-based replication was a hot topic at VMworld – we blogged about that here. A few highlights:

  • “DR generally is one of the most compelling uses for VMware…one of the best things you get from virtualized infrastructure”
  • “I can see the need for any to any replication…Zerto has a really good reputation in the industry already…pretty cool to see a small company like that to be a leader and the large companies to be the followers.”
  • “Look at backup and DR, maybe that’s something that shouldn’t be done in the storage, that should be done in software…in the hypervisor”

A bit about Stephen and Gabrie:
Last year, when we launched the company at TFD #6, we had the pleasure of meeting Stephen Foskett of Gestalt IT, who runs the TFD events.  We also enjoyed meeting Gabrie van Zanten, who was the first blogger to write about Zerto and who guest-blogged on the Zerto blog during VMworld. Both continue to be friends of Zerto and it’s always great when we have an opportunity to catch up with them!

VMware, Nicira and the Software Defined Data Center (SDDC) http://www.zerto.com/blog/dr/vmware-nicira-and-software-defined-data-center-sddc/ http://www.zerto.com/blog/dr/vmware-nicira-and-software-defined-data-center-sddc/#comments Fri, 07 Sep 2012 13:02:08 +0000 Zerto http://zerto.com/?p=3405 This is a guest post from Zerto’s Director of Central US Sales, Steve Thomsen

A few months back I wrote a post titled, “Is the Hypervisor now the Storage Array?” Well, it seems that post was pretty spot on (wink, wink). Very recently VMware made their biggest acquisition to date with a $1.2B buyout of Nicira, a company that specializes in networking at the Hypervisor level, or Software Defined Networking (SDN). Cisco was already a player here, but their technology is still dependent upon underlying Cisco infrastructure at the physical layer. Not so with Nicira. Nicira is all about the Hypervisor.

If you look at what has happened over the past few years, you see that VMware’s vision of the Software Defined Data Center (SDDC) has really taken shape. VMware started it off with server virtualization. The security vendors made their move: Altor Networks (now Juniper), McAfee, Symantec, and a few start-ups such as Catbird. Cisco got into the game with the Nexus 1000v. And then VMware really went all in with the acquisition of Nicira.

But why the sudden craze about data center services in the hypervisor? Certainly it is not about infrastructure consolidation. This is happening because business is about applications. The security vendors, Cisco, and of course VMware all knew that in order to better serve the application (i.e., the business), you have to be really “close” to the application.

Prediction: The SDDC is here to stay, and it is only going to grow, eventually taking over.

The industry has developed storage and server-based data protection services to protect the application. But if the application now lives in the Hypervisor, what does the storage know about the VMs residing on a datastore? The answer: nothing. And what does server/VM-based software know about the other VMs that comprise the application, outside of the VM it resides on? The answer, again, is nothing.

For Disaster Recovery, this is a pickle. Without “application awareness” in the data center, data protection is severely compromised, involving complex manual processes to force-fit physical DR solutions onto virtualized environments. Enter Hypervisor-based storage software and data protection.

Zerto: Driving Storage SW into the Hypervisor & across the Software Defined Data Center

When it comes to Hypervisor-based storage software, think of all of the virtual disks in the management platform (vCenter in this case) as you would the disks in an array. What if you could apply storage software services against these virtual disks, which now really form the “virtual” array, with no regard to the underlying physical infrastructure? Just like VMware decided to do with networking when they bought Nicira.

This works regardless of the underlying storage topology, protocols (FC, iSCSI, NFS and non-shared storage), or the number, types and vendors of arrays in the mix. You can now create logical groups for protection, what Zerto calls Virtual Protection Groups (VPGs), based upon the VMs and their associated VMDKs. If you have a multi-tiered application (say, a web server tied to an app server with a backend database server) with VMs residing on NFS and FC datastores on disparate arrays, such as NetApp and EMC, you can now protect and recover those VMs as a single application instance. This was not possible before. Virtualization tools such as Storage vMotion and Storage DRS don’t break replication/protection anymore.

Zerto’s Hypervisor-based storage SW & Integration with Software Defined Data Center

This is where things get really interesting. Because Zerto is so tightly integrated into the Hypervisor, we understand relationships and can perform tasks at a very granular as well as macro level. Rather than thinking of VMs and datastores, think of VMDKs and Applications.

Zerto understands the configuration of the VMDKs comprising a VM. This allows for flexibility in terms of how a VMDK is replicated as well as allocated at the target. If a VMDK holds a swap or page file, or perhaps scratch space for a database, then you probably don’t need to replicate its IO. However, you certainly want these VMDKs recovered in the event of a failover, recovery or testing scenario. Zerto allows you to do just that: administrators can designate specific volumes as “Swap,” in which case Zerto will not replicate IO to the VMDK but will recreate the VMDK upon failover. This, coupled with the fact that Zerto only replicates IO to the protected VMDKs and not the full LUN, allows Zerto to use on average 20-40% less bandwidth than leading storage replication software solutions.
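A toy model (ours, not product code) of that per-VMDK policy makes the bandwidth argument easy to see: disks marked as swap or scratch ship no IO, but still get recreated, empty, at the recovery site on failover.

```python
# Toy model of a per-VMDK replication policy; names and figures are invented.
from dataclasses import dataclass

@dataclass
class Vmdk:
    path: str
    daily_churn_gb: int
    role: str            # "data" or "swap"

def replication_plan(vmdks):
    ship     = [d for d in vmdks if d.role == "data"]  # IO replicated
    recreate = [d for d in vmdks if d.role == "swap"]  # recreated empty on failover
    return ship, recreate

db_vm = [
    Vmdk("[ds1] db/db.vmdk",      5,  "data"),
    Vmdk("[ds1] db/db_1.vmdk",    40, "data"),
    Vmdk("[ds2] db/db_swap.vmdk", 60, "swap"),   # page-file churn
]
ship, recreate = replication_plan(db_vm)
saved = sum(d.daily_churn_gb for d in recreate)
print(f"Shipping {len(ship)} disks; {saved} GB/day of swap churn stays off the WAN.")
```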

There are design benefits to Hypervisor-based storage software and data protection as well. You can create big ol’ datastores and still have granularity of recovery. Now you have fewer LUN IDs and datastores to manage. We all love that.

The net result: you have improved recovery, dramatically increased your agility, and decoupled your VMs from the storage environment.

Again, it’s about what is important to the business: your applications.

A couple of weeks ago I was having coffee with the product manager of a large, global Cloud Services Provider (CSP). One of the ideas we batted back and forth was that physical servers, network devices, and storage arrays are really now just there for IO. All the intelligence has moved into the Hypervisor. Servers, security, networking, and now storage software are all there. The Software Defined Data Center (SDDC) has indeed arrived.

Please join us for a brief webinar, September 13 at 2pm ET, where we’ll be discussing the Software Defined Data Center and some of our other key takeaways from VMworld 2012. Register here.

vSphere Replication and Zerto – What’s the Difference? http://www.zerto.com/blog/general/vsphere-replication-and-zerto-whats-the-difference/ http://www.zerto.com/blog/general/vsphere-replication-and-zerto-whats-the-difference/#comments Wed, 05 Sep 2012 19:31:10 +0000 Zerto http://zerto.com/?p=3371 This post is the first written by vExpert Shannon Snowden, Zerto’s new Senior Technical Marketing Architect.

Stephen Foskett of Tech Field Day sent us a couple of great questions via Twitter over the holiday weekend about VMware’s vSphere Replication compared to Zerto Virtual Replication. Since we had a 140-character limitation for the replies, we decided to tweet a top 10 list of differences between vSphere Replication and Zerto Virtual Replication.

We’ve never posted a comparison to another product, not on our site or blog. One reason for this is that FUD is not our thing. We think the features of Zerto Virtual Replication are outstanding because there is really no other product on the market like it.

But Stephen’s question represents one of the most common questions we were asked at VMworld. We also noticed that bloggers picked up on the VR capability discussion after VMware announced that vSphere Replication is now decoupled from VMware’s SRM (Site Recovery Manager).

So, we think it’s worth elaborating on our twitter replies here – plus we had more than ten points to make – the full comparison is here!

  • vSphere Replication is available in VMware Essentials Plus and above. Zerto works with the basic VMware Essentials product
  • Having vSphere Replication working in your infrastructure doesn’t provide automation/orchestration of a Disaster Recovery scenario. You will need to create PowerCLI scripts to get basic automation (add to inventory, power off/power on) capability; a rough sketch of those manual steps appears after this list. Zerto has full automation/orchestration features.
  • Each of the Virtual Machines at the Recovery Site has to be powered on manually with VR. Zerto has fully automated recovery capabilities.
  • Each VM has to be reconnected to the correct network with VR in a manual recovery. Zerto is automatic.
  • vSphere Replication does not have compression capability; neither does SRM. Zerto has built in compression capability and we often see a 50% compression ratio.
  • IP re-addressing is a manual process with vSphere Replication. Zerto has IP address change capability built into the failover plan.
  • With vSphere Replication, you cannot conduct non-disruptive testing since you cannot power on the Replica VM if the original VM is still running and reachable. Zerto allows you to map to real or bubble networks on the recovery site.
  • vSphere Replication has no cloning capability. Zerto allows for recovery site clones. This allows for full long-term archival backups of the VMs or file-level recovery from a Point in Time clone.
  • There is no failback option in vSphere Replication; it is only included in SRM. Zerto has automatic workflow failback capability.
  • There is no Point in Time recovery with vSphere Replication. Zerto allows for flexible Point in Time Recoveries.
  • No physical RDM (Raw Device Mapping) support. Zerto works with both physical and virtual RDMs.
  • If the customer wants to add the features of SRM with vSphere Replication, only vCenter 5.x and up is supported and ESXi 4.1 and up. Zerto supports ESX 4.0 and up as well as vCenter 4.x.
  • The replication rate in VR cannot be increased if more bandwidth is available. Replication is set per VM, and the RPO (a 15-minute RPO is the best it can do – up to 24 hours) dictates how much data is replicated on a per-VM basis. VMware calls this “optimized for parallel replication of multiple VMs instead of peak throughput for a single disk”.
  • Replication is tracked locally and then sent to the replica site, which puts additional load on the primary site hosts.
  • ‘Powered off’ or suspended VMs are not replicated with VR. They are only replicated when in a ‘powered on’ state.
  • vSphere Replication does not work with vApps. Zerto leverages vApps to make administration easier. If a vApp is configured with a VPG, then any VM added to the vApp is automatically protected. No other vendor has this capability.
  • Storage DRS (SDRS) is not supported with vSphere Replication or SRM. Zerto works with SDRS.
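To give a feel for the scripting one of the bullets above implies, here is a rough pyVmomi sketch of the per-VM manual recovery steps vSphere Replication alone leaves to you: register the replica, reattach the right network, power it on. All names, paths and the single-datacenter assumption are ours; treat it as an outline, not production code.

```python
# Rough sketch of manual VR recovery steps (assumed names and inventory paths).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter-dr.example.com", user="administrator",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]            # assumes one datacenter
pool = dc.hostFolder.childEntity[0].resourcePool  # first cluster's pool

# 1. Register the replica's VMX from the recovery datastore.
dc.vmFolder.RegisterVM_Task(path="[dr-ds01] web01/web01.vmx",
                            name="web01", asTemplate=False, pool=pool)
# ...wait for the task to finish, then look the VM up...
vm = content.searchIndex.FindByInventoryPath(f"{dc.name}/vm/web01")

# 2. Reconnect the NIC to the recovery-site port group.
net = next(n for n in dc.network if n.name == "DR-Prod-Net")
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    network=net, deviceName=net.name)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(
    deviceChange=[vim.vm.device.VirtualDeviceSpec(
        device=nic,
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit)]))

# 3. Power on, then repeat all of the above for every protected VM.
vm.PowerOnVM_Task()
Disconnect(si)
```

Multiply those steps by dozens of VMs, boot-order dependencies and re-IP requirements, and the value of a built-in failover workflow becomes obvious.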

When you factor in that the target implementations are very different between Zerto and vSphere Replication, the capability differences make more sense.

How so?

vSphere Replication is intended for Tier 2 VMs and SMBs with low SLAs – that’s the product positioning VMware has in mind for vSphere Replication. In fact, we agree with VMware Tech Marketing Ken Werneburg’s description of vSphere Replication:

“VR does not include all the orchestration, testing, reporting and enterprise-class DR functions… but allows for individual VM protection and recovery within or across clusters.” In the primer on VR, Ken goes on to note, “It provides a solution that enables recovery time better than that of restoring from backup.”

Zerto is an “enterprise-class” Disaster Recovery product – meaning it offers a huge feature set and near-synchronous application protection with very aggressive RPO requirements and points in time recovery. Zerto also has point in time VM cloning at the recovery site that enables archival backups and file-level restorations. Along with all these features, Zerto has vCloud Director integration; full DR workflow for fail over, automated fail back and IP re-addressing.

So a fairer comparison would really be Zerto Virtual Replication vs. SRM together with array-based replication (you’d need all the array features functioning, plus some additional scripting, for a true comparison). And as we said on Twitter, each customer needs to decide what the right product is for their environment: “We agree with VMware. vSphere Replication is a data protection mechanism – not DR”

Continue the discussion with us and @sfoskett on twitter – we’re @ZertoCorp.

Zerto Red Wed! http://www.zerto.com/blog/vmworld/zerto-red-wed/ http://www.zerto.com/blog/vmworld/zerto-red-wed/#comments Sun, 02 Sep 2012 12:10:01 +0000 Zerto http://zerto.com/?p=3312 Everywhere we looked at VMworld on Wednesday, August 29, we saw Zerto Red. Beats headphones were won by some lucky attendees who remembered to wear their Zerto T-shirts!  Here are some pictures from around the Solutions Exchange, the Hangspace and even one “Master of Disaster” at the VMworld Jon Bon Jovi concert. Thanks to all who participated!

 


 

VMworld 2012 Day 2 Roundup http://www.zerto.com/blog/general/vmworld-day-2-roundup/ http://www.zerto.com/blog/general/vmworld-day-2-roundup/#comments Wed, 29 Aug 2012 19:04:03 +0000 Zerto http://zerto.com/?p=3301 This is a guest post from Gabriel van Zanten, vExpert, who is blogging during the show from the Zerto blog.

Just like Monday, we started with a keynote again today. Steve Herrod showed us how the experience of the end-user would change in the next few years and which VMware products would contribute to this.

VMware has some great products up its sleeve. We saw a laptop running Windows XP getting a new Windows 7 image pushed onto it in the background while the user continued working in his XP image. After the push was complete, a simple reboot brought the user into his new Windows 7 image, retaining all his profile settings and applications.

This is all made possible through a new product called Mirage, which VMware gained with the acquisition of Wanova earlier this year. Mirage breaks a PC image into a piece the user can change and a piece the user can’t, then reassembles the image on the fly when it is deployed to a user.

The demo showed how this image could travel along with the user because a copy was kept in the datacenter, so when the user ‘lost’ his laptop he could easily switch to any other device that had a VMware View client and retrieve his desktop through VDI. Very impressive.

VMware also showed the full potential of Horizon, combined with AppBlast and VMware Mobile. Whereas VMware Mobile first originated as a mobile hypervisor on Android, VMware now delivers sandboxed apps to Apple iOS devices, which is a slight change in methodology.

The demos look great and I can’t wait to test all this new stuff; reality, however, tells us that we might have to wait a long time, since AppBlast, which was announced at VMworld 2011, still hasn’t been released!

The rest of my day I spent visiting the Solutions Exchange. I saw a great new release from StorMagic with their SvSAN, which has become easier to deploy than ever. With SvSAN you take the local disks from your ESXi host, combine them into an SvSAN mirror group using their virtual appliance, connect to another SvSAN mirror group, and create fully redundant iSCSI storage for your vSphere environment. The SvSAN wizard will even create the iSCSI connections on the ESXi hosts for you.

I also talked a lot with Nutanix about their way of offering storage to ESXi hosts, or actually any hypervisor you would want to use. Nutanix offers a chassis including up to 4 ESXi hosts with local storage attached, running the specially designed Nutanix distributed file system. This file system can grow virtually without limit and is always redundant. Cool stuff!

The night was spent with Zerto at their Customer appreciation party at B Restaurant across from Moscone. Great food and drinks, and even better company – great chatting with @virtual_bill @wsellers @zertocorp and @c_weil

Looking forward to day 3… 

VMworld 2012 Day 1 Roundup http://www.zerto.com/blog/vmworld/vmworld-2012-day-1-roundup/ http://www.zerto.com/blog/vmworld/vmworld-2012-day-1-roundup/#comments Tue, 28 Aug 2012 06:01:56 +0000 Zerto http://zerto.com/?p=3296 New VMware Licensing, Auto Deploy and All-in-One Solutions

Our daily VMworld roundups are guest posts from vExpert Gabrie van Zanten.

As expected, VMworld proves to be a very exciting event again. Today started with the keynote where Paul Maritz, on stage, officially handed over his job as VMware CEO to Pat Gelsinger. He introduced Pat Gelsinger as a very good friend and colleague whom he has worked with in the past, and promised the audience we would soon notice Pat Gelsinger’s drive and speed.

With Pat on stage, VMware announced the new licensing model for vSphere: no more vRAM limitation, no more core limitations, only CPU-socket-based licensing. This news was received with loud applause.

The new licensing model will be wrapped around the new VMware vCloud Suite 5.1, which brings together all of the components a customer needs to build, operate and manage a cloud infrastructure. The key components are:

  • VMware vSphere 5.1
  • VMware vCloud Director 5.1
  • VMware vCloud Networking and Security 5.1
  • VMware vCenter Site Recovery Manager 5.1

After the keynote everybody rushed out to be on time for the first breakout sessions. I visited the session by Kyle Gleed on Auto Deploy (INF-VSP1364 Architecting Auto Deploy for Availability and Scalability). Having done some early Auto Deploy projects myself, it was interesting to learn some best practices on how to make an Auto Deploy implementation more available. For example: what happens when your whole environment has been powered down because of an outage, and how do you avoid the chicken-and-egg situation that might occur then?

Kyle also showed some new features in vSphere 5.1 for Auto Deploy, like “Stateless Caching” and “Stateless Installs”. With Stateless Caching you can deploy an ESXi image to a host and have it installed on local storage, be it FC, USB or local disk. In the case of, for example, a network outage or the Auto Deploy server being unavailable, you can still boot your ESXi host from disk and it will come up with a ‘clean’ installed ESXi image. The drawback is that it cannot join your cluster unless the vCenter Server is available to supply a host profile.

Stateless Install is a way to deploy ESXi to your host ONCE and have it run from local storage (USB, FC, local disk) after installation is complete.

The Solutions Exchange was very crowded today, but maybe that is because there seems to be less room between the booths. A big trend on the Solutions Exchange is the all-in-one solutions, where vendors offer a box including one or more hosts and storage, usually based on SSD or SATA, offering a complete and easy-to-deploy vSphere infrastructure.

Another growing group is the monitoring solutions, keeping an eye on your application, your VMs, your hosts, your storage and everything else you can think of. They not only show what the performance is, but also suggest how to resolve any performance degradation.

Being Somebody’s “Wow” – Welcome to the Team, Shannon! http://www.zerto.com/blog/general/being-somebody%e2%80%99s-%e2%80%9cwow%e2%80%9d-welcome-to-the-team-shannon/ http://www.zerto.com/blog/general/being-somebody%e2%80%99s-%e2%80%9cwow%e2%80%9d-welcome-to-the-team-shannon/#comments Wed, 22 Aug 2012 14:00:25 +0000 Zerto http://zerto.com/?p=3288 We’re thrilled to announce that Shannon Snowden, Consultant, Blogger and VCP #12 (!) has joined the Zerto team as Senior Technical Marketing Architect.  He explains his move to Zerto, and the “Wow” moment when he met us, in a post on his blog.  Here is an excerpt:

“My Wow #3 - I was a delegate at Tech Field Day 6.  Zerto presented to us under NDA before they officially released 1.0 a few weeks later. Knowing how SRM works and comparing it with how Zerto approached DR, I was very impressed and interested in seeing it in action because it greatly simplifies the whole process of virtualized DR.

The more I’ve had the opportunity to actually work with Zerto in production, the more impressed I have been with its capabilities.” Read more…

Welcome to the team Shannon!

Not All Data Centers are Created Equal http://www.zerto.com/blog/general/not-all-data-centers-are-created-equal/ http://www.zerto.com/blog/general/not-all-data-centers-are-created-equal/#comments Mon, 20 Aug 2012 09:14:40 +0000 Zerto http://zerto.com/?p=3277 This is a guest post by Ron Offer, CEO, Integrity Virtual IT, a proud partner of the Zerto Cloud DR Ecosystem (ZCE).

How did Integrity Virtual IT, using a Reston, Virginia based data center, manage to avoid the failure that recently took out Amazon AWS in a nearby facility? Integrity and Amazon are even connected to the same electrical grid. So it seems that not all datacenters are created equal – therefore not all cloud hosting vendors are on equal footing. In addition, due to differing server and software configurations of various tenants even within the same data center, some experienced outages while others did not.

Enter, Hosted Cloud Disaster Recovery Powered by Zerto

Zerto Virtual Replication, when deployed as a hosted private cloud DR solution, highlights the underlying weaknesses behind the failures recently made public in Amazon’s AWS offering.  By implementing replication at the hypervisor level and leveraging disparate storage technologies, Zerto helped our customer in the Reston facility avoid an outage.

With Zerto, Integrity’s primary datacenter remained unaffected by the severe storms as did the company’s DR site in Chicago. The RPO for our customer’s business critical applications remained at under 10 seconds throughout the devastating storm and live failover testing to Chicago showed an RTO of under 15 minutes. And it’s not that the most costly solution wins; Zerto delivered at a significantly lower cost than traditional solutions that leverage SAN based replication. Faster, better, cheaper – Integrity customers don’t have to choose just two!

Who Benefits?

The SMB market will welcome the Zerto Virtual Replication 2.0 cloud offering with open arms as it delivers on faster, better, cheaper. An SMB that does not have advanced IT staff for managing and deploying virtualization clusters and highly available tiered storage can now get a hosted private cloud solution across multiple datacenters for a fraction of the cost. With RTOs and RPOs measured in seconds and an automated failover procedure, the only tough question left is which business-critical application should be hosted first.

Integrity’s recent experience shows that Zerto’s Virtual Replication solution hosted in a private cloud slashes deployment and operational costs while increasing availability and SLA guarantees.

Amazon Web Services Outage and Cloud Disaster Recovery http://www.zerto.com/blog/general/amazon-web-services-outage-and-cloud-disaster-recovery/ http://www.zerto.com/blog/general/amazon-web-services-outage-and-cloud-disaster-recovery/#comments Sun, 01 Jul 2012 20:01:36 +0000 Zerto http://zerto.com/?p=2938 Looks like quite a few people will be skipping their July 4 vacation to review and rebuild their disaster recovery (DR) plans. By now, even if you’re not a customer of Netflix, Instagram or Pinterest you’ve certainly heard of the power outage at an Amazon Web Services (AWS) data center in northern Virginia on Friday night, caused by an electrical storm in the Washington, D.C. area, where over a million residents found themselves without power and many more people worldwide found themselves without their favorite hobbies and movies.

The Facebook post below includes a letter from Amazon to their customers, with some explanation of what happened.  RDS is a distributed database where data is replicated over several machines; the goal is to make the database more durable and much faster, since you can query any number of machines instead of one. In this situation, “inconsistency issues” means that when they got the distributed database back up, inconsistent data made it impossible to restore. RDS is not a typical database, but it should certainly survive a power outage!

 

Cloud is Not Magic

A really interesting discussion of an older AWS outage was conducted by Focus, in their Cloud Roundtable series.  The panelists discuss blame, SLAs and one major cloud challenge, expressed by George Reese as, “the marketing messages and confusion in the marketplace.”  He continued, “People think cloud is the place you don’t have to worry about stuff.  Cloud is not magic.”   A similar sentiment was expressed in this Mashable article about the AWS outage: “The outage to Instagram and other major sites shows that — despite massive hype and momentum in the Internet world — cloud computing isn’t necessarily a magic solution for businesses’ data and IT needs.”

Will this AWS outage, and the other highly publicized outages, lead to decreased adoption of cloud computing?  Some of the experts say yes.  Others say no, but they concede that more companies will be pushed towards private cloud.

“Does the need for better disaster recovery (DR) destroy the cloud value proposition?” All of the roundtable participants agreed: no. Smarter disaster recovery solutions need to be in place to ensure that outages (and they are inevitable) are mere hiccups and don’t disable services for days.  What kind of DR services are you getting from your cloud service provider?  Ask these four questions to make sure your DR solution is the industry’s best.  When you get DR right, cloud can seem pretty magical.

What is Multi-Tenancy and Why is it Important for Cloud Disaster Recovery? http://www.zerto.com/blog/dr-to-the-cloud/what-is-multi-tenancy-and-why-is-it-important-for-cloud-disaster-recovery/ http://www.zerto.com/blog/dr-to-the-cloud/what-is-multi-tenancy-and-why-is-it-important-for-cloud-disaster-recovery/#comments Thu, 21 Jun 2012 12:20:52 +0000 Zerto http://zerto.com/?p=2900 A multi-tenant architecture is the key enabler for cloud service providers to create efficiencies which enable them to offer their services at an attractive price. There’s a lot of useful information online about the concept of multi-tenancy:

The NIST definition of cloud computing centers on the essential characteristics of all clouds, one of which is multi-tenant architecture – defined as the enabler for pooling of resources.  Another good definition can be found here: “Multitenancy Can Have a 16:1 Cost Advantage Over Single-Tenant”:

“Multi-tenancy is the ability to run multiple customers on a single software instance installed on multiple servers to increase resource utilization by allowing load balancing among tenants, and to reduce operational complexity and cost in managing the software to deliver the service. Tenants on a multitenant system can operate as though they have an instance of the software entirely to themselves which is completely secure and insulated from any impact by other tenants.”

This post, “Architecture Strategies for Catching the Long Tail,” supplies a good example of how multi-tenancy is being used:

“When a user at one company accesses customer information by using a CRM application service, the application instance that the user connects to may be accommodating users from dozens, or even hundreds, of other companies—all completely unbeknownst to any of the users. This requires an architecture that maximizes the sharing of resources across tenants, but that is still able to differentiate data belonging to different customers.”

A recent article in CloudTweaks, “Multi–Tenancy In The Cloud: Understanding Its Benefits,” explains that with multi-tenant architecture, multiple customers may be using an application with specialized customizations:

“Customers may have the ability to configure some parts of the application…although tenants are using the same building blocks in their configuration, the appearance or workflow of the application may be different for two tenants. Also, the Service Level Agreement (SLA) of each tenant can differ.”

Are multi-tenant architectures secure?

According to a recent study by Forrester Analyst John R. Rhymer, “Despite resource sharing, multitenancy will often improve security. Most current enterprise security models are perimeter-based, making you vulnerable to inside attacks. Multitenant services secure all assets at all times, since those within the main perimeter are all different clients. Leveraging a mix of dedicated resources and metadata map architectures, these services can deliver stronger security.”

I’ve heard multi-tenancy compared to tenants in an apartment building who may share much of the building infrastructure – like pipes – but do not actually share the resources – like the water tenants use to wash their dishes. Simplistic, but works for me!  Phil Wainewright, in an article on ZDNet, continues the analogy: “In theory, a single house with a fence around it is much more secure than an apartment in a block shared with many other households. In practice, the householders in the apartment block will pool the cost of having a porter on duty 24×7 to control access to the building and monitor security. Most multi-tenant systems are operated to much higher security standards than standalone systems.”

Let’s Summarize –

Multi-tenant architectures allow cloud service providers to offer services which:

  • “are completely secure and insulated from any impact by other tenants.”
  • “maximize the sharing of resources across tenants, but still able to differentiate data belonging to different customers.”
  • “use the same building blocks [but] the appearance or workflow of the application may be different for two tenants.”
  • “despite resource sharing…often improves security.”

What happens without multi-tenancy?

Without multi-tenancy, duplicate instances or infrastructures are required, complexity of the environment is increased significantly and the benefits expected in the cloud – flexibility, ability to scale and adaptability and particularly cost-efficiency – are reduced.

Multi-tenant cloud disaster recovery

Replication can add another layer of complexity to cloud offerings, especially when most DR solutions are not designed specifically for the cloud.  As a result, duplicate servers, appliances and networks are required in the environment and the ability of the cloud provider to pool replication resources is undermined. Leveraging cloud infrastructure to gain economies of scale helps the cloud service provider keep costs in line and offer disaster recovery services to customers at an attractive price. This is compromised without true multi-tenancy. How? Stay tuned for next week’s continuation post: multi-tenancy in cloud DR architecture, and why you can’t have real cloud DR without it.

Vote for Zerto to Present at VMworld 2012 http://www.zerto.com/blog/vmworld/vote-for-zerto-to-present-at-vmworld-2012/ http://www.zerto.com/blog/vmworld/vote-for-zerto-to-present-at-vmworld-2012/#comments Tue, 29 May 2012 19:16:39 +0000 Zerto http://zerto.com/?p=2762 Voting is now OPEN for VMworld 2012 Speaking Sessions!

Help us get some great topics onto the stage – DRaaS in the vCloud; how hypervisor-based replication enables virtualizing mission-critical applications; and a step-by-step guide on how to set up Disaster Recovery for vSphere & vCloud workloads

How to vote:

  • Start by logging in or registering here: http://www.vmworld.com/www.vmworld.com/cfp.jspa
  • Next, select a Track and a dropdown menu with the relevant subtracks will appear. You can also search by session ID.  Our Session IDs are 2312, 2109 and 2257.
  • To vote, click on the “Thumbs Up” symbol next to the Session ID. You will receive confirmation that your vote has been counted.
Zerto Sessions:

Session # 2312

We’re excited to have submitted this session with Shannon Snowden, who will present with us at VMworld.

Hypervisor-Based Disaster Recovery: The Missing Link for Virtualizing Mission-Critical Applications

Mission-critical, tier-1 applications such as database and transactional applications are often the last to be virtualized. Despite the many benefits of virtualizing these applications, some companies still question the ability to protect and recover these applications in virtualized environments. Traditional BC/DR technologies are built for physical environments, requiring manual and complex processes in order to utilize these systems for virtualized applications. New disaster recovery technologies are filling this gap for large and small enterprises alike – delivering the flexibility expected from a virtualized environment, with the aggressive RPOs and RTOs that mission critical applications require.

This session will discuss how moving replication from storage arrays to the hypervisor enables reliable and effective data protection and recovery, freeing companies to move their mission-critical applications to their virtualized environments. Attendees will hear a case study from a healthcare provider whose use of hypervisor-based replication helped them move toward 100% virtualization.

Agenda:

  • Virtualized disaster recovery – how virtualization has changed DR for the better
  • Technical Deep-Dive: Hypervisor-based replication architecture overview and demo
  • Case Study: One healthcare provider’s story – Using DR to move towards 100% virtualization
  • Cloud Deployments – how virtualization enables cloud DR

Session # 2109

We’re excited to have submitted this session with Ian Perez-Ponce from VMware and with Pat O’Day from Bluelock, who will present the talk with us at VMworld.

How to Setup Disaster Recovery for vSphere & vCloud Workloads

Given the proliferation of virtualization management technologies and the industry’s appetite for elastic, self-service cloud compute offerings to host production workloads, the gap between successfully deploying those workloads and protecting their critical application data throughout the lifecycle continues to widen. This breakout session aims to expose and demystify the challenges of designing and implementing a Business Continuity / Disaster Recovery service framework that can scale with the most demanding virtual application environments hosted on vSphere and/or vCloud infrastructure platforms.

Session Agenda:

  • Challenges faced by both enterprise IT organizations and service providers in providing scalable BC/DR services for cloud-deployed production workloads
  • Capitalizing on the vCloud Service Provider (vCSP) market to implement a hosted Disaster Recovery-as-a-Service (DRaaS) solution
  • Overcoming key business and technical challenges to implementing an internal Disaster Recovery-as-a-Service (DRaaS) framework
  • Blueprints for protecting workloads in vSphere-to-vSphere/vCloud and vCloud-to-vCloud scenarios

Session # 2257 (VMworld Barcelona only)

We’re excited to have submitted this session with Christian van Barneveld from ZX, who will present the talk with us at VMworld Europe.

Disaster Recovery as a Service: Cost-effective and Secure DR in the vCloud

How are companies utilizing public clouds for data replication & recovery? In this case study session, ZX Factory will address how implementing disaster recovery for cloud infrastructures built with vCloud Director delivers cost-effective and secure data protection in the cloud. This session will highlight two working DR implementations replicating from a customer site to a ZX Factory Cloud.

The session will cover Disaster Recovery as a Service architecture, particularly vSphere to vCloud implementations, by walking attendees through two case studies of DRaaS users currently in production with the ZX Cloud. In addition, attendees will learn about the scenarios where DRaaS would prove most effective.

  • Challenges & opportunities of protecting virtualized mission-critical applications in the cloud
  • Disaster Recovery as a Service architecture – technical details
  • Case studies: overviews of the Berenschot Group and Axon implementations
  • Best practices for DRaaS implementation, and how to decide if DRaaS is right for you


Three Ways Your DR Falls Flat – Part III http://www.zerto.com/blog/general/three-ways-your-dr-falls-flat-part-iii/ http://www.zerto.com/blog/general/three-ways-your-dr-falls-flat-part-iii/#comments Mon, 21 May 2012 11:22:14 +0000 Zerto http://zerto.com/?p=2530 This “Three Ways…” series of posts is by Steve Thomsen, Zerto’s Director of Central US Sales. Follow Steve on Twitter.

In our last post, “Three Ways Your Disaster Recovery Falls Flat – Part 2“, we discussed the ways DR complicates your day-to-day operations. Today’s post, the third and final of the series, addresses our final example of the ways in which your DR solution may fall flat – manual, complex failover.

Automating & Simplifying Failover

Failover and failback of applications, workloads, and especially full data centers is not trivial, to say the least. Process and expertise are critical; without both, you aren’t going far. Under normal circumstances, and without a real disaster at hand, failover testing is planned and the team is at the ready. But what happens when the uh-oh event strikes? What if the needed application, storage, or network person(s) is not available? Worse, what if it is a site-level event? Now add in a regional event such as a hurricane, tornado, tsunami, earthquake, quarantine or, worse, something like 9/11. In those scenarios, admins are probably thinking a whole lot more about their family than their employer.

Whatever solution is put in place should be manageable by more than the one or two people who know and can complete all of the manual steps for failover and failback of the data center. And it had better be reliable, because you can’t always count on having all the required expertise available. Plus, in a true disaster things will not go as planned: some or most required staff will not come into work, mistakes will be made, and so on.

Now your business is in jeopardy.

Zerto provides a single pane of glass to manage failover of individual VMs, multi-tiered applications, a single data center/site, or even multiple data centers across multiple sites. This is all performed with an extremely high level of automation. Users of different levels and areas of expertise can confidently manage and perform failover at the VM, application, or even site level.

This is what virtualization is supposed to be about. It should be an on-demand environment with a high degree of agility that affords the organization a faster time-to-market. Zerto brings this concept to the realm of disaster recovery and data protection; or more properly, IT service continuity.

So let’s review the three ways your DR solution may be failing you:

  • Deployment is complex, disruptive and expensive
  • Day-to-day operations – especially testing – are difficult and eat into production time
  • Failover is manual and complicated

Find a solution that is easily deployed, where testing is simple and does not impact your production environment, and where failover is fully automated, and you, the IT manager tasked with responsibility for BC/DR, will rest easy.

Three Ways Your Disaster Recovery Falls Flat – Part 2 http://www.zerto.com/blog/dr/three-ways-your-disaster-recovery-falls-flat-part-2/ http://www.zerto.com/blog/dr/three-ways-your-disaster-recovery-falls-flat-part-2/#comments Thu, 03 May 2012 11:42:28 +0000 Zerto http://zerto.com/?p=2499 This week’s series of posts is by Steve Thomsen, Zerto’s Director of Central US Sales. Follow Steve on Twitter.

Yesterday, I discussed the first problem in your attempts at DR, which was the difficulty of deploying your DR solution.  Today, we hit the next key area of IT service continuity (or, “non-disruptive DR”): ease of managing your day-to-day DR operations:

Step 2: Simplify Your Day-to-Day DR Operations (and Get Your Weekend Back)

Day-to-day responsibilities of anyone tasked with disaster recovery are two-fold: making sure your DR solution is working (testing) and making sure it is protecting all the applications and data you want to protect.

Problem #1: Testing your DR Solution

When DR is finally set up and in place, most organizations are loath to touch anything. Many forgo yearly DR testing, let alone quarterly or semi-annual testing.

Why should this be so?

In many cases DR testing is performed at an application level and done during a maintenance window (goodbye, weekend). Testing failover and recovery of more than a handful of applications becomes problematic and just isn’t practical. This is due to many factors, but the overriding gray cloud is “complexity”. There are a lot of moving parts with tight interdependencies involving different vendors. Next thing you know, the network, application, and storage teams are pointing fingers. Then there are the questions that keep you up at night: What happens to my current protection SLAs? How do I restore production after testing? What happens if things go wrong?

With Zerto, the testing process is not disruptive to production. It’s predictable and, better yet, repeatable. With a single click, an administrator can test failover of individual applications, a full data center, or even multiple data centers – all while production is up and running and replication is still occurring. There is absolutely no service interruption to the business or to DR. The exposure to risk is never more than under normal operations. Most important, perhaps: your weekends are free.

Problem #2: VMs are in Motion

Why should your DR solution impede the agility and reduce the value of your virtual infrastructure? Again, the answer is it shouldn’t. Most legacy DR solutions will get your data to the remote site, and some might even help you recover, but they really don’t like you moving things around or making changes to the environment. If you do, things tend to start breaking all over the place. Add in the common scenario of two production sites replicating to one another and this gets even more complex.

Zerto is all about the application. As your application and its data move through their lifecycle, you should be able to adjust and customize protection, testing, resources, and location on the fly. With Zerto, you can fully leverage tools such as vMotion, Storage DRS, HA, etc. and still recover easily.

Stay tuned for “Three Ways Your Disaster Recovery Falls Flat Part 3: Failover & Failback” or get the RSS feed.

Three Ways Your Disaster Recovery Falls Flat http://www.zerto.com/blog/general/three-ways-your-disaster-recovery-falls-flat/ http://www.zerto.com/blog/general/three-ways-your-disaster-recovery-falls-flat/#comments Tue, 01 May 2012 11:58:03 +0000 Zerto http://zerto.com/?p=2491 This week’s series of posts is by Steve Thomsen, Zerto’s Director of Central US Sales. Follow Steve on Twitter.

When organizations think about disaster recovery and the associated processes, the term “non-disruptive” usually isn’t one of the first things to come to mind. In fact, it usually doesn’t come to mind at all. But why is that? DR is about protecting the organization from downtime and data loss right?

Sort of.

Rachel Dines, a Senior Analyst at Forrester Research, has a very interesting take on this. She believes that instead of disaster recovery and business continuity, organizations should be thinking in terms of service continuity. Funny how one word can totally change your perception of how things are vs. how they should be. Big shout-out to Rachel.

What is IT service continuity (or my fave, “non-disruptive DR”)? At Zerto, we see this as always-on continuous recovery that does not disrupt your production environment in any way. There are three main aspects to developing a strategy for service continuity: deployment, day-to-day management, and – by far the most critical – the actual recovery (aka failover).

Guess what?  In all three areas, your current DR solution falls flat.

Your DR Falls Flat – Example # 1: Deployment

For most Disaster Recovery solutions, implementation can be very, very ugly. Particularly if you are looking at storage-based replication, the set-up is quite tedious and expensive. If you are switching vendors, get ready to deal with little things like having to replace your multi-pathing software, LUNs that would seem to be the same size and configuration but aren’t, potentially proprietary data formats, etc. Throw in storage virtualization and you have added another abstraction layer. Then there is the process of mapping your VMs down to the storage. And yes, there is even more of this type of stuff. What you get is pretty much a nightmare and a big bill ($$$) from your vendor. REALLY BIG.

With Zerto, deployment is simplified, which also means much lower cost. Implementing Zerto Virtual Replication is non-disruptive and requires no downtime (for real!). This applies whether you are deploying the Zerto VMs into the environment or configuring protection for your virtualized applications. You configure and start replication for a live application running in VMware with no impact on end users. This includes the policy, or QoS settings, for protecting the application (RPO goals, point-in-time protection history, bandwidth), the target resources to be leveraged (host/cluster, datastore(s), network), and network failover (re-IPing, DNS changes, etc.). Gone are things like going on a scavenger hunt to map your VMs down to the storage.
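To give a feel for how little there is to configure, here’s that kind of protection policy sketched as a plain data structure. The field names are hypothetical – illustrative only, not Zerto Virtual Replication’s actual configuration schema:

```python
# Hypothetical protection policy for one virtualized application.
# Field names are invented for illustration; this is not Zerto's real schema.
protection_policy = {
    "name": "crm-app",
    "rpo_goal_seconds": 15,            # target RPO for the application
    "journal_history_hours": 72,       # point-in-time protection history
    "wan_bandwidth_cap_mbps": 100,     # QoS: cap replication traffic between sites
    "target": {                        # resources to leverage at the recovery site
        "cluster": "DR-Cluster-01",
        "datastores": ["dr-ds-01", "dr-ds-02"],
        "network": "dr-vlan-200",
    },
    "network_failover": {
        "re_ip": True,                 # re-IP the VMs on failover
        "update_dns": True,            # push DNS changes
    },
}
print(protection_policy["name"], "protected with RPO goal of",
      protection_policy["rpo_goal_seconds"], "seconds")
```

The point of the sketch: everything is expressed in terms of the application and its recovery targets – nothing in it maps VMs down to LUNs.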

A few years ago, and in a different life, I was part of a project to migrate a very large federal agency into a hosted data center. It took months and a few million $$$ to pull it off. With Zerto, we could have cut the downtime to almost zero and come in much, much lower in cost. Zerto is a true game changer in terms of DR deployment.

Continue with “Three Ways Your DR Falls Flat…Part 2: Day to Day Operations”. Get the RSS feed here.

Step Away From Array – A Disaster Recovery Story http://www.zerto.com/blog/general/step-away-from-array-a-disaster-recovery-story/ http://www.zerto.com/blog/general/step-away-from-array-a-disaster-recovery-story/#comments Tue, 10 Apr 2012 21:45:29 +0000 Zerto http://zerto.com/?p=2439 Last week, we sent out an email entitled, “Step Away from Array” detailing the advantages of Zerto Virtual Replication over Array-based replication solutions.  One response, from @roidude on twitter, touched off my idea for today’s blog post:

Enjoy!

(My apologies to Dr. Seuss…)

 

 

 

The sun did not shine. It was too wet to play.

So we sat in the house with our old, wet Array.

Array said, “Look at me now! Look at me go!”

“I replicate with your storage.  I have short RTOs!”

 

What about Storage sprawl?

“Won’t do any harm.”

“Just build up an entire duplicate server farm!”

Please go away, we started to beg,

He said, “But I already cost you an arm and a leg.”

 

Can you recover just one specific VM?

“No. Ask VMware for that, we leave that to them.”

And then he pulled out a box, marked Thing Two and Thing One,

“SRM and vSphere Replication add so much fun!”

 

And they ran to us fast. They said, “Let us help you!”

And that is when things started to look like a zoo:

We located VMs affecting our app,

Then documented all LUNs, not forgetting to map.

Next moved all other app VMs to a separate LUN,

Now test and check – yes, this is how it’s done.

Configure replication pairs, allocate more LUNs, then verify install.

Make sure replica has the same properties, IPs and all.

vMotion’s a problem, vSphere versions are, too.

They said, “Replication’s not supposed to be easy for you!”

 

And I said, “I do not like the way they play!”

If Mother could see this, Oh, what would she say!

As fast as you can, think of something to do!

Things are getting more complicated with Thing One and Thing Two!”

 

So Sally and I, we decided that day.

We finally said, “Step away from Array!”

Continue here for the real advantages of Zerto Virtual Replication over Array-Based solutions…

Dallas Tornadoes and Disaster Avoidance: This Week in DR Vol. 3 http://www.zerto.com/blog/twidr/dallas-tornadoes-and-disaster-avoidance-this-week-in-dr-vol-3/ http://www.zerto.com/blog/twidr/dallas-tornadoes-and-disaster-avoidance-this-week-in-dr-vol-3/#comments Thu, 05 Apr 2012 13:25:50 +0000 Zerto http://zerto.com/?p=2409 The damage has not yet been tallied from the multiple tornadoes that touched down in the Dallas area this week. The video below shows a pretty terrifying scene.  Luckily, according to this CNN report, there were no reported deaths.  Data Center Knowledge highlights the potential for damage to businesses in the affected area, “Dallas is a major data center hub, hosting at least 40 facilities for companies offering hosting services, in addition to dozens of corporate data centers.”

Our third TWiDR post centers on the subject of disaster avoidance, or building your infrastructure so that it is always-on, with data always available – even during a disaster.  Time will tell whether or not the DR systems in Dallas data centers held up during the tornado.

Unfortunately, some companies still think of disaster recovery services as “what we do to get systems up and running after a disaster.” But as Forrester analyst Rachel Dines mentioned on a recent webinar with Zerto, “DR is not an insurance policy.” The recovery business has changed, and today BCDR is about making sure that data is always available, before disaster strikes.

One example of the ways companies are thinking about Disaster Avoidance is Woodforest National Bank.  Woodforest has a unique schedule: 24×7 live banking with tellers and personal bankers – data needs to be available 100% of the time! The bank takes an impressive “Disaster Avoidance” approach to customer data. Woodforest migrates its entire datacenter every six months to avoid hurricane season, due to its proximity to the Texas coastline.  Each June, all production applications are pre-emptively “failed-over” to a secondary site, with a return to the primary site by mid- to late-October, avoiding the hurricane season. If a hurricane or tornado strikes the area, all of the systems are already protected.

Another way to ensure your data is always on is to protect it in the cloud. This week, two articles on Cloud DR highlight the trend. One, from ZDNet entitled ‘Cloud DR Services are Real’, discusses a recently released study on Cloud DR by Forrester Research. According to the report, “Today, cloud-based DR is poised to shake up legacy approaches and offer frustrated infrastructure and operations (I&O) professionals a great alternative.” Another post, from InformationWeek, highlights the main points of the Forrester Research report. Their take? “Easier, more frequent, and less expensive testing, as well as pay-as-you-go-pricing” are two key factors of Cloud DR that make it worth looking into.

Is the Hypervisor Now the Storage Array? http://www.zerto.com/blog/virtualization/is-the-hypervisor-now-the-storage-array/ http://www.zerto.com/blog/virtualization/is-the-hypervisor-now-the-storage-array/#comments Tue, 03 Apr 2012 14:10:48 +0000 Zerto http://zerto.com/?p=2368 This is a guest post by Steve Thomsen, Zerto’s Director of Central US Sales.

If you could ask a virtualized application, “What is your definition of storage?” the reply would probably be, “the hypervisor”. This makes sense; after all, the application is still doing block-level data writes, just not to SCSI devices but rather to virtual SCSI devices, aka VMDKs.

One of the really interesting capabilities of Zerto is providing replication consistency for VMs residing on different hosts, clusters or disparate storage arrays. This means the multiple VMs and VMDKs comprising an application can be faithfully replicated and recovered with write-order fidelity, regardless of where the VMs live from a host or storage perspective. This is all performed at the VMDK level and does not require an agent on the VM. Zerto’s logical recovery entity for these VMs (the multi-tiered app) is called a Virtual Protection Group (VPG).
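Conceptually, a VPG is just a named group of VMs whose writes are journaled and replayed in one consistent order, wherever those VMs happen to live. Here is a minimal sketch of the idea – hypothetical types and names, not Zerto’s implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualProtectionGroup:
    """Toy model of a VPG: VMs from any host, cluster or array,
    journaled and recovered as one consistency group."""
    name: str
    vms: List[str]                                          # member VMs, wherever they run
    journal: List[Tuple[int, str, bytes]] = field(default_factory=list)

    def record_write(self, sequence: int, vm: str, block: bytes) -> None:
        # Writes from every member VM land in a single ordered journal;
        # that shared ordering is what preserves write-order fidelity.
        self.journal.append((sequence, vm, block))

    def replay_in_order(self) -> List[Tuple[int, str, bytes]]:
        # Recovery replays the merged journal strictly by sequence number.
        return sorted(self.journal, key=lambda entry: entry[0])

vpg = VirtualProtectionGroup("sap-prod", ["sap-db", "sap-app1", "sap-app2"])
vpg.record_write(2, "sap-app1", b"...")    # writes may arrive out of order...
vpg.record_write(1, "sap-db", b"...")
assert [vm for _, vm, _ in vpg.replay_in_order()] == ["sap-db", "sap-app1"]
```

The takeaway from the sketch is only this: consistency is defined per application group, not per LUN.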

Alternative methods of replication, host, snapshot and storage-based, are not as flexible.

  • Host-based replication solutions can replicate at the VMDK level, but they have no knowledge of what is happening to other VMs – on the same host or otherwise. This precludes them as viable options for protecting multi-tiered applications while maintaining write-order fidelity.
  • Snapshot-based replication solutions suffer from some of the same issues as host-based solutions, but they also introduce a performance penalty when taking snaps. They also tend to scale very poorly as the number of VMs and the size of snapshots increase.
  • Storage-based replication solutions can protect VMs residing on different hosts, but there are restrictions. If you have a SAN, you have to consolidate those VMs on a specific LUN or set of LUNs; the same goes for volumes with NAS. Once those VMs are migrated over to the new LUN/volume(s), they aren’t going anywhere. Moving one of those VMs off that LUN/volume means you are no longer protecting it, which means your entire application is in an inconsistent state in terms of recoverability. This is also a very cumbersome set-up with a lot of moving parts (except the VMs). From a VM perspective it is very static and “un-virtualization-like.”

In a previous blog post about protecting and recovering virtualized applications, Zerto’s Virtual Replication Appliance (VRA) was discussed in terms of the ability to choose which VMs are replicated and which are not. Recall that each VRA is itself a VM residing on a host.

Now step back and think about that for a minute. What if Zerto had a VRA on each host in your environment? The answer: Zerto could in effect see all I/O across your vCenter instance. To Zerto, the entire vCenter instance is now the array. Think of the VMDKs as the LUNs. Zerto is simply performing controller-level services – in this case replication – across the VMDKs it protects. What we have done is move block-level services up the stack. From a high level, the concepts and terms we understand from legacy host- and storage-based replication technologies still apply. However, the implications of being in the hypervisor are profound. Virtualization is all about allocating the appropriate resources to the data/application at the appropriate time. Today, storage does not allow for that.

Note: Zerto’s VRA uses on average only 3-5% or less of host resources. There is minimal overhead for replication.

When we Zerto-ites talk about reducing complexity, many in the audience naturally think of our ability to replicate across block and file storage systems, or across vendors. Or maybe that Zerto can replicate RDMs (physical or virtual), etc. However, the real value of Zerto is de-coupling your VMs from your storage. You can potentially reduce LUN count, reduce design complexity, and reclaim storage.

When setting up a storage array, an organization settles on a standard LUN size. To protect and recover virtualized applications individually, you have to consolidate the VMs comprising each application on a LUN or set of LUNs. You then replicate the LUN(s) to the target. Here is the scenario:

You have a standard LUN size of X. You are buying a new array and moving more and more applications into VMware. However, in order to leverage storage-based replication and recover applications individually, you start architecting a lot of LUNs into your to-be-acquired array. Keep in mind you have a standard LUN size – but those virtualized applications don’t care about standard LUN sizes. Next thing you know, you have lots of LUNs all over the place. And those LUNs have white space.

With Zerto, that goes away. Now you can fail over and recover virtualized applications without implications for other applications on the same source LUN or volume. That is big. Now you can design your storage for I/O and consolidation vs. data protection, which is what shared storage is supposed to be about. Consolidating LUNs and volumes also reduces white space, which in turn reduces the storage footprint.
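A toy calculation shows how much capacity the per-application carve-up can waste (all numbers invented for illustration):

```python
import math

# Illustrative only: white space lost when each app gets its own standard-size LUNs.
STANDARD_LUN_GB = 500
app_sizes_gb = [320, 180, 90, 410, 260]          # hypothetical per-application usage

# Storage-based replication: one (or more) standard LUNs per application.
luns_per_app = [math.ceil(size / STANDARD_LUN_GB) for size in app_sizes_gb]
provisioned = sum(luns_per_app) * STANDARD_LUN_GB
used = sum(app_sizes_gb)
print(f"Per-app LUNs: {sum(luns_per_app)}, provisioned {provisioned} GB, "
      f"used {used} GB, white space {provisioned - used} GB")

# Replication decoupled from LUN layout: pack applications onto shared LUNs.
shared_luns = math.ceil(used / STANDARD_LUN_GB)
print(f"Consolidated: {shared_luns} LUNs, white space "
      f"{shared_luns * STANDARD_LUN_GB - used} GB")
```

With these made-up sizes, five per-application LUNs provision 2,500 GB to hold 1,260 GB of data; packing the same data onto shared LUNs cuts the LUN count to three and the white space from 1,240 GB to 240 GB.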

Now think about the impact of only replicating VMDKs vs. full LUNs on bandwidth requirements.

One Zerto customer saw these results: (1) LUN count reduced by over 40%; (2) a 40% decrease in bandwidth requirements; and (3) a 25% reduction in the storage footprint of the LUNs holding the replicated VMs!

Being in the Hypervisor is the place to be.

Disasters as Art? http://www.zerto.com/blog/videos/disasters-as-art/ http://www.zerto.com/blog/videos/disasters-as-art/#comments Sun, 25 Mar 2012 10:41:01 +0000 Zerto http://zerto.com/?p=2326 What is disaster art?

A few weeks ago, while catching up on a few days’ worth of Disaster Recovery articles, I stumbled upon some amazing images of catastrophic moments frozen in time and rendered into sculptures. I was excited to discover the artist was my former boss Eyal Gever, an expert in the field of 3D animation as well as a renowned traditional artist (and all-around great guy).

Eyal’s unique sculptures – replicating specific moments of a disaster – explore the sublime beauty within catastrophic occurrences.  As you can see in these photos, the work is breathtaking and allows you to reflect on the design and composition of a disaster, rather than the human element we commonly focus on when viewing images of a car crash, explosion, flood or tsunami.

An article on Eyal Gever recently appeared in Wired Magazine and provides some details on the artistic process:

“Gever uses software to simulate catastrophes on screen and then turns the most compelling frames into 50cm resin models, using a £215,000 Objet 3D printer.   Gever, 41, became a CGI virtuoso in 1990 through the Israeli military. A decade of commercial software development later he decided to channel this expertise into his art. The simulation tracks each visible particle’s trajectory and Gever pauses the video on the most affecting frame. Each piece takes 60 hours to print.”

In Eyal’s own words, “I create sculptures that are based on software I’ve developed that allows you to see the world through the eye of a high-speed 3D simulated camera. It provides us the ability to see something we normally cannot see, the moment of suspension in time. Beauty can come from the strangest of places, in the most horrific events.”

The art is arresting, not only because of the subject matter, but because it reminds us that creativity and productivity can flourish when you step back from the emotion of a given subject matter and approach it in a new way.  You can follow Eyal or find out about upcoming installations of his work at www.eyalgever.com.

The video below explains a bit about Eyal’s creative process:

This Week in Disaster Recovery Vol. 2 http://www.zerto.com/blog/twidr/this-week-in-disaster-recovery-vol-2/ http://www.zerto.com/blog/twidr/this-week-in-disaster-recovery-vol-2/#comments Thu, 22 Mar 2012 11:02:45 +0000 Zerto http://zerto.com/?p=2297 Disaster Recovery & Cloud News from Around the Web

Up this week: IT staffers responsible for most outages, and the Cloud Revolution explained.

In a recent post in Computerworld, author Patrick Thibodeau asks, “Who’s to blame when data centers break?” The article quotes a study by the Uptime Institute, a professional group that studies data center incidents, saying that “a majority of problems are caused by outside parties like contractors or vendors, with a smaller but still sizeable percentage being the fault of internal IT staffers.” Interestingly, according to the post, “Uptime said it also found that internal IT operations staffers are responsible for the majority (60%) of the worst abnormal incidents — those that resulted in a system or data center outage.”

We’ve seen this in the field as well – the DR industry is built for the kind of disasters that you hear about on the evening news. But the truth is, human errors are much more common data center “disasters”.  It’s one of the main reasons that companies we talk to like our journaling features, and the ability of Zerto Virtual Replication to roll back to any point in time, particularly the point-in-time before a human error or corruption occurred in the data center.

While not specifically focused on DR, this week’s must-read is an article in CIO by cloud and virtualization expert Bernard Golden, who makes a strong case that Cloud is truly an IT revolution and involves a complete rethinking of IT. He notes that most IT innovations have been “swap-in” – they required changes to the IT environment, but changes that IT managers could implement in phases to replace specific tools. Golden argues that Cloud is an entirely different kind of change.

“What all of this means is that IT is going to have to be rebuilt for cloud computing. Every group, every process, every skill will have to be re-evaluated in light of the need to reduce cost, implement automation and support revenue-focused business initiatives.”

Lots of people have made big Cloud assertions; Golden does a nice job in the post of explaining why and how Cloud is truly different. I’m sure the comments there will be interesting! As a provider of Disaster Recovery as a Service, Zerto sees Cloud DR as a logical stepping stone to Cloud adoption; once enterprises see their production applications running in the Cloud, they move beyond thinking the Cloud is just for dev and test environments.

The ‘most irksome Disaster Recovery article of the week’ award goes to this plug for Cloud-based DR by David Wingley on Insider Media, who explains, “…essential business functions such as technical support could be down for only minutes compared with marketing that could be down for a day.” Say what? Always fair game to beat up on the marketing team, I guess.

This Week in Disaster Recovery – Volume 1 http://www.zerto.com/blog/twidr/this-week-in-disaster-recovery-volume-1/ http://www.zerto.com/blog/twidr/this-week-in-disaster-recovery-volume-1/#comments Mon, 12 Mar 2012 15:53:33 +0000 Zerto http://zerto.com/?p=2260 Welcome to a new, soon-to-be-regular post from Zerto, “This Week in Disaster Recovery”.  We hope to bring you some of the top news in DR, particularly as it relates to DR of Virtualized and Cloud environments. Up this week:

The article mentions an interesting example: “…Who owns data which is no longer handled in situ? When a company processes data in the UK, stores it on a server in Ireland but sends it via France – as it may have a subsidiary there – it is not yet clear which country’s law would prevail in a legal dispute.” (I’ve finally found a reason to use the term “in situ” in a blog post. Yay! Up next week: “mens rea”.)

  • More than half (54%) are considering moving Disaster Recovery to the Cloud.
  • Only about two thirds (64%) have tested their continuity plan in the last 12 months. (Yikes.)
  • Eighty percent said the complexity of network infrastructure is increasing.

  • DR to the Cloud Explained – This is a great explanation of DR to the Cloud, including things like SLAs (Service Level Agreements) and CDP (Continuous Data Protection).  It’s a post by Jacob Gsoedl on TechTarget’s SearchDisasterRecovery.com site. One good quote, “The challenging aspect of using cloud-based backups for disaster recovery is the recovery”.  Yep, we’d agree.  Jacob doesn’t mention Zerto in the post, but it’s still a worthwhile read!

Feel free to post interesting articles you’ve found in the comments below. ’Til next time…

Four Reasons Why DR to the Cloud Can Fail http://www.zerto.com/blog/dr-to-the-cloud/four-reasons-why-dr-to-the-cloud-can-fail/ http://www.zerto.com/blog/dr-to-the-cloud/four-reasons-why-dr-to-the-cloud-can-fail/#comments Thu, 01 Mar 2012 09:51:46 +0000 Zerto http://zerto.com/?p=2219 And Four Ways to Make Sure it Doesn’t Fail!

A modified version of this post first appeared in the new publication Cloud Magazine.

On paper, Cloud-based Disaster Recovery (DR) solutions seem to provide a cost-effective and simple approach to replicating data, right?

There are four key areas where existing Cloud DR solutions often fall short, making DR to the Cloud difficult, if not impossible.

1. The replication technologies used by your Cloud Service Provider may slow down your IT environment.

Many replication technologies utilize snapshots to make a copy of data, slowing production environments and taking resources from your critical applications and systems.  Disaster Recovery, as important as it is, should not hinder your critical business systems.

2. Most Cloud service providers charge for storage and for bandwidth utilized by sending data to their Cloud.  Replicating large quantities of data to a cloud provider can become costly.

Many replication solutions do not have built-in compression and network optimization tools.  Replication without compression can become quite costly, as network costs are often quoted by service providers on a per-GB basis.
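A quick, hypothetical cost check makes the point – the rate, volume and compression ratio below are all invented for the example:

```python
# Illustrative only: yearly transfer charges with and without compression.
# The rate, volume and ratio below are invented for the example.
daily_change_gb = 50            # changed data replicated to the cloud per day
price_per_gb = 0.12             # hypothetical provider transfer charge, $/GB
compression_ratio = 0.5         # assume replication traffic compresses ~2:1

uncompressed = daily_change_gb * price_per_gb * 365
compressed = daily_change_gb * compression_ratio * price_per_gb * 365
print(f"Without compression: ${uncompressed:,.0f}/year")
print(f"With 2:1 compression: ${compressed:,.0f}/year")
```

Whatever your real numbers are, the per-GB charge scales linearly with the bytes you ship, so halving the traffic halves the bill.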

3. For easy DR to the cloud, your service provider’s Cloud-based replication site needs to work with your existing storage and technologies.

Currently, Cloud service providers need to offer the same storage and technologies as their customers in order to serve as a replication and recovery site. If you and your Cloud provider are using different storage and technologies, the process of replicating data to their cloud can be costly and involve complex, manual processes.

4.  Replication to the cloud is not cost-effective if cloud providers need to duplicate environments for each customer.

Complex issues are introduced when replicating multiple enterprises to a single cloud. For example, when replicating from separate sites with separate networks to one site with one network, redundancy is required at the replication site. Most cloud providers are solving this by duplicating network and infrastructure elements. This impacts the manageability and control of existing Cloud offerings.

What can you do to make sure your company does not fall victim to failures in these key areas?

1. Avoid snapshots and other replication technologies that tax production.

A disaster recovery solution should be simple, automated and have zero impact on your business systems and applications. Be sure to have an automated mechanism for testing your DR solution as well, so that you can run regular checks of your protected data.  With Zerto, data protection and tests of your DR system will never impact the applications running in your data center.

2. Find a replication solution with built-in bandwidth compression.

Compression is key to low bandwidth utilization and keeping costs down.  Find a provider that works with you to keep network utilization down, by compressing replication data and helping you to prioritize the data that is most important to you.  Zerto even has tools to help keep your setup costs down, in order to avoid a costly hit to the network when setting up your replication solution.

3. Make sure your Cloud provider can work with any storage technologies.

Your Cloud provider will need to offer the ability to mix storage from any vendor using any technology, so that your data and the data of all of their many customers can be protected in their Cloud.  Make sure they are set up to do this, otherwise you’ll end up paying for them to upgrade their storage technology to match yours. Plus, if the Cloud provider can only work with one storage vendor, that locks you to a specific vendor in your data center for the full duration of your Cloud contract.  Make sure your provider can support any storage vendor, so you have the freedom to upgrade or change vendors in your data center without disrupting your Cloud DR plan.

4. To see a real cost benefit from the cloud, make sure your service provider offers true multi-tenancy.

Cloud providers can only securely pool resources if they have the right multi-tenant technology, otherwise you’ll pay for the provider to set up a dedicated, duplicate environment – and pass the costs onto you!

Zerto’s BC/DR to the Cloud solution uses no snapshots, has bandwidth compression and true multi-tenancy built in, and works with any storage. Zerto Virtual Replication has additional advantages as well: replication of multiple sites (such as a company’s branch offices or additional data centers), centralized management of DR, and full integration with VMware’s vCloud Director.

Disaster Recovery to the Cloud is a big opportunity, offering secure, automated and low-cost data replication and recovery. Simply knowing what to ask your Cloud provider will make the selection of DR services simple and effective.  Zerto recently announced that Cloud service providers Terremark and ZX are offering Cloud DR using Zerto’s technology.  Contact us to speak to a Cloud service provider using Zerto.

For more technical information on BC/DR to the Cloud, read our Cloud white paper deep-dive.

 

Protecting and Recovering Virtualized Applications is Challenging http://www.zerto.com/blog/replication/protecting-and-recovering-virtualized-applications-is-challenging/ http://www.zerto.com/blog/replication/protecting-and-recovering-virtualized-applications-is-challenging/#comments Wed, 22 Feb 2012 15:53:59 +0000 Zerto http://zerto.com/?p=2105 This is a guest post by Steve Thomsen, Zerto’s Director of Central US Sales.

Protecting and recovering virtualized applications has moved to the forefront of data center strategy. However, the tools and methods available are based on technologies that are, well…quite old. Host- and storage-based replication solutions have been around since the ’90s. The last big thing to come to either of those two technologies was network-based replication for storage arrays. That was 12 years ago.

Host-based solutions do not scale, nor do they do much to address multi-tiered applications such as SAP, SharePoint, or even most Exchange deployments. Storage-based replication solutions scale and can protect multi-tiered applications, but they do so with a terrible impact in terms of infrastructure cost and complexity (this is really where virtualization breaks storage – a discussion for another day).

Zerto’s hypervisor-based approach to data protection and recovery goes a long way toward solving the conundrum of, “How do I best protect my VMs?” Or, more accurately, “How do I best protect my virtualized applications?”

A good example of this method is offering application-consistent point-in-time images for recovery. For Microsoft applications, Zerto supports VSS through the UI and the command line. Any other application you can quiesce, such as Oracle via RMAN/hot backup, can be protected in the same manner through the command line.

What is unique about Zerto’s method for capturing application-consistent point-in-time images isn’t what we are doing, it is where we are doing it. To illustrate this, let’s look at taking an application-consistent image with legacy technologies (for virtualized or non-virtualized applications). First, you quiesce the application, at which point its cache is flushed to disk. Once all in-flight transactions are committed to disk, you know you have 100% data integrity. Next you take an application-consistent snapshot and finally release the application back to production. This whole process takes 2-3 seconds. For most administrators – or, more properly, for most organizations – the time penalty of executing an application-consistent snapshot/image is too disruptive to the business. In many cases administrators will take only a very small number of application-consistent images/snaps in a day, or forgo the process altogether.

So what does Zerto do that it so different?

Zerto’s method of providing application-consistent point-in-time images is much the same; the big difference is where we capture the point-in-time image. Now, if you have read anything about Zerto, you know we use Virtual Replication Appliances (VRAs) to perform replication. These VRAs protect VMs by residing on the host and replicating selected VMs and VMDKs. You do not have to protect all of the VMs on a host.

When you want to take an application-consistent image, you basically perform the same process…but faster. Much faster. When you quiesce the application and all I/O is flushed to disk, the flushed write I/O is copied and sent to the Zerto VRA on that host at memory speed. Zerto already has the rest of the data at the target – all 99.99999…% of it. With the cache dump, we logically have 100% of the data. Before replicating this last bit of change across the WAN, Zerto marks the image as a VSS/application-consistent image and replicates it at the speed of light across the WAN. Because the cache dump and the application-consistent checkpointing are performed in host memory, the entire process takes microseconds versus the full 2-3 seconds of other solutions.
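In outline, it’s the classic quiesce/flush/checkpoint/release sequence – the difference is that the checkpoint is stamped into an in-memory journal instead of waiting on a storage snapshot. A simplified sketch (stub objects and hypothetical names, not Zerto internals):

```python
import time

class App:
    """Stub standing in for a VSS-aware application; names are hypothetical."""
    def quiesce(self):
        print("quiesced: in-flight transactions committed")
    def flush_cache(self):
        return [("write", b"dirty-page")]      # pretend cache contents
    def release(self):
        print("released: application resumes")

def take_app_consistent_checkpoint(app, vra_journal):
    # Simplified flow -- not Zerto internals. The checkpoint is a marker
    # appended to an in-memory journal, so the app is held for microseconds
    # rather than the seconds a storage-array snapshot would take.
    app.quiesce()
    vra_journal.extend(app.flush_cache())      # flushed writes copied at memory speed
    vra_journal.append(("CHECKPOINT", "vss-consistent", time.time()))
    app.release()

journal = []
take_app_consistent_checkpoint(App(), journal)
print(journal[-1][:2])                          # ('CHECKPOINT', 'vss-consistent')
```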

What does that mean for a Zerto customer? Enterprises can have many more application-consistent point-in-time images than with any other solution – with no production performance impact from capturing those images. The data center is not only crash-proof, but allows easy failback to an application-consistent point in time in the event of a corruption.

Application-consistent images can be captured at pre-defined points in time, in addition to Zerto’s native continuous near-synchronous replication, which provides hundreds and even thousands of crash-consistent point-in-time images for recovery.

Is this the number one reason people choose to adopt Zerto? Not by a long shot. But it is an excellent example of the power of being in the hypervisor. Among other great aspects, it is about speed and efficiency.

ESG Interviews Zerto’s Justin Nordeste http://www.zerto.com/blog/videos/esg-interviews-zertos-justin-nordeste/ http://www.zerto.com/blog/videos/esg-interviews-zertos-justin-nordeste/#comments Wed, 08 Feb 2012 12:22:05 +0000 Zerto http://zerto.com/?p=1900 Zerto’s Justin Nordeste was recently interviewed by ESG’s Taya Wyss at the New England VMUG on January 19, 2012.

Justin talks about the days before Zerto, when there were no enterprise replication solutions built with virtualized datacenters in mind.

Backup and Replication: What is the Difference? http://www.zerto.com/blog/replication/backup-and-replication-what-is-the-difference/ http://www.zerto.com/blog/replication/backup-and-replication-what-is-the-difference/#comments Mon, 06 Feb 2012 09:59:38 +0000 Zerto http://zerto.com/?p=1805  

How do backup and replication compare?

Can one be used in place of the other? We’ve created a comparison table to explain some of the distinctions:

Backup vs. Replication

Definition
  • Backup: Backup involves making a copy or copies of data.
  • Replication: Replication is the act of copying and then moving data between a company’s sites. It is typically measured in Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Requirements
  • Backup: Requires a tape library (usually a VTL doing disk-to-disk backup) and some place to store archived tapes.
  • Replication: Requires investment in another, identical infrastructure, effectively doubling your IT costs.

Purpose
  • Backup: Focuses on compliance and granular recovery, such as recovering a single user’s emails from 5 years ago.
  • Replication: Replication and recovery focus on business continuity – quick and easy resumption of operations after a disaster or corruption. Minimizing the recovery time objective (RTO) is key.

Uses
  • Backup: Typically used for everything in the enterprise, from critical production servers to desktops.
  • Replication: Often used for mission-critical applications that must always be up and running.

How it Works
  • Backup: Typically relies on snapshots, which are copies of the data set taken at a pre-determined point in time.
  • Replication: Can be synchronous, asynchronous or near-synchronous, and may use Continuous Data Protection (CDP) to enable users to access historic images.

Bottom Line
  • Backup: A relatively inexpensive way to avoid data loss. Valuable for compliance. Does not ensure continuity of operations.
  • Replication: Focused on ensuring that business applications and processes are always available, even after a disaster. More expensive to set up and maintain.

This table raises the question asked by Chris Mellor in a post on The Register (which is a great resource on the subject):

“Can replication replace backup?”

Chris writes,

“Backup started as saving a copy of data to tape. For many users it now means saving a copy to disk, either as a straight file backup or as a virtual tape backup. In both instances data is being moved from one disk to another. That’s what replication does too… Does this mean you can dump your backup software and rely completely on the much simpler replication idea?”

The experts quoted in Chris’ post respond with varying shades of “no”.  Replication still has a higher cost than backup, and some companies are legally required to retain their physical tapes for a certain number of years.  At Zerto, sometimes we hear the question asked the other way,

“Can backup files shipped to a remote location replace replication?”

We don’t think so.

For one, backup snapshots taken at regular intervals throughout the day will significantly slow down your production environment.  For more info on this, read Marcel van den Berg’s post, “Know the Performance impact of Snapshots!”  Even if you take fewer backup snapshots in order to protect your production environment, you’ll be at risk of losing large chunks of data if you have an outage or disaster.
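To put that risk in numbers: in the worst case, you lose everything written since the last copy. A quick sketch with assumed values (the change rate and the near-sync RPO are invented for illustration):

```python
# Illustrative only: worst-case data loss (RPO) by protection method.
# The 0.5 MB/s change rate and the 15-second RPO are assumed values.
methods_rpo_seconds = {
    "nightly backup": 24 * 60 * 60,
    "4-hour snapshots": 4 * 60 * 60,
    "continuous replication": 15,
}
change_rate_mb_per_s = 0.5

for name, window in methods_rpo_seconds.items():
    at_risk_gb = window * change_rate_mb_per_s / 1024
    print(f"{name:>24}: up to {at_risk_gb:,.1f} GB at risk")
```

Even at this modest change rate, a nightly backup leaves tens of gigabytes exposed, while a continuous-replication RPO measured in seconds leaves almost nothing.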

Second, backup solutions simply don’t scale well when used for replication. They work best in smaller environments with less critical data.

Third, and most important, having access to your data at a remote site is only the first step of a reliable disaster recovery (DR) plan. For a DR plan to be successful, business processes need to be fully operational after a disaster or outage; having data on a disk does not achieve that goal. Particularly in virtualized infrastructures – which are expansive, always changing and often large-scale – shipping backups is not a viable solution for quick recovery following a disaster. Only continuous replication provides a true business continuity solution for the enterprise. Backup, even shipped to a remote location, does not replace the always-available benefits of replication.

Bottom line: backup and replication have distinct use-cases, and both have a place in an enterprise data center. The key is to determine your company’s objectives in order to find the backup or replication solutions that match your needs. If your company is looking for disaster recovery and always-on data protection, however, replication is likely the better option.

You can find more details on Disaster Recovery Planning with Zerto Virtual Replication here.

Zerto CTO Oded Kedem’s Interview on vDestination Blog http://www.zerto.com/blog/virtualization/zerto-cto-oded-kedems-interview-on-vdestination-blog/ http://www.zerto.com/blog/virtualization/zerto-cto-oded-kedems-interview-on-vdestination-blog/#comments Sun, 29 Jan 2012 07:59:33 +0000 Zerto http://zerto.com/?p=1784 This week, our CTO & Co-founder, Oded Kedem, was interviewed for a column called ‘The Virtual Boardroom’, on Greg Stuart’s vDestination blog.  Here’s an excerpt from the post – an interesting look at Zerto’s take on the future of the datacenter:

Greg Stuart: Let’s talk a little bit about the future of Zerto. What does the future hold for Zerto?

Oded Kedem: This point in time virtualization of the datacenter is already intact but we’re seeing everything moving toward the cloud. We’re seeing cloud providers building their infrastructures providing services for their customers moving more and more enterprise applications into the cloud. With Zerto, Cloud providers will be able to provide disaster recovery as a service just like any other service they provide…

Read the full interview at vDestination.com

Make sure to vote for Greg Stuart’s vDestination blog as one of your favorite virtualization blogs!  The survey will rank the top 50 VMware and virtualization blogs – vDestination is one of our favorites.

New England VMUG Winter Warmer: Top 5 Takeaways http://www.zerto.com/blog/general/new-england-vmug-winter-warmer-top-5-takeaways/ http://www.zerto.com/blog/general/new-england-vmug-winter-warmer-top-5-takeaways/#comments Mon, 23 Jan 2012 17:11:21 +0000 Zerto http://zerto.com/?p=1725

Contributed by Jennifer Gill & Wes Schifone.

The New England VMUG Winter Warmer saw more than 1200 attendees arrive at Gillette Stadium last Thursday to discuss all things VMware – particularly these five key takeaways that we heard from many people throughout the day.

The cloud is more and more prevalent. Based on the number of questions, sessions, and discussions around the cloud, it is clear there is more interest from customers, and some have deployed or are in the middle of a cloud deployment. We got lots of questions on how Zerto supports BC/DR for the cloud, and even on how we define a cloud. Luckily, we have solutions for public, private and hybrid clouds, delivering enterprise-class BC/DR no matter what kind of cloud you have.

There are key applications that run a business – they must be protected and continuously available. Mother Nature isn’t the only thing that can make an application go down – a DBA might drop a table, or an overzealous marketer might “try out” a new button in an application. Either way, the organization NEEDS business continuity and disaster recovery. It is the IT professional’s responsibility to ensure business-critical applications are continuously available, and that the organization can recover these applications quickly. Data is a means to an end; it’s the applications their businesses run on that are critical to executives. For Forrester analyst Rachel Dines’ take on “always-on” business continuity, read more here…

Highly Virtualized? Why Manage From the Storage Layer? – More and more folks realize that managing replication (or DR in general) at the storage layer doesn’t make sense: it creates a lack of alignment across the IT strategy and reintroduces the complexities you were trying to get away from by going virtual in the first place. The hypervisor is the atomic unit of storage today, and that’s only going to become more true going forward. We had several customers stop by and say, “Finally! Replication at the hypervisor!” – especially in our 3:30 session. Unfortunately, Zerto is not a trend-setter here – Cisco did virtualize switches first. However, we are lucky to be the FIRST company to offer hypervisor-based replication.

More and More Tier 1/Production Applications Are Being Virtualized – We spoke with VMware users who have 60%, 70%, and even 95% virtualized environments. Once you get to that level, business-critical applications are virtualized. A few customers spoke to us about how most replication solutions lack truly ‘virtual-aware’ technology – the replication is not at the hypervisor level, so their IT strategy is not aligned. Meaning, they have a virtualization strategy but are still force-fitting a physical solution to cover key aspects of their SLAs, like BC/DR and replication. Customers are looking for an enterprise-class replication solution – one that doesn’t impact their mission-critical applications the way snapshot technology does. Additionally, an RTO of minutes is a requirement for an application that runs a business – Zerto provides an RTO of minutes and does not take CPU cycles away from mission-critical applications. Read about Zerto’s better Disaster Recovery and low TCO.

People Like T-Shirt Cannons! They also like real customers. Shooting a T-shirt out of a cannon, even if it is at a wall, is pretty fun – although having Rob volunteer to act as a target did raise some eyebrows! Rich Corl from Woodforest National Bank did a great job discussing his experiences with Zerto. We saw many in the session nodding their heads, having come across the same experiences. According to Rich, “The NE VMUG was a great event at a great venue! I was excited to hear others that saw this product fill a need that they had in a way that no other product does. They seemed to appreciate the real-world experience of our story and could better envision how they might use it at their companies. One person claimed that it was just the product that he had been looking for.”

Thanks @csharney, @Knieriemen, and all those who organized the event!

Photo: “T-shirt carnage” with @rbergin was taken by @stu at @Wikibon – thanks!

 

Zerto Wins 2011 Product of the Year Award http://www.zerto.com/blog/virtualization/zerto-wins-2011-product-of-the-year-award/ http://www.zerto.com/blog/virtualization/zerto-wins-2011-product-of-the-year-award/#comments Sun, 15 Jan 2012 07:49:44 +0000 Zerto http://zerto.com/?p=1678  

We’re happy to announce another big award on our roster: SearchServerVirtualization’s 2011 Product of the Year award! Zerto Virtual Replication won the Gold award in Storage and Backup, a category which includes disaster recovery. This award comes only a few months after winning ‘Best of Show’ at VMworld 2011, and a Gold award in the Business Continuity and Data Protection category at VMworld.

According to TechTarget, the Product of the Year awards “address the difficulties associated with virtualization backup and disaster recovery. As more mission-critical applications are virtualized, IT shops must contend with shrinking recovery time objectives and recovery point objectives (RPOs), as well as hardware compatibility issues during offsite recoveries.”

The award announcement described Zerto as follows: “Zerto Virtual Replication 1.0 is managed via a VMware vCenter plug-in, which makes administration easy. It works with any storage configuration and offers a very low RPO. Zerto introduces enterprise-grade disaster recovery capabilities to a broader-range of businesses, regardless of storage hardware.”

Thank you for this BIG win!  Read more about Zerto Virtual Replication here.

 

Replication over WAN: Your DR is only as Good as Your Last Packet http://www.zerto.com/blog/disaster-recovery-planning/replication-over-wan-your-dr-is-only-as-good-as-your-last-packet/ http://www.zerto.com/blog/disaster-recovery-planning/replication-over-wan-your-dr-is-only-as-good-as-your-last-packet/#comments Wed, 28 Dec 2011 08:30:51 +0000 Zerto http://zerto.com/?p=1661 Recovering data quickly in the event of an emergency is inherently dependent on your network.  Replication simply won’t work if the WAN links connecting production and DR sites are down or weak.  As the discussion around DR moves to an “always-on” approach, where replication RTO/ RPO targets are minutes and even seconds, networks are feeling the heat.

This week, VMware’s infrastructure product manager Gaetan Castelein was interviewed for a great post on ReadWriteCloud. He notes the DR industry’s move from pure disaster recovery to disaster avoidance: “When you talk about the need to have more resilience, what we’re seeing is, people now want to go beyond just disaster recovery to get also into the field of disaster avoidance,” he explains. “The instigator of this change has been the growing number of occurrences when companies have seen a hurricane coming in on the radar, and rolled the dice.”

But Mr. Castelein notes one difficulty of a proactive, disaster-avoidance approach to DR: “Distance continues to play a role in the equation of mobility between data centers, because even the smallest degree of latency multiplied by a few terabytes becomes an unmanageable quantum of time. And the bigger your enterprise is, the broader the roadblock becomes.” The article goes on to mention that “Castelein concedes that this problem is not solvable for big businesses right this moment.”

Attention big business: we think it is solvable now.  With hypervisor-based replication from Zerto, network resiliency and optimization are not an afterthought, but built into the product. We’ve built networking and WAN optimization features into Zerto Virtual Replication in the following ways:

  • Prioritization: Each application has a priority setting that governs its share of the WAN, letting the administrator differentiate between applications and ensuring that tier-one applications get first dibs on the network (see the sketch after this list).
  • Resiliency and Compression: Zerto has a built-in, patent-pending compression mechanism and WAN resiliency features, including the ability to cap WAN bandwidth between sites and automated rollback of uncompleted actions.
  • Outage Recovery: Although Zerto replication is continuous and does not rely on snapshots, when the WAN deteriorates Zerto accumulates a block of changes and ships the changed set once the outage is resolved. Even in a prolonged outage, the memory footprint used to track those changes is minimal, and re-syncing once the WAN is restored does not degrade the production environment.
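
To make the prioritization idea concrete, here is a minimal, hypothetical sketch (not Zerto’s code or algorithm; the names and the scheduling scheme are illustrative assumptions) of how a replication sender might drain per-application write queues in priority order under a bandwidth cap:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReplicationWrite:
    priority: int              # 1 = tier-one application; higher = less critical
    payload: bytes = field(compare=False)

def drain(queue, bandwidth_cap_bytes):
    """Ship queued writes in priority order without exceeding the WAN cap.

    Whatever doesn't fit into this scheduling interval stays queued
    for the next one.
    """
    sent, budget = [], bandwidth_cap_bytes
    while queue and len(queue[0].payload) <= budget:
        write = heapq.heappop(queue)       # lowest priority value ships first
        budget -= len(write.payload)
        sent.append(write)
    return sent

# A tier-one CRM write (priority 1) ships before a tier-three file
# server's write (priority 3) when the link can't carry both.
q = []
heapq.heappush(q, ReplicationWrite(3, b"x" * 4096))
heapq.heappush(q, ReplicationWrite(1, b"y" * 4096))
print([w.priority for w in drain(q, 6000)])  # -> [1]
```

Under a cap like this, a tier-one application’s writes always ship first; lower-priority traffic simply waits for the next interval.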

So, today, even large enterprises can proactively manage their data center effectively – despite network instability.  And it’s already happening.  One of our customers, Woodforest National Bank, achieves proactive disaster avoidance by failing over their entire data center to a secondary site prior to hurricane season.  You can read the full Woodforest case study here.

 

Happy Holidays from Zerto! http://www.zerto.com/blog/general/happy-holidays-from-zerto/ http://www.zerto.com/blog/general/happy-holidays-from-zerto/#comments Thu, 22 Dec 2011 16:07:17 +0000 Zerto http://zerto.com/?p=1637 Many thanks to all of our partners, customers and friends who helped make 2011 so exciting!

Here’s to a happy holiday season and a wonderful 2012.

400GB of Lost Data, What’s the Big Deal? http://www.zerto.com/blog/disaster-recovery-planning/400gb-of-lost-data-what%e2%80%99s-the-big-deal/ http://www.zerto.com/blog/disaster-recovery-planning/400gb-of-lost-data-what%e2%80%99s-the-big-deal/#comments Mon, 19 Dec 2011 15:38:12 +0000 Zerto http://zerto.com/?p=1614 According to the recent VansonBourne European Disaster Recovery Survey 2011, one quarter of organizations have experienced data loss within the last twelve months.

“These organizations have on average lost around 400GB of data during the last year, with the following having lost the most:

  • Companies in France – around 550GB
  • Energy and utilities sector enterprises – almost 650 GB
  • Companies with more than 3000 employees – more than 600GB

What is 400GB of data?  Is losing it a big deal? 

Let’s take a look at what losing 400GB of data would mean for your organization.

  • A typical record in your company’s CRM is about 100KB, so losing 400GB of data would wipe out all of your sales team’s contacts and then some, up to 4,000,000 records.
  • An MRI series (group of scans) can be 100MB to 500MB*, so losing 400GB of data would wipe out between 800 and 4,000 patient records.
  • In 6 months, I’ve used about 1GB of email storage. If I’m typical, losing 400GB would wipe out 6 months of emails for every single person in a 400-person company (see the sketch after this list).
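
The arithmetic behind these estimates is plain division. The short sketch below simply reproduces it, using the record and scan sizes assumed above:

```python
GB = 10**9   # decimal gigabytes, as storage vendors count them
lost = 400 * GB

crm_record = 100 * 10**3                       # ~100KB per CRM record
mri_low, mri_high = 100 * 10**6, 500 * 10**6   # one MRI series: 100MB-500MB
email_6mo = 1 * GB                             # ~1GB of email per person per 6 months

print(lost // crm_record)                 # 4000000 CRM records
print(lost // mri_high, lost // mri_low)  # 800 4000 MRI series
print(lost // email_6mo)                  # 400 people's email for 6 months
```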


Yes, losing 400 GB of data is going to have a major material impact on your organization.  A smaller company might not survive this degree of data loss. Even a large organization will spend significant time and money trying to recover that data, not to mention employees’ lost productivity and compensation for customers and clients. Your data is crucial to the bottom line, and protecting it should be a top priority.

 

*information from Advocate Healthcare

DR by the Numbers http://www.zerto.com/blog/disaster-recovery-planning/dr-by-the-numbers/ http://www.zerto.com/blog/disaster-recovery-planning/dr-by-the-numbers/#comments Sun, 04 Dec 2011 10:03:39 +0000 Zerto http://zerto.com/?p=1430 74 percent of European companies are not very confident that they can fully recover systems, and 68 percent stated that disaster recovery is their biggest data challenge.

This week brings us lots of new data on the state of corporate preparedness for disaster scenarios: two newly published studies identify some serious gaps in enterprise disaster recovery (DR) readiness.

The European Disaster Recovery Survey 2011 surveyed 1,750 IT decision-makers in private and public sector organizations across the UK, France, Germany, Italy, Spain, Benelux and Russia. Organizations ranged from 250 to 3,000+ employees and represented all major industries, including manufacturing, retail, financial services, health care, the public sector and telecom. The survey was sponsored by EMC.

The survey found that:

  • 74% of organisations are not very confident that they can fully recover after a disaster
  • 54% surveyed have lost data and/or suffered systems downtime in the last year
  • 61% report hardware failure as the primary cause of data loss and downtime; natural disasters and employee sabotage being much less likely culprits
  • 43% of organisations cite loss of employee productivity as the single biggest economic impact
  • 28% point to lost revenue as a result of a disaster
  • 40% of organisations still use tape for recovery and 80% of these organisations want to replace tape altogether, highlighting the need for next-generation backup and recovery

In addition, Information Management Magazine cites a study of 1,200 data management and recovery officials on the status of their data recovery:

  • 68% stated that disaster recovery is their biggest data challenge
  • only 44% successfully recovered information after a recent data recovery event
  • 27% cite lag in recovery time as their top reason for experiencing data loss

Continuity Central, commenting on the European Disaster Recovery Survey, stated, “The survey shows a reaction after a disaster to spend more on backup and recovery, but the damage is done in terms of time and money during a downtime.” Post-disaster is when companies display increased willingness to spend on DR solutions. My guess is that at that critical post-disaster point, companies will over-spend to compensate for the damage done to their datacenter. The best plan for DR is to be prepared in advance, purchasing the tools and solutions that suit your datacenter’s needs.

TrueBit.TV Talks to Zerto http://www.zerto.com/blog/disaster-recovery-planning/truebit-tv-talks-to-zerto/ http://www.zerto.com/blog/disaster-recovery-planning/truebit-tv-talks-to-zerto/#comments Mon, 21 Nov 2011 10:13:52 +0000 Zerto http://zerto.com/?p=1203 Gil Levonai, Zerto’s VP Products, was interviewed recently by W. Curtis Preston, of Backup Central and Truth in IT.

“What happens if a VM that I’m replicating gets moved by vMotion to somewhere else?”

“You’re a replication product, but do you have the ability to go back in time?”

Watch the full interview to hear Gil answer these questions and more:

W. Curtis Preston Interviews Gil Levonai of Zerto from TrueBit.tv on Vimeo.

Stop Replicating Data, Start Protecting Applications http://www.zerto.com/blog/disaster-recovery-planning/stop-replicating-data-start-protecting-applications/ http://www.zerto.com/blog/disaster-recovery-planning/stop-replicating-data-start-protecting-applications/#comments Thu, 17 Nov 2011 10:11:13 +0000 Zerto http://zerto.com/?p=1201 Disaster Recovery solutions need to provide seamless support for the mobility of VMs in a virtualized environment – but most simply don’t.

This week, an article by Michael Otey in Windows IT Pro made a strong case for virtualizing SQL – something many are “scared” to virtualize – citing reduced cost, improved resource availability and developments in DR from dynamic IT infrastructure products like Microsoft’s Live Migration or VMware’s High Availability and vMotion.

He points out that,

“VMware provides its High Availability clustering feature, which can automatically move protected resources to a backup server in the event of a server failure. That covers the single point of failure exposure… Virtualization lets you abstract the server workload from the underlying hardware that it runs on. This provides huge improvements for availability and disaster recovery. In the event of a disaster, restoring a VM backup is much faster than getting a bare metal restore off the ground, and it can also be faster than having a warm backup.”

Theoretically, that’s all wonderful.  Restoring a backup VM is fast and there are tools that help avoid single points of failure.  However, the motion of individual application components like VMs and VMDKs can break replication policy and consistency.

The remedy is what we call “application awareness”: protection that consistently replicates and recovers groups of VMs, along with all of those VMs’ dependencies, even when they are deployed across multiple hosts and storage devices. DR solutions need to stop backing up and replicating individual components and leaving it to the IT department to assemble the pieces into a working application at the replication site. DR solution vendors need to start replicating and recovering entire applications.

Virtual Application Awareness

With Zerto, replication and recovery are done in Virtual Protection Groups (VPGs). A VPG is configured and managed as a single entity representing an entire federated application with all of its components and dependencies. The application manager can determine and control replication policy for the complete application; the physical location of the data is irrelevant.

How does a Virtual Protection Group work?

VPGs maintain replication policy and consistency for all the VMs in the protection group and completely support VM vMotion, Storage vMotion and DRS. In addition to creating VPGs through Zerto’s management console, VPGs can also be created from a VMware vApp object, replicating it with full compatibility (all vApp properties are replicated and recovered) and automatically adjusting to the addition and removal of VMs. A VPG is recovered as a whole on the recovery site while maintaining boot order with all dependencies, and we even support network re-configuration at the VPG level or for a specific VM if necessary.
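
The concept is easier to see as a data model. Here is a minimal, hypothetical sketch (this is not Zerto’s actual API or schema; the class names, fields and the ten-second RPO are invented for illustration) of what a VPG bundles together, and why moving a VM between hosts or datastores doesn’t invalidate the policy:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProtectedVM:
    name: str
    boot_order: int                         # position in the VPG's boot sequence
    recovery_network: Optional[str] = None  # optional per-VM network override

@dataclass
class VirtualProtectionGroup:
    """One policy object for a whole federated application; replication
    settings live on the group, not on LUNs or hosts."""
    name: str
    rpo_seconds: int
    recovery_network: str
    vms: list = field(default_factory=list)

    def recovery_plan(self):
        # Recover the group as a whole: honor boot order, applying the
        # group-level network unless a VM carries its own override.
        return [(vm.name, vm.recovery_network or self.recovery_network)
                for vm in sorted(self.vms, key=lambda v: v.boot_order)]

crm = VirtualProtectionGroup("crm-app", rpo_seconds=10, recovery_network="dr-vlan-20")
crm.vms += [ProtectedVM("crm-web", boot_order=2),
            ProtectedVM("crm-db", boot_order=1, recovery_network="dr-vlan-30")]
print(crm.recovery_plan())  # [('crm-db', 'dr-vlan-30'), ('crm-web', 'dr-vlan-20')]
```

Because the policy hangs off the group rather than off a LUN or host, a vMotion or Storage vMotion changes nothing in the model above.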

More information on Virtual Protection Groups can be found in our Hypervisor-Based Replication whitepaper.

Under the Zerto Hood: What Does Hypervisor-Based Replication Look Like? http://www.zerto.com/blog/disaster-recovery-planning/under-the-zerto-hood-what-does-hypervisor-based-replication-look-like/ http://www.zerto.com/blog/disaster-recovery-planning/under-the-zerto-hood-what-does-hypervisor-based-replication-look-like/#comments Thu, 10 Nov 2011 09:58:48 +0000 Zerto http://zerto.com/?p=1197 We just posted a 30-second Zerto demo on our Vimeo page; in it, you’ll see just how easy it is to replicate applications using Zerto. What’s under the Zerto hood?

Zerto 30 Sec Demo from Zerto on Vimeo.

Video Highlights:
Single Pane of Glass: Zerto is fully integrated with VMware vCenter, providing a “single pane of glass” for managing applications and protection at the same time. In the video, you can see that Zerto simply adds a tab in the vCenter console.

Recovery Automation: Zerto dramatically simplifies the recovery process by fully automating it, including creating multiple VMs, attaching data stores, changing network configuration and more. In addition, Zerto Virtual Replication seamlessly supports all of VMware’s advanced features, including vMotion, Storage vMotion, DRS, HA and vApps. This ensures seamless, consistent replication even while VMs and virtual disks are moved around. (Compare this to array-based replication, where movement of virtual machines is a headache requiring manual patching and processes.)

One-Click Failover & Testing: Many disaster recovery solutions require manual, time-consuming processes for testing their recovery plan, translating directly into a higher TCO for DR. With Zerto, testing failover happens with the click of a button, with no impact on the production environment; in fact, the production environment continues to replicate while the test is running (a toy model of why this is possible appears below). In this video, we’re replicating to a cloud – but the demo would look the same when replicating to a physical site.

One note: Yes, your speakers are working! The video has no sound.
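
To make the “no production impact” point concrete, here is a toy model (purely illustrative, and not how Zerto is actually implemented) of a replica site that keeps accepting changes while a failover test boots from a point-in-time copy:

```python
import threading

class ReplicaSite:
    """Toy model of why a failover test needn't pause replication:
    the test boots from a point-in-time copy of the replica journal
    while new changes keep arriving."""
    def __init__(self):
        self.journal = []
        self.lock = threading.Lock()

    def apply(self, change):
        """Called continuously by incoming replication traffic."""
        with self.lock:
            self.journal.append(change)

    def start_failover_test(self):
        """Snapshot the journal for an isolated test sandbox."""
        with self.lock:
            return list(self.journal)   # test VMs boot from this copy

site = ReplicaSite()
for i in range(3):
    site.apply(f"block-{i}")
sandbox = site.start_failover_test()
site.apply("block-3")                   # replication continues during the test
print(sandbox)                          # ['block-0', 'block-1', 'block-2']
print(len(site.journal))                # 4 -- production was never interrupted
```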

Yes, You Can Virtualize 100% of Your Datacenter http://www.zerto.com/blog/disaster-recovery-planning/yes-you-can-virtualize-100-of-your-datacenter/ http://www.zerto.com/blog/disaster-recovery-planning/yes-you-can-virtualize-100-of-your-datacenter/#comments Thu, 03 Nov 2011 08:55:02 +0000 Zerto http://zerto.com/?p=1195
This week, we saw Austin Radiological Association announce an implementation of Zerto Virtual Replication in their datacenter. According to ARA, their datacenter is currently 25% virtualized, and they are going to be 100% virtualized by revamping infrastructure – a project that will take approximately 12 months.

“We believe that the virtual machine is the new atomic unit for replication strategies,” said R. Todd Thomas, Chief Information Officer, ARA. “Zerto works at the virtualization level, which allows us greater flexibility in the type of storage we replicate to and removes the limitations around LUN-based consistency groups.”

Virtualize in order to Replicate, or Replicate in order to Virtualize?

For ARA, Disaster Recovery is not only about cost-effective application protection – it’s actually the key element in getting their environment 100% virtualized.  If mission-critical applications can’t easily be protected in virtual environments, they become a virtualization barrier. For ARA, getting Zerto in their datacenter is not only about BC/DR, it is about getting the maximum benefit out of virtualization.

All agree virtualization significantly reduces datacenter costs. Of course you’ll save on pure hardware costs, but there’s also power, servicing, storage space, cooling, real estate and property taxes – all of these costs are reduced by virtualizing. According to a recent Baseline Magazine article,

“More than one-third of the 525 companies [we] surveyed say that at least one of their data center facilities will run out of power, cooling and/or space in 2011 or 2012. What’s more, the survey indicates that one-third of the data centers in operation are currently understaffed.”

It’s becoming cost-critical for IT managers to get more of their datacenters virtualized. Hypervisor-based replication is going to be a key part of virtualizing business-critical applications. I expect we’ll hear more about companies hitting their ‘100% virtualized’ target in 2012.

DR as a Service: A Win-Win for Enterprises and Cloud Providers http://www.zerto.com/blog/disaster-recovery-planning/dr-as-a-service-a-win-win-for-enterprises-and-cloud-providers/ http://www.zerto.com/blog/disaster-recovery-planning/dr-as-a-service-a-win-win-for-enterprises-and-cloud-providers/#comments Thu, 27 Oct 2011 08:52:02 +0000 Zerto http://zerto.com/?p=1193 Back in August, I blogged about some of the reasons an enterprise customer would want to create a Disaster Recovery plan that utilizes the cloud for DR.  Being at VMworld Europe in Copenhagen last week – where there were many cloud providers and so many people discussing all things cloud – made me think about BC/ DR from a cloud provider perspective.  How important is DR to the cloud for a cloud provider?

Is it Just Talk?

Everybody talks about moving data to the cloud. If you ask a CIO today about the importance of his cloud strategy, the answer will usually be, “the cloud is a top priority”. And yet, asking what has already been moved to the cloud will usually be answered with: “test environments, development environments, front-end web servers…”. However, when asking about moving critical applications with real SLAs and budgets to the cloud, the answer is more hesitant. CIOs usually point to security, availability, performance guarantees, etc. as reasons that the cloud is not yet ready for their tier-one applications. DR to the cloud bridges this gap, as it lets the cloud handle critical applications without jeopardizing production (which stays in the enterprise datacenter).

DR to the cloud gives enterprises the comfort they need to see production applications running successfully and safely in the cloud, so they will eventually be comfortable migrating more of their tier-one applications to the cloud.

DR as a Service can simplify recovery for enterprise customers while reducing costs.  This also becomes a big win for cloud providers, who can use DR as a Service to get tier-one applications replicated to the cloud.  Once IT becomes comfortable with DR to the cloud, moving enterprise applications is not far behind.

Two for Tuesday: Go Cloud + Traditional Backup Mistakes http://www.zerto.com/blog/disaster-recovery-planning/two-for-tuesday-go-cloud-traditional-backup-mistakes/ http://www.zerto.com/blog/disaster-recovery-planning/two-for-tuesday-go-cloud-traditional-backup-mistakes/#comments Tue, 18 Oct 2011 08:49:30 +0000 Zerto http://zerto.com/?p=1191 It’s Twofer Tuesday, where we like to highlight two relevant articles on disaster recovery planning, data replication and DR to the cloud.  This week’s choice quotes:

“Yes, the cloud provides a new alternative for disaster preparedness.”

“One of the biggest mistakes is trying to protect a virtual infrastructure with traditional backup methods.”

 

Jill Yaoz, CEO of AFCOM, the “association of data center management professionals,” writes in Forbes about the importance of a disaster recovery plan and strategy. She goes on to highlight the cloud as “a new alternative for disaster preparedness.”

“As with any new technology, the DR team must be vigilant in staying on top of trends and new developments. Running comprehensive DR drills to ensure the failover happens as planned with minimal disruption to the end user will be an imperative. Through the cloud, an IT organization can have all of the benefits of a hot backup site without the expense and waste of underutilized compute and infrastructure resources.”

Another article this week, by Stacy Collett of Computerworld, highlights the complications that arise when virtualization meets backup and DR.

“More than a quarter (27%) of the respondents in the Computerworld poll said that server virtualization has complicated backup and disaster recovery.”

Collett goes on to quote Jeff Boles, senior analyst at Taneja Group:

“One of the biggest mistakes here is trying to protect a virtual infrastructure with traditional backup methods,” according to Boles. With traditional backup, “the degradation in backup performance is more than a linear degradation as you scale the number of virtual machines on a piece of hardware. You’re effectively creating a blender for backup contention as you’re trying to protect these virtual servers overnight. You try to do 10 backups simultaneously on this one physical server, and you’ve got a lot of combat going on inside that server for memory, CPU, network and storage,” he says.

The article continues:

“Complicating matters are workload mobility tools, such as VMware’s Storage vMotion, that let users relocate virtual machine disk files between and across shared storage locations. “Now you have to keep a backup going in relation to these virtual servers that are going to be moving around, and possibly run into other bottlenecks. That can be a serious headache,” says Boles.

It’s a headache many of our customers see when using both backup and replication solutions, and one that pushed us toward a hypervisor-based approach to recovery and replication – an approach that keeps the full range of virtualization tools (such as VMware’s Storage vMotion) in mind and replicates seamlessly despite frequently moving data and VMs.

If you’re at VMworld Europe, come by booth #67 to meet us in person! Follow us on twitter @ZertoCorp to get updates from the show floor.

Five Key Takeaways from “Best Practices for Disaster Recovery in a Virtualized World” http://www.zerto.com/blog/disaster-recovery-planning/five-key-takeaways-from-%e2%80%9cbest-practices-for-disaster-recovery-in-a-virtualized-world%e2%80%9d/ http://www.zerto.com/blog/disaster-recovery-planning/five-key-takeaways-from-%e2%80%9cbest-practices-for-disaster-recovery-in-a-virtualized-world%e2%80%9d/#comments Mon, 10 Oct 2011 08:46:30 +0000 Zerto http://zerto.com/?p=1188 We were really pleased that so many of you joined us for the Zerto / Forrester Webinar “Best Practices for DR in a Virtualized World” last week.  The replay is now live on our site.

Rachel Dines, Analyst at Forrester, gave an overview of the DR plans of her clients and talked about the research Forrester recently completed on the subject.

Ziv Kedem, CEO of Zerto, reviewed how Hypervisor-Based Replication provides a new alternative to array-based data replication and recovery that is built specifically for virtualized environments.

Here are the five key takeaways from their interesting discussion:

1. “People are adopting server virtualization for DR”

79% of those surveyed in a recent Forrester report cited improved DR as a ‘very important’ or ‘important’ reason for adopting server virtualization. Companies are moving toward virtualization (or increased virtualization) in order to make DR easier.

2. “DR is not an insurance policy.”

According to Rachel, the conversation around BC/DR needs to move from High Availability / Disaster Recovery to IT Service Continuity. Companies need to stay always on and always available – continuous availability. They need to be uptime-focused rather than downtime-focused.

3. IT service continuity for virtualized environments can really only be accomplished with some of the new technologies that have recently become available.

Rachel discussed hypervisor-based replication as a scalable and reliable solution for mission-critical applications.

4. “Companies should shift focus from reactive response to downtime, to planning, good processes and preventative efforts.”

5. “Until hypervisor-based replication came along, solutions for large enterprise IT service continuity were not virtual-aware.”

“Virtual-aware” includes elements like protection of VMs and VMDKs as opposed to LUNs, hardware agnosticism, setup flexibility and support for Storage DRS.

How can you ensure that your environment is less reactive, focuses on uptime, and that your IT service continuity is virtual-aware?

Find out more by watching the short webinar.

Top Disaster Recovery and Cloud Insights: Two for Tuesday http://www.zerto.com/blog/two-for-tuesday/top-disaster-recovery-and-cloud-insights-two-for-tuesday/ http://www.zerto.com/blog/two-for-tuesday/top-disaster-recovery-and-cloud-insights-two-for-tuesday/#comments Tue, 27 Sep 2011 08:44:15 +0000 Zerto http://zerto.com/?p=1186 If, like me, you’ve read all you can stomach about the new Facebook timeline or the future of HP, you can refocus on what’s happening in our industry with the first post in our ‘Two for Tuesday’ series. Each Tuesday, we’ll post two articles that cover interesting perspectives on disaster recovery, cloud technology adoption and replication.

In the first, Lauren Whitehouse of ESG writes a post entitled “Strained or Broken?” in which she concludes that the state of backup/recovery is, in fact, broken.

“My takeaway from the many discussions I had with IT professionals is that we know that backup/recovery challenges spell trouble for most organizations, but upon further inspection, it’s broken.  Really broken.”

On the cloud front, @odedkedem, our CTO and co-founder, pointed out this InformationWeek article on the importance of cloud adoption by the US government.

“The federal government’s adoption of cloud computing is not merely critical to technology innovation, but also is crucial to ensure that the United States can remain economically competitive across the globe.”

Honorable mention goes to this post on the Stateline.org website. The reporter tallied up the number of federal disaster declarations President Obama has issued (so far) this year and came to 84! (For those of you following us on Facebook, you can cheat on today’s posted question.)

“Last week, when President Obama named New Jersey a federal disaster area for floods that came before Hurricane Irene, he cemented 2011’s ranking as the United States’ most disaster-prone year ever.

Through the third week of September, Obama had issued 84 federal disaster declarations at the request of governors. That is more declarations than in any year since the score was first kept six decades ago. And there are still three months left in 2011.”

Feel free to post links to noteworthy articles you’ve read this week in the comments section – and if you haven’t already signed up – we have a webinar TODAY at 12:00 noon EST on “Best Practices for Disaster Recovery in a Virtualized World”.  You can still register here.

Biggest DR Challenges are Complexity, Organization & Management Issues http://www.zerto.com/blog/general/biggest-dr-challenges-are-complexity-organization-management-issues/ http://www.zerto.com/blog/general/biggest-dr-challenges-are-complexity-organization-management-issues/#comments Wed, 21 Sep 2011 08:39:40 +0000 Zerto http://zerto.com/?p=1184 “What’s your biggest DR challenge?”

That’s the question we asked 80+ participants on a recent webinar. Surprisingly, RTO/RPO goals and cost efficiency were not the number one concern. The number one DR problem IT managers face is data center complexity, followed by cost and the inability to manage application-level policies for DR. And while cost and RTO/RPO are challenges, taken together more than 50% of data center managers cited a similar cluster of concerns: complexity, organization and management issues.

Much of this added complexity and management challenge comes from virtualization.  Virtualization presents many challenges for implementing business continuity and disaster recovery plans, but you may not know that it provides opportunities as well.  In fact, by leveraging the virtualized environment for DR, companies can achieve faster recovery and minimize data loss, while reducing management overhead.

We’re excited to be hosting our first webinar together with Forrester analyst Rachel Dines, addressing DR for virtualized environments, entitled Best Practices for Disaster Recovery in an Increasingly Virtualized World.

Rachel and I will address the following issues on a webinar on Tuesday, September 27, 2011 at 12:00 noon EST (9:00am PST):

  • Best practices for architecting your disaster recovery environment – think about continuous availability, not just recovery!
  • How to select the right set of technologies and refine virtualization processes
  • Ways to realize the full benefits of virtualization through a new technology that offers enterprise-class replication for business-critical virtual applications

Register for the webinar here!

We hope to see you there.

Is DR Planning Better After 9/11? Hurricane Irene Shows that Companies Have a Long Way to Go http://www.zerto.com/blog/disaster-recovery-planning/is-dr-planning-better-after-911-hurricane-irene-shows-that-companies-have-a-long-way-to-go/ http://www.zerto.com/blog/disaster-recovery-planning/is-dr-planning-better-after-911-hurricane-irene-shows-that-companies-have-a-long-way-to-go/#comments Mon, 12 Sep 2011 08:38:51 +0000 Zerto http://zerto.com/?p=1181 The 30-second post:

  • Virtualization makes DR more difficult to implement
  • Some companies used Hurricane Irene as a test-case for failover to a DR site (seriously?!)
  • Others relied on backup power without a real DR solution
  • We feel hypervisor-based replication is an easy and cost-effective way to be prepared for any disaster scenario

The complete story:

As a company that builds disaster recovery solutions, Zerto has some understanding of the task WTC-area companies faced in getting corporate life back in order in those terrifying days after 9/11. The task of rebuilding after a disaster is challenging and, frankly, has gotten more challenging in the 10 years since, mostly due to virtualization.

In fact, the larger and more virtualized an enterprise is, the more complicated DR becomes. Applications are managed and provisioned as virtual machines (VMs) and virtual disks (VMDKs) – but existing storage-based replication and disaster recovery technologies are still LUN and volume oriented. This results in complex, labor-intensive setup and management of DR.

This problem is most striking when looking at the coverage from Hurricane Irene that hit the East Coast of the US two weeks ago.  According to a recent interview with Forrester Research Analyst Rachel Dines in SC magazine:

“Many of the New York and Boston-based clients she has spoken with this week plan to use the event as an opportunity to gauge the effectiveness of their disaster recovery (DR) and BC programs.

“They’ve told me they’re treating this somewhat as a test scenario,” she said. “You know it’s coming, so there are some things you can do to be ready.”

This may include preparing systems to fail-over to a backup data center in case of an interruption, such as flooding or a long-term power outage, Dines said.”

Frankly, we feel that companies need to have regular test failovers long before disaster strikes, and we’re surprised to hear that established companies would use a hurricane as an “opportunity” to do a failover test.  Zerto’s customers do regular failover tests, long before a crisis – even a small one – occurs.

Other companies choose a different route, using backup generators to power their production data center.  This is another DR solution that seems ineffective.  According to Mike Vizard, in an article on IT Business Edge,

“…a lot of frustrated business owners will be looking into the … merits of buying a generator.”

He mentions that downed power lines caused major work disruptions on the East Coast, and many took a full week to be fixed by the utility companies.

“The business lesson to be learned from this is that public utilities can’t be counted on in any kind of major emergency.”

Generator power as a DR plan seems to be popular.  Many companies use generators to keep their offices open and their data center running in a disaster scenario.  According to an article on datacenterknowledge.com:

“One of the areas that experienced widespread power outages was Richmond, Virginia, where [company name removed] operates a large data center.

“As Hurricane Irene made landfall, Dominion Power suffered extensive outages,” said [the company]. “However, the company’s Richmond Data Center transferred from utility to generator power without impacting our customers. Diesel deliveries will continue to fuel the data center’s generators until electricity is restored, ensuring our customers’ mission-critical data is secure.

“Dominion Power has assured us they are working to restore power to the company’s Richmond Data Center as soon as possible. In the meantime, we have scheduled fuel deliveries to keep our generators operating efficiently.”

Unfortunately, we feel this is one of the least reliable disaster recovery planning tactics for data centers.  Most obvious is the fact that a true disaster – fire, earthquake, tsunami, major hurricane – could wipe out a company’s offices, networks and even employees.  In such cases, the fact that the power is still on is the least important part of the company’s ongoing functioning.

Plus, generators rely on diesel fuel being trucked in to keep them running, which is costly and dangerous. One building, 60 Hudson St. in New York City, nearly had a major disaster on its hands when a small fire broke out. The building, a hub for telecom operators in NYC, stores 80,000 gallons of diesel fuel in the basement to enable generators to run in case of a major power outage. One small fire in 2010 nearly created a far bigger disaster than any random power outage could.

Generator power as a DR plan may be popular, but it’s a foolhardy way to keep a data center running when there are easier and cheaper ways to fail over company data to a safe location.

So what’s the right solution?

Mike Vizard, in the article mentioned earlier, thinks more companies will rely on the cloud for their DR plan.

“Most providers of cloud computing services expect that Hurricane Irene will wind up being a boon for cloud computing services. After all, nothing demonstrates the need to be able to access remote servers better than massive flooding and a loss of power that prevents people from returning to work.”

We agree.

By using Zerto’s hypervisor-based approach to disaster recovery, companies can simply and cost-effectively migrate their data to the cloud, and continue to work remotely in the event of a serious disaster scenario.  In fact, a cloud provider need not use the same storage vendor in order to serve as a company’s replication site.  Zerto offers the ability to mix storage from any vendor using any technology. And instead of using Hurricane Irene to test recovery (!), with Zerto, failover testing is performed at your convenience, with the click of a button, and has no impact on the production environment.

Time will tell what the full impact of Hurricane Irene is; in the meantime, we’re pleased that Zerto’s customers were not impacted at all.

VMworld 2011 Roundup: Can You Read My Mind? All seem to agree virtualization’s grown up http://www.zerto.com/blog/general/vmworld-2011-roundup-can-you-read-my-mind-all-seem-to-agree-virtualization%e2%80%99s-grown-up/ http://www.zerto.com/blog/general/vmworld-2011-roundup-can-you-read-my-mind-all-seem-to-agree-virtualization%e2%80%99s-grown-up/#comments Fri, 09 Sep 2011 08:30:38 +0000 Zerto http://zerto.com/?p=1179 In honor of the amazing VMworld 2011 Killers concert…

“When you were young”

Everything at VMworld was bigger than in previous years. The number of attendees rose from 15,000 last year to over 19,000.  Many new companies exhibited, with much larger booth space in the solutions exchange for the biggest companies.

And of course, the community activities and networking opportunities were plentiful, as Tony Dunn of VMware noted on Twitter.

“Can you read my mind?”

Everyone seemed to be on the same page about Virtualization taking over the data center (and no longer just a segment within the data center).  Virtualization is much more mature – we’ve moved beyond ‘should mission critical applications be virtualized’ to the organizational and operational challenges of environments that are already virtualized.  One of the questions we asked people who stopped by the Zerto booth was, “How much of your data center is already virtualized?”  Nearly everyone we spoke to had between 70% and 95% of their data center virtualized.

We see this as particularly significant because products like Zerto, vKernel and others are built for virtualized environments – and specifically for mission-critical and tier-1 applications on virtualized environments.  The discussion at VMworld has moved past the ‘if’s and ‘why’s of virtualization to address the ‘how’s.  How can I best accomplish data protection and recovery in a virtualized environment? How can I adapt the rest of my infrastructure to maximize the benefit I gain from virtualization?  Those were some of the key concerns of CIOs and IT managers we met at VMworld.

“Are we human or are we dancers?”

100% human! We can see how it might be tempting to use outside entertainers like magicians and booth babes to do lead scanning at a big show like VMworld. Without naming names, this seemed to be a growing practice from exhibitors at VMworld, and the subject of much discussion in the VMworld blogosphere. Truth is, I’d venture to guess that most companies using tricks for lead scanning are now finding that the leads they scanned at the event are not from people who genuinely have an interest in their product. We’re pretty proud that everyone in our booth could talk about Zerto’s product intelligently, and that hundreds of attendees who are actively looking for DR solutions came over to talk to us. So, we may not be dancers, but that doesn’t mean we couldn’t have a bit of fun with our swag! Zerto’s hyper “visors” were a big hit with the swag collectors.

“Mr. Brightside”

The true bright side of VMworld for us was meeting existing and potential customers who seem to really understand the need for hypervisor-based replication and the value we bring.  Nearly everyone we spoke to had first-hand experience of difficulty in the implementation of their disaster recovery plan.

And of course, the real bright side: winning TechTarget’s “Best of Show” and “Gold Award for Business Continuity and Data Protection”, and ViBriefing’s “Technology Excellence Platinum Award”. We’re honored and humbled.

Zerto Wins VMworld 2011 Best of Show! http://www.zerto.com/blog/general/zerto-wins-vmworld-2011-best-of-show/ http://www.zerto.com/blog/general/zerto-wins-vmworld-2011-best-of-show/#comments Wed, 31 Aug 2011 08:26:41 +0000 Zerto http://zerto.com/?p=1175 What a day!

Today, Zerto received the VMworld 2011 Best of Show award and the Gold award in the Business Continuity and Disaster Recovery category!

As Jo Maitland, Executive Editor at SearchCloudComputing.com, said when she gave us the award, “It’s host-based replication done right.” Wow. We’re glad to bring a product to market that resonates with people and fills a real need.

Thanks to the VMworld community for your overwhelming support and thanks to TechTarget and SearchServerVirtualization.com for the opportunity and this huge win!

Our press release with details about the win is here.

Zerto at VMworld 2011 – Free ‘Hypervisors’ for all! http://www.zerto.com/blog/general/zerto-at-vmworld-2011-%e2%80%93-free-%e2%80%98hypervisors%e2%80%99-for-all/ http://www.zerto.com/blog/general/zerto-at-vmworld-2011-%e2%80%93-free-%e2%80%98hypervisors%e2%80%99-for-all/#comments Tue, 30 Aug 2011 08:25:17 +0000 Zerto http://zerto.com/?p=1173 Zerto’s entire booth staff made its way to VMworld from (or via) the East Coast of the US in time for the show.  Believe me, the irony (of being a company that makes disaster recovery software, stranded in airports around the world) wasn’t lost on us!  But we’re finally here, and we’re excited to be giving demos of our hypervisor-based replication from the Solutions Exchange expo floor at booth #973.

Here are some pics from Day 1 of the show – mostly shots of our popular ‘hyper visors’! Thanks to @sfoskett and @theronconrey for being good sports!

Disaster Recovery Planning: Building a Plan for DR to the Cloud http://www.zerto.com/blog/disaster-recovery-planning/disaster-recovery-planning-building-a-plan-for-dr-to-the-cloud/ http://www.zerto.com/blog/disaster-recovery-planning/disaster-recovery-planning-building-a-plan-for-dr-to-the-cloud/#comments Sun, 21 Aug 2011 08:21:48 +0000 Zerto http://zerto.com/?p=1171 Most disaster recovery planning starts the same way: a company concludes that it needs to be prepared for disasters or outages and therefore requires a ‘business continuity and disaster recovery plan’. Typically the CEO or company board decides that a DR plan is needed and approves creating one from the point of view of business exposure. They then assign IT the task of building a detailed DR plan. What might such a plan include? A second datacenter facility, servers, storage, networks, bandwidth between the two sites and replication equipment are all needed.

The capital expense is always much higher than the company anticipated. Add to that the operational costs of managing another datacenter, effectively doubling their operational expenses. The secondary data center is only active a small amount of time: maybe one test per calendar quarter, and during the real failover due to outage, which may happen once a year (if they are unlucky). Essentially, they have a datacenter in use for 5 days a year – but pay for 365 days.
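
The utilization math is stark. A quick back-of-the-envelope sketch (the annual cost is an invented figure, purely for illustration):

```python
# Back-of-the-envelope math for the dedicated secondary site described
# above; the annual cost figure is hypothetical, for illustration only.
days_in_use = 4 + 1         # four quarterly tests plus one (unlucky) real failover
days_paid_for = 365
annual_cost = 1_000_000     # hypothetical capex + opex of the second datacenter

print(f"site utilization: {days_in_use / days_paid_for:.1%}")             # 1.4%
print(f"cost per day of actual use: ${annual_cost / days_in_use:,.0f}")   # $200,000
```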

This creates a compelling case to drive enterprises to the ‘pay-by-the-drink’ nature of the cloud. By using the cloud, they only pay for what they use. Sounds like an obvious case for moving DR to the cloud, right?

Not so fast. Unfortunately, there are two key areas where existing DR solutions fall short, making DR to the cloud difficult. For one, cloud providers need to offer the same storage and technologies as their customers in order to serve as a replication site. Second, cloud providers need to avoid taxing their customers’ production environments while protecting data, particularly during failover testing. Cloud providers simply do not have the tools to commit to either of these two crucial elements of enterprise-class disaster recovery.

Zerto’s solution was built to solve these problems, and to help large enterprises get their DR to the cloud. We offer:

  • Storage agnostic replication – the cloud provider need not use the same technology or storage vendor in order to serve as a company’s replication site. Zerto offers the ability to mix storage from any vendor using any technology.
  • Easy recovery testing – Failover testing is performed with the click of a button, with no impact on the production environment. In fact, the production environment continues to replicate while the test is running. This is essential for the cloud provider’s SLA to customers.

Since Zerto Virtual Replication removes interoperability issues by enabling array-agnostic replication and allows easy recovery testing, it is an ideal solution for enterprises to use in order to put a cost-effective DR plan into action on the cloud.

To find out more, go to our website or come see a demo at booth #973 at VMworld 2011!

How Woodforest National Bank Found a Summer-time Home for Its Data http://www.zerto.com/blog/general/how-woodforest-national-bank-found-a-summer-time-home-for-its-data/ http://www.zerto.com/blog/general/how-woodforest-national-bank-found-a-summer-time-home-for-its-data/#comments Thu, 07 Jul 2011 08:19:44 +0000 Zerto http://zerto.com/?p=1169 Woodforest National Bank has a long history as one of the strongest community banks in the nation, with more than 750 branches in 17 states across the United States.   Its motto, “Banking Your Way…EVERY DAY AND NIGHT!” reflects a strong commitment to customer service, bolstered by its unique schedule: 24×7 live banking with tellers and personal bankers.  In addition, Woodforest National Bank is open every day of the year, save for Christmas Day.

Even more impressive, Woodforest takes extraordinary precautions with customer data. Woodforest migrates its entire datacenter every six months to avoid hurricane season, due to its proximity to the Texas coastline.  Each June, all production applications are pre-emptively “failed-over” to a secondary site, with a return to the primary site by mid- to late-October, avoiding the hurricane season. If a hurricane strikes the area, all of the systems are already protected.

Unfortunately, migration wasn’t simple.  Woodforest is 95 percent virtualized on VMware vSphere. Using SAN-based replication with Woodforest’s virtualized environment made data migration more complicated.  “Virtualization is supposed to be about reducing complexity, but SAN-based replication makes deploying new applications, partial failovers, and other recovery activities complex and manual processes,” stated Richard Ferrara, CTO. “This type of replication required complex, labor-intensive setup and management and didn’t allow us to take full advantage of the flexibility and mobility of our virtual infrastructure.”

How did Zerto help?

Zerto provides a new replication paradigm – hypervisor-based replication – that removes the storage headaches from the equation. Providing fully virtual-aware replication with true block-level consistency across hosts and storage devices, Zerto is fully integrated into vCenter and makes end-to-end application recovery as simple as the click of a button.

Read the full case study to find out how hypervisor-based replication solved Woodforest’s migration challenge.

According to Ferrara, “Now we’re managing our applications, not our storage. Our disaster avoidance strategy is ready to overcome any disaster.” Watch the video.

We’ve spent the last six months or so installing Zerto’s hypervisor-based replication product in large enterprise data centers across the United States.  The feedback from our beta customers has been very positive and has helped us fine-tune the product.  We’re thrilled that our early adopters have found real value in the Zerto solution – so much so that they’re already publicly supporting our product.

Special thanks to Woodforest National Bank and Richard Ferrara for their support!

Zerto Launch Day – The Start of Something Big http://www.zerto.com/blog/general/zerto-launch-day-%e2%80%93-the-start-of-something-big/ http://www.zerto.com/blog/general/zerto-launch-day-%e2%80%93-the-start-of-something-big/#comments Thu, 23 Jun 2011 08:18:12 +0000 Zerto http://zerto.com/?p=1167 Zerto is thrilled by the positive reception we’ve received for our hypervisor-based replication technology. We’ve heard great feedback from analysts, bloggers and journalists, as well as the enterprise customers and cloud providers in our beta program. Links to the media coverage from our launch will be updated here. Enjoy!

Israeli Startup Zerto Aims to Bring Disaster Recovery to the Cloud, by Arik Hesseldahl of All things D.

Zerto Announces Itself, by Chris Mellor of The Register.

Zerto Helps when Real Problems Hit Your Virtual Servers by Stacey Higginbotham of GigaOM

Replicating the Hypervisor by Mike Vizard of CTO Edge

Zerto’s Introduction of Hypervisor Based Replication Frees Enterprises to Rethink Their Approach to DR, by Jerome M Wendt of DCIG.

Zerto Replication and Disaster Recovery the easy way, by Gabrie van Zanten at Gabe’s Virtual World.

“I could see using Zerto’s software right now to replace or supplement the expensive SAN replication software that we are currently using” Read the full Post by Christopher Wells at VSamurai.

Hello World – Zerto Launches Today! http://www.zerto.com/blog/general/hello-world-%e2%80%93-zerto-launches-today/ http://www.zerto.com/blog/general/hello-world-%e2%80%93-zerto-launches-today/#comments Wed, 22 Jun 2011 08:16:20 +0000 Zerto http://zerto.com/?p=1165 We’re thrilled to finally unveil our company’s technology to the world – Zerto Virtual Replication. You can watch the launch of our company, live-streamed from GigaOM’s Structure conference taking place today, June 22, at 5pm PST!

So what does Zerto do?

Zerto makes the process of protecting and replicating critical applications on virtualized and cloud environments easy and cost-effective.

Let’s back up a bit.

Fortune 500 enterprises such as airlines, banks and manufacturing companies are NOT yet moving their critical applications to the cloud. They could be, but they have too many concerns, including security & manageability. Another critical reason they are not moving to the cloud is business continuity and disaster recovery (BC/DR).

Disaster recovery is not about backup, disks or data. It is about getting your business back online after a disaster, or even a simple power outage. Business continuity requires a recovery datacenter. This means servers, storage and networking, not to mention the operational nightmare of managing another datacenter.

Cloud providers today cannot offer BC/DR in the cloud for their large enterprise customers. It’s not because they don’t want to, they simply don’t have the right technology. Current BC/DR replication software (a $3B market*) is not designed for the cloud. It is stuck in the physical world: hardware-centric, inflexible, complicated and expensive.

Zerto addresses the shortcomings of existing BC/DR solutions (drumroll please …) by moving replication to the hypervisor – the management layer of virtual environments. It is the only replication solution built to accommodate large enterprises seeking to move datacenter resources and applications to the cloud.

Why the hypervisor?

By moving replication to the hypervisor layer, Zerto enables data replication on any VM. End users can now protect any relevant virtualization elements, such as VMs, virtual disks, virtual networks and virtual applications, instead of less relevant physical elements: physical servers, storage arrays and LUNs. This allows protection of specific applications and all of their dependencies. Zerto’s user interface is also fully integrated into VMware’s vCenter, making it familiar and extremely easy to use. Plus, Zerto is not tied to a particular brand of storage so customers can replicate to a cloud provider using a different storage vendor – it’s completely storage-array agnostic.
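
A toy illustration of the architectural point (this is not Zerto’s implementation; it only shows the shape of the idea): when writes are observed at the hypervisor layer, replication is keyed to VMs and virtual disks, so the storage underneath can be anything:

```python
class ToyHypervisor:
    """Illustrative only: a 'tap' at the virtualization layer sees writes
    as (vm, virtual disk, offset, data), never as arrays or LUNs."""
    def __init__(self):
        self.taps = []

    def register_tap(self, tap):
        self.taps.append(tap)

    def vm_write(self, vm, vdisk, offset, data):
        for tap in self.taps:
            tap(vm, vdisk, offset, data)   # mirror the write to the replicator
        # ...then pass the write through to whatever storage backs the VM...

shipped = []
hv = ToyHypervisor()
hv.register_tap(lambda vm, vd, off, data: shipped.append((vm, vd, off, len(data))))
hv.vm_write("crm-db", "disk0.vmdk", 4096, b"\x00" * 512)
print(shipped)   # [('crm-db', 'disk0.vmdk', 4096, 512)] -- no LUNs anywhere
```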

We’re also in good company. Cisco is moving its networking switches into the hypervisor. And Juniper just acquired Altor Networks, moving firewalls into the hypervisor. Zerto is moving BC/DR into the hypervisor.

Are you interested in learning more? The product is currently in beta with many companies, including five Fortune 500 customers and three top cloud providers.

Want more info? Check out the video.
Contact us today to get access to Zerto! We’ll show you a better way.

Back to the Zerto Website

The World Has Changed – But is it ready for the Criticloud? http://www.zerto.com/blog/general/the-world-has-changed-%e2%80%93-but-is-it-ready-for-the-criticloud/ http://www.zerto.com/blog/general/the-world-has-changed-%e2%80%93-but-is-it-ready-for-the-criticloud/#comments Mon, 20 Dec 2010 09:07:35 +0000 Zerto http://zerto.com/?p=1160 Clouds are everywhere. Actually, one might think that not only are clouds everywhere, they can also be anything… I’ve attended a number of conferences and met with many IT professionals in the last few months, and in all of these encounters, without exception –in all of them, the cloud part of the discussion was prefaced with a joke, a movie clip, or an apology that basically said ‘there is no chance that we all understand the term cloud the same way, so let me explain what I mean by cloud…’. Amazing, isn’t it? This must be the first time that hype was so open to interpretation.

So now it’s my turn to try to explain what I see in all of these clouds clouding the IT skies.

For a cloud to be a real cloud, be it private, public, hybrid, or any other, it must adhere to a number of principles:

  • Utility / elasticity – the notion of consuming applications and IT resources in an infrastructure agnostic manner. You need more, you consume more. Any dependencies and configuration changes are hidden from the customer
  • Mobility – your workloads must be able to move around (for whatever reason) fairly easily
  • Mix & match – a real cloud cannot support only one technology or one vendor. Just like you would not expect Google or Amazon to support only Windows and not iPhones. A cloud servicing an enterprise must be able to support all of the technologies running in that enterprise
  • Performance – if you need high performance, you should get high performance from the cloud. You may be charged for it, but that’s ok – pay by the drink. You request it and you get it (and pay for it…)

So where do we stand today? The industry is making significant progress in each one of these principle areas. There are a good number of real public cloud providers, as well as internal clouds, providing services and creating offerings that are very close to what we expect from a cloud.

The challenge, as I see it, is in the Mission Critical Cloud, which we can nickname the ‘Criticloud’.

When I was at Kashya / EMC, we dealt mainly with the requirements of mission-critical applications: applications with strict RTO/RPO requirements, driven by the business’s dependency on them. Take airlines, for example: they can’t operate without their applications. If their applications are down, even for a short while, they can cause delays that in turn trigger a domino effect: flight cancellations resulting in compensation for people on cancelled flights, who in turn rebook on alternate airlines, leading to the loss of customers who decide to fly with someone else next time.

Current cloud technology is not there yet. I know, everybody is trying to position themselves as if they support full enterprise applications rather than VMs. But try to implement mission-critical applications in the cloud and you will see it won’t go the full distance. Not because of any single reason, but rather due to a combination of reasons rendering a mission-critical cloud both highly complex to manage and inflexible:

  • BC/DR (my favorite subject, considering my background) – when you take into consideration BC/DR requirements, at the application level, suddenly you see that mobility is disabled, or scalability is not as ‘effortless’ as it should be
  • Performance – You need your high throughput application to spread across many hosts and LUNs, to ensure maximum performance. Can you configure it this way without manually drilling down to every single layer? Can you still maintain your flexibility?
  • Heterogeneity – What if you have two datacenters: one running EMC storage and the other NetApp (normally the result of an acquisition). How simple is it to manage them in a single cloud? In theory this should be easy, as applications use virtual disks and not real disks, but in practice…

One of the common threads here is, once again, the storage layer. At the server layer we have a single vendor practically dominating the enterprise cloud space, providing a single platform that does a decent job of answering all our cloud requirements. Not perfect by any means, but it looks like it’s heading the right way. However, in the storage layer we still don’t see any real shift in product architecture or design that will enable storage to adhere to the cloud principles. And on top of that, different storage vendors have different approaches to closing these gaps…

Criticloud? Not yet, but we are on our way.

As everyone is saying, it’s a journey to the cloud. The journey is being slowed down, not just by implementation issues, but also by technology gaps. More and more vendors are addressing these gaps, and I believe we’ll soon have the whole puzzle figured out. How soon? Good question.

And as we journey to the cloud, all of us at ZeRTO would like to wish you all a happy holiday season and a great new year!

The World Has Changed – Is Hardware Getting Softer? http://www.zerto.com/blog/general/the-world-has-changed-%e2%80%93-is-hardware-getting-softer/ http://www.zerto.com/blog/general/the-world-has-changed-%e2%80%93-is-hardware-getting-softer/#comments Mon, 08 Nov 2010 09:03:10 +0000 Zerto http://zerto.com/?p=1156 A couple of months ago I attended VMworld for the first time. WOW! 17,000 people talking, queuing, eating, and partying, all about clouds and virtualization. Who would have thought?

[Chart: VMworld attendance over the years]

When I came back, I looked up VMworld attendance over the years. The chart tells the virtualization story. And when you talk to the IT guys in the trenches, it makes perfect sense. Virtualization makes their lives easier and their infrastructure more flexible and scalable.

That’s the key. Virtualization truly changes the expectations of these IT managers and their executives. It’s not only hype (although there’s no shortage of hype). People really expect flexibility and scalability (in both directions) from their infrastructure.

So how does this affect the old Hardware vs. Software battle?

Dramatically!

It’s already evident in the server space. Look at how servers are being marketed today: price and performance. That’s about it; in other words, commoditization. VMware, Hyper-V, and Xen have turned servers into plain vanilla boxes.

The change doesn’t stop at servers. There is also a huge shift in the software marketplace, with new and old players bringing new products, as well as changing their existing products to adapt to this new dynamic, virtual IT infrastructure.

Being an ‘old storage guy’, and judging by what I see in the market today, I believe the most dramatic change is still ahead, and will take place in the storage world. Many have said in the past that storage is going to be commoditized. It is already starting.

A few years back there was a lot of hype around storage virtualization. Companies raised huge amounts of money, made a lot of noise, and usually either died quietly after burning through it all or were sold for pennies on the dollar. Storage virtualization was addressing a clear market pain, and the solutions developed actually solved that pain, but they caused much greater pains. Now, when everything is becoming virtualized (who doesn’t have a ‘virtualize first’ policy?), most enterprises don’t really ask whether or not storage is going to be virtualized.

They ask what IS ‘storage’, and what is NOT really ‘storage’ anymore?

We learned to think of storage as big, expensive boxes, such as the EMC Symm or the HDS Lightning, full of features and services that are highly complicated to configure. They did everything inside the storage, and applications were forced to map to the storage. Now, when you run on top of a hypervisor and your storage is virtualized anyway (e.g. VMFS), does it really make sense to run all these services and operations inside the storage? Is there any benefit for replication, backup, encryption, clustering, etc. to stay outside the hypervisor, ‘looking up’ at the virtualization and application layers?

I think not.
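
To make the point concrete: conceptually, replication in the hypervisor amounts to splitting each write to a virtual disk into a journal bound for the replica, with no dependency on the array underneath. Here is a minimal Python sketch of that idea; all names are hypothetical, and no specific product’s design is implied:

```python
# A hypothetical sketch: replication as a software service above the storage.
# A write "splitter" in the hypervisor I/O path copies each write to a
# journal bound for the replica site; the array below sees only ordinary
# I/O. All class names here are invented for illustration.

import queue
import threading

class VirtualDisk:
    """A toy virtual disk: a map of offsets to data blocks."""
    def __init__(self):
        self.blocks = {}

    def write(self, offset, data):
        self.blocks[offset] = data

class ReplicatingDisk:
    """Wraps a virtual disk; every write is also journaled for the replica."""
    def __init__(self, disk, journal):
        self.disk = disk
        self.journal = journal

    def write(self, offset, data):
        self.disk.write(offset, data)     # the normal write path
        self.journal.put((offset, data))  # the split copy, hardware-agnostic

def replica_worker(journal, replica):
    # At the remote site, replay the journal in order onto the replica.
    while True:
        offset, data = journal.get()
        replica.write(offset, data)
        journal.task_done()

journal = queue.Queue()
primary, replica = VirtualDisk(), VirtualDisk()
threading.Thread(target=replica_worker, args=(journal, replica), daemon=True).start()

rdisk = ReplicatingDisk(primary, journal)
rdisk.write(0, b"application data")
journal.join()  # wait until the replica has caught up
assert replica.blocks == primary.blocks
```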

And that’s why we are starting to see new products being developed to deliver ‘virtual software’: software solutions for what were once considered pure hardware features and services. Look at the Cisco Nexus, for example. Cisco, the 800-pound gorilla of networking, decided to deliver its next-generation switch for virtualized environments as a software product. Who would have imagined that a few years ago… In security, the software firewall developed by Altor Networks is challenging the hardware firewalls of Cisco and Checkpoint. Software instead of appliances!

It’s happening in networking and security. It will happen in storage, big time. It seems I am not the only one seeing this.

The way I see it, yes, virtualization is already making hardware much ‘softer’!

Ziv

It’s Good to Be Back! WOW, the World Has Changed! http://www.zerto.com/blog/general/it%e2%80%99s-good-to-be-back-wow-the-world-has-changed/ http://www.zerto.com/blog/general/it%e2%80%99s-good-to-be-back-wow-the-world-has-changed/#comments Wed, 27 Oct 2010 08:00:13 +0000 Zerto http://zerto.com/?p=1154 Hi, I’m Ziv, ZeRTO’s founder and CEO.

Welcome to our new blog!

Some of you out there might remember me from my previous company, Kashya, which made quite a bit of noise back in 2005. At Kashya we developed some really revolutionary replication technology for disaster recovery, currently used by many Fortune 500 companies.

I co-founded Kashya back in 2000 with some great friends, sold it to EMC in 2006 for multi $$$, and stayed at EMC long enough to understand the needs of big enterprises, and also that I needed a long vacation :-)

Then the fun part began… Went traveling to some far corners of the world for a long while, saw some really amazing places like Namibia, Fiji, Vietnam… and met even more amazing people. What a world we have. I hope we will find ways to take better care of it! Came back, worked on my PhD. Wasn’t as fun as traveling…

Then it happened again. My brother Oded, who led Kashya’s R&D, managed to pull me back into the IT world. He felt that the IT world was changing, and not in a minor way: a mega disruption was reshaping some of the core models and technologies. For us entrepreneurs, Change = Opportunity to improve and solve new problems. After several long nights and enough wine, ZeRTO was born!

The first thing we did was talk to all of our old buddies in IT (customers, colleagues, investors) to understand what was going on. Apparently, in the three years I was away, IT had made a huge shift, and clearly enough, virtualization and cloud were the new game.

When we sold Kashya back in 2006, virtualization was one of those things people did just to get rid of old hardware. Why give real hardware to your fax server, print server, some old NT server, and an internet server whose owner nobody even knows? Just stick them all on a hypervisor and save space, energy, and costs. If you asked an enterprise IT person to virtualize his production servers, the response would be “why?” (or in some cases, “have you lost your mind?”).

At the end of 2006, we started seeing some adventurous (at the time we said irresponsible…) IT people actually thinking about moving critical stuff to VMs. Sometimes, just sometimes, they even started piloting the virtualization of replicated systems, forcing us at Kashya to support it. Can you believe this was the state of IT less than four years ago?

The IT World has changed!

In the following posts I will share my perspective on this new world: what I learned, what has changed, what stayed the same, and what is still a moving target. Why should you find it interesting? Think of it this way: take someone as deeply entrenched in the IT world as I was, put him away in a time capsule for three years, and bring him back only recently. Seeing the one big jump instead of the many minor changes is a different perspective. If that’s not enough, let’s just say that based on what I learned, I founded ZeRTO ;-)

It’s really exciting for me to start writing this blog. So much has changed in such a short time, and I do believe that the biggest changes are still ahead.

And yes, I know you’re all anxious to know what ZeRTO is all about. Well, we are still in stealth mode (yes, us too…), so I can’t say much. But I can say that we have assembled a top-notch team, that we’re backed by the best VCs in the world (never said I’m objective), Greylock Partners and Battery Ventures, and that we are soon starting beta with some great, forward-thinking enterprises that have virtualized their mission-critical apps or are transitioning to a cloud model. Stay tuned for the next post.

Welcome!

Ziv

Ikea Up in a Cloud (of Smoke) – Is BC/DR Going to the Cloud? http://www.zerto.com/blog/general/ikea-up-in-a-cloud-of-smoke-%e2%80%93-is-bcdr-going-to-the-cloud/ http://www.zerto.com/blog/general/ikea-up-in-a-cloud-of-smoke-%e2%80%93-is-bcdr-going-to-the-cloud/#comments Mon, 22 Feb 2010 09:12:08 +0000 Zerto http://zerto.com/?p=1163 A few weeks ago, a local Ikea megastore went up in flames, literally. Luckily nobody was hurt, but the huge building was destroyed, with all the furniture evaporating in a cloud of smoke.

One interesting anecdote, from a BC/DR perspective, was a small paragraph in the local paper describing how the IT team rushed in with the firefighters to salvage some of their hardware from the flames. The reason quoted by the reporter was their need to recover payroll and HR data…

DR anyone?

The week of the Ikea fire, I joined ZeRTO’s product team for a road trip to meet architects and CTOs of the top cloud providers (two of them have merged since…). Different providers are in different stages of rolling out their cloud offerings: some are still building V1, and some are already working on the second version. However, they all share key pieces of a common cloud vision.

One common denominator is their view regarding the future of BC/DR, and how the cloud will play an integral part in that future.

Thinking about it, it really makes sense. Imagine a Director of Infrastructure in an enterprise being handed the task of providing or expanding DR. This requirement often comes from the CEO or the board, so it receives top priority. The textbook way of doing it involves getting a DR site, either owned or a co-lo, purchasing all the necessary hardware and software (CapEx…), installing it, configuring it, managing it, and buying the communication pipes… Very expensive, very complex, and an excellent trigger for considering alternative solutions!

But in the ‘new age’, aka cloud, there will be easier and simpler alternatives:

Option 1: Moving the complete infrastructure to the cloud and letting the cloud provider worry about BC/DR – this is the Holy Grail for cloud providers. However, it is not a realistic option for most enterprises today, as they do not yet feel the cloud is adequate for their enterprise-class needs. Moving production to the cloud will take time, just as virtualizing mission-critical production applications is only now becoming a viable option.

Option 2: DR to the cloud (or DRaaS – DR as a Service) – point your replication solution to a cloud provider, sign an SLA, and that’s it!

Well, DR to the cloud could be a true win-win. For the enterprise, it’s a simple way to achieve high levels of resiliency without all the complex and expensive processes required to build and manage its own DR site. For the cloud provider, it is a great first step toward getting enterprises to move critical applications into the cloud, while sidestepping many of the reservations associated with moving production itself there.

But there are some challenges. Consider enterprise-class DR solutions of the kind that usually operate at the SAN level (consistency requirements, low RTO/RPO): these solutions require similar hardware at both ends, which is not really an option for a cloud provider. Some cloud providers do offer this today, but due to its nature, it is mostly delivered as one-off professional services engagements ($).

Another set of challenges arises from multi-tenancy and manageability, from the cloud provider’s point of view. Can they, or should they, over-allocate recovery resources? What happens if there is a regional disaster and several of their customers have to fail over at once? How do they protect, and more importantly recover, data from many customers efficiently while maintaining the security levels and SLAs they have promised?
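
As a back-of-the-envelope illustration of the over-allocation question, here is a small Python sketch. All figures are invented; the point is only that the overcommit ratio directly caps how many customers can fail over at the same time:

```python
# A hypothetical back-of-the-envelope: how far can a provider overcommit
# recovery resources? All figures below are invented for illustration.

def max_simultaneous_failovers(customers, per_customer_units, overcommit_ratio):
    """Recovery units actually provisioned, divided by per-customer need."""
    total_reserved = customers * per_customer_units
    provisioned = total_reserved / overcommit_ratio
    return int(provisioned // per_customer_units)

customers, units = 100, 10
for ratio in (1.0, 2.0, 5.0):
    n = max_simultaneous_failovers(customers, units, ratio)
    print(f"overcommit {ratio:>3}x -> {n:>3} of {customers} customers "
          f"can fail over at once")
# A 5x overcommit looks great on the balance sheet, until a regional
# disaster hits 30 customers in the same metro at the same time.
```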

And there are additional challenges. To be cost-effective and competitive, a cloud provider must be able to set up the service quickly and without re-architecting the customer’s environment. Additionally, enterprises expect all cloud services to be consumed by subscription. Can cloud providers deliver this with the cost structure of their current infrastructure solutions?

The way I see it, DR to the cloud is going to happen, big time. I’m not the only one seeing this, judging by what I read from Gartner, Forrester, IDC and others. I spent a few days at VMware Partners Exchange last week (more on that in a future post) and everybody I discussed this vision with agreed. How quickly the vision will be realized depends on the time it will take for the cloud providers and technology vendors to overcome these challenges.

BC/DR is changing, the cloud is changing, and we will see DR to the cloud taking off in the next few years!

On a side note, I have been writing recently about virtualization, how it changed the world, and claiming that storage will have to change accordingly. Check out what Steve Duplessie posted in his blog – similar content, but in his fun and colorful style.

Ziv
