VMware, Nicira and the Software Defined Data Center (SDDC)
By Zerto, on 7 September, 2012
This is a guest post from Zerto’s Director of Central US Sales, Steve Thomsen
A few months back I wrote a post titled, “Is the Hypervisor now the Storage Array?” Well, it seems that post was pretty spot on (wink, wink). Very recently VMware made its biggest acquisition to date, a $1.2B buyout of Nicira, a company that specializes in networking at the Hypervisor level, also known as Software Defined Networking (SDN). Cisco was already a player here, but its technology still depends on underlying Cisco infrastructure at the physical layer. Not so with Nicira. Nicira is all about the Hypervisor.
If you look at what has happened over the past few years, you see that VMware’s vision of the Software Defined Data Center (SDDC) has really taken shape. VMware started it off with server virtualization. The security vendors made their move: Altor Networks (now Juniper), McAfee, Symantec, and a few start-ups such as Catbird. Cisco got into the game with the Nexus 1000v. And then VMware really went all in with the acquisition of Nicira.
But why the craze about data center services in the hypervisor all of a sudden? Certainly it is not about infrastructure consolidation. This is happening because business is about applications. The security vendors, Cisco, and of course VMware knew that in order to better serve the application, i.e., the business, you have to be really “close” to the application.
Prediction: The SDDC is here to stay and it is only going to grow, eventually taking over.
The industry has developed storage and server-based data protection services to protect the application. But if the application now lives in the Hypervisor, what does the storage know about the VMs residing on a datastore? The answer: nothing. And what does server/VM-based software know about the other VMs that comprise the application outside of the VM it resides on? The answer again is nothing.
For Disaster Recovery, this is a pickle. Without “application awareness” in the data center, data protection is severely compromised, involving complex manual processes in order to force-fit physical DR solutions onto virtualized environments. Enter Hypervisor-based storage software and data protection.
Zerto: Driving Storage Software into the Hypervisor & across the Software Defined Data Center
When it comes to Hypervisor-based storage software, think of all of the virtual disks in the management platform (vCenter in this case) as you would the disks in an array. What if you could apply storage software services against these virtual disks, which now really form the “virtual” array, with no regard to the underlying physical infrastructure? Just like VMware decided to do with networking when it bought Nicira.
This is regardless of the underlying storage topology, protocols (FC, iSCSI, NFS, and non-shared storage), or the number and types of arrays or storage vendors in the mix. You can now create logical groups for protection, what Zerto calls Virtual Protection Groups (VPGs), based upon the VMs and their associated VMDKs. If you have a multi-tiered application (for example, a web server tied to an app server that also has a backend database server) with VMs residing on NFS and FC datastores on disparate arrays, such as NetApp and EMC, you can now protect and recover those VMs as a single application instance. This was not possible before. Virtualization tools such as Storage vMotion and Storage DRS don’t break replication/protection anymore.
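To make the grouping idea concrete, here is a minimal Python sketch of what a Virtual Protection Group conceptually looks like. This is purely illustrative: the class and field names are my assumptions, not Zerto’s actual API, and the real product operates at the Hypervisor IO layer rather than on Python objects.

```python
from dataclasses import dataclass, field

# Illustrative model only -- names are assumptions, not Zerto's API.

@dataclass
class VMDK:
    name: str
    datastore: str   # e.g. "netapp-nfs-01" or "emc-fc-02"
    protocol: str    # "NFS", "FC", "iSCSI", ...

@dataclass
class VM:
    name: str
    disks: list

@dataclass
class VirtualProtectionGroup:
    """Groups the VMs of one application for consistent protection,
    regardless of which arrays or protocols back their VMDKs."""
    name: str
    vms: list = field(default_factory=list)

    def datastores(self):
        # The VPG spans whatever datastores its members happen to use.
        return {d.datastore for vm in self.vms for d in vm.disks}

# A three-tier app spread across NFS (NetApp) and FC (EMC) datastores:
web = VM("web01", [VMDK("web01.vmdk", "netapp-nfs-01", "NFS")])
app = VM("app01", [VMDK("app01.vmdk", "netapp-nfs-01", "NFS")])
db  = VM("db01",  [VMDK("db01.vmdk",  "emc-fc-02",   "FC")])

vpg = VirtualProtectionGroup("crm-app", [web, app, db])
print(sorted(vpg.datastores()))  # two arrays, one protection unit
```

The point of the sketch is the boundary: protection is defined by the application (the VPG), not by the LUN or array, so the mixed NetApp/EMC backing is invisible to the recovery unit.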
Zerto’s Hypervisor-based Storage Software & Integration with the Software Defined Data Center
This is where things get really interesting. Because Zerto is so tightly integrated into the Hypervisor, we understand relationships and can perform tasks at a very granular as well as macro level. Rather than thinking of VMs and datastores, think of VMDKs and Applications.
Zerto understands the configuration of the VMDKs comprising a VM. This allows for flexibility in terms of how a VMDK is replicated as well as allocated at the target. If a VMDK represents a swap or page file, or perhaps scratch space for a database, then you probably don’t need to replicate the IO. However, you certainly want to recover these VMDKs in the event of a failover, recovery, or testing scenario. Zerto allows you to do just that. Administrators can designate specific volumes as “Swap,” in which case Zerto will not replicate the IO to the VMDK but will recreate the VMDK upon failover. This, coupled with the fact that Zerto only replicates IO to the protected VMDKs, and not the full LUN, allows Zerto to use on average 20-40% less bandwidth than leading storage replication software solutions.
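The swap-disk policy described above can be sketched in a few lines of Python. Again, this is a hypothetical illustration of the idea, not Zerto’s implementation: the function names, the "Swap"/"Protected" labels, and the disk tuples are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the swap-disk policy: skip replicating writes
# to disks marked "Swap", but still recreate a disk of the right size
# at failover. All names here are illustrative assumptions.

def should_replicate_io(disk_policy: str) -> bool:
    """Only disks under normal protection generate replication traffic."""
    return disk_policy != "Swap"

def failover_disks(disks):
    """Every disk is recreated at the target, replicated or not;
    swap/scratch disks simply come back empty."""
    return [(name, size_gb) for name, policy, size_gb in disks]

disks = [("os.vmdk",       "Protected", 40),
         ("data.vmdk",     "Protected", 200),
         ("pagefile.vmdk", "Swap",      16)]

replicated = [d for d in disks if should_replicate_io(d[1])]
print([d[0] for d in replicated])        # only OS and data disks replicate
print([d[0] for d in failover_disks(disks)])  # all three exist after failover
```

The bandwidth saving falls out of the first function: the page file’s constant churn never crosses the WAN, yet the VM still boots at the recovery site with all of its disks in place.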
There are design benefits to Hypervisor-based storage software and data protection as well. You can create big ol’ datastores and still have granularity of recovery. Now you have fewer LUN IDs and datastores to manage. We all love that.
What has really happened is that you have improved recovery, dramatically increased your agility, and decoupled your VMs from the storage environment.
Again, it’s about what is important to the business: your applications.
A couple of weeks ago I was having coffee with the product manager of a large, global Cloud Services Provider (CSP). One of the ideas we batted back and forth was that physical servers, network devices, and storage arrays are really now just there for IO. All intelligence has moved into the Hypervisor. Servers, security, networking, and now storage software are all there. The Software Defined Data Center (SDDC) has indeed arrived.