Five Reasons a Multi-hypervisor World is a Good Thing

Editor’s Note: Our next #DellWorld partner blogger is Rick Vanover, a product strategy specialist for Veeam Software based in Columbus, Ohio. Rick is a popular blogger, podcaster and active member of the virtualization community. Rick’s IT experience includes system administration and IT management, with virtualization the central theme of his career in recent years. Follow Rick on Twitter @RickVanover.

If you haven’t noticed, there is more than one hypervisor in today’s virtualization competition. For the longest time it seemed like VMware vSphere was the only game in town. Fast forward to Dell World 2011, and one of the central themes is driving data center innovation in the virtual era. Virtualization has grown up, and its benefits simply can’t be matched by the IT practices of the past.

How is this innovation driven by virtualization? Historically, we enjoyed capital investment savings and shorter time to market through enhanced deployment processes. Those benefits and others have fully matured to drive value in modern data center operations. And now Microsoft Hyper-V has brought a credible challenge to infrastructure decision makers weighing where to virtualize their environments. These decisions are a good thing. Here’s why:

1. App-to-metal visibility is available on both platforms. This was probably the most critical oversight when virtualization first became a mainstream practice. Simply put, visibility from the host hardware all the way up to the applications has long been a requirement of data center operations. With the initial wave of virtualization, we had no native way to see how one virtual machine’s issues could be tied to events within the virtualization infrastructure. Alerting, of course, was in place, but the ability to see the entire stack through one pane was missing. Things have changed. For both the vSphere and Hyper-V platforms, infrastructure administrators can now deliver a single pane of glass that shows the status of the hosts, the guest operating systems and the applications running on them. One key technology here is Microsoft System Center Operations Manager, which provides a full view from hardware to applications through a wide array of management packs and collectors that represent today’s data centers. (A simple illustration of this host-to-application correlation appears in the first sketch after this list.)
2. Rich features for virtual machine protection are available on both platforms. As with the adoption of any technology, one key readiness measure is the ability to recover if something goes wrong. This is a big deal here at Veeam, where we’ve announced that Veeam Backup & Replication will now support both vSphere and Hyper-V for agentless, full-featured virtual machine backups. While this may seem like a simple checkbox for platform support, the underlying technologies could not be more different. vSphere brings a full-featured but proprietary file system (VMFS), a rich set of APIs and a wide range of supported guest operating systems. Hyper-V brings a rich framework for file system and application consistency (VSS), a platform that is well known in virtually every data center in the world, and a very attractive feature set at incredible value for new virtualization installations. (The common shape of a consistent, agentless backup is outlined in the second sketch after this list.)
3. The ecosystems are already in place. As we all know, today’s enterprise data center is composed of many different technologies, and our goal is to manage dynamic requirements as efficiently as possible. To that end, Dell has fully embraced virtualization with a number of key initiatives. Whether you need VAAI-supported storage, which offloads heavy I/O tasks from vSphere environments to the disk storage systems, or a hardware-assisted VSS provider, which delivers application consistency for Hyper-V, Dell storage has a solution for both platforms. Add in Dell’s AIM (Advanced Infrastructure Manager) and concerns about having to commit to a single hypervisor platform are over. The same goes for the Dell PowerEdge server platform: Dell offers server systems that fully support Windows Server technologies, along with drivers and administration tools for vSphere environments. On the vSphere side, Dell customizations are available for the vSphere Hypervisor, ESXi, to provide an optimized experience on Dell server hardware.
4. Density goals can be met with both platforms. Early on, the measure of virtualization success seemed to be a high consolidation ratio: the number of virtual machines running on a single host. Today, the consolidation ratio isn’t a meaningful measurement on its own. Sure, virtual desktop infrastructure (VDI) environments pack more guest virtual machines onto each host, but those virtual machines are also very similar to one another. Server consolidation doesn’t have the consistent workload profile found in VDI. Since all virtual machines are not created equal, a blanket consolidation ratio is rather useless. The more important measure is the ability to deliver the required performance for every application and service in the environment. With vSphere and Hyper-V, we can provision resources according to the needs of all stakeholders instead of chasing a consolidation ratio number. (The third sketch after this list shows why the ratio alone misleads.)
5. Operating system and application requirements can be met. The key to delivering any service is the ability to run the operating systems and applications the business needs to satisfy stakeholders. While vSphere supports a large number of guest operating systems, Hyper-V also brings a sizeable offering for data centers, most notably a solid representation of Linux distributions.
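To make the single-pane idea from item 1 concrete, here is a minimal Python sketch of host-to-application correlation, assuming inventory and event data have already been exported from a monitoring tool such as Operations Manager. Every name and data structure here is hypothetical, illustrating the concept rather than any vendor’s API.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: which host runs which virtual machine.
VM_TO_HOST = {
    "sql-vm-01": "host-a",
    "web-vm-02": "host-a",
    "file-vm-03": "host-b",
}

# Hypothetical host-level events, e.g. exported from a monitoring tool.
HOST_EVENTS = [
    {"host": "host-a", "time": datetime(2011, 10, 12, 9, 58), "msg": "datastore latency spike"},
    {"host": "host-b", "time": datetime(2011, 10, 12, 7, 15), "msg": "NIC failover"},
]

def correlate(vm_alert, window_minutes=15):
    """Tie a VM-level alert back to recent events on the host running it."""
    host = VM_TO_HOST.get(vm_alert["vm"])
    window = timedelta(minutes=window_minutes)
    return [
        e for e in HOST_EVENTS
        if e["host"] == host and abs(e["time"] - vm_alert["time"]) <= window
    ]

alert = {"vm": "sql-vm-01", "time": datetime(2011, 10, 12, 10, 2), "msg": "slow query response"}
for event in correlate(alert):
    print(f"{alert['msg']!r} may relate to host event: {event['msg']!r}")
```

This is the kind of stack-wide correlation a management pack performs automatically; the point is that the mapping from application symptom to infrastructure event is now first-class on both platforms.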
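The consistency workflow item 2 alludes to has the same broad shape on both platforms: quiesce the guest, take a snapshot, copy the snapshot while the virtual machine keeps running, then release it. The Python sketch below outlines that sequence with stub functions; every call is a made-up placeholder standing in for platform mechanisms (VSS on Hyper-V, snapshot and guest-quiescing APIs on vSphere), not Veeam’s or either vendor’s actual interface.

```python
from contextlib import contextmanager

# Illustrative stubs only: these stand in for the platform calls a real
# backup product would make, and simply print what each step would do.

def request_guest_quiesce(vm):
    print(f"quiescing guest of {vm} (VSS writer freeze / tools quiesce)")

def create_snapshot(vm):
    print(f"creating point-in-time snapshot of {vm}")
    return f"{vm}-snap"

def remove_snapshot(vm, snap):
    print(f"releasing snapshot {snap} (changes merge back into {vm})")

def copy_virtual_disks(snap, target):
    print(f"copying disks of {snap} to {target} while the VM keeps running")

@contextmanager
def quiesced_snapshot(vm):
    """Quiesce, snapshot, and guarantee the snapshot is released."""
    request_guest_quiesce(vm)
    snap = create_snapshot(vm)
    try:
        yield snap
    finally:
        remove_snapshot(vm, snap)

def backup(vm, target):
    with quiesced_snapshot(vm) as snap:
        copy_virtual_disks(snap, target)

backup("sql-vm-01", r"\\backup-server\repository")
```

The try/finally shape matters: the snapshot must be released even when the copy fails, or the virtual machine pays an ongoing performance penalty.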
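As for item 4, a short self-contained example shows why a blanket consolidation ratio misleads: the same host comfortably runs twenty light desktops yet cannot honor the reservations of six heavy database servers, even though 6:1 looks like the "better" ratio. All figures are invented for illustration.

```python
# Illustrative capacity check: the question is not "how many VMs per
# host?" but "can this host honor every VM's resource reservation?"

HOST = {"cpu_ghz": 2.4 * 12, "ram_gb": 96}  # e.g. 12 cores at 2.4 GHz, 96 GB RAM

def fits(host, vms, headroom=0.85):
    """True if the VMs' reservations fit within the host while leaving
    headroom for spikes and failover."""
    cpu = sum(vm["cpu_ghz"] for vm in vms)
    ram = sum(vm["ram_gb"] for vm in vms)
    return cpu <= host["cpu_ghz"] * headroom and ram <= host["ram_gb"] * headroom

# Twenty light VDI-style desktops fit comfortably...
desktops = [{"cpu_ghz": 0.5, "ram_gb": 2}] * 20
# ...while six heavy database servers, a far lower ratio, do not.
databases = [{"cpu_ghz": 4.0, "ram_gb": 24}] * 6

print(fits(HOST, desktops))   # True  (20:1 ratio, light demand)
print(fits(HOST, databases))  # False (6:1 ratio, heavy demand)
```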

For almost any IT organization today, there is a real choice of which hypervisor to deploy, even for those who have already made investments in the vSphere or Hyper-V realm. The ecosystem is there, with products and solutions to deliver virtualization now.
