Get Back To Where You Once Belonged

In December, EMC launched the Global Data Protection Index (GDPI), a survey of 3,300 end users across the globe. There were many interesting findings, and I encourage you to visit the study’s microsite to learn more.

In this blog post, I want to focus on the cost of downtime as it relates to the number of data protection vendors in use.

One of the most telling statistics is that the average cost of unplanned downtime is 4.8x higher for respondents using three data protection vendors than for those using one. I want to explore how IT practitioners can regain control of their sprawling protection infrastructure and thus reduce the frequency and cost of downtime.

First, let’s agree that all information must be protected, and so data protection must be a corporate imperative to ensure that SLAs are met for all data types.

However, what often happens is that companies start with a protection architecture that either does not scale or lacks the breadth to cover long-term needs. Over time, as environments evolve, IT often outgrows its existing solutions and frequently adds separate protection methodologies on an ad hoc basis. The resulting protection architecture looks nothing like the original and is inconsistent and inefficient.

The problem of uncontrolled protection architecture growth is often exacerbated by departmental, application, or infrastructure administrators. Typically, these IT practitioners want more control over data protection, and as a result they implement their own solutions. They usually choose application-centric protection methods such as Oracle RMAN or one of the many virtualization-centric packages. This is often done outside the purview of the data protection team, which prevents corporate-wide visibility and manageability.

As a result, corporate data protection, composed of multiple disjointed solutions, becomes fragmented and highly complex to manage. This leads to the increased cost of downtime found in the GDPI.

A better way

Clearly, a proliferation of protection solutions is a bad thing from a cost of downtime perspective.

What companies need is a protection architecture that delivers the high-level manageability and reporting required for corporate compliance while at the same time offering application owners the control that they want. Ideally, the solution would leverage existing application-centric tools such as Oracle RMAN or the vSphere Web Client.

By implementing such a solution, IT can ensure that corporate SLAs are being met while at the same time delivering the freedom and control that application owners desire.

How do I get there?

The first step along the path to implementing an integrated approach to data protection is to know what you have. You must assess the various protection applications in use and what they are used for. For example, the Oracle team might use RMAN to dump hourly database copies to local storage. This initial research sets the baseline for what application-specific tools need to be supported and the underlying SLA that needs to be delivered. It also provides powerful insight into the various technologies in use in the datacenter.
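To make that baseline concrete, here is a minimal sketch in Python of the kind of inventory this assessment produces. The teams, tools, frequencies, and SLA values are purely illustrative assumptions for this post, not findings from the GDPI or output from any EMC product.

# Hypothetical inventory of protection tools in use, used to set the
# baseline for application-specific support and SLAs.
from dataclasses import dataclass

@dataclass
class ProtectionEntry:
    team: str          # owning team or department
    application: str   # workload being protected
    tool: str          # protection method currently in use
    frequency: str     # how often copies are taken
    target: str        # where the copies land
    rpo_hours: float   # implied recovery point objective

inventory = [
    ProtectionEntry("DBA", "Oracle ERP", "RMAN", "hourly", "local storage", 1.0),
    ProtectionEntry("Virtualization", "vSphere VMs", "VM-centric backup", "daily", "backup appliance", 24.0),
    ProtectionEntry("Backup team", "File servers", "enterprise backup app", "daily", "dedupe target", 24.0),
]

# Count the distinct tools in play -- a quick proxy for the fragmentation
# that the GDPI associates with higher downtime costs.
tools = {entry.tool for entry in inventory}
print(f"{len(tools)} distinct protection tools in use: {', '.join(sorted(tools))}")
for entry in inventory:
    print(f"{entry.application}: {entry.tool}, {entry.frequency} to {entry.target} (RPO ~{entry.rpo_hours}h)")

Even a simple table like this makes it obvious where SLAs overlap, where they conflict, and which application-specific tools any consolidated solution must continue to support.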

Armed with knowledge of the solutions in place, you should assess your primary enterprise backup application and whether it can provide the needed application-specific functionality. If it cannot, you might want to consider alternatives such as EMC’s Data Protection Suite.

The next step is a migration roadmap, which is your plan for moving to an integrated data protection solution. The roadmap should focus on a staged implementation in which you gradually migrate specific applications to the new infrastructure. Any change can be challenging for you and your application owners, so it is important to implement the new technologies in a phased approach.

Finally, remember that in some cases it might actually make sense to have multiple backup environments. An example could be your test and development systems, where data is changing rapidly and retention requirements are limited. In these cases, the exercise is still valuable because it helps you better understand the technologies in use.

Summary

In summary, backup environments are fluid. As the GDPI indicated, fragmentation of backup infrastructure results in significant costs and puts your business at risk. As an IT practitioner, you need to understand your entire protection infrastructure and the technologies in use. Only with this knowledge can you make the right decisions about system design and ensure that you can meet business requirements.

About the Author: Jay Livens