How Much Data Can You Lose – 3 Technologies to Minimize Data Loss

Data protection is critical to all customers, and yet there is a range of technologies that can deliver the required service levels. At EMC, we talk about a data protection continuum, but regardless of how you define it, the concept is simple – apply the right protection technology to meet application and business SLAs.

Companies run hundreds or even thousands of applications, all of which generate data. Some might be mission critical, like order entry systems, accounting packages or other similar environments, while others, like personal file shares or test and development servers, may be less important for ongoing operations. Don’t get me wrong, you never want to lose any data, but typically users are more tolerant of downtime in less critical applications.

The other consideration is cost. In general, the more frequently you back up (i.e., the more recovery points you create), the more storage you will need and the more expensive the solution will be. (On a side note, an efficient deduplication engine can dramatically reduce the cost of storing multiple data copies over a fixed time period, but I digress.) Cost management is always a challenge, but increased protection expenses can be more easily justified as application criticality increases.

When thinking about application protection, you need to balance your requirements for RTO (recovery time objective), the length of time it will take to recover information, and RPO (recovery point objective), the length of time between protection points. The technologies that should be considered for data protection include:

Traditional backup – These are backup operations that are typically completed on a nightly basis. However, it is critical to remember that a nightly backup strategy puts you at risk of losing up to 24 hours of data. This may not be a problem for less critical or infrequently changing data, but it can be more challenging for highly transactional or mission critical workloads. Traditional backup is often the best technology for retaining protected information for long periods of time.
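To make the exposure concrete, here is a minimal sketch (the function name and dates are purely illustrative, not part of any product) showing how the data-loss window for interval-based backup is simply the time elapsed since the last completed backup:

```python
from datetime import datetime, timedelta

def data_at_risk(last_backup: datetime, failure_time: datetime) -> timedelta:
    """Anything written after the last completed backup is lost if a
    failure occurs before the next backup finishes."""
    return failure_time - last_backup

# Nightly backup at midnight; a failure just before the next run
# loses almost a full day of changes.
loss = data_at_risk(datetime(2013, 6, 1, 0, 0), datetime(2013, 6, 1, 23, 30))
print(loss)  # 23:30:00
```

The worst case approaches the full backup interval, which is why a nightly schedule implies an RPO of up to 24 hours.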


Snapshots – This technology is typically embedded in disk arrays and enables non-disruptive creation of recovery points during the day. There are a number of elements to consider with snapshots, including application integration, the performance impact on the array during snapshot creation, and the long-term effect of retained snapshots on array performance. In practice, snapshots are good at creating intraday recovery points, and we frequently see users scheduling these hourly. An hourly snap strategy leaves a customer at risk of losing up to 60 minutes of data.
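The hourly model can be sketched in a few lines. This is a toy simulation (the function names are hypothetical, not an array API): generate an hourly snapshot schedule, then recover to the most recent snapshot taken before the failure.

```python
from bisect import bisect_right
from datetime import datetime, timedelta

def hourly_snapshots(start: datetime, count: int) -> list[datetime]:
    """Simulate an hourly snapshot schedule as a sorted list of timestamps."""
    return [start + timedelta(hours=i) for i in range(count)]

def latest_snapshot_before(snaps: list[datetime], failure: datetime) -> datetime:
    """The recovery point is the most recent snapshot preceding the failure."""
    i = bisect_right(snaps, failure)
    return snaps[i - 1]

snaps = hourly_snapshots(datetime(2013, 6, 1, 0, 0), 24)
failure = datetime(2013, 6, 1, 14, 45)
recovery_point = latest_snapshot_before(snaps, failure)
print(failure - recovery_point)  # 0:45:00 — always under 60 minutes of loss
```

Whatever the failure time, the gap back to the last snapshot is bounded by the schedule interval, which is the "up to 60 minutes" exposure described above.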


Continuous Data Protection – This model, also referred to as CDP, captures every single storage write and allows you to roll back in time to a previous point. CDP provides the ultimate in flexibility since it allows for the most granular recovery from corruption. In practice, this means that if an outage occurred at 3:29, you could roll back your storage volume to its last known good copy, which in this example would have been created at 3:28. From a risk perspective, CDP’s flexible restoration helps minimize unexpected data loss.
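The underlying idea is a journal of timestamped writes that can be replayed up to any point in time. Here is a deliberately simplified sketch (a toy in-memory journal, not how any CDP product is actually implemented) of rolling a volume back to the 3:28 state from the example above:

```python
from datetime import datetime

class WriteJournal:
    """Toy CDP journal: record every write with its timestamp, then
    rebuild the volume as it existed at any prior point in time."""

    def __init__(self):
        self.entries = []  # list of (timestamp, block, data) tuples

    def record(self, ts: datetime, block: int, data: str) -> None:
        self.entries.append((ts, block, data))

    def volume_at(self, ts: datetime) -> dict:
        """Replay all writes up to and including `ts` to reconstruct the volume."""
        volume = {}
        for when, block, data in self.entries:
            if when <= ts:
                volume[block] = data
        return volume

journal = WriteJournal()
journal.record(datetime(2013, 6, 1, 15, 28), 0, "good")
journal.record(datetime(2013, 6, 1, 15, 29), 0, "corrupt")
# Outage at 3:29 — roll back to the last known good point at 3:28:
print(journal.volume_at(datetime(2013, 6, 1, 15, 28)))  # {0: 'good'}
```

Because every write is journaled, the recovery granularity is the individual write rather than a scheduled interval, which is what drives CDP's near-zero data-loss exposure.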


As you review your environment, you need to consider each application you are protecting and how much data you are willing to lose. Typically, a combination of the above three technologies delivers the best balance of data protection and cost to meet both business SLA and budgetary requirements.
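That matching exercise can be framed as a simple filter: given an application's RPO, which technologies have a worst-case loss window that fits inside it? The loss windows below are the ones discussed in this post, and the helper is an illustrative sketch, not a sizing tool.

```python
from datetime import timedelta

# Worst-case data-loss windows for the three approaches discussed above.
TECHNOLOGIES = {
    "traditional backup": timedelta(hours=24),
    "hourly snapshots": timedelta(hours=1),
    "CDP": timedelta(0),
}

def candidates(rpo: timedelta) -> list[str]:
    """Return the technologies whose worst-case data loss fits the RPO."""
    return [name for name, loss in TECHNOLOGIES.items() if loss <= rpo]

# An application that can tolerate four hours of data loss:
print(candidates(timedelta(hours=4)))  # ['hourly snapshots', 'CDP']
```

In practice the cheapest qualifying option usually wins for each application, which is how the three technologies end up deployed side by side across an environment.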

About the Author: Jay Livens