Storage Challenges in the Software-Defined Data Center

What is the software-defined data center and what does it mean for storage?

The software-defined data center is an approach to computing that takes the physical resources of the data center (i.e. compute, network, and storage) and organizes them into three large resource pools, with the intent of getting the right resource to the right place at the right time. It extends the compute virtualization concepts popularized by VMware and others to networking and storage. In doing so, the software-defined data center simplifies the process of defining all of the resources needed to support applications.

In this new data center model, intelligence moves to a storage software control plane that provides a single, system-wide view rather than a device-centric view. This control plane must be capable of controlling heterogeneous storage, from high-end enterprise arrays to inexpensive commodity disk. It is a great idea, but one with several challenges. Block and file capacity needs to be pooled without sacrificing either performance or the capabilities of the underlying storage. Provisioning, access, and management need to be centralized. And there needs to be support for adding new storage types as they emerge.
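To make the control-plane idea concrete, here is a minimal sketch in Python of how a single control plane might pool heterogeneous backends behind one provisioning interface. The class and method names are hypothetical, invented for illustration rather than drawn from any vendor's actual API.

    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        """Adapter interface: the control plane sees devices only through this."""

        @abstractmethod
        def free_gb(self) -> int: ...

        @abstractmethod
        def provision(self, size_gb: int) -> str: ...

    class EnterpriseArray(StorageBackend):
        """Stand-in for a high-end array wrapped by a vendor adapter."""

        def __init__(self, name, capacity_gb):
            self.name, self.capacity_gb = name, capacity_gb

        def free_gb(self):
            return self.capacity_gb

        def provision(self, size_gb):
            self.capacity_gb -= size_gb
            return f"{self.name}/lun-{size_gb}gb"

    class ControlPlane:
        """Single, system-wide view: capacity is pooled across all backends."""

        def __init__(self):
            self.backends = []

        def register(self, backend):
            self.backends.append(backend)  # new storage types plug in here

        def total_free_gb(self):
            return sum(b.free_gb() for b in self.backends)

        def provision(self, size_gb):
            # Naive placement; a real control plane would weigh tiers and policy.
            for b in self.backends:
                if b.free_gb() >= size_gb:
                    return b.provision(size_gb)
            raise RuntimeError("no backend has enough free capacity")

    plane = ControlPlane()
    plane.register(EnterpriseArray("array-01", 10_000))
    volume = plane.provision(500)  # the caller never picks a device

The point of the adapter interface is that commodity disk, or an entirely new storage type, can be added by registering another backend without changing anything callers see.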

Realizing the Private Cloud

In the bigger scheme of things, software-defined storage is just another step in the journey to the cloud. Enterprise customers have embraced virtualization on x86 commodity servers to maximize compute resource utilization for some time now. The adoption of cloud architectures has amplified the need for business agility and cost savings. With the compute layer mostly virtualized and hypervisors a commodity, it makes sense that the move to abstract physical dependencies would extend to networks and storage (i.e. software-defined storage). But while compute and, more recently, networking in the modern data center have been greatly commoditized, enterprise storage has been a hold-out. For the most part, it still runs proprietary controller software that is not always easy to manage or scale, especially as a collective entity.

Storage Challenges in the Cloud

A recent post about addressing multiple storage challenges in the cloud examined the limitations of storage deployed in virtualized data centers, including those moving to private and hybrid cloud models.

To recap, the multiple storage challenges in the cloud include:

  • Multiple control points: There are many storage control points within a data center, often including multiple control points for a single storage vendor. With most organizations running a multi-vendor storage strategy, the number of control points gets compounded by the number of vendors. There is no single point of control for it all.
  • Multiple complex management technologies: Several different management solutions, each designed for a particular array type (e.g. block, file), are often used in-house. There is no one standard way to manage it all.
  • Multiple provisioning steps: A multi-step provisioning process, with numerous hand-offs and approvals among application, virtualization, server, and storage administrators, is commonly required to deliver storage resources. There is no quick and easy way to get storage.
  • Multiple views of the storage environment: Multiple management technologies mean that visibility into available capacity, usage, and the general health of storage resources is not consistent. There is no single, holistic view of storage capacity usage and availability from which to manage allocations or plan for growth.

Challenges with Software-Defined Storage

These challenges become more acute when viewed through the lens of software-defined storage. At VMworld recently, VMware characterized software-defined storage as an extensible storage framework encompassing SAN, NAS, and DAS. This vision compounds the multiple storage challenges in the cloud with further challenges on the way to a software-defined storage model, including:

  • More control points: The software-defined storage model abstracts and pools legacy SAN and NAS storage with multiple control points; it also extends the concept to include direct attached storage (DAS) on desktops/laptops, increasing the number of control points further. Ad hoc scripting that attempts to manage traffic across the data paths connecting these many devices will create bottlenecks that impede application availability and performance.
  • More complex management abstractions: Software-defined storage as articulated at VMworld conceives of data services as a combination of VMware services and storage vendor data services for replication, encryption, snapshots, and more. The proprietary APIs common to many storage systems will not make it easy to integrate vendor-specific data services into classes of storage service, or to make full use of the capabilities that exist in the underlying legacy storage (a sketch of such a mapping follows this list).
  • More provisioning steps: Add new storage types like DAS, and the time-consuming provisioning steps common to data centers get compounded. Multiple control points and management tools create bottlenecks rooted in technology limitations. Multiple provisioning steps are an inefficient use of manpower and make the whole process more susceptible to human error.
  • More views of more storage: Additional storage types also lead to more fragmented views. Combining DAS at the edge with SAN/NAS storage in the back office might harness all available capacity, but it presents a new challenge for a centralized view.
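To illustrate the integration problem described above, here is one way the mapping from abstract classes of storage service to vendor-specific data services could be expressed. The service class names and adapter identifiers are hypothetical; real integrations would sit behind adapters wrapping each vendor's proprietary API.

    # Hypothetical mapping from abstract service classes to the vendor data
    # services that implement them; adapter names are illustrative only.
    SERVICE_CLASSES = {
        "gold": {
            "replication": "vendor_a.remote_mirror",  # proprietary API behind an adapter
            "snapshots": "vendor_a.snap",
            "encryption": "platform.software_encrypt",
        },
        "silver": {
            "replication": None,  # not offered at this tier
            "snapshots": "vendor_b.snapshot_v2",
            "encryption": None,
        },
    }

    def data_services_for(service_class):
        """Resolve the class a consumer requests into concrete vendor services."""
        return SERVICE_CLASSES[service_class]

The hard part in practice is not the table itself but building and maintaining the adapters that each proprietary API requires.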

Success Yet to Be Determined

What’s the recipe for success?

Software-defined storage calls for the same solution articulated for the cloud, but with an even greater sense of urgency for the modern data center. You need a single control point, open APIs based on the REST design model, fewer storage provisioning steps (e.g. the time to stand up storage should align with the time it takes to create VMs), and centralized monitoring and management. Basic horizontal services like authentication, metering, and chargeback/show-back should also be part of the package, freeing IT and service providers to focus on creating higher-value services. Ideally, the solution should also not be tied to any specific cloud stack (e.g. VMware, Microsoft, OpenStack, etc.).
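As a sketch of what those fewer provisioning steps might look like through an open REST API, the following Python snippet collapses the multi-step hand-off into a single authenticated call. The controller URL, endpoint, and payload fields are invented for illustration; they do not correspond to any particular product's API.

    import requests

    # Hypothetical SDS controller endpoint; substitute your platform's real API.
    CONTROLLER = "https://sds-controller.example.com/api/v1"

    def provision_volume(name, size_gb, service_class, token):
        """One REST call instead of hand-offs among app, server, and storage admins."""
        resp = requests.post(
            f"{CONTROLLER}/volumes",
            json={"name": name, "size_gb": size_gb, "service_class": service_class},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"id": "...", "state": "ready"}

    # Provisioning in roughly the time it takes to create a VM:
    volume = provision_volume("vm-datastore-01", 500, "gold", token="<auth token>")

Because this is a plain REST request, the same pattern can be driven from any cloud stack's orchestration layer, which is what keeps the solution stack-agnostic.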

Right now, there is no single definition and no clear winners among the ecosystem of storage vendors in this space. Some early claimants to software-defined storage suggest that commodity disk is the preferred path compared to leveraging legacy storage. They also suggest you can save money by getting rid of your legacy storage and replacing that sunk investment with their commodity disks. More realistic is the notion that software-defined storage can help protect data center investments and get the most out of existing storage. It provides a scalable and efficient way to control and manage heterogeneous storage, from high-end enterprise arrays to inexpensive commodity disk, as required to meet the demands of enterprise and cloud workloads.

About the Author: Mark Prahl