Data Services or Predictable Performance? – You Shouldn’t Have to Choose

With All-Flash storage systems, predictable performance is a given.  So, if predictable performance is a given, what sets one All-Flash array apart from the others?

The answer is DATA SERVICES!

Data services are what make today’s All-Flash storage intelligent and add the unique capabilities required for the new cloud era.

So what exactly are data services in the context of All-Flash storage?

Data services provide functionality above and beyond simply storing data, helping you simplify, optimize, protect and, at the end of the day, get more from your storage investment.  Quick examples of data services include snapshot copies, quality of service (QoS), remote replication, intelligent caching, data reduction, encryption and many more…

So why don’t all storage systems offer all possible data services?  It comes down to design and architecture. Developing, testing and supporting data services, especially at the tier-1 mission-critical level, is no small effort; it requires a long-term commitment and vast engineering resources. Running data services within a storage array also consumes system resources such as CPU and memory, very valuable commodities within today’s storage systems.  If there aren’t enough resources available to run multiple data services, then things like predictable performance can be impacted.
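To make the resource tradeoff concrete, here is a deliberately simple toy model (not any vendor’s actual scheduler; the service names and CPU costs are made-up numbers): data services share a fixed controller CPU budget, host I/O is served first, and whatever doesn’t fit gets throttled.

```python
# Toy model: data services competing for a fixed controller CPU budget.
# Service names and per-interval CPU costs are illustrative, not measured.

CPU_BUDGET = 100  # arbitrary units of controller CPU per interval

services = {
    "host_io": 60,         # serving reads/writes always comes first
    "data_reduction": 20,  # inline dedupe/compression
    "snapshots": 10,
    "replication": 15,
    "encryption": 10,
}

def schedule(services, budget):
    """Grant CPU in priority order; throttle whatever doesn't fit."""
    granted, remaining = {}, budget
    for name, cost in services.items():  # dict order = priority order
        granted[name] = min(cost, remaining)
        remaining -= granted[name]
    return granted

grants = schedule(services, CPU_BUDGET)
for name, got in grants.items():
    status = "ok" if got == services[name] else "THROTTLED"
    print(f"{name:15s} wanted {services[name]:3d}, got {got:3d} ({status})")
```

Total demand here is 115 units against a budget of 100, so the lowest-priority services come up short — exactly the “something has to give” problem: either a data service is throttled, or host I/O latency takes the hit.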

Dell EMC offers a portfolio of All-Flash storage systems to meet a range of use cases and customer requirements.  Each product has a unique design and architecture to meet a specific range of requirements and price points.  We understand, for example, that there is a difference between what you can expect from a dual controller architecture (like our industry leading mid-range Dell EMC Unity product line) compared to a multi-controller ‘scale out’ architecture (like our industry leading tier-1 Dell EMC VMAX and XtremIO product lines).  Both certainly play a key role in satisfying our customers’ varying requirements but both also offer their own range of data services based on their architectural design.

What happens when you try to run too many data services on an architecture not designed or proven to be able to handle them?  Simple – you run out of resources (like CPU and memory) and something has to give.

One example of where we believe a storage vendor may be trying to get too much out of their architecture is Pure Storage and their FlashArray product line.

If you have seen the list of data services Pure Storage recently announced (many of which are not yet available) a few questions come to mind:

  1. Can their FlashArray dual-controller architecture handle running everything they announced while maintaining predictable performance?
  2. How will performance tradeoffs be managed?
  3. Will they really be able to execute on their committed timeline?

As mentioned earlier, it is data services that set one storage system apart from another, so we understand why Pure Storage is trying to pack their FlashArray with all the basic data services they were missing, some of which customers have been waiting on for a while.  But when you look at the architecture of the FlashArray product, and consider that the FlashArray already has to throttle back data reduction when the system gets busy in order to maintain performance, it seems unlikely it can handle running even more data services in parallel.  How will these additional data services get enough resources to operate without impacting performance and/or the data services already running?

Key Questions to Ask Pure Storage:

  • Is FlashArray now utilizing resources from both controllers (front end and back end) to try to provide more resources for data services? If so, how will this impact controller failovers and/or upgrades when one controller goes offline?
  • Will there be best practices for deploying data services without impacting each other or overall performance?
  • Can you leverage QoS to make sure the performance of critical data services (like remote replication) is not affected by other data services absorbing resources?
  • Will you have to choose between performance and data services based on which, and how many, data services you want to run?
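The QoS question above can be sketched as a reservation scheme — purely an illustration of the concept, not Pure Storage’s (or anyone’s) actual implementation; all names and numbers are hypothetical. Critical services get a guaranteed share of controller CPU first, and best-effort services split whatever remains in proportion to their demand:

```python
# Sketch of reservation-based QoS for data services (hypothetical model).
# Critical services receive guaranteed CPU; best-effort services split
# the remainder proportionally to their demand.

def allocate(reservations, best_effort, capacity):
    """reservations: {name: guaranteed units}; best_effort: {name: demand}."""
    alloc = {}
    for name, guaranteed in reservations.items():
        # Honor guarantees first, up to whatever capacity is left.
        alloc[name] = min(guaranteed, capacity)
        capacity -= alloc[name]
    demand = sum(best_effort.values())
    for name, want in best_effort.items():
        # Proportional slice of the leftover capacity (integer units).
        alloc[name] = capacity * want // demand if demand else 0
    return alloc

alloc = allocate(
    reservations={"host_io": 50, "replication": 20},
    best_effort={"data_reduction": 30, "snapshots": 10},
    capacity=100,
)
print(alloc)
```

In this sketch, replication’s 20 units are safe no matter how hungry data reduction gets — which is precisely the guarantee you would want a vendor to articulate before stacking more services onto the same two controllers.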

To use an automobile analogy – the Ford Fusion (a 4-cylinder, 5-passenger car) and the Ford Explorer (an 8-cylinder, 7-passenger SUV) are both consistently best sellers, but they have completely different designs and serve different markets.  No matter how much you dress up a Ford Fusion to look like a Ford Explorer, it still has the engine and body of a Ford Fusion.  Moral of the story – if you want to offer a bigger and more powerful solution, you need to design one from the ground up.

It will be interesting to see how things play out.  Let us know what you hear!

Want to learn more from our ongoing blog series? Check out these recent blogs:

NVMe – the Yellow Brick Road to New Levels of Performance

Scale Out or Sputter Out? Why Every All-Flash NAS Platform Isn’t Created Equal

Mission Critical Is More Than Just a Buzzword

Jeff Boudreau

About the Author: Jeff Boudreau

Jeff Boudreau is President of the Infrastructure Solutions Group at Dell Technologies. In this role, Jeff is responsible for a global team of innovators that imagine, design and deliver the ISG portfolio of modern infrastructure—industry-leading solutions that accelerate and enhance data computation, storage, networking and data protection, and are integrated into our converged and hyperconverged offerings. Jeff joined Dell Technologies in 1998 (previously EMC) and has over 25 years of engineering, business management and executive leadership experience in the IT industry. Jeff has held a number of management, operations, and services leadership positions. Most recently, he was President of Dell EMC Storage, responsible for the development and management of a market-leading storage portfolio that helps organizations modernize their data centers, leverage the economics of the cloud and accelerate IT transformation. Prior to that, Jeff was Senior Vice President and General Manager for the Midrange Solutions business, leading the innovative engineering teams that delivered next-generation midrange solutions for managing customer data with less cost, complexity and risk. Jeff completed his undergraduate studies at Wentworth Institute of Technology and received an MBA from Northwestern University’s Kellogg School of Management. Jeff is based in Hopkinton, Massachusetts.