The Hidden Gem in VMAX

Reducing I/O latency has long been one of the most important objectives for information technologists. After all, lowering I/O latency minimizes CPU waits, which increases overall performance. The storage options available in the market today offer different price/performance characteristics. For example, 10K RPM hard disk drives (HDDs) provide cost-efficient ($/GB) storage, while flash drives provide performance-efficient ($/IOPS) storage and lower latency. In addition, most storage arrays have a DRAM cache, which provides the highest I/O processing capability as well as the lowest latency. A holistic look at lowering I/O latency therefore requires considering the benefits of leveraging DRAM cache. Though flash media provides lower latency than hard disk drives, latency can be reduced further by deploying intelligent caching algorithms and large DRAM caches. In this blog I'll discuss how VMAX and VMAX3* use DRAM cache, in combination with flash storage media, as the ultimate means of achieving the lowest I/O latency.
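To make the $/GB versus $/IOPS distinction concrete, here is a minimal sketch with entirely hypothetical drive prices and specs (none of these numbers come from EMC or any vendor); it only illustrates why HDDs look attractive on capacity cost while flash looks attractive on I/O cost.

```python
# Illustrative only: hypothetical drive specs and prices, not actual VMAX or vendor data.
drives = {
    "10K RPM HDD": {"price_usd": 300, "capacity_gb": 1200, "iops": 150},
    "Flash (SSD)": {"price_usd": 900, "capacity_gb": 800,  "iops": 30000},
}

for name, d in drives.items():
    cost_per_gb = d["price_usd"] / d["capacity_gb"]      # capacity efficiency
    cost_per_iops = d["price_usd"] / d["iops"]            # performance efficiency
    print(f"{name}: ${cost_per_gb:.2f}/GB, ${cost_per_iops:.4f}/IOPS")
```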

As flash storage becomes pervasive in different parts of the I/O stack, we see nearly all VMAX systems shipping with flash drives. Service level objective (SLO) based management and Fully Automated Storage Tiering (FAST) allow maximum flexibility in a consolidation platform such as VMAX. Depending on application needs, some applications may leverage tiered storage while others that need consistently low latency may use all-flash media inside VMAX. This approach ensures our customers do not have to sacrifice enterprise data services such as TimeFinder, SRDF, data-at-rest encryption (D@RE), validated data integrity through T10 DIF, end-to-end data integrity, QoS (quality of service), multi-controller, high availability, and non-disruptive everything.

Flash media enables reduced I/O latency, whether applications are placed on tiered media or on all-flash media. The former reduces average I/O latency by serving most I/Os from flash and some from magnetic media; the latter reduces latency consistently by serving all I/Os from flash. Furthermore, as IT professionals weigh these options, they must also keep in mind the latency reduction that comes from caching. As an enterprise consolidation platform, VMAX includes many optimizations to maximize the potential of serving I/Os from its DRAM cache. Thanks to smart caching algorithms and its large cache, VMAX systems consistently achieve high cache-hit percentages.
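One way to see why the cache hit rate matters so much is a simple weighted-average calculation. The sketch below uses assumed, round-number service times (not measured VMAX latencies) to show how effective read latency falls as the cache hit rate rises, regardless of whether the backend is flash or HDD.

```python
# Illustrative only: the service times below are assumed round numbers,
# not measured VMAX values.
def effective_read_latency(hit_rate, cache_latency_us, media_latency_us):
    """Weighted-average read latency for a given cache hit rate."""
    return hit_rate * cache_latency_us + (1.0 - hit_rate) * media_latency_us

CACHE_US = 100    # assumed DRAM cache service time (microseconds)
FLASH_US = 500    # assumed flash read service time
HDD_US = 6000     # assumed 10K RPM HDD read service time

for hit_rate in (0.0, 0.4, 0.7, 0.9):
    flash = effective_read_latency(hit_rate, CACHE_US, FLASH_US)
    hdd = effective_read_latency(hit_rate, CACHE_US, HDD_US)
    print(f"hit rate {hit_rate:.0%}: flash backend {flash:.0f} us, HDD backend {hdd:.0f} us")
```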

The table below shows the average I/O response time of different storage media in VMAX. As we look at the latency characteristics of these media, it is no surprise that serving I/Os from DRAM yields the lowest I/O latency.

[Table: average I/O response time by storage media in VMAX]

* The caching benefits apply to both VMAX and VMAX3 products. For brevity, I will only use VMAX in the rest of the blog.

To quantify the success of caching in VMAX systems, we analyzed performance data collected from more than one thousand VMAX systems in the field over many months. The chart below shows read hits as a percentage of all reads, in 10% increments. The data show that in about 90% of VMAX systems, at least 40% of read I/Os are served out of cache with very low I/O latencies.
[Chart: distribution of read cache-hit percentage across VMAX systems]
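For readers curious how such a distribution is built, the sketch below bins per-system read-hit percentages into 10% buckets. The data here are randomly generated stand-ins; the actual chart was produced from performance counters collected from fielded VMAX systems.

```python
# Illustrative only: synthetic random data standing in for per-system counters;
# the real analysis used performance data collected from fielded VMAX systems.
import random
from collections import Counter

random.seed(0)
# Hypothetical per-system counters: (read_hits, total_reads) over a sampling period.
systems = [(random.randint(400, 1000), 1000) for _ in range(1000)]

buckets = Counter()
for read_hits, total_reads in systems:
    pct = 100.0 * read_hits / total_reads
    bucket = 10 * min(int(pct // 10), 9)   # 10%-wide bins; 100% folds into the 90-100% bin
    buckets[bucket] += 1

for lo in sorted(buckets):
    print(f"{lo:3d}-{lo + 10:3d}% read hits: {buckets[lo]} systems")
```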

In addition to read hits, all write I/Os are written directly into VMAX's mirrored cache. As a result, all write I/Os also enjoy DRAM's low latencies. The chart below shows the percentage of write I/Os observed in the same systems. For most systems the write percentage varies between 20% and 40%.

[Chart: write I/Os as a percentage of total I/Os across the same VMAX systems]
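To illustrate the general write-back idea behind these low write latencies, here is a toy sketch (not VMAX's actual cache design): a write is placed in two mirrored cache copies and acknowledged immediately, and destaging to backend media happens later in the background, so the host sees DRAM latency for every write.

```python
# Toy write-back cache sketch -- illustrative only, not the VMAX implementation.
class MirroredWriteCache:
    def __init__(self):
        self.primary = {}    # cache copy held by one controller
        self.mirror = {}     # mirrored copy held by a second controller
        self.dirty = set()   # blocks not yet destaged to backend media

    def write(self, block, data):
        """Acknowledge the host as soon as both cache copies hold the data."""
        self.primary[block] = data
        self.mirror[block] = data
        self.dirty.add(block)
        return "ack"         # host sees DRAM latency, not disk latency

    def destage(self, backend):
        """Later, flush dirty blocks to flash/HDD in the background."""
        for block in list(self.dirty):
            backend[block] = self.primary[block]
            self.dirty.discard(block)

backend_media = {}
cache = MirroredWriteCache()
cache.write(42, b"payload")      # acknowledged from mirrored DRAM
cache.destage(backend_media)     # asynchronous flush to persistent media
```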

By now, I hope you understand why I chose "hidden gem" as the title. The smart VMAX caching algorithms and the sheer size of the cache are indeed taken for granted by most. The cache has been one of the most important performance enablers for VMAX and will continue to be so.

To summarize, the VMAX family of products provides you with:

  • A large cache governed by intelligent algorithms that allows all write I/Os and most read I/Os to be served out of cache.
  • An SLO-based management model that allows using
    • all-flash media for applications that require all-flash performance
    • tiered storage media for applications that require a price/performance trade-off
  • A rich set of enterprise data services including TimeFinder, SRDF, D@RE, validated data integrity through T10 DIF, end-to-end data integrity, QoS, multi-controller, high availability, and non-disruptive everything.

About the Author: Adnan Sahin

Adnan Sahin is VMAX Product CTO in the Enterprise and Mid-Range Systems Division. His responsibilities include the architecture and design of Fully Automated Storage Tiering (FAST), flash optimizations, and related research areas. Dr. Sahin holds a PhD in Electrical and Computer Engineering and a high-tech MBA from Northeastern University.
