Reducing I/O latency has long been one of the most important objectives for information technologists. After all, lowering I/O latency minimizes CPU waits, which increases overall performance. The storage options available in the market today offer different price/performance characteristics. For example, 10K rpm hard disk drives (HDDs) provide cost-efficient ($/GB) storage, while flash drives provide performance-efficient ($/IOPS) storage and lower latency. In addition, most storage arrays have DRAM cache, which provides the highest I/O processing capability as well as the lowest latency. A holistic approach to lowering I/O latency must therefore consider the benefits of leveraging DRAM cache. Though flash media provides lower latency than hard disk drives, latency can be reduced further by deploying intelligent caching algorithms and large DRAM caches. In this blog I’ll discuss how VMAX and VMAX3* use DRAM cache, in combination with flash storage media, as the ultimate means to achieve the lowest I/O latency.
As flash storage becomes pervasive throughout the I/O stack, we see nearly all VMAX systems shipping with flash drives. Service level objective (SLO) based management and Fully Automated Storage Tiering (FAST) allow maximum flexibility in a consolidation platform such as VMAX. Depending on application needs, some applications may leverage tiered storage while others that need consistently low latency may use all-flash media inside VMAX. This approach ensures our customers do not have to sacrifice enterprise data services such as TimeFinder, SRDF, data at rest encryption (D@RE), validated data integrity through T10 DIF, end-to-end data integrity, QoS (quality of service), multi-controller architecture, high availability, and non-disruptive everything.
Flash media reduces I/O latency whether applications are placed on tiered media or on all-flash media. The former reduces average I/O latency by serving most I/Os from flash and some from magnetic media. The latter consistently reduces latency for all I/Os by serving them entirely from flash. Furthermore, as IT professionals weigh these options, they must also keep in mind the additional latency reduction that caching provides. As an enterprise consolidation platform, VMAX includes many optimizations to maximize the share of I/Os served from its DRAM cache. Thanks to smart caching algorithms and its large cache size, VMAX systems consistently achieve high cache hit percentages.
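The blended-latency effect described above is just a weighted average over the media serving the I/Os. Here is a minimal Python sketch; the latency figures are illustrative assumptions for the sake of the arithmetic, not published VMAX measurements:

```python
def avg_latency(mix):
    """Weighted-average I/O latency across storage media.

    mix: list of (fraction_of_ios, latency_ms) pairs; fractions must sum to 1.
    """
    assert abs(sum(f for f, _ in mix) - 1.0) < 1e-9, "I/O fractions must sum to 1"
    return sum(f * lat for f, lat in mix)

# Assumed round numbers: DRAM cache ~0.1 ms, flash ~1 ms, 10K rpm HDD ~6 ms.
tiered = avg_latency([(0.5, 0.1), (0.4, 1.0), (0.1, 6.0)])  # cache + flash + HDD
all_flash = avg_latency([(0.5, 0.1), (0.5, 1.0)])           # cache + flash only
print(round(tiered, 2), round(all_flash, 2))  # → 1.05 0.55
```

Even with only 10% of I/Os landing on HDD, the HDD term dominates the tiered average, which is why both a high cache-hit rate and flash placement matter.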
The table below shows the average I/O response time for different storage media in VMAX. Looking at the latency characteristics of each medium, it is no surprise that serving I/Os from DRAM yields the lowest I/O latency.
* The caching benefits apply to both VMAX and VMAX3 products. For brevity, I will only use VMAX in the rest of the blog.
To quantify the success of caching in VMAX systems, we analyzed performance data collected from more than one thousand VMAX systems in the field over many months. The chart below shows read hits as a percentage of all reads, in 10% increments. The data show that in about 90% of VMAX systems, at least 40% of read I/Os are served out of cache with very low I/O latencies.
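To get a feel for what a given read-hit rate does to average read latency, here is a back-of-the-envelope sketch in Python. The per-hit and per-miss latencies are assumed round numbers (~0.1 ms for a DRAM cache hit, ~1 ms for a flash read on a miss), not VMAX specifications:

```python
def effective_read_latency(hit_rate, cache_ms, miss_ms):
    """Average read latency given a cache-hit rate between 0 and 1."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * miss_ms

# Assumed figures: 0.1 ms DRAM hit, 1.0 ms flash miss.
for hit in (0.4, 0.6, 0.8):
    print(f"{hit:.0%} hits -> {effective_read_latency(hit, 0.1, 1.0):.2f} ms")
```

At a 40% hit rate the average read drops from 1.0 ms to 0.64 ms, and each further 10% of hits shaves off another 0.09 ms, which is why high cache-hit percentages are such an effective latency lever.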
In addition to read hits, all write I/Os are written directly into VMAX’s mirrored cache. As a result, all write I/Os also enjoy DRAM’s low latencies. The chart below shows the percentage of write I/Os observed in the same systems. For most systems the write percentage varies between 20% and 40%.
By now, I hope you understand why I chose “hidden gem” as the title. The smart VMAX cache algorithms and the sheer size of the cache are indeed taken for granted by most. The cache has been one of the most important performance enablers for VMAX and will continue to be so.
To summarize, the VMAX family of products provides you with:
- A large cache governed by intelligent algorithms, allowing all write I/Os and most read I/Os to be served out of cache.
- An SLO-based management model that allows using:
  - all-flash media for those applications that require all-flash performance
  - tiered storage media for those applications that require a price/performance trade-off
- A rich set of enterprise data services including TimeFinder, SRDF, D@RE, validated data integrity through T10 DIF, end-to-end data integrity, QoS, multi-controller architecture, high availability, and non-disruptive everything.