When is 1M IOPS not a good thing?

In the storage world, IOPS is often viewed as a key metric in the application productivity equation.  So when is 1M IOPS not a good thing?

Image by Flickr user Dan DeChiaro

In the real world, where customers have finite budgets and a driving imperative to optimize value for every dollar they spend, storage performance is as much about economics as it is about IOPS. So with any performance proposition, practical-minded customers must first ask whether they can afford the performance, or more fundamentally, whether their workloads really need that level of performance. Value-minded customers must consider how much performance they are getting for every dollar they spend. Strategy-minded customers must ask how the solution will carry them forward in the future and whether the economic value is consistent throughout the lifecycle of the product.

A number of storage vendors focus more on the IOPS side of performance and less on the economics. In fact, some vendors tout millions of IOPS, but when you evaluate the economics of their performance capabilities, you may find prohibitive up-front costs or $/GB returns that fall short on value. You may also find that the scaling path is fraught with costly contingencies.

With the Dell Storage SC9000 arrays, Dell announced over 340,000 IOPS at sub-millisecond latency for OLTP workloads and over 20 GB/s throughput for sequential workloads.[i] These are metrics that could significantly advance productivity for the most demanding business-critical workloads, and in a competitive context they are quite impressive. For example, they are over 40% higher than NetApp's quoted database rates for the all-flash FAS8060[ii] and up to 20% higher, with up to 2x higher throughput, than the maximum Pure Storage //m70 configuration.[iii]

But there’s an economic side to the SC9000 metrics that is even more exciting for IT leaders facing heightened productivity imperatives on restricted budgets. This is exemplified by the $0.65/GB metric[iv] for the SC9000 AFA, which reflects the affordability of its usable flash capacity. By comparison, the usable flash $/GB metric that HP recently quoted for the HP 3PAR StoreServ 8000 all-flash platform is $1.50/GB[v]. For value-minded customers, that’s more than twice as expensive as the SC9000. NetApp’s usable flash $/GB metric for all-flash FAS platforms is around $1.25/GB[vi], nearly twice as expensive as the SC9000. Even Pure Storage, as it projects its best-case $/GB metric for usable flash with the planned future availability of TLC media in the //m series, will remain over 2x more expensive than the SC9000 for the foreseeable future[vii].
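For readers who want to sanity-check these comparisons, the ratios reduce to simple arithmetic. The sketch below is illustrative only: the dollar figures are the vendor-quoted values from the footnotes, and it simply expresses each competitor's usable-flash $/GB as a multiple of the SC9000's:

```python
# Illustrative back-of-envelope check of the usable-flash $/GB comparisons.
# All dollar figures are the vendor-quoted values cited in the footnotes.

SC9000 = 0.65     # $/GB usable, Dell SC9000 AFA [iv]
HP_3PAR = 1.50    # $/GB usable, HP 3PAR StoreServ 8000 [v]
NETAPP_AFF = 1.25 # $/GB effective, NetApp all-flash FAS [vi]
PURE_TLC = 1.50   # $/GB usable, Pure //m with planned TLC media [vii]

for name, price in [("HP 3PAR 8000", HP_3PAR),
                    ("NetApp all-flash FAS", NETAPP_AFF),
                    ("Pure //m with TLC", PURE_TLC)]:
    print(f"{name}: {price / SC9000:.1f}x the SC9000's $/GB")
# → HP 3PAR 8000: 2.3x the SC9000's $/GB
# → NetApp all-flash FAS: 1.9x the SC9000's $/GB
# → Pure //m with TLC: 2.3x the SC9000's $/GB
```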

For strategy-minded customers who care about TCO, the SC9000 AFA configured with 32 3.8TB TLC SSDs can hold the same amount of database application data as the Pure Storage //m70 all-flash array, but at up to 2.8x lower 5-year cost.[viii] Notably, enhanced compression capabilities in the latest version of the SC Series OS enable up to 93% reduction[ix] in required capacity for workloads that are highly suited to flash, such as databases and server virtualization. With highly compressible workloads, the SC9000 is capable of effective flash density of up to 90TB[ix] per rack unit. That’s more than twice the density of currently shipping Pure Storage FlashArray //m arrays[x] and 50% greater density than Pure’s projected figure once TLC media ships[xi]. Keep in mind that Dell Storage SC Series compression has negligible performance impact and works equally well on both flash and spinning media, and on both active and inactive (frozen) data. These aggressive reductions optimize flash density to reduce infrastructure footprint, as well as $/GB to help contain storage costs over time.
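The density comparison can be checked the same way. This illustrative sketch uses the figures from the footnotes: 90TB effective capacity per rack unit for the SC9000 (best-case compression), 120TB usable in 3U for the shipping FlashArray //m, and 60TB per rack unit projected for Pure's future TLC configuration; actual results will vary by workload:

```python
# Illustrative flash-density comparison using the footnoted figures.
# SC9000 effective density assumes best-case compression [ix].

SC9000_TB_PER_U = 90    # effective TB per rack unit, SC9000 [ix]
PURE_M_USABLE_TB = 120  # usable TB, shipping FlashArray //m [x]
PURE_M_RACK_UNITS = 3   # //m chassis height in rack units
PURE_TLC_TB_PER_U = 60  # projected TB per rack unit with TLC media [xi]

pure_today = PURE_M_USABLE_TB / PURE_M_RACK_UNITS  # 40 TB per rack unit
print(f"vs shipping //m: {SC9000_TB_PER_U / pure_today:.2f}x the density")
print(f"vs future TLC //m: {SC9000_TB_PER_U / PURE_TLC_TB_PER_U:.2f}x the density")
# → vs shipping //m: 2.25x the density
# → vs future TLC //m: 1.50x the density
```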

As you can see, the SC9000 is uniquely designed to optimize performance economics in addition to ultra-high IOPS. The key design feature that underpins Dell’s ability to deliver outstanding metrics on both performance and economics is intelligent flash tiering. This is the “big brains” of the SC9000: it enables the platform to use low-cost TLC/3D NAND flash media in innovative ways, averting the performance and wear-endurance tradeoffs that make such media less effective in competing disk array architectures. With this unique design approach, Dell is giving IT leaders the multi-faceted performance capabilities they need not only to aggressively advance productivity across more workloads than competing disk array platforms, but also to deliver industry-leading TCO going forward.

So for those customers out there who have hard-driving IOPS and latency imperatives, but who also have budget and TCO imperatives, it’s pretty clear that the SC9000 comes out on top.

Check out all the great new features of the SC9000 here.


[i] Internal test performed by Dell in September 2015 with Storage Center 6.7 on dual SC9000 controllers. Actual performance will vary based on configuration, usage and manufacturing variability.

[ii] NetApp quotes the all-flash FAS8060 as capable of 200K IOPS for 100% reads of 8K blocks in database workloads: http://community.netapp.com/t5/Tech-OnTap-Articles/All-Flash-FAS-A-Deep-Dive/ta-p/87211.

[iii] Performance based on Pure Storage published IOPS and Dell system-level maximum and Dell standard IO/SSD sizing guidelines. Individual customer’s price may vary and data should be used for comparison purposes and to drive conversations with your sales representatives.

[iv] Net effective capacity is after applying a 2.5x compression ratio. Customer results may vary depending on application and configuration.

[v] Based on a $1.50/GB usable capacity metric quoted by HP for the 3PAR StoreServ 8000 array with 25 percent overhead (RAID 5 7+1, + system overhead), 4:1 compaction, 2N system, 24×7 Proactive care 3 years, standard discount, Base OS. http://www8.hp.com/h20195/V2/getpdf.aspx/4AA5-9493ENW.pdf?ver=1.0.

[vi] Based on effective $/GB of $1.25 quoted in the 451 Research article titled “NetApp recovers the ball in all-flash storage,” Nov. 15, 2015. https://451research.com/report-short?entityId=87299&type=mis&alertid=680&contactid=0036000001Pl6NjAAJ.

[vii] Based on $1.50/GB usable capacity cited in press coverage of Pure’s announcement of planned support for TLC media: http://www.storagereview.com/pure_storage_introduces_predictive_platform_3d_tlc_memory.

[viii] Based on Pure Storage //m70 at $2.25/GB including 5-year support, compared to SC9000 with all-inclusive software for 5 years and pricing on a controller upgrade in year 3. Individual customer’s price may vary and data should be used for comparison purposes and to drive conversations with your sales representatives. Capacity is calculated using the LZO compression algorithm for databases, and using thin clones to make required database copies.

[ix] Source: Dell internal testing, August, 2015. Best case results for Microsoft SQL application data. Customer results may vary depending on application and configuration.

[x] Based on 120TB usable flash capacity in 3U, quoted by Pure Storage on the FlashArray //m product page: https://www.purestorage.com/products/flash-array-m.html.

[xi] Based on 60TB usable flash capacity per rack U cited from press coverage of Pure’s announcement of planned support for TLC media: http://www.tomsitpro.com/articles/pure-storage-tlc-nand-afa,1-3031.html.

About the Author: Sonya Sexton