It’s nearing the end of the year and the IT vendor conferences are starting to wind down. Next week will be NetApp’s annual INSIGHT event in Las Vegas, and we’d like to share some thoughts on what you might want to ask NetApp if you happen to be attending their event.
NetApp has been in the storage industry for a long time and has had its ups and downs. Given its recent earnings announcements and related media coverage, I think it’s safe to say that NetApp is trying to right the ship now.
"A sharp slowdown in enterprise customers' all-flash array purchases has sucker-punched NetApp, and though it is hiring more sales heads to fix this worry, things aren't forecast to get better anytime soon." Blocks & Files – See full article here.
By comparison, Dell Technologies continues to grow while expanding our #1 position across multiple categories, including Storage, Server, Converged, and Hyper-Converged.
With that said, NetApp does have a very loyal customer base, many of whom will be attending the INSIGHT 2019 event next week. Here are a few key questions.
What enhancements to NetApp’s All-Flash (AFF) and Hybrid (FAS) arrays are coming to make them both simpler to manage and more efficient?
NetApp has done well over the years with its all-flash (AFF A-Series) and hybrid (FAS series) arrays running the ONTAP storage operating system. But as the industry has matured and data storage solutions have become simpler and more efficient, the question is: will NetApp make the hard changes required to make its arrays simpler and more efficient for customers to use?
Within our Dell Technologies storage portfolio, we have storage platforms, as well as converged and hyper-converged solutions that can meet all the requirements mentioned below, and more. We believe that our portfolio provides more choice and the best opportunity for customers to match the right platform to the workloads and use cases they need without compromising simplicity or efficiency.
Questions to ask:
- When will the siloed islands disappear? That is, when will storage resources be truly shared across the array controllers? Is there still value in creating islands of storage that must be manually configured, optimized, and managed?
- When will deduplication be made global to provide optimal efficiency and value?
- When will you be able to provide ‘constant availability’? When will MetroCluster (for remote replication) be truly active-active, with volumes readable and writable at both sites, providing an RTO of zero instead of ‘near’ zero?
- SCM as persistent storage? What is NetApp’s plan for Storage Class Memory (SCM) as a true persistent storage tier? And when it does adopt SCM, will it offer intelligent, automated tiering to make the most of the SCM tier when mixed with NVMe or other media?
Large Scale Unstructured Data:
With a Dell Technologies Isilon solution, you get a global namespace that enables more efficient and flexible management. You need to manage just one volume—with no hierarchy of objects. You can easily and non-disruptively change policies that affect redundancy, retention, efficiency, performance, and security. See more details on Isilon here.
When is NetApp going to bring true global namespace to large-scale file environments?
Today, customers must provision and manage a hierarchy of storage objects that underpin NetApp’s pseudo-global namespace capability, FlexGroup Volumes. This means storage managers face inflexible, labor-intensive processes to manage dynamic and growing storage for unstructured data. The hierarchy of objects inherent in NetApp’s physical RAID-based architecture requires many manual steps for allocating and assigning ONTAP volumes: for example, setting the protocol, assigning the server, assigning networking to the server, and applying permissions. All these steps must be completed before you can use the storage. Further, storage managers must configure and schedule retention and efficiency on a per-volume basis; efficiency tuning is often required at the pool level as well; and changes to retention and efficiency settings must themselves be scheduled.
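To make the per-volume overhead concrete, a typical NFS provisioning sequence in the ONTAP CLI looks roughly like the following. This is an illustrative sketch only: the vserver, aggregate, node, port, and address names are hypothetical, and exact syntax varies by ONTAP version.

```
# 1. Set the protocol: enable NFS on the storage virtual machine (vserver)
vserver nfs create -vserver vs1

# 2. Allocate the volume on a specific aggregate and mount it in the namespace
volume create -vserver vs1 -volume vol1 -aggregate aggr1 -size 1TB -junction-path /vol1

# 3. Assign networking: create a data LIF for client access
network interface create -vserver vs1 -lif data1 -role data -data-protocol nfs \
    -home-node cluster1-01 -home-port e0d -address 192.168.1.50 -netmask 255.255.255.0

# 4. Apply permissions: add an export-policy rule for the client subnet
vserver export-policy rule create -vserver vs1 -policyname default \
    -clientmatch 192.168.1.0/24 -rorule sys -rwrule sys

# 5. Configure efficiency and retention, per volume
volume efficiency on -vserver vs1 -volume vol1
volume modify -vserver vs1 -volume vol1 -snapshot-policy default
```

Each of these steps is repeated for every volume, which is the labor-intensive, per-object workflow described above.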
How about Hyper-Converged Infrastructure?
With our Dell Technologies industry-leading HCI, VxRail, customers get a fully integrated, pre-configured, and pre-tested hyper-converged system. VxRail meets all the requirements mentioned below while also providing certified integration with VMware Cloud Foundation. VxRail delivers single upgrade packages every 30 days, all automated, tested, and installed in the proper order. For more information on VxRail, click here.
If you are like me, you have found NetApp’s HCI messaging a bit confusing recently. For years, NetApp’s HCI solution was sold as a Hyper-Converged Infrastructure, then suddenly, they changed it to disaggregated Hybrid Cloud Infrastructure. Did anything change? Was it ever really Hyper-Converged to begin with?
Questions to ask:
- LCM simple or difficult? How does Life Cycle Management work for NetApp HCI? Does NetApp HCI do automatic server, networking, storage, and VMware patching and upgrades, in the proper order, with the press of a button?
- Smallest building block available for the edge? What options does NetApp HCI provide for edge use cases when customers need smaller solutions for things like remote offices? Isn’t the smallest NetApp HCI configuration six nodes?
- Software-Defined? Does NetApp HCI come in a version that doesn’t require their SolidFire storage?
- True op-ex model? Is NetApp HCI available as a true op-ex (as-a-service) model: a complete HCI stack (hypervisor, server, network, storage), fully managed, on-premises, and available 100% as-a-service?
How successful has NetApp’s Data Fabric vision really been?
At Dell Technologies, we believe in providing our customers with best-in-class cloud solutions that deliver flexibility and choice. You can choose from our turnkey Cloud Platforms, Cloud Validated Designs, complete op-ex models for Datacenter-as-a-Service, and public cloud support across Microsoft Azure, Google Cloud Platform (GCP), and Amazon Web Services (AWS). Find all our cloud solution details here.
NetApp has been pitching its Data Fabric for years, but when you look at its earnings announcements, the ‘Cloud Data Services’ segment represents less than 1% of revenue. So, the obvious question is: how many customers are really adopting the Data Fabric?
Questions to ask:
- Based on a storage operating system? Data Fabric seems to be based on NetApp’s ONTAP, so it’s easy to see why some existing NetApp customers may find it useful. But what about new customers? Ask NetApp to explain the benefit of Data Fabric for a customer who isn’t already running ONTAP.
- What about NPS? Is NetApp Private Storage (NPS) still a viable solution with a roadmap into the future?
If you are attending NetApp’s INSIGHT event, please consider these questions and their importance to your infrastructure, and seek the answers you need. And I hope you have a great week!