From the Road: Dell EMC Service Provider Team at Nokia SReXperts

Last week, I had an opportunity to speak at Nokia’s SReXperts event in Madrid, Spain, and share my views on telco transformation, specifically around NFV. I laid out where I thought the industry was trying to go: toward a single, integrated platform for multiple use-cases. I called this “IaaS for NFV”.

The industry “crossroads” is well-known and well-understood. Technology, operational, and business drivers for industry transformation – led by virtualization – have been a core subject area for the last several years, and even formed the basis for the original NFV whitepaper (2012). As a reminder, this single underlying platform – delivered via new technology and operational tools, leveraging a combination of vendor and open source technologies – was meant to be the foundation for new business models and operational benefits.

If the industry had followed the vision espoused above, we would have seen a move from physical to virtual marked by multiple applications residing on shared underlying infrastructure, eventually incorporating a transition from virtual functions in hypervisors to virtual functions in containers, and an evolution of orchestration to increasingly leverage automation and scheduling of resources across hypervisor-based, container-based, and bare-metal applications. Maybe it would have looked something like this:
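To make the idea of orchestration spanning hypervisor, container, and bare-metal workloads concrete, here is a minimal, purely illustrative sketch of a placement loop. All the names (`Pool`, `Workload`, `schedule`) are hypothetical; no real orchestrator works this simply, but it captures the "one scheduler across all three runtimes" idea.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    """A hypothetical resource pool: hypervisor, container, or bare-metal."""
    kind: str
    capacity_vcpus: int
    used_vcpus: int = 0

    @property
    def free_vcpus(self):
        return self.capacity_vcpus - self.used_vcpus

@dataclass
class Workload:
    """A network function with a preferred runtime (pool kind)."""
    name: str
    vcpus: int
    runtime: str

def schedule(workloads, pools):
    """Place each workload on its preferred pool kind; if no matching pool
    has room, fall back to whichever pool has the most free capacity.
    Returns a {workload name: pool kind or None} mapping."""
    placement = {}
    for w in workloads:
        # First try pools matching the workload's declared runtime.
        candidates = [p for p in pools
                      if p.kind == w.runtime and p.free_vcpus >= w.vcpus]
        if not candidates:
            # Fall back to any pool with enough free capacity.
            candidates = [p for p in pools if p.free_vcpus >= w.vcpus]
        if not candidates:
            placement[w.name] = None  # unschedulable
            continue
        target = max(candidates, key=lambda p: p.free_vcpus)
        target.used_vcpus += w.vcpus
        placement[w.name] = target.kind
    return placement
```

In this sketch, a vEPC user plane might declare `runtime="bare-metal"` while a vCPE instance declares `runtime="container"`, and a single scheduler would place both – which is precisely the unified-platform behavior the vision above implied.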

So, did we hit the mark? Are we on path to hit the mark? Let’s discuss…

A History of Network Virtualization in Telco (2012-Present)

I view network virtualization as having gone through four iterations, or phases, in its maturity. Each phase focused on answering a set of questions:

  1. Will it work? This phase led to the creation of the ETSI NFV Industry Specification Group and various open source spin-offs, such as OPNFV. The output, of course, is a set of standards documents that number in the hundreds of pages in totality.
  2. How does it scale? This phase was marked by investments into evaluating how network functions in a virtual environment scaled, and eventually yielded open source projects, such as DPDK, that drastically accelerated packet processing on x86.
  3. What are the economics? It turns out that it is difficult to evaluate the economics of an architecture, especially one as complex as NFV. As a result, the economic viability analysis has focused on an initial set of use-cases, including Virtual CPE (and associated SD-WAN), Voice-over-LTE, and Virtual EPC (M2M and MVNO use-cases). The net result: lots of proofs of concept, field trials, and small-scale deployments of use-case-driven NFV.
  4. How is it operationalized (at scale)? Having worked through the first three phases, the industry is now focused on bringing together vendor technology, open source technology, and public and private cloud technology, and integrating them with the same level of telemetry and service assurance capabilities as their physical counterparts, with increased automation.

How the Phased Approach to NFV Has Introduced Unforeseen Divergence

The phased approach to NFV, and the tepid investments in operationalization, have left the industry in something of a quagmire. Why? Because the initial focus on driving use-cases before abstracting a common operational model across them has resulted in a hybrid VNF appliance architecture. See here:

What’s a VNF appliance? 

It is, in effect, the virtual instantiation of a former physical network function, relying on dedicated underlying infrastructure like its predecessor, except that the underlying infrastructure is now x86-based.

Why is this an unnatural (or bad) state?

The original path to a fully disaggregated, microservices-based architecture should have been linear: one that went through a hypervisor-driven approach as a means to learn a new networking paradigm – compute, virtualization, control- and user-plane separation, orchestration, automation, and eventually DevOps/NetOps principles – in a sandboxed environment. The role of the hypervisor was as much to provide secure environments where engineering and operations organizations could learn these skills with relatively low risk of causing a network outage as it was to serve as a platform for shared resource utilization. By verticalizing network functions into VNF appliances, we have introduced another step in the journey toward operationalization.

It’s Just Another Step. So What?

As the industry seeks to incorporate increasing amounts of open source technology, the rate of technical innovation is accelerating. The Innovation Adoption Lifecycle is no longer a normal distribution, but instead is skewed left – more innovators and early adopters than ever before. Adding a step to the network virtualization journey introduces added confusion into the process, resulting in an uncertain mix of technologies (containers, Docker Swarm, Kubernetes, Ansible, Mesos, etc.) that are close to production-ready but still early on the learning curve, battling for mindshare against a set of technologies (hypervisors, OpenStack, MANO, etc.) that are now much better understood but starting to lose their “new and innovative” luster.

What’s next?

Let’s just revisit that “IaaS for NFV” concept one more time. The goals of this platform have to be more than technical – more than just building a common technology platform unified across disparate use-cases with an iterative, release-based delivery model that incorporates increasing amounts of open source componentry and “DevOps-style” tooling. It has to do more than account for the multiple technology directions the industry might take going forward.

The infrastructure stack itself also has to adapt to changes in operational and business models – accounting for nuances in current and future procurement processes, operational models, software stacks, and buying paradigms (i.e., the shift to consumption-based models). To do this, vendors who target the network virtualization arena need to bring more than just shiny technology; they must also adapt their business practices to redefine the entire engagement model with telecommunications service providers. At Dell EMC, that’s really the focus right now!

About the Author: Kevin Shatzkamer

Kevin Shatzkamer is Vice President and General Manager, Service Provider Strategy and Solutions at Dell Technologies, with responsibility for strategy and architectural evolution at the intersection of network infrastructure technologies, cloud and virtualization platforms, and software programmability. His organizational responsibility encompasses industry strategy and investment analysis, business development and go-to-market activities, technical architecture and engineering, and infrastructure evolution / futures-planning. He is also responsible for leading the Dell Technologies 5G strategy in close collaboration with industry-leading telecommunications providers globally. Mr. Shatzkamer represents Dell Technologies on the World Economic Forum (WEF) Global Futures Council on New Network Technologies (5G-related). Mr. Shatzkamer's ecosystem-wide, experience-centric approach to working with customers allows for the identification and exploitation of synergies between disparate organizations to derive new technology and business models for the mobile industry, especially as “5G” defines transformation from technical architecture to ecosystem and service offerings. With over 20 years of industry experience, Mr. Shatzkamer joined Dell EMC in 2016, with prior experience at Brocade (Service Provider CTO, Head of Brocade Labs) and Cisco (Distinguished Systems Engineer). He holds more than 50 patents across his areas of work. He received a Bachelor of Science from the University of Florida, a Master of Business Administration from Indiana University, and a Master’s degree in System Design and Management from the Massachusetts Institute of Technology. Mr. Shatzkamer is a regular speaker at industry forums and has published two books discussing the architectures and technologies shaping the future of the Mobile Internet (2G, 3G, and 4G networks), from RAN to services.