2018: The Year Man and Machine Meld Together in the Data Center

IT, much like nature, abhors a vacuum. Businesses are more dependent on IT to drive innovation than ever. But the challenges associated with managing IT at the scale required to drive that innovation are simply too much for the average IT organization. Because of that gap between the expertise available and the needs of a modern digital business, there’s now a lot of interest in employing machine learning algorithms and other advanced technologies to transform how the data center is managed.

The goal is not to replace humans in the data center. Rather, there needs to be a melding of man and machine to take IT to the next level. Instead of always having to react to events, advanced analytics infused with machine learning algorithms will enable IT teams to proactively manage data centers in a way most IT professionals could previously only have dreamt about. Because everything will soon be instrumented, application workloads will dynamically scale up and down across hybrid cloud computing environments with little to no intervention on the part of an IT administrator.
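As a minimal sketch of the kind of control loop such instrumentation enables, consider the Python below: it observes a utilization metric and adjusts instance counts accordingly. The get_utilization and set_replicas functions are hypothetical stand-ins for whatever telemetry and orchestration APIs a given platform actually exposes.

```python
# A minimal sketch of telemetry-driven scaling. get_utilization() and
# set_replicas() are hypothetical stand-ins for platform-specific APIs.
import random

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% utilization
SCALE_DOWN_THRESHOLD = 0.30  # reclaim capacity below 30% utilization

def get_utilization(workload: str) -> float:
    """Stand-in for a real telemetry query; returns simulated utilization."""
    return random.random()

def set_replicas(workload: str, count: int) -> None:
    """Stand-in for asking an orchestrator for a desired instance count."""
    print(f"{workload}: scaling to {count} instance(s)")

def reconcile(workload: str, replicas: int) -> int:
    """One pass of the control loop: observe, decide, act."""
    utilization = get_utilization(workload)
    if utilization > SCALE_UP_THRESHOLD:
        replicas += 1
    elif utilization < SCALE_DOWN_THRESHOLD and replicas > 1:
        replicas -= 1
    set_replicas(workload, replicas)
    return replicas

replicas = 2
for _ in range(5):
    replicas = reconcile("order-service", replicas)
```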

None of this vision is necessarily all that new. It’s part of an ongoing march toward managing IT at a higher level of abstraction. But it’s not all magically going to occur overnight either. Pre-integrated systems based on converged and hyper-converged infrastructure are a critical first step. Machine learning algorithms require access to data to correlate and interpret events, and trying to aggregate data from an IT environment made up of disparate components simply adds another layer of integration complexity to an already challenging task. Organizations that can easily correlate events occurring across compute, storage and enterprise networks, in the context of their integration in support of a workload, will be able to significantly reduce their mean time to actionable, intelligent IT operations. Armed with that data, those organizations will be able to implement policies that automate almost every aspect of IT. Instead of turning knobs daily to optimize individual components in the hope of wringing out a few extra percentage points of performance, the most valued members of IT teams will be those capable of customizing intelligent operations algorithms to perform those tasks on their behalf.
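For illustration only, the sketch below shows one way events from the compute, storage and network layers might be grouped by workload and time window before a policy acts on them. The event fields are assumptions made for the example, not any particular product’s schema.

```python
# A minimal sketch of cross-layer event correlation; the event schema
# (workload, layer, timestamp, message) is an assumption for illustration.
from collections import defaultdict
from typing import NamedTuple

class Event(NamedTuple):
    workload: str    # the application the component serves
    layer: str       # "compute", "storage", or "network"
    timestamp: float # seconds since epoch
    message: str

def correlate(events: list, window: float = 60.0) -> dict:
    """Group events by workload and time window so one incident surfaces
    as a single correlated group rather than many unrelated alerts."""
    groups = defaultdict(list)
    for event in sorted(events, key=lambda e: e.timestamp):
        bucket = int(event.timestamp // window)
        groups[(event.workload, bucket)].append(event)
    return groups

events = [
    Event("order-service", "storage", 100.0, "latency spike on LUN 7"),
    Event("order-service", "network", 112.0, "packet loss on uplink 2"),
    Event("billing", "compute", 400.0, "host memory pressure"),
]
for key, group in correlate(events).items():
    print(key, [e.layer for e in group])
```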

Reliance on those algorithms won’t just be limited to performance optimization. Machine learning will also play a critical role in securing the environment. An ever-growing number of interconnected things creates a wider attack surface, and it will soon be possible for algorithms to spot outdated software or specific integrations that represent a mortal threat to the business. Cybercriminals are not just trying to steal money; they are engaged in everything from purloining intellectual property to sabotaging manufacturing processes. These days all it takes is for them to discover one weak link to laterally spread malware across the enterprise. Algorithms, coupled with advances in micro-segmentation across the enterprise, will help close those gaps, as seen, for example, in the advanced VMware AppDefense security model deployed on top of converged systems that leverage VMware NSX as a network virtualization overlay.
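As a hedged example of the “spot outdated software” idea, the sketch below compares an inventory of deployed components against a known-good version baseline. The component names, version format and baseline are invented for illustration; this is not a description of how AppDefense or NSX works.

```python
# A minimal sketch of flagging outdated components against a baseline.
# Component names, versions and the baseline itself are hypothetical.

BASELINE = {                     # minimum acceptable versions (illustrative)
    "hypervisor": (6, 5),
    "storage-firmware": (4, 2),
    "switch-os": (9, 1),
}

def parse_version(text: str) -> tuple:
    """Turn '6.5' into (6, 5) so versions compare numerically."""
    return tuple(int(part) for part in text.split("."))

def find_outdated(inventory: dict) -> list:
    """Return components running below the baseline version."""
    return [
        name for name, version in inventory.items()
        if name in BASELINE and parse_version(version) < BASELINE[name]
    ]

print(find_outdated({"hypervisor": "6.0", "switch-os": "9.3"}))
# ['hypervisor']
```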

As powerful as algorithms can be, however, the policies on which those algorithms act still need to be crafted by an experienced IT team. Algorithms will go a long way toward eliminating many of the biases that all too often result in sub-optimal outcomes based on flawed assumptions. But implementing these policies across a complex web of distributed systems represents a challenge that only a highly experienced human brain can effectively process. Naturally, this level of transition will have a significant impact on the careers of many IT professionals. Organizations are already adapting today, transitioning from teams designed to identify and resolve operations problems through manual intervention to teams assembled to find patterns. These pattern-identifying teams are tasked with taking what is found, determining an appropriate automated operations response and activating that response using intelligent operations tools. We are in the midst of an important shift in information technology, one in which technology is the asset that forms the fabric of the business. The skills to operate and maintain this vital asset are in short supply, and automation is the means to scale them. Across Dell Technologies we are committed to advancing these capabilities to give the nights and weekends back to our colleagues who consume and operate the technologies we build for them.

Our perspective has continued to be that “the cloud” is not a place; it is a method of operating modern information technology. A foundational element of this modern operating model is technology companies embedding application programming interfaces (APIs) in the products they build. As enterprise IT continues to evolve, IT organizations will dynamically stitch these environments together to optimally run various types of workloads. This gives way to what several refer to as a “mega cloud,” where environments are interconnected with millions of gateways and trillions of endpoints to create unprecedented levels of distributed IT. All the data collected by those endpoints, gateways and the applications that operate throughout will be used to drive advanced analytic operations that are already transforming everything from the connected home to how aircraft engines are rented rather than purchased. In fact, there’s no better example of how the cloud, IoT, analytics and machine learning algorithms will all come together than Project Fire, an ambitious three-year, billion-dollar distributed computing initiative spanning all of Dell Technologies.
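A hedged sketch of what stitching environments together through embedded APIs can look like in practice: query two environments’ capacity APIs and place a workload wherever capacity exists. The endpoints, URLs and payload fields below are invented for illustration and do not correspond to any real product’s API.

```python
# A minimal sketch of stitching two environments together through their
# embedded REST APIs; the endpoints and payload fields are hypothetical.
import json
from urllib.request import urlopen

ENDPOINTS = {
    "on-prem": "https://onprem.example.com/api/v1/capacity",
    "public-cloud": "https://cloud.example.com/api/v1/capacity",
}

def fetch_capacity(url: str) -> dict:
    """Query one environment's capacity API and decode the JSON reply."""
    with urlopen(url, timeout=5) as response:
        return json.load(response)

def place_workload(required_cores: int) -> str:
    """Pick whichever environment reports enough free capacity."""
    for name, url in ENDPOINTS.items():
        if fetch_capacity(url).get("free_cores", 0) >= required_cores:
            return name
    raise RuntimeError("no environment has the required capacity")
```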

The good news is that managing IT is about to become a lot more fun than it’s ever been. Much of the drudgery that makes managing IT hard and unnecessarily stressful is close to disappearing. Not only will there be far fewer configuration errors, but the number of alerts generated by all these systems will be radically reduced through intelligent aggregation and correlated suppression. IT professionals might even be able to enjoy a social event without having to worry about whether any given alert means they should immediately drop everything to find a quiet spot to focus and determine a manual response.
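A minimal sketch of correlated suppression, under assumed alert fields: if several alerts share the same probable root cause within a short window, forward the first and suppress the duplicates. The (source, symptom) key and five-minute window are illustrative choices, not a specific product’s behavior.

```python
# A minimal sketch of correlated alert suppression: forward the first
# alert for a given root-cause key, suppress repeats within a window.
# The (source, symptom) key and window length are illustrative.

def suppress(alerts: list, window: float = 300.0) -> list:
    """Return only the alerts worth waking someone up for."""
    last_seen = {}   # (source, symptom) -> timestamp of last forwarded alert
    forwarded = []
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        key = (alert["source"], alert["symptom"])
        previous = last_seen.get(key)
        if previous is None or alert["timestamp"] - previous > window:
            forwarded.append(alert)
            last_seen[key] = alert["timestamp"]
    return forwarded

alerts = [
    {"source": "array-01", "symptom": "high-latency", "timestamp": 0.0},
    {"source": "array-01", "symptom": "high-latency", "timestamp": 30.0},
    {"source": "array-01", "symptom": "high-latency", "timestamp": 600.0},
]
print(len(suppress(alerts)))  # 2: the duplicate at t=30 is suppressed
```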

Managing IT is also going to take on a whole new perspective for the administrator. At the recent VMworld 2017 conference, VMware previewed an instance of the vRealize management and automation platform being accessed via an Oculus Rift virtual reality headset. It won’t be too long before IT teams are navigating across multiple global data centers in virtual and augmented reality, discovering and resolving IT issues around the globe from an entirely new vantage point. While that may sound too fantastic to believe, the truth is that AR/VR interfaces will eventually become a requirement in the age of the mega cloud.

None of this is happening simply because one new technology or another is now consumable. The entire relationship between IT and the business is being transformed. Business leaders in companies of all sizes now appreciate the strategic role IT plays in bringing organizations closer to the customer. Whether it’s simply a mobile application accessing a range of services or an application that leverages massive amounts of data to predict what customers will want next before they even know it themselves, the digital customer experience is now paramount. The real opportunity skilled IT professionals now have is to move closer to the business than ever before.

As we close in on the end of the second decade of the 21st century, something truly profound is occurring across enterprise IT. We’ve employed IT to automate everything in the world except IT. Now the time has come to apply many of the concepts that have been employed across multiple other processes to make IT itself more efficient at true enterprise scale.

You can read other predictions from Dell Technologies here: www.delltechnologies.com/2018Predictions

About the Author: Trey Layton

Trey started his career in the US Military, stationed at United States Central Command, MacDill AFB, FL. He served as an intelligence analyst focused on the Middle East and supported missions in the first days of the war on terror. Following the military, Trey joined Cisco, where he served as an engineer for data center, IP telephony and security technologies. He later joined the partner ecosystem, where he modernized the practices of several national and regional partner organizations, helping them transform their offerings to emerging technologies. Trey joined NetApp in 2004, where he contributed to the creation of best practices for Ethernet storage and VMware integration, and to the development of the architecture that became the basis for FlexPod. In 2010 Trey joined VCE, where he was promoted by VCE Chairman & CEO Michael Capellas to Chief Technology Officer. As CTO, Trey was responsible for the product and technology strategy for Vblock, VxBlock, VxRack, Vscale and VxRail. During his tenure, VCE was recognized as one of the fastest technology companies to reach $1 billion in revenues and one of the most successful joint ventures in IT history. The original VCE products Trey led strategy for continue to be leaders in their respective share categories around the world. In 2016 Trey was asked to lead, from concept, the development of an all-Dell Technologies converged product. From that initial concept, Trey led a global team of engineers to deliver Dell EMC PowerOne, the industry’s first autonomous infrastructure solution, embedding open source technologies that enable automated infrastructure integration based on declarative outcomes.