Jon Hyde: Hello, and welcome back to The Next Horizon, a Dell Technologies podcast. I’m Jon Hyde, and together we’ll explore the implications of several major emerging technologies for business, society, and most importantly for you.
Jon Hyde: I’m joined today by Vish Nandlall. Vish leads technology and ecosystems for the Office of the CTO here at Dell Technologies. Vish, I have no idea what that means. Can you explain that to me please?
Vish Nandlall: Well, the bottom line is that I’m here to make sure that we don’t miss the next internet. The gap between a new technology completely disrupting Dell and Dell actually embracing that technology and anticipating it ahead of the market, that’s the space of the industry that I’m playing in.
Jon Hyde: Got it. Okay. That definitely helps. There are many topics here, and the strategic themes are part and parcel of that. For listeners who heard the episode with John Roese, these are the strategic themes that Dell Technologies pays attention to. For those who haven’t, how would you characterize those?
Vish Nandlall: We’ve got six strategic themes. Three are major themes: edge, 5G, and data management. And then we’ve got three, what I would call, broad horizontal technology domains: cloud, AI and machine learning, and security. When we double-click on AI, we’ve really started to split our activities into three work streams. One is called AI for Dell, and these are the AI solutions that are all about optimizing our internal processes. There’s AI on Dell, which are solutions that enable customers to run AI workloads on top of Dell products. And then there’s AI in Dell, which I’m assuming everybody was going to anticipate, which is AI solutions that we’ve actually built inside of Dell products.
Jon Hyde: Got it. Okay. So, if I replay that just to make sure it makes sense: we have AI in our products, which is there to improve the capabilities of the products and produce a better outcome for a lot of the work that we’re doing. There’s AI on our products, which is how we create great solutions and technologies that customers can then implement their AI solutions on top of. And then there’s AI for Dell, where we’re improving business processes, we’re creating better and more efficient supply chains, and we’re creating better activities and reactions when our customers need our help. Would that be a good encapsulation?
Vish Nandlall: That’s it exactly. And there’s just a whole hive of activity in terms of AI-oriented projects in motion.
Jon Hyde: I heard about that. I heard John Roese floated a number somewhere between 500 and 700 AI/ML projects happening internally at any one point in time. Are there any that are particularly interesting or exciting right now?
Vish Nandlall: Really the more interesting stuff, in my mind, comes from AI for Dell, the set of capabilities where we leverage predictive analytics to identify whether something’s likely to fail so that we can accelerate repairs in real time. This is something we’ve done in our supply chain: we take data that comes from our built-in diagnostics to help determine what our tech support workflow looks like, what type of diagnostics we need to run, and to anticipate and predict affected parts. All of these capabilities come together so that we get, at the end of the day, a predictive repair engine.
Jon Hyde: That is really interesting because I think, at the end of the day, that’s customer satisfaction, that’s faster time to resolution, less unplanned downtime, something that customers are always seeking in many different formats. And I think everything that we can offer to be able to do that is just super, super powerful.
Vish Nandlall: That’s right. The initial pilots that we’ve had on these things have had amazing accuracy, over 80% accuracy in identifying which parts are actually failing. And we’re reducing the movement of parts by around 15%. So, to your point, it’s cutting time to repair by almost 20 minutes.
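The predictive repair engine Vish describes can be sketched, very roughly, as a scorer over built-in diagnostic telemetry. This is a toy illustration only; the field names, thresholds, and weights below are hypothetical stand-ins, not Dell’s actual schema or method:

```python
# Hypothetical sketch of a predictive-repair engine: score built-in
# diagnostic telemetry and flag the part most likely to fail, so a
# replacement can be dispatched before the customer opens a ticket.

# Per-part rules: (telemetry field, threshold above which risk accrues, weight)
RULES = {
    "disk":   [("reallocated_sectors", 50, 0.6), ("read_error_rate", 0.01, 0.4)],
    "fan":    [("fan_rpm_deviation", 0.2, 0.7), ("inlet_temp_c", 45, 0.3)],
    "memory": [("corrected_ecc_per_day", 100, 1.0)],
}

def part_risk(telemetry: dict) -> dict:
    """Return a 0..1 risk score per part from one diagnostics snapshot."""
    scores = {}
    for part, rules in RULES.items():
        score = 0.0
        for field, threshold, weight in rules:
            if telemetry.get(field, 0) > threshold:
                score += weight
        scores[part] = min(score, 1.0)
    return scores

def predict_repair(telemetry: dict, dispatch_at: float = 0.5):
    """Parts whose risk crosses the dispatch threshold, highest risk first."""
    scores = part_risk(telemetry)
    return sorted((p for p, s in scores.items() if s >= dispatch_at),
                  key=lambda p: -scores[p])

snapshot = {"reallocated_sectors": 120, "read_error_rate": 0.002,
            "fan_rpm_deviation": 0.05, "corrected_ecc_per_day": 4}
print(predict_repair(snapshot))  # the disk rule fires on sector count alone
```

In practice the scoring would come from a trained model rather than hand-set thresholds, which is how pilots can reach the 80%-plus accuracy Vish mentions.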
Jon Hyde: Wow. And when we stitch that together with things like the as-a-service delivery models that our customers are moving towards, and the capacity or burst space that we’re providing for customers to move into those areas, that’s a fundamental difference in how we can help deliver these types of technologies in the future.
Vish Nandlall: That’s right. It’s creating efficiencies to meet the tighter and tighter guarantees that our customers are asking for. And as we’ve seen more and more mission-critical systems start to rely on Dell infrastructure, the tolerance for system outages and downtime is getting more and more strict.
Jon Hyde: Yeah. I think that dovetails nicely with this idea that John floated the other week, which had to do with how we’ve been able to create the most efficient supply chain globally by using things like machine intelligence to get better control over what we’re doing. And I think all these things we’re talking about are converging towards this idea of a more just-in-time, on-demand type of architecture that we can help customers achieve, one that’s really efficient and really resilient. You could say, and I think John even alluded to it, that AI is the most demanding user you’re ever going to encounter. Why is it that AI is such a difficult user to please? And how can we help our customers think about how to build out the infrastructure to serve this type of user in the future?
Vish Nandlall: Yeah. AI workloads are notoriously difficult to profile. You have two different phases. You’ve got a production phase, which is really what most people understand as AI: it’s this engine that can do inferencing and prediction. But before you get there, you’ve got to develop the model. This is the training, or learning, phase. And this is where it becomes very data intensive and compute intensive.
Vish Nandlall: When you look at what’s required for this training phase, you need three times as many operations as you do for inferencing. You require much more memory, because every step of the training sequence requires you to save the content of the previous step in memory. That also makes it very, very difficult to scale up training, because it requires all of these expensive follow-up steps. And then, finally, you’ve got these real-time requirements. While training can take tens of hours or even a few weeks depending on the amount of compute, on the inferencing side I might have less than 200 milliseconds in the pipeline that I have to sign up to. For instance, if you look at Google’s auto-suggest pipeline, it has very tight real-time serving needs to be able to insert the right suggestion or advertisement in front of their customers. So these are the things that really make AI such a complex workload to manage.
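As a back-of-the-envelope illustration of those ratios, here is a toy cost calculator. The 3x-operations rule and the memory terms follow the rough figures in the conversation; the parameter count and activation sizes are made up for the example, not a profile of any real model:

```python
# Rough training-vs-inference cost comparison for a dense model.
# Rule of thumb: training takes ~3x the operations of a forward pass
# (forward + backward), and must also hold per-layer activations in
# memory for backpropagation, plus gradients and optimizer state.

def inference_cost(params: int, batch: int) -> dict:
    flops = 2 * params * batch            # ~2 FLOPs per parameter per sample
    memory = params * 4                   # fp32 weights only
    return {"flops": flops, "bytes": memory}

def training_cost(params: int, batch: int, activations_per_sample: int) -> dict:
    flops = 3 * 2 * params * batch        # forward + backward ~ 3x forward
    memory = params * 4 * 3               # weights + gradients + optimizer state
    memory += activations_per_sample * batch * 4  # saved activations
    return {"flops": flops, "bytes": memory}

inf = inference_cost(params=10_000_000, batch=32)
trn = training_cost(params=10_000_000, batch=32, activations_per_sample=500_000)
print(trn["flops"] // inf["flops"])  # -> 3
```

The activation term is what makes training hard to scale up: it grows with both batch size and model depth, while inference can discard each layer’s output as soon as the next layer has consumed it.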
Jon Hyde: It brings up two points for me. So the first one is, on your second point when we get towards inferencing, AI produces more data, which then needs to be, again, interrogated and interpreted and understood. And it just exacerbates that velocity, variety, and volume conversation that we hear over and over again. And it’s almost like this perpetual engine of information and decisions and information and decisions, which can be used to better train the model over time. And that’s a hugely intensive piece of the puzzle.
Jon Hyde: The other one that you touched on, which I find interesting, and I’m hearing interesting solutions to this, is the training portion. So, training takes a long time, and it’s complicated. We’re hearing about customers who are buying pre-trained models that have maybe 80 or 90% of what they need to be effective right off the shelf. That’s a really interesting way to get ahead of the curve, because bad data in equals bad data out. There are a lot of challenges if you’re not thinking about the right model and not thinking about things like ethics and morality that fall into those buckets. So, buying that outcome seems like a great jumpstart for a lot of our customers.
Vish Nandlall: Yeah, so the concept you’re talking about is something referred to as transfer learning. This is where someone has spent a lot of time to build a very sophisticated model and invested most of the upfront capital, in terms of labor and compute, in getting it right. And then you can have this derivative effect. For instance, if I’ve built a model that can recognize animals, and now I want to specialize it to recognize cats, I only retrain a small subset of the model. And so the amount of data I need now isn’t the number-crunching exercise it took to build the original model; it’s a fraction of the time and data. This is giving the industry a lot of reuse off of these standardized models that we’ve come to know and love, in areas like computer vision, as a for instance.
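Conceptually, transfer learning looks like the sketch below: freeze the expensively pretrained base and fit only a tiny new head on the specialized task (animals to cats, in Vish’s example). Everything here is illustrative pure Python; the two-feature “base” is a stand-in for a real pretrained network:

```python
def base_features(x):
    """Frozen 'pretrained' layers: pool the raw input into two coarse
    features. Stands in for embeddings from a real base model, and is
    never updated during fine-tuning."""
    half = len(x) // 2
    return [sum(x[:half]) / half, sum(x[half:]) / half]

def train_head(examples, labels, epochs=500, lr=0.5):
    """Fine-tune only the head: one logistic unit on frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            f = base_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + 2.718281828459045 ** -z)  # sigmoid
            g = p - y                                   # logistic-loss gradient
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = base_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Tiny specialized dataset: "cat" inputs light up the first half of the vector.
examples = [[1, 1, 0, 0], [0.9, 0.8, 0.1, 0], [0, 0.1, 1, 1], [0.1, 0, 0.8, 0.9]]
labels = [1, 1, 0, 0]
w, b = train_head(examples, labels)
print(predict(w, b, [1, 0.9, 0, 0.1]))  # unseen cat-like input
```

The point of the sketch is the asymmetry: the base never changes, so the fine-tuning step only has to fit three numbers from four examples instead of re-learning the whole representation.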
Jon Hyde: That’s, I think, one of the cruxes that a lot of our customers need to understand: it’s not insurmountable, and it’s never too late to start. We’re at a point in time where customers are really starting to think about how they apply AI within their business, whether it’s decision-making tasks or automated processes, whatever the case may be. Scalability is obviously a concern; better resource allocation, definitely a concern. But there are some positive implications that our customers can expect with AI. What are some of the things that you think are most prevalent there?
Vish Nandlall: I think the issue that a lot of enterprises have had is they’ve really only started the journey towards AI. They’ve started with small experiments and trials. They’re moving, to your point, into more production-ready systems, and we’re missing the management practices that are required to really extract the return on investment that enterprises are expecting. Most AI developers are focused on, “How do I do training once?” They’re not really focused on, “Well, when do I retrain?” They’re focused on identifying data and features to build the model, but they’re not focused so much on, over the course of a particular classification run, “When is my data drifting, and how is that going to impact the quality of the prediction that comes out?” And so, it’s really this operationalization of AI that we’re starting to see rise in prominence. It’s: how do I move away from DIY code to something that can be scaled and deployed enterprise-wide? How do I manage things like drift and data quality, know when I need to retrain, and get visibility throughout the pipeline so I can understand if errors are starting to crop up or if things need to be modified? All of the things, in fact, that come with having a well-administered system.
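The “when do I retrain?” question Vish raises is often answered with a drift check on incoming data. A Kolmogorov-Smirnov distance between the training-time distribution of a feature and live traffic is one simple, illustrative way to do it; the 0.2 threshold below is arbitrary, not a recommended value:

```python
def ks_statistic(train_sample, live_sample):
    """Kolmogorov-Smirnov distance: the largest gap between the two
    empirical CDFs. 0 means identical distributions, 1 means disjoint."""
    points = sorted(set(train_sample) | set(live_sample))
    def cdf(sample, v):
        return sum(1 for x in sample if x <= v) / len(sample)
    return max(abs(cdf(train_sample, v) - cdf(live_sample, v)) for v in points)

def needs_retrain(train_sample, live_sample, threshold=0.2):
    """Flag the model for retraining when a feature's live distribution
    has drifted too far from the data the model was trained on."""
    return ks_statistic(train_sample, live_sample) > threshold

train_feature = list(range(100))             # feature values at training time
live_stable = list(range(100))               # production traffic, no drift
live_shifted = [x + 50 for x in range(100)]  # production traffic, drifted
print(needs_retrain(train_feature, live_stable))   # -> False
print(needs_retrain(train_feature, live_shifted))  # -> True
```

A real pipeline would run a check like this per feature on a schedule, alongside monitoring of prediction quality, which is the operational visibility Vish is describing.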
Jon Hyde: I think the other piece that I found really interesting, and John Roese mentioned it in one way and you’ve mentioned it in a different way, is this idea of unintended value that customers get when they start implementing machine intelligence, because we go after these projects with the idea of solving a problem. And I’ll put it in the context of the five whys. Why did that problem exist? Well, why did that problem exist? You keep delving down further into the details, and you find that the problem you’re intending to solve is not only not the problem, but there’s other value you might get out of just looking at the data a different way.
Vish Nandlall: Yeah. I think you’re hitting the nail on the head. It’s becoming more and more apparent that this approach of using AI to solve a specific problem isn’t where you get the real dividend of AI. It’s when you’re actually using it to transform a particular digital process that you start to realize the outsized gains. Being able to use AI to provide predictive repair or predictive maintenance of a system is interesting in and of itself. It gives an early warning system to a customer that, for instance, your storage is going to fail or be depleted. That’s great, but if it doesn’t come attended by a process where there’s now an order entry, where it provides new capability for quarter-to-quarter forecasting, and where it creates back pressure into your supply chain, then you’re missing the opportunity that it presents. The efficiency gain needs to be translated across the enterprise so that you realize the full benefit. Otherwise, it becomes a gain that’s unrealized.
Jon Hyde: Yeah. No, that’s really important to not lose sight of because people throw money at these projects expecting to get an outcome. And when they don’t get the outcome, they’re like, “This was useless,” and they throw it away. But they have all this value that they never recognized and they never capitalized on. They spent all that investment, all that time, and all that opportunity and they just don’t do anything with it. And I think that’s something that I think about a lot when I talk with folks about AI, is there’s a lot of intrinsic value, but there’s a lot of unintended value. And I’m curious, when you think about things like that, is there a customer or a company or an industry that’s really done an impressive job of incorporating and integrating AI into what they do and getting new or better outcomes than what they have?
Vish Nandlall: We’ve been doing a lot of work with customers, including Toyota, to understand how you go from light automation, where I can do automatic braking and lane following, all the way to full navigation, and how to do that in a way that’s safe when other cars might not have automated navigation capability and might not be self-driving. And there we’re seeing an acceleration of capability. We’re already seeing trials for what I would call low-speed automated delivery systems, whether it’s grocery delivery or something similar. And it’s really been this unrelenting pace of how something like AI can revolutionize not just a particular technology, but a whole class of industry, in this case the transportation industry.
Jon Hyde: Yeah, that’s fair. And I think it’s… Just to bring this to a close for our audience. What would be your advice to someone who is hesitant about deploying AI technologies in their business today?
Vish Nandlall: This is a bit of that old adage. If you’re caught in the woods with a friend and a bear approaches, you don’t have to run faster than the bear, you have to run faster than your friend. It’s very similar today in industry. The basis of competition for much of the digital economy is going to be algorithms and models. It’s going to be AI. And if you can’t run faster than your competition, you’re going to get eaten by the bear.
Vish Nandlall: Enterprises have got to be in a mode where they recognize that the basis of competition has shifted, and many across the space already do. The impetus for digital transformation, much as we invest a lot of different meanings into it, is all about extracting better insights from our data and fundamentally leveraging that data in different types of models, like AI, to get ahead of the competition. The race is on. And so, it’s not as simple as I have a choice or I don’t have a choice. I think that’s the poorest framing possible. The real framing is: do you want to be competitive?
Jon Hyde: That is really insightful information, Vish. Thank you so much. Again, Vish Nandlall from the Office of CTO here at Dell Technologies. Thank you so much for your time and enjoy the rest of the day.
Vish Nandlall: Thanks, Jon.
Jon Hyde: Thank you.
Jon Hyde: For those of you who enjoyed this podcast, you can find it at www.delltechnologies.com/nexthorizon, along with future podcasts and other great content focused on emerging technologies. Thank you so much for listening and be sure to subscribe. Until next time, I’m Jon Hyde and this is The Next Horizon.