Kelly Lynch: Hello, everyone. I’m back again for another deep dive into some curious emerging technologies. This week, I’m taking a little bit of a different approach, and I think you’ll see what I mean in a minute, but I wanted to give you a heads up in case you were wondering where the traditional interview-style structure had gone off to. I’d love to know which you like better, so let me know, however you feel comfortable or however it’s even possible, in the comments or something. But anyhow, let’s get to it. I recently learned, within the broader Dell Tech community, that there’s actually an arm of the company focused solely on researching emerging technologies. That group is called the ORO, or Office of the CTO Research Office, and their purpose is…
Malini B.: Basically to keep a pulse on how technology is advancing and continuously innovating to enable Dell Technologies to lead and exploit the industry inflections. Right. So, our mission is to basically serve as an early warning system, and we have a laser focus on nascent and emerging technologies. What we do is we conduct a lot of research studies. We try and build exemplary proofs of concept. And through those artifacts, we are responsible for influencing the corporate technical strategy, overall shaping Dell Technologies’ roadmap and strategy when it comes to these nascent and emerging technologies.
Kelly Lynch: That is Malini Bhattacharjee, technical product manager here at Dell Technologies.
Malini B.: My main areas of focus within ORO have been around 5G, composable infrastructure, as well as cloud. So in these strategic areas, I have been able to look at a lot of interesting future use cases that technology can enable, and basically be involved in figuring out what Dell’s role might be as we move forward and how we can enable some of these technologies to solve some very real-world problems. So, it’s been an interesting journey so far.
Kelly Lynch: Now, one of Malini’s colleagues, Danqing Sha, has partnered with Malini to assess the current state of augmented and virtual reality, otherwise known as AR and VR. Danqing is exploring AR and VR through the lens of advancements in other emerging technologies, much like those Malini has researched, including 5G, cloud, and edge.
Danqing Sha: Currently I’m the principal engineering technologist at ORO China. My research area is mainly focused on advanced human-machine interaction technologies: augmented reality, virtual reality. I have been working on different projects and studies: Cloud AR/VR, AR/VR for 5G, natural user interface, digital twin, and so on. And I also worked on a project which developed an augmented reality-based tool for indoor navigation and real-time data visualization inside a data center.
Kelly Lynch: Before we move forward, I think it’s important to define the difference between AR and VR, because for as long as I’ve known about them, I have embarrassingly used them interchangeably.
Danqing Sha: For VR, you wear a headset and you are in a fully immersive environment. You will not see the real world. But for AR, you can still see the real world. So, it’s a mix of the virtual and the real world.
Kelly Lynch: So just to clarify, and yes, I realize this is a very simplified explanation, you can think of the two like this: virtual reality is kind of like one of those games where you’re wearing a full-on headset and you’re skiing down a mountain, and you see this mountain in front of you, but you’re really just in your living room, swinging your arms around. AR is more like, say, if you were interested in buying a piece of furniture, like a couch, online. With AR, you can use your cell phone to quote-unquote place that piece of furniture in your space, digitally of course, to see if it will fit or even look good. And as I learned in my conversations this week, these two distinct technology types have massive potential, but they’re also limited in what they can achieve at this point in time.
Danqing Sha: So currently, as you know, there are quite a few technical challenges keeping AR and VR from mass adoption, such as latency. AR and VR are very latency sensitive. If the latency requirements are not met, it will cause the user to have motion sickness.
Kelly Lynch: Remember that skiing example we talked about? Yeah, I don’t think any of us want to get motion sick while pretending to ski down a mountain in our living room. So basically latency is bad for AR and VR.
Danqing Sha: Also, resolution: pixel density on current-generation AR/VR displays is very limited. The mobility and the field of view are also very limited, especially for VR. In addition, AR/VR applications also require a large amount of processing power and storage, especially for mobile AR and VR experiences. And the headset is always on while the user is using it, which also causes overheating and battery life [shortages]. Finally, the cost. AR/VR platforms have been very expensive and often dedicated to the task at hand. So all of these restrictions are limiting what AR and VR can do today. Now, with what we call Cloud AR/VR, which is augmented by 5G technologies and cloud and edge computing, we can do everything in the cloud. We can execute and render applications with cloud-based processing resources and stream the necessary view to thin-client headsets.
Kelly Lynch: Quick clarification here; apologies for the interruption. But in case you didn’t know what a thin client is (thin, like thick and thin), I asked Danqing to clarify, and basically she told me this: a thin client is the device with which you are actually using the technology, or through which you are viewing the technology. So for VR, it would be something like a headset that you wear over your entire head, and for AR, that’d be something like your cell phone. Okay. Back to Danqing.
Danqing Sha: Also, cloud-based rendering of content delivered to thin-client headsets can support a more mobile implementation of AR and VR and a longer battery life. And when offloading the GPU compute requirements to the cloud, the resolution of the viewpoint content can be optimized before streaming to the end user’s headset over the 5G network. So to reach its full market potential, 5G is one key enabler of Cloud AR/VR, helping solve the device and cost constraints pressuring AR and VR.
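The cloud-rendering flow Danqing describes can be sketched as a toy Python example. Everything here is illustrative: the class and function names are made up, the "render" step is a stand-in for real GPU work, and a real system would stream compressed video over an actual 5G link rather than pass tuples around.

```python
# Sketch: render at full resolution in the cloud, then size the frame
# to what the thin-client headset can actually display before streaming.

from dataclasses import dataclass

@dataclass
class ThinClient:
    name: str
    display_width: int   # pixels the headset can actually show
    display_height: int

def render_frame_in_cloud(scene_id: str) -> tuple:
    # Stand-in for GPU-heavy rendering on cloud resources;
    # returns the rendered frame's resolution.
    return (3840, 2160)  # rendered at 4K on cloud GPUs

def optimize_for_client(frame_size: tuple, client: ThinClient) -> tuple:
    # Downscale the rendered frame to the client's display,
    # so less data has to cross the network.
    w, h = frame_size
    return (min(w, client.display_width), min(h, client.display_height))

headset = ThinClient("vr-headset", 1920, 1080)
frame = render_frame_in_cloud("ski-slope")
streamed = optimize_for_client(frame, headset)
print(streamed)  # (1920, 1080)
```

The point of the sketch is the division of labor: the expensive step runs in the cloud, and the headset only receives a view tailored to its own limits.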
Kelly Lynch: Basically, cloud and 5G technologies can really expand the possible applications of augmented and virtual reality. Taking this into consideration, Malini, our friend from earlier, conducted extensive research into applying AR and VR technology across a number of different industries. Her ultimate goal: finding applications where AR and VR could add true value and bring about greater equality across the globe.
Malini B.: So, when we look at, say, education, right? Some of the things that we wanted to look at are: how can we further equality in education, or how can we make education more attractive to students all across the world? Today the way we deliver education, the way we deliver learning content, is more, I would say, curriculum-centric, or it’s more instructor-led. And we have kind of a one-size-fits-all approach when we look at the content that is being delivered to students today, right? With all this technology and the shift towards more virtual experiences, can we move more towards something like learner-centric education, or can we add a layer of personalization so that each student, right, has a different view or a different experience interacting with probably the same material when it comes to education? So, let’s take this example of a professor, somebody whose native language is English, right?
Malini B.: So, the professor is speaking and writing something on his or her whiteboard, and their native language is English. But then we have the students, who are probably distributed across the globe, and they have different native languages. So, how can we use technology to make that experience seamless? Is it possible for those students to consume the information that’s coming from the English-speaking professor in their own native languages? Right. So that is one aspect we wanted to look at, and we figured that there were already technologies available. Real-time translation is very much a reality today. We do have very good tools for speech-to-text translation, et cetera.
Malini B.: It was just a matter of stitching all of that together and then providing a layer of modification, so to speak. What we did was we took the video that was coming out of the professor’s feed and then put a layer on top of that, AR if you want to call it that, and then the student basically consumes that in their own native language, and vice versa. If the student wants to, say, ask a question or participate in the discussion, right? They can speak in their own native language, but then what we could do is build a service that does that translation, does the video modification, and the professor in turn sees that in English, right? So, we’re essentially taking the language barrier out of this whole scenario. And we’re making sure that irrespective of where you are in the world or what language you speak, you will still have access to the best quality education that is available out there.
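The "stitching together" Malini describes is essentially a three-stage pipeline: speech-to-text, machine translation, then overlaying the translated text on the professor's video feed. A minimal Python sketch, with placeholder functions standing in for real speech-recognition and translation services (all names and the canned translation are hypothetical):

```python
# Sketch of the translation-overlay pipeline: audio in, captioned video out.

def speech_to_text(audio: bytes, language: str) -> str:
    # Placeholder for a real speech-recognition service.
    return "Welcome to today's lecture."

def translate(text: str, source: str, target: str) -> str:
    # Placeholder for a real machine-translation service;
    # here a tiny lookup table stands in for the model.
    translations = {("en", "es"): "Bienvenidos a la clase de hoy."}
    return translations.get((source, target), text)

def overlay_on_video(frame_id: int, caption: str) -> dict:
    # The "layer of modification": attach the translated caption
    # to the professor's video frame for the student's view.
    return {"frame": frame_id, "caption": caption}

audio_chunk = b""  # audio captured from the professor's feed
english = speech_to_text(audio_chunk, "en")
spanish = translate(english, "en", "es")
annotated = overlay_on_video(1, spanish)
print(annotated["caption"])  # Bienvenidos a la clase de hoy.
```

The reverse direction (student asks a question in Spanish, professor sees English) is the same pipeline with source and target swapped.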
Kelly Lynch: But Malini and her team are not only thinking about AR and VR applications in the education space. They’re also exploring…
Malini B.: Access to healthcare. We were looking at use cases where, like, say you have wearable devices on people. I mean, people are using wearable devices, which sort of take live telemetry on the vital statistics that are required for you to monitor your health. And that is being transmitted to either the edge or the cloud, where this data collection is constantly happening. And then you have these AI-assisted services, which can use the data and kind of recommend a health plan or health care plan in a very automated fashion.
Malini B.: So, using that, when a patient has a conversation with their primary care physician, right, we can use this AI-assisted service, and then we can also use AR/VR there, where you can simulate an in-person doctor visit while being in the comfort of your own home. But then you’re also kind of simulating some of that experience where you are touching and feeling. And basically, if the doctor is explaining something to you, right, explaining the diagnosis to you, they can probably use 3D models or 3D diagrams, which you can then interact with, and that makes it so much easier for the patient to understand what’s going on.
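The telemetry flow Malini outlines can be sketched as a toy example: vitals stream from a wearable to an edge or cloud collector, and an "AI-assisted" service turns the readings into a recommendation. Here a trivial threshold rule stands in for the AI; the threshold, field names, and messages are illustrative assumptions, not medical guidance.

```python
# Sketch: continuous wearable telemetry -> collector -> automated recommendation.

from statistics import mean

def collect_telemetry(readings):
    # Stand-in for constant data collection at the edge or in the cloud.
    return list(readings)

def recommend(readings, resting_hr_limit=100):
    # Stand-in for an AI-assisted service; a real one would be a trained
    # model, not a single heart-rate threshold.
    avg_hr = mean(r["heart_rate"] for r in readings)
    if avg_hr > resting_hr_limit:
        return "flag for physician review"
    return "continue routine monitoring"

stream = collect_telemetry([
    {"heart_rate": 72}, {"heart_rate": 75}, {"heart_rate": 70},
])
print(recommend(stream))  # continue routine monitoring
```

In the scenario from the interview, a recommendation like this would feed into the simulated AR/VR doctor visit rather than replace the physician.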
Kelly Lynch: What I’m about to say is not novel or new, but I did want to call out that it’s not always easy for some people to just willingly accept this level of technology interconnectedness, if you will. So, I asked Malini her thoughts.
Malini B.: Well, I do get where the concern comes from, right? Because the moment you’re talking about collecting data or sharing, in essence you’re sharing your environment with somebody that you’re not able to see, right? Somebody is sitting miles away and you don’t even know who that person is, and they have access to all your data, they have access to your entire environment, right? So, I do get that feeling. I do get the concerns about privacy, data integrity, et cetera. And we do take that very seriously within the ORO team as well. Right. So, whenever we talk about any technology and the possibilities that technology enables, one of the things that inevitably comes up is the ethics of it, right? So we want to be sure that, well, okay, technology enables me to do all of these things, but what are the ethical implications?
Malini B.: Will it lead to unconscious bias? Will it lead to accidental invasion of privacy? Will it inadvertently enhance inequity, right? I mean, how can this data be used in ways that were not originally intended and then end up doing more harm than good? So, those discussions are always at the forefront when we look at any technology, and for any decision that we make or any recommendation that we make, right, we always make sure that we have that angle as well. So yes, I do get the concerns, but then I also want to tell the people who have those concerns that we are thinking along those lines. So for anything that comes as a recommendation, rest assured that those considerations have been taken into account.
Kelly Lynch: And of all the use cases she’s explored, one hits a little closer to home.
Malini B.: So, I have a second grader who is now doing distance learning, and I actually see firsthand how technology is used and perceived by children and how that basically changes the way they learn. So I am really, really interested and really excited to see where technology leads us there. And when we are looking at things like field trips, right, how can we replicate some of those, or if not purely replicate them, how can we augment the experience that students have at this point in time? How can we make it richer for them and more immersive? Right. So, that is what is super exciting for me when I think about this space. And yeah, I mean, the possibilities are endless, right? You could probably send kids on a field trip into Jurassic Park or the solar system. So, the possibilities are endless with the technology, but then how is that perceived by a child?
Malini B.: If somebody is more of a visual learner, rather than somebody who is more of an auditory learner, is it possible to modify the content that’s being delivered and use some of these technologies to deliver it to the learner in a way that is most meaningful to them? What can we do to make sure that we are delivering content to people in the mode that’s best for them, that makes it easiest for them to absorb what’s being said, while at the same time making it seamless, right? So, we’re making it seamless for the instructor as well as for the learner, but we are also bringing in personalization elements, and we’re also bringing in enhancements to make sure that it’s in the best possible form for whoever is consuming it.
Kelly Lynch: So, what does that mean for you or for your kids, besides the possibility of taking a real-life “Magic School Bus” adventure with Ms. Frizzle into the solar system? Well, we may not have the answers right now, but what we do know is that AR and VR, in conjunction with 5G and edge technologies, will enable entirely new modes of learning. And I think that’s something everyone can look forward to. That will do it for this week. Thank you for bearing with me as I test out this new podcast format. I realize it’s a lot of my voice, but we’ll hopefully get to a place where it’s a bit more balanced. So join me again in two weeks to learn about another bit of research going on here at Dell Technologies, this time on the topic of explainable AI. So, until then, I am Kelly Lynch. And as you know, this is The Next Horizon.