Danielle Applestone: If there were a 1950s science fiction movie playbook, this scene might have its own chapter. A strange machine appears, as if from an alien civilization, as complex and powerful as it is mysterious. Nobody knows how it works or whether there’s a limit to its capabilities. Unlocking its secrets could inspire mind-boggling possibilities, and before its potential is unleashed, some of the world’s great scientific minds are put to work simply trying to understand it. Give or take the 50s setting, the black-and-white imagery, the Hollywood trimmings, and the alien part, it’s a pretty tidy description of the task before a Montreal-based team of researchers. The machine they’re studying is the human brain, the same three pounds of standard-issue gray matter that you and I are using right now. This is the story of a new effort to reveal the long-held secrets of the brain and apply them to artificial intelligence, of how it will spark a vast new era of discovery, and of the technology that’s making it all possible. This is Technology Powers X, an original podcast from Dell Technologies. In this episode, technology powers the decoding of the human brain. However new it might feel, AI is an ancient concept. When Jonathan Swift published Gulliver’s Travels nearly three centuries ago, he described something called the engine, a machine on the island of Laputa. It was a machine, he wrote, that could enable the most ignorant person, his words, not mine, to write books of Philosophy, Poetry, Politics, Law, Mathematics, and Theology with the least assistance from genius or study. And that was hundreds of years after mathematicians and poets expressed similar ideas about artificial intelligence. The long history of AI is an incredible story for another day. But if you care to skim ahead to the most recent chapter, you might find yourself at the Courtois NeuroMod project in Montreal, Canada.
As Project Manager Julie Boyle explains, its mission is to chip away at the barriers that separate artificial intelligence and the human mind.
Julie Boyle: The point of the Courtois NeuroMod project essentially is to train artificial neural nets to behave in a human-like fashion and we plan on doing that by using brain data from participants, six participants whom we scanned intensively across various cognitive domains. And the idea is to use that brain data to train artificial neural nets to behave in a more human-like manner.
Danielle Applestone: Project founder Pierre Bellec is Scientific Director. For him, it’s a passion nearly three decades in the making.
Pierre Bellec: So it’s a little bit of a life calling. It goes back to when I was quite young, as a teenager, I was drawn to computers, and we’re talking the early nineties here, and there wasn’t free internet, at least I didn’t have access to the internet. But I was hearing a lot about people hacking and I was really fascinated by the hacker culture. So I started tearing apart games, and I found it really fascinating to be able to essentially break into a black box with the proper tools and gain control over how it was working. And I felt that with the proper infrastructure and tools, you could essentially break any sort of barrier and understand, or at least control, how programs work internally.
Danielle Applestone: That early fascination with video gaming eventually led Bellec to apply the hacker method to crack the brain, in a field known as Neuroinformatics. Creating a baseline for the group’s neuroAI study made strong demands on computing power and storage.
Pierre Bellec: We take one picture of the brain every half a second, roughly. So with one hour of data, you’re looking at, give or take, about 5,000 images of the brain for each participant. In total, we have about 6 million full images of the brain, 3D pictures, and that’s roughly 30 terabytes of data. So that gives you an idea of the size of the dataset for that particular benchmark.
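Those figures hang together: a quick back-of-the-envelope sketch, using only the approximate numbers quoted above, shows what they imply about the size of a single brain volume and the total hours of scanning involved.

```python
# Back-of-the-envelope check on the dataset figures quoted above.
# All numbers are the approximate values from the interview, not exact specs.
images_per_hour = 5_000            # roughly one 3D volume every half second, with gaps
total_images = 6_000_000           # full 3D brain volumes across all participants
total_bytes = 30 * 10**12          # ~30 terabytes for the whole dataset

bytes_per_image = total_bytes / total_images       # implied size of one volume
hours_of_scanning = total_images / images_per_hour

print(f"~{bytes_per_image / 1e6:.0f} MB per 3D brain volume")
print(f"~{hours_of_scanning:.0f} total hours of scanning")
```

The implied figure of about 5 MB per volume, times thousands of volumes per session, is what pushes the dataset into the tens of terabytes.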
Danielle Applestone: While GPUs, or graphics processing units, had taken center stage in AI research, in time a shift to CPUs, or central processing units, would provide the team with more flexible memory. With benchmarks established, the team could begin to study both the human brain and artificial intelligence. To Julie Boyle, a helpful place to start is with a hotdog.
Danielle Applestone: For a child who’s learning language, she says, discerning the difference between a dog and a hotdog is simple. In the realm of artificial intelligence, it’s a lot more complicated.
Julie Boyle: For an artificial neural net, that takes hours and hours and hours and thousands and thousands of examples for it to learn the difference. And in order to do that, you need a lot of computing power and you need a lot of GPU and CPU.
Danielle Applestone: Yu Zhang, a postdoctoral fellow, describes the process of using that imagery, effectively reverse engineering simple human activities to better understand the brain activity that prompts them. It’s part of a bigger initiative some call mind reading, but it’s better described as brain decoding.
Yu Zhang: So what is brain decoding? One simple example is like this. We ask the participant in the scanner to move their fingers, or individual fingers. And then we record their brain activities during the movement. And then we use different tools from machine learning or the deep learning field to try to guess which finger the participant was moving, or what kind of detailed movements, for instance different directions, or grabs, or something like that. So we can know what the participant was doing by simply looking at their brain imaging data.
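The decoding task Zhang describes is, at its core, a classification problem: map a recorded activity pattern back to the action that produced it. A minimal illustrative sketch, using synthetic data and a simple nearest-centroid rule rather than the team’s actual pipeline, looks like this:

```python
import random

# Minimal sketch of brain decoding as a classification problem.
# This is NOT the project's actual pipeline -- the data is synthetic and the
# decoder is deliberately simple. It just illustrates the core idea: learn a
# mapping from recorded activity patterns to the movement that produced them.
random.seed(0)

N_VOXELS = 100                       # hypothetical flattened activity pattern
FINGERS = ["thumb", "index", "middle"]

# Simulate a distinct underlying activity pattern per finger.
prototypes = {f: [random.gauss(0, 1) for _ in range(N_VOXELS)] for f in FINGERS}

def record_trial(finger):
    """One noisy 'scan' of the pattern evoked by moving this finger."""
    return [x + random.gauss(0, 0.3) for x in prototypes[finger]]

def train(trials_per_finger=20):
    """Average a few trials per finger to estimate each pattern."""
    centroids = {}
    for f in FINGERS:
        trials = [record_trial(f) for _ in range(trials_per_finger)]
        centroids[f] = [sum(t[i] for t in trials) / trials_per_finger
                        for i in range(N_VOXELS)]
    return centroids

def decode(centroids, activity):
    """Nearest-centroid decoding: pick the finger whose learned pattern
    is closest (in squared Euclidean distance) to the observed activity."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(FINGERS, key=lambda f: dist(activity, centroids[f]))

centroids = train()
print(decode(centroids, record_trial("index")))
```

Real decoders work on fMRI volumes and use far more sophisticated models, but the train-then-predict structure is the same.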
Danielle Applestone: From the beginning, it was clear that high performance computing and storage would shape the pace and quality of the team’s work.
Pierre Bellec: You need to store in memory possibly hundreds or thousands of transformations of that image. And unless you degrade the resolution of your images a lot, this can get really big, really fast. The type of hardware we had access to originally was GPU cards that came in groups of four that were linked together, and that would give us 64 gigs of RAM on the cards, which is substantial but not enough for that kind of application.
Danielle Applestone: Soon, the team saw itself hitting a wall as their needs outgrew the capabilities of university computers.
Pierre Bellec: But then through McGill, there was this initiative of maybe trying to establish a new center in partnership with Dell that would be focused on neuroimaging and neuroscience application, and they were looking for use cases, demonstrating that this could actually have an impact and make a difference in real world scenarios. And so we were super happy because basically we were in a dead end in the project and we just jumped at the opportunity to see if we could move forward using that new type of hardware.
Danielle Applestone: Luke Wilson is a distinguished engineer with Dell Technologies and the Chief Data Scientist of Dell Technologies HPC & AI Innovation Lab. It’s a place for customers to try out new hardware and explore what he calls the art of the possible through high performance computing and artificial intelligence.
Luke Wilson: And so they came to us for two reasons. One was to help them scale up the training of this process; they were having a very hard time training it on a single box. And the other was to help build a more accurate model that would allow them to move on to the next stage of their research. And so we helped them develop new models and techniques, as well as improve the performance of the training of that neural network so that it could be done very rapidly, in less than 20 minutes, on the Zenith supercomputer within the Dell Technologies HPC & AI Innovation Lab.
Danielle Applestone: Empowered by the newest generation of high-powered computing, work on the benchmark study picked up speed. But the complexity of the human brain remains impossible to grasp in its entirety. Yu Zhang.
Yu Zhang: As we know, in the brain we have billions upon billions of neurons and connections. And with current technology, we don’t have that amount of resources. I think it is not possible, at least in the very short term, to build neural networks or artificial models that function like the brain. We can only capture a very small aspect of it.
Danielle Applestone: Which is why the Courtois NeuroMod project chose an incremental approach, with separate branches exploring specific facets of brain function as Julie Boyle explains.
Julie Boyle: There’s a branch of people that work on emotions, there’s a branch of people that work on language, there’s a branch of people that are working on vision, there is a branch of people that are working on memory, and then we have a branch of people working on video games. We have a branch working on audiology to make sure that when people are being scanned intensively, it’s safe for their ears. And each one of those branches is a large project in itself and they don’t really know what each other are doing.
Danielle Applestone: The project is a hub that brings together all these different cognitive domains. The aim is to create a single, flexible, multi-domain AI model. Which still leaves the question of how: how to go about ‘reading’ activity in the creative engine that drives human thought? The researchers determined that a remarkable amount of insight can be gained by studying brain oxygenation. Pierre Bellec.
Pierre Bellec: We look at the vasculature in the brain and how oxygen changes locally in the brain. So this is a very slow phenomenon. And actually, even with close colleagues who do brain decoding, there was a lot of skepticism that we’d be able to recover information about what people were doing by just looking at very brief recordings of blood oxygenation. And it turns out that we are able to get a fairly accurate prediction even using small amounts of data. So with ten seconds of signal, we get over 80% accuracy, close to 90, looking at 20 different tasks that we tried to decode. The chance level would be 5%, if you like. So it’s not just much, much bigger than chance, it’s actually near perfect. And even looking at a single time point, which takes us a bit over a second to acquire a full brain volume, we’re still able to decode what people were doing in the range of 70% accuracy, which for me was a surprise.
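To put those accuracy numbers in context, the chance baseline Bellec mentions follows directly from the number of tasks being decoded. A tiny sketch, using the approximate figures from the quote above:

```python
# Putting the decoding accuracies in context (approximate figures from the quote).
n_tasks = 20
chance = 1 / n_tasks        # guessing at random among 20 tasks: 5%
acc_10s = 0.80              # accuracy from ~10 seconds of signal
acc_1vol = 0.70             # accuracy from a single ~1-second brain volume

print(f"chance level: {chance:.0%}")
print(f"10 s of signal: {acc_10s:.0%} ({acc_10s / chance:.0f}x chance)")
print(f"single volume: {acc_1vol:.0%} ({acc_1vol / chance:.0f}x chance)")
```

Even the single-volume result is roughly fourteen times better than random guessing, which is why such brief recordings surprised the decoding community.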
Danielle Applestone: And the human activity that helps reveal the complex interactions of the brain. What else? Video games. Julie Boyle.
Julie Boyle: Humans are really great at transfer learning. You learn something in one domain and you use the type of knowledge that you have to apply it to another. Even though you’ve never roller-bladed, if you’ve skated, you can transfer that type of knowledge. The star component, I think, of Courtois NeuroMod, or the main component, is the video games. And the idea is that video games pull on all of these modalities. It’s the most complex task that we’re using. If you think of just a pure language task, or a pure vision task, or a pure auditory task, a video game does all of those things. And on top of it, there’s decision-making, there are challenges, there’s something called flow, where you get into the game and you forget that you’re playing, you’re in the zone kind of thing. So it opens up a whole bunch of possibilities for studying, and it gets us closer to this multimodal complex cognition that we do so flawlessly as humans.
Danielle Applestone: Despite breathtaking advances in the computing tools needed to understand the brain, and despite near-universal belief in the potential of AI, most real-world applications are only just beginning to reveal themselves. Some of the most obvious, most urgent are medical. Pierre Bellec believes the project’s methods may provide insights for the study of dementia and Alzheimer’s disease. Julie Boyle cites AI applications where algorithms clearly outperform humans in detecting lung cancer from patient images. The team’s desire to accelerate innovation in AI is embedded in its open science philosophy. Sharing its data with other scientists, they feel, honors the spirit of science and discovery and means reaching goals sooner.
Julie Boyle: Open science is a new concept, and it makes people who don’t have a similar mindset nervous. Not everyone wants to share their data. So what happens is that we have six participants that we scan really intensively, about a hundred hours of neuroimaging data a year. And we’re going to output that data as fast as we can to the community. We’re not going to sit on the data or protect it, which is what a lot of teams do. I’m not judging other teams; it’s just that, for us, getting the data out to the community so that the field can push itself forward is what matters.
Danielle Applestone: An engine helping drive this AI research is the Zenith supercomputer, housed in Dell Technologies’ HPC & AI Innovation Lab. This high performance computing cluster is a single system built out of many individual servers. Luke Wilson.
Luke Wilson: So Zenith is an ever-evolving system. Right now it’s built on the PowerEdge C6420 and the R740 Intel-based server building blocks from Dell EMC. But we have a new generation coming out that we’re going to be upgrading Zenith to eventually, which consists of the PowerEdge R750 and PowerEdge C6520. And those bring an enormous number of improvements in terms of the ability to process artificial intelligence workloads, the ability to process high performance computing workloads, and the ability to integrate the latest generations of accelerator peripherals and network cards, thanks to the move to fourth-generation PCI Express. And so the ability to deliver extreme compute density, extreme scalability, and better results faster through this next generation of PowerEdge servers is something really exciting. I think it will help to very much advance the art of the possible in scientific research and high-performance computing capabilities going forward.
Danielle Applestone: Just as valuable to the team as the supercomputer was the guidance and advice they received on how to use it effectively. Pierre Bellec.
Pierre Bellec: I’d like to emphasize that it was not just getting access to the servers, although that was arguably the biggest contribution. We also needed a lot of help to use this hardware correctly. So we needed to compile our machine learning libraries for that hardware, use the memory properly, and distribute our jobs across multiple compute nodes appropriately.
Danielle Applestone: An incredible feature of this relationship is that by putting a supercomputer through its paces to advance AI, the team is also helping inspire the next generation of technology. Luke Wilson.
Luke Wilson: These types of projects, whether they come from a university, from industry, or from our technology partners, help us push the art of the possible, and that helps us invent the next generation of PowerEdge servers and PowerSwitch networking equipment, by helping us to know what the world of computation is going to look like in the next 18 months to two years. And so pushing the art of the possible with our customers, in terms of algorithms and software and capabilities, allows us to push forward the quality and abilities of the PowerEdge server portfolio and the rest of the Dell EMC infrastructure portfolio in ways that we wouldn’t be able to do without it.
Danielle Applestone: One goal for AI is to imbue machines with imagination. And that could unlock so many possibilities that the human imagination may need time just to grasp them. What helps drive teams like Courtois NeuroMod is the unknown: the uncertainty as to which long-held mysteries their work may help resolve. Pierre Bellec.
Pierre Bellec: Because we’ve been talking about a lot of very advanced technology, and these things really can make people dream, they definitely make me dream, I’d like to emphasize that I have no idea what I’m doing. Like many people right now, we are mixing technologies in ways they have not been mixed before, and we need to learn a lot along the way. And at the end of the day, there are many unknowns about where that will lead us.
Danielle Applestone: And that, says Julie Boyle, is the very essence of discovery.
Julie Boyle: What excites me about this project is the fact that we don’t entirely know where we’re headed. I love working with the team, I love working on something that’s so cutting edge that we don’t entirely know where we’re going to be in six months. And there’s something really exciting about being part of that type of science because you feel like you’re forging the way in some small way.
Danielle Applestone: This is Technology Powers X, an original podcast from Dell Technologies. For more information on the latest Dell EMC PowerEdge servers including the R750 and the C6520, go to DellTechnologies.com/Servers. For more information on Dell Technologies HPC Solutions, go to DellTechnologies.com/HPC. To discover more about this episode, our speakers, and to read the transcript, visit DellTechnologies.com/TechnologyPowersX. I’m Danielle Applestone. Thanks for listening.