2.10 – Bringing Goodness to AI Through Explainability


In this episode:

  • Bridging the gap between fear and trust in AI (01:19)
  • Review of traditional programming vs. AI (01:45)
  • Re-imagining traditional programming vs. AI in the context of a paper shredder (02:30)
  • Difficulty understanding outputs from AI and ML algorithms (04:20)
  • Why it would be impossible to write a program for an autonomous vehicle (05:02)
  • The best definition for explainable AI (06:36)
  • A clear example of the need for explainability through the lens of autonomous cars (07:13)
  • Explainable AI as Responsible AI (07:59)
  • Is explainability only going to help the internals of AI algorithms? (09:07)
  • How voice assistants are becoming more and more regionally aware (10:20)
  • Reaching a new level of trust with AI (11:30)

Voice assistants. Movie recommendations. Car insurance. Facial recognition software. All of these things use artificial intelligence and machine learning in one way or another. For the most part, we don’t feel the need to question why we get a particular movie recommendation on whatever streaming application we use. However, when we start using applications that have the potential for more serious outcomes, understanding that “why” is far more important.

This is where explainable AI comes in. Unlike traditional programming, AI and ML algorithms are trained using batches of data, not a set of specific instructions. So when Netflix recommends a rom-com and we look to understand why, that isn’t always clear. With things like autonomous vehicles (developed with AI/ML algorithms) increasingly becoming a reality, comprehending why a car stopped when it did becomes much more of an imperative.

This week, Roberto Stelling and Adriana Prado, two researchers from the Office of the CTO Research Office in Brazil, join host Kelly Lynch to talk through exactly what explainable AI is, why it is such an important factor in the future development of algorithms, and how, once we can fully trust it, AI has the potential to positively impact our lives.

Guest List

  • Kelly Lynch is the host and producer of The Next Horizon podcast as part of the Technology Thought Leadership Team at Dell Technologies.
  • Roberto Stelling is a data science advisor in the OCTO Research Office (ORO) in Brazil, researching applied AI/ML techniques.
  • Adriana Prado is a data scientist consultant in the OCTO Research Office (ORO) in Brazil doing research on AI/ML technologies.

Kelly Lynch: Hello again. Thanks for coming back to spend some time with me this week. As promised, I’m joined by another two of my colleagues at Dell Technologies from the OCTO Research Office, this time from Brazil. We’re digging into the topic of explainable AI this week. And if you haven’t already, I highly recommend going back to listen to episode four of the second season. In that episode, Bronze Larson talks through the third wave of artificial intelligence. And in that third wave, explainable AI will become even more relevant. So do you want to go listen to that now? Yeah, go back and check it out. Then come back. I’ll still be here.

Roberto Stelling: One of the roles that explainability [inaudible 00:00:45] will play in AI, in artificial intelligence, is really in addressing this mistrust of artificial intelligence solutions.

Kelly Lynch: That is Roberto Stelling, a researcher at Dell Technologies, exploring the need for explainability within artificial intelligence and machine learning models.

Roberto Stelling: There is this kind of a Frankenstein complex in most people, so when they think of AI, they think of the Terminator, or they think of something that will eventually wipe out humanity. So one of the roles of explanation is to bridge this gap between the fear of AI in the future and the reliability and trust in the use of AI.

Kelly Lynch: So how do we bridge that gap between fear and trust in artificial intelligence? I think before we move forward, we should quickly revisit the evolution of AI and machine learning, just so we know what we’re dealing with, at least I needed a little bit of a refresher. If you wouldn’t mind, just giving me an overview, what is artificial intelligence and machine learning and what do those current algorithms look like?

Roberto Stelling: Okay. So, when we talk about artificial intelligence in this context, we are usually referring to the use of machine learning within artificial intelligence solutions. Machine learning is an area of artificial intelligence that uses a set of methods that learn answers from data. And it’s a big shift from traditional programming. In a traditional program, like a telephone, a map, a game, or a spreadsheet, our input goes through a set of instructions and the program produces an answer based on those instructions. Every action of the program happens inside this loop of input, instructions, output.

Kelly Lynch: I’m going to need to stop already because, as a non-programmer, I needed some time to sit with this idea and find a relatable, tangible, non-computer-based example of what traditional programming is in the context of, as Roberto said, “input, instructions, output.” So what did I come up with? A paper shredder. Okay. Okay. Hear me out. The input is the paper. The instructions are what the person who built the machine told it to do when it receives said input, i.e., the paper. And the output is what happens after the paper goes through the instructions in the machine, i.e., shredded paper. So, there’s an input of paper, a machine that has been fed instructions to shred paper when it is given paper, and then the output is a shredded piece of paper. And there you have it, folks: traditional programming, brought to you by paper shredders and non-programmers like myself everywhere.
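For readers who want to see Kelly’s analogy in code, here is a minimal Python sketch of the same “input, instructions, output” loop. The shredder function and its rules are purely illustrative and do not come from the episode.

```python
# A minimal sketch of traditional programming: the programmer writes
# every instruction by hand, and the program only does what those
# instructions cover. The shredder example mirrors Kelly's analogy.

def shredder(item: str) -> str:
    """Input -> explicit instructions -> output."""
    if item == "paper":
        return "shredded paper"   # rule decided in advance by the programmer
    return "jam"                  # unexpected input is simply unhandled

print(shredder("paper"))      # -> shredded paper
print(shredder("cardboard"))  # -> jam: no instruction covers this input
```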

Roberto Stelling: This model of solving problems is fine for simple tasks, but it is inadequate for some complex problems, or for tasks where the program must take proper action even in the presence of unexpected input.

Kelly Lynch: These more complex tasks, like the ones Roberto was talking about, are where machine learning comes in. So in the instance of a paper shredder, if we’re really going to stick with that analogy, it goes a little something like this: you have that same input, the paper, and you have the output, the shredded paper, and a machine that is, in a sense, still uninstructed, like a blank slate. Instead of telling that machine, via traditional programming, to shred paper when it is given said paper, you teach the machine what an input looks like and what an output looks like, plain paper and shredded paper, and anticipate that it will derive the instructions, in this case to shred the paper, on its own.
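And here is the machine-learning flip side of that same analogy, sketched in Python: instead of writing the instructions, we hand the machine example inputs and outputs and let it derive the rule itself. The scikit-learn model and the toy features (whether the item is paper, and how thick it is) are assumptions made for illustration only.

```python
# A minimal sketch of learning the "instructions" from examples instead
# of writing them by hand. Features and labels are invented toy data.
from sklearn.tree import DecisionTreeClassifier

# Each example: [is_paper, thickness_mm]; label 1 = shred, 0 = reject.
X = [[1, 0.10], [1, 0.20], [0, 2.0], [0, 5.0], [1, 0.15], [0, 3.0]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)   # the rule is derived, not written
print(model.predict([[1, 0.12]]))            # -> [1]: shred it
```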

Roberto Stelling: The flip side of this point is that, although it’s somewhat easy to figure out what went wrong in a traditional program, it may be very difficult, if possible at all, to figure out why a machine learning model, such as a deep neural network, produces a certain answer for a given input.

Kelly Lynch: So here is where explainability really starts to come in. It is much more difficult with machine learning to understand an output because, as you’ll remember, no instructions were written and provided by a programmer to the machine. The machine gradually learned a set of instructions on its own after being fed both the inputs and the outputs to learn from. So I wondered, then, why don’t we just stick with traditional programming?

Roberto Stelling: So imagine writing a program to drive an autonomous car. The input for this program is everything that is around the car, the direction it is moving, and any other information captured by the car’s sensors. And the output is the full set of actions the car must take at every moment, like going a bit to the right, stopping, reducing speed, accelerating, signaling, or taking evasive action in case of danger. Now imagine writing the instructions for such a self-driving car for all possible scenarios. We cannot do that. The next best thing is to develop an artificial intelligence solution that will be fed and trained with an extremely high number of possible scenarios, like driving at night, at dawn, in urban areas, in rural areas, with animals on the road, with cars carrying bikes, with cyclists, with a tow truck towing a truck, and so on, among many other examples.

Kelly Lynch: Ah. Okay. So writing a program for an innumerable number of inputs and potential outputs is not only super inefficient, it’s likely impossible. But if we can’t easily figure out what went wrong when something does go wrong, do we have to understand what’s going on behind the scenes? Or is it okay to just accept the outcome?

Adriana Prado: So the thing, Kelly, is that some problems are much less sensitive to the lack of explainability.

Kelly Lynch: That’s Adriana Prado, another researcher at Dell Technologies and a close colleague of Roberto’s, who is also investigating the concept of explainable AI.

Adriana Prado: I recently conducted a study on this topic, and what I realized was that there is actually no consensus in the literature about what explainable AI is, exactly. But in my view, the best definition I found is that explainable AI, or XAI for short, is the idea of creating AI algorithms capable of explaining their rationale and decision-making process in a human-understandable way, that is, via a clear sequence of logical arguments.
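One rough way to picture that “clear sequence of logical arguments” is to train an interpretable model and read back the rules it learned. The tiny loan dataset, the feature names, and the choice of a shallow decision tree below are illustrative assumptions, not anything Adriana describes.

```python
# A minimal sketch: a shallow decision tree whose learned rules can be
# printed as a human-readable chain of if/else conditions.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income", "debt_ratio", "late_payments"]
X = [[60, 0.2, 0], [30, 0.8, 3], [45, 0.4, 1], [25, 0.9, 4], [80, 0.1, 0]]
y = [1, 0, 1, 0, 1]   # 1 = approve, 0 = reject (toy labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))
# Prints an indented chain of if/else conditions ending in a predicted
# class: a sequence of logical arguments a person can read and challenge.
```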

Kelly Lynch: And why exactly do we need this clear sequence of logical arguments?

Adriana Prado: So what if the AI algorithm leads the auto-driving system to make a mistake and puts someone’s life in danger? How could the developer of the system, or an authority, investigate what went wrong if we do not understand the internals of the algorithms?

Roberto Stelling: At a certain point, autonomous cars will drive better than humans. But then again, even when an autonomous car makes a bad decision or a mistake, we will need to understand why it made that mistake. Minimally to prevent it from happening again, but also for ethical, regulatory, or liability issues.

Kelly Lynch: And these ethical issues Roberto brings up highlight yet another serious need for explainability in AI algorithms, which, luckily for me, Adriana went into a little further.

Adriana Prado: And XAI is very much related to the term responsible AI, which refers to AI that also considers ethical and fairness issues, as Roberto mentioned before. And the idea here is to be sure that there is an auditable, provable way to defend decisions being made by AI algorithms. So, for example, we can consider ML algorithms being used by creditors. It’s unacceptable that such algorithms discriminate against any applicants due to their race, religion, gender, and the like.

Kelly Lynch: So, embedded explainability can help us get to a point where, instead of you or your loan officer wondering why you were not approved for a specific AI-powered loan application, explainability can pinpoint exactly why you received that rejection. And that’s important for so many reasons, the primary one being to reduce and eliminate bias, which frankly is a topic for an entire podcast of its own. But explainability doesn’t just stop with the internals of the algorithm. It can also help with understanding the entire modeling process.
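As a loose illustration of what “pinpointing exactly why” could look like, the sketch below scores a hypothetical rejected application with a simple linear model and lists how much each input pushed the decision. The model, feature names, and numbers are all invented; real credit systems would rely on more careful attribution methods.

```python
# A minimal sketch of per-feature contributions (coefficient x value)
# for one rejected applicant. All data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "late_payments"]
X = np.array([[60, 0.2, 0], [30, 0.8, 3], [45, 0.4, 1],
              [25, 0.9, 4], [80, 0.1, 0], [35, 0.7, 2]])
y = np.array([1, 0, 1, 0, 1, 0])            # 1 = approved, 0 = rejected

clf = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([28, 0.85, 3])          # the rejected application
contributions = clf.coef_[0] * applicant     # how each input moved the score
for name, value in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>15}: {value:+.2f}")       # most negative = main reasons
```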

Adriana Prado: Which data did you use for training? Why did you use it? What were the pre-processing steps? Which algorithm did you use? Is it really necessary to train a deep neural network? Could you use a less complex model in order to have more explainability? How did you validate your model? What are the model’s limitations? So, this point brings explainability and also reproducibility, which is also important, especially for us researchers. So this is how we move forward. It’s not only understanding the internals, but the whole process, from the beginning, from the data that you are capturing, until the deployment of the model.
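Adriana’s questions map naturally onto the practice of documenting the whole pipeline. A bare-bones sketch of recording those answers, with placeholder values made up for illustration, might look like this:

```python
# A minimal, purely illustrative record of the modeling process, so the
# answers to the questions above stay auditable and reproducible.
model_card = {
    "training_data": "credit_applications_2019.csv (source, date, consent noted)",
    "why_this_data": "matches the population the model will actually score",
    "preprocessing": ["drop rows with missing income", "scale numeric features"],
    "algorithm": "logistic regression (a deep neural network was not needed)",
    "validation": "5-fold cross-validation plus a held-out 2020 sample",
    "limitations": "not validated for applicants outside the training region",
}

for question, answer in model_card.items():
    print(f"{question:>15}: {answer}")
```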

Kelly Lynch: And whether you realize it or not, AI and machine learning algorithms are all around us already.

Roberto Stelling: We see it in earthquake prediction software, in stock market analysis, in thermography image analysis, in face recognition, in movie recommendations, in real estate valuation, in chatbots, in hiring solutions, in credit applications, and in many other solutions.

Kelly Lynch: Solutions including that little voice assistant you might have on your coffee table right now.

Adriana Prado: So some weeks ago, for example, I was surprised to see my son playing with Google Assistant, because the Assistant gave answers that only make sense in Brazil. We have acai in Brazil, and we commonly eat it with granola on top. So my son said, “Google, I love you.” And then the Assistant said, “You are the granola of my acai,” just like that. This was so amazing for us. We had a lot of fun together, just enjoying the answer that the Google Assistant was giving us.

Kelly Lynch: Yes, yes. I know this is certainly a more fun and less serious example of AI’s involvement in our lives. But Adriana made sure to drive home the point that AI has far, far-reaching potential to improve our lives beyond just our voice assistants telling us they love us. However, it cannot truly help us improve our lives unless we trust in those AI solutions. And how do we get that trust? Yep. Explainability.

Adriana Prado: So what I believe is that our society will not trust AI systems until we demonstrate clearly that they perform well and pose minimal risk to people’s lives. Especially for systems now being applied in crucial areas of society like justice and health.

Kelly Lynch: And if you’re still not certain or fully trusting of AI, even with this notion of explainability being applied across these solutions, think about this: a quote from Adriana and Roberto’s colleague.

Adriana Prado: When he was giving a talk about AI at a benefits event our group organized, he said that AI is a field for those who are interested in solving hard problems, but who are also good at heart. So, I believe this quote really wraps up the concept that explainability is a way to bring goodness to AI, and to the AI practice in general.

Kelly Lynch: It’s pretty clear that artificial intelligence and machine learning will be around for a while. So again, if you’re a little nervous about its impact on your future, let me just venture to say that you’re probably in pretty good hands with the minds and the hearts of people like Roberto and Adriana working toward a future of explainability and goodness in artificial intelligence. Thank you again so much for joining me this week. Explainable AI is certainly something I learned a ton about while talking with Adriana and Roberto and I hope I was able to share our conversation in a way that also maybe taught you something new. With 2020 wrapping up soon, I’ll bring you one more podcast before the end of the year and I hope you join me for that. Seriously, thank you so much for listening. Again, I’m Kelly Lynch and this is The Next Horizon.