By Marty Graham, Contributor
Rana el Kaliouby, co-founder and CEO of Affectiva, believes that teaching algorithms to read human faces and recognize our emotions is an important step in building artificial intelligence (AI) that we can trust—and that will trust us.
“A lot of the rhetoric around AI is ‘can humans trust AI,’ but I think it’s a two-way street—it’s reciprocal trust. … I think of this as a human-machine partnership, and for a partnership to work, you need this reciprocal trust. So we turn it on its head and we ask, ‘Should AI trust in humans?’”
As strange as that question sounds, it’s not all that abstract or obscure; for people to partner with AI, AI must learn the nuances of human communication. There are situations where safety may depend on it.
Consider semi-autonomous vehicles—those with a human at the wheel who is not continuously driving. There are situations the car’s algorithm can’t anticipate, like when the vehicle approaches a flatbed truck full of bundled yard waste with a bicycle tied to one bundle. Is it a truck or a bicycle? Which training protocol—keeping a safe following distance or slowing for a bicycle—should the algorithm rely on?
Without the driver, the algorithm doesn’t have the cognitive intelligence to quickly make sense of what it’s seeing and react appropriately. At a moment like this, being able to pass control between the algorithm and driver is critical. But the algorithm needs to know the driver is paying attention so it can trust the person to take full control, el Kaliouby explains.
Training the algorithm to recognize when the driver is unprepared to step up is one of the intelligent, human-sensitive projects based on advanced facial recognition algorithms el Kaliouby and Affectiva are bringing to market.
The tool also monitors drivers directly: it alerts those who are distracted—whether texting or falling asleep—by reading the tilt of the head and the narrowing of the eyes.
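The article doesn’t detail Affectiva’s internal logic, but the idea of turning head-pose and eye-openness signals into a distraction alert can be sketched with a hypothetical rule-based check (the function name, thresholds, and signal scales below are all illustrative assumptions, not Affectiva’s API):

```python
def distraction_alert(head_tilt_deg: float, eye_openness: float,
                      tilt_limit: float = 20.0, open_min: float = 0.3) -> bool:
    """Hypothetical sketch: flag a driver as distracted or drowsy.

    head_tilt_deg: estimated head tilt away from the road, in degrees.
    eye_openness:  estimated eye aperture, 0.0 (closed) to 1.0 (wide open).
    """
    looking_away = abs(head_tilt_deg) > tilt_limit  # e.g., glancing at a phone
    eyes_closing = eye_openness < open_min          # e.g., drooping eyelids
    return looking_away or eyes_closing

# Attentive driver: eyes open, head level -> no alert
print(distraction_alert(head_tilt_deg=2.0, eye_openness=0.9))   # False
# Head turned sharply toward a phone -> alert
print(distraction_alert(head_tilt_deg=35.0, eye_openness=0.9))  # True
```

In a real system these signals would come from a continuously running facial-analysis model, smoothed over time rather than checked frame by frame.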
Emotive Recognition in Algorithms
El Kaliouby has been a pioneering evangelist for incorporating emotional intelligence into AI. In 2009, during her second of four years at MIT, she and professor Rosalind Picard spun Affectiva out of the university’s Affective Computing Group, where they began developing, testing, and refining this unusual use of machine learning: recognizing emotions by physical signs.
She started training algorithms on emotive recognition to help children diagnosed with autism spectrum disorder, whose reluctance to look others in the eye and difficulties understanding social cues often cause considerable challenges as they navigate the world.
Through Google Glass and a tablet running the algorithm, facial expressions that may baffle autistic children are quickly read, and an explanation of what the look suggests is displayed on the Glass.
Once the MIT team found success with the nascent emotion recognition algorithm, it was a short step to ask whether the algorithm could gather and analyze emotional expressions for other uses. The first product Affectiva brought to market, in 2010, was Affdex for market research, an emotion recognition tool now in use in 87 countries by a quarter of the Fortune 500. It is designed to gather viewers’ facial responses, allowing Affectiva’s clients to ensure their videos will elicit the desired viewer reaction.
“The gold standard before our technology came along was just ask people … ‘Hey, did you like the ad?’ And that’s very biased, it’s not objective, it’s not actionable,” el Kaliouby explains. “So now we can, moment-by-moment, understand how people respond to the ad, and the marketers that advertise it use [this information] to either edit the ad in a particular way or decide how much media dollars they are going to put behind [it].”
However, part of el Kaliouby’s mission is to ensure her work is used for good, to advance human life and well-being. A few years ago, she turned down a big payday from a CIA-backed venture capital fund because she didn’t want her algorithm used to spy on people. Before potential clients can use Affdex, they are vetted for intent and must agree to limit its use, promising not to leverage it for surveillance, lie detection, or other nefarious activity.
It’s in the Training
Emotion recognition is complex—it draws on speech patterns and head movement as well as facial expression—which means it requires an enormous amount of data to train the algorithm. It also requires that the data be drawn from a very diverse population.
El Kaliouby has engaged millions of people—sometimes at their own computers and phones—to capture facial images of a wide range of reactions, teaching algorithms to read states ranging from drowsiness to excitement, joy, confusion, and fear.
“We use deep neural networks or deep learning to train these algorithms, and the way these networks work is they’re very data-hungry, so you have to feed them. For example, if you were training the algorithm to tell the difference between a smile and a smirk,” she illustrates, “you would give it hundreds of thousands of people smiling, and then hundreds of thousands of examples of people smirking. The deep neural network learns the difference between expressions.”
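The training loop she describes—show the model many labeled examples of each class and let it adjust its weights—can be illustrated with a toy supervised classifier. This is a deliberately simplified sketch, not Affectiva’s system: it uses logistic regression on synthetic feature vectors standing in for extracted facial features, with made-up class clusters for “smile” and “smirk”:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for facial features: class 0 ("smile") clusters
# around +1, class 1 ("smirk") around -1 in an 8-dimensional feature space.
n, d = 1000, 8
X = np.vstack([rng.normal(+1.0, 1.0, (n, d)),
               rng.normal(-1.0, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by gradient descent -- the same
# "feed labeled examples, adjust weights" loop that deep networks scale up.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(200):
    p = sigmoid(X @ w + b)            # predicted probability of "smirk"
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The “data-hungry” point follows directly: a deep network has millions of weights instead of nine, so it needs the hundreds of thousands of examples per expression she describes rather than a few thousand.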
Diversity of data matters at least as much—and likely more—in emotion-smart AI as it does in any other machine learning context, she continues. Differences in how people look, in cultural norms, in languages and nuances are vitally important to capturing a trustworthy data set.
Just as the work of helping children read faces evolved into studying viewers’ reactions, the pursuit of diverse data led el Kaliouby to think about emotional intelligence and bias in the job interview process.
In a test run, incorporating emotional intelligence into hiring practices resulted in a significant increase in hiring people of diverse backgrounds—and that benefits the employer, she says. Using the emotional quotient approach, Unilever found it hired a workforce that was 13 percent more diverse.
“That’s an example where AI could actually help mitigate some of the biases we have,” she says. “AI is fascinating in the sense that it’s almost a mirror of who we are as a society.”