A group of researchers — Nathan R. Kuncel, Deniz S. Ones, and David M. Klieger — analyzed 17 studies of job applicant evaluations and found that a simple algorithm outperforms human decision-making by at least 25%. Why does this seem counterintuitive? Shouldn’t human experience and understanding of company culture have a higher predictive power?
It turns out that human beings are good at defining what a job requires, and good at eliciting information from candidates to help evaluate them. But people are simply bad at synthesizing that information and making the right determination. Why? The researchers explain:
The problem is that people are easily distracted by things that might be only marginally relevant, and they use information inconsistently. They can be thrown off course by such inconsequential bits of data as applicants’ compliments or remarks on arbitrary topics—thus inadvertently undoing a lot of the work that went into establishing parameters for the job and collecting applicants’ data. So they’d be better off leaving selection to the machines.
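The "mechanical" alternative the researchers favor is straightforward: rate every candidate on the same predictors and combine the ratings with fixed weights, so no stray remark can shift how the evidence is counted. A minimal sketch of that idea, with hypothetical predictors and weights (nothing here comes from the study itself):

```python
# Mechanical combination: every candidate is scored on the same predictors,
# and scores are merged with fixed weights. The weights never change from
# one applicant to the next -- that consistency is the whole point.
# Predictor names and weights below are illustrative, not from the study.

WEIGHTS = {
    "structured_interview": 0.35,
    "work_sample": 0.35,
    "cognitive_test": 0.20,
    "reference_check": 0.10,
}

def mechanical_score(ratings):
    """Combine per-predictor ratings (0-100 scale) with fixed weights."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

candidates = {
    "A": {"structured_interview": 80, "work_sample": 70,
          "cognitive_test": 90, "reference_check": 60},
    "B": {"structured_interview": 75, "work_sample": 85,
          "cognitive_test": 70, "reference_check": 90},
}

# Rank candidates purely by the combined score.
ranked = sorted(candidates, key=lambda c: mechanical_score(candidates[c]),
                reverse=True)
```

A human evaluator, by contrast, effectively reweights the predictors on the fly for each applicant, which is exactly the inconsistency the researchers describe.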
Needless to say, there would be strong resistance to this idea. Surveys suggest that when assessing individuals, 85% to 97% of professionals rely to some degree on intuition or a mental synthesis of information. Many managers clearly believe they can make the best decision by pondering an applicant’s folder and looking into his or her eyes—no algorithm, they would argue, can substitute for a veteran’s accumulated knowledge. If companies did impose a numbers-only hiring policy, people would almost certainly find ways to circumvent it.
Other research has shown that human cognitive biases get in the way, too. People tend to give more weight to experiences and backgrounds they share with candidates, like attending the same schools, speaking the same dialect, or having a common religion.
As a result, more companies are leaving hiring decisions to algorithms. For example, the direct marketing company Harte-Hanks uses software from Cornerstone (formerly Evolv) to pick the best candidates for call center jobs. The approach is based on what Cornerstone calls “workforce science,” a hard turn toward data-driven, algorithmic HR. Aki Ito reported on it in 2013, writing:
Harte-Hanks found call center agents selected by Evolv’s software had a 35 percent lower 30-day attrition rate, reported 29 percent fewer hours of missed work in the first six months and handled calls 15 percent more quickly than those hired through the company’s existing recruiting services provider at the time.
Algorithmic HR raises some legal questions — because people who aren’t rated a good fit are effectively blocked from the jobs — but the results are hard to argue with.
And the managers who believe that looking a candidate in the eye still has a place may be interested in the facial analysis program under way at the Milwaukee Bucks. The team has hired Dan Hill, a “facial coding” expert, to analyze draft picks and rate them psychologically. Hill is a practitioner of Paul Ekman’s well-regarded Facial Action Coding System (FACS), which determines which of the face’s 43 muscles are working at any given moment. Combinations of those movements translate to specific emotions, including the seven core emotions: happiness, surprise, contempt, disgust, sadness, anger and fear.
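In FACS, each muscle movement is coded as a numbered “action unit” (AU), and coders look for characteristic AU combinations. A toy sketch of that lookup, using a deliberately reduced and simplified subset of AU-to-emotion associations (the real FACS specification is far more nuanced):

```python
# Toy FACS-style lookup: action units (AUs) are numbered facial muscle
# movements; certain combinations are commonly associated with basic
# emotions. This mapping is an illustrative subset, not the full system.

EMOTION_AUS = {
    "happiness": {6, 12},       # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},      # inner brow raiser + brow lowerer
                                # + lip corner depressor
}

def classify(observed_aus):
    """Return emotions whose characteristic AUs are all present."""
    observed = set(observed_aus)
    return [emotion for emotion, aus in EMOTION_AUS.items()
            if aus <= observed]
```

Trained human coders apply rules like these frame by frame, which is precisely the slow part that automation targets.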
Kevin Randall, in his New York Times article “Teams Turn to a Face Reader, Looking for That Winning Smile,” wrote:
Before the 2014 draft, Hill spent 10 hours with Milwaukee’s team psychologist, Ramel Smith, watching video of various college prospects and picking apart the psyches of potential picks. The Bucks had the No. 2 selection over all as well as three second-round picks, one of which they traded.
A vexing player at the top of the draft was Dante Exum, a point guard from Australia who was projected to be taken among the top four selections. Smith had done player personality analyses but wanted to validate them by having Hill present his player assessments first. The Bucks selected Jabari Parker with their top pick, and Exum fell to Utah at No. 5.
“Nothing against Exum, but emotional resiliency, stability and an immediate, assured presence were all key considerations in support of selecting Parker,” Hill said.
Until he sustained a severe knee injury on Dec. 15, Parker was among the leading candidates for Rookie of the Year honors, averaging 12.3 points and 5.5 rebounds. Exum is averaging 4.9 points and 2.0 assists coming off the bench for the Jazz.
And of course, Ekman has been working with computer vision and artificial intelligence (AI) researchers to develop an automated tool for FACS, because it turns out that while people can be trained in the system, analyzing faces frame by frame in video is hard, slow work.
Just another dimension where AI will be doing a better job than human beings in the next few years.
This post was written as part of the Dell Insight Partners program, which provides news and analysis about the evolving world of tech. To learn more about tech news and analysis visit Tech Page One. Dell sponsored this article, but the opinions are our own and don’t necessarily represent Dell’s positions or strategies.