A Watershed Moment for AI: Machine Learning at the Frontline of Cyber Threats

In the annals of cyber warfare, machine learning’s ability to detect and thwart threats confronts a very real problem: too many false alarms.

By Russ Banham, Contributor

With vast volumes of data flowing across the network and an equally vast array of threats to scour for, security experts shoulder the burden of hunting for anomalies that could indicate the presence of an intruder.

But not everything that looks suspicious actually is suspicious. Compounding security experts’ already daunting challenge of monitoring thousands of malware variants and malicious URLs is that traditional intrusion detection systems often aim the searchlights at too many potential suspects (for example, the employee who logs on to the network late at night to print birthday invitations).

“Just because a behavior is anomalous doesn’t mean it’s malicious, but at least a security analyst can gain more evidence to this effect.”

—Jon Ramsey, chief technology officer, Secureworks

“Just because a behavior is anomalous doesn’t mean it’s malicious, but at least a security analyst can gain more evidence to this effect,” explains Jon Ramsey, chief technology officer at Secureworks, an information security services provider that protects customer networks, computers, and information assets. In other words, machine learning narrows the field to the most likely threats. “Now a more in-depth investigation can begin.”

While this application of AI, which analyzes data correlations that fall outside normal parameters, is still in its nascent stages, its high success rate is stirring hopes that a critical new weapon is at hand.

Another Arrow in the Quiver

For security experts, machine learning is not a replacement for current threat assessment practices. Rather, it’s a valuable adjunct that confronts the very real problem of too many false alarms.

“Most intrusion detection systems are rules-based—if a specific condition occurs, you respond according to what the rules state,” explains Samir Hans, a partner in Deloitte’s risk and financial advisory practice who focuses on vigilant cyber threat management solutions. “But that’s a challenge all data security specialists have to contend with, since not every alert is an actual threat. There’s so much noise, making it difficult to confirm what is and isn’t a threat.”
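To make the rules-based approach Hans describes concrete, here is a minimal sketch; the rules, thresholds, and event fields are hypothetical illustrations, not drawn from any particular product:

```python
# A toy rules-based intrusion detector: if a specific condition occurs,
# an alert fires -- whether or not it reflects a real threat. All rule
# thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Event:
    user: str
    hour: int        # hour of day, 0-23
    bytes_out: int   # bytes sent off-network
    url: str

RULES = [
    ("after-hours activity", lambda e: e.hour < 6 or e.hour > 22),
    ("large outbound transfer", lambda e: e.bytes_out > 50_000_000),
    ("known-bad URL", lambda e: e.url.endswith(".badsite.example")),
]

def check(event: Event) -> list[str]:
    """Return the names of every rule this event trips."""
    return [name for name, matches in RULES if matches(event)]

# The late-night birthday-invitation printer trips a rule anyway:
benign = Event(user="alice", hour=23, bytes_out=12_000, url="printer.local")
print(check(benign))  # ['after-hours activity'] -- noise, not a threat
```

The benign event fires an alert simply because it matches a condition, which is exactly the noise problem Hans describes.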

“There’s so much noise, making it difficult to confirm what is and isn’t a threat.”

—Samir Hans, partner, Deloitte’s risk and financial advisory practice

Given this high decibel level, most companies cannot hire enough information security analysts to listen in on every possible intrusion. “The threats just keep adding up,” says Hans. “It begins to feel like a losing battle, even with accuracy improvements in rules-based systems.” For years, he explains, really smart researchers have been asking what else they can do to detect real fraud.

That’s where machine learning has come into the picture. For Hans and his team at Deloitte, machine learning algorithms achieve two very clear goals—first, they sample unique data behaviors so security staff can improve their discernment of a threat, and second, they help experts learn from the experience. “We’re not throwing away the rules,” he says, “we’re just layering more advanced techniques like machine learning to enhance the speed and precision of our threat detection capabilities.”
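As one illustration of that layering, the sketch below scores events with an off-the-shelf unsupervised anomaly detector (scikit-learn’s IsolationForest) so the noisiest rule-based alerts can be triaged first; the features and training data are assumptions invented for the example:

```python
# A minimal sketch of layering machine learning on top of rules:
# an unsupervised anomaly detector scores events so analysts can
# prioritize the most anomalous alerts. Feature set is illustrative.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per event: [hour of day, MB transferred, failed logins]
normal_traffic = np.column_stack([
    rng.normal(13, 3, 1000),    # mostly business hours
    rng.gamma(2.0, 5.0, 1000),  # modest transfer sizes
    rng.poisson(0.2, 1000),     # rare failed logins
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score a new event: lower scores are more anomalous.
suspect = np.array([[3.0, 900.0, 12.0]])  # 3 a.m., 900 MB out, 12 failed logins
print(model.decision_function(suspect))   # strongly negative -> escalate
print(model.predict(suspect))             # [-1] flags it as an outlier
```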

This is important given the serious shortage of skilled cyber security employees, as Ramsey describes it. “Consequently, we want our security resources focused on the threats that machines can’t determine are malicious or have low confidence that they are malicious.” He says organizations can think of this as “tri-state logic: ‘no, it’s legitimate; yes, it’s malicious; or I don’t know.’ In the cases the machines don’t know, you get a human involved.” For example, if three people look at a threat suggested by the algorithmic calculations and agree it looks like the real thing, that’s considered an efficient use of resources. Otherwise, he says, “everyone is looking at every possible threat.”
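Ramsey’s tri-state logic can be pictured as a pair of confidence thresholds around a model’s output; the cutoff values below are hypothetical:

```python
# A sketch of "tri-state logic": a model's confidence decides whether an
# alert is auto-cleared, auto-blocked, or routed to a human analyst.
# The thresholds are assumptions, not any vendor's actual settings.

def triage(p_malicious: float,
           clear_below: float = 0.05,
           block_above: float = 0.95) -> str:
    """Map a model's malice probability onto three outcomes."""
    if p_malicious < clear_below:
        return "legitimate: auto-clear"
    if p_malicious > block_above:
        return "malicious: auto-block"
    return "unknown: escalate to analyst"

for p in (0.01, 0.50, 0.99):
    print(f"p={p:.2f} -> {triage(p)}")
# p=0.01 -> legitimate: auto-clear
# p=0.50 -> unknown: escalate to analyst
# p=0.99 -> malicious: auto-block
```

Only the middle band consumes analyst time, which is the efficiency Ramsey is after.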

Despite the need for machine assistance to offset the talent shortage, Ramsey emphasizes that machine learning is not a replacement for people; it’s just another tool for security specialists to sharpen their analyses. “We humans are imperfect and mathematically inconsistent; sometimes we’re right, sometimes we’re wrong,” Ramsey says. “Machine learning can be a great training tool to increase the odds of being right.”

Ground Truths

To underscore the value of machine learning technology in identifying large-scale cyber threats, Ramsey highlights a scenario of three separate attacks against companies in three different industries: oil and gas, copper and gold mining, and agriculture. “Since the three companies have little to do with each other, the attack against one company would appear to have no relationship to the attacks against the other two,” he explains.

By using an algorithm to simultaneously study all three attacks, however, the technology can detect data correlations that otherwise would not be apparent to an unassisted human being. “The algorithm may suggest that the attacker in all three scenarios was interested in profiting from natural resources, indicating that a single attacker was possibly at play—what we call a ‘ground truth,’” Ramsey says. “By drawing this connection, we’re able to infer that the same threat actor might go after a similar entity engaged in natural resources.”

Machine learning can be a way to ferret out similarities and anomalies in different types of malicious behaviors such as these. And while, in theory, security specialists could perform a similar analysis, algorithms can draw these inferences far sooner and with greater accuracy, as the sketch below illustrates.
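One plausible way to mechanize that kind of correlation, assuming each attack can be reduced to a vector of observed behaviors, is ordinary clustering; the feature set and victims below are invented for illustration:

```python
# A hedged sketch of cross-industry attack correlation: represent each
# attack as a feature vector of observed behaviors, then cluster the
# vectors to surface a possible common actor. All data is invented.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Columns: [phishing lure, custom RAT 'X', exfil over DNS, ransom note]
attacks = {
    "oil-and-gas": [1, 1, 1, 0],
    "mining":      [1, 1, 1, 0],
    "agriculture": [1, 1, 1, 0],
    "retailer":    [0, 0, 0, 1],  # unrelated commodity ransomware
}

X = np.array(list(attacks.values()))
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

for (victim, _), label in zip(attacks.items(), labels):
    print(f"{victim}: cluster {label}")
# The three natural-resources attacks land in one cluster -- a hint,
# in Ramsey's terms, that a single actor is targeting that sector.
```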

It’s these same benefits, anomaly detection and speed, that have compelled large multinational companies such as Mastercard to use AI to help protect their customers against fraud. The financial services giant is familiar with biometric authentication tools, such as fingerprint and facial recognition software, yet machine learning presents a new opportunity to protect and provide value to customers.

“We’ve started to use an algorithm to examine how customers interact with their mobile devices,” explains Nick Curcuru, Mastercard vice president, global big data consulting. “Their interactions with the device’s keyboard, for instance, create a unique signature of typical behaviors, giving us the ability to paint a more refined profile of that person for verification purposes.”

Machine learning algorithms analyze these customer behaviors, or what Curcuru calls “passive biometrics,” to detect unusual patterns. If the algorithm suggests an atypical behavior that does not align with the customer’s profile, the information may indicate attempted fraud by a threat actor.
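A toy version of that check, assuming a stored per-customer typing profile and a simple deviation test (not Mastercard’s actual method), might look like this:

```python
# A minimal sketch of "passive biometrics": compare a session's keystroke
# timings against a stored per-customer profile and flag large deviations.
# The profile statistics, threshold, and z-score test are assumptions.

import statistics

# Hypothetical stored profile: mean and stdev of inter-key delays (ms)
PROFILE = {"mean_ms": 180.0, "stdev_ms": 25.0}

def is_atypical(delays_ms: list[float], profile: dict,
                z_limit: float = 3.0) -> bool:
    """Flag the session if its average typing cadence deviates strongly."""
    session_mean = statistics.fmean(delays_ms)
    z = abs(session_mean - profile["mean_ms"]) / profile["stdev_ms"]
    return z > z_limit

owner = [175, 190, 182, 168, 185]      # close to the stored cadence
intruder = [95, 88, 102, 91, 99]       # much faster typist
print(is_atypical(owner, PROFILE))     # False -> transaction proceeds
print(is_atypical(intruder, PROFILE))  # True  -> flag for review
```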

“This is all about the customer. This is all about the experience to make things seamless.”

—Nick Curcuru, vice president, global big data consulting, Mastercard

For Mastercard, Curcuru points out, this potential fraud detection has to happen within a matter of nanoseconds so a “go or no-go” decision regarding the customer’s transaction can be made instantly. “This is all about the customer. This is all about the experience to make things seamless. Make things frictionless.”

Illuminating the Thief

Security experts anticipate marked improvements in AI’s capabilities to fight cyber threats over the next three to five years. “I believe we will see tremendous progress in the sophistication of the algorithms,” Hans predicts. “We have plans to build ever more robust threat models, possibly on an industry sector basis.”

Meanwhile, Secureworks plans to apply machine learning to other cyber security aims. “The more we know about ground truths, the better we can apply that to other needs, such as whether or not a threat actor has stolen data,” Ramsey explains. “Right now, there’s typically no factual evidence to be sure that data has actually been stolen. AI can at least help narrow these odds.”

And, Ramsey adds, if information security providers can reach a consensus to work together on giving machine learning greater visibility, their collective clout will mount an impressive offense against the enemy.

“We and other security firms using machine learning models have improved the accuracy of our threat detection,” he says. “Assuming we can collectively share our data insights, a significant shift in cyber risk management will be at hand. This is a potential game-changer that will go down as a pivotal moment in cyber security.”

Russ Banham is a Pulitzer-nominated financial journalist and best-selling author.