How Game Designers Are Outsmarting Fake News

Designers are putting gamers in the position of fake newsmongers, and it's working: players of one such game, Bad News, were found to be 21 percent less likely to believe fake news after completing it.

By Pragati Verma, Contributor

In early 2018, University of Cambridge researchers built an online game that puts players in the role of a propagandist. In one round, players can opt to impersonate the president of the United States and declare war on North Korea while tweeting from a fake account. In another round, they distort the truth to incite conspiracy theories about vaccines with emotionally charged headlines, while deflecting fact checkers.

Along the way, players impersonate celebrities, manipulate photos, create sham news sites, and build an army of Twitter bots to stoke anger and inflame social tension. They are rewarded with badges for completing certain tasks and identifying the misinformation techniques commonly seen in false and misleading news stories: impersonation, conspiracy, polarization, discrediting sources, trolling, and emotionally provocative content. To excel in the game, players need to keep an eye on their follower and credibility meters at all times. The more unscrupulous and devious they are, the greater their chances of winning.
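For a sense of how such a two-meter scoring loop could be wired up, here is a minimal Python sketch. Every name and number in it is hypothetical; it illustrates the mechanic the article describes, not the actual Bad News engine.

```python
# Hypothetical sketch of a two-meter scoring loop; not the Bad News engine.

class PropagandaGame:
    def __init__(self):
        self.followers = 0       # grows with sensational choices
        self.credibility = 50    # 0-100; drops if you are caught out
        self.badges = set()      # one badge per misinformation technique

    def choose(self, gains_followers, costs_credibility, technique=None):
        """Apply the consequences of one in-game decision."""
        self.followers += gains_followers
        self.credibility = max(0, min(100, self.credibility - costs_credibility))
        if technique:
            self.badges.add(technique)  # e.g. "impersonation", "trolling"

    def has_won(self):
        # Players must grow an audience while keeping credibility afloat.
        return self.followers >= 10_000 and self.credibility >= 40

game = PropagandaGame()
game.choose(gains_followers=500, costs_credibility=5, technique="impersonation")
game.choose(gains_followers=2_000, costs_credibility=15, technique="polarization")
print(game.followers, game.credibility, sorted(game.badges))
```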

Putting people in the position of fake newsmongers might sound sinister, but game designers have legitimate reasons for walking people through the process of creating fake news. “It trains people to be more attuned to the techniques that underpin most fake news,” says Sander van der Linden, director of the Cambridge Social Decision-Making Lab, which created the game Bad News in collaboration with Drog, a Dutch media collective.

Vaccine Against Disinformation

Inspired by the way vaccines use weakened or attenuated viruses to generate immunity, van der Linden and his team created the game to see if they could preemptively debunk fake news by exposing people to a weak dose of the methods used to create and spread disinformation. They knew they would need some creativity to make it work. “We wanted to create mental antibodies against falsehoods, but we didn’t want to simply provide facts in a boring way,” he says.

Initially, they created a board game in which players competed to spread fake news by using shady practices, such as conspiracy theories and inflammatory headlines, to polarize people. They soon realized the game needed a social media component and created the online browser-based version with a simulated Twitter feed. Next, they added a follower meter and a credibility meter, which now give players instant scores on their performance. To gauge the effects of the game, players are asked to rate the reliability of a series of headlines and tweets before and after gameplay.
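That before-and-after design is straightforward to express in code. Here is a minimal sketch, with an invented 1-to-7 rating scale and invented data, of how a within-player change in the perceived reliability of fake headlines could be computed; it is an illustration, not the researchers’ analysis.

```python
# Illustrative pre/post comparison; the 1-7 scale and data are invented.
from statistics import mean

# Each player rates the same fake headlines before and after playing.
pre_ratings  = {"alice": [5, 6, 4], "bob": [6, 5, 5]}   # perceived reliability, 1-7
post_ratings = {"alice": [3, 4, 3], "bob": [4, 4, 3]}

changes = [mean(post_ratings[p]) - mean(pre_ratings[p]) for p in pre_ratings]
avg_change = mean(changes)
print(f"Average change in perceived reliability of fake headlines: {avg_change:+.2f}")
```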

About half a million people have already played the initial English version of the game, which is now available in 14 languages. And about 15,000 players agreed to share their information with Cambridge researchers.

Their study, published in the journal Palgrave Communications, shows that the gamified simulation increases “psychological resistance” to fake news. Players were found to be 21 percent less likely to believe fake news after completing the game. The impact of impersonating celebrities and other personalities went down by 24 percent, and that of deliberately polarizing headlines by 10 percent. The effectiveness of discrediting tactics (attacking a legitimate source with accusations of bias) fell by 19 percent, and conspiracy theories spreading false narratives that blame secretive groups for world events were 20 percent less effective after people played Bad News.

Follow the Money

While many spread false stories and polarizing content to push a political agenda, others are in it for the ad dollars. “Disinformation has moved, in the last few years, from being propaganda to being a business model,” says Clare Melford, co-founder of the Global Disinformation Index (GDI), a global coalition aiming to rate media sites on their risk of carrying disinformation.

Today, most online ads are placed automatically in real time, she explains, and advertisers have no way of stopping their ads from going to publishers that spread disinformation. “Companies, often without realizing, purchase ads that end up on sites generating low-quality and false news,” she says. Eyeing these ad dollars, several low-quality news websites publish emotionally charged, false narratives to maximize engagement. “The more outrageous your content, the more clicks you get. And more clicks mean more ads,” Melford explains.

GDI is currently building a prototype disinformation index that would give a risk rating to online news domains in the United Kingdom and South Africa. The group plans to classify websites in two ways. The first is an automated machine-learning assessment that can score large volumes of low-quality sites in real time. The second is a manual assessment of higher-quality disinformation outlets that may not be easily identifiable by automated means; it may draw on indicators such as whether a domain has been involved in a disinformation campaign in the past.
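As a rough illustration of the automated half of that plan, the sketch below trains a toy text classifier to assign risk scores based on the content a domain publishes. The training data, labels, and threshold are all invented; GDI has not disclosed its actual pipeline here.

```python
# Toy domain-risk classifier; invented data, not GDI's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Articles scraped from known-good and known-bad domains (labels invented).
texts = [
    "Parliament passed the budget after a lengthy debate.",
    "SHOCKING: secret cabal controls the weather, insiders say!",
    "The central bank held interest rates steady on Thursday.",
    "They don't want you to know this one outrageous truth!!!",
]
labels = [0, 1, 0, 1]  # 0 = low risk, 1 = high risk

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new article; a domain's rating could aggregate many such scores.
risk = model.predict_proba(["Outrageous secret plot EXPOSED!"])[0][1]
print(f"Estimated disinformation risk: {risk:.2f}")
```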

Eventually, GDI plans to create a global index that cuts off funding to disinformation sites while giving advertisers control over where their brands appear. “We will feed these risk ratings directly into ad exchanges so that advertisers can decide in real time whether they want to put their clients’ money on sites that spread false news,” Melford says.
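In practice, that could resemble a pre-bid brand-safety filter: before bidding on an ad slot, the buyer checks the publisher’s risk rating against the advertiser’s tolerance. The sketch below is hypothetical; the domains, ratings, and threshold are invented, and this is not a real ad-exchange API.

```python
# Hypothetical pre-bid brand-safety filter; not a real ad-exchange API.
RISK_RATINGS = {"reputable-news.example": 0.05, "outrage-mill.example": 0.92}

def should_bid(domain: str, advertiser_max_risk: float) -> bool:
    """Bid only if the publisher's disinformation risk is acceptable."""
    risk = RISK_RATINGS.get(domain, 1.0)  # unknown domains treated as risky
    return risk <= advertiser_max_risk

for domain in RISK_RATINGS:
    print(domain, "->", "bid" if should_bid(domain, 0.2) else "skip")
```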

GDI is not alone. Several entities, such as Credder, NewsGuard, and Pravda, are working on providing scores to news publishers so that both audience members and advertisers can judge their credibility.

Fighting Mob Violence

Fake news is not just misleading voters and influencing elections; it’s killing people in India. Several deaths and mob lynchings have been linked to videos and messages—often fake or edited—spreading on WhatsApp.

The Facebook-owned messaging app, which has more than 1.5 billion users globally, is restricting message forwarding to crack down on the spread of rumors. It is also giving 20 research groups $50,000 each to help it understand how rumors and fake news spread on its platform.

One of the research groups is Cambridge’s Social Decision-Making Lab, which is building a new version of Bad News for WhatsApp users in India. “We will use the same principles, but the engine will be slightly different,” says van der Linden. For one, they will simulate WhatsApp instead of Twitter as the tool for spreading false information. Another difference, he points out, will be how people win or lose the game. “Instead of gaining followers and losing credibility, they will lose ‘lives’ when someone in their network reports or blocks them,” he adds. They will work with the India-based Digital Empowerment Foundation to translate the game, adapt it to the local cultural context, and test it in rural areas.
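The lives-based variant changes the failure condition rather than the goal. A minimal sketch follows, again with invented numbers and not the actual game engine.

```python
# Hypothetical lives-based variant of the scoring loop; numbers invented.
import random

class WhatsAppVariant:
    def __init__(self, lives=3):
        self.lives = lives

    def forward_message(self, report_probability: float) -> bool:
        """Forwarding risky content may get you reported or blocked."""
        if random.random() < report_probability:
            self.lives -= 1
        return self.lives > 0  # False means game over

game = WhatsAppVariant()
while game.forward_message(report_probability=0.4):
    pass
print("Game over: a contact reported or blocked you.")
```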

According to van der Linden, the game is now ready, and testing in the UK has finished. “We plan to start testing in India in the next few months and release it later this year,” he says.

Fight Fire With Fire

Several researchers are trying another tactic in the war against fake news: making more of it. One example is Grover, an AI model created by computer scientists at the University of Washington and the Allen Institute for AI. They claim that their neural network is extremely good at generating fake and misleading news articles in the style of real human journalists, and equally good at spotting AI-written online propaganda.

The idea of using AI to both generate and identify fake news is not new. AI research company OpenAI’s natural language model GPT-2 stirred controversy earlier this year when its leaders decided that their text-generating tool was too dangerous to release to the public.

But Grover’s creators believe that it’s the best tool against AI-generated propaganda. “Our work on Grover demonstrates that the best models for detecting disinformation are the best models at generating it,” said University of Washington professor and research paper co-author Yejin Choi in a press release. “The fact that participants in our study found Grover’s fake news stories to be more trustworthy than the ones written by fellow humans illustrates how far natural language generation has evolved—and why we need to try and get ahead of this threat.”
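One intuition behind the claim that the best generator makes the best detector is that a generative model knows which word choices it would have made itself, so text it finds suspiciously predictable may be machine-written. The sketch below illustrates that general idea with a perplexity check against the publicly released GPT-2 model; it is not Grover’s actual discriminator, and the threshold is invented.

```python
# Illustrative perplexity-based detection, not Grover's discriminator.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How surprised the model is by the text; machine text tends to score low."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

THRESHOLD = 20.0  # invented cutoff for illustration only
sample = "The quick brown fox jumps over the lazy dog."
print("flag as machine-written" if perplexity(sample) < THRESHOLD else "looks human")
```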

When Seeing is No Longer Believing

These new technologies have accelerated ongoing discussions about the potential dangers of AI-generated content, especially deepfakes: AI systems that manipulate audio, images, and video to make people appear to say and do things they never did. By creating realistic depictions of events that never happened, deepfakes threaten to take the disinformation war to another level. “Everyone is worried about deepfakes. We were at the European Commission a few months ago and the first question they asked was about deepfakes,” says van der Linden. “It is on our to-do list.”

Van der Linden and his team now plan to enhance Bad News by adding deepfakes to the round where players impersonate an authority figure. “We will upgrade our impersonation badge to include tricks to spot fake videos, such as the fake Obama or Mark Zuckerberg videos that went viral,” he says.

Deepfakes may be horrifying everyone today, but do researchers like van der Linden have a silver-bullet solution to the crisis? Maybe not, but they plan to step up their game as malicious actors adopt newer, more vicious propaganda techniques. “It’s just like the flu vaccine,” he says. “We need to adapt proactively every season as the virus changes.”