By Tom Gerken
BBC News
YouTube plans to show adverts that educate people about disinformation techniques, following a successful experiment by Cambridge University.
Researchers found the videos improved people's ability to recognise manipulative content.
They will be shown in Slovakia, the Czech Republic and Poland to combat fake news about Ukrainian refugees.
Google said the "exciting" findings showed how social media can actively pre-empt the spread of disinformation.
The research builds on a developing area of study called "prebunking", which investigates how disinformation can be pre-emptively debunked by showing people how it works - before they are exposed to it.
In the experiment, the ads were shown to 5.4m people, 22,000 of whom were surveyed afterwards.
Among people who watched the explanatory videos, researchers found:
- an improvement in respondents' ability to spot disinformation techniques
- an increased ability to discern trustworthy from untrustworthy content
- an improved ability to decide whether or not to share content
Beth Goldberg, Head of Research and Development for Google's Jigsaw unit, called the findings "exciting".
"They demonstrate that we can scale prebunking far and wide, using ads as a vehicle," she said.
The peer-reviewed research was conducted in conjunction with Google, which owns YouTube, and will be published in the journal Science Advances.
'Common tropes'
Jon Roozenbeek, the lead author on the paper, told the BBC the research is about "reducing the probability someone is persuaded by misinformation".
"Obviously you can't predict every single example of misinformation that's going to go viral," he said. "But what you can do is find common patterns and tropes.
"The idea behind this study was - if we find a couple of these tropes, is it possible to make people more resilient against them, even in content they've never seen before?"
The scientists initially tested the videos with members of the public under controlled conditions in a lab, before showing them to millions of YouTube users as part of a broader field study.
The prebunking campaign was run on YouTube "as it would look in the real world", Mr Roozenbeek said.
"We ran them as YouTube ads - just like an ad about shaving cream or whatever... before your video plays," he explained.
How the study worked
Advertisers can use a YouTube feature called Brand Lift, which tells them whether, and by how much, an advert has raised awareness of their product.
The researchers used this same feature to assess people's ability to spot the manipulation techniques they had been exposed to.
Instead of a question about brand awareness, people were shown a headline and asked to read it. They were told the headline contained manipulation and asked to identify what kind of technique was being used.
In addition, there was a separate control group who were not shown any videos, but were shown the headline and corresponding questions.
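To make the design concrete, the sketch below shows one way such a survey item and its scoring could be represented. The headline, technique labels and field names are hypothetical illustrations, not taken from the study itself.

```python
# Hypothetical sketch of a Brand Lift-style survey item and its scoring.
# The headline, options and labels are illustrative, not from the study.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    headline: str            # manipulative headline shown to the respondent
    options: list[str]       # candidate manipulation techniques offered
    correct_technique: str   # the technique the headline actually uses

item = SurveyItem(
    headline="They don't want you to know the REAL reason prices are rising!",
    options=["false dichotomy", "scapegoating", "emotional language"],
    correct_technique="emotional language",
)

def is_correct(item: SurveyItem, answer: str) -> bool:
    """A response counts as correct if it names the technique used."""
    return answer == item.correct_technique

print(is_correct(item, "emotional language"))  # True
```

Both the treated group and the control group answer the same kind of item; the comparison that matters is the difference in their correct-answer rates.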
"What you hope to see is that the group that saw the videos is correct in their identification significantly more often than the control group - and that turned out to be the case," Mr Roozenbeek said.
"On average, the group that got the videos was correct about 5% more often than the control group. That's highly significant.
"That doesn't sound like a lot - but it's also true that the control group isn't always wrong. They also get a number of questions correct.
"That improvement, even in the noisy environment of YouTube, basically shows that you can improve people's ability to recognise these disinformation techniques - simply by showing them an ad."
'Evidence-based solutions'
Cambridge University said this was the first real-world field study of 'inoculation theory' - the psychological theory underpinning prebunking - on a social media platform.
Professor Sander van der Linden, who co-authored the study, said the research results were sufficient to take the concept of inoculation forward and scale it up, to potentially reach "hundreds of millions" of social media users.
"Clearly it's important for kids to learn how to do lateral reading and check the veracity of sources," he said, "but we also need solutions that can be scaled on social media and interface with their algorithms."
He acknowledged scepticism around technology firms using this type of research, and around industry-academia collaborations more broadly.
"But, at the end of the day, we have to face reality, in that social media companies control much of the flow of information online. So in order to protect people, we have come up with independent, evidence-based solutions that social media companies can actually implement on their platforms."
"To me, leaving social media companies to their own devices is not going to generate the type of solutions that empower people to discern misinformation that spreads on their platforms."