By Zoe Kleinman
Technology editor
Facebook groups are using the carrot emoji to hide anti-vax content from automated moderation tools.
The BBC has seen several groups, one with hundreds of thousands of members, in which the emoji appears in place of the word "vaccine".
Facebook's algorithms tend to focus on words rather than images.
The groups are being used to share unverified claims of people being either injured or killed by vaccines.
Once the BBC alerted Facebook's parent company, Meta, the groups were removed.
"We have removed this group for violating our harmful misinformation policies and will review any other similar content in line with this policy. We continue to work closely with public health experts and the UK government to further tackle Covid vaccine misinformation," the firm said in a statement.
However, the groups have since re-appeared in our searches.
One group we saw had been around for three years; in August 2022 it rebranded from a group for sharing "banter, bets and funny videos" into one focused on vaccine stories.
The rules of the very large group state: "Use code words for everything" and add: "Do not use the c word, v word or b word ever" (Covid, vaccine, booster). The group was created more than a year ago and has more than 250,000 members.
Marc Owen-Jones, a disinformation researcher and associate professor at Hamad Bin Khalifa University in Qatar, was invited to join it.
"It was people giving accounts of relatives who had died shortly after having the Covid-19 vaccine", he said. "But instead of using the words "Covid-19" or "vaccine", they were using emojis of carrots.
"Initially I was a little confused. And then it clicked - that it was being used as a way of evading, or apparently evading, Facebook's fake news detection algorithms."
Just got invited to a Facebook group with a couple of hundred thousand members where people share stories about why they think the Covid vaccine killed people they knew. But instead of saying vaccine they use the 🥕 symbol, presumably to evade censorship. Very odd pic.twitter.com/mvXBv1aXBX
— Marc Owen Jones (@marcowenjones) September 11, 2022
Moderating risk
In 2021, data from the Office for National Statistics suggested the risk of dying from the Covid vaccine was about one in five million, compared with roughly 35,000 deaths per five million unvaccinated people from Covid itself.
The tech giants use algorithms to trawl their platforms for harmful content - but they are primarily trained on words and text, wrote Hannah Rose Kirk in a blog for the Oxford Internet Institute.
Ms Rose Kirk was part of a research team which created a tool called HatemojiCheck: a checklist for identifying areas where AI systems do not handle emoji-based abuse very well.
"Despite having an impressive grasp of how language works, AI language models have seen very little emoji," she said. "They are trained on a corpora of books, articles and websites, even the entirety of English Wikipedia, but these texts rarely feature emoji."
Emojis and racism
The platforms have already come under fire for failing to block or remove monkey and banana emojis posted as racist abuse on the accounts of black footballers.
If the Online Safety Bill becomes law in the UK, the tech giants will face steep penalties for failing to identify and quickly remove harmful material on their platforms. But there are concerns that the tools currently in use are not good enough to cope with the sheer volume of content posted, or with the nuance and cultural differences that can cloud meaning.
Hiding in plain sight
Emojis can carry multiple meanings beyond those officially defined by Unicode, the consortium which manages them.
"It's a modern form of steganography: writing and hiding a message in plain sight, but such that unless you know where to look you don't see it," said Prof Alan Woodward, a cyber-security expert at Surrey University.
"What all of this demonstrates is the futility of trying to automate moderation of content to prevent the sharing of 'harmful' material," he said. "At the very best you will be playing a game of whack-a-mole, as people develop new dialects with which to communicate."
Facebook said last year that it had removed more than 20 million pieces of content containing misinformation about Covid-19 or the vaccine since the start of the pandemic.
US President Joe Biden has criticised the tech giants for not doing enough to tackle the spread of misinformation about the vaccine online. "They're killing people," he said of the companies in 2021.