Firms 'going to war' against rivals on social media

By Will Smale
Business reporter, BBC News

Image caption: Lyric Jain says we seem to be "on the cusp of an era" of businesses spreading lies about rivals on social media (Image source: Logically)

A growing number of unscrupulous companies are using bots or fake accounts to run smear campaigns against their competitors on social media, it is claimed.

That's the warning from Lyric Jain, chief executive of Logically, a high-tech monitoring firm that uses artificial intelligence (AI) software to trawl the likes of Twitter, Facebook, Instagram and TikTok for so-called "fake news" - disinformation and misinformation.

Mr Jain set up the business in the UK in 2017, and while its main customers are the British, American and Indian governments, he says that he is increasingly being approached by some of the world's largest retail brands. They are asking for help to protect themselves from malicious attacks by rivals.

"We seem to be on the cusp of an era of disinformation against [business] competitors," he says. "We are seeing that some of the same practices that have been deployed by nation state actors, like Russia and China, in social media influence operations, are now being adopted by some more unscrupulous competitors of some of the main Fortune 500 and FTSE 100 companies.

"[The attackers] are trying to use similar tactics to essentially go to war against them on social media."

Image caption: Do you trust everything you see on social media? (Image source: Getty Images)

Mr Jain says that a main attack tactic is the use of fake accounts to "deceptively spread and artificially amplify" negative product or service reviews, whether real or made up.

In addition, the bots can be used to damage a competitor's wider reputation. For example, if a retailer has disappointing financial results in a certain three-month period, an unscrupulous competitor can use them to exaggerate its rival's financial woes.
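
To make the "artificial amplification" idea concrete, here is a toy heuristic, not Logically's actual method: if many distinct accounts post near-identical text, that coordination is itself a red flag. All names and thresholds below are invented for illustration.

```python
# Toy heuristic for one amplification signal: many distinct accounts
# posting near-identical text. (A fuller version would also check how
# tightly the posts cluster in time.) Invented for illustration;
# not Logically's actual method.
from collections import defaultdict

def normalize(text: str) -> str:
    """Crude canonical form so trivially edited copies still match."""
    return " ".join(text.lower().split())

def amplification_clusters(posts, min_accounts=20):
    """posts: iterable of (account_id, text) pairs.
    Returns messages pushed by suspiciously many accounts."""
    by_message = defaultdict(set)
    for account_id, text in posts:
        by_message[normalize(text)].add(account_id)
    return {msg: accts for msg, accts in by_message.items()
            if len(accts) >= min_accounts}
```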

Mr Jain says that while such attacks are being led by "foreign competitors" of Western brands, such as Chinese firms, he doesn't rule out that some smaller Western businesses are also doing the same against larger rivals.

"Yes foreign competitors [are doing this], but even potentially some domestic ones who don't have the same standards around their operations," he says. "It is usually an emerging company that goes after an incumbent using these means."

Mr Jain adds that he wouldn't be surprised if "some established [Western] brands are also employing these tactics".

New Tech Economy is a series exploring how technological innovation is set to shape the new emerging economic landscape.

To help companies defend themselves against such attacks, Logically's AI trawls through more than 20 million social media posts a day to find those that are suspicious. The firm's human experts and fact checkers then go through the flagged items.

When they find disinformation and misinformation they then contact the relevant social media platform to get it dealt with. "Some delete the account, while some take down the posts but not the accounts," says Mr Jain. "It is up to the platform to make that decision."
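
As a rough illustration of that division of labour, here is a minimal sketch of a two-stage triage pipeline of the kind described, assuming a hypothetical suspicion-scoring model and reviewer interface; none of these names reflect Logically's actual system.

```python
# Minimal sketch of the two-stage triage described above: a model
# scores a stream of posts, only suspicious ones reach human
# fact-checkers, and confirmed items are reported to the platform.
# All names and thresholds are invented; this is not Logically's code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    platform: str  # e.g. "twitter", "tiktok"
    text: str

@dataclass
class Flag:
    post: Post
    score: float              # model's suspicion score, 0..1
    verdict: str = "pending"  # filled in by a human reviewer

def triage(posts, model, threshold=0.8):
    """Stage 1: machine filter. Keep only posts the model deems suspicious."""
    return [Flag(p, s) for p in posts
            if (s := model.suspicion_score(p.text)) >= threshold]

def review(flags, fact_checker):
    """Stage 2: human review. Confirmed disinformation gets reported."""
    for flag in flags:
        flag.verdict = fact_checker.assess(flag.post)  # "disinfo" | "ok"
        if flag.verdict == "disinfo":
            report_to_platform(flag.post.platform, flag.post.post_id)

def report_to_platform(platform, post_id):
    # Placeholder: in practice this would go through each platform's
    # own reporting or trusted-flagger channel.
    print(f"reported {post_id} to {platform}")
```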

He adds that when it comes to attacks on companies, the posts or accounts are typically removed within two hours. This compares to just minutes for posts considered to be of "greater societal harm", or threats of violence.

Mr Jain says that while the firm's AI "drives speed and efficiency" in its operations, its 175 employees in the UK, US and India remain key. "There are clear limitations of going with a technology-only approach... and so we also retain the nuance and expertise that the [human] fact checkers are able to bring to the problem.

"It is essential in our view to have experts be central to our decision making."

Factmata, another UK tech firm that uses AI to monitor social media for disinformation and misinformation on behalf of company clients, takes a different approach.

Its chief executive Antony Cousins says that while it can involve humans in the monitoring work if clients request them, the AI can be more objective. "Our true aim is not to put any humans in the middle of the AI and the results, or else we risk applying our own biases to the findings," he says.

Image caption: Antony Cousins says Factmata's AI is able to differentiate between lies and satire and humour (Image source: Factmata)

Set up in 2016, Factmata uses an AI built on 19 different algorithms, which Mr Cousins says are "trained to identify different aspects of content, in order to weed out the bad stuff, and discount the false positives, the good stuff".

By false positives he is referring to content that on first glance might be considered to be fake, but is in actual fact "humour, satire, irony, and content that could well be drawing attention to issues for a good cause, a good reason". He adds: "We don't want to label those as bad."
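
One way to picture that ensemble, with invented scorers standing in for the 19 undisclosed algorithms: each specialist rates the same text, and a strong satire or humour signal discounts the verdict so jokes are treated as false positives rather than flagged.

```python
# Sketch of the ensemble idea; the scorers, weights and cutoffs are
# invented for illustration, as Factmata's algorithms are not public.
# Each specialist returns a 0..1 risk score; a strong satire/humour
# signal discounts the verdict so jokes and irony are not flagged.

def score_content(text, scorers, satire_scorer, satire_cutoff=0.7):
    """scorers: dict of name -> fn(text) -> 0..1 risk score."""
    risk = sum(fn(text) for fn in scorers.values()) / len(scorers)
    if satire_scorer(text) >= satire_cutoff:
        return {"risk": risk, "label": "likely satire - do not flag"}
    return {"risk": risk, "label": "flag" if risk >= 0.5 else "ok"}
```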

And rather than just finding fake tweets or other posts to be deleted, Mr Cousins says that Factmata's AI digs deeper to try to find the source, the first account or accounts that started the lie or rumour, and focus on getting them removed.
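
The source-tracing step can be sketched very simply: among posts matched to the same rumour, the earliest posters are the candidate originators. A real system would also follow repost and quote chains; the field names here are assumptions.

```python
# Simplest version of source-tracing: among posts carrying the same
# claim, the earliest posters are candidate originators. Field names
# are assumptions, not any vendor's actual schema.

def likely_originators(posts, top_n=3):
    """posts: dicts with 'account' and 'timestamp' (epoch seconds),
    all matched to the same rumour. Returns the earliest posters."""
    earliest = sorted(posts, key=lambda p: p["timestamp"])[:top_n]
    return [p["account"] for p in earliest]
```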

He adds that more brands have to realise the growing risks they face from fake news on social media. "If a brand is falsely accused of racism or sexism it can really damage it. People, Generation Z, can choose to not buy from it."

Prof Sandra Wachter, a senior research fellow in AI at Oxford University, says that using the technology to tackle fake news on social media is a complicated issue.

Image caption: Prof Sandra Wachter says that even some humans can struggle to identify humour (Image source: Sandra Wachter)

"Given the omnipresence and volume of false information and misinformation circling the web, it is absolutely understandable that we turn to technologies such as AI to deal with this problem," she says.

"AI can be a feasible solution to that problem if we have agreement over what constitutes fake information that deserves removal from the web. Unfortunately, we could not be further away from finding alignment on this.

"Is this content fake or real? What if this is my opinion? What if it was a joke? And who gets to decide? How is an algorithm supposed to deal with this, if we humans cannot even agree on this issue?"

She adds: "In addition, human language has many subtleties and nuances that algorithms - and in many cases humans - might not be able to detect. Research suggests for example that algorithms as well as humans are only able to detect sarcasm and satire in around 60% of the time."

Mr Cousins clarifies that Factmata is "not acting as the guardian of the truth". He adds: "Our role is not to decide what is true or false, but to identify [for our clients] the content we think could be fake, or could be harmful, to a degree of certainty."
