By Chris Vallance & Imran Rahman-Jones
BBC News
The UK should "urgently consider" new laws to stop AI recruiting terrorists, a counter-extremism think tank says.
The Institute for Strategic Dialogue (ISD) says there is a "clear need for legislation to keep up" with online terrorist threats.
It comes after the UK's independent terror legislation reviewer was "recruited" by a chatbot in an experiment.
The government says it will do "all we can" to protect the public.
Writing in the Telegraph, the government's independent terrorism legislation reviewer Jonathan Hall KC said a key issue is that "it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism."
Mr Hall ran an experiment on Character.ai, a website where people can have AI-generated conversations with chatbots created by other users.
He chatted to several bots seemingly designed to mimic the responses of militant and extremist groups.
One even said it was "a senior leader of Islamic State".
Mr Hall said the bot tried to recruit him and expressed "total dedication and devotion" to the extremist group, proscribed under UK anti-terrorism laws.
But Mr Hall said as the messages were not generated by a human, no crime was committed under current UK law.
New legislation should hold chatbot creators and the websites which host them responsible, he said.
As for the bots he encountered on Character.ai, there was "likely to be some shock value, experimentation, and possibly some satirical aspect" behind their creation.
Mr Hall was even able to create his own, quickly deleted, "Osama Bin Laden" chatbot with an "unbounded enthusiasm" for terrorism.
His experiment follows increasing concern over how extremists might exploit advanced AI in the future.
A report published by the government in October warned that by 2025 generative AI could be "used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons".
The ISD told the BBC that "there is a clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats."
The UK's Online Safety Act, which became law in 2023, "is primarily geared towards managing risks posed by social media platforms" rather than AI, says the think tank.
It adds that extremists "tend to be early adopters of emerging technologies, and are constantly looking for opportunities to reach new audiences".
"If AI companies cannot demonstrate that they have invested sufficiently in ensuring that their products are safe, then the government should urgently consider new AI-specific legislation", the ISD added.
But it did say that, according to its monitoring, the use of generative AI by extremist organisations is "relatively limited" at the moment.
Character AI told the BBC that safety is a "top priority" and that what Mr Hall described was unfortunate and didn't reflect the kind of platform the firm was trying to build.
"Hate speech and extremism are both forbidden by our Terms of Service", the firm said.
"Our approach to AI-Generated content flows from a simple principle: Our products should never produce responses that are likely to harm users or encourage users to harm others".
The company said it trained its models in a way that "optimises for safe responses".
It added that it had a moderation system in place so users could flag content that violated its terms and was committed to taking prompt action when content was flagged.
The Labour Party has announced that training AI to incite violence or radicalise the vulnerable would become an offence should it win power.
The Home Office said it was "alert to the significant national security and public safety risks" AI posed.
"We will do all we can to protect the public from this threat by working across government and deepening our collaboration with tech company leaders, industry experts and like-minded nations."
The government also announced a £100 million investment into an AI Safety Institute in 2023.