AI 'godfather' Prof Yann LeCun says it won't take over the world


Image caption: Prof Yann LeCun is known as one of the three godfathers of AI and works as Facebook-owner Meta's top scientist (Image source: Meta)

By Chris Vallance

Technology reporter

One of the three "godfathers of AI" has said it won't take over the world or permanently destroy jobs.

Prof Yann LeCun said some experts' fears of AI posing a threat to humanity were "preposterously ridiculous".

Computers would become more intelligent than humans, but that was many years away, and "if you realise it's not safe you just don't build it," he said.

A UK government advisor recently told the BBC that some powerful artificial intelligence might need to be banned.

In 2018 Prof LeCun won the Turing Award with Geoffrey Hinton and Yoshua Bengio for their breakthroughs in AI and all three became known as "the godfathers of AI".

Prof LeCun now works as the chief AI scientist at Meta, the parent company of Facebook, Instagram and WhatsApp. He disagrees with his fellow godfathers that AI is a risk to the human race.

"Will AI take over the world? No, this is a projection of human nature on machines," he said. It would be a huge mistake to keep AI research "under lock and key", he added.

People who worried that AI might pose a risk to humans did so because they couldn't imagine how it could be made safe, Prof LeCun argued.

"It's as if you asked someone in 1930, how are you going to make a turbo-jet safe? Turbo-jets were not invented yet in 1930, same as human-level AI has not been invented yet."

"Turbo-jets were eventually made incredibly reliable and safe," and the same would happen with AI, he said.

Meta has a large AI research programme and producing intelligent systems as capable as humans is one of its goals. As well as research, the company uses AI to help identify harmful social media posts.

Prof LeCun spoke at an event for invited press about his own work on so-called objective-driven AI, which aims to produce safe systems that can remember, reason, plan and have common sense - features popular chatbots like ChatGPT lack.

Image caption: Prof LeCun speaking to the press at Meta in Paris

He said there was "no question" that AI would surpass human intelligence. But researchers were still missing essential concepts to reach that level, which would take years if not decades to arrive.

When people raise concerns about human-level or above machines that might exist in the future, they are referring to artificial general intelligence (AGI). These are systems that, like humans, can solve a wide range of problems.

There was a fear that when AGI existed, scientists would "get to turn on a super-intelligent system that is going to take over the world within minutes", he said. "That's, you know, just preposterously ridiculous."

In response to a question from BBC News, Prof LeCun said there would be progressive advances - perhaps you might get an AI as powerful as the brain of a rat. That wasn't going to take over the world, and he argued "it's still going to run on a data centre somewhere with an off switch". He added: "And if you realise it's not safe you just don't build it."

It has been argued that AI has the potential to replace many jobs, and some companies have paused recruiting for certain roles as a result.

Prof LeCun told the BBC: "This is not going to put a lot of people out of work permanently." But work would change, because we have "no idea" what the most prominent jobs will be 20 years from now, he said.

Intelligent computers would create "a new renaissance for humanity", the way the internet or the printing press did, he said.

Prof LeCun was speaking on Tuesday ahead of a vote on Europe's AI Act, which is designed to regulate artificial intelligence.

He said that from his conversations with AI start-ups in Europe, "they don't like it at all, they think it's too broad, maybe too restrictive". But he said he wasn't an expert on the legislation.

Prof LeCun said he was not against regulation, but in his view each application would need its own rules - for example, different rules would govern AI systems in cars and those scanning medical images.
