Musk and others call for a temporary halt to the development of artificial intelligence systems, "due to their danger to society."

US billionaire Elon Musk and a group of artificial intelligence experts and industry executives have called, in an open letter, for a six-month pause on developing systems more powerful than GPT-4, the chatbot recently launched by OpenAI, citing the potential risks such applications pose to society.

Earlier this month, Microsoft-backed OpenAI unveiled GPT-4, the fourth version of the artificial intelligence program behind ChatGPT, which has won over users by engaging them in human-like conversation, composing songs, and summarizing long documents.

“Powerful AI systems should be developed only when we are confident that their effects will be positive and that their risks will be manageable,” said the letter issued by the Future of Life Institute.

According to the European Union's Transparency Register, the main funders of the non-profit institute are the Musk Foundation, the London-based group Founders Pledge, and the Silicon Valley Community Foundation.

“AI makes me really nervous,” Musk said earlier this month. Musk is one of the co-founders of OpenAI, and his company Tesla uses artificial intelligence in its self-driving systems.

OpenAI did not immediately respond to a Reuters request for comment on the open letter, which demanded that development of AI systems be paused until independent experts develop shared safety protocols.

“Should we allow machines to flood our information channels with propaganda and lies? Should we develop non-human minds that might eventually outnumber us, outsmart us, outpace us, and replace us?” the letter said.

“Such decisions should not be delegated to unelected tech leaders,” it added.

More than 1,000 people have signed the letter, including Musk.

Sam Altman, Sundar Pichai, and Satya Nadella, the chief executives of OpenAI, Alphabet, and Microsoft respectively, were not among the signatories.

These concerns come at a time when ChatGPT has attracted the attention of US lawmakers, who have questioned its impact on national security and education.

On Monday, the European Police Agency (Europol) warned that the chatbot could be exploited for online phishing attempts, the spread of disinformation, and cybercrime.

Meanwhile, the British government has unveiled proposals for an “adaptable” regulatory framework for AI.
