Tech executives call for AI labs to temporarily pause training

More than 1,100 signatories, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter, posted online recently, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter reads:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

The letter argued that there is a “level of planning and management” that is “not happening,” and that instead, in recent months, unnamed “AI labs” have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter’s signatories, some of whom are AI experts, say the pause they are asking for should be “public and verifiable, and include all key actors.” If such a pause “cannot be enacted quickly, governments should step in and institute a moratorium,” the letter said.

Those who have signed the letter include some engineers from Meta and Google, Stability AI founder and CEO Emad Mostaque, and people not in tech, including a self-described electrician and an aesthetician. No one from OpenAI, the outfit behind the large language model GPT-4, has signed this letter.

OpenAI CEO Sam Altman told the Wall Street Journal that OpenAI has not started training GPT-5. Altman also noted that the company has long given priority to safety in development and spent more than six months doing safety tests on GPT-4 before its launch.

“In some sense, this is preaching to the choir,” he told the Journal. “We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”

Altman more recently sat down with computer scientist and popular podcaster Lex Fridman and spoke about his relationship with Musk, who was a co-founder of OpenAI but stepped away from the organisation in 2018, citing conflicts of interest. (A newer report from the outlet Semafor says Musk left after his offer to run OpenAI was rebuffed by its other co-founders, including Altman, who assumed the role of CEO in early 2019.)

Altman said he finds some of Musk’s behaviour hurtful.

Source: Excerpts from techcrunch.com
