On Tuesday, scientists, industry leaders, and experts in the field of artificial intelligence (AI) issued a new warning regarding the dangers AI poses to the survival of humanity.
The letter was signed by hundreds of prominent figures, including Geoffrey Hinton, the computer scientist widely regarded as a pioneer of artificial intelligence, and Sam Altman, CEO of ChatGPT creator OpenAI. These leaders of the field have become increasingly open about their concerns over AI and the need for safeguards, such as government oversight.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the one-sentence statement, issued by the Center for AI Safety, or CAIS, a nonprofit organization based in San Francisco.
CAIS said the statement was intended to encourage discussion of the urgent risks associated with artificial intelligence among the public, lawmakers, journalists, and AI professionals.
The warning was intentionally succinct — just a single sentence — so that it could encompass a broad coalition of scientists who might not agree on the most likely risks or the best ways to prevent them. “There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority. So we had to get people to sort of come out of the closet, so to speak, on this issue, because many were sort of silently speaking among each other,” said Dan Hendrycks, executive director of CAIS, which organized the effort.
According to reports, Elon Musk and more than 1,000 experts and technologists signed a letter earlier this year calling for a six-month pause on AI development, arguing that it poses serious risks to society and civilization. That letter was sent in reaction to OpenAI’s release of the GPT-4 AI model, but executives from OpenAI, Microsoft, and Google declined to sign it and rejected the call for a voluntary industry pause.
“I think if this technology goes wrong, it can go quite wrong,” Altman said during a recent Senate Judiciary subcommittee hearing on prospective AI governance. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”