More than 350 executives, researchers, and engineers working on artificial intelligence have signed an open letter warning that the AI technology they are developing may one day pose an existential threat to humanity, the New York Times reported.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads the one-sentence statement released by the Center for AI Safety, a nonprofit organization. The signatories include top executives from OpenAI, Google DeepMind, and Anthropic.
Among the most prominent signatories are Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won the Turing Award for their pioneering work on neural networks, also signed the letter.
The open letter arrives at a moment of growing concern about the potential harms of artificial intelligence. Recent advances in “large language models”—the type of AI system that powers ChatGPT and other chatbots—have raised fears that AI could soon be used at scale to spread disinformation and propaganda, or that it could eliminate millions of white-collar jobs.
President Biden and Vice President Kamala Harris recently met with Altman, Hassabis, and Amodei to discuss AI policy. Following the meeting, Altman testified before the Senate, warning that the risks posed by advanced AI systems were serious enough to warrant government regulation.
Dan Hendrycks, executive director of the Center for AI Safety, said the open letter served as a “coming-out” for some industry leaders who had previously voiced concerns about the risks of the technology they were building, but only in private. “Even inside the AI community, there is a widespread assumption that there are only a small number of doomers,” Mr. Hendrycks said. “But a lot of people have privately expressed concerns about these things.”