An open statement published by the Center for AI Safety (CAIS) has garnered signatures from numerous AI experts, including the CEOs of OpenAI, Google DeepMind, and Anthropic. The statement calls for making the mitigation of extinction risk from AI a global priority, alongside other societal-scale risks such as pandemics and nuclear war.
The single-sentence statement, which reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” has attracted notable figures in the AI field. Signatories include Geoffrey Hinton, often referred to as the “Godfather” of AI, Stuart Russell from the University of California, Berkeley, Lex Fridman from the Massachusetts Institute of Technology, and even musician Grimes, among others.
While the statement appears straightforward, it reflects a divisive topic within the AI community. Some experts believe that current technologies will inevitably lead to AI systems that pose existential threats to humanity. Others disagree: experts such as Yann LeCun and Andrew Ng argue that AI is not the problem but the solution.
Those who support the statement, such as Hinton and Conjecture CEO Connor Leahy, consider human-level AI inevitable and advocate taking action now. What specific actions the signatories seek remains unclear. It is evident, though, that the intention is not to halt the development of potentially dangerous AI systems, given that the CEOs and heads of major AI companies have also signed the statement.
OpenAI CEO Sam Altman, one of the signatories, recently appeared before Congress to discuss AI regulation, urging lawmakers to regulate the technology. Separately, Altman’s Worldcoin project, which combines cryptocurrency with proof-of-personhood, has attracted attention after raising substantial funding.