Dozens of artificial intelligence (AI) experts, including the CEOs of OpenAI, Google DeepMind and Anthropic, recently signed an open statement published by the Center for AI Safety (CAIS).
The statement contains a single sentence:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The document’s signatories include a veritable “who’s who” of AI luminaries: the “Godfather of AI,” Geoffrey Hinton; the University of California, Berkeley’s Stuart Russell; and the Massachusetts Institute of Technology’s Lex Fridman. Musician Grimes is also a signatory, listed under the “other notable figures” category.
Related: Musician Grimes willing to ‘split 50% royalties’ with AI-generated music
While the statement may appear innocuous on the surface, its underlying message is somewhat controversial within the AI community.
A seemingly growing number of experts believe that current technologies could lead, perhaps inevitably, to the development of an AI system capable of posing an existential threat to the human species.
Their views, however, are countered by a contingent of experts with diametrically opposed opinions. Meta chief AI scientist Yann LeCun, for example, has noted on numerous occasions that he doesn’t necessarily believe that AI will become uncontrollable.
“Super-human AI is nowhere near the top of the list of existential risks. In large part because it doesn't exist yet. Until we have a basic design for even dog-level AI (let alone human level), discussing how to make it safe is premature.”