Artificial intelligence (AI) has become a subject of growing concern, with experts, including the heads of OpenAI and Google DeepMind, warning that it could lead to the extinction of humanity. The Center for AI Safety has garnered support from numerous individuals who advocate for prioritizing the mitigation of AI-related risks.
A statement published on the center’s webpage asserts that safeguarding against the threat of AI-induced extinction should be treated as a global priority, comparable to addressing challenges such as pandemics and nuclear war. Prominent figures including Sam Altman, CEO of OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei of Anthropic have voiced their support for the initiative. Geoffrey Hinton, widely recognized as a “godfather of AI,” has also endorsed the call, having previously issued his own warning about the dangers of super-intelligent AI.
Despite the warnings from these experts, some voices dismiss the fears as exaggerated. They argue that concerns about AI wiping out humanity are unrealistic and divert attention from pressing issues such as the biases present in current AI systems.
The debate surrounding the impact of AI on humanity continues, with proponents emphasizing the need for proactive measures to ensure the safe development and deployment of AI, while skeptics underscore the importance of addressing the immediate challenges and shortcomings in existing AI technologies.