Artificial Intelligence could end the world.
Leaders in the AI industry have warned that the technology they build could lead to the extinction of humanity. Many issued these warnings even as AI products rapidly gain market share, though some observers argue the warnings are overblown.
The Center for AI Safety, a group of AI experts and public figures, issued the following statement on the risks of AI:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Notable signers of the statement so far include Sam Altman, chief executive of OpenAI (the maker of ChatGPT); Demis Hassabis, chief executive of Google DeepMind; and Dr. Geoffrey Hinton, who has previously sounded early warnings about AI.
Breaking: 350+ leading AI researchers (including the CEOs of OpenAI, Anthropic, and Google DeepMind) have signed a remarkable statement warning that AI poses a “risk of extinction,” and comparing it to pandemics and nuclear weapons. https://t.co/Fllcgzlq0C
— Kevin Roose (@kevinroose) May 30, 2023
Elon Musk, chief executive of Tesla and Twitter, has also urged a pause on the next generation of AI, citing the threat it poses to society and technology. In his statement, Musk asked
“if we should ‘develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us.'”
This comes just days after world leaders met to discuss threats to society, AI among them. Leaders at the G7 summit have since opened a working group dedicated solely to examining AI. What do you think: is AI dangerous for humanity? Will it be the end of life as we know it?