Several technology leaders have warned that artificial intelligence (AI) could pose a risk of human extinction, stressing that regulating and controlling the technology should be a top global priority.
The warning came in a statement from the Center for AI Safety, signed by many technology company leaders, including Sam Altman, CEO of OpenAI, the developer of ChatGPT, as well as executives from Google's AI subsidiary DeepMind and from Microsoft.
The statement read: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
AI development has accelerated in recent months, especially since the public release of the ChatGPT chatbot in November 2022. The tool quickly gained widespread popularity, attracting 100 million users within just two months of its launch.
ChatGPT has amazed researchers and the general public with its ability to generate human-like responses to user queries, raising concerns that AI could displace jobs and impersonate humans.
The statement noted a growing debate about "a broad spectrum of important and urgent risks from AI," adding that it can be "difficult to voice concerns about some of advanced AI's most severe risks."
The statement's purpose, its authors say, is to overcome this obstacle and open up discussion of those risks.
Tesla's Elon Musk and former Google CEO Eric Schmidt are among other technology leaders who have also warned about the risks AI poses to society.
In an open letter in March, Musk, Apple co-founder Steve Wozniak, and several other technology leaders urged AI labs to pause the training of systems more powerful than GPT-4, OpenAI's latest large language model, for at least six months before pursuing any more advanced development of the technology.
The letter stated that "contemporary AI systems are now becoming human-competitive at general tasks."
The letter posed several questions, among them: "Should we automate away all jobs? Should we develop non-human minds that might eventually outnumber us, outsmart us, and replace us? Should we risk losing control of our civilization?"
Schmidt has separately warned of the "existential risks" associated with AI as the technology advances.