AI expert calculates 99.9% risk of human extinction within the next century

Artificial intelligence researcher Roman Yampolskiy believes civilisation faces extinction due to AI development.

The computer scientist, who focuses on AI safety and cybersecurity at the University of Louisville, forecasts a staggering 99.9% probability that artificial intelligence will wipe out humanity within the coming century, according to his appearance on podcaster Lex Fridman's programme released on Sunday.

Throughout the extensive two-hour interview, he argued that not one AI system launched to date has proven secure and voiced pessimism that future iterations will avoid critical flaws. He joins a select group of pioneering AI developers sounding such alarms amid the artificial intelligence race under President Donald Trump.

Yampolskiy published a volume last year entitled "AI: Unexplainable, Unpredictable, Uncontrollable," described as providing "a broad introduction to the core problems, such as the unpredictability of AI outcomes or the difficulty in explaining AI decisions."

"This book arrives at more complex questions of ownership and control, conducting an in-depth analysis of potential hazards and unintentional consequences," he said.

"The book then concludes with philosophical and existential considerations, probing into questions of AI personhood, consciousness, and the distinction between human intelligence and artificial general intelligence (AGI)."

Technology observers note that AI's early pioneers, Yampolskiy amongst them, are delivering some of the most severe warnings about the devastation and apocalyptic consequences this technological progress might bring.

Nevertheless, some research challenges Yampolskiy's extinction forecast, suggesting a significantly lower threat than his calculations imply.

A survey carried out by researchers at the University of Oxford in England and the University of Bonn in Germany found a median estimate of merely a 5% probability that AI will eliminate humanity, based on responses from more than 2,700 AI researchers.

"People try to talk as if expecting extinction risk is a minority view, but among AI experts it is mainstream," warns Katja Grace, one of the paper's authors. "The disagreement seems to be whether the risk is 1% or 20%."

Several leading AI specialists have completely dismissed assertions about AI causing an apocalyptic situation, including Google Brain co-founder Andrew Ng and AI pioneer Yann LeCun, the latter of whom accused technology leaders such as OpenAI's Sam Altman of harbouring hidden agendas behind their alarmist rhetoric about catastrophic AI outcomes.

OpenAI's Altman has made numerous troubling statements regarding his industry. He cautioned that AI will probably destroy countless jobs, which he characterised as not "real work," comments that sparked fierce criticism.

Mirroring Altman's forecasts, numerous AI sceptics have similarly cautioned that the technology could trigger an economic catastrophe by displacing workers across virtually every sector.

More than ten years ago, in 2015, Altman ominously declared, "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."

He also encountered sharp backlash for stating earlier this year that the widespread adoption of AI will necessitate "changes to the social contract."