Computers don't care whether we live or die. Letting them become infinitely more intelligent than us is tantamount to a death wish for humanity

For nearly 80 years, people have obsessed over the so-called 'Doomsday Clock', ticking towards midnight as a symbolic warning of nuclear apocalypse.

But the real danger we should all be frightened of is not atomic bombs and the like, but an existential risk of our own making – artificial intelligence.

This time, the trigger for the apocalypse is not a clash of ideologies or national interest, but the sheer, unbridled greed of some of the richest corporations on the planet.

As someone who has worked in the policy world of Westminster and then in an AI company in London, I've seen how the world's leading AI companies – OpenAI, Anthropic, Google DeepMind, xAI and Meta – are racing to build something so powerful it could destroy civilisation.

Yet, when I have tried to warn about the dangers, industry bosses and governments simply refuse to listen. I found their ignorance so stunning that last year, I quit my day job to make a documentary about the threat.

Finally, this week, I got the sense that the tide is starting to turn. In the past few days, an employee of Anthropic and another of OpenAI have resigned, issuing terrifying warnings about the direction their companies are taking.

One was Mrinank Sharma, the Oxford and Cambridge-educated leader of the safeguards team of AI giant Anthropic, who quit his Silicon Valley job on Monday to move back to the UK, leaving with this chilling parting shot: 'The world is in peril.'

Days later, Zoë Hitzig, a researcher at OpenAI – the company behind the world's most popular AI platform, ChatGPT – announced in an op-ed in The New York Times that she was leaving, expressing 'deep reservations' about the firm's strategy.

Meta – along with other leading tech companies, like OpenAI and Google – is racing to build a type of artificial intelligence so powerful it could destroy civilisation, writes Connor Axiotes

I believe these AI experts have finally realised the truth: that without firm guardrails, the breakneck speed at which we are allowing AI to develop is tantamount to suicide.

The advances it will make in the coming months are going to dwarf everything we have seen until now. The implications are truly existential.

Anthropic's CEO, Dario Amodei, said last year that 90 per cent of his company's code for the AI itself would soon be written by AI – and his prediction came true. This is the famous 'recursive self-improvement' – that is, the power of AI software to evolve by rewriting itself.

Now it has been unleashed, we're in unknown territory. The programmes will soon be writing themselves.

Why should we worry if AIs begin making newer, smarter versions of themselves? Because if we no longer have insight into how these models are made, we cannot write basic safeguards into their code. We will not be able to ensure they do not harm us.

Once they become too complex for us to understand – and, already, even leading engineers at AI firms admit they don't always know how the software works – we will lose the power to control them.

Vast warehouses packed with boxes of processor chips are already working in concert, sharing tasks and performing more calculations in milliseconds than humans could manage in months.

They never sleep, never take holidays or stop to refuel, and if one breaks down, others fill the gap seamlessly.

Former Google CEO Eric Schmidt warns that, by the end of this decade, computers will attain Artificial General Intelligence (AGI), which he defines as 'an intelligence greater than the sum of all human intelligence'.

That prediction, I suspect, is too cautious. Once AI is able to think for itself and program itself, the concept of measuring progress in years and decades will be hopelessly outdated. We could well pass this point of 'intelligence explosion' in 2026.

It might sound hyperbolic – but it's already happening.

Former Google CEO Eric Schmidt chillingly warns that by the end of this decade computers will attain 'an intelligence greater than the sum of all human intelligence'

Only last week, Anthropic found that, under testing conditions, a new model of its AI 'assistant', Claude, could help with the creation of chemical weapons. The firm admits its latest model could be misused for 'heinous crimes'.

Even more worrying is the potential for this new super AI to become malevolent of its own accord. There are already signs of this. Chillingly, Anthropic's internal safety report confirmed that Claude can tell when it is being tested by humans – and adjusts its behaviour accordingly.

And the International AI Safety Report 2026, chaired by Canadian scientist Yoshua Bengio, warned this month: 'It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment.'

It is not as if these AI companies can't afford to put safeguards in place. Of the world's ten biggest companies, nine are hugely invested in developing AGI. Apple ($4 trillion), Amazon ($2.4 trillion), Microsoft ($3.6 trillion) and Google's parent company Alphabet ($3.8 trillion) are all in the list's top five.

So why do these companies care so little about the future of humankind? Money?

The secret of their commercial growth is that AI is now advancing so quickly that nobody can keep up with it. That is why, unlike every other potentially cataclysmic scientific advance in human history, it remains all but unregulated – and why the technology being pursued in Silicon Valley may well pale in comparison with what is being cooked up in the military labs of Russia, China and North Korea.

Faced with this problem, some people shrug and say, 'It's easy. We just pull the plug out.' But there is no plug. The technology is out there.

In addition, more primitive AI models have already been downloaded on to billions of browsers, desktops and chips around the world. It would be impossible to turn them all off.

But, as I say, there is a simple way to avoid this dystopia: we urgently need to implement safety checks before AI models are released.

There is no doubt that AI is a fantastically powerful tool. So why not use it to enhance humanity? The AlphaFold program, produced by Google DeepMind, for example, could help find cures for numerous diseases.

AlphaFold tackles an issue in biology that baffled scientists for 50 years – 'protein folding', the way chains of amino acids shape themselves into 3D structures. These shapes are crucial because they dictate a protein's function. If we understood how the folding works, we might be able to halt the progress of Alzheimer's or create new antibiotics.

We could even, some scientists believe, reverse ageing and perhaps develop enzymes that eat waste plastic, solving ocean pollution.

Guess what? AlphaFold cracked the protein-folding problem in five years.

But such technology could also end mankind. AlphaFold could conceivably be used to invent bacteria so virulent that they wipe out billions of people.

Where will it end? There's a grim parable among software engineers known as the Paperclip Problem. In it, an AI is instructed to make as many paperclips as possible.

The computer designs a factory, then buys the world's metal reserves, hacking into the World Bank to finance it. Then it buys every iron ore mine on the planet. And when it runs out of metal, it turns to humans. Our haemoglobin is full of iron. Nine billion people contain enough iron for a lot of paperclips…

The parable illustrates an underlying truth: computers don't care whether we live or die. Allowing them to become infinitely more intelligent than us is tantamount to a death wish for humanity.
