Rishi Sunak has been at Bletchley Park, hosting an AI Safety Summit for global politicians, tech executives and experts, and speaking of the need for “honesty and openness” in its use.
Just before the Hamas attack on Israel, the renowned Israeli author, historian and philosopher Yuval Noah Harari was in Azerbaijan to give a keynote address at the opening ceremony of the 74th International Astronautical Congress1 (IAC) in Baku. His evocative and challenging words brought everyone crashing down to Earth. While suggesting that AI has the potential to help humanity, Harari expressed serious concerns about the inherent dangers it presents.
“Soon the era of human domination of this planet might come to an end”, he warned.
“For tens of thousands of years humans have dominated Earth; but if we could go forward in time, we are likely to find a planet dominated by an alien intelligence.
“AI is an alien intelligence. It processes information, makes decisions and creates entirely new ideas in a radically alien way. We have already met this alien intelligence here on Earth and, within a few decades, it might take over our planet.
“Today, it already surpasses us in many tasks, from playing chess to diagnosing some kinds of cancer, and it may soon surpass us in many more. The AI we are familiar with today is still at a very, very early stage of its evolution.”
He described AI as still being at its “amoeba stage” but, unlike organic life, which took billions of years to evolve, it would not develop at such a slow pace.
“Digital evolution is millions of times faster than organic evolution. The AI amoebas of today may take just a couple of decades to get to T-Rex stage.” If ChatGPT is an amoeba, he asked, what do you think an AI T-Rex would look like?
Harari suggested that AI has great potential to help humanity, from exploring other planets to protecting the eco-system of Earth, providing us with much better health care and raising standards of living “beyond our wildest expectations”.
But he issued a stark warning that it would bring many new dangers.
“AI is likely to de-stabilise the global job market and the global economy, worsen existing biases like racism, misogyny and homophobia, spread outrage and fake news, destroy trust between people and thereby destroy the foundations of democracy,” he said.
“Dictatorships too should be afraid of AI, for they work by silencing and terrorising anyone who might speak or act against them. It isn’t easy, however, to silence and terrorise AI. What would a 21st Century Stalin do to a dissenting Bot? Send it to Bot Gulag?”
Harari also believes AI poses a series of existential threats to the very survival of the human species:
“The problem isn’t that AI might be malevolent, the problem is that AI might be so much more competent than us that it will increasingly dominate the economy, culture and politics, while we humans lose the ability to understand what is happening in the world and to make decisions about our future.”
“AI might destroy humanity not through hate and fear, but because it doesn’t care, just as humans have driven numerous other species to extinction by carelessly changing and destroying their habitats.
“I personally have a deep fear of this scenario. I believe that what really matters in life is not intelligence, but consciousness.”
Intelligence versus consciousness
Harari said intelligence should not be confused with consciousness. “Intelligence is the ability to solve problems, like winning at chess or curing cancer. Consciousness is the ability to feel things like pain and pleasure, love and hate. In humans and also in other mammals and birds, intelligence goes hand-in-hand with consciousness.”
Despite an immense advance in computer intelligence over the past half century, he acknowledged there has been exactly “zero advance” in computer consciousness with no indication that computers are anywhere on the road to developing it.
“Just as spaceships, without ever developing feathers, fly much further than birds, so computers may come to solve problems much, much better than human beings without ever developing feelings,” he said.
“If human consciousness goes extinct and our planet falls under the dominion of super intelligent but entirely non-conscious entities, that would be an extremely sad and dark end to the story of life.”
How can we avoid this dark fate and deal with the numerous challenges posed by AI? The good news is that while AI is nowhere near its full potential, the same is true of humans too.
Harari suggested that humanity first needed to focus its attention on the existential threat posed by AI.
“We humans need to stop fighting among ourselves and co-operate on our shared interests. Unfortunately, in too many countries, like in my own country of Israel and elsewhere, people are not focused on our shared human interests, but rather on fighting with the neighbours about a few hills. What good would it do to win these hills if humanity loses the whole planet?”
Even if humans across the world co-operate, he described the task of regulating AI as a difficult and delicate one.
“Given the pace at which AI is developing it is impossible to anticipate and regulate in advance all the potential hazards. Therefore regulations should be based less on creating a body of rigid rules and more on establishing living regulatory institutions that can quickly identify and respond to problems as they arise,” he said.
“To function well the institutions should also be answerable to the public and should stay in close contact with the human communities all over the world that are affected and impacted by AI.”
Harari believes regulatory institutions will need one more crucial asset – strong self-correcting mechanisms – if we are to prevent an AI catastrophe.
“The greatest danger to humanity comes from a false belief in infallibility. But even the wisest people make mistakes and AI is not infallible either”, he said.
“If we put all our trust in some allegedly infallible AI, in some allegedly infallible human being or in some allegedly infallible institution, the result could be the extinction of our species.
“In the past humans have made some terrible mistakes, like building totalitarian regimes, creating exploitative empires and waging world wars.
“Nevertheless, we survived because previously we did not have to deal with technology that could annihilate us. Hitler and Stalin killed millions but they couldn’t destroy humanity itself. Humanity got a second chance to learn from its catastrophic mistakes and experiments.”
But Harari warned that AI is very different. “If we make a big mistake with AI we may never get a second chance to learn from it. We should not allow any single person, corporation or country to take a gamble on the fate of our entire species and perhaps on the fate of all earthly life forms”, he said.
“As far as we know today, terrestrial animals may be the only conscious entities in the entire galaxy or perhaps in the entire universe. There might be other conscious beings out there somewhere, but at least to the best of my knowledge we haven’t met any of them, so we cannot be sure.
“We have now created a non-conscious but very powerful alien intelligence here on Earth. If we mishandle AI it might extinguish not just the human dominion over this planet, but the light of consciousness itself, turning the universe into a realm of utter darkness. It is the responsibility of all of us to prevent this.”
1 The 74th International Astronautical Congress (IAC), held in Baku, Azerbaijan, from 2 to 6 October 2023, was organised by the International Astronautical Federation (IAF) in conjunction with Azercosmos (the Space Agency of the Republic of Azerbaijan) under the theme ‘Challenges and Opportunities: Give Space a Chance’. In 2024 the IAC will be held in Milan, Italy.