The Future of Humanity and Artificial Intelligence: A Fate Worse Than Extinction

If history’s totalitarian governments had never been defeated, what would they look like today? The Nazis wielded 20th-century technology, and only their defeat in WWII halted its development. Had they built the atomic bomb before the US, how powerful would they have become, and how long would they have lasted? With access to the most advanced technology of their era, the Nazis could have consolidated power and changed the course of history.

Tragedies like nuclear war or an asteroid strike often come to mind when we imagine existential risks. Yet there is another, less familiar threat to our future – one that, while unlikely to wipe out the human race, would be just as bad.

This is the “world in chains” scenario: a thought experiment in which a global totalitarian government uses new technology to lock most of the world’s population into permanent misery. If that sounds grim, it is. But is it even possible? Researchers and philosophers are beginning to ask how it might come about – and, more importantly, what we can do to avoid it.

Existential risks are catastrophic because they lock humanity into a single fate, such as the permanent collapse of civilization or human extinction. They could stem from natural causes, such as an asteroid impact or a super-volcanic eruption, or from human activity, such as nuclear war or climate change. Such an outcome would be “a tragic end to the human condition”, says Haydn Belfield, academic project manager at the Centre for the Study of Existential Risk at the University of Cambridge, “and would make us ashamed before hundreds of generations of our ancestors.”

Hitler inspecting Germany’s advanced engineering of the time – what if the Nazis had held an unbeatable technological advantage?

Toby Ord, a senior research fellow at the Future of Humanity Institute at the University of Oxford, puts the chance of a natural existential catastrophe this century at less than one in 2,000, on the grounds that humanity has already survived 2,000 centuries of natural hazards. Once human-made risks are counted, however, Ord believes the probability rises to a staggering one in six. He calls this century the “precipice”: the risk of losing humanity’s future has never been higher.

Researchers at the Center on Long-Term Risk, a non-profit research organization in London, have taken existential risks a step further and arrived at an even more chilling prediction. They define these “suffering risks” as involving “suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far”. In such a scenario, billions of people would still be alive, but in such extreme misery, and with such bleak prospects, that death would be the better option. In short: a future of negative value would be worse than a future of no value at all.

That is where the “world in chains” scenario comes in. If a malevolent organization or government gained power over the world through technology and nothing could stop it, a long period of human misery could follow. In 2017, the Global Priorities Project, together with the Future of Humanity Institute and the Ministry for Foreign Affairs of Finland, published a report on existential risks. It warns that “over the long future, humanity could end up worse off than under total extinction if ruled by a particularly brutal global totalitarian state”.

The singleton hypothesis

While global totalitarianism remains a niche research topic, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.

In his singleton hypothesis, Nick Bostrom, director of Oxford’s Future of Humanity Institute, explains how artificial intelligence or other powerful technologies could be used to form a global government, and why that government might never be overthrown. If a body “takes a decisive lead using breakthroughs in artificial intelligence or molecular nanotechnology”, he writes, the world could come to be dominated by “a single decision-making agency at the highest level”. Once in power, that agency could exploit technologies that suppress internal challenges, such as surveillance or autonomous weapons, and maintain perpetual stability under its technological monopoly.

If such a singleton were totalitarian, human civilization would be bleak. In the past, even under the strictest regimes, news could flow in from other countries and people could choose to go into exile. Global totalitarian rule would eliminate even those hopes. Worse than extinction, “it would mean we had no freedom at all, no privacy, no hope of escape, no ability whatsoever to control our own lives”, says Tucker Davey, a writer at the Future of Life Institute in Massachusetts, which focuses on existential risks.

“In the totalitarian regimes of the past, there was so much paranoia and psychological torture because you never knew whether you would be killed for saying the wrong thing,” he continues. “Now imagine there isn’t even that uncertainty – everything you say is being reported and analyzed.”

In a recent interview, Ord said: “We may not yet have the technology to make this happen, but the technologies being developed make it increasingly possible. At some point in the next 100 years, it seems likely to become a reality.”

Artificial intelligence and authoritarianism

While global domination by a totalitarian government remains unlikely, AI is already enabling authoritarianism in some countries and, in others, handing opportunistic tyrants the means to seize more and more basic public powers.

“It’s thought-provoking to see how the future AI offers humanity has shifted from the deeply utopian to the dystopian,” says Elsa Kania, an adjunct senior fellow at the Center for a New American Security, a nonpartisan, nonprofit center for the study of national security and defense policy.

Even if a benevolent government installed surveillance cameras everywhere, that infrastructure would make it easier for a future totalitarian regime to rule the country.

In the past, surveillance required hundreds of thousands of people – in East Germany, for example, one in every 100 citizens was an informant. Now it can be done with technology. Before its domestic surveillance program ended in 2019, the US National Security Agency (NSA) collected the call and text-message records of hundreds of millions of Americans. An estimated four to six million CCTV cameras operate across the UK. Eighteen of the world’s 20 most surveilled cities are in China, but London ranks third. The difference between them lies less in the technology itself than in how it is used.

What would happen if the US and UK broadened the definition of illegal acts to include criticizing the government or practicing certain religions? The surveillance infrastructure is already in place, and the NSA has begun experimenting with artificial intelligence, allowing agencies to sift through data faster than ever before.

Beyond expanded surveillance, AI also powers online disinformation, another tool available to authoritarian governments. Sophisticated AI-generated fakery can spread false political messages, and algorithmic micro-targeting on social media can make propaganda more persuasive. This undermines our cognitive security – the very foundation of democracy: citizens’ ability to determine what is factual and act on it.

“In the last few years, we’ve seen the social effects of filtered information, where algorithms of all kinds mislead people into believing all sorts of conspiracy theories – or things that, while not exactly conspiracy theories, are only partial truths,” Belfield says. “You can imagine things getting much worse, especially with deepfakes and the like, until it becomes harder and harder for us, as a society, to decide what is factual and what we must do, and then to take collective action.”

Preemptive measures

The Malicious Use of Artificial Intelligence report, written by Belfield and 25 co-authors from 14 institutions, predicts that such trends will aggravate existing threats to political security and create new ones in the coming years. Nonetheless, Belfield says his work gives him hope: positive signs, such as democratic debate about AI and emerging policy (for example, the EU’s consideration of a moratorium on facial recognition in public places), give us reason to believe a catastrophic fate can still be avoided.

Davey agrees. “We have to decide now: what are acceptable uses of artificial intelligence, and what are unacceptable ones?” he says. “And we need to be careful not to let it control so much of our infrastructure. If we equip police with facial recognition and the federal government can collect all of our data as a result, that’s a bad start.”

If you are still skeptical that artificial intelligence could be that powerful, consider the world on the eve of nuclear weapons. Three years before the first nuclear chain reaction, even the scientists trying to achieve one thought it was impossible. Humanity was likewise unprepared for the nuclear breakthrough, and it teetered on the brink of “mutual annihilation” before treaties and agreements limited the global spread of this deadly weapon – fortunately without a great catastrophe occurring first. We should combine the lessons of history with foresight to prepare for the invention of powerful technologies. We can do the same for artificial intelligence.

The world may not be able to prevent totalitarian regimes like the Nazis from rising again, but we can at least avoid handing them unlimited power.