I saw a glimpse into a possible future in Homo Deus, a book written by Yuval Noah Harari in 2016. Before his journey forward, the author explores the past, and his conclusions challenged my core belief system in a very uncomfortable way. His arguments are logical and thoughtful (whether I agree with them or not), and they rest on a foundation of life science, algorithms, and biotechnology. This look forward once again raises the question of ethics. The author himself makes this point when he says:
“The rise of AI and biotechnology will certainly transform the world, but it does not mandate a single deterministic outcome. All the scenarios outlined in this book should be understood as possibilities rather than prophecies. If you don’t like some of these possibilities you are welcome to think and behave in new ways that will prevent these particular possibilities from materializing.”
I’ve given each of the main themes of this possible future a label and provide a quick look at each, using excerpts taken directly from the book. Decide for yourself whether this future represents an enhanced or a diminished humanity.
Humanity’s next targets: immortality, happiness and divinity. In 2012 Ray Kurzweil was appointed director of engineering at Google, and a year later Google launched a sub-company called Calico whose stated mission is to solve death. Kurzweil maintains that anyone possessing a healthy body and a healthy bank account in 2050 will have a serious shot at immortality by cheating death a decade at a time.
A new home: although humans live in the world of organic compounds, life will break out into the vastness of the inorganic realm and off planet Earth, because a non-organic artificial intelligence will find it far easier to colonize alien planets.
The elite and the useless class: those who can afford to give themselves superhuman powers will, while those who cannot will become members of the useless class. In the twenty-first century we might witness the creation of a massive new un-working class: people devoid of any economic, political or even artistic value. This useless class will not merely be unemployed; it will be unemployable. And once we focus on healing, no clear line separates it from upgrading. Medicine almost always begins by saving people from falling below the norm, but the same tools and know-how can then be used to surpass the norm (Viagra began as a blood-pressure medicine, and plastic surgery was first focused on injured soldiers). What starts as a way to save a child from a disabled life can quickly lead to designer babies.
We really aren’t conscious beings: and we really don’t have free will. Today researchers can use brain scanners to predict people’s desires and decisions well before they are aware of them (free will, or algorithm-driven?). In addition, the life sciences cannot find a mind that is separate and distinct from the neurons (algorithms) that drive our intelligence. We will therefore finally decouple consciousness from intelligence, allowing non-conscious algorithms to take on most human tasks.
Abandoned by the system: humans will lose their economic and military usefulness, and the economic and political system will therefore stop attaching much value to them. The system will continue to find value in humans collectively, but not in unique individuals, unless they are upgraded super-humans. Humans were once important in both a military and an economic context. We could lose our economic relevance, and the most advanced armies of the twenty-first century already rely far more on cutting-edge technology than on manpower. Countries now need only small numbers of highly trained soldiers, special forces and a handful of experts who know how to produce and use sophisticated technology. Hi-tech forces of pilotless drones and cyber-worms are replacing the mass armies of the twentieth century, and generals delegate more and more critical decisions to algorithms.
The growing concentration of wealth and power: As algorithms push humans out of the job market, wealth and power might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality. Alternatively, the algorithms might themselves become owners. An algorithm could then own a transportation empire or a venture-capital fund without having to obey the wishes of any human master.
A world of reinvention: the crucial problem isn’t creating new jobs; it’s creating new jobs that humans perform better than machines. Since we do not know what the job market will look like in 2030 or 2040, we have no idea what to teach our kids. Most of what they currently learn will probably be irrelevant by the time they are forty. Traditionally, life has been divided into two main parts: a period of learning followed by a period of working. Very soon this traditional model will become utterly obsolete, and the only way for humans to stay in the game will be to keep learning throughout their lives and to reinvent themselves repeatedly.
You know me so well: the system will understand humans better than they understand themselves and will make most of the important decisions for them, thereby depriving individuals of their authority and freedom. An algorithm that monitors each of the systems that comprise my body and my brain could know exactly who I am, how I feel and what I want. Once developed, such an algorithm could replace the voter, the customer and the beholder. It will advise us which movie to see, where to go on holiday, what to study in college, which job offer to accept, and even whom to date and marry. Practices such as democratic elections will become obsolete, because algorithms will represent even my own political opinions better than I can. Algorithms will go from oracle (answering our questions) to agent (driving us to a specific location).
Once the oracle-to-agent transition happens, algorithms might start speaking directly with one another on our behalf. My agent might contact yours and agree on a place and time for a meeting. My agent could sit in on a job interview for me, or compare notes with the agent of a potential lover to decide whether it’s a good match.
The body snatchers: just as my home computer is constantly attacked by viruses, worms and Trojan horses, so too will be my pacemaker, hearing aid and nanotech immune system. If I don’t update my body’s anti-virus program regularly, I will wake up one day to discover that the millions of nanobots coursing through my veins are now controlled by a North Korean hacker.
Some of this is self-inflicted: Already today many of us give up our privacy and our individuality by conducting much of our lives online, recording our every action and becoming hysterical if our connection to the net is interrupted even for a few minutes. The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane personal choices. If we are not careful the result might be an Orwellian police state that constantly monitors and controls not only all our actions, but even what happens inside our bodies and our brains.
Techno-Humanism: the belief that Homo sapiens as we know it has run its historical course and will no longer be relevant in the future, and that we should therefore use technology to create Homo Deus, a much superior human model. Homo Deus will retain some essential human features, but will also enjoy upgraded physical and mental abilities that will enable it to hold its own even against the most sophisticated non-conscious algorithms. Since intelligence is decoupling from consciousness, and since non-conscious intelligence is developing at breakneck speed, humans must actively upgrade their minds if they want to stay in the game.
Dataism: the belief that humans can no longer cope with the immense flows of data, distilling them into information, knowledge or wisdom. The work of processing data should therefore be entrusted to algorithms, whose capacity far exceeds that of the human brain. You may not agree with the idea that organisms are algorithms, but you should know that this is current scientific dogma, and it is changing our world beyond recognition. As both the volume and the speed of data increase, venerable institutions like elections, political parties and parliaments might become obsolete because they can’t process data efficiently enough. These institutions evolved in an era when politics moved faster than technology. In the nineteenth and twentieth centuries the Industrial Revolution unfolded slowly enough for politicians and voters to remain one step ahead of it and to regulate and manipulate its course. Yet whereas the rhythm of politics has not changed much since the days of steam, technology has switched from first gear to fourth. Technological revolutions now outpace political processes.
We lack grand visions: traditional democratic politics is losing control of events, and is failing to present us with meaningful visions of the future. Consequently, in the early twenty-first century politics is bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it. Power vacuums seldom last long. If in the twenty-first century traditional political structures can no longer process data fast enough to produce meaningful visions, then new and more efficient structures will evolve to take their place.
As with everything in life, our path forward requires balance: enabling the good while mitigating the risk of unintended consequences. I don’t know about you, but the notion of a useless class (for example) is not very appealing. You may say this is all science fiction, and that we are centuries away from that world. Maybe, but they also said cars would never drive themselves. Just saying. As the author said:
“If you don’t like some of these possibilities you are welcome to think and behave in new ways that will prevent these particular possibilities from materializing.”