Add Singapore to the list of nations establishing a Smart Nation agenda. In a recent article, Eileen Yu describes the launch of Singapore’s national artificial intelligence (AI) strategy. The Singapore government aims to drive AI adoption to generate economic value and to provide a global platform on which to develop and test AI applications. As future scenarios go, a Smart Nation represents a complex intersection of multiple ecosystems – broader than the similar complexities associated with the Smart City scenario.
“When considering potential risks from future technology, one should not be content with merely analyzing what’s likely to happen—instead, one should look at what’s possible, even if unlikely.” – Jaan Tallinn, founding member of Skype, and co-founder of the Centre for the Study of Existential Risk
Very well said. I’m a big believer in that quote – it is the reason I spend so much time painting pictures of possible futures. Mr. Tallinn expects the backbone of technology in the 2020s to be defined by gradual improvements in biotechnology, nanotechnology, and artificial intelligence. What else can we expect in the next decade? A recent article by George Dvorsky – a senior staff reporter at Gizmodo – explores the futuristic developments we may see over the next ten years.
Although I believe prediction in this era is a fool’s errand, we can look at the trajectory of our Science and Technology Curve to make some informed guesses. That’s precisely what the above article attempts to do. Here are the author’s informed guesses.
Maurice Conti is the Chief Innovation Officer at Alpha, focused on what he calls the Augmented Age. He describes it this way: we’re heading for a future where our natural human capabilities will be radically augmented in three ways. Computational systems will help us think. Robotic systems will help us make. And a digital nervous system will connect us to the world far beyond what our natural nervous system can offer.
I recently ran into a TCS colleague at a forum in which I presented. Ryan Metz is a data scientist working at our Cornell Innovation Lab. Ryan mentioned an article he had written about the short-term impact of AI – versus the long-term concerns voiced by the likes of Elon Musk and Stephen Hawking. As he states in the article, the long-term concern is that we will produce machines so intelligent that we lose control over them. They will become a new form of life that rules over us as we do over the animal kingdom.
Two recent articles highlight the dilemma faced in this era of rapid innovation: the potential to enhance humanity, and the opportunity to diminish it. This article on deepfakes describes the challenge society will face as deepfake video and audio make it impossible to tell the difference between reality and fiction. Audio attacks using convincing forgeries could send stocks plummeting or soaring. How about mimicking a CEO’s voice to request that a senior financial officer transfer money? These are real examples provided by Symantec. This short video describes the money-transfer scenario.
Nick Burnett, futurist and education and learning entrepreneur, recently published an article on education and artificial intelligence. The article launches a series that explores exponential technologies and their impact on learning and teaching. The post was co-authored with Nick Kairinos and the Fountech team. A focus on learners, teachers, and leaders is critical, as education is the key to success in the 21st century. I recommend the article.
In the past several weeks, the topic of Digital Ethics has come up several times. A critical piece of this discussion involves the bias that is and will be built into the applications of artificial intelligence. Amy Webb is a quantitative futurist, a professor of strategic foresight at NYU, and the founder and CEO of The Future Today Institute. In March of this year, Amy published a book titled The Big Nine.
In her book, she tackles the issues associated with bias – specifically, the lack of diversity in computing. In this recent article, Amy discusses the consequences of computer systems that don’t anticipate all the types of people who might use them. For example, computers have started issuing prison sentences. A quick look at one of the largest technology companies underscores the severity of the issue: at Google, more than 95 percent of technical workers are white or Asian.
In reacting to the big focus on STEM, Amy had this to say: “If everyone is focused on the nuts and bolts of making software quickly at scale, where will they learn to design it with equity and care? Critical thinking is what the computers won’t be able to do.” I recommend both the book and the article as a means of education and awareness regarding this critical issue of bias.
Amy Webb is a quantitative futurist. She is a professor of strategic foresight at NYU, and the founder and CEO of The Future Today Institute. In March of this year, Amy published a book titled The Big Nine.
Here is a description of the book via Amazon:
In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI – the people working on the systems, their motivations, the technology itself – are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways that defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.
Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations.
With the start of a new year, the traditional focus on predictions has begun. Prognosticators do so at their own peril – as this New Yorker article clearly articulates. In 1968, the Foreign Policy Association (formed in 1918 to promote the League of Nations) celebrated its fiftieth anniversary by publishing a book of predictions about what the world would look like in fifty years. Well, here we are fifty years later, and if history is any guide, futurists have very little credibility. Although predictions these days may be even more difficult – given that our complex systems are progressing and scaling at an unprecedented rate – a macro-level focus can provide an ability to scan beyond the horizon. So rather than attempt to predict, here are my 2019 thoughts on several macro-level forces.
The dialog surrounding artificial intelligence and ethics is amplifying. As society continues this rapid progression towards an automated future, the role of ethics becomes increasingly important. After painting a picture of several possible futures for an audience, I would ask this question: is this a society that you want to live in? This discussion was captured in a post on Technology and Ethics, along with a poll to capture my readers’ thoughts on this same question (please take the poll to keep the dialog moving).
This question underscores the need to proactively manage the path towards these possible futures – and the discussion surrounding ethics plays a critical role. Enter initiatives like Moral Machine from MIT. As described on its website, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever-increasing pace. The Moral Machine provides a platform for building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas. Given the challenges of coding an ambiguous set of morals and ethics into machines, crowdsourcing makes great sense. So visit the Moral Machine platform and add your voice to the conversation.
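The crowdsourcing idea is simple to picture in miniature. Here is a toy sketch – with entirely hypothetical responses, not the Moral Machine dataset or methodology – of how many individual judgments on a single dilemma can be aggregated into one crowd-level answer:

```python
from collections import Counter

# Hypothetical responses to one moral dilemma, e.g. "should the
# autonomous vehicle swerve or stay on course?" Each entry is one
# respondent's choice.
responses = [
    "swerve", "stay", "swerve", "swerve", "stay",
    "swerve", "stay", "swerve", "swerve", "stay",
]

def crowd_policy(votes):
    """Return the majority choice and its share of the total vote."""
    counts = Counter(votes)
    choice, n = counts.most_common(1)[0]
    return choice, n / len(votes)

choice, share = crowd_policy(responses)
print(choice, share)  # "swerve" wins with 6 of 10 votes
```

Real platforms aggregate across many scenarios and demographic slices rather than a single majority vote, but the core move is the same: turn an ambiguous ethical question into a distribution of human judgments.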
I just finished a fantastic book on artificial intelligence and the evolutionary paths of China and the U.S. Author Kai-Fu Lee inspires, as he focuses on the astounding capabilities of AI, and the one thing that only humans can provide: love. The journey includes the author’s own brush with mortality, and proposes a path forward: the synthesis on which we must build our shared future is AI’s ability to think, coupled with a human’s ability to love. He believes this synergy harnesses the undeniable power of artificial intelligence to generate prosperity, while also embracing our essential humanity. His hope for our future lies both in this new synergy between artificial intelligence and the human heart, and in an AI-fueled age of abundance that fosters love and compassion in our societies.
I recommend reading this book from cover to cover. In the meantime, here is a summary organized by several key themes.
Knowledge is the engine that drives human development – and it has been throughout history. Knowledge expanded in the hunter-gatherer days with the harnessing of fire. In those days, humans obtained all their food by foraging. Although the source of food did not change, fire allowed humans to cook food and consume more calories. The human brain expanded with this caloric increase, and soon we invented language – the first in a series of innovations that drove the growth of knowledge.
The brain is clearly one of the next great frontiers. In this World Economic Forum article on reading minds, we get a glimpse into the exponential progression of brain science. The author cites research published by AI experts in China, the US, and Japan showing that computers can reconstruct what people are thinking by using functional magnetic resonance imaging (fMRI) machines that measure brain activity, linked to deep neural networks that replicate human brain functions.
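To make the decoding idea concrete, here is a minimal sketch – with simulated data and a plain linear decoder standing in for the deep neural networks the cited research uses. Every variable here (the voxel counts, the mapping `W_true`, the noise level) is an illustrative assumption, not a description of the actual studies:

```python
import numpy as np

# Toy mind-decoding sketch: learn a map from simulated fMRI voxel
# activity back to the stimulus features that produced it.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 200, 50, 5

# Hypothetical forward model: stimulus features -> voxel responses.
W_true = rng.normal(size=(n_features, n_voxels))
features = rng.normal(size=(n_trials, n_features))          # what subjects "see"
voxels = features @ W_true + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Fit the decoder by least squares: voxel activity -> stimulus features.
W_dec, *_ = np.linalg.lstsq(voxels, features, rcond=None)

# Decode a held-out trial from its (noise-free) voxel pattern.
test_features = rng.normal(size=(1, n_features))
decoded = (test_features @ W_true) @ W_dec
print(np.round(decoded, 2), np.round(test_features, 2))
```

The real systems replace the linear map with deep networks and decode far richer targets (images, words), but the structure is the same: record brain activity paired with known stimuli, fit a model from activity to stimulus, then apply it to new recordings.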
I had the pleasure of recording a podcast with AJ Goldstein on a wide-ranging set of topics. The central theme was artificial intelligence. We took a journey to the future and explored several possible paths for artificial intelligence. AJ had the following kind words to say as he shared the podcast in various channels:
Last month I had the opportunity to sit down with one of the world’s leading futurists, Frank Diana, to discuss the future of artificial intelligence.
Today the episode has been released, and it’s one of the most meaningful conversations I’ve had on the podcast.
Rather than provide a subjective, one-sided view, with every question that I asked about the future of AI, Frank responded by presenting both sides. He helped me understand “what does the optimist say?”, “what does the pessimist say?”, “what is the utopian view?”, “what’s the dystopian view?”… and in this way it quickly became one of the most balanced conversations I’ve had the chance to be a part of.
With over 30 years of experience to pull from, Frank provided so many fascinating lenses through which to view our changing future… all the while outlining an insightful playbook of what we can do as individuals, communities, and societies to prepare for the inevitable change that’s going to come.
AJ drove the discussion on the following topics:
- Addressing the two tipping points that have occurred thus far in humanity and changed what it means to be human – and the coming third tipping point.
- Addressing some of the common fears that people have about the implications of advanced AI and robotics on the future.
- How the shift to an automated society might initially eliminate jobs, but ultimately will allow more time for the pursuit of creative, entrepreneurial endeavors.
- A discussion on the characteristics needed to succeed in a world of change, and what you personally should do to prepare for it.
Enjoy the show!
Yesterday on Coffee Break with Game Changers, Bonnie D. Graham hosted a show focused on designing the future of humanity. You can listen to the rebroadcast here. The session abstract is included below. The show participants included: Bonnie, Masha Krol, Ian Gertler, Maricel Cabahug and myself.
In her opening monologue, Bonnie said:
“The first impact of AI will be that more and more non-designers develop their creativity and social intelligence skills to bolster their employability – in the future, everyone will be a designer.”
With all the talk of AI and its potential negative impact on humanity, we lose sight of the positive. As an engine for augmentation, artificial intelligence is likely to advance our human potential. The effectiveness of what we do stands to improve – whether it’s creativity and design, or analytics. Some would prefer to call it “augmented intelligence” versus artificial intelligence. If we view the progression of AI on a spectrum, we could indeed reach the place of augmentation and never approach the other end of the spectrum. This lies at the heart of the artificial intelligence debate.
In a recent book titled The Future of Work, author Darrell M. West describes the Work 2.0 scenario on this emerging future visual. In exploring possible implications of a shifting work paradigm, he gets prescriptive about possible responses. This implication-response exercise sits at the heart of Future Thinking.
No one can predict this complex and uncertain future – but exercises like this help us see possible futures. In seeing them, we position ourselves to proactively shape them. In the context of work, Mr. West explores several possible responses, including another future scenario which he calls Republic 2.0. How this scenario plays out has a direct impact on the path of other scenarios. The scenario speaks to a new kind of politics. Mr. West states:
I just added another very good book to the Book Library: The Future of Work: Robots, AI, and Automation. Author Darrell M. West looks at a world in which our current views of work change. He explores the implications for our social contract and the policy decisions so critical to revising that contract for a new era. Structural change – which Mr. West explores in the book – has long been a tenet of my work. The future of many of our institutions will either change by our proactive acknowledgement that they must change – or they will be undermined.
The conversation so nicely positioned by our author is one that must happen at all levels of leadership. This does not have to be a Utopian versus Dystopian discussion. Rather, like the major disruptive periods of our past, leaders need to lead.
I just added another very good book to the Book Library: Life 3.0: Being Human in the Age of Artificial Intelligence – a New York Times best seller. Author Max Tegmark takes a fascinating journey through possible AI futures. His physics-oriented perspective provides an interesting point of view, as humanity wrestles with the ultimate path of artificial intelligence.
Mr. Tegmark tackles the discussion around how much machines will encroach on human domains by illustrating a metaphor from Hans Moravec:
Today on Coffee Break with Game Changers, Bonnie D. Graham hosted a show focused on the future of prediction. You can listen to the rebroadcast here. The session abstract is included below, as well as a Twitter stream that provides insight into the topic and our discussion. The show participants included: Bonnie, Gray Scott, and myself.
The buzz: “Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.” (Lao Tzu)
Given the uncertainty, pace, and unexpected nature of today’s world, there are too many unknowns for us to effectively predict the future. Reality check: a convergence across science, technology, politics, society, economics, the environment, and a growing ethics discussion has created a complex web that requires the type of systems thinking that may exceed our human capacity. Do machines hold the answer? Can they predict the path of an overwhelming number of possible futures?
The experts speak. Frank Diana, TCS: “We have a duty to think hard about what may be, so as to better prepare society for the changes that may come” (Richard Baldwin). Gray Scott, futurist: “Prediction is not just one of the things your brain does. It is the primary function of the neocortex, and the foundation of intelligence” (Jeff Hawkins). Join us for Tech Magic or Hype: Can We Teach Machines to Predict?