Maurice Conti is the Chief Innovation Officer at Alpha, where he focuses on what he calls the Augmented Age. He describes it this way: "We're heading for a future where our natural human capabilities are going to be radically augmented in three ways: computational systems will help us think, robotic systems will help us make, and a digital nervous system will connect us to the world far beyond what our natural nervous system can offer."
I recently ran into a TCS colleague at a forum where I presented. Ryan Metz is a data scientist working at our Cornell Innovation Lab. Ryan mentioned an article he had written about the short-term impact of AI, versus the long-term concerns voiced by the likes of Elon Musk and Stephen Hawking. As he states in the article, the long-term concern is that we will produce machines so intelligent that we lose control over them. They will become a new form of life that rules over us as we rule over the animal kingdom.
Two recent articles highlight the dilemma we face in this era of rapid innovation: the potential to enhance humanity, and the potential to diminish it. This article on deepfakes describes the challenge society will face as deepfake video and audio make it impossible to tell the difference between reality and fiction. Audio attacks using convincing forgeries could send stocks plummeting or soaring. How about mimicking a CEO's voice to request that a senior financial officer transfer money? These are real examples provided by Symantec. This short video describes the money-transfer scenario.
Nick Burnett, futurist and education and learning entrepreneur, recently published an article on education and artificial intelligence. The article launches a series exploring exponential technologies and their impact on learning and teaching. The post was co-authored with Nick Kairinos and the Fountech team. A focus on learners, teachers, and leaders is critical, as education is the key to success in the 21st century. I recommend the article.
In the past several weeks, the topic of digital ethics has come up several times. A critical piece of this discussion involves the bias that is, and will be, built into applications of artificial intelligence. Amy Webb is a quantified futurist, Professor of Strategic Foresight at NYU, and the founder and CEO of The Future Today Institute. In March of this year, Amy published a book titled The Big Nine.
In her book, she tackles the issues associated with bias, specifically the lack of diversity in computing. In this recent article, Amy discusses the consequences of computer systems that don't anticipate all the types of people who might use them. For example, computers have started issuing prison sentences. A quick look at one of the largest technology companies underscores the severity of the issue: at Google, more than 95 percent of technical workers are white or Asian.
Reacting to the heavy focus on STEM, Amy had this to say: "If everyone is focused on the nuts and bolts of making software quickly at scale, where will they learn to design it with equity and care? Critical thinking is what the computers won't be able to do." I recommend both the book and the article as a means of education and awareness regarding this critical issue of bias.
Here is a description of the book via Amazon:
In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI (the people working on the systems, their motivations, the technology itself) are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways that defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don't share our motivations, desires, or hopes for the future of humanity.
Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations.
With the start of a new year, the traditional focus on predictions has begun. Prognosticators predict at their own peril, as this New Yorker article clearly illustrates. In 1968, the Foreign Policy Association (formed in 1918 to promote the League of Nations) celebrated its fiftieth anniversary by publishing a book of predictions about what the world would look like in fifty years. Here we are, fifty years later, and if history is any guide, futurists have very little credibility. Prediction may be even more difficult these days, given that our complex systems are progressing and scaling at an unprecedented rate, but a macro-level focus can help us scan beyond the horizon. So rather than attempt to predict, here are my 2019 thoughts on several macro-level forces.