Nick Burnett, futurist and education entrepreneur, recently published an article on education and artificial intelligence. The article launches a series that explores exponential technologies and their impact on learning and teaching. The post was co-authored with Nick Kairinos and the Fountech team. A focus on learners, teachers, and leaders is critical, as education is the key to success in the 21st century. I recommend the article.
In the past several weeks, the topic of digital ethics has come up several times. A critical piece of this discussion involves the bias that is and will be built into applications of artificial intelligence. Amy Webb is a quantified futurist, Professor of Strategic Foresight at NYU, and the Founder and CEO of The Future Today Institute. In March of this year, she published a book titled The Big Nine.
In her book, she tackles the issues associated with bias, specifically the lack of diversity in computing. In this recent article, Amy discusses the consequences of computer systems that don’t anticipate all the types of people who might use them. For example, computers have started issuing prison sentences. A quick look at one of the largest technology companies underscores the severity of the issue: at Google, more than 95 percent of technical workers are white or Asian.
Reacting to the heavy focus on STEM, Amy had this to say: “If everyone is focused on the nuts and bolts of making software quickly at scale, where will they learn to design it with equity and care? Critical thinking is what the computers won’t be able to do.” I recommend both the book and the article as a means of education and awareness regarding this critical issue of bias.
Amy Webb is a quantified futurist, Professor of Strategic Foresight at NYU, and the Founder and CEO of The Future Today Institute. In March of this year, she published a book titled The Big Nine.
Here is a description of the book via Amazon:
In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI–the people working on the systems, their motivations, the technology itself–are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways which defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.
Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations.
With the start of a new year, the traditional focus on predictions has begun. Prognosticators predict at their own peril, as this New Yorker article clearly articulates. In 1968, the Foreign Policy Association (formed in 1918 to promote the League of Nations) celebrated its fiftieth anniversary by publishing a book of predictions about what the world would look like in fifty years. Here we are fifty years later, and if history is any guide, futurists have very little credibility. Although predictions these days may be even more difficult, given that our complex systems are progressing and scaling at an unprecedented rate, a macro-level focus can provide an ability to scan beyond the horizon. So rather than attempt to predict, here are my 2019 thoughts on several macro-level forces.
The dialog surrounding artificial intelligence and ethics is amplifying. As society continues its rapid progression towards an automated future, the role of ethics becomes increasingly important. After painting a picture of several possible futures for an audience, I would ask this question: is this a society that you want to live in? This discussion was captured in a post on technology and ethics, along with a poll to capture my readers’ thoughts on the same question (please take the poll to keep the dialog moving).
This question underscores the need to proactively manage the path towards these possible futures, and the discussion surrounding ethics plays a critical role. Enter initiatives like Moral Machine from MIT. As its website describes, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever-increasing pace. The Moral Machine provides a platform for building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas. Given the challenges of coding an ambiguous set of morals and ethics into machines, crowdsourcing makes great sense. So visit the Moral Machine platform and add your voice to the conversation.
I just finished a fantastic book on artificial intelligence and the evolutionary paths of China and the U.S. Author Kai-Fu Lee inspires as he focuses on the astounding capabilities of AI and the one thing that only humans can provide: love. The journey includes the author’s own brush with mortality, and he proposes a path forward: the synthesis on which we must build our shared future is AI’s ability to think, coupled with a human’s ability to love. He believes this synergy harnesses the undeniable power of artificial intelligence to generate prosperity while also embracing our essential humanity. His hope for our future lies both in this new synergy between artificial intelligence and the human heart, and in an AI-fueled age of abundance that fosters love and compassion in our societies.
I recommend reading this book from cover to cover. In the meantime, here is a summary organized by several key themes.
Knowledge is the engine that drives human development, and it has been throughout history. Knowledge expanded in the hunter-gatherer days with the control of fire. In those days, humans obtained all their food by foraging. Although the source of food did not change, fire allowed humans to cook it and consume more calories. The human brain expanded with this caloric increase, and soon we invented language, the first in a series of innovations that drove the growth of knowledge.