Two recent articles caught my eye this week. One focused on the fourth generation of artificial intelligence, calling it artificial intuition. The other explores the shift from artificial narrow intelligence to artificial general intelligence. On artificial intuition, author Mark Gazit describes how helpful AI has become, as well as its ongoing limitations. Machine learning remains fully dependent on historical data, and new, unknown scenarios leave data scientists helpless. Mr. Gazit suggests that in order to have true artificial intelligence, we need machines that can think on their own.
I just finished a fantastic book on artificial intelligence and the evolutionary paths of China and the U.S. Author Kai-Fu Lee inspires, as he focuses on the astounding capabilities of AI and the one thing that only humans can provide: love. The journey includes the author’s own brush with mortality, and proposes a path forward: the synthesis on which we must build our shared future is AI’s ability to think, coupled with a human’s ability to love. He believes this synergy harnesses the undeniable power of artificial intelligence to generate prosperity, while also embracing our essential humanity. His hope for our future lies both in this new synergy between artificial intelligence and the human heart, and in an AI-fueled age of abundance that fosters love and compassion in our societies.
I recommend reading this book from cover to cover. In the meantime, here is a summary organized by several key themes.
Part three of Anticipating 2025 will summarize the third section of the book. This section focuses on redesigning artificial intelligence, with a look at six important questions and an exploration of human-machine mergers. The six questions explored in this section are:
- Can we create a human-level artificial intelligence?
- If so, when?
- Will human-level artificial intelligence lead to super-intelligence?
- If super-intelligence arrives, will we like it?
- Can we upload our minds to computers?
- Can we de-risk the arrival of super-intelligence?
Like the first two sections, this section forces us to look at disruption through a different lens. Granted, the path forward is highly speculative, and even the most optimistic scenarios are likely years away from having transformative implications. Nonetheless, it does force us to broaden our lens beyond traditional views. For example, I’ve focused on the automation of knowledge work and all its ramifications, while the authors (Calum Chace, Martin Dinov, and Elias Rut) focus on creating super-intelligence by uploading our minds to computers. They explore a human-machine merger that they see as the enabler for realizing the benefits of super-intelligence. This merger, in the authors’ view, is the only way to avoid creating our successor. So yeah, that’s a little more impactful than automating knowledge work.