Last week, I had the opportunity to keynote the AI Strategy Forum hosted by C-Level in Zurich – a gathering of senior leaders exploring how artificial intelligence is reshaping business, strategy, and society. It was a timely and important conversation. We are rapidly moving beyond AI as a tool for automation and optimization. What’s emerging is a deeper, more systemic shift – one that challenges the very foundations of how we think about intelligence, work, value creation, and the role of human agency.
While in Zurich, I also participated in an interview with C-Level, where we discussed questions about the future of AI and its implications for business and society. The questions were sharp, and the discussion reinforced the need to move away from narrow predictions and toward a systemic lens – one that accounts for convergence across technology, economics, society, geopolitics, philosophy, and the environment.
As I’ve written before, we should rehearse the future by surfacing the right questions, modeling the plausible pathways, and preparing our institutions, organizations, and leaders to adapt in real time. AI plays a central role in that process – not just as a technological breakthrough, but as a force that will increasingly interact with every domain of our lives. Forums like this one also play a role by allowing leaders to proactively envision the future. You can view the C-Level interview post and excerpted questions here: C-Level Interview: Frank Diana on AI and the Future. I also had the opportunity to discuss the future with several journalists while in Calgary. One of those conversations was captured in this recent article.
My thanks to the team at C-Level for hosting such a forward-looking dialogue – and for giving business leaders the space to explore what AI means not just for next quarter, but for the next era. The full set of questions and answers is provided below.
Question: Which domain will feel AI’s impact first – and why?
Frank Diana: AI’s impact is unfolding simultaneously across multiple domains, but if I had to choose, I’d say the labor market will feel it most immediately. We’re already seeing displacement at the entry level and a widening productivity gap. But it’s not just about jobs – it’s about the ripple effects that touch education, economics, and even mental health. That’s why a systemic lens is essential: AI never acts in isolation.
Question: How do you see AI changing our definition of work and economic participation?
Frank Diana: We’re moving toward a world where work is decoupled from traditional employment. As AI takes over more functions, we’ll need to redefine human value – away from productivity metrics and toward creativity, caregiving, and meaning. That transition won’t be smooth, but it’s necessary. In fact, we may see the emergence of virtual work platforms that allow retirees or displaced workers to re-engage in entirely new ways.
Question: What risks do you see if our learning systems can’t keep pace with AI?
Frank Diana: The risk is a deepening divide between those who can adapt and those who can’t. Our learning infrastructure is outdated and too slow to react. AI will demand constant upskilling and reskilling, but more importantly, a mindset shift toward lifelong learning. If we fail to evolve our systems, we’re not just risking job loss – we’re risking societal fragmentation.
Question: How can societies manage AI-driven systemic change without fracturing?
Frank Diana: We need to invest in societal resilience, not just technological progress. That means strengthening institutions, rebuilding trust, and reimagining safety nets. History shows us that general-purpose technologies – like steam or electricity – can destabilize societies before they uplift them. The key difference now is the speed and scope of change. The answer lies in anticipating convergence and acting across domains, not reacting within silos.
Question: How should business leaders rethink value creation in an AI-driven economy?
Frank Diana: Business leaders must stop thinking in terms of linear ROI and start thinking in terms of Return on Learning. Value creation will increasingly come from platforms, ecosystems, and the ability to navigate uncertainty. AI rewires the relationship between labor and capital – so capital must now flow toward capability building, not just cost-cutting. Those who wait for perfect clarity will be too late.
Question: Are military applications of AI a greater risk than economic or social disruption?
Frank Diana: They’re not mutually exclusive – they’re interdependent. AI in warfare is deeply concerning because it could trigger new forms of geopolitical instability or autonomous conflict escalation. But economic and social disruptions can be just as destabilizing. What worries me most is that these domains are colliding: military, economic, and social shocks could reinforce one another if we don’t take a systemic view.
Question: What’s one AI-driven possibility that leaders are still underestimating?
Frank Diana: The redefinition of intelligence itself. Most leaders are still thinking in terms of productivity tools or decision support. But we’re heading toward systems that learn, reason, and evolve. That shift will fundamentally challenge what it means to be human. Leaders who don’t engage with the philosophical and ethical dimensions of this transition will miss the bigger picture.
Question: Where do you see our moral and ethical norms being most tested by AI?
Frank Diana: We’re already seeing it with automated systems making decisions in healthcare, hiring, and criminal justice. But the biggest strain may come when humans lose agency – not just because AI takes over tasks, but because it shapes preferences and behavior. The erosion of moral clarity and consensus could be one of the defining challenges of the decade.
Question: Could AI be our greatest climate tool – or could it make inequality worse?
Frank Diana: Both are true – and that’s the paradox. AI could optimize grids, model climate scenarios, and accelerate restoration projects. But if access to those capabilities is concentrated in powerful nations or corporations, we could deepen existing inequities. The outcome depends on governance, openness, and how deliberately we address inclusion from the start.
Question: In one sentence each: What’s AI’s biggest opportunity and biggest risk?
Frank Diana: Opportunity: AI can augment human potential and accelerate progress across every domain of society. Risk: If unchecked, AI could amplify fragmentation, erode trust, and destabilize the very systems we rely on to function as a society.
